# Workshop: Intro to Open Source 🚀

This repo is for our Women Who Code 2019 Intro to Contributing to Open Source Workshop. It serves as a safe test space for those who wish to practice git, making pull requests, and responding to issues.

## Workshop Agenda

- Overviewing GitHub and Releases
- Reviewing the Basics and Learning a Codebase
- Your First Open Source Project Commit (look for First Timers Only PRs)
- What to expect: will my PR be merged right away? How do I deal with feedback?
- Becoming a Regular Contributor

## Usage

In the `Issues` tab, we have a variety of issues with different labels that you can choose from. Pick any label and submit a PR with your changes. We'll review your PR and conduct a mock code review before ultimately merging it. If you have any questions about working in existing open source libraries, open a new issue!

## Git Config

First, you should tell git your name and email (you can set different ones per repository if you wish). So if you're Keeley Hammond, you'd do it like this:

```
git config --global user.name "Keeley Hammond"
git config --global user.email [email protected]
```

If you want to set a default editor, you can do so with:

```
git config --global core.editor vim
```

To see what configuration settings you have:

```
git config --list
```

### Clone and fork this repo

Before contributing, fork this repo by pressing the "Fork" button inside the Women Who Code repo. This will create your own version of the repo. To clone the repository down to your laptop, click the green "Clone or download" button above and copy the link. After copying the link, go to the path on your own computer that you want to clone this to. For organization, it is clearer to keep separate directories for different repo owners.

### Making a new branch

Say you want to make changes on a branch other than master. This is common when you want to keep different changes separate.
You can make a new branch for fixing a README, for example, using:

```
git checkout -b fix-readme
```

The `-b` flag lets git know that this is a new branch being created. You can also navigate to existing branches using the same checkout command, without the flag:

```
git checkout master
```

To get back to your fix-readme branch, use:

```
git checkout fix-readme
```

### Pulling down new content

In your terminal, run `git remote -v` to see all of your remotes. Add the Women Who Code repo as an upstream that you can pull from:

`git remote add upstream git@github.com:wwcodeportland/intro-to-open-source.git`

This will allow you to fetch the most recent changes from upstream.

* `git fetch`: fetches the changes, but doesn't merge them
* `git pull`: does `git fetch` and `git merge`. This results in an [extra commit](https://coderwall.com/p/7aymfa/please-oh-please-use-git-pull-rebase).
* `git pull --rebase`: keeps your commits in a straight line, without merge commits

## Contributing to Open Source

### Creating an Issue

Click the button at the top that says "Issues" and then the green button to the right.

![create-issue](https://cloud.githubusercontent.com/assets/12282848/16970455/112131a4-4dd1-11e6-890b-697903e9b94b.png)

### Creating a Pull Request

Click the button at the top that says "Pull Requests" and then the green button to the right.

![create-pr](https://cloud.githubusercontent.com/assets/12282848/16970458/128818aa-4dd1-11e6-9388-f27a7106cb4e.png)

If the pull request (PR) fixes an existing issue, you can reference the issue by `#somenumber`, e.g. `#2`. It's common to write "Fixes #somenumber" so that when the PR is merged, it closes the corresponding issue.
*Tip: Include the issue the PR fixes in the commit message, and write descriptive messages.*

### Resources

Great websites for people who are new to coding or contributing to open source:

* [First timers only](http://www.firsttimersonly.com/)
* [Your first PR](https://twitter.com/yourfirstpr)
* [Code Newbie](http://www.codenewbie.org/)
* [Get involved in tech](http://www.getinvolvedintech.com)

## Some Git/GitHub Resources

* [Official git documentation](https://git-scm.com/doc)
* [GitHub's Hello-World](https://guides.github.com/activities/hello-world/)
* [Brent Beer's OSCON talk: Everything I wish I knew when I started using GitHub](https://www.youtube.com/watch?v=KDUtjZHIx44)

## Contributing to an Open Source Project

* [Open Source Guides](https://opensource.guide)
* [Course from egghead.io: How to Contribute to an Open Source Project on GitHub](https://egghead.io/courses/how-to-contribute-to-an-open-source-project-on-github)
---
-api-id: P:Windows.Media.Core.TimedTextCue.Lines
-api-type: winrt property
---

<!-- Property syntax
public Windows.Foundation.Collections.IVector<Windows.Media.Core.TimedTextLine> Lines { get; }
-->

# Windows.Media.Core.TimedTextCue.Lines

## -description

Gets the collection of [TimedTextLine](timedtextline.md) objects that convey the text of the cue.

## -property-value

A collection of [TimedTextLine](timedtextline.md) objects.

## -remarks

## -examples

## -see-also
<a name='assembly'></a>
# Cadd9

## Contents

- [Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental')
  - [#ctor(semitones)](#M-Cadd9-Model-Accidental-#ctor-System-Int32- 'Cadd9.Model.Accidental.#ctor(System.Int32)')
  - [Description](#P-Cadd9-Model-Accidental-Description 'Cadd9.Model.Accidental.Description')
  - [Semitones](#P-Cadd9-Model-Accidental-Semitones 'Cadd9.Model.Accidental.Semitones')
  - [Enharmonic(other)](#M-Cadd9-Model-Accidental-Enharmonic-Cadd9-Model-Accidental- 'Cadd9.Model.Accidental.Enharmonic(Cadd9.Model.Accidental)')
  - [Equals(other)](#M-Cadd9-Model-Accidental-Equals-Cadd9-Model-Accidental- 'Cadd9.Model.Accidental.Equals(Cadd9.Model.Accidental)')
  - [GetHashCode()](#M-Cadd9-Model-Accidental-GetHashCode 'Cadd9.Model.Accidental.GetHashCode')
  - [Parse(input)](#M-Cadd9-Model-Accidental-Parse-System-String- 'Cadd9.Model.Accidental.Parse(System.String)')
  - [ToString()](#M-Cadd9-Model-Accidental-ToString 'Cadd9.Model.Accidental.ToString')
- [Alteration](#T-Cadd9-Model-Quality-Alteration 'Cadd9.Model.Quality.Alteration')
  - [#ctor()](#M-Cadd9-Model-Quality-Alteration-#ctor-System-Nullable{Cadd9-Model-Interval-Generic},Cadd9-Model-Interval- 'Cadd9.Model.Quality.Alteration.#ctor(System.Nullable{Cadd9.Model.Interval.Generic},Cadd9.Model.Interval)')
  - [Add](#P-Cadd9-Model-Quality-Alteration-Add 'Cadd9.Model.Quality.Alteration.Add')
  - [Drop](#P-Cadd9-Model-Quality-Alteration-Drop 'Cadd9.Model.Quality.Alteration.Drop')
- [Constants](#T-Cadd9-Model-Constants 'Cadd9.Model.Constants')
  - [MIDDLE_C_MIDI_NUMBER](#F-Cadd9-Model-Constants-MIDDLE_C_MIDI_NUMBER 'Cadd9.Model.Constants.MIDDLE_C_MIDI_NUMBER')
  - [MIDDLE_C_OCTAVE](#F-Cadd9-Model-Constants-MIDDLE_C_OCTAVE 'Cadd9.Model.Constants.MIDDLE_C_OCTAVE')
  - [NAMES_PER_OCTAVE](#F-Cadd9-Model-Constants-NAMES_PER_OCTAVE 'Cadd9.Model.Constants.NAMES_PER_OCTAVE')
  - [SEMITONES_PER_OCTAVE](#F-Cadd9-Model-Constants-SEMITONES_PER_OCTAVE 'Cadd9.Model.Constants.SEMITONES_PER_OCTAVE')
- [EnumerableExtensions](#T-Cadd9-Util-EnumerableExtensions 'Cadd9.Util.EnumerableExtensions')
  - [EveryN\`\`1()](#M-Cadd9-Util-EnumerableExtensions-EveryN``1-System-Collections-Generic-IEnumerable{``0},System-Int32- 'Cadd9.Util.EnumerableExtensions.EveryN``1(System.Collections.Generic.IEnumerable{``0},System.Int32)')
- [Generic](#T-Cadd9-Model-Interval-Generic 'Cadd9.Model.Interval.Generic')
- [IntExtensions](#T-Cadd9-Util-IntExtensions 'Cadd9.Util.IntExtensions')
  - [Demodulus()](#M-Cadd9-Util-IntExtensions-Demodulus-System-Int32,System-Int32- 'Cadd9.Util.IntExtensions.Demodulus(System.Int32,System.Int32)')
  - [Modulus()](#M-Cadd9-Util-IntExtensions-Modulus-System-Int32,System-Int32- 'Cadd9.Util.IntExtensions.Modulus(System.Int32,System.Int32)')
  - [Ordinal()](#M-Cadd9-Util-IntExtensions-Ordinal-System-Int32- 'Cadd9.Util.IntExtensions.Ordinal(System.Int32)')
- [Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval')
  - [#ctor(genericWidth,specificWidth)](#M-Cadd9-Model-Interval-#ctor-Cadd9-Model-Interval-Generic,System-Int32- 'Cadd9.Model.Interval.#ctor(Cadd9.Model.Interval.Generic,System.Int32)')
  - [Abbreviation](#P-Cadd9-Model-Interval-Abbreviation 'Cadd9.Model.Interval.Abbreviation')
  - [Description](#P-Cadd9-Model-Interval-Description 'Cadd9.Model.Interval.Description')
  - [GenericWidth](#P-Cadd9-Model-Interval-GenericWidth 'Cadd9.Model.Interval.GenericWidth')
  - [SpecificWidth](#P-Cadd9-Model-Interval-SpecificWidth 'Cadd9.Model.Interval.SpecificWidth')
  - [Between(bottom,top)](#M-Cadd9-Model-Interval-Between-Cadd9-Model-Name,Cadd9-Model-Name- 'Cadd9.Model.Interval.Between(Cadd9.Model.Name,Cadd9.Model.Name)')
  - [Between(bottom,top)](#M-Cadd9-Model-Interval-Between-Cadd9-Model-Note,Cadd9-Model-Note- 'Cadd9.Model.Interval.Between(Cadd9.Model.Note,Cadd9.Model.Note)')
  - [Between(bottom,top)](#M-Cadd9-Model-Interval-Between-Cadd9-Model-Pitch,Cadd9-Model-Pitch- 'Cadd9.Model.Interval.Between(Cadd9.Model.Pitch,Cadd9.Model.Pitch)')
  - [Enharmonic(other)](#M-Cadd9-Model-Interval-Enharmonic-Cadd9-Model-Interval- 'Cadd9.Model.Interval.Enharmonic(Cadd9.Model.Interval)')
  - [Equals(other)](#M-Cadd9-Model-Interval-Equals-Cadd9-Model-Interval- 'Cadd9.Model.Interval.Equals(Cadd9.Model.Interval)')
  - [GenericName()](#M-Cadd9-Model-Interval-GenericName-Cadd9-Model-Interval-Generic- 'Cadd9.Model.Interval.GenericName(Cadd9.Model.Interval.Generic)')
  - [GetHashCode()](#M-Cadd9-Model-Interval-GetHashCode 'Cadd9.Model.Interval.GetHashCode')
  - [Parse(input)](#M-Cadd9-Model-Interval-Parse-System-String- 'Cadd9.Model.Interval.Parse(System.String)')
  - [ParseFormal()](#M-Cadd9-Model-Interval-ParseFormal-System-String- 'Cadd9.Model.Interval.ParseFormal(System.String)')
  - [ParseSimple()](#M-Cadd9-Model-Interval-ParseSimple-System-String- 'Cadd9.Model.Interval.ParseSimple(System.String)')
  - [ReducedGenericWidth()](#M-Cadd9-Model-Interval-ReducedGenericWidth-Cadd9-Model-Interval-Generic- 'Cadd9.Model.Interval.ReducedGenericWidth(Cadd9.Model.Interval.Generic)')
  - [ToString()](#M-Cadd9-Model-Interval-ToString 'Cadd9.Model.Interval.ToString')
  - [op_Addition()](#M-Cadd9-Model-Interval-op_Addition-Cadd9-Model-Interval,Cadd9-Model-Interval- 'Cadd9.Model.Interval.op_Addition(Cadd9.Model.Interval,Cadd9.Model.Interval)')
  - [op_Subtraction()](#M-Cadd9-Model-Interval-op_Subtraction-Cadd9-Model-Interval,Cadd9-Model-Interval- 'Cadd9.Model.Interval.op_Subtraction(Cadd9.Model.Interval,Cadd9.Model.Interval)')
- [Mode](#T-Cadd9-Model-Mode 'Cadd9.Model.Mode')
  - [#ctor(title,intervals)](#M-Cadd9-Model-Mode-#ctor-System-String,Cadd9-Model-Interval[]- 'Cadd9.Model.Mode.#ctor(System.String,Cadd9.Model.Interval[])')
  - [#ctor(title,intervals)](#M-Cadd9-Model-Mode-#ctor-System-String,System-String[]- 'Cadd9.Model.Mode.#ctor(System.String,System.String[])')
  - [Intervals](#P-Cadd9-Model-Mode-Intervals 'Cadd9.Model.Mode.Intervals')
  - [Title](#P-Cadd9-Model-Mode-Title 'Cadd9.Model.Mode.Title')
  - [AccidentalFor()](#M-Cadd9-Model-Mode-AccidentalFor-Cadd9-Model-Note,Cadd9-Model-Name- 'Cadd9.Model.Mode.AccidentalFor(Cadd9.Model.Note,Cadd9.Model.Name)')
  - [Ascend()](#M-Cadd9-Model-Mode-Ascend 'Cadd9.Model.Mode.Ascend')
  - [DiatonicChord(degree,count)](#M-Cadd9-Model-Mode-DiatonicChord-Cadd9-Model-Degree,System-Int32- 'Cadd9.Model.Mode.DiatonicChord(Cadd9.Model.Degree,System.Int32)')
  - [Equals(other)](#M-Cadd9-Model-Mode-Equals-Cadd9-Model-Mode- 'Cadd9.Model.Mode.Equals(Cadd9.Model.Mode)')
  - [GetHashCode()](#M-Cadd9-Model-Mode-GetHashCode 'Cadd9.Model.Mode.GetHashCode')
  - [Scale()](#M-Cadd9-Model-Mode-Scale-Cadd9-Model-Note- 'Cadd9.Model.Mode.Scale(Cadd9.Model.Note)')
  - [Scale()](#M-Cadd9-Model-Mode-Scale-Cadd9-Model-Pitch- 'Cadd9.Model.Mode.Scale(Cadd9.Model.Pitch)')
  - [ToString()](#M-Cadd9-Model-Mode-ToString 'Cadd9.Model.Mode.ToString')
- [Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name')
- [Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note')
  - [#ctor(name,accidental)](#M-Cadd9-Model-Note-#ctor-Cadd9-Model-Name,Cadd9-Model-Accidental- 'Cadd9.Model.Note.#ctor(Cadd9.Model.Name,Cadd9.Model.Accidental)')
  - [Accidental](#P-Cadd9-Model-Note-Accidental 'Cadd9.Model.Note.Accidental')
  - [Description](#P-Cadd9-Model-Note-Description 'Cadd9.Model.Note.Description')
  - [Name](#P-Cadd9-Model-Note-Name 'Cadd9.Model.Note.Name')
  - [PitchClass](#P-Cadd9-Model-Note-PitchClass 'Cadd9.Model.Note.PitchClass')
  - [Apply(interval)](#M-Cadd9-Model-Note-Apply-Cadd9-Model-Interval- 'Cadd9.Model.Note.Apply(Cadd9.Model.Interval)')
  - [Enharmonic(other)](#M-Cadd9-Model-Note-Enharmonic-Cadd9-Model-Note- 'Cadd9.Model.Note.Enharmonic(Cadd9.Model.Note)')
  - [Equals(other)](#M-Cadd9-Model-Note-Equals-Cadd9-Model-Note- 'Cadd9.Model.Note.Equals(Cadd9.Model.Note)')
  - [GetHashCode()](#M-Cadd9-Model-Note-GetHashCode 'Cadd9.Model.Note.GetHashCode')
  - [Parse(input)](#M-Cadd9-Model-Note-Parse-System-String- 'Cadd9.Model.Note.Parse(System.String)')
  - [ToString()](#M-Cadd9-Model-Note-ToString 'Cadd9.Model.Note.ToString')
- [ParseHelpers](#T-Cadd9-Util-ParseHelpers 'Cadd9.Util.ParseHelpers')
  - [N()](#M-Cadd9-Util-ParseHelpers-N-System-String- 'Cadd9.Util.ParseHelpers.N(System.String)')
  - [P()](#M-Cadd9-Util-ParseHelpers-P-System-String- 'Cadd9.Util.ParseHelpers.P(System.String)')
  - [W()](#M-Cadd9-Util-ParseHelpers-W-System-String- 'Cadd9.Util.ParseHelpers.W(System.String)')
- [Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch')
  - [#ctor(note,octave)](#M-Cadd9-Model-Pitch-#ctor-Cadd9-Model-Note,System-Int32- 'Cadd9.Model.Pitch.#ctor(Cadd9.Model.Note,System.Int32)')
  - [#ctor(name,accidental,octave)](#M-Cadd9-Model-Pitch-#ctor-Cadd9-Model-Name,Cadd9-Model-Accidental,System-Int32- 'Cadd9.Model.Pitch.#ctor(Cadd9.Model.Name,Cadd9.Model.Accidental,System.Int32)')
  - [Description](#P-Cadd9-Model-Pitch-Description 'Cadd9.Model.Pitch.Description')
  - [Midi](#P-Cadd9-Model-Pitch-Midi 'Cadd9.Model.Pitch.Midi')
  - [Note](#P-Cadd9-Model-Pitch-Note 'Cadd9.Model.Pitch.Note')
  - [Octave](#P-Cadd9-Model-Pitch-Octave 'Cadd9.Model.Pitch.Octave')
  - [Apply(interval)](#M-Cadd9-Model-Pitch-Apply-Cadd9-Model-Interval- 'Cadd9.Model.Pitch.Apply(Cadd9.Model.Interval)')
  - [DescribeInKey(key,signature)](#M-Cadd9-Model-Pitch-DescribeInKey-Cadd9-Model-Note,Cadd9-Model-Mode- 'Cadd9.Model.Pitch.DescribeInKey(Cadd9.Model.Note,Cadd9.Model.Mode)')
  - [Enharmonic(other)](#M-Cadd9-Model-Pitch-Enharmonic-Cadd9-Model-Pitch- 'Cadd9.Model.Pitch.Enharmonic(Cadd9.Model.Pitch)')
  - [Equals(other)](#M-Cadd9-Model-Pitch-Equals-Cadd9-Model-Pitch- 'Cadd9.Model.Pitch.Equals(Cadd9.Model.Pitch)')
  - [GetHashCode()](#M-Cadd9-Model-Pitch-GetHashCode 'Cadd9.Model.Pitch.GetHashCode')
  - [InOctave(octave)](#M-Cadd9-Model-Pitch-InOctave-System-Int32- 'Cadd9.Model.Pitch.InOctave(System.Int32)')
  - [Parse(input)](#M-Cadd9-Model-Pitch-Parse-System-String- 'Cadd9.Model.Pitch.Parse(System.String)')
  - [ToString()](#M-Cadd9-Model-Pitch-ToString 'Cadd9.Model.Pitch.ToString')
  - [Transpose(octaves)](#M-Cadd9-Model-Pitch-Transpose-System-Int32- 'Cadd9.Model.Pitch.Transpose(System.Int32)')
- [Program](#T-Program 'Program')
  - [Main()](#M-Program-Main-System-String[]- 'Program.Main(System.String[])')
- [Quality](#T-Cadd9-Model-Quality 'Cadd9.Model.Quality')
  - [#ctor(intervals)](#M-Cadd9-Model-Quality-#ctor-Cadd9-Model-Interval[]- 'Cadd9.Model.Quality.#ctor(Cadd9.Model.Interval[])')
  - [#ctor(intervals)](#M-Cadd9-Model-Quality-#ctor-System-String[]- 'Cadd9.Model.Quality.#ctor(System.String[])')
  - [Intervals](#P-Cadd9-Model-Quality-Intervals 'Cadd9.Model.Quality.Intervals')
  - [Add(add)](#M-Cadd9-Model-Quality-Add-Cadd9-Model-Interval- 'Cadd9.Model.Quality.Add(Cadd9.Model.Interval)')
  - [Alter(alt)](#M-Cadd9-Model-Quality-Alter-Cadd9-Model-Quality-Alteration- 'Cadd9.Model.Quality.Alter(Cadd9.Model.Quality.Alteration)')
  - [Apply()](#M-Cadd9-Model-Quality-Apply-Cadd9-Model-Note- 'Cadd9.Model.Quality.Apply(Cadd9.Model.Note)')
  - [Apply()](#M-Cadd9-Model-Quality-Apply-Cadd9-Model-Pitch- 'Cadd9.Model.Quality.Apply(Cadd9.Model.Pitch)')
  - [Equals(other)](#M-Cadd9-Model-Quality-Equals-Cadd9-Model-Quality- 'Cadd9.Model.Quality.Equals(Cadd9.Model.Quality)')
  - [GetHashCode()](#M-Cadd9-Model-Quality-GetHashCode 'Cadd9.Model.Quality.GetHashCode')
  - [ToString()](#M-Cadd9-Model-Quality-ToString 'Cadd9.Model.Quality.ToString')
- [Voicing](#T-Cadd9-Model-Voicing 'Cadd9.Model.Voicing')
  - [Equals(other)](#M-Cadd9-Model-Voicing-Equals-Cadd9-Model-Voicing- 'Cadd9.Model.Voicing.Equals(Cadd9.Model.Voicing)')
  - [GetHashCode()](#M-Cadd9-Model-Voicing-GetHashCode 'Cadd9.Model.Voicing.GetHashCode')

<a name='T-Cadd9-Model-Accidental'></a>
## Accidental `type`

##### Namespace

Cadd9.Model

##### Summary

Represents an accidental that shifts some note or pitch by a certain number of semitones away from natural.
<a name='M-Cadd9-Model-Accidental-#ctor-System-Int32-'></a>
### #ctor(semitones) `constructor`

##### Summary

Returns a new Accidental

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| semitones | [System.Int32](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Int32 'System.Int32') | The number of semitones this accidental shifts a pitch or note |

<a name='P-Cadd9-Model-Accidental-Description'></a>
### Description `property`

##### Summary

A formatted representation of this Accidental as a UTF-8 string.

<a name='P-Cadd9-Model-Accidental-Semitones'></a>
### Semitones `property`

##### Summary

The number of semitones this accidental shifts a pitch or note. Positive values indicate sharps, negative values indicate flats.

<a name='M-Cadd9-Model-Accidental-Enharmonic-Cadd9-Model-Accidental-'></a>
### Enharmonic(other) `method`

##### Summary

Returns true if `other` is enharmonic with this Accidental.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental') | The Accidental to compare with |

##### Remarks

Two accidentals are enharmonic if they are equal or if their `Semitones` differ by a multiple of 12. For example, 5 sharps is enharmonic with 7 flats: C♯♯♯♯♯ and C♭♭♭♭♭♭♭ are both enharmonic with each other and with F natural.

<a name='M-Cadd9-Model-Accidental-Equals-Cadd9-Model-Accidental-'></a>
### Equals(other) `method`

##### Summary

Determines whether two Accidentals are value-equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental') | The Accidental to compare |

<a name='M-Cadd9-Model-Accidental-GetHashCode'></a>
### GetHashCode() `method`

##### Summary

Produces a high-entropy hash code such that two value-equivalent Accidentals are guaranteed to produce the same result.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Accidental-Parse-System-String-'></a>
### Parse(input) `method`

##### Summary

Returns a new Accidental based on the given input string.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| input | [System.String](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String 'System.String') | The plain ASCII input string to parse |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.FormatException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.FormatException 'System.FormatException') | The given input cannot be parsed |

##### Remarks

Assumes the input will be in plain ASCII: "b" for flats and "#" for sharps. An empty string or the string `"nat"` may be used for natural.

<a name='M-Cadd9-Model-Accidental-ToString'></a>
### ToString() `method`

##### Summary

Returns a string representation of this Accidental, primarily for debugging purposes.

##### Parameters

This method has no parameters.

<a name='T-Cadd9-Model-Quality-Alteration'></a>
## Alteration `type`

##### Namespace

Cadd9.Model.Quality

##### Summary

Encapsulates an alteration of a chord quality, by dropping, adding, or replacing some intervals.

<a name='M-Cadd9-Model-Quality-Alteration-#ctor-System-Nullable{Cadd9-Model-Interval-Generic},Cadd9-Model-Interval-'></a>
### #ctor() `constructor`

##### Summary

Returns a new Alteration

##### Parameters

This constructor has no parameters.

<a name='P-Cadd9-Model-Quality-Alteration-Add'></a>
### Add `property`

##### Summary

The interval that is added as part of this modification, or null if nothing is added.

<a name='P-Cadd9-Model-Quality-Alteration-Drop'></a>
### Drop `property`

##### Summary

The generic interval to be removed by this modification, or null if nothing is removed.
<a name='T-Cadd9-Model-Constants'></a>
## Constants `type`

##### Namespace

Cadd9.Model

##### Summary

Defines several frequently-used music theory constants.

<a name='F-Cadd9-Model-Constants-MIDDLE_C_MIDI_NUMBER'></a>
### MIDDLE_C_MIDI_NUMBER `constants`

##### Summary

The MIDI note number corresponding to middle C

##### Remarks

This is not a universal definition. In fact, as far as the MIDI standard is concerned, the note numbers are not strictly related to any musical pitches. They are simply a set of 128 sequential numbers that can be turned on and off with control messages. But 60 is used by most device manufacturers.

<a name='F-Cadd9-Model-Constants-MIDDLE_C_OCTAVE'></a>
### MIDDLE_C_OCTAVE `constants`

##### Summary

The octave number of middle C

##### Remarks

There is no universal definition: middle C is also sometimes labeled as C3.

<a name='F-Cadd9-Model-Constants-NAMES_PER_OCTAVE'></a>
### NAMES_PER_OCTAVE `constants`

##### Summary

The number of note names per octave

##### Remarks

Western tonal music is built around the diatonic scale, which is a heptatonic scale (containing seven tones) arranged so there are 2 half-steps (a single semitone) and 5 whole-steps (two semitones) separating them. The arrangement of steps and half-steps determines the [Mode](#T-Cadd9-Model-Mode 'Cadd9.Model.Mode') of the scale.

<a name='F-Cadd9-Model-Constants-SEMITONES_PER_OCTAVE'></a>
### SEMITONES_PER_OCTAVE `constants`

##### Summary

The number of semitones per octave

##### Remarks

In Western tonal music using equal temperament, an octave is a doubling of frequency, and it is further subdivided into 12 semitones, each in a 2^(1/12):1 ratio with the one before it.
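The equal-temperament remark on `SEMITONES_PER_OCTAVE` can be made concrete with a short worked equation (the A4 = 440 Hz reference used here is a common tuning convention, not something this API defines):

```latex
% n semitones above a reference frequency f_0 in 12-tone equal temperament:
f_n = f_0 \cdot 2^{n/12}
% twelve semitones double the frequency, recovering the octave:
f_{12} = f_0 \cdot 2^{12/12} = 2 f_0
% e.g. one semitone above A4 = 440\,\mathrm{Hz}:
f_1 = 440 \cdot 2^{1/12} \approx 466.16\,\mathrm{Hz}
```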
<a name='T-Cadd9-Util-EnumerableExtensions'></a>
## EnumerableExtensions `type`

##### Namespace

Cadd9.Util

##### Summary

Adds useful extension methods to [IEnumerable\`1](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Collections.Generic.IEnumerable`1 'System.Collections.Generic.IEnumerable`1')

<a name='M-Cadd9-Util-EnumerableExtensions-EveryN``1-System-Collections-Generic-IEnumerable{``0},System-Int32-'></a>
### EveryN\`\`1() `method`

##### Summary

Returns the first element in the input and every Nth afterward.

##### Parameters

This method has no parameters.

##### Example

This example demonstrates how to use [EveryN\`\`1](#M-Cadd9-Util-EnumerableExtensions-EveryN``1-System-Collections-Generic-IEnumerable{``0},System-Int32- 'Cadd9.Util.EnumerableExtensions.EveryN``1(System.Collections.Generic.IEnumerable{``0},System.Int32)') to take every Nth element of a range.

```
var result = Enumerable.Range(1, 10).EveryN(3);
foreach (var num in result)
{
    Console.WriteLine(num);
}
// This produces: 1, 4, 7, 10
```

<a name='T-Cadd9-Model-Interval-Generic'></a>
## Generic `type`

##### Namespace

Cadd9.Model.Interval

##### Summary

Represents a generic interval between two notes or pitches, independent of semitone width

##### Remarks

For instance, the generic interval between any F and any A is always a third. From F to A♯ is an augmented third (5 semitones), and from F to A♭ is a minor third (3 semitones), but the generic interval is always a third. In other words, this is the number of note names shifted from the bottom to top of the interval.

<a name='T-Cadd9-Util-IntExtensions'></a>
## IntExtensions `type`

##### Namespace

Cadd9.Util

##### Summary

Adds extension methods for some useful integer math

<a name='M-Cadd9-Util-IntExtensions-Demodulus-System-Int32,System-Int32-'></a>
### Demodulus() `method`

##### Summary

Returns the integer congruent to `operand` (mod `modulus`) with the smallest absolute value.

##### Parameters

This method has no parameters.

##### Remarks

The range of this method is `[-modulus/2, modulus/2)` -- basically, the goal is to return an integer that is modulo-congruent with the operand but is as close as possible to zero. Primarily this is used to simplify accidentals as much as possible: if we have 11 sharps, a more ideal (and enharmonic) accidental would be one flat, represented as -1 and congruent to 11 (mod 12).

<a name='M-Cadd9-Util-IntExtensions-Modulus-System-Int32,System-Int32-'></a>
### Modulus() `method`

##### Summary

Returns the value of `operand` mod `modulus`

##### Parameters

This method has no parameters.

##### Remarks

Formally, this method returns the least non-negative integer `N` such that `operand ≡ N (mod modulus)`. It differs from C#'s `%` operator in its treatment of negative values: `-1 % 7 == -1` while `(-1).Modulus(7) == 6`.

<a name='M-Cadd9-Util-IntExtensions-Ordinal-System-Int32-'></a>
### Ordinal() `method`

##### Summary

Returns a string representation of an integer in ordinal form

##### Parameters

This method has no parameters.

<a name='T-Cadd9-Model-Interval'></a>
## Interval `type`

##### Namespace

Cadd9.Model

##### Summary

Represents a musical width between notes or pitches.

<a name='M-Cadd9-Model-Interval-#ctor-Cadd9-Model-Interval-Generic,System-Int32-'></a>
### #ctor(genericWidth,specificWidth) `constructor`

##### Summary

Creates an Interval with the given generic and specific widths.
##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| genericWidth | [Cadd9.Model.Interval.Generic](#T-Cadd9-Model-Interval-Generic 'Cadd9.Model.Interval.Generic') | The number of note names spanned by the interval |
| specificWidth | [System.Int32](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Int32 'System.Int32') | The number of semitones spanned by the interval |

<a name='P-Cadd9-Model-Interval-Abbreviation'></a>
### Abbreviation `property`

##### Summary

A short formatted description of the interval, like "P4"

<a name='P-Cadd9-Model-Interval-Description'></a>
### Description `property`

##### Summary

A long-form formatted description of the interval, like "Perfect Fourth"

<a name='P-Cadd9-Model-Interval-GenericWidth'></a>
### GenericWidth `property`

##### Summary

The generic width of this Interval; in other words, the difference in note names from bottom to top.

<a name='P-Cadd9-Model-Interval-SpecificWidth'></a>
### SpecificWidth `property`

##### Summary

The specific width of this Interval; in other words, the semitones shifted between the bottom and top.

<a name='M-Cadd9-Model-Interval-Between-Cadd9-Model-Name,Cadd9-Model-Name-'></a>
### Between(bottom,top) `method`

##### Summary

Returns a new Interval representing the width between two [Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name')s.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| bottom | [Cadd9.Model.Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') | The lower [Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') to compare |
| top | [Cadd9.Model.Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') | The higher [Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') to compare |

##### Remarks

It is always assumed that the interval is going up from first to second: C to B would give an interval of a major seventh (despite being much closer to go down a minor second). Also for this reason, this method will always produce an interval between unison (inclusive) and an octave (exclusive).

<a name='M-Cadd9-Model-Interval-Between-Cadd9-Model-Note,Cadd9-Model-Note-'></a>
### Between(bottom,top) `method`

##### Summary

Returns a new Interval representing the width between two [Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note')s.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| bottom | [Cadd9.Model.Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') | The lower [Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') to compare |
| top | [Cadd9.Model.Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') | The higher [Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') to compare |

##### Remarks

It is always assumed that the interval is going up from first to second: C♯ to B would give an interval of a minor seventh (despite being much closer to go down a major second). Also for this reason, this method will always produce an interval between unison (inclusive) and an octave (exclusive).

<a name='M-Cadd9-Model-Interval-Between-Cadd9-Model-Pitch,Cadd9-Model-Pitch-'></a>
### Between(bottom,top) `method`

##### Summary

Returns a new Interval representing the width between two [Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch')es.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| bottom | [Cadd9.Model.Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch') | The lower [Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch') to compare |
| top | [Cadd9.Model.Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch') | The higher [Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch') to compare |

##### Remarks

It is always assumed that the interval is going up from first to second: C♯3 to B4 would give an interval of a minor fourteenth.

<a name='M-Cadd9-Model-Interval-Enharmonic-Cadd9-Model-Interval-'></a>
### Enharmonic(other) `method`

##### Summary

Returns true if `other` is enharmonically equivalent to this interval.
##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval') | The other Interval to compare |

##### Remarks

Two intervals are enharmonic if they have the same specific width. Perfect unison and diminished second, for
example, are enharmonic despite having different generic widths.

<a name='M-Cadd9-Model-Interval-Equals-Cadd9-Model-Interval-'></a>
### Equals(other) `method`

##### Summary

Determines whether two Intervals are value-equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval') | The other Interval to compare |

<a name='M-Cadd9-Model-Interval-GenericName-Cadd9-Model-Interval-Generic-'></a>
### GenericName() `method`

##### Summary

Returns the name of the given generic width

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Interval-GetHashCode'></a>
### GetHashCode() `method`

##### Summary

Produces a high-entropy hash code such that two value-equivalent Intervals are guaranteed to produce the same
result.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Interval-Parse-System-String-'></a>
### Parse(input) `method`

##### Summary

Returns a new Interval by parsing the given string input.
##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| input | [System.String](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String 'System.String') | The input to parse |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.FormatException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.FormatException 'System.FormatException') | The given input cannot be parsed |
| [System.ArgumentException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.ArgumentException 'System.ArgumentException') | If an illegal modifier is supplied for the interval |

##### Remarks

Two formats are accepted: Formal, like `P4` and `d3`, or simple, like `b5` and `#9`. If the simple form is
used, then the major/perfect matching interval is sharped the given number of times. The formal form
understands (P)erfect, (d)iminished, (m)inor, (M)ajor, and (A)ugmented descriptors for each interval as
appropriate.

<a name='M-Cadd9-Model-Interval-ParseFormal-System-String-'></a>
### ParseFormal() `method`

##### Summary

Parses the given input using "formal" notation, or null if it cannot be parsed accordingly

##### Parameters

This method has no parameters.

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.ArgumentException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.ArgumentException 'System.ArgumentException') | If an illegal modifier is supplied for the interval |

##### Remarks

"Formal" notation indicates "P4" for a perfect fourth, "m3" for a minor third, etc.

<a name='M-Cadd9-Model-Interval-ParseSimple-System-String-'></a>
### ParseSimple() `method`

##### Summary

Parses the given input using "simple" notation, or returns null if it cannot be parsed accordingly

##### Parameters

This method has no parameters.

##### Remarks

This notation uses "3" for a major third, "b5" for a flat fifth, etc.
Commonly used to describe the component intervals of chords.

<a name='M-Cadd9-Model-Interval-ReducedGenericWidth-Cadd9-Model-Interval-Generic-'></a>
### ReducedGenericWidth() `method`

##### Summary

Returns the generic width from unison to seventh that is enharmonic with the given generic width

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Interval-ToString'></a>
### ToString() `method`

##### Summary

A string representation of this Interval, useful for debugging.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Interval-op_Addition-Cadd9-Model-Interval,Cadd9-Model-Interval-'></a>
### op_Addition() `method`

##### Summary

Creates a new compound Interval by combining two others.

##### Parameters

This method has no parameters.

##### Remarks

For example, adding together a perfect octave and a perfect fifth produces a perfect twelfth.

<a name='M-Cadd9-Model-Interval-op_Subtraction-Cadd9-Model-Interval,Cadd9-Model-Interval-'></a>
### op_Subtraction() `method`

##### Summary

Creates a new compound Interval by subtracting one from the other.

##### Parameters

This method has no parameters.

##### Remarks

For example, subtracting a minor second from a perfect octave produces a major seventh.

<a name='T-Cadd9-Model-Mode'></a>
## Mode `type`

##### Namespace

Cadd9.Model

##### Summary

Represents a musical mode based on its component intervals.
<a name='M-Cadd9-Model-Mode-#ctor-System-String,Cadd9-Model-Interval[]-'></a>
### #ctor(title,intervals) `constructor`

##### Summary

Returns a new Mode

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| title | [System.String](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String 'System.String') | The title of this Mode |
| intervals | [Cadd9.Model.Interval[]](#T-Cadd9-Model-Interval[] 'Cadd9.Model.Interval[]') | The intervals that make up this Mode |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.ArgumentException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.ArgumentException 'System.ArgumentException') | Thrown if a generic interval appears multiple times |

<a name='M-Cadd9-Model-Mode-#ctor-System-String,System-String[]-'></a>
### #ctor(title,intervals) `constructor`

##### Summary

Returns a new Mode

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| title | [System.String](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String 'System.String') | The title of this Mode |
| intervals | [System.String[]](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String[] 'System.String[]') | String representations of the intervals that make up this Mode |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.ArgumentException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.ArgumentException 'System.ArgumentException') | Thrown if the given intervals do not feature every generic interval 0-6 exactly once |

##### Remarks

The intervals given in this constructor will be parsed according to the behavior in
[Parse](#M-Cadd9-Model-Interval-Parse-System-String- 'Cadd9.Model.Interval.Parse(System.String)') which
allows short-hand construction.
<a name='P-Cadd9-Model-Mode-Intervals'></a>
### Intervals `property`

##### Summary

A set of all the Intervals that make up this Mode

<a name='P-Cadd9-Model-Mode-Title'></a>
### Title `property`

##### Summary

A descriptive title of this Mode

<a name='M-Cadd9-Model-Mode-AccidentalFor-Cadd9-Model-Note,Cadd9-Model-Name-'></a>
### AccidentalFor() `method`

##### Summary

Yields the [Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental') associated with a given name for
a given key.

##### Parameters

This method has no parameters.

##### Remarks

This may be used, for example, to place sharps and flats on a staff given a particular key. In the D major
(Ionian) key, F and C have a sharp accidental while all other names are natural. This may also be used to
determine how to render a note: if its accidental is the same as the accidental for its name in the key, no
symbol need be added.

<a name='M-Cadd9-Model-Mode-Ascend'></a>
### Ascend() `method`

##### Summary

Yields every Interval in this mode starting from unison.

##### Parameters

This method has no parameters.

##### Remarks

Will progress infinitely, so take only what is needed.

<a name='M-Cadd9-Model-Mode-DiatonicChord-Cadd9-Model-Degree,System-Int32-'></a>
### DiatonicChord(degree,count) `method`

##### Summary

Returns a chord based on stacked thirds from the given scale degree of this mode.
##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| degree | [Cadd9.Model.Degree](#T-Cadd9-Model-Degree 'Cadd9.Model.Degree') | The scale degree to use as the root |
| count | [System.Int32](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Int32 'System.Int32') | The number of notes to return (3 = triad, 4 = 7th, 5 = 9th, etc) (min 3) |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.ArgumentException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.ArgumentException 'System.ArgumentException') | If degree or width are out of bounds |

##### Remarks

Each mode has 7 scale degrees that produce 7 signature chord qualities. For example, the major (Ionian)
mode's second scale degree (ii) is a minor triad, while the Phrygian mode's fifth scale degree (v°) is a
diminished triad. Important note: the value of `degree` is treated starting at zero.

<a name='M-Cadd9-Model-Mode-Equals-Cadd9-Model-Mode-'></a>
### Equals(other) `method`

##### Summary

Determines whether two Modes are value-equivalent.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Mode](#T-Cadd9-Model-Mode 'Cadd9.Model.Mode') | The other Mode to compare |

<a name='M-Cadd9-Model-Mode-GetHashCode'></a>
### GetHashCode() `method`

##### Summary

Produces a high-entropy hash code such that two value-equivalent Modes are guaranteed to produce the same
result.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Mode-Scale-Cadd9-Model-Note-'></a>
### Scale() `method`

##### Summary

Yields every [Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') in this mode starting from the given tonic.

##### Parameters

This method has no parameters.

##### Remarks

Will progress infinitely, so take only what is needed.
<a name='M-Cadd9-Model-Mode-Scale-Cadd9-Model-Pitch-'></a>
### Scale() `method`

##### Summary

Yields every [Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch') in this mode starting from the given tonic.

##### Parameters

This method has no parameters.

##### Remarks

Will progress infinitely, so take only what is needed.

<a name='M-Cadd9-Model-Mode-ToString'></a>
### ToString() `method`

##### Summary

A string representation of this Mode, useful for debugging.

##### Parameters

This method has no parameters.

<a name='T-Cadd9-Model-Name'></a>
## Name `type`

##### Namespace

Cadd9.Model

##### Summary

An enumeration of the seven note Names used in Western tonal music.

<a name='T-Cadd9-Model-Note'></a>
## Note `type`

##### Namespace

Cadd9.Model

##### Summary

Represents a note [Name](#P-Cadd9-Model-Note-Name 'Cadd9.Model.Note.Name') with an associated modifying
[Accidental](#P-Cadd9-Model-Note-Accidental 'Cadd9.Model.Note.Accidental').

<a name='M-Cadd9-Model-Note-#ctor-Cadd9-Model-Name,Cadd9-Model-Accidental-'></a>
### #ctor(name,accidental) `constructor`

##### Summary

Returns a new Note

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| name | [Cadd9.Model.Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') | The [Name](#P-Cadd9-Model-Note-Name 'Cadd9.Model.Note.Name') associated with this Note |
| accidental | [Cadd9.Model.Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental') | The [Accidental](#P-Cadd9-Model-Note-Accidental 'Cadd9.Model.Note.Accidental') modifying this Note |

<a name='P-Cadd9-Model-Note-Accidental'></a>
### Accidental `property`

##### Summary

The [Accidental](#P-Cadd9-Model-Note-Accidental 'Cadd9.Model.Note.Accidental') modifying this Note

<a name='P-Cadd9-Model-Note-Description'></a>
### Description `property`

##### Summary

A formatted representation of this Note as a UTF-8 string.
<a name='P-Cadd9-Model-Note-Name'></a>
### Name `property`

##### Summary

The [Name](#P-Cadd9-Model-Note-Name 'Cadd9.Model.Note.Name') associated with this Note

<a name='P-Cadd9-Model-Note-PitchClass'></a>
### PitchClass `property`

##### Summary

The pitch class (0 to 11) of this Note.

##### Remarks

The concept of pitch class is often used in post-tonal music to describe pitches without being based in any
given heptatonic scale. C is equivalent to a pitch class of 0, and each pitch class going up is separated by
a semitone. All Notes that are enharmonic by definition have the same pitch class.

<a name='M-Cadd9-Model-Note-Apply-Cadd9-Model-Interval-'></a>
### Apply(interval) `method`

##### Summary

Produces a new Note by applying the given [Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval').

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| interval | [Cadd9.Model.Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval') | The width between the current Note and the new Note to be generated. |

<a name='M-Cadd9-Model-Note-Enharmonic-Cadd9-Model-Note-'></a>
### Enharmonic(other) `method`

##### Summary

Determines whether two Notes are enharmonically equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') | The other Note to compare |

##### Remarks

Two Notes are enharmonic if they map to the same key on a keyboard: for example, while D♯ and E♭ are
distinct notes that play different musical roles, they are enharmonic.
<a name='M-Cadd9-Model-Note-Equals-Cadd9-Model-Note-'></a>
### Equals(other) `method`

##### Summary

Determines whether two Notes are value-equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') | The other Note to compare |

<a name='M-Cadd9-Model-Note-GetHashCode'></a>
### GetHashCode() `method`

##### Summary

Produces a high-entropy hash code such that two value-equivalent Notes are guaranteed to produce the same
result.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Note-Parse-System-String-'></a>
### Parse(input) `method`

##### Summary

Parses the given input as a Note

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| input | [System.String](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String 'System.String') | The input to parse |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.FormatException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.FormatException 'System.FormatException') | The given input cannot be parsed |

##### Remarks

The input is treated as case-insensitive. The first character is parsed as a
[Name](#P-Cadd9-Model-Note-Name 'Cadd9.Model.Note.Name') while the rest is parsed according to
[Parse](#M-Cadd9-Model-Accidental-Parse-System-String- 'Cadd9.Model.Accidental.Parse(System.String)').
Examples of valid Notes would include "B", "Ebb", "c#"

<a name='M-Cadd9-Model-Note-ToString'></a>
### ToString() `method`

##### Summary

Returns a string representation of this Note, primarily for debugging purposes.

##### Parameters

This method has no parameters.
<a name='T-Cadd9-Util-ParseHelpers'></a>
## ParseHelpers `type`

##### Namespace

Cadd9.Util

##### Summary

Contains helper methods to parse notes, pitches, and intervals

<a name='M-Cadd9-Util-ParseHelpers-N-System-String-'></a>
### N() `method`

##### Summary

Parses the given input as a [Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note')

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Util-ParseHelpers-P-System-String-'></a>
### P() `method`

##### Summary

Parses the given input as a [Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch')

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Util-ParseHelpers-W-System-String-'></a>
### W() `method`

##### Summary

Parses the given input as an [Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval')

##### Parameters

This method has no parameters.

##### Remarks

The letter W (for Width) is used instead of I to prevent collision with
[I](#F-Cadd9-Model-Degree-I 'Cadd9.Model.Degree.I') when both are statically included in the same file.
<a name='T-Cadd9-Model-Pitch'></a>
## Pitch `type`

##### Namespace

Cadd9.Model

##### Summary

A particular musical pitch achieved by playing a [Note](#P-Cadd9-Model-Pitch-Note 'Cadd9.Model.Pitch.Note')
in a particular octave

<a name='M-Cadd9-Model-Pitch-#ctor-Cadd9-Model-Note,System-Int32-'></a>
### #ctor(note,octave) `constructor`

##### Summary

Returns a new Pitch

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| note | [Cadd9.Model.Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') | The [Note](#P-Cadd9-Model-Pitch-Note 'Cadd9.Model.Pitch.Note') to represent this Pitch |
| octave | [System.Int32](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Int32 'System.Int32') | The octave corresponding to this Pitch |

<a name='M-Cadd9-Model-Pitch-#ctor-Cadd9-Model-Name,Cadd9-Model-Accidental,System-Int32-'></a>
### #ctor(name,accidental,octave) `constructor`

##### Summary

Returns a new Pitch

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| name | [Cadd9.Model.Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') | The [Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') of this Pitch's [Note](#P-Cadd9-Model-Pitch-Note 'Cadd9.Model.Pitch.Note') |
| accidental | [Cadd9.Model.Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental') | The [Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental') of this Pitch's [Note](#P-Cadd9-Model-Pitch-Note 'Cadd9.Model.Pitch.Note') |
| octave | [System.Int32](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Int32 'System.Int32') | The octave corresponding to this Pitch |

<a name='P-Cadd9-Model-Pitch-Description'></a>
### Description `property`

##### Summary

A formatted representation of this Pitch as a UTF-8 string.
<a name='P-Cadd9-Model-Pitch-Midi'></a>
### Midi `property`

##### Summary

The MIDI note number associated with this Pitch

##### Remarks

Though there is no universal standard for what octave represents middle-C, we treat C-4 as middle C, with
the MIDI note number 60.

<a name='P-Cadd9-Model-Pitch-Note'></a>
### Note `property`

##### Summary

The [Note](#P-Cadd9-Model-Pitch-Note 'Cadd9.Model.Pitch.Note') represented by this Pitch

<a name='P-Cadd9-Model-Pitch-Octave'></a>
### Octave `property`

##### Summary

The octave corresponding to this Pitch, where middle C = C4

<a name='M-Cadd9-Model-Pitch-Apply-Cadd9-Model-Interval-'></a>
### Apply(interval) `method`

##### Summary

Produces a new (higher) Pitch by applying the given [Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval').

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| interval | [Cadd9.Model.Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval') | The [Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval') between this and the new Pitch |

##### Remarks

The Note [Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') will be incremented by `interval.Generic`, while the
pitch will be incremented by `interval.Specific` (in semitones). The new
[Accidental](#T-Cadd9-Model-Accidental 'Cadd9.Model.Accidental') will be set as appropriate to achieve this
pitch.

<a name='M-Cadd9-Model-Pitch-DescribeInKey-Cadd9-Model-Note,Cadd9-Model-Mode-'></a>
### DescribeInKey(key,signature) `method`

##### Summary

A formatted representation of this Pitch as a UTF-8 string in the given key.
##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| key | [Cadd9.Model.Note](#T-Cadd9-Model-Note 'Cadd9.Model.Note') | The tonic of the key |
| signature | [Cadd9.Model.Mode](#T-Cadd9-Model-Mode 'Cadd9.Model.Mode') | The mode of the key |

##### Remarks

This method will include a symbol for the accidental only if it is different than the associated
[Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') in the given key. For example, C♮4 would be rendered "C4" in
the key of C Ionian, but would be "C♮4" in the key of D Ionian, because in that key a C would normally be
sharp. Likewise, in D Ionian the pitch F♯3 would be rendered as "F3" because F is normally sharp in that key.

<a name='M-Cadd9-Model-Pitch-Enharmonic-Cadd9-Model-Pitch-'></a>
### Enharmonic(other) `method`

##### Summary

Determines whether two Pitches are enharmonically equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch') | The other Pitch to compare |

##### Remarks

Two Pitches are enharmonic if they map to the same key on a keyboard: for example, while D♯4 and E♭4 are
distinct notes that play different musical roles, they are enharmonic.

<a name='M-Cadd9-Model-Pitch-Equals-Cadd9-Model-Pitch-'></a>
### Equals(other) `method`

##### Summary

Determines whether two Pitches are value-equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Pitch](#T-Cadd9-Model-Pitch 'Cadd9.Model.Pitch') | The other Pitch to compare |

<a name='M-Cadd9-Model-Pitch-GetHashCode'></a>
### GetHashCode() `method`

##### Summary

Produces a high-entropy hash code such that two value-equivalent Pitches are guaranteed to produce the same
result.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Pitch-InOctave-System-Int32-'></a>
### InOctave(octave) `method`

##### Summary

Returns a new Pitch transposed into the given octave.
##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| octave | [System.Int32](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Int32 'System.Int32') | The octave of the desired note. |

<a name='M-Cadd9-Model-Pitch-Parse-System-String-'></a>
### Parse(input) `method`

##### Summary

Parses the given input as a Pitch

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| input | [System.String](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String 'System.String') | The input to parse |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.FormatException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.FormatException 'System.FormatException') | The given input cannot be parsed |

##### Remarks

The input is treated as case-insensitive. The first character is parsed as a
[Name](#T-Cadd9-Model-Name 'Cadd9.Model.Name') and the rest is broken into an accidental part and an octave.
The accidental is parsed according to
[Parse](#M-Cadd9-Model-Accidental-Parse-System-String- 'Cadd9.Model.Accidental.Parse(System.String)').
Examples of valid Pitches would include "B2", "Ebb17", "c#-1"

<a name='M-Cadd9-Model-Pitch-ToString'></a>
### ToString() `method`

##### Summary

Returns a string representation of this Pitch, primarily for debugging purposes.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Pitch-Transpose-System-Int32-'></a>
### Transpose(octaves) `method`

##### Summary

Returns a new Pitch transposed by the given number of octaves.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| octaves | [System.Int32](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.Int32 'System.Int32') | The number of octaves to transpose. |

##### Remarks

If `octaves` is positive, the pitch will increase. If negative, it will decrease.
<a name='T-Program'></a>
## Program `type`

##### Namespace

##### Summary

Empty program class

##### Remarks

Exists only to allow the library to compile successfully. For some reason, netcoreapp2.2 class libraries
still require a class with a Main method.

<a name='M-Program-Main-System-String[]-'></a>
### Main() `method`

##### Summary

Empty main method

##### Parameters

This method has no parameters.

<a name='T-Cadd9-Model-Quality'></a>
## Quality `type`

##### Namespace

Cadd9.Model

##### Summary

Represents a particular chord quality that may be applied with any given root

<a name='M-Cadd9-Model-Quality-#ctor-Cadd9-Model-Interval[]-'></a>
### #ctor(intervals) `constructor`

##### Summary

Returns a new Quality

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| intervals | [Cadd9.Model.Interval[]](#T-Cadd9-Model-Interval[] 'Cadd9.Model.Interval[]') | The set of Intervals that define this chord quality |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.ArgumentException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.ArgumentException 'System.ArgumentException') | Thrown if a generic interval appears multiple times |

<a name='M-Cadd9-Model-Quality-#ctor-System-String[]-'></a>
### #ctor(intervals) `constructor`

##### Summary

Returns a new Quality

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| intervals | [System.String[]](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.String[] 'System.String[]') | String representations of the Intervals for this chord quality |

##### Exceptions

| Name | Description |
| ---- | ----------- |
| [System.ArgumentException](http://msdn.microsoft.com/query/dev14.query?appId=Dev14IDEF1&l=EN-US&k=k:System.ArgumentException 'System.ArgumentException') | Thrown if a generic interval appears multiple times |

##### Remarks

The intervals given in this constructor will be parsed according to the behavior in
[Parse](#M-Cadd9-Model-Interval-Parse-System-String- 'Cadd9.Model.Interval.Parse(System.String)') which
allows short-hand construction.

<a name='P-Cadd9-Model-Quality-Intervals'></a>
### Intervals `property`

##### Summary

The set of Intervals that define this chord quality

<a name='M-Cadd9-Model-Quality-Add-Cadd9-Model-Interval-'></a>
### Add(add) `method`

##### Summary

Returns a new Quality by adding the given Interval

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| add | [Cadd9.Model.Interval](#T-Cadd9-Model-Interval 'Cadd9.Model.Interval') | The Interval to add |

##### Remarks

This can be used to create a chord with arbitrary extensions, like 9th, 13th, etc.

<a name='M-Cadd9-Model-Quality-Alter-Cadd9-Model-Quality-Alteration-'></a>
### Alter(alt) `method`

##### Summary

Returns a new Quality by applying the given Alteration.

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| alt | [Cadd9.Model.Quality.Alteration](#T-Cadd9-Model-Quality-Alteration 'Cadd9.Model.Quality.Alteration') | The Alteration to apply |

##### Remarks

This is generally used to modify the 3rd or 5th of the quality, like creating a sus2 or a flat-5 chord.

<a name='M-Cadd9-Model-Quality-Apply-Cadd9-Model-Note-'></a>
### Apply() `method`

##### Summary

Returns a sequence of Notes by applying all of this Quality's Intervals to the given root.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Quality-Apply-Cadd9-Model-Pitch-'></a>
### Apply() `method`

##### Summary

Returns a sequence of Pitches by applying all of this Quality's Intervals to the given root.

##### Parameters

This method has no parameters.
<a name='M-Cadd9-Model-Quality-Equals-Cadd9-Model-Quality-'></a>
### Equals(other) `method`

##### Summary

Determines whether two Qualities are value-equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Quality](#T-Cadd9-Model-Quality 'Cadd9.Model.Quality') | The other Quality to compare |

<a name='M-Cadd9-Model-Quality-GetHashCode'></a>
### GetHashCode() `method`

##### Summary

Produces a high-entropy hash code such that two value-equivalent Qualities are guaranteed to produce the
same result.

##### Parameters

This method has no parameters.

<a name='M-Cadd9-Model-Quality-ToString'></a>
### ToString() `method`

##### Summary

A string representation of this Quality, useful for debugging.

##### Parameters

This method has no parameters.

<a name='T-Cadd9-Model-Voicing'></a>
## Voicing `type`

##### Namespace

Cadd9.Model

<a name='M-Cadd9-Model-Voicing-Equals-Cadd9-Model-Voicing-'></a>
### Equals(other) `method`

##### Summary

Determines whether two Voicings are value-equivalent

##### Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| other | [Cadd9.Model.Voicing](#T-Cadd9-Model-Voicing 'Cadd9.Model.Voicing') | The other Voicing to compare |

<a name='M-Cadd9-Model-Voicing-GetHashCode'></a>
### GetHashCode() `method`

##### Summary

Produces a high-entropy hash code such that two value-equivalent Voicings are guaranteed to produce the same
result.

##### Parameters

This method has no parameters.
# New York de Kangaechuu

![new-york-de-kangaechuu](https://cdn.myanimelist.net/images/manga/2/183532.jpg)

- **type**: manga
- **original-name**: ニューヨークで考え中
- **start-date**: 2012-08-09

## Tags

- slice-of-life

## Authors

- Kondoh, Akino (Story & Art)

## Synopsis

The author, Akino Kondoh, shares her thoughts on settling into the land of Thanksgiving. Beyond the
difficulty of learning a foreign language, she recounts her daily anecdotes, between astonishment and
questions. (Source: Archy World News)

## Links

- [My Anime list](https://myanimelist.net/manga/101237/New_York_de_Kangaechuu)
## About Laravuetify

Laravuetify was created to speed up development with Laravel and Vue. It comes pre-configured with
everything needed to use Laravel + Vue + Vuetify together, and it also includes other necessary libraries,
such as chat and moment, to round out a full application.

## Getting Started

Make sure you have Composer and Node.js installed on your computer; otherwise, install them first via these
links: [Install composer](https://getcomposer.org/), [Install nodejs](https://nodejs.org/en/), or
[Install yarn](https://yarnpkg.com/).

Open your terminal:

    git clone https://github.com/kechankrisna/laravuetify.git
    cd laravuetify
    composer install
    npm install
    php -r "file_exists('.env') || copy('.env.example', '.env');"

Go to .env to configure your database, then run:

    php artisan key:generate
    php artisan migrate:reset
    php artisan migrate
    php artisan passport:install
    php artisan storage:link

Go to .env to update your CLIENT_KEY 1 and 2 and configure your mail driver. Open the file app/mail.php to
configure your mail setup, then run:

    php artisan serve && npm run watch

## Future

I am planning to make this framework work as a full dashboard with Material Design supported by Vuetify, so
that developers can bring it to production faster without worrying about configuration anymore. Thank you.

## Learning Laravel

Laravel has the most extensive and thorough [documentation](https://laravel.com/docs) and video tutorial
library of all modern web application frameworks, making it a breeze to get started with the framework.

If you don't feel like reading, [Laracasts](https://laracasts.com) can help. Laracasts contains over 1500
video tutorials on a range of topics including Laravel, modern PHP, unit testing, and JavaScript. Boost your
skills by digging into our comprehensive video library.

## Learning Vue && Vuex && Vue-router

Vue.js is an MIT-licensed open source project with its ongoing development made possible entirely by the
support of these awesome backers.
For questions and support please use the official forum or community chat. The issue list of this repo is exclusively for bug reports and feature requests. [Vue documentation](https://github.com/vuejs/vue). ## Learning Vuetify Vuetify is a Material Design Component Framework for the Vue framework. We believe that you shouldn't need design skills to build beautiful Vue applications. More information please visit [Vuetify documentation](https://github.com/vuetifyjs/vuetify) ## Laravuetify Sponsors We would like to extend our thanks to the following sponsors for funding Laravel development. If you are interested in becoming a sponsor, please contact me through email [email protected] ## Code of Conduct In order to ensure that the Laravel community is welcoming to all, please review and abide by the [Code of Conduct](https://laravel.com/docs/contributions#code-of-conduct). ## Security Vulnerabilities If you discover a security vulnerability within Laravel, please send an e-mail to Taylor Otwell via [[email protected]](mailto:[email protected]). All security vulnerabilities will be promptly addressed. ## License The Laravel framework is open-sourced software licensed under the [MIT license](https://opensource.org/licenses/MIT). Vue is MIT-licensed open source project with its ongoing development made possible entirely by the support of these awesome backers. [MIT license](https://opensource.org/licenses/MIT). Vuetify is a Material Design Component Framework for the Vue framework licensed under the [MIT license](https://opensource.org/licenses/MIT).
57.873016
272
0.777839
eng_Latn
0.993085
5df02d442ab2a0a9e95f5ac389d791d1c01e345d
1,358
md
Markdown
data/1003.md
marioa5945/marioa
ac81e4ec26829247b6de1d7437145f2808a973ad
[ "ISC" ]
null
null
null
data/1003.md
marioa5945/marioa
ac81e4ec26829247b6de1d7437145f2808a973ad
[ "ISC" ]
null
null
null
data/1003.md
marioa5945/marioa
ac81e4ec26829247b6de1d7437145f2808a973ad
[ "ISC" ]
null
null
null
title: any, unknown, and never in TypeScript
date: 2021/01/29
description: Say goodbye to "anyscript" — you can enforce this with ESLint or a pre-commit hook.

# _any_, _unknown_, and _never_ in TypeScript

Say goodbye to "anyscript". You can enforce this with ESLint or a pre-commit hook.

## Kill off _any_?

_any_ is both a top type and a subtype of every type: any type can be assigned to it, and it can be assigned to any type. In other words, it switches off type checking entirely, which defeats the purpose of using TypeScript. So we choose to kill off _any_.

## Without _any_, use _unknown_ for third-party libraries without type definitions, or as a union of all types

_unknown_ is a top type: every type can be assigned to _unknown_, but an _unknown_ value can only be assigned to _unknown_ or _any_.

```ts
// Any value can be assigned to the unknown type
let val: unknown = 'hello world!';
val = 1;
val = () => 1 + 1;

// An unknown value cannot be assigned directly to other types,
// except unknown and any
const valueUnknown: unknown = val;
const valueAny: any = val;
```

1. Use a type assertion:

```ts
// Assigning to a string
const val: unknown = 'hello world!';
let str: string = val as string;
```

2. Narrow with a type check:

```ts
// Assigning to a string
let val: unknown = 'aaa';
if (typeof val === 'string') {
  const str: string = val.slice(1);
  console.log(str);
}

// Assigning to an array
val = [1, 2, 3];
if (val instanceof Array) {
  const arr: Array<number> = val;
  console.log(arr);
}
```

## never, the opposite of unknown

_never_ is a bottom type: _never_ can be assigned to every type, but no type can be assigned to _never_.

Use cases:

```ts
// A function that throws has return type never
function error(message: string): never {
  throw new Error(message);
}

// The inferred return type is also never
function fail(): never {
  return error('Something failed');
}

// In type definitions
type NonNullable<T> = T extends null | undefined ? never : T;
```
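Beyond functions that never return, _never_ also powers compile-time exhaustiveness checks on discriminated unions. Here is a small sketch; the `Shape` type and `area` function are illustrative, not from the original post:

```typescript
type Shape =
  | { kind: 'circle'; radius: number }
  | { kind: 'square'; side: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case 'circle':
      return Math.PI * shape.radius ** 2;
    case 'square':
      return shape.side ** 2;
    default: {
      // If a new kind is added to Shape but not handled above, the
      // narrowed type here is no longer never and this line fails to compile.
      const exhaustive: never = shape;
      return exhaustive;
    }
  }
}

console.log(area({ kind: 'square', side: 3 })); // 9
```

Because no type other than _never_ is assignable to _never_, the `default` branch turns a forgotten case into a compile error instead of a runtime bug.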
17.636364
62
0.697349
yue_Hant
0.756903
5df06b690755118d2da2ecd64059dbef6c2bff66
1,891
md
Markdown
wdk-ddi-src/content/portcls/nf-portcls-iadapterpnpmanagement-pnpquerystop.md
hsebs/windows-driver-docs-ddi
d655ca4b723edeff140ca2d9cc1bfafd72fcda97
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/portcls/nf-portcls-iadapterpnpmanagement-pnpquerystop.md
hsebs/windows-driver-docs-ddi
d655ca4b723edeff140ca2d9cc1bfafd72fcda97
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/portcls/nf-portcls-iadapterpnpmanagement-pnpquerystop.md
hsebs/windows-driver-docs-ddi
d655ca4b723edeff140ca2d9cc1bfafd72fcda97
[ "CC-BY-4.0", "MIT" ]
1
2021-04-22T21:40:43.000Z
2021-04-22T21:40:43.000Z
---
UID: NF:portcls.IAdapterPnpManagement.PnpQueryStop
title: IAdapterPnpManagement::PnpQueryStop (portcls.h)
description: PnpQueryStop provides a notification when PnpQueryStop is invoked by portcls just before succeeding the QueryStop IRP.
tech.root: audio
ms.assetid: ddc729dd-71fe-4341-ba7e-ee05e9f91291
ms.date: 10/31/2018
f1_keywords:
- "portcls/IAdapterPnpManagement.PnpQueryStop"
ms.keywords: IAdapterPnpManagement::PnpQueryStop, PnpQueryStop, IAdapterPnpManagement.PnpQueryStop, IAdapterPnpManagement::PnpQueryStop
req.header: portcls.h
req.irql: PASSIVE_LEVEL
topic_type:
- apiref
api_type:
- COM
api_location:
- portcls.h
api_name:
- IAdapterPnpManagement.PnpQueryStop
product:
- Windows
targetos: Windows
---

# IAdapterPnpManagement::PnpQueryStop

## -description

PnpQueryStop provides a notification when PnpQueryStop is invoked by portcls just before succeeding the QueryStop IRP.

## -remarks

PnpQueryStop is invoked by portcls just before succeeding the QueryStop IRP. This is just a notification, and the call doesn't return a value.

Note: Portcls acquires the device global lock before making this call, thus the miniport must execute this call as fast as possible.

While a Stop is pending, Portcls will block (hold) any new create requests.

For more information, see [Implement PnP Rebalance for PortCls Audio Drivers](https://docs.microsoft.com/windows-hardware/drivers/audio/implement-pnp-rebalance-for-portcls-audio-drivers).

## -see-also

[IAdapterPnpManagement](nn-portcls-iadapterpnpmanagement.md)
31.516667
209
0.778424
eng_Latn
0.606611
5df0ed1b4a3db433021fe9f3442037440bbaa3a1
2,453
md
Markdown
components/aws/sagemaker/tests/integration_tests/README.md
Intellicode/pipelines
f1d90407a8a2f56db11199c9c73e6df6c4a8b093
[ "Apache-2.0" ]
null
null
null
components/aws/sagemaker/tests/integration_tests/README.md
Intellicode/pipelines
f1d90407a8a2f56db11199c9c73e6df6c4a8b093
[ "Apache-2.0" ]
null
null
null
components/aws/sagemaker/tests/integration_tests/README.md
Intellicode/pipelines
f1d90407a8a2f56db11199c9c73e6df6c4a8b093
[ "Apache-2.0" ]
null
null
null
## Requirements

1. [Docker](https://www.docker.com/)
1. [IAM Role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) with SageMakerFullAccess and AmazonS3FullAccess
1. IAM User credentials with SageMakerFullAccess, AWSCloudFormationFullAccess, IAMFullAccess, AmazonEC2FullAccess, AmazonS3FullAccess permissions
1. The SageMaker WorkTeam and GroundTruth Component tests expect that at least one private workteam already exists in the region where you are running these tests.

## Creating S3 buckets with datasets

1. In the following Python script, change the bucket name and run [`s3_sample_data_creator.py`](https://github.com/kubeflow/pipelines/tree/master/samples/contrib/aws-samples/mnist-kmeans-sagemaker#the-sample-dataset) to create an S3 bucket with the sample MNIST dataset in the region where you want to run the tests.
1. To prepare the dataset for the SageMaker GroundTruth Component test, follow the steps in the [GroundTruth Sample README](https://github.com/kubeflow/pipelines/tree/master/samples/contrib/aws-samples/ground_truth_pipeline_demo#prep-the-dataset-label-categories-and-ui-template).
1. To prepare the processing script for the SageMaker Processing Component tests, upload the `scripts/kmeans_preprocessing.py` script to your bucket. This can be done by replacing `<my-bucket>` with your bucket name and running `aws s3 cp scripts/kmeans_preprocessing.py s3://<my-bucket>/mnist_kmeans_example/processing_code/kmeans_preprocessing.py`.

## Steps to run integration tests

1. Copy the `.env.example` file to `.env` and in the following steps modify the fields of this new file:
    1. Configure the AWS credentials fields with those of your IAM User.
    1. Update the `SAGEMAKER_EXECUTION_ROLE_ARN` with that of your role created earlier.
    1. Update the `S3_DATA_BUCKET` parameter with the name of the bucket created earlier.
    1. (Optional) If you have already created an EKS cluster for testing, replace the `EKS_EXISTING_CLUSTER` field with its name.
1. Build the image by doing the following:
    1. Navigate to the root of this github directory.
    1. Run `docker build . -f components/aws/sagemaker/tests/integration_tests/Dockerfile -t amazon/integration_test`
1. Run the image, injecting your environment variable files:
    1. Navigate to the `components/aws` directory.
    1. Run `docker run --env-file components/aws/sagemaker/tests/integration_tests/.env amazon/integration_test`
94.346154
348
0.796983
eng_Latn
0.936054
5df13402eda35d6e9561032e99ef384173ad19e1
244
md
Markdown
README.md
nqhung139/AndroidCN
7ff8dbd0c4c7ec94afd678f69e25978323d1f6c6
[ "MIT" ]
null
null
null
README.md
nqhung139/AndroidCN
7ff8dbd0c4c7ec94afd678f69e25978323d1f6c6
[ "MIT" ]
null
null
null
README.md
nqhung139/AndroidCN
7ff8dbd0c4c7ec94afd678f69e25978323d1f6c6
[ "MIT" ]
null
null
null
# AndroidCN

Remote Android via SSH on a local network.

# Run on Server

Run `./index.sh &` (the `&` opens a new terminal in the background) when new devices connect.

# Run on Client

`bash <(curl http://10.0.4.22/AndroidCN/client/script.sh)` => change the URL to your server's address.
14.352941
56
0.721311
eng_Latn
0.752382
5df1aee3144819fce8f166d8f3dd69d60baf71c8
1,275
md
Markdown
README.md
belug23/FrogTTSTips
c4b07651bc34cd8d6d969f435aceb7ec2e4108b3
[ "MIT" ]
null
null
null
README.md
belug23/FrogTTSTips
c4b07651bc34cd8d6d969f435aceb7ec2e4108b3
[ "MIT" ]
null
null
null
README.md
belug23/FrogTTSTips
c4b07651bc34cd8d6d969f435aceb7ec2e4108b3
[ "MIT" ]
null
null
null
===============================================================================
Name: Frog TTS Tips
Version: 1.0.0
Creator: Belug
Website: https://github.com/belug23/FrogTTSTips
===============================================================================

This will interface with the FrogTips API, download tips and read them via the
Windows text-to-speech. This can use your currency and supports cooldowns.
If multiple commands are launched back to back, the bot will wait until the
first is done before reading the next.
You can choose the gender of the voice in the core options of the script settings.

=== Installation ===
Prepare your chatbot with these instructions:
https://github.com/StreamlabsSupport/Streamlabs-Chatbot/wiki/Prepare-&-Import-Scripts

Download the latest stable release ZIP from
https://github.com/belug23/FrogTTSTips/releases

Open the script section of the chatbot and use the import command to import the
ZIP you downloaded. Change the settings, save, then reload scripts.

=== String placeholders ===
This script supports some placeholders in the strings. Here's the list:
- {user} = Viewer's username
- {command} = The help command that displays conditions or the list of sound commands
- {cd} = The cooldown time left

Have fun.
33.552632
131
0.677647
eng_Latn
0.971946
5df1d16fa019bc8576c5f08cb6efe93a59e79f70
3,122
md
Markdown
docs/framework/data/adonet/ef/language-reference/aggregate-functions-entity-sql.md
atifaziz/core-docs
6eed48a9055c618ccbf656bfe6c9940d6909cc15
[ "CC-BY-4.0", "MIT" ]
4
2017-02-14T15:30:51.000Z
2020-01-10T17:53:41.000Z
docs/framework/data/adonet/ef/language-reference/aggregate-functions-entity-sql.md
atifaziz/core-docs
6eed48a9055c618ccbf656bfe6c9940d6909cc15
[ "CC-BY-4.0", "MIT" ]
885
2020-07-17T05:22:38.000Z
2022-03-09T15:36:53.000Z
docs/framework/data/adonet/ef/language-reference/aggregate-functions-entity-sql.md
atifaziz/core-docs
6eed48a9055c618ccbf656bfe6c9940d6909cc15
[ "CC-BY-4.0", "MIT" ]
7
2016-11-17T03:58:54.000Z
2020-08-01T03:27:57.000Z
---
description: "Learn more about: Aggregate Functions (Entity SQL)"
title: "Aggregate Functions (Entity SQL)"
ms.date: "03/30/2017"
ms.assetid: acfd3149-f519-4c6e-8fe1-b21d243a0e58
---
# Aggregate Functions (Entity SQL)

An aggregate is a language construct that condenses a collection into a scalar as a part of a group operation. [!INCLUDE[esql](../../../../../../includes/esql-md.md)] aggregates come in two forms:

- [!INCLUDE[esql](../../../../../../includes/esql-md.md)] collection functions that may be used anywhere in an expression. This includes using aggregate functions in projections and predicates that act on collections. Collection functions are the preferred mode of specifying aggregates in [!INCLUDE[esql](../../../../../../includes/esql-md.md)].

- Group aggregates in query expressions that have a GROUP BY clause. As in Transact-SQL, group aggregates accept DISTINCT and ALL as modifiers to the aggregate input.

[!INCLUDE[esql](../../../../../../includes/esql-md.md)] first tries to interpret an expression as a collection function, and if the expression is in the context of a SELECT expression, it interprets it as a group aggregate.

[!INCLUDE[esql](../../../../../../includes/esql-md.md)] defines a special aggregate operator called [GROUPPARTITION](grouppartition-entity-sql.md). This operator enables you to get a reference to the grouped input set. This allows more advanced grouping queries, where the results of the GROUP BY clause can be used in places other than group aggregate or collection functions.

## Collection Functions

Collection functions operate on collections and return a scalar value. For example, if `orders` is a collection of all `orders`, you can calculate the earliest ship date with the following expression:

`min(select value o.ShipDate from LOB.Orders as o)`

## Group Aggregates

Group aggregates are calculated over a group result as defined by the GROUP BY clause. The GROUP BY clause partitions data into groups. For each group in the result, a separate aggregate is calculated by using the elements in each group as inputs to the aggregate function. When a GROUP BY clause is used in a SELECT expression, only grouping expression names, aggregates, or constant expressions may be present in the projection, HAVING, or ORDER BY clause.

The following example calculates the average quantity ordered for each product.

`select p, avg(ol.Quantity) from LOB.OrderLines as ol`

`group by ol.Product as p`

It is possible to have a group aggregate without an explicit GROUP BY clause in the SELECT expression. All elements will be treated as a single group, equivalent to the case of specifying a grouping based on a constant.

`select avg(ol.Quantity) from LOB.OrderLines as ol`

`select avg(ol.Quantity) from LOB.OrderLines as ol group by 1`

Expressions used in the GROUP BY clause are evaluated by using the same name-resolution scope that would be visible to the WHERE clause expression.

## See also

- [Functions](functions-entity-sql.md)
67.869565
503
0.75016
eng_Latn
0.993779
5df1e501b64d523c66cc1c9e554c44f1baa1906a
460
md
Markdown
umt-training-course/jupyter-R/README.md
fjrmoreews/umt-reproducibility
b590090a164815d90ba2ee494cfb1c4accb417da
[ "MIT" ]
null
null
null
umt-training-course/jupyter-R/README.md
fjrmoreews/umt-reproducibility
b590090a164815d90ba2ee494cfb1c4accb417da
[ "MIT" ]
null
null
null
umt-training-course/jupyter-R/README.md
fjrmoreews/umt-reproducibility
b590090a164815d90ba2ee494cfb1c4accb417da
[ "MIT" ]
null
null
null
## Run Jupyter with an R kernel

Run Jupyter with an R kernel:

```
docker pull cannin/jupyter-r
docker rm jupyter
docker run --name jupyter -p 8888:8888 -t cannin/jupyter-r
```

Then upload the files:

    pig-ds1.csv
    R-first-step1.ipynb

Create a custom Docker image with customized libraries:

```
bash build.sh
```

or

```
docker build -t jupyter-r-custom ./
docker stop jupyter
docker rm jupyter
docker run --name jupyter -p 8888:8888 -t jupyter-r-custom
```
12.432432
58
0.715217
eng_Latn
0.65582
5df2ee4b0b9e2ca205bf1a8e0410387647aa1efd
7,769
md
Markdown
configuring-infrastructure-components/event-processing/saga-infrastructure.md
gabrielkirsten/reference-guide
0e56572626ddceabfa9b4bcafaf6083f587851b8
[ "Apache-2.0" ]
null
null
null
configuring-infrastructure-components/event-processing/saga-infrastructure.md
gabrielkirsten/reference-guide
0e56572626ddceabfa9b4bcafaf6083f587851b8
[ "Apache-2.0" ]
null
null
null
configuring-infrastructure-components/event-processing/saga-infrastructure.md
gabrielkirsten/reference-guide
0e56572626ddceabfa9b4bcafaf6083f587851b8
[ "Apache-2.0" ]
null
null
null
# Saga Infrastructure

Events need to be redirected to the appropriate saga instances. To do so, some infrastructure classes are required. The most important components are the `SagaManager` and the `SagaRepository`.

## Saga Manager

Like any component that handles events, the processing is done by an event processor. However, sagas are not singleton instances handling events. They have individual life cycles which need to be managed.

Axon supports life cycle management through the `AnnotatedSagaManager`, which is provided to an event processor to perform the actual invocation of handlers. It is initialized using the type of the saga to manage, as well as a `SagaRepository` where sagas of that type can be stored and retrieved. A single `AnnotatedSagaManager` can only manage a single saga type.

When using the Configuration API, Axon will use sensible defaults for most components. However, it is highly recommended to define a `SagaStore` implementation to use. The `SagaStore` is the mechanism that 'physically' stores the saga instances somewhere. The `AnnotatedSagaRepository` \(the default\) uses the `SagaStore` to store and retrieve Saga instances as they are required.

{% tabs %}
{% tab title="Axon Configuration API" %}
```java
Configurer configurer = DefaultConfigurer.defaultConfiguration();
configurer.eventProcessing(eventProcessingConfigurer -> eventProcessingConfigurer
        .registerSaga(MySaga.class,
                      // Axon defaults to an in-memory SagaStore,
                      // defining another is recommended
                      sagaConfigurer -> sagaConfigurer.configureSagaStore(c -> new JpaSagaStore(...))));

// alternatively, it is possible to register a single SagaStore for all Saga types:
configurer.registerComponent(SagaStore.class, c -> new JpaSagaStore(...));
```
{% endtab %}

{% tab title="Spring Boot AutoConfiguration" %}
```java
@Saga(sagaStore = "mySagaStore")
public class MySaga {...}

...

// somewhere in configuration
@Bean
public SagaStore mySagaStore() {
    return new MongoSagaStore(...); // default is JpaSagaStore
}
```
{% endtab %}
{% endtabs %}

## Saga repository and saga store

The `SagaRepository` is responsible for storing and retrieving sagas, for use by the `SagaManager`. It is capable of retrieving specific saga instances by their identifier as well as by their association values.

There are some special requirements, however. Since concurrency handling in sagas is a very delicate procedure, the repository must ensure that for each conceptual saga instance \(with an equal identifier\) only a single instance exists in the JVM.

Axon provides the `AnnotatedSagaRepository` implementation, which allows the lookup of saga instances while guaranteeing that only a single instance of the saga may be accessed at the same time. It uses a `SagaStore` to perform the actual persistence of saga instances.

The choice for the implementation to use depends mainly on the storage engine used by the application. Axon provides the `JdbcSagaStore`, `InMemorySagaStore`, `JpaSagaStore` and `MongoSagaStore`.

In some cases, applications benefit from caching saga instances. In that case, there is a `CachingSagaStore` which wraps another implementation to add caching behavior. Note that the `CachingSagaStore` is a write-through cache, which means save operations are always immediately forwarded to the backing Store, to ensure data safety.

### JpaSagaStore

The `JpaSagaStore` uses JPA to store the state and association values of sagas. Sagas themselves do not need any JPA annotations; Axon will serialize the sagas using a `Serializer` \(similar to event serialization, you can choose between an `XStreamSerializer`, `JacksonSerializer` or `JavaSerializer`\), which can be set by configuring the default `Serializer` in your application. For more details, see [Serializers](../../operations-guide/production-considerations/serializers.md).

The `JpaSagaStore` is configured with an `EntityManagerProvider`, which provides access to an `EntityManager` instance to use. This abstraction allows for the use of both application managed and container managed `EntityManager`s. Optionally, you can define the serializer to serialize the Saga instances with. Axon defaults to the `XStreamSerializer`.

### JdbcSagaStore

The `JdbcSagaStore` uses plain JDBC to store saga instances and their association values. Similar to the `JpaSagaStore`, saga instances do not need to be aware of how they are stored. They are serialized using a serializer.

The `JdbcSagaStore` is initialized with either a `DataSource` or a `ConnectionProvider`. While not required, when initializing with a `ConnectionProvider`, it is recommended to wrap the implementation in a `UnitOfWorkAwareConnectionProviderWrapper`. It will check the current Unit of Work for an already open database connection, to ensure that all activity within a unit of work is done on a single connection.

Unlike JPA, the `JdbcSagaRepository` uses plain SQL statements to store and retrieve information. This may mean that some operations depend on the database specific SQL dialect. It may also be the case that certain database vendors provide non-standard features that you would like to use. To allow for this, you can provide your own `SagaSqlSchema`. The `SagaSqlSchema` is an interface that defines all the operations the repository needs to perform on the underlying database. It allows you to customize the SQL statement executed for each operation. The default is the `GenericSagaSqlSchema`. Other implementations available are `PostgresSagaSqlSchema`, `Oracle11SagaSqlSchema` and `HsqlSagaSchema`.

### MongoSagaStore

The `MongoSagaStore` stores the saga instances and their associations in a MongoDB database. It stores all sagas in a single collection in a MongoDB database, creating a single document for each saga instance.

The `MongoSagaStore` also ensures that at any time, only a single Saga instance exists for any unique Saga in a single JVM. This ensures that no state changes are lost due to concurrency issues.

The `MongoSagaStore` is initialized using a `MongoTemplate` and optionally a `Serializer`. The `MongoTemplate` provides a reference to the collection to store the sagas in. Axon provides the `DefaultMongoTemplate`, which takes a `MongoClient` instance as well as the database name and name of the collection to store the sagas in. The database name and collection name may be omitted. In that case, they default to `"axonframework"` and `"sagas"`, respectively.

## Caching

If a database backed saga storage is used, saving and loading saga instances may be a relatively expensive operation. In situations where the same saga instance is invoked multiple times within a short time span, a cache can be especially beneficial to the application's performance.

Axon provides the `CachingSagaStore` implementation. It is a `SagaStore` that wraps another one, which does the actual storage. When loading sagas or association values, the `CachingSagaStore` will first consult its caches, before delegating to the wrapped repository. When storing information, all calls are always delegated to ensure that the backing storage always has a consistent view on the saga's state.

To configure caching, simply wrap any `SagaStore` in a `CachingSagaStore`. The constructor of the `CachingSagaStore` takes three parameters:

1. The `SagaStore` to wrap
2. The cache to use for association values
3. The cache to use for saga instances

The latter two arguments may refer to the same cache, or to different ones. This depends on the eviction requirements of your specific application.
51.450331
123
0.781053
eng_Latn
0.996443
5df36a5bc353663ecc74c83d8aae1c0fb473b253
2,946
md
Markdown
_posts/2018/2018-07-02-the-case-for-publish-subscribe.md
MartinDan1/codepediaorg.github.io
bbd6174d14917b3ef8e8b483f0dabc23f4392f3d
[ "MIT" ]
8
2019-05-24T18:09:04.000Z
2021-11-02T03:21:42.000Z
_posts/2018/2018-07-02-the-case-for-publish-subscribe.md
MartinDan1/codepediaorg.github.io
bbd6174d14917b3ef8e8b483f0dabc23f4392f3d
[ "MIT" ]
1
2021-05-10T12:04:05.000Z
2021-05-10T12:04:05.000Z
_posts/2018/2018-07-02-the-case-for-publish-subscribe.md
MartinDan1/codepediaorg.github.io
bbd6174d14917b3ef8e8b483f0dabc23f4392f3d
[ "MIT" ]
6
2020-01-10T12:45:54.000Z
2021-02-25T12:28:08.000Z
---
layout: post
title: The case for publish/subscribe in your software architecture considerations
description: "The case for publish/subscribe in your architecture considerations: loose coupling and asynchronous consumption of messages"
author: ama
permalink: /ama/the-case-for-publish-subscribe-in-software-architecture
published: true
categories: [article]
tags: [architecture, design-patterns, cloud]
---

So you are developing a core platform for your enterprise. Your platform becomes successful, and you have more clients consuming your services. Some (most) will need some core data synchronized in their systems. You might start experiencing the following pains.

## Pains

1. You might use a push mechanism to notify clients about a core element update (e.g. a user or organisation data update). But what happens when a new client is added? Your core platform needs to be aware of it. This might imply code or configuration changes, new deployments etc. - **it does not scale**.
2. If a client of the platform is not online at the time the update notification is pushed, it will miss the update, which might lead to data inconsistency.

## Pub/Subscribe to the rescue

Both issues are elegantly solved if you use a publish/subscribe mechanism. Your core platform, the publisher, will publish the update event to a message broker, where it will be consumed by one or several clients, the subscribers. See the diagram below for the 1000-words' worth of explanation:

<!--more-->

<figure>
  <img src="https://www.codepedia.org/images/posts/publish-subscribe-architecture/publish-subscribe-architecture.png" alt="" />
  <figcaption>Model architecture for a publish/subscribe scenario in enterprise</figcaption>
</figure>

Well, a few words won't hurt anyone: your core platform might get relevant notification events from business or private clients. Now, instead of pushing the notification to the consuming clients, it publishes it to a topic (which might be a cloud service), where it is persisted for a configured duration or consumed by all subscribers.

The simplest scenario is to use a broadcast pattern for the topic, where every subscription gets a copy of each message sent to the topic. Other topologies could be set up for more advanced scenarios via message filtering and routing. This usually involves additional topics or queues. Most capable brokers offer such abilities.

### Benefits

The benefits are the solving of the previously mentioned points:

- **decoupling**: the publisher is no longer aware of the consuming clients
- **asynchronous consumption (offline support)**: subscribers might consume the messages at a later time

This is no silver bullet for all your architecture needs; you might need tightly coupled systems to guarantee message delivery, for example. Just don't forget about it when you are designing your next (enterprise) project.

# References:

* https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern
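To make the decoupling benefit concrete, here is a minimal in-memory broker sketch in TypeScript. All names (`Broker`, the `user-updates` topic, the subscriber labels) are illustrative; a real deployment would use a managed message broker, and this sketch does not show persistence or offline delivery:

```typescript
type Handler = (message: string) => void;

// A minimal in-memory message broker: the publisher only knows the topic,
// never the subscribers, so adding a client requires no publisher change.
class Broker {
  private topics = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const handlers = this.topics.get(topic) ?? [];
    handlers.push(handler);
    this.topics.set(topic, handlers);
  }

  // Broadcast pattern: every subscription gets a copy of each message.
  publish(topic: string, message: string): void {
    for (const handler of this.topics.get(topic) ?? []) {
      handler(message);
    }
  }
}

const broker = new Broker();
const received: string[] = [];
broker.subscribe('user-updates', (m) => received.push(`crm: ${m}`));
broker.subscribe('user-updates', (m) => received.push(`billing: ${m}`));
broker.publish('user-updates', 'user 42 changed address');
console.log(received.length); // 2
```

Note how a third consumer could be added with one more `subscribe` call and zero changes to the publishing side - the scaling pain from point 1 above.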
53.563636
155
0.794297
eng_Latn
0.999209
5df4544cb3b8e5a20af6ae4178809a99d1e261f7
5,963
md
Markdown
docs/src/config.md
kmaasrud/doctor
65e39730a437d1bac0b9d5464ae5153671e51c5c
[ "MIT" ]
6
2021-04-24T04:10:56.000Z
2021-11-06T18:19:31.000Z
docs/src/config.md
kmaasrud/doctor
65e39730a437d1bac0b9d5464ae5153671e51c5c
[ "MIT" ]
29
2021-04-14T08:17:02.000Z
2021-07-06T11:27:07.000Z
docs/src/config.md
kmaasrud/doctor
65e39730a437d1bac0b9d5464ae5153671e51c5c
[ "MIT" ]
1
2021-10-07T16:55:23.000Z
2021-10-07T16:55:23.000Z
Your Doctor document can be configured quite extensively with the `doctor.toml` file. This file allows you to specify metadata, apply styling, supply information to Pandoc and/or the $\TeX$ engine, and much more. I've chosen the TOML specification since I consider it the most human-friendly configuration interface: its legibility is the main draw.

The configuration file has six main *tables* (which is TOML terminology for a collection of key-value pairs following a header): `[meta]`, `[build]`, `[style]`, `[bib]`, `[latex]` and `[html]`. They contain the configuration options listed below.

# `[meta]`

These options relate to the metadata of your document.

| **Config name** | **Description** |
|--|:--|
| `title` | The document's title. |
| `author` | The document's author or a list of authors. |
| `date` | The date of your document. If the date is `"today"` or `"now"`, Doctor will insert the current date. |

Here's an example of a `[meta]` table:

```toml
[meta]
title = "Our amazing report"
author = ["Jane Doe", "John Doe"]
date = "February 17th 1998"
```

# `[build]`

These options allow you to tune how the document is built.

| **Config name** | **Description** |
|--|:--|
| `engine` | The $\TeX$ engine you want to use to build your PDF. The options are: `pdflatex`, `lualatex`, `xelatex`, `latexmk` and `tectonic`. If no engine is specified, Doctor will use `pdflatex` as the default. |
| `filename` | The filename you want for your exported document. You do not need to specify an extension; Doctor will automatically append `.pdf` when exporting as a PDF and `.html` when exporting as HTML. |
| `output-format` | The format of your exported document. The options are `"html"` or `"pdf"`. |
| `lua-filters` | Boolean that specifies whether or not to use the embedded Lua filters. This option is mainly for debugging. The default is `true`, and setting it to `false` will disable some functionality like cross-referencing. |

Here's an example of a `[build]` table:

```toml
[build]
engine = "lualatex"
filename = "awesome-document"
output-format = "html"
lua-filters = false
```

# `[style]`

These are options that specify how you want your document presented.

| **Config name** | **Description** |
|--|:--|
| `two-column` | If this option is `true`, the PDF document will be formatted with two columns on each page. The default is `false`. |
| `number-sections` | Boolean that specifies whether or not you want the sections of your document numbered. If you want to use cross-referencing with sections, this option must be `true`. |
| `document-class` | The $\LaTeX$ document class you want to use. Can be any of the ones listed [here](https://ctan.org/topic/class). Beware that not all classes are tested with the Doctor syntax, so some might not work as expected. |
| `class-options` | Options for the document class. The available options depend on the chosen document class, but some commonly used ones are listed [here](https://en.wikibooks.org/wiki/LaTeX/Document_Structure#Document_Class_Options). |

Here's an example of a `[style]` table:

```toml
[style]
two-column = true
number-sections = true
document-class = "report"
class-options = "landscape"
```

# `[bib]`

These options all relate to citations and the bibliography of your document.

| **Config name** | **Description** |
|--|:--|
| `csl` | The CSL ([Citation Style Language](https://citationstyles.org/)) style to use for your citations. You can either use one of the CSL styles that come prepackaged with Doctor (listed [here](bib#csl)), a CSL file in your `assets` folder or a URL that points to a CSL file available on the internet. For local files, you need only specify the name, not the `.csl` extension. The default CSL style is the [Chicago Manual of Style 17th edition](https://csl.mendeley.com/styleInfo/?styleId=http%3A%2F%2Fwww.zotero.org%2Fstyles%2Fchicago-author-date). |
| `bibliography-title` | The title of the bibliography section. The default is no title. |
| `bibliography-file` | The name of your BibTeX file. All paths are relative to the `assets` directory. The default is `references.bib`. |
| `include-bibliography` | Boolean that specifies whether or not you want your bibliography included in your document. The default is `true`. |

Here's an example of a `[bib]` table:

```toml
[bib]
csl = "apa"
bibliography-title = "References"
include-bibliography = false
```

# `[latex]`

These are options that are specific to $\LaTeX$ and PDF output.

| **Config name** | **Description** |
|--|:--|
| `packages` | A list of strings specifying the packages you want used. This is similar to `\usepackage{package}` in a $\LaTeX$ document, but much less verbose. If you want to supply options to your package inclusion, you can write them with brackets as you would normally. An example could be when you want to specify a language to Babel. In this case, you could write `packages = ["[norsk]{babel}"]`. |
| `header` | Whatever you want parsed by $\LaTeX$ as header content. For simple package inclusions, the above option is recommended, but if more advanced headers are required, you can use this option. To avoid having to escape macros or manually write newline characters, you can use a multiline literal string, which is surrounded by `'''`. |

Here's an example of a `[latex]` table:

```toml
[latex]
packages = ["graphicx", "placeins", "[utf8]{inputenc}"]
header = '''\makeatletter
\newcommand*{\centerfloat}{%
  \parindent \z@
  \leftskip \z@ \@plus 1fil \@minus \textwidth
  \rightskip\leftskip
  \parfillskip \z@skip}
\makeatother'''
```

# `[html]`

These are options that are specific to HTML output, but since you cannot output HTML yet, they do nothing at the moment.

| **Config name** | **Description** |
|--|:--|
| `header` | Whatever you want included in the `<head>` of your HTML document. To avoid having to escape macros or manually write newline characters, you can use a multiline literal string, which is surrounded by `'''`. |
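In the same spirit as the other examples, an `[html]` table could look like this. Note that the header content below is purely illustrative, not something Doctor ships with:

```toml
[html]
header = '''<meta name="description" content="Our amazing report">
<link rel="stylesheet" href="extra.css">'''
```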
5df487ba0d943730f59bf8e5ebe834409fad1237
146
md
Markdown
README.md
ParkanonTulikukko/movieDatabase
026bd6d3ff70b4ac11e0067c705d592e5b484fb7
[ "MIT" ]
# movieDatabase

A Spring Boot application for keeping data about movies and actors. An exercise for Helsinki University’s backend programming course.
5df4cd451cae79a08983592c97ee6ffd6016bae4
1,708
md
Markdown
results/referenceaudioanalyzer/referenceaudioanalyzer_hdm-x_harman_over-ear_2018/Beyerdynamic DT 990 250 Ohm (worn earpads)/README.md
NekoAlosama/AutoEq-nekomod
a314a809c3fe46c3c8526243bd97f0f31a90c710
[ "MIT" ]
# Beyerdynamic DT 990 250 Ohm (worn earpads)

See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.

### Parametric EQs

When using a parametric equalizer, apply a preamp of **-13.56 dB** and build the filters manually with these parameters. The first 5 filters can be used independently. When using an independent subset of filters, apply a preamp of **-13.56 dB**.

| Type    | Fc          |    Q | Gain     |
|--------:|------------:|-----:|---------:|
| Peaking | 12.73 Hz    | 0.7  | 1.24 dB  |
| Peaking | 17.53 Hz    | 0.52 | 12.82 dB |
| Peaking | 174.22 Hz   | 0.43 | -3.85 dB |
| Peaking | 4021.21 Hz  | 3.56 | 5.85 dB  |
| Peaking | 5992.53 Hz  | 2.34 | 8.05 dB  |
| Peaking | 2465.56 Hz  | 1.91 | -2.02 dB |
| Peaking | 3510.77 Hz  | 6.76 | 1.77 dB  |
| Peaking | 4942.31 Hz  | 3.99 | 0.68 dB  |
| Peaking | 9056.45 Hz  | 1.95 | 1.76 dB  |
| Peaking | 19201.38 Hz | 0.39 | -6.38 dB |

### Fixed Band EQs

When using a fixed band (also called graphic) equalizer, apply a preamp of **-11.91 dB** (if available) and set the gains manually with these parameters.

| Type    | Fc          |    Q | Gain     |
|--------:|------------:|-----:|---------:|
| Peaking | 31.25 Hz    | 1.41 | 11.83 dB |
| Peaking | 62.50 Hz    | 1.41 | -0.62 dB |
| Peaking | 125.00 Hz   | 1.41 | -2.45 dB |
| Peaking | 250.00 Hz   | 1.41 | -3.16 dB |
| Peaking | 500.00 Hz   | 1.41 | -1.08 dB |
| Peaking | 1000.00 Hz  | 1.41 | 0.51 dB  |
| Peaking | 2000.00 Hz  | 1.41 | -3.28 dB |
| Peaking | 4000.00 Hz  | 1.41 | 6.93 dB  |
| Peaking | 8000.00 Hz  | 1.41 | 3.94 dB  |
| Peaking | 16000.01 Hz | 1.41 | -6.59 dB |

### Graphs

![](./Beyerdynamic%20DT%20990%20250%20Ohm%20(worn%20earpads).png)
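The parametric bands above can be turned into biquad filter coefficients with the standard Audio EQ Cookbook peaking-filter formulas. This sketch is not part of AutoEq itself, and the 48 kHz sample rate is an assumption; by construction, the filter's gain at the center frequency equals the table's Gain column:

```python
import cmath
import math

def peaking_biquad(fc, q, gain_db, fs=48000.0):
    """Audio-EQ-Cookbook peaking filter; returns (b, a) normalized so a[0] == 1."""
    big_a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * big_a, -2.0 * math.cos(w0), 1.0 - alpha * big_a]
    a = [1.0 + alpha / big_a, -2.0 * math.cos(w0), 1.0 - alpha / big_a]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude_db(b, a, f, fs=48000.0):
    """Gain of the biquad at frequency f, in dB."""
    z1 = cmath.exp(-2j * math.pi * f / fs)  # z^-1 evaluated on the unit circle
    h = (b[0] + b[1] * z1 + b[2] * z1 * z1) / (a[0] + a[1] * z1 + a[2] * z1 * z1)
    return 20.0 * math.log10(abs(h))

# One of the peaking bands from the table above: 4021.21 Hz, Q 3.56, +5.85 dB.
b, a = peaking_biquad(4021.21, 3.56, 5.85)
print(round(magnitude_db(b, a, 4021.21), 2))  # gain at the center frequency
```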
5df4d6be425040d9335134cec487d811ec9e703b
142
md
Markdown
README.md
boosterl/worldbeyondlinux
0682eeb79bfa63bc26f6ce3a147a40fd65647679
[ "MIT" ]
# The World Beyond Linux

This is the repository containing the source code of my blog, which I manage with [hugo](https://gohugo.io/).
5df5d6f25da4a3388db1ebfc03ff79831d2b2e0e
1,193
md
Markdown
README.md
peteygao/charity-watchdog
994d89a150e7fd756d12cd72643b3b536c164a53
[ "MIT" ]
1
2019-02-16T03:30:54.000Z
2019-02-16T03:30:54.000Z
# Charity Watchdog (ETH Denver Hackathon 2019 Submission)

Please note that this app was developed specifically for the ETH Denver hackathon.

Goal: Using the Ethereum network to bring transparency and accountability to charity spending.

## Local Development

Because this app is made of two npm projects, there are two places to run `npm` commands:

1. **Node API server** at the root `./`
1. **React UI** in the `react-ui/` directory.

### Run the API server

In a terminal:

```bash
# Initial setup (install dependencies based on package-lock.json)
npm ci

# Start the server
npm start
```

#### Install new npm packages for API server

```bash
npm install package-name --save
```

### Run the React UI

The React app is configured to proxy backend requests to the local Node server. (See [`"proxy"` config](react-ui/package.json))

In a separate terminal from the API server, start the UI:

```bash
# Always change directory, first
cd react-ui/

# Initial setup (install dependencies based on package-lock.json)
npm ci

# Start the React front-end
npm start
```

#### Install new npm packages for React UI

```bash
# Always change directory, first
cd react-ui/

npm install package-name --save
```
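For reference, the `"proxy"` entry mentioned above typically looks like the fragment below in `react-ui/package.json`. The port is an assumption on my part; match it to whatever your API server actually listens on:

```json
{
  "proxy": "http://localhost:5000"
}
```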
5df60e21728625cef642e96d186ab861eeba412c
1,383
md
Markdown
criar_maquina/README.md
lucaslellis/gcp_laboratorio_kvm
71096e287125d2fe2305813cfce98cb78062e23b
[ "MIT" ]
criar_maquina
=========

Role for creating a machine to serve as a KVM and Vagrant host

Requirements
------------

N/A

Role Variables
--------------

These variables must be defined before running the role:

* {{ discos }} - List of the disks to be created. Each disk must contain the fields nome (name), tamanho (size, in GB) and tipo (type).
* {{ gcp_cred_contents }} - Contents of the Google Cloud service account JSON file.
* {{ gcp_cred_kind }} - Google Cloud credential type.
* {{ gcp_project }} - Google Cloud project ID.
* {{ ip_interno }} - Internal IP to be used by the machine.
* {{ network_tier }} - Network tier of the network to be created.
* {{ nome_imagem_so }} - Name of the image for the boot disk.
* {{ nome_ip_externo }} - Name of the external IP resource.
* {{ nome_ip_interno }} - Name of the internal IP resource.
* {{ nome_rede_vpc }} - Name of the VPC network.
* {{ nome_vm }} - Name of the virtual machine.
* {{ region }} - Region in which the resources should be created.
* {{ tipo_maquina }} - Virtual machine type.
* {{ zone }} - Zone in which the resources should be created.

Dependencies
------------

N/A

Example Playbook
----------------

```yaml
- name: Create a machine to serve as a KVM host
  hosts: localhost
  gather_facts: false
  roles:
    - criar_maquina
```

License
-------

BSD

Author Information
------------------

[Lucas Pimentel Lellis](mailto:[email protected])
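A sketch of what defining some of these variables might look like, for example in a vars file or under `vars:` in the playbook. All of the values below are hypothetical placeholders, not defaults shipped with the role:

```yaml
discos:
  - nome: disco-dados
    tamanho: 200
    tipo: pd-ssd
gcp_cred_kind: serviceaccount
gcp_project: my-lab-project
region: us-central1
zone: us-central1-a
nome_vm: kvm-host-01
tipo_maquina: n1-standard-8
ip_interno: 10.0.0.10
network_tier: STANDARD
```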
5df61c7b94d6eedaa6c3ff9f30e7730a9a2efe58
2,189
md
Markdown
README.md
Magtheridon96/Unfinished-RPG
f002a7e19684f31e6158ca78e24bdd514932d72c
[ "MIT" ]
1
2015-09-14T13:19:16.000Z
2015-09-14T13:19:16.000Z
# Unfinished RPG

This is an abandoned project I worked on as a teenager. It was a game designed by my friends, developed by me. Game development is no longer interesting to me, so I'm not going to be maintaining this code anymore.

# Dependencies

The code requires Boost 1.55 or newer, and SFML 2.1 or newer.

# Screenshots

The project changed a lot over time. First, it was a Mario clone and a tech demo for a quad tree implementation with collision, rendering and all the other simple stuff.

![s8](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss8.PNG)

Over time, it changed into a map editor with some nice features and a novel "Snap-to-Game" mode.

![s7](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss7.png)
![s9](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss9.PNG)
![s5](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss5.png)

Later on, I implemented a much nicer tiling scheme and tiling algorithm so tiles would blend in nicely.

![s6](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss6.png)

Fast forward a year and many changes, and we reach a unit editor.

![s1](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss1.PNG)
![s2](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss2.PNG)

The map editor's looking not so bad. I wasn't so good at design and UX at the time.

![s3](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss3.PNG)

And the last feature I implemented was a cinematic system with letterboxes. The cinematics are scripted with a simple procedural language. I didn't document the specification of the language perfectly when I made it, but it's easily deducible by reading the code file responsible for parsing it, or from the examples given in one of the repository folders. (I'm sorry for the lack of documentation)

![s4](https://github.com/Magtheridon96/Unfinished-RPG/blob/master/SS/ss4.PNG)

# What Now?

I just wanted to share this with the public as sort of a blog post of some old work I did. It was quite a lot of work. Shame to throw it away. Might as well throw it online where garbage code can roam free.
5df6327708b5d00ee8826a24ac116fa08bc33606
1,119
md
Markdown
_proceedings/2010-12-01-Ensemble-determination-using-the-TOPSIS-decision-support-system-in-multi-objective-evolutionary-neur.md
pagutierrez/pagutierrez.github.io
45b9cc8aa7759b1cefb693176d125c9a16f9fdb4
[ "MIT" ]
---
title: "Ensemble determination using the TOPSIS decision support system in multi-objective evolutionary neural network classifiers"
collection: proceedings
permalink: /proceeding/2010-12-01-Ensemble-determination-using-the-TOPSIS-decision-support-system-in-multi-objective-evolutionary-neur
date: 2010-12-01
venue: 'In Proceedings of the 10th International Conference on Intelligent Systems Design and Applications (ISDA2010)'
citation: 'Manuel Cruz-Ramírez, Juan Carlos Fernández, Javier Sánchez-Monedero, Francisco Fernandez-Navarro, César Hervás-Martínez, <strong>Pedro Antonio Gutiérrez</strong>, M.T. Lamata, &quot;Ensemble determination using the TOPSIS decision support system in multi-objective evolutionary neural network classifiers.&quot; In Proceedings of the 10th International Conference on Intelligent Systems Design and Applications (ISDA2010), 2010, Cairo, Egypt, pp.513-518.'
---

Use [Google Scholar](https://scholar.google.com/scholar?q=Ensemble+determination+using+the+TOPSIS+decision+support+system+in+multi+objective+evolutionary+neural+network+classifiers){:target="_blank"} for full citation
5df67b3b4fa3262c6292e13567dd69d147fa25b4
6,503
md
Markdown
docs/standard/data/sqlite/compare.md
billybax/docs.ru-ru
e8f4d9f4f39f8ea383d06eba5a106594048b43e2
[ "CC-BY-4.0", "MIT" ]
---
title: Comparison to System.Data.SQLite
ms.date: 12/13/2019
description: Describes some of the differences between the Microsoft.Data.Sqlite and System.Data.SQLite libraries.
ms.openlocfilehash: 076bbc6f746cf9296c96ec73047397a21a3b2558
ms.sourcegitcommit: 7088f87e9a7da144266135f4b2397e611cf0a228
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 01/11/2020
ms.locfileid: "75900714"
---
# <a name="comparison-to-systemdatasqlite"></a>Comparison to System.Data.SQLite

In 2005, Robert Simpson created System.Data.SQLite, a SQLite provider for ADO.NET 2.0. In 2010, the SQLite team took over maintenance and development of the project. It is also worth noting that the Mono team forked the code in 2007 as Mono.Data.SQLite. System.Data.SQLite has a long history and has evolved into a stable and full-featured ADO.NET provider with Visual Studio tooling. New releases continue to ship assemblies compatible with every version of the .NET Framework back to version 2.0, and even .NET Compact Framework 3.5.

The first version of .NET Core (released in 2016) was a single, lightweight, modern, cross-platform implementation of .NET. Obsolete APIs and APIs with more modern alternatives were intentionally removed. ADO.NET did not include any of the DataSet APIs (including DataTable and DataAdapter).

The Entity Framework team was somewhat familiar with the System.Data.SQLite codebase. Brice Lambson, a member of the EF team, had previously helped the SQLite team add support for versions 5 and 6 of Entity Framework. Brice was also experimenting with his own implementation of a SQLite ADO.NET provider around the same time that .NET Core was being planned. After a long discussion, the Entity Framework team decided to create Microsoft.Data.Sqlite based on Brice's prototype. This would let them build a new, lightweight, modern implementation aligned with the goals of .NET Core.

As an example of what we mean by more modern, here is the code to create a [user-defined function](user-defined-functions.md) in System.Data.SQLite and in Microsoft.Data.Sqlite.

```csharp
// System.Data.SQLite
connection.BindFunction(
    new SQLiteFunctionAttribute("ceiling", 1, FunctionType.Scalar),
    (Func<object[], object>)((object[] args) => Math.Ceiling((double)((object[])args[1])[0])),
    null);

// Microsoft.Data.Sqlite
connection.CreateFunction(
    "ceiling",
    (double arg) => Math.Ceiling(arg));
```

In 2017, .NET Core 2.0 changed strategy. It was decided that compatibility with the .NET Framework was critical to the success of .NET Core. Many of the removed APIs, including the DataSet APIs, were added back. As it did for many others, this also unblocked System.Data.SQLite, allowing it to be ported to .NET Core. The original goal of Microsoft.Data.Sqlite (to be lightweight and modern) still remains, however. See [ADO.NET limitations](adonet-limitations.md) for details about the ADO.NET APIs not implemented in Microsoft.Data.Sqlite.

New features added to Microsoft.Data.Sqlite take the design of System.Data.SQLite into account. Where possible, we try to minimize the differences between the two to ease the transition from one to the other.

## <a name="data-types"></a>Data types

The largest difference between Microsoft.Data.Sqlite and System.Data.SQLite is the handling of data types. As described in [Data types](types.md), Microsoft.Data.Sqlite does not try to hide the underlying permissiveness of SQLite, which allows you to specify any arbitrary string as a column type and has just four primitive types: INTEGER, REAL, TEXT and BLOB.

System.Data.SQLite applies additional semantics to column types, mapping them directly to .NET types. This gives the provider a more strongly typed feel, but it has some rough edges. For example, a new SQL statement (TYPES) had to be introduced to specify the column types of expressions in SELECT statements.

## <a name="connection-strings"></a>Connection strings

Microsoft.Data.Sqlite has far fewer [connection string](connection-strings.md) keywords. The following table shows alternatives that you can use instead.

| Keyword          | Alternative                                         |
|------------------|-----------------------------------------------------|
| Cache Size       | Send `PRAGMA cache_size = <pages>`                  |
| Default Timeout  | Use the DefaultTimeout property on SqliteConnection |
| FailIfMissing    | Use `Mode=ReadWrite`                                |
| FullUri          | Use the Data Source keyword                         |
| Journal Mode     | Send `PRAGMA journal_mode = <mode>`                 |
| Legacy Format    | Send `PRAGMA legacy_file_format = 1`                |
| Max Page Count   | Send `PRAGMA max_page_count = <pages>`              |
| Page Size        | Send `PRAGMA page_size = <bytes>`                   |
| Read Only        | Use `Mode=ReadOnly`                                 |
| Synchronous      | Send `PRAGMA synchronous = <mode>`                  |
| URI              | Use the Data Source keyword                         |
| UseUTF16Encoding | Send `PRAGMA encoding = 'UTF-16'`                   |

## <a name="authorization"></a>Authorization

Microsoft.Data.Sqlite does not have an API exposing SQLite's authorization callback. Use issue [#13835](https://github.com/dotnet/efcore/issues/13835) to provide feedback on this feature.

## <a name="data-change-notifications"></a>Data change notifications

Microsoft.Data.Sqlite does not have APIs exposing SQLite's data change notifications. Use issue [#13827](https://github.com/dotnet/efcore/issues/13827) to provide feedback on this feature.

## <a name="virtual-table-modules"></a>Virtual table modules

Microsoft.Data.Sqlite does not have an API for creating virtual table modules. Use issue [#13823](https://github.com/dotnet/efcore/issues/13823) to provide feedback on this feature.

## <a name="see-also"></a>See also

* [Data types](types.md)
* [Connection strings](connection-strings.md)
* [Encryption](encryption.md)
* [ADO.NET limitations](adonet-limitations.md)
* [Dapper limitations](dapper-limitations.md)
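As an aside on the connection-string table above: the `PRAGMA` alternatives it lists are plain SQLite statements, so they behave the same from any SQLite binding, not just .NET. A quick sketch of the send-a-PRAGMA pattern using Python's built-in `sqlite3` module:

```python
import sqlite3

# Open an in-memory database and send two of the PRAGMAs from the table above.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA cache_size = -4000")  # negative value = size in KiB
conn.execute("PRAGMA synchronous = OFF")

# Reading a PRAGMA back confirms the setting took effect.
print(conn.execute("PRAGMA cache_size").fetchone()[0])   # -4000
print(conn.execute("PRAGMA synchronous").fetchone()[0])  # 0 (OFF)
```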
5df6dc6a7094bc31eda950b8541518fa478b1d22
40,644
md
Markdown
README.md
SilvioWeging/kASA
65740ee0f885693cd86bc3a1ea74c67ca7174130
[ "BSL-1.0" ]
19
2019-07-25T13:31:58.000Z
2021-09-23T11:19:22.000Z
# kASA

![GitHub tag (latest SemVer)](https://img.shields.io/github/v/tag/SilvioWeging/kASA) [![GitHub issues](https://img.shields.io/github/issues/SilvioWeging/kASA.svg)](https://github.com/SilvioWeging/kASA/issues) ![GitHub All Releases](https://img.shields.io/github/downloads/SilvioWeging/kASA/total.svg)

This is the official repository of kASA - <u>k</u>-Mer <u>A</u>nalysis of <u>S</u>equences based on <u>A</u>mino acid-like encoding; the published paper can be found [here](https://academic.oup.com/nar/advance-article/doi/10.1093/nar/gkab200/6204649).

The README file is quite large, so it might make sense to have a look at the wiki. I will also give context on code updates there.

## Table of content

- [Things to know](#things-to-know-before-you-start)
- [Prerequisites](#prerequisites)
- [Setup](#setup)
  * [Linux](#linux)
  * [macOS](#macOS)
  * [Windows](#windows)
- [TL;DR](#tldr)
- [Modes and parameters](#modes-and-parameters)
  * [Basic](#basic)
  * [Content file](#generate-a-content-file)
  * [Build](#build)
  * [Identify](#identify)
    + [Output](#output)
  * [Identify multiple](#identify-multiple)
  * [Update](#update)
  * [Shrink](#shrink)
  * [Merge](#merge)
  * [Miscellaneous](#miscellaneous)
- [Useful scripts](#useful-scripts)
- [TODOS/Upcoming](#todos-and-upcoming)
- [License](#license)

## Things to know before you start

This tool is designed to read genomic sequences (also called Reads) and identify known parts by exactly matching k-mers to a reference database. In order to do this, you need to set kASA up locally (no admin account needed!), create an index out of the genomic reference (for now, we will not provide standard indices) and then put in a file containing Reads.

If you can't find a feature, please take a look at the [TODOS](#todos-and-upcoming) down below before opening an Issue. It may very well be that I'm already working on it.

Words like `<this>` are meant as placeholders to be filled with your specifics, e.g. name, paths, ...
Folders and paths are recognized as such by letting a parameter end with a "/".

k can range between 1 and 12 or 1 and 25. These values are determined by the bit size of the integers the k-mers are saved into (64 bit or 128 bit). The modes [build](#build) and [identify](#identify) determine this size by the maximum k you want to use, so if you'd like to use a larger range of k's, the correct bit size is chosen. Choosing a k larger than 12 doubles the index size and impacts performance since 128-bit integers are not supported natively by current CPU architectures (two 64-bit integers stand in for one 128-bit integer). Should you by accident try to use a larger k than supported (e.g. the index was built with 64 bit but you try to use 128/k=25 in `identify`), an error will be thrown.

## Prerequisites

Some scripts in the `/scripts` folder need Python 3.*, others are shell scripts. Most can be used just for convenience (see [Useful scripts](#useful-scripts)) but are not necessary for kASA.

You can use the system-specific pre-compiled binaries in the `/bin` folder, but I cannot guarantee that they will be universal enough to run on your system as well. Note that kASA is a console application, so if you want to use these binaries, you must either use a terminal (Linux, macOS, Windows Subsystem for Linux) or PowerShell (Windows). A GUI may be implemented, depending on the number of requests in the [poll](https://github.com/SilvioWeging/kASA/issues/1).

If you're using the PowerShell, don't forget to add ".exe" at the end of each call to kASA: `.\<path to kASA>\kASA.exe`.
If you have to compile the code, you'll need the following:

* On Linux: cmake version >= 2.8 (I use version 3.10.2), gcc & g++ version (at least) 5.5, 6.5, 7.5, or 8.4
* On macOS: cmake as above, LLVM/Clang 9.0 or Apple Clang 9.0 (usually part of Xcode)
* On Windows: Visual Studio 2019 (I use version 16.8.6 with Visual C++ 2019)

kASA depends on the [STXXL](https://stxxl.org/) and [Gzstream](https://www.cs.unc.edu/Research/compgeom/gzstream/) libraries but contains all necessary files, so you don't need to download those.

Last but not least: kASA provides an error (starts with "ERROR: ") and an output (starts with "OUT: ") stream. You can separate them with 2> or 1>.

## Setup

### Linux

Install cmake if you haven't already. Open a terminal and navigate to the folder in which you would like to have kASA. Clone the repository with `git clone https://github.com/SilvioWeging/kASA.git`.

First, build the zlib by going into the folder `zlib`. Call `chmod +x configure` to give the file `configure` execution rights. Create the folder `zlibBuild` with `mkdir zlibBuild` and `cd` into it. Type `../configure` and after that `make`.

Now for kASA itself, please type the following commands:

* `cd <installPath>/build` (or create/rename the folder)
* `cmake -DCMAKE_BUILD_TYPE=Release ..` or
  * You may need to specify the compiler in your path: `cmake -DCMAKE_BUILD_TYPE=Release -D CMAKE_C_COMPILER=gcc -D CMAKE_CXX_COMPILER=g++ ..`
  * Should multiple gcc/g++ versions be installed on your system (where the version at the end may be 5, 6, 7, or 8): `cmake -DCMAKE_BUILD_TYPE=Release -D CMAKE_C_COMPILER=gcc-8 -D CMAKE_CXX_COMPILER=g++-8 ..`
* `make`

### macOS

If you have Clang with LLVM installed (check with `clang --version`; it's usually included in Xcode), do the same as above (Linux).
Should you prefer the GCC toolchain or don't have cmake installed, the following steps might be helpful:

* [Command Line Tools](http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/)
* [Homebrew](https://brew.sh/)
* [GCC](https://discussions.apple.com/thread/8336714)
* cmake: open a terminal and type `brew install cmake`

Afterwards, please type this into your terminal:

```
export CC=/usr/local/bin/gcc
export CXX=/usr/local/bin/g++
```

and proceed as in the Linux part starting from "Clone ...".

### Windows

If you only wish to use the .exe, you still need the Visual C++ Redistributable from [here](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0). It can then be called via the PowerShell since it has no GUI (yet).

If you want to build the project with Visual Studio instead, please do the following:

Clone the repository and open the file `slnForVS/kASA.sln` with Visual Studio 2019. Do [this](https://docs.microsoft.com/en-us/cpp/windows/how-to-use-the-windows-10-sdk-in-a-windows-desktop-application?view=vs-2017) to update to your Windows SDK version. Switch to Release mode with x64. Build the project.

Change parameters in Property Page &rarr; Debugging &rarr; Command Arguments. You don't need to include "kASA" at the beginning like in the examples. Just start right with the mode, e.g. `identify`, when specifying parameters. Run without debugging.

## TL;DR

```
build/kASA build -d <path and name of index file to be built> -i <fasta or folder with fastas> -m <amount of memory in GB you want to use> -n <number of CPUs you want to use> -f <accToTaxFile(s)> -y <folder with nodes.dmp and names.dmp> -u <taxonomic level, e.g.
species> <verbose>
e.g.: [weging@example:/kASA$] build/kASA build -d example/work/index/exampleIndex -i example/work/db/example.fasta -m 8 -n 2 -f example/taxonomy/acc2Tax/ -y example/taxonomy/ -u species -v

build/kASA identify -d <path and name of small index file> -i <input file> -p <path and name of profile output> -q <path and name of read-wise output> -m <amount of memory in GB you want to use> -n <number of CPUs you want to use>
e.g.: [weging@example:/kASA$] build/kASA identify -d example/work/index/exampleIndex -i example/work/input/example.fastq.gz -p example/work/results/example.csv -q example/work/results/example.json -m 5 -n 2
```

## Modes and parameters

In this part, you can learn how to `build` your index, `identify` known stuff, `update` your index or `shrink` it to fit onto smaller platforms like external drives.

The first letter after a "-" is the short version of a parameter, "--" is the longer one. If no `<...>` follows a parameter, it's a boolean flag.

If you want, you can use a json file containing all parameters instead. Just use the one on this git and configure it. Give this file to kASA with `--parameters <file>` and everything will be fetched from there.

### Basic

Some parameters which are used by most modes:

##### Mandatory

* `-d (--database) <file>`: Actually the path and name of the index, but let's call it database since `-i` was already taken...
* `-c (--content) <file>`: Points to the content file. Can be defaulted when calling `build`, see [here](#generate-a-content-file).
* `-o (--outgoing) <file>`: Only important for `shrink` and `update`. This file can be either the existing index or a new file name, depending on whether you want to keep the old file or not.

##### Optional

* `<mode> --help`: Shows all parameters for a given mode.
* `-t (--temp) <path>`: Path to the temporary directory where files are stored that are either deleted automatically or can safely be deleted after kASA finishes.
Defaults depend on your OS, [here](https://stxxl.org/tags/1.4.1/install_config.html) are some details. Typically, the path of the executable is used.
* `-n (--threads) <number>`: Number of parallel threads. If you are trying to use more than your system supports, a warning will be printed. Recommendation for different settings (due to the I/O bottleneck): HDD: 1, SSD: 2-4, RAM disk: 2-?. Note that when compiling with C\+\+17 enabled on Windows, some routines use all available cores provided by the hardware (because of the implementation of parallel STL algorithms). Default: 1.
* `-m (--memory) <number>`: Amount of RAM in Gigabytes kASA will use. If you don't provide enough, a warning will be written and it attempts to use as little as possible but may crash. If you provide more than your system can handle, it prints out a warning and may crash or thrash (slow down). If you write "inf" instead of a number, kASA assumes that you have no memory limit. Default: 5 GB.
* `-x (--callidx) <number>`: Number given to this call of kASA so that no problems with temporary files occur if multiple instances of kASA are running at the same time. Default: 0.
* `-v (--verbose)`: Prints out a little more information, e.g. what percentage of your input has already been read and analysed (if your input is not gzipped). Default: off.
* `-a (--alphabet) <file> <number>`: If you'd like to use a different translation alphabet formatted in the NCBI-compliant way, provide the file (gc.prt) and the id (can be a string). Please use only letters in the range ['A',']'] from the ASCII table for your custom alphabet. Default: Hardcoded translation table.

### Generate a content file

##### Context

To fully use kASA, you first need a genomic database that you can create by either concatenating some fasta files (containing DNA) into one big file or putting them into a folder. From this data, a so-called *content file* is created via the mode `generateCF`.
This file contains a mapping of the accession numbers to the corresponding taxonomic IDs and names by using the NCBI taxonomy. Since it's only a text file, you can edit it if you want or just copy one from somewhere else.

Furthermore, you'll need the NCBI taxonomy files `nodes.dmp` and `names.dmp` from [here](https://ftp.ncbi.nih.gov/pub/taxonomy/) contained in one of the `taxdump.*` files.

The NCBI taxonomy offers multiple files for a mapping from accession number to taxid (see [here](https://ftp.ncbi.nih.gov/pub/taxonomy/accession2taxid/)). If you know beforehand which one contains your mappings, just hand this to kASA. If not, please put them in one folder and hand the path to kASA instead. It's not necessary to uncompress them (the `.gz` at the end determines in which mode it'll be read).

Accession numbers in the fasta file(s) should be placed either right after the ">" e.g. ">CP023965.1 Proteus vulgaris" or inside the old format e.g. ">gi|89106884|ref|AC_000091.1| Escherichia coli str. K-12 substr. W3110". Anything else will get a dummy taxid. If the content file contains entries with "EWAN_...", this stands for "Entries Without Accession Numbers". "unnamed" means they have a taxid but no name could be found (maybe due to deprecation).

This mode can be coupled with [Build](#build) by calling `build` and providing the same parameters described here but leaving out `-c`. This creates a content file next to the index named `<index name>_content.txt` which is then considered the default. This eliminates the necessity of providing the `-c` parameter in almost every call.

Note that this step is optional if you provide your own content file, or if another index shall be created with the same content file (this means creating subsets of a full index is possible with the same content file).

The accepted format per line is as follows:

```
<Name> <taxid of specified level e.g.
species> <taxids on lowest level> <accession numbers>
```
which for example could look like this:
```
Proteus vulgaris 585 585 CP023965.1;NZ_NBUT01000031.1
Hyphomicrobium denitrificans 53399 582899;670307 NC_014313.1;NC_021172.1
```
No header line is necessary. This file can be given to `build` via the `-c` parameter. Note that the taxids in the second column must be sorted either numerically or lexicographically. In Unix systems, a call to `sort -t$'\t' -n -k2 <content file> > <sorted content file>` does the trick.

##### Necessary parameters

* `-i (--input) <file/folder>`: Fasta file(s), can be gzipped. If you want to process multiple files at once, put them inside a folder and let the path end with `/`. No default.
* `-u (--level) <level>`: Taxonomic level at which you want to operate. All levels used in the NCBI taxonomy are available as well. To name a few: subspecies, species, genus, family, order, class, phylum, kingdom, superkingdom. Choose "lowest" if you want no linkage at a higher node in the taxonomic tree; this corresponds to other tools' "sequence" level. That means that no real taxid will be given and the name will be the line from the fasta containing the accession number. Default: species.
* `-f (--acc2tax) <folder or file>`: As mentioned, either the folder containing the translation tables from accession number to taxid or a specific file. Can be gzipped.
* `-y (--taxonomy) <folder>`: This folder should contain the `nodes.dmp` and the `names.dmp` files.
* `-c (--content) <file>`: Here, this parameter specifies where the content file should be written to.

##### Optional parameters

* `--taxidasstr`: Taxonomic IDs are treated as strings and not integers. A fifth column will be added to the content file indicating the integer associated with this taxid.

##### Example call

```
<path to kASA>/kASA generateCF -i <fastaFile(s)> -c <content file> -f <accToTaxFile(s)> -y <folder with nodes.dmp and names.dmp> -u <taxonomic level, e.g.
species> (-v)

e.g.:
[weging@example:/kASA$] build/kASA generateCF -i example/work/db/example.fasta -c example/work/content.txt -f taxonomy/acc2Tax/ -y taxonomy/ -u species -v
```

### Build

##### Context

This mode creates an index file, a frequency file (containing the number of k-mers for each taxon) and a prefix trie out of the fasta file(s) you used in the previous mode. This step can take a lot of space and time, depending on the complexity and size of your database. Native support of translated sequences as a database will be added in a future update.

The content file from the previous mode is given to kASA via the `-c` parameter or can be created together with the index.

##### Necessary parameters

* `-i (--input) <file/folder>`: Fasta file(s), can be gzipped. If you want to process multiple files at once, put them inside a folder and let the path end with `/`. No default.
* `-d (--database) <file>`: Actually the path and name of the index, but let's call it database since `-i` was already taken...

##### Optional parameters

* `-c (--content) <file>`: Path and name of the content file either downloaded or created from genomic data.
* `-a (--alphabet) <file> <number>`: If you'd like to use a different translation alphabet formatted in the NCBI-compliant way, provide the file (gc.prt) and the id (can be a string). Please use only letters in the range ['A',']'] from the ASCII table for your custom alphabet. Default: Hardcoded translation table.
* `--three`: Use only three reading frames instead of six. Halves index size but implies the usage of `--six` during identification if the orientation of the reads is unknown. Default: off.
* `--one`: Use only one reading frame instead of six. Reduces final index size significantly but sacrifices accuracy and robustness. Default: off.
* `--taxidasstr`: Taxonomic IDs are treated as strings and not integers. A fifth column will be added to the content file indicating the integer associated with this taxid.
* `--kH <12 or 25>`: Signals which bit size you want to use for the index (25 for 128 bit, 12 for 64 bit).

##### Example call

```
<path to kASA>/kASA build -c <content file> -d <path and name of the index file> -i <folder or file> -t <temporary directory> -m <amount of RAM kASA can use> -n <number of threads>

e.g.:
[weging@example:/kASA$] build/kASA build -c example/work/content.txt -d example/work/index/exampleIndex -i example/work/db/example.fasta -m 8 -t example/work/tmp/ -n 2

Create content file and index:
[weging@example:/kASA$] build/kASA build -d example/work/index/exampleIndex -i example/work/db/example.fasta -m 8 -t example/work/tmp/ -n 2 -f taxonomy/acc2Tax/ -y taxonomy/ -u species -v
```

### Identify

##### Context

This mode compares sequencing data with the built index. You can input fasta or fastq files (depending on whether the first symbol is a `>` or `@`), gzipped or not. If you want to put in multiple files, move them to a folder and place the path next to `-i`. The string given in `-p` and `-q` will then serve as a prefix concatenated with `"_<filename without path>.<csv or json>"`. kASA supports paired-end files which are synchronous. Protein sequences are detected automatically.

If you've used a custom alphabet for conversion, just use the same here by copying the `-a <file> <number>` part of your `build` call.

Since kASA uses k-mers, a `k` can be given to influence accuracy. You can set these bounds by yourself with the `-k` parameter; the default lower bound is 7, the upper 12. `k`'s smaller than 6 only make sense if your data is very noisy or you're working on amino acid level. If your read length is smaller than `k_lower * 3` on DNA/RNA level, it will be padded.

If you want to optimise precision over sensitivity, you could use `-k 12 12` and/or filter out low-scoring reads (e.g. by ignoring everything below 0.4 (Relative Score)).
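As an illustration of the relationship between read length and `k`: every amino acid k-mer consumes `3 * k` nucleotides per reading frame, which is why reads shorter than `k_lower * 3` need padding. The following is a minimal sketch of frame-wise translation and k-mer extraction; the tiny codon table and helper names are my own for illustration, kASA's internal translation table is complete and configurable via `-a`:

```python
# Sketch: translate a DNA read in three reading frames and extract k-mers.
# The codon table below is deliberately tiny (illustration only).
CODONS = {
    "ATG": "M", "AAA": "K", "GAT": "D", "GGC": "G",
    "TTT": "F", "CCA": "P", "ACG": "T", "CTG": "L",
}

def translate(dna: str, frame: int) -> str:
    """Translate one reading frame; unknown codons become 'X'."""
    return "".join(
        CODONS.get(dna[i:i + 3], "X")
        for i in range(frame, len(dna) - 2, 3)
    )

def kmers(aa: str, k: int):
    """All overlapping k-mers of an amino acid string."""
    return [aa[i:i + k] for i in range(len(aa) - k + 1)]

read = "ATGAAAGATGGCTTTCCAACGCTG"  # 24 nt
for frame in range(3):
    aa = translate(read, frame)
    # a frame yields len(aa) - k + 1 k-mers; with k_lower = 7 a read must
    # span at least 7 * 3 = 21 nt per frame, otherwise it gets padded
    print(frame, aa, kmers(aa, 7))
```

Note how frame 0 of a 24 nt read yields 8 amino acids and therefore just two 7-mers, which is why short reads carry so little signal per frame.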
Another important thing here is the output. Or the output**s** if you want. kASA can give you two files: one contains the per-read information on which taxa were found (identification file, by default in json format) and the other a table of how much of each taxon was found (the profile, a csv file). But because too much information isn't always nice, you can specify how many taxa with different scores shall be shown for each read (e.g. `-b 5` shows the best 5 hits).

The per-read error score ranges from -1 to 1. A 1 means that the best score deviates as far as possible from the optimal score, 0 means a perfect match and -1 means that the reverse complement also fits perfectly. In tsv format, only the error of the best score is printed.

Note that if you input a folder, file names are appended to your string given via `-p` or `-q`. If for example a folder contains two files named `example1.fq` and `example2.fasta` with `-p example/work/results/out_` as a parameter, then kASA will generate two output files named `out_example1.fq.csv` and `out_example2.fasta.csv`.

If a read cannot be identified, the array "Top hits" in json format is empty, and in tsv format "-" is printed in every column instead of taxa, names and scores. The "Top hits" array can contain multiple entries, especially if a k-mer Score is "close enough" to the highest score (all scores are normalized to [0,1] and everything with a score of more than 0.8 is considered a "Top hit"). Otherwise it contains the entry with the highest Relative Score, and all other hits are saved into the "Further hits" array.

The first line of the profile is always "not identified", followed by zeroes for the unique and non-unique frequencies but with values for the overall frequencies describing the fraction of the k-mers from the input which could not be identified.

##### Necessary parameters

* `-i (--input) <file/folder>`: Fastq or fasta file(s), can be gzipped.
If you want to process multiple files at once, put them inside a folder and let the path end with `/`. No default.
* `-p (--profile) <file>`: Path and name of the profile that is put out.
* `-q (--rtt) <file>`: Path and name of the read ID to tax IDs output file. If not given, a profile-only version of kASA will be used, which is much faster!

##### Optional parameters

* `-r (--ram)`: Loads the index into primary memory. If you don't provide enough RAM for this, it will fall back to using secondary memory. Default: false.
* `-a (--alphabet) <file> <number>`: If you'd like to use a different translation alphabet formatted in the NCBI-compliant way, provide the file (gc.prt) and the id (can be a string). Please use only letters in the range ['A',']'] from the ASCII table for your custom alphabet. Default: Hardcoded translation table.
* `-k <upper> <lower>`: Bounds for `k`; all `k`'s in between will be evaluated as well. If your intuition is more like `<lower> <upper>` then that's okay too. Default: 12 7.
* `--kH <upper>`: Set only the upper bound. If the index has been built with 128 bit size, k can be up to 25.
* `--kL <lower>`: Set only the lower bound.
* `-b (--beasts) <number>`: Number of hit taxa shown for each read. Default: 3.
* `-e (--unique)`: Ignores duplicates of `k`-mers in every read. This helps remove bias from repeats but messes with error scores, so consider it BETA.
* `--json`: Sets the output format to json. Default.
* `--jsonl`: Sets the output format to json lines.
* `--tsv`: Sets the output format to a tab-separated, per-line format.
* `--kraken`: Sets the output format to a Kraken-like tsv format.
* `--threshold <float>`: Set a minimum Relative Score so that everything below it will not be included in the output. For not-so-noisy data and reads of length 100, we recommend a value of 0.4. Default: 0.0.
* `--six`: Use all six reading frames instead of three.
Doubles the number of input k-mers but avoids artifacts due to additional reverse complement DNA inside some genomes. Default: off.
* `--one`: Use only one reading frame instead of three. Speeds up the tool significantly but sacrifices robustness. Default: off.
* `-1`: First file in a paired-end pair.
* `-2`: Second file in a paired-end pair. Both `-1` and `-2` must be used, and `-i` will be ignored for this call. Paired-end input can only be files, no folders.
* `--coverage`: Appends total counts and coverage percentage to the profile. If for example a file contained a whole genome of a taxon, the count should be equal to the number of k-mers in the index and the coverage be 100%. Therefore: the higher the coverage, the more likely it is for that taxon to truly be inside the sequenced data. Input must be processed in one go and not in chunks, so please provide enough RAM. Default: off.

##### Example call

```
<path to kASA>/kASA identify -c <content file> -d <path and name of index file> -i <input file or folder> -p <path and name of profile output> -q <path and name of read wise analysis> -m <amount of available GB> -t <path to temporary directory> -k <highest k> <lowest k> -n <number of parallel threads>

e.g.:
[weging@example:/kASA$] build/kASA identify -c example/work/content.txt -d example/work/index/exampleIndex -i example/work/input/example.fastq.gz -p example/work/results/example.csv -q example/work/results/example.json -m 8 -t example/work/tmp/ -k 12 9 -n 2
```

#### Output

##### Normal:

###### Identification

```
[
  {
    "Read number": 0,
    "Specifier from input file": "NZ_CP013542.1+NZ_JFYQ01000033.1",
    "Top hits": [
      {
        "tax ID": "396",
        "Name": "Rhizobium phaseoli",
        "k-mer Score": 33.32,
        "Relative Score": 1.03767e+00,
        "Error": 0.67
      }
    ],
    "Further hits": [
      {
        "tax ID": "1270",
        "Name": "Micrococcus luteus",
        "k-mer Score": 31.42,
        "Relative Score": 9.71243e-01,
        "Error": 0.69
      },
      {
        "tax ID": "2128",
        "Name": "Mycoplasma flocculare",
        "k-mer Score": 0.0833333,
        "Relative Score":
4.561024e-03,
        "Error": 0.99729
      }
    ]
  }
]
```

###### Profile

|#tax ID|Name|Unique counts k=12|Unique counts k=11|...|Unique rel. freq. k=12|...|Non-unique counts k=12|...|Non-unique rel. freq. k=12|...|Overall rel. freq. k=12| ... | Overall unique rel. freq. k=12| ... |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|9606,|Homo sapiens,|121252166,|111556464,|...|0.87,|...|2001658992,|...|0.79,|...|0.65,|...|0.83|...|

There are two relative frequencies because your index may be very ambiguous (e.g. it only consists of E. coli strains) and thus has only few unique hits. To get a hint which one would be more relevant to you, check your index with a call to `redundancy` in the [Miscellaneous](#miscellaneous) section. Relative frequencies in the human-readable profile are given for the largest k.

The "Overall (unique) relative frequency" is calculated by dividing the (non-)unique counts by the total number of k-mers from the input. This is also printed in the verbose mode like: "OUT: Number of k-mers in input: ... of which ... % were identified." for the largest k.

### Identify multiple

##### Context

This mode calls identify on multiple files at the same time. On a single-CPU system, using all available cores for one file after another is the usual use case. On HPCCs however, it makes more sense to utilize the many-cores-many-files architecture so that multiple files can be processed concurrently. This mode does just that. If you e.g. provide 40 cores and have 30 files to process, the files will be sorted by file size and the largest 10 get two cores while the others get one. If we have e.g. 40 files and 30 cores, we first process 30 files with one core each and then the remaining 10. This way we approximate a solution to the job shop problem. It is implemented with a work queue, so no synchronisation apart from managing the queue is done.
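The core-allocation heuristic just described could be sketched roughly like this (my own simplified sketch under the stated assumptions, not kASA's actual implementation):

```python
# Sketch: distribute <cores> threads over files sorted by size (largest first).
# If there are more cores than files, the largest files get the surplus cores;
# if there are more files than cores, each file gets one core and the files
# are processed in waves via a work queue.
def allocate_cores(file_sizes, cores):
    files = sorted(file_sizes, reverse=True)
    n = len(files)
    if cores >= n:
        base, surplus = divmod(cores, n)
        # the 'surplus' largest files get one extra core each
        return [(size, base + (1 if i < surplus else 0))
                for i, size in enumerate(files)]
    # more files than cores: one core per file, queue decides the order
    return [(size, 1) for size in files]

# 40 cores, 30 files: the 10 largest files get 2 cores, the rest 1
plan = allocate_cores(range(1, 31), 40)
print(plan[:3])
```

With 40 files and 30 cores the second branch applies: every file gets one core and the queue processes 30 at a time, matching the description above.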
The index and trie are loaded once in the beginning, and then all threads access that index (in RAM or not). Furthermore, you need to have at least two files in the folder of inputs (calling this on one file makes no sense).

##### Necessary parameters

* `-i (--input) <folder>`: Folder containing fastq or fasta files, can be gzipped. No default.
* `-p (--profile) <file>`: Path and prefix of the profiles which will be put out.
* `-q (--rtt) <file>`: Path and prefix of the read ID to tax IDs output files.

##### Optional parameters

* `-r (--ram)`: Loads the index into primary memory. If you don't provide enough RAM for this, it will fall back to using secondary memory. Default: false.
* `-a (--alphabet) <file> <number>`: If you'd like to use a different translation alphabet formatted in the NCBI-compliant way, provide the file (gc.prt) and the id (can be a string). Please use only letters in the range ['A',']'] from the ASCII table for your custom alphabet. Default: Hardcoded translation table.
* `-k <upper> <lower>`: Bounds for `k`; all `k`'s in between will be evaluated as well. If your intuition is more like `<lower> <upper>` then that's okay too. Default: 12 7.
* `--kH <upper>`: Set only the upper bound. If the index has been built with 128 bit size, k can be up to 25.
* `--kL <lower>`: Set only the lower bound.
* `-b (--beasts) <number>`: Number of hit taxa shown for each read. Default: 3.
* `-e (--unique)`: Ignores duplicates of `k`-mers in every read. This helps remove bias from repeats but messes with error scores, so consider it BETA.
* `--json`: Sets the output format to json. Default.
* `--jsonl`: Sets the output format to json lines.
* `--tsv`: Sets the output format to a tab-separated, per-line format.
* `--kraken`: Sets the output format to a Kraken-like tsv format.
* `--threshold <float>`: Set a minimum Relative Score so that everything below it will not be included in the output.
For not-so-noisy data and reads of length 100, we recommend a value of 0.4. Default: 0.0.
* `--six`: Use all six reading frames instead of three. Doubles the number of input k-mers but avoids artifacts due to additional reverse complement DNA inside some genomes. Default: off.
* `--one`: Use only one reading frame instead of three. Speeds up the tool significantly but sacrifices robustness. Default: off.

##### Example call

```
<path to kASA>/kASA identify_multiple -c <content file> -d <path and name of index file> -i <folder> -p <path and prefix of profile outputs> -q <path and prefix of read wise analyses> -m <amount of available GB> -t <path to temporary directory> -k <highest k> <lowest k> -n <number of parallel threads>

e.g.:
[weging@example:/kASA$] build/kASA identify_multiple -c example/work/content.txt -d example/work/index/exampleIndex -i example/work/input/ -p example/work/results/example_ -q example/work/results/example_ -m 8 -t example/work/tmp/ -k 12 9 -n 4
```

### Update

##### Context

Keeping the same index for years may not be that good an idea, so kASA gives you the possibility to add genomic material to an existing index. First, you need a fasta file or a folder with fasta files and a call to `update` with the `-o` parameter to specify where to put the new index if you don't want to overwrite the existing one. Next, since the content file is updated as well, you'll need the same parameters as in [generateCF](#generate-a-content-file) (meaning `-u <...> -f <...> -y <...>`). If you've updated your content file manually, then just add it via the `-c <...>` parameter.

If you want to delete entries from the index because they are not desired or deprecated in the NCBI taxonomy, add the `delnodes.dmp` file via the `-l` parameter. It's not necessary to change the content file in this case, although you should at some point to not clutter it too much... If you've created the content file together with the index, this default content file will be used.
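If you are curious what deleting via `delnodes.dmp` amounts to on the content-file level, here is a rough sketch (the simplified `delnodes.dmp` parsing and helper names are assumptions for illustration; kASA performs the deletion on the index itself):

```python
# Sketch: drop content-file entries whose taxid appears in delnodes.dmp.
def load_deleted_taxids(delnodes_lines):
    """delnodes.dmp lines look roughly like '<taxid>\t|' -- keep the taxid."""
    return {line.split("\t")[0].strip() for line in delnodes_lines if line.strip()}

def filter_content(content_lines, deleted):
    """Yield content-file lines whose taxid (second column) is not deleted."""
    for line in content_lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2 and fields[1] not in deleted:
            yield line

deleted = load_deleted_taxids(["12345\t|", "99999\t|"])
content = ["Proteus vulgaris\t585\t585\tCP023965.1\n",
           "Some removed taxon\t12345\t12345\tXX000001.1\n"]
print(list(filter_content(content, deleted)))
```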
##### Necessary parameters

* `-i (--input) <file/folder>`: Fasta file(s), can be gzipped. If you want to process multiple files at once, put them inside a folder and let the path end with `/`. No default.
* `-o (--outgoing) <file>`: Either the existing index or a new file name, depending on whether you want to keep the old file or not. Default: overwrite.
* `-l (--deleted) <file>`: Delete taxa via the NCBI taxonomy file.

##### Optional parameters

* `-a (--alphabet) <file> <number>`: If you'd like to use a different translation alphabet formatted in the NCBI-compliant way, provide the file (gc.prt) and the id (can be a string). Please use only letters in the range ['A',']'] from the ASCII table for your custom alphabet. Default: Hardcoded translation table.
* `--three`: Use only three reading frames instead of six. Default: off.

##### Example calls

```
<path to kASA>/kASA update -d <path and name of the index file> -o <path and name of the new index> -i <folder or file> -t <temporary directory> -m <amount of RAM> -f <accToTaxFile(s)> -y <folder with nodes.dmp and names.dmp> -u <taxonomic level, e.g. species>

e.g.:
[weging@example:/kASA$] build/kASA update -c example/work/content.txt -d example/work/index/exampleIndex -o example/work/index/updatedIndex -i example/work/db/16S_NCBI.fasta -t example/work/tmp/ -m 8 -f taxonomy/acc2Tax/ -y taxonomy/ -u species

<path to kASA>/kASA delete -c <content file> -d <path and name of the index file> -o <path and name of the new index> -l <delnodes.dmp> -t <temporary directory> -m <amount of RAM>

e.g.:
[weging@example:/kASA$] build/kASA delete -c example/work/content.txt -d example/work/index/exampleIndex -o example/work/index/updatedIndex -l taxonomy/delnodes.dmp -t example/work/tmp/ -m 8
```

### Shrink

##### Context

Should updating your index not happen that often, or should you want better performance and less space usage on your disk, shrinking it does the trick. kASA has multiple options:

1.
The first way deletes a certain percentage of k-mers from every taxon. This may be lossy, but it does not impact accuracy that much if your sequencing depth is high enough.
2. The second option is lossless, but it assumes that your content file has no more than 65535 entries and that you don't need k's smaller than 7. This will reduce the size of the index by half, but your index cannot be updated afterwards. Great for storing the index on an external drive.
3. This lossy option determines the (normalized binary) entropy of every k-mer and throws away anything not containing enough information. For example: AAABBBAAABBB would be thrown away but ABCDEFGAAABC wouldn't.

The parameter `-o` also decides here where to put your new index.

##### Necessary parameters

* `-s (--strategy) <1, 2 or 3>`: Shrink the index in one of the ways described above. Default: 2.
* `-g (--percentage) <integer>`: Deletes the given percentage of k-mers from every taxon. This parameter may also be applied when building the index (for example: -g 50 skips every second k-mer).
* `-o (--outgoing) <file>`: Output path and name of your shrunken index file. Your other index cannot be overwritten with this. Default: takes your index file and appends a "_s".

##### Example call

```
<path to kASA>/kASA shrink -c <content file> -d <path and name of the index file> -o <path and name of the new index> -s <1, 2 or 3> -g <percentage> -t <temporary directory>

e.g.:
[weging@example:/kASA$] build/kASA shrink -c example/work/content.txt -d example/work/index/exampleIndex -o example/work/index/exampleIndex_s -s 2 -t example/work/tmp/

e.g.:
[weging@example:/kASA$] build/kASA shrink -c example/work/content.txt -d example/work/index/exampleIndex -s 1 -g 25 -t example/work/tmp/

e.g.:
[weging@example:/kASA$] build/kASA build -c example/work/content.txt -d example/work/index/exampleIndex -g 50 -i example/work/db/example.fasta -m 8 -t example/work/tmp/ -n 2
```

### Merge

##### Context

This mode merges two indices into one.
Both indices must have been created with the same bit size (64 or 128). The content files are also merged. You cannot overwrite indices this way.

##### Necessary parameters

* `-c (--content) <file>`: Content file that already contains all taxa of both indices (if you have it already, for example). Default: none.
* `-c1 <file>`: Content file of the first index. Default: <index>_content.txt
* `-c2 <file>`: Content file of the second index. Default: Same as above.
* `-co <file>`: Content file in which the two will be merged. Default: Same as above.
* `--firstIndex <file>`: First index.
* `--secondIndex <file>`: Second index.
* `-o (--outgoing) <file>`: Resulting merged index. Default: None.

##### Example call

```
<path to kASA>/kASA merge -c <content file> -d <path and name of the first index file> -i <path and name of the second index> -o <path and name of the new index>

or

<path to kASA>/kASA merge -co <resulting content file> -c1 <content file of first index> -c2 <content file of second index> -d <path and name of the first index file> -i <path and name of the second index> -o <path and name of the new index>

e.g.:
[weging@example:/kASA$] build/kASA merge -co example/work/content_merged.txt -c1 example/work/index/index_1_content.txt -c2 example/work/index/index_2_content.txt -d example/work/index/index_1 -i example/work/index/index_2 -o example/work/index/index_merged
```

### Miscellaneous

1. If you've lost your frequency file "`<index name>_f.txt`" and have not shrunken the index via strategy 2, you can create one without building the index anew:
```
<path to kASA>/kASA getFrequency -c <content file> -d <path and name of the index file> -t <temporary directory> -m <amount of RAM> -n <number of threads>
```
2. If you've lost your trie file "`<index name>_trie`" and have not shrunken the index via strategy 2, your call will look like:
```
<path to kASA>/kASA trie -d <path and name of the index file> -t <temporary directory>
```
3.
If you've lost the `_info.txt` file associated with your index: Get the size in bytes, divide it by 12 (not shrunken) or 6 (shrunken via strategy 2) and create a `<index name>_info.txt` file with the result as its first line. For the shrunken index, add a 3 as second line in the file (which indicates that it has been shrunken).
4. You can measure the redundancy of your (non-halved) index via:
```
<path to kASA>/kASA redundancy -c <content file> -d <path and name of the index file> -t <temporary directory>
```
This gives you a hint whether you should look at the unique relative frequencies and/or the non-unique relative frequencies. It's measured by counting how many tax IDs 99% of your k-mers have. If for example almost every k-mer has only one associated taxon, that's pretty unique.
5. The folder `example/` contains a minimum working example for most of the calls presented in this Readme with a dummy taxonomy, .fasta and .fastq.gz file. If you have [Snakemake](https://snakemake.readthedocs.io/en/stable/getting_started/installation.html) installed, you can use the Snakemake file inside the folder to run all possible modes. Go inside the folder with `cd` and type `snakemake --config path=../build/` to point to the path where the kASA executable lies. If anything goes awry, please tell me.

## Useful scripts

- jsonToFrequencies.py: Creates a profile based on the most prominent taxa per read. Usage: `-i <kASA output> -o <result> (-t <threshold for rel. score>)`. Consumes a lot of memory because the json file is loaded into memory.
- jsonLToFrequencies.py: Same as above but with json lines as input format. Identification with kASA needs to be run with `--jsonl` in order to use this script. Much more lightweight on memory.
- tsvToFrequencies.py: Same as above but for `--tsv`.
- sumFreqsOnTaxLvl.py: Takes the output of any of the above scripts and sums them up on a specified level. Needs the NCBI taxonomy.
Usage: `-i <kASA output> -o <result> -n <nodes.dmp> -m <names.dmp> -r <rank, e.g. species or genus>`.
- csvToCAMI.py: Converts a profiling output into the CAMI profile format. Needs the NCBI 'nodes.dmp' and 'names.dmp' files for upwards traversal of the taxonomic tree. Usage: `-i <kASA output> -o <result> -n <nodes.dmp> -m <names.dmp> -k <k value> -u <u: unique, o: overall, n: non-unique> (-t <threshold>)`.
- camiToKrona.py: Converts the CAMI profile to a file format readable by [Krona](https://github.com/marbl/Krona/wiki). Usage: `-i <cami file> -o <krona file>`.
- jsonToCAMIBin.py: Converts the json output file into the CAMI binning format. Usage: `-i <json file> -o <cami file>`.
- jsonToJsonL.py: Converts a json file to a json-lines formatted file. Usage: `<json file> <json line file>`.
- getNotIdentified.py: Returns all reads which were not identified. Useful for further studies or bugfixing. Usage: `-i <json> -f <fastq or fasta> -o <output> -t <threshold>`.
- reconstructDNA.py: Algorithmic proof of our method. See supplemental file 1 from our paper. Usage: `<DNA sequence>`.
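As a rough illustration of what the frequency scripts above do (a simplified sketch; the field names follow the json output example shown earlier, everything else is assumed for illustration):

```python
# Sketch: build a frequency table from json-lines identification output,
# counting the best hit per read and skipping reads below a score threshold.
import json
from collections import Counter

def frequencies(jsonl_lines, threshold=0.0):
    counts = Counter()
    for line in jsonl_lines:
        read = json.loads(line)
        hits = read.get("Top hits", [])
        if hits and hits[0].get("Relative Score", 0.0) >= threshold:
            counts[hits[0]["Name"]] += 1
    return counts

lines = [
    '{"Top hits": [{"Name": "Rhizobium phaseoli", "Relative Score": 1.03}]}',
    '{"Top hits": [{"Name": "Rhizobium phaseoli", "Relative Score": 0.9}]}',
    '{"Top hits": []}',  # unidentified read: empty "Top hits"
]
print(frequencies(lines, threshold=0.4))
```

Processing line by line like this is why the json-lines variant stays lightweight on memory compared to loading one big json array.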
## Todos and upcoming

- ~~Kraken-like output out of kASA's identification file~~
- ~~Reworked building algorithm~~
- ~~Join two built indices~~
- ~~New shrink mode deleting k-mers that are overrepresented~~
- ~~Native support of the nr and other translated sequences~~
- ~~Allow gzipped files as input for `build`~~
- ~~RAM mode~~
- ~~Support of Clang (macOS)~~
- ~~Snakemake pipeline for quality control~~
- ~~TaxIDs can now be strings as well~~
- ~~Consideration of paired-end information~~
- ~~Larger k's than 12~~
- Profiles normalized to genome length; for now, you could hack that with the frequency file
- Support of [Recentrifuge](https://github.com/khyox/recentrifuge)
- Support of [bioconda](https://bioconda.github.io/)/[Snakemake](https://snakemake.readthedocs.io/en/stable/)
- Small collection of adapter sequences
- bzip2 support
- Live streaming of .bcl files
- Support of ARM64 architecture

## License

This project is licensed under the Boost License 1.0 - see the [LICENSE](https://github.com/SilvioWeging/kASA/blob/master/LICENSE.txt) file for details
---
UID: NS:ksmedia.KSAUDIO_POSITION
title: KSAUDIO_POSITION
author: windows-driver-content
description: The KSAUDIO_POSITION structure specifies the current positions of the play and write cursors in the sound buffer for an audio stream.
old-location: audio\ksaudio_position.htm
old-project: audio
ms.assetid: 91658dfc-dad4-4fbb-8688-13971e7275e2
ms.author: windowsdriverdev
ms.date: 2/27/2018
ms.keywords: "*PKSAUDIO_POSITION, KSAUDIO_POSITION, KSAUDIO_POSITION structure [Audio Devices], PKSAUDIO_POSITION, PKSAUDIO_POSITION structure pointer [Audio Devices], aud-prop_0518af7c-0c1d-4710-8879-43bb42e1ba2a.xml, audio.ksaudio_position, ksmedia/KSAUDIO_POSITION, ksmedia/PKSAUDIO_POSITION"
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: struct
req.header: ksmedia.h
req.include-header: Ksmedia.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- ksmedia.h
api_name:
- KSAUDIO_POSITION
product: Windows
targetos: Windows
req.typenames: KSAUDIO_POSITION, *PKSAUDIO_POSITION
---

# KSAUDIO_POSITION structure

## -description

The KSAUDIO_POSITION structure specifies the current positions of the play and write cursors in the sound buffer for an audio stream.

## -syntax

````
typedef struct {
  ULONGLONG PlayOffset;
  ULONGLONG WriteOffset;
} KSAUDIO_POSITION, *PKSAUDIO_POSITION;
````

## -struct-fields

### -field PlayOffset

Specifies the current play position as a byte offset.

### -field WriteOffset

Specifies the current write position as a byte offset.
## -remarks This structure is used to get and set the data value for the <a href="https://msdn.microsoft.com/library/windows/hardware/ff537297">KSPROPERTY_AUDIO_POSITION</a> property. For a looped client buffer (with stream type <a href="https://msdn.microsoft.com/library/windows/hardware/ff563381">KSINTERFACE_STANDARD_LOOPED_STREAMING</a>), <b>PlayOffset</b> and <b>WriteOffset</b> are byte offsets into the client buffer. When either offset reaches the end of the buffer, it wraps around to the start of the buffer. Hence, neither offset ever exceeds the buffer size. For a nonlooped client buffer (with stream type <a href="https://msdn.microsoft.com/library/windows/hardware/ff563384">KSINTERFACE_STANDARD_STREAMING</a>), <b>PlayOffset</b> and <b>WriteOffset</b> are not offsets into any one physical buffer that either your driver has allocated or a client has allocated. Instead, these offsets are stream-relative and can be thought of as offsets into an idealized buffer that contains the entire stream and is contiguous from beginning to end. Any internal offsets that point into the actual physical buffers that contain the data need to be maintained separately. During playback, the <b>PlayOffset</b> and <b>WriteOffset</b> values are interpreted as follows: <ul> <li> <b>PlayOffset</b> is the offset of the last byte in the buffer that has played. <b>PlayOffset</b> + 1 is the offset of the next byte that will play. </li> <li> <b>WriteOffset</b> is the offset of the last byte in the playback buffer. </li> </ul> When a client submits another buffer to the device for playback, <b>WriteOffset</b> will increment upon receipt of that buffer to indicate the new <b>WriteOffset</b> value, but <b>PlayOffset</b> does not change until after that buffer has actually been played by the device. During recording, the <b>PlayOffset</b> and <b>WriteOffset</b> values are interpreted as follows: <ul> <li> <b>PlayOffset</b> is the offset of the last byte in the buffer that has been captured. 
<b>PlayOffset</b> + 1 is the offset of the next byte that will be captured.
</li>
<li>
<b>WriteOffset</b> is the offset of the last byte in the capture buffer.
</li>
</ul>

When an application submits another buffer to the device for capturing, the <b>WriteOffset</b> value will increment upon receipt of that buffer. The <b>PlayOffset</b> value will not change until data has actually been captured into the buffer.

The space between <b>PlayOffset</b> and <b>WriteOffset</b> is considered off-limits to the client because it represents the portion of the client buffer that has already been sent to the driver and might still be in use by the driver.

For more information, see <a href="https://msdn.microsoft.com/893fea84-9136-4107-96d2-8a4e2ab7bd2a">Audio Position Property</a>.

## -see-also

<a href="https://msdn.microsoft.com/library/windows/hardware/ff563384">KSINTERFACE_STANDARD_STREAMING</a>

<a href="https://msdn.microsoft.com/library/windows/hardware/ff537297">KSPROPERTY_AUDIO_POSITION</a>

<a href="https://msdn.microsoft.com/library/windows/hardware/ff563381">KSINTERFACE_STANDARD_LOOPED_STREAMING</a>
---
TAGS:
---

## Description

class [Ground](/classes/3.1/Ground) extends _Primitive

## Constructor

## new [Ground](/classes/3.1/Ground)(id, scene, width, height, subdivisions, canBeRegenerated, mesh)

#### Parameters

| Name | Type | Description |
|---|---|---|
| id | string | |
| scene | [Scene](/classes/3.1/Scene) | |
| width | number | |
| height | number | |
| subdivisions | number | optional |
| canBeRegenerated | boolean | |

## Members

### width : number

### height : number

### subdivisions : number

## Methods

### copy(id) &rarr; [Geometry](/classes/3.1/Geometry)

#### Parameters

| Name | Type | Description |
|---|---|---|
| id | string | |

### serialize() &rarr; any

### static Parse(parsedGround, scene) &rarr; Nullable&lt;undefined&gt;

#### Parameters

| Name | Type | Description |
|---|---|---|
| parsedGround | any | |
| scene | [Scene](/classes/3.1/Scene) | |
# yygh-admin

#### Introduction

Screenshots of the project's cloud deployment are shown at https://www.yuque.com/jinminghd/kb/gmeql6
--- title: Cadeias de caracteres de formato de data e hora personalizado description: Aprenda a usar cadeias de caracteres de formato de data e hora personalizadas para converter valores DateTime ou DateTimeOffset em representações de texto ou para analisar cadeias de caracteres de datas & horas. ms.date: 03/30/2017 ms.technology: dotnet-standard ms.topic: reference dev_langs: - csharp - vb helpviewer_keywords: - formatting [.NET Framework], dates - custom DateTime format string - format specifiers, custom date and time - format strings - custom date and time format strings - formatting [.NET Framework], time - date and time strings ms.assetid: 98b374e3-0cc2-4c78-ab44-efb671d71984 ms.openlocfilehash: 48e1b40ddd4bc7fae7d65660adf216756d7c83f7 ms.sourcegitcommit: 2987e241e2f76c9248d2146bf2761a33e2c7a882 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 08/14/2020 ms.locfileid: "88228738" --- # <a name="custom-date-and-time-format-strings"></a>Cadeias de caracteres de formato de data e hora personalizado Uma cadeia de caracteres de formato de data e hora define a representação de texto de um valor <xref:System.DateTime> ou <xref:System.DateTimeOffset> que é resultante de uma operação de formatação. Ela também pode definir a representação de um valor de data e hora necessário em uma operação de análise para converter com êxito a cadeia de caracteres para uma data e hora. Uma cadeia de caracteres de formato personalizado consiste em um ou mais especificadores de formato de data e hora personalizado. Qualquer cadeia de caracteres que não é uma [cadeia de caracteres de formato de data e hora padrão](standard-date-and-time-format-strings.md) é interpretada como uma cadeia de caracteres de formato de data e hora personalizado. > [!TIP] > Baixe o **Utilitário de Formatação**, um aplicativo do Windows Forms do .NET Core que permite aplicar cadeias de caracteres de formato a valores numéricos ou de data e hora e exibir a cadeia de caracteres de resultado. 
O código-fonte está disponível para o [C#](https://docs.microsoft.com/samples/dotnet/samples/windowsforms-formatting-utility-cs) e o [Visual Basic](https://docs.microsoft.com/samples/dotnet/samples/windowsforms-formatting-utility-vb). As cadeias de caracteres de formato de data e hora personalizado podem ser usadas tanto com valores <xref:System.DateTime> quanto <xref:System.DateTimeOffset>. [!INCLUDE[C# interactive-note](~/includes/csharp-interactive-with-utc-partial-note.md)] <a name="table"></a> Em operações de formatação, cadeias de caracteres de formato de data e hora personalizadas podem ser usadas com o `ToString` método de uma instância de data e hora ou com um método que dá suporte à formatação composta. O exemplo a seguir ilustra ambos os usos. [!code-csharp-interactive[Formatting.DateAndTime.Custom#17](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/custandformatting1.cs#17)] [!code-vb[Formatting.DateAndTime.Custom#17](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/custandformatting1.vb#17)] Em operações de análise, as cadeias de caracteres de formato de data e hora personalizado podem ser usadas com os métodos <xref:System.DateTime.ParseExact%2A?displayProperty=nameWithType>, <xref:System.DateTime.TryParseExact%2A?displayProperty=nameWithType>, <xref:System.DateTimeOffset.ParseExact%2A?displayProperty=nameWithType> e <xref:System.DateTimeOffset.TryParseExact%2A?displayProperty=nameWithType>. Esses métodos exigem que uma cadeia de caracteres de entrada esteja exatamente de acordo com um padrão específico para que a operação de análise obtenha êxito. O exemplo a seguir ilustra uma chamada ao método <xref:System.DateTimeOffset.ParseExact%28System.String%2CSystem.String%2CSystem.IFormatProvider%29?displayProperty=nameWithType> para analisar uma data que deve incluir um dia, um mês e um ano com dois dígitos. 
[!code-csharp[Formatting.DateAndTime.Custom#18](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/custandparsing1.cs#18)] [!code-vb[Formatting.DateAndTime.Custom#18](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/custandparsing1.vb#18)] A tabela a seguir descreve os especificadores de formato de data e hora padrão e exibe uma cadeia de caracteres de resultado produzida por cada especificador de formato. Por padrão, as cadeias de caracteres de resultado refletem as convenções de formatação da cultura en-US. Se um determinado especificador de formato produz uma cadeia de caracteres de resultado localizada, o exemplo também observa a cultura à qual a cadeia de caracteres de resultado se aplica. Para obter informações adicionais sobre como usar cadeias de caracteres de formato data e hora personalizado, confira a seção [Observações](#notes). | Especificador de formato | Descrição | Exemplos | |--|--|--| | "d" | O dia do mês, de 1 a 31.<br /><br /> Mais informações: [Especificador de formato personalizado "d"](#dSpecifier). | 2009-06-01T13:45:30 -> 1<br /><br /> 2009-06-15T13:45:30 -> 15 | | "dd" | O dia do mês, de 01 a 31.<br /><br /> Mais informações: [Especificador de formato personalizado "dd"](#ddSpecifier). | 2009-06-01T13:45:30 -> 01<br /><br /> 2009-06-15T13:45:30 -> 15 | | "ddd" | O nome abreviado do dia da semana.<br /><br /> Mais informações: [Especificador de formato personalizado "ddd"](#dddSpecifier). | 2009-06-15T13:45:30 -> Mon (en-US)<br /><br /> 2009-06-15T13:45:30 -> Пн (ru-RU)<br /><br /> 2009-06-15T13:45:30 -> lun. (fr-FR) | | "dddd" | O nome completo do dia da semana.<br /><br /> Mais informações: [Especificador de formato personalizado "dddd"](#ddddSpecifier). 
| 2009-06-15T13:45:30 -> Monday (en-US)<br /><br /> 2009-06-15T13:45:30 -> понедельник (ru-RU)<br /><br /> 2009-06-15T13:45:30 -> lundi (fr-FR) | | "f" | Os décimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [Especificador de formato personalizado "F"](#fSpecifier). | 2009-06-15T13:45:30.6170000 -> 6<br /><br /> 2009-06-15T13:45:30.05 -> 0 | | "ff" | Os centésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [o especificador de formato personalizado "FF"](#ffSpecifier). | 2009-06-15T13:45:30.6170000 -> 61<br /><br /> 2009-06-15T13:45:30.0050000 -> 00 | | "fff" | Os milissegundos em um valor de data e hora.<br /><br /> Mais informações: [Especificador de formato personalizado "fff"](#fffSpecifier). | 6/15/2009 13:45:30.617 -> 617<br /><br /> 6/15/2009 13:45:30.0005 -> 000 | | "ffff" | Os décimos de milésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [o especificador de formato personalizado "ffff"](#ffffSpecifier). | 2009-06-15T13:45:30.6175000 -> 6175<br /><br /> 2009-06-15T13:45:30.0000500 -> 0000 | | "fffff" | Os centésimos de milésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [Especificador de formato personalizado "fffff"](#fffffSpecifier). | 2009-06-15T13:45:30.6175400 -> 61754<br /><br /> 6/15/2009 13:45:30.000005 -> 00000 | | "ffffff" | Os milionésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [o especificador de formato personalizado "FFFFFF"](#ffffffSpecifier). | 2009-06-15T13:45:30.6175420 -> 617542<br /><br /> 2009-06-15T13:45:30.0000005 -> 000000 | | "fffffff" | Os décimos de milionésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [o especificador de formato personalizado "fffffff"](#fffffffSpecifier). 
| 2009-06-15T13:45:30.6175425 -> 6175425<br /><br /> 2009-06-15T13:45:30.0001150 -> 0001150 | | "F" | Se diferente de zero, os décimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [o especificador de formato personalizado "F"](#F_Specifier). | 2009-06-15T13:45:30.6170000 -> 6<br /><br /> 2009-06-15T13:45:30.0500000 -> (nenhuma saída) | | "FF" | Se diferente de zero, os centésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [Especificador de formato personalizado "FF"](#FF_Specifier). | 2009-06-15T13:45:30.6170000 -> 61<br /><br /> 2009-06-15T13:45:30.0050000 -> (nenhuma saída) | | "FFF" | Se diferente de zero, os milissegundos em um valor de data e hora.<br /><br /> Mais informações: [o especificador de formato personalizado "fff"](#FFF_Specifier). | 2009-06-15T13:45:30.6170000 -> 617<br /><br /> 2009-06-15T13:45:30.0005000 -> (nenhuma saída) | | "FFFF" | Se diferente de zero, os décimos de milésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [Especificador de formato personalizado "FFFF"](#FFFF_Specifier). | 2009-06-15T13:45:30.5275000 -> 5275<br /><br /> 2009-06-15T13:45:30.0000500 -> (nenhuma saída) | | "FFFFF" | Se diferente de zero, os centésimos de milésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [o especificador de formato personalizado "fffff"](#FFFFF_Specifier). | 2009-06-15T13:45:30.6175400 -> 61754<br /><br /> 2009-06-15T13:45:30.0000050 -> (nenhuma saída) | | "FFFFFF" | Se diferente de zero, os milionésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [Especificador de formato personalizado "FFFFFF"](#FFFFFF_Specifier). | 2009-06-15T13:45:30.6175420 -> 617542<br /><br /> 2009-06-15T13:45:30.0000005 -> (nenhuma saída) | | "FFFFFFF" | Se diferente de zero, os décimos de milionésimos de segundo em um valor de data e hora.<br /><br /> Mais informações: [Especificador de formato personalizado "FFFFFFF"](#FFFFFFF_Specifier). 
| 2009-06-15T13:45:30.6175425 -> 6175425<br /><br /> 2009-06-15T13:45:30.0001150 -> 000115 | | "g", "gg" | O período ou a era.<br /><br /> Mais informações: [Especificador de formato personalizado "g" ou "gg"](#gSpecifier). | 2009-06-15T13:45:30.6170000 -> A.D. | | "h" | A hora, usando um relógio de 12 horas de 1 a 12.<br /><br /> Mais informações: [o especificador de formato personalizado "h"](#hSpecifier). | 2009-06-15T01:45:30 -> 1<br /><br /> 2009-06-15T13:45:30 -> 1 | | "hh" | A hora, usando um relógio de 12 horas de 01 a 12.<br /><br /> Mais informações: [o especificador de formato personalizado "HH"](#hhSpecifier). | 2009-06-15T01:45:30 -> 01<br /><br /> 2009-06-15T13:45:30 -> 01 | | "H" | A hora, usando um relógio de 24 horas de 0 a 23.<br /><br /> Mais informações: [Especificador de formato personalizado "H"](#H_Specifier). | 2009-06-15T01:45:30 -> 1<br /><br /> 2009-06-15T13:45:30 -> 13 | | "HH" | A hora, usando um relógio de 24 horas de 00 a 23.<br /><br /> Mais informações: [Especificador de formato personalizado "HH"](#HH_Specifier). | 2009-06-15T01:45:30 -> 01<br /><br /> 2009-06-15T13:45:30 -> 13 | | "K" | Informações de fuso horário.<br /><br /> Mais informações: [Especificador de formato personalizado "K"](#KSpecifier). | Com valores de <xref:System.DateTime>:<br /><br /> 2009-06-15T13:45:30, Tipo não especificado -><br /><br /> 2009-06-15T13:45:30, Tipo Utc -> Z<br /><br /> 2009-06-15T13:45:30, Tipo local -> -07:00 (depende das configurações do computador local)<br /><br /> Com valores de <xref:System.DateTimeOffset>:<br /><br /> 2009-06-15T01:45:30-07:00 --> -07:00<br /><br /> 2009-06-15T08:45:30+00:00 --> +00:00 | | "m" | O minuto, de 0 a 59.<br /><br /> Mais informações: [o especificador de formato personalizado "m"](#mSpecifier). | 2009-06-15T01:09:30 -> 9<br /><br /> 2009-06-15T13:29:30 -> 29 | | "mm" | O minuto, de 00 a 59.<br /><br /> Mais informações: [Especificador de formato personalizado "MM"](#mmSpecifier). 
| 2009-06-15T01:09:30 -> 09<br /><br /> 2009-06-15T01:45:30 -> 45 | | “M” | O mês, de 1 a 12.<br /><br /> Mais informações: [Especificador de formato personalizado "M"](#M_Specifier). | 2009-06-15T13:45:30 -> 6 | | "MM" | O mês, de 01 a 12.<br /><br /> Mais informações: [o especificador de formato personalizado "mm"](#MM_Specifier). | 2009-06-15T13:45:30 -> 06 | | "MMM" | O nome do mês abreviado.<br /><br /> Mais informações: [Especificador de formato personalizado "MMM"](#MMM_Specifier). | 2009-06-15T13:45:30 -> Jun (en-US)<br /><br /> 2009-06-15T13:45:30 -> juin (fr-FR)<br /><br /> 2009-06-15T13:45:30 -> Jun (zu-ZA) | | "MMMM" | O nome completo do mês.<br /><br /> Mais informações: [Especificador de formato personalizado "MMMM"](#MMMM_Specifier). | 2009-06-15T13:45:30 -> June (en-US)<br /><br /> 2009-06-15T13:45:30 -> juni (da-DK)<br /><br /> 2009-06-15T13:45:30 -> uJuni (zu-ZA) | | "s" | O segundo, de 0 a 59.<br /><br /> Mais informações: [Especificador de formato personalizado "s"](#sSpecifier). | 2009-06-15T13:45:09 -> 9 | | "ss" | O segundo, de 00 a 59.<br /><br /> Mais informações: [Especificador de formato personalizado "ss"](#ssSpecifier). | 2009-06-15T13:45:09 -> 09 | | "t" | O primeiro caractere do designador AM/PM.<br /><br /> Mais informações: [Especificador de formato personalizado "t"](#tSpecifier). | 2009-06-15T13:45:30 -> P (en-US)<br /><br /> 2009-06-15T13:45:30 -> 午 (ja-JP)<br /><br /> 2009-06-15T13:45:30 -> (fr-FR) | | "tt" | O designador AM/PM.<br /><br /> Mais informações: [Especificador de formato personalizado "tt"](#ttSpecifier). | 2009-06-15T13:45:30 -> PM (en-US)<br /><br /> 2009-06-15T13:45:30 -> 午後 (ja-JP)<br /><br /> 2009-06-15T13:45:30 -> (fr-FR) | | "y" | O ano, de 0 a 99.<br /><br /> Mais informações: [Especificador de formato personalizado "y"](#ySpecifier). 
| 0001-01-01T00:00:00 -> 1<br /><br /> 0900-01-01T00:00:00 -> 0<br /><br /> 1900-01-01T00:00:00 -> 0<br /><br /> 2009-06-15T13:45:30 -> 9<br /><br /> 2019-06-15T13:45:30 -> 19 | | "yy" | O ano, de 00 a 99.<br /><br /> Mais informações: [Especificador de formato personalizado "yy"](#yySpecifier). | 0001-01-01T00:00:00 -> 01<br /><br /> 0900-01-01T00:00:00 -> 00<br /><br /> 1900-01-01T00:00:00 -> 00<br /><br /> 2019-06-15T13:45:30 -> 19 | | "yyy" | O ano, com um mínimo de três dígitos.<br /><br /> Mais informações: [Especificador de formato personalizado "yyy"](#yyySpecifier). | 0001-01-01T00:00:00 -> 001<br /><br /> 0900-01-01T00:00:00 -> 900<br /><br /> 1900-01-01T00:00:00 -> 1900<br /><br /> 2009-06-15T13:45:30 -> 2009 | | "yyyy" | O ano como um número de quatro dígitos.<br /><br /> Mais informações: [Especificador de formato personalizado "yyyy"](#yyyySpecifier). | 0001-01-01T00:00:00 -> 0001<br /><br /> 0900-01-01T00:00:00 -> 0900<br /><br /> 1900-01-01T00:00:00 -> 1900<br /><br /> 2009-06-15T13:45:30 -> 2009 | | "yyyyy" | O ano como um número de cinco dígitos.<br /><br /> Mais informações: [Especificador de formato personalizado "yyyyy"](#yyyyySpecifier). | 0001-01-01T00:00:00 -> 00001<br /><br /> 2009-06-15T13:45:30 -> 02009 | | "z" | Diferença de horas em relação ao UTC, sem zeros à esquerda.<br /><br /> Mais informações: [Especificador de formato personalizado "z"](#zSpecifier). | 2009-06-15T13:45:30-07:00 -> -7 | | "zz" | Diferença de horas em relação ao UTC, com um zero à esquerda para um valor de dígito único.<br /><br /> Mais informações: [Especificador de formato personalizado "zz"](#zzSpecifier). | 2009-06-15T13:45:30-07:00 -> -07 | | "zzz" | Diferença de horas e minutos em relação ao UTC.<br /><br /> Mais informações: [Especificador de formato personalizado "zzz"](#zzzSpecifier). | 2009-06-15T13:45:30-07:00 -> -07:00 | | ":" | O separador de hora.<br /><br /> Mais informações: [Especificador de formato personalizado ":"](#timeSeparator). 
| 2009-06-15T13:45:30 -> : (en-US)<br /><br /> 2009-06-15T13:45:30 -> . (it-IT)<br /><br /> 2009-06-15T13:45:30 -> : (ja-JP) | | "/" | O separador de data.<br /><br /> Mais informações: [o especificador de formato personalizado "/"](#dateSeparator). | 2009-06-15T13:45:30 -> / (en-US)<br /><br /> 2009-06-15T13:45:30 -> - (ar-DZ)<br /><br /> 2009-06-15T13:45:30 -> . (tr-TR) | | "*String*"<br /><br /> '*cadeia de caracteres*' | Delimitador de cadeia de caracteres literal.<br /><br /> Para saber mais: [Literais de cadeia de caracteres](#Literals). | 2009-06-15T13:45:30 ("arr:" h:m t) -> arr: 1:45 P<br /><br /> 2009-06-15T13:45:30 ('arr:' h:m t) -> arr: 1:45 P | | % | Define o caractere seguinte como um especificador de formato personalizado.<br /><br /> Mais informações:[Usar especificadores de formato personalizado simples](#UsingSingleSpecifiers). | 2009-06-15T13:45:30 (%h) -> 1 | | &#92; | O caractere de escape.<br /><br /> Para saber mais: [Literais de cadeia de caracteres](#Literals) e [Como usar o caractere de escape](#escape). | 2009-06-15T13:45:30 (h \h) -> 1 h | | Qualquer outro caractere | O caractere é copiado, inalterado, para a cadeia de caracteres de resultado.<br /><br /> Para saber mais: [Literais de cadeia de caracteres](#Literals). | 2009-06-15T01:45:30 (arr hh:mm t) -> arr 01:45 A | As seções a seguir oferecem informações adicionais sobre cada especificador de formato de data e hora personalizado. A menos que observado do contrário, cada especificador produz uma representação de cadeia de caracteres idêntica independente de ela ser usada com um valor <xref:System.DateTime> ou um valor <xref:System.DateTimeOffset>. ## <a name="day-d-format-specifier"></a>Especificador de formato de dia "d" ### <a name="the-d-custom-format-specifier"></a><a name="dSpecifier"></a> O especificador de formato personalizado "d" O especificador de formato personalizado "d" representa o dia do mês como um número de 1 a 31. 
Dias de dígito único são formatados sem um zero à esquerda. Se o especificador de formato "d" for usado sem outros especificadores de formato personalizado, ele será interpretado como um especificador de formato de data e hora padrão "d". Para saber mais sobre como usar um especificador de formato único, confira [Usar especificadores de formato único personalizados](#UsingSingleSpecifiers) posteriormente nesse artigo. O exemplo a seguir inclui o especificador de formato personalizado "d" em várias cadeias de caracteres de formato. [!code-csharp[Formatting.DateAndTime.Custom#1](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#1)] [!code-vb[Formatting.DateAndTime.Custom#1](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#1)] [Voltar à tabela](#table) ### <a name="the-dd-custom-format-specifier"></a><a name="ddSpecifier"></a> O especificador de formato personalizado "dd" A cadeia de caracteres de formato personalizado "dd" representa o dia do mês como um número de 01 a 31. Dias de dígito único são formatados com um zero à esquerda. O exemplo a seguir inclui o especificador de formato personalizado "dd" em uma cadeia de caracteres de formato personalizado. [!code-csharp[Formatting.DateAndTime.Custom#2](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#2)] [!code-vb[Formatting.DateAndTime.Custom#2](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#2)] [Voltar à tabela](#table) ### <a name="the-ddd-custom-format-specifier"></a><a name="dddSpecifier"></a> O especificador de formato personalizado "ddd" O especificador de formato personalizado "ddd" representa o nome do dia da semana abreviado. O nome do dia da semana localizado abreviado é recuperado da propriedade <xref:System.Globalization.DateTimeFormatInfo.AbbreviatedDayNames%2A?displayProperty=nameWithType> da cultura atual ou especificada. 
O exemplo a seguir inclui o especificador de formato personalizado "ddd" em uma cadeia de caracteres de formato personalizado. [!code-csharp[Formatting.DateAndTime.Custom#3](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#3)] [!code-vb[Formatting.DateAndTime.Custom#3](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#3)] [Voltar à tabela](#table) ### <a name="the-dddd-custom-format-specifier"></a><a name="ddddSpecifier"></a> O especificador de formato personalizado "dddd" O especificador de formato personalizado "dddd" (mais um número qualquer de especificadores "d" adicionais) representa o nome completo do dia da semana. O nome do dia da semana localizado é recuperado da propriedade <xref:System.Globalization.DateTimeFormatInfo.DayNames%2A?displayProperty=nameWithType> da cultura atual ou especificada. O exemplo a seguir inclui o especificador de formato personalizado "dddd" em uma cadeia de caracteres de formato personalizado. [!code-csharp[Formatting.DateAndTime.Custom#4](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#4)] [!code-vb[Formatting.DateAndTime.Custom#4](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#4)] [Voltar à tabela](#table) ## <a name="lowercase-seconds-f-fraction-specifier"></a>Especificador de fração de segundos em minúsculas "f" ### <a name="the-f-custom-format-specifier"></a><a name="fSpecifier"></a> O especificador de formato personalizado "f" O especificador de formato personalizado "f" representa o dígito mais significativo da fração de segundos, ou seja, representa os décimos de segundo em um valor de data e hora. Se o especificador de formato "f" for usado sem outros especificadores de formato, ele será interpretado como o especificador padrão de formato de data e hora "f". 
Para saber mais sobre como usar um especificador de formato único, confira [Usar especificadores de formato único personalizados](#UsingSingleSpecifiers) posteriormente nesse artigo. Quando você usa especificadores de formato "f" como parte de uma cadeia de caracteres de formato fornecida para o método <xref:System.DateTime.ParseExact%2A>, <xref:System.DateTime.TryParseExact%2A>, <xref:System.DateTimeOffset.ParseExact%2A> ou <xref:System.DateTimeOffset.TryParseExact%2A>, o número de especificadores de formato "f" indica o número de dígitos mais significativos da fração de segundos que deve estar presente para analisar a cadeia de caracteres com sucesso. O exemplo a seguir inclui o especificador de formato personalizado "f" em uma cadeia de caracteres de formato personalizado. [!code-csharp[Formatting.DateAndTime.Custom#5](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#5)] [!code-vb[Formatting.DateAndTime.Custom#5](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#5)] [Voltar à tabela](#table) ### <a name="the-ff-custom-format-specifier"></a><a name="ffSpecifier"></a> O especificador de formato personalizado "FF" O especificador de formato personalizado "ff" representa os dois dígitos mais significativos da fração de segundos, ou seja, ele representa os centésimos de segundo em um valor de data e hora. O exemplo a seguir inclui o especificador de formato personalizado "ff" em uma cadeia de caracteres de formato personalizado. 
[!code-csharp[Formatting.DateAndTime.Custom#5](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#5)] [!code-vb[Formatting.DateAndTime.Custom#5](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#5)] [Voltar à tabela](#table) ### <a name="the-fff-custom-format-specifier"></a><a name="fffSpecifier"></a> O especificador de formato personalizado "fff" O especificador de formato personalizado "fff" representa os três dígitos mais significativos da fração de segundos, ou seja, ele representa os milissegundos em um valor de data e hora. O exemplo a seguir inclui o especificador de formato personalizado "fff" em uma cadeia de caracteres de formato personalizado. [!code-csharp[Formatting.DateAndTime.Custom#5](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#5)] [!code-vb[Formatting.DateAndTime.Custom#5](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#5)] [Voltar à tabela](#table) ### <a name="the-ffff-custom-format-specifier"></a><a name="ffffSpecifier"></a> O especificador de formato personalizado "ffff" O especificador de formato personalizado "ffff" representa os quatro dígitos mais significativos da fração de segundos, ou seja, ele representa os décimos de milésimos de um segundo em um valor de data e hora. Embora seja possível exibir os décimos de milésimos de um componente de segundos de um valor temporal, esse valor pode não ser significativo. A precisão dos valores de data e hora depende da resolução do relação ao relógio do sistema. Nos sistemas operacionais Windows NT versão 3.5 (e posterior) e Windows Vista, a resolução do relógio é de aproximadamente 10 a 15 milissegundos. 
[Voltar à tabela](#table) ### <a name="the-fffff-custom-format-specifier"></a><a name="fffffSpecifier"></a> O especificador de formato personalizado "fffff" O especificador de formato personalizado "fffff" representa os cinco dígitos mais significativos da fração de segundos, ou seja, ele representa os centésimos de milésimos de um segundo em um valor de data e hora. Embora seja possível exibir os centésimos de milésimos de um componente de segundos de um valor temporal, esse valor pode não ser significativo. A precisão dos valores de data e hora depende da resolução do relação ao relógio do sistema. Nos sistemas operacionais Windows NT 3.5 (e posterior) e Windows Vista, a resolução do relógio é de aproximadamente 10 a 15 milissegundos. [Voltar à tabela](#table) ### <a name="the-ffffff-custom-format-specifier"></a><a name="ffffffSpecifier"></a> O especificador de formato personalizado "FFFFFF" O especificador de formato personalizado "ffffff" representa os seis dígitos mais significativos da fração de segundos, ou seja, ele representa os milionésimos de um segundo em um valor de data e hora. Embora seja possível exibir os milionésimos de um componente de segundos de um valor temporal, esse valor pode não ser significativo. A precisão dos valores de data e hora depende da resolução do relação ao relógio do sistema. Nos sistemas operacionais Windows NT 3.5 (e posterior) e Windows Vista, a resolução do relógio é de aproximadamente 10 a 15 milissegundos. [Voltar à tabela](#table) ### <a name="the-fffffff-custom-format-specifier"></a><a name="fffffffSpecifier"></a> O especificador de formato personalizado "fffffff" O especificador de formato personalizado "fffffff" representa os sete dígitos mais significativos da fração de segundos; ou seja, representa os décimos de milionésimos de segundo em um valor de data e hora. Embora seja possível exibir os décimos de milionésimos de um componente de segundos de um valor temporal, esse valor pode não ser significativo. 
A precisão dos valores de data e hora depende da resolução do relação ao relógio do sistema. Nos sistemas operacionais Windows NT 3.5 (e posterior) e Windows Vista, a resolução do relógio é de aproximadamente 10 a 15 milissegundos. [Voltar à tabela](#table) ## <a name="uppercase-seconds-f-fraction-specifier"></a>Especificador de fração "F" de segundos maiúsculos ### <a name="the-f-custom-format-specifier"></a><a name="F_Specifier"></a> O especificador de formato personalizado "F" O especificador de formato personalizado "F" representa o dígito mais significativo da fração de segundos, ou seja, representa os décimos de segundo em um valor de data e hora. Nada será exibido se o dígito for zero. Se o especificador de formato "F" for usado sem outros especificadores de formato, ele será interpretado como o especificador padrão de formato de data e hora "F". Para saber mais sobre como usar um especificador de formato único, confira [Usar especificadores de formato único personalizados](#UsingSingleSpecifiers) posteriormente nesse artigo. O número de especificadores de formato "F" usados com o método <xref:System.DateTime.ParseExact%2A>, <xref:System.DateTime.TryParseExact%2A>, <xref:System.DateTimeOffset.ParseExact%2A> ou <xref:System.DateTimeOffset.TryParseExact%2A> indica o número máximo de dígitos significativos da fração de segundos que podem estar presentes para que a análise da cadeia de caracteres seja feita com êxito. O exemplo a seguir inclui o especificador de formato personalizado "F" em uma cadeia de caracteres de formato personalizado. 
[!code-csharp[Formatting.DateAndTime.Custom#5](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#5)] [!code-vb[Formatting.DateAndTime.Custom#5](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#5)] [Voltar à tabela](#table) ### <a name="the-ff-custom-format-specifier"></a><a name="FF_Specifier"></a> O especificador de formato personalizado "FF" O especificador de formato personalizado "FF" representa os dois dígitos mais significativos da fração de segundos, ou seja, ele representa os centésimos de segundo em um valor de data e hora. No entanto, zeros à direita ou dois dígitos zero não são exibidos. O exemplo a seguir inclui o especificador de formato personalizado "FF" em uma cadeia de caracteres de formato personalizado. [!code-csharp[Formatting.DateAndTime.Custom#5](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#5)] [!code-vb[Formatting.DateAndTime.Custom#5](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#5)] [Voltar à tabela](#table) ### <a name="the-fff-custom-format-specifier"></a><a name="FFF_Specifier"></a> O especificador de formato personalizado "FFF" O especificador de formato personalizado "FFF" representa os três dígitos mais significativos da fração de segundos, ou seja, ele representa os milissegundos em um valor de data e hora. No entanto, zeros à direita ou três dígitos zero não são exibidos. O exemplo a seguir inclui o especificador de formato personalizado "FFF" em uma cadeia de caracteres de formato personalizado. 
[!code-csharp[Formatting.DateAndTime.Custom#5](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#5)]
[!code-vb[Formatting.DateAndTime.Custom#5](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#5)]

[Back to table](#table)

### <a name="the-ffff-custom-format-specifier"></a><a name="FFFF_Specifier"></a> The "FFFF" custom format specifier

The "FFFF" custom format specifier represents the four most significant digits of the seconds fraction; that is, it represents the ten-thousandths of a second in a date and time value. However, trailing zeros or four zero digits aren't displayed.

Although it's possible to display the ten-thousandths of a second component of a time value, that value may not be meaningful. The precision of date and time values depends on the resolution of the system clock. On the Windows NT 3.5 (and later) and Windows Vista operating systems, the clock's resolution is approximately 10-15 milliseconds.

[Back to table](#table)

### <a name="the-fffff-custom-format-specifier"></a><a name="FFFFF_Specifier"></a> The "FFFFF" custom format specifier

The "FFFFF" custom format specifier represents the five most significant digits of the seconds fraction; that is, it represents the hundred-thousandths of a second in a date and time value. However, trailing zeros or five zero digits aren't displayed.

Although it's possible to display the hundred-thousandths of a second component of a time value, that value may not be meaningful. The precision of date and time values depends on the resolution of the system clock. On the Windows NT 3.5 (and later) and Windows Vista operating systems, the clock's resolution is approximately 10-15 milliseconds.
[Back to table](#table)

### <a name="the-ffffff-custom-format-specifier"></a><a name="FFFFFF_Specifier"></a> The "FFFFFF" custom format specifier

The "FFFFFF" custom format specifier represents the six most significant digits of the seconds fraction; that is, it represents the millionths of a second in a date and time value. However, trailing zeros or six zero digits aren't displayed.

Although it's possible to display the millionths of a second component of a time value, that value may not be meaningful. The precision of date and time values depends on the resolution of the system clock. On the Windows NT 3.5 (and later) and Windows Vista operating systems, the clock's resolution is approximately 10-15 milliseconds.

[Back to table](#table)

### <a name="the-fffffff-custom-format-specifier"></a><a name="FFFFFFF_Specifier"></a> The "FFFFFFF" custom format specifier

The "FFFFFFF" custom format specifier represents the seven most significant digits of the seconds fraction; that is, it represents the ten-millionths of a second in a date and time value. However, trailing zeros or seven zero digits aren't displayed.

Although it's possible to display the ten-millionths of a second component of a time value, that value may not be meaningful. The precision of date and time values depends on the resolution of the system clock. On the Windows NT 3.5 (and later) and Windows Vista operating systems, the clock's resolution is approximately 10-15 milliseconds.
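As a hand-written sketch (not one of this article's samples) of the trailing-zero behavior described above, the following snippet contrasts the lowercase "f" specifiers, which always print their full digit count, with the uppercase "F" specifiers, which trim trailing zeros:

```csharp
using System;
using System.Globalization;

class FractionSpecifierDemo
{
    static void Main()
    {
        // A fraction of .500 seconds: "fff" keeps trailing zeros, "FFF" trims them.
        var half = new DateTime(2024, 7, 1, 20, 45, 7, 500);
        Console.WriteLine(half.ToString("ss.fff", CultureInfo.InvariantCulture)); // 07.500
        Console.WriteLine(half.ToString("ss.FFF", CultureInfo.InvariantCulture)); // 07.5

        // A fraction of .034 seconds has no trailing zeros, so both specifiers agree.
        var ms34 = new DateTime(2024, 7, 1, 20, 45, 7, 34);
        Console.WriteLine(ms34.ToString("ss.fff", CultureInfo.InvariantCulture)); // 07.034
        Console.WriteLine(ms34.ToString("ss.FFF", CultureInfo.InvariantCulture)); // 07.034
    }
}
```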
[Back to table](#table)

## <a name="era-g-format-specifier"></a>Era "g" format specifier

### <a name="the-g-or-gg-custom-format-specifier"></a><a name="gSpecifier"></a> The "g" or "gg" custom format specifier

The "g" or "gg" custom format specifiers (plus any number of additional "g" specifiers) represent the period or era, such as A.D. The formatting operation ignores this specifier if the date to be formatted doesn't have an associated period or era string.

If the "g" format specifier is used without other custom format specifiers, it's interpreted as the "g" standard date and time format specifier. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.

The following example includes the "g" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#6](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#6)]
[!code-vb[Formatting.DateAndTime.Custom#6](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#6)]

[Back to table](#table)

## <a name="lowercase-hour-h-format-specifier"></a>Lowercase hour "h" format specifier

### <a name="the-h-custom-format-specifier"></a><a name="hSpecifier"></a> The "h" custom format specifier

The "h" custom format specifier represents the hour as a number from 1 through 12; that is, the hour is represented by a 12-hour clock that counts the whole hours since midnight or noon. A particular hour after midnight is indistinguishable from the same hour after noon. The hour isn't rounded, and a single-digit hour is formatted without a leading zero.
For example, given a time of 5:43 in the morning or the afternoon, this custom format specifier displays "5".

If the "h" format specifier is used without other custom format specifiers, it's interpreted as a standard date and time format specifier and throws a <xref:System.FormatException>. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.

The following example includes the "h" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#7](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#7)]
[!code-vb[Formatting.DateAndTime.Custom#7](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#7)]

[Back to table](#table)

### <a name="the-hh-custom-format-specifier"></a><a name="hhSpecifier"></a> The "hh" custom format specifier

The "hh" custom format specifier (plus any number of additional "h" specifiers) represents the hour as a number from 01 through 12; that is, the hour is represented by a 12-hour clock that counts the whole hours since midnight or noon. A particular hour after midnight is indistinguishable from the same hour after noon. The hour isn't rounded, and a single-digit hour is formatted with a leading zero.

For example, given a time of 5:43 in the morning or the afternoon, this format specifier displays "05".

The following example includes the "hh" custom format specifier in a custom format string.
[!code-csharp[Formatting.DateAndTime.Custom#8](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#8)]
[!code-vb[Formatting.DateAndTime.Custom#8](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#8)]

[Back to table](#table)

## <a name="uppercase-hour-h-format-specifier"></a>Uppercase hour "H" format specifier

### <a name="the-h-custom-format-specifier"></a><a name="H_Specifier"></a> The "H" custom format specifier

The "H" custom format specifier represents the hour as a number from 0 through 23; that is, the hour is represented by a zero-based 24-hour clock that counts the hours since midnight. A single-digit hour is formatted without a leading zero.

If the "H" format specifier is used without other custom format specifiers, it's interpreted as a standard date and time format specifier and throws a <xref:System.FormatException>. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.

The following example includes the "H" custom format specifier in a custom format string.
[!code-csharp[Formatting.DateAndTime.Custom#9](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#9)]
[!code-vb[Formatting.DateAndTime.Custom#9](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#9)]

[Back to table](#table)

### <a name="the-hh-custom-format-specifier"></a><a name="HH_Specifier"></a> The "HH" custom format specifier

The "HH" custom format specifier (plus any number of additional "H" specifiers) represents the hour as a number from 00 through 23; that is, the hour is represented by a zero-based 24-hour clock that counts the hours since midnight. A single-digit hour is formatted with a leading zero.

The following example includes the "HH" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#10](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#10)]
[!code-vb[Formatting.DateAndTime.Custom#10](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#10)]

[Back to table](#table)

## <a name="time-zone-k-format-specifier"></a>Time zone "K" format specifier

### <a name="the-k-custom-format-specifier"></a><a name="KSpecifier"></a> The "K" custom format specifier

The "K" custom format specifier represents the time zone information of a date and time value.
When this format specifier is used with <xref:System.DateTime> values, the result string is defined by the value of the <xref:System.DateTime.Kind%2A?displayProperty=nameWithType> property:

- For the local time zone (a <xref:System.DateTime.Kind%2A?displayProperty=nameWithType> property value of <xref:System.DateTimeKind.Local?displayProperty=nameWithType>), this specifier is equivalent to the "zzz" specifier and produces a result string containing the local offset from Coordinated Universal Time (UTC), for example, "-07:00".
- For a UTC time (a <xref:System.DateTime.Kind%2A?displayProperty=nameWithType> property value of <xref:System.DateTimeKind.Utc?displayProperty=nameWithType>), the result string includes a "Z" character to represent a UTC date.
- For a time from an unspecified time zone (a time whose <xref:System.DateTime.Kind%2A?displayProperty=nameWithType> property equals <xref:System.DateTimeKind.Unspecified?displayProperty=nameWithType>), the result is equivalent to <xref:System.String.Empty?displayProperty=nameWithType>.

For <xref:System.DateTimeOffset> values, the "K" format specifier is equivalent to the "zzz" format specifier and produces a result string containing the <xref:System.DateTimeOffset> value's offset from UTC.

If the "K" format specifier is used without other custom format specifiers, it's interpreted as a standard date and time format specifier and throws a <xref:System.FormatException>. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.
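A minimal hand-written sketch (not one of this article's samples) of the three <xref:System.DateTimeKind> cases listed above; the offset printed for the `Local` value depends on the machine's time zone:

```csharp
using System;

class KSpecifierDemo
{
    static void Main()
    {
        const string format = "yyyy-MM-dd'T'HH:mm:ssK";

        // Utc: "K" emits the "Z" suffix.
        var utc = new DateTime(2024, 7, 1, 12, 0, 0, DateTimeKind.Utc);
        Console.WriteLine(utc.ToString(format));         // 2024-07-01T12:00:00Z

        // Local: "K" emits the machine's UTC offset, e.g. "-07:00".
        var local = new DateTime(2024, 7, 1, 12, 0, 0, DateTimeKind.Local);
        Console.WriteLine(local.ToString(format));

        // Unspecified: "K" emits nothing.
        var unspecified = new DateTime(2024, 7, 1, 12, 0, 0, DateTimeKind.Unspecified);
        Console.WriteLine(unspecified.ToString(format)); // 2024-07-01T12:00:00
    }
}
```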
The following example displays the result string produced by using the "K" custom format specifier with various <xref:System.DateTime> and <xref:System.DateTimeOffset> values on a system in the U.S. Pacific Time zone.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#12](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#12)]
[!code-vb[Formatting.DateAndTime.Custom#12](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#12)]

[Back to table](#table)

## <a name="minute-m-format-specifier"></a>Minute "m" format specifier

### <a name="the-m-custom-format-specifier"></a><a name="mSpecifier"></a> The "m" custom format specifier

The "m" custom format specifier represents the minute as a number from 0 through 59. The minute represents whole minutes that have passed since the last hour. A single-digit minute is formatted without a leading zero.

If the "m" format specifier is used without other custom format specifiers, it's interpreted as the "m" standard date and time format specifier. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.

The following example includes the "m" custom format specifier in a custom format string.
[!code-csharp[Formatting.DateAndTime.Custom#7](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#7)]
[!code-vb[Formatting.DateAndTime.Custom#7](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#7)]

[Back to table](#table)

### <a name="the-mm-custom-format-specifier"></a><a name="mmSpecifier"></a> The "mm" custom format specifier

The "mm" custom format specifier (plus any number of additional "m" specifiers) represents the minute as a number from 00 through 59. The minute represents whole minutes that have passed since the last hour. A single-digit minute is formatted with a leading zero.

The following example includes the "mm" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#8](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#8)]
[!code-vb[Formatting.DateAndTime.Custom#8](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#8)]

[Back to table](#table)

## <a name="month-m-format-specifier"></a>Month "M" format specifier

### <a name="the-m-custom-format-specifier"></a><a name="M_Specifier"></a> The "M" custom format specifier

The "M" custom format specifier represents the month as a number from 1 through 12 (or from 1 through 13 for calendars that have 13 months). A single-digit month is formatted without a leading zero.

If the "M" format specifier is used without other custom format specifiers, it's interpreted as the "M" standard date and time format specifier. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.
The following example includes the "M" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#11](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#11)]
[!code-vb[Formatting.DateAndTime.Custom#11](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#11)]

[Back to table](#table)

### <a name="the-mm-custom-format-specifier"></a><a name="MM_Specifier"></a> The "MM" custom format specifier

The "MM" custom format specifier represents the month as a number from 01 through 12 (or from 1 through 13 for calendars that have 13 months). A single-digit month is formatted with a leading zero.

The following example includes the "MM" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#2](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#2)]
[!code-vb[Formatting.DateAndTime.Custom#2](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#2)]

[Back to table](#table)

### <a name="the-mmm-custom-format-specifier"></a><a name="MMM_Specifier"></a> The "MMM" custom format specifier

The "MMM" custom format specifier represents the abbreviated name of the month. The localized abbreviated name of the month is retrieved from the <xref:System.Globalization.DateTimeFormatInfo.AbbreviatedMonthNames%2A?displayProperty=nameWithType> property of the current or specified culture.

The following example includes the "MMM" custom format specifier in a custom format string.
[!code-csharp[Formatting.DateAndTime.Custom#3](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#3)]
[!code-vb[Formatting.DateAndTime.Custom#3](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#3)]

[Back to table](#table)

### <a name="the-mmmm-custom-format-specifier"></a><a name="MMMM_Specifier"></a> The "MMMM" custom format specifier

The "MMMM" custom format specifier represents the full name of the month. The localized name of the month is retrieved from the <xref:System.Globalization.DateTimeFormatInfo.MonthNames%2A?displayProperty=nameWithType> property of the current or specified culture.

The following example includes the "MMMM" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#4](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#4)]
[!code-vb[Formatting.DateAndTime.Custom#4](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#4)]

[Back to table](#table)

## <a name="seconds-s-format-specifier"></a>Seconds "s" format specifier

### <a name="the-s-custom-format-specifier"></a><a name="sSpecifier"></a> The "s" custom format specifier

The "s" custom format specifier represents the seconds as a number from 0 through 59. The result represents whole seconds that have passed since the last minute. A single-digit second is formatted without a leading zero.

If the "s" format specifier is used without other custom format specifiers, it's interpreted as the "s" standard date and time format specifier. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.
The following example includes the "s" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#7](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#7)]
[!code-vb[Formatting.DateAndTime.Custom#7](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#7)]

[Back to table](#table)

### <a name="the-ss-custom-format-specifier"></a><a name="ssSpecifier"></a> The "ss" custom format specifier

The "ss" custom format specifier (plus any number of additional "s" specifiers) represents the seconds as a number from 00 through 59. The result represents whole seconds that have passed since the last minute. A single-digit second is formatted with a leading zero.

The following example includes the "ss" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#8](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#8)]
[!code-vb[Formatting.DateAndTime.Custom#8](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#8)]

[Back to table](#table)

## <a name="meridiem-t-format-specifier"></a>Meridiem "t" format specifier

### <a name="the-t-custom-format-specifier"></a><a name="tSpecifier"></a> The "t" custom format specifier

The "t" custom format specifier represents the first character of the AM/PM designator. The appropriate localized designator is retrieved from the <xref:System.Globalization.DateTimeFormatInfo.AMDesignator%2A?displayProperty=nameWithType> or <xref:System.Globalization.DateTimeFormatInfo.PMDesignator%2A?displayProperty=nameWithType> property of the current or specific culture. The AM designator is used for all times from 0:00:00 (midnight) to 11:59:59.999.
The PM designator is used for all times from 12:00:00 (noon) to 23:59:59.999.

If the "t" format specifier is used without other custom format specifiers, it's interpreted as the "t" standard date and time format specifier. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.

The following example includes the "t" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#7](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#7)]
[!code-vb[Formatting.DateAndTime.Custom#7](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#7)]

[Back to table](#table)

### <a name="the-tt-custom-format-specifier"></a><a name="ttSpecifier"></a> The "tt" custom format specifier

The "tt" custom format specifier (plus any number of additional "t" specifiers) represents the entire AM/PM designator. The appropriate localized designator is retrieved from the <xref:System.Globalization.DateTimeFormatInfo.AMDesignator%2A?displayProperty=nameWithType> or <xref:System.Globalization.DateTimeFormatInfo.PMDesignator%2A?displayProperty=nameWithType> property of the current or specific culture. The AM designator is used for all times from 0:00:00 (midnight) to 11:59:59.999. The PM designator is used for all times from 12:00:00 (noon) to 23:59:59.999.

Make sure to use the "tt" specifier for languages for which it's necessary to maintain the distinction between AM and PM. An example is Japanese, for which the AM and PM designators differ in the second character instead of the first.

The following example includes the "tt" custom format specifier in a custom format string.
[!code-csharp[Formatting.DateAndTime.Custom#8](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#8)]
[!code-vb[Formatting.DateAndTime.Custom#8](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#8)]

[Back to table](#table)

## <a name="year-y-format-specifier"></a>Year "y" format specifier

### <a name="the-y-custom-format-specifier"></a><a name="ySpecifier"></a> The "y" custom format specifier

The "y" custom format specifier represents the year as a one-digit or two-digit number. If the year has more than two digits, only the two low-order digits appear in the result. If the first digit of a two-digit year begins with a zero (for example, 2008), the number is formatted without a leading zero.

If the "y" format specifier is used without other custom format specifiers, it's interpreted as the "y" standard date and time format specifier. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.

The following example includes the "y" custom format specifier in a custom format string.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#13](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#13)]
[!code-vb[Formatting.DateAndTime.Custom#13](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#13)]

[Back to table](#table)

### <a name="the-yy-custom-format-specifier"></a><a name="yySpecifier"></a> The "yy" custom format specifier

The "yy" custom format specifier represents the year as a two-digit number. If the year has more than two digits, only the two low-order digits appear in the result.
If the two-digit year has fewer than two significant digits, the number is padded with leading zeros to produce two digits.

In a parsing operation, a two-digit year that's parsed using the "yy" custom format specifier is interpreted based on the <xref:System.Globalization.Calendar.TwoDigitYearMax%2A?displayProperty=nameWithType> property of the format provider's current calendar. The following example parses the string representation of a date that has a two-digit year by using the default Gregorian calendar of the en-US culture, which, in this case, is the current culture. It then changes the current culture's <xref:System.Globalization.CultureInfo> object to use a <xref:System.Globalization.GregorianCalendar> object whose <xref:System.Globalization.GregorianCalendar.TwoDigitYearMax%2A> property has been modified.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#19](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/parseexact2digityear1.cs#19)]
[!code-vb[Formatting.DateAndTime.Custom#19](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/parseexact2digityear1.vb#19)]

The following example includes the "yy" custom format specifier in a custom format string.

[!code-csharp[Formatting.DateAndTime.Custom#13](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#13)]
[!code-vb[Formatting.DateAndTime.Custom#13](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#13)]

[Back to table](#table)

### <a name="the-yyy-custom-format-specifier"></a><a name="yyySpecifier"></a> The "yyy" custom format specifier

The "yyy" custom format specifier represents the year with a minimum of three digits. If the year has more than three significant digits, they're included in the result string.
If the year has fewer than three digits, the number is padded with leading zeros to produce three digits.

> [!NOTE]
> For the Thai Buddhist calendar, which can have five-digit years, this format specifier displays all significant digits.

The following example includes the "yyy" custom format specifier in a custom format string.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#13](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#13)]
[!code-vb[Formatting.DateAndTime.Custom#13](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#13)]

[Back to table](#table)

### <a name="the-yyyy-custom-format-specifier"></a><a name="yyyySpecifier"></a> The "yyyy" custom format specifier

The "yyyy" custom format specifier represents the year with a minimum of four digits. If the year has more than four significant digits, they're included in the result string. If the year has fewer than four digits, the number is padded with leading zeros to produce four digits.

> [!NOTE]
> For the Thai Buddhist calendar, which can have five-digit years, this format specifier displays a minimum of four digits.

The following example includes the "yyyy" custom format specifier in a custom format string.
[!code-csharp-interactive[Formatting.DateAndTime.Custom#13](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#13)]
[!code-vb[Formatting.DateAndTime.Custom#13](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#13)]

[Back to table](#table)

### <a name="the-yyyyy-custom-format-specifier"></a><a name="yyyyySpecifier"></a> The "yyyyy" custom format specifier

The "yyyyy" custom format specifier (plus any number of additional "y" specifiers) represents the year with a minimum of five digits. If the year has more than five significant digits, they're included in the result string. If the year has fewer than five digits, the number is padded with leading zeros to produce five digits.

If there are additional "y" specifiers, the number is padded with as many leading zeros as necessary to produce the number of "y" specifiers.

The following example includes the "yyyyy" custom format specifier in a custom format string.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#13](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#13)]
[!code-vb[Formatting.DateAndTime.Custom#13](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#13)]

[Back to table](#table)

## <a name="offset-z-format-specifier"></a>Offset "z" format specifier

### <a name="the-z-custom-format-specifier"></a><a name="zSpecifier"></a> The "z" custom format specifier

With <xref:System.DateTime> values, the "z" custom format specifier represents the signed offset of the local operating system's time zone from UTC, measured in hours. It doesn't reflect the value of an instance's <xref:System.DateTime.Kind%2A?displayProperty=nameWithType> property.
For this reason, the "z" format specifier isn't recommended for use with <xref:System.DateTime> values.

With <xref:System.DateTimeOffset> values, this format specifier represents the <xref:System.DateTimeOffset> value's offset from UTC in hours. The offset is always displayed with a leading sign. A plus sign (+) indicates hours ahead of UTC, and a minus sign (-) indicates hours behind UTC. A single-digit offset is formatted without a leading zero.

If the "z" format specifier is used without other custom format specifiers, it's interpreted as a standard date and time format specifier and throws a <xref:System.FormatException>. For more information about using a single format specifier, see [Use single custom format specifiers](#UsingSingleSpecifiers) later in this article.

The following example includes the "z" custom format specifier in a custom format string.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#14](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#14)]
[!code-vb[Formatting.DateAndTime.Custom#14](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#14)]

[Back to table](#table)

### <a name="the-zz-custom-format-specifier"></a><a name="zzSpecifier"></a> The "zz" custom format specifier

With <xref:System.DateTime> values, the "zz" custom format specifier represents the signed offset of the local operating system's time zone from UTC, measured in hours. It doesn't reflect the value of an instance's <xref:System.DateTime.Kind%2A?displayProperty=nameWithType> property. For this reason, the "zz" format specifier isn't recommended for use with <xref:System.DateTime> values.
With <xref:System.DateTimeOffset> values, this format specifier represents the <xref:System.DateTimeOffset> value's offset from UTC in hours. The offset is always displayed with a leading sign. A plus sign (+) indicates hours ahead of UTC, and a minus sign (-) indicates hours behind UTC. A single-digit offset is formatted with a leading zero.

The following example includes the "zz" custom format specifier in a custom format string.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#14](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#14)]
[!code-vb[Formatting.DateAndTime.Custom#14](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#14)]

[Back to table](#table)

### <a name="the-zzz-custom-format-specifier"></a><a name="zzzSpecifier"></a> The "zzz" custom format specifier

With <xref:System.DateTime> values, the "zzz" custom format specifier represents the signed offset of the local operating system's time zone from UTC, measured in hours and minutes. It doesn't reflect the value of an instance's <xref:System.DateTime.Kind%2A?displayProperty=nameWithType> property. For this reason, the "zzz" format specifier isn't recommended for use with <xref:System.DateTime> values.

With <xref:System.DateTimeOffset> values, this format specifier represents the <xref:System.DateTimeOffset> value's offset from UTC in hours and minutes. The offset is always displayed with a leading sign. A plus sign (+) indicates hours ahead of UTC, and a minus sign (-) indicates hours behind UTC. A single-digit offset is formatted with a leading zero.

The following example includes the "zzz" custom format specifier in a custom format string.
[!code-csharp-interactive[Formatting.DateAndTime.Custom#14](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/Custom1.cs#14)]
[!code-vb[Formatting.DateAndTime.Custom#14](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/Custom1.vb#14)]

[Back to table](#table)

## <a name="date-and-time-separator-specifiers"></a>Date and time separator specifiers

### <a name="the--custom-format-specifier"></a><a name="timeSeparator"></a> The ":" custom format specifier

The ":" custom format specifier represents the time separator, which is used to differentiate hours, minutes, and seconds. The appropriate localized time separator is retrieved from the <xref:System.Globalization.DateTimeFormatInfo.TimeSeparator%2A?displayProperty=nameWithType> property of the current or specified culture.

> [!NOTE]
> To change the time separator for a particular date and time string, specify the separator character within a literal string delimiter. For example, the custom format string `hh'_'dd'_'ss` produces a result string in which "\_" (an underscore) is always used as the time separator. To change the time separator for all dates for a culture, either change the value of the <xref:System.Globalization.DateTimeFormatInfo.TimeSeparator%2A?displayProperty=nameWithType> property of the current culture, or instantiate a <xref:System.Globalization.DateTimeFormatInfo> object, assign the character to its <xref:System.Globalization.DateTimeFormatInfo.TimeSeparator%2A> property, and call an overload of the formatting method that includes an <xref:System.IFormatProvider> parameter.

If the ":" format specifier is used without other custom format specifiers, it is interpreted as a standard date and time format specifier and throws a <xref:System.FormatException>. For more information about using a single format specifier, see [Using single custom format specifiers](#UsingSingleSpecifiers) later in this article.

[Back to table](#table)

### <a name="the--custom-format-specifier"></a><a name="dateSeparator"></a> The "/" custom format specifier

The "/" custom format specifier represents the date separator, which is used to differentiate years, months, and days. The appropriate localized date separator is retrieved from the <xref:System.Globalization.DateTimeFormatInfo.DateSeparator%2A?displayProperty=nameWithType> property of the current or specified culture.

> [!NOTE]
> To change the date separator for a particular date and time string, specify the separator character within a literal string delimiter. For example, the custom format string `mm'/'dd'/'yyyy` produces a result string in which "/" is always used as the date separator. To change the date separator for all dates for a culture, either change the value of the <xref:System.Globalization.DateTimeFormatInfo.DateSeparator%2A?displayProperty=nameWithType> property of the current culture, or instantiate a <xref:System.Globalization.DateTimeFormatInfo> object, assign the character to its <xref:System.Globalization.DateTimeFormatInfo.DateSeparator%2A> property, and call an overload of the formatting method that includes an <xref:System.IFormatProvider> parameter.

If the "/" format specifier is used without other custom format specifiers, it is interpreted as a standard date and time format specifier and throws a <xref:System.FormatException>. For more information about using a single format specifier, see [Using single custom format specifiers](#UsingSingleSpecifiers) later in this article.
[Back to table](#table)

## <a name="character-literals"></a><a name="Literals"></a> Character literals

The following characters in a custom date and time format string are reserved and are always interpreted as formatting characters or, in the case of `"`, `'`, `/`, and `\`, as special characters.

| | | | | |
|-----|-----|-----|-----|-----|
| `F` | `H` | `K` | `M` | `d` |
| `f` | `g` | `h` | `m` | `s` |
| `t` | `y` | `z` | `%` | `:` |
| `/` | `"` | `'` | `\` | |

All other characters are always interpreted as character literals and, in a formatting operation, are included in the result string unchanged. In a parsing operation, they must match the characters in the input string exactly; the comparison is case-sensitive.

The following example includes the literal characters "PST" (for Pacific Standard Time) and "PDT" (for Pacific Daylight Time) to represent the local time zone in a format string. Note that the string is included in the result string, and that a string that includes the local time zone string also parses successfully.

[!code-csharp[Formatting.DateAndTime.Custom#20](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/LiteralsEx1.cs#20)]
[!code-vb[Formatting.DateAndTime.Custom#20](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/LiteralsEx1.vb#20)]

There are two ways to indicate that characters are to be interpreted as literal characters and not as reserved characters, so that they can be included in a result string or successfully parsed in an input string:

- By escaping each reserved character. For more information, see [Using the escape character](#escape).

  The following example includes the literal characters "pst" (for Pacific Standard Time) to represent the local time zone in a format string. Because both "s" and "t" are custom format strings, both characters must be escaped to be interpreted as character literals.

  [!code-csharp[Formatting.DateAndTime.Custom#21](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/LiteralsEx2.cs#21)]
  [!code-vb[Formatting.DateAndTime.Custom#21](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/LiteralsEx2.vb#21)]

- By enclosing the entire literal string in quotation marks or apostrophes. The following example is similar to the previous one, except that "pst" is enclosed in quotation marks to indicate that the entire delimited string should be interpreted as character literals.

  [!code-csharp[Formatting.DateAndTime.Custom#22](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/LiteralsEx3.cs#22)]
  [!code-vb[Formatting.DateAndTime.Custom#22](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/LiteralsEx3.vb#22)]

## <a name="notes"></a>Notes

### <a name="using-single-custom-format-specifiers"></a><a name="UsingSingleSpecifiers"></a> Using single custom format specifiers

A custom date and time format string consists of two or more characters. Date and time formatting methods interpret any single-character string as a standard date and time format string. If they don't recognize the character as a valid format specifier, they throw a <xref:System.FormatException>. For example, a format string that consists only of the specifier "h" is interpreted as a standard date and time format string.
However, in this particular case, an exception is thrown because there is no "h" standard date and time format specifier.

To use any of the custom date and time format specifiers as the only specifier in a format string (that is, to use the "d", "f", "F", "g", "h", "H", "K", "m", "M", "s", "t", "y", "z", ":", or "/" custom format specifier by itself), include a space before or after the specifier, or include a percent ("%") format specifier before the single custom date and time specifier. For example, "`%h`" is interpreted as a custom date and time format string that displays the hour represented by the current date and time value. You can also use the " h" or "h " format string, although this includes a space in the result string along with the hour. The following example illustrates these three format strings.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#16](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/literal1.cs#16)]
[!code-vb[Formatting.DateAndTime.Custom#16](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/literal1.vb#16)]

#### <a name="using-the-escape-character"></a><a name="escape"></a> Using the escape character

The "d", "f", "F", "g", "h", "H", "K", "m", "M", "s", "t", "y", "z", ":", or "/" characters in a format string are interpreted as custom format specifiers rather than as literal characters. To prevent a character from being interpreted as a format specifier, you can precede it with a backslash (\\), which is the escape character. The escape character signifies that the following character is a character literal that should be included in the result string unchanged.

To include a backslash in a result string, you must escape it with another backslash (`\\`).

> [!NOTE]
> Some compilers, such as the C++ and C# compilers, may also interpret a single backslash character as an escape character. To ensure that a string is interpreted correctly when formatting, you can use the verbatim string literal character (the @ character) before the string in C#, or add another backslash character before each backslash in C# and C++. The following C# example illustrates both approaches.

The following example uses the escape character to prevent the formatting operation from interpreting the "h" and "m" characters as format specifiers.

[!code-csharp-interactive[Formatting.DateAndTime.Custom#15](~/samples/snippets/csharp/VS_Snippets_CLR/Formatting.DateAndTime.Custom/cs/escape1.cs#15)]
[!code-vb[Formatting.DateAndTime.Custom#15](~/samples/snippets/visualbasic/VS_Snippets_CLR/Formatting.DateAndTime.Custom/vb/escape1.vb#15)]

### <a name="control-panel-settings"></a>Control Panel settings

The **Regional and Language Options** settings in Control Panel influence the result string produced by a formatting operation that includes many of the custom date and time format specifiers. These settings are used to initialize the <xref:System.Globalization.DateTimeFormatInfo> object associated with the current thread culture, which provides values used to govern formatting. Computers that use different settings generate different result strings.
In addition, if the <xref:System.Globalization.CultureInfo.%23ctor%28System.String%29> constructor is used to instantiate a new <xref:System.Globalization.CultureInfo> object that represents the same culture as the current system culture, any customizations established by the **Regional and Language Options** item in Control Panel are applied to the new <xref:System.Globalization.CultureInfo> object. You can use the <xref:System.Globalization.CultureInfo.%23ctor%28System.String%2CSystem.Boolean%29> constructor to create a <xref:System.Globalization.CultureInfo> object that doesn't reflect a system's customizations.

### <a name="datetimeformatinfo-properties"></a>DateTimeFormatInfo properties

Formatting is influenced by the properties of the current <xref:System.Globalization.DateTimeFormatInfo> object, which is provided implicitly by the current thread culture or explicitly by the <xref:System.IFormatProvider> parameter of the method that invokes formatting. For the <xref:System.IFormatProvider> parameter, you should specify a <xref:System.Globalization.CultureInfo> object, which represents a culture, or a <xref:System.Globalization.DateTimeFormatInfo> object.

The result string produced by many of the custom date and time format specifiers also depends on the properties of the current <xref:System.Globalization.DateTimeFormatInfo> object. Your application can change the result produced by some custom date and time format specifiers by changing the corresponding <xref:System.Globalization.DateTimeFormatInfo> property. For example, the "ddd" format specifier adds an abbreviated weekday name found in the <xref:System.Globalization.DateTimeFormatInfo.AbbreviatedDayNames%2A> string array to the result string. Similarly, the "MMMM" format specifier adds a full month name found in the <xref:System.Globalization.DateTimeFormatInfo.MonthNames%2A> string array to the result string.

## <a name="see-also"></a>See also

- <xref:System.DateTime?displayProperty=nameWithType>
- <xref:System.IFormatProvider?displayProperty=nameWithType>
- [Formatting types](formatting-types.md)
- [Standard date and time format strings](standard-date-and-time-format-strings.md)
- [Sample: .NET Core WinForms Formatting Utility (C#)](https://docs.microsoft.com/samples/dotnet/samples/windowsforms-formatting-utility-cs)
- [Sample: .NET Core WinForms Formatting Utility (Visual Basic)](https://docs.microsoft.com/samples/dotnet/samples/windowsforms-formatting-utility-vb)
---
title: "Loader ETW Events"
ms.date: "03/30/2017"
helpviewer_keywords:
  - "loader events [.NET Framework]"
  - "ETW, loader events (CLR)"
ms.assetid: cb403cc6-56f8-4609-b467-cdfa09f07909
author: "mairaw"
ms.author: "mairaw"
---
# Loader ETW Events

These events collect information relating to loading and unloading application domains, assemblies, and modules.

All loader events are raised under the `LoaderKeyword` (0x8) keyword. The `DCStart` and the `DCEnd` events are raised under `LoaderRundownKeyword` (0x8) with `StartRundown`/`EndRundown` enabled. (For more information, see [CLR ETW Keywords and Levels](clr-etw-keywords-and-levels.md).)

Loader events are subdivided into the following:

- [Application Domain Events](#application_domain_events)
- [CLR Loader Assembly Events](#clr_loader_assembly_events)
- [Module Events](#module_events)
- [CLR Domain Module Events](#clr_domain_module_events)
- [Module Range Events](#module_range_events)

<a name="application_domain_events"></a>

## Application Domain Events

The following table shows the keyword and level.

|Keyword for raising the event|Event|Level|
|-----------------------------------|-----------|-----------|
|`LoaderKeyword` (0x8)|`AppDomainLoad_V1` and `AppDomainUnLoad_V1`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `StartRundownKeyword`|`AppDomainDCStart_V1`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `EndRundownKeyword`|`AppDomainDCEnd_V1`|Informational (4)|

The following table shows the event information.
|Event|Event ID|Description|
|-----------|--------------|-----------------|
|`AppDomainLoad_V1` (logged for all application domains)|156|Raised whenever an application domain is created during the lifetime of a process.|
|`AppDomainUnLoad_V1`|157|Raised whenever an application domain is destroyed during the lifetime of a process.|
|`AppDomainDCStart_V1`|157|Enumerates the application domains during a start rundown.|
|`AppDomainDCEnd_V1`|158|Enumerates the application domains during an end rundown.|

The following table shows the event data.

|Field name|Data type|Description|
|----------------|---------------|-----------------|
|AppDomainID|win:UInt64|The unique identifier for an application domain.|
|AppDomainFlags|win:UInt32|0x1: Default domain.<br /><br /> 0x2: Executable.<br /><br /> 0x4: Application domain, bit 28-31: Sharing policy of this domain.<br /><br /> 0: A shared domain.|
|AppDomainName|win:UnicodeString|Friendly application domain name. Might change during the lifetime of the process.|
|AppDomainIndex|win:UInt32|The index of this application domain.|
|ClrInstanceID|win:UInt16|Unique ID for the instance of CLR or CoreCLR.|

<a name="clr_loader_assembly_events"></a>

## CLR Loader Assembly Events

The following table shows the keyword and level.

|Keyword for raising the event|Event|Level|
|-----------------------------------|-----------|-----------|
|`LoaderKeyword` (0x8)|`AssemblyLoad` and `AssemblyUnload`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `StartRundownKeyword`|`AssemblyDCStart`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `EndRundownKeyword`|`AssemblyDCEnd`|Informational (4)|

The following table shows the event information.
|Event|Event ID|Description|
|-----------|--------------|-----------------|
|`AssemblyLoad_V1`|154|Raised when an assembly is loaded.|
|`AssemblyUnload_V1`|155|Raised when an assembly is unloaded.|
|`AssemblyDCStart_V1`|155|Enumerates assemblies during a start rundown.|
|`AssemblyDCEnd_V1`|156|Enumerates assemblies during an end rundown.|

The following table shows the event data.

|Field name|Data type|Description|
|----------------|---------------|-----------------|
|AssemblyID|win:UInt64|Unique ID for the assembly.|
|AppDomainID|win:UInt64|ID of the domain of this assembly.|
|BindingID|win:UInt64|ID that uniquely identifies the assembly binding.|
|AssemblyFlags|win:UInt32|0x1: Domain neutral assembly.<br /><br /> 0x2: Dynamic assembly.<br /><br /> 0x4: Assembly has a native image.<br /><br /> 0x8: Collectible assembly.|
|AssemblyName|win:UnicodeString|Fully qualified assembly name.|
|ClrInstanceID|win:UInt16|Unique ID for the instance of CLR or CoreCLR.|

<a name="module_events"></a>

## Module Events

The following table shows the keyword and level.

|Keyword for raising the event|Event|Level|
|-----------------------------------|-----------|-----------|
|`LoaderKeyword` (0x8)|`ModuleLoad_V2` and `ModuleUnload_V2`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `StartRundownKeyword`|`ModuleDCStart_V2`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `EndRundownKeyword`|`ModuleDCEnd_V2`|Informational (4)|

The following table shows the event information.

|Event|Event ID|Description|
|-----------|--------------|-----------------|
|`ModuleLoad_V2`|152|Raised when a module is loaded during the lifetime of a process.|
|`ModuleUnload_V2`|153|Raised when a module is unloaded during the lifetime of a process.|
|`ModuleDCStart_V2`|153|Enumerates modules during a start rundown.|
|`ModuleDCEnd_V2`|154|Enumerates modules during an end rundown.|

The following table shows the event data.
|Field name|Data type|Description|
|----------------|---------------|-----------------|
|ModuleID|win:UInt64|Unique ID for the module.|
|AssemblyID|win:UInt64|ID of the assembly in which this module resides.|
|ModuleFlags|win:UInt32|0x1: Domain neutral module.<br /><br /> 0x2: Module has a native image.<br /><br /> 0x4: Dynamic module.<br /><br /> 0x8: Manifest module.|
|Reserved1|win:UInt32|Reserved field.|
|ModuleILPath|win:UnicodeString|Path of the Microsoft intermediate language (MSIL) image for the module, or dynamic module name if it is a dynamic assembly (null-terminated).|
|ModuleNativePath|win:UnicodeString|Path of the module native image, if present (null-terminated).|
|ClrInstanceID|win:UInt16|Unique ID for the instance of CLR or CoreCLR.|
|ManagedPdbSignature|win:GUID|GUID signature of the managed program database (PDB) that matches this module. (See Remarks.)|
|ManagedPdbAge|win:UInt32|Age number written to the managed PDB that matches this module. (See Remarks.)|
|ManagedPdbBuildPath|win:UnicodeString|Path to the location where the managed PDB that matches this module was built. In some cases, this may just be a file name. (See Remarks.)|
|NativePdbSignature|win:GUID|GUID signature of the Native Image Generator (NGen) PDB that matches this module, if applicable. (See Remarks.)|
|NativePdbAge|win:UInt32|Age number written to the NGen PDB that matches this module, if applicable. (See Remarks.)|
|NativePdbBuildPath|win:UnicodeString|Path to the location where the NGen PDB that matches this module was built, if applicable. In some cases, this may just be a file name. (See Remarks.)|

### Remarks

- The fields that have "Pdb" in their names can be used by profiling tools to locate PDBs that match the modules that were loaded during the profiling session. The values of these fields correspond to the data written into the IMAGE_DIRECTORY_ENTRY_DEBUG sections of the module normally used by debuggers to help locate PDBs that match the loaded modules.
- The field names that begin with "ManagedPdb" refer to the managed PDB corresponding to the MSIL module that was generated by the managed compiler (such as the C# or Visual Basic compiler). This PDB uses the managed PDB format, and describes how elements from the original managed source code, such as files, line numbers, and symbol names, map to MSIL elements that are compiled into the MSIL module.

- The field names that begin with "NativePdb" refer to the NGen PDB generated by calling `NGEN createPDB`. This PDB uses the native PDB format, and describes how elements from the original managed source code, such as files, line numbers, and symbol names, map to native elements that are compiled into the NGen module.

<a name="clr_domain_module_events"></a>

## CLR Domain Module Events

The following table shows the keyword and level.

|Keyword for raising the event|Event|Level|
|-----------------------------------|-----------|-----------|
|`LoaderKeyword` (0x8)|`DomainModuleLoad_V1`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `StartRundownKeyword`|`DomainModuleDCStart_V1`|Informational (4)|
|`LoaderRundownKeyword` (0x8) +<br /><br /> `EndRundownKeyword`|`DomainModuleDCEnd_V1`|Informational (4)|

The following table shows the event information.

|Event|Event ID|Description|
|-----------|--------------|-----------------|
|`DomainModuleLoad_V1`|151|Raised when a module is loaded for an application domain.|
|`DomainModuleDCStart_V1`|151|Enumerates modules loaded for an application domain during a start rundown, and is logged for all application domains.|
|`DomainModuleDCEnd_V1`|152|Enumerates modules loaded for an application domain during an end rundown, and is logged for all application domains.|

The following table shows the event data.
|Field name|Data type|Description|
|----------------|---------------|-----------------|
|ModuleID|win:UInt64|Identifies the assembly to which this module belongs.|
|AssemblyID|win:UInt64|ID of the assembly in which this module resides.|
|AppDomainID|win:UInt64|ID of the application domain in which this module is used.|
|ModuleFlags|win:UInt32|0x1: Domain neutral module.<br /><br /> 0x2: Module has a native image.<br /><br /> 0x4: Dynamic module.<br /><br /> 0x8: Manifest module.|
|Reserved1|win:UInt32|Reserved field.|
|ModuleILPath|win:UnicodeString|Path of the MSIL image for the module, or dynamic module name if it is a dynamic assembly (null-terminated).|
|ModuleNativePath|win:UnicodeString|Path of the module native image, if present (null-terminated).|
|ClrInstanceID|win:UInt16|Unique ID for the instance of CLR or CoreCLR.|

<a name="module_range_events"></a>

## Module Range Events

The following table shows the keyword and level.

|Keyword for raising the event|Event|Level|
|-----------------------------------|-----------|-----------|
|`PerfTrackKeyWord`|`ModuleRange`|Informational (4)|
|`PerfTrackKeyWord`|`ModuleRangeDCStart`|Informational (4)|
|`PerfTrackKeyWord`|`ModuleRangeDCEnd`|Informational (4)|

The following table shows the event information.

|Event|Event ID|Description|
|-----------|--------------|-----------------|
|`ModuleRange`|158|This event is present if a loaded Native Image Generator (NGen) image has been optimized with IBC and contains information about the hot sections of the NGen image.|
|`ModuleRangeDCStart`|160|A `ModuleRange` event fired at the start of a rundown.|
|`ModuleRangeDCEnd`|161|A `ModuleRange` event fired at the end of a rundown.|

The following table shows the event data.
|Field name|Data type|Description|
|----------------|---------------|-----------------|
|ClrInstanceID|win:UInt16|Uniquely identifies a specific instance of the CLR in a process if multiple instances of the CLR are loaded.|
|ModuleID|win:UInt64|Identifies the assembly to which this module belongs.|
|RangeBegin|win:UInt32|The offset in the module that represents the start of the range for the specified range type.|
|RangeSize|win:UInt32|The size of the specified range in bytes.|
|RangeType|win:UInt32|A single value, 0x4, to represent Cold IBC ranges. This field can represent more values in the future.|
|RangeSize1|win:UInt32|0 indicates bad data.|
|RangeBegin2|win:UnicodeString||

### Remarks

If a loaded NGen image in a .NET Framework process has been optimized with IBC, the `ModuleRange` event that contains the hot ranges in the NGen image is logged along with its `moduleID` and `ClrInstanceID`. If the NGen image is not optimized with IBC, this event isn't logged.

To determine the module name, this event must be collated with the module load ETW events. The payload size for this event is variable; the `Count` field indicates the number of range offsets contained in the event. This event has to be collated with the Windows `IStart` event to determine the actual ranges. The Windows Image Load event is logged whenever an image is loaded, and contains the virtual address of the loaded image.

Module range events are fired under any ETW level greater than or equal to 4 and are classified as informational events.

## See also

- [CLR ETW Events](clr-etw-events.md)
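The keyword and level values in the tables above map directly onto an ETW collection command. The following is a hedged sketch (assumptions: Windows with the `logman` tool, the standard .NET CLR runtime provider GUID, and an illustrative session name `clr-loader`) that assembles a command enabling only the loader events at keyword 0x8, level 4 (Informational):

```shell
# Build (not run) a logman command line that enables the CLR loader events
# described above: keyword mask 0x8 (LoaderKeyword), level 4 (Informational).
clr_loader_trace_cmd() {
  provider="{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}"  # .NET CLR runtime provider GUID (assumed)
  echo "logman start clr-loader -p $provider 0x8 4 -ets"
}
clr_loader_trace_cmd
```

The emitted command would be run on the target Windows machine; the session is stopped with `logman stop clr-loader -ets`.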
# Changelog

Notable changes to this project will be documented in this file.

## [1.0.0]

- Initial release
- Fix image styling on all images excluding placeholders
- Better YouTube parsing
- New yml option: $urlsegment_link_rewrite
### Packaging

Enter the project directory and run `mvn install` to build.

The build generates a `release/<version>` directory under the project directory; copy the files from that version directory to the server.

**Environment variable**

The system requires the TEAMIDE_HOME environment variable to point at the installation directory, i.e. `TEAMIDE_HOME = /install-dir`.

After startup the following directories are created:

> + `$TEAMIDE_HOME/conf` // configuration files; the port, access path, JDBC settings, etc. live in this directory
>   + `$TEAMIDE_HOME/conf/ide.conf`
> + `$TEAMIDE_HOME/log` // log directory
>   + `$TEAMIDE_HOME/conf/ide.log`
> + `$TEAMIDE_HOME/plugins` // plugins
> + `$TEAMIDE_HOME/workspaces` // workspaces
> + `$TEAMIDE_HOME/spaces` // spaces — important: source code is stored in this directory

**Start & stop**

Run `bin/start.sh` to start the Team IDE service.

Run `bin/stop.sh` to stop the Team IDE service.

Start directly from the jar with `java -jar ide.jar`, or pass the installation directory and port: `java -jar ide.jar --TEAMIDE_HOME=/data/ide --port=8080`

- TEAMIDE_HOME // required if the environment variable is not configured
- port=8080 // the startup port can be specified

**Background start**

`nohup java -Dfile.encoding=UTF-8 -jar $TEAMIDE_HOME/ide.jar >$TEAMIDE_HOME/logs/start.log 2>&1 & echo $! > $TEAMIDE_HOME/ide.pid`
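The start commands above can be wrapped in a small helper. A minimal sketch (assumptions: `ide.jar` sits directly under `$TEAMIDE_HOME`, and the fallback directory `/data/ide` is illustrative):

```shell
# Build the java command line that starts Team IDE, falling back to defaults
# when TEAMIDE_HOME or the port argument are not provided.
teamide_start_cmd() {
  home="${TEAMIDE_HOME:-/data/ide}"
  port="${1:-8080}"
  echo "java -Dfile.encoding=UTF-8 -jar $home/ide.jar --TEAMIDE_HOME=$home --port=$port"
}
teamide_start_cmd 9090
```

To run the emitted command in the background, prefix it with `nohup`, redirect output, and record `$!` in a pid file, as the background-start command above shows.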
Tests for the new algorithm that diffs changes. Tests from L. Uzonyi (from Squeak trunk, System.ul 207 and 208)
# Backend For Frontend (BFF)

## 📕 Articles

- [Backends for Frontends pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends)
- [The BFF Pattern (Backend for Frontend): An Introduction](https://blog.bitsrc.io/bff-pattern-backend-for-frontend-an-introduction-e4fa965128bf)
- [SHARING DATA BETWEEN MODULES IN MODULAR MONOLITH](https://lukaszcoding.com/sharing-data-between-modules-in-modular-monolith/)
- [Pattern: Backends For Frontends](https://samnewman.io/patterns/architectural/bff/)
- [React UI with .NET Core BFF deployed at GCP AppEngine](https://medium.com/@op.tuuttila/react-ui-with-net-core-bff-deployed-at-gcp-appengine-715cfab2a4e4)
- [Episode 020 - The backend for frontend and the HttpClient - ASP.NET Core: From 0 to overkill](https://blog.codingmilitia.com/2019/05/05/aspnet-020-from-zero-to-overkill-backend-for-frontend-httpclient/)
- [Episode 029 - Simplifying the BFF with ProxyKit - ASP.NET Core: From 0 to overkill](https://blog.codingmilitia.com/2019/09/11/aspnet-029-from-zero-to-overkill-simplifying-the-bff-with-proxykit/)
- [Backends for Frontends Pattern — BFF](https://medium.com/design-microservices-architecture-with-patterns/backends-for-frontends-pattern-bff-7ccd9182c6a1)
- [Web App Security, Understanding the Meaning of the BFF Pattern](https://dev.to/damikun/web-app-security-understanding-the-meaning-of-the-bff-pattern-i85)

## 📺 Videos

- [Backends For Frontends Pattern - Cloud Design Patterns - BFF Pattern](https://www.youtube.com/watch?v=wgD9t3R3x-w)
- [The Backend For Frontend Pattern](https://www.youtube.com/watch?v=zazeGmFmUxg)
- [Designing efficient web app endpoint APIs using the Backends for Frontends (BFF) pattern](https://www.youtube.com/watch?v=9Q6In-tbjUU)
- [Why Backend for Frontend Is Key to Your Microservices Journey • B. Grant & K. Ramanathan • GOTO 2017](https://www.youtube.com/watch?v=PwgQZ8eCGxA)
- [Episode 020 - The backend for frontend and the HttpClient - ASP.NET Core: From 0 to overkill](https://www.youtube.com/watch?v=A8ZCVzeqFtA)
- [Episode 029 - Simplifying the BFF with ProxyKit - ASP.NET Core: From 0 to overkill](https://www.youtube.com/watch?v=Wgu97TKaRiI)

## 🚀 Samples

- [damikun/trouble-training](https://github.com/damikun/trouble-training) - FullStack app workshop with distributed tracing and monitoring. This shows the configuration from React frontend to .NetCore backend.
- [thangchung/bff-auth](https://github.com/thangchung/bff-auth) - The demonstration of modern authentication using BFF pattern
114.727273
209
0.778526
yue_Hant
0.249093
5dfa639957585d7fe2f802073b7013827c3a3057
41,239
md
Markdown
docs/AccessApi.md
CiscoDevNet/intersight-go
fb24f593557afda571b7525e55cd944aa16bd574
[ "Apache-2.0" ]
null
null
null
docs/AccessApi.md
CiscoDevNet/intersight-go
fb24f593557afda571b7525e55cd944aa16bd574
[ "Apache-2.0" ]
1
2022-03-21T06:28:43.000Z
2022-03-21T06:28:43.000Z
docs/AccessApi.md
CiscoDevNet/intersight-go
fb24f593557afda571b7525e55cd944aa16bd574
[ "Apache-2.0" ]
2
2020-07-07T15:00:25.000Z
2022-03-21T04:43:33.000Z
# \AccessApi All URIs are relative to *https://intersight.com* Method | HTTP request | Description ------------- | ------------- | ------------- [**CreateAccessPolicy**](AccessApi.md#CreateAccessPolicy) | **Post** /api/v1/access/Policies | Create a &#39;access.Policy&#39; resource. [**DeleteAccessPolicy**](AccessApi.md#DeleteAccessPolicy) | **Delete** /api/v1/access/Policies/{Moid} | Delete a &#39;access.Policy&#39; resource. [**GetAccessPolicyByMoid**](AccessApi.md#GetAccessPolicyByMoid) | **Get** /api/v1/access/Policies/{Moid} | Read a &#39;access.Policy&#39; resource. [**GetAccessPolicyInventoryByMoid**](AccessApi.md#GetAccessPolicyInventoryByMoid) | **Get** /api/v1/access/PolicyInventories/{Moid} | Read a &#39;access.PolicyInventory&#39; resource. [**GetAccessPolicyInventoryList**](AccessApi.md#GetAccessPolicyInventoryList) | **Get** /api/v1/access/PolicyInventories | Read a &#39;access.PolicyInventory&#39; resource. [**GetAccessPolicyList**](AccessApi.md#GetAccessPolicyList) | **Get** /api/v1/access/Policies | Read a &#39;access.Policy&#39; resource. [**PatchAccessPolicy**](AccessApi.md#PatchAccessPolicy) | **Patch** /api/v1/access/Policies/{Moid} | Update a &#39;access.Policy&#39; resource. [**UpdateAccessPolicy**](AccessApi.md#UpdateAccessPolicy) | **Post** /api/v1/access/Policies/{Moid} | Update a &#39;access.Policy&#39; resource. ## CreateAccessPolicy > AccessPolicy CreateAccessPolicy(ctx).AccessPolicy(accessPolicy).IfMatch(ifMatch).IfNoneMatch(ifNoneMatch).Execute() Create a 'access.Policy' resource. ### Example ```go package main import ( "context" "fmt" "os" openapiclient "./openapi" ) func main() { accessPolicy := *openapiclient.NewAccessPolicy("ClassId_example", "ObjectType_example") // AccessPolicy | The 'access.Policy' resource to create. ifMatch := "ifMatch_example" // string | For methods that apply server-side changes, and in particular for PUT, If-Match can be used to prevent the lost update problem. 
It can check if the modification of a resource that the user wants to upload will not override another change that has been done since the original resource was fetched. If the request cannot be fulfilled, the 412 (Precondition Failed) response is returned. When modifying a resource using POST or PUT, the If-Match header must be set to the value of the resource ModTime property after which no lost update problem should occur. For example, a client sends a GET request to obtain a resource, which includes the ModTime property. The ModTime indicates the last time the resource was created or modified. The client then sends a POST or PUT request with the If-Match header set to the ModTime property of the resource as obtained in the GET request. (optional) ifNoneMatch := "ifNoneMatch_example" // string | For methods that apply server-side changes, If-None-Match, used with the * value, can be used to create a resource not known to exist, guaranteeing that another resource creation didn't happen before, losing the data of the previous put. The request will be processed only if the eventually existing resource's ETag doesn't match any of the values listed. Otherwise, the status code 412 (Precondition Failed) is used. The asterisk is a special value representing any resource. It is only useful when creating a resource, usually with PUT, to check if another resource with the identity has already been created before. The comparison with the stored ETag uses the weak comparison algorithm, meaning two resources are considered identical if the content is equivalent - they don't have to be identical byte for byte. 
(optional) configuration := openapiclient.NewConfiguration() apiClient := openapiclient.NewAPIClient(configuration) resp, r, err := apiClient.AccessApi.CreateAccessPolicy(context.Background()).AccessPolicy(accessPolicy).IfMatch(ifMatch).IfNoneMatch(ifNoneMatch).Execute() if err != nil { fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.CreateAccessPolicy``: %v\n", err) fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r) } // response from `CreateAccessPolicy`: AccessPolicy fmt.Fprintf(os.Stdout, "Response from `AccessApi.CreateAccessPolicy`: %v\n", resp) } ``` ### Path Parameters ### Other Parameters Other parameters are passed through a pointer to a apiCreateAccessPolicyRequest struct via the builder pattern Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **accessPolicy** | [**AccessPolicy**](AccessPolicy.md) | The &#39;access.Policy&#39; resource to create. | **ifMatch** | **string** | For methods that apply server-side changes, and in particular for PUT, If-Match can be used to prevent the lost update problem. It can check if the modification of a resource that the user wants to upload will not override another change that has been done since the original resource was fetched. If the request cannot be fulfilled, the 412 (Precondition Failed) response is returned. When modifying a resource using POST or PUT, the If-Match header must be set to the value of the resource ModTime property after which no lost update problem should occur. For example, a client sends a GET request to obtain a resource, which includes the ModTime property. The ModTime indicates the last time the resource was created or modified. The client then sends a POST or PUT request with the If-Match header set to the ModTime property of the resource as obtained in the GET request. 
| **ifNoneMatch** | **string** | For methods that apply server-side changes, If-None-Match, used with the * value, can be used to create a resource not known to exist, guaranteeing that another resource creation didn&#39;t happen before, losing the data of the previous put. The request will be processed only if the eventually existing resource&#39;s ETag doesn&#39;t match any of the values listed. Otherwise, the status code 412 (Precondition Failed) is used. The asterisk is a special value representing any resource. It is only useful when creating a resource, usually with PUT, to check if another resource with the identity has already been created before. The comparison with the stored ETag uses the weak comparison algorithm, meaning two resources are considered identical if the content is equivalent - they don&#39;t have to be identical byte for byte. | ### Return type [**AccessPolicy**](AccessPolicy.md) ### Authorization [cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2) ### HTTP request headers - **Content-Type**: application/json - **Accept**: application/json [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) ## DeleteAccessPolicy > DeleteAccessPolicy(ctx, moid).Execute() Delete a 'access.Policy' resource. ### Example ```go package main import ( "context" "fmt" "os" openapiclient "./openapi" ) func main() { moid := "moid_example" // string | The unique Moid identifier of a resource instance. 
configuration := openapiclient.NewConfiguration() apiClient := openapiclient.NewAPIClient(configuration) r, err := apiClient.AccessApi.DeleteAccessPolicy(context.Background(), moid).Execute() if err != nil { fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.DeleteAccessPolicy``: %v\n", err) fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r) } } ``` ### Path Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc. **moid** | **string** | The unique Moid identifier of a resource instance. | ### Other Parameters Other parameters are passed through a pointer to a apiDeleteAccessPolicyRequest struct via the builder pattern Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- ### Return type (empty response body) ### Authorization [cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) ## GetAccessPolicyByMoid > AccessPolicy GetAccessPolicyByMoid(ctx, moid).Execute() Read a 'access.Policy' resource. ### Example ```go package main import ( "context" "fmt" "os" openapiclient "./openapi" ) func main() { moid := "moid_example" // string | The unique Moid identifier of a resource instance. 
configuration := openapiclient.NewConfiguration() apiClient := openapiclient.NewAPIClient(configuration) resp, r, err := apiClient.AccessApi.GetAccessPolicyByMoid(context.Background(), moid).Execute() if err != nil { fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.GetAccessPolicyByMoid``: %v\n", err) fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r) } // response from `GetAccessPolicyByMoid`: AccessPolicy fmt.Fprintf(os.Stdout, "Response from `AccessApi.GetAccessPolicyByMoid`: %v\n", resp) } ``` ### Path Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc. **moid** | **string** | The unique Moid identifier of a resource instance. | ### Other Parameters Other parameters are passed through a pointer to a apiGetAccessPolicyByMoidRequest struct via the builder pattern Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- ### Return type [**AccessPolicy**](AccessPolicy.md) ### Authorization [cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json, text/csv, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) ## GetAccessPolicyInventoryByMoid > AccessPolicyInventory GetAccessPolicyInventoryByMoid(ctx, moid).Execute() Read a 'access.PolicyInventory' resource. ### Example ```go package main import ( "context" "fmt" "os" openapiclient "./openapi" ) func main() { moid := "moid_example" // string | The unique Moid identifier of a resource instance. 
configuration := openapiclient.NewConfiguration() apiClient := openapiclient.NewAPIClient(configuration) resp, r, err := apiClient.AccessApi.GetAccessPolicyInventoryByMoid(context.Background(), moid).Execute() if err != nil { fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.GetAccessPolicyInventoryByMoid``: %v\n", err) fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r) } // response from `GetAccessPolicyInventoryByMoid`: AccessPolicyInventory fmt.Fprintf(os.Stdout, "Response from `AccessApi.GetAccessPolicyInventoryByMoid`: %v\n", resp) } ``` ### Path Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc. **moid** | **string** | The unique Moid identifier of a resource instance. | ### Other Parameters Other parameters are passed through a pointer to a apiGetAccessPolicyInventoryByMoidRequest struct via the builder pattern Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- ### Return type [**AccessPolicyInventory**](AccessPolicyInventory.md) ### Authorization [cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json, text/csv, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) ## GetAccessPolicyInventoryList > AccessPolicyInventoryResponse GetAccessPolicyInventoryList(ctx).Filter(filter).Orderby(orderby).Top(top).Skip(skip).Select_(select_).Expand(expand).Apply(apply).Count(count).Inlinecount(inlinecount).At(at).Tags(tags).Execute() Read a 'access.PolicyInventory' resource. 
### Example ```go package main import ( "context" "fmt" "os" openapiclient "./openapi" ) func main() { filter := "$filter=CreateTime gt 2012-08-29T21:58:33Z" // string | Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false). (optional) (default to "") orderby := "$orderby=CreationTime" // string | Determines what properties are used to sort the collection of resources. (optional) top := int32($top=10) // int32 | Specifies the maximum number of resources to return in the response. (optional) (default to 100) skip := int32($skip=100) // int32 | Specifies the number of resources to skip in the response. (optional) (default to 0) select_ := "$select=CreateTime,ModTime" // string | Specifies a subset of properties to return. (optional) (default to "") expand := "$expand=DisplayNames" // string | Specify additional attributes or related resources to return in addition to the primary resources. (optional) apply := "apply_example" // string | Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". 
The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set. (optional) count := false // bool | The $count query specifies the service should return the count of the matching resources, instead of returning the resources. (optional) inlinecount := "$inlinecount=true" // string | The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response. (optional) (default to "allpages") at := "at=VersionType eq 'Configured'" // string | Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. 
The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section. (optional) tags := "tags_example" // string | The 'tags' parameter is used to request a summary of the Tag utilization for this resource. When the 'tags' parameter is specified, the response provides a list of tag keys, the number of times the key has been used across all documents, and the tag values that have been assigned to the tag key. (optional) configuration := openapiclient.NewConfiguration() apiClient := openapiclient.NewAPIClient(configuration) resp, r, err := apiClient.AccessApi.GetAccessPolicyInventoryList(context.Background()).Filter(filter).Orderby(orderby).Top(top).Skip(skip).Select_(select_).Expand(expand).Apply(apply).Count(count).Inlinecount(inlinecount).At(at).Tags(tags).Execute() if err != nil { fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.GetAccessPolicyInventoryList``: %v\n", err) fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r) } // response from `GetAccessPolicyInventoryList`: AccessPolicyInventoryResponse fmt.Fprintf(os.Stdout, "Response from `AccessApi.GetAccessPolicyInventoryList`: %v\n", resp) } ``` ### Path Parameters ### Other Parameters Other parameters are passed through a pointer to a apiGetAccessPolicyInventoryListRequest struct via the builder pattern Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **filter** | **string** | Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. 
The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false). | [default to &quot;&quot;] **orderby** | **string** | Determines what properties are used to sort the collection of resources. | **top** | **int32** | Specifies the maximum number of resources to return in the response. | [default to 100] **skip** | **int32** | Specifies the number of resources to skip in the response. | [default to 0] **select_** | **string** | Specifies a subset of properties to return. | [default to &quot;&quot;] **expand** | **string** | Specify additional attributes or related resources to return in addition to the primary resources. | **apply** | **string** | Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \&quot;$apply\&quot; query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \&quot;aggregate\&quot; and \&quot;groupby\&quot;. The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. 
Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set. | **count** | **bool** | The $count query specifies the service should return the count of the matching resources, instead of returning the resources. | **inlinecount** | **string** | The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response. | [default to &quot;allpages&quot;] **at** | **string** | Similar to \&quot;$filter\&quot;, but \&quot;at\&quot; is specifically used to filter versioning information properties for resources to return. A URI with an \&quot;at\&quot; Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section. | **tags** | **string** | The &#39;tags&#39; parameter is used to request a summary of the Tag utilization for this resource. When the &#39;tags&#39; parameter is specified, the response provides a list of tag keys, the number of times the key has been used across all documents, and the tag values that have been assigned to the tag key. 
| ### Return type [**AccessPolicyInventoryResponse**](AccessPolicyInventoryResponse.md) ### Authorization [cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json, text/csv, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet [[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md) ## GetAccessPolicyList > AccessPolicyResponse GetAccessPolicyList(ctx).Filter(filter).Orderby(orderby).Top(top).Skip(skip).Select_(select_).Expand(expand).Apply(apply).Count(count).Inlinecount(inlinecount).At(at).Tags(tags).Execute() Read a 'access.Policy' resource. ### Example ```go package main import ( "context" "fmt" "os" openapiclient "./openapi" ) func main() { filter := "$filter=CreateTime gt 2012-08-29T21:58:33Z" // string | Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false). (optional) (default to "") orderby := "$orderby=CreationTime" // string | Determines what properties are used to sort the collection of resources. (optional) top := int32($top=10) // int32 | Specifies the maximum number of resources to return in the response. (optional) (default to 100) skip := int32($skip=100) // int32 | Specifies the number of resources to skip in the response. 
(optional) (default to 0) select_ := "$select=CreateTime,ModTime" // string | Specifies a subset of properties to return. (optional) (default to "") expand := "$expand=DisplayNames" // string | Specify additional attributes or related resources to return in addition to the primary resources. (optional) apply := "apply_example" // string | Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \"$apply\" query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \"aggregate\" and \"groupby\". The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set. (optional) count := false // bool | The $count query specifies the service should return the count of the matching resources, instead of returning the resources. 
(optional) inlinecount := "$inlinecount=true" // string | The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response. (optional) (default to "allpages") at := "at=VersionType eq 'Configured'" // string | Similar to \"$filter\", but \"at\" is specifically used to filter versioning information properties for resources to return. A URI with an \"at\" Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section. (optional) tags := "tags_example" // string | The 'tags' parameter is used to request a summary of the Tag utilization for this resource. When the 'tags' parameter is specified, the response provides a list of tag keys, the number of times the key has been used across all documents, and the tag values that have been assigned to the tag key. 
(optional)

	configuration := openapiclient.NewConfiguration()
	apiClient := openapiclient.NewAPIClient(configuration)
	resp, r, err := apiClient.AccessApi.GetAccessPolicyList(context.Background()).Filter(filter).Orderby(orderby).Top(top).Skip(skip).Select_(select_).Expand(expand).Apply(apply).Count(count).Inlinecount(inlinecount).At(at).Tags(tags).Execute()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.GetAccessPolicyList``: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}
	// response from `GetAccessPolicyList`: AccessPolicyResponse
	fmt.Fprintf(os.Stdout, "Response from `AccessApi.GetAccessPolicyList`: %v\n", resp)
}
```

### Path Parameters

### Other Parameters

Other parameters are passed through a pointer to an apiGetAccessPolicyListRequest struct via the builder pattern

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**filter** | **string** | Filter criteria for the resources to return. A URI with a $filter query option identifies a subset of the entries from the Collection of Entries. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the $filter option. The expression language that is used in $filter queries supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false). | [default to &quot;&quot;]
**orderby** | **string** | Determines what properties are used to sort the collection of resources. |
**top** | **int32** | Specifies the maximum number of resources to return in the response. | [default to 100]
**skip** | **int32** | Specifies the number of resources to skip in the response. | [default to 0]
**select_** | **string** | Specifies a subset of properties to return. | [default to &quot;&quot;]
**expand** | **string** | Specify additional attributes or related resources to return in addition to the primary resources. |
**apply** | **string** | Specify one or more transformation operations to perform aggregation on the resources. The transformations are processed in order with the output from a transformation being used as input for the subsequent transformation. The \&quot;$apply\&quot; query takes a sequence of set transformations, separated by forward slashes to express that they are consecutively applied, i.e. the result of each transformation is the input to the next transformation. Supported aggregation methods are \&quot;aggregate\&quot; and \&quot;groupby\&quot;. The **aggregate** transformation takes a comma-separated list of one or more aggregate expressions as parameters and returns a result set with a single instance, representing the aggregated value for all instances in the input set. The **groupby** transformation takes one or two parameters and 1. Splits the initial set into subsets where all instances in a subset have the same values for the grouping properties specified in the first parameter, 2. Applies set transformations to each subset according to the second parameter, resulting in a new set of potentially different structure and cardinality, 3. Ensures that the instances in the result set contain all grouping properties with the correct values for the group, 4. Concatenates the intermediate result sets into one result set. A groupby transformation affects the structure of the result set. |
**count** | **bool** | The $count query specifies the service should return the count of the matching resources, instead of returning the resources. |
**inlinecount** | **string** | The $inlinecount query option allows clients to request an inline count of the matching resources included with the resources in the response. | [default to &quot;allpages&quot;]
**at** | **string** | Similar to \&quot;$filter\&quot;, but \&quot;at\&quot; is specifically used to filter versioning information properties for resources to return. A URI with an \&quot;at\&quot; Query Option identifies a subset of the Entries from the Collection of Entries identified by the Resource Path section of the URI. The subset is determined by selecting only the Entries that satisfy the predicate expression specified by the query option. The expression language that is used in at operators supports references to properties and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false) or any of the additional literal representations shown in the Abstract Type System section. |
**tags** | **string** | The &#39;tags&#39; parameter is used to request a summary of the Tag utilization for this resource. When the &#39;tags&#39; parameter is specified, the response provides a list of tag keys, the number of times the key has been used across all documents, and the tag values that have been assigned to the tag key. |

### Return type

[**AccessPolicyResponse**](AccessPolicyResponse.md)

### Authorization

[cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2)

### HTTP request headers

- **Content-Type**: Not defined
- **Accept**: application/json, text/csv, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)


## PatchAccessPolicy

> AccessPolicy PatchAccessPolicy(ctx, moid).AccessPolicy(accessPolicy).IfMatch(ifMatch).Execute()

Update a 'access.Policy' resource.

### Example

```go
package main

import (
	"context"
	"fmt"
	"os"
	openapiclient "./openapi"
)

func main() {
	moid := "moid_example" // string | The unique Moid identifier of a resource instance.
	accessPolicy := *openapiclient.NewAccessPolicy("ClassId_example", "ObjectType_example") // AccessPolicy | The 'access.Policy' resource to update.
	ifMatch := "ifMatch_example" // string | For methods that apply server-side changes, and in particular for PUT, If-Match can be used to prevent the lost update problem. It can check if the modification of a resource that the user wants to upload will not override another change that has been done since the original resource was fetched. If the request cannot be fulfilled, the 412 (Precondition Failed) response is returned. When modifying a resource using POST or PUT, the If-Match header must be set to the value of the resource ModTime property after which no lost update problem should occur. For example, a client sends a GET request to obtain a resource, which includes the ModTime property. The ModTime indicates the last time the resource was created or modified. The client then sends a POST or PUT request with the If-Match header set to the ModTime property of the resource as obtained in the GET request. (optional)

	configuration := openapiclient.NewConfiguration()
	apiClient := openapiclient.NewAPIClient(configuration)
	resp, r, err := apiClient.AccessApi.PatchAccessPolicy(context.Background(), moid).AccessPolicy(accessPolicy).IfMatch(ifMatch).Execute()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.PatchAccessPolicy``: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}
	// response from `PatchAccessPolicy`: AccessPolicy
	fmt.Fprintf(os.Stdout, "Response from `AccessApi.PatchAccessPolicy`: %v\n", resp)
}
```

### Path Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**moid** | **string** | The unique Moid identifier of a resource instance. |

### Other Parameters

Other parameters are passed through a pointer to an apiPatchAccessPolicyRequest struct via the builder pattern

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**accessPolicy** | [**AccessPolicy**](AccessPolicy.md) | The &#39;access.Policy&#39; resource to update. |
**ifMatch** | **string** | For methods that apply server-side changes, and in particular for PUT, If-Match can be used to prevent the lost update problem. It can check if the modification of a resource that the user wants to upload will not override another change that has been done since the original resource was fetched. If the request cannot be fulfilled, the 412 (Precondition Failed) response is returned. When modifying a resource using POST or PUT, the If-Match header must be set to the value of the resource ModTime property after which no lost update problem should occur. For example, a client sends a GET request to obtain a resource, which includes the ModTime property. The ModTime indicates the last time the resource was created or modified. The client then sends a POST or PUT request with the If-Match header set to the ModTime property of the resource as obtained in the GET request. |

### Return type

[**AccessPolicy**](AccessPolicy.md)

### Authorization

[cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2)

### HTTP request headers

- **Content-Type**: application/json, application/json-patch+json
- **Accept**: application/json

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)


## UpdateAccessPolicy

> AccessPolicy UpdateAccessPolicy(ctx, moid).AccessPolicy(accessPolicy).IfMatch(ifMatch).Execute()

Update a 'access.Policy' resource.
### Example

```go
package main

import (
	"context"
	"fmt"
	"os"
	openapiclient "./openapi"
)

func main() {
	moid := "moid_example" // string | The unique Moid identifier of a resource instance.
	accessPolicy := *openapiclient.NewAccessPolicy("ClassId_example", "ObjectType_example") // AccessPolicy | The 'access.Policy' resource to update.
	ifMatch := "ifMatch_example" // string | For methods that apply server-side changes, and in particular for PUT, If-Match can be used to prevent the lost update problem. It can check if the modification of a resource that the user wants to upload will not override another change that has been done since the original resource was fetched. If the request cannot be fulfilled, the 412 (Precondition Failed) response is returned. When modifying a resource using POST or PUT, the If-Match header must be set to the value of the resource ModTime property after which no lost update problem should occur. For example, a client sends a GET request to obtain a resource, which includes the ModTime property. The ModTime indicates the last time the resource was created or modified. The client then sends a POST or PUT request with the If-Match header set to the ModTime property of the resource as obtained in the GET request. (optional)

	configuration := openapiclient.NewConfiguration()
	apiClient := openapiclient.NewAPIClient(configuration)
	resp, r, err := apiClient.AccessApi.UpdateAccessPolicy(context.Background(), moid).AccessPolicy(accessPolicy).IfMatch(ifMatch).Execute()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `AccessApi.UpdateAccessPolicy``: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}
	// response from `UpdateAccessPolicy`: AccessPolicy
	fmt.Fprintf(os.Stdout, "Response from `AccessApi.UpdateAccessPolicy`: %v\n", resp)
}
```

### Path Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**moid** | **string** | The unique Moid identifier of a resource instance. |

### Other Parameters

Other parameters are passed through a pointer to an apiUpdateAccessPolicyRequest struct via the builder pattern

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**accessPolicy** | [**AccessPolicy**](AccessPolicy.md) | The &#39;access.Policy&#39; resource to update. |
**ifMatch** | **string** | For methods that apply server-side changes, and in particular for PUT, If-Match can be used to prevent the lost update problem. It can check if the modification of a resource that the user wants to upload will not override another change that has been done since the original resource was fetched. If the request cannot be fulfilled, the 412 (Precondition Failed) response is returned. When modifying a resource using POST or PUT, the If-Match header must be set to the value of the resource ModTime property after which no lost update problem should occur. For example, a client sends a GET request to obtain a resource, which includes the ModTime property. The ModTime indicates the last time the resource was created or modified. The client then sends a POST or PUT request with the If-Match header set to the ModTime property of the resource as obtained in the GET request. |

### Return type

[**AccessPolicy**](AccessPolicy.md)

### Authorization

[cookieAuth](../README.md#cookieAuth), [http_signature](../README.md#http_signature), [oAuth2](../README.md#oAuth2), [oAuth2](../README.md#oAuth2)

### HTTP request headers

- **Content-Type**: application/json, application/json-patch+json
- **Accept**: application/json

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
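The If-Match handling described in the parameter tables above is an optimistic-concurrency check performed by the server. As a minimal, self-contained sketch of that check (the `checkIfMatch` helper is hypothetical and not part of the generated client), the precondition amounts to:

```go
package main

import "fmt"

// checkIfMatch sketches the server-side precondition described above:
// if the client supplies an If-Match value that no longer equals the
// resource's current ModTime, the update is rejected with
// 412 (Precondition Failed) so a concurrent change is not silently lost.
func checkIfMatch(ifMatch, currentModTime string) (status int, allowed bool) {
	if ifMatch != "" && ifMatch != currentModTime {
		return 412, false // another writer changed the resource first
	}
	return 200, true // ModTime still matches; the update may proceed
}

func main() {
	// Client fetched the resource when ModTime was T1, but a concurrent
	// writer has since bumped it to T2: the stale update is rejected.
	status, ok := checkIfMatch("2021-01-01T00:00:00Z", "2021-01-02T00:00:00Z")
	fmt.Println(status, ok) // 412 false

	// If-Match equals the current ModTime: the update is allowed.
	status, ok = checkIfMatch("2021-01-02T00:00:00Z", "2021-01-02T00:00:00Z")
	fmt.Println(status, ok) // 200 true
}
```

In practice the client obtains ModTime from a prior GET and passes it to the patch or update request via the `.IfMatch(...)` builder method shown above.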
<!-- source file: src/pages/training/fullstack.md (repo: tech47-bangalore/www-tech47, license: MIT) -->
---
title: "Time to get skilled!"
date: "2017-12-20"
author: Jai
---

![Fullstack Tech](fullstack.jpg)

We take up specialised training in the following areas

- [ReactJs](https://www.reactjs.org)
- [NodeJs](https://www.nodejs.org)
- [MongoDB](https://www.mongodb.org)
<!-- source file: content/authors/AlanRobock/_index.md (repo: ShanKothari/srm-ecology-website, license: MIT) -->
---
# Display name
title: Alan Robock

# Is this the primary user of the site?
superuser: false

# Role/position
role:

# Organizations/Affiliations
organizations:
- name: Rutgers University
  url: "http://envsci.rutgers.edu/index.php"
- name: Rutgers Impact Studies of Climate Intervention (RISCI) lab
  url: "https://sites.rutgers.edu/risci-lab/"

# Short bio (displayed in user profile at end of posts)
bio: I am a climate scientist, now specializing on the impacts of aerosols in the stratosphere on climate.

interests:
- Climate intervention (geoengineering)
- nuclear winter
- volcanic eruptions and climate

education:
  courses:
  - course: PhD, Meteorology
    institution: MIT
    year: 1977
  - course: SM, Meteorology
    institution: MIT
    year: 1974
  - course: BA, Meteorology
    institution: University of Wisconsin, Madison
    year: 1970

# Social/Academic Networking
# For available icons, see: https://sourcethemes.com/academic/docs/page-builder/#icons
# For an email link, use "fas" icon pack, "envelope" icon, and a link in the
# form "mailto:[email protected]" or "#contact" for contact widget.
social:
- icon: envelope
  icon_pack: fas
  link: "mailto:[email protected]"
- icon: twitter
  icon_pack: fab
  link: "https://twitter.com/@AlanRobock"
- icon: google-scholar
  icon_pack: ai
  link: "https://scholar.google.com/citations?user=PuKvZ4MAAAAJ&hl=en"

# Link to a PDF of your resume/CV from the About widget.
# To enable, copy your resume/CV to `static/files/cv.pdf` and uncomment the lines below.
# - icon: cv
#   icon_pack: ai
#   link: files/cv.pdf

# Enter email to display Gravatar (if Gravatar enabled in Config)
# email: "[email protected]"

# Highlight the author in author lists? (true/false)
highlight_name: false

# Organizational groups that you belong to (for People widget)
# Set this to `[]` or comment out if you are not using People widget.
user_groups:
- Group Members
---

Alan is a Distinguished Professor of climate science in the Department of Environmental Sciences at Rutgers University. He graduated from the University of Wisconsin, Madison, in 1970 with a B.A. in Meteorology, and from the Massachusetts Institute of Technology with an S.M. in 1974 and a Ph.D. in 1977, both in Meteorology. Before graduate school, he served as a Peace Corps Volunteer in the Philippines. He was a professor at the University of Maryland, 1977-1997, and the State Climatologist of Maryland, 1991-1997, before coming to Rutgers in 1998.

Prof. Robock has published more than 400 articles on his research in the area of climate change, including more than 260 peer-reviewed papers. His areas of expertise include climate intervention (also called geoengineering), climatic effects of nuclear war, and effects of volcanic eruptions on climate. He serves as Associate Editor of Reviews of Geophysics, the most highly-cited journal in the Earth Sciences. His honors include being a Fellow of the American Geophysical Union, the American Meteorological Society (AMS), and the American Association for the Advancement of Science, and being a recipient of the AMS Jule Charney Award. Prof. Robock was a Lead Author of the most recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (awarded the Nobel Peace Prize in 2007). In 2017 the International Campaign to Abolish Nuclear Weapons was awarded the Nobel Peace Prize “for its work to draw attention to the catastrophic humanitarian consequences of any use of nuclear weapons and for its groundbreaking efforts to achieve a treaty-based prohibition of such weapons” based partly on the work of Prof. Robock.
<!-- source file: reference/5.1/ISE/Import-IseSnippet.md (repo: costigans/PowerShell-Docs.ru-ru, licenses: CC-BY-4.0, MIT) -->
---
external help file: ISE-help.xml
keywords: powershell,cmdlet
Locale: en-US
Module Name: ISE
ms.date: 06/09/2017
online version: https://docs.microsoft.com/powershell/module/ise/import-isesnippet?view=powershell-5.1&WT.mc_id=ps-gethelp
schema: 2.0.0
title: Import-IseSnippet
---

# Import-IseSnippet

## SYNOPSIS
Imports ISE snippets into the current session.

## SYNTAX

### FromFolder (Default)

```
Import-IseSnippet [-Path] <String> [-Recurse] [<CommonParameters>]
```

### FromModule

```
Import-IseSnippet [-Recurse] -Module <String> [-ListAvailable] [<CommonParameters>]
```

## DESCRIPTION

The `Import-IseSnippet` cmdlet imports reusable text snippets from a module or a directory into the current session. The snippets are immediately available for use in Windows PowerShell ISE. This cmdlet works only in Windows PowerShell Integrated Scripting Environment (ISE).

To view and use the imported snippets, from the Windows PowerShell ISE **Edit** menu, click **Start Snippets** or press <kbd>Ctrl</kbd>+<kbd>J</kbd>.

Imported snippets are available only in the current session. To import snippets into all Windows PowerShell ISE sessions, add an `Import-IseSnippet` command to your Windows PowerShell profile, or copy the snippet files to your local snippets directory, `$home\Documents\WindowsPowershell\Snippets`.

To be imported, snippets must be properly formatted in the snippet XML for Windows PowerShell ISE snippets and saved in Snippet.ps1xml files. To create eligible snippets, use the `New-IseSnippet` cmdlet. `New-IseSnippet` creates a `<SnippetTitle>.Snippets.ps1xml` file in the `$home\Documents\WindowsPowerShell\Snippets` directory. You can move or copy the snippets to the Snippets directory of a Windows PowerShell module, or to any other directory.

The `Get-IseSnippet` cmdlet, which gets user-created snippets in the local snippets directory, does not get imported snippets.

This cmdlet was introduced in Windows PowerShell 3.0.

## EXAMPLES

### Example 1: Import snippets from a directory

This example imports the snippets from the `\\Server01\Public\Snippets` directory into the current session. It uses the **Recurse** parameter to get snippets from all subdirectories of the snippets directory.

```powershell
Import-IseSnippet -Path \\Server01\Public\Snippets -Recurse
```

### Example 2: Import snippets from a module

This example imports the snippets from the **SnippetModule** module. The command uses the **ListAvailable** parameter to import the snippets even if the **SnippetModule** module is not imported into the user's session when the command runs.

```powershell
Import-IseSnippet -Module SnippetModule -ListAvailable
```

### Example 3: Find snippets in modules

This example gets snippets in all installed modules in the PSModulePath environment variable.

```powershell
($env:PSModulePath).split(";") |
  ForEach-Object {dir $_\*\Snippets\*.Snippets.ps1xml -ErrorAction SilentlyContinue} |
  ForEach-Object {$_.fullname}
```

### Example 4: Import all module snippets

This example imports all snippets from all installed modules into the current session. Typically, you do not need to run a command like this, because modules that have snippets will use the `Import-IseSnippet` cmdlet to import them when the module is imported.

```powershell
($env:PSModulePath).split(";") |
  ForEach-Object {dir $_\*\Snippets\*.Snippets.ps1xml -ErrorAction SilentlyContinue} |
  ForEach-Object {$psise.CurrentPowerShellTab.Snippets.Load($_)}
```

### Example 5: Copy all module snippets

This example copies the snippet files from all installed modules into the snippets directory of the current user. Unlike imported snippets, which affect only the current session, copied snippets are available in every Windows PowerShell ISE session.

```powershell
($env:PSModulePath).split(";") |
  ForEach-Object {dir $_\*\Snippets\*.Snippets.ps1xml -ErrorAction SilentlyContinue} |
  Copy-Item -Destination $home\Documents\WindowsPowerShell\Snippets
```

## PARAMETERS

### -ListAvailable

Indicates that this cmdlet gets snippets from modules that are installed on the computer, even if the modules are not imported into the current session. If this parameter is omitted and the module specified by the **Module** parameter is not imported into the current session, the attempt to get the snippets from the module fails.

This parameter is effective only when the **Module** parameter is also used in the command.

```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: FromModule
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Module

Imports snippets from the specified module into the current session. Wildcard characters are not supported.

This parameter imports snippets from Snippet.ps1xml files in the Snippets subdirectory in the module path, such as `$home\Documents\WindowsPowerShell\Modules\<ModuleName>\Snippets`.

This parameter is designed to be used by module authors in a startup script, such as a script specified in the **ScriptsToProcess** key of a module manifest. Snippets in a module are not automatically imported with the module, but you can use an `Import-IseSnippet` command to import them.

```yaml
Type: System.String
Parameter Sets: FromModule
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Path

Specifies the path to the snippets directory from which this cmdlet imports snippets.

```yaml
Type: System.String
Parameter Sets: FromFolder
Aliases:

Required: True
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: True
```

### -Recurse

Indicates that this cmdlet imports snippets from all subdirectories of the value of the **Path** parameter.

```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### CommonParameters

This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### None

This cmdlet does not take input from the pipeline.

## OUTPUTS

### None

This cmdlet does not generate any output.

## NOTES

- You cannot use the `Get-IseSnippet` cmdlet to get imported snippets. `Get-IseSnippet` gets only snippets in the `$home\Documents\WindowsPowerShell\Snippets` directory.
- `Import-IseSnippet` uses the static **Load** method of **Microsoft.PowerShell.Host.ISE.ISESnippetCollection** objects. You can also use the **Load** method of snippets in the Windows PowerShell ISE object model: `$psISE.CurrentPowerShellTab.Snippets.Load()`
- The `New-IseSnippet` cmdlet stores new user-created snippets in unsigned .ps1xml files. As a result, Windows PowerShell cannot load them into a session in which the **AllSigned** or **Restricted** execution policy is in effect. In a **Restricted** or **AllSigned** session, you can create, get, and import unsigned user-created snippets, but you cannot use them in the session. To use unsigned user-created snippets that the `Import-IseSnippet` cmdlet returns, change the execution policy, and then restart Windows PowerShell ISE.

For more information about Windows PowerShell execution policies, see [about_Execution_Policies](../Microsoft.PowerShell.Core/About/about_Execution_Policies.md).

## RELATED LINKS

[Get-IseSnippet](Get-IseSnippet.md)

[New-IseSnippet](New-IseSnippet.md)
<!-- source file: README.md (repo: Evgeniy1978/marketplace-docker, license: Apache-2.0) -->
# Marketplace Deployment

Who is this document for:

* Full stack engineers
* IT administrators

In this tutorial we will install the marketplace locally on a computer or in a virtual machine with Ubuntu OS. The process of installing it in a production environment is the same, plus your IT administrator will need to set up the infrastructure (such as domain name, hosting, firewall, nginx, and SSL certificates) so that the server that hosts the marketplace can be accessed by the users on the Internet, like the Unique marketplace: [https://unqnft.io](https://unqnft.io).

## Prerequisites

* OS: Ubuntu 18.04 or 20.04
* docker CE 20.10 or up
* docker-compose 1.25 or up
* git
* Google Chrome Browser

## Step 1 - Install Polkadot{.js} Extension

Visit [https://polkadot.js.org/extension/](https://polkadot.js.org/extension/) and click on the “Download for Chrome” button. Chrome browser will guide you through the rest of the process.

![Install Polkadot{.js} Extension](/doc/step1-1.png)

As a result you should see that little icon in the top right corner:

![Install Polkadot{.js} Extension](/doc/step1-2.png)

## Step 2 - Create Admin Address

Click on the Polkadot{.js} extension icon and select “create new account” in the menu:

<img src="/doc/step2-1.png" width="400">

You should write down the 12-word mnemonic seed on paper. Do not share it with anybody, because this 12-word phrase is all that’s needed to get access to the money and NFTs that are stored on this account. Follow the Polkadot{.js} instructions to complete the account setup.

## Step 3 - Get Unique

In order to get the marketplace running, you’ll need some Unique coins. For the TestNet 2.0, it is free. You can get it from the faucet bot on Telegram: [@unique2faucetbot](https://t.me/unique2faucetbot)

Copy your account address from the Polkadot{.js} extension and send it to the faucet bot:

<img src="/doc/step3-1.png" width="400">

## Step 4 - Deploy Marketplace Smart Contract

1. Download [matcher.wasm](/doc/matcher.wasm) and [metadata.json](/doc/metadata.json) files
2. Open Polkadot Apps UI on the Contracts page: [https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftestnet2.unique.network#/contracts](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Ftestnet2.unique.network#/contracts)
   ![Deploy Marketplace Smart Contract](/doc/step4-1.png)
3. Click on the Upload & deploy code button, select the metadata.json and then matcher.wasm files you have downloaded previously in the form fields like this, and click “Next”:
   ![Deploy Marketplace Smart Contract](/doc/step4-2.png)
4. Give the contract 300 Unique coins in Endowment so that it can pay for storing its data, then click Deploy (and follow signing the transaction):
   ![Deploy Marketplace Smart Contract](/doc/step4-3.png)
5. When the transaction completes, you should see the green notification bar at the top right, and the contract will appear in the “contracts” list:
   ![Deploy Marketplace Smart Contract](/doc/step4-4.png)
6. Expand the “Messages” section and find the “SetAdmin” method:
   ![Deploy Marketplace Smart Contract](/doc/step4-5.png)
7. Click on the “exec” button in front of the setAdmin method and select the marketplace admin address both as the caller (“call from account”) and the parameter (“message to send”). Make sure you've put the same address twice, as shown in the picture. Click the Execute button and follow with signing this transaction:
   ![Deploy Marketplace Smart Contract](/doc/step4-6.png)
8. Click on the matcher contract ornament to copy its address for future use:
   ![Deploy Marketplace Smart Contract](/doc/step4-7.png)

You’re all set with the matcher contract!

## Step 5 - Clone marketplace code from GitHub

Open the terminal and execute the following commands:

```
git clone https://github.com/UniqueNetwork/marketplace-docker
cd marketplace-docker
git checkout feature/easy_start
git submodule update --init --recursive --remote
```

## Step 6 - Configure backend (.env file)

In this step we will configure the marketplace backend with your administrator address, seed and the matcher contract address.

1. Create a .env file in the root of the marketplace-docker project and paste the following content in there:

```
POSTGRES_DB=marketplace_db
POSTGRES_USER=marketplace
POSTGRES_PASSWORD=12345
ADMIN_SEED=
MATCHER_CONTRACT_ADDRESS=
UNIQUE_WS_ENDPOINT=wss://testnet2.uniquenetwork.io
COMMISSION=10
```

2. Edit the .env file:

* Change ADMIN_SEED to the 12-word admin mnemonic seed phrase that you saved when you created the admin address in the Polkadot{.js} extension
* Change the MATCHER_CONTRACT_ADDRESS value to the Matcher contract address that you copied from Apps UI after you deployed it:
  <img src="/doc/step6-2.png" width="400">
* Leave the rest of the values intact

As a result you should see content similar to this:

<img src="/doc/step6-3.png" width="600">

## Step 7 - Configure frontend (.env file)

In this step we will configure the marketplace frontend with your administrator and the matcher contract addresses, specify what NFT collections you’d like your marketplace to handle, and specify the domain name that it’s going to be hosted on (localhost for the purpose of this example).

1. Create an empty .env file in the ui/packages/apps folder and copy the following content in there:

```
CAN_ADD_COLLECTIONS=false
CAN_CREATE_COLLECTION=false
CAN_CREATE_TOKEN=false
CAN_EDIT_COLLECTION=false
CAN_EDIT_TOKEN=false
COMMISSION=10
CONTRACT_ADDRESS=''
DECIMALS=6
ESCROW_ADDRESS=''
FAVICON_PATH='favicons/marketplace'
KUSAMA_DECIMALS=12
MAX_GAS=1000000000000
MIN_PRICE=0.000001
MIN_TED_COLLECTION=1
QUOTE_ID=2
SHOW_MARKET_ACTIONS=true
VALUE=0
VAULT_ADDRESS=""
WALLET_MODE=false
WHITE_LABEL_URL='http://localhost'
UNIQUE_COLLECTION_IDS=23,25
UNIQUE_API='http://localhost:5000'
UNIQUE_SUBSTRATE_API='wss://testnet2.uniquenetwork.io'
```

2. Change the value of CONTRACT_ADDRESS to the address of the smart contract that you copied and saved after it was deployed
3. Change the value of ESCROW_ADDRESS to the admin address that you copied from the Polkadot{.js} extension
4. List the collections you would like the marketplace to handle in the UNIQUE_COLLECTION_IDS field, separated by commas (e.g. the example above configures the marketplace to handle collections 23, Substrapunks, and 25, Chelobricks)

## Step 8 - Build and run

**Optional**: You can pre-pull docker images before you start:

```
docker pull postgres:13.4
docker pull node:latest
docker pull ubuntu:18.04
```

Execute the following command in the terminal and wait for it to finish:

```
docker-compose -f docker-compose-local.yml up -d --build
```

## Step 9 - Enjoy!

Open [localhost:3000](http://localhost:3000) in your Chrome browser. On the first launch you will see Polkadot{.js}’s request to authorize the website; click “Yes”:

![Deploy Marketplace Smart Contract](/doc/step9-1.png)

The marketplace will connect to the blockchain and the local backend and will display the empty Market page. It is now ready to play:

![Deploy Marketplace Smart Contract](/doc/step9-2.png)

## License Information

Copyright 2021, Unique Network, Usetech Professional

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
<!-- source file: data/readme_files/bugra.pydata-nyc-2014.md (repo: DLR-SC/repository-synergy, license: MIT) -->
A Machine Learning Pipeline with Scikit-Learn
===

This repo includes all of the IPython notebooks that I will go through in the [tutorial](http://pydata.org/nyc2014/abstracts/#289). It is targeted at people who have beginner to intermediate knowledge of machine learning and want to use Scikit-Learn in their machine learning pipeline. It covers everything from basic concepts of Scikit-Learn (unsupervised learning, supervised learning and cross-validation) to pipelines, grid search, model selection and the deployment process.

The dependencies are given in the 0th notebook; to reproduce the notebooks, make sure you have at least the versions listed there. Otherwise, please feel free to open an issue in this repository.

You can browse the IPython notebooks in [nbviewer](http://nbviewer.ipython.org/github/bugra/pydata-nyc-2014/tree/master/).

Slides: [http://bit.ly/pydata-nyc-2014-deck](http://bit.ly/pydata-nyc-2014-deck)
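The core loop the tutorial builds up to — cross-validation combined with grid search — can be sketched without any dependencies. This toy example is not Scikit-Learn code; it only mirrors what `GridSearchCV` does, with a one-parameter "threshold classifier" standing in for a real estimator so the mechanics stay visible:

```python
# Toy sketch (assumes scikit-learn is NOT needed): k-fold CV + grid search.

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

def grid_search(xs, ys, thresholds, k=3):
    """Return (best_threshold, best_mean_cv_accuracy), like a tiny GridSearchCV."""
    best = None
    for t in thresholds:
        scores = []
        for _train, test in k_fold_indices(len(xs), k):
            # The "model" predicts class 1 when x >= t; nothing to fit here.
            correct = sum((xs[i] >= t) == bool(ys[i]) for i in test)
            scores.append(correct / len(test))
        mean = sum(scores) / len(scores)
        if best is None or mean > best[1]:
            best = (t, mean)
    return best

xs = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
ys = [0, 0, 0, 1, 1, 1]
best_t, best_acc = grid_search(xs, ys, thresholds=[0.2, 0.5, 0.75], k=3)
print(best_t, best_acc)  # 0.5 1.0
```

In Scikit-Learn itself, the same idea is the one-liner `GridSearchCV(estimator, param_grid, cv=k)`, which the notebooks cover in depth.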
45.6
89
0.794956
eng_Latn
0.993373
5dfc7f165996003454b87a4231c44ccce687c1bb
1,133
md
Markdown
_FULLTEXT/sinonjs.sinon.md
BJBaardse/open-source-words
18ca0c71e7718a0e2e9b7269b018f77b06f423b4
[ "Apache-2.0" ]
17
2018-07-13T02:16:22.000Z
2021-09-16T15:31:49.000Z
_FULLTEXT/sinonjs.sinon.md
letform/open-source-words
18ca0c71e7718a0e2e9b7269b018f77b06f423b4
[ "Apache-2.0" ]
null
null
null
_FULLTEXT/sinonjs.sinon.md
letform/open-source-words
18ca0c71e7718a0e2e9b7269b018f77b06f423b4
[ "Apache-2.0" ]
6
2018-10-12T09:09:05.000Z
2021-01-01T15:32:45.000Z
# Sinon.JS

Standalone and test framework agnostic JavaScript test spies, stubs and mocks (pronounced "sigh-non", named after Sinon, the warrior).

## Installation

via npm:

```
$ npm install sinon
```

or via sinon's browser builds available for download on the homepage. There are also npm-based CDNs one can use.

## Usage

See the sinon project homepage for documentation on usage. If you have questions that are not covered by the documentation, you can check out the sinon tag on Stack Overflow or drop by #sinon.js on irc.freenode.net:6667. You can also search through the Sinon.JS mailing list archives.

## Goals

- No global pollution
- Easy to use
- Require minimal “integration”
- Easy to embed seamlessly with any testing framework
- Easily fake any interface
- Ship with ready-to-use fakes for XMLHttpRequest, timers and more

## Contribute?

See CONTRIBUTING.md for details on how you can contribute to Sinon.JS.

## Backers

Support us with a monthly donation and help us continue our activities. [Become a backer]

## Sponsors

Become a sponsor and get your logo on our README on GitHub with a link to your site. [Become a sponsor]

## Licence

Sinon.js was released under BSD-3.
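The spy/stub idea Sinon implements for JavaScript is language-agnostic; Python's standard-library `unittest.mock` offers a close analogue, which makes the concept easy to illustrate without installing anything. The sketch below shows the concept only — it is not Sinon's API:

```python
from unittest.mock import MagicMock

# A stub that also records its calls (roughly what Sinon calls a spy with behavior).
notifier = MagicMock(return_value=True)

def process_order(order, notify):
    # Code under test: calls its collaborator exactly once with the order id.
    return notify(order["id"])

result = process_order({"id": 42}, notifier)

assert result is True                 # the stubbed return value came through
notifier.assert_called_once_with(42)  # spy-style assertion on the recorded call
print(notifier.call_count)            # 1
```

In Sinon the equivalent would be creating a spy or stub, passing it to the code under test, and asserting on properties such as call counts and call arguments afterwards — see the project homepage for the actual JavaScript API.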
1,133
1,133
0.80759
eng_Latn
0.997662
5dfc97128dec82e59f3c178568ee0122575a5ff2
22,300
md
Markdown
WindowsServerDocs/identity/solution-guides/Deploy-a-Central-Access-Policy--Demonstration-Steps-.md
eltociear/windowsserverdocs.ja-jp
d45bb4a3e900f0f4bddef6b3709f3c7dec3a9d6c
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/identity/solution-guides/Deploy-a-Central-Access-Policy--Demonstration-Steps-.md
eltociear/windowsserverdocs.ja-jp
d45bb4a3e900f0f4bddef6b3709f3c7dec3a9d6c
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/identity/solution-guides/Deploy-a-Central-Access-Policy--Demonstration-Steps-.md
eltociear/windowsserverdocs.ja-jp
d45bb4a3e900f0f4bddef6b3709f3c7dec3a9d6c
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- ms.assetid: 8738c03d-6ae8-49a7-8b0c-bef7eab81057 title: 集約型アクセス ポリシーの展開 (デモンストレーション手順) author: billmath ms.author: billmath manager: femila ms.date: 05/31/2017 ms.topic: article ms.prod: windows-server ms.technology: identity-adds ms.openlocfilehash: 5f4d94facc57cf2b71d6d546b4a2b60253ff58fe ms.sourcegitcommit: b00d7c8968c4adc8f699dbee694afe6ed36bc9de ms.translationtype: MT ms.contentlocale: ja-JP ms.lasthandoff: 04/08/2020 ms.locfileid: "80861205" --- # <a name="deploy-a-central-access-policy-demonstration-steps"></a>集約型アクセス ポリシーの展開 (デモンストレーション手順) >適用対象: Windows Server 2016、Windows Server 2012 R2、Windows Server 2012 このシナリオでは、金融部門のセキュリティ運用では、集約型情報セキュリティを操作して、ファイル サーバーに保管されたアーカイブ金融情報を保護できるように集約型アクセス ポリシーのニーズを指定します。 各国のアーカイブ金融情報には、同じ国の金融従業員が読み取り専用としてアクセスできます。 集約型金融管理グループは、すべての国の金融情報にアクセスできます。 集約型アクセス ポリシーの展開には、以下のフェーズがあります。 |フェーズ|説明 |---------|--------------- |[計画: ポリシーのニーズと展開に必要な構成を特定する](Deploy-a-Central-Access-Policy--Demonstration-Steps-.md#BKMK_1.2)|ポリシーのニーズおよび展開に必要な構成を特定します。 |[実装: コンポーネントとポリシーを構成します。](Deploy-a-Central-Access-Policy--Demonstration-Steps-.md#BKMK_1.3)|コンポーネントおよびポリシーを構成します。 |[集約型アクセスポリシーを展開する](Deploy-a-Central-Access-Policy--Demonstration-Steps-.md#BKMK_1.4)|ポリシーを展開します。 |[管理: ポリシーを変更してステージングする](Deploy-a-Central-Access-Policy--Demonstration-Steps-.md#BKMK_1.5)|ポリシーの変更とステージング。 ## <a name="set-up-a-test-environment"></a><a name="BKMK_1.1"></a>テスト環境のセットアップ 開始する前に、このシナリオをテストするラボを設定する必要があります。 ラボを設定する手順の詳細については、 [「付録 B: テスト環境の](Appendix-B--Setting-Up-the-Test-Environment.md)セットアップ」をご覧ください。 ## <a name="plan-identify-the-need-for-policy-and-the-configuration-required-for-deployment"></a><a name="BKMK_1.2"></a>計画: ポリシーのニーズと展開に必要な構成を特定する このセクションでは、展開の計画フェーズで役立つ一連の大まかな手順を示します。 ||手順|例| |-|--------|-----------| |1.1|ビジネスで集約型アクセス ポリシーが必要であると判断する|ファイル サーバーに保管されている金融情報を保護するために、金融部門のセキュリティ運用では、集約型情報セキュリティを操作して、集約型アクセス ポリシーのニーズを指定します。| |1.2|アクセス ポリシーを表現する|金融ドキュメントは、金融部門のメンバーのみが読み取るようにする必要があります。 金融部門のメンバーは、属している国のドキュメントのみにアクセスする必要があります。 
書き込みアクセス権限を備えているのは、金融管理者のみにします。 FinanceException グループのメンバーに対して例外が許可されるようにします。 このグループは読み取りアクセス権限を備えています。| |1.3|Windows Server 2012 コンストラクトでアクセスポリシーを表現する|ターゲット:<p>-Resource. Department は Finance を含みます。<p>アクセス規則:<p>-Allow read User. Country = Resource. Country AND User department = Resource. Department<br />-フルコントロールユーザーを許可します。 MemberOf (FinanceAdmin)<p>例外:<p>Allow read memberOf(FinanceException)| |1.4|ポリシーに必要なファイル プロパティを決定する|次の項目でファイルにタグを付けます。<p>-Department<br />-国| |1.5|ポリシーに必要な要求の種類とグループを決定する|要求の種類:<p>-国<br />-Department<p>ユーザー グループ:<p>-FinanceAdmin<br />-FinanceException| |1.6|このポリシーを適用するサーバーを決定する|すべての金融ファイル サーバーでポリシーを適用します。| ## <a name="implement-configure-the-components-and-policy"></a><a name="BKMK_1.3"></a>実装: コンポーネントとポリシーを構成します。 このセクションでは、金融ドキュメント用の集約型アクセス ポリシーを展開する例を示します。 |いいえ|手順|例| |------|--------|-----------| |2.1|要求の種類を作成する|次の要求の種類を作成します。<p>-Department<br />-国| |2.2|リソース プロパティを作成する|次のリソース プロパティを作成して有効にします。<p>-Department<br />-国| |2.3|集約型アクセス規則を構成する|前のセクションで決定したポリシーが含まれた金融ドキュメント規則を作成します。| |2.4|集約型アクセス ポリシー (CAP) を構成する|金融ポリシーという CAP を作成し、その CAP に金融ドキュメント規則を追加します。| |2.5|集約型アクセス ポリシーのターゲットをファイル サーバーに設定する|金融ポリシー CAP をファイル サーバーにパブリッシュします。| |2.6|KDC での信頼性情報、複合認証、および Kerberos 防御のサポートを有効にする|contoso.com について KDC での信頼性情報、複合認証、および Kerberos 防御のサポートを有効にします。| 次の手順では、Country と Department という2つの要求の種類を作成します。 #### <a name="to-create-claim-types"></a>要求の種類を作成するには 1. Hyper-v マネージャーでサーバー DC1 を開き、パスワード<strong>pass@word1</strong>を使用して contoso\administrator としてログオンします。 2. Active Directory 管理センターを開きます。 3. **ツリー ビュー アイコン**をクリックし、 **[ダイナミック アクセス制御]** を展開し、 **[要求の種類]** を選択します。 **[要求の種類]** を右クリックし、 **[新規作成]** をクリックしてから、 **[要求の種類]** をクリックします。 > [!TIP] > **[タスク]** ウィンドウから **[要求の種類の作成:]** ウィンドウを開きます。 **[タスク]** ウィンドウで **[新規作成]** をクリックしてから、 **[要求の種類]** をクリックします。 4. **[ソース属性]** リストで属性のリストを下にスクロールし、 **[department]** をクリックします。 これにより、 **[表示名]** フィールドに **[department]** のデータが設定されます。 **[OK]** をクリックすると、 5. **[タスク]** ウィンドウで **[新規作成]** をクリックしてから、 **[要求の種類]** をクリックします。 6. 
**[ソース属性]** リストで属性のリストを下にスクロールし、 **[c]** 属性 (国名) をクリックします。 **[表示名]** フィールドに **country** と入力します。 7. **[提案された値]** セクションで **[次の値を提案します:]** を選択してから、 **[追加]** をクリックします。 8. **[値]** フィールドおよび **[表示名]** フィールドに **US** と入力してから、 **[OK]** をクリックします。 9. 上記ステップを繰り返します。 **[提案された値の追加]** ダイアログ ボックスの **[値]** フィールドおよび **[表示名]** フィールドに **JP** と入力してから、 **[OK]** をクリックします。 ![ソリューションガイド](media/Deploy-a-Central-Access-Policy--Demonstration-Steps-/PowerShellLogoSmall.gif)***<em>Windows PowerShell の同等のコマンド</em>*** 次の Windows PowerShell コマンドレットは、前の手順と同じ機能を実行します。 書式上の制約のため、複数行にわたって折り返される場合でも、各コマンドレットは 1 行に入力してください。 New-ADClaimType country -SourceAttribute c -SuggestedValues:@((New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("US","US","")), (New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("JP","JP",""))) New-ADClaimType department -SourceAttribute department > [!TIP] > Active Directory 管理センターの Windows PowerShell 履歴ビューアーを使用して、Active Directory 管理センターで実行する各手順の Windows PowerShell コマンドレットを検索できます。 詳細については、「 [Windows PowerShell 履歴ビューアー](https://technet.microsoft.com/library/hh831702) 」を参照してください。 次の手順では、リソース プロパティを作成します。 次の手順では、ドメイン コントローラーのグローバル リソース プロパティ リストに自動的に追加されるリソース プロパティを作成して、ファイル サーバーで使用可能にします。 #### <a name="to-create-and-enable-pre-created-resource-properties"></a>事前作成のリソース プロパティを作成して有効にするには 1. Active Directory 管理センターの左側のウィンドウで、 **[ツリー ビュー]** をクリックします。 **[ダイナミック アクセス制御]** を展開してから、 **[リソース プロパティ]** を選択します。 2. **[リソース プロパティ]** を右クリックし、 **[新規作成]** をクリックしてから、 **[参照用リソース プロパティ]** をクリックします。 > [!TIP] > **[タスク]** ウィンドウからリソース プロパティを選択することもできます。 **[新規作成]** をクリックしてから、 **[参照用リソース プロパティ]** をクリックします。 3. **[提案された値リストを共有する要求の種類の選択]** で、 **[country]** をクリックします。 4. **[表示名]** フィールドに **country** と入力してから、 **[OK]** をクリックします。 5. **[リソース プロパティ]** リストをダブルクリックし、 **[Department]** リソース プロパティまで下にスクロールします。 右クリックし、 **[有効にする]** をクリックします。 これにより、ビルトイン **[Department]** リソース プロパティが有効になります。 6. 
これで、Active Directory 管理センターのナビゲーション ウィンドウの **[リソース プロパティ]** リストに、次の 2 つの有効になっているリソース プロパティが含まれています。 - 国 - 部署 ![ソリューションガイド](media/Deploy-a-Central-Access-Policy--Demonstration-Steps-/PowerShellLogoSmall.gif)***<em>Windows PowerShell の同等のコマンド</em>*** 次の Windows PowerShell コマンドレットは、前の手順と同じ機能を実行します。 書式上の制約のため、複数行にわたって折り返される場合でも、各コマンドレットは 1 行に入力してください。 ``` New-ADResourceProperty Country -IsSecured $true -ResourcePropertyValueType MS-DS-MultivaluedChoice -SharesValuesWith country Set-ADResourceProperty Department_MS -Enabled $true Add-ADResourcePropertyListMember "Global Resource Property List" -Members Country Add-ADResourcePropertyListMember "Global Resource Property List" -Members Department_MS ``` 次の手順では、リソースにアクセスできるユーザーを定義する集約型アクセス規則を作成します。 このシナリオでは、ビジネス規則は次のようにします。 - 金融ドキュメントは、金融部門のメンバーのみが読み取ることができます。 - 金融部門のメンバーは、属している国のドキュメントのみにアクセスできます。 - 書き込みアクセス権限を備えることができるのは、金融管理者のみにします。 - FinanceException グループのメンバーに対して例外を許可します。 このグループは読み取りアクセス権限を備えています。 - 管理者およびドキュメント所有者は引き続き、フル アクセスできます。 または、Windows Server 2012 の構成要素を使用して規則を表現します。 ターゲット: Resource. Department が財務を含む アクセス規則: - Allow Read User.Country=Resource.Country AND User.department = Resource.Department - Allow Full control User.MemberOf(FinanceAdmin) - Allow Read User.MemberOf(FinanceException) #### <a name="to-create-a-central-access-rule"></a>集約型アクセス規則を作成するには 1. Active Directory 管理センターの左側のウィンドウで、 **[ツリー ビュー]** をクリックし、 **[ダイナミック アクセス制御]** を選択してから、 **[集約型アクセス規則]** をクリックします。 2. **[集約型アクセス規則]** を右クリックし、 **[新規作成]** をクリックしてから、 **[集約型アクセス規則]** をクリックします。 3. **[名前]** フィールドに「**金融ドキュメント規則**」と入力します。 4. **[ターゲット リソース]** セクションで **[編集]** をクリックし、 **[集約型アクセス規則]** ダイアログ ボックスで **[条件の追加]** をクリックします。 次の条件を追加します。 **[リソース]** **[Department]** **[等しい]** **[値]** **[Finance]** 。次に、 **[OK]** をクリックします。 5. 
**[アクセス許可]** セクションで **[次のアクセス許可を現在のアクセス許可として使用する]** を選択し、 **[編集]** をクリックし、 **[アクセス許可のセキュリティの詳細設定]** ダイアログ ボックスで **[追加]** をクリックします。 > [!NOTE] > **[次のアクセス許可を、提案されたアクセス許可として使用する]** オプションを使用することで、ステージングのポリシーを作成できます。 これを行う方法の詳細については、このトピックの「管理: ポリシーの変更とステージング」セクションを参照してください。 6. **[アクセス許可のアクセス許可エントリ]** ダイアログ ボックスで **[プリンシパルの選択]** をクリックし、「**Authenticated Users**」と入力してから、 **[OK]** をクリックします。 7. **[アクセス許可のアクセス許可エントリ]** ダイアログ ボックスで **[条件の追加]** をクリックし、次の条件を追加します。 **[ユーザー]** **[国]** **[その他]** **[リソース]** **[国]** **[条件の追加]** をクリックします。 **[および]** **[ユーザー]** **[Department]** **[任意]** **[リソース]** **[department]** をクリックします。 **[アクセス許可]** を **[読み取り]** に設定します。 8. **[OK]** をクリックしてから、 **[追加]** をクリックします。 **[プリンシパルの選択]** をクリックし、**FinanceAdmin** と入力してから、 **[OK]** をクリックします。 9. **[変更]、[読み取りと実行]、[読み取り]、[書き込み]** の各アクセス許可を選択してから、 **[OK]** をクリックします。 10. **[追加]** をクリックし、 **[プリンシパルの選択]** をクリックし、**FinanceException** と入力してから、 **[OK]** をクリックします。 アクセス許可が **[読み取り]** および **[読み取りと実行]** になるように選択します。 11. **[OK]** を 3 回クリックして終了し、Active Directory 管理センターに戻ります。 ![ソリューションガイド](media/Deploy-a-Central-Access-Policy--Demonstration-Steps-/PowerShellLogoSmall.gif)***<em>Windows PowerShell の同等のコマンド</em>*** 次の Windows PowerShell コマンドレットは、前の手順と同じ機能を実行します。 書式上の制約のため、複数行にわたって折り返される場合でも、各コマンドレットは 1 行に入力してください。 ~~~ $countryClaimType = Get-ADClaimType country $departmentClaimType = Get-ADClaimType department $countryResourceProperty = Get-ADResourceProperty Country $departmentResourceProperty = Get-ADResourceProperty Department $currentAcl = "O:SYG:SYD:AR(A;;FA;;;OW)(A;;FA;;;BA)(A;;0x1200a9;;;S-1-5-21-1787166779-1215870801-2157059049-1113)(A;;0x1301bf;;;S-1-5-21-1787166779-1215870801-2157059049-1112)(A;;FA;;;SY)(XA;;0x1200a9;;;AU;((@USER." + $countryClaimType.Name + " Any_of @RESOURCE." + $countryResourceProperty.Name + ") && (@USER." + $departmentClaimType.Name + " Any_of @RESOURCE." + $departmentResourceProperty.Name + ")))" $resourceCondition = "(@RESOURCE." 
+ $departmentResourceProperty.Name + " Contains {`"Finance`"})" New-ADCentralAccessRule "Finance Documents Rule" -CurrentAcl $currentAcl -ResourceCondition $resourceCondition ~~~ > [!IMPORTANT] > 上記のコマンドレットの例では、グループ FinanceAdmin およびユーザーのセキュリティ ID (SID) は作成時に決定されるため、お客様の例では異なるものになります。 例えば、FinanceAdmin に指定されている SID 値 (S-1-5-21-1787166779-1215870801-2157059049-1113) は、ご使用の展開で作成する必要がある FinanceAdmin グループの実際の SID に置き換える必要があります。 Windows PowerShell を使用して、このグループの SID 値を検索し、その値を変数に割り当ててから、ここでその変数を使用できます。 詳細については、「 [Windows PowerShell ヒント: sid の](https://go.microsoft.com/fwlink/?LinkId=253545)使用」を参照してください。 これで、ユーザーが同じ国および同じ部門のドキュメントにアクセスできるようにする集約型アクセス規則が作成されました。 この規則により、FinanceAdmin グループはドキュメントを編集でき、FinanceException グループはドキュメントを読み取ることができます。 この規則のターゲットは、金融として分類されているドキュメントのみです。 #### <a name="to-add-a-central-access-rule-to-a-central-access-policy"></a>集約型アクセス規則を集約型アクセス ポリシーに追加するには 1. Active Directory 管理センターの左側のウィンドウで、 **[ダイナミック アクセス制御]** をクリックしてから、 **[集約型アクセス ポリシー]** をクリックします。 2. **[タスク]** ウィンドウの **[新規作成]** をクリックし、 **[集約型アクセス ポリシー]** をクリックします。 3. **[集約型アクセス ポリシーの作成:]** の **[名前]** ボックスに「**金融ポリシー**」と入力します。 4. **[メンバー集約型アクセス規則]** で **[追加]** をクリックします。 5. **[金融ドキュメント規則]** をダブルクリックして **[次の集約型アクセス規則を追加します]** リストに追加してから、 **[OK]** をクリックします。 6. **[OK]** をクリックして完了します。 これで、金融ポリシーという集約型アクセス ポリシーが作成されました。 ![ソリューションガイド](media/Deploy-a-Central-Access-Policy--Demonstration-Steps-/PowerShellLogoSmall.gif)***<em>Windows PowerShell の同等のコマンド</em>*** 次の Windows PowerShell コマンドレットは、前の手順と同じ機能を実行します。 書式上の制約のため、複数行にわたって折り返される場合でも、各コマンドレットは 1 行に入力してください。 ``` New-ADCentralAccessPolicy "Finance Policy" Add-ADCentralAccessPolicyMember -Identity "Finance Policy" -Member "Finance Documents Rule" ``` #### <a name="to-apply-the-central-access-policy-across-file-servers-by-using-group-policy"></a>グループ ポリシーを使用してファイル サーバー全体で集約型アクセス ポリシーを適用するには 1. 
**[スタート]** 画面の **[検索]** ボックスに「**グループ ポリシーの管理**」と入力します。 **[グループ ポリシーの管理]** をダブルクリックします。 > [!TIP] > **[管理ツールを表示]** の設定が無効になっていると、 **[管理ツール]** フォルダーとその内容は **[設定]** の結果に表示されません。 > [!TIP] > 運用環境では、ファイル サーバー組織単位 (OU) を作成し、このポリシーを適用するすべてのファイル サーバーをその OU に追加する必要があります。 次に、グループ ポリシーを作成し、この OU をそのポリシーに追加できます。 2. この手順では、テスト環境のセクション「[ドメイン コントローラーを作成する](Appendix-B--Setting-Up-the-Test-Environment.md#BKMK_Build)」で作成したグループ ポリシー オブジェクトを編集して、作成した集約型アクセス ポリシーを組み込みます。 グループポリシー管理エディターで、ドメイン内の組織単位 (この例では contoso.com) に移動して選択します。**グループポリシー管理**、**フォレスト: contoso.com**、**ドメイン**、 **contoso.com**、 **contoso**、 **fileserverou**です。 3. **[FlexibleAccessGPO]** を右クリックしてから、 **[編集]** をクリックします。 4. グループ ポリシー管理エディター ウィンドウで、**コンピューターの構成** に移動し、**ポリシー** を展開し、**Windows の設定** を展開し、**セキュリティの設定** をクリックします。 5. **[ファイル システム]** を展開し、 **[集約型アクセス ポリシー]** を右クリックしてから、 **[集約型アクセス ポリシーの管理]** をクリックします。 6. **[集約型アクセス ポリシー構成]** ダイアログ ボックスで **[金融ポリシー]** を追加してから、 **[OK]** をクリックします。 7. **[監査ポリシーの詳細な構成]** まで下にスクロールしてこれを展開します。 8. **[監査ポリシー]** を展開し、 **[オブジェクト アクセス]** を選択します。 9. **[集約型アクセス ポリシー ステージングの監査]** をダブルクリックします。 3 つすべてのチェック ボックスを選択してから、 **[OK]** をクリックします。 この手順により、システムは、集約型アクセス ステージング ポリシーに関連した監査イベントを受信できるようになります。 10. **[ファイル システム プロパティの監査]** をダブルクリックします。 3 つすべてのチェック ボックスを選択してから、 **[OK]** をクリックします。 11. グループ ポリシー管理エディターを閉じます。 これで、集約型アクセス ポリシーがグループ ポリシーに組み込まれました。 ドメインのドメインコントローラーで信頼性情報またはデバイスの承認データを提供するには、ダイナミックアクセス制御をサポートするようにドメインコントローラーを構成する必要があります。 #### <a name="to-enable-support-for-claims-and-compound-authentication-for-contosocom"></a>contoso.com の信頼性情報および複合認証のサポートを有効にするには 1. グループ ポリシーの管理を開き、**contoso.com** をクリックしてから、 **[ドメイン コントローラー]** をクリックします。 2. **[既定のドメイン コントローラー ポリシー]** を右クリックし、 **[編集]** をクリックします。 3. グループ ポリシー管理エディター ウィンドウで **コンピューターの構成** をダブルクリックし、**ポリシー** をダブルクリックし、**管理用テンプレート** をダブルクリックし、**システム** をダブルクリックしてから、**KDC** をダブルクリックします。 4. 
**[KDC で信頼性情報、複合認証、および Kerberos 防御をサポートする]** をダブルクリックします。 **[KDC で信頼性情報、複合認証、および Kerberos 防御をサポートする]** ダイアログ ボックスで **[有効]** をクリックし、 **[オプション]** ドロップ ダウン リストから **[サポートされています]** を選択します。 (集約型アクセス ポリシーでユーザーの信頼性情報を使用するには、この設定を有効にする必要があります)。 5. **[グループ ポリシーの管理]** を閉じます。 6. コマンド プロンプトを開き、「`gpupdate /force`」と入力します。 ## <a name="deploy-the-central-access-policy"></a><a name="BKMK_1.4"></a>集約型アクセスポリシーを展開する ||手順|例| |-|--------|-----------| |3.1|ファイル サーバー上の該当する共有フォルダーに CAP を割り当てる。|ファイル サーバー上の該当する共有フォルダーに集約型アクセス ポリシーを割り当てます。| |3.2|アクセスが適切に構成されていることを確認する。|さまざまな国および部門のユーザーについてアクセスを確認します。| この手順では、ファイル サーバーに集約型アクセス ポリシーを割り当てます。 以前の手順で作成した集約型アクセス ポリシーを受信するファイル サーバーにログオンし、ポリシーを共有フォルダーに割り当てます。 #### <a name="to-assign-a-central-access-policy-to-a-file-server"></a>ファイル サーバーに集約型アクセス ポリシーを割り当てるには 1. Hyper-V マネージャーでサーバー FILE1 に接続します。 Contoso\administrator を使用して、サーバーにログオンします。パスワードは<strong>pass@word1</strong>です。 2. 管理者特権でのコマンド プロンプトを開き、コマンド **gpupdate /force** を入力します。 これにより、グループ ポリシーの変更がサーバーで有効になります。 3. Active Directory からグローバル リソース プロパティーを更新する必要もあります。 管理者特権で Windows PowerShell ウィンドウを開き、「`Update-FSRMClassificationpropertyDefinition`」と入力します。 Enter キーをクリックしてから、Windows PowerShell を閉じます。 > [!TIP] > ファイル サーバーにログオンしてグローバル リソース プロパティを更新することもできます。 ファイル サーバーからグローバル リソース プロパティを更新するには、次のようにします。 > > 1. パスワード<strong>pass@word1</strong>を使用して、contoso\administrator としてファイルサーバー FILE1 にログオンします。 > 2. ファイル サーバー リソース マネージャーを開きます。 ファイル サーバー リソース マネージャーを開くには、 **[スタート]** をクリックし、「**ファイル サーバー リソース マネージャー**」と入力してから、 **[ファイル サーバー リソース マネージャー]** をクリックします。 > 3. ファイル サーバー リソース マネージャーで **[ファイル分類管理]** をクリックし、 **[分類プロパティ]** を右クリックしてから、 **[更新]** をクリックします。 4. エクスプローラーを開き、左側のウィンドウでドライブ D をクリックします。 **[Finance Documents]** フォルダーを右クリックし、 **[プロパティ]** をクリックします。 5. **[分類]** タブをクリックし、 **[Country]** をクリックしてから、 **[値]** フィールドで **[US]** を選択します。 6. 
**[Department]** をクリックしてから、 **[値]** フィールドで **[Finance]** を選択し、 **[適用]** をクリックします。 > [!NOTE] > 集約型アクセス ポリシーは、金融部門のファイルをターゲットにするように構成されていることに注意してください。 以前の手順では、Country 属性および Department 属性を使用して、フォルダー内のすべてのドキュメントをマークしています。 7. **[セキュリティ]** タブをクリックし、 **[詳細設定]** をクリックします。 **[集約型ポリシー]** タブをクリックします。 8. **[変更]** をクリックし、ドロップダウン メニューから **[金融ポリシー]** を選択してから、 **[適用]** をクリックします。 **[金融ドキュメント規則]** がポリシーにリストされているのを確認できます。 この項目を展開して、Active Directory の規則を作成したときに設定したすべてのアクセス許可を表示します。 9. **[OK]** をクリックしてエクスプローラーに戻ります。 次の手順では、アクセスが適切に構成されているかを確認します。 ユーザー アカウントには、適切な部門属性が設定されている必要があります (これは、Active Directory 管理センターを使用して設定します)。 新規ポリシーの結果を表示する最もシンプルな方法としては、エクスプローラーの **[有効なアクセス]** タブを使用します。 **[有効なアクセス]** タブには、特定のユーザー アカウントのアクセス権限が表示されます。 #### <a name="to-examine-the-access-for-various-users"></a>各種ユーザーのアクセスを調べるには 1. Hyper-V マネージャーでサーバー FILE1 に接続します。 contoso\administrator を使用してサーバーにログオンします。 エクスプローラーで D:\ に移動します。 **[Finance Documents]** フォルダーを右クリックしてから、 **[プロパティ]** をクリックします。 2. **[セキュリティ]** タブをクリックし、 **[詳細設定]** をクリックしてから、 **[有効なアクセス]** タブをクリックします。 3. 
ユーザーのアクセス許可を確認するには、 **[ユーザーの選択]** をクリックし、ユーザーの名前を入力して、有効な **[アクセスの表示]** をクリックします。有効なアクセス権が表示されます。 例 : - Myriam Delesalle (MDelesalle) は金融部門に属しており、フォルダーに対する読み取りアクセス権を必要としています。 - Miles Reid (MReid) は FinanceAdmin グループのメンバーであり、フォルダーに対する変更アクセス権を必要としています。 - Esther Valle (EValle) は金融部門に属していませんが、FinanceException グループのメンバーであり、読み取りアクセス権が必要です。 - Maira Wenzel (MWenzel) は金融部門に属しておらず、また FinanceAdmin グループと FinanceException グループのいずれのメンバーでもありません。 フォルダーに対するどのアクセスも備えていてはなりません。 有効なアクセス ウィンドウの **アクセスの制限者** とう最後の列に注目してください。 この列には、ユーザーのアクセス許可に影響を与えるゲートが示されます。 この場合、共有および NTFS のアクセス許可により、すべてのユーザーにフル コントロールが許可されています。 ただし、集約型アクセス ポリシーは、以前に構成した規則に基づいてアクセスを制限します。 ## <a name="maintain-change-and-stage-the-policy"></a><a name="BKMK_1.5"></a>管理: ポリシーを変更してステージングする |||| |-|-|-| |数値|手順|例| |4.1|クライアント用のデバイスの信頼性情報を構成する|グループ ポリシー設定を設定してデバイスの信頼性情報を有効にします。| |4.2|デバイスの信頼性情報を有効にする|デバイスの国の信頼性情報の種類を有効にします。| |4.3|変更する既存の集約型アクセス規則にステージング ポリシーを追加する|金融ドキュメント規則を変更して、ステージング ポリシーを追加します。| |4.4|ステージング ポリシーの結果を表示する|Velle のアクセス許可があるかどうかを確認します。| #### <a name="to-set-up-group-policy-setting-to-enable-claims-for-devices"></a>デバイスの信頼性情報を有効にするためにグループ ポリシー設定をセットアップするには 1. DC1 にログオンし、グループ ポリシーの管理を開き、**contoso.com** をクリックし、 **[既定のドメイン ポリシー]** をクリックし、右クリックして **[編集]** を選択します。 2. グループ ポリシー管理エディター ウィンドウで **コンピューターの構成**、**ポリシー**、**管理用テンプレート**、**システム**、**Kerberos** に移動します。 3. **[要求、複合認証、および Kerberos 防御の Kerberos クライアント サポート]** を選択し、 **[有効にする]** をクリックします。 #### <a name="to-enable-a-claim-for-devices"></a>デバイスの信頼性情報を有効にするには 1. Hyper-v マネージャーでサーバー DC1 を開き、パスワード<strong>pass@word1</strong>を使用して contoso\Administrator としてログオンします。 2. **[ツール]** メニューから Active Directory 管理センターを開きます。 3. **[ツリー ビュー]** をクリックし、 **[ダイナミック アクセス制御]** を展開し、 **[要求の種類]** をダブルクリックし、**country** 要求をダブルクリックします。 4. 
**[この種類の要求は次のクラスに対して発行できます]** で **[コンピューター]** チェック ボックスを選択します。 **[OK]** をクリックすると、 これで、 **[ユーザー]** と **[コンピューター]** の両チェック ボックスが選択されています。 デバイスでユーザーに加えて country 要求を使用できるようになりました。 次の手順では、ステージング ポリシー規則を作成します。 ステージング ポリシーを使用して、新規ポリシー エントリを有効にする前にその効果をモニターできます。 次の手順では、ステージング ポリシー エントリを作成し、共有フォルダーでその効果をモニターします。 #### <a name="to-create-a-staging-policy-rule-and-add-it-to-the-central-access-policy"></a>ステージング ポリシー規則を作成して集約型アクセス ポリシーに追加するには 1. Hyper-v マネージャーでサーバー DC1 を開き、パスワード<strong>pass@word1</strong>を使用して contoso\Administrator としてログオンします。 2. Active Directory 管理センターを開きます。 3. **[ツリー ビュー]** をクリックし、 **[ダイナミック アクセス制御]** を展開し、 **[集約型アクセス規則]** を選択します。 4. **[金融ドキュメント規則]** を右クリックしてから、 **[プロパティ]** をクリックします。 5. **[提案されたアクセス許可]** セクションで **[アクセス許可のステージング構成を有効にする]** チェック ボックスを選択し、 **[編集]** をクリックしてから、 **[追加]** をクリックします。 **[提案されたアクセス許可のアクセス許可エントリ]** ウィンドウで **[プリンシパルの選択]** リンクをクリックし、「**Authenticated Users**」と入力してから、 **[OK]** をクリックします。 6. **[条件の追加]** リンクをクリックし、次の条件を追加します。 **[ユーザー]** **[country]** **[いずれかを満たす]** **[リソース]** **[Country]** . 7. **[条件の追加]** を再度クリックし、次の条件を追加します。 **[および]** **[デバイス]** **[country]** **[いずれかを満たす]** **[リソース]** **[Country]** 8. **[条件の追加]** を再度クリックし、次の条件を追加します。 と **[ユーザー]** **[グループ]** **[すべてのメンバーのメンバー]** **[値]** \(**FinanceException**) 9. FinanceException グループを設定するには、 **[項目の追加]** をクリックし、 **[ユーザー、コンピューター、サービス アカウントまたはグループの選択]** ウィンドウで **FinanceException** と入力します。 10. **[アクセス許可]** をクリックし、 **[フル コントロール]** を選択し、 **[OK]** をクリックします。 11. 提案されたアクセス許可のセキュリティの詳細設定 ウィンドウで **FinanceException** を選択し、**削除** をクリックします。 12. 
**[OK]** を 2 回クリックして完了します。 ![ソリューションガイド](media/Deploy-a-Central-Access-Policy--Demonstration-Steps-/PowerShellLogoSmall.gif)***<em>Windows PowerShell の同等のコマンド</em>*** 次の Windows PowerShell コマンドレットは、前の手順と同じ機能を実行します。 書式上の制約のため、複数行にわたって折り返される場合でも、各コマンドレットは 1 行に入力してください。 ``` Set-ADCentralAccessRule -Identity: "CN=FinanceDocumentsRule,CN=CentralAccessRules,CN=ClaimsConfiguration,CN=Configuration,DC=Contoso.com" -ProposedAcl: "O:SYG:SYD:AR(A;;FA;;;BA)(A;;FA;;;SY)(A;;0x1301bf;;;S-1-21=1426421603-1057776020-1604)" -Server: "WIN-2R92NN8VKFP.Contoso.com" ``` > [!NOTE] > 上記のコマンドレットの例では、サーバー値はテスト ラボ環境のサーバーを反映しています。 Windows PowerShell 履歴ビューアーを使用して、Active Directory 管理センターで実行する各手順の Windows PowerShell コマンドレットを検索できます。 詳細については、「 [Windows PowerShell 履歴ビューアー](https://technet.microsoft.com/library/hh831702) 」を参照してください。 この提案されたアクセス許可セットでは、FinanceException グループのメンバーは、属している国のファイルにドキュメントとして同じ国のデバイスを使用してアクセスした場合、そのファイルに対するフル アクセスを備えています。 金融部門のユーザーがファイルにアクセスしようとした場合、ファイル サーバーのセキュリティ ログに監査エントリが書き込まれます。 ただし、ポリシーがステージングのレベルから上げられるまで、セキュリティ設定は適用されません。 次の手順では、ステージング ポリシーの結果を確認します。 現在の規則に基づいてアクセス許可を備えているユーザー名を使用して、共有フォルダーにアクセスします。 Esther Valle (EValle) は FinanceException のメンバーであり、現在、読み取り権限を備えています。 このステージング ポリシーでは、EValle はどの権限も備えていてはなりません。 #### <a name="to-verify-the-results-of-the-staging-policy"></a>ステージング ポリシーの結果を確認するには 1. Hyper-v マネージャーでファイルサーバー FILE1 に接続し、パスワード<strong>pass@word1</strong>を使用して contoso\administrator としてログオンします。 2. コマンド プロンプト ウィンドウを開き、**gpupdate /force** と入力します。 これにより、グループ ポリシーの変更がサーバーで有効になります。 3. Hyper-V マネージャーでサーバー CLIENT1 に接続します。 現在ログオンしているユーザーをログオフします。 仮想マシン CLIENT1 を再起動します。 次に、contoso\EValle pass@word1を使用してコンピューターにログオンします。 4. デスクトップのショートカットをダブルクリックして、\FILE1\Finance ドキュメントを \\します。 EValle は引き続きファイルにアクセスできます。 FILE1 に切り替えます。 5. 
デスクトップのショートカットから **[イベント ビューアー]** を開きます。 **[Windows ログ]** を展開してから、 **[セキュリティ]** を選択します。 **[集約型アクセスポリシーステージング]** タスクカテゴリで、**イベント ID 4818**のエントリを開きます。 EValle にアクセスが許可されていたことが分かります。ただし、ステージング ポリシーに従えば、このユーザーはアクセスが拒否されていました。 ## <a name="next-steps"></a>次の手順 System Center Operations Manager などの集約型サーバー管理システムがある場合、イベントをモニターするように構成することもできます。 これにより、管理者は、集約型アクセス ポリシーを適用する前にその効果をモニターできます。
50.797267
410
0.725336
yue_Hant
0.635793
5dfcc9c38cc7d8684186d274fc26963f9fdc44e8
12,534
md
Markdown
articles/virtual-machines/workloads/oracle/oracle-database-quick-create.md
jhomarolo/azure-docs.pt-br
d11ab7fab56d90666ea619c6b12754b7761aca97
[ "CC-BY-4.0", "MIT" ]
1
2019-05-02T14:26:54.000Z
2019-05-02T14:26:54.000Z
articles/virtual-machines/workloads/oracle/oracle-database-quick-create.md
jhomarolo/azure-docs.pt-br
d11ab7fab56d90666ea619c6b12754b7761aca97
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/workloads/oracle/oracle-database-quick-create.md
jhomarolo/azure-docs.pt-br
d11ab7fab56d90666ea619c6b12754b7761aca97
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Criar um banco de dados Oracle em uma VM do Azure | Microsoft Docs description: Coloque rapidamente em funcionamento um banco de dados Oracle Database 12c no ambiente do Azure. services: virtual-machines-linux documentationcenter: virtual-machines author: romitgirdhar manager: jeconnoc editor: '' tags: azure-resource-manager ms.assetid: '' ms.service: virtual-machines-linux ms.devlang: na ms.topic: article ms.tgt_pltfrm: vm-linux ms.workload: infrastructure ms.date: 08/02/2018 ms.author: rogirdh ms.openlocfilehash: 490ac613adac968cc323c2d8351b59aece181b68 ms.sourcegitcommit: 3aa0fbfdde618656d66edf7e469e543c2aa29a57 ms.translationtype: HT ms.contentlocale: pt-BR ms.lasthandoff: 02/05/2019 ms.locfileid: "55734378" --- # <a name="create-an-oracle-database-in-an-azure-vm"></a>Criar um Banco de Dados Oracle em uma VM do Azure Esse guia detalha como usar a CLI do Azure para implantar uma máquina virtual do Azure a partir da [imagem da galeria do marketplace da Oracle](https://azuremarketplace.microsoft.com/marketplace/apps/Oracle.OracleDatabase12102EnterpriseEdition?tab=Overview) a fim de criar um banco de dados Oracle 12c. Depois que o servidor for implantado, conecte-se por meio de SSH para configurar o banco de dados Oracle. Se você não tiver uma assinatura do Azure, crie uma [conta gratuita](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) antes de começar. [!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)] Se você optar por instalar e usar a CLI localmente, este guia de início rápido exigirá a execução da CLI do Azure versão 2.0.4 ou posterior. Execute `az --version` para encontrar a versão. Se você precisa instalar ou atualizar, consulte [Instalar a CLI do Azure]( /cli/azure/install-azure-cli). ## <a name="create-a-resource-group"></a>Criar um grupo de recursos Crie um grupo de recursos com o comando [az group create](/cli/azure/group). 
Um grupo de recursos do Azure é um contêiner lógico no qual os recursos do Azure são implantados e gerenciados. O exemplo a seguir cria um grupo de recursos chamado *myResourceGroup* no local *eastus*. ```azurecli-interactive az group create --name myResourceGroup --location eastus ``` ## <a name="create-virtual-machine"></a>Criar máquina virtual Para criar uma VM (máquina virtual), use o comando [az vm create](/cli/azure/vm). O exemplo a seguir cria uma VM chamada `myVM`. Ele também criará chaves SSH, se elas ainda não existirem em um local de chave padrão. Para usar um conjunto específico de chaves, use a opção `--ssh-key-value`. ```azurecli-interactive az vm create \ --resource-group myResourceGroup \ --name myVM \ --image Oracle:Oracle-Database-Ee:12.1.0.2:latest \ --size Standard_DS2_v2 \ --admin-username azureuser \ --generate-ssh-keys ``` Depois de criar a VM, a CLI do Azure exibe informações semelhantes ao exemplo a seguir. Observe o valor de `publicIpAddress`. Você pode usar esse endereço para acessar a VM. ```azurecli { "fqdns": "", "id": "/subscriptions/{snip}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM", "location": "westus", "macAddress": "00-0D-3A-36-2F-56", "powerState": "VM running", "privateIpAddress": "10.0.0.4", "publicIpAddress": "13.64.104.241", "resourceGroup": "myResourceGroup" } ``` ## <a name="connect-to-the-vm"></a>Conectar-se à VM Para criar uma sessão SSH com a VM, use o comando a seguir. Substitua o endereço IP pelo valor `publicIpAddress` para a sua VM. ```bash ssh azureuser@<publicIpAddress> ``` ## <a name="create-the-database"></a>Criar o banco de dados O software Oracle já está instalado na imagem do Marketplace. Crie um banco de dados de exemplo da seguinte maneira. 1. Altere para o superusuário *oracle* e inicialize o ouvinte para o registro em log: ```bash $ sudo su - oracle $ lsnrctl start ``` A saída deverá ser semelhante a esta: ```bash Copyright (c) 1991, 2014, Oracle. All rights reserved. 
Starting /u01/app/oracle/product/12.1.0/dbhome_1/bin/tnslsnr: please wait... TNSLSNR for Linux: Version 12.1.0.2.0 - Production Log messages written to /u01/app/oracle/diag/tnslsnr/myVM/listener/alert/log.xml Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=myVM.twltkue3xvsujaz1bvlrhfuiwf.dx.internal.cloudapp.net)(PORT=1521))) Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production Start Date 23-MAR-2017 15:32:08 Uptime 0 days 0 hr. 0 min. 0 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Log File /u01/app/oracle/diag/tnslsnr/myVM/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=myVM.twltkue3xvsujaz1bvlrhfuiwf.dx.internal.cloudapp.net)(PORT=1521))) The listener supports no services The command completed successfully ``` 2. Crie o banco de dados: ```bash dbca -silent \ -createDatabase \ -templateName General_Purpose.dbc \ -gdbname cdb1 \ -sid cdb1 \ -responseFile NO_VALUE \ -characterSet AL32UTF8 \ -sysPassword OraPasswd1 \ -systemPassword OraPasswd1 \ -createAsContainerDatabase true \ -numberOfPDBs 1 \ -pdbName pdb1 \ -pdbAdminPassword OraPasswd1 \ -databaseType MULTIPURPOSE \ -automaticMemoryManagement false \ -storageType FS \ -ignorePreReqs ``` A criação do banco de dados demora alguns minutos. 3. Definir variáveis do Oracle Antes de se conectar, você precisa definir duas variáveis de ambiente: *ORACLE_HOME* e *ORACLE_SID*. ```bash ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME ORACLE_SID=cdb1; export ORACLE_SID ``` Você também pode adicionar as variáveis ORACLE_HOME e ORACLE_SID ao arquivo .bashrc. Isso economiza as variáveis de ambiente para entradas futuras. Confirme se as instruções a seguir foram adicionadas ao arquivo `~/.bashrc` usando o editor de sua escolha. ```bash # Add ORACLE_HOME. 
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 # Add ORACLE_SID. export ORACLE_SID=cdb1 ``` ## <a name="oracle-em-express-connectivity"></a>Conectividade com o Oracle EM Express Para ter uma ferramenta de gerenciamento de GUI que você pode usar para explorar o banco de dados, configure o Oracle EM Express. Para se conectar ao Oracle EM Express, a porta deve ser configurada primeiramente no Oracle. 1. Conecte-se ao seu banco de dados usando sqlplus: ```bash sqlplus / as sysdba ``` 2. Após a conexão, defina a porta 5502 como EM Express ```bash exec DBMS_XDB_CONFIG.SETHTTPSPORT(5502); ``` 3. Abra o contêiner PDB1, se ainda não estiver aberto, mas verifique o status primeiro: ```bash select con_id, name, open_mode from v$pdbs; ``` A saída deverá ser semelhante a esta: ```bash CON_ID NAME OPEN_MODE ----------- ------------------------- ---------- 2 PDB$SEED READ ONLY 3 PDB1 MOUNT ``` 4. Se o OPEN_MODE de `PDB1` não for READ WRITE, execute os comandos a seguir para abrir PDB1: ```bash alter session set container=pdb1; alter database open; ``` Você precisa digitar `quit` para encerrar a sessão de sqlplus e o digitar `exit` fazer logoff do usuário do Oracle. ## <a name="automate-database-startup-and-shutdown"></a>Automatizar a inicialização e o desligamento do banco de dados Por padrão, o banco de dados Oracle não inicia automaticamente quando você reinicia a VM. Para configurar o banco de dados Oracle para iniciar automaticamente, primeiro entre como raiz. Em seguida, crie e atualize alguns arquivos do sistema. 1. Conectar-se como raiz ```bash sudo su - ``` 2. Usando o editor de sua escolha, edite o arquivo `/etc/oratab` e altere o padrão `N` para `Y`: ```bash cdb1:/u01/app/oracle/product/12.1.0/dbhome_1:Y ``` 3. Crie um arquivo chamado `/etc/init.d/dbora` e cole o conteúdo a seguir: ``` #!/bin/sh # chkconfig: 345 99 10 # Description: Oracle auto start-stop script. # # Set ORA_HOME to be equivalent to $ORACLE_HOME. 
ORA_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
ORA_OWNER=oracle

case "$1" in
'start')
    # Start the Oracle databases:
    # The following command assumes that the Oracle sign-in
    # will not prompt the user for any values.
    # Remove "&" if you don't want startup as a background process.
    su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME" &
    touch /var/lock/subsys/dbora
    ;;
'stop')
    # Stop the Oracle databases:
    # The following command assumes that the Oracle sign-in
    # will not prompt the user for any values.
    su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut $ORA_HOME" &
    rm -f /var/lock/subsys/dbora
    ;;
esac
```

4. Change the file permissions with *chmod* as follows:

```bash
chgrp dba /etc/init.d/dbora
chmod 750 /etc/init.d/dbora
```

5. Create symbolic links for startup and shutdown as follows:

```bash
ln -s /etc/init.d/dbora /etc/rc.d/rc0.d/K01dbora
ln -s /etc/init.d/dbora /etc/rc.d/rc3.d/S99dbora
ln -s /etc/init.d/dbora /etc/rc.d/rc5.d/S99dbora
```

6. To test the changes, restart the VM:

```bash
reboot
```

## <a name="open-ports-for-connectivity"></a>Open ports for connectivity

The last task is to configure some external endpoints. To configure the Azure Network Security Group that protects the VM, first exit the SSH session on the VM (you should have been dropped out of SSH when the VM rebooted in the previous step).

1. To open the endpoint that you use to access the Oracle database remotely, create a Network Security Group rule with [az network nsg rule create](/cli/azure/network/nsg/rule) as follows:

```azurecli-interactive
az network nsg rule create \
    --resource-group myResourceGroup\
    --nsg-name myVmNSG \
    --name allow-oracle \
    --protocol tcp \
    --priority 1001 \
    --destination-port-range 1521
```

2.
To open the endpoint that you use to access Oracle EM Express remotely, create a Network Security Group rule with [az network nsg rule create](/cli/azure/network/nsg/rule) as follows:

```azurecli-interactive
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myVmNSG \
    --name allow-oracle-EM \
    --protocol tcp \
    --priority 1002 \
    --destination-port-range 5502
```

3. If needed, get the public IP address of your VM with [az network public-ip show](/cli/azure/network/public-ip) as follows:

```azurecli-interactive
az network public-ip show \
    --resource-group myResourceGroup \
    --name myVMPublicIP \
    --query [ipAddress] \
    --output tsv
```

4. Connect to EM Express from your browser. Make sure your browser is compatible with EM Express (Flash must be installed):

```
https://<VM ip address or hostname>:5502/em
```

You can log in by using the **SYS** account and checking the **as sysdba** checkbox. Use the password **OraPasswd1** that you set during installation.

![Screenshot of the Oracle OEM Express login page](./media/oracle-quick-start/oracle_oem_express_login.png)

## <a name="clean-up-resources"></a>Clean up resources

Once you have finished exploring your first Oracle database on Azure and the VM is no longer needed, you can use the [az group delete](/cli/azure/group) command to remove the resource group, the VM, and all related resources.

```azurecli-interactive
az group delete --name myResourceGroup
```

## <a name="next-steps"></a>Next steps

Learn about other [Oracle solutions on Azure](oracle-considerations.md).

Try the [Installing and configuring Oracle Automated Storage Management](configure-oracle-asm.md) tutorial.
37.867069
409
0.689963
por_Latn
0.938952
5dfced0b05fbef270e980913450db45fea52e888
86
md
Markdown
docs/content/recipes/_index.md
framefactory/cook
8aec909551da4c778995321909cf36acd7ca9d2c
[ "Apache-2.0" ]
50
2018-10-22T09:44:39.000Z
2022-03-22T17:40:31.000Z
docs/content/recipes/_index.md
framefactory/cook
8aec909551da4c778995321909cf36acd7ca9d2c
[ "Apache-2.0" ]
17
2019-12-19T21:47:01.000Z
2022-03-04T18:21:35.000Z
docs/content/recipes/_index.md
gemtwink/dpo-cook
b9e12a36b273bd148feb109277fe2bfa3370eccf
[ "Apache-2.0" ]
3
2019-07-02T06:47:41.000Z
2021-07-20T18:30:58.000Z
---
title: Recipes
---

The Cook ships with a number of predefined processing recipes.
17.2
62
0.744186
eng_Latn
0.998828
5dfd68b2b747647e5b1ccc17bc2948bd75b87b70
2,741
md
Markdown
docs/framework/unmanaged-api/fusion/asm-display-flags-enumeration.md
mitharp/docs.ru-ru
6ca27c46e446ac399747e85952a3135193560cd8
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/fusion/asm-display-flags-enumeration.md
mitharp/docs.ru-ru
6ca27c46e446ac399747e85952a3135193560cd8
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/fusion/asm-display-flags-enumeration.md
mitharp/docs.ru-ru
6ca27c46e446ac399747e85952a3135193560cd8
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: ASM_DISPLAY_FLAGS Enumeration
ms.date: 03/30/2017
api_name:
- ASM_DISPLAY_FLAGS
api_location:
- fusion.dll
api_type:
- COM
f1_keywords:
- ASM_DISPLAY_FLAGS
helpviewer_keywords:
- ASM_DISPLAY_FLAGS enumeration [.NET Framework fusion]
ms.assetid: dbade6c9-9d26-4a79-9fd2-46108edd12d7
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 0871a06c6e27089d9e8fea6726d1d7b37fb75120
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 01/23/2019
ms.locfileid: "54561755"
---
# <a name="asmdisplayflags-enumeration"></a>ASM_DISPLAY_FLAGS Enumeration

Indicates the version, build, culture, signature, and so on, of the assembly whose display name will be retrieved by the [IAssemblyName::GetDisplayName](../../../../docs/framework/unmanaged-api/fusion/iassemblyname-getdisplayname-method.md) method.

## <a name="syntax"></a>Syntax

```
typedef enum
{
    ASM_DISPLAYF_VERSION               = 0x01,
    ASM_DISPLAYF_CULTURE               = 0x02,
    ASM_DISPLAYF_PUBLIC_KEY_TOKEN      = 0x04,
    ASM_DISPLAYF_PUBLIC_KEY            = 0x08,
    ASM_DISPLAYF_CUSTOM                = 0x10,
    ASM_DISPLAYF_PROCESSORARCHITECTURE = 0x20,
    ASM_DISPLAYF_LANGUAGEID            = 0x40,
    ASM_DISPLAYF_RETARGET              = 0x80,
    ASM_DISPLAYF_CONFIG_MASK           = 0x100,
    ASM_DISPLAYF_MVID                  = 0x200,
    ASM_DISPLAYF_FULL =
        ASM_DISPLAYF_VERSION |
        ASM_DISPLAYF_CULTURE |
        ASM_DISPLAYF_PUBLIC_KEY_TOKEN |
        ASM_DISPLAYF_RETARGET |
        ASM_DISPLAYF_PROCESSORARCHITECTURE
} ASM_DISPLAY_FLAGS;
```

## <a name="remarks"></a>Remarks

`ASM_DISPLAYF_FULL` reflects any changes made to the version of the [IAssemblyName](../../../../docs/framework/unmanaged-api/fusion/iassemblyname-interface.md) object. Do not assume that the returned value is immutable.

## <a name="requirements"></a>Requirements

**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
**Header:** Fusion.h

**Library:** Included as a resource in MsCorEE.dll

**.NET Framework Versions:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]

## <a name="see-also"></a>See also

- [IAssemblyName Interface](../../../../docs/framework/unmanaged-api/fusion/iassemblyname-interface.md)
- [Fusion Enumerations](../../../../docs/framework/unmanaged-api/fusion/fusion-enumerations.md)
39.724638
248
0.662897
yue_Hant
0.731297
5dfe8ca244511f0428688278728842c570995df8
2,762
md
Markdown
README.md
tequilarista/google-kubernetes-engine-plugin
6511d6c434a61f0bd81ba44472af8ada2f90caa2
[ "Apache-2.0" ]
37
2019-01-11T00:37:54.000Z
2022-03-19T03:58:32.000Z
README.md
tequilarista/google-kubernetes-engine-plugin
6511d6c434a61f0bd81ba44472af8ada2f90caa2
[ "Apache-2.0" ]
150
2019-02-02T01:33:21.000Z
2022-03-31T16:02:08.000Z
README.md
tequilarista/google-kubernetes-engine-plugin
6511d6c434a61f0bd81ba44472af8ada2f90caa2
[ "Apache-2.0" ]
66
2019-01-10T18:20:47.000Z
2022-03-06T01:31:34.000Z
<!-- Copyright 2019 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Google Kubernetes Engine Plugin for Jenkins [![Jenkins Plugin](https://img.shields.io/jenkins/plugin/v/google-kubernetes-engine.svg)](https://plugins.jenkins.io/google-kubernetes-engine) [![Jenkins Plugin Installs](https://img.shields.io/jenkins/plugin/i/google-kubernetes-engine.svg?color=blue)](https://plugins.jenkins.io/google-kubernetes-engine) The Google Kubernetes Engine (GKE) Plugin allows you to deploy build artifacts to Kubernetes clusters running in GKE with Jenkins. ## Documentation Please see the [Google Kubernetes Engine Plugin](docs/Home.md) docs for complete documentation. ## Installation 1. Go to **Manage Jenkins** then **Manage Plugins**. 1. (Optional) Make sure the plugin manager has updated data by clicking the **Check now** button. 1. In the Plugin Manager, click the **Available** tab and look for the "Google Kubernetes Engine Plugin". 1. Check the box under the **Install** column and click the **Install without restart** button. 1. If the plugin does not appear under **Available**, make sure it appears under **Installed** and is enabled. ## Plugin Source Build Installation See [Plugin Source Build Installation](docs/SourceBuildInstallation.md) to build and install from source. ## Usage See the [Usage](docs/Home.md#usage) documentation for how to create a `Deploy to GKE` build step. 
## Feature requests and bug reports Please file feature requests and bug reports as [github issues](https://github.com/jenkinsci/google-kubernetes-engine-plugin/issues). **NOTE**: Starting with version 0.7 of this plugin, version 0.9 or higher of the [Google OAuth Credentials plugin](https://github.com/jenkinsci/google-oauth-plugin) must be used. Older versions of this plugin are still compatible with version 0.9 of the OAuth plugin. ## Community The GCP Jenkins community uses the **#gcp-jenkins** slack channel on [https://googlecloud-community.slack.com](https://googlecloud-community.slack.com) to ask questions and share feedback. Invitation link available here: [gcp-slack](https://cloud.google.com/community#home-support). ## Contributing See [CONTRIBUTING.md](CONTRIBUTING.md) ## License See [LICENSE](LICENSE)
38.361111
162
0.764663
eng_Latn
0.792951
5dffce08b98e40fa0e66fdc7ed6596e1ceca62c0
299
md
Markdown
content/pages/about-me/index.md
Uvacoder/port-photogrammetry-gatsby
092c2563cfc3d607903236d252d253a996abb1ea
[ "MIT" ]
1
2021-09-05T10:01:15.000Z
2021-09-05T10:01:15.000Z
content/pages/about-me/index.md
Uvacoder/port-photogrammetry-gatsby
092c2563cfc3d607903236d252d253a996abb1ea
[ "MIT" ]
null
null
null
content/pages/about-me/index.md
Uvacoder/port-photogrammetry-gatsby
092c2563cfc3d607903236d252d253a996abb1ea
[ "MIT" ]
1
2019-11-23T01:22:34.000Z
2019-11-23T01:22:34.000Z
---
title: "About Me"
---

Hi. I'm Rick van Dam and this is my blog. I'm a full-time .NET developer at Aviva Solutions. In my free time I'm also actively programming on [GitHub](https://github.com/barsonax), or I take my camera somewhere to take beautiful photos.

![A park in Lisbon](./park.jpg)
24.916667
143
0.712375
eng_Latn
0.992027
b9000d390a29686d8f0c58dd696ab84a7d8b648c
380
md
Markdown
docs/1.19/core-v1-podReadinessGate.md
smuth4/k8s-alpha
5e0d0738721f3c190198818ef7f9414dcee9167f
[ "Apache-2.0" ]
70
2020-05-13T10:44:17.000Z
2021-11-15T10:42:11.000Z
docs/1.19/core-v1-podReadinessGate.md
smuth4/k8s-alpha
5e0d0738721f3c190198818ef7f9414dcee9167f
[ "Apache-2.0" ]
2
2020-11-19T16:36:56.000Z
2021-07-02T11:55:44.000Z
docs/1.19/core-v1-podReadinessGate.md
smuth4/k8s-alpha
5e0d0738721f3c190198818ef7f9414dcee9167f
[ "Apache-2.0" ]
10
2020-06-23T09:05:57.000Z
2021-06-02T00:02:55.000Z
---
permalink: /1.19/core/v1/podReadinessGate/
---

# package podReadinessGate

PodReadinessGate contains the reference to a pod condition

## Index

* [`fn withConditionType(conditionType)`](#fn-withconditiontype)

## Fields

### fn withConditionType

```ts
withConditionType(conditionType)
```

ConditionType refers to a condition in the pod's condition list with matching type.
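For intuition, the generated function just builds an object with the `conditionType` field set, roughly like the TypeScript sketch below (this analogue is our assumption based on the signature above, not the actual jsonnet source):

```typescript
// Hypothetical analogue of the jsonnet `withConditionType` function:
// it yields a pod readiness gate fragment with the given condition type.
function withConditionType(conditionType: string): { conditionType: string } {
  return { conditionType };
}

const gate = withConditionType("example.com/feature-1");
console.log(JSON.stringify(gate)); // -> {"conditionType":"example.com/feature-1"}
```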
18.095238
83
0.763158
eng_Latn
0.757439
b90045eda3a70efd1fe474157e9dfe96722a1152
203
md
Markdown
_posts/0000-01-02-Gabriel-Lukaszewicz.md
Gabriel-Lukaszewicz/github-slideshow
ef0048e86f6746423ad234dafb57abbca9781046
[ "MIT" ]
null
null
null
_posts/0000-01-02-Gabriel-Lukaszewicz.md
Gabriel-Lukaszewicz/github-slideshow
ef0048e86f6746423ad234dafb57abbca9781046
[ "MIT" ]
5
2020-06-24T22:02:08.000Z
2022-02-26T08:53:28.000Z
_posts/0000-01-02-Gabriel-Lukaszewicz.md
Gabriel-Lukaszewicz/github-slideshow
ef0048e86f6746423ad234dafb57abbca9781046
[ "MIT" ]
null
null
null
---
layout: slide
title: "Welcome to our second slide!"
---

Hello world!

> What a wonderful world.

**that's what she said!**

```python
if ACAB:
    print("F 12")
```

Use the left arrow to go back!
12.6875
37
0.625616
eng_Latn
0.994109
b900a667fb2f1ccd389c95a4381fc690a65762eb
3,015
md
Markdown
Create AudioBook from pdf/README.md
chaitanyashimpi/Youtube-Projects
0b807911053b4e0adf2de112272aca2e38fa372f
[ "MIT" ]
350
2020-09-09T00:27:49.000Z
2022-03-28T06:42:46.000Z
Create AudioBook from pdf/README.md
pepelawycliffe/Youtube-Projects
2709584654c4b9e71bd6f0edec8d447cb87ed39b
[ "MIT" ]
7
2020-09-29T13:23:35.000Z
2022-03-21T04:31:20.000Z
Create AudioBook from pdf/README.md
pepelawycliffe/Youtube-Projects
2709584654c4b9e71bd6f0edec8d447cb87ed39b
[ "MIT" ]
242
2020-09-09T14:09:33.000Z
2022-03-27T19:04:18.000Z
# Create your own Audiobook from any pdf with Python ![Watch the video](https://github.com/ayushi7rawat/Youtube-Projects/blob/master/Create%20AudioBook%20from%20pdf/cover.png) Code Walkthrough: ========================== you can find a [step by step walkthrough in my Blog](https://ayushirawat.com/create-your-own-audiobook-from-any-pdf-with-python) Refer the [YouTube video tutorial at for better Understanding](https://www.youtube.com/watch?v=ZWjXbe9DOVA) for the same Required Libraries: ========================== ``` pip install PyPDF2 pip install pyttsx3 ``` LICENSE: ========================== Copyright (c) 2020 Ayushi Rawat This project is licensed under the MIT License My Digital Garden: ========================== You can find my blogs at my [Website](https://ayushirawat.com). - [GitHub CLI 1.0: All you need to know](https://ayushirawat.com/github-cli-10-all-you-need-to-know) - [Python 3.9: All You need to know](https://ayushirawat.com/python-39-all-you-need-to-know) - [How to make your own Google Chrome Extension](https://ayushirawat.com/how-to-make-your-own-google-chrome-extension-1) ### The Developer Dictionary 🌱 Check out my latest videos on [YouTube](https://www.youtube.com/ayushirawat): - [How to make your own Google Chrome Extension](https://www.youtube.com/watch?v=ZWbPtPHR4hY) - [Web Scraping Coronavirus Data into MS Excel](https://www.youtube.com/watch?v=CTRYYz1u7Y8) - [September Leetcode playlist](https://www.youtube.com/playlist?list=PLjaO05BrsbIP4_rYhYjB95q-IpxoIXmlm) <p align="center"> <b><i>Let's connect! 
Find me on the web.</i></b> [<img height="30" src="https://img.shields.io/badge/twitter-%231DA1F2.svg?&style=for-the-badge&logo=twitter&logoColor=white" />][twitter] [<img height="30" src = "https://img.shields.io/badge/Youtube-%23E4405F.svg?&style=for-the-badge&logo=Youtube&logoColor=white">][Youtube] [<img height="30" src="https://img.shields.io/badge/Hashnode-%230077B5.svg?&style=for-the-badge&logo=Hashnode&logoColor=white" />][Hashnode] [<img height="30" src = "https://img.shields.io/badge/gmail-c14438?&style=for-the-badge&logo=gmail&logoColor=white">][gmail] [<img height="30" src="https://img.shields.io/badge/linkedin-blue.svg?&style=for-the-badge&logo=linkedin&logoColor=white" />][LinkedIn] [<img height="30" src="https://img.shields.io/badge/-Medium-000000.svg?&style=for-the-badge&logo=Medium&logoColor=white" />][Medium] [<img height="30" src = "https://img.shields.io/badge/Facebook-036be4.svg?&style=for-the-badge&logo=facebook&logoColor=white">][Facebook] <br /> <hr /> [twitter]: https://twitter.com/ayushi7rawat [youtube]: https://youtube.com/ayushirawat [Hashnode]: https://ayushirawat.com [gmail]: https://gmail.com [linkedin]: https://www.linkedin.com/in/ayushi7rawat/ [Medium]: https://medium.com/@ayushi7rawat [Facebook]: https://www.facebook.com/ayushi7rawat If you have any Queries or Suggestions, feel free to reach out to me. <h3 align="center">Show some &nbsp;❤️&nbsp; by starring some of the repositories!</h3>
47.109375
140
0.720398
yue_Hant
0.464415
b900dcb70207408b0bf57a13e751e90b8bd0eb27
2,096
markdown
Markdown
doc/tutorials/calib3d/camera_calibration_pattern/camera_calibration_pattern.markdown
Gabrielreisribeiro/opencv
39df6d2f035ca6822494fe4234e163895828b23b
[ "Apache-2.0" ]
null
null
null
doc/tutorials/calib3d/camera_calibration_pattern/camera_calibration_pattern.markdown
Gabrielreisribeiro/opencv
39df6d2f035ca6822494fe4234e163895828b23b
[ "Apache-2.0" ]
null
null
null
doc/tutorials/calib3d/camera_calibration_pattern/camera_calibration_pattern.markdown
Gabrielreisribeiro/opencv
39df6d2f035ca6822494fe4234e163895828b23b
[ "Apache-2.0" ]
1
2021-04-30T18:00:41.000Z
2021-04-30T18:00:41.000Z
Create a calibration pattern {#tutorial_camera_calibration_pattern}
=========================================

@tableofcontents

@next_tutorial{tutorial_camera_calibration_square_chess}

|    |    |
| -: | :- |
| Original author | Laurent Berger |
| Compatibility | OpenCV >= 3.0 |

The goal of this tutorial is to learn how to create a calibration pattern.

You can find a chessboard pattern in https://github.com/opencv/opencv/blob/5.x/doc/pattern.png

You can find a circleboard pattern in https://github.com/opencv/opencv/blob/5.x/doc/acircles_pattern.png

Create your own pattern
---------------

Now, if you want to create your own pattern, you will need Python to use https://github.com/opencv/opencv/blob/5.x/doc/pattern_tools/gen_pattern.py

Examples:

Create a checkerboard pattern in file chessboard.svg with 9 rows, 6 columns and a square size of 20mm:

    python gen_pattern.py -o chessboard.svg --rows 9 --columns 6 --type checkerboard --square_size 20

Create a circle board pattern in file circleboard.svg with 7 rows, 5 columns and a radius of 15mm:

    python gen_pattern.py -o circleboard.svg --rows 7 --columns 5 --type circles --square_size 15

Create a circle board pattern in file acircleboard.svg with 7 rows, 5 columns, a square size of 10mm and less spacing between circles:

    python gen_pattern.py -o acircleboard.svg --rows 7 --columns 5 --type acircles --square_size 10 --radius_rate 2

Create a radon checkerboard for findChessboardCornersSB() with markers in (7 4), (7 5), (8 5) cells:

    python gen_pattern.py -o radon_checkerboard.svg --rows 10 --columns 15 --type radon_checkerboard -s 12.1 -m 7 4 7 5 8 5

If you want to change the measurement unit, use the -u option (mm, inches, px, m).

If you want to change the page size, use the -w and -h options.

@cond HAVE_opencv_aruco
If you want to create a ChArUco board read @ref tutorial_charuco_detection "tutorial Detection of ChArUco Corners" in opencv_contrib tutorial.
@endcond
@cond !HAVE_opencv_aruco
If you want to create a ChArUco board read tutorial Detection of ChArUco Corners in opencv_contrib tutorial.
@endcond
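Note that `--square_size` fixes the physical scale of the generated SVG: the squares area of a checkerboard spans columns × square_size by rows × square_size. A quick sketch of that arithmetic (`board_extent_mm` is our own illustrative helper, not part of gen_pattern.py):

```python
def board_extent_mm(rows: int, columns: int, square_size_mm: float):
    """Return (width, height) in mm of the squares area of a checkerboard.

    Illustrative helper only; gen_pattern.py performs its own page layout,
    including margins, internally.
    """
    return (columns * square_size_mm, rows * square_size_mm)

# The chessboard.svg example above: 9 rows, 6 columns, 20 mm squares.
print(board_extent_mm(9, 6, 20))  # -> (120, 180)
```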
39.54717
147
0.739981
eng_Latn
0.925714
b9013561f684e4b9a39598e20df294d6a39310e4
238
md
Markdown
docs/create-payload.md
Aless55/XUMM-SDK-PHP
76b28a17460ecf87b2aa89d64159f50583f2b51e
[ "MIT" ]
1
2022-03-11T07:25:27.000Z
2022-03-11T07:25:27.000Z
docs/create-payload.md
Aless55/XUMM-SDK-PHP
76b28a17460ecf87b2aa89d64159f50583f2b51e
[ "MIT" ]
5
2022-03-09T22:02:50.000Z
2022-03-24T12:19:40.000Z
docs/create-payload.md
Aless55/XUMM-SDK-PHP
76b28a17460ecf87b2aa89d64159f50583f2b51e
[ "MIT" ]
1
2022-03-22T20:12:21.000Z
2022-03-22T20:12:21.000Z
# Creating a payload

Create a payload by passing a `PayloadJson` or a `PayloadBlob` object to `XummSdk::createPayload`.

A basic payload array could look like this:

```
[
    "txjson" => [
        "TransactionType" => "SignIn"
    ]
]
```
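The `txjson` member can carry any XRPL transaction template, not just `SignIn`. For illustration, the JSON that ultimately reaches the XUMM API for a payment might look like the sketch below — the destination address and amount are made-up placeholders, not values from this SDK's documentation:

```json
{
    "txjson": {
        "TransactionType": "Payment",
        "Destination": "r...destination-account-placeholder...",
        "Amount": "10000"
    }
}
```

On the XRPL, `Amount` for XRP payments is expressed as a string of drops (here 10000 drops = 0.01 XRP).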
21.636364
98
0.651261
eng_Latn
0.74693
b902bb2081de9b40d5e928e3ef41130c23fe023c
3,305
md
Markdown
client-example/app-mesh-example/README.md
Venafi/aws-lambda-venafi
5080ded055e9cbaed94be899fde75ef2eb19be90
[ "Apache-2.0" ]
5
2019-07-23T13:52:34.000Z
2020-10-17T14:23:44.000Z
client-example/app-mesh-example/README.md
Venafi/aws-lambda-venafi
5080ded055e9cbaed94be899fde75ef2eb19be90
[ "Apache-2.0" ]
1
2019-11-21T14:53:34.000Z
2019-11-21T22:35:23.000Z
client-example/app-mesh-example/README.md
Venafi/aws-lambda-venafi
5080ded055e9cbaed94be899fde75ef2eb19be90
[ "Apache-2.0" ]
2
2019-07-19T17:46:55.000Z
2019-10-18T20:36:17.000Z
# Venafi Policy Enforcement with Amazon Private CA and AWS App Mesh

In this sample, we walk through how to enforce enterprise security policy for certificate requests for AWS App Mesh deployments.

**NOTE:** At the time of the writing of this walkthrough, AWS App Mesh only supports TLS on the preview channel. You must use 'us-west-2' as the region when following this guide or you won't be able to complete it.

This walkthrough is a combination of the [general README](https://github.com/Venafi/aws-private-ca-policy-venafi/blob/master/README.md) and the walkthrough on [Configuring TLS with AWS Certificate Manager](https://github.com/aws/aws-app-mesh-examples/blob/master/walkthroughs/tls-with-acm/README.md). We highly suggest you have both of these README.md files open while walking through this.

### Instructions

We should start by following the main [README](https://github.com/Venafi/aws-private-ca-policy-venafi/blob/master/README.md) in this repository. Go through all the instructions until you reach the "Requesting Certificates" stage.

**Note:** The Amazon Certificate Manager Private CA used for these examples will be the same. So please set the root domain for the CA to something you can enforce with the policy set.

Now let's move to the AWS instructions, README found [here](https://github.com/aws/aws-app-mesh-examples/blob/master/walkthroughs/tls-with-acm/README.md). Complete step 1 with no changes, and complete step 2 by changing the value of the SERVICES_DOMAIN variable so that it will comply with the policy enforced by Venafi.

**Example:** If the policy only allows certs to be authorized for the *.example.com domain, be sure to set SERVICES_DOMAIN to example.com.

Skip step 3; we already set up the PCA in the Venafi-based instructions.

Before we move on to step 4, we need to set the variable CERTIFICATE_ARN with the certificate we want to use. This can be done by running something like this (edit the command with the right domain, base-url, policy, and ARN values.
For more information go back to the 'Requesting Certificates' section in the Venafi README):

```bash
export CERTIFICATE_ARN=`cli-appmesh.py request --domain "*.example.com" --base-url https://1234abcdzz.execute-api.us-west-2.amazonaws.com/v1/request/ --policy zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz --arn "arn:aws:acm-pca:us-west-2:11122233344:certificate-authority/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" | jq -r .CertificateArn`
```

**Note:** Make sure you're running this Python script with Python 3 (or it won't work) and that you're in the right folder (/client-example/app-mesh-example/).

If run correctly, you should get your certificate ARN returned within the CERTIFICATE_ARN variable. Feel free to check by running:

```bash
echo $CERTIFICATE_ARN
```

If you are requesting a certificate that doesn't comply with the set policies, you will get an error when setting CERTIFICATE_ARN. You can also check by looking at the value of the variable: if it's an empty string, something went wrong.

Now complete step 4. This will take the certificate you created above and apply it to the mesh.

Now finish it up with step 5. This will allow you to test the app.

Congratulations! Your App Mesh application is now secured with TLS certificates that comply with your enterprise's security policy!
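Because a policy denial leaves the variable empty, it can help to fail fast before moving on to step 4. A small guard sketch (`check_cert_arn` is our own addition, not part of the walkthrough scripts):

```shell
#!/bin/sh
# Print the issued ARN, or fail loudly if the request was denied by policy
# (in which case the variable is empty).
check_cert_arn() {
    if [ -z "$1" ]; then
        echo "certificate request failed or was denied by policy" >&2
        return 1
    fi
    echo "issued certificate: $1"
}

# Example with a placeholder value; in the walkthrough you would
# pass "$CERTIFICATE_ARN" instead.
check_cert_arn "sample-arn"
```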
91.805556
390
0.784871
eng_Latn
0.995532
b902d680fd357258bd2a70f979fec9f037b7ab51
1,760
md
Markdown
_posts/2021-04-22-9791195966950.md
bookmana/bookmana.github.io
2ed7b023b0851c0c18ad8e7831ece910d9108852
[ "MIT" ]
null
null
null
_posts/2021-04-22-9791195966950.md
bookmana/bookmana.github.io
2ed7b023b0851c0c18ad8e7831ece910d9108852
[ "MIT" ]
null
null
null
_posts/2021-04-22-9791195966950.md
bookmana/bookmana.github.io
2ed7b023b0851c0c18ad8e7831ece910d9108852
[ "MIT" ]
null
null
null
---
title: "그림으로 배우는 ABA 실천매뉴얼 (Learning ABA Through Pictures: A Practice Manual)"
date: 2021-04-22 17:14:50 +0900
categories: [Domestic Books, Social Science]
image: https://bimage.interpark.com/goods_image/5/2/0/8/307925208s.jpg
description: Recently, around the world, interest in ABA (Applied Behavior Analysis) has been growing as an early-intervention method for children with developmental disabilities, beginning with autism spectrum disorder. This book was created to visualize the ABA home-therapy program found in the Tsumiki Society's main textbook, the [Tsumiki BOOK], in an easy-to-understand way using a variety of pictures
---

## **Information**

- **ISBN: 9791195966950**
- **Publisher: 시그마프레스(에이스북)**
- **Publication date: 20190508**
- **Author: Fujisaka Ryuji (후지사카 류지)**

------

## **Summary**

Recently, around the world, interest in ABA (Applied Behavior Analysis) has been growing as an early-intervention method for children with developmental disabilities, beginning with autism spectrum disorder. This book was created to visualize the ABA home-therapy program found in the Tsumiki Society's main textbook, the [Tsumiki BOOK], in an easy-to-understand way using a variety of pictures. We tried to include as much information as possible so that therapy can be carried out with this book alone. This book was researched and chosen directly by members of the ABA Parents' Association, who were looking for easier-to-understand material. For that reason, we believe it will help readers understand and practice ABA more easily than any other book.

------

Recently, around the world, interest in ABA (Applied Behavior Analysis) has been growing as an early-intervention method for children with developmental disabilities, beginning with autism spectrum disorder. This book was created to visualize the ABA home-therapy program found in the Tsumiki Society's main textbook, the [Tsumiki BOOK], in an easy-to-understand way using a variety of pictures....

------

그림으로 배우는 ABA 실천매뉴얼 (Learning ABA Through Pictures: A Practice Manual)

------

## **Reviews**

5.0 손-옥 I hope this helps a lot with raising my children. 2021.03.31
<br/>5.0 방-진 The detailed pictures seem to make it easier to understand. 2021.02.23
<br/>5.0 정-운 It helps. 2021.02.08
<br/>5.0 박-범 I like the book. 2020.12.16
<br/>2.0 진-미 There are so many spacing errors that I can't stay immersed. 2020.12.08
<br/>5.0 문-숙 I haven't read it yet, but I'm looking forward to it. 2020.08.27
<br/>5.0 김-목 Good. 2020.08.21
<br/>5.0 이-영 The explanations are well illustrated, so it's easy to understand. 2020.07.12
<br/>5.0 김-정 It's illustrated, so it's easy to understand. 2020.07.04
<br/>5.0 강-오 Good. 2020.06.04
<br/>5.0 형-혜 I like it. 2020.06.01
<br/>5.0 김-영 The simple explanations make it easy to put into practice. 2020.03.17
<br/>5.0 이-영 It helps a lot. 2019.11.11
<br/>5.0 김-현 The at-home ABA program is laid out simply with pictures, so it was easy to follow. 2019.11.04
<br/>5.0 김-진 Very nice. 2019.07.14
<br/>5.0 임-진 It is explained so that parents can easily read and put it into practice, so I bought it and recommended it to an acquaintance. My preconception that ABA is a difficult discipline eased after encountering this book. 2019.07.10
<br/>
46.315789
729
0.668182
kor_Hang
1.00001
b903a6a24eb91375fcff12c98a57912b3fd45094
127
md
Markdown
tutorial/fundamentals/06_templates.md
xinetzone/plotly-book
fe7d8dbd6237a306806869d91e6df8a45c58f211
[ "Apache-2.0" ]
null
null
null
tutorial/fundamentals/06_templates.md
xinetzone/plotly-book
fe7d8dbd6237a306806869d91e6df8a45c58f211
[ "Apache-2.0" ]
null
null
null
tutorial/fundamentals/06_templates.md
xinetzone/plotly-book
fe7d8dbd6237a306806869d91e6df8a45c58f211
[ "Apache-2.0" ]
1
2021-11-23T04:34:53.000Z
2021-11-23T04:34:53.000Z
(plotly:templates)=
# Theming and templates

Reference: [Theming and templates | Python | Plotly](https://plotly.com/python/templates/)
31.75
82
0.755906
eng_Latn
0.363734
b903f24296499d410b1a4dd1979f1d7d9a23aa96
2,020
md
Markdown
website/views/demo/PolarBar/stack-polarBar.md
qcharts/core
e6c9076b0e491fc090a158efa98f2013acfdcf1c
[ "MIT" ]
22
2020-08-17T09:19:27.000Z
2022-02-18T10:48:43.000Z
website/views/demo/PolarBar/stack-polarBar.md
qcharts/core
e6c9076b0e491fc090a158efa98f2013acfdcf1c
[ "MIT" ]
5
2020-07-17T09:37:30.000Z
2022-02-13T19:02:15.000Z
website/views/demo/PolarBar/stack-polarBar.md
qcharts/core
e6c9076b0e491fc090a158efa98f2013acfdcf1c
[ "MIT" ]
5
2021-04-04T08:11:27.000Z
2022-03-28T10:25:14.000Z
## Nightingale Stack Chart 堆叠图 :::demo ```javascript const data = [ { product: "05-08", year: "图例一", sales: 30, }, { product: "05-08", year: "图例二", sales: 15, }, { product: "05-08", year: "图例三", sales: 20, }, { product: "05-09", year: "图例一", sales: 30, }, { product: "05-09", year: "图例二", sales: 17, }, { product: "05-09", year: "图例三", sales: 20, }, { product: "05-10", year: "图例一", sales: 17.57, }, { product: "05-10", year: "图例二", sales: 24, }, { product: "05-10", year: "图例三", sales: 37.54, }, { product: "05-11", year: "图例一", sales: 41, }, { product: "05-11", year: "图例二", sales: 28, }, { product: "05-11", year: "图例三", sales: 21, }, { product: "05-12", year: "图例一", sales: 14, }, { product: "05-12", year: "图例二", sales: 25, }, { product: "05-12", year: "图例三", sales: 35, }, { product: "05-13", year: "图例一", sales: 44, }, { product: "05-13", year: "图例二", sales: 25, }, { product: "05-13", year: "图例三", sales: 10, }, { product: "05-14", year: "图例一", sales: 25, }, { product: "05-14", year: "图例二", sales: 25, }, { product: "05-14", year: "图例三", sales: 10, }, { product: "05-15", year: "图例一", sales: 25, }, { product: "05-15", year: "图例二", sales: 25, }, { product: "05-15", year: "图例三", sales: 10, }, ] const { Chart, PolarBar, Tooltip, Legend } = qcharts const chart = new Chart({ container: "#app", }) chart.source(data, { row: "year", value: "sales", text: "product", }) const bar = new PolarBar({ stack: true, radius: 0.8, groupPadAngle: 15, }).style("pillar", { strokeColor: "#FFF", lineWidth: 1, }) const tooltip = new Tooltip() const legend = new Legend() chart.append([bar, tooltip, legend]) ``` :::
12.866242
52
0.45396
krc_Cyrl
0.337922
b90411d4c187c7f8c21e1676d7edbc5243a11627
21
md
Markdown
README.md
kevinarias1337/Taller1HTML-PHP
99cb12e3ba528fb2db80eabb7baa68a88b2a9b76
[ "Apache-2.0" ]
null
null
null
README.md
kevinarias1337/Taller1HTML-PHP
99cb12e3ba528fb2db80eabb7baa68a88b2a9b76
[ "Apache-2.0" ]
null
null
null
README.md
kevinarias1337/Taller1HTML-PHP
99cb12e3ba528fb2db80eabb7baa68a88b2a9b76
[ "Apache-2.0" ]
null
null
null
# Taller1HTML-PHP --
7
17
0.666667
dan_Latn
0.233407
b9041fe08e538a242bc59ae1de29a2a817530e5d
1,256
md
Markdown
_posts/boj/2020-07-26-boj-11053.md
bconfiden2/bconfiden2.github.io
0f9ca2c9a5961092b9eb95cdfe0903d387d24673
[ "MIT" ]
2
2021-11-06T06:18:01.000Z
2021-12-22T23:40:11.000Z
_posts/boj/2020-07-26-boj-11053.md
bconfiden2/bconfiden2.github.io
0f9ca2c9a5961092b9eb95cdfe0903d387d24673
[ "MIT" ]
1
2022-01-10T07:10:51.000Z
2022-01-10T07:53:43.000Z
_posts/boj/2020-07-26-boj-11053.md
bconfiden2/bconfiden2.github.io
0f9ca2c9a5961092b9eb95cdfe0903d387d24673
[ "MIT" ]
null
null
null
---
layout: post
title: "[Baekjoon] 11053.cpp : Longest Increasing Subsequence"
subtitle: ""
categories: ps
tags: boj
---

*# Dynamic Programming # Baekjoon*

<br>

[Go to the problem](https://www.acmicpc.net/problem/11053)

<br>

---

- If you don't examine the problem carefully, it's easy to get caught by various counterexamples.
- The longest increasing subsequence at the current position is the longest increasing subsequence up to that point, plus 1 (for the element itself).
- Here, among the earlier subsequences, only those ending in values smaller than the current element should be considered.
- Recomputing this logic from scratch at every iteration would be O(n³), so we build a DP array that stores the earlier results.

---

<br>

{% highlight c++ %}
#include <iostream>
using namespace std;

int n;
int arr[1000];      // input data
int ans[1000];      // DP array holding the maximum subsequence length at each index

int main(void)
{
    cin >> n;
    for(int i = 0 ; i < n ; i++)
    {
        cin >> arr[i];
    }
    int answer = 0;
    for(int i = 0 ; i < n ; i++)    // fill the DP array
    {
        int max = 0;
        for(int idx = 0 ; idx < i ; idx++)    // scan the earlier elements
        {
            if(arr[idx] < arr[i] && ans[idx] > max)    // among earlier values smaller than the current one, find the maximum length
            {
                max = ans[idx];
            }
        }
        ans[i] = max + 1;    // add the element itself to that length and store it
        if(answer < ans[i]) answer = ans[i];    // update the overall maximum
    }
    cout << answer << endl;
}
{% endhighlight %}
19.323077
85
0.472134
kor_Hang
1.00001
b9043132b227089cbd6f835381dafd11ecd6f808
18,202
md
Markdown
docs/android/platform/fragments/implementing-with-fragments/walkthrough.md
v-radelg/xamarin-docs.zh-cn
28331b62bbdf4fd989ca1912e17f8dc647735b9a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/android/platform/fragments/implementing-with-fragments/walkthrough.md
v-radelg/xamarin-docs.zh-cn
28331b62bbdf4fd989ca1912e17f8dc647735b9a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/android/platform/fragments/implementing-with-fragments/walkthrough.md
v-radelg/xamarin-docs.zh-cn
28331b62bbdf4fd989ca1912e17f8dc647735b9a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Xamarin.Android Fragments Walkthrough - Part 1
ms.prod: xamarin
ms.topic: tutorial
ms.assetid: ED368FA9-A34E-DC39-D535-5C34C32B9761
ms.technology: xamarin-android
author: davidortinau
ms.author: daortin
ms.date: 08/21/2018
ms.openlocfilehash: 043ad02f9ca9148910364ac82917551ee58d72ba
ms.sourcegitcommit: 2fbe4932a319af4ebc829f65eb1fb1816ba305d3
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/29/2019
ms.locfileid: "73027407"
---
# <a name="fragments-walkthrough-ndash-phone"></a>Fragments Walkthrough &ndash; Phone

This is the first part of a walkthrough that will create a Xamarin.Android app targeting Android devices. This walkthrough will discuss how to create fragments in Xamarin and how to add them to a sample.

[![](./images/intro-screenshot-phone-sml.png)](./images/intro-screenshot-phone.png#lightbox)

The following classes will be created for this app:

1. `PlayQuoteFragment` &nbsp; This fragment will display a quote from a play by William Shakespeare. It will be hosted by `PlayQuoteActivity`.
1. `Shakespeare` &nbsp; This class will hold two hard-coded arrays as properties.
1. `TitlesFragment` &nbsp; This fragment will display a list of titles of plays written by William Shakespeare. It will be hosted by `MainActivity`.
1. `PlayQuoteActivity` &nbsp; `TitlesFragment` will launch `PlayQuoteActivity` in response to the user selecting a play in `TitlesFragment`.

## <a name="1-create-the-android-project"></a>1. Create the Android project

Create a new Xamarin.Android project named **FragmentSample**.

# <a name="visual-studiotabwindows"></a>[Visual Studio](#tab/windows)

[![Create a new Xamarin.Android project](./walkthrough-images/01-newproject.w157-sml.png)](./walkthrough-images/01-newproject.w157.png#lightbox)

# <a name="visual-studio-for-mactabmacos"></a>[Visual Studio for Mac](#tab/macos)

[![Create a new Xamarin.Android project](./walkthrough-images/01-newproject.m742-sml.png)](./walkthrough-images/01-newproject.m742.png#lightbox)

It is recommended that you select **Modern Development** for this walkthrough. After creating the project, rename the file **layout/Main.axml** to **layout/activity_main.axml**.

-----

## <a name="2-add-the-data"></a>2.
添加数据 此应用程序的数据将存储在两个硬编码的字符串数组中,这些数组是类名 `Shakespeare`的属性: * `Shakespeare.Titles` &nbsp; 此数组将保存 William 莎士比亚的播放列表。 这是 `TitlesFragment`的数据源。 * `Shakespeare.Dialogue` &nbsp; 此数组将包含 `Shakespeare.Titles`中包含的其中一个重头戏的引号列表。 这是 `PlayQuoteFragment`的数据源。 向 FragmentSample 项目C#添加一个新类并将其命名为**Shakespeare.cs**。 在此文件中,创建一个C#名为`Shakespeare`的新类,其中包含以下内容 ```csharp class Shakespeare { public static string[] Titles = { "Henry IV (1)", "Henry V", "Henry VIII", "Richard II", "Richard III", "Merchant of Venice", "Othello", "King Lear" }; public static string[] Dialogue = { "So shaken as we are, so wan with care, Find we a time for frighted peace to pant, And breathe short-winded accents of new broils To be commenced in strands afar remote. No more the thirsty entrance of this soil Shall daub her lips with her own children's blood; Nor more shall trenching war channel her fields, Nor bruise her flowerets with the armed hoofs Of hostile paces: those opposed eyes, Which, like the meteors of a troubled heaven, All of one nature, of one substance bred, Did lately meet in the intestine shock And furious close of civil butchery Shall now, in mutual well-beseeming ranks, March all one way and be no more opposed Against acquaintance, kindred and allies: The edge of war, like an ill-sheathed knife, No more shall cut his master. Therefore, friends, As far as to the sepulchre of Christ, Whose soldier now, under whose blessed cross We are impressed and engaged to fight, Forthwith a power of English shall we levy; Whose arms were moulded in their mothers' womb To chase these pagans in those holy fields Over whose acres walk'd those blessed feet Which fourteen hundred years ago were nail'd For our advantage on the bitter cross. But this our purpose now is twelve month old, And bootless 'tis to tell you we will go: Therefore we meet not now. 
Then let me hear Of you, my gentle cousin Westmoreland, What yesternight our council did decree In forwarding this dear expedience.", "Hear him but reason in divinity, And all-admiring with an inward wish You would desire the king were made a prelate: Hear him debate of commonwealth affairs, You would say it hath been all in all his study: List his discourse of war, and you shall hear A fearful battle render'd you in music: Turn him to any cause of policy, The Gordian knot of it he will unloose, Familiar as his garter: that, when he speaks, The air, a charter'd libertine, is still, And the mute wonder lurketh in men's ears, To steal his sweet and honey'd sentences; So that the art and practic part of life Must be the mistress to this theoric: Which is a wonder how his grace should glean it, Since his addiction was to courses vain, His companies unletter'd, rude and shallow, His hours fill'd up with riots, banquets, sports, And never noted in him any study, Any retirement, any sequestration From open haunts and popularity.", "I come no more to make you laugh: things now, That bear a weighty and a serious brow, Sad, high, and working, full of state and woe, Such noble scenes as draw the eye to flow, We now present. Those that can pity, here May, if they think it well, let fall a tear; The subject will deserve it. Such as give Their money out of hope they may believe, May here find truth too. Those that come to see Only a show or two, and so agree The play may pass, if they be still and willing, I'll undertake may see away their shilling Richly in two short hours. Only they That come to hear a merry bawdy play, A noise of targets, or to see a fellow In a long motley coat guarded with yellow, Will be deceived; for, gentle hearers, know, To rank our chosen truth with such a show As fool and fight is, beside forfeiting Our own brains, and the opinion that we bring, To make that only true we now intend, Will leave us never an understanding friend. 
Therefore, for goodness' sake, and as you are known The first and happiest hearers of the town, Be sad, as we would make ye: think ye see The very persons of our noble story As they were living; think you see them great, And follow'd with the general throng and sweat Of thousand friends; then in a moment, see How soon this mightiness meets misery: And, if you can be merry then, I'll say A man may weep upon his wedding-day.", "First, heaven be the record to my speech! In the devotion of a subject's love, Tendering the precious safety of my prince, And free from other misbegotten hate, Come I appellant to this princely presence. Now, Thomas Mowbray, do I turn to thee, And mark my greeting well; for what I speak My body shall make good upon this earth, Or my divine soul answer it in heaven. Thou art a traitor and a miscreant, Too good to be so and too bad to live, Since the more fair and crystal is the sky, The uglier seem the clouds that in it fly. Once more, the more to aggravate the note, With a foul traitor's name stuff I thy throat; And wish, so please my sovereign, ere I move, What my tongue speaks my right drawn sword may prove.", "Now is the winter of our discontent Made glorious summer by this sun of York; And all the clouds that lour'd upon our house In the deep bosom of the ocean buried. Now are our brows bound with victorious wreaths; Our bruised arms hung up for monuments; Our stern alarums changed to merry meetings, Our dreadful marches to delightful measures. Grim-visaged war hath smooth'd his wrinkled front; And now, instead of mounting barded steeds To fright the souls of fearful adversaries, He capers nimbly in a lady's chamber To the lascivious pleasing of a lute. 
But I, that am not shaped for sportive tricks, Nor made to court an amorous looking-glass; I, that am rudely stamp'd, and want love's majesty To strut before a wanton ambling nymph; I, that am curtail'd of this fair proportion, Cheated of feature by dissembling nature, Deformed, unfinish'd, sent before my time Into this breathing world, scarce half made up, And that so lamely and unfashionable That dogs bark at me as I halt by them; Why, I, in this weak piping time of peace, Have no delight to pass away the time, Unless to spy my shadow in the sun And descant on mine own deformity: And therefore, since I cannot prove a lover, To entertain these fair well-spoken days, I am determined to prove a villain And hate the idle pleasures of these days. Plots have I laid, inductions dangerous, By drunken prophecies, libels and dreams, To set my brother Clarence and the king In deadly hate the one against the other: And if King Edward be as true and just As I am subtle, false and treacherous, This day should Clarence closely be mew'd up, About a prophecy, which says that 'G' Of Edward's heirs the murderer shall be. Dive, thoughts, down to my soul: here Clarence comes.", "To bait fish withal: if it will feed nothing else, it will feed my revenge. He hath disgraced me, and hindered me half a million; laughed at my losses, mocked at my gains, scorned my nation, thwarted my bargains, cooled my friends, heated mine enemies; and what's his reason? I am a Jew. Hath not a Jew eyes? hath not a Jew hands, organs, dimensions, senses, affections, passions? fed with the same food, hurt with the same weapons, subject to the same diseases, healed by the same means, warmed and cooled by the same winter and summer, as a Christian is? If you prick us, do we not bleed? if you tickle us, do we not laugh? if you poison us, do we not die? and if you wrong us, shall we not revenge? If we are like you in the rest, we will resemble you in that. If a Jew wrong a Christian, what is his humility? 
Revenge. If a Christian wrong a Jew, what should his sufferance be by Christian example? Why, revenge. The villany you teach me, I will execute, and it shall go hard but I will better the instruction.", "Virtue! a fig! 'tis in ourselves that we are thus or thus. Our bodies are our gardens, to the which our wills are gardeners: so that if we will plant nettles, or sow lettuce, set hyssop and weed up thyme, supply it with one gender of herbs, or distract it with many, either to have it sterile with idleness, or manured with industry, why, the power and corrigible authority of this lies in our wills. If the balance of our lives had not one scale of reason to poise another of sensuality, the blood and baseness of our natures would conduct us to most preposterous conclusions: but we have reason to cool our raging motions, our carnal stings, our unbitted lusts, whereof I take this that you call love to be a sect or scion.", "Blow, winds, and crack your cheeks! rage! blow! You cataracts and hurricanoes, spout Till you have drench'd our steeples, drown'd the cocks! You sulphurous and thought-executing fires, Vaunt-couriers to oak-cleaving thunderbolts, Singe my white head! And thou, all-shaking thunder, Smite flat the thick rotundity o' the world! Crack nature's moulds, an germens spill at once, That make ingrateful man!" }; } ``` ## <a name="3-create-the-playquotefragment"></a>3. 
创建 PlayQuoteFragment `PlayQuoteFragment` 是一个 Android 片段,该片段将显示用户在应用程序中之前选择的莎士比亚 play 的报价,此片段不会使用 Android 布局文件;相反,它会动态创建其用户界面。 向项目添加一个名为 `PlayQuoteFragment` 的新 `Fragment` 类: # <a name="visual-studiotabwindows"></a>[Visual Studio](#tab/windows) [![添加新C#类](./walkthrough-images/04-addfragment.w157-sml.png)](./walkthrough-images/02-addclass.w157.png#lightbox) # <a name="visual-studio-for-mactabmacos"></a>[Visual Studio for Mac](#tab/macos) [![添加新C#类](./walkthrough-images/04-addfragment.m742-sml.png)](./walkthrough-images/02-addclass.m742.png#lightbox) ----- 然后,将片段的代码更改为类似于以下代码片段: ```csharp public class PlayQuoteFragment : Fragment { public int PlayId => Arguments.GetInt("current_play_id", 0); public static PlayQuoteFragment NewInstance(int playId) { var bundle = new Bundle(); bundle.PutInt("current_play_id", playId); return new PlayQuoteFragment {Arguments = bundle}; } public override View OnCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { if (container == null) { return null; } var textView = new TextView(Activity); var padding = Convert.ToInt32(TypedValue.ApplyDimension(ComplexUnitType.Dip, 4, Activity.Resources.DisplayMetrics)); textView.SetPadding(padding, padding, padding, padding); textView.TextSize = 24; textView.Text = Shakespeare.Dialogue[PlayId]; var scroller = new ScrollView(Activity); scroller.AddView(textView); return scroller; } } ``` 这是 Android 应用中的一种常见模式,用于提供将实例化片段的工厂方法。 这可确保创建具有必要参数的片段,以便正常工作。 在本演练中,应用应使用 `PlayQuoteFragment.NewInstance` 方法,以便在每次选择引号时创建新的片段。 `NewInstance` 方法将采用单个参数 &ndash; 要显示的引号的索引。 在屏幕上呈现片段时,Android 将调用 `OnCreateView` 方法。 它将返回作为片段的 Android `View` 对象。 此片段不使用布局文件来创建视图。 相反,它将以编程方式创建视图,方法是实例化**TextView**来保存引号,并在**ScrollView**中显示该小组件。 > [!NOTE] > 片段子类必须具有没有参数的公共默认构造函数。 ## <a name="4-create-the-playquoteactivity"></a>4. 
创建 PlayQuoteActivity 片段必须承载于活动内,因此,此应用需要一个将承载 `PlayQuoteFragment`的活动。 活动将在运行时将片段动态添加到其布局。 将新活动添加到该应用程序,并将其命名 `PlayQuoteActivity`: # <a name="visual-studiotabwindows"></a>[Visual Studio](#tab/windows) [![向项目添加 Android 活动](./walkthrough-images/03-addactivity.w157-sml.png)](./walkthrough-images/03-addactivity.w157.png#lightbox) # <a name="visual-studio-for-mactabmacos"></a>[Visual Studio for Mac](#tab/macos) [![向项目添加 Android 活动](./walkthrough-images/03-addactivity.m742-sml.png)](./walkthrough-images/03-addactivity.m742.png#lightbox) ----- 在 `PlayQuoteActivity`中编辑代码: ```csharp [Activity(Label = "PlayQuoteActivity")] public class PlayQuoteActivity : Activity { protected override void OnCreate(Bundle savedInstanceState) { base.OnCreate(savedInstanceState); var playId = Intent.Extras.GetInt("current_play_id", 0); var detailsFrag = PlayQuoteFragment.NewInstance(playId); FragmentManager.BeginTransaction() .Add(Android.Resource.Id.Content, detailsFrag) .Commit(); } } ``` 创建 `PlayQuoteActivity` 时,它将实例化新的 `PlayQuoteFragment` 并在其在 `FragmentTransaction`上下文的根视图中加载该片段。 请注意,此活动不会为其用户界面加载 Android 布局文件。 相反,新的 `PlayQuoteFragment` 将添加到应用程序的根视图中。 资源标识符 `Android.Resource.Id.Content` 用于引用活动的根视图,而无需知道其特定标识符。 ## <a name="5-create-titlesfragment"></a>5. 
创建 TitlesFragment `TitlesFragment` 将子类称为称为 `ListFragment` 的专用片段,该片段封装用于在片段中显示 `ListView` 的逻辑。 `ListFragment` 公开 `ListAdapter` 属性(由 `ListView` 用来显示其内容)和一个名为 `OnListItemClick` 的事件处理程序,该事件处理程序允许片段响应由 `ListView`显示的行的单击。 若要开始,请向项目中添加一个新的片段,并将其命名为**TitlesFragment**: # <a name="visual-studiotabwindows"></a>[Visual Studio](#tab/windows) [![将 Android 片段添加到项目](./walkthrough-images/04-addfragment.w157-sml.png)](./walkthrough-images/04-addfragment.w157.png#lightbox) # <a name="visual-studio-for-mactabmacos"></a>[Visual Studio for Mac](#tab/macos) [![将 Android 片段添加到项目](./walkthrough-images/04-addfragment.m742-sml.png)](./walkthrough-images/04-addfragment.m742.png#lightbox) ----- 编辑片段内的代码: ```csharp public class TitlesFragment : ListFragment { int selectedPlayId; public TitlesFragment() { // Being explicit about the requirement for a default constructor. } public override void OnActivityCreated(Bundle savedInstanceState) { base.OnActivityCreated(savedInstanceState); ListAdapter = new ArrayAdapter<String>(Activity, Android.Resource.Layout.SimpleListItemActivated1, Shakespeare.Titles); if (savedInstanceState != null) { selectedPlayId = savedInstanceState.GetInt("current_play_id", 0); } } public override void OnSaveInstanceState(Bundle outState) { base.OnSaveInstanceState(outState); outState.PutInt("current_play_id", selectedPlayId); } public override void OnListItemClick(ListView l, View v, int position, long id) { ShowPlayQuote(position); } void ShowPlayQuote(int playId) { var intent = new Intent(Activity, typeof(PlayQuoteActivity)); intent.PutExtra("current_play_id", playId); StartActivity(intent); } } ``` 创建活动时,Android 会调用片段的 `OnActivityCreated` 方法;这是创建 `ListView` 的列表适配器的位置。 `ShowQuoteFromPlay` 方法将启动 `PlayQuoteActivity` 的实例,以显示所选播放的报价单。 ## <a name="display-titlesfragment-in-mainactivity"></a>在 MainActivity 中显示 TitlesFragment 最后一步是在 `MainActivity`中显示 `TitlesFragment`。 活动不会动态加载片段。 相反,将通过使用 `fragment` 元素在活动的布局文件中声明片段来静态加载片段。 通过将 `android:name` 特性设置为片段类(包括类型的命名空间)来标识要加载的片段。 例如,若要使用 
`TitlesFragment`,则 `android:name` 将设置为 "`FragmentSample.TitlesFragment`"。 编辑布局文件**activity_main**,并将现有的 XML 替换为以下内容: ```xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:orientation="horizontal" android:layout_width="match_parent" android:layout_height="match_parent"> <fragment android:name="FragmentSample.TitlesFragment" android:id="@+id/titles" android:layout_width="match_parent" android:layout_height="match_parent" /> </LinearLayout> ``` > [!NOTE] > `class` 属性是 `android:name`的有效替换。 对于首选窗体,没有正式的指导,有很多代码库示例将 `class` 与 `android:name`互换使用。 MainActivity 无需更改代码。 该类中的代码应该非常类似于以下代码片段: ```csharp [Activity(Label = "@string/app_name", Theme = "@style/AppTheme", MainLauncher = true)] public class MainActivity : Activity { protected override void OnCreate(Bundle savedInstanceState) { base.OnCreate(savedInstanceState); SetContentView(Resource.Layout.activity_main); } } ``` ## <a name="run-the-app"></a>运行应用 现在代码已完成,请在设备上运行应用以查看其运行情况。 [在电话上运行的应用程序![屏幕截图。](./walkthrough-images/05-app-screenshots-sml.png)](./walkthrough-images/05-app-screenshots.png#lightbox) [本演练的第2部分](./walkthrough-landscape.md)将为在横向模式下运行的设备 optimtize 此应用程序。
64.546099
1,773
0.724865
eng_Latn
0.929562
b90455d1c9d17490d0466aeb70217b969b52b435
6,168
md
Markdown
docs/linux/sql-server-linux-manage-powershell.md
options/sql-docs
b5ac9749e7ba4aecad3f6211750623afa71c9e69
[ "CC-BY-4.0", "MIT" ]
1
2021-02-01T19:06:43.000Z
2021-02-01T19:06:43.000Z
---
title: Manage SQL Server on Linux with PowerShell | Microsoft Docs
description: This topic provides an overview of using PowerShell on Windows with SQL Server on Linux.
author: sanagama
ms.author: sanagama
manager: jhubbard
ms.date: 03/17/2017
ms.topic: article
ms.prod: sql-linux
ms.technology: database-engine
ms.assetid: a3492ce1-5d55-4505-983c-d6da8d1a94ad
---
# Use PowerShell on Windows to Manage SQL Server on Linux

[!INCLUDE[tsql-appliesto-sslinux-only](../includes/tsql-appliesto-sslinux-only.md)]

This topic introduces [SQL Server PowerShell](https://msdn.microsoft.com/en-us/library/mt740629.aspx) and walks you through a couple of examples of how to use it with SQL Server 2017 RC2 on Linux. PowerShell support for SQL Server is currently available on Windows, so you can use it when you have a Windows machine that can connect to a remote SQL Server instance on Linux.

## Install the newest version of SQL PowerShell on Windows

[SQL PowerShell](https://msdn.microsoft.com/en-us/library/mt740629.aspx) on Windows is included with [SQL Server Management Studio (SSMS)](../ssms/sql-server-management-studio-ssms.md). When working with SQL Server, you should always use the most recent version of SSMS and SQL PowerShell. The latest version of SSMS is continually updated and optimized and currently works with SQL Server 2017 RC2 on Linux. To download and install the latest version, see [Download SQL Server Management Studio](../ssms/download-sql-server-management-studio-ssms.md). To stay up-to-date, the latest version of SSMS prompts you when there is a new version available to download.

## Before you begin

Read the [Known Issues](sql-server-linux-release-notes.md) for SQL Server 2017 RC2 on Linux.

## Launch PowerShell and import the *sqlserver* module

Let's start by launching PowerShell on Windows. Open a *command prompt* on your Windows computer, and type **PowerShell** to launch a new Windows PowerShell session.

```
PowerShell
```

SQL Server provides a Windows PowerShell module named **SqlServer** that you can use to import the SQL Server components (SQL Server provider and cmdlets) into a PowerShell environment or script.

Copy and paste the command below at the PowerShell prompt to import the **SqlServer** module into your current PowerShell session:

```powershell
Import-Module SqlServer
```

Type the command below at the PowerShell prompt to verify that the **SqlServer** module was imported correctly:

```powershell
Get-Module -Name SqlServer
```

PowerShell should display information similar to what's below:

```
ModuleType Version    Name       ExportedCommands
---------- -------    ----       ----------------
Script     0.0        SqlServer
Manifest   20.0       SqlServer  {Add-SqlAvailabilityDatabase, Add-SqlAvailabilityGroupList...
```

## Connect to SQL Server and get server information

Let's use PowerShell on Windows to connect to your SQL Server 2017 instance on Linux and display a couple of server properties.

Copy and paste the commands below at the PowerShell prompt. When you run these commands, PowerShell will:

- Display the *Windows PowerShell credential request* dialog that prompts you for the credentials (*SQL username* and *SQL password*) to connect to your SQL Server 2017 RC2 instance on Linux
- Load the SQL Server Management Objects (SMO) assembly
- Create an instance of the [Server](https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.server.aspx) object
- Connect to the **Server** and display a few properties

Remember to replace **\<your_server_instance\>** with the IP address or the hostname of your SQL Server 2017 RC2 instance on Linux.

```powershell
# Prompt for credentials to login into SQL Server
$serverInstance = "<your_server_instance>"
$credential = Get-Credential

# Load the SMO assembly and create a Server object
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$server = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $serverInstance

# Set credentials
$server.ConnectionContext.LoginSecure=$false
$server.ConnectionContext.set_Login($credential.UserName)
$server.ConnectionContext.set_SecurePassword($credential.Password)

# Connect to the Server and get a few properties
$server.Information | Select-Object Edition, HostPlatform, HostDistribution | Format-List
# done
```

PowerShell should display information similar to what's shown below:

```
Edition          : Developer Edition (64-bit)
HostPlatform     : Linux
HostDistribution : Ubuntu
```

> [!NOTE]
> If nothing is displayed for these values, the connection to the target SQL Server instance most likely failed. Make sure that you can use the same connection information to connect from SQL Server Management Studio. Then review the [connection troubleshooting recommendations](sql-server-linux-troubleshooting-guide.md#connection).

## Examine SQL Server error logs

Let's use PowerShell on Windows to examine error logs on your SQL Server 2017 instance on Linux. We will also use the **Out-GridView** cmdlet to show information from the error logs in a grid view display.

Copy and paste the commands below at the PowerShell prompt. They might take a few minutes to run. These commands do the following:

- Display the *Windows PowerShell credential request* dialog that prompts you for the credentials (*SQL username* and *SQL password*) to connect to your SQL Server 2017 RC2 instance on Linux
- Use the **Get-SqlErrorLog** cmdlet to connect to the SQL Server 2017 instance on Linux and retrieve error logs since **Yesterday**
- Pipe the output to the **Out-GridView** cmdlet

Remember to replace **\<your_server_instance\>** with the IP address or the hostname of your SQL Server 2017 RC2 instance on Linux.

```powershell
# Prompt for credentials to login into SQL Server
$serverInstance = "<your_server_instance>"
$credential = Get-Credential

# Retrieve error logs since yesterday
Get-SqlErrorLog -ServerInstance $serverInstance -Credential $credential -Since Yesterday | Out-GridView
# done
```

## See also
- [SQL Server PowerShell](../relational-databases/scripting/sql-server-powershell.md)
50.975207
663
0.776102
eng_Latn
0.935362
b9048659bc6d2fd69f6ca75241bc12e4e3799bf2
10,682
md
Markdown
docs/source/en/tutorials/mysql.md
zfben/egg
c0b0bb8345df83bbd2949b0af34bb397b5185e17
[ "MIT" ]
2
2019-02-20T07:38:42.000Z
2019-02-20T07:38:43.000Z
docs/source/en/tutorials/mysql.md
apphuangweipeng137/egg
b971e66336af4c8e241303866c8fa9acaaf4e66f
[ "MIT" ]
1
2018-12-07T15:36:24.000Z
2018-12-07T15:36:24.000Z
---
title: MySQL
---

MySQL is one of the most widely used relational database management systems (RDBMS) for web applications. It is used in many large-scale websites such as Google and Facebook.

## egg-mysql

egg-mysql is provided to access both MySQL databases and MySQL-based online database services.

### Installation and Configuration

Install [egg-mysql]:

```bash
$ npm i --save egg-mysql
```

Enable the plugin:

```js
// config/plugin.js
exports.mysql = {
  enable: true,
  package: 'egg-mysql',
};
```

Configure database information in `config/config.${env}.js`.

#### Single Data Source

Configuration to access a single MySQL instance is shown below:

```js
// config/config.${env}.js
exports.mysql = {
  // database configuration
  client: {
    host: 'mysql.com',
    port: '3306',
    user: 'test_user',
    password: 'test_password',
    database: 'test',
  },
  // load into app, default true
  app: true,
  // load into agent, default false
  agent: false,
};
```

Use:

```js
await app.mysql.query(sql, values); // a single instance can be accessed through app.mysql
```

#### Multiple Data Sources

Configuration to access multiple MySQL instances is shown below:

```js
exports.mysql = {
  clients: {
    // clientId, obtain the client instances using app.mysql.get('clientId')
    db1: {
      host: 'mysql.com',
      port: '3306',
      user: 'test_user',
      password: 'test_password',
      database: 'test',
    },
    db2: {
      host: 'mysql2.com',
      port: '3307',
      user: 'test_user',
      password: 'test_password',
      database: 'test',
    },
    // ...
  },
  // default configuration of all databases
  default: {

  },
  // load into app, default true
  app: true,
  // load into agent, default false
  agent: false,
};
```

Use:

```js
const client1 = app.mysql.get('db1');
await client1.query(sql, values);

const client2 = app.mysql.get('db2');
await client2.query(sql, values);
```

#### Dynamic Creation

Pre-declaring the configuration in the configuration file might not be needed. Instead, you can obtain the actual parameters dynamically from a configuration center and then initialize an instance.

```js
// {app_root}/app.js
module.exports = app => {
  app.beforeStart(async () => {
    // obtain the MySQL configuration from the configuration center
    // { host: 'mysql.com', port: '3306', user: 'test_user', password: 'test_password', database: 'test' }
    const mysqlConfig = await app.configCenter.fetch('mysql');
    app.database = app.mysql.createInstance(mysqlConfig);
  });
};
```

## Service layer

Connecting to MySQL is data-processing work rather than Web-layer work, so it is strongly recommended to keep this code in the Service layer.

An example of connecting to MySQL follows. For details of the Service layer, refer to [service](../basics/service.md).

```js
// app/service/user.js
class UserService extends Service {
  async find(uid) {
    // assume we have the user id then trying to get the user details from database
    const user = await this.app.mysql.get('users', { id: 11 });
    return { user };
  }
}
```

After that, obtain the data from the Service layer in a controller:

```js
// app/controller/user.js
class UserController extends Controller {
  async info() {
    const ctx = this.ctx;
    const userId = ctx.params.id;
    const user = await ctx.service.user.find(userId);
    ctx.body = user;
  }
}
```

## Writing CRUD

The following statements are assumed to live under `app/service` unless otherwise specified.

### Create

Use the `insert` method to perform an INSERT INTO query.

```js
// INSERT
const result = await this.app.mysql.insert('posts', { title: 'Hello World' }); // insert a record with title 'Hello World' into the 'posts' table
// => INSERT INTO `posts`(`title`) VALUES('Hello World');
console.log(result);
// =>
// {
//   fieldCount: 0,
//   affectedRows: 1,
//   insertId: 3710,
//   serverStatus: 2,
//   warningCount: 2,
//   message: '',
//   protocol41: true,
//   changedRows: 0
// }

// check if the insertion succeeded or failed
const insertSuccess = result.affectedRows === 1;
```

### Read

Use `get` or `select` to select one or multiple records. The `select` method supports query criteria and result customization.

- get one record

```js
const post = await this.app.mysql.get('posts', { id: 12 });
// => SELECT * FROM `posts` WHERE `id` = 12 LIMIT 0, 1;
```

- query all records from the table

```js
const results = await this.app.mysql.select('posts');
// => SELECT * FROM `posts`;
```

- query criteria and result customization

```js
const results = await this.app.mysql.select('posts', { // search the posts table
  where: { status: 'draft', author: ['author1', 'author2'] }, // WHERE criteria
  columns: ['author', 'title'], // get the value of certain columns
  orders: [['created_at', 'desc'], ['id', 'desc']], // sort order
  limit: 10, // limit the number of returned rows
  offset: 0, // data offset
});
// => SELECT `author`, `title` FROM `posts`
//    WHERE `status` = 'draft' AND `author` IN('author1','author2')
//    ORDER BY `created_at` DESC, `id` DESC LIMIT 0, 10;
```

### Update

UPDATE operation to update records in the database:

```js
// modify data and search by primary key ID, and refresh
const row = {
  id: 123,
  name: 'fengmk2',
  otherField: 'other field value', // any other fields you want to update
  modifiedAt: this.app.mysql.literals.now, // `now()` on the db server
};
const result = await this.app.mysql.update('posts', row); // update records in 'posts'
// => UPDATE `posts` SET `name` = 'fengmk2', `modifiedAt` = NOW() WHERE id = 123;

// check if the update succeeded or failed
const updateSuccess = result.affectedRows === 1;

// if the primary key is a custom id, such as custom_id, you should configure it in `where`
const row = {
  name: 'fengmk2',
  otherField: 'other field value', // any other fields you want to update
  modifiedAt: this.app.mysql.literals.now, // `now()` on the db server
};
const options = {
  where: {
    custom_id: 456,
  },
};
const result = await this.app.mysql.update('posts', row, options); // update records in 'posts'
// => UPDATE `posts` SET `name` = 'fengmk2', `modifiedAt` = NOW() WHERE custom_id = 456;

// check if the update succeeded or failed
const updateSuccess = result.affectedRows === 1;
```

### Delete

DELETE operation to delete records from the database:

```js
const result = await this.app.mysql.delete('posts', {
  author: 'fengmk2',
});
// => DELETE FROM `posts` WHERE `author` = 'fengmk2';
```

## Executing SQL statements

The plugin supports splicing and executing SQL statements directly. Use `query` to execute a valid SQL statement.

**Note!! We strongly discourage developers from splicing SQL statements by hand; it can easily lead to SQL injection!!**

If you have to splice SQL statements, use the `mysql.escape` method.

Refer to [preventing-sql-injection-in-node-js](http://stackoverflow.com/questions/15778572/preventing-sql-injection-in-node-js).

```js
const postId = 1;
const results = await this.app.mysql.query('update posts set hits = (hits + ?) where id = ?', [1, postId]);
// => update posts set hits = (hits + 1) where id = 1;
```

## Transaction

Transactions are mainly used to handle complex operations on large amounts of data. For example, in a personnel management system, deleting a person requires deleting not only the employee's basic information but also related information such as mailboxes and articles. It is easier to run such a set of operations using a transaction.

A transaction is a set of continuous database operations performed as a single unit of work. The transaction succeeds only if every individual operation within the group succeeds; if one part of the transaction fails, then the entire transaction fails. In general, a transaction must be atomic, consistent, isolated and durable.

- Atomicity requires that each transaction be "all or nothing": if one part of the transaction fails, then the entire transaction fails, and the database state is left unchanged.
- The consistency property ensures that any transaction will bring the database from one valid state to another.
- The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed sequentially.
- The durability property ensures that once a transaction has been committed, it will remain so.

Therefore, a transaction must be paired with `beginTransaction`, and `commit` or `rollback`, marking respectively the beginning of the transaction, its success, and its failure (roll back).

egg-mysql provides two types of transactions.

### Manual Control

- advantage: `beginTransaction`, `commit` or `rollback` are completely under the developer's control
- disadvantage: more handwritten code; forgetting to catch errors or to clean up will lead to serious bugs

```js
const conn = await app.mysql.beginTransaction(); // initialize the transaction

try {
  await conn.insert(table, row1); // first step
  await conn.update(table, row2); // second step
  await conn.commit(); // commit the transaction
} catch (err) {
  // error, rollback
  await conn.rollback(); // rollback after catching the exception!!
  throw err;
}
```

### Automatic control: Transaction with scope

- API: `beginTransactionScope(scope, ctx)`
  - `scope`: A generatorFunction which will execute all SQL statements of this transaction.
  - `ctx`: The context object of the current request. It ensures that, even in the case of a nested transaction, there is only one active transaction in a request at the same time.
- advantage: easy to use, as if there were no transaction in your code.
- disadvantage: the whole transaction either succeeds or fails as a unit; it cannot be controlled precisely

```js
const result = await app.mysql.beginTransactionScope(async conn => {
  // don't commit or rollback by yourself
  await conn.insert(table, row1);
  await conn.update(table, row2);
  return { success: true };
}, ctx); // ctx is the context of the current request; access it via `this.ctx` within a service file.
// if an error is thrown in the scope, the transaction is rolled back automatically
```

## Literal

Use `Literal` if you need to call literals or functions of MySQL.

### Inner Literal

- `NOW()`: the database system time, obtained via `app.mysql.literals.now`

```js
await this.app.mysql.insert(table, {
  create_time: this.app.mysql.literals.now,
});
// => INSERT INTO `$table`(`create_time`) VALUES(NOW())
```

### Custom literal

The following demo shows how to call the `CONCAT(s1, ...sn)` function in MySQL to do string splicing.

```js
const Literal = this.app.mysql.literals.Literal;
const first = 'James';
const last = 'Bond';
await this.app.mysql.insert(table, {
  id: 123,
  fullname: new Literal(`CONCAT("${first}", "${last}")`),
});
// => INSERT INTO `$table`(`id`, `fullname`) VALUES(123, CONCAT("James", "Bond"))
```

[egg-mysql]: https://github.com/eggjs/egg-mysql
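To make the SQL-injection warning above concrete, here is a small, self-contained JavaScript sketch. It does not use egg or the mysql driver; `naiveEscape` is a deliberately simplified stand-in for what `mysql.escape` does (it only handles backslashes and single quotes), so treat it as an illustration, not production code — real code should use `mysql.escape` or, better, `?` placeholders with the `query` method.

```javascript
// Simplified stand-in for mysql.escape: wrap the value in quotes and
// escape backslashes and single quotes. Illustration only.
function naiveEscape(value) {
  const escaped = String(value)
    .replace(/\\/g, '\\\\')
    .replace(/'/g, "\\'");
  return `'${escaped}'`;
}

const userInput = "fengmk2'; DROP TABLE posts; --";

// Unsafe: the quote inside userInput terminates the SQL string literal,
// letting the rest of the input run as SQL.
const unsafe = `SELECT * FROM posts WHERE author = '${userInput}'`;

// Safer: the quote is escaped, so the whole input stays inside the literal.
const safe = `SELECT * FROM posts WHERE author = ${naiveEscape(userInput)}`;

console.log(unsafe);
console.log(safe);
```

Compare the two printed statements: in the first, the injected `'; DROP TABLE posts; --` escapes the string literal; in the second it remains harmless data.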
28.792453
349
0.703052
eng_Latn
0.943934
b9048fbeaad289a809f450162c7f3049a841cb1a
5,725
md
Markdown
articles/aks/update-credentials.md
riwaida/azure-docs.ja-jp
2beaedf3fc0202ca4db9e5caabb38d0d3a4f6f5b
[ "CC-BY-4.0", "MIT" ]
1
2020-05-27T07:43:21.000Z
2020-05-27T07:43:21.000Z
articles/aks/update-credentials.md
riwaida/azure-docs.ja-jp
2beaedf3fc0202ca4db9e5caabb38d0d3a4f6f5b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/aks/update-credentials.md
riwaida/azure-docs.ja-jp
2beaedf3fc0202ca4db9e5caabb38d0d3a4f6f5b
[ "CC-BY-4.0", "MIT" ]
1
2020-05-21T03:03:13.000Z
2020-05-21T03:03:13.000Z
--- title: クラスターの資格情報のリセット titleSuffix: Azure Kubernetes Service description: Azure Kubernetes Service (AKS) クラスター用のサービス プリンシパル資格情報または AAD アプリケーション資格情報を更新またはリセットする方法について説明します services: container-service ms.topic: article ms.date: 03/11/2019 ms.openlocfilehash: 8420771e32aa792aa79a07fdf4362ad0d9b45d48 ms.sourcegitcommit: d6e4eebf663df8adf8efe07deabdc3586616d1e4 ms.translationtype: HT ms.contentlocale: ja-JP ms.lasthandoff: 04/15/2020 ms.locfileid: "81392631" --- # <a name="update-or-rotate-the-credentials-for-azure-kubernetes-service-aks"></a>Azure Kubernetes Service (AKS) 用の資格情報を更新またはローテーションする 既定では、有効期限が 1 年のサービス プリンシパルと共に AKS クラスターが作成されます。 期限が近づいたら、資格情報をリセットしてサービス プリンシパルの期限を延長することができます。 また、定義済みのセキュリティ ポリシーの一環として、資格情報を更新またはローテーションすることもできます。 この記事では、AKS クラスターのこれらの資格情報の更新方法について説明します。 また、[AKS クラスターと Azure Active Directory を統合][aad-integration]してあり、クラスターの認証プロバイダーとしてそれを使用する場合もあります。 その場合は、クラスター、AAD サーバー アプリ、AAD クライアント アプリ用にさらに 2 つの ID が作成されていて、それらの資格情報もリセットできます。 または、サービス プリンシパルの代わりに、マネージド ID をアクセス許可に使用できます。 マネージド ID は、サービス プリンシパルよりも簡単に管理でき、更新やローテーションが必要ありません。 詳細については、[マネージド ID の使用](use-managed-identity.md)に関するページを参照してください。 ## <a name="before-you-begin"></a>開始する前に Azure CLI バージョン 2.0.65 以降がインストールされて構成されている必要があります。 バージョンを確認するには、 `az --version` を実行します。 インストールまたはアップグレードする必要がある場合は、「 [Azure CLI のインストール][install-azure-cli]」を参照してください。 ## <a name="update-or-create-a-new-service-principal-for-your-aks-cluster"></a>AKS クラスター用の新しいサービス プリンシパルを更新または作成する AKS クラスターの資格情報を更新する場合、以下の方法から選択できます。 * クラスターで使用されている既存のサービス プリンシパルの資格情報を更新する。または、 * サービス プリンシパルを作成し、それらの新しい資格情報を使用するようにクラスターを更新する。 ### <a name="reset-existing-service-principal-credential"></a>既存のサービス プリンシパルの資格情報をリセットする 既存のサービス プリンシパル資格情報を更新するには、[az aks show][az-aks-show] コマンドを使用して、クラスターのサービス プリンシパル ID を取得します。 以下の例は、*myResourceGroup* リソース グループにある *myAKSCluster* という名前のクラスターの ID を取得します。 サービス プリンシパル ID は、追加コマンドで使用するための *SP_ID* という名前の変数として設定されます。 ```azurecli-interactive SP_ID=$(az aks show --resource-group myResourceGroup --name 
myAKSCluster \ --query servicePrincipalProfile.clientId -o tsv) ``` サービス プリンシパル ID を含む変数セットを指定し、[az ad sp credential reset][az-ad-sp-credential-reset] を使用して資格情報をリセットします。 以下の例では、Azure プラットフォームがサービス プリンシパルの新しいセキュア シークレットを生成できます。 この新しいセキュア シークレットは、変数としても保管されます。 ```azurecli-interactive SP_SECRET=$(az ad sp credential reset --name $SP_ID --query password -o tsv) ``` 次に、[新しいサービス プリンシパル資格情報での AKS クラスターの更新](#update-aks-cluster-with-new-service-principal-credentials)に進みます。 このステップは、サービス プリンシパルの変更を AKS クラスターに反映させるために必要です。 ### <a name="create-a-new-service-principal"></a>新しいサービス プリンシパルを作成する 前のセクションで既存のサービス プリンシパル資格情報の更新を選択した場合は、このステップをスキップしてください。 続いて、[新しいサービス プリンシパル資格情報で AKS クラスターを更新](#update-aks-cluster-with-new-service-principal-credentials)します。 サービス プリンシパルを作成してから、それらの新しい資格情報を使用するように AKS クラスターを更新するには、[az ad sp create-for-rbac][az-ad-sp-create] コマンドを使用します。 次の例では、`--skip-assignment` パラメーターによって、追加の既定の割り当てが行われないようにしています。 ```azurecli-interactive az ad sp create-for-rbac --skip-assignment ``` 出力は次の例のようになります。 使っている `appId` と `password` をメモします。 これらの値は、次のステップで使用します。 ```json { "appId": "7d837646-b1f3-443d-874c-fd83c7c739c5", "name": "7d837646-b1f3-443d-874c-fd83c7c739c", "password": "a5ce83c9-9186-426d-9183-614597c7f2f7", "tenant": "a4342dc8-cd0e-4742-a467-3129c469d0e5" } ``` 次に、以下の例に示すように、使用した [az ad sp create-for-rbac][az-ad-sp-create] コマンドの出力を使用して、サービス プリンシパル ID とクライアント シークレットの変数を定義します。 *SP_ID* は *appId* で、*SP_SECRET* は *パスワード* です。 ```console SP_ID=7d837646-b1f3-443d-874c-fd83c7c739c5 SP_SECRET=a5ce83c9-9186-426d-9183-614597c7f2f7 ``` 次に、[新しいサービス プリンシパル資格情報での AKS クラスターの更新](#update-aks-cluster-with-new-service-principal-credentials)に進みます。 このステップは、サービス プリンシパルの変更を AKS クラスターに反映させるために必要です。 ## <a name="update-aks-cluster-with-new-service-principal-credentials"></a>新しいサービス プリンシパル資格情報で AKS クラスターを更新する 既存のサービス プリンシパル資格情報の更新を選択したか、サービス プリンシパルの作成を選択したかに関係なく、ここで [az aks update-credentials][az-aks-update-credentials] コマンドを使用して、新しい資格情報で AKS クラスターを更新します。 *--service-principal* と *--client-secret* 
の変数が使用されます。 ```azurecli-interactive az aks update-credentials \ --resource-group myResourceGroup \ --name myAKSCluster \ --reset-service-principal \ --service-principal $SP_ID \ --client-secret $SP_SECRET ``` サービス プリンシパル資格情報が AKS で更新されるまでに少し時間がかかります。 ## <a name="update-aks-cluster-with-new-aad-application-credentials"></a>新しい AAD アプリケーション資格情報で AKS クラスターを更新する [AAD 統合手順][create-aad-app]に従って、新しい AAD サーバー アプリケーションとクライアント アプリケーションを作成できます。 または、[サービス プリンシパルのリセットの場合と同じ方法](#reset-existing-service-principal-credential)に従って、既存の AAD アプリケーションをリセットします。 その後は、同じ [az aks update-credentials][az-aks-update-credentials] コマンドを使用し、ただし *--reset-aad* 変数を使用して、クラスター AAD アプリケーションの資格情報を更新することだけが必要です。 ```azurecli-interactive az aks update-credentials \ --resource-group myResourceGroup \ --name myAKSCluster \ --reset-aad \ --aad-server-app-id <SERVER APPLICATION ID> \ --aad-server-app-secret <SERVER APPLICATION SECRET> \ --aad-client-app-id <CLIENT APPLICATION ID> ``` ## <a name="next-steps"></a>次のステップ この記事では、AKS クラスター自体と AAD 統合アプリケーション用のサービス プリンシパルを更新しました。 クラスター内のワークロードの ID を管理する方法について詳しくは、「[AKS の認証と認可のベスト プラクティス][best-practices-identity]」を参照してください。 <!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli [az-aks-show]: /cli/azure/aks#az-aks-show [az-aks-update-credentials]: /cli/azure/aks#az-aks-update-credentials [best-practices-identity]: operator-best-practices-identity.md [aad-integration]: azure-ad-integration.md [create-aad-app]: azure-ad-integration.md#create-the-server-application [az-ad-sp-create]: /cli/azure/ad/sp#az-ad-sp-create-for-rbac [az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az-ad-sp-credential-reset
46.169355
320
0.790393
yue_Hant
0.52685
b904a25a1d3161218bbaf4c6e00a77f3ccd2a61f
1,749
md
Markdown
_posts/2016-6-10-Swagger-Rest-API.md
zeldi/zeldi.github.io
64ec96d053ff37ccf710f539998ff70072acd29c
[ "MIT" ]
null
null
null
_posts/2016-6-10-Swagger-Rest-API.md
zeldi/zeldi.github.io
64ec96d053ff37ccf710f539998ff70072acd29c
[ "MIT" ]
null
null
null
_posts/2016-6-10-Swagger-Rest-API.md
zeldi/zeldi.github.io
64ec96d053ff37ccf710f539998ff70072acd29c
[ "MIT" ]
null
null
null
---
layout: post
title: Tutorial on Creating RESTful API using Swagger
permalink: rest-swagger
date: 2016-6-10
---

I found one of the best tutorials on using Swagger to write a RESTful API using Node.js. It covers the end-to-end CRUD (Create, Read, Update, Delete) cycle. The tutorial is written by Samuela Zara on <a href="https://scotch.io/tutorials/speed-up-your-restful-api-development-in-node-js-with-swagger">scotch.io</a> and provides step-by-step instructions to create a RESTful API to manage a movie collection.

If you haven't heard of Swagger, it's an open API initiative aimed at making powerful, consistent and sharable definitions of RESTful APIs. It offers interactive documentation, client-side SDK generation and discoverability so that your APIs are easily discovered, described and used by developers.

In this tutorial, Samuela starts off by setting up the Swagger module for Node.js, then uses the mock mode feature, which helps you set up the API structure without writing a single line of code. Once the structure is in place, you implement the APIs for Get movie, Update movie, Add movie and Delete movie.

You can find the tutorial here: <a href="https://scotch.io/tutorials/speed-up-your-restful-api-development-in-node-js-with-swagger">Restful API in Node.js using Swagger</a>.

More references for learning Swagger:

1. <a href="http://swagger.io/">Swagger</a>
2. <a href="https://apihandyman.io/writing-openapi-swagger-specification-tutorial-part-1-introduction/">Writing OpenAPI with Swagger</a>
3. <a href="http://openapi-specification-visual-documentation.apihandyman.io/">Visualizing OpenAPI Specs</a>

Enjoy!!!
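To give a flavor of what such a definition looks like, below is a hedged sketch of a minimal Swagger 2.0 document for a movie-collection API. The paths and fields here are illustrative assumptions, not the tutorial's exact spec:

```yaml
swagger: "2.0"
info:
  title: Movie Collection API
  version: "1.0.0"
basePath: /api
paths:
  /movies:
    get:
      description: Returns the full movie collection
      responses:
        "200":
          description: A list of movies
          schema:
            type: array
            items:
              $ref: "#/definitions/Movie"
    post:
      description: Adds a movie to the collection
      parameters:
        - name: movie
          in: body
          required: true
          schema:
            $ref: "#/definitions/Movie"
      responses:
        "200":
          description: The created movie
definitions:
  Movie:
    type: object
    properties:
      title:
        type: string
      year:
        type: integer
```

Tools such as swagger-ui can render a spec like this as interactive documentation, which is part of the workflow the tutorial demonstrates.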
64.777778
397
0.781589
eng_Latn
0.947623
b904af16fdf0ae55ba664d498f8922eef16858b6
3,847
md
Markdown
README.md
habi/Skeletonization
db35cc4ddc5d22fbec72183e57d9632e9fb4d50b
[ "MIT" ]
1
2020-01-14T14:32:29.000Z
2020-01-14T14:32:29.000Z
README.md
habi/Skeletonization
db35cc4ddc5d22fbec72183e57d9632e9fb4d50b
[ "MIT" ]
null
null
null
README.md
habi/Skeletonization
db35cc4ddc5d22fbec72183e57d9632e9fb4d50b
[ "MIT" ]
null
null
null
# Skeletonization Using Fuzzy Logic
### Mohammad Mahdi Kamani, Farshid Farhat, Stephen Wistar, James Z. Wang

In this repository you can see the code for skeletonization of binary images using our novel fuzzy inference system. Check out the demo notebook [here](https://github.com/mmkamani7/Skeletonization/blob/master/Skeletonization.ipynb). Please cite our papers listed below if you use our code.

<img src="img/skeletonization.png" height=600px align="middle">

The skeleton of a shape is a low-level representation that can be used for matching and recognition purposes in various fields of study, including image retrieval and shape matching, or human pose estimation and recovery. A skeleton can provide a good abstraction of a shape, containing its topological structure and features. Because it is the simplest representation of a shape, there has been an extensive effort among researchers to develop generic algorithms for the skeletonization of shapes. However, since there is no "true" skeleton defined for an object, the skeletonization literature lacks a robust evaluation. The vast majority of the algorithms are based on Blum's "Grassfire" analogy and formulation for skeletonization. The most important factor in skeletonization algorithms is to preserve the topology of the shape.

## Software Requirements
This code has been developed and tested with Python 2.7 on Windows, Ubuntu and macOS; however, it should work fine on Python 3.x. Please let us know on the Issues page if you have difficulty running this code. The Python code needs some packages, which you can get with this line in bash or cmd (you might need sudo on Linux):

```bash
[sudo] pip install -U scipy scikit-image scikit-fuzzy
```

## Getting Started
You can see the demo in Jupyter [here](https://github.com/mmkamani7/Skeletonization/blob/master/Skeletonization.ipynb). You can also use the functions directly as described in the notebook.
Just add these lines at the beginning of your code:

```python
from fuzzyTransform import fuzzyTransform
from skeleton2Graph import *
import skeletonization
```

To compute the skeleton directly, write the following lines:

```python
import skeletonization
ob = skeletonization.skeleton()
ob.BW = BW  # the binary image whose skeleton you want to find
ob.skeletonization()
```

Then you will have an object with a ``skeleton`` attribute that holds the skeleton of your desired object.

You can also follow the steps described in the notebook. After calculating the Euclidean Distance Transform and its gradient (as described in the notebook), you can call the flux function to calculate the outward flux:

```python
flux(delD_xn, delD_yn)
```

Then use this flux to calculate the graph of the initial skeleton and its information with this function:

```python
adjacencyMatrix, edgeList, edgeProperties, edgeProperties2, verticesProperties, verticesProperties2, endPoints, branchPoints = skeleton2Graph(initialSkeleton, initialSkeleton*fluxMap)
```

Then use our fuzzy inference system to prune the skeleton like this:

```python
skeletonNew = fuzzyTransform(initialSkeleton, vertices, edgeList, edgeProperties, verticesProperties, verticesProperties2, adjacencyMatrix)
```

## References
```ref
@article{kamani2017skeleton,
  title={Skeleton Matching with Applications in Severe Weather Detection},
  author={Kamani, Mohammad Mahdi and Farhat, Farshid and Wistar, Stephen and Wang, James Z},
  journal={Applied Soft Computing},
  year={2017},
  publisher={Elsevier}
}
@inproceedings{kamani2016shape,
  title={Shape matching using skeleton context for automated bow echo detection},
  author={Kamani, Mohammad Mahdi and Farhat, Farshid and Wistar, Stephen and Wang, James Z},
  booktitle={Big Data (Big Data), 2016 IEEE International Conference on},
  pages={901--908},
  year={2016},
  organization={IEEE}
}
```
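As a small, self-contained illustration of the first step above, here is a hedged brute-force sketch of the Euclidean Distance Transform. It is a stand-in written from the definition for illustration only, not the repository's (or scipy's) implementation:

```python
import numpy as np

def brute_force_edt(bw):
    """Euclidean Distance Transform computed from its definition:
    for every foreground pixel, the distance to the nearest
    background pixel. O(n^2) -- for illustration only."""
    fg = np.argwhere(bw)    # foreground pixel coordinates
    bg = np.argwhere(~bw)   # background pixel coordinates
    edt = np.zeros(bw.shape)
    for (r, c) in fg:
        # distance from (r, c) to every background pixel; keep the minimum
        d = np.sqrt(((bg - [r, c]) ** 2).sum(axis=1))
        edt[r, c] = d.min()
    return edt

# A 5x5 image with a 3x3 foreground square in the middle.
bw = np.zeros((5, 5), dtype=bool)
bw[1:4, 1:4] = True
edt = brute_force_edt(bw)
print(edt[2, 2])  # center pixel: distance 2.0 to the nearest background pixel
```

In practice you would use `scipy.ndimage.distance_transform_edt` for speed; the gradient of this map is what feeds the flux computation described above.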
55.753623
835
0.793865
eng_Latn
0.993384
b904af9f2ae779dfbbc9e3ac7b058566ceca4776
4,155
md
Markdown
CHANGELOG.md
xujuntwt95329/binaryen
6cec37ef2e38ae1844c61832566cfc22cb3e0ef6
[ "Apache-2.0" ]
1
2019-08-30T17:22:26.000Z
2019-08-30T17:22:26.000Z
CHANGELOG.md
xujuntwt95329/binaryen
6cec37ef2e38ae1844c61832566cfc22cb3e0ef6
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
xujuntwt95329/binaryen
6cec37ef2e38ae1844c61832566cfc22cb3e0ef6
[ "Apache-2.0" ]
null
null
null
Changelog
=========

This document describes changes between tagged Binaryen versions.

To browse or download snapshots of old tagged versions, visit
https://github.com/WebAssembly/binaryen/releases.

Not all changes are documented here. In particular, new features, user-oriented fixes, options, command-line parameters, usage changes, deprecations, significant internal modifications and optimizations etc. generally deserve a mention. To examine the full set of changes between versions, visit the link to full changeset diff at the end of each section.

Current Trunk
-------------

- Binaryen.js instruction API changes:
  - `notify` -> `atomic.notify`
  - `i32.wait` / `i64.wait` -> `i32.atomic.wait` / `i64.atomic.wait`
- Binaryen.js: `flags` argument in `setMemory` function is removed.
- `atomic.fence` instruction support is added.
- wasm-emscripten-finalize: Don't rely on the name section being present in the input. Use the exported names for things instead.

v88
---

- wasm-emscripten-finalize: For -pie binaries that import a mutable stack pointer, we internalize this and import it as immutable.
- The `tail-call` feature including the `return_call` and `return_call_indirect` instructions is ready to use.

v87
---

- Rename Bysyncify => Asyncify

v86
---

- The --initial-stack-pointer argument to wasm-emscripten-finalize no longer has any effect. It will be removed completely in a future release.

v85
---

- Wast file parsing rules now don't allow a few invalid formats for typeuses that were previously allowed. Typeuse entries should follow this format, meaning they should have (type) -> (param) -> (result) order if more than one of them exists.
  ```
  typeuse ::= (type index|name)+ |
              (type index|name)+ (param ..)* (result ..)* |
              (param ..)* (result ..)*
  ```
  Also, all (local) nodes in a function definition should come after all typeuse elements.
- Removed APIs related to deprecated instruction names in Binaryen.js:
  - `get_local` / `getLocal`
  - `set_local` / `setLocal`
  - `tee_local` / `teeLocal`
  - `get_global` / `getGlobal`
  - `set_global` / `setGlobal`
  - `current_memory` / `currentMemory`
  - `grow_memory` / `growMemory`
  They are now available as their new instruction names: `local.get`, `local.set`, `local.tee`, `global.get`, `global.set`, `memory.size`, and `memory.grow`.
- Add feature handling to the C/JS API with no feature enabled by default.

v84
---

- Generate dynCall thunks for any signatures used in "invoke" calls.

v81
---

- Fix AsmConstWalker handling of string address in arg0 with -fPIC code

v80
---

- Change default feature set in the absence of a target features section from all features to MVP.

v79
---

- Improve support for side modules

v78
---

- Add `namedGlobals` to metadata output of wasm-emscripten-finalize
- Add support for llvm PIC code.
- Add --side-module option to wasm-emscripten-finalize.
- Add `segmentPassive` argument to `BinaryenSetMemory` for marking segments passive.
- Make `-o -` print to stdout instead of a file named "-".

v74
---

- Remove wasm-merge tool.

v73
---

- Remove jsCall generation from wasm-emscripten-finalize. This is not needed as of https://github.com/emscripten-core/emscripten/pull/8255.

v55
---

- `RelooperCreate` in the C API now has a Module parameter, and `RelooperRenderAndDispose` does not.
- The JS API now has the `Relooper` constructor receive the `Module`.
- Relooper: Condition properties on Branches must not have side effects.

older
-----

- `BinaryenSetFunctionTable` in the C API no longer accepts an array of functions, instead it accepts an array of function names, `const char** funcNames`. Previously, you could not include imported functions because they are of type `BinaryenImportRef` instead of `BinaryenFunctionRef`.
#1650 - `BinaryenSetFunctionTable` in the C API now expects the initial and maximum table size as additional parameters, like `BinaryenSetMemory` does for pages, so tables can be grown dynamically. #1687 - Add `shared` parameters to `BinaryenAddMemoryImport` and `BinaryenSetMemory`, to support a shared memory. #1686
30.551471
80
0.730927
eng_Latn
0.974389
b904b4f2fcdfd84a4d140b27764090165cbf44bf
2,122
md
Markdown
powerapps-docs/maker/portals/configure/configure-contacts.md
noriji/powerapps-docs.ja-jp
0dc46ef4c9da7a8ab6d6e2a397f271170f7b9970
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/maker/portals/configure/configure-contacts.md
noriji/powerapps-docs.ja-jp
0dc46ef4c9da7a8ab6d6e2a397f271170f7b9970
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/maker/portals/configure/configure-contacts.md
noriji/powerapps-docs.ja-jp
0dc46ef4c9da7a8ab6d6e2a397f271170f7b9970
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: ポータルで使用する連絡先を構成する |MicrosoftDocs description: ポータルで使用する連絡先を追加して構成する手順。 author: sbmjais manager: shujoshi ms.service: powerapps ms.topic: conceptual ms.custom: '' ms.date: 11/04/2019 ms.author: shjais ms.reviewer: '' ms.openlocfilehash: 4a8c70304385007c132f2c13ec0ddca4b68e2231 ms.sourcegitcommit: d9cecdd5a35279d78aa1b6c9fc642e36a4e4612c ms.translationtype: MT ms.contentlocale: ja-JP ms.lasthandoff: 11/04/2019 ms.locfileid: "73553189" --- # <a name="configure-a-contact-for-use-on-a-portal"></a>ポータルで使用する連絡先を構成する 連絡先の基本情報を入力した後 (または、ユーザーがポータルでサインアップフォームに入力した場合)、ポータルの連絡先フォームの [web 認証] タブにアクセスし、ローカル認証を使用して連絡先を構成します。 フェデレーション認証オプションの詳細については、「[ポータルの認証 id を設定](set-authentication-identity.md)する」を参照してください。 ローカル認証を使用してポータルの連絡先を構成するには、次の手順に従います。 1. **ユーザー名**を入力します。 2. コマンド リボンで、**その他のコマンド** &gt; **パスワードの変更** に移動します。 パスワードの変更ワークフローを完了すると、必要なフィールドが自動的に構成されます。 これを完了すると、お客様のポータル用に連絡先が構成されます。 ## <a name="change-password-for-a-contact-from-portal-management-app"></a>ポータル管理アプリから連絡先のパスワードを変更する 1. [ポータル管理アプリ](configure-portal.md)を開きます。 2. [**ポータル** > **連絡先**] に移動し、パスワードを変更する連絡先を開きます。 または、[[共有](../manage-existing-portals.md#share)] ウィンドウから **[連絡先]** ページを開くこともできます。 3. 上部にあるツールバーの **[タスクフロー]** を選択します。 > [!div class="mx-imgBorder"] > ![タスクフローアイコン](../media/task-flow.png "タスクフローアイコン") 4. **ポータルの連絡先**タスクフローの [パスワードの変更] を選択します。 5. **[ポータル連絡先のパスワードの変更]** ウィンドウで、パスワードを変更する連絡先を選択または作成し、 **[次へ]** を選択します。 > [!div class="mx-imgBorder"] > ![パスワードを変更する連絡先を選択してください](../media/change-password-select-contact.png "パスワードを変更する連絡先を選択してください") 6. **[新しいパスワード]** フィールドに新しいパスワードを入力し、 **[次へ]** を選択します。 > [!div class="mx-imgBorder"] > ![連絡先の新しいパスワードを入力してください](../media/change-password-new-password.png "連絡先の新しいパスワードを入力してください") パスワードを入力せずに **[次へ]** を選択した場合は、選択した連絡先のパスワードを削除するかどうかを確認するメッセージが表示されます。 > [!div class="mx-imgBorder"] > ![連絡先のパスワードを削除します。](../media/change-password-remove-password.png "連絡先のパスワードを削除します。") 7. 
変更を行った後、 **[完了]** を選択します。 ### <a name="see-also"></a>関連項目 [ポータルに連絡先を招待する](invite-contacts.md) [ポータルの認証 id を設定する](set-authentication-identity.md)
34.225806
229
0.739397
yue_Hant
0.351059
b904b6f3e0bfa6c853afab4159fd2ba52db7f225
2,578
md
Markdown
流行/Six Thirty-Ariana Grande/README.md
hsdllcw/everyonepiano-music-database
d440544ad31131421c1f6b5df0f039974521eb8d
[ "MIT" ]
17
2020-12-01T05:27:50.000Z
2022-03-28T05:03:34.000Z
流行/Six Thirty-Ariana Grande/README.md
hsdllcw/everyonepiano-music-database
d440544ad31131421c1f6b5df0f039974521eb8d
[ "MIT" ]
null
null
null
流行/Six Thirty-Ariana Grande/README.md
hsdllcw/everyonepiano-music-database
d440544ad31131421c1f6b5df0f039974521eb8d
[ "MIT" ]
2
2021-08-24T08:58:58.000Z
2022-02-08T08:22:52.000Z
**Six Thirty** 是由美国女歌手Ariana Grande录唱的一首歌曲,被收录在Ariana Grande于2020年10月30日发行的第六张录音室专辑《Positions》。 Ariana是一个奇妙的矛盾体,她举手投足之间混杂着多种气质,时尚、甜美、活泼,同时也具备超越同龄人的稳健台风,而当她一开口唱歌,听众又仿佛在她的传统发声技巧与百老汇式的唱腔中,回到了上世纪90年代。 同时,网站还为大家提供了《[ **Just Like Magic**](Music-12277-Just-Like-Magic-Ariana- Grande.html "Just Like Magic")》曲谱下载 歌词下方是 _Six Thirty钢琴谱_ ,大家可以免费下载学习。 ### Six Thirty歌词: Ah hey yeah I know I be on some bulls**t Know I be driving you crazy But I know you love how I whip it You can only stay mad for a minute So come here and give me some kisses You know I'm very delicious You know I'm very impatient Might change my mind so don't keep me waiting I just wonder baby if you're gonna stay Even if one day I'll lose it and go crazy I know this s**t kinda heavy I just wanna tell you directly So boy let me know if you ready Are you down What's up Are you down What's up Are you down Are you down Are you down Are you down Are you down Mmm You know you be on some bulls**t Bulls**t Act so possessive and crazy Crazy But I know that's just 'causе you love me And you ain't scared to show mе your ugly And maybe that's just how it's supposed to be I'm the release you the dopamine And you wonder baby if I'm gonna stay Even if one day you lose it and go crazy I know this s**t kinda heavy Heavy Just wanna ask you directly Directly Boy let me know if you ready Are you down What's up What's up Are you down What's up What's up Are you down Are you down What's up Are you down What's up Are you down Tell me Are you down Are you gonna be Six thirty Mmm Down like six thirty Mmm Down like sunset Down like sunset Down like my head on your chest Mmm Down like six thirty Ooh Down like six thirty Six thirty Down like my foot on the gas skrrt skrrt Down like six thirty yeah What you gon' do when I'm bored And I wanna play video games at 2 AM What if I need a friend Will you ride 'til the end Am I enough to keep your love When I'm old and stuck will you still have a crush Are you down What's up Are you down What's up 
Are you down Oh Are you down Are you down Are you down Tell me Are you down Are you gonna be Six thirty Mmm Down like six thirty Down like six thirty Down like sunset Down like sunset Down like my head on your chest Down like my head on your chest Down like six thirty Ooh Down like six thirty Six thirty Down like my foot on the gas skrrt skrrt Down like six thirty yeah
22.224138
100
0.714895
eng_Latn
0.996282
b90596f6007244a7c2a4e4c4fbc85c8dda6b1239
12,234
md
Markdown
powerbi-docs/connect-data/desktop-data-sources.md
hyoshioka0128/powerbi-docs.ja-jp
bd12ffdd6cdb2e180aa24732de8faa039ecb3484
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/connect-data/desktop-data-sources.md
hyoshioka0128/powerbi-docs.ja-jp
bd12ffdd6cdb2e180aa24732de8faa039ecb3484
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/connect-data/desktop-data-sources.md
hyoshioka0128/powerbi-docs.ja-jp
bd12ffdd6cdb2e180aa24732de8faa039ecb3484
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Power BI Desktop のデータ ソース description: Power BI Desktop のデータ ソース author: davidiseminger ms.reviewer: '' ms.service: powerbi ms.subservice: powerbi-desktop ms.topic: conceptual ms.date: 05/19/2020 ms.author: davidi LocalizationGroup: Connect to data ms.openlocfilehash: f84fcc4b32468ab8ffddbb593ae97ea8fb20442a ms.sourcegitcommit: 250242fd6346b60b0eda7a314944363c0bacaca8 ms.translationtype: HT ms.contentlocale: ja-JP ms.lasthandoff: 05/20/2020 ms.locfileid: "83693632" --- # <a name="data-sources-in-power-bi-desktop"></a>Power BI Desktop のデータ ソース Power BI Desktop を使用すると、多種多様なソースからデータに接続できます。 利用できるデータ ソースの完全な一覧が必要な場合、「[Power BI データ ソース](power-bi-data-sources.md)」を参照してください。 データに接続するには、 **[ホーム]** リボンを使用します。 **[よく使われる]** データ型メニューを表示するには、 **[データの取得]** ボタン ラベルまたは下矢印を選択します。 ![[よく使われる] データ型メニュー、Power BI Desktop の [データの取得]](media/desktop-data-sources/data-sources-01.png) **[データの取得]** ダイアログ ボックスに移動するには、 **[よく使われる]** データ型メニューを表示し、 **[詳細]** を選択します。 **[データの取得]** ダイアログ ボックスを表示する (そして **[よく使われる]** メニューをバイパスする)には、 **[データの取得]** アイコンを直接選択する方法もあります。 ![[データの取得] ボタン、Power BI Desktop](media/desktop-data-sources/data-sources-02.png) > [!NOTE] > Power BI チームは Power BI Desktop や Power BI サービスで利用できるデータ ソースを継続的に拡張しています。 そのため、**ベータ**や**プレビュー**などのマークが付いた、未完成の早期バージョンのデータ ソースが頻繁に公開されています。 データ ソースに**ベータ**や**プレビュー**などのマークが付いている場合、サポートや機能が限定されています。運用環境では利用しないでください。 また、Power BI Desktop の**ベータ**または**プレビュー**とマークされているデータ ソースは、データ ソースが一般提供 (GA) になるまで、Power BI サービスまたは他の Microsoft サービスで使用できない可能性があります。 > [!NOTE] > Power BI Desktop 用のデータ コネクタの多くには、認証に Internet Explorer 10 (またはそれ以降) が必要です。 ## <a name="data-sources"></a>データ ソース **[データの取得]** ダイアログ ボックスには、次のカテゴリのデータ型が表示されます。 * すべて * ファイル * データベース * Power Platform * Azure * オンライン サービス * その他 **[すべて]** カテゴリには、すべてのカテゴリにあるすべてのデータ接続の種類が含まれます。 ### <a name="file-data-sources"></a>ファイルのデータ ソース **[ファイル]** カテゴリには、次のデータ接続があります。 * Excel * テキスト/CSV * XML * JSON * フォルダー * PDF * SharePoint フォルダー 次の図は、 **[ファイル]** の **[データの取得]** ウィンドウを示しています。 ![ファイルのデータ ソース、[データの取得] ダイアログ 
ボックス、Power BI Desktop](media/desktop-data-sources/data-sources-03.png) ### <a name="database-data-sources"></a>データベースのデータ ソース **[データベース]** カテゴリには、次のデータ接続があります。 * SQL Server データベース * Access データベース * SQL Server Analysis Services データベース * Oracle データベース * IBM DB2 データベース * IBM Informix データベース (ベータ) * IBM Netezza * MySQL データベース * PostgreSQL データベース * Sybase データベース * Teradata データベース * SAP HANA データベース * SAP Business Warehouse Application サーバー * SAP Business Warehouse メッセージ サーバー * Amazon Redshift * Impala * Google BigQuery * Vertica * Snowflake * Essbase * AtScale キューブ * BI コネクタ * Data Virtuality LDW (ベータ) * Denodo * Dremio * Exasol * Indexima (ベータ) * InterSystems IRIS (ベータ) * Jethro (ベータ) * Kyligence * Linkar PICK スタイル / MultiValue Databases (ベータ) * MarkLogic > [!NOTE] > 一部のデータベース コネクタの場合、有効にするためには、 **[ファイル]、[オプションと設定]、[オプション]** の順に選択し、 **[プレビュー機能]** を選択し、コネクタを有効にする必要があります。 前途コネクタの一部が表示されず、その中に使用したいコネクタも含まれている場合は、 **[プレビュー機能]** を確認してください。 データ ソースに*ベータ*や*プレビュー*などのマークが付いている場合、サポートや機能が限定されていることにもご注意ください。運用環境では利用しないでください。 次の図は、 **[データベース]** の **[データの取得]** ウィンドウを示しています。 ![データベースのデータ ソース、[データの取得] ダイアログ ボックス、Power BI Desktop](media/desktop-data-sources/data-sources-04.png) ### <a name="power-platform-data-sources"></a>Power Platform のデータ ソース **[Power Platform]** カテゴリには、次のデータ接続があります。 * Power BI データセット * Power BI データフロー * Common Data Service * Power Platform データフロー 次の図は、 **[Power Platform]** の **[データの取得]** ウィンドウを示しています。 ![Power Platform のデータ ソース、[データの取得] ダイアログボックス、Power BI Desktop](media/desktop-data-sources/data-sources-05.png) ### <a name="azure-data-sources"></a>Azure データ ソース **[Azure]** カテゴリには、次のデータ接続があります。 * Azure SQL Database * Azure SQL Data Warehouse * Azure Analysis Services データベース * Azure Database for PostgreSQL * Azure Blob Storage * Azure Table Storage * Azure Cosmos DB * Azure Data Lake Storage Gen2 * Azure Data Lake Storage Gen1 * Azure HDInsight (HDFS) * Azure HDInsight Spark * HDInsight 対話型クエリ * Azure Data Explorer (Kusto) * Azure Cost Management * Azure Time Series 
Insights (ベータ) 次の図は、 **[Azure]** の **[データの取得]** ウィンドウを示しています。 ![Azure のデータ ソース、[データの取得] ダイアログ ボックス、Power BI Desktop](media/desktop-data-sources/data-sources-06.png) ### <a name="online-services-data-sources"></a>Online Services のデータ ソース **[オンライン サービス]** カテゴリには、次のデータ接続があります。 * SharePoint Online リスト * Microsoft Exchange Online * Dynamics 365 (オンライン) * Dynamics NAV * Dynamics 365 Business Central * Dynamics 365 Business Central (オンプレミス) * Microsoft Azure Consumption Insights (ベータ) * Azure DevOps (Boards のみ) * Azure DevOps Server (Boards のみ) * Salesforce オブジェクト * Salesforce レポート * Google Analytics * Adobe Analytics * appFigures (ベータ) * Data.World - データセットの取得 (ベータ) * GitHub (Beta) * LinkedIn Sales Navigator (ベータ) * Marketo (ベータ) * Mixpanel (ベータ) * Planview Enterprise One - PRM (ベータ) * Planview Projectplace (ベータ) * QuickBooks Online (ベータ) * Smartsheet * SparkPost (ベータ) * SweetIQ (ベータ) * Planview Enterprise One - CTM (ベータ) * Twilio (ベータ) * tyGraph (Beta) * Webtrends (Beta) * Zendesk (ベータ) * Asana (ベータ) * Dynamics 365 Customer Insights (ベータ版) * Emigo Data Source * Entersoft Business Suite (ベータ) * FactSet Analytics (ベータ) * Industrial App Store * Intune データ ウェアハウス (ベータ) * Microsoft Graph Security (ベータ) * Power BI 用 Projectplace (ベータ) * Product Insights (ベータ) * Quick Base * TeamDesk (Beta) * Webtrends Analytics (ベータ) * Witivio (ベータ) * Workplace Analytics (ベータ) * Zoho Creator (ベータ) 次の図は、 **[オンライン サービス]** の **[データの取得]** ウィンドウを示しています。 ![Online Services のデータ ソース、[データの取得] ダイアログボックス、Power BI Desktop](media/desktop-data-sources/data-sources-07.png) ### <a name="other-data-sources"></a>その他のデータ ソース **[その他]** カテゴリには、次のデータ接続があります。 * Web * SharePoint リスト * OData フィード * Active Directory * Microsoft Exchange * Hadoop ファイル (HDFS) * Spark * Hive LLAP (ベータ) * R スクリプト * Python スクリプト * ODBC * OLE DB * Solver * Cognite Data Fusion (ベータ) * FHIR * Information Grid (ベータ) * Jamf Pro (ベータ) * MicroStrategy for Power BI * Paxata * QubolePresto (ベータ) * Roamler (ベータ) * Shortcuts Business Insights (ベータ) * 
Siteimprove * SurveyMonkey (ベータ) * Tenforce (Smart)List * TIBCO(R) Data Virtualization (ベータ) * Vena (ベータ) * Workforce Dimensions (ベータ) * Zucchetti HR Infinity (ベータ) * 空のクエリ 次の図は、 **[その他]** の **[データの取得]** ウィンドウを示しています。 ![その他のデータソース、[データの取得] ダイアログボックス、Power BI Desktop](media/desktop-data-sources/data-sources-08.png) > [!NOTE] > 現時点では、Azure Active Directory を使用して保護されているカスタム データ ソースに接続することはできません。 ## <a name="connecting-to-a-data-source"></a>データ ソースに接続する データ ソースに接続するには、 **[データの取得]** ウィンドウでデータ ソースを選択し、 **[接続]** を選びます。 次の図の場合、 **[その他]** データ接続カテゴリで **[Web]** が選択されています。 ![Web への接続、[データの取得] ダイアログボックス、Power BI Desktop](media/desktop-data-sources/data-sources-08.png) 対象のデータ接続に固有の接続ウィンドウが表示されます。 資格情報が必要な場合には、入力を求めるプロンプトが表示されます。 次の図には、Web データ ソースに接続するために URL を入力している様子が示されています。 ![URL の入力、[Web から] ダイアログボックス、Power BI Desktop](media/desktop-data-sources/datasources-fromwebbox.png) URL またはリソースの接続情報を入力し、 **[OK]** を選択します。 Power BI Desktop によってデータ ソースに接続され、 **[ナビゲーター]** に使用できるデータ ソースが表示されます。 ![[ナビゲーター] ダイアログボックス、Power BI Desktop](media/desktop-data-sources/datasources-fromnavigatordialog.png) データを読み込むには、 **[ナビゲーター]** ペインの下部にある **[読み込み]** ボタンをクリックします。 データを読み込む前に Power Query エディターでクエリを変換または編集するには、 **[データの変換]** ボタンを選択します。 これで、Power BI Desktop でデータ ソースに接続できます。 データ ソースの一覧は拡大を続けており、ここからデータへの接続をお試しください。一覧への追加は絶えず続いているため、頻繁にご確認ください。 ## <a name="using-pbids-files-to-get-data"></a>PBIDS ファイルを使用したデータの取得 PBIDS ファイルは、特定の構造を持つ Power BI Desktop ファイルであり、Power BI データ ソース ファイルであることを識別するための .PBIDS 拡張子が付いています。 PBIDS ファイルを作成すると、組織のレポート作成者の **[データの取得]** エクスペリエンスを効率化できます。 新しいレポート作成者が PBIDS ファイルを使用しやすいように、管理者はよく使用される接続用にこれらのファイルを作成することをお勧めします。 作成者が PBIDS ファイルを開くと、Power BI Desktop が開き、認証を受けてファイルに指定されているデータ ソースに接続することができる資格情報の入力がユーザーに求められます。 **[ナビゲーション]** ダイアログ ボックスが表示されると、ユーザーはモデルに読み込むデータ ソースからテーブルを選択する必要があります。 PBIDS ファイルで指定されていない場合、ユーザーは必要に応じてデータベースを選択します。 以降、ユーザーはビジュアルの構築を開始するか、 **[最近のソース]** に選択して新しいテーブル セットをモデルに読み込むことができるようになります。 現在、PBIDS ファイルでは、1 つのファイルで 1 つのデータ ソースのみがサポートされています。 複数のデータ ソースを指定すると、エラーが発生します。 
To create the PBIDS file, an administrator must specify the required inputs for a single connection. The administrator can also specify whether the connection mode is DirectQuery or Import. If **mode** is missing from the file or is null, the user is prompted in Power BI Desktop to choose **DirectQuery** or **Import** when opening the file.

### <a name="pbids-file-examples"></a>PBIDS file examples

This section provides some examples of commonly used data sources. The PBIDS file type only supports data connections that are also supported in Power BI Desktop, with two exceptions: live connections and blank queries. PBIDS files do *not* include authentication information or table and schema information.

The following code snippets show several common examples of PBIDS files, but they are not complete or comprehensive. For other data sources, see the [Data Source Reference (DSR) format for protocol and address information](https://docs.microsoft.com/azure/data-catalog/data-catalog-dsr#data-source-reference-specification). These examples are for convenience and are not meant to be exhaustive, nor do they include all supported connectors in the DSR format. An administrator or organization can use these examples as guides to create their own data sources, and to create and support their own data source files from them.

#### <a name="azure-as"></a>Azure AS

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "analysis-services",
                "address": {
                    "server": "server-here"
                }
            }
        }
    ]
}
```

#### <a name="folder"></a>Folder

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "folder",
                "address": {
                    "path": "folder-path-here"
                }
            }
        }
    ]
}
```

#### <a name="odata"></a>OData

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "odata",
                "address": {
                    "url": "URL-here"
                }
            }
        }
    ]
}
```

#### <a name="sap-bw"></a>SAP BW

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "sap-bw-olap",
                "address": {
                    "server": "server-name-here",
                    "systemNumber": "system-number-here",
                    "clientId": "client-id-here"
                }
            }
        }
    ]
}
```

#### <a name="sap-hana"></a>SAP HANA

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "sap-hana-sql",
                "address": {
                    "server": "server-name-here:port-here"
                }
            }
        }
    ]
}
```

#### <a name="sharepoint-list"></a>SharePoint list

The URL must point to the SharePoint site itself, not to a list within the site. Users get a navigator that allows them to select one or more lists from that site, each of which becomes a table in the model.

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "sharepoint-list",
                "address": {
                    "url": "URL-here"
                }
            }
        }
    ]
}
```

#### <a name="sql-server"></a>SQL Server

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "tds",
                "address": {
                    "server": "server-name-here",
                    "database": "db-name-here (optional) "
                }
            },
            "options": {},
            "mode": "DirectQuery"
        }
    ]
}
```

#### <a name="text-file"></a>Text file

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "file",
                "address": {
                    "path": "path-here"
                }
            }
        }
    ]
}
```

#### <a name="web"></a>Web

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "http",
                "address": {
                    "url": "URL-here"
                }
            }
        }
    ]
}
```

#### <a name="dataflow"></a>Dataflow

```json
{
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "powerbi-dataflows",
                "address": {
                    "workspace": "workspace id (Guid)",
                    "dataflow": "optional dataflow id (Guid)",
                    "entity": "optional entity name"
                }
            }
        }
    ]
}
```

## <a name="next-steps"></a>Next steps

You can do all sorts of things with Power BI Desktop. For more information on its capabilities, check out the following resources:

* [What is Power BI Desktop?](../fundamentals/desktop-what-is-desktop.md)
* [Query overview in Power BI Desktop](../transform-model/desktop-query-overview.md)
* [Data types in Power BI Desktop](desktop-data-types.md)
* [Shape and combine data in Power BI Desktop](desktop-shape-and-combine-data.md)
* [Common query tasks in Power BI Desktop](../transform-model/desktop-common-query-tasks.md)
# File.io Android App

[![Codacy Badge](https://api.codacy.com/project/badge/Grade/845a73f559a747279016b83c41a78446)](https://www.codacy.com/app/rumaan/file.io-app?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=rumaan/file.io-app&amp;utm_campaign=Badge_Grade) [![Build Status](https://travis-ci.org/rumaan/file.io-Android-Client.svg?branch=master)](https://travis-ci.org/rumaan/file.io-Android-Client)

This simple app allows you to upload any file and get a _sharable_ link with a set expiration time. The file will be **deleted** after it's downloaded, or after the expiration time (regardless of whether it has been downloaded or not). This app is made with the help of [file.io](https://file.io), which is an **_Anonymous_, _Secure_** file sharing platform by [Humbly](http://humbly.com/).

## Screenshots 📸

<p float="left">
  <img src="/screenshots/screenshot.png" alt="Home Screen" height="600"/>
</p>

## Libraries Used ❤️

- [Android Support Library](https://developer.android.com/topic/libraries/support-library/index.html)
- [WorkManager](https://developer.android.com/topic/libraries/architecture/workmanager)
- [LiveData](https://developer.android.com/topic/libraries/architecture/livedata)
- [Room](https://developer.android.com/topic/libraries/architecture/room)
- [ViewModel](https://developer.android.com/topic/libraries/architecture/viewmodel)
- [Permission Dispatcher](https://permissions-dispatcher.github.io/PermissionsDispatcher/)
- [Fuel](https://github.com/kittinunf/Fuel)
- [NumberProgressBar](https://github.com/daimajia/NumberProgressBar)
- [FirebaseCrashlytics](https://firebase.google.com/docs/crashlytics)
- [CustomActivityOnCrash](https://github.com/Ereza/CustomActivityOnCrash)
- [MaterialAboutLibrary](https://github.com/daniel-stoneuk/material-about-library)

Vector images from [FlatIcon](https://www.flaticon.com/).

## TODO List ✅:

- [ ] Set Expiration Time
- [X] Handle different use cases for storage on the device.
- [ ] Progress Bar for File Upload Progress - [X] Custom Error Dialog - [X] Kotlinize the project 🎳 - [ ] Support for *Multiple* File Upload (Create a List) - [ ] App Icon - [X] Offline Case (using WorkManager) - [ ] Complete rest of the UI
# Example Plugin for PowerNukkit

This is an example plugin which can also be used as a template to start your own plugin.

As an example, I created a plugin named clone-me: it creates a clone of yourself when you run `/clone`, gives you a flower if you hit the clone, and then despawns the clone. It also sends some fancy messages.

This is enough to serve as an example of how to:
- Begin a new plugin
- Create event listeners and handlers
- Create custom commands
- Format text
- Spawn NPCs
- Despawn NPCs
- Detect attacks
- Make entities invulnerable
- Create and fill a `plugin.yml` file
- Debug your plugin properly

## Cloning and importing

1. Just do a normal `git clone https://github.com/PowerNukkit/ExamplePlugin.git` (or the URL of your own git repository)
2. Import the `pom.xml` file with your IDE; it should do the rest by itself

## Debugging

1. Create a zip file containing only your `plugin.yml` file
2. Rename the zip file to change the extension to jar
3. Create an empty folder anywhere; that will be your server folder.
   <small>_Note: You don't need to place the PowerNukkit jar in the folder, your IDE will load it from the maven classpath._</small>
4. Create a folder named `plugins` inside your server folder
   <small>_Note: It is needed to bootstrap your plugin. Your IDE will load your plugin classes from the classpath automatically, so the jar needs to contain only the `plugin.yml` file._</small>
5. Move the jar file that contains only the `plugin.yml` to the `plugins` folder
6. Create a new Application run configuration, setting the working directory to the server folder and the main class to: `cn.nukkit.Nukkit`
   ![](https://i.imgur.com/NUrrZab.png)
7. Now you can run in debug mode. If you change the `plugin.yml`, you will need to update the jar file that you've made.
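The zip-renamed-to-jar from debugging step 1 needs only a `plugin.yml` inside it. A minimal sketch of such a file follows; the plugin name, main class, API version, and command are illustrative placeholders rather than values taken from this repository:

```yaml
# Minimal plugin.yml used only to bootstrap the plugin while debugging.
# All values below are placeholders; use your own plugin's data.
name: ExamplePlugin
main: org.example.exampleplugin.ExamplePlugin
version: "0.1.0"
api: ["1.0.0"]
author: YourName
commands:
  clone:
    description: Spawns a clone of yourself
```

Keep this file in sync with the one packaged in your real jar, since the server reads the plugin metadata from it at startup.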
---
title: Monitoring the Forest Area of the Tobelo Dalam Tribe in Dodaga by Mobile Phone - Mentoring, 14 September 2014
date: 2014-09-14
categories:
- laporan
- mentoring
- Monitoring Hutan Tobelo
---

**Discussion via email**

**Training preparation**

From: Munadi Kilkoda
Date: 2014-09-14 14:35 GMT+07:00
To: Hillun Vilayl Napis, Yantisa Akhadi

Mas Yantisa and Mbak Hillun,

The program on our side is ready to move. Do the mentor and Nanang have any free time this week to give the Ushahidi and bookkeeping training?
---
name: Bitflow
---

# Bitflow

[Bitflow](https://bitflow.openpatch.org/) is a great way to conduct dynamic flow-based assessments. It can, for example, be used at the end of a chapter as a check-up.

## Flow

```md
::flow{src="/flow.json"}
```

::flow{src="/flow.json"}

The height of a flow defaults to 400px, but you can set a custom one like so:

```md
::flow{src="/flow.json" height="800px"}
```

::flow{src="/flow.json" height="800px"}

## Task

```md
::task{src="/task.json"}
```

::task{src="/task.json"}

The height of a task defaults to 400px, but you can set a custom one like so:

```md
::task{src="/task.json" height="800px"}
```

::task{src="/task.json" height="800px"}
# Documentation

The [Open Service Broker API](https://github.com/openservicebrokerapi/servicebroker) describes an entity (service broker) that provides some set of capabilities (services). Services have different *plans* that describe different tiers of the service. New instances of the services are *provisioned* in order to be used. Some services can be *bound* to applications for programmatic use.

Example:

- Service: "database as a service"
- Instance: "My database"
- Binding: "Credentials to use my database in app 'guestbook'"

## Background Reading

Reading the [API specification](https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md) is recommended before reading this documentation.

## API Fundamentals

There are 7 operations in the API:

1. Getting a broker's 'catalog' of services: [`Client.GetCatalog`](#getting-a-brokers-catalog)
2. Provisioning a new instance of a service: [`Client.ProvisionInstance`](#provisioning-a-new-instance-of-a-service)
3. Updating properties of an instance: [`Client.UpdateInstance`](#updating-properties-of-an-instance)
4. Deprovisioning an instance: [`Client.DeprovisionInstance`](#deprovisioning-an-instance)
5. Checking the status of an asynchronous operation (provision, update, or deprovision) on an instance: [`Client.PollLastOperation`](#checking-the-status-of-an-asynchronous-operation)
6. Binding to an instance: [`Client.Bind`](#binding-to-an-instance)
7. Unbinding from an instance: [`Client.Unbind`](#unbinding-from-an-instance)

### Getting a broker's catalog

A broker's catalog holds information about the services a broker provides and their plans. A platform implementing the OSB API must first get the broker's catalog.
```go
import (
	osb "github.com/maplain/go-open-service-broker-client/v2"
)

func GetBrokerCatalog(URL string) (*osb.CatalogResponse, error) {
	config := osb.DefaultClientConfiguration()
	config.URL = URL

	client, err := osb.NewClient(config)
	if err != nil {
		return nil, err
	}

	return client.GetCatalog()
}
```

### Provisioning a new instance of a service

To provision a new instance of a service, call the `Client.ProvisionInstance` method. Key points:

1. `ProvisionInstance` returns a response from the broker for successful operations, or an error if the broker returned an error response or there was a problem communicating with the broker
2. Use the `IsHTTPError` method to test and convert errors from Brokers into the standard broker error type, allowing access to conventional broker-provided fields
3. The `response.Async` field indicates whether the broker is performing the provision concurrently; see the [`LastOperation`](#checking-the-status-of-an-asynchronous-operation) method for information about handling asynchronous operations

```go
import (
	osb "github.com/maplain/go-open-service-broker-client/v2"
)

func ProvisionService(client osb.Client) (*osb.ProvisionResponse, error) {
	request := &osb.ProvisionRequest{
		InstanceID: "my-dbaas-service-instance",
		// Made up parameters for a hypothetical service
		ServiceID: "dbaas-service",
		PlanID:    "dbaas-gold-plan",
		Parameters: map[string]interface{}{
			"tablespace-page-cost":      100,
			"tablespace-io-concurrency": 5,
		},
		// Set the AcceptsIncomplete field to indicate that this client can
		// support asynchronous operations (provision, update, deprovision).
		AcceptsIncomplete: true,
	}

	// ProvisionInstance returns a response from the broker for successful
	// operations, or an error if the broker returned an error response or
	// there was a problem communicating with the broker.
	response, err := client.ProvisionInstance(request)
	if err != nil {
		// Use the IsHTTPError method to test and convert errors from Brokers
		// into the standard broker error type, allowing access to conventional
		// broker-provided fields.
		if httpErr, isError := osb.IsHTTPError(err); isError {
			// handle error response from broker
			return nil, httpErr
		}
		// handle errors communicating with the broker
		return nil, err
	}

	// The response.Async field indicates whether the broker is performing the
	// provision concurrently. See the LastOperation method for information
	// about handling asynchronous operations.
	if response.Async {
		// handle asynchronous operation
	}

	return response, nil
}
```

### Updating properties of an instance

To update the plan and/or parameters of a service instance, call the `UpdateInstance` method. Key points:

1. A service's plan may be changed only if that service is `PlanUpdatable`
2. `UpdateInstance` returns a response from the broker for successful operations, or an error if the broker returned an error response or there was a problem communicating with the broker
3. Use the `IsHTTPError` method to test and convert errors from Brokers into the standard broker error type, allowing access to conventional broker-provided fields
4. The `response.Async` field indicates whether the broker is performing the provision concurrently; see the [`LastOperation`](#checking-the-status-of-an-asynchronous-operation) method for information about handling asynchronous operations
5. Passing `PlanID` or `Parameters` fields to this operation indicates that the user wishes to update those fields; values for these fields should not be passed if those fields have not changed

```go
import (
	osb "github.com/maplain/go-open-service-broker-client/v2"
)

func UpdateService(client osb.Client) {
	newPlan := "dbaas-quadruple-plan"

	request := &osb.UpdateInstanceRequest{
		InstanceID:        "my-dbaas-service-instance",
		ServiceID:         "dbaas-service",
		AcceptsIncomplete: true,
		// Passing the plan indicates that the user
		// wants the plan to change.
		PlanID: &newPlan,
		// Passing a parameter indicates that the user
		// wants the parameter value to change.
		Parameters: map[string]interface{}{
			"tablespace-page-cost":      50,
			"tablespace-io-concurrency": 100,
		},
	}

	response, err := client.UpdateInstance(request)
	if err != nil {
		if _, isError := osb.IsHTTPError(err); isError {
			// handle errors from broker
		} else {
			// handle errors communicating with broker
		}
		return
	}

	if response.Async {
		// handle asynchronous update operation
	} else {
		// handle successful update
	}
}
```

### Deprovisioning an instance

To deprovision a service instance, call the `DeprovisionInstance` method. Key points:

1. `DeprovisionInstance` returns a response from the broker for successful operations, or an error if the broker returned an error response or there was a problem communicating with the broker
2. Use the `IsHTTPError` method to test and convert errors from Brokers into the standard broker error type, allowing access to conventional broker-provided fields
3. An HTTP `Gone` response is equivalent to success -- use `IsGoneError` to test for this condition
4. The `response.Async` field indicates whether the broker is performing the deprovision concurrently; see the [`LastOperation`](#checking-the-status-of-an-asynchronous-operation) method for information about handling asynchronous operations

```go
import (
	osb "github.com/maplain/go-open-service-broker-client/v2"
)

func DeprovisionService(client osb.Client) {
	request := &osb.DeprovisionRequest{
		InstanceID:        "my-dbaas-service-instance",
		ServiceID:         "dbaas-service",
		PlanID:            "dbaas-gold-plan",
		AcceptsIncomplete: true,
	}

	response, err := client.DeprovisionInstance(request)
	if err != nil {
		if httpErr, isError := osb.IsHTTPError(err); isError {
			// handle errors from broker
			if osb.IsGoneError(httpErr) {
				// A 'gone' status code means that the service instance
				// doesn't exist. This means there is no more work to do and
				// should be equivalent to a success.
			}
		} else {
			// handle errors communicating with broker
		}
		return
	}

	if response.Async {
		// handle asynchronous deprovisions
	} else {
		// handle successful deprovision
	}
}
```

### Checking the status of an asynchronous operation

If the client returns a response from [`ProvisionInstance`](#provisioning-a-new-instance-of-a-service), [`UpdateInstance`](#updating-properties-of-an-instance), or [`DeprovisionInstance`](#deprovisioning-an-instance) with the `response.Async` field set to true, it means the broker is executing the operation asynchronously. You must call the `PollLastOperation` method on the client to check on the status of the operation.

```go
import (
	osb "github.com/maplain/go-open-service-broker-client/v2"
)

func PollServiceInstance(client osb.Client, deleting bool) error {
	request := &osb.LastOperationRequest{
		InstanceID: "my-dbaas-service-instance",
		ServiceID:  "dbaas-service",
		PlanID:     "dbaas-gold-plan",
		// Brokers may provide an identifying key for an asynchronous operation.
		OperationKey: osb.OperationKey("12345"),
	}

	response, err := client.PollLastOperation(request)
	if err != nil {
		// If the operation was for delete and we receive a http.StatusGone,
		// this is considered a success as per the spec.
		if osb.IsGoneError(err) && deleting {
			// handle instances that we were deprovisioning and that are now
			// gone
			return nil
		}

		// The broker returned an error. While polling last operation, this
		// represents an invalid response and callers should continue polling
		// last operation.
		return err
	}

	switch response.State {
	case osb.StateInProgress:
		// The operation is still in progress
	case osb.StateSucceeded:
		// The operation succeeded
	case osb.StateFailed:
		// The operation failed.
	}

	return nil
}
```

### Binding to an instance

To create a new binding to an instance, call the `Bind` method. Key points:

1. `Bind` returns a response from the broker for successful operations, or an error if the broker returned an error response or there was a problem communicating with the broker
2.
Use the `IsHTTPError` method to test and convert errors from Brokers into the standard broker error type, allowing access to conventional broker-provided fields

```go
import (
	osb "github.com/maplain/go-open-service-broker-client/v2"
)

func BindToInstance(client osb.Client) {
	request := &osb.BindRequest{
		BindingID:  "binding-id",
		InstanceID: "instance-id",
		ServiceID:  "dbaas-service",
		PlanID:     "dbaas-gold-plan",

		// platforms might want to pass an identifier for applications here
		AppGUID: "app-guid",

		// pass parameters here
		Parameters: map[string]interface{}{},
	}

	response, err := client.Bind(request)
	if err != nil {
		if _, isError := osb.IsHTTPError(err); isError {
			// handle errors from the broker
		} else {
			// handle errors communicating with the broker
		}
		return
	}

	// do something with the credentials
	_ = response.Credentials
}
```

### Unbinding from an instance

To unbind from a service instance, call the `Unbind` method. Key points:

1. `Unbind` returns a response from the broker for successful operations, or an error if the broker returned an error response or there was a problem communicating with the broker
2. Use the `IsHTTPError` method to test and convert errors from Brokers into the standard broker error type, allowing access to conventional broker-provided fields

```go
import (
	osb "github.com/maplain/go-open-service-broker-client/v2"
)

func UnbindFromInstance(client osb.Client) {
	request := &osb.UnbindRequest{
		BindingID:  "binding-id",
		InstanceID: "instance-id",
		ServiceID:  "dbaas-service",
		PlanID:     "dbaas-gold-plan",
		AppGUID:    "app-guid",
	}

	response, err := client.Unbind(request)
	if err != nil {
		if _, isError := osb.IsHTTPError(err); isError {
			// handle errors from the broker
		} else {
			// handle errors communicating with the broker
		}
		return
	}

	// handle successful unbind
	_ = response
}
```
---
description: sp_help_log_shipping_secondary_database (Transact-SQL)
title: sp_help_log_shipping_secondary_database (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 08/02/2016
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: system-objects
ms.topic: language-reference
f1_keywords:
- sp_help_log_shipping_secondary_database
- sp_help_log_shipping_secondary_database_TSQL
dev_langs:
- TSQL
helpviewer_keywords:
- sp_help_log_shipping_secondary_database
ms.assetid: 11ce42ca-d3f1-44c8-9cac-214ca8896b9a
author: MashaMSFT
ms.author: mathoma
ms.openlocfilehash: ac291d5c829c1ddc4022a7d0d59f65348daa859a
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 08/17/2020
ms.locfileid: "88486004"
---
# <a name="sp_help_log_shipping_secondary_database-transact-sql"></a>sp_help_log_shipping_secondary_database (Transact-SQL)

[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]

This stored procedure retrieves the settings for one or more secondary databases.

![Topic link icon](../../database-engine/configure-windows/media/topic-link.gif "Topic link icon") [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)

## <a name="syntax"></a>Syntax

```

sp_help_log_shipping_secondary_database
    [ @secondary_database = ] 'secondary_database'
OR
    [ @secondary_id = ] 'secondary_id'
```

## <a name="arguments"></a>Arguments

`[ @secondary_database = ] 'secondary_database'`
 The name of the secondary database. *secondary_database* is **sysname**, with no default.

`[ @secondary_id = ] 'secondary_id'`
 The ID of the secondary server in the log shipping configuration. *secondary_id* is **uniqueidentifier** and cannot be NULL.

## <a name="return-code-values"></a>Return Code Values

0 (success) or 1 (failure)

## <a name="result-sets"></a>Result Sets

|Column name|Description|
|-----------------|-----------------|
|**secondary_id**|The ID of the secondary server in the log shipping configuration.|
|**primary_server**|The name of the primary instance of the [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssDEnoversion](../../includes/ssdenoversion-md.md)] in the log shipping configuration.|
|**primary_database**|The name of the primary database in the log shipping configuration.|
|**backup_source_directory**|The directory where transaction log backup files from the primary server are stored.|
|**backup_destination_directory**|The directory on the secondary server where backup files are copied to.|
|**file_retention_period**|The length of time, in minutes, that a backup file is retained on the secondary server before being deleted.|
|**copy_job_id**|The ID associated with the copy job on the secondary server.|
|**restore_job_id**|The ID associated with the restore job on the secondary server.|
|**monitor_server**|The name of the instance of the [!INCLUDE[ssDEnoversion](../../includes/ssdenoversion-md.md)] being used as a monitor server in the log shipping configuration.|
|**monitor_server_security_mode**|The security mode used to connect to the monitor server.<br /><br /> 1 = [!INCLUDE[msCoName](../../includes/msconame-md.md)] Windows Authentication.<br /><br /> 0 = [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Authentication.|
|**secondary_database**|The name of the secondary database in the log shipping configuration.|
|**restore_delay**|The amount of time, in minutes, that the secondary server waits before restoring a given backup file. The default is 0 minutes.|
|**restore_all**|If set to 1, the secondary server restores all available transaction log backups when the restore job runs. Otherwise, it stops after one file has been restored.|
|**restore_mode**|The restore mode for the secondary database.<br /><br /> 0 = Restore log with NORECOVERY.<br /><br /> 1 = Restore log with STANDBY.|
|**disconnect_users**|If set to 1, users are disconnected from the secondary database when a restore operation is performed. Default = 0.|
|**block_size**|The size, in bytes, that is used as the block size for the backup device.|
|**buffer_count**|The total number of buffers used by the backup or restore operation.|
|**max_transfer_size**|The size, in bytes, of the maximum input or output request issued by [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] to the backup device.|
|**restore_threshold**|The number of minutes allowed between restore operations before an alert is generated.|
|**threshold_alert**|The alert to be raised when the restore threshold is exceeded.|
|**threshold_alert_enabled**|Determines whether restore threshold alerts are enabled.<br /><br /> 1 = Enabled.<br /><br /> 0 = Disabled.|
|**last_copied_file**|The file name of the last backup file copied to the secondary server.|
|**last_copied_date**|The date and time of the last copy operation on the secondary server.|
|**last_copied_date_utc**|The date and time of the last copy operation on the secondary server, expressed in Coordinated Universal Time (UTC).|
|**last_restored_file**|The file name of the last backup file restored to the secondary database.|
|**last_restored_date**|The date and time of the last restore operation on the secondary database.|
|**last_restored_date_utc**|The date and time of the last restore operation on the secondary database, expressed in Coordinated Universal Time (UTC).|
|**history_retention_period**|The length of time, in minutes, that log shipping history records are retained for a given secondary database before being deleted.|
|**last_restored_latency**|The length of time, in minutes, that elapsed between when the log backup was created on the primary server and when it was restored on the secondary server.<br /><br /> The initial value is NULL.|

## <a name="remarks"></a>Remarks

If you include the *secondary_database* parameter, the result set contains information about that secondary database; if you include the *secondary_id* parameter, the result set contains information about all secondary databases associated with that secondary ID.

**sp_help_log_shipping_secondary_database** must be run from the **master** database on the secondary server.

## <a name="permissions"></a>Permissions

Only members of the **sysadmin** fixed server role can run this procedure.

## <a name="see-also"></a>See Also

[sp_help_log_shipping_secondary_primary &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/sp-help-log-shipping-secondary-primary-transact-sql.md)
[About Log Shipping &#40;SQL Server&#41;](../../database-engine/log-shipping/about-log-shipping-sql-server.md)
[System Stored Procedures &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/system-stored-procedures-transact-sql.md)
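A minimal invocation, run from the **master** database on the secondary server, looks like the following; the secondary database name is a placeholder:

```sql
-- Run on the secondary server, in the master database.
-- 'LogShipAdventureWorks' is an illustrative secondary database name.
EXEC sp_help_log_shipping_secondary_database
    @secondary_database = N'LogShipAdventureWorks';
```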
---
title: Vary Polygon, Line, and Point Display by Rules and Analytical Data | Microsoft Docs
ms.custom: ''
ms.date: 03/07/2017
ms.prod: reporting-services
ms.prod_service: reporting-services-sharepoint, reporting-services-native
ms.component: report-design
ms.reviewer: ''
ms.suite: pro-bi
ms.technology: ''
ms.tgt_pltfrm: ''
ms.topic: conceptual
f1_keywords:
- "10538"
- "10537"
- sql13.rtp.rptdesigner.mapembeddedpolygonlayerproperties.general.f1
- MICROSOFT.REPORTDESIGNER.MAPMARKER.MARKERSTYLE
- sql13.rtp.rptdesigner.mapembeddedlinelayerproperties.general.f1
- sql13.rtp.rptdesigner.mapembeddedpointlayerproperties.general.f1
- "10531"
- "10536"
- sql13.rtp.rptdesigner.maplinelayerproperties.widthrules.f1
ms.assetid: 7f1f5584-37b4-4fa2-ae44-8988c5f0c744
caps.latest.revision: 12
author: maggiesMSFT
ms.author: maggies
manager: kfile
ms.openlocfilehash: 3e78b0319639852d8bb4cac5be3f3b2157ac0703
ms.sourcegitcommit: 1740f3090b168c0e809611a7aa6fd514075616bf
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/03/2018
ms.locfileid: "33027459"
---
# <a name="vary-polygon-line-and-point-display-by-rules-and-analytical-data"></a>Vary Polygon, Line, and Point Display by Rules and Analytical Data

You can control the display options for polygons, lines, and points on a map layer by setting options for the layer, by setting map element rules on the layer, or by overriding the options for a specific embedded map element on the layer. Display options are applied in a specific order of precedence; in the following list, later options take precedence over earlier ones.

1. Options that you set for a polygon, line, or point layer apply to all map elements on that layer, regardless of whether the map elements are embedded in the report definition.

2. Options that you set for rules apply to all map elements on the layer. All data visualization options apply only to map elements that are associated with spatial data. To use a data visualization option, you must specify the data field on which to base the display variation. Before you can apply data visualization rules, you must set the match fields for analytical data and spatial data. For more information, see [Maps (Report Builder and SSRS)](../../reporting-services/report-design/maps-report-builder-and-ssrs.md).

3. Options that you set for a selected embedded map element. When you override layer options, the changes that you make to the report definition are permanent. In addition to overriding display options, you can also change data field values to customize the way specific polygons, lines, and points display on a layer.

In addition to controlling the display of map elements on a layer, you can control the transparency of a layer so that layers that are drawn earlier show through layers that are drawn later. For more information about changing options that affect the whole map or map layer, see [Customize the Data and Display of a Map or Map Layer (Report Builder and SSRS)](../../reporting-services/report-design/customize-the-data-and-display-of-a-map-or-map-layer-report-builder-and-ssrs.md).

> [!NOTE]
> [!INCLUDE[ssRBRDDup](../../includes/ssrbrddup-md.md)]

## <a name="Rules"></a> About rules

You can set four types of rules that direct the report processor to automatically adjust the display properties of map elements on a layer. The rules differ depending on the type of map element: polygon, line, or point.

- **Polygon:** Varies the color of the polygon.

- **Polygon center point:** Varies the marker color, marker size, and marker type for the marker that displays at the center point of each polygon.

- **Line:** Varies the line color and line width.

- **Point:** Varies the marker color, marker size, and marker type for the marker that displays at each point.

## <a name="Color"></a> About color rules

Color rules apply to the fill color of polygons, of lines, and of the markers that represent points or polygon center points. Color rules support the following four options:

- **Apply template style:** The template style for the layer is defined by the theme that you selected in the wizard. A theme sets the font style, the border style, and the palette.

- **Visualize data by using color palette:** Specify a palette by name. The report processor sets the color for each map element in the layer by stepping through the colors in the palette and applying progressively lighter variations of each color.

- **Visualize data by using color ranges:** Specify a start color, a middle color, and an end color, and then specify distribution options. The report processor uses the distribution option values to create a set of colors that produces a heat-map style display. In a heat map, colors correspond to temperature; for example, in the range 0 to 100, small values that represent low temperatures display in blue, and large values that represent high temperatures display in red.

- **Visualize data by using custom colors:** Specify a set of colors. The report processor steps through the specified values in order to set the color for each map element in the layer.

The default palette includes the color White. To avoid a strong contrast between white and the other colors in the palette, specify a lighter color in the palette as the start color.

To display map elements that are not associated with data with no color, set the default color for the map elements on the layer to **No Color**.

### <a name="color-scale"></a>Color scale

By default, all color rule values display in the color scale as well as in the first legend. The color scale is designed to display one range of colors. Choose the most important data to display in the color scale. To remove values that you do not want in the color scale, clear the color scale option for every color rule on every layer.

## <a name="Width"></a> About line width rules

Width rules apply to lines. Width rules support the following two options:

- **Use default line width:** Specify the line width in points.

- **Visualize data by using line width:** Set the minimum and maximum line width, specify the data field to use to vary the width, and then specify the distribution options to apply to that data.

## <a name="Size"></a> About marker size rules

Size rules apply to the markers that represent points or polygon center points. Size rules support the following two options:

- **Use default marker size:** Specify the size in points.

- **Visualize data by using size:** Set the minimum and maximum marker size, specify the data field to use to vary the size, and then specify the distribution options to apply to that data.

## <a name="Marker"></a> About marker type rules

Marker type rules apply to the markers that represent points or polygon center points. Marker type rules support the following two options:

- **Use default marker type:** Specify one of the available marker types.

- **Visualize data by using markers:** Specify a set of markers and the order in which to use them.

Marker types include **Circle**, **Diamond**, **Pentagon**, **PushPin**, **Rectangle**, **Star**, **Triangle**, **Trapezoid**, and **Wedge**.

## <a name="Distribution"></a> About distribution options

To create a distribution of values, you divide your data into ranges. Specify the distribution type, the number of subranges, and the minimum and maximum values for the range. The following list assumes six map elements with the associated analytical values 1, 10, 200, 500, 4777, and 8999, in the range 1 to 9999.

- **Equal Interval:** Data is divided into intervals of equal range. For example, three ranges are created: 0-2999, 3000-5999, and 6000-8999. In this case, subrange 1 contains 1, 10, 200, and 500; subrange 2 contains 4777; and subrange 3 contains 8999. This method does not take into account how the data is distributed, so very large or very small values can skew the result.

- **Equal Distribution:** Data is divided so that each range contains an equal number of items. For example, three ranges are created: 0-10, 11-500, and 501-8999. Subrange 1 contains 1 and 10; subrange 2 contains 200 and 500; and subrange 3 contains 4777 and 8999. This method can skew the distribution by creating very wide or very narrow ranges.

- **Optimal:** The distribution is adjusted automatically to form balanced subranges. An algorithm determines the number of subranges.

- **Custom:** Specify your own number of ranges to control the distribution of values. For the sample data, you could specify three ranges: 1-2, 3-8, and 9.

For each rule, the distribution values vary the display values of the map elements.

## <a name="Legends"></a> About legends and legend items

Legend items are created automatically based on the rules that you specify for each layer. Rule options control the number of items that are created and the legend in which they display. By default, all items for all rules are added to the first legend.

To move items out of the first legend, create additional legends as needed, and then, for each rule, specify the legend in which to display the items that are based on that rule. To hide the items for a rule, specify a blank legend name. To control where a legend displays, use the **Legend Properties** dialog box to specify a position relative to the map viewport. For more information, see [Change Map Legends, Color Scale, and Associated Rules (Report Builder and SSRS)](../../reporting-services/report-design/change-map-legends-color-scale-and-associated-rules-report-builder-and-ssrs.md).

A legend expands automatically to fit its title and text. To format the text of legend items, use map legend keywords with custom formats. For more information, see [To change the format of items in the legend](../../reporting-services/report-design/change-map-legends-color-scale-and-associated-rules-report-builder-and-ssrs.md#ChangeFormatItems). The following table shows examples of the different formats that you can use.

|Keyword and format|Description|Example of resulting legend text|
|------------------------|-----------------|---------------------------------------------------|
|`#FROMVALUE {C0}`|Displays currency with no decimal places.|$400|
|`#FROMVALUE {C2}`|Displays currency with two decimal places.|$400.55|
|`#TOVALUE`|Displays the actual numeric value of the data field.|10000|
|`#FROMVALUE{N0} - #TOVALUE{N0}`|Displays the actual start and end values of the range.|10 - 790|

## <a name="see-also"></a>See Also

[Change Map Legends, Color Scale, and Associated Rules (Report Builder and SSRS)](../../reporting-services/report-design/change-map-legends-color-scale-and-associated-rules-report-builder-and-ssrs.md)
[Maps &#40;Report Builder and SSRS&#41;](../../reporting-services/report-design/maps-report-builder-and-ssrs.md)
[Map Wizard and Map Layer Wizard Pages &#40;Report Builder and SSRS&#41;](../../reporting-services/report-design/map-wizard-and-map-layer-wizard-report-builder-and-ssrs.md)
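The Equal Interval and Equal Distribution options described in this topic can be sketched in code. This is an illustrative sketch only, not Reporting Services code; `equalInterval` and `equalDistribution` are hypothetical helper names, and the sample values and bucket boundaries come from the examples above:

```javascript
// Sample analytical values from this topic.
const values = [1, 10, 200, 500, 4777, 8999];

// Equal Interval: the range is split into subranges of equal width,
// e.g. 0-2999, 3000-5999, 6000-8999 for three buckets over 0..8999.
function equalInterval(data, min, max, buckets) {
  const width = (max - min + 1) / buckets;
  const ranges = Array.from({ length: buckets }, () => []);
  for (const v of data) {
    const i = Math.min(buckets - 1, Math.floor((v - min) / width));
    ranges[i].push(v);
  }
  return ranges;
}

// Equal Distribution: each subrange holds an equal number of items,
// e.g. 0-10, 11-500, 501-8999 for the sample data.
function equalDistribution(data, buckets) {
  const sorted = [...data].sort((a, b) => a - b);
  const perBucket = Math.ceil(sorted.length / buckets);
  const ranges = [];
  for (let i = 0; i < sorted.length; i += perBucket) {
    ranges.push(sorted.slice(i, i + perBucket));
  }
  return ranges;
}
```

As the topic notes, the first method can be skewed by outlier values, while the second can produce very wide or very narrow ranges.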
51.859155
360
0.759234
yue_Hant
0.442987
b907fa8e8040c13b9c364cd7601700acea69f349
354
md
Markdown
categories/security.md
dejavu-bit/minimal-mistakes
2e5333482b3209ea8d3c591ffddb6f11bbe32311
[ "MIT" ]
null
null
null
categories/security.md
dejavu-bit/minimal-mistakes
2e5333482b3209ea8d3c591ffddb6f11bbe32311
[ "MIT" ]
null
null
null
categories/security.md
dejavu-bit/minimal-mistakes
2e5333482b3209ea8d3c591ffddb6f11bbe32311
[ "MIT" ]
null
null
null
---
title: Security
layout: category
class: wide
permalink: /categories/security/
taxonomy: security
entries_layout: grid
sidebar:
  - title: ""
    image: /assets/images/me.jpeg
    image_alt: "logo"
    text: "Technical Engineer, Cisco Systems"
  - nav: "sidebarCategory"
  - title: ""
    text: ""
---

Sample post listing for the category `security`.
19.666667
48
0.689266
eng_Latn
0.534889
b90842f9b4282fde36b89fe1a36873dd799755fd
79
md
Markdown
README.md
dailystudio/project_generator
a9e9a474d1b8e06e85c4edb8b43440e8b77fce6d
[ "Apache-2.0" ]
null
null
null
README.md
dailystudio/project_generator
a9e9a474d1b8e06e85c4edb8b43440e8b77fce6d
[ "Apache-2.0" ]
null
null
null
README.md
dailystudio/project_generator
a9e9a474d1b8e06e85c4edb8b43440e8b77fce6d
[ "Apache-2.0" ]
null
null
null
# project_generator

Code template generator for projects in different languages
26.333333
58
0.860759
eng_Latn
0.991392
b908a84dfbaf1b3a03362cb3f50fef659e9e615c
2,069
md
Markdown
docs/en/guide/features/tag.md
hibikine/growi-docs
bca38ccd72500944217b6ab56fa70616cd0f357e
[ "MIT" ]
20
2018-08-29T15:16:07.000Z
2022-01-17T16:16:26.000Z
docs/en/guide/features/tag.md
hibikine/growi-docs
bca38ccd72500944217b6ab56fa70616cd0f357e
[ "MIT" ]
42
2018-11-27T02:02:31.000Z
2021-12-09T15:32:46.000Z
docs/en/guide/features/tag.md
hibikine/growi-docs
bca38ccd72500944217b6ab56fa70616cd0f357e
[ "MIT" ]
71
2018-08-02T01:42:34.000Z
2022-02-14T06:02:42.000Z
# Using Tags

In GROWI, pages are generally managed using a hierarchical structure called page paths, but it is also possible to manage pages with cross-cutting attributes by attaching tags to them. Tags make it easier to search for specific pages. This section explains how to use tags.

## Tagging a page

Navigate to the page where the tags are going to be added. As shown in the image below, there is an "Add tags for this page +" button on the upper left side of the page.

![](./images/tag1.png)

Click the "Add tags for this page +" button to add tags to the page. When clicked, a window for editing tags will appear as shown in the image below.

![](./images/tag2.png)

In this input field, type the word to use as a tag, then click on the desired tag as shown in the red circle below, or press Enter.

![](./images/tag3.png)

The word will then be highlighted, as shown in the image below.

![](./images/tag4.png)

The tags are now ready to be set. Japanese tags can also be added. Multiple tags can be added at the same time, and they can be deleted or edited by clicking the X button on each tag.

![](./images/tag5.png)

When the tags have been set, click the "Done" button as shown in the image below.

![](./images/tag6.png)

Clicking the Done button will successfully add the tags to the page.

![](./images/tag7.png)

Clicking on a tag attached to a page will perform a search using that tag. Pages with the same tag will be displayed in the search results.

![](./images/tag8.png)

## Searching by tag

The search box in the Navbar can also be used to search for tags. Focusing on the search box will display a guide when searching, as shown in the image below.

![](./images/tagsearch1.png)

Use the tag search function to search for tags using queries like `tag:wiki`.

![](./images/tagsearch2.png)

Pages with the tags that have just been set will be displayed in the search results.

![](./images/tagsearch3.png)

Use the tag function to improve the workflow.
32.328125
184
0.740454
eng_Latn
0.999057
b909272d934225c537161ad740b31e0b757f3141
3,642
md
Markdown
home-cluster/components/smart-low-voltage-ups/module3/README.md
Halytskyi/SmartThings
b1d599a7025f57586770bec42db436b1c0c7e244
[ "Apache-2.0" ]
10
2020-11-09T00:17:45.000Z
2022-02-15T01:04:09.000Z
home-cluster/components/smart-low-voltage-ups/module3/README.md
Halytskyi/SmartThings
b1d599a7025f57586770bec42db436b1c0c7e244
[ "Apache-2.0" ]
null
null
null
home-cluster/components/smart-low-voltage-ups/module3/README.md
Halytskyi/SmartThings
b1d599a7025f57586770bec42db436b1c0c7e244
[ "Apache-2.0" ]
null
null
null
# Low voltage UPS for smart home - Module #3

## Main functions

- sensors for measuring voltage, current and power consumption on the inputs and outputs of module #2, with the ability to send data to a server via the [PJON protocol](https://github.com/gioblu/PJON)

## PJON Specification

- PJON TxRx Bus Server ID: _1_
- PJON Tx Bus Server ID: _6_
- PJON Bus Device ID: _18_
- PJON Strategy: _SoftwareBitBang_

## Requirements and components

- 1 x Arduino Pro Mini 328 - 5V/16MHz
- 4 x ACS712-20A modules
- 4 x 10k resistors
- 4 x 100k resistors
- 2 x 5A fuses
- 1 x 7A fuse
- 1 x 8A fuse

| Arduino PIN | Component | Notes |
| --- | --- | --- |
| D2 (Ext. Int.) | - ||
| D3 (PWM) | - ||
| D4 | - ||
| D5 (PWM) | - ||
| D6 (PWM) | - ||
| D7 | [PJON v13.0](https://github.com/gioblu/PJON/tree/13.0/src/strategies/SoftwareBitBang) | Communication with Server (TxRx) |
| D8 | - ||
| D9 (PWM) | - ||
| D10 (PWM) | - ||
| D11 (PWM) | - ||
| D12 | [PJON v13.0](https://github.com/gioblu/PJON/tree/13.0/src/strategies/SoftwareBitBang) | Communication with Server (TX only) |
| D13 | - ||
| A0 | Voltmeter: r1=100k, r2=10k | UPS output #1 |
| A1 | Voltmeter: r1=100k, r2=10k | UPS output #2 |
| A2 | Voltmeter: r1=100k, r2=10k | UPS output #3 |
| A3 | Voltmeter: r1=100k, r2=10k | UPS output #4 |
| A4 | ACS712-20A | UPS output #3 |
| A5 | ACS712-20A | UPS output #4 |
| A6 | ACS712-20A | UPS output #1 |
| A7 | ACS712-20A | UPS output #2 |

### Components photos and schematics

| Name | Schema / Photo |
| --- | --- |
| Voltmeter | [<img src="../images_common/voltmeter.jpg" alt="Voltmeter" width="170"/>](../images_common/voltmeter.jpg) |
| ACS712 | [<img src="../images_common/ACS712_1.jpg" alt="ACS712_1" width="170"/>](../images_common/ACS712_1.jpg) [<img src="../images_common/ACS712_2.jpg" alt="ACS712_2" width="294"/>](../images_common/ACS712_2.jpg) |

## Commands

| Command | Description | EEPROM | Auto-push | Notes |
| --- | --- | --- | --- | --- |
| V-[1-4] | Read value of voltage for chargers and outputs | - | + (auto-push every 1 minute) | Volt |
| V-[1-4]-a | Read value of auto-push voltage for chargers and outputs | - | - | 0 - disabled<br>1 - enabled |
| V-[1-4]-a=[0,1] | Disable/enable auto-push for read values of voltage for chargers and outputs | + | - | 0 - disable<br>1 - enable<br>default: 0 |
| I-[1-4] | Read value of current for chargers and outputs | - | + (auto-push every 1 minute) | Ampere |
| I-[1-4]-a | Read value of auto-push current for chargers and outputs | - | - | 0 - disabled<br>1 - enabled |
| I-[1-4]-a=[0,1] | Disable/enable auto-push for read values of current for chargers and outputs | + | - | 0 - disable<br>1 - enable<br>default: 0 |
| P-[1-4] | Read value of power consumption for chargers and outputs | - | + (auto-push every 1 minute) | Watt (Volt * Ampere) |
| P-[1-4]-a | Read value of auto-push power consumption for chargers and outputs | - | - | 0 - disabled<br>1 - enabled |
| P-[1-4]-a=[0,1] | Disable/enable auto-push for read values of power consumption for chargers and outputs | + | - | 0 - disable<br>1 - enable<br>default: 0 |

where,<br>
[V,I,P]-1 - UPS output #1<br>
[V,I,P]-2 - UPS output #2<br>
[V,I,P]-3 - UPS output #3<br>
[V,I,P]-4 - UPS output #4<br>

***EEPROM*** - memory values are kept when the board is turned off<br>
***Auto-push*** - periodically send data to the server

## Device Photos

[<img src="images/slvu_module3_1.jpeg" alt="Module 3" width="300"/>](images/slvu_module3_1.jpeg)
[<img src="images/slvu_module3_2.jpeg" alt="Module 3" width="328"/>](images/slvu_module3_2.jpeg)
[<img src="images/slvu_module3_3.jpeg" alt="Module 3" width="347"/>](images/slvu_module3_3.jpeg)
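For reference, the conversion math behind the voltage and current readings can be sketched as follows. This is illustrative only, not the module firmware: it assumes a 10-bit ADC with a 5 V reference (as on the Arduino Pro Mini 5V listed above), and the typical ACS712-20A datasheet characteristics of 2.5 V output at 0 A with 100 mV/A sensitivity. The divider values r1=100k and r2=10k come from the pin table.

```javascript
// Assumed ADC parameters (10-bit, 5 V reference), not measured values.
const ADC_REF = 5.0;
const ADC_MAX = 1023;

// Bus voltage from a raw ADC reading, scaled back up through the
// r1=100k / r2=10k divider from the pin table above.
function busVoltage(adcReading, r1 = 100000, r2 = 10000) {
  const vPin = (adcReading / ADC_MAX) * ADC_REF;
  return vPin * (r1 + r2) / r2;
}

// Current in amperes from an ACS712-20A reading, assuming the typical
// datasheet figures: 2.5 V at 0 A, 100 mV per ampere.
function busCurrent(adcReading) {
  const vPin = (adcReading / ADC_MAX) * ADC_REF;
  return (vPin - 2.5) / 0.1;
}

// Power as reported by the P-[1-4] commands: Watt = Volt * Ampere.
function busPower(vAdc, iAdc) {
  return busVoltage(vAdc) * busCurrent(iAdc);
}
```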
44.962963
218
0.644975
eng_Latn
0.357533
b9094915deed0bb6030dc7f71d341106d45b49a9
1,258
md
Markdown
docs/error-messages/compiler-errors-2/compiler-error-c2786.md
Chrissavi/cpp-docs.de-de
6cc90f896a0a1baabb898a00f813f77f058bb7e5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-2/compiler-error-c2786.md
Chrissavi/cpp-docs.de-de
6cc90f896a0a1baabb898a00f813f77f058bb7e5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-2/compiler-error-c2786.md
Chrissavi/cpp-docs.de-de
6cc90f896a0a1baabb898a00f813f77f058bb7e5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Compiler Error C2786
ms.date: 11/04/2016
f1_keywords:
- C2786
helpviewer_keywords:
- C2786
ms.assetid: 6676d8c0-86dd-4a39-bdda-b75a35f4d137
ms.openlocfilehash: 60e921c17cd2b3f9462df77094162bb3f1eff379
ms.sourcegitcommit: 1f009ab0f2cc4a177f2d1353d5a38f164612bdb1
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 07/27/2020
ms.locfileid: "87206678"
---
# <a name="compiler-error-c2786"></a>Compiler Error C2786

'type': invalid operand for __uuidof

The [__uuidof](../../cpp/uuidof-operator.md) operator takes a user-defined type with an attached GUID, or an object of such a user-defined type. Possible causes:

1. The argument is not a user-defined type.

1. The GUID cannot be extracted from the **`__uuidof`** argument.

The following sample generates C2786:

```cpp
// C2786.cpp
struct __declspec(uuid("00000000-0000-0000-0000-000000000000")) A {};

int main() {
   __uuidof(int);   // C2786
   __uuidof(int *);   // C2786
   __uuidof(A **);   // C2786

   // no error
   __uuidof(A);
   __uuidof(A *);
   __uuidof(A &);
   __uuidof(A[]);

   int i;
   int *pi;
   A **ppa;
   __uuidof(i);   // C2786
   __uuidof(pi);   // C2786
   __uuidof(ppa);   // C2786
}
```
24.192308
197
0.713037
deu_Latn
0.408569
b909772fc3e351d15c7d275de780aa00d824b54a
693
md
Markdown
website/docs/rules/type/optionalDependencies-type.md
aarongoldenthal/npm-package-json-lint
a44917b365b68664eb6c4c4d7a76abbe30dc3263
[ "MIT" ]
141
2016-04-26T23:56:10.000Z
2022-03-29T20:24:14.000Z
website/docs/rules/type/optionalDependencies-type.md
aarongoldenthal/npm-package-json-lint
a44917b365b68664eb6c4c4d7a76abbe30dc3263
[ "MIT" ]
214
2016-04-22T02:25:14.000Z
2022-03-26T03:48:03.000Z
website/docs/rules/type/optionalDependencies-type.md
aarongoldenthal/npm-package-json-lint
a44917b365b68664eb6c4c4d7a76abbe30dc3263
[ "MIT" ]
40
2017-04-07T07:08:44.000Z
2021-12-31T15:44:37.000Z
---
id: optionalDependencies-type
title: optionalDependencies-type
---

Enabling this rule will result in an error being generated if the value in `optionalDependencies` is not an object.

## Example .npmpackagejsonlintrc configuration

```json
{
  "rules": {
    "optionalDependencies-type": "error"
  }
}
```

## Rule Details

### *Incorrect* example(s)

```json
{
  "optionalDependencies": 1
}
```

```json
{
  "optionalDependencies": ["npm-package-json-lint"]
}
```

```json
{
  "optionalDependencies": "npm-package-json-lint"
}
```

### *Correct* example(s)

```json
{
  "optionalDependencies": {
    "npm-package-json-lint": "^0.3.0"
  }
}
```

## History

* Introduced in version 1.0.0
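The check this rule describes can be sketched as a small function. This is not npm-package-json-lint's actual implementation, just an illustration of the rule's logic, exercised against the incorrect and correct examples above:

```javascript
// Sketch of the optionalDependencies-type check: the rule errors when
// `optionalDependencies` is present and is not a plain object
// (arrays, strings, and numbers all fail, per the examples above).
function optionalDependenciesTypeError(packageJson) {
  if (!('optionalDependencies' in packageJson)) return false;
  const value = packageJson.optionalDependencies;
  const isPlainObject =
    typeof value === 'object' && value !== null && !Array.isArray(value);
  return !isPlainObject; // true => the rule reports an error
}
```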
13.075472
115
0.65368
eng_Latn
0.59523
b9098b69d63bb7472ea44f77597d5dade9aadcfe
25
md
Markdown
README.md
doit8307/java-fx
32db4f615caef39c26c7dd5cbaf0e8d5c23c5cc4
[ "Unlicense" ]
null
null
null
README.md
doit8307/java-fx
32db4f615caef39c26c7dd5cbaf0e8d5c23c5cc4
[ "Unlicense" ]
1
2018-01-02T14:13:01.000Z
2018-01-02T14:13:01.000Z
README.md
doit8307/java-fx
32db4f615caef39c26c7dd5cbaf0e8d5c23c5cc4
[ "Unlicense" ]
null
null
null
# java-fx JavaFX project
8.333333
14
0.76
cat_Latn
0.316069
b90999dd88e2add5c4c5dbb44fd79b13f6853aa0
7,038
md
Markdown
readme.md
argshook/redux-msg
279594b6dc87e6cf8f357bc7c5a0770f262d9894
[ "MIT" ]
5
2017-10-04T12:09:41.000Z
2018-07-13T14:44:25.000Z
readme.md
argshook/redux-msg
279594b6dc87e6cf8f357bc7c5a0770f262d9894
[ "MIT" ]
1
2017-09-30T20:11:31.000Z
2017-10-04T21:24:54.000Z
readme.md
argshook/redux-msg
279594b6dc87e6cf8f357bc7c5a0770f262d9894
[ "MIT" ]
1
2018-11-08T10:03:13.000Z
2018-11-08T10:03:13.000Z
# Redux Msg [![Build Status](https://travis-ci.org/argshook/redux-msg.svg?branch=master)](https://travis-ci.org/argshook/redux-msg) ## `npm i redux-msg` small set of functions to help DRY redux code: ```js import { // helpers for regular redux createReducer, createSelector, createState, // special actions called "messages" for advanced code DRYness createMessage, createMessagesReducer, mergeReducers } from 'redux-msg'; ``` 928 bytes in total (gzipped), 0 dependencies, 100% satisfaction usage examples in [this repo](https://github.com/argshook/how-to-redux) # Motivation Redux is great but applications written using it tend to attract boilerplate code. Not much is needed to avoid this: only 3 tiny helper functions for starters, or additional 3 (also tiny) functions if you can handle a little convention. # Convention your redux-aware components should have: * `NAME` - `string` a unique name of component. easily changeable when needed * `MODEL` - `object` the shape of state * that's it that's no magic, just: ```js export const NAME = 'my awesome unique name' export const MODEL = { woodoo: true, greeting: 'Howdy', randomNumber: 4 } ``` this convention is helpful even without any of the helper functions suggested here. # API all 6 exported functions are explained below starting from simplest ## `createReducer` `const { createReducer } = require('redux-msg')` if you code reducers with `switch`es or `if`s, this function is for you. ### Usage ```js const { createReducer } = require('redux-msg'); const reducer = createReducer(MODEL)(reducers)` ``` where: * `MODEL` is an `object` of redux state * `reducers` is an `object` where: * `key` is action type (e.g. `COUNTER_INCREASE`) * `value` is a reducer function of signature `(state, action) => state`. 
so instead of this: ```js const reducer = (state, action) => { switch(action.type) { 'increase': return { ...state, count: state.count + 1 } } } ``` you can do this: ```js const MODEL = { count: 0 } const reducer = createReducer(MODEL)({ increase: state => ({...state, count: state + 1}) }) ``` ### Return Value `createReducer(MODEL)(reducers)` returns yet another reducer with signature `(state, action) => state`. This means that it can be used with other redux tools with no problem. ### Example ```js import { createReducer } from 'redux-msg'; export const MODEL = { count: 0 }; // reducer created with `createReducer` export const reducer = createReducer(MODEL)({ increase: state => ({ ...state, count: state.count + 1 }), setCount: (state, action) => ({ ...state, count: action.count }) }); // ... later dispatch({ type: 'increase' }); // state is now { count: 1 } dispatch({ type: 'setCount', count: 10 }); // state is now { count: 10 } ``` --- ## `createSelector` `const { createSelector } = require('redux-msg')` when you have state, you want to be able to read it easily. easily means from anywhere and always the same way. let's consider bad approach for a moment. imagine your `store.getState()` returns: ```js { counterComponent: { count: 0 } } ``` you can create function ```js const selectCount = state => state.counterComponent.count; ``` then call it somewhere else ```js selectCount(store.getState()) // <= 0 ``` however, this doesn't scale well: you need such function for each model property and it also needs to know full path to reach `count`. by following simple convention to name your components, you can automatically create such select functions with `createSelector` without the need to know path to properties. `createSelector(NAME)(MODEL)` where: * `NAME` is a `string` labeling your component. 
This should also be part of `combineReducers()`: ```js import counterLogic from 'components/counter/logic'; import todoLogic from 'components/todo/logic'; combineReducers({ [counterLogic.NAME]: counterLogic.reducer, [todoLogic.NAME]: todoLogic.reducer }); ``` > a `NAME` defined once for each redux state section is also useful > for other helper functions in this library. * `MODEL` is an `object` of redux state ### Return Value object with keys that are the same as in given `MODEL`. values are functions of signature `state => any`, where `state` is `store.getState()` and `any` is whatever type that slice of state is. For example: `logic.js`: ```js const NAME = 'counterComponent'; const MODEL = { count: 0, message: 'hello there!' }; export const select = createSelector(NAME)(MODEL); assert.deepEqual(Object.keys(selectors), Object.keys(MODEL)) // just to illustrate that both have same keys console.log(select.count(store.getState())) // <= 0 console.log(select.message(store.getState())) // <= 'hello there!' ``` this fits really well with `react-redux` `mapStateToProps`: `component.js`: ```js import { select } from './logic'; const mapStateToProps = state => ({ count: select.count(state) }); ``` ### Example ```js import { createSelector } from 'redux-msg'; export const NAME = 'counterComponent'; export const MODEL = { count: 0 }; export const selector = createSelector(NAME)(MODEL); ``` it can be combined with other selectors easily: ```js export const selectors = { ...createSelector(NAME)(MODEL), myOtherSelector: state => state[NAME].specialItem } ``` --- ## `createState` `const { createState } = require('redux-msg')` helper to create a slice of global state for specific component. can be used as a state "factory", to hydrate `createStore` when loading component dynamically or during server side rendering. can also be used as utility in tests. ### Usage `const initState = createState(NAME)(MODEL)` where: * `NAME` is a `string` labeling your component. 
This should also be part of `combineReducers()`. See `createSelector` for more details * `MODEL` is an `object` of redux state ### Return Value a function with signature `object -> { [NAME]: { ...MODEL, object } }`. That's pretty much the actual implementation. `createState(NAME)(MODEL)` returns function that accepts `object` and returns `state`. The returned `state` has key `name` and its value is shallowly merged `MODEL` and `object`. Code explains better than i do, please see example. ### Example `myComponent/redux.js`: ```js import { createState } from 'redux-msg'; const NAME = 'myComponent'; const MODEL = { default: 'property', something: 'i am some default value' }; export const state = createState(NAME)(MODEL); ``` `create-store.js`: `createStore` from `redux` accepts second parameter - initial state. this is where `createState` may be used ```js import { createStore, combineReducers } from 'redux'; import myComponent from 'myComponent/redux'; const store = createStore( combineReducers({ [myComponent.NAME]: myComponent.reducer }), { ...createMyComponentState({ something: 'i am NOT default haha!' }) }) ``` after this, `store.getState()` will return: ```js { 'myComponent': { default: 'property' something: 'i am NOT default haha!' } } ``` ---
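The documented contracts above can be illustrated with minimal sketches. These are not redux-msg's source, only small reimplementations of the signatures this readme describes, reusing the counter example:

```javascript
// Sketch of createReducer(MODEL)(reducers): returns a reducer of
// signature (state, action) => state, dispatching on action.type
// via the keys of the `reducers` object.
const createReducer = (model) => (reducers) => (state = model, action = {}) =>
  reducers[action.type] ? reducers[action.type](state, action) : state;

// Sketch of createSelector(NAME)(MODEL): one selector per MODEL key,
// each reading from the NAME-keyed slice of the global state.
const createSelector = (name) => (model) =>
  Object.keys(model).reduce(
    (acc, key) => ({ ...acc, [key]: (state) => state[name][key] }),
    {}
  );

// Usage mirroring the counter example in this readme:
const NAME = 'counterComponent';
const MODEL = { count: 0 };

const reducer = createReducer(MODEL)({
  increase: (state) => ({ ...state, count: state.count + 1 }),
});
const select = createSelector(NAME)(MODEL);
```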
22.062696
134
0.701194
eng_Latn
0.981182
b90c4306ce12709f8266c2e23d204806a8f09878
2,805
md
Markdown
README.md
djlisko01/nomad
2d7c6da04e2695a131019ac5cbf3d302e45a419c
[ "MIT" ]
null
null
null
README.md
djlisko01/nomad
2d7c6da04e2695a131019ac5cbf3d302e45a419c
[ "MIT" ]
2
2021-11-10T02:19:41.000Z
2021-11-10T02:20:07.000Z
README.md
djlisko01/nomad
2d7c6da04e2695a131019ac5cbf3d302e45a419c
[ "MIT" ]
2
2021-11-10T00:30:07.000Z
2021-11-23T02:36:26.000Z
# nomad

A website to connect project owners with freelancers.

## Objective

This project is intended to focus on backend development using Node.js and Express. The database for the website was built using a non-relational database called MongoDB. The database was hosted using MongoDB Atlas as our database server.

## Screenshots

![Login Page](./documentation-images/login-page.png)
![Signup Page](./documentation-images/signup-page.png)
![Projects Listing Page](./documentation-images/projects-listing-page.png)
![Post Project Page](./documentation-images/post-project-page.png)
![Freelancers Listing Page](./documentation-images/freelancers-listing-page.png)
![Create Freelancer Page](./documentation-images/create-freelancer-page.png)
![Freelancer Post Page](./documentation-images/freelancer-post-page.png)

## How-To-Use

There are 2 ways to use this project.

1. Visit our deployed application at Heroku. (link below)
2. Clone this git repository.

### Using our deployed version

Please visit our deployed application for the client version.

### Using locally via `git clone`

Once you clone our repository, make sure to install all dependencies. Run the following command inside the nomad root folder to install all dependencies: `npm install`

In the `db` folder, we have provided 2 JSON files to be used as collections for the database.

1. Projects.json
2. Users.json

#### Creating local database

Before running the program we first need to create a local database. Run the following command in your terminal to start the local mongo server: `mongod --dbpath ~/data/db`

Keeping the local server running, open a new terminal and run the following to create the nomadLocalDB database and import the given JSON files as collections:

1. Projects collection

```
mongoimport -h localhost:27017 -d nomadLocalDB -c Projects --drop --jsonArray --file ./db/Projects.json
```

2. Users collection

```
mongoimport -h localhost:27017 -d nomadLocalDB -c Users --drop --jsonArray --file ./db/Users.json
```

NOTE: In `./db/myMongoDB.js`, make sure the global constant reads `DB_NAME = "nomadLocalDB"`; otherwise the program cannot find the local database.

#### Running locally

Once the local database has been created, follow these steps to run locally:

1. Run the Mongo server: `mongod --dbpath ~/data/db`
2. Run the client server: `npm run devstart`

NOTE: the devstart script has been prepared to run the server using nodemon. If it cannot run, try running `npm start` instead.

3. Using your browser, go to http://localhost:3000/

### Relevant Links

[Demo](https://nomad-app-project.herokuapp.com/)

[Video](https://youtu.be/aumBmPMepUE)

[Slides](https://docs.google.com/presentation/d/1BTYYXypbosWAm4gJ2Wu3WdDcv5MKs_HqppMUeoTTZis/edit?usp=sharing)

[Class](https://johnguerra.co/classes/webDevelopment_fall_2021/)
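Since both collections are imported with the same flags, the import command lines above can be generated from a tiny helper. This is an illustrative sketch, not part of the project; `mongoimportArgs` is a hypothetical name, and the host, database name, and file paths come from this readme:

```javascript
// Builds the argument list for the `mongoimport` commands shown above,
// so both Projects and Users can be imported the same way.
function mongoimportArgs(
  collection,
  file,
  { host = 'localhost:27017', db = 'nomadLocalDB' } = {}
) {
  return [
    '-h', host,
    '-d', db,
    '-c', collection,
    '--drop',
    '--jsonArray',
    '--file', file,
  ];
}
```

For example, `mongoimportArgs('Users', './db/Users.json')` reproduces the flags of the Users import command.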
48.362069
239
0.775758
eng_Latn
0.932886
b90c476e42e9f6a91a44a4f09ef48c7c1e312095
11,957
md
Markdown
articles/active-directory-domain-services/active-directory-ds-scenarios.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-domain-services/active-directory-ds-scenarios.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-domain-services/active-directory-ds-scenarios.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Servizi di dominio Azure Active Directory: Scenari di distribuzione | Microsoft Docs' description: Scenari di distribuzione per i Servizi di dominio Azure Active Directory services: active-directory-ds documentationcenter: '' author: mahesh-unnikrishnan manager: mtillman editor: curtand ms.assetid: c5216ec9-4c4f-4b7e-830b-9d70cf176b20 ms.service: active-directory ms.component: domain-services ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: conceptual ms.date: 09/21/2017 ms.author: maheshu ms.openlocfilehash: db2bd855300d93d832a3dd7ca0ce526478824ccc ms.sourcegitcommit: 9222063a6a44d4414720560a1265ee935c73f49e ms.translationtype: HT ms.contentlocale: it-IT ms.lasthandoff: 08/03/2018 ms.locfileid: "39502570" --- # <a name="deployment-scenarios-and-use-cases"></a>Scenari di distribuzione e casi d'uso Questa sezione illustra alcuni scenari e casi d'uso che traggono vantaggio dall'utilizzo di Servizi di dominio Azure Active Directory (AD). ## <a name="secure-easy-administration-of-azure-virtual-machines"></a>Gestione sicura e semplificata delle macchine virtuali di Azure È possibile utilizzare Servizi di dominio Azure Active Directory per gestire le macchine virtuali di Azure in modo semplificato. Le macchine virtuali di Azure possono appartenere al dominio gestito, consentendo di utilizzare le credenziali aziendali di Active Directory per effettuare l'accesso. Questo approccio consente di evitare complicazioni con la gestione delle credenziali, ad esempio la gestione degli account di amministratore locali su ciascuna delle macchine virtuali di Azure. Le macchine virtuali del server aggiunte al dominio gestito possono inoltre essere gestite e protette tramite i criteri di gruppo. È possibile applicare le linee di base della sicurezza necessaria alle macchine virtuali di Azure e bloccarle in conformità con le linee guida sulla sicurezza aziendale. 
Ad esempio, è possibile utilizzare le funzionalità di gestione dei criteri di gruppo per limitare i tipi di applicazioni che possono essere avviate su tali macchine virtuali. ![Gestione ottimizzata delle macchine virtuali di Azure](./media/active-directory-domain-services-scenarios/streamlined-vm-administration.png) Man mano che i server e altri elementi dell'infrastruttura raggiungono la fine del ciclo di vita, Contoso sposta molte delle applicazioni attualmente ospitate in locale nel cloud. Lo standard IT corrente prevede che i server che ospitano applicazioni aziendali sia aggiunto a un dominio e gestito tramite Criteri di gruppo. L'amministratore IT di Contoso preferisce aggiungere a un dominio le macchine virtuali distribuite in Azure per semplificarne la gestione. Di conseguenza, gli amministratori e gli utenti possono accedere utilizzando le proprie credenziali aziendali. Allo stesso tempo, le macchine possono essere configurate per essere conformi alle linee di base della sicurezza necessaria utilizzando i criteri di gruppo. Contoso preferisce non dover distribuire, monitorare e gestire i controller di dominio in Azure per proteggere le macchine virtuali di Azure. Servizi di dominio Azure Active Directory è quindi un'ottima soluzione per questo caso di utilizzo. **Note sulla distribuzione** Considerare i seguenti punti importanti per questo scenario di distribuzione: * Per impostazione predefinita, i domini gestiti forniti da Servizi di dominio Azure AD offrono una singola struttura di unità organizzativa di tipo semplice. Tutte le macchine aggiunte al dominio si trovano in una singola unità organizzativa di tipo semplice. È tuttavia possibile scegliere di creare unità organizzative personalizzate. * Servizi di dominio Azure AD supporta Criteri di gruppo semplici nel formato di un oggetto Criteri di gruppo predefinito per i contenitori di utenti e computer. 
È possibile creare oggetti Criteri di gruppo personalizzati e assegnarli a unità organizzative personalizzate. * Servizi di dominio Azure AD supporta lo schema dell'oggetto computer di Active Directory di base. Non è possibile estendere lo schema dell'oggetto computer. ## <a name="lift-and-shift-an-on-premises-application-that-uses-ldap-bind-authentication-to-azure-infrastructure-services"></a>Spostamento di un'applicazione locale che usa l'autenticazione di binding LDAP nei servizi di infrastruttura di Azure ![Binding LDAP](./media/active-directory-domain-services-scenarios/ldap-bind.png) Contoso ha un'applicazione locale acquistata da un fornitore di software indipendente molti anni fa. L'applicazione è attualmente in modalità manutenzione presso il fornitore di software indipendente e le modifiche apportate all'applicazione sono estremamente costose per Contoso. L'applicazione ha un front-end basato sul Web che raccoglie le credenziali degli utenti tramite un modulo Web e che autentica gli utenti tramite un binding LDAP con l'istanza di Active Directory aziendale. Contoso vorrebbe eseguire la migrazione di questa applicazione ai servizi di infrastruttura di Azure. Sarebbe opportuno che l'applicazione funzionasse così com'è, senza richiedere modifiche. Gli utenti dovrebbero inoltre poter eseguire l'autenticazione usando le credenziali aziendali esistenti senza che sia necessaria una formazione sulle nuove procedure. In altre parole, gli utenti non devono accorgersi che è stata eseguita la migrazione dell'applicazione e che l'applicazione non è più in esecuzione in locale. **Note sulla distribuzione** Considerare i seguenti punti importanti per questo scenario di distribuzione: * Assicurarsi che l'applicazione non debba eseguire operazioni di modifica/scrittura nella directory. L'accesso LDAP in scrittura ai domini gestiti forniti da Servizi di dominio Azure AD non è supportato. * Non è possibile modificare direttamente le password nel dominio gestito. 
Gli utenti finali possono modificare le password tramite il meccanismo di reimpostazione della password self-service o nella directory locale. Queste modifiche vengono automaticamente sincronizzate e rese disponibili nel dominio gestito. ## <a name="lift-and-shift-an-on-premises-application-that-uses-ldap-read-to-access-the-directory-to-azure-infrastructure-services"></a>Spostamento di un'applicazione locale che usa la lettura LDAP per accedere alla directory nei servizi di infrastruttura di Azure Contoso ha un'applicazione line-of-business locale (LOB) che è stata sviluppata quasi un decennio fa. L'applicazione è compatibile con le directory ed è stata progettata per funzionare con Windows Server Active Directory. L'applicazione usa LDAP (Lightweight Directory Access Protocol) per la lettura di informazioni/attributi relativi agli utenti da Active Directory. L'applicazione non modifica gli attributi né esegue operazioni di scrittura nella directory. Contoso desidera eseguire la migrazione di questa applicazione ai servizi di infrastruttura di Azure e dismettere l'hardware locale obsoleto che ospita l'applicazione. L'applicazione non può essere riscritta per l'uso di API di directory moderne, ad esempio l'API Graph basata su REST di Azure AD. Di conseguenza, un'opzione di spostamento è opportuna quando è possibile eseguire la migrazione dell'applicazione per l'esecuzione nel cloud senza modificare il codice o riscrivere l'applicazione. **Note sulla distribuzione** Considerare i seguenti punti importanti per questo scenario di distribuzione: * Assicurarsi che l'applicazione non debba eseguire operazioni di modifica/scrittura nella directory. L'accesso LDAP in scrittura ai domini gestiti forniti da Servizi di dominio Azure AD non è supportato. * Assicurarsi che l'applicazione non necessiti di uno schema Active Directory personalizzato o esteso. Le estensioni dello schema non sono supportate in Servizi di dominio Azure AD. 
## <a name="migrate-an-on-premises-service-or-daemon-application-to-azure-infrastructure-services"></a>Migrate an on-premises service or daemon application to Azure infrastructure services

Some applications include multiple tiers, where one of the tiers needs to perform authenticated calls to a back-end tier, such as a database tier. Active Directory service accounts are commonly used in these use cases. You can lift-and-shift such applications to Azure infrastructure services and use Azure AD Domain Services for their identity needs. You can choose to use the same service account that is synchronized from your on-premises directory to Azure AD. Alternatively, you can first create a custom organizational unit (OU) and then create a separate service account in that OU to deploy these applications.

![Service account using WIA](./media/active-directory-domain-services-scenarios/wia-service-account.png)

Contoso has a custom-built software vault application that includes a web front end, a SQL server, and a back-end FTP server. Windows Integrated Authentication with service accounts is used to authenticate the web front end to the FTP server. The web front end is configured to run as a service account. The back-end server is configured to authorize access from the web front end's service account. Contoso prefers not to deploy a domain controller virtual machine in the cloud to move this application to Azure infrastructure services.

Contoso's IT administrators can deploy the servers that host the web front end, the SQL server, and the FTP server on Azure virtual machines. These virtual machines are then joined to the Azure AD Domain Services managed domain. The same service account from the on-premises directory can then be used for the app's authentication. That service account is synchronized to the Azure AD Domain Services managed domain and is available for use.

**Deployment notes** Consider the following key points for this deployment scenario:

* Make sure the application uses a username and password for authentication. Certificate-based or smart card-based authentication isn't supported by Azure AD Domain Services.
* Passwords can't be changed directly against the managed domain. End users can change their passwords using the self-service password reset mechanism or against the on-premises directory. These changes are automatically synchronized and available in the managed domain.

## <a name="windows-server-remote-desktop-services-deployments-in-azure"></a>Windows Server Remote Desktop Services deployments in Azure

You can use Azure AD Domain Services to provide managed domain services to Remote Desktop servers deployed in Azure. For more information about this deployment scenario, see how to [integrate Azure AD Domain Services with your Remote Desktop Services deployment](https://docs.microsoft.com/windows-server/remote/remote-desktop-services/rds-azure-adds).

## <a name="domain-joined-hdinsight-clusters-preview"></a>Domain-joined HDInsight clusters (preview)

You can set up an Azure HDInsight cluster joined to an Azure AD Domain Services managed domain with Apache Ranger enabled. You can create and apply Hive policies through Apache Ranger, and allow users (for example, researchers) to connect to Hive using ODBC-based tools such as Excel and Tableau. Microsoft is working to add other workloads, such as HBase, Spark, and Storm, to domain-joined HDInsight soon. For more information about this deployment scenario, see how to [configure domain-joined HDInsight clusters](../hdinsight/domain-joined/apache-domain-joined-configure.md).
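Connecting an ODBC-based tool to Hive on a domain-joined cluster comes down to an ODBC connection string. The sketch below only assembles such a string; the driver name, key names, and values are assumptions modeled on common Hive ODBC drivers, and the host and credentials are hypothetical placeholders, so check the documentation of the driver you actually install.

```python
# Sketch: building an ODBC connection string for a Hive endpoint on a
# domain-joined HDInsight cluster. The driver name and key names below are
# assumptions based on common Hive ODBC drivers, not a verified reference.

def hive_odbc_connection_string(host: str, user: str, password: str,
                                port: int = 443) -> str:
    """Assemble key=value pairs into a single ODBC connection string."""
    pairs = {
        "Driver": "{Microsoft Hive ODBC Driver}",  # hypothetical driver name
        "Host": host,
        "Port": str(port),
        "HiveServerType": "2",    # HiveServer2 (assumed key/value)
        "AuthMech": "3",          # username/password authentication (assumed)
        "UID": user,
        "PWD": password,
    }
    return ";".join(f"{k}={v}" for k, v in pairs.items())
```

The credentials passed as `UID`/`PWD` would be the corporate credentials that Apache Ranger policies are evaluated against on the cluster side.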
+++
Title = "Алексей Вахов"
Twitter = ""
image = "alexey-vahov.png"
type = "speaker"
linktitle = "alexey-vahov"
+++

CTO at Uchi.ru. Graduated from the Department of General and Applied Physics at MIPT. Spent 7 years as a C++ developer working on very large systems (tens of millions of lines of code). Later moved to web development; my favorite server-side technology is Ruby on Rails, where I am among the top 100 contributors. I am passionate about operations, Docker, and everything around them: it is interesting and vitally important for our company.
---
title: UI Automation support for the Group control type
ms.date: 03/30/2017
helpviewer_keywords:
- UI Automation, Group control type
- Group control type
- control types, Group
ms.assetid: 18e01bab-01f8-4567-b867-88dce9c4a435
ms.openlocfilehash: 063ef780793eef87ed08cbf2d98d387bd811c166
ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/08/2019
ms.locfileid: "59209740"
---
# <a name="ui-automation-support-for-the-group-control-type"></a>UI Automation support for the Group control type

> [!NOTE]
> This documentation is intended for .NET Framework developers who want to use the managed [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] classes defined in the <xref:System.Windows.Automation> namespace. For more information about [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)], see [Windows Automation API: UI Automation](https://go.microsoft.com/fwlink/?LinkID=156746).

This topic provides information about [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] support for the Group control type. In [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)], a control type is a set of conditions that a control must meet in order to use the <xref:System.Windows.Automation.AutomationElement.ControlTypeProperty> property. The conditions include specific guidelines for the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] tree, [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] property values, and [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] control patterns.

The group control represents a node within a hierarchy. The Group control type creates a separation in the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] tree, so that items that are grouped together have a logical division within the tree.

The following sections define the required [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] tree structure, properties, control patterns, and events for the Group control type. The [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] requirements apply to all group controls, whether [!INCLUDE[TLA#tla_winclient](../../../includes/tlasharptla-winclient-md.md)], [!INCLUDE[TLA#tla_win32](../../../includes/tlasharptla-win32-md.md)], or [!INCLUDE[TLA#tla_winforms](../../../includes/tlasharptla-winforms-md.md)].

<a name="Required_UI_Automation_Tree_Structure"></a>
## <a name="required-ui-automation-tree-structure"></a>Required UI Automation Tree Structure

The following table depicts the control view and the content view of the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] tree that pertains to group controls, and describes what can be contained in each view. For more information about the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] tree, see [UI Automation Tree Overview](../../../docs/framework/ui-automation/ui-automation-tree-overview.md).

|Control View|Content View|
|------------------|------------------|
|Group<br /><br /> -0 or more controls|Group<br /><br /> -0 or more controls|

Group controls typically have [UI Automation Support for the ListItem Control Type](../../../docs/framework/ui-automation/ui-automation-support-for-the-listitem-control-type.md), [UI Automation Support for the TreeItem Control Type](../../../docs/framework/ui-automation/ui-automation-support-for-the-treeitem-control-type.md), or [UI Automation Support for the DataItem Control Type](../../../docs/framework/ui-automation/ui-automation-support-for-the-dataitem-control-type.md) types found beneath them in the control subtree. Because "Group" is a generic container, any control type can be found beneath the group control in the tree.

<a name="Required_UI_Automation_Properties"></a>
## <a name="required-ui-automation-properties"></a>Required UI Automation Properties

The following table lists the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] properties whose value or definition is especially relevant to group controls. For more information about [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] properties, see [UI Automation Properties for Clients](../../../docs/framework/ui-automation/ui-automation-properties-for-clients.md).
|[!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] Property|Value|Notes|
|------------------------------------------------------------------------------------|-----------|-----------|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.AutomationIdProperty>|See notes.|The value of this property must be unique across all controls in an application.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.BoundingRectangleProperty>|See notes.|The outermost rectangle that contains the whole control.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.ClickablePointProperty>|See notes.|Supported if there is a bounding rectangle. If not every point within the bounding rectangle is clickable and you perform specialized hit testing, you must override and provide a clickable point.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.IsKeyboardFocusableProperty>|See notes.|If the control can receive keyboard focus, it must support this property.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.NameProperty>|See notes.|The group control typically gets its name from the text that labels the control.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.LabeledByProperty>|See notes.|Group controls are typically self-labeled. In these cases, return `null` here. If there is a static text label for the group, it must be returned as the value of the LabeledBy property.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.ControlTypeProperty>|Group|This value is the same for all UI frameworks.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.LocalizedControlTypeProperty>|"group"|Localized string corresponding to the Group control type.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.IsContentElementProperty>|True|The group control is always included in the content view of the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] tree.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.IsControlElementProperty>|True|The group control is always included in the control view of the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] tree.|

<a name="Required_UI_Automation_Control_Patterns"></a>
## <a name="required-ui-automation-control-patterns"></a>Required UI Automation Control Patterns

The following table lists the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] control patterns required to be supported by the Group control type. For more information about control patterns, see [UI Automation Control Patterns Overview](../../../docs/framework/ui-automation/ui-automation-control-patterns-overview.md).

|Control Pattern|Support|Notes|
|---------------------|-------------|-----------|
|<xref:System.Windows.Automation.Provider.IExpandCollapseProvider>|Depends|Group controls that can be used to show or hide information must support the ExpandCollapse pattern.|

<a name="Required_UI_Automation_Events"></a>
## <a name="required-ui-automation-events"></a>Required UI Automation Events

The following table lists the [!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] events required to be supported by all group controls. For more information about events, see [UI Automation Events Overview](../../../docs/framework/ui-automation/ui-automation-events-overview.md).

|[!INCLUDE[TLA2#tla_uiautomation](../../../includes/tla2sharptla-uiautomation-md.md)] Event|Support|Notes|
|---------------------------------------------------------------------------------|-------------|-----------|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.BoundingRectangleProperty> property-changed event.|Required|None.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.IsOffscreenProperty> property-changed event.|Required|None.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.IsEnabledProperty> property-changed event.|Required|None.|
|<xref:System.Windows.Automation.ExpandCollapsePatternIdentifiers.ExpandCollapseStateProperty> property-changed event.|Depends|None.|
|<xref:System.Windows.Automation.TogglePatternIdentifiers.ToggleStateProperty> property-changed event.|Depends|None.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.AutomationFocusChangedEvent>|Required|None.|
|<xref:System.Windows.Automation.AutomationElementIdentifiers.StructureChangedEvent>|Required|None.|

## <a name="see-also"></a>See also

- <xref:System.Windows.Automation.ControlType.Group>
- [UI Automation Control Types Overview](../../../docs/framework/ui-automation/ui-automation-control-types-overview.md)
- [UI Automation Overview](../../../docs/framework/ui-automation/ui-automation-overview.md)
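The property requirements in the tables above lend themselves to a simple conformance check. The real values come from the managed <xref:System.Windows.Automation> API; the sketch below is only an illustration that models an element's properties as a plain dictionary and verifies the always-required values for the Group control type.

```python
# Minimal stand-in for validating the Group control type's fixed required
# property values from the tables above. A real client would read these through
# the System.Windows.Automation managed API; here a dict models the element.

REQUIRED_GROUP_VALUES = {
    "ControlType": "Group",            # same value across all UI frameworks
    "LocalizedControlType": "group",   # localized string for the control type
    "IsContentElement": True,          # always in the content view
    "IsControlElement": True,          # always in the control view
}

def violations(element_properties: dict) -> list:
    """Return the names of required properties that are missing or wrong."""
    return [name for name, expected in REQUIRED_GROUP_VALUES.items()
            if element_properties.get(name) != expected]
```

Properties such as AutomationId or LabeledBy are excluded on purpose: their required values depend on the individual control ("See notes" in the table), so they cannot be checked against fixed constants.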
# Azure Pipelines Hosted Windows 2019 with VS2019 image

The following software is installed on machines in the Azure Pipelines **Hosted Windows 2019 with VS2019** pool.

Components marked with **\*** have been upgraded since the previous version of the image.

## Chocolatey

_Version:_ 0.10.11<br/>
_Environment:_
* PATH: contains location for choco.exe

## Docker

_Version:_ 18.09.1<br/>
_Environment:_
* PATH: contains location of docker.exe

## Docker-compose

_Version:_ 1.23.2<br/>
_Environment:_
* PATH: contains location of docker-compose.exe

## Powershell Core

_Version:_ 6.1.1 <br/>

## Docker images

The following container images have been cached:

* microsoft/dotnet-framework@sha256:f937dbfb092e5a04ca6ac93d49ab0b6c375bdaf9cd4d9f7bd9d4a9407eb36585
* microsoft/windowsservercore@sha256:05de0a0ac13d3652bd1f2281b8589459ebb611092e3fe4d8f1be91f1f6984266
* microsoft/aspnet@sha256:a4d6856b978e5b9858ca294c97c5f4bc9ecfe3b062a7a015eaf625801573bc11
* microsoft/nanoserver@sha256:2b783310e6c82de737e893abd53ae238ca56b5a96e2861558fb9a111d6691ddb
* microsoft/aspnetcore-build@sha256:9ecc7c5a8a7a11dca5f08c860165646cb30d084606360a3a72b9cbe447241c0c

## Visual Studio 2019 Enterprise

_Version:_ VisualStudioPreview/16.0.0-pre.2.2+28602.52<br/>
_Location:_ C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview

The following workloads and components are installed with Visual Studio 2019:

* Component.CPython2.x64
* Component.CPython3.x64
* Component.Linux.CMake
* Component.UnityEngine.x64
* Component.UnityEngine.x86
* Component.Unreal.Android
* Microsoft.Component.Azure.DataLake.Tools
* Microsoft.Component.CookiecutterTools
* Microsoft.Component.PythonTools.Miniconda
* Microsoft.Component.PythonTools.Web
* Microsoft.Component.VC.Runtime.UCRTSDK
* Microsoft.Net.ComponentGroup.4.6.2.DeveloperTools
* Microsoft.Net.ComponentGroup.4.7.1.DeveloperTools
* Microsoft.Net.ComponentGroup.4.7.DeveloperTools
* Microsoft.VisualStudio.Component.AspNet45
* Microsoft.VisualStudio.Component.Azure.Kubernetes.Tools
* Microsoft.VisualStudio.Component.Azure.MobileAppsSdk
* Microsoft.VisualStudio.Component.Azure.ServiceFabric.Tools
* Microsoft.VisualStudio.Component.Azure.Storage.AzCopy
* Microsoft.VisualStudio.Component.Debugger.JustInTime
* Microsoft.VisualStudio.Component.DslTools
* Microsoft.VisualStudio.Component.EntityFramework
* Microsoft.VisualStudio.Component.FSharp.Desktop
* Microsoft.VisualStudio.Component.LinqToSql
* Microsoft.VisualStudio.Component.TeamOffice
* Microsoft.VisualStudio.Component.TestTools.CodedUITest
* Microsoft.VisualStudio.Component.TestTools.WebLoadTest
* Microsoft.VisualStudio.Component.UWP.VC.ARM64
* Microsoft.VisualStudio.Component.VC.ATL.ARM
* Microsoft.VisualStudio.Component.VC.ATLMFC
* Microsoft.VisualStudio.Component.VC.ATLMFC.Spectre
* Microsoft.VisualStudio.Component.VC.CLI.Support
* Microsoft.VisualStudio.Component.VC.CMake.Project
* Microsoft.VisualStudio.Component.VC.DiagnosticTools
* Microsoft.VisualStudio.Component.VC.MFC.ARM
* Microsoft.VisualStudio.Component.VC.MFC.ARM.Spectre
* Microsoft.VisualStudio.Component.VC.MFC.ARM64
* Microsoft.VisualStudio.Component.VC.MFC.ARM64.Spectre
* Microsoft.VisualStudio.Component.VC.Runtimes.ARM.Spectre
* Microsoft.VisualStudio.Component.VC.Runtimes.ARM64.Spectre
* Microsoft.VisualStudio.Component.VC.Runtimes.x86.x64.Spectre
* Microsoft.VisualStudio.Component.VC.TestAdapterForBoostTest
* Microsoft.VisualStudio.Component.VC.TestAdapterForGoogleTest
* Microsoft.VisualStudio.Component.VC.v141
* Microsoft.VisualStudio.Component.Windows10SDK.17134
* Microsoft.VisualStudio.Component.Windows10SDK.17763
* Microsoft.VisualStudio.ComponentGroup.Azure.CloudServices
* Microsoft.VisualStudio.ComponentGroup.Azure.ResourceManager.Tools
* Microsoft.VisualStudio.ComponentGroup.Web.CloudTools
* Microsoft.VisualStudio.Workload.Azure
* Microsoft.VisualStudio.Workload.Data
* Microsoft.VisualStudio.Workload.DataScience
* Microsoft.VisualStudio.Workload.ManagedDesktop
* Microsoft.VisualStudio.Workload.ManagedGame
* Microsoft.VisualStudio.Workload.NativeCrossPlat
* Microsoft.VisualStudio.Workload.NativeDesktop
* Microsoft.VisualStudio.Workload.NativeGame
* Microsoft.VisualStudio.Workload.NativeMobile
* Microsoft.VisualStudio.Workload.NetCoreTools
* Microsoft.VisualStudio.Workload.NetCrossPlat
* Microsoft.VisualStudio.Workload.NetWeb
* Microsoft.VisualStudio.Workload.Node
* Microsoft.VisualStudio.Workload.Office
* Microsoft.VisualStudio.Workload.Python
* Microsoft.VisualStudio.Workload.Universal
* Microsoft.VisualStudio.Workload.VisualStudioExtension

## WIX Tools

_Toolset Version:_ 3.11.2318<br/>
_Environment:_
* WIX: Installation root of WIX

## .NET 4.7.2

_Version:_ 4.7.03190

## Windows Driver Kit

_Version:_ 10.0.17763.0<br/>

## Azure Service Fabric

_SDK Version:_ 3.3.617.9590<br/>
_Runtime Version:_ 6.4.617.9590

## Python (64 bit)

#### Python 3.7.0
_Environment:_
* PATH: contains location of python.exe

#### Python 2.7.14
_Location:_ C:\Python27amd64

## WinAppDriver

_Version:_ 1.1.1809.18001<br/>

## Android SDK Build Tools

#### 28.0.3
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\28.0.3

#### 28.0.2
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\28.0.2

#### 28.0.1
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\28.0.1

#### 28.0.0
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\28.0.0

#### 27.0.3
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\27.0.3

#### 27.0.2
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\27.0.2

#### 27.0.1
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\27.0.1

#### 27.0.0
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\27.0.0

#### 26.0.3
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\26.0.3

#### 26.0.2
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\26.0.2

#### 26.0.1
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\26.0.1

#### 26.0.0
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\26.0.0

#### 25.0.3
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\25.0.3

#### 25.0.2
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\25.0.2

#### 25.0.1
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\25.0.1

#### 25.0.0
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\25.0.0

#### 24.0.3
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\24.0.3

#### 24.0.2
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\24.0.2

#### 24.0.1
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\24.0.1

#### 24.0.0
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\24.0.0

#### 23.0.3
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\23.0.3

#### 23.0.2
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\23.0.2

#### 23.0.1
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\23.0.1

#### 22.0.1
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\22.0.1

#### 21.1.2
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\21.1.2

#### 20.0.0
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\20.0.0

#### 19.1.0
_Location:_ C:\Program Files (x86)\Android\android-sdk\build-tools\19.1.0

## Android SDK Platforms

#### 9 (API 28)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-28

#### 8.1.0 (API 27)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-27

#### 8.0.0 (API 26)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-26

#### 7.1.1 (API 25)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-25

#### 7.0 (API 24)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-24

#### 6.0 (API 23)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-23

#### 5.1.1 (API 22)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-22

#### 5.0.1 (API 21)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-21

#### 4.4.2 (API 19)
_Location:_ C:\Program Files (x86)\Android\android-sdk\platforms\android-19

## Azure/AzureRM Powershell modules

#### 2.1.0
This version is installed and is available via Get-Module -ListAvailable

#### 3.8.0
This version is saved but not installed
_Location:_ C:\Modules\azurerm_3.8.0\AzureRM\3.8.0\AzureRM.psd1

#### 4.2.1
This version is saved but not installed
_Location:_ C:\Modules\azurerm_4.2.1\AzureRM\4.2.1\AzureRM.psd1

#### 5.1.1
This version is saved but not installed
_Location:_ C:\Modules\azurerm_5.1.1\AzureRM\5.1.1\AzureRM.psd1

#### 6.7.0
This version is saved but not installed
_Location:_ C:\Modules\azurerm_6.7.0\AzureRM\6.7.0\AzureRM.psd1

## TLS12

_Version:_ 1.2<br/>
_Description:_ .NET has been configured to use TLS 1.2 by default

## Azure CLI

_Version:_ 2.0.57<br/>
_Environment:_
* PATH: contains location of az.cmd

## Python

_Version:_ 2.7.14 (x86)<br/>
_Version:_ 3.4.4 (x86)<br/>
_Version:_ 3.5.4 (x86)<br/>
_Version:_ 3.6.8 (x86)<br/>
_Version:_ 3.7.2 (x86)<br/>
_Version:_ 2.7.14 (x64)<br/>
_Version:_ 3.4.4 (x64)<br/>
_Version:_ 3.5.4 (x64)<br/>
_Version:_ 3.6.8 (x64)<br/>
_Version:_ 3.7.2 (x64)<br/>

> Note: These versions of Python are available through the [Use Python Version](https://go.microsoft.com/fwlink/?linkid=871498) task.
## Git

_Version:_ 2.20.1<br/>
_Environment:_
* PATH: contains location of git.exe

## Git Large File Storage (LFS)

_Version:_ 2.6.1<br/>
_Environment:_
* PATH: contains location of git-lfs.exe
* GIT_LFS_PATH: location of git-lfs.exe

## Go (x64)

#### 1.11.5
_Environment:_
* GOROOT_1_9_X64: root directory of the Go 1.11.5 installation

#### 1.11.5
_Environment:_
* PATH: contains the location of go.exe version 1.11.5
* GOROOT: root directory of the Go 1.11.5 installation
* GOROOT_1_10_X64: root directory of the Go 1.11.5 installation

## PHP (x64)

#### 7.3.1
_Environment:_
* PATH: contains the location of php.exe version 7.3.1
* PHPROOT: root directory of the PHP 7.3.1 installation

## Ruby (x64)

#### 2.4.3p205
_Location:_ C:\hostedtoolcache\windows\Ruby\2.4.3\x64\bin

#### 2.5.0p0
_Environment:_
* Location: C:\hostedtoolcache\windows\Ruby\2.5.0\x64\bin
* PATH: contains the location of ruby.exe version 2.5.0p0

## Subversion

_Version:_ 1.8.17<br/>
_Environment:_
* PATH: contains location of svn.exe

## Google Chrome

_version:_ 71.0.3578.98

## Mozilla Firefox

_version:_ 64.0.2

## Selenium Web Drivers

#### Chrome Driver
_version:_ 2.45
_Environment:_
* ChromeWebDriver: location of chromedriver.exe

#### Gecko Driver
_version:_ 0.23.0
_Environment:_
* GeckoWebDriver: location of geckodriver.exe

#### IE Driver
_version:_ 3.8.0.0
_Environment:_
* IEWebDriver: location of IEDriverServer.exe

## Node.js

_Version:_ 10.15.1<br/>
_Environment:_
* PATH: contains location of node.exe<br/>
* Gulp [05:37:52] CLI version 2.0.1<br/>
* Grunt grunt-cli v1.3.2<br/>
* Bower 1.8.8<br/>
* Yarn 1.13.0<br/>

> Note: You can install and use another version of Node.js on Microsoft-hosted agent pools using the [Node tool installer](https://docs.microsoft.com/vsts/pipelines/tasks/tool/node-js) task.
## npm

_Version:_ 6.7.0<br/>
_Environment:_
* PATH: contains location of npm.cmd

## Java Development Kit

#### 1.8.0_202
_Environment:_
* JAVA_HOME: location of JDK
* PATH: contains bin folder of JDK

#### 11.0.2
_Location:_ C:\Program Files\Java\zulu-11-azure-jdk_11.29.3-11.0.2-win_x64

## Ant

_Version:_ 1.10.5<br/>
_Environment:_
* PATH: contains location of ant.cmd
* ANT_HOME: location of ant.cmd
* COBERTURA_HOME: location of cobertura-2.1.1.jar

## Maven

_Version:_ 3.6.0<br/>
_Environment:_
* PATH: contains location of mvn.bat
* M2_HOME: Maven installation root

## Gradle

_Version:_ 5.2<br/>
_Environment:_
* PATH: contains location of gradle

## Cmake

_Version:_ 3.13.4<br/>
_Environment:_
* PATH: contains location of cmake.exe

## SQL Server Data Tier Application Framework (x64)

_Version:_ 15.0.4200.1<br/>

## .NET Core

The following runtimes and SDKs are installed:

_Environment:_
* PATH: contains location of dotnet.exe

_SDK:_
* 2.2.103 C:\Program Files\dotnet\sdk\2.2.103
* 2.2.102 C:\Program Files\dotnet\sdk\2.2.102
* 2.2.101 C:\Program Files\dotnet\sdk\2.2.101
* 2.2.100 C:\Program Files\dotnet\sdk\2.2.100
* 2.1.600-preview-009472 C:\Program Files\dotnet\sdk\2.1.600-preview-009472
* 2.1.503 C:\Program Files\dotnet\sdk\2.1.503
* 2.1.502 C:\Program Files\dotnet\sdk\2.1.502
* 2.1.500 C:\Program Files\dotnet\sdk\2.1.500
* 2.1.403 C:\Program Files\dotnet\sdk\2.1.403
* 2.1.402 C:\Program Files\dotnet\sdk\2.1.402
* 2.1.401 C:\Program Files\dotnet\sdk\2.1.401
* 2.1.400 C:\Program Files\dotnet\sdk\2.1.400
* 2.1.4 C:\Program Files\dotnet\sdk\2.1.4
* 2.1.302 C:\Program Files\dotnet\sdk\2.1.302
* 2.1.301 C:\Program Files\dotnet\sdk\2.1.301
* 2.1.300 C:\Program Files\dotnet\sdk\2.1.300
* 2.1.202 C:\Program Files\dotnet\sdk\2.1.202
* 2.1.201 C:\Program Files\dotnet\sdk\2.1.201
* 2.1.200 C:\Program Files\dotnet\sdk\2.1.200
* 2.1.2 C:\Program Files\dotnet\sdk\2.1.2
* 2.1.105 C:\Program Files\dotnet\sdk\2.1.105
* 2.1.104 C:\Program Files\dotnet\sdk\2.1.104
* 2.1.103 C:\Program Files\dotnet\sdk\2.1.103
* 2.1.102 C:\Program Files\dotnet\sdk\2.1.102
* 2.1.101 C:\Program Files\dotnet\sdk\2.1.101
* 2.1.100 C:\Program Files\dotnet\sdk\2.1.100
* 2.0.3 C:\Program Files\dotnet\sdk\2.0.3
* 2.0.0 C:\Program Files\dotnet\sdk\2.0.0
* 1.1.9 C:\Program Files\dotnet\sdk\1.1.9
* 1.1.8 C:\Program Files\dotnet\sdk\1.1.8
* 1.1.7 C:\Program Files\dotnet\sdk\1.1.7
* 1.1.5 C:\Program Files\dotnet\sdk\1.1.5
* 1.1.4 C:\Program Files\dotnet\sdk\1.1.4
* 1.1.11 C:\Program Files\dotnet\sdk\1.1.11
* 1.1.10 C:\Program Files\dotnet\sdk\1.1.10
* 1.0.4 C:\Program Files\dotnet\sdk\1.0.4
* 1.0.1 C:\Program Files\dotnet\sdk\1.0.1

_Runtime:_
* 2.2.1 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.2.1
* 2.2.0 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.2.0
* 2.1.7 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.7
* 2.1.6 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.6
* 2.1.5 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.5
* 2.1.4 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.4
* 2.1.3 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.3
* 2.1.2 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.2
* 2.1.1 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.1
* 2.1.0 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.0
* 2.0.9 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.9
* 2.0.7 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.7
* 2.0.6 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.6
* 2.0.5 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.5
* 2.0.3 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.3
* 2.0.0 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.0
* 1.1.9 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.9
* 1.1.8 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.8
* 1.1.7 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.7
* 1.1.6 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.6
* 1.1.5 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.5
* 1.1.4 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.4
* 1.1.2 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.2
* 1.1.10 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.10
* 1.1.1 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.1.1
* 1.0.9 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.9
* 1.0.8 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.8
* 1.0.7 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.7
* 1.0.5 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.5
* 1.0.4 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.4
* 1.0.13 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.13
* 1.0.12 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.12
* 1.0.11 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.11
* 1.0.10 C:\Program Files\dotnet\shared\Microsoft.NETCore.App\1.0.10

## Mysql

_Version:_ 5.7.21.0<br/>
_Environment:_
* PATH: contains location of mysql.exe

## MinGW

_Version:_ 8.1.0<br/>
_Environment:_
* PATH: contains location of the MinGW 'bin' directory

## TypeScript

_Version:_ Version 3.3.1<br/>

## Miniconda

_Version:_ conda 4.5.12<br/>
_Environment:_
* CONDA: contains location of the root of the Miniconda installation

## Azure CosmosDb Emulator

_Version:_ 2.1.3.0<br/>
_Location:_ C:\Program Files\Azure Cosmos DB Emulator\

## 7zip

_Version:_ 18.06<br/>

## Mercurial

_Version:_ <br/>

## jq

_Version:_ jq-1.5<br/>

## Inno Setup

_Version:_ 5.6.1<br/>

## Perl

_Version:_ v5.26.2<br/>

## GitVersion

_Version:_ 4.0.0.0<br/>

## OpenSSL

_Version:_ 1.1.1a at C:\Program Files\Git\usr\bin\openssl.exe<br/>
_Version:_ 1.1.1a at C:\Program Files\Git\mingw64\bin\openssl.exe<br/>
_Version:_ 1.0.2j at C:\Program Files (x86)\Subversion\bin\openssl.exe<br/>
_Version:_ 1.1.0i at C:\Strawberry\c\bin\openssl.exe<br/>
_Version:_ 1.1.1 at C:\Program Files\OpenSSL\bin\openssl.exe<br/>

## Cloud Foundry CLI

_Version:_ 6.42.0<br/>
28.518283
334
0.724217
yue_Hant
0.379907
b90db9e2dbeeacc3ba34c56bdc415586506f0801
616
md
Markdown
.deploy/kubernetes/README.md
sander2/btc-parachain
4e0283794bb40bae46327695741be2396ee22f35
[ "Apache-2.0" ]
null
null
null
.deploy/kubernetes/README.md
sander2/btc-parachain
4e0283794bb40bae46327695741be2396ee22f35
[ "Apache-2.0" ]
null
null
null
.deploy/kubernetes/README.md
sander2/btc-parachain
4e0283794bb40bae46327695741be2396ee22f35
[ "Apache-2.0" ]
null
null
null
# PolkaBTC Kubernetes Helm Chart

This Helm chart can be used to deploy a containerized PolkaBTC Parachain to a Kubernetes cluster.

## Install

To install the chart with the release name `my-release` into namespace `my-namespace` from within this directory:

```bash
helm install --namespace my-namespace --name my-release --values values.yaml ./
```

## Uninstall

To uninstall/delete the `my-release` deployment:

```bash
helm delete --namespace my-namespace my-release
```

## Upgrade

To upgrade the `my-release` deployment:

```bash
helm upgrade --namespace my-namespace --values values.yaml my-release ./
```
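The `--values values.yaml` flag in the commands above points at a file of value overrides for the chart. As an illustration of how such an override file is typically structured, here is a minimal sketch — every key and value below is hypothetical; the chart's own `values.yaml` is the authoritative reference for the keys it actually supports:

```yaml
# Hypothetical override file; consult the chart's values.yaml for real keys.
replicaCount: 1

image:
  repository: registry.example.com/btc-parachain   # assumed image location
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 9944   # assumed WebSocket RPC port
```

Overrides like these let the same chart be reused across environments without editing the templates themselves.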
22
113
0.74513
eng_Latn
0.94448
b90e02713caef36bdd77ebb08948ef18ea0995b0
2,967
md
Markdown
README.md
dangreenisrael/sentry-testkit
b876bb80b832af8b1df6ff0cea42c028721b96cd
[ "MIT" ]
null
null
null
README.md
dangreenisrael/sentry-testkit
b876bb80b832af8b1df6ff0cea42c028721b96cd
[ "MIT" ]
null
null
null
README.md
dangreenisrael/sentry-testkit
b876bb80b832af8b1df6ff0cea42c028721b96cd
[ "MIT" ]
null
null
null
<p align="center"> <img alt="sentry-testkit" src="./docs/logo/Sentry_github.svg" height="132"> </p>

[![npm version](https://badge.fury.io/js/sentry-testkit.svg)](https://badge.fury.io/js/sentry-testkit)
![GitHub](https://img.shields.io/github/license/mashape/apistatus.svg?style=popout)
![Hackage-Deps](https://img.shields.io/hackage-deps/v/lens.svg)
[![Build Status](https://travis-ci.org/wix/sentry-testkit.svg?branch=master)](https://travis-ci.org/wix/sentry-testkit)

Sentry is an open-source JavaScript SDK published by [Sentry](https://sentry.io/welcome/) that enables error tracking and helps developers monitor and fix crashes in real time.<br>
However, when building tests for your application, you want to assert that the right flow-tracking or error is being sent to *Sentry*, **but** without really sending it to *Sentry* servers. This way you won't swamp Sentry with false reports during test runs and other CI operations.

## Sentry Testkit - to the rescue

*Sentry Testkit* enables Sentry to work natively in your application. By overriding the default Sentry transport mechanism, the report is not really sent but rather logged locally into memory. The logged reports can then be fetched later for verification or any other use you may have in your local development/testing environment.

## Usage

### Installation

```
npm install sentry-testkit --save-dev
```

### Using in tests

```javascript
const sentryTestkit = require('sentry-testkit')

const {testkit, sentryTransport} = sentryTestkit()

// initialize your Sentry instance with sentryTransport
Sentry.init({
  dsn: 'some_dummy_dsn',
  transport: sentryTransport,
  //... other configurations
})

// then run any scenario that should call Sentry.captureException(...)

expect(testkit.reports()).toHaveLength(1)
const report = testkit.reports()[0]
expect(report).toHaveProperty(...)
```

## Yes! We Love [Puppeteer](https://pptr.dev/)

```javascript
const sentryTestkit = require('sentry-testkit')

const {testkit} = sentryTestkit()

testkit.puppeteer.startListening(page);

// Run any scenario that will call Sentry.captureException(...), for example:
await page.addScriptTag({ content: `throw new Error('An error');` });

expect(testkit.reports()).toHaveLength(1)
const report = testkit.reports()[0]
expect(report).toHaveProperty(...)

testkit.puppeteer.stopListening(page);
```

You may see more usage examples in the testing section of this repository as well.

## Test Kit API

See the full API description and documentation here: https://wix.github.io/sentry-testkit/

## What About Node.js?

**Of course!** `sentry-testkit` has full support for both `@sentry/browser` and `@sentry/node`, since they have the same API and lifecycle under the hood.

## Raven-Testkit

The good old legacy `raven-testkit` documentation can be found [here](LEGACY_API.md). It is still there to serve `Raven`, the legacy *Sentry* SDK for JavaScript/Node.js platforms.
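The transport-override idea this README describes is simple enough to sketch without Sentry at all: a capturing transport just stores whatever the SDK would have sent. Here is a stand-alone illustration of the mechanism — `makeTestTransport` and its shape are invented for this sketch and are not the sentry-testkit API:

```javascript
// A tiny in-memory "transport": instead of POSTing events to a server,
// it pushes them into an array that tests can inspect later.
function makeTestTransport() {
  const reports = [];
  return {
    // Called by the (hypothetical) SDK in place of a network send.
    send(event) {
      reports.push(event);
      return Promise.resolve({ status: 'skipped' });
    },
    // Lets a test fetch everything that "would have been sent".
    reports() {
      return reports;
    },
  };
}

// Simulate what an SDK would do when captureException is called.
const transport = makeTestTransport();
transport.send({ message: 'An error', level: 'error' });

console.log(transport.reports().length);     // prints 1
console.log(transport.reports()[0].message); // prints "An error"
```

Swapping the real transport for an in-memory one like this is why no false reports ever reach the Sentry servers during a test run.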
40.643836
359
0.751938
eng_Latn
0.920423
b90e7327adb1e82c6085a09cfe4201cfff5108c0
957
md
Markdown
README.md
zdmr/TDirectionView
df8627549bd886f00c14fccc009033267817bbf6
[ "MIT" ]
null
null
null
README.md
zdmr/TDirectionView
df8627549bd886f00c14fccc009033267817bbf6
[ "MIT" ]
null
null
null
README.md
zdmr/TDirectionView
df8627549bd886f00c14fccc009033267817bbf6
[ "MIT" ]
null
null
null
# TDirectionView

[![CI Status](https://img.shields.io/travis/zdmr/TDirectionView.svg?style=flat)](https://travis-ci.org/zdmr/TDirectionView)
[![Version](https://img.shields.io/cocoapods/v/TDirectionView.svg?style=flat)](https://cocoapods.org/pods/TDirectionView)
[![License](https://img.shields.io/cocoapods/l/TDirectionView.svg?style=flat)](https://cocoapods.org/pods/TDirectionView)
[![Platform](https://img.shields.io/cocoapods/p/TDirectionView.svg?style=flat)](https://cocoapods.org/pods/TDirectionView)

## Example

To run the example project, clone the repo, and run `pod install` from the Example directory first.

## Requirements

## Installation

TDirectionView is available through [CocoaPods](https://cocoapods.org). To install it, simply add the following line to your Podfile:

```ruby
pod 'TDirectionView'
```

## Author

zdmr, [email protected]

## License

TDirectionView is available under the MIT license. See the LICENSE file for more info.
31.9
123
0.763845
yue_Hant
0.41032
b90e7f1d248f75742876cee692cc69f260eaa5fb
30,655
md
Markdown
docs/visual-basic/programming-guide/concepts/async/index.md
eOkadas/docs.fr-fr
64202ad620f9bcd91f4360ec74aa6d86e1d4ae15
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/programming-guide/concepts/async/index.md
eOkadas/docs.fr-fr
64202ad620f9bcd91f4360ec74aa6d86e1d4ae15
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/programming-guide/concepts/async/index.md
eOkadas/docs.fr-fr
64202ad620f9bcd91f4360ec74aa6d86e1d4ae15
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Asynchronous programming with Async and Await (Visual Basic)
ms.date: 07/20/2015
ms.assetid: bd7e462b-583b-4395-9c36-45aa9e61072c
ms.openlocfilehash: 0d8810da424b0759dcfba882efe462514a14145a
ms.sourcegitcommit: b1cfd260928d464d91e20121f9bdba7611c94d71
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 07/02/2019
ms.locfileid: "67505954"
---
# <a name="asynchronous-programming-with-async-and-await-visual-basic"></a>Asynchronous programming with Async and Await (Visual Basic)

You can avoid performance bottlenecks and improve the overall responsiveness of your application by using asynchronous programming. However, traditional techniques for writing asynchronous applications can be complicated, making them difficult to write, debug, and maintain.

Visual Studio 2012 introduced a simplified approach, async programming, that takes advantage of the asynchronous support in the .NET Framework 4.5 and later versions as well as in the Windows Runtime. The compiler does the difficult work that the developer used to do, and your application retains a logical structure that resembles synchronous code. As a result, you get all the advantages of asynchronous programming with a fraction of the effort.

This topic gives an overview of when and how to use async programming and includes links to support topics that contain details and examples.

## <a name="BKMK_WhentoUseAsynchrony"></a> Async improves responsiveness

Asynchrony is essential for activities that are potentially blocking, such as when your application accesses the web. Access to a web resource is sometimes slow or delayed. If such an activity is blocked within a synchronous process, the entire application must wait.
In an asynchronous process, the application can continue with other work that doesn't depend on the web resource until the potentially blocking task finishes.

The following table shows typical areas where asynchronous programming improves responsiveness. The listed APIs from the .NET Framework 4.5 and the Windows Runtime contain methods that support async programming.

|Application area|Supporting APIs that contain async methods|
|----------------------|------------------------------------------------|
|Web access|<xref:System.Net.Http.HttpClient>, <xref:Windows.Web.Syndication.SyndicationClient>|
|Working with files|<xref:Windows.Storage.StorageFile>, <xref:System.IO.StreamWriter>, <xref:System.IO.StreamReader>, <xref:System.Xml.XmlReader>|
|Working with images|<xref:Windows.Media.Capture.MediaCapture>, <xref:Windows.Graphics.Imaging.BitmapEncoder>, <xref:Windows.Graphics.Imaging.BitmapDecoder>|
|WCF programming|[Synchronous and Asynchronous Operations](../../../../framework/wcf/synchronous-and-asynchronous-operations.md)|

Asynchrony proves especially valuable for applications that access the UI thread, because all UI-related activity usually shares one thread. If any process is blocked in a synchronous application, all are blocked. Your application stops responding, and you might conclude that it has failed when instead it's just waiting.

When you use asynchronous methods, the application continues to respond to the UI. You can resize or minimize a window, for example, or you can close the application if you don't want to wait for it to finish.

The async-based approach adds the equivalent of an automatic transmission to the list of options that you can choose from when designing asynchronous operations.
In other words, you get all the advantages of traditional asynchronous programming but with much less effort from the developer.

## <a name="BKMK_HowtoWriteanAsyncMethod"></a> Async methods are easier to write

The [Async](../../../../visual-basic/language-reference/modifiers/async.md) and [Await](../../../../visual-basic/language-reference/operators/await-operator.md) keywords in Visual Basic are the heart of async programming. By using those two keywords, you can use resources in the .NET Framework or the Windows Runtime to create an asynchronous method almost as easily as you create a synchronous method. Asynchronous methods that you define by using `Async` and `Await` are referred to as async methods.

The following example shows an async method. Almost everything in the code should look completely familiar to you. The comments call out the features that you add to create the asynchrony.

You can find a complete Windows Presentation Foundation (WPF) example file at the end of this topic, and you can download the sample from [Async Sample: Example from "Asynchronous Programming with Async and Await"](https://code.msdn.microsoft.com/Async-Sample-Example-from-9b9f505c).

```vb
' Three things to note in the signature:
'  - The method has an Async modifier.
'  - The return type is Task or Task(Of T). (See "Return Types" section.)
'    Here, it is Task(Of Integer) because the return statement returns an integer.
'  - The method name ends in "Async."
Async Function AccessTheWebAsync() As Task(Of Integer)

    ' You need to add a reference to System.Net.Http to declare client.
    Dim client As HttpClient = New HttpClient()

    ' GetStringAsync returns a Task(Of String). That means that when you await the
    ' task you'll get a string (urlContents).
    Dim getStringTask As Task(Of String) = client.GetStringAsync("https://msdn.microsoft.com")

    ' You can do work here that doesn't rely on the string from GetStringAsync.
    DoIndependentWork()

    ' The Await operator suspends AccessTheWebAsync.
    '  - AccessTheWebAsync can't continue until getStringTask is complete.
    '  - Meanwhile, control returns to the caller of AccessTheWebAsync.
    '  - Control resumes here when getStringTask is complete.
    '  - The Await operator then retrieves the string result from getStringTask.
    Dim urlContents As String = Await getStringTask

    ' The return statement specifies an integer result.
    ' Any methods that are awaiting AccessTheWebAsync retrieve the length value.
    Return urlContents.Length

End Function
```

If `AccessTheWebAsync` doesn't have any work that it can do between calling `GetStringAsync` and awaiting its completion, you can simplify your code by calling and awaiting in the following single statement.

```vb
Dim urlContents As String = Await client.GetStringAsync()
```

The following characteristics summarize what makes the previous example an async method.

- The method signature includes an `Async` modifier.
- The name of an async method, by convention, ends with an "Async" suffix.
- The return type is one of the following types:
  - <xref:System.Threading.Tasks.Task%601> if your method has a return statement in which the operand has type TResult.
  - <xref:System.Threading.Tasks.Task> if your method has no return statement or has a return statement with no operand.
  - [Sub](../../../../visual-basic/programming-guide/language-features/procedures/sub-procedures.md) if you're writing an async event handler.

  For more information, see "Return Types and Parameters" later in this topic.

- The method usually includes at least one await expression, which marks a point where the method can't continue until the awaited asynchronous operation is complete. In the meantime, the method is suspended, and control returns to the method's caller.
The next section of this topic illustrates what happens at the suspension point.

In async methods, you use the provided keywords and types to indicate what you want to do, and the compiler does the rest, including keeping track of what must happen when control returns to an await point in a suspended method. Some routine processes, such as loops and exception handling, can be difficult to handle in traditional asynchronous code. In an async method, you write these elements much as you would in a synchronous solution, and the problem is solved.

For more information about asynchrony in previous versions of the .NET Framework, see [TPL and Traditional .NET Framework Asynchronous Programming](../../../../standard/parallel-programming/tpl-and-traditional-async-programming.md).

## <a name="BKMK_WhatHappensUnderstandinganAsyncMethod"></a> What happens in an async method

The most important thing to understand in asynchronous programming is how the control flow moves from method to method. The following diagram leads you through the process.

![Diagram that shows tracing an async program.](./media/index/navigation-trace-async-program.png)

The numbers in the diagram correspond to the following steps.

1. An event handler calls and awaits the `AccessTheWebAsync` async method.

2. `AccessTheWebAsync` creates an <xref:System.Net.Http.HttpClient> instance and calls the <xref:System.Net.Http.HttpClient.GetStringAsync%2A> asynchronous method to download the contents of a website as a string.

3. Something happens in `GetStringAsync` that suspends its progress. Perhaps it must wait for a website to download or some other blocking activity. To avoid blocking resources, `GetStringAsync` yields control to its caller, `AccessTheWebAsync`.
   `GetStringAsync` returns a <xref:System.Threading.Tasks.Task%601> where TResult is a string, and `AccessTheWebAsync` assigns the task to the `getStringTask` variable. The task represents the ongoing process for the call to `GetStringAsync`, with a commitment to produce an actual string value when the work is complete.

4. Because `getStringTask` hasn't been awaited yet, `AccessTheWebAsync` can continue with other work that doesn't depend on the final result from `GetStringAsync`. That work is represented by a call to the synchronous method `DoIndependentWork`.

5. `DoIndependentWork` is a synchronous method that does its work and returns to its caller.

6. `AccessTheWebAsync` has run out of work that it can do without a result from `getStringTask`. `AccessTheWebAsync` next wants to calculate and return the length of the downloaded string, but the method can't calculate that value until it has the string.

   Therefore, `AccessTheWebAsync` uses an await operator to suspend its progress and to yield control to the method that called `AccessTheWebAsync`. `AccessTheWebAsync` returns a `Task<int>` (`Task(Of Integer)` in Visual Basic) to the caller. The task represents a promise to produce an integer result that is the length of the downloaded string.

   > [!NOTE]
   > If `GetStringAsync` (and therefore `getStringTask`) completes before `AccessTheWebAsync` awaits it, control remains in `AccessTheWebAsync`. The expense of suspending and then returning to `AccessTheWebAsync` would be wasted if the called asynchronous process (`getStringTask`) has already completed and `AccessTheWebAsync` doesn't have to wait for the final result.

   Inside the caller (the event handler in this example), the processing pattern continues. The caller might do other work that doesn't depend on the result from `AccessTheWebAsync` before awaiting that result, or the caller might await immediately.
   The event handler is awaiting `AccessTheWebAsync`, and `AccessTheWebAsync` is awaiting `GetStringAsync`.

7. `GetStringAsync` completes and produces a string result. The string result isn't returned by the call to `GetStringAsync` in the way that you might expect. (Remember that the method already returned a task in step 3.) Instead, the string result is stored in the task that represents the completion of the method, `getStringTask`. The await operator retrieves the result from `getStringTask`. The assignment statement assigns the retrieved result to `urlContents`.

8. When `AccessTheWebAsync` has the string result, the method can calculate the length of the string. Then the work of `AccessTheWebAsync` is also complete, and the waiting event handler can resume. In the full example at the end of the topic, you can confirm that the event handler retrieves and prints the value of the length result.

If you are new to asynchronous programming, take a minute to consider the difference between synchronous and asynchronous behavior. A synchronous method returns when its work is complete (step 5), but an async method returns a task value when its work is suspended (steps 3 and 6). When the async method eventually completes its work, the task is marked as completed and the result, if any, is stored in the task.

For more information about control flow, see [Control Flow in Async Programs (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/control-flow-in-async-programs.md).

## <a name="BKMK_APIAsyncMethods"></a> API async methods

You might be wondering where to find methods such as `GetStringAsync` that support async programming.
The .NET Framework 4.5 or higher contains many members that work with `Async` and `Await`. You can recognize these members by the "Async" suffix attached to the member name and a return type of <xref:System.Threading.Tasks.Task> or <xref:System.Threading.Tasks.Task%601>. For example, the `System.IO.Stream` class contains methods such as <xref:System.IO.Stream.CopyToAsync%2A>, <xref:System.IO.Stream.ReadAsync%2A>, and <xref:System.IO.Stream.WriteAsync%2A> alongside the synchronous methods <xref:System.IO.Stream.CopyTo%2A>, <xref:System.IO.Stream.Read%2A>, and <xref:System.IO.Stream.Write%2A>.

The Windows Runtime also contains many methods that you can use with `Async` and `Await` in Windows apps. For more information and example methods, see [Call asynchronous APIs in C# or Visual Basic](/windows/uwp/threading-async/call-asynchronous-apis-in-csharp-or-visual-basic), [Asynchronous programming (Windows Runtime apps)](https://docs.microsoft.com/previous-versions/windows/apps/hh464924(v=win.10)), and [WhenAny: Bridging between the .NET Framework and the Windows Runtime](https://docs.microsoft.com/previous-versions/visualstudio/visual-studio-2013/jj635140(v=vs.120)).

## <a name="BKMK_Threads"></a> Threads

Async methods are intended to be non-blocking operations. An `Await` expression in an async method doesn't block the current thread while the awaited task is running. Instead, the expression signs up the rest of the method as a continuation and returns control to the caller of the async method.

The `Async` and `Await` keywords don't cause additional threads to be created. Async methods don't require multithreading, because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active.
You can use <xref:System.Threading.Tasks.Task.Run%2A?displayProperty=nameWithType> to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.

The async-based approach to asynchronous programming is preferable to existing approaches in almost every case. In particular, this approach is better than <xref:System.ComponentModel.BackgroundWorker> for I/O-bound operations because the code is simpler and you don't have to guard against race conditions. In combination with <xref:System.Threading.Tasks.Task.Run%2A?displayProperty=nameWithType>, async programming is better than <xref:System.ComponentModel.BackgroundWorker> for CPU-bound operations because async programming separates the coordination details of running your code from the work that `Task.Run` transfers to the thread pool.

## <a name="BKMK_AsyncandAwait"></a> Async and Await

If you specify that a method is an async method by using an [Async](../../../../visual-basic/language-reference/modifiers/async.md) modifier, you enable the following two capabilities.

- The marked async method can use [Await](../../../../visual-basic/language-reference/operators/await-operator.md) to designate suspension points. The await operator tells the compiler that the async method can't continue past that point until the awaited asynchronous process is complete. In the meantime, control returns to the caller of the async method.

  The suspension of an async method at an `Await` expression doesn't constitute an exit from the method, and `Finally` blocks don't run.

- The marked async method can itself be awaited by methods that call it.
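For readers who know JavaScript, the start-work/do-other-work/await pattern that the Visual Basic example in this topic uses maps directly onto `async`/`await` there as well. A minimal sketch follows — `fetchString` is a made-up stand-in for `GetStringAsync`, not a real API:

```javascript
// Stand-in for GetStringAsync: resolves with a string after a short delay.
function fetchString() {
  return new Promise(resolve => setTimeout(() => resolve("hello"), 10));
}

// Stand-in for the synchronous DoIndependentWork call.
function doIndependentWork() {
  return "independent work done";
}

// Mirrors AccessTheWebAsync: start the task, do unrelated work, then await.
async function accessTheWebAsync() {
  const stringPromise = fetchString(); // work starts; nothing is awaited yet

  doIndependentWork(); // runs while fetchString is still pending

  const contents = await stringPromise; // suspend here until the promise resolves
  return contents.length; // like returning urlContents.Length
}

accessTheWebAsync().then(len => console.log(len)); // prints 5
```

The suspension point behaves the same way as in the VB example: at `await`, control returns to the caller, and the rest of the function runs as a continuation once the promise settles.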
The async method typically contains one or more occurrences of an `Await` operator, but the absence of `Await` expressions doesn't cause a compiler error. If an async method doesn't use an `Await` operator to mark a suspension point, the method executes as a synchronous method does, despite the `Async` modifier. The compiler issues a warning for such methods.

`Async` and `Await` are contextual keywords. For more information and examples, see the following topics:

- [Async](../../../../visual-basic/language-reference/modifiers/async.md)
- [Await Operator](../../../../visual-basic/language-reference/operators/await-operator.md)

## <a name="BKMK_ReturnTypesandParameters"></a> Return types and parameters

In .NET Framework programming, an async method typically returns a <xref:System.Threading.Tasks.Task> or a <xref:System.Threading.Tasks.Task%601>. Inside an async method, an `Await` operator is applied to a task that's returned from a call to another async method.

You specify <xref:System.Threading.Tasks.Task%601> as the return type if the method contains a [Return](../../../../visual-basic/language-reference/statements/return-statement.md) statement that specifies an operand of type `TResult`. You use `Task` as the return type if the method has no return statement or has a return statement that doesn't return an operand.

The following example shows how you declare and call a method that returns a <xref:System.Threading.Tasks.Task%601> or a <xref:System.Threading.Tasks.Task>.

```vb
' Signature specifies Task(Of Integer)
Async Function TaskOfTResult_MethodAsync() As Task(Of Integer)

    Dim hours As Integer
    ' . . .
    ' Return statement specifies an integer result.
    Return hours
End Function

' Calls to TaskOfTResult_MethodAsync
Dim returnedTaskTResult As Task(Of Integer) = TaskOfTResult_MethodAsync()
Dim intResult As Integer = Await returnedTaskTResult

' or, in a single statement
Dim intResult As Integer = Await TaskOfTResult_MethodAsync()

' Signature specifies Task
Async Function Task_MethodAsync() As Task

    ' . . .
    ' The method has no return statement.
End Function

' Calls to Task_MethodAsync
Dim returnedTask As Task = Task_MethodAsync()
Await returnedTask

' or, in a single statement
Await Task_MethodAsync()
```

Each returned task represents ongoing work. A task encapsulates information about the state of the asynchronous process and, eventually, either the final result of the process or the exception that the process raises if it doesn't succeed.

An async method can also be a `Sub` method. This return type is used primarily to define event handlers, where a return type is required. Async event handlers often serve as the starting point for async programs. An async method that's a `Sub` procedure can't be awaited, and the caller can't catch any exceptions that the method throws.

An async method can't declare [ByRef](../../../../visual-basic/language-reference/modifiers/byref.md) parameters, but the method can call methods that have such parameters.

For more information and examples, see [Async Return Types (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/async-return-types.md). For more information about how to catch exceptions in async methods, see [Try...Catch...Finally Statement](../../../../visual-basic/language-reference/statements/try-catch-finally-statement.md).
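The `Task(Of Integer)` versus `Task` distinction above has a direct analogue in JavaScript promises: an async function either resolves with a value or with nothing. A small illustrative sketch (the function names are invented for this example):

```javascript
// Analog of Task(Of Integer): the async function resolves with a value.
async function taskOfTResultMethodAsync() {
  const hours = 24; // illustrative value
  return hours; // resolves the promise with an integer result
}

// Analog of Task: the async function resolves with no value.
async function taskMethodAsync() {
  // No return statement; the promise resolves with undefined.
}

async function main() {
  // Await the value-returning call, in a single statement.
  const intResult = await taskOfTResultMethodAsync();

  // Await the void-returning call; there is no result to capture.
  await taskMethodAsync();

  return intResult;
}

main().then(r => console.log(r)); // prints 24
```

As with tasks, each promise encapsulates either the eventual result or, on failure, the rejection reason that the caller can catch.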
Async APIs in Windows Runtime programming have one of the following return types, which are similar to tasks:

- <xref:Windows.Foundation.IAsyncOperation%601>, which corresponds to <xref:System.Threading.Tasks.Task%601>
- <xref:Windows.Foundation.IAsyncAction>, which corresponds to <xref:System.Threading.Tasks.Task>
- <xref:Windows.Foundation.IAsyncActionWithProgress%601>
- <xref:Windows.Foundation.IAsyncOperationWithProgress%602>

For more information and an example, see [Call asynchronous APIs in C# or Visual Basic](/windows/uwp/threading-async/call-asynchronous-apis-in-csharp-or-visual-basic).

## <a name="BKMK_NamingConvention"></a> Naming convention

By convention, you append "Async" to the names of methods that have an `Async` modifier. You can ignore the convention where an event, base class, or interface contract suggests a different name. For example, you shouldn't rename common event handlers, such as `Button1_Click`.

## <a name="BKMK_RelatedTopics"></a> Related topics and samples (Visual Studio)

|Title|Description|Sample|
|-----------|-----------------|------------|
|[Walkthrough: Accessing the Web by Using Async and Await (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/walkthrough-accessing-the-web-by-using-async-and-await.md)|Shows how to convert a synchronous WPF solution to an asynchronous WPF solution. The application downloads a series of websites.|[Async Sample: Accessing the Web Walkthrough](https://code.msdn.microsoft.com/Async-Sample-Accessing-the-9c10497f)|
|[How to: Extend the Async Walkthrough by Using Task.WhenAll (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/how-to-extend-the-async-walkthrough-by-using-task-whenall.md)|Adds <xref:System.Threading.Tasks.Task.WhenAll%2A?displayProperty=nameWithType> to the previous walkthrough.
The use of `WhenAll` starts all the downloads at the same time.||
|[How to: Make Multiple Web Requests in Parallel by Using Async and Await (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/how-to-make-multiple-web-requests-in-parallel-by-using-async-and-await.md)|Demonstrates how to start several tasks at the same time.|[Async Sample: Make Multiple Web Requests in Parallel](https://code.msdn.microsoft.com/Async-Make-Multiple-Web-49adb82e)|
|[Async Return Types (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/async-return-types.md)|Describes the types that async methods can return and explains when each type is appropriate.||
|[Control Flow in Async Programs (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/control-flow-in-async-programs.md)|Traces in detail the flow of control through a succession of await expressions in an asynchronous program.|[Async Sample: Control Flow in Async Programs](https://code.msdn.microsoft.com/Async-Sample-Control-Flow-5c804fc0)|
|[Fine-Tuning Your Async Application (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/fine-tuning-your-async-application.md)|Shows how to add the following functionality to your async solution:<br /><br /> - [Cancel an Async Task or a List of Tasks (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/cancel-an-async-task-or-a-list-of-tasks.md)<br />- [Cancel Async Tasks after a Period of Time (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/cancel-async-tasks-after-a-period-of-time.md)<br />- [Cancel Remaining Async Tasks after One Is Complete (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/cancel-remaining-async-tasks-after-one-is-complete.md)<br />- [Start Multiple Async Tasks and Process Them as They
terminées (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/start-multiple-async-tasks-and-process-them-as-they-complete.md)|[Exemple Async : Réglage de votre application](https://code.msdn.microsoft.com/Async-Fine-Tuning-Your-a676abea)| |[Gérer la réentrance dans Async Apps (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/handling-reentrancy-in-async-apps.md)|Montre comment traiter les cas où une opération asynchrone active redémarre pendant son exécution.|| |[WhenAny : transition entre .NET Framework et Windows Runtime](https://docs.microsoft.com/previous-versions/visualstudio/visual-studio-2013/jj635140(v=vs.120))|Montre comment établir un pont entre les types de tâches dans le .NET Framework et IAsyncOperations dans le Runtime Windows afin que vous puissiez utiliser <xref:System.Threading.Tasks.Task.WhenAny%2A> avec une méthode Windows Runtime.|[Exemple Async : transition entre .NET et Windows Runtime (AsTask et WhenAny)](https://docs.microsoft.com/previous-versions/visualstudio/visual-studio-2013/jj635140(v=vs.120))| |Annulation Async : transition entre .NET Framework et Windows Runtime|Montre comment établir un pont entre les types de tâches dans le .NET Framework et IAsyncOperations dans le Runtime Windows afin que vous puissiez utiliser <xref:System.Threading.CancellationTokenSource> avec une méthode Windows Runtime.|[Exemple Async : transition entre .NET et Windows Runtime (AsTask & Annulation)](https://code.msdn.microsoft.com/Async-Sample-Bridging-9479eca3)| |[Utiliser async pour l’accès aux fichiers (Visual Basic)](../../../../visual-basic/programming-guide/concepts/async/using-async-for-file-access.md)|Répertorie et explique les avantages de l'utilisation d'async et d'await pour accéder aux fichiers.|| |[Modèle asynchrone basé sur les tâches (TAP, Task-based Asynchronous Pattern)](../../../../standard/asynchronous-programming-patterns/task-based-asynchronous-pattern-tap.md)|Décrit un nouveau modèle pour 
le comportement asynchrone dans le .NET Framework. Le modèle est basé sur les types <xref:System.Threading.Tasks.Task> et <xref:System.Threading.Tasks.Task%601>.|| |[Vidéos Async sur Channel 9](https://channel9.msdn.com/search?term=async+&type=All)|Fournit des liens vers diverses vidéos sur la programmation asynchrone.|| ## <a name="BKMK_CompleteExample"></a> Exemple complet Le code suivant est le fichier MainWindow.xaml.vb de l’application WPF (Windows Presentation Foundation) traitée dans cette rubrique. Vous pouvez télécharger l’exemple à partir de [Exemple Async : exemple de « Programmation asynchrone avec Async et Await »](https://code.msdn.microsoft.com/Async-Sample-Example-from-9b9f505c). ```vb ' Add an Imports statement and a reference for System.Net.Http Imports System.Net.Http Class MainWindow ' Mark the event handler with async so you can use Await in it. Private Async Sub StartButton_Click(sender As Object, e As RoutedEventArgs) ' Call and await separately. 'Task<int> getLengthTask = AccessTheWebAsync(); '' You can do independent work here. 'int contentLength = await getLengthTask; Dim contentLength As Integer = Await AccessTheWebAsync() ResultsTextBox.Text &= String.Format(vbCrLf & "Length of the downloaded string: {0}." & vbCrLf, contentLength) End Sub ' Three things to note in the signature: ' - The method has an Async modifier. ' - The return type is Task or Task(Of T). (See "Return Types" section.) ' Here, it is Task(Of Integer) because the return statement returns an integer. ' - The method name ends in "Async." Async Function AccessTheWebAsync() As Task(Of Integer) ' You need to add a reference to System.Net.Http to declare client. Dim client As HttpClient = New HttpClient() ' GetStringAsync returns a Task(Of String). That means that when you await the ' task you'll get a string (urlContents). 
Dim getStringTask As Task(Of String) = client.GetStringAsync("https://msdn.microsoft.com") ' You can do work here that doesn't rely on the string from GetStringAsync. DoIndependentWork() ' The Await operator suspends AccessTheWebAsync. ' - AccessTheWebAsync can't continue until getStringTask is complete. ' - Meanwhile, control returns to the caller of AccessTheWebAsync. ' - Control resumes here when getStringTask is complete. ' - The Await operator then retrieves the string result from getStringTask. Dim urlContents As String = Await getStringTask ' The return statement specifies an integer result. ' Any methods that are awaiting AccessTheWebAsync retrieve the length value. Return urlContents.Length End Function Sub DoIndependentWork() ResultsTextBox.Text &= "Working . . . . . . ." & vbCrLf End Sub End Class ' Sample Output: ' Working . . . . . . . ' Length of the downloaded string: 41763. ``` ## <a name="see-also"></a>Voir aussi - [Await (opérateur)](../../../../visual-basic/language-reference/operators/await-operator.md) - [Async](../../../../visual-basic/language-reference/modifiers/async.md)
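The suspend-and-resume flow that `AccessTheWebAsync` demonstrates is not specific to Visual Basic. As a rough analogue, not part of the article (the function names and the fake 41763-character payload are mine), the same start-task / do-independent-work / await pattern in Python's asyncio looks like this:

```python
import asyncio

async def get_string():
    # Stand-in for HttpClient.GetStringAsync: pretend network I/O,
    # then return a string whose length we can measure.
    await asyncio.sleep(0.01)
    return "x" * 41763

async def access_the_web():
    # Start the download but don't await it yet (like getStringTask).
    get_string_task = asyncio.create_task(get_string())

    do_independent_work()  # work that doesn't need the result

    # Awaiting suspends this coroutine and returns control to its
    # caller until the task completes, just as Await does in VB.
    url_contents = await get_string_task
    return len(url_contents)

def do_independent_work():
    print("Working . . . . . . .")

length = asyncio.run(access_the_web())
print(f"Length of the downloaded string: {length}.")
```

The key point carried over from the VB sample is that creating the task starts the work immediately, while `await` only marks the point where the result is actually needed.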
92.893939
1,122
0.780003
fra_Latn
0.938018
b90e8b2831123a844719b1f95378f0ced05d8add
491
md
Markdown
content/aws/s3/cli/create_bucket.md
enricomarchesin/notes
da0f1db1bd9a5669f3cad6bf5201ebd5ef94fc7b
[ "CC0-1.0" ]
1
2018-01-09T19:06:03.000Z
2018-01-09T19:06:03.000Z
content/aws/s3/cli/create_bucket.md
enricomarchesin/notes
da0f1db1bd9a5669f3cad6bf5201ebd5ef94fc7b
[ "CC0-1.0" ]
null
null
null
content/aws/s3/cli/create_bucket.md
enricomarchesin/notes
da0f1db1bd9a5669f3cad6bf5201ebd5ef94fc7b
[ "CC0-1.0" ]
1
2020-08-28T11:03:18.000Z
2020-08-28T11:03:18.000Z
---
title: "Create Bucket"
author: "Chris Albon"
date: 2018-06-17T00:00:00-07:00
description: "How to create a bucket on AWS S3."
type: technical_note
draft: false
---

## Create Bucket

Make a bucket (`mb`) on AWS S3 called `kicks-pasta-steer`. The bucket name you choose must be _globally_ unique, meaning no other bucket in any AWS account anywhere in the world can be using that name.

{{< highlight markdown >}}
aws s3 mb s3://kicks-pasta-steer
{{< /highlight >}}

```
make_bucket: kicks-pasta-steer
```
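Because `aws s3 mb` fails when a name is taken or malformed, it can help to sanity-check a candidate name locally first. A minimal sketch, not from the note above (the function name is mine; the regex encodes AWS's published naming rules for general-purpose buckets: 3-63 characters of lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit, and not formatted like an IP address):

```python
import re

# Subset of AWS S3 bucket naming rules: 3-63 chars; lowercase letters,
# digits, dots, hyphens; must start and end with a letter or digit.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name: str) -> bool:
    """Cheap local check before calling `aws s3 mb`."""
    if not _BUCKET_RE.match(name):
        return False
    # Names formatted like IP addresses are not allowed.
    if re.match(r"^\d+\.\d+\.\d+\.\d+$", name):
        return False
    return True

print(looks_like_valid_bucket_name("kicks-pasta-steer"))  # True
print(looks_like_valid_bucket_name("Kicks_Pasta"))        # False: uppercase/underscore
```

Note this only checks the syntax; global uniqueness can only be verified against S3 itself.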
25.842105
185
0.716904
eng_Latn
0.939219
b90ff2241575b5776b25d85f77310ea6fb324e32
163
md
Markdown
content/model-types/emerging.md
thriveweb/glass
6e66d1882c2c5593ca8000fe8ee2981c2256b6e4
[ "MIT" ]
null
null
null
content/model-types/emerging.md
thriveweb/glass
6e66d1882c2c5593ca8000fe8ee2981c2256b6e4
[ "MIT" ]
1
2018-05-30T01:17:27.000Z
2018-05-30T01:17:27.000Z
content/model-types/emerging.md
thriveweb/glass
6e66d1882c2c5593ca8000fe8ee2981c2256b6e4
[ "MIT" ]
2
2019-07-18T02:37:06.000Z
2020-01-09T13:06:46.000Z
---
template: ModelsPage
title: Emerging
featuredImage: >-
  https://ucarecdn.com/a68b1a14-e66a-4fdd-a057-1badb62f6d37/-/crop/828x426/0,0/-/preview/
order: 6
---
18.111111
89
0.723926
yue_Hant
0.222508
b9103b946d6e0f819b24682d95e79e6099e1a5cf
6,486
md
Markdown
tutorials/abap-environment-abapgit/abap-environment-abapgit.md
Kavya-Gowda/Tutorials
513040c665719f3a60fd0032263f0c711967125f
[ "Apache-2.0" ]
1
2018-02-14T12:03:33.000Z
2018-02-14T12:03:33.000Z
tutorials/abap-environment-abapgit/abap-environment-abapgit.md
Kavya-Gowda/Tutorials
513040c665719f3a60fd0032263f0c711967125f
[ "Apache-2.0" ]
null
null
null
tutorials/abap-environment-abapgit/abap-environment-abapgit.md
Kavya-Gowda/Tutorials
513040c665719f3a60fd0032263f0c711967125f
[ "Apache-2.0" ]
null
null
null
---
auto_validation: true
title: Use abapGit to Transform ABAP Source Code to the Cloud
description: Transform ABAP source code from an on-premise SAP system to a SAP Cloud Platform ABAP Environment instance.
primary_tag: products>sap-cloud-platform--abap-environment
tags: [ tutorial>beginner, topic>abap-development, products>sap-cloud-platform ]
time: 15
author_name: Niloofar Naseri
author_profile: https://github.com/niloofar-naseri
---

## Prerequisites
- `github.com` or similar account
- SAP Cloud Platform ABAP Environment system and user with developer role
- `on-premise` system with user and required root CA of `Git` server (STRUST)
- Download Eclipse Photon or Oxygen and install ABAP Development Tools (ADT). See <https://tools.hana.ondemand.com/#abap>.

## Details
### You will learn
- How to create content in an `on-premise` system and push it to a `Git` repository
- How to import the content from the `Git` repository into a SAP Cloud Platform ABAP Environment instance

Please be aware that `abapGit` (see `https://docs.abapgit.org` for detailed documentation) is an open source project owned by the community. Therefore, SAP does not provide support for `abapGit` in general. However, SAP takes care of the Git integration inside SAP Cloud Platform ABAP Environment.

---

[ACCORDION-BEGIN [Step 1: ](Create a Git repository)]

1. Log in to your `github.com` account.
2. Create a new repository by clicking the **New repository** button.

    ![new repository](github1.png)
3. Enter a name and description, check the checkbox **Initialize this repository with a README**, and click **Create repository**.

    ![create repository](github2.png)
4. Your repository is now all set up.

    ![create repository](github3.png)

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 2: ](Install and set up abapGit)]

Next, you need to install `abapGit` on your `on-premise` system.

> **Important!** Arrange with your system administrator before you install `zabapgit`.

1. Copy the content of the latest build of the program `zabapgit`, which you will find in the `abapGit` repository `https://github.com/larshp/abapGit`.
2. Open the `on-premise` system of your choice, create a new program, and paste the content from step 1 into it.
3. Activate and execute the program.
4. If you have installed `abapGit` before, go to SE38, search for the **ZABAPGIT** program, and press **Execute**.

    ![search program](abapgit1.png)
5. Now `abapGit` is installed and open.

    ![execute program](abapgit2.png)

You can find all installation information under `https://github.com/larshp/abapGit` > **Documentation/Guides**.

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 3: ](Push ABAP source from on-premise to Git repository)]

> **Important!** Log on with language `EN` to your `on-premise` system. SAP Cloud Platform ABAP Environment supports only `EN` at the moment; otherwise you'll get problems during import.

1. Click **Clone or download** and copy the URL of your repository.

    ![clone](abapgit3.png)
2. Go into transaction `ZABAPGIT` and press the **+ Online** button.

    ![add online](abapgit4.png)
3. Paste the repository URL and click **Create Package**.

    ![repository URL](abapgit5.png)
4. Enter a name and description and LOCAL as **Software Component**, and click **Continue**.

    ![continue](abapgit6.png)
5. Click **OK**.

    ![create package](abapgit7.png)
6. You will see the cloned `Git` repository in `abapGit`.

    ![ABAPGIT](abapgit8.png)

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 4: ](Add ABAP development objects)]

1. Open your `on-premise` system in ADT in Eclipse and find the package you created in the last step.
2. Add ABAP development objects to your package (for example, an ABAP class).

    ![development objects](abapgit9.png)

> **Important!** Unsupported ABAP object types will be ignored during import.

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 5: ](Stage and commit developed objects)]

1. Go back to the `abapGit` UI and click **Refresh** to see all development objects that you created in Eclipse.

    ![refresh](abapgit10.png)
2. Press **Stage**.

    ![stage](abapgit11.png)
3. Select single objects to add, or **Add all and commit**.

    ![add all](abapgit12.png)
4. Enter a **committer name**, **committer e-mail**, and a **comment**, and press **Commit**.

    ![commit](abapgit13.png)
5. You will be prompted with a credentials popup. Enter your `Git` repository server credentials and click **Execute**.

    ![enter credentials](abapgit14.png)
6. After everything went well, you can see the pushed ABAP objects in your `Git` repository.

    ![repository updated](abapgit15.png)

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 6: ](Install abapGit Eclipse plugin)]

1. Open your Eclipse with installed ADT.
2. In Eclipse, choose **Help** > **Install New Software** in the menu bar.

    ![eclipse](eclipse1.png)
3. Add the URL `http://eclipse.abapgit.org/updatesite/` and press enter to display the available features. Select **`abapGit` for ABAP Development Tools (ADT)** and install the plugin.

    ![enter URL](eclipse2.png)

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 7: ](Open abapGit repositories)]

1. Select your cloud project system in the **Project Explorer** and open the `abapGit` repositories view by opening **Window** > **Show View** > **Other ...**.

    ![repository](eclipse6.png)
2. Expand the category **ABAP**, select **`abapGit Repositories`**, and click **Open**.

    ![expand ABAP](eclipse7.png)

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 8: ](Clone Git repository into SAP Cloud Platform ABAP Environment)]

1. Click the clone button (green + button) in the `abapGit` repositories view.

    ![clone button](clone1.png)
2. Enter your `Git` repository URL and press **Next**.

    ![enter repository](clone2.png)
3. Select a **Branch** and a **Package** where your `Git` repository should be cloned. (If you have no packages, you need to create a new one first.) Click **Next**.

    ![choose package](clone3.png)
4. Select a **Transport Request** and click **Finish**.

    ![finish](clone4.png)
5. Your imported sources are now available under your package.

    ![end](clone5.png)

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 9: ](Test yourself)]

[VALIDATE_1]
[ACCORDION-END]

---
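Under the hood, the abapGit stage-and-commit of Step 5 maps onto ordinary git operations on the serialized ABAP objects. As a rough illustration only, not part of the tutorial (the repository contents, file name, and committer identity here are made up), the equivalent plain-git sequence driven from Python would be:

```python
import subprocess
import tempfile
from pathlib import Path

def run(args, cwd):
    # Thin wrapper so each git step is explicit and checked for errors.
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = Path(tempfile.mkdtemp())  # stands in for the cloned package
run(["git", "init"], repo)

# "Add ABAP development objects": abapGit serializes them as files.
(repo / "zcl_my_class.clas.abap").write_text(
    "CLASS zcl_my_class DEFINITION PUBLIC.\nENDCLASS.\n")

# Step 5 equivalents: Stage, then Commit with committer name/e-mail.
run(["git", "add", "-A"], repo)
run(["git", "-c", "user.name=Committer Name",
     "-c", "user.email=committer@example.com",
     "commit", "-m", "Add ZCL_MY_CLASS"], repo)

print(run(["git", "log", "--oneline"], repo).strip())
```

The `-c user.name=... -c user.email=...` flags play the role of the committer-name and committer-e-mail fields in the abapGit commit popup.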
31.950739
299
0.706907
eng_Latn
0.888595
b910b0dc1050a65bc928695940746bf44dbe747d
17,300
md
Markdown
articles/media-services/video-indexer/release-notes.md
silvercr/azure-docs.es-es
a40a316665a10e4008b60dabd50cbb3ec86e9c1d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/video-indexer/release-notes.md
silvercr/azure-docs.es-es
a40a316665a10e4008b60dabd50cbb3ec86e9c1d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/video-indexer/release-notes.md
silvercr/azure-docs.es-es
a40a316665a10e4008b60dabd50cbb3ec86e9c1d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure Media Services Video Indexer release notes | Microsoft Docs
description: To stay up to date with the most recent developments, this article provides you with the latest updates on Azure Media Services Video Indexer.
services: media-services
documentationcenter: ''
author: Juliako
manager: femila
editor: ''
ms.service: media-services
ms.subservice: video-indexer
ms.workload: na
ms.topic: article
ms.date: 06/02/2020
ms.author: juliako
ms.openlocfilehash: 5bd4c9aa3fde9e3fa596ce5a18b892edfab60af5
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/02/2020
ms.locfileid: "84325072"
---

# <a name="azure-media-services-video-indexer-release-notes"></a>Azure Media Services Video Indexer release notes

>Get notified about when to revisit this page for updates by copying and pasting this URL (`https://docs.microsoft.com/api/search/rss?search=%22Azure+Media+Services+Video+Indexer+release+notes%22&locale=en-us`) into your RSS feed reader.

To stay up to date with the most recent developments, this article provides information about:

* The latest releases
* Known issues
* Bug fixes
* Deprecated functionality

## <a name="may-2020"></a>May 2020

### <a name="video-indexer-deployed-in-the-east-us"></a>Video Indexer deployed in the East US

You can now create a paid Video Indexer account in the East US region.

### <a name="video-indexer-url"></a>Video Indexer URL

Video Indexer regional endpoints were unified to start with www only. No action is required. From now on, you reach www.videoindexer.ai both for embedding widgets and for signing in to the Video Indexer web applications. Also, wus.videoindexer.ai will redirect to www. More information is available in [Embed Video Indexer widgets in your apps](video-indexer-embed-widgets.md).

## <a name="april-2020"></a>April 2020

### <a name="new-widget-parameters-capabilities"></a>New widget parameters capabilities

The **Insights** widget includes the new `language` and `control` parameters. The **Player** widget has a new `locale` parameter. The `locale` and `language` parameters control the player's language. For more information, see the [widget types](video-indexer-embed-widgets.md#widget-types) section.

### <a name="new-player-skin"></a>New player skin

A new player skin launched with an updated design.

### <a name="prepare-for-upcoming-changes"></a>Prepare for upcoming changes

* Today, the following APIs return an account object:

    * [Create-Paid-Account](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Create-Paid-Account)
    * [Get-Account](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Get-Account)
    * [Get-Accounts-Authorization](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Get-Accounts-Authorization)
    * [Get-Accounts-With-Token](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Get-Accounts-With-Token)

    The account object has a `Url` field that points to the location of the [Video Indexer website](https://www.videoindexer.ai/). For paid accounts, the `Url` field currently points to an internal URL instead of the public website. In the coming weeks we will change this and return the [Video Indexer website](https://www.videoindexer.ai/) URL for all accounts (trial and paid). Do not use the internal URLs; you should use the [Video Indexer public APIs](https://api-portal.videoindexer.ai/).
* If you are embedding Video Indexer URLs in your applications and they point neither to the [Video Indexer website](https://www.videoindexer.ai/) nor to the Video Indexer API endpoint (`https://api.videoindexer.ai`), but to a regional endpoint instead (for example, `https://wus2.videoindexer.ai`), regenerate the URLs. To do so:

    * Replace the URL with one that points to the Video Indexer widget APIs (for example, the [Insights widget](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Get-Video-Insights-Widget)), or
    * Use the Video Indexer website to generate a new embedded URL: press **Play** to get to the video's page, click the **&lt;/&gt; Embed** button, and copy the URL into your application.

    Regional URLs are not supported and will be blocked in the coming weeks.

## <a name="january-2020"></a>January 2020

### <a name="custom-language-support-for-additional-languages"></a>Custom language support for additional languages

Video Indexer now supports custom language models for `ar-SY`, `en-UK`, and `en-AU` (API only).

### <a name="delete-account-timeframe-action-update"></a>Delete account timeframe action update

The delete account action now deletes the account within 90 days instead of 48 hours.

### <a name="new-video-indexer-github-repository"></a>New Video Indexer GitHub repository

A new Video Indexer GitHub repository with different projects, getting-started guides, and code samples is now available: https://github.com/Azure-Samples/media-services-video-indexer

### <a name="swagger-update"></a>Swagger update

Video Indexer unified **authentications** and **operations** into a single [Video Indexer OpenAPI specification (swagger)](https://api-portal.videoindexer.ai/docs/services/Operations/export?DocumentFormat=OpenApiJson). Developers can find the APIs in the [Video Indexer developer portal](https://api-portal.videoindexer.ai/).

## <a name="december-2019"></a>December 2019

### <a name="update-transcript-with-the-new-api"></a>Update transcript with the new API

Update a specific section of the transcript using the [Update-Video-Index](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Update-Video-Index?&pattern=update) API.

### <a name="fix-account-configuration-from-the-video-indexer-portal"></a>Fix account configuration from the Video Indexer portal

You can now update the Media Services connection configuration and use self-help to resolve issues such as:

* Incorrect Azure Media Services resource
* Password changes
* Media Services resources moved between subscriptions

To fix the account configuration, in the Video Indexer portal go to Settings > Account tab (as an account owner).

### <a name="configure-the-custom-vision-account"></a>Configure the Custom Vision account

Configure the Custom Vision account on paid accounts using the Video Indexer portal (previously this was only supported via the API). To do so, sign in to the Video Indexer portal and choose Model Customization > Animated characters > Configure.

### <a name="scenes-shots-and-keyframes--now-in-one-insight-pane"></a>Scenes, shots, and keyframes -- now in one insight pane

Scenes, shots, and keyframes are now merged into one insight pane for easier consumption and navigation. When you select a scene, you can see which shots and keyframes it contains.

### <a name="notification-about-a-long-video-name"></a>Notification about a long video name

When a video name is longer than 80 characters, Video Indexer shows a descriptive error on upload.

### <a name="streaming-endpoint-is-disabled-notification"></a>Streaming endpoint is disabled notification

When the streaming endpoint is disabled, Video Indexer shows a descriptive error on the player page.

### <a name="error-handling-improvement"></a>Error handling improvement

If a video is being actively indexed, status code 409 is now returned from the [Re-Index Video](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Re-Index-Video?https://api-portal.videoindexer.ai/docs/services/Operations/operations/Re-Index-Video?) and [Update Video Index](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Update-Video-Index?) APIs, to prevent accidentally overriding the in-flight re-index changes.

## <a name="november-2019"></a>November 2019

* Korean custom language model support

    Video Indexer now supports custom language models in Korean (`ko-KR`) in both the API and the portal.

* New languages supported for speech-to-text (STT)

    The Video Indexer APIs now support STT in Arabic Levantine (ar-SY), English UK dialect (en-GB), and English Australian dialect (en-AU).

    For video upload, we replaced zh-HANS with zh-CN; both are supported, but zh-CN is recommended and more accurate.

## <a name="october-2019"></a>October 2019

* Animated character search in the gallery

    When indexing animated characters, you can now search for them in the account's video gallery. For more information, see [Animated character recognition](animated-characters-recognition.md).

## <a name="september-2019"></a>September 2019

Several advancements were announced at IBC 2019:

* Animated character recognition (public preview)

    The ability to detect, group, and recognize characters in animated content, through integration with Custom Vision. For more information, see [Animated character detection](animated-characters-recognition.md).

* Multi-language identification (public preview)

    Detect segments in multiple languages in the audio track and create a multilingual transcript from them. Initial support: English, Spanish, German, and French. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).

* Named entity extraction for people and locations

    Extracts brands, locations, and people from spoken language and visual text via natural language processing (NLP).

* Editorial shot type classification

    Tagging shots with editorial types such as close up, medium shot, two shot, indoor, outdoor, and so on. For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).

* Topic inferencing enhancement -- now covering level 2

    The topic inferencing model now supports deeper granularity of the IPTC taxonomy.

Read all the details in [Azure Media Services new AI-powered innovation](https://azure.microsoft.com/blog/azure-media-services-new-ai-powered-innovation/).

## <a name="august-2019"></a>August 2019

### <a name="video-indexer-deployed-in-uk-south"></a>Video Indexer deployed in UK South

You can now create a paid Video Indexer account in the UK South region.

### <a name="new-editorial-shot-type-insights-available"></a>New editorial shot type insights available

New tags added to video shots provide editorial "shot types" to identify them with the editorial phrases commonly used in the content-creation workflow, such as extreme closeup, closeup, wide, medium, two shot, outdoor, indoor, left face, and right face (available in the JSON).

### <a name="new-people-and-locations-entities-extraction-available"></a>New people and locations entities extraction available

Video Indexer identifies named locations and people via natural language processing (NLP) from the video's OCR and transcription. Video Indexer uses a machine learning algorithm to recognize when specific locations (for example, the Eiffel Tower) or people (for example, John Doe) are being mentioned in a video.

### <a name="keyframes-extraction-in-native-resolution"></a>Keyframes extraction in native resolution

Keyframes extracted by Video Indexer are available in the original resolution of the video.

### <a name="ga-for-training-custom-face-models-from-images"></a>GA for training custom face models from images

Training faces from images moved from preview mode to GA (available via the API and in the portal).

> [!NOTE]
> There is no pricing impact related to the preview-to-GA transition.

### <a name="hide-gallery-toggle-option"></a>Hide gallery toggle option

Users can choose to hide the gallery tab in the portal (similar to hiding the samples tab).

### <a name="maximum-url-size-increased"></a>Maximum URL size increased

Support for a URL query string of 4096 characters (instead of 2048) when indexing a video.

### <a name="support-for-multi-lingual-projects"></a>Support for multi-lingual projects

Projects can now be created based on videos indexed in different languages (API only).

## <a name="july-2019"></a>July 2019

### <a name="editor-as-a-widget"></a>Editor as a widget

The Video Indexer AI editor is now available as a widget to be embedded in customer applications.

### <a name="update-custom-language-model-from-closed-caption-file-from-the-portal"></a>Update custom language model from closed caption file from the portal

Customers can provide VTT, SRT, and TTML file formats as input for language models in the customization page of the portal.

## <a name="june-2019"></a>June 2019

### <a name="video-indexer-deployed-to-japan-east"></a>Video Indexer deployed to Japan East

You can now create a paid Video Indexer account in the Japan East region.

### <a name="create-and-repair-account-api-preview"></a>Create and repair account API (Preview)

Added a new API that enables you to [update the Azure Media Services instance endpoint or key](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Update-Paid-Account-Azure-Media-Services?&groupBy=tag).

### <a name="improve-error-handling-on-upload"></a>Improve error handling on upload

A descriptive message is returned in case of misconfiguration of the underlying Azure Media Services account.

### <a name="player-timeline-keyframes-preview"></a>Player timeline keyframes preview

You can now see an image preview for each time point on the player's timeline.

### <a name="editor-semi-select"></a>Editor semi-select

You can now see a preview of all the insights that are selected as a result of choosing a specific timeframe in the editor.

## <a name="may-2019"></a>May 2019

### <a name="update-custom-language-model-from-closed-caption-file"></a>Update custom language model from closed caption file

The [create custom language model](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Create-Language-Model?&groupBy=tag) and [update custom language models](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Update-Language-Model?&groupBy=tag) APIs support VTT, SRT, and TTML file formats as input for language models.

When calling the [update video transcript API](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Update-Video-Transcript?&pattern=transcript), the transcript is added automatically. The training model associated with the video is also updated automatically. For information on how to customize and train your language models, see [Customize a Language model with Video Indexer](customize-language-model-overview.md).

### <a name="new-download-transcript-formats--txt-and-csv"></a>New download transcript formats -- TXT and CSV

In addition to the closed-captioning formats already supported (SRT, VTT, and TTML), Video Indexer now supports downloading the transcript in TXT and CSV formats.

## <a name="next-steps"></a>Next steps

[Overview](video-indexer-overview.md)
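The April 2020 widget parameters (`language`, `locale`) are passed on the embed URL's query string. As a hypothetical sketch of assembling such an embed URL, the exact path segments and the helper name below are illustrative assumptions rather than something taken from the release notes:

```python
from urllib.parse import urlencode

# Assumed widget base path; only the www.videoindexer.ai host is
# stated in the release notes above.
WIDGET_BASE = "https://www.videoindexer.ai/embed/insights"

def insights_widget_url(account_id, video_id, language=None, locale=None):
    """Build an Insights-widget embed URL with the new parameters."""
    params = {}
    if language:
        params["language"] = language   # insights data language
    if locale:
        params["locale"] = locale       # player UI language
    query = ("?" + urlencode(params)) if params else ""
    return f"{WIDGET_BASE}/{account_id}/{video_id}/{query}"

print(insights_widget_url("my-account-id", "my-video-id",
                          language="en-US", locale="en"))
```

Consult the [widget types](video-indexer-embed-widgets.md#widget-types) documentation for the authoritative parameter list and URL shape.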
66.283525
547
0.782428
spa_Latn
0.95302
b910c812d69831d1475b9ea84bb6d38777cfb91d
929
md
Markdown
README.md
agarwalyeshu/node-okex-ws
15fd169387f9d61187e2c13ee39b4835d9a02250
[ "MIT" ]
6
2018-05-16T12:18:50.000Z
2021-12-12T06:01:12.000Z
README.md
agarwalyeshu/node-okex-ws
15fd169387f9d61187e2c13ee39b4835d9a02250
[ "MIT" ]
2
2018-10-30T19:19:23.000Z
2020-01-22T00:32:39.000Z
README.md
agarwalyeshu/node-okex-ws
15fd169387f9d61187e2c13ee39b4835d9a02250
[ "MIT" ]
3
2018-07-02T06:08:55.000Z
2020-07-15T11:14:52.000Z
# node-okex-ws-spot

Small library to get updates from OKEX spot web sockets.

# Install:

```
npm install --save node-okex-ws-spot
```

# Usage:

```javascript
const Okex = require('node-okex-ws-spot');
const okexSocket = new Okex();

var pairs = ['BCH/BTC', 'LTC/BTC', 'LTC/USDT', 'LTC/ETH'];

// Subscribe pairs ticker
okexSocket.addSubscriptionTicker(pairs);

// Subscribe pairs deals
okexSocket.addSubscriptionDeals(pairs);

// Subscribe pairs depth
okexSocket.addSubscriptionDepth(pairs);

// Subscribe pairs K-line
// K-line time period, such as 1min, 3min, 5min, 15min, 30min, 1hour, 2hour, 4hour, 6hour, 12hour, day, 3day, week
okexSocket.addSubscriptionKline(pairs, '30min');

// To terminate the socket and all the subscriptions
okexSocket.terminate();

okexSocket.onMessage(data => {
  console.log(data[0]['data']);
});
```

Take a look at the examples.

# Todo

Add authenticated endpoints.

# Disclaimer

Use it at your own risk.
20.195652
113
0.726588
eng_Latn
0.672672
b910e2f324193dc72912904025294572c9989c5d
27
md
Markdown
content/numeros/1983/_index.md
carlospeix/revista-pelo
4911c15c7df6cd258c2edef0931072d160dd23dc
[ "MIT" ]
null
null
null
content/numeros/1983/_index.md
carlospeix/revista-pelo
4911c15c7df6cd258c2edef0931072d160dd23dc
[ "MIT" ]
null
null
null
content/numeros/1983/_index.md
carlospeix/revista-pelo
4911c15c7df6cd258c2edef0931072d160dd23dc
[ "MIT" ]
null
null
null
+++
title = "Año 1983"
+++
6.75
18
0.444444
eng_Latn
0.771182
b9114181bfa56fa0d8de57b72975dcc6f1b92185
11,211
md
Markdown
blog/_posts/2019-10-28-build-cache-spark-joy.md
kensipe/tilt.build
7dc2276db7dd07f6468aca0b2c4c6f558484cfa7
[ "Apache-2.0" ]
11
2020-05-30T16:53:06.000Z
2022-02-16T00:22:24.000Z
blog/_posts/2019-10-28-build-cache-spark-joy.md
kensipe/tilt.build
7dc2276db7dd07f6468aca0b2c4c6f558484cfa7
[ "Apache-2.0" ]
169
2020-05-18T16:59:41.000Z
2022-03-25T14:16:27.000Z
blog/_posts/2019-10-28-build-cache-spark-joy.md
kensipe/tilt.build
7dc2276db7dd07f6468aca0b2c4c6f558484cfa7
[ "Apache-2.0" ]
39
2020-06-20T22:20:58.000Z
2022-03-03T07:32:49.000Z
---
slug: build-cache-spark-joy
date: 2019-10-28
author: maia
layout: blog
title: "Does This Build Cache Spark Joy?"
subtitle: "Pruning Away your Docker Disk Space Woes"
image: "marie-kondo-containers.jpg"
image_needs_slug: true
image_caption: 'Do you really need all those old containers? (Credit: Netflix, from "Tidying Up with Marie Kondo", 2019)'
tags:
  - docker
  - kubernetes
  - tilt
  - containers
  - debugging
keywords:
  - kubernetes
  - disk usage
  - docker
  - docker prune
  - tilt
  - orchestration
---

My favorite class of bugs is the one that users run into when they’re using your product _too much_. If you’ve been using Tilt for a while and so Tilt has been building lots of Docker images for you and it’s starting to eat up your disk space, it can be _super_ frustrating, of course---but when a bunch of people started [reporting this problem](https://github.com/windmilleng/tilt/issues/2102), I’ll admit that I was a little excited.

![A user on Slack: "I'm not sure if this is a Tilt problem, but I haven't experienced it before switching to Tilt. Several times per day during development my pods get evicted due to DiskPressure. I'm running Kubernetes with Minikube and 20 GB space allocated. If I do docker system prune it removes about 5 GB of build cache and DiskPressure goes away. Has anyone else experienced this issue?"](/assets/images/build-cache-spark-joy/disk-space-report-1.png)

![A user on Slack: "We use Docker Desktop with Tilt and eventually it eats up the local Docker system storage (50gb) and I have to do a 'docker system prune -a' to get Docker Desktop's Kubernetes cluster working again. I'm assuming it's because Tilt is building more and more images every time a change occurs. What's the best practice to clean up these or prevent this from happening?"](/assets/images/build-cache-spark-joy/disk-space-report-2.png)

![A user on Slack reporting an error that reads: "Build Failed: ImageBuild: failed to solve with frontend dockerfile.v0: failed to build LLB: failed to copy files: copy file range failed: no space left on device"](/assets/images/build-cache-spark-joy/disk-space-report-3.png)

If you’re experiencing Docker disk space woes (whether you’re developing on Tilt or not), you’re not alone. This post digs into the signs and causes of disk space issues, tells you how to fix them yourself, and describes Tilt’s new way of handling these problems.

## Dude, where’s my disk space?

### How to tell you're in storage trouble

How do you know that you’re running into disk space trouble? The surest sign is that your Docker daemon is throwing errors of the form:

> No space left on device

This error can happen in the course of many different operations, but generally it means the same thing: your Docker daemon doesn’t have enough room for all the junk that’s on it.

The more opaque form of this error is when you're running MacOS and your _local Kubernetes cluster_ (e.g. Kubernetes for Docker for Mac, or Minikube, if you’re also using the Minikube Docker instance) starts throwing “DiskPressure” errors. Recall that local k8s clusters are (generally) single-node clusters, and on MacOS, your nodes are all VMs; thus, all the k8s stuff is happening on a single VM. Funnily, all your Docker storage is _also_ on that same VM; thus, if you have too much junk in your Docker storage, it takes away space that k8s would otherwise want to use, so k8s starts complaining about a lack of space. I won’t say that this is always the case, but often, “DiskPressure” errors on your local k8s cluster are, at their root, Docker disk space problems, and not k8s specific; try the steps below and see if the errors go away.

### What's eating you(r disk space)?
If you think you’re running out of Docker storage space, dig in with [`docker system df`](https://docs.docker.com/engine/reference/commandline/system_df/) to see where your space is going (try `-v` for even more info). You’ll see stats for four types of Docker objects. I haven’t had to battle volume bloat much, so I’m not going to talk about it here, but let’s talk about the other three objects:

**Images**: okay, if you use Docker, you probably know what an image is, and it’s probably pretty easy to imagine how, if you have enough of them, they start taking up a lot of disk space. Because images are composed of layers (see below), an image only takes up size according to its _unique_ layers---but this can still add up. If you’re developing with Tilt, especially if you’re not using [Live Update](https://blog.tilt.dev/2019/04/02/fast-kubernetes-development-with-live-update.html) and are doing a fresh docker build for every code change, you’ll be building a _lot_ of images. Sorry about that!

**Build Cache**: Docker images are composed of _layers_ stacked on top of each other, each layer representing the filesystem state that resulted from a Dockerfile step. (For more on layers, [see the docs](https://docs.docker.com/storage/storagedriver/#images-and-layers).) If nothing has changed in layer X, we can reuse the layer X we have lying around from last time and save ourselves time and work. Layers from past builds live in the cache. Usually, this is great---it increases the probability that whenever we’re building a new image, we can reuse something from before.

**Containers**: if you have a lot of containers around, either running or stopped, they can start to eat up your space, especially if you’ve got big files on them. If you tend to spin up k8s resources and then forget about them, those containers could be causing you unnecessary trouble.
With K8s, though, at least when you bring down a pod, its container is _removed_; some other container management systems (e.g. Docker Compose) will stop but not remove the container, so you still have to contend with its size in storage.

Docker does all this (retaining a cache, keeping old images around) in order to be _fast_, and usually that’s great! But sometimes it goes too far; when Docker hoards too much old stuff just in case we need it later, it may run out of room to do the work we actually need it to do.

## Get that disk space back!

Luckily, there are a number of `prune` commands that you can run to get rid of all the old Docker artifacts that you don’t actually need anymore and reclaim your precious, precious disk space. (Note that `docker system df` has a "reclaimable" column that indicates how much of each object can safely be pruned away.)

**[`docker image prune`](https://docs.docker.com/engine/reference/commandline/image_prune/)**: by default, this command gets rid of all _dangling images_, i.e. images without tags. (You get dangling images when, say, you had tag `myapp:latest` pointing to image ID `a1b2c3d4`, but then you pull down a new version of `myapp:latest`, such that the tag now points to `e5f6g7h8`; your old image ID, `a1b2c3d4`, is now tagless, i.e. _dangling_.) You can use the `--all`/`-a` flag to get rid of _unused images_ as well (i.e. images not associated with a container).

**[`docker builder prune`](https://docs.docker.com/engine/reference/commandline/builder_prune/)**: remove layers from the build cache that aren't referenced by any images.

**[`docker container prune`](https://docs.docker.com/engine/reference/commandline/container_prune/)**: remove stopped containers.

You can kill all of your birds (...whales?) with one stone with [`docker system prune`](https://docs.docker.com/engine/reference/commandline/system_prune/), which basically does all of the above.
It can be especially satisfying to watch your disk usage with `watch -d docker system df` (you may need to `brew install watch`) as you prune, and see the numbers drop before your eyes.

## How does Tilt deal with this?

Tilt builds a lot of images, and we don’t want you to be sad; that's why Tilt will periodically prune away your old Docker junk for you. When I sat down to write this feature, I figured it would be a simple matter of:

```
for {
	select {
	case <-time.After(time.Hour):
		docker.SystemPrune()
	}
}
```

Alas, as is often the case with software, it was much more complicated than that. Some considerations of Tilt's Docker Pruner:

### Don't prune images/caches the user might want soon

If you're actively developing on Image X, we don't want to prune it away, even if it's not currently running on a container. That's why Tilt only prunes images/containers of a certain age---by default, 6h or older (you can configure this setting in your Tiltfile).

Unfortunately, Docker makes it hard to tell how old an image actually is; the timestamp recorded on an image (and the one respected by the `--until` filter on prune commands) represents the time the image was _first built_; if no code has changed, Tilt will build and tag your image, but the build is a no-op and doesn't change the timestamp. The solution? The `metadata.lastTaggedTime` field, which gives us an accurate picture of the last time Tilt saw this image.

It would be a pain if you opened your laptop in the morning and started Tilt, and we pruned away last night's build cache as you were trying to start up new images from it. That's why we wait until all of your pending builds have finished before we prune, so that you're sure to have touched any images you're currently using.

### Stay in your lane

We also don't want running Tilt in one repo to blow away all your caches for another project---which may have totally different Docker Pruner settings, or may not even use Tilt!
To make sure we only mess with images built by Tilt, we filter for the `builtby:tilt` label; to make sure we don't deal splash damage to your other Tilt projects, we only prune images that the current Tilt run knows about.

### Give the user control

By default, Tilt's Docker Pruner runs once after your initial builds have all completed, and then every hour thereafter, and removes images that are 6 hours old or older. If that doesn't work for you, don't worry, you can configure the Docker Pruner settings in your Tiltfile:

* Not worried about Docker disk space at all? Disable the Docker Pruner entirely!

```
docker_prune_settings(disable=True)
```

* If you want to keep your images around for a really long time, adjust the max age of images we keep around:

```
docker_prune_settings(max_age_mins=1440)
```

* Say your project eats up a ton of space, and you want to blow away the maximum possible amount of stuff every time; set the max age really low. (Remember that no matter how low you set the max age, we'll only prune objects that are _not in use_. So, whatever images you're currently running are always safe.)

```
docker_prune_settings(max_age_mins=15)
```

* Maybe the amount of space you use is unpredictable and doesn't correlate with how long Tilt has been up; in this case, instead of pruning every X hours, prune every Y builds instead.

```
docker_prune_settings(num_builds=10)
```

We hope the Docker Pruner helps keep your disk usage in check, so you can stay in flow without worrying about finicky errors. [Read more about the settings in the docs](https://docs.tilt.dev/api.html#api.docker_prune_settings), configure it to your liking, and let us know how it's working for you!
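As a footnote: the age check described under "Don't prune images/caches the user might want soon" boils down to comparing an image's last-tagged time against a cutoff. A minimal sketch of that comparison, in illustrative JavaScript rather than Tilt's actual Go implementation:

```javascript
// Illustrative sketch (not Tilt's real code): an image is prunable when
// the last time it was tagged is at least maxAgeMins in the past.
// Using lastTaggedTime instead of the image's build timestamp avoids
// sparing images whose no-op rebuilds never update the build time.
function isPrunable(lastTaggedTime, maxAgeMins, now) {
  const ageMins = (now.getTime() - lastTaggedTime.getTime()) / (1000 * 60);
  return ageMins >= maxAgeMins;
}
```

The function names and signature here are assumptions made for illustration only.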
97.486957
841
0.76755
eng_Latn
0.99878
b911aa776249985d1be00516f13ee52f2637ede8
1,609
md
Markdown
docs/README.md
dvornikov-aa/clever
d9e0c947978692b01e5cf891af50f68a1f1fb267
[ "MIT" ]
1
2021-06-23T16:16:00.000Z
2021-06-23T16:16:00.000Z
docs/README.md
dvornikov-aa/clever
d9e0c947978692b01e5cf891af50f68a1f1fb267
[ "MIT" ]
null
null
null
docs/README.md
dvornikov-aa/clever
d9e0c947978692b01e5cf891af50f68a1f1fb267
[ "MIT" ]
null
null
null
Clever
======

<img src="assets/clever3.png" align="right" width="300px" alt="Clever">

"Clever" is an educational kit for a programmable quadcopter built from popular open components, along with the documentation and libraries needed to work with it. The kit includes a PixHawk/PixRacer flight controller running the PX4 flight stack, a Raspberry Pi 3 as the onboard control computer, a camera module for computer-vision-based flight, and a set of various sensors and other peripherals.

Many of Copter Express's "big" projects were built on exactly the same platform, for example drones for [autonomous pizza delivery PR campaigns](https://www.youtube.com/watch?v=hmkAoZOtF58) (Samara, Kazan); a coffee-delivery drone in Skolkovo; a monitoring drone with a charging station; the winning drones at the "[Robocross-2016](https://www.youtube.com/watch?v=dGbDaz_VmYU)" and "[Robocross-2017](https://youtu.be/AQnd2CRczbQ)" field trials; and many others.

To learn how to assemble, configure, pilot, and program the autonomous "Clever" drone, use this handbook.

Raspberry Pi image
------------------

The **OS image** for the RPi 3 with preinstalled and preconfigured software can be downloaded [here](microsd_images.html).

The image includes:

* Raspbian Stretch
* ROS Kinetic
* Preconfigured networking
* OpenCV
* mavros
* A software suite for working with Clever

[API description](simple_offboard.html) for autonomous flights.

The source code of the image builder and all the software can be found on [GitHub](https://github.com/CopterExpress/clever).
51.903226
463
0.786203
rus_Cyrl
0.963603
b911f8f51152b7c6e4998a4f890256ac09975717
272
md
Markdown
README.md
johanremilien/PongScreenSaver
8b59e927d12548a1fd37524c6878122a18d2ab68
[ "MIT" ]
null
null
null
README.md
johanremilien/PongScreenSaver
8b59e927d12548a1fd37524c6878122a18d2ab68
[ "MIT" ]
null
null
null
README.md
johanremilien/PongScreenSaver
8b59e927d12548a1fd37524c6878122a18d2ab68
[ "MIT" ]
null
null
null
# PongScreenSaver

Full project from [Trevor Philips](https://github.com/trevphil)' great article on [BetterProgramming](https://betterprogramming.pub/how-to-make-a-custom-screensaver-for-mac-os-x-7e1650c13bd8) entitled "*How to create a custom screensaver for Mac OS X*".
90.666667
253
0.790441
eng_Latn
0.514866