| Column | Type | Length / range |
|---|---|---|
| hexsha | string | length 40 |
| size | int64 | 5 to 1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3-344 |
| max_stars_repo_name | string | length 5-125 |
| max_stars_repo_head_hexsha | string | length 40-78 |
| max_stars_repo_licenses | sequence | length 1-11 |
| max_stars_count | int64 | 1 to 368k |
| max_stars_repo_stars_event_min_datetime | string | length 24 |
| max_stars_repo_stars_event_max_datetime | string | length 24 |
| max_issues_repo_path | string | length 3-344 |
| max_issues_repo_name | string | length 5-125 |
| max_issues_repo_head_hexsha | string | length 40-78 |
| max_issues_repo_licenses | sequence | length 1-11 |
| max_issues_count | int64 | 1 to 116k |
| max_issues_repo_issues_event_min_datetime | string | length 24 |
| max_issues_repo_issues_event_max_datetime | string | length 24 |
| max_forks_repo_path | string | length 3-344 |
| max_forks_repo_name | string | length 5-125 |
| max_forks_repo_head_hexsha | string | length 40-78 |
| max_forks_repo_licenses | sequence | length 1-11 |
| max_forks_count | int64 | 1 to 105k |
| max_forks_repo_forks_event_min_datetime | string | length 24 |
| max_forks_repo_forks_event_max_datetime | string | length 24 |
| content | string | length 5 to 1.04M |
| avg_line_length | float64 | 1.14 to 851k |
| max_line_length | int64 | 1 to 1.03M |
| alphanum_fraction | float64 | 0 to 1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01 to 1 |
7a9baf36290369bbcfa1262372948419bf0efc53
698
md
Markdown
docs/1.16/core-v1-localVolumeSource.md
justinwalz/k8s-alpha
3b3cc2f1120daaf5dd74c147c0adec79720726f9
[ "Apache-2.0" ]
70
2020-05-13T10:44:17.000Z
2021-11-15T10:42:11.000Z
docs/1.16/core-v1-localVolumeSource.md
justinwalz/k8s-alpha
3b3cc2f1120daaf5dd74c147c0adec79720726f9
[ "Apache-2.0" ]
2
2020-11-19T16:36:56.000Z
2021-07-02T11:55:44.000Z
docs/1.16/core-v1-localVolumeSource.md
justinwalz/k8s-alpha
3b3cc2f1120daaf5dd74c147c0adec79720726f9
[ "Apache-2.0" ]
10
2020-06-23T09:05:57.000Z
2021-06-02T00:02:55.000Z
---
permalink: /1.16/core/v1/localVolumeSource/
---

# package localVolumeSource

Local represents directly-attached storage with node affinity (Beta feature)

## Index

* [`fn withFsType(fsType)`](#fn-withfstype)
* [`fn withPath(path)`](#fn-withpath)

## Fields

### fn withFsType

```ts
withFsType(fsType)
```

Filesystem type to mount. It applies only when the Path is a block device. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default is to auto-select a filesystem if unspecified.

### fn withPath

```ts
withPath(path)
```

The full path to the volume on the node. It can be either a directory or a block device (disk, partition, ...).
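As a quick illustration of how these two setters are meant to be combined (this usage sketch is not part of the generated reference; the import path and `k` alias are hypothetical, and the snippet is Jsonnet shown in the same `ts` fence style this page uses):

```ts
// Hypothetical usage sketch; the import path and alias are assumptions.
local k = import 'k.libsonnet';
local localVolumeSource = k.core.v1.localVolumeSource;

// A local volume backed by a block device, requesting an ext4 filesystem.
localVolumeSource.withPath('/dev/sdb1') +
localVolumeSource.withFsType('ext4')
```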
23.266667
231
0.717765
eng_Latn
0.980814
7a9bf711db0430dd62b6aab64d799485a6b76961
2,671
md
Markdown
articles/supply-chain/master-planning/tasks/constrained-plan.md
MicrosoftDocs/Dynamics-365-Operations.fr-fr
9f97b0553ee485dfefc0a57ce805f740f4986a7e
[ "CC-BY-4.0", "MIT" ]
2
2020-05-18T17:14:08.000Z
2021-04-20T21:13:46.000Z
articles/supply-chain/master-planning/tasks/constrained-plan.md
MicrosoftDocs/Dynamics-365-Operations.fr-fr
9f97b0553ee485dfefc0a57ce805f740f4986a7e
[ "CC-BY-4.0", "MIT" ]
6
2017-12-13T18:31:58.000Z
2019-04-30T11:46:19.000Z
articles/supply-chain/master-planning/tasks/constrained-plan.md
MicrosoftDocs/Dynamics-365-Operations.fr-fr
9f97b0553ee485dfefc0a57ce805f740f4986a7e
[ "CC-BY-4.0", "MIT" ]
1
2019-10-12T18:19:20.000Z
2019-10-12T18:19:20.000Z
---
title: Generate a constrained plan
description: This topic shows how to create a plan that takes material and capacity constraints into account.
author: ChristianRytt
ms.date: 08/02/2019
ms.topic: business-process
ms.prod: ''
ms.technology: ''
ms.search.form: DefaultDashboard, ReqCreatePlanWorkspace, ReqTransPlanCard, ReqPlanSched
audience: Application User
ms.reviewer: kamaybac
ms.search.region: Global
ms.author: crytt
ms.search.validFrom: 2016-06-30
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: 5fea315d41d01cb578d7d60c9eb7006e4b6c3362
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/29/2021
ms.locfileid: "7578342"
---

# <a name="generate-a-constrained-plan"></a>Generate a constrained plan

[!include [banner](../../includes/banner.md)]

This topic shows how to create a plan that takes material and capacity constraints into account. The plan ensures that production does not start before the materials are available and that resources are not overbooked. The demo data used for this procedure is from the USMF company. This procedure is intended for the production manager.

## <a name="set-up-a-constrained-plan"></a>Set up a constrained plan

1. On the home page, select the **Planning** workspace.
2. Select **Master plans** in the list of links on the right side of the workspace.
3. In the list, find and select the desired record. Example: **StaticPlan**
4. Select **Yes** in the **Finite capacity** field.
5. Enter `30` in the **Finite capacity time fence** field.
6. Expand the **Time fences in days** section.
7. Select **Yes** in the **Capacity** field.
8. Enter a number in the **Capacity scheduling time fence (days)** field. Example: `60`
9. Select **Yes** in the **Calculated delays** field.
10. Enter a number in the **Calculate delays time fence (days)** field. Example: `60`
11. Expand the **Calculated delays** section.
12. Select **Yes** in all the **Add the calculated delay to the requirement date** fields.
13. Close the page.

## <a name="create-a-constrained-plan"></a>Create a constrained plan

1. Select **Run**.
2. In the **Master plan** field, enter or select the plan for which you set up the constraints.
3. Select **OK**.
4. Select **Planned orders**.

[!INCLUDE[footer-include](../../../includes/footer-banner.md)]
48.563636
260
0.764882
fra_Latn
0.934636
7a9c295e56a7311b6acd0b23720741a4a3a3e11b
569
md
Markdown
Q006/readme.md
fukuball/LeetCode
049209e83169129b194786f8973d21f5f14ea235
[ "MIT" ]
3
2016-01-31T16:45:19.000Z
2020-12-18T22:43:08.000Z
Q006/readme.md
imbi7py/LeetCode
049209e83169129b194786f8973d21f5f14ea235
[ "MIT" ]
null
null
null
Q006/readme.md
imbi7py/LeetCode
049209e83169129b194786f8973d21f5f14ea235
[ "MIT" ]
1
2020-12-19T05:50:55.000Z
2020-12-19T05:50:55.000Z
[ZigZag Conversion](https://leetcode.com/problems/zigzag-conversion/)
========

- Difficulty: Easy

The string "PAYPALISHIRING" is written in a zigzag pattern on a given number of rows like this: (you may want to display this pattern in a fixed font for better legibility)

```
P   A   H   N
A P L S I I G
Y   I   R
```

And then read line by line: `"PAHNAPLSIIGYIR"`

Write the code that will take a string and make this conversion given a number of rows:

```
string convert(string text, int nRows);
```

`convert("PAYPALISHIRING", 3)` should return `"PAHNAPLSIIGYIR"`.
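One common way to implement the conversion is to simulate the zigzag walk row by row: append each character to the row the cursor is currently on, and reverse direction whenever the cursor hits the top or bottom row. A minimal sketch of that approach (in TypeScript rather than the C++-style signature above):

```ts
// Simulate the zigzag walk: append each character to the current row,
// bouncing the row cursor between the top and bottom rows. O(n) time.
function convert(text: string, nRows: number): string {
  if (nRows <= 1) return text;
  const rows: string[] = new Array(nRows).fill('');
  let row = 0;
  let step = 1; // +1 while moving down, -1 while moving up
  for (const ch of text) {
    rows[row] += ch;
    if (row === 0) step = 1;
    else if (row === nRows - 1) step = -1;
    row += step;
  }
  return rows.join('');
}

console.log(convert('PAYPALISHIRING', 3)); // "PAHNAPLSIIGYIR"
```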
28.45
172
0.711775
eng_Latn
0.969956
7a9c451f535fe511fd896c46bac822f7d60475a6
1,044
md
Markdown
README.md
dlavric/golang-travis-release
5a61ae10089483f9a7b74275bcafc1e5bd41d9b4
[ "MIT" ]
null
null
null
README.md
dlavric/golang-travis-release
5a61ae10089483f9a7b74275bcafc1e5bd41d9b4
[ "MIT" ]
null
null
null
README.md
dlavric/golang-travis-release
5a61ae10089483f9a7b74275bcafc1e5bd41d9b4
[ "MIT" ]
null
null
null
## This repository is made for the purpose of testing how Travis works with new releases

The script `hello.go` prints `hello`.

## Prerequisites

- [X] Install Go on macOS:

```shell
$ brew install golang
```

- [X] Install the [Travis CLI](https://blog.travis-ci.com/2013-01-14-new-client)

## What's included

The repository includes:

- A [.travis.yml](https://github.com/dlavric/golang-travis-release/blob/main/.travis.yml) configuration file
- A [hello.go](https://github.com/dlavric/golang-travis-release/blob/main/hello.go) Go script that prints hello

## How to use this repo

- Clone this repo:

```shell
$ git clone https://github.com/dlavric/golang-hello-test-travis
$ cd golang-hello-test-travis
```

- Build and install the program with the go tool:

```shell
$ go mod init example.com/user/hello
$ go install example.com/user/hello
```

- Add the install directory to your PATH to make running binaries easy:

```shell
$ export PATH=$PATH:$(dirname $(go list -f '{{.Target}}' .))
```

- Run `hello.go`:

```shell
$ go run hello.go
```
21.75
116
0.705939
eng_Latn
0.788322
7a9c5e6f9563d8b90f3ac646ca48ef9cfa710556
320
md
Markdown
notes/0143.md
briankoser/kodex
4b7aebb2a046de92cd3eeaad8d93d6b7476d8faa
[ "MIT" ]
1
2018-12-06T15:11:10.000Z
2018-12-06T15:11:10.000Z
notes/0143.md
briankoser/kodex
4b7aebb2a046de92cd3eeaad8d93d6b7476d8faa
[ "MIT" ]
277
2017-09-22T19:29:51.000Z
2022-02-13T08:34:38.000Z
notes/0143.md
briankoser/kodex
4b7aebb2a046de92cd3eeaad8d93d6b7476d8faa
[ "MIT" ]
null
null
null
---
date: 2020-10-17
author: Melissa
---

Holland Farm with the family. The corn pit; striding through the corn maze with Lydia and Ruth; the picnic by the car; sending Lydia and Amber on the zipline, each by herself; rolling through the big pipe with the girls giggling and trying to stay upright; the picnic at Bucee's.
64
279
0.76875
eng_Latn
0.999775
7a9c7f5fddf387594fae5bfdb20f3dd6d618cc79
47
md
Markdown
README.md
papiparham/mserv-announcement
b4c3fa28bd74be7b5ea31209c87f4849ddea5efe
[ "MIT" ]
null
null
null
README.md
papiparham/mserv-announcement
b4c3fa28bd74be7b5ea31209c87f4849ddea5efe
[ "MIT" ]
null
null
null
README.md
papiparham/mserv-announcement
b4c3fa28bd74be7b5ea31209c87f4849ddea5efe
[ "MIT" ]
null
null
null
# mserv-announcement

Announcement Microservice
15.666667
25
0.87234
eng_Latn
0.838197
7a9c98b8fcce1090f05b7c3543f033116ece380d
920
md
Markdown
src/pages/services/volusion-to-bigcommerce-migration.md
BallisticAgencyWeb/HeadlessBallistic
a12311dcfe5d4d8033ef87c3cc6cfce1042d17c8
[ "MIT" ]
null
null
null
src/pages/services/volusion-to-bigcommerce-migration.md
BallisticAgencyWeb/HeadlessBallistic
a12311dcfe5d4d8033ef87c3cc6cfce1042d17c8
[ "MIT" ]
null
null
null
src/pages/services/volusion-to-bigcommerce-migration.md
BallisticAgencyWeb/HeadlessBallistic
a12311dcfe5d4d8033ef87c3cc6cfce1042d17c8
[ "MIT" ]
null
null
null
---
templateKey: service-post
title: Volusion to BigCommerce Migration
date: 2020-09-04T18:43:09.392Z
wistiaid: 8yvsvxwqkr
featuredimage: /img/ba-website_servicesproducts_migration-web__34841.1597171286.jpg
---

We are waiving our product migration fee! For a limited time, we will migrate your Volusion product data to BigCommerce for free. We know that it is more important than ever to migrate off of Volusion and onto a better eCommerce platform, so we're here to help.

**Our Volusion to BigCommerce Migration Service includes:**

Product Data Migration (Free until 09/30)\
New Website Design\
Historical Order Data Backup\
Historical Customer Data Backup\
Customer Group Migration\
Promotions/Discounts/Coupon Migration\
Customer Reviews Migration\
Content Pages/Blog Migration\
Payment Provider Migration\
Shipping Services Migration\
Inventory Sync\
Anything that you need to successfully migrate to BigCommerce!
36.8
219
0.817391
eng_Latn
0.822579
7a9d751a66e54a391b014443660b035a970b8172
6,364
md
Markdown
ru/managed-mongodb/operations/data-migration.md
anton-bryukhov/docs
8fb69a121137c195745c17cc1e7f0cc68169edec
[ "CC-BY-4.0" ]
null
null
null
ru/managed-mongodb/operations/data-migration.md
anton-bryukhov/docs
8fb69a121137c195745c17cc1e7f0cc68169edec
[ "CC-BY-4.0" ]
null
null
null
ru/managed-mongodb/operations/data-migration.md
anton-bryukhov/docs
8fb69a121137c195745c17cc1e7f0cc68169edec
[ "CC-BY-4.0" ]
null
null
null
# Migrating data to {{ mmg-name }}

To move your database to the {{ mmg-name }} service, you need to transfer the data itself, close the old database for writes, and move the load to the DB cluster in Yandex.Cloud.

You can transfer data to a {{ mmg-name }} cluster using the `mongodump` and `mongorestore` utilities: create a dump of the working database and restore it into the target cluster.

Before transferring the data, check whether the DBMS versions of the existing database and of your cluster in the cloud match. If the versions differ, the dump cannot be restored.

The sequence of steps:

1. [Create a dump](#dump) of the database to be migrated using the `mongodump` utility.
1. If necessary, [create a virtual machine](#create-vm) in {{ compute-name }} so you can restore the database from the dump inside the Yandex.Cloud infrastructure.
1. [Create a {{ mmg-name }} cluster](#create-cluster) on which the restored database will be deployed.
1. [Restore the data from the dump](#restore) into the cluster using the `mongorestore` utility.

### Create a dump {#dump}

Create the database dump with the `mongodump` utility. The utility is described in detail in the [{{ MG }} documentation](https://docs.mongodb.com/manual/reference/program/mongodump/).

1. Install `mongodump` and the other MongoDB utilities. Example for Ubuntu and Debian distributions:

   ```
   $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
   ...
   $ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
   ...
   $ sudo apt-get update
   ...
   $ sudo apt-get install mongodb-org-shell mongodb-org-tools
   ```

   Instructions for other platforms, and more details on installing the utilities, can be found on the [Install MongoDB](https://docs.mongodb.com/manual/installation/) page.

2. Before creating the dump, it is recommended to switch the DBMS into read-only mode, so as not to lose data that may arrive while the dump is being created.

3. Create the database dump:

   ```
   $ mongodump --host <DB server address> --port <port> --username <user name> --password "<password>" --db <database name> --out ~/db_dump
   ```

   If several CPU cores are available for creating the dump, pass the `-j` flag with the number of available cores:

   ```
   $ mongodump --host <DB server address> --port <port> --username <user name> --password "<password>" -j <number of cores> --db <database name> --out ~/db_dump
   ```

4. Archive the dump:

   ```
   $ tar -cvzf db_dump.tar.gz ~/db_dump
   ```

### (Optional) Create a virtual machine for uploading the dump {#create-vm}

An intermediate virtual machine in {{ compute-full-name }} is needed if:

* Your {{ mmg-name }} cluster is not accessible from the internet.
* Your hardware or your connection to the cluster in the cloud is not reliable enough.

To prepare a virtual machine for restoring the dump:

1. In the management console, [create a new virtual machine](../../compute/operations/vm-create/create-linux-vm.md) from an Ubuntu 18.04 image. The required amount of RAM and number of CPU cores depend on the volume of data to be migrated and on the required transfer speed. The minimal configuration (1 core, 2 GB RAM, 10 GB of disk space) should be enough to migrate a database of up to 1 GB. The larger the database, the more disk space (at least twice the database size) and RAM are required.

   The virtual machine must be in the same network and availability zone as the master host of the {{ mmg-name }} cluster. The virtual machine must also have an external IP address so that you can upload the dump file from outside the cloud.

2. Install the {{ MG }} client and the other utilities for working with the DBMS:

   ```
   $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
   ...
   $ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
   ...
   $ sudo apt-get update
   ...
   $ sudo apt-get install mongodb-org-shell mongodb-org-tools
   ```

3. Transfer the database dump from your server to the virtual machine, for example with the `scp` utility:

   ```
   scp ~/db_dump.tar.gz <VM user name>@<VM public address>:/tmp/db_dump.tar.gz
   ```

4. Unpack the dump on the virtual machine:

   ```
   tar -xzf /tmp/db_dump.tar.gz
   ```

As a result, you have a virtual machine with a database dump that is ready to be restored into the {{ mmg-name }} cluster.

## Create a {{ mmg-name }} cluster {#create-cluster}

Create a cluster whose computing power and storage size match the environment in which the existing database is deployed. Creating a {{ mmg-name }} cluster is described in detail on the [{#T}](cluster-create.md) page.

### Restore the data {#restore}

Restore the database from the dump using the [mongorestore](https://docs.mongodb.com/manual/reference/program/mongorestore/) utility.

* If you are restoring the dump from a virtual machine in the cloud:

  ```
  $ mongorestore --host <DB server address> \
      --port <port> \
      --username <user name> \
      --password "<password>" \
      -j <number of threads> \
      --authenticationDatabase <database name> \
      --nsInclude '*.*' /tmp/db_dump
  ```

* If you are restoring the dump from a server outside Yandex.Cloud, the SSL parameters must be set explicitly for `mongorestore`:

  ```
  $ mongorestore --host <DB server address> \
      --port <port> \
      --ssl \
      --sslCAFile <path to certificate file> \
      --username <user name> \
      --password "<password>" \
      -j <number of threads> \
      --authenticationDatabase <database name> \
      --nsInclude '*.*' ~/db_dump
  ```

* If only certain collections need to be transferred, set the `--nsInclude` and `--nsExclude` flags pointing to the namespaces that should or should not be included in the restored set of collections.
47.849624
264
0.705531
rus_Cyrl
0.917445
7a9d7a2030079cde7aec75cd891d6ad9e3d06f6c
3,311
md
Markdown
src/pages/luzern-switzerland/index.md
therealice/beansprouty
eab56ecbb80b1d1ca24c4bb737764886e9426dfc
[ "MIT" ]
null
null
null
src/pages/luzern-switzerland/index.md
therealice/beansprouty
eab56ecbb80b1d1ca24c4bb737764886e9426dfc
[ "MIT" ]
null
null
null
src/pages/luzern-switzerland/index.md
therealice/beansprouty
eab56ecbb80b1d1ca24c4bb737764886e9426dfc
[ "MIT" ]
null
null
null
---
title: Luzern, Switzerland
author: C. N. Aa. Thondrup
date: "2018-05-03T12:00:00.000Z"
duration: 5
featuredImage: "./luzern.jpg"
---

> Open areas, a lake, a mountain view, nice weather, Swiss chocolate. Perhaps more tourists than expected, but overall a good first impression.

### 28 April - 3 May 2018

The city center is small and easy to get around. There are a lot of people here, and a lot of tourists it seems. After having walked around with our luggage, we finally found our hotel. We tried to figure out how to get inside, but after several failed attempts to contact the hotel, we gave up. We looked up another hotel 15 minutes by bus outside the city called Hotel Waldegg. A nice, small, clean hotel with a modern interior, but not good value for your money. We stayed one night before moving on to an Airbnb apartment in Eschenbach, 15 minutes by train north of Luzern. Here we stayed from 29 April to 3 May.

<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d86984.3466971739!2d8.21209340688269!3d47.05473349553356!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x478ffa2a79547379%3A0xaef02ad1409952af!2sLucerne%2C+Switzerland!5e0!3m2!1sen!2sit!4v1529856826696" width="600" height="450" frameborder="0" style="border:0" allowfullscreen></iframe>

Eschenbach is a nice, safe, clean and friendly area. Nice if you want to stay outside the city.

One of the things that are really cool about Switzerland is their [Zurich Vitaparcours](https://www.zurichvitaparcours.ch/de/Finder). Spread around the country they have these public running courses, usually in the forest, where you can do different exercises. It's really great! There was a Zurich Vitaparcours in the forest right behind the house we lived in. So lucky!

![Rings](./rings.jpg 'Zurich Vitaparcour in Eschenbach')

### Food

You can find the most delicious food at [Blendteehaus](http://blendteehaus.ch/). They have much more than just tea. Really tasty. You can check them out on [HappyCow](https://www.happycow.net/reviews/blend-teehaus-luzern-86442).

Then of course you have the safe [Tibits](https://www.tibits.ch/de/restaurants#tibits-luzern), which is always a winner. It's at the train station, top floor.

One more place I would like to recommend is [MÜLLER Reformhaus Vital Shop AG](https://www.google.com/maps/place/M%C3%9CLLER+Reformhaus+Vital+Shop+AG/@47.05201,8.3051428,3a,75y,90t/data=!3m8!1e2!3m6!1sAF1QipMaekW3oIzKHlTizRx0mM8HIKG-ue-V4oevirB4!2e10!3e12!6shttps:%2F%2Flh5.googleusercontent.com%2Fp%2FAF1QipMaekW3oIzKHlTizRx0mM8HIKG-ue-V4oevirB4%3Dw129-h86-k-no!7i4288!8i2848!4m8!1m2!11m1!2s180FjmycaP5k-OaGoa9lCSBrgrV0!3m4!1s0x478ffb9f070926fb:0x1b6e3402ceea5d70!8m2!3d47.051926!4d8.305192).

![Blendteehaus](./teehaus.jpg 'Blendteehaus')

![Food at Blendteehaus](./teehausfood.jpg 'Food at Blendteehaus')

![Tibits](./tibits.jpg 'Tibits at the train station')

### Day trip to Zurich

We went on a short trip to Zurich while staying in Luzern to meet my father's cousin and his family for the first time. A quick stop at [Hiltl](https://hiltl.ch/en/) in Zurich before taking the train to Egg just outside the city, where they live. If you are ever in Zurich, try Hiltl. They have so much to offer. It's similar to Tibits.

![Tibetan gathering](./tibetan-gathering.jpg 'Tibetan gathering with my fathers cousin')
94.6
650
0.780127
eng_Latn
0.971005
7a9e15e145b74b2038ff6481cc302183e91528c2
13,542
md
Markdown
docs/Prepositions.md
Genitana/EnglishGrammarBook
27a4fb202c123bdf5cc55ef7586ead6fff16d3bb
[ "MIT" ]
365
2017-05-04T12:35:02.000Z
2022-03-23T02:26:22.000Z
docs/Prepositions.md
bugcai/EnglishGrammarBook
0b0340ed979d516a1cc90611f11e646dbcd98f4c
[ "MIT" ]
16
2017-05-04T03:07:39.000Z
2022-01-15T13:36:14.000Z
docs/Prepositions.md
bugcai/EnglishGrammarBook
0b0340ed979d516a1cc90611f11e646dbcd98f4c
[ "MIT" ]
164
2017-05-04T03:00:18.000Z
2022-03-30T07:51:22.000Z
# Chapter 7: Prepositions

**Prepositions**, looked at as a word formation, are pre- plus position: "something placed in front." A preposition is placed before a noun (called its object); together they form a unit of meaning called a prepositional phrase, which is used as a modifier. A prepositional phrase can serve as an adjective or an adverb, and it is the most flexible, most frequently used kind of modifier: it can modify nouns, verbs, adjectives, adverbs, and so on. Here are examples of prepositional phrases modifying each major word class:

<u>The company</u> is <u>in trouble</u>.

The prepositional phrase in trouble is used as an adjective: it serves as the subject complement and modifies the subject the company (a noun).

I <u>leave</u> <u>for Hong Kong</u> tomorrow.

The prepositional phrase for Hong Kong is used as an adverb of place, modifying the verb leave.

The country is <u>rich</u> <u>in mineral wealth</u>.

The prepositional phrase in mineral wealth is used as an adverb, modifying the adjective rich.

The new janitor works <u>half-heartedly</u> <u>at best</u>.

The prepositional phrase at best is used as an adverb, modifying the adverb half-heartedly.

## Prepositions and particles

There is a class of words that look exactly like prepositions but take no object; they are used directly as adverbs. These are called **particles** and should be treated as adverbs. For example:

Come <u>in</u>.

The soldier stood <u>up</u>.

In the first sentence, in looks like a preposition but has no object; it is used directly as an adverb of place modifying the verb come, so it is a particle. In the second sentence, up is likewise a particle, directly modifying the verb stood.

Particles are also often used together with prepositions, for example:

The plumber <u>went down to</u> the basement.

Here down is a particle: it has no object and directly modifies the verb went. The following to is the preposition; it takes the noun phrase the basement as its object, forming the prepositional phrase to the basement, used as an adverb of place modifying went.

He <u>has gone over to</u> your house.

Here over is a particle, directly modifying has gone. The following to is the preposition, taking your house as its object to form the prepositional phrase to your house, an adverb of place modifying has gone.

## Prepositions and phrasal verbs

Compare the following two sentences:

The man slept on the couch.

The man turned on the light.

At first glance the two constructions look very similar, but the sentence patterns are actually different. The first is S + V:

<u>The man</u> <u>slept</u> <u>on the couch</u>. (S + V + prepositional phrase)

The verb is slept, and on the couch is a prepositional phrase used as an adverb of place, modifying where the sleeping happened. The second sentence is different: it is S + V + O:

<u>The man</u> <u>turned on</u> <u>the light</u>. (S + V + O)

Here turned on should be treated as one verb (a phrasal verb), with the light as the object of turned on. If the second sentence were still parsed like the first:

<u>The man</u> <u>turned</u> <u>on the light</u>. (wrong)

then the prepositional phrase on the light would be an adverb of place ("on top of the light") modifying turned, and the whole sentence would mean "the man rotated on top of the light," which obviously makes no sense. Instead, turned on must be taken together as a single verb meaning "switched on."

Constructions like turn on, in which a verb plus a particle forms a unit of meaning of two or more words that should be interpreted as one verb, are called **phrasal verbs**.

## Types of phrasal verbs

Phrasal verbs can be transitive or intransitive, and the transitive ones divide further into separable and inseparable phrases. So phrasal verbs fall roughly into the following types.

### 1. Transitive, inseparable (the object must follow the phrase)

#### get over: recover, get well

It's only a cold; you'<u>ll get over it</u> soon enough.

A phrasal verb like get over is transitive and should have an object. It is inseparable, meaning its object can only be placed after it (as in get over it), not between get and over. If you see the construction get it over with, that is a different idiom, meaning "get the matter over with early." The next few examples are all of this same type (transitive, inseparable); work through them on your own.

#### look into: investigate, look at

The manager <u>will look into your complaint</u> at once.

#### take after: resemble

Henry <u>takes after his father</u>.

#### stand by: support

Don't worry; I'll <u>stand by you</u>.

If stand by appears without an object, it is a different phrasal verb, intransitive, meaning "wait, stand ready." For example:

Your team is next; please <u>stand by</u>.

### 2. Intransitive

#### come about: happen

How did this all <u>come about</u>?

An intransitive phrasal verb naturally takes no object.

#### fall off: decline

Business <u>has fallen off</u> badly since the SARS epidemic.

#### go off: explode

The bomb <u>went off</u> in the middle of the night.

#### turn up: appear, show up

The meeting was postponed because too few people <u>turned up</u>.

### 3. Transitive, separable (the object may follow the phrase or go in the middle)

#### bring up: raise (children)

That woman <u>brought up eight children</u>.

That woman <u>brought eight children up</u>.

That woman <u>brought them up</u>.

A phrasal verb like bring up is transitive; its object eight children can be placed after bring up, or the phrase can be split with the object between bring and up. With this type of phrasal verb, if the object is a pronoun (such as them), it cannot go at the end; the phrase must be split and the pronoun placed in the middle (brought them up). The reason: the final position is the prominent one and carries the stress when spoken, but a pronoun is relatively unimportant and should not be stressed. So whenever a phrasal verb is the separable kind and the object is a pronoun, split the phrase, put the pronoun in the middle, and leave the stressed final position to the particle (as in brought them up).

#### call off: cancel

The boss has <u>called off the meeting</u>.

The boss has <u>called the meeting off</u>.

The boss has <u>called it off</u>.

#### turn down: reject

I'm going to <u>turn down his offer</u>.

I'm going to <u>turn his offer down</u>.

I'm going to <u>turn it down</u>.

#### make up: fabricate, invent

He <u>made up a long story</u>.

He <u>made a long story up</u>.

He <u>made it up</u>.

If make up appears without an object, it is a different phrasal verb, intransitive, meaning "reconcile." For example:

He and his separated wife finally <u>made up</u>.

### 4. Phrasal verbs of three or more words: transitive, inseparable (the object follows)

#### catch up with

He'<u>s slowly catching up with his classmates</u> in exam grades.

Older grammar books called phrasal verbs "two-word verbs," but the name is not very apt, because phrasal verbs sometimes have more than two words, as in catch up with. In a three-word phrasal verb, the first word is usually a verb (catch), the second a particle (up), and the third a preposition (with). Because the third word is a preposition, it must have an object, so phrasal verbs of three or more words are all transitive, all inseparable, and the object must follow. The following examples are all of this kind.

#### drop out of: withdraw from

John hurt his leg and had to <u>drop out of the race</u>.

#### get away with: get off scot-free

If you take the money, you can't expect to <u>get away with it</u>.

#### go back on: break one's word

I promised my kid a new notebook, so I can't <u>go back on my word</u> now.

The above is mainly a classification, introducing the kinds of phrasal verbs; in this limited space it cannot be exhaustive. Phrasal verbs are like prepositions in this respect: they are best absorbed through wide reading and broad exposure, and gradually you will know how to use them.

## Spatial prepositions

Prepositions of space can be examined from four angles: point, line, surface, and volume. The point/line/surface/volume classification is a subjective judgment: for the same space, if the speaker treats it as a point in a given sentence, the "point" preposition is used; in another sentence that treats the same space as a three-dimensional volume, the "volume" preposition is used.

### 1. Point: at

The bus <u>will stop</u> <u>at the dock</u> to pick up passengers.

On the bus route, the dock is a "point" where the bus comes to a stop. The spatial concept of a point is usually expressed with the preposition at.

We <u>have arrived</u> <u>at our destination</u>.

In a journey, the destination is the "end point."

The sniper <u>is aiming</u> <u>at the kidnapper</u>.

When aiming a gun at a target, the target is an "aiming point."

The instructor <u>points to</u> the poster on the wall and says, "Never <u>point</u> your gun <u>at anybody</u>."

"Pointing toward a direction" is point to, where the preposition to is used much like toward. "Aiming," which treats the target as a fixed point, is point at.

### 2. Line: on, along

The student memorized 10 new words <u>on his way</u> to school.

The route from home to school is a "line"-shaped stretch. The spatial concept of a line is usually expressed with on or along.

There are many beautiful villas <u>along the beach</u>.

The beach can be viewed as the "line" where sea and land meet; either along or on will do as the preposition.

I see three bookstores <u>on this street</u>.

A street is a "line"-shaped structure; the preposition can be along or on.

### 3. Surface: on

I strained my eyes but couldn't see any ship <u>on the sea</u>.

The surface of the sea is a plane. On a surface, the preposition on is normally used.

There's a picture hanging <u>on the wall</u>.

The face of a wall is also a surface, so the preposition is on.

The speaker is standing <u>on the platform</u>.

The top of a platform is a surface, so on is used.

### 4. Volume: in

I like to stay <u>in my office</u> because it's quiet there.

The point/line/surface/volume judgment is, again, subjective. The sentence above says it is quiet inside the office, treating the office as a volume with length, width, and height, so the preposition is in. But the next example is different:

We'll go our separate ways, and meet <u>at my office</u> at three.

This sentence arranges a meeting "place," treating the office as a point, so the preposition is at.

I think I'll walk; there are too many cars <u>in the street</u>.

If the street is viewed as a line, the preposition should be on or along. But this sentence treats the street as a three-dimensional space and says there are too many cars inside it, so in is required.

### Figurative uses

Spatial prepositions are sometimes used figuratively, not to express actual space. For example:

That shameless man is <u>beneath contempt</u>.

The prepositional phrase beneath contempt is literally "lower than contempt"; this figurative use of a spatial preposition means "utterly despicable."

What the lecturer is saying is quite <u>beyond my comprehension</u>.

The prepositional phrase beyond my comprehension is literally "past the range of my understanding," another figurative use of a spatial preposition, meaning "I don't understand."

His objection is quite <u>beside the point</u>.

The prepositional phrase beside the point is literally "off to the side of the point," a figure for "irrelevant, off topic."

It's sometimes difficult to live <u>within one's income</u>.

The prepositional phrase within one's income is literally "inside the range of one's income," a spatial metaphor for "not overspending."

## Prepositions of time

The more common time prepositions are in, on, and at. For a "point" in time, a very short time, the preposition at is normally used, for example:

I'll meet you <u>at six o'clock</u>.

"Six o'clock" is a point in time, so the preposition should be at. Similar examples: at noon, at midnight, at dinner, at sunrise, at sunset, and so on. A special case is at night: although the night is a longer time, at is still used. There is no reason to be given here; it is simply linguistic habit.

For a longer stretch of time, the preposition in is normally used, for example:

I do most of my work <u>in the morning</u>.

"The morning" is a stretch of time, so the preposition should be in. Similar examples: in the evening, in May, in summer, in 2007, and so on.

When a specific day is named, such as a day of the week, a date, or a holiday, the preposition on is normally required, for example:

My birthday is <u>on January 23</u>.

Naming the day "January 23" requires the preposition on. Likewise on Monday, on New Year's Day, and so on all take on. This even applies together with the morning, the evening, and the like: as long as the day is named, the preposition is still on. For example:

The accident happened <u>on the morning of June 20</u>.

## Other prepositions

Besides time and space, some prepositions are used to express other notions. The following deserve particular attention.

### apart from: apart from, besides

This preposition has two different uses. It can be equivalent to except for, meaning "apart from ... (and nothing else)," for example:

We had no trouble on the way <u>apart from</u> (= except for) <u>a flat tire</u>.

But it can also be equivalent to in addition to, meaning "apart from ... (there is also)," for example:

<u>Apart from</u> (= in addition to) <u>a flat tire</u>, we also ran out of gas.

### as for: as for

This preposition is very handy in writing; it can be used to enumerate items one by one, or to list items in contrast, for example:

I don't blame you. <u>As for your friend</u>, he has behaved very badly.

### for: for (a purpose), for (a price), in favor of

This is a common preposition with several senses.

This document is <u>for your information</u> only.

I bought the book <u>for 200 dollars</u>.

Are you <u>for the tax cut</u>?

### of: of

This is one of the most frequent prepositions, with several uses.

These shoes are <u>made of rubber</u>.

Holland is a part <u>of Europe</u>.

I think <u>of you often</u>.

The arrival <u>of the train</u> caused a big stir on the platform.

### at: at (a price or a speed)

We joined the tour <u>at $3,000</u> per person.

The car was going <u>at 90 km</u> per hour.

### but: except

Usually but is a coordinating conjunction, but it can also be used as a preposition, equivalent to except.

No one <u>but</u> (= except) <u>a fool</u> would accept a challenge like that.

### in: in (a means of expression)

How do you say that <u>in English</u>?

He signed his name <u>in black ink</u>.

### up to: decided by; equal to (a task); engaged in (or planning)

This preposition has several different senses; judge them from the examples.

It's not <u>up to me</u> to decide.

I'm not <u>up to this job</u>.

Do you know <u>what</u> Tom has been <u>up to</u> recently?

### between / among: between, among

Grammar books generally say that between is used for two things and among for three or more. That distinction is broadly acceptable, but if you memorize it as a hard rule, there will be exceptions. The main difference between these two prepositions is not two versus many; it is that between has a position-marking function (meaning "flanked in the middle"), while among does not (it only means "among, within"). For example:

The Rhine flows <u>between France and Germany</u>.

"Between two" takes between, and this example happens to involve exactly "two," France and Germany, so between is used. But the real reason for choosing between is not "two"; it is "flanked in the middle": the Rhine lies in between France and Germany, so between is required. In other words, between is used here because it carries the position-marking function.

<u>Among the major cities</u> in the world, Shanghai is probably developing most rapidly.

"Three or more" takes among, and the major cities in the world mentioned in this example are many, so among is used. But again, the real reason for choosing among here is not "three or more"; it is that this context has no position-marking function at all and only expresses "among, within." Now consider:

Switzerland lies <u>between France, Germany, Austria, and Italy</u>.

In this example, between is followed by four countries, yet between is still required, mainly because the four countries are being used to mark a position, "flanking" Switzerland in the middle, rather than loosely meaning "among them." Since position is being marked, between should be used.

### except / except for: except

In ordinary cases, except for and except are about the same in use and meaning. But at the beginning of a sentence, only except for can be used, not except.

You can all go <u>except George</u>.

You can all go <u>except for George</u>.

<u>Except for George</u>, you can all go.

### on / about: (talking) about

If the topic of an article or a speech is handled in a formal, professional, or academic tone, the preposition on should be used. In an informal tone, about will do.

He has written a book <u>on the temples</u> of the Upper Nile.

He's talking <u>about his childhood</u>.

### over: over, about

For the topic of a "dispute," the preposition over is normally used, for example:

They're arguing <u>over their share</u> of the property.

### through: by way of, by means of

This preposition can express a route or a means, for example:

He achieved fame <u>through sheer hard work</u>.

## Abstraction of common nouns

In Chapter 2, on noun phrases, we mentioned the notion of "abstraction of common nouns," for example:

I think I'll <u>go by bus</u>.

Used in the phrase go by bus, bus is no longer a concrete, tangible vehicle; it stands for a "mode of transportation." The common noun bus is being used as an abstract noun, so the zero article is possible. But if an adjective precedes bus, forming a noun phrase, then it recovers its status as a common noun and must have a determiner, for example:

I'll go <u>by the 5:30 bus</u>.

Saying the 5:30 one specifies clearly which bus to take, so the definite article the is still needed.

He usually <u>goes to bed</u> before 11:00.

He's <u>going toward the bed</u>.

In the first sentence, go to bed is a fixed phrase meaning "retire for the night"; here the common noun bed is treated as abstract and takes the zero article. But if the preposition is changed to toward, giving go toward the bed, the meaning is no longer "go to sleep" but "walk toward that bed." In that case bed is still a common noun and must have a determiner.

I love to travel <u>by sea</u>.

I have a small house <u>by the sea</u>.

In the first sentence, travel by sea is a fixed phrase meaning "travel by way of the sea." Here sea is no longer a plain common noun: by sea expresses a mode of transportation, so sea can be treated as an abstract noun with the zero article. But if the meaning is "at the seaside," then sea is no longer an abstract "mode of transportation" but the concrete "ocean," and it needs a determiner.

## Position of prepositional phrases

As described above, prepositional phrases are very flexible modifiers that can serve as adjectives or adverbs. A prepositional phrase is normally placed after the element it modifies, as a post-modifier, for example:

I love to <u>read</u> <u>books</u> <u>with pictures in my leisure time</u>.

The prepositional phrase with pictures is used as an adjective modifying the noun books, placed after the noun. The other prepositional phrase, in my leisure time, is used as an adverb of time modifying when the reading happens, placed at the end of the sentence.

When using prepositional phrases, as with other modifiers, one must take care not to create ambiguity. For example:

The secretary <u>had to retype</u> the letter which she <u>had been working on</u> <u>under the order of the manager</u>. (poor)

This sentence is badly written, because the prepositional phrase under the order of the manager sits at the end of both the whole sentence and the relative clause, so it can modify either the main-clause verb had to retype or the relative-clause verb had been working on, producing ambiguity. It can be revised in either of two ways:

<u>Under the order of the manager</u>, the secretary <u>had to retype</u> the letter which she had been working on. (Under the manager's order, the secretary had to retype the letter she had been working on.)

The secretary had to retype the letter which, <u>under the order of the manager</u>, she <u>had been working on</u>. (The secretary had to retype the letter she had been working on under the manager's order.)

Consider one more example:

I <u>saw</u> that many houses <u>were destroyed</u> by fire <u>on TV</u>. (poor)

The prepositional phrase by fire, placed after the passive verb were destroyed, modifies it with a clear meaning ("destroyed by fire"). But the other prepositional phrase, on TV, is an adverb of place sitting at the end of both the whole sentence and the noun clause (the that-clause), so it can modify either the main-clause verb saw or the noun-clause verb were destroyed, producing ambiguity. It should be revised as:

<u>On TV</u> I <u>saw</u> that many houses were destroyed by fire.

Moving the prepositional phrase on TV to the front of the sentence means it can only modify the main-clause verb saw, and the meaning becomes clear.
25.599244
294
0.727367
eng_Latn
0.60409
7a9e3ecddbcd6ead9635ade37f1b2dd553a64deb
1,040
md
Markdown
README.md
havrikov/mexcounter
53697c5c90f755aa531ec221255c1be4b3f1357a
[ "MIT" ]
1
2021-01-05T12:42:00.000Z
2021-01-05T12:42:00.000Z
README.md
havrikov/mexcounter
53697c5c90f755aa531ec221255c1be4b3f1357a
[ "MIT" ]
null
null
null
README.md
havrikov/mexcounter
53697c5c90f755aa531ec221255c1be4b3f1357a
[ "MIT" ]
null
null
null
# MEXCounter

A very simple Java agent that counts **m**ethod **ex**ecutions and reports them in a CSV file after the targeted process has finished.

It works by instrumenting the bytecode of all non-abstract methods of classes matching a given package prefix. The method execution counts are collected at runtime in a `ConcurrentHashMap`, whose contents are dumped into a CSV file when the JVM finishes.

## Usage

Run it with

```bash
java -javaagent:path-to-mexcounter.jar=package.prefix,output.csv -jar path-to-target.jar
```

Where

- `package.prefix` is the prefix of the package of the classes for which you want the method executions to be counted,
- `output.csv` is the path to a file the findings will be written into once the JVM exits.

The targeted program may be compiled for any Java version, but you must run it on at least a Java 8 JVM because we rely on the `Map::merge` method.

## Building

Build the agent jar with

```bash
./gradlew build
```

This will create a `mexcounter-1.0.0.jar` inside the `build/libs/` directory.
35.862069
134
0.763462
eng_Latn
0.999092
7a9f25693294ee0b8e67cdaf6aee801a7507dc3c
5,809
md
Markdown
articles/spring-cloud/disaster-recovery.md
niklasloow/azure-docs.sv-se
31144fcc30505db1b2b9059896e7553bf500e4dc
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/spring-cloud/disaster-recovery.md
niklasloow/azure-docs.sv-se
31144fcc30505db1b2b9059896e7553bf500e4dc
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/spring-cloud/disaster-recovery.md
niklasloow/azure-docs.sv-se
31144fcc30505db1b2b9059896e7553bf500e4dc
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure Spring Cloud geo-disaster recovery | Microsoft Docs
description: Learn how to protect your Spring Cloud application from regional outages
author: bmitchell287
ms.service: spring-cloud
ms.topic: conceptual
ms.date: 10/24/2019
ms.author: brendm
ms.custom: devx-track-java
ms.openlocfilehash: 19e022073f43548a91fad76cb380a75205237bbd
ms.sourcegitcommit: 53acd9895a4a395efa6d7cd41d7f78e392b9cfbe
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 09/22/2020
ms.locfileid: "90892552"
---

# <a name="azure-spring-cloud-disaster-recovery"></a>Azure Spring Cloud disaster recovery

**This article applies to:** ✔️ Java ✔️ C#

This article describes some strategies you can use to protect your Azure Spring Cloud applications from downtime. A region or a data center may experience downtime caused by regional disasters, but careful planning can minimize the impact on your customers.

## <a name="plan-your-application-deployment"></a>Plan your application deployment

Azure Spring Cloud applications run in a specified region. Azure operates in a number of geographies around the world. An Azure geography is a defined area of the world that contains at least one Azure region. An Azure region is an area within a geography that contains one or more data centers.

Each Azure region is paired with another region within the same geography; together they form a regional pair. Azure serializes platform updates (planned maintenance) across regional pairs, ensuring that only one region in each pair is updated at a time. In the event of an outage affecting multiple regions, at least one region in each pair is prioritized for recovery.

Ensuring high availability and protection from disasters requires that you deploy your cloud applications to multiple regions. Azure provides a list of [paired regions](../best-practices-availability-paired-regions.md) so that you can plan your Spring Cloud deployments to regional pairs. We recommend that you consider three key factors when designing your microservice architecture: region availability, Azure paired regions, and service availability.

* Region availability: choose a geography close to your users to minimize network latency and transfer time.
* Azure paired regions: choose paired regions within the chosen geography to ensure coordinated platform updates and prioritized recovery if needed.
* Service availability: decide whether your paired regions should run hot/hot, hot/warm, or hot/cold.

## <a name="use-azure-traffic-manager-to-route-traffic"></a>Use Azure Traffic Manager to route traffic

[Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) provides DNS-based load balancing and can distribute network traffic across multiple regions. Use Azure Traffic Manager to route customers to the Azure Spring Cloud service instance closest to them. For best performance and redundancy, route all application traffic through Azure Traffic Manager before sending it to your Azure Spring Cloud service.

If you have Azure Spring Cloud applications in multiple regions, use Azure Traffic Manager to control the flow of traffic to your applications in each region. Define an Azure Traffic Manager endpoint for each service using the service IP address. Customers should connect to an Azure Traffic Manager DNS name that points to the Azure Spring Cloud service. Azure Traffic Manager load balances traffic across the defined endpoints. If a disaster strikes a data center, Azure Traffic Manager directs traffic from that region to its pair, ensuring service continuity.

## <a name="create-azure-traffic-manager-for-azure-spring-cloud"></a>Create Azure Traffic Manager for Azure Spring Cloud

1. Create an Azure Spring Cloud instance in two different regions. You need two service instances of Azure Spring Cloud, deployed in two different regions (East US and West Europe). Launch an existing Azure Spring Cloud application using the Azure portal to create the two service instances. Each will serve as the primary and the failover endpoint for traffic.

   **Details of the two service instances:**

   | Service name | Location | Application |
   |--|--|--|
   | service-sample-a | East US | gateway / auth-service / account-service |
   | service-sample-b | West Europe | gateway / auth-service / account-service |

2. Set up a custom domain for the services. Follow the [custom domain document](spring-cloud-tutorial-custom-domain.md) to set up a custom domain for these two existing service instances. Once configured, both service instances will be bound to the custom domain bcdr-test.contoso.com.

3. Create a Traffic Manager and two endpoints: [Create a Traffic Manager profile using the Azure portal](https://docs.microsoft.com/azure/traffic-manager/quickstart-create-traffic-manager-profile). Here is the Traffic Manager profile:

   * Traffic Manager DNS name: `http://asc-bcdr.trafficmanager.net`
   * Endpoint profiles:

   | Profile | Type | Target | Priority | Custom header settings |
   |--|--|--|--|--|
   | Endpoint A profile | External endpoint | service-sample-a.asc-test.net | 1 | host: bcdr-test.contoso.com |
   | Endpoint B profile | External endpoint | service-sample-b.asc-test.net | 2 | host: bcdr-test.contoso.com |

4. Create a CNAME record in the DNS zone: bcdr-test.contoso.com CNAME asc-bcdr.trafficmanager.net.

5. The environment is now fully configured. Customers should be able to access the app via bcdr-test.contoso.com.

## <a name="next-steps"></a>Next steps

* [Quickstart: Deploy your first Azure Spring Cloud application](spring-cloud-quickstart.md)
81.816901
677
0.799105
swe_Latn
0.999562
7a9f31d39582d5e8899b2748fb6037dec82d6900
17,874
md
Markdown
docs/modules/index.ts.md
kylegoetz/hyper-ts
8edf7e13829c9aa54a8956fca64528261dac5b4b
[ "MIT" ]
null
null
null
docs/modules/index.ts.md
kylegoetz/hyper-ts
8edf7e13829c9aa54a8956fca64528261dac5b4b
[ "MIT" ]
null
null
null
docs/modules/index.ts.md
kylegoetz/hyper-ts
8edf7e13829c9aa54a8956fca64528261dac5b4b
[ "MIT" ]
null
null
null
--- title: index.ts nav_order: 2 parent: Modules --- # index overview Added in v0.5.0 --- <h2 class="text-delta">Table of contents</h2> - [BodyOpen (interface)](#bodyopen-interface) - [Connection (interface)](#connection-interface) - [CookieOptions (interface)](#cookieoptions-interface) - [HeadersOpen (interface)](#headersopen-interface) - [Middleware (interface)](#middleware-interface) - [ResponseEnded (interface)](#responseended-interface) - [StatusOpen (interface)](#statusopen-interface) - [MediaType (type alias)](#mediatype-type-alias) - [Status (type alias)](#status-type-alias) - [URI (type alias)](#uri-type-alias) - [MediaType](#mediatype) - [Status](#status) - [URI](#uri) - [alt](#alt) - [ap](#ap) - [apFirst](#apfirst) - [apSecond](#apsecond) - [bimap](#bimap) - [chain](#chain) - [chainFirst](#chainfirst) - [clearCookie](#clearcookie) - [closeHeaders](#closeheaders) - [contentType](#contenttype) - [cookie](#cookie) - [decodeBody](#decodebody) - [decodeHeader](#decodeheader) - [decodeMethod](#decodemethod) - [decodeParam](#decodeparam) - [decodeParams](#decodeparams) - [decodeQuery](#decodequery) - [end](#end) - [evalMiddleware](#evalmiddleware) - [execMiddleware](#execmiddleware) - [filterOrElse](#filterorelse) - [flatten](#flatten) - [fromConnection](#fromconnection) - [fromEither](#fromeither) - [fromIOEither](#fromioeither) - [fromOption](#fromoption) - [fromPredicate](#frompredicate) - [fromTaskEither](#fromtaskeither) - [gets](#gets) - [header](#header) - [ichain](#ichain) - [iof](#iof) - [json](#json) - [left](#left) - [leftIO](#leftio) - [leftTask](#lefttask) - [map](#map) - [mapLeft](#mapleft) - [middleware](#middleware) - [modifyConnection](#modifyconnection) - [orElse](#orelse) - [redirect](#redirect) - [right](#right) - [rightIO](#rightio) - [rightTask](#righttask) - [send](#send) - [status](#status) - [tryCatch](#trycatch) --- # BodyOpen (interface) Type indicating that headers have already been sent, and that the body is currently streaming **Signature** ```ts export interface BodyOpen { readonly BodyOpen: unique symbol } ``` Added in v0.5.0 # Connection (interface) A `Connection`, models the entirety of a connection between the HTTP server and the user agent, both request and response. 
State changes are tracked by the phantom type `S` **Signature** ```ts export interface Connection<S> { /** * @since 0.5.0 */ readonly _S: S readonly getRequest: () => IncomingMessage readonly getBody: () => unknown readonly getHeader: (name: string) => unknown readonly getParams: () => unknown readonly getQuery: () => unknown readonly getOriginalUrl: () => string readonly getMethod: () => string readonly setCookie: ( this: Connection<HeadersOpen>, name: string, value: string, options: CookieOptions ) => Connection<HeadersOpen> readonly clearCookie: (this: Connection<HeadersOpen>, name: string, options: CookieOptions) => Connection<HeadersOpen> readonly setHeader: (this: Connection<HeadersOpen>, name: string, value: string) => Connection<HeadersOpen> readonly setStatus: (this: Connection<StatusOpen>, status: Status) => Connection<HeadersOpen> readonly setBody: (this: Connection<BodyOpen>, body: unknown) => Connection<ResponseEnded> readonly endResponse: (this: Connection<BodyOpen>) => Connection<ResponseEnded> } ``` Added in v0.5.0 # CookieOptions (interface) **Signature** ```ts export interface CookieOptions { readonly expires?: Date readonly domain?: string readonly httpOnly?: boolean readonly maxAge?: number readonly path?: string readonly sameSite?: boolean | 'strict' | 'lax' readonly secure?: boolean readonly signed?: boolean } ``` Added in v0.5.0 # HeadersOpen (interface) Type indicating that headers are ready to be sent, i.e. the body streaming has not been started **Signature** ```ts export interface HeadersOpen { readonly HeadersOpen: unique symbol } ``` Added in v0.5.0 # Middleware (interface) A middleware is an indexed monadic action transforming one `Connection` to another `Connection`. It operates in the `TaskEither` monad, and is indexed by `I` and `O`, the input and output `Connection` types of the middleware action. 
**Signature** ```ts export interface Middleware<I, O, E, A> { (c: Connection<I>): TE.TaskEither<E, [A, Connection<O>]> } ``` Added in v0.5.0 # ResponseEnded (interface) Type indicating that headers have already been sent, and that the body stream, and thus the response, is finished **Signature** ```ts export interface ResponseEnded { readonly ResponseEnded: unique symbol } ``` Added in v0.5.0 # StatusOpen (interface) Type indicating that the status-line is ready to be sent **Signature** ```ts export interface StatusOpen { readonly StatusOpen: unique symbol } ``` Added in v0.5.0 # MediaType (type alias) **Signature** ```ts export type MediaType = typeof MediaType[keyof typeof MediaType] ``` Added in v0.5.0 # Status (type alias) **Signature** ```ts export type Status = typeof Status[keyof typeof Status] ``` Added in v0.5.0 # URI (type alias) **Signature** ```ts export type URI = typeof URI ``` Added in v0.5.0 # MediaType Adapted from https://github.com/purescript-contrib/purescript-media-types **Signature** ```ts export declare const MediaType: { readonly applicationFormURLEncoded: 'application/x-www-form-urlencoded' readonly applicationJSON: 'application/json' readonly applicationJavascript: 'application/javascript' readonly applicationOctetStream: 'application/octet-stream' readonly applicationXML: 'application/xml' readonly imageGIF: 'image/gif' readonly imageJPEG: 'image/jpeg' readonly imagePNG: 'image/png' readonly multipartFormData: 'multipart/form-data' readonly textCSV: 'text/csv' readonly textHTML: 'text/html' readonly textPlain: 'text/plain' readonly textXML: 'text/xml' } ``` Added in v0.5.0 # Status **Signature** ```ts export declare const Status: { readonly Continue: 100 readonly SwitchingProtocols: 101 readonly Processing: 102 readonly EarlyHints: 103 readonly OK: 200 readonly Created: 201 readonly Accepted: 202 readonly NonAuthoritativeInformation: 203 readonly NoContent: 204 readonly ResetContent: 205 readonly PartialContent: 206 readonly MultiStatus: 207 readonly AlreadyReported: 208 readonly IMUsed: 226 readonly MultipleChoices: 300 readonly MovedPermanently: 301 readonly Found: 302 readonly SeeOther: 303 readonly NotModified: 304 readonly UseProxy: 305 readonly SwitchProxy: 306 readonly TemporaryRedirect: 307 readonly PermanentRedirect: 308 readonly BadRequest: 400 readonly Unauthorized: 401 readonly PaymentRequired: 402 readonly Forbidden: 403 readonly NotFound: 404 readonly MethodNotAllowed: 405 readonly NotAcceptable: 406 readonly ProxyAuthenticationRequired: 407 readonly RequestTimeout: 408 readonly Conflict: 409 readonly Gone: 410 readonly LengthRequired: 411 readonly PreconditionFailed: 412 readonly PayloadTooLarge: 413 readonly URITooLong: 414 readonly UnsupportedMediaType: 415 readonly RangeNotSatisfiable: 416 readonly ExpectationFailed: 417 readonly Teapot: 418 readonly MisdirectedRequest: 421 readonly UnprocessableEntity: 422 readonly Locked: 423 readonly FailedDependency: 424 readonly TooEarly: 425 readonly UpgradeRequired: 426 readonly PreconditionRequired: 428 readonly TooManyRequests: 429 readonly RequestHeaderFieldsTooLarge: 431 readonly UnavailableForLegalReasons: 451 readonly InternalServerError: 500 readonly NotImplemented: 501 readonly BadGateway: 502 readonly ServiceUnavailable: 503 readonly GatewayTimeout: 504 readonly HTTPVersionNotSupported: 505 readonly VariantAlsoNegotiates: 506 readonly InsufficientStorage: 507 readonly LoopDetected: 508 readonly NotExtended: 510 readonly NetworkAuthenticationRequired: 511 } ``` Added in v0.5.0 # URI **Signature** 
```ts export declare const URI: 'Middleware' ``` Added in v0.5.0 # alt **Signature** ```ts export declare const alt: <R, E, A>( that: () => Middleware<R, R, E, A> ) => (fa: Middleware<R, R, E, A>) => Middleware<R, R, E, A> ``` Added in v0.5.0 # ap **Signature** ```ts export declare const ap: <R, E, A>( fa: Middleware<R, R, E, A> ) => <B>(fab: Middleware<R, R, E, (a: A) => B>) => Middleware<R, R, E, B> ``` Added in v0.5.0 # apFirst **Signature** ```ts export declare const apFirst: <R, E, B>( fb: Middleware<R, R, E, B> ) => <A>(fa: Middleware<R, R, E, A>) => Middleware<R, R, E, A> ``` Added in v0.5.0 # apSecond **Signature** ```ts export declare const apSecond: <R, E, B>( fb: Middleware<R, R, E, B> ) => <A>(fa: Middleware<R, R, E, A>) => Middleware<R, R, E, B> ``` Added in v0.5.0 # bimap **Signature** ```ts export declare const bimap: <E, G, A, B>( f: (e: E) => G, g: (a: A) => B ) => <R>(fa: Middleware<R, R, E, A>) => Middleware<R, R, G, B> ``` Added in v0.5.0 # chain **Signature** ```ts export declare const chain: <R, E, A, B>( f: (a: A) => Middleware<R, R, E, B> ) => (ma: Middleware<R, R, E, A>) => Middleware<R, R, E, B> ``` Added in v0.5.0 # chainFirst **Signature** ```ts export declare const chainFirst: <R, E, A, B>( f: (a: A) => Middleware<R, R, E, B> ) => (ma: Middleware<R, R, E, A>) => Middleware<R, R, E, A> ``` Added in v0.5.0 # clearCookie Returns a middleware that clears the cookie `name` **Signature** ```ts export declare function clearCookie<E = never>( name: string, options: CookieOptions ): Middleware<HeadersOpen, HeadersOpen, E, void> ``` Added in v0.5.0 # closeHeaders Returns a middleware that changes the connection status to `BodyOpen` **Signature** ```ts export declare function closeHeaders<E = never>(): Middleware<HeadersOpen, BodyOpen, E, void> ``` Added in v0.5.0 # contentType Returns a middleware that sets the given `mediaType` **Signature** ```ts export declare function contentType<E = never>(mediaType: MediaType): Middleware<HeadersOpen, HeadersOpen, E, void> ``` Added in v0.5.0 # cookie Returns a middleware that sets the cookie `name` to `value`, with the given `options` **Signature** ```ts export declare function cookie<E = never>( name: string, value: string, options: CookieOptions ): Middleware<HeadersOpen, HeadersOpen, E, void> ``` Added in v0.5.0 # decodeBody Returns a middleware that tries to decode `connection.getBody()` **Signature** ```ts export declare function decodeBody<E, A>(f: (input: unknown) => Either<E, A>): Middleware<StatusOpen, StatusOpen, E, A> ``` Added in v0.5.0 # decodeHeader Returns a middleware that tries to decode `connection.getHeader(name)` **Signature** ```ts export declare function decodeHeader<E, A>( name: string, f: (input: unknown) => Either<E, A> ): Middleware<StatusOpen, StatusOpen, E, A> ``` Added in v0.5.0 # decodeMethod Returns a middleware that tries to decode `connection.getMethod()` **Signature** ```ts export declare function decodeMethod<E, A>( f: (method: string) => Either<E, A> ): Middleware<StatusOpen, StatusOpen, E, A> ``` Added in v0.5.0 # decodeParam Returns a middleware that tries to decode `connection.getParams()[name]` **Signature** ```ts export declare function decodeParam<E, A>( name: string, f: (input: unknown) => Either<E, A> ): Middleware<StatusOpen, StatusOpen, E, A> ``` Added in v0.5.0 # decodeParams Returns a middleware that tries to decode `connection.getParams()` **Signature** ```ts export declare function decodeParams<E, A>( f: (input: unknown) => Either<E, A> ): Middleware<StatusOpen, StatusOpen, E, A> 
``` Added in v0.5.0 # decodeQuery Returns a middleware that tries to decode `connection.getQuery()` **Signature** ```ts export declare function decodeQuery<E, A>(f: (input: unknown) => Either<E, A>): Middleware<StatusOpen, StatusOpen, E, A> ``` Added in v0.5.0 # end Returns a middleware that ends the response without sending any response body **Signature** ```ts export declare function end<E = never>(): Middleware<BodyOpen, ResponseEnded, E, void> ``` Added in v0.5.0 # evalMiddleware **Signature** ```ts export declare function evalMiddleware<I, O, E, A>(ma: Middleware<I, O, E, A>, c: Connection<I>): TE.TaskEither<E, A> ``` Added in v0.5.0 # execMiddleware **Signature** ```ts export declare function execMiddleware<I, O, E, A>( ma: Middleware<I, O, E, A>, c: Connection<I> ): TE.TaskEither<E, Connection<O>> ``` Added in v0.5.0 # filterOrElse **Signature** ```ts export declare const filterOrElse: { <E, A, B>(refinement: Refinement<A, B>, onFalse: (a: A) => E): <R>( ma: Middleware<R, R, E, A> ) => Middleware<R, R, E, B> <E, A>(predicate: Predicate<A>, onFalse: (a: A) => E): <R>(ma: Middleware<R, R, E, A>) => Middleware<R, R, E, A> } ``` Added in v0.5.0 # flatten **Signature** ```ts export declare const flatten: <R, E, A>(mma: Middleware<R, R, E, Middleware<R, R, E, A>>) => Middleware<R, R, E, A> ``` Added in v0.5.0 # fromConnection **Signature** ```ts export declare function fromConnection<I = StatusOpen, E = never, A = never>( f: (c: Connection<I>) => Either<E, A> ): Middleware<I, I, E, A> ``` Added in v0.5.0 # fromEither **Signature** ```ts export declare const fromEither: <R, E, A>(ma: Either<E, A>) => Middleware<R, R, E, A> ``` Added in v0.5.0 # fromIOEither **Signature** ```ts export declare function fromIOEither<I = StatusOpen, E = never, A = never>(fa: IOEither<E, A>): Middleware<I, I, E, A> ``` Added in v0.5.0 # fromOption **Signature** ```ts export declare const fromOption: <E>(onNone: () => E) => <R, A>(ma: Option<A>) => Middleware<R, R, E, A> ``` Added in v0.5.0 # fromPredicate **Signature** ```ts export declare const fromPredicate: { <E, A, B>(refinement: Refinement<A, B>, onFalse: (a: A) => E): <U>(a: A) => Middleware<U, U, E, B> <E, A>(predicate: Predicate<A>, onFalse: (a: A) => E): <R>(a: A) => Middleware<R, R, E, A> } ``` Added in v0.5.0 # fromTaskEither **Signature** ```ts export declare function fromTaskEither<I = StatusOpen, E = never, A = never>( fa: TE.TaskEither<E, A> ): Middleware<I, I, E, A> ``` Added in v0.5.0 # gets **Signature** ```ts export declare function gets<I = StatusOpen, E = never, A = never>(f: (c: Connection<I>) => A): Middleware<I, I, E, A> ``` Added in v0.5.0 # header Returns a middleware that writes the given header **Signature** ```ts export declare function header<E = never>(name: string, value: string): Middleware<HeadersOpen, HeadersOpen, E, void> ``` Added in v0.5.0 # ichain **Signature** ```ts export declare function ichain<A, O, Z, E, B>( f: (a: A) => Middleware<O, Z, E, B> ): <I>(ma: Middleware<I, O, E, A>) => Middleware<I, Z, E, B> ``` Added in v0.5.0 # iof **Signature** ```ts export declare function iof<I = StatusOpen, O = StatusOpen, E = never, A = never>(a: A): Middleware<I, O, E, A> ``` Added in v0.5.0 # json Returns a middleware that sends `body` as JSON **Signature** ```ts export declare function json<E>( body: unknown, onError: (reason: unknown) => E ): Middleware<HeadersOpen, ResponseEnded, E, void> ``` Added in v0.5.0 # left **Signature** ```ts export declare function left<I = StatusOpen, E = never, A = never>(e: E): Middleware<I, I, E, 
A> ``` Added in v0.5.0 # leftIO **Signature** ```ts export declare function leftIO<I = StatusOpen, E = never, A = never>(fe: IO<E>): Middleware<I, I, E, A> ``` Added in v0.5.0 # leftTask **Signature** ```ts export declare function leftTask<I = StatusOpen, E = never, A = never>(te: Task<E>): Middleware<I, I, E, A> ``` Added in v0.5.0 # map **Signature** ```ts export declare const map: <A, B>(f: (a: A) => B) => <R, E>(fa: Middleware<R, R, E, A>) => Middleware<R, R, E, B> ``` Added in v0.5.0 # mapLeft **Signature** ```ts export declare const mapLeft: <E, G>(f: (e: E) => G) => <R, A>(fa: Middleware<R, R, E, A>) => Middleware<R, R, G, A> ``` Added in v0.5.0 # middleware **Signature** ```ts export declare const middleware: Monad3<'Middleware'> & Alt3<'Middleware'> & Bifunctor3<'Middleware'> & MonadThrow3<'Middleware'> & MonadTask3<'Middleware'> ``` Added in v0.5.0 # modifyConnection **Signature** ```ts export declare function modifyConnection<I, O, E>(f: (c: Connection<I>) => Connection<O>): Middleware<I, O, E, void> ``` Added in v0.5.0 # orElse **Signature** ```ts export declare function orElse<E, I, O, M, A>( f: (e: E) => Middleware<I, O, M, A> ): (ma: Middleware<I, O, E, A>) => Middleware<I, O, M, A> ``` Added in v0.5.0 # redirect Returns a middleware that sends a redirect to `uri` **Signature** ```ts export declare function redirect<E = never>(uri: string): Middleware<StatusOpen, HeadersOpen, E, void> ``` Added in v0.5.0 # right **Signature** ```ts export declare function right<I = StatusOpen, E = never, A = never>(a: A): Middleware<I, I, E, A> ``` Added in v0.5.0 # rightIO **Signature** ```ts export declare function rightIO<I = StatusOpen, E = never, A = never>(fa: IO<A>): Middleware<I, I, E, A> ``` Added in v0.5.0 # rightTask **Signature** ```ts export declare function rightTask<I = StatusOpen, E = never, A = never>(fa: Task<A>): Middleware<I, I, E, A> ``` Added in v0.5.0 # send Returns a middleware that sends `body` as response body **Signature** ```ts export declare function send<E = never>(body: string): Middleware<BodyOpen, ResponseEnded, E, void> ``` Added in v0.5.0 # status Returns a middleware that writes the response status **Signature** ```ts export declare function status<E = never>(status: Status): Middleware<StatusOpen, HeadersOpen, E, void> ``` Added in v0.5.0 # tryCatch **Signature** ```ts export declare function tryCatch<I = StatusOpen, E = never, A = never>( f: () => Promise<A>, onRejected: (reason: unknown) => E ): Middleware<I, I, E, A> ``` Added in v0.5.0
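Tying a few of these combinators together, here is a small, hypothetical usage sketch (assuming `io-ts` for the query decoder and the `Status` codes exported by hyper-ts; the route shape and names below are ours, not part of this module's API):

```ts
import * as t from 'io-ts'
import * as H from 'hyper-ts'
import { pipe } from 'fp-ts/lib/pipeable'

// Decoder for the expected query string shape, e.g. /hello?name=Ada (assumed shape)
const Query = t.strict({ name: t.string })

// StatusOpen -> ResponseEnded: decode the query, then write a status and a JSON body
const hello: H.Middleware<H.StatusOpen, H.ResponseEnded, string, void> = pipe(
  H.decodeQuery(Query.decode),          // stays in StatusOpen, may fail with t.Errors
  H.mapLeft(() => 'invalid query'),     // normalize the error channel to string
  H.ichain((q) =>
    pipe(
      H.status<string>(H.Status.OK),    // StatusOpen -> HeadersOpen
      H.ichain(() => H.json({ greeting: `hello ${q.name}` }, () => 'cannot encode body')) // -> ResponseEnded
    )
  )
)
```

The indexed types do the bookkeeping here: `decodeQuery` leaves the connection in `StatusOpen`, `status` advances it to `HeadersOpen`, and `json` finishes at `ResponseEnded`, so forgetting a step is a compile-time error.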
19.449402
120
0.679982
eng_Latn
0.734048
7a9f8a634ac56f6d6a009e48d49f4d7b9202b4dd
690
md
Markdown
UnityProject/Packages/net.droidman.ohmyframework.core/README.md
ouyangwenyuan/OhMyFramework
a9dc17243518906d63a1d65d68afc8a9505ab2d7
[ "Unlicense" ]
2
2020-12-16T05:25:36.000Z
2021-05-29T07:28:12.000Z
UnityProject/Packages/net.droidman.ohmyframework.core/README.md
ouyangwenyuan/OhMyFramework
a9dc17243518906d63a1d65d68afc8a9505ab2d7
[ "Unlicense" ]
null
null
null
UnityProject/Packages/net.droidman.ohmyframework.core/README.md
ouyangwenyuan/OhMyFramework
a9dc17243518906d63a1d65d68afc8a9505ab2d7
[ "Unlicense" ]
null
null
null
# Framework Overview

Documentation for ohmyframework. This is a summary of lessons learned from my personal work; it may not strictly qualify as a framework, and I am not sure what else to call it, so this name will do for now.

## Why? Why build this

At work I am always busy, and every day feels like building new features, yet each new feature feels like something I have built before. After untangling my thoughts and reviewing my past code, I could always find similar logic and code. My memory keeps getting worse, crowded out by too many details. With my baby just born and more than two weeks of paternity leave at home, I took the chance to sort out the tangle in my head and shape it into my own framework, to make future work easier.

## What? What is this for

Basic modules implemented so far:

1. Framework core: the overall framework structure, a manager for all later modules that owns their lifecycle. Modules are added to the management list automatically.
2. Toolkit: mainly utility classes, a logging class, extension classes for common types, configuration classes, constant classes, a very simple asynchronous task executor, and a type-based event class.
3. Custom editor extension panel framework: integrates editor plugins and tools into a single panel; tools are added to the panel automatically.

Planned and in progress:

Since I mainly work on puzzle + level-based + quest/story mash-up games, which are implemented loosely, change constantly, emphasize UI presentation, must stay maintainable, and should be beginner-friendly, I am organizing the following major blocks:

1. Resource management: responsible for downloading, updating, loading, and recycling game resources.
2. UI management: responsible for presenting resources, layering, guidance, story, effects, recycling, data communication, and so on.
3. Storage management: responsible for user data storage, upload/download, update/overwrite, display, and refresh.
4. Network management: responsible for presenting and uploading data interfaces, and for downloading and uploading resources and configurations.
5. Configuration management: configuration conversion, setup, and updates.
6. Audio management: background music and sound effects.
7. ...
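As a rough illustration of the "type-based event class" mentioned in the toolkit, a minimal C# sketch of such an event system might look like this (the names below are illustrative, not the framework's actual API):

```csharp
using System;
using System.Collections.Generic;

// A minimal type-based event bus: handlers subscribe by event type,
// so publishing an event instance dispatches on typeof(T).
public static class TypeEventSystem
{
    static readonly Dictionary<Type, Delegate> handlers = new Dictionary<Type, Delegate>();

    public static void Subscribe<T>(Action<T> handler)
    {
        handlers.TryGetValue(typeof(T), out var existing);
        handlers[typeof(T)] = (existing as Action<T>) + handler; // null-safe delegate combine
    }

    public static void Unsubscribe<T>(Action<T> handler)
    {
        if (handlers.TryGetValue(typeof(T), out var existing))
            handlers[typeof(T)] = (existing as Action<T>) - handler;
    }

    public static void Publish<T>(T evt)
    {
        if (handlers.TryGetValue(typeof(T), out var d) && d is Action<T> action)
            action(evt); // invoke every handler registered for T
    }
}

// Hypothetical usage: define an event type, subscribe, publish.
public struct LevelCompleted { public int LevelId; }
// TypeEventSystem.Subscribe<LevelCompleted>(e => Console.WriteLine($"Level {e.LevelId} done"));
// TypeEventSystem.Publish(new LevelCompleted { LevelId = 3 });
```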
20.294118
129
0.802899
yue_Hant
0.628755
7aa08e7ef77557751e03fe1f8600c958b96f4ae0
10,997
markdown
Markdown
_posts/2019-06-12-cerebro.markdown
ADALabUCSD/blog.adalabucsd.github.io
6f75bf66d64c10f1e6c0d5bf200cf8ba59189085
[ "Apache-2.0" ]
1
2021-08-15T16:53:42.000Z
2021-08-15T16:53:42.000Z
_posts/2019-06-12-cerebro.markdown
ADALabUCSD/research-blog
6f75bf66d64c10f1e6c0d5bf200cf8ba59189085
[ "Apache-2.0" ]
null
null
null
_posts/2019-06-12-cerebro.markdown
ADALabUCSD/research-blog
6f75bf66d64c10f1e6c0d5bf200cf8ba59189085
[ "Apache-2.0" ]
null
null
null
---
title: "Cerebro: Efficient and Reproducible Model Selection for Deep Learning"
layout: post
date: 2019-06-12 15:03:56 +0800
category: research
tags: [model selection, deep learning]
comments: true
permalink: /:title.html
author: Supun Nakandala and Yuhao Zhang
---
<link rel="stylesheet" href="{{- 'assets/css/accordion.css' | relative_url -}}">

*This post is about our project Cerebro. Please check the home page [here](https://adalabucsd.github.io/cerebro.html). You can also find more details in our [workshop paper](https://adalabucsd.github.io/papers/2019_Cerebro_DEEM.pdf)*.

Feel like deep learning today? Let's say you have already purchased one or several beefy machines, set up everything, and found yourself a large enough dataset for the application. It is high time we started training the deep learning model.

# The headache of model selection

An immediate problem arises: which neural network architecture should you use? Assume you want convolutional neural networks (CNNs) for your applications. There are AlexNet, VGGNet, InceptionNet, ResNet, Inception-ResNet, MobileNet, SqueezeNet ...

Say after several hours you finally place your bet, yet you encounter another headache: parameter tuning. Even with just vanilla stochastic gradient descent (SGD), parameter tuning can still be non-trivial. How do you set the batch size? How large should you set the learning rate? Too large and it fails to converge; too small and the time cost and your electricity bill become ludicrous. What kind of regularization do you use? If plain L2, how large should the regularization be? If drop-out, how large should the drop-out proportion be? The list of questions goes on and on.

<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/jackie_chan.jpg" width="300" alt="jackie">
</p>

This huge bottleneck of experimenting with model architectures and tuning parameters is called model selection. Unfortunately, there is no axiom for it. It is very hard, if possible at all, to determine which configuration gives you the best performance before trying it out. Chances are you would need to train a dozen, if not hundreds, of models and benchmark them before reaching the optimal choice. Alas, favorite deep net tools like TensorFlow focus on the latency of training *one model at a time*, not on throughput: *how many configurations you can try out in unit time*. Several parallel execution approaches have been studied. Each has some practical limitations.

<!--Suddenly the throughput of model selection, *how many training configurations are evaluated per unit time*, becomes vital. Higher throughput potentially means reaching better accuracy sooner. It can also reduce total resource costs by improving resource utilization.-->

# The ugliness of un-reproducibility

Reproducibility of the training, which means that given the same configuration, the software must yield the same model in the end, is often neglected by web companies. However, it is one of the *must-haves* of science and a showstopper for many enterprises and domain scientists. Among the various approaches for accelerating deep net training, some are physically un-reproducible, meaning that even if all the pseudo-randomness is pinned down, you may still end up with different results. But don't worry, Cerebro is here to the rescue.

# Review of existing landscapes

In this section, I will briefly introduce two existing forms of parallelism for deep learning training: task and data parallelism. Feel free to skip this section if you are already familiar with these details.
<!-- Note there is a third type of parallelism known as model parallelism, which is orthogonal to our interest.-->
<ul class="accordion">
<li>
<a class="toggle" href="javascript:void(0);">Task parallelism</a>
<ul class="inner">
<li>
This is the most straightforward type of parallelism. You use a cluster of machines and dispatch the configurations to them. Every worker has the complete dataset, so there is no communication between the workers during the training.
<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/tp.gif" width="500" alt="tp">
</p>
<b>Pros:</b> Reproducible. Best statistical convergence efficiency. Zero communication overheads while training.<br>
<b>Cons:</b> Storage wastage due to data replication. Sometimes the dataset can be so big that it no longer fits in single-node memory. Down-sampling the dataset will risk over-fitting.
</li>
</ul>
</li>
<li>
<a class="toggle" href="javascript:void(0);">Data parallelism</a>
<ul class="inner">
<li>
With this type of parallelism, the dataset is partitioned and distributed to the workers. One model configuration is submitted to the cluster at a time. The workers train the model on their partition and update the global weights. I will introduce three methods that reside in this regime: bulk synchronous parallel, parameter server, and decentralized synchronous parallelism.
</li>
<a href="#" class="toggle">Bulk synchronous parallel (BSP)</a>
<ul class="inner">
<li>
BSP (a.k.a. model averaging) systems such as Spark and TensorFlow with model averaging parallelize one model at a time. They broadcast the model, train models independently on each worker’s partition, collect all models on the master, average the weights, and repeat this every epoch. Alas, this approach converges poorly, so the application of this method is limited.
<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/bsp.gif" width="500" alt="bsp">
</p>
<b>Pros:</b> Lowest communication overhead for data-parallel training. Good data scalability.<br>
<b>Cons:</b> Poor statistical convergence.
</li>
</ul>
<li>
<a href="#" class="toggle">Parameter server (PS)</a>
<ul class="inner">
<li>
PS approaches also parallelize one model at a time but at a finer granularity than BSP. Workers push gradient updates to the master at each mini-batch and pull the latest model when ready. If the master waits to collect all updates per cycle, it is synchronous; otherwise, if the master continues training whenever it gets an update, it is asynchronous. Asynchronous PS is highly scalable but unreproducible; it often has poorer convergence than synchronous PS due to stale updates, but synchronous PS has higher overhead for synchronization. All PS-style approaches have high communication costs compared to BSP due to their centralized all-to-one communications at each mini-batch.
<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/ps.gif" width="500" alt="ps">
</p>
<b>Pros:</b> Good data scalability.<br>
<b>Cons:</b> Poor statistical convergence. Not reproducible (synchronous PS alleviates non-reproducibility and stale updates but incurs high overheads due to synchronization). High communication overheads.
</li>
</ul>
</li>
<li>
<a href="#" class="toggle">Decentralized synchronous parallelism</a>
<ul class="inner">
<li>
Decentralized systems such as <a href="https://github.com/horovod/horovod">Horovod</a> adopt HPC-style techniques to enable synchronous all-reduce SGD.
It is reproducible, and the adopted ring all-reduce algorithm has a time complexity independent of the number of workers for the bandwidth-bound term. However, the synchronization barrier becomes a communication bottleneck.
<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/hvd.gif" width="500" alt="hvd">
</p>
<b>Pros:</b> Good data scalability. Reproducible.<br>
<b>Cons:</b> High communication overheads.
</li>
</ul>
</li>
</ul>
</li>
</ul>

# Model Hopper Parallelism (MOP): a mixture of task and data parallelism

Cerebro utilizes a combination of task and data parallelism, which is the most efficient approach. One key observation is that **SGD is robust to data visit order**, which means that as long as it sees the entire dataset eventually, it does not matter in which order the visits happen. Therefore, we propose Cerebro. The workers now, instead of exchanging weights, send the checkpointed models to each other after each epoch of training. This requires minimal disk storage and trivial communication.

<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/cerebro.gif" width="500" alt="mop">
</p>

With such a design, Cerebro should be highly data-scalable, reproducible, and efficient, with perfect convergence speed. We also envision Cerebro to be the narrow waist between the AutoML procedures and the various deep learning frameworks such as TensorFlow and PyTorch. (A) and (B) of the following figure demonstrate the power of Cerebro, and (C) shows its position in the entire tool-chain of deep learning.

<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/paneltradeoffs_new.png" width="2400" alt="tradeoffs">
</p>

# Experiments on ImageNet

Now it is time to show you some experiment results. We compare a prototype implementation of MOP on TensorFlow to many state-of-the-art systems on an 8-node GPU cluster with Nvidia P100 GPUs. We train 16 models on ImageNet for 10 epochs: 2 architectures (VGG16 and ResNet50), 2 batch sizes (32 and 256), 2 learning rates (10^-4 and 10^-6), and 2 regularizers (10^-4 and 10^-6). The following figure shows the makespans, average GPU utilization, and learning curves of the best model from each system.

<p style="text-align:center;">
<img src="{{site.baseurl}}/assets/2019-06-12-cerebro/initial_results.png" width="700" alt="tradeoffs">
</p>

# Takeaways

1. MOP is over 10x faster than TF's built-in asynchronous PS.
2. MOP is also 3x faster than Horovod. The high GPU utilization is because Horovod's communication utilizes GPU kernels.
3. MOP's runtime is comparable to TF's model averaging and task parallelism.
4. Model averaging converges very slowly.
5. Celery and MOP have the best learning curves, but note that Celery has 8x the memory/storage footprint of MOP due to dataset copies.
6. Overall, MOP is the most resource-efficient and still offers accuracy similar to sequential SGD.

The system is open-sourced and available at [Cerebro](https://adalabucsd.github.io/cerebro-system). Hopefully, Cerebro can one day help you, my friend, the *master of the black arts of deep learning*. Have a nice day!

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js"></script>
<script src="{{site.baseurl}}/assets/js/accordion.js"></script>
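P.S. To make the hopping schedule concrete, here is a tiny, hypothetical Python sketch of the MOP scheduling loop (illustrative only, not Cerebro's actual implementation; the function names are ours). Each epoch, every model visits every data partition exactly once, and only checkpointed model state moves between workers:

```python
def mop_schedule(configs, num_workers, num_epochs, train_one_subepoch):
    """Model hopper parallelism sketch (sequential pseudocode of a parallel schedule).

    configs: list of model states, one per configuration; simplified here to
             len(configs) == num_workers (real schedulers handle the general case).
    train_one_subepoch(model, worker): trains `model` on `worker`'s local data
             partition for one sub-epoch and returns the updated model state.
    """
    assert len(configs) == num_workers, "simplified: one config per worker"
    for epoch in range(num_epochs):
        # One "hop" round per partition: after num_workers sub-epochs,
        # every model has seen the full dataset exactly once this epoch.
        for hop in range(num_workers):
            # Each (model, worker) pair below runs concurrently in practice,
            # since i -> (i + hop) % num_workers assigns one model per worker.
            for i, model in enumerate(configs):
                worker = (i + hop) % num_workers  # partition this model visits
                configs[i] = train_one_subepoch(model, worker)
        # SGD is robust to the visit order, so each model sees the same data
        # per epoch as sequential SGD, while all workers stay busy.
    return configs
```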
81.459259
706
0.733109
eng_Latn
0.992207
7aa0fe92d410d17a5874aac996adc26c318f3649
998
md
Markdown
README.md
brunomvsouza/media-organizer
c35b3ed2864a5e3426cc262bf91f609d54714616
[ "MIT" ]
2
2015-10-11T14:37:01.000Z
2021-05-07T08:26:10.000Z
README.md
brunomvsouza/media-organizer
c35b3ed2864a5e3426cc262bf91f609d54714616
[ "MIT" ]
null
null
null
README.md
brunomvsouza/media-organizer
c35b3ed2864a5e3426cc262bf91f609d54714616
[ "MIT" ]
null
null
null
# Photo Organizer

A small command line tool created to organize photos and videos from a given source path into a destination path using the /year/month/20071119T083748-0600_md5md5m.ext format.

For photos it uses the EXIF photo creation date, with a fallback to mdate, to create the new file path.

For videos it just uses mdate to create the new file path.

### Requirements

* [rvm](https://rvm.io)

## Installation

* Install [rvm](https://rvm.io) if you didn't already
* Clone the project
* `cd` to the project's root folder and accept the .rvmrc prompts (default ruby version and default gemset)
* Run `bundle install` in the project's root folder to install gem dependencies

## Running

``` bash
ruby photo_organizer.rb /path/to/source/dir /path/to/destination/dir
ruby video_organizer.rb /path/to/source/dir /path/to/destination/dir
```

## Known bugs

Unfortunately, there is no standardized way to store a photo's capture date and time in EXIF data, which causes some issues when creating file names for some camera vendors.
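For the curious, the destination path can be derived roughly along these lines (a rough Ruby sketch, not the tool's actual code; `capture_time` stands in for the timestamp already resolved from EXIF or mdate):

``` ruby
require 'digest'

# Sketch: build the /year/month/<timestamp>_<md5>.<ext> destination path.
# `capture_time` would come from EXIF (photos) or the file's mdate (fallback).
def destination_path(source_file, capture_time, destination_dir)
  timestamp = capture_time.strftime('%Y%m%dT%H%M%S%z') # e.g. 20071119T083748-0600
  digest    = Digest::MD5.file(source_file).hexdigest  # content hash for uniqueness
  ext       = File.extname(source_file).downcase       # keeps the leading dot
  File.join(destination_dir,
            capture_time.strftime('%Y'),
            capture_time.strftime('%m'),
            "#{timestamp}_#{digest}#{ext}")
end
```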
35.642857
164
0.767535
eng_Latn
0.963983
7aa1417eba41037ffc86f9aaddad66eb3082e02e
1,666
md
Markdown
1. Programming C#/1. CSharp-Part-1/3. Operators-and-Expressions/13. Modify Bit/README.md
KaloyanchoSt/Telerik-Academy-Homeworks
d5cd6a0072c648391024af1fd83f7bee8729afd7
[ "MIT" ]
15
2015-11-20T00:32:42.000Z
2018-04-26T20:16:36.000Z
1. Programming C#/1. CSharp-Part-1/3. Operators-and-Expressions/13. Modify Bit/README.md
KaloyanchoSt/Telerik-Academy-Homeworks
d5cd6a0072c648391024af1fd83f7bee8729afd7
[ "MIT" ]
null
null
null
1. Programming C#/1. CSharp-Part-1/3. Operators-and-Expressions/13. Modify Bit/README.md
KaloyanchoSt/Telerik-Academy-Homeworks
d5cd6a0072c648391024af1fd83f7bee8729afd7
[ "MIT" ]
17
2016-04-05T06:30:28.000Z
2020-04-19T20:01:04.000Z
# Modify Bit ## Description We are given an integer number **N** (read from the console), a bit value **v** (read from the console as well) (v = 0 or 1) and a position **P** (read from the console). Write a sequence of operators (a few lines of C# code) that modifies **N** to hold the value **v** at the position **P** from the binary representation of **N** while preserving all other bits in **N**. Print the resulting number on the console. ## Input - The input will consist of exactly **3** lines containing the following: 1. First line - the integer number **N**. 1. Second line - the position **P**. 1. Third line - the bit value **v**. ## Output - Output a single line containing the value of the number **N** with the modified bit. ## Constraints - **N** will always be a valid 64-bit **unsigned** integer. - **P** will always be between in the range `[0, 64)`. - **v** will be always either 0 or 1. - Time limit: **0.1s** - Memory limit: **16MB** ## Sample tests | Input | Binary representation | Modified value | Output | |--------------------|-----------------------|-------------------|----------------| | 5 <br/>2 <br/>0 | 00000000 00000101 | 00000000 00000001 | 1 | | 0 <br/>9 <br/>1 | 00000000 00000000 | 00000010 00000000 | 512 | | 15 <br/>1 <br/>1 | 00000000 00001111 | 00000000 00001111 | 15 | | 5343 <br/>7 <br/>0 | 00010100 11011111 | 00010100 01011111 | 5215 | | 62241<br/>11<br/>0 | 11110011 00100001 | 11110011 00100001 | 62241 | ## Submission - Submit your code [here](http://bgcoder.com/Contests/Compete/Index/310#12)
46.277778
171
0.594238
eng_Latn
0.965891
7aa1e89b8787f364d99bb2bd4b56a0b40d00cf66
1,559
md
Markdown
results/crinacle/harman_in-ear_2019v2/Kinera Nanna/README.md
ruixiao85/AutoEq
a41d50e2bcde9609fe37848f55b019a13496ef66
[ "MIT" ]
1
2020-07-26T09:34:56.000Z
2020-07-26T09:34:56.000Z
results/crinacle/harman_in-ear_2019v2/Kinera Nanna/README.md
TheDarkMikeRises/AutoEq
ab0fcf2fe072665f8af1d253c14226621fadecec
[ "MIT" ]
null
null
null
results/crinacle/harman_in-ear_2019v2/Kinera Nanna/README.md
TheDarkMikeRises/AutoEq
ab0fcf2fe072665f8af1d253c14226621fadecec
[ "MIT" ]
null
null
null
# Kinera Nanna
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.

### Parametric EQs
In case of using a parametric equalizer, apply a preamp of **-6.3dB** and build the filters manually with these parameters. The first 5 filters can be used independently.
When using an independent subset of filters, apply a preamp of **-6.2dB**.

| Type    | Fc       | Q    | Gain     |
|:--------|:---------|:-----|:---------|
| Peaking | 11 Hz    | 0.06 | -2.7 dB  |
| Peaking | 3342 Hz  | 2.49 | 6.0 dB   |
| Peaking | 4831 Hz  | 6.84 | -2.6 dB  |
| Peaking | 17493 Hz | 0.69 | -7.9 dB  |
| Peaking | 19828 Hz | 0.55 | -14.4 dB |
| Peaking | 1574 Hz  | 3.04 | -2.3 dB  |
| Peaking | 7290 Hz  | 2.35 | 1.5 dB   |
| Peaking | 7908 Hz  | 5.42 | -4.7 dB  |
| Peaking | 11012 Hz | 1.13 | 2.2 dB   |
| Peaking | 13001 Hz | 3.71 | -4.1 dB  |

### Fixed Band EQs
In case of using a fixed band (also called graphic) equalizer, apply a preamp of **-3.8dB** (if available) and set the gains manually with these parameters.

| Type    | Fc       | Q    | Gain     |
|:--------|:---------|:-----|:---------|
| Peaking | 31 Hz    | 1.41 | -3.6 dB  |
| Peaking | 62 Hz    | 1.41 | -1.0 dB  |
| Peaking | 125 Hz   | 1.41 | -1.4 dB  |
| Peaking | 250 Hz   | 1.41 | -1.4 dB  |
| Peaking | 500 Hz   | 1.41 | 0.4 dB   |
| Peaking | 1000 Hz  | 1.41 | -0.0 dB  |
| Peaking | 2000 Hz  | 1.41 | -0.6 dB  |
| Peaking | 4000 Hz  | 1.41 | 3.8 dB   |
| Peaking | 8000 Hz  | 1.41 | -0.6 dB  |
| Peaking | 16000 Hz | 1.41 | -17.9 dB |

### Graphs
![](./Kinera%20Nanna.png)
38.975
98
0.546504
eng_Latn
0.756951
7aa210e320f54308645b0b130d87a14ca8428b4b
1,179
md
Markdown
sdk/docs/Future.md
fossabot/lusid-sdk-java-preview
a1bc1b3c5a3e7c0aa0d54796c45740e031e3bd4b
[ "MIT" ]
null
null
null
sdk/docs/Future.md
fossabot/lusid-sdk-java-preview
a1bc1b3c5a3e7c0aa0d54796c45740e031e3bd4b
[ "MIT" ]
1
2020-10-29T09:28:40.000Z
2020-10-29T09:28:40.000Z
sdk/docs/Future.md
fossabot/lusid-sdk-java-preview
a1bc1b3c5a3e7c0aa0d54796c45740e031e3bd4b
[ "MIT" ]
1
2020-10-29T09:18:06.000Z
2020-10-29T09:18:06.000Z
# Future ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **startDate** | [**OffsetDateTime**](OffsetDateTime.md) | The start date of the instrument. This is normally synonymous with the trade-date. | **maturityDate** | [**OffsetDateTime**](OffsetDateTime.md) | The final maturity date of the instrument. This means the last date on which the instruments makes a payment of any amount. For the avoidance of doubt, that is not necessarily prior to its last sensitivity date for the purposes of risk; e.g. instruments such as Constant Maturity Swaps (CMS) often have sensitivities to rates beyond their last payment date | **identifiers** | **Map&lt;String, String&gt;** | external market codes and identifiers for the bond, e.g. ISIN. | **contractDetails** | [**FuturesContractDetails**](FuturesContractDetails.md) | | **contracts** | **Double** | The number of contracts held | [optional] **refSpotPrice** | **Double** | The reference spot price for the future at which the contract was entered into. | [optional] **underlying** | [**LusidInstrument**](LusidInstrument.md) | |
62.052632
446
0.675997
eng_Latn
0.971565
7aa220c4368a66853db76d24acb81e868d2697ed
2,968
md
Markdown
benchmark/opperf/nd_operations/README.md
paulk-asert/incubator-mxnet
6acf7e6a051e75d9f1cca0ec3c198c38c0f6a3fe
[ "Apache-2.0" ]
228
2018-12-06T09:34:01.000Z
2022-03-08T17:02:02.000Z
benchmark/opperf/nd_operations/README.md
paulk-asert/incubator-mxnet
6acf7e6a051e75d9f1cca0ec3c198c38c0f6a3fe
[ "Apache-2.0" ]
29
2020-09-05T00:57:25.000Z
2022-02-26T14:48:52.000Z
benchmark/opperf/nd_operations/README.md
paulk-asert/incubator-mxnet
6acf7e6a051e75d9f1cca0ec3c198c38c0f6a3fe
[ "Apache-2.0" ]
34
2018-12-14T02:59:53.000Z
2022-01-22T14:15:19.000Z
<!--- Licensed to the Apache Software Foundation (ASF) under one --> <!--- or more contributor license agreements. See the NOTICE file --> <!--- distributed with this work for additional information --> <!--- regarding copyright ownership. The ASF licenses this file --> <!--- to you under the Apache License, Version 2.0 (the --> <!--- "License"); you may not use this file except in compliance --> <!--- with the License. You may obtain a copy of the License at --> <!--- http://www.apache.org/licenses/LICENSE-2.0 --> <!--- Unless required by applicable law or agreed to in writing, --> <!--- software distributed under the License is distributed on an --> <!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY --> <!--- KIND, either express or implied. See the License for the --> <!--- specific language governing permissions and limitations --> <!--- under the License. --> # TODO - Operators not covered in this Benchmark Utility **NOTE:** This list is AUTOGENERATED when you run opperf.py utility 0. LogisticRegressionOutput 1. broadcast_axes 2. ravel_multi_index 3. multi_sgd_mom_update 4. smooth_l1 5. scatter_nd 6. reshape 7. one_hot 8. linalg_potri 9. mp_sgd_update 10. multi_sgd_update 11. signum_update 12. Convolution_v1 13. repeat 14. Custom 15. softmax_cross_entropy 16. SwapAxis 17. norm 18. Softmax 19. rmspropalex_update 20. fill_element_0index 21. cast 22. UpSampling 23. BatchNorm_v1 24. CTCLoss 25. LRN 26. cast_storage 27. pick 28. GridGenerator 29. sample_multinomial 30. Activation 31. LinearRegressionOutput 32. Pooling_v1 33. ftml_update 34. Crop 35. ElementWiseSum 36. diag 37. Reshape 38. Pad 39. linalg_gemm2 40. crop 41. rmsprop_update 43. RNN 44. argmin 45. SoftmaxOutput 46. linalg_extractdiag 47. sgd_mom_update 48. SequenceLast 49. Deconvolution 50. flip 51. SequenceReverse 52. swapaxes 53. SVMOutput 54. linalg_trsm 55. where 56. SoftmaxActivation 57. signsgd_update 58. slice 59. linalg_gelqf 60. softmin 61. linalg_gemm 62. BilinearSampler 63. mp_sgd_mom_update 64. choose_element_0index 65. tile 66. space_to_depth 67. gather_nd 68. argsort 69. SequenceMask 70. reshape_like 71. slice_axis 72. stack 73. topk 74. khatri_rao 75. multi_mp_sgd_update 76. linalg_sumlogdiag 77. broadcast_to 78. IdentityAttachKLSparseReg 79. sort 80. SpatialTransformer 81. Concat 82. uniform 83. InstanceNorm 84. expand_dims 85. multi_mp_sgd_mom_update 86. reverse 87. add_n 88. clip 89. ctc_loss 90. shape_array 91. unravel_index 92. linalg_potrf 93. Cast 94. broadcast_like 95. Embedding 96. linalg_makediag 97. transpose 98. linalg_syrk 99. squeeze 101. ROIPooling 102. ftrl_update 103. SliceChannel 104. slice_like 105. depth_to_space 106. linalg_maketrian 108. pad 109. LayerNorm 110. split 111. MAERegressionOutput 112. Correlation 113. argmax 114. batch_take 115. L2Normalization 116. broadcast_axis 117. linalg_trmm 118. linalg_extracttrian 119. normal 120. take 121. MakeLoss 122. sgd_update 123. adam_update 124. concat
20.755245
70
0.769542
eng_Latn
0.673109
7aa262e8ccf662cb72e554507a345d32135c6a5e
82
md
Markdown
README.md
Thomas-Faure/Matchmaking
d0ccedcbf15b65a2b1e1df6150897377f888987e
[ "MIT" ]
null
null
null
README.md
Thomas-Faure/Matchmaking
d0ccedcbf15b65a2b1e1df6150897377f888987e
[ "MIT" ]
null
null
null
README.md
Thomas-Faure/Matchmaking
d0ccedcbf15b65a2b1e1df6150897377f888987e
[ "MIT" ]
1
2020-12-10T19:14:34.000Z
2020-12-10T19:14:34.000Z
# Matchmaking

Website address: http://projetagilenotilt.alwaysdata.net/
27.333333
67
0.804878
fra_Latn
0.350288
7aa270cfdb5fd7b61e9584038de68a83406a13dd
9,480
md
Markdown
docs/relational-databases/backup-restore/file-restores-full-recovery-model.md
ysy68251435/sql-docs
56b963446965f3a4bb0fa1446f49578dbff382e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/backup-restore/file-restores-full-recovery-model.md
ysy68251435/sql-docs
56b963446965f3a4bb0fa1446f49578dbff382e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/backup-restore/file-restores-full-recovery-model.md
ysy68251435/sql-docs
56b963446965f3a4bb0fa1446f49578dbff382e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: "File Restores (Full Recovery Model) | Microsoft Docs"
ms.custom: ""
ms.date: "03/14/2017"
ms.prod: sql
ms.prod_service: backup-restore
ms.reviewer: ""
ms.technology: backup-restore
ms.topic: conceptual
helpviewer_keywords:
  - "file restores [SQL Server]"
  - "full recovery model [SQL Server], performing restores"
  - "restoring files [SQL Server], Transact-SQL restore sequence"
  - "restoring files [SQL Server]"
  - "file restores [SQL Server], full recovery model"
  - "restoring files [SQL Server], full recovery model"
  - "Transact-SQL restore sequence"
  - "file restores [SQL Server], Transact-SQL restore sequence"
ms.assetid: d2236a2a-4cf1-4c3f-b542-f73f6096e15c
author: mashamsft
ms.author: mathoma
manager: craigg
---
# File Restores (Full Recovery Model)
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]

This topic is relevant only for databases that contain multiple files or filegroups under the full or bulk-logged recovery model.

In a file restore, the goal is to restore one or more damaged files without restoring the whole database. A file restore scenario consists of a single restore sequence that copies, rolls forward, and recovers the appropriate data. If the filegroup that is being restored is read/write, an unbroken chain of log backups must be applied after the last data or differential backup is restored. This brings the filegroup forward to the current active log records in the log file. The recovery point is typically near the end of the log, but not necessarily. If the filegroup that is being restored is read-only, usually applying log backups is unnecessary and is skipped. If the backup was taken after the file became read-only, that is the last backup to restore. Roll forward stops at the target point.

The file-restore scenarios are as follows:

- Offline file restore

  In an *offline file restore*, the database is offline while damaged files or filegroups are restored. At the end of the restore sequence, the database comes online.

  All editions of [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] support offline file restore.

- Online file restore

  In an *online file restore*, if the database is online at restore time, it remains online during the file restore. However, each filegroup in which a file is being restored is offline during the restore operation. After all the files in an offline filegroup are recovered, the filegroup is automatically brought online.

  For information about support for online page and file restore, see [Editions and Supported Features for SQL Server 2016](../../sql-server/editions-and-supported-features-for-sql-server-2016.md). For more information about online restores, see [Online Restore (SQL Server)](../../relational-databases/backup-restore/online-restore-sql-server.md).

> [!TIP]
> If you want the database to be offline for a file restore, take the database offline before you start the restore sequence by executing the following [ALTER DATABASE](../../t-sql/statements/alter-database-transact-sql-set-options.md) statement: ALTER DATABASE *database_name* SET OFFLINE.

## <a name="Overview"></a> Restoring Damaged Files from File Backups

1. Before restoring one or more damaged files, attempt to create a [tail-log backup](../../relational-databases/backup-restore/tail-log-backups-sql-server.md). If the log has been damaged, a tail-log backup cannot be created, and you must restore the whole database.
   For information about how to back up a transaction log, see [Transaction Log Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/transaction-log-backups-sql-server.md).

   > [!IMPORTANT]
   > For an offline file restore, you must always take a tail-log backup before the file restore. For an online file restore, you must always take the log backup after the file restore. This log backup is necessary to allow the file to be recovered to a state consistent with the rest of the database.

2. Restore each damaged file from the most recent file backup of that file.

3. Restore the most recent differential file backup, if any, for each restored file.

4. Restore transaction log backups in sequence, starting with the backup that covers the oldest of the restored files and ending with the tail-log backup created in step 1.

   You must restore the transaction log backups that were created after the file backups to bring the database to a consistent state. The transaction log backups can be rolled forward quickly, because only the changes that apply to the restored files are applied. Restoring individual files can be better than restoring the whole database, because undamaged files are not copied and then rolled forward. However, the whole chain of log backups still has to be read.

5. Recover the database.

[!INCLUDE[freshInclude](../../includes/paragraph-content/fresh-note-steps-feedback.md)]

> [!NOTE]
> File backups can be used to restore the database to an earlier point in time. To do this, you must restore a complete set of file backups, and then restore transaction log backups in sequence to reach a target point that is after the end of the most recent restored file backup. For more information about point-in-time recovery, see [Restore a SQL Server Database to a Point in Time &#40;Full Recovery Model&#41;](../../relational-databases/backup-restore/restore-a-sql-server-database-to-a-point-in-time-full-recovery-model.md).

## Transact-SQL Restore Sequence for an Offline File Restore (Full Recovery Model)

A file restore scenario consists of a single restore sequence that copies, rolls forward, and recovers the appropriate data. This section shows the essential [RESTORE](../../t-sql/statements/restore-statements-transact-sql.md) options for a file-restore sequence. Syntax and details that are not relevant to this purpose are omitted.

The following sample restore sequence shows an offline restore of two secondary files, `A` and `B`, using WITH NORECOVERY. Next, two log backups are applied with NORECOVERY, followed by the tail-log backup, which is restored using WITH RECOVERY.

> [!NOTE]
> The following sample restore sequence starts by taking the file offline and then creates a tail-log backup.

```
--Take the file offline.
ALTER DATABASE database_name MODIFY FILE
SET OFFLINE;
-- Back up the currently active transaction log.
BACKUP LOG database_name
   TO <tail_log_backup>
   WITH NORECOVERY;
GO
-- Restore the files.
RESTORE DATABASE database_name FILE=name
   FROM <file_backup_of_file_A>
   WITH NORECOVERY;
RESTORE DATABASE database_name FILE=<name> ......
   FROM <file_backup_of_file_B>
   WITH NORECOVERY;

-- Restore the log backups.
RESTORE LOG database_name FROM <log_backup>
   WITH NORECOVERY;
RESTORE LOG database_name FROM <log_backup>
   WITH NORECOVERY;
RESTORE LOG database_name FROM <tail_log_backup>
   WITH RECOVERY;
```

## Examples

- [Example: Online Restore of a Read-Write File &#40;Full Recovery Model&#41;](../../relational-databases/backup-restore/example-online-restore-of-a-read-write-file-full-recovery-model.md)
- [Example: Online Restore of a Read-Only File &#40;Full Recovery Model&#41;](../../relational-databases/backup-restore/example-online-restore-of-a-read-only-file-full-recovery-model.md)
- [Example: Offline Restore of Primary and One Other Filegroup &#40;Full Recovery Model&#41;](../../relational-databases/backup-restore/example-offline-restore-of-primary-and-one-other-filegroup-full-recovery-model.md)

## <a name="RelatedTasks"></a> Related Tasks

**To restore files and filegroups**

- [Restore Files to a New Location &#40;SQL Server&#41;](../../relational-databases/backup-restore/restore-files-to-a-new-location-sql-server.md)
- [Restore Files and Filegroups &#40;SQL Server&#41;](../../relational-databases/backup-restore/restore-files-and-filegroups-sql-server.md)
- <xref:Microsoft.SqlServer.Management.Smo.Restore.SqlRestore%2A> (SMO)

## See Also

[Backup and Restore: Interoperability and Coexistence &#40;SQL Server&#41;](../../relational-databases/backup-restore/backup-and-restore-interoperability-and-coexistence-sql-server.md)
[Differential Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/differential-backups-sql-server.md)
[Full File Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/full-file-backups-sql-server.md)
[Backup Overview &#40;SQL Server&#41;](../../relational-databases/backup-restore/backup-overview-sql-server.md)
[Restore and Recovery Overview &#40;SQL Server&#41;](../../relational-databases/backup-restore/restore-and-recovery-overview-sql-server.md)
[RESTORE &#40;Transact-SQL&#41;](../../t-sql/statements/restore-statements-transact-sql.md)
[Complete Database Restores &#40;Simple Recovery Model&#41;](../../relational-databases/backup-restore/complete-database-restores-simple-recovery-model.md)
[Piecemeal Restores &#40;SQL Server&#41;](../../relational-databases/backup-restore/piecemeal-restores-sql-server.md)
66.760563
536
0.736392
eng_Latn
0.958292
7aa2987c7f392c5ada683bde1f7e841bc7ab0d2c
68,770
md
Markdown
src/api-explorer/v3-0/_Attendees.swagger2.json.md
SAP-archive/developer.concur.com
a3482bac74277e99d3764bd45b47f207e0efec21
[ "Apache-2.0", "MIT" ]
1
2021-04-28T21:38:53.000Z
2021-04-28T21:38:53.000Z
src/api-explorer/v3-0/_Attendees.swagger2.json.md
jaronmanyama/developer.concur.com
77bfcc890712a5bf0d5b60b5040474f2914ddc93
[ "Apache-2.0" ]
null
null
null
src/api-explorer/v3-0/_Attendees.swagger2.json.md
jaronmanyama/developer.concur.com
77bfcc890712a5bf0d5b60b5040474f2914ddc93
[ "Apache-2.0" ]
3
2020-12-16T17:53:39.000Z
2021-01-09T22:18:17.000Z
--- title: Attendees v3.0 language_tabs: - shell: Shell - http: HTTP - javascript: JavaScript - ruby: Ruby - python: Python - php: PHP - java: Java - go: Go toc_footers: [] includes: [] search: true highlight_theme: darkula headingLevel: 2 generator: widdershins v4.0.1 --- <h1 id="attendees">Attendees v3.0</h1> > Scroll down for code samples, example requests and responses. Select a language for code samples from the tabs above or the mobile navigation menu. Get the configured attendees for a user. You can also update attendees by providing some or all of the attendee fields, or create new attendees. Base URLs: * <a href="https://www.concursolutions.com/api/v3.0">https://www.concursolutions.com/api/v3.0</a> # Authentication - oAuth2 authentication. To use this API, you need to get OAuth client credentials (client ID, secret, and geolocation) from SAP Concur, and be authorized to use the relevant scope. Refer to the <a href="https://developer.concur.com/api-reference/authentication/getting-started.html">full authentication information</a> for more information. - Flow: clientCredentials - Token URL = [https://us.api.concursolutions.com/oauth2/v0](https://us.api.concursolutions.com/oauth2/v0) <h1 id="attendees-resources">Resources</h1> ## get__expense_attendees > Code samples ```shell # You can also use wget curl -X GET https://www.concursolutions.com/api/v3.0/expense/attendees \ -H 'Accept: application/json' \ -H 'Authorization: Bearer {access-token}' ``` ```http GET https://www.concursolutions.com/api/v3.0/expense/attendees HTTP/1.1 Host: www.concursolutions.com Accept: application/json ``` ```javascript const headers = { 'Accept':'application/json', 'Authorization':'Bearer {access-token}' }; fetch('https://www.concursolutions.com/api/v3.0/expense/attendees', { method: 'GET', headers: headers }) .then(function(res) { return res.json(); }).then(function(body) { console.log(body); }); ``` ```ruby require 'rest-client' require 'json' headers = { 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}' } result = RestClient.get 'https://www.concursolutions.com/api/v3.0/expense/attendees', params: { }, headers: headers p JSON.parse(result) ``` ```python import requests headers = { 'Accept': 'application/json', 'Authorization': 'Bearer {access-token}' } r = requests.get('https://www.concursolutions.com/api/v3.0/expense/attendees', headers = headers) print(r.json()) ``` ```php <?php require 'vendor/autoload.php'; $headers = array( 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}', ); $client = new \GuzzleHttp\Client(); // Define array of request body. $request_body = array(); try { $response = $client->request('GET','https://www.concursolutions.com/api/v3.0/expense/attendees', array( 'headers' => $headers, 'json' => $request_body, ) ); print_r($response->getBody()->getContents()); } catch (\GuzzleHttp\Exception\BadResponseException $e) { // handle exception or api errors. print_r($e->getMessage()); } // ... 
``` ```java URL obj = new URL("https://www.concursolutions.com/api/v3.0/expense/attendees"); HttpURLConnection con = (HttpURLConnection) obj.openConnection(); con.setRequestMethod("GET"); int responseCode = con.getResponseCode(); BufferedReader in = new BufferedReader( new InputStreamReader(con.getInputStream())); String inputLine; StringBuffer response = new StringBuffer(); while ((inputLine = in.readLine()) != null) { response.append(inputLine); } in.close(); System.out.println(response.toString()); ``` ```go package main import ( "bytes" "net/http" ) func main() { headers := map[string][]string{ "Accept": []string{"application/json"}, "Authorization": []string{"Bearer {access-token}"}, } data := bytes.NewBuffer([]byte{jsonReq}) req, err := http.NewRequest("GET", "https://www.concursolutions.com/api/v3.0/expense/attendees", data) req.Header = headers client := &http.Client{} resp, err := client.Do(req) // ... } ``` `GET /expense/attendees` *Get all attendees* Gets all attendees owned by the specified user, or the user associated with the access token. <h3 id="get__expense_attendees-parameters">Parameters</h3> |Name|In|Type|Required|Description| |---|---|---|---|---| |externalID|query|string|false|The external ID of an attendee. By entering a value for this parameter, you can limit the results to the attendees who match the specified external ID. Up to 10 comma-separated external IDs may be specified.| |attendeeTypeID|query|string|false|The ID of an attendee type. By entering a value for this parameter, you can limit the results to the attendees who match the specified type.| |offset|query|string|false|The starting point of the next set of results, after the limit specified in the limit field has been reached.| |limit|query|integer(int32)|false|The number of records to return. Default value: 25| |user|query|string|false|The login ID of the user that has added the attendee to an expense. 
The user who is performing this API request must have the Web Services Admin (Professional) or Can Administer (Standard) user role to use this parameter.| > Example responses > 200 Response ```json { "Items": { "AttendeeTypeCode": "string", "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom2": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom3": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom4": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom5": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom6": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom7": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom8": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom9": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom10": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom11": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom12": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom13": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom14": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom15": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom16": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom17": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom18": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom19": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom20": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom21": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom22": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom23": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom24": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom25": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "ExternalID": "string", "FirstName": "string", "HasExceptionsPrevYear": true, "HasExceptionsYTD": true, "ID": "string", "LastName": "string", "MiddleInitial": "string", "OwnerLoginID": "string", "OwnerName": "string", "Suffix": "string", "Title": "string", "TotalAmountPrevYear": 0, "TotalAmountYTD": 0, "URI": "string", "VersionNumber": 0 }, "NextPage": "string" } ``` ```xml <?xml version="1.0" encoding="UTF-8" ?> <AttendeeCollection> <Items> <AttendeeTypeCode>string</AttendeeTypeCode> <AttendeeTypeID>string</AttendeeTypeID> <Company>string</Company> <CurrencyCode>string</CurrencyCode> <Custom1> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom1> <Custom2> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom2> <Custom3> <Code>string</Code> 
<ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom3> <Custom4> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom4> <Custom5> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom5> <Custom6> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom6> <Custom7> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom7> <Custom8> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom8> <Custom9> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom9> <Custom10> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom10> <Custom11> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom11> <Custom12> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom12> <Custom13> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom13> <Custom14> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom14> <Custom15> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom15> <Custom16> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom16> <Custom17> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom17> <Custom18> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom18> <Custom19> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom19> <Custom20> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom20> <Custom21> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom21> <Custom22> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom22> <Custom23> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom23> <Custom24> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom24> <Custom25> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom25> <ExternalID>string</ExternalID> <FirstName>string</FirstName> <HasExceptionsPrevYear>true</HasExceptionsPrevYear> <HasExceptionsYTD>true</HasExceptionsYTD> <ID>string</ID> <LastName>string</LastName> <MiddleInitial>string</MiddleInitial> <OwnerLoginID>string</OwnerLoginID> <OwnerName>string</OwnerName> <Suffix>string</Suffix> <Title>string</Title> <TotalAmountPrevYear>0</TotalAmountPrevYear> <TotalAmountYTD>0</TotalAmountYTD> <URI>string</URI> <VersionNumber>0</VersionNumber> </Items> <NextPage>string</NextPage> </AttendeeCollection> ``` <h3 id="get__expense_attendees-responses">Responses</h3> |Status|Meaning|Description|Schema| |---|---|---|---| |200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Success|[AttendeeCollection](#schemaattendeecollection)| <aside class="warning"> To perform this operation, you must be authenticated by means of one of the following methods: OAuth2 </aside> 
## post__expense_attendees > Code samples ```shell # You can also use wget curl -X POST https://www.concursolutions.com/api/v3.0/expense/attendees \ -H 'Content-Type: application/json' \ -H 'Accept: application/json' \ -H 'Authorization: Bearer {access-token}' ``` ```http POST https://www.concursolutions.com/api/v3.0/expense/attendees HTTP/1.1 Host: www.concursolutions.com Content-Type: application/json Accept: application/json ``` ```javascript const inputBody = '{ "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": "string", "Custom10": "string", "Custom11": "string", "Custom12": "string", "Custom13": "string", "Custom14": "string", "Custom15": "string", "Custom16": "string", "Custom17": "string", "Custom18": "string", "Custom19": "string", "Custom2": "string", "Custom20": "string", "Custom21": "string", "Custom22": "string", "Custom23": "string", "Custom24": "string", "Custom25": "string", "Custom3": "string", "Custom4": "string", "Custom5": "string", "Custom6": "string", "Custom7": "string", "Custom8": "string", "Custom9": "string", "ExternalID": "string", "FirstName": "string", "LastName": "string", "MiddleInitial": "string", "Suffix": "string", "Title": "string", "TotalAmountYTD": 0 }'; const headers = { 'Content-Type':'application/json', 'Accept':'application/json', 'Authorization':'Bearer {access-token}' }; fetch('https://www.concursolutions.com/api/v3.0/expense/attendees', { method: 'POST', body: inputBody, headers: headers }) .then(function(res) { return res.json(); }).then(function(body) { console.log(body); }); ``` ```ruby require 'rest-client' require 'json' headers = { 'Content-Type' => 'application/json', 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}' } result = RestClient.post 'https://www.concursolutions.com/api/v3.0/expense/attendees', params: { }, headers: headers p JSON.parse(result) ``` ```python import requests headers = { 'Content-Type': 'application/json', 'Accept': 'application/json', 'Authorization': 'Bearer {access-token}' } r = requests.post('https://www.concursolutions.com/api/v3.0/expense/attendees', headers = headers) print(r.json()) ``` ```php <?php require 'vendor/autoload.php'; $headers = array( 'Content-Type' => 'application/json', 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}', ); $client = new \GuzzleHttp\Client(); // Define array of request body. $request_body = array(); try { $response = $client->request('POST','https://www.concursolutions.com/api/v3.0/expense/attendees', array( 'headers' => $headers, 'json' => $request_body, ) ); print_r($response->getBody()->getContents()); } catch (\GuzzleHttp\Exception\BadResponseException $e) { // handle exception or api errors. print_r($e->getMessage()); } // ... 
``` ```java URL obj = new URL("https://www.concursolutions.com/api/v3.0/expense/attendees"); HttpURLConnection con = (HttpURLConnection) obj.openConnection(); con.setRequestMethod("POST"); int responseCode = con.getResponseCode(); BufferedReader in = new BufferedReader( new InputStreamReader(con.getInputStream())); String inputLine; StringBuffer response = new StringBuffer(); while ((inputLine = in.readLine()) != null) { response.append(inputLine); } in.close(); System.out.println(response.toString()); ``` ```go package main import ( "bytes" "net/http" ) func main() { headers := map[string][]string{ "Content-Type": []string{"application/json"}, "Accept": []string{"application/json"}, "Authorization": []string{"Bearer {access-token}"}, } data := bytes.NewBuffer([]byte{jsonReq}) req, err := http.NewRequest("POST", "https://www.concursolutions.com/api/v3.0/expense/attendees", data) req.Header = headers client := &http.Client{} resp, err := client.Do(req) // ... } ``` `POST /expense/attendees` *Create attendee* Creates a new attendee. > Body parameter ```json { "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": "string", "Custom10": "string", "Custom11": "string", "Custom12": "string", "Custom13": "string", "Custom14": "string", "Custom15": "string", "Custom16": "string", "Custom17": "string", "Custom18": "string", "Custom19": "string", "Custom2": "string", "Custom20": "string", "Custom21": "string", "Custom22": "string", "Custom23": "string", "Custom24": "string", "Custom25": "string", "Custom3": "string", "Custom4": "string", "Custom5": "string", "Custom6": "string", "Custom7": "string", "Custom8": "string", "Custom9": "string", "ExternalID": "string", "FirstName": "string", "LastName": "string", "MiddleInitial": "string", "Suffix": "string", "Title": "string", "TotalAmountYTD": 0 } ``` ```xml <?xml version="1.0" encoding="UTF-8" ?> <AttendeePost> <AttendeeTypeID>string</AttendeeTypeID> <Company>string</Company> <CurrencyCode>string</CurrencyCode> <Custom1>string</Custom1> <Custom10>string</Custom10> <Custom11>string</Custom11> <Custom12>string</Custom12> <Custom13>string</Custom13> <Custom14>string</Custom14> <Custom15>string</Custom15> <Custom16>string</Custom16> <Custom17>string</Custom17> <Custom18>string</Custom18> <Custom19>string</Custom19> <Custom2>string</Custom2> <Custom20>string</Custom20> <Custom21>string</Custom21> <Custom22>string</Custom22> <Custom23>string</Custom23> <Custom24>string</Custom24> <Custom25>string</Custom25> <Custom3>string</Custom3> <Custom4>string</Custom4> <Custom5>string</Custom5> <Custom6>string</Custom6> <Custom7>string</Custom7> <Custom8>string</Custom8> <Custom9>string</Custom9> <ExternalID>string</ExternalID> <FirstName>string</FirstName> <LastName>string</LastName> <MiddleInitial>string</MiddleInitial> <Suffix>string</Suffix> <Title>string</Title> <TotalAmountYTD>0</TotalAmountYTD> </AttendeePost> ``` <h3 id="post__expense_attendees-parameters">Parameters</h3> |Name|In|Type|Required|Description| |---|---|---|---|---| |user|query|string|false|The login ID of the user that has added the attendee to an expense. 
The user who is performing this API request must have the Web Services Admin (Professional) or Can Administer (Standard) user role to use this parameter.| |body|body|[AttendeePost](#schemaattendeepost)|true|The Attendee object to create.| > Example responses > 200 Response ```json { "ID": "string", "URI": "string" } ``` ```xml <?xml version="1.0" encoding="UTF-8" ?> <CreateResponse> <ID>string</ID> <URI>string</URI> </CreateResponse> ``` <h3 id="post__expense_attendees-responses">Responses</h3> |Status|Meaning|Description|Schema| |---|---|---|---| |200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Success|[CreateResponse](#schemacreateresponse)| |400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Void](#schemavoid)| <aside class="warning"> To perform this operation, you must be authenticated by means of one of the following methods: OAuth2 </aside> ## get__expense_attendees_{id} > Code samples ```shell # You can also use wget curl -X GET https://www.concursolutions.com/api/v3.0/expense/attendees/{id} \ -H 'Accept: application/json' \ -H 'Authorization: Bearer {access-token}' ``` ```http GET https://www.concursolutions.com/api/v3.0/expense/attendees/{id} HTTP/1.1 Host: www.concursolutions.com Accept: application/json ``` ```javascript const headers = { 'Accept':'application/json', 'Authorization':'Bearer {access-token}' }; fetch('https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', { method: 'GET', headers: headers }) .then(function(res) { return res.json(); }).then(function(body) { console.log(body); }); ``` ```ruby require 'rest-client' require 'json' headers = { 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}' } result = RestClient.get 'https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', params: { }, headers: headers p JSON.parse(result) ``` ```python import requests headers = { 'Accept': 'application/json', 'Authorization': 'Bearer {access-token}' } r = requests.get('https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', headers = headers) print(r.json()) ``` ```php <?php require 'vendor/autoload.php'; $headers = array( 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}', ); $client = new \GuzzleHttp\Client(); // Define array of request body. $request_body = array(); try { $response = $client->request('GET','https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', array( 'headers' => $headers, 'json' => $request_body, ) ); print_r($response->getBody()->getContents()); } catch (\GuzzleHttp\Exception\BadResponseException $e) { // handle exception or api errors. print_r($e->getMessage()); } // ... 
``` ```java URL obj = new URL("https://www.concursolutions.com/api/v3.0/expense/attendees/{id}"); HttpURLConnection con = (HttpURLConnection) obj.openConnection(); con.setRequestMethod("GET"); int responseCode = con.getResponseCode(); BufferedReader in = new BufferedReader( new InputStreamReader(con.getInputStream())); String inputLine; StringBuffer response = new StringBuffer(); while ((inputLine = in.readLine()) != null) { response.append(inputLine); } in.close(); System.out.println(response.toString()); ``` ```go package main import ( "bytes" "net/http" ) func main() { headers := map[string][]string{ "Accept": []string{"application/json"}, "Authorization": []string{"Bearer {access-token}"}, } data := bytes.NewBuffer([]byte{jsonReq}) req, err := http.NewRequest("GET", "https://www.concursolutions.com/api/v3.0/expense/attendees/{id}", data) req.Header = headers client := &http.Client{} resp, err := client.Do(req) // ... } ``` `GET /expense/attendees/{id}` *Get attendee* Gets a single attendee by ID. <h3 id="get__expense_attendees_{id}-parameters">Parameters</h3> |Name|In|Type|Required|Description| |---|---|---|---|---| |id|path|string|true|The attendee ID.| |user|query|string|false|The login ID of the user that has added the attendee to an expense. The user who is performing this API request must have the Web Services Admin (Professional) or Can Administer (Standard) user role to use this parameter.| > Example responses > 200 Response ```json { "AttendeeTypeCode": "string", "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom2": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom3": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom4": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom5": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom6": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom7": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom8": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom9": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom10": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom11": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom12": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom13": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom14": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom15": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom16": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom17": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom18": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom19": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom20": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom21": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": 
"string" }, "Custom22": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom23": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom24": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom25": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "ExternalID": "string", "FirstName": "string", "HasExceptionsPrevYear": true, "HasExceptionsYTD": true, "ID": "string", "LastName": "string", "MiddleInitial": "string", "OwnerLoginID": "string", "OwnerName": "string", "Suffix": "string", "Title": "string", "TotalAmountPrevYear": 0, "TotalAmountYTD": 0, "URI": "string", "VersionNumber": 0 } ``` ```xml <?xml version="1.0" encoding="UTF-8" ?> <AttendeeGet> <AttendeeTypeCode>string</AttendeeTypeCode> <AttendeeTypeID>string</AttendeeTypeID> <Company>string</Company> <CurrencyCode>string</CurrencyCode> <Custom1> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom1> <Custom2> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom2> <Custom3> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom3> <Custom4> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom4> <Custom5> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom5> <Custom6> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom6> <Custom7> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom7> <Custom8> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom8> <Custom9> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom9> <Custom10> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom10> <Custom11> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom11> <Custom12> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom12> <Custom13> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom13> <Custom14> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom14> <Custom15> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom15> <Custom16> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom16> <Custom17> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom17> <Custom18> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom18> <Custom19> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom19> <Custom20> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom20> <Custom21> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom21> <Custom22> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom22> <Custom23> <Code>string</Code> 
<ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom23> <Custom24> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom24> <Custom25> <Code>string</Code> <ListItemID>string</ListItemID> <Type>string</Type> <Value>string</Value> </Custom25> <ExternalID>string</ExternalID> <FirstName>string</FirstName> <HasExceptionsPrevYear>true</HasExceptionsPrevYear> <HasExceptionsYTD>true</HasExceptionsYTD> <ID>string</ID> <LastName>string</LastName> <MiddleInitial>string</MiddleInitial> <OwnerLoginID>string</OwnerLoginID> <OwnerName>string</OwnerName> <Suffix>string</Suffix> <Title>string</Title> <TotalAmountPrevYear>0</TotalAmountPrevYear> <TotalAmountYTD>0</TotalAmountYTD> <URI>string</URI> <VersionNumber>0</VersionNumber> </AttendeeGet> ``` <h3 id="get__expense_attendees_{id}-responses">Responses</h3> |Status|Meaning|Description|Schema| |---|---|---|---| |200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Success|[AttendeeGet](#schemaattendeeget)| <aside class="warning"> To perform this operation, you must be authenticated by means of one of the following methods: OAuth2 </aside> ## put__expense_attendees_{id} > Code samples ```shell # You can also use wget curl -X PUT https://www.concursolutions.com/api/v3.0/expense/attendees/{id} \ -H 'Content-Type: application/json' \ -H 'Accept: application/json' \ -H 'Authorization: Bearer {access-token}' ``` ```http PUT https://www.concursolutions.com/api/v3.0/expense/attendees/{id} HTTP/1.1 Host: www.concursolutions.com Content-Type: application/json Accept: application/json ``` ```javascript const inputBody = '{ "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": "string", "Custom10": "string", "Custom11": "string", "Custom12": "string", "Custom13": "string", "Custom14": "string", "Custom15": "string", "Custom16": "string", "Custom17": "string", "Custom18": "string", "Custom19": "string", "Custom2": "string", "Custom20": "string", "Custom21": "string", "Custom22": "string", "Custom23": "string", "Custom24": "string", "Custom25": "string", "Custom3": "string", "Custom4": "string", "Custom5": "string", "Custom6": "string", "Custom7": "string", "Custom8": "string", "Custom9": "string", "ExternalID": "string", "FirstName": "string", "LastName": "string", "MiddleInitial": "string", "Suffix": "string", "Title": "string", "TotalAmountYTD": 0 }'; const headers = { 'Content-Type':'application/json', 'Accept':'application/json', 'Authorization':'Bearer {access-token}' }; fetch('https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', { method: 'PUT', body: inputBody, headers: headers }) .then(function(res) { return res.json(); }).then(function(body) { console.log(body); }); ``` ```ruby require 'rest-client' require 'json' headers = { 'Content-Type' => 'application/json', 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}' } result = RestClient.put 'https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', params: { }, headers: headers p JSON.parse(result) ``` ```python import requests headers = { 'Content-Type': 'application/json', 'Accept': 'application/json', 'Authorization': 'Bearer {access-token}' } r = requests.put('https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', headers = headers) print(r.json()) ``` ```php <?php require 'vendor/autoload.php'; $headers = array( 'Content-Type' => 'application/json', 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}', ); $client = 
new \GuzzleHttp\Client(); // Define array of request body. $request_body = array(); try { $response = $client->request('PUT','https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', array( 'headers' => $headers, 'json' => $request_body, ) ); print_r($response->getBody()->getContents()); } catch (\GuzzleHttp\Exception\BadResponseException $e) { // handle exception or api errors. print_r($e->getMessage()); } // ... ``` ```java URL obj = new URL("https://www.concursolutions.com/api/v3.0/expense/attendees/{id}"); HttpURLConnection con = (HttpURLConnection) obj.openConnection(); con.setRequestMethod("PUT"); int responseCode = con.getResponseCode(); BufferedReader in = new BufferedReader( new InputStreamReader(con.getInputStream())); String inputLine; StringBuffer response = new StringBuffer(); while ((inputLine = in.readLine()) != null) { response.append(inputLine); } in.close(); System.out.println(response.toString()); ``` ```go package main import ( "bytes" "net/http" ) func main() { headers := map[string][]string{ "Content-Type": []string{"application/json"}, "Accept": []string{"application/json"}, "Authorization": []string{"Bearer {access-token}"}, } data := bytes.NewBuffer([]byte{jsonReq}) req, err := http.NewRequest("PUT", "https://www.concursolutions.com/api/v3.0/expense/attendees/{id}", data) req.Header = headers client := &http.Client{} resp, err := client.Do(req) // ... } ``` `PUT /expense/attendees/{id}` *Update attendee* Updates the specified attendee. Only the fields provided in the supplied object are updated. Missing fields are not altered. > Body parameter ```json { "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": "string", "Custom10": "string", "Custom11": "string", "Custom12": "string", "Custom13": "string", "Custom14": "string", "Custom15": "string", "Custom16": "string", "Custom17": "string", "Custom18": "string", "Custom19": "string", "Custom2": "string", "Custom20": "string", "Custom21": "string", "Custom22": "string", "Custom23": "string", "Custom24": "string", "Custom25": "string", "Custom3": "string", "Custom4": "string", "Custom5": "string", "Custom6": "string", "Custom7": "string", "Custom8": "string", "Custom9": "string", "ExternalID": "string", "FirstName": "string", "LastName": "string", "MiddleInitial": "string", "Suffix": "string", "Title": "string", "TotalAmountYTD": 0 } ``` ```xml <?xml version="1.0" encoding="UTF-8" ?> <AttendeePut> <AttendeeTypeID>string</AttendeeTypeID> <Company>string</Company> <CurrencyCode>string</CurrencyCode> <Custom1>string</Custom1> <Custom10>string</Custom10> <Custom11>string</Custom11> <Custom12>string</Custom12> <Custom13>string</Custom13> <Custom14>string</Custom14> <Custom15>string</Custom15> <Custom16>string</Custom16> <Custom17>string</Custom17> <Custom18>string</Custom18> <Custom19>string</Custom19> <Custom2>string</Custom2> <Custom20>string</Custom20> <Custom21>string</Custom21> <Custom22>string</Custom22> <Custom23>string</Custom23> <Custom24>string</Custom24> <Custom25>string</Custom25> <Custom3>string</Custom3> <Custom4>string</Custom4> <Custom5>string</Custom5> <Custom6>string</Custom6> <Custom7>string</Custom7> <Custom8>string</Custom8> <Custom9>string</Custom9> <ExternalID>string</ExternalID> <FirstName>string</FirstName> <LastName>string</LastName> <MiddleInitial>string</MiddleInitial> <Suffix>string</Suffix> <Title>string</Title> <TotalAmountYTD>0</TotalAmountYTD> </AttendeePut> ``` <h3 id="put__expense_attendees_{id}-parameters">Parameters</h3> 
|Name|In|Type|Required|Description| |---|---|---|---|---| |id|path|string|true|The attendee ID.| |user|query|string|false|The login ID of the user that has added the attendee to an expense. The user who is performing this API request must have the Web Services Admin (Professional) or Can Administer (Standard) user role to use this parameter.| |body|body|[AttendeePut](#schemaattendeeput)|true|The partial or complete Attendee object to update.| > Example responses > 200 Response ```json {} ``` ```xml <?xml version="1.0" encoding="UTF-8" ?> <Void/> ``` <h3 id="put__expense_attendees_{id}-responses">Responses</h3> |Status|Meaning|Description|Schema| |---|---|---|---| |200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Success|[Void](#schemavoid)| |400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Void](#schemavoid)| <aside class="warning"> To perform this operation, you must be authenticated by means of one of the following methods: OAuth2 </aside> ## delete__expense_attendees_{id} > Code samples ```shell # You can also use wget curl -X DELETE https://www.concursolutions.com/api/v3.0/expense/attendees/{id} \ -H 'Accept: application/json' \ -H 'Authorization: Bearer {access-token}' ``` ```http DELETE https://www.concursolutions.com/api/v3.0/expense/attendees/{id} HTTP/1.1 Host: www.concursolutions.com Accept: application/json ``` ```javascript const headers = { 'Accept':'application/json', 'Authorization':'Bearer {access-token}' }; fetch('https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', { method: 'DELETE', headers: headers }) .then(function(res) { return res.json(); }).then(function(body) { console.log(body); }); ``` ```ruby require 'rest-client' require 'json' headers = { 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}' } result = RestClient.delete 'https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', params: { }, headers: headers p JSON.parse(result) ``` ```python import requests headers = { 'Accept': 'application/json', 'Authorization': 'Bearer {access-token}' } r = requests.delete('https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', headers = headers) print(r.json()) ``` ```php <?php require 'vendor/autoload.php'; $headers = array( 'Accept' => 'application/json', 'Authorization' => 'Bearer {access-token}', ); $client = new \GuzzleHttp\Client(); // Define array of request body. $request_body = array(); try { $response = $client->request('DELETE','https://www.concursolutions.com/api/v3.0/expense/attendees/{id}', array( 'headers' => $headers, 'json' => $request_body, ) ); print_r($response->getBody()->getContents()); } catch (\GuzzleHttp\Exception\BadResponseException $e) { // handle exception or api errors. print_r($e->getMessage()); } // ... 
``` ```java URL obj = new URL("https://www.concursolutions.com/api/v3.0/expense/attendees/{id}"); HttpURLConnection con = (HttpURLConnection) obj.openConnection(); con.setRequestMethod("DELETE"); int responseCode = con.getResponseCode(); BufferedReader in = new BufferedReader( new InputStreamReader(con.getInputStream())); String inputLine; StringBuffer response = new StringBuffer(); while ((inputLine = in.readLine()) != null) { response.append(inputLine); } in.close(); System.out.println(response.toString()); ``` ```go package main import ( "bytes" "net/http" ) func main() { headers := map[string][]string{ "Accept": []string{"application/json"}, "Authorization": []string{"Bearer {access-token}"}, } data := bytes.NewBuffer([]byte{jsonReq}) req, err := http.NewRequest("DELETE", "https://www.concursolutions.com/api/v3.0/expense/attendees/{id}", data) req.Header = headers client := &http.Client{} resp, err := client.Do(req) // ... } ``` `DELETE /expense/attendees/{id}` *Delete an attendee by ID* DEPRECATED: 05/19/2016 UNSUPPORTED: 11/19/2016 Deletes the specified attendee. <h3 id="delete__expense_attendees_{id}-parameters">Parameters</h3> |Name|In|Type|Required|Description| |---|---|---|---|---| |id|path|string|true|The ID of the attendee to delete.| |user|query|string|false|The login ID of the user that has added the attendee to an expense. The user who is performing this API request must have the Web Services Admin (Professional) or Can Administer (Standard) user role to use this parameter.| > Example responses > 200 Response ```json {} ``` ```xml <?xml version="1.0" encoding="UTF-8" ?> <Void/> ``` <h3 id="delete__expense_attendees_{id}-responses">Responses</h3> |Status|Meaning|Description|Schema| |---|---|---|---| |200|[OK](https://tools.ietf.org/html/rfc7231#section-6.3.1)|Success|[Void](#schemavoid)| |400|[Bad Request](https://tools.ietf.org/html/rfc7231#section-6.5.1)|Bad Request|[Void](#schemavoid)| <aside class="warning"> To perform this operation, you must be authenticated by means of one of the following methods: OAuth2 </aside> # Schemas <h2 id="tocS_AttendeeCollection">AttendeeCollection</h2> <a id="schemaattendeecollection"></a> <a id="schema_AttendeeCollection"></a> <a id="tocSattendeecollection"></a> <a id="tocsattendeecollection"></a> ```json { "Items": { "AttendeeTypeCode": "string", "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom2": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom3": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom4": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom5": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom6": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom7": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom8": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom9": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom10": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom11": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom12": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, 
"Custom13": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom14": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom15": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom16": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom17": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom18": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom19": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom20": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom21": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom22": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom23": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom24": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom25": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "ExternalID": "string", "FirstName": "string", "HasExceptionsPrevYear": true, "HasExceptionsYTD": true, "ID": "string", "LastName": "string", "MiddleInitial": "string", "OwnerLoginID": "string", "OwnerName": "string", "Suffix": "string", "Title": "string", "TotalAmountPrevYear": 0, "TotalAmountYTD": 0, "URI": "string", "VersionNumber": 0 }, "NextPage": "string" } ``` ### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| |Items|[AttendeeGet](#schemaattendeeget)|false|none|none| |NextPage|string|false|none|The URI of the next page of results, if any.| <h2 id="tocS_AttendeeGet">AttendeeGet</h2> <a id="schemaattendeeget"></a> <a id="schema_AttendeeGet"></a> <a id="tocSattendeeget"></a> <a id="tocsattendeeget"></a> ```json { "AttendeeTypeCode": "string", "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom2": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom3": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom4": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom5": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom6": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom7": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom8": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom9": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom10": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom11": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom12": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom13": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom14": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom15": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom16": { "Code": "string", 
"ListItemID": "string", "Type": "string", "Value": "string" }, "Custom17": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom18": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom19": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom20": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom21": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom22": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom23": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom24": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "Custom25": { "Code": "string", "ListItemID": "string", "Type": "string", "Value": "string" }, "ExternalID": "string", "FirstName": "string", "HasExceptionsPrevYear": true, "HasExceptionsYTD": true, "ID": "string", "LastName": "string", "MiddleInitial": "string", "OwnerLoginID": "string", "OwnerName": "string", "Suffix": "string", "Title": "string", "TotalAmountPrevYear": 0, "TotalAmountYTD": 0, "URI": "string", "VersionNumber": 0 } ``` ### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| |AttendeeTypeCode|string|false|none|A code that indicates the type of attendee. Examples: EMPLOYEE, SPOUSE, BUSGUEST. Maximum length: 40 characters| |AttendeeTypeID|string|false|none|The ID of the attendee type. To obtain the attendee type ID value, use the "GET /expense/attendeetypes" endpoint. The value of the ID element in the response is the attendee type ID.| |Company|string|false|none|The name of the attendee's company. Maximum length: 150 characters| |CurrencyCode|string|false|none|The 3-letter ISO 4217 currency code for monetary amounts related to an attendee.| |Custom1|[CustomField](#schemacustomfield)|false|none|none| |Custom2|[CustomField](#schemacustomfield)|false|none|none| |Custom3|[CustomField](#schemacustomfield)|false|none|none| |Custom4|[CustomField](#schemacustomfield)|false|none|none| |Custom5|[CustomField](#schemacustomfield)|false|none|none| |Custom6|[CustomField](#schemacustomfield)|false|none|none| |Custom7|[CustomField](#schemacustomfield)|false|none|none| |Custom8|[CustomField](#schemacustomfield)|false|none|none| |Custom9|[CustomField](#schemacustomfield)|false|none|none| |Custom10|[CustomField](#schemacustomfield)|false|none|none| |Custom11|[CustomField](#schemacustomfield)|false|none|none| |Custom12|[CustomField](#schemacustomfield)|false|none|none| |Custom13|[CustomField](#schemacustomfield)|false|none|none| |Custom14|[CustomField](#schemacustomfield)|false|none|none| |Custom15|[CustomField](#schemacustomfield)|false|none|none| |Custom16|[CustomField](#schemacustomfield)|false|none|none| |Custom17|[CustomField](#schemacustomfield)|false|none|none| |Custom18|[CustomField](#schemacustomfield)|false|none|none| |Custom19|[CustomField](#schemacustomfield)|false|none|none| |Custom20|[CustomField](#schemacustomfield)|false|none|none| |Custom21|[CustomField](#schemacustomfield)|false|none|none| |Custom22|[CustomField](#schemacustomfield)|false|none|none| |Custom23|[CustomField](#schemacustomfield)|false|none|none| |Custom24|[CustomField](#schemacustomfield)|false|none|none| |Custom25|[CustomField](#schemacustomfield)|false|none|none| |ExternalID|string|false|none|A unique identifier for the attendee, assigned outside of Concur. 
Maximum length: 48 characters| |FirstName|string|false|none|The attendee's first name. Maximum length: 50 characters| |HasExceptionsPrevYear|boolean|false|none|Determines whether the attendee had exceptions in the previous year, based on yearly total limits for attendees. Format: true or false| |HasExceptionsYTD|boolean|false|none|Determines whether the attendee has exceptions in the current year, based on yearly total limits for attendees. Format: true or false| |ID|string|false|none|The unique identifier of the resource.| |LastName|string|false|none|The attendee's last name. Maximum length: 132 characters| |MiddleInitial|string|false|none|The attendee's middle initial. Maximum length: 1 character| |OwnerLoginID|string|false|none|The login ID of the user who owns the attendee record.| |OwnerName|string|false|none|The name of the user who owns the attendee record.| |Suffix|string|false|none|The attendee's name suffix. Maximum length: 32 characters| |Title|string|false|none|The attendee's title. Maximum length: 32 characters| |TotalAmountPrevYear|number(double)|false|none|The total amount spent on the attendee in the previous calendar year.| |TotalAmountYTD|number(double)|false|none|The total amount spent on the attendee in the current calendar year.| |URI|string|false|none|The URI to the resource.| |VersionNumber|integer(int32)|false|none|The attendee's version number.| <h2 id="tocS_AttendeePost">AttendeePost</h2> <a id="schemaattendeepost"></a> <a id="schema_AttendeePost"></a> <a id="tocSattendeepost"></a> <a id="tocsattendeepost"></a> ```json { "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": "string", "Custom10": "string", "Custom11": "string", "Custom12": "string", "Custom13": "string", "Custom14": "string", "Custom15": "string", "Custom16": "string", "Custom17": "string", "Custom18": "string", "Custom19": "string", "Custom2": "string", "Custom20": "string", "Custom21": "string", "Custom22": "string", "Custom23": "string", "Custom24": "string", "Custom25": "string", "Custom3": "string", "Custom4": "string", "Custom5": "string", "Custom6": "string", "Custom7": "string", "Custom8": "string", "Custom9": "string", "ExternalID": "string", "FirstName": "string", "LastName": "string", "MiddleInitial": "string", "Suffix": "string", "Title": "string", "TotalAmountYTD": 0 } ``` ### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| |AttendeeTypeID|string|false|none|The ID of the attendee type. To obtain the attendee type ID value, use the "GET /expense/attendeetypes" endpoint. The value of the ID element in the response is the attendee type ID.| |Company|string|false|none|The name of the attendee's company. Maximum length: 150 characters| |CurrencyCode|string|false|none|The 3-letter ISO 4217 currency code for monetary amounts related to an attendee.| |Custom1|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom10|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom11|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom12|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom13|string|false|none|A custom field associated with the attendee. 
This field may or may not have data, depending on how Expense is configured.| |Custom14|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom15|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom16|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom17|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom18|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom19|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom2|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom20|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom21|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom22|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom23|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom24|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom25|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom3|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom4|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom5|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom6|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom7|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom8|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom9|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |ExternalID|string|false|none|A unique identifier for the attendee, assigned outside of SAP Concur. Maximum length: 48 characters| |FirstName|string|false|none|The attendee's first name. Maximum length: 50 characters| |LastName|string|false|none|The attendee's last name. Maximum length: 132 characters| |MiddleInitial|string|false|none|The attendee's middle initial. Maximum length: 1 character| |Suffix|string|false|none|The attendee's name suffix. 
Maximum length: 32 characters| |Title|string|false|none|The attendee's title. Maximum length: 32 characters| |TotalAmountYTD|number(double)|false|none|The total amount spent on the attendee in the current calendar year.| <h2 id="tocS_AttendeePut">AttendeePut</h2> <a id="schemaattendeeput"></a> <a id="schema_AttendeePut"></a> <a id="tocSattendeeput"></a> <a id="tocsattendeeput"></a> ```json { "AttendeeTypeID": "string", "Company": "string", "CurrencyCode": "string", "Custom1": "string", "Custom10": "string", "Custom11": "string", "Custom12": "string", "Custom13": "string", "Custom14": "string", "Custom15": "string", "Custom16": "string", "Custom17": "string", "Custom18": "string", "Custom19": "string", "Custom2": "string", "Custom20": "string", "Custom21": "string", "Custom22": "string", "Custom23": "string", "Custom24": "string", "Custom25": "string", "Custom3": "string", "Custom4": "string", "Custom5": "string", "Custom6": "string", "Custom7": "string", "Custom8": "string", "Custom9": "string", "ExternalID": "string", "FirstName": "string", "LastName": "string", "MiddleInitial": "string", "Suffix": "string", "Title": "string", "TotalAmountYTD": 0 } ``` ### Properties |Name|Type|Required|Restrictions|Description| |---|---|---|---|---| |AttendeeTypeID|string|false|none|The ID of the attendee type. To obtain the attendee type ID value, use the "GET /expense/attendeetypes" endpoint. The value of the ID element in the response is the attendee type ID.| |Company|string|false|none|The name of the attendee's company. Maximum length: 150 characters| |CurrencyCode|string|false|none|The 3-letter ISO 4217 currency code for monetary amounts related to an attendee.| |Custom1|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom10|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom11|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom12|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom13|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom14|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom15|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom16|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom17|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom18|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom19|string|false|none|A custom field associated with the attendee. This field may or may not have data, depending on how Expense is configured.| |Custom2|string|false|none|A custom field associated with the attendee. 
This field may or may not have data, depending on how Expense is configured.|
|Custom20|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom21|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom22|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom23|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom24|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom25|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom3|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom4|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom5|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom6|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom7|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom8|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|Custom9|string|false|none|A custom field associated with the attendee.
This field may or may not have data, depending on how Expense is configured.|
|ExternalID|string|false|none|A unique identifier for the attendee, assigned outside of Concur.
Maximum length: 48 characters|
|FirstName|string|false|none|The attendee's first name.
Maximum length: 50 characters|
|LastName|string|false|none|The attendee's last name.
Maximum length: 132 characters|
|MiddleInitial|string|false|none|The attendee's middle initial.
Maximum length: 1 character|
|Suffix|string|false|none|The attendee's name suffix.
Maximum length: 32 characters|
|Title|string|false|none|The attendee's title.
Maximum length: 32 characters|
|TotalAmountYTD|number(double)|false|none|The total amount spent on the attendee in the current calendar year.|

<h2 id="tocS_CreateResponse">CreateResponse</h2>

<a id="schemacreateresponse"></a>
<a id="schema_CreateResponse"></a>
<a id="tocScreateresponse"></a>
<a id="tocscreateresponse"></a>

```json
{
  "ID": "string",
  "URI": "string"
}

```

### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
|ID|string|false|none|The unique identifier of the newly created resource.|
|URI|string|false|none|The URI to the newly created resource.|

<h2 id="tocS_CustomField">CustomField</h2>

<a id="schemacustomfield"></a>
<a id="schema_CustomField"></a>
<a id="tocScustomfield"></a>
<a id="tocscustomfield"></a>

```json
{
  "Code": "string",
  "ListItemID": "string",
  "Type": "string",
  "Value": "string"
}

```

### Properties

|Name|Type|Required|Restrictions|Description|
|---|---|---|---|---|
|Code|string|false|none|For list fields, this is the list item code.|
|ListItemID|string|false|none|For list fields, this is the list item ID.|
|Type|string|false|none|The custom field type.
Possible values: Amount, Boolean, ConnectedList, Date, Integer, List, Number, Text|
|Value|string|false|none|The value in the Org Unit or Custom field. For list fields, this is the name of the list item.
Maximum length: 48 characters|

<h2 id="tocS_Void">Void</h2>

<a id="schemavoid"></a>
<a id="schema_Void"></a>
<a id="tocSvoid"></a>
<a id="tocsvoid"></a>

```json
{}

```

### Properties

*None*
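As a final note: the generated samples above never show the request body actually being wired into the call. The following is a rough, self-contained sketch of a create-then-update round trip against these endpoints, assuming a valid OAuth2 access token; the attendee-type ID and the field values are illustrative placeholders, not values taken from this reference.

```python
import requests

BASE = "https://www.concursolutions.com/api/v3.0/expense/attendees"
HEADERS = {
    "Authorization": "Bearer {access-token}",  # placeholder; obtain via your OAuth2 flow
    "Content-Type": "application/json",
    "Accept": "application/json",
}

# POST /expense/attendees: create an attendee from an AttendeePost body.
create_body = {
    "AttendeeTypeID": "{attendee-type-id}",  # placeholder; see GET /expense/attendeetypes
    "FirstName": "Pat",
    "LastName": "Jones",
    "Company": "Example Corp",
    "ExternalID": "ext-0001",
}
resp = requests.post(BASE, headers=HEADERS, json=create_body)
resp.raise_for_status()
attendee_id = resp.json()["ID"]  # CreateResponse carries ID and URI

# PUT /expense/attendees/{id}: only the fields supplied here are changed.
resp = requests.put(f"{BASE}/{attendee_id}", headers=HEADERS, json={"Title": "Dr."})
resp.raise_for_status()

# GET /expense/attendees/{id}: read the record back as an AttendeeGet.
attendee = requests.get(f"{BASE}/{attendee_id}", headers=HEADERS).json()
print(attendee.get("FirstName"), attendee.get("Title"))
```

Because `PUT` updates only the fields supplied, the second call changes `Title` and leaves the rest of the record untouched, which matches the partial-update semantics described for the endpoint.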
26.790027
341
0.648946
eng_Latn
0.316511
7aa2de91e85aaa97e94da53dd24602052dfdb1e9
1,365
md
Markdown
2020/11/30/2020-11-30 01:55.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
3
2020-07-14T14:54:15.000Z
2020-08-21T06:48:24.000Z
2020/11/30/2020-11-30 01:55.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
null
null
null
2020/11/30/2020-11-30 01:55.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
null
null
null
2020年11月30日01时数据 Status: 200 1.贾玲身材 微博热度:739344 2.华春莹连发三推为丁真打call 微博热度:570515 3.F1严重事故 微博热度:543588 4.虾米音乐 微博热度:429218 5.应采儿说陈小春GAI好欠打 微博热度:428930 6.蔡依林小S接吻 微博热度:414118 7.丁真用藏语接受央视采访 微博热度:347380 8.这就是有钱人的家吗 微博热度:259098 9.以色列外交部拒绝评论暗杀事件 微博热度:249605 10.锦江学院 微博热度:245671 11.肖战粉丝 录音 微博热度:227186 12.江疏影拿暖手宝走红毯 微博热度:202118 13.乔欣胡一天CP感 微博热度:169769 14.警方通报锦江学院2人死亡 微博热度:166940 15.河南洛阳发现2600多年前戎人王级大墓 微博热度:145445 16.蛋壳 微博热度:145425 17.丁真 微博热度:145384 18.胡宇桐 微博热度:145328 19.香港中小学12月2日起停课 微博热度:145276 20.Halo系统 微博热度:145228 21.王祖蓝卧铺交友交了个寂寞 微博热度:145173 22.我国超2.5亿人受脱发困扰 微博热度:145148 23.张云雷年度最具突破男歌手 微博热度:139986 24.容祖儿希林娜依高不如跳舞 微博热度:132913 25.蓝俏俏高级绿茶 微博热度:131879 26.天天向上 微博热度:128831 27.嫦娥五号近月点再次刹车 微博热度:126196 28.可以显示表情的口罩 微博热度:125138 29.伊朗外长扎里夫用中文发推 微博热度:124486 30.我们的歌 微博热度:124156 31.被拒绝的小猫有多委屈 微博热度:117502 32.压倒王俊凯的最后一个雪球 微博热度:113482 33.蔡依林把小s口红亲花了 微博热度:105846 34.伊朗德黑兰爆发抗议活动 微博热度:103160 35.女童幼儿园午休时死亡家属发声 微博热度:102108 36.陈可辛称不会捍卫自己的电影 微博热度:101763 37.RM 微博热度:96797 38.格罗斯让撞车起火 微博热度:95357 39.黑豹流媒体版换新片头纪念博斯曼 微博热度:94635 40.StartUp 微博热度:93537 41.张译一秒落泪 微博热度:88721 42.孟美岐金色短发齐刘海 微博热度:81623 43.现在的鬼屋不是鬼屋了 微博热度:78480 44.偷电动车不会骑推15里弄回家 微博热度:77783 45.电影电话 微博热度:71706 46.尔冬升被黄奕跳舞吓到 微博热度:70621 47.杨幂把85穿在身上 微博热度:68224 48.大学课堂堪比发布会 微博热度:50994 49.雪地策马的女副县长名叫贺娇龙 微博热度:50430 50.爱的厘米 微博热度:50429
6.691176
22
0.775092
yue_Hant
0.328154
7aa312199f83ad6899a9462b5e606a723eb5f6fb
1,491
md
Markdown
SecurityCompliance/run-reports-in-advanced-ediscovery.md
alisalih1/OfficeDocs-o365seccomp
ac64a6042b76a308f7c17e459cd24a565169456d
[ "CC-BY-4.0", "MIT" ]
2
2019-01-23T01:57:16.000Z
2019-10-13T15:53:27.000Z
SecurityCompliance/run-reports-in-advanced-ediscovery.md
alisalih1/OfficeDocs-o365seccomp
ac64a6042b76a308f7c17e459cd24a565169456d
[ "CC-BY-4.0", "MIT" ]
null
null
null
SecurityCompliance/run-reports-in-advanced-ediscovery.md
alisalih1/OfficeDocs-o365seccomp
ac64a6042b76a308f7c17e459cd24a565169456d
[ "CC-BY-4.0", "MIT" ]
2
2020-11-03T22:44:21.000Z
2021-10-03T22:46:55.000Z
---
title: "Run reports in Office 365 Advanced eDiscovery"
ms.author: chrfox
author: chrfox
manager: laurawi
ms.date: 9/14/2017
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
localization_priority: Normal
search.appverid:
- MOE150
- MET150
ms.assetid: b270243e-99a0-4c34-9b21-acb1512d56c6
description: "Learn how to run a report and then download its .csv file in Office 365 Advanced eDiscovery."
---

# Run reports in Office 365 Advanced eDiscovery

> [!NOTE]
> Advanced eDiscovery requires an Office 365 E3 subscription with the Advanced Compliance add-on or an E5 subscription for your organization. If you don't have that plan and want to try Advanced eDiscovery, you can [sign up for a trial of Office 365 Enterprise E5](https://go.microsoft.com/fwlink/p/?LinkID=698279).

This topic describes how to run reports in Advanced eDiscovery.

## Running reports

You can download a .csv file with a report for the selected process.

1. In the **Reports** tab, select an option from the **Report name** list. Select from three **Report name** options: **Relevance decide**, **Themes list**, or **Tagged files**.

   ![eDiscovery Analytics Reports](media/f16aee7a-508f-4acc-99bc-a2c8dec01312.png)

2. Set the available parameters and the sort and filter options, which vary depending on the selected report.

3. Click **Download CSV**. The requested report is generated and downloaded.

## See also

[Office 365 Advanced eDiscovery](office-365-advanced-ediscovery.md)
35.5
303
0.753186
eng_Latn
0.941441
7aa3416d0094305c15bcec9ca2c469463ebd729f
93
md
Markdown
README.md
edbird/Tracker-TestTank-Analysis
5ad4ada221f048a3e8726dad28065099bc78b748
[ "Apache-2.0" ]
null
null
null
README.md
edbird/Tracker-TestTank-Analysis
5ad4ada221f048a3e8726dad28065099bc78b748
[ "Apache-2.0" ]
null
null
null
README.md
edbird/Tracker-TestTank-Analysis
5ad4ada221f048a3e8726dad28065099bc78b748
[ "Apache-2.0" ]
null
null
null
# Tracker-TestTank-Analysis SuperNEMO Tracker TestTank Analysis Code (C++) (tracker-signals)
31
64
0.795699
kor_Hang
0.247331
7aa34687ee245b933fca329901744e8dff212874
15,057
md
Markdown
use-cases/forensic_clustering/README.md
chaitrasrinivas/metron
2e78df67c12a6fcad726551128e9753ad36d5ee9
[ "Apache-2.0" ]
1
2020-03-09T16:12:39.000Z
2020-03-09T16:12:39.000Z
use-cases/forensic_clustering/README.md
chaitrasrinivas/metron
2e78df67c12a6fcad726551128e9753ad36d5ee9
[ "Apache-2.0" ]
null
null
null
use-cases/forensic_clustering/README.md
chaitrasrinivas/metron
2e78df67c12a6fcad726551128e9753ad36d5ee9
[ "Apache-2.0" ]
null
null
null
# Problem Statement

A forensic hash, such as [TLSH](https://github.com/trendmicro/tlsh), is a useful tool in cybersecurity.  In short, the notion is that semantically similar documents should hash to values which are also similar.  Contrast this with standard cryptographic hashes, such as SHA and MD, where small deviations in the input data will yield large deviations in the hashes.  The traditional use-case is to hash input documents or binaries and compare against a known blacklist of malicious hashes.  A sufficiently similar hash will indicate a match.  This keeps malicious parties from fuzzing input data to evade detection.

While this is interesting, it still requires metric-space searches in a blacklist.  I envisioned a slightly more interesting streaming use-case of on-the-fly clustering of data.  While the TLSH hashes created do not necessarily hash to precisely the same value on similar documents, more traditional, non-forensic locality-sensitive hashes *do* collide when the inputs are sufficiently similar.  Namely, the Hamming distance [LSH](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Bit_sampling_for_Hamming_distance) applied to the TLSH hash gives us a way to bin semantic hashes such that similar hashes (by Hamming distance) land in the same bin.

Inspired by a good [talk](https://github.com/fluenda/dataworks_summit_iot_botnet/blob/master/dws-fucs-lopresto.pdf) by Andy LoPresto and Andre Fucs de Miranda from Apache NiFi, we will proceed to take logs from the Cowrie honeypot and compute TLSH hashes and semantic bins so that users can easily find activity in the logs that is similar to known threats.

Consider the following excerpts from the Cowrie logs the authors above have shared:
```
{ "eventid": "cowrie.command.success"
, "timestamp": "2017-09-18T11:45:25.028091Z"
, "message": "Command found: /bin/busybox LSUCT"
, "system": "CowrieTelnetTransport,787,121.237.129.163"
, "isError": 0
, "src_ip": "121.237.129.163"
, "session": "21caf72c6358"
, "input": "/bin/busybox LSUCT"
, "sensor": "a927e8b28666"
}
```
and
```
{ "eventid": "cowrie.command.success"
, "timestamp": "2017-09-17T04:06:39.673206Z"
, "message": "Command found: /bin/busybox XUSRH"
, "system": "CowrieTelnetTransport,93,94.51.110.74"
, "isError": 0
, "src_ip": "94.51.110.74"
, "session": "4c047bbc016c"
, "input": "/bin/busybox XUSRH"
, "sensor": "a927e8b28666"
}
```

You will note the `/bin/busybox` call with a random string afterwards.  Excerpting from an analysis of an IoT exploit [here](https://isc.sans.edu/diary/21543):
```
The use of the command "busybox ECCHI" appears to have two functions.  First of all, cowrie, and more "complete" Linux distributions than are commonly found on DVRs, will respond with a help screen if a wrong module is used.  So this way, "ECCHI" can be used to detect honeypots and irrelevant systems if the reply isn't simply "ECCHI: applet not found".  Secondly, the command is used as a marker to indicate that the prior command finished.  Later, the attacker adds "/bin/busybox ECCHI" at the end of each line, following the actual command to be executed.
```

We have a few options at our disposal:

* If we were merely filtering and alerting on the execution of `/bin/busybox`, we would include false positives.
* If we looked at `/bin/busybox XUSRH`, we'd miss many attempts with a *different* value, as `XUSRH` can be swapped out for another random sequence to foil overly strict rules.
* If we looked for `/bin/busybox *` then we'd capture this scenario well, but it'd be nice not to be tied specifically to detecting the `/bin/busybox` style of exploits.

Indeed, this is precisely what semantic hashing and binning gives us: the ability to group by semantic similarity without being too specific about what we mean by "semantic" or "similar".  We want to cast a wide net, but not pull back every fish in the sea.

For this demonstration, we will
* ingest some 400 cowrie records
* tag records from an IP blacklist for known malicious actors
* use the Alerts UI to investigate and find similar attacks.

## Preliminaries

We assume that the following environment variables are set:
* `METRON_HOME` - the home directory for metron
* `ZOOKEEPER` - The zookeeper quorum (comma separated with port specified: e.g. `node1:2181` for full-dev)
* `BROKERLIST` - The Kafka broker list (comma separated with port specified: e.g. `node1:6667` for full-dev)
* `ES_HOST` - The elasticsearch master (and port) e.g. `node1:9200` for full-dev.

These instructions do not assume that you are using a kerberized cluster.  If you are, then the parser start command will adjust slightly to include the security protocol.

Before editing configurations, be sure to pull the configs from zookeeper locally via
```
$METRON_HOME/bin/zk_load_configs.sh --mode PULL -z $ZOOKEEPER -o $METRON_HOME/config/zookeeper/ -f
```

## Setting up the Data

First we must set up the cowrie log data on our cluster's access node.

* Download the data from the GitHub repository for the talk mentioned above [here](https://github.com/fluenda/dataworks_summit_iot_botnet/blob/master/180424243034750.tar.gz).  Ensure that it's moved into your home directory on the Metron node.
* Create a directory called `cowrie` in `~` and untar the tarball into that directory via:
```
mkdir ~/cowrie
cd ~/cowrie
tar xzvf ~/180424243034750.tar.gz
```

## Configuring the Parser

The Cowrie data is coming in as simple JSON blobs, so it's easy to parse.  We really just need to adjust the timestamp and a few fields and we have valid data.
* Create `$METRON_HOME/config/zookeeper/parsers/cowrie.json` with the following content:
```
{
  "parserClassName":"org.apache.metron.parsers.json.JSONMapParser",
  "sensorTopic":"cowrie",
  "fieldTransformations" : [
    {
      "transformation" : "STELLAR"
      ,"output" : [ "timestamp"]
      ,"config" : {
        "timestamp" : "TO_EPOCH_TIMESTAMP( timestamp, 'yyyy-MM-dd\\'T\\'HH:mm:ss.SSS')"
      }
    }
  ]
}
```

Before we start, we will want to install ES mappings so ES knows how to interpret our fields:
```
curl -XPUT "http://$ES_HOST/cowrie*/_mapping/cowrie_doc" -d '
{
  "properties" : {
    "adapter:stellaradapter:begin:ts" : { "type" : "string" },
    "adapter:stellaradapter:end:ts" : { "type" : "string" },
    "blacklisted" : { "type" : "boolean" },
    "compCS" : { "type" : "string" },
    "data" : { "type" : "string" },
    "dst_ip" : { "type" : "string" },
    "dst_port" : { "type" : "long" },
    "duration" : { "type" : "double" },
    "encCS" : { "type" : "string" },
    "enrichmentjoinbolt:joiner:ts" : { "type" : "string" },
    "enrichmentsplitterbolt:splitter:begin:ts" : { "type" : "string" },
    "enrichmentsplitterbolt:splitter:end:ts" : { "type" : "string" },
    "eventid" : { "type" : "string" },
    "guid" : { "type" : "string" },
    "input" : { "type" : "string" },
    "isError" : { "type" : "long" },
    "is_alert" : { "type" : "string" },
    "kexAlgs" : { "type" : "string" },
    "keyAlgs" : { "type" : "string" },
    "macCS" : { "type" : "string" },
    "message" : { "type" : "string" },
    "original_string" : { "type" : "string" },
    "password" : { "type" : "string" },
    "sensor" : { "type" : "string" },
    "session" : { "type" : "string" },
    "similarity_bin" : { "type" : "string" },
    "size" : { "type" : "long" },
    "source:type" : { "type" : "string" },
    "src_ip" : { "type" : "string" },
    "src_port" : { "type" : "long" },
    "system" : { "type" : "string" },
    "threat:triage:rules:0:comment" : { "type" : "string" },
    "threat:triage:rules:0:name" : { "type" : "string" },
    "threat:triage:rules:0:reason" : { "type" : "string" },
    "threat:triage:rules:0:score" : { "type" : "long" },
    "threat:triage:score" : { "type" : "double" },
    "threatinteljoinbolt:joiner:ts" : { "type" : "string" },
    "threatintelsplitterbolt:splitter:begin:ts" : { "type" : "string" },
    "threatintelsplitterbolt:splitter:end:ts" : { "type" : "string" },
    "timestamp" : { "type" : "long" },
    "tlsh" : { "type" : "string" },
    "ttylog" : { "type" : "string" },
    "username" : { "type" : "string" },
    "version" : { "type" : "string" },
    "alert" : { "type" : "nested" }
  }
}
'
```

* Create the `cowrie` Kafka topic via:
```
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper $ZOOKEEPER --create --topic cowrie --partitions 1 --replication-factor 1
```

## Import the Blacklist

Here, to build out a scenario, we will assume that we have a blacklist of known malicious hosts.  For our purposes, we'll choose one particular host IP to be malicious.

* Create `~/blacklist.csv` to contain the following:
```
94.51.110.74
```
* Create `~/blacklist_extractor.json` to contain the following:
```
{
  "config" : {
    "columns" : {
      "ip" : 0
    },
    "indicator_column" : "ip",
    "type" : "blacklist",
    "separator" : ","
  },
  "extractor" : "CSV"
}
```
* Import the data via `$METRON_HOME/bin/flatfile_loader.sh -i ~/blacklist.csv -t threatintel -c t -e ~/blacklist_extractor.json`

This will create a new enrichment type "blacklist" with a single entry "94.51.110.74".

## Configure Enrichments

We will want to do the following:
* Add enrichments to facilitate binning
* Construct what we consider to be a sufficient representation of the thing we want to cluster.
For our purposes, this is centered around the input command, so that would be:
  * The `message` field
  * The `input` field
  * The `isError` field
* Compute the TLSH hash of this representation, called `tlsh`
* Compute the locality-sensitive hash of the TLSH hash suitable for binning, called `similarity_bin`
* Set up the threat intelligence to use the blacklist
* Set an alert if the message is from an IP address in the threat intelligence blacklist.
* Score blacklisted messages with `10`.  In production, this would be more complex.

Now, we can create the enrichments by creating `$METRON_HOME/config/zookeeper/enrichments/cowrie.json` with the following content:
```
{
  "enrichment": {
    "fieldMap": {
      "stellar" : {
        "config" : [
          "characteristic_rep := JOIN([ 'message', exists(message)?message:'', 'input', exists(input)?input:'', 'isError', exists(isError)?isError:''], '|')",
          "forensic_hashes := HASH(characteristic_rep, 'tlsh', { 'hashes' : 16, 'bucketSize' : 128 })",
          "similarity_bin := MAP_GET('tlsh_bin', forensic_hashes)",
          "tlsh := MAP_GET('tlsh', forensic_hashes)",
          "forensic_hashes := null",
          "characteristic_rep := null"
        ]
      }
    }
    ,"fieldToTypeMap": {
    }
  },
  "threatIntel": {
    "fieldMap": {
      "stellar" : {
        "config" : [
          "blacklisted := ENRICHMENT_EXISTS( 'blacklist', src_ip, 'threatintel', 't')",
          "is_alert := (exists(is_alert) && is_alert) || blacklisted"
        ]
      }
    },
    "fieldToTypeMap": {
    },
    "triageConfig" : {
      "riskLevelRules" : [
        {
          "name" : "Blacklisted Host",
          "comment" : "Determine if a host is blacklisted",
          "rule" : "blacklisted != null && blacklisted",
          "score" : 10,
          "reason" : "FORMAT('IP %s is blacklisted', src_ip)"
        }
      ],
      "aggregator" : "MAX"
    }
  }
}
```

### A Note About Similarity Hashes and TLSH

Notice that we have specified `16` hash functions when constructing the similarity bin.  I arrived at that by trial and error, which is not always tenable, frankly.  What is likely more sensible is to construct *multiple* similarity bins of sizes `8`, `16`, and `32` at minimum (a sketch of this idea in Python appears at the end of this use-case).
* The smaller the number of hashes, the looser the notion of similarity (more possibly dissimilar things would get grouped together).
* The larger the number of hashes, the stricter the notion of similarity (similar things may not be grouped together).

## Create the Data Loader

We want to pull a snapshot of the cowrie logs, so create `~/load_data.sh` with the following content:
```
COWRIE_HOME=~/cowrie
for i in cowrie.1626302-1636522.json cowrie.16879981-16892488.json cowrie.21312194-21331475.json cowrie.698260-710913.json cowrie.762933-772239.json cowrie.929866-939552.json cowrie.1246880-1248235.json cowrie.19285959-19295444.json cowrie.16542668-16581213.json cowrie.5849832-5871517.json cowrie.6607473-6609163.json;do
  echo $i
  cat $COWRIE_HOME/$i | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $BROKERLIST --topic cowrie
  sleep 2
done
```
* Set the `+x` bit on the executable via:
```
chmod +x ~/load_data.sh
```

## Execute Demonstration

From here, we've set up our configuration and can push the configs:
* Push the configs to zookeeper via
```
$METRON_HOME/bin/zk_load_configs.sh --mode PUSH -z $ZOOKEEPER -i $METRON_HOME/config/zookeeper/
```
* Start the parser via:
```
$METRON_HOME/bin/start_parser_topology.sh -k $BROKERLIST -z $ZOOKEEPER -s cowrie
```
* Push cowrie data into the `cowrie` topic via
```
~/load_data.sh
```

Once this data is loaded, we can use the Alerts UI, starting from known malicious actors, to find others doing similar things.
* First we can look at the alerts directly and find an instance of our `/bin/busybox` activity:

![Alerts](find_alerts.png)

* We can now pivot and look for instances of messages with the same `similarity_bin` but which are *not* alerts:

![Pivot](clustered.png)

As you can see, we have found a few more malicious actors:

* 177.239.192.172
* 180.110.69.182
* 177.238.236.21
* 94.78.80.45

Now we can look at *other* things that they're doing to build and refine our definition of what an alert is without resorting to hard-coding of rules. Note that nothing in our enrichments actually used the string `busybox`, so this is a more general-purpose way of navigating similar things.
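To make the binning idea concrete outside of Stellar, here is a rough Python sketch of the characteristic representation and TLSH digest. It assumes the third-party py-tlsh bindings (`pip install py-tlsh`, imported as `tlsh`), and the prefix truncation below only illustrates the locality-sensitive bucketing trade-off discussed above; it is not the exact `HASH(..., 'tlsh', ...)` implementation Metron uses.

```python
# Sketch of characteristic_rep -> tlsh -> similarity_bin, outside of Stellar.
# Assumes py-tlsh is installed (pip install py-tlsh); the prefix-based bin
# below illustrates the bucketing idea only and is not Metron's exact
# HASH(..., 'tlsh', ...) implementation.
import tlsh

def characteristic_rep(event: dict) -> str:
    # Mirrors the Stellar JOIN: field names and values joined with '|'.
    parts = []
    for field in ("message", "input", "isError"):
        parts.extend([field, str(event.get(field, ""))])
    return "|".join(parts)

def hashes(event: dict, prefix_len: int = 16):
    digest = tlsh.hash(characteristic_rep(event).encode("utf-8"))
    # TLSH needs roughly 50+ bytes with enough variety; py-tlsh returns
    # "TNULL" for inputs that are too short or too uniform.
    if digest == "TNULL":
        return None, None
    # A shorter prefix bins more loosely, a longer one more strictly,
    # matching the "number of hashes" trade-off described above.
    return digest, digest[:prefix_len]

cmd = "/bin/busybox cp /bin/echo /tmp/.x; /bin/busybox chmod 777 /tmp/.x"
event = {"message": "CMD: " + cmd, "input": cmd, "isError": 0}
print(hashes(event))
```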
34.693548
321
0.629342
eng_Latn
0.97529
7aa422b277a085e6cc42cd6b96258141a6c8d517
1,015
md
Markdown
README.md
zanachka/scrapyext
2dd5e0fc03f8e4b8793b808744d4dd6452e5d5b3
[ "BSD-3-Clause" ]
12
2016-01-14T16:15:56.000Z
2021-08-12T09:31:27.000Z
README.md
zanachka/scrapyext
2dd5e0fc03f8e4b8793b808744d4dd6452e5d5b3
[ "BSD-3-Clause" ]
null
null
null
README.md
zanachka/scrapyext
2dd5e0fc03f8e4b8793b808744d4dd6452e5d5b3
[ "BSD-3-Clause" ]
3
2017-06-09T21:57:23.000Z
2022-03-13T08:16:11.000Z
scrapyext
=========

[scrapyext][scrapyext.git] is meant as a source for code samples and modules related to the [Scrapy][scrapy.git] framework.

Code in this repo may be deprecated or not work with current scrapy versions, isn't endorsed or supported by anyone, and might not be fit for any use. Quality of code may differ vastly between modules.

About
-----

[Scrapy][scrapy.git] and [scrapylib][scrapylib.git] take well-written and well-tested code fitting the scope of these projects. Code that doesn't fit those constraints (because it is missing tests, is a hack, lacks quality, or simply doesn't fit the scope) might still be useful to others for ideas or to improve.

As such, `scrapyext` should be seen as a namespace to find scrapy modules, hacks, and related code that are outside the scope of scrapy/lib.

License
-------

(3-clause) BSD

----

[scrapy.git]: https://github.com/scrapy/scrapy
[scrapylib.git]: https://github.com/scrapinghub/scrapylib
[scrapyext.git]: https://github.com/nyov/scrapyext
32.741935
77
0.757635
eng_Latn
0.996434
7aa51de1843da84d079938888b90c94e76dc250a
1,681
md
Markdown
Exchange/ExchangeOnline/monitoring/what-happened-to-delivery-reports-in-office-365.md
dplotnikov/OfficeDocs-Exchange
b6462cba2d4f031eb6d8718d5d3b2a597423d686
[ "CC-BY-4.0", "MIT" ]
null
null
null
Exchange/ExchangeOnline/monitoring/what-happened-to-delivery-reports-in-office-365.md
dplotnikov/OfficeDocs-Exchange
b6462cba2d4f031eb6d8718d5d3b2a597423d686
[ "CC-BY-4.0", "MIT" ]
null
null
null
Exchange/ExchangeOnline/monitoring/what-happened-to-delivery-reports-in-office-365.md
dplotnikov/OfficeDocs-Exchange
b6462cba2d4f031eb6d8718d5d3b2a597423d686
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
localization_priority: Normal
description: Delivery reports was a feature in Office 365 that allowed users and administrators to discover and view delivery information about messages.
ms.topic: article
author: chrisda
ms.author: chrisda
ms.assetid: f7efced3-6964-41da-bd54-e14620e8a0de
title: What happened to delivery reports in Office 365?
ms.collection: exchange-online
search.appverid:
- BCS160
- MET150
ms.audience: Admin
ms.custom: MiniMaven
ms.service: exchange-online
manager: serdars
---

# What happened to delivery reports in Office 365?

Delivery reports was a feature in Office 365 that allowed users and administrators to discover and view delivery information about messages.

In Office 365, delivery reports for administrators has been replaced by message trace. For more information, see these topics:

- [Using Message Trace](https://support.office.com/article/bbf5a330-e83f-43d1-9d51-cfd17d576dd8.aspx)

- [Trace an email message](https://go.microsoft.com/fwlink/p/?linkid=282262)

Currently, there's no direct replacement for delivery reports for users, so the delivery report links in Outlook and Outlook on the web don't go anywhere.

**Notes**

- Delivery reports for users and administrators is still available in on-premises Exchange environments. For more information, see [Track messages with delivery reports](https://go.microsoft.com/fwlink/p/?linkid=282265).

- Read receipts and delivery notifications aren't related to delivery reports, and are still available in Office 365. For more information, see [Add and request read receipts and delivery notifications](https://support.office.com/article/a34bf70a-4c2c-4461-b2a1-12e4a7a92141.aspx).
42.025
281
0.800714
eng_Latn
0.980263
7aa536bb4f474df58d264799dab5594876dd26d2
92
md
Markdown
src/ch08-goroutines-and-channels/readme.md
irisida/bluebook
82d20f9afbcf6fed251bd69e29abc96b586c5d38
[ "MIT" ]
null
null
null
src/ch08-goroutines-and-channels/readme.md
irisida/bluebook
82d20f9afbcf6fed251bd69e29abc96b586c5d38
[ "MIT" ]
null
null
null
src/ch08-goroutines-and-channels/readme.md
irisida/bluebook
82d20f9afbcf6fed251bd69e29abc96b586c5d38
[ "MIT" ]
null
null
null
![](/assets/bluebookrepologo.png)

# The Blue Book - Chapter 08 - Goroutines and Channels
15.333333
54
0.717391
eng_Latn
0.438973
7aa57016dbf8f7344a4a67337909635ae835adfe
14,634
md
Markdown
os/linux/aaa.md
based2/KB
3091f93c85baaacbe42d3141b86aed52c05a9ddb
[ "Apache-2.0" ]
2
2021-08-05T11:16:45.000Z
2021-11-29T14:01:23.000Z
os/linux/aaa.md
based2/KB
3091f93c85baaacbe42d3141b86aed52c05a9ddb
[ "Apache-2.0" ]
null
null
null
os/linux/aaa.md
based2/KB
3091f93c85baaacbe42d3141b86aed52c05a9ddb
[ "Apache-2.0" ]
null
null
null
https://tldp.org/docs.html#howto > https://news.ycombinator.com/item?id=26991660 https://news.ycombinator.com/item?id=25506763 Print RAM Usage of process including children https://libreboot.org/news/libreboot202104xx.html https://ubuntu.com/blog/private-home-directories-for-ubuntu-21-04 > https://lobste.rs/s/k1tojw/private_home_directories_for_ubuntu_21_04 https://linuxzoo.net/ https://people.kernel.org/kuba/common-interface-for-nic-statistics https://man7.org/tlpi/api_changes/index.html > https://news.ycombinator.com/item?id=27052992 https://mjg59.dreamwidth.org/57199.html Producing a trustworthy x86-based Linux appliance > https://news.ycombinator.com/item?id=27365057 https://despairlabs.com/posts/2021-06-16-io-uring-is-not-an-event-system/ > https://news.ycombinator.com/item?id=27540248 https://linuxguidehq.com/linux-commands-cheat-sheet/ https://www.redhat.com/sysadmin/7-linux-namespaces # Selinux https://selinuxproject.org/page/Main_Page https://muhammadraza.me/2021/Linux-FS/ # Libc https://en.wikipedia.org/wiki/GNU_C_Library > https://sourceware.org/git/glibc.git https://www.musl-libc.org/intro.html https://justine.lol/cosmopolitan/index.html # Tux https://github.com/Nautalis/Tux3D # Embedded https://barebox.org/ https://librecmc.org/ https://www.rauc.io/ Updates > https://news.ycombinator.com/item?id=27354223 # Kernel modules https://www.collabora.com/news-and-blog/blog/2021/05/05/quick-hack-patching-kernel-module-using-dkms/ # Elf https://github.com/ruslashev/elfcat > https://news.ycombinator.com/item?id=27590508 https://github.com/NixOS/patchelf > https://news.ycombinator.com/item?id=27079565 https://github.com/intoli/exodus > https://news.ycombinator.com/item?id=29446297 https://kestrelcomputer.github.io/kestrel/2018/02/01/on-elf-2 > https://news.ycombinator.com/item?id=29660319 # Window manager https://www.freedesktop.org/wiki/ https://wayland.freedesktop.org/ https://wiki.gnome.org/Projects/Mutter https://www.enlightenment.org/ # UEFI https://linderud.dev/blog/mkinitcpio-v31-and-uefi-stubs/ > https://news.ycombinator.com/item?id=28273182 # News https://cloudnull.io/2017/05/nfs-mount-via-systemd/ https://www.phoronix.com/scan.php?page=news_item&px=systemd-250 https://tuxphones.com/mobile-linux-phone-desktop-environments-de-comparison-interfaces/ https://framagit.org/LeDub/Pierre_de_Rosette-apt-dnf/-/blob/main/README.md https://www.theregister.com/2021/12/08/intel_software_defined_silicon_update/ SDSi DRM > https://news.ycombinator.com/item?id=29483655 https://lkml.org/lkml/2021/12/6/461 > https://news.ycombinator.com/item?id=29485465 https://github.com/haampie/libtree > https://news.ycombinator.com/item?id=29413753 https://news.ycombinator.com/item?id=29422574 Which single board computers work with a vanilla blobless kernel? https://news.ycombinator.com/item?id=29307245 How do you quickly add calendar events in Linux? 
https://alpinelinux.org/posts/Alpine-3.15.0-released.html > https://news.ycombinator.com/item?id=29330394 https://gauthier.uk/blog/grub_less/ > https://news.ycombinator.com/item?id=29278063 https://blog.cloudflare.com/the-tale-of-a-single-register-value/?a > https://news.ycombinator.com/item?id=29260464 https://gist.github.com/motorailgun/cc2c573f253d0893f429a165b5f851ee Installing Windows and Linux into the same partition > https://news.ycombinator.com/item?id=29288720 https://news.ycombinator.com/item?id=29215270 https://changelog.complete.org/archives/10311-managing-an-external-display-on-linux-shouldnt-be-this-hard https://github.com/nnsee/fileless-elf-exec > https://www.reddit.com/r/netsec/comments/qsihvt/fee_execute_elf_binaries_without_dropping_files/ https://decoded.legal/blog/2021/11/running-a-law-firm-on-linux > https://news.ycombinator.com/item?id=29199395 https://manybutfinite.com/post/kernel-boot-process/ https://github.com/nuta/kerla Monolithic operating system kernel written from scratch in Rust which aims to be compatible with the Linux ABI, that is, it runs Linux binaries without any modifications. Rust https://shorez.de/linux-on-the-m-1-with-gpu-acceleration > https://news.ycombinator.com/item?id=29144074 https://github.com/matheusmoreira/liblinux C library that provides architecture-independent access to Linux system calls > https://news.ycombinator.com/item?id=29158425 https://www.coverfire.com/articles/queueing-in-the-linux-network-stack/ http://dbp-consulting.com/tutorials/debugging/linuxProgramStartup.html > https://news.ycombinator.com/item?id=29104841 https://www.reddit.com/r/linux/comments/qkm01c/a_refresher_on_the_linux_file_system_structure/ https://yggdrasil-sr.github.io/ > https://news.ycombinator.com/item?id=29056168 https://mozillagfx.wordpress.com/2021/10/30/switching-the-linux-graphics-stack-from-glx-to-egl/ https://lists.x.org/archives/xorg/2021-October/060799.html > https://news.ycombinator.com/item?id=29016318 https://github.com/nuta/kerla > https://news.ycombinator.com/item?id=28986229 https://www.phoronix.com/scan.php?page=news_item&px=8M-IOPS-Per-Core-Linux > https://news.ycombinator.com/item?id=28893863 https://github.com/AdnanHodzic/auto-cpufreq laptop > https://news.ycombinator.com/item?id=28894335 https://nathanotterness.com/2021/10/tiny_elf_modernized.html > https://news.ycombinator.com/item?id=28848070 https://inconsolation.wordpress.com/ https://philsyme.github.io/lfs-tw/ https://github.com/NsCDE/NsCDE https://en.wikipedia.org/wiki/Cooperative_Linux https://xtermwm.sourceforge.io/ > https://news.ycombinator.com/item?id=28808837 https://github.com/nix-gui/nix-gui https://awesomewm.org/ https://xmonad.org/ > https://news.ycombinator.com/item?id=28793941 https://blogs.gnome.org/uraeus/2021/10/01/pipewire-and-fixing-the-linux-video-capture-stack/ > https://news.ycombinator.com/item?id=28726125 https://mauricius.dev/configure-an-infrared-remote-control-with-linux/ https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.1M-IOPS-Per-Core > https://news.ycombinator.com/item?id=28706762 https://who-t.blogspot.com/2021/09/an-xorg-release-without-xwayland.html https://arstechnica.com/gadgets/2021/09/android-to-take-an-upstream-first-development-model-for-the-linux-kernel/ https://github.com/aristocratos/btop > https://news.ycombinator.com/item?id=28634898 https://darkshadow.io/2020/08/01/speech-synthesis-on-linux.html > https://news.ycombinator.com/item?id=28651588 https://cenains.blog/2021/08/31/sudo-in-system/ > 
https://news.ycombinator.com/item?id=28594057 https://マリウス.com/linux-on-the-desktop-part-two/ > https://news.ycombinator.com/item?id=28585005 https://linux.slashdot.org/story/21/09/18/0050255/is-2021-the-year-of-the-linux-desktop > https://news.ycombinator.com/item?id=28583352 https://www.omgubuntu.co.uk/2021/09/ubuntu-makes-firefox-snap-default > https://news.ycombinator.com/item?id=28564600 https://lunduke.substack.com/p/the-linux-distributions-of-1992 > https://news.ycombinator.com/item?id=28552922 https://kde.org/announcements/plasma/5/5.22.90/ https://www.collabora.com/news-and-blog/news-and-events/generate-mininal-gstreamer-build-tailored-to-your-needs.html https://joshuastrobl.com/2021/09/14/building-an-alternative-ecosystem/ https://spectrum-os.org/ > https://news.ycombinator.com/item?id=28530861 https://blogs.gnome.org/alexm/2021/09/10/cleaning-up-header-bars/ https://www.phoronix.com/scan.php?page=article&item=windows-11-september&num=1 > https://news.ycombinator.com/item?id=28489289 https://eighty-twenty.org/2021/09/09/perf-addr2line-speed-improvement > https://news.ycombinator.com/item?id=28468751 https://frame.work/blog/linux-on-the-framework-laptop > https://news.ycombinator.com/item?id=28380959 http://lkml.iu.edu/hypermail/linux/kernel/2108.3/05470.html > https://news.ycombinator.com/item?id=28355754 https://simpletools.info/doku.php/osinstallation:debian11runit https://lists.lttng.org/pipermail/lttng-dev/2021-August/030046.html > https://news.ycombinator.com/item?id=28053078 https://github.com/vrmiguel/bustd Available memory or bust! https://bootlin.com/blog/how-we-found-that-the-linux-nios2-memset-implementation-had-a-bug/ https://lwn.net/Articles/863071/ Descriptorless Files for Io_uring > https://news.ycombinator.com/item?id=28003697 https://pointieststick.com/2021/07/30/this-week-in-kde-better-hidpi-on-x11/ https://lwn.net/SubscriberLink/864521/d704bdcced0c5c60/ Strict memcpy() bounds checking for the kernel > https://news.ycombinator.com/item?id=28015263 https://mobian-project.org/ > https://news.ycombinator.com/item?id=27965416 https://www.arsouyes.org/en/blog/2021/2021-07-05_Numerisation_DV > https://news.ycombinator.com/item?id=27956874 https://lwn.net/SubscriberLink/864184/06caefb9c8f2bbd5/ https://twitter.com/alyssarzg/status/1419469011734073347 M1 > https://news.ycombinator.com/item?id=27957813 https://www.phoronix.com/scan.php?page=news_item&px=IMA-Target-Measurements-DM https://pointieststick.com/2021/07/23/this-week-in-kde-power-profiles-and-a-more-polished-kickoff/ https://lwn.net/Articles/862018/ Rust for Linux redux > https://news.ycombinator.com/item?id=27939498 https://lwn.net/Articles/863459/ A GPIO driver in Rust > https://news.ycombinator.com/item?id=27886458 https://lore.kernel.org/lkml/CAHk-=whfeq9gyPWK3yao6cCj7LKeU3vQEDGJ3rKDdcaPNVMQzQ@mail.gmail.com/ > https://news.ycombinator.com/item?id=27880657 https://www.cnx-software.com/2021/07/18/linux-5-0-esp32-processor/ > https://news.ycombinator.com/item?id=27886168 https://github.com/mupuf/libwsm Wayland Security Modules https://www.phoronix.com/scan.php?page=news_item&px=le9-Linux-Low-RAM > https://news.ycombinator.com/item?id=27844116 https://lore.kernel.org/lkml/[email protected]/ > https://news.ycombinator.com/item?id=27746130 https://01.org/powertop/ Power management > https://news.ycombinator.com/item?id=27727120 https://befinitiv.wordpress.com/2021/06/30/installing-linux-into-a-286-laptop-from-the-year-1989/ > https://news.ycombinator.com/item?id=27705340 
https://fancl20.github.io/contents/00-posts/2021-06-30-io_uring-is-not-only-a-generic-asynchronous-syscall-facility.html > https://news.ycombinator.com/item?id=27684608 https://edw.elementary.io/ https://www.theregister.com/2021/06/27/linux_kernel_5_13_released/ https://lwn.net/SubscriberLink/860262/29e92db3e7272504/ A stable bug fix bites proprietary modules https://wiki.debian.org/Teams/Apt/Spec/AptSign > https://news.ycombinator.com/item?id=27586146 https://www.pixelbeat.org/docs/coreutils-gotchas.html > https://www.reddit.com/r/programming/comments/3uogjv/gnu_coreutils_gotchas/ > https://news.ycombinator.com/item?id=27549863 https://lwn.net/Articles/857599/ Rewriting the GNU Coreutils in Rust > https://lobste.rs/s/m0npll/rewriting_gnu_coreutils_rust https://www.phoronix.com/scan.php?page=news_item&px=Google-Wants-Rust-In-Kernel https://www.phoronix.com/scan.php?page=news_item&px=LVFS-100k-Firmware-Day https://fwupd.org/ https://foundation.kernelci.org/blog/ https://github.com/ossf/wg-securing-critical-projects/blob/main/presentations/The_state_of_the_Linux_kernel_security.pdf > https://news.ycombinator.com/item?id=27513149 https://lore.kernel.org/lkml/[email protected]/ > https://news.ycombinator.com/item?id=27509944 https://blogs.gnome.org/tbernard/2021/06/11/community-power-1/ https://lists.debian.org/debian-devel-announce/2021/06/msg00000.html bullseye https://lwn.net/Articles/857148/ printk() indexing > https://news.ycombinator.com/item?id=27471141 https://lore.kernel.org/lkml/[email protected]/ > https://news.ycombinator.com/item?id=27459675 https://www.phoronix.com/scan.php?page=news_item&px=No-O3-For-Linux-Kernel > https://news.ycombinator.com/item?id=27409140 https://lwn.net/SubscriberLink/858023/1caabaef50d4946b/ Auditing io_uring https://www.collabora.com/news-and-blog/news-and-events/a-libweston-based-compositor-for-automotive-grade-linux.html https://landlock.io/ > https://news.ycombinator.com/item?id=27215563 http://0pointer.net/blog/file-descriptor-limits.html > https://news.ycombinator.com/item?id=27215690 https://github.com/jart/cosmopolitan/releases/tag/1.0 > https://news.ycombinator.com/item?id=27180182 https://alpinelinux.org/conf/ https://gitlab.gnome.org/GNOME/mutter/-/issues/1606 > https://news.ycombinator.com/item?id=27152069 https://lwn.net/SubscriberLink/855226/72737207b5650d33/ > https://news.ycombinator.com/item?id=27137674 https://blog.twitter.com/engineering/en_us/topics/open-source/2021/dropping-cache-didnt-drop-cache.html > https://news.ycombinator.com/item?id=27086209 https://investableuniverse.com/2021/05/05/linux-foundation-agstack-open-source-agriculture-technology/ > https://news.ycombinator.com/item?id=27086701 https://tedium.co/2021/05/07/linux-live-cd-history/ > https://news.ycombinator.com/item?id=27081369 https://www.susecon.com/ https://indico.cern.ch/event/995485/contributions/4256466/attachments/2207964/3736640/hepix21-linuxatcern.pdf CentOS -> https://www.trinitydesktop.org/about.php https://lore.kernel.org/lkml/[email protected]/ new LSM called Landlock, from Mickaël Salaün https://lkml.org/lkml/2021/4/27/1208 > https://news.ycombinator.com/item?id=26968680 https://www.tag1consulting.com/blog/interview-linus-torvalds-linux-and-git > https://news.ycombinator.com/item?id=26969595 https://skarnet.com/projects/service-manager.html > https://news.ycombinator.com/item?id=26955586 https://www.neowin.net/news/linux-bans-university-of-minnesota-for-sending-buggy-patches-in-the-name-of-research/ > https://news.ycombinator.com/item?id=26889677 > > 
https://github.com/QiushiWu/qiushiwu.github.io/blob/main/papers/OpenSourceInsecurity.pdf > > > https://news.ycombinator.com/item?id=26889719 > > > > https://lore.kernel.org/lkml/CAK8KejpUVLxmqp026JY7x5GzHU2YJLPU8SzTZUNXU2OXC70ZQQ@mail.gmail.com/T/#u > > > > > https://news.ycombinator.com/item?id=26929470 https://www.linux.com/featured/in-the-trenches-with-thomas-gleixner-real-time-linux-kernel-patch-set/ https://lwn.net/Articles/851184/ > https://news.ycombinator.com/item?id=26858752 https://www.openwall.com/lists/announce/2021/04/12/1 LKRG, it is a kernel module that performs runtime integrity checking of the Linux kernel and detection of security vulnerability exploits against the kernel https://news.ycombinator.com/item?id=26739220 Why is the Linux community struggling to implement hibernation?
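As a quick aside on the "Print RAM Usage of process including children" link near the top of this list, a minimal sketch of that idea using the third-party psutil package (assumed installed via `pip install psutil`) could look like this:

```python
# Minimal sketch: total RSS of a process tree (parent plus all descendants).
# Assumes the third-party psutil package is installed (pip install psutil).
import sys
import psutil

def tree_rss(pid: int) -> int:
    parent = psutil.Process(pid)
    procs = [parent] + parent.children(recursive=True)
    total = 0
    for p in procs:
        try:
            total += p.memory_info().rss
        except psutil.NoSuchProcess:
            pass  # a child exited while we were iterating
    return total

if __name__ == "__main__":
    pid = int(sys.argv[1])
    print(f"{tree_rss(pid) / 2**20:.1f} MiB")
```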
36.585
205
0.781331
yue_Hant
0.65818
7aa5b72cadcec58ce63aa647b870b4c46ecba8c5
87
md
Markdown
Eaglewarrior_hacktoberfest2021/Readme.md
PravabKar/Machine-learning
d284b4e028a9d26033ce40688169983655b1a899
[ "MIT" ]
1
2021-10-01T17:16:55.000Z
2021-10-01T17:16:55.000Z
Eaglewarrior_hacktoberfest2021/Readme.md
PravabKar/Machine-learning
d284b4e028a9d26033ce40688169983655b1a899
[ "MIT" ]
null
null
null
Eaglewarrior_hacktoberfest2021/Readme.md
PravabKar/Machine-learning
d284b4e028a9d26033ce40688169983655b1a899
[ "MIT" ]
null
null
null
This folder contains all the submissions for Hacktoberfest2021

1) COVID19 predictions
21.75
62
0.850575
eng_Latn
0.990951
7aa5fec5bc06d42231a326a2b67f3d419cca5c5c
1,735
md
Markdown
_posts/2022-05-24-rsync.md
novafacing/novafacing.github.io
0edbe98c1329afb9bad8b82ff1ed280aab77c044
[ "MIT" ]
null
null
null
_posts/2022-05-24-rsync.md
novafacing/novafacing.github.io
0edbe98c1329afb9bad8b82ff1ed280aab77c044
[ "MIT" ]
null
null
null
_posts/2022-05-24-rsync.md
novafacing/novafacing.github.io
0edbe98c1329afb9bad8b82ff1ed280aab77c044
[ "MIT" ]
null
null
null
---
layout: post
title: rsync is slow and hard
categories: [Random]
---

`rsync` is incredible, and you should use it! But, 95% of the time what I need to do with a utility like rsync is to back up my `Downloads` folder to the large HDD on my desktop before I distro-hop (yes, I'm that person). Everything else is backed up to github anyway, because I basically do nothing but code on my computers!

Using `rsync` (or really `rsync -azP`) for this works okay, but I run into another problem: I intentionally bought a pretty slow laptop because it gets better battery life. This is not really an issue unless I am compiling llvm or trying to compress a 200GB `Downloads` folder.

There's a compounding issue: I'm usually doing this over the summer, and since I'm still in school, that means doing it in an airbnb typically owned by a less-than-tech-savvy landlord where the Wifi is not...great.

So what do we do? We don't want to compress over the wire, and we *definitely* don't want to write our own utility... but FTP is fast! And also...it's pretty annoying to use. `put -r` of `sftp` fame doesn't exist on regular old `ftp`, and besides, setting up a server and writing a unit file would take an hour.

What I do is this:

On the server:

```sh
$ python3 -m pip install pyftpdlib
$ python3 -m pyftpdlib --directory=. --port [PORT] --write
```

On the client, I install [`lftp`](https://github.com/lavv17/lftp), the best FTP client I am aware of, and use its "reverse mirror" functionality to recursively copy over my full directory:

```sh
$ lftp -u anonymous,password -p [PORT] [HOSTNAME]
lftp~> mirror -R Downloads
```

That's it! Wait an hour or two, and you are ready to distro-hop to your heart's content.
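P.S. If the one-liner ever isn't flexible enough, pyftpdlib is also easy to drive from a few lines of Python. This is a sketch based on its documented authorizer/handler/server API, so sanity-check it against the version you installed; the port number here is arbitrary:

```python
# Minimal pyftpdlib server, roughly equivalent to:
#   python3 -m pyftpdlib --directory=. --port 2121 --write
# Sketch based on pyftpdlib's documented API; verify against your version.
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
# 'elradfmw' grants read access plus write/delete/rename permissions.
authorizer.add_anonymous(".", perm="elradfmw")

handler = FTPHandler
handler.authorizer = authorizer

server = FTPServer(("0.0.0.0", 2121), handler)
server.serve_forever()
```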
44.487179
121
0.740634
eng_Latn
0.999332
7aa619d34618a891a96d95e2eded9934f17582a1
9,680
md
Markdown
static/english/learningCenter/v2.0/request/requestParameter.md
nishchay/translations
d335f67c1c0638ace868ea3a62e615a0328b3c69
[ "BSD-3-Clause" ]
null
null
null
static/english/learningCenter/v2.0/request/requestParameter.md
nishchay/translations
d335f67c1c0638ace868ea3a62e615a0328b3c69
[ "BSD-3-Clause" ]
null
null
null
static/english/learningCenter/v2.0/request/requestParameter.md
nishchay/translations
d335f67c1c0638ace868ea3a62e615a0328b3c69
[ "BSD-3-Clause" ]
2
2021-01-12T14:04:25.000Z
2021-01-15T04:09:24.000Z
#### Request parameter

Class `Nishchay\Http\Request\Request` is used for fetching GET and POST request parameters. It can also be used to fetch a request file.

##### GET parameter

Using `Request::get` we can fetch a GET parameter. If a GET parameter with the given name does not exist, it returns `false`. This method accepts only one argument, which should be the name of the GET parameter.

```php
$name = Request::get('name');
```

Check whether GET parameter `name` exists or not:

```php
if (Request::get('name') === false) {
    # Do something
}
```

###### Fetch all GET parameters

Don't pass anything to the `get` method and it returns all GET parameters in an array. It returns an empty array if no GET parameters exist.

##### POST parameter

The `post` method works just like the `get` method, but it is used to fetch a POST parameter.

```php
$name = Request::post('name');
```

Check whether POST parameter `name` exists or not:

```php
if (Request::post('name') === false) {
    # Do something
}
```

###### Fetch all POST parameters

Don't pass anything to the `post` method and it returns all POST parameters in an array. It returns an empty array if no POST parameters exist in the request body.

##### URL segment

To fetch a placeholder of the URL, use the `segment` method, which accepts only one argument: the name of the segment. If a segment with the given name does not exist, it returns `false`.

```php
$userId = Request::segment('userId');
```

Check whether segment `userId` exists or not:

```php
if (Request::segment('userId') === false) {
    # Do something
}
```

###### Fetch all segments

Don't pass anything to the `segment` method and it returns all segments from the URL. It returns an empty array if there are no placeholder segments in the URL.

##### Request file

Using the `file` method we can get an uploaded file. It returns an instance of `Nishchay\Http\Request\RequestFile`, which contains details of the uploaded file. If there's no file with the given name, it returns `false`.

The `RequestFile` class has the following methods, using which we can get details of the uploaded file:

| Method      | Description                                                  |
| ----------- | ------------------------------------------------------------ |
| getFileName | Returns name of uploaded file                                 |
| getTempName | Returns path to temp file                                     |
| getType     | Returns type of uploaded file                                 |
| getSize     | Returns size of uploaded file                                 |
| getError    | Returns upload error, if any                                  |
| rename      | To rename uploaded file                                       |
| move        | To move file from temp directory to actual upload directory   |

If there are multiple files with the same name, it returns a list of `RequestFile` instances in an array.

###### Upload file

When the server receives a request with a file, the file is first uploaded to a temp directory. We then have to move it to our upload directory. We can do that with the help of the `move` method. This method accepts only one argument, which should be the location the file needs to be uploaded to.

```php
$file = Request::file('profilePic');
if ($file !== false) {
    $file->move('/Users/{serverUser}/images/profilePics');
}
```

**NOTE:** Do not pass the file name with the location; the `move` method will add the file name (the name returned by the `getFileName` method). We only need to pass the path to the directory where the file should be uploaded.

###### Rename uploaded file

To rename an uploaded file, use the `rename` method. Pass only the new name of the file without any extension. This method reuses the extension of the uploaded file. It's always good to rename a file before moving it to the upload directory, to avoid name conflicts.
```php
$file = Request::file('profilePic');
if ($file !== false) {
    $file->rename('newName')->move('/Users/{serverUser}/images/profilePics');
}
```

##### Fetch server value

Using the `server` method we can fetch a server value. We can pass either the server variable name or an alias of it. The aliases are listed below:

| Alias       | Description |
| ----------- | ----------- |
| SOFTWARE    | Server identification string |
| NAME        | Name of the server host under which the current script is executing. If the script is running on a virtual host, this will be the value defined for that virtual host. |
| IP          | IP address of client |
| PORT        | Port being used on the user's machine to communicate with the web server. |
| SERVER_IP   | IP address of the server under which the current script is executing. |
| SERVER_PORT | Port on the server machine being used by the web server for communication |
| SIGNATURE   | String containing the server version and virtual host name which are added to server-generated pages, if enabled. |
| ADMIN       | The value given to the SERVER_ADMIN (for Apache) directive in the web server configuration file. If the script is running on a virtual host, this will be the value defined for that virtual host. |
| HOST        | Contents of the Host: header from the current request, if there is one. |
| AGENT       | Contents of the User-Agent: header from the current request, if there is one. This is a string denoting the user agent which is accessing the page. |
| ACCEPT      | Contents of the Accept: header from the current request, if there is one. |
| LANGUAGE    | Contents of the Accept-Language: header from the current request, if there is one. |
| ENCODING    | Contents of the Accept-Encoding: header from the current request, if there is one. |
| CONNECTION  | Contents of the Connection: header from the current request, if there is one. Example: `Keep-Alive`. |
| QUERY       | Query string, if any, via which the page was accessed. |
| METHOD      | Which HTTP request method was used to access the page |
| SCHEME      | Request scheme like HTTP, HTTPS |
| URI         | URI which was given in order to access this page |
| SCRIPT      | Contains the current script's path. This is useful for pages which need to point to themselves. |
| PROTOCOL    | Name and revision of the information protocol via which the page was requested |
| SELF        | The filename of the currently executing script, relative to the document root. |

**NOTE** This method returns `FALSE` if a server value with the given alias or variable name is not found.

##### Find if request is AJAX

Using the `isAjax` method we can find out whether the request is AJAX or not. This method returns `TRUE` if the request is AJAX.

##### HTTP request methods

Below are methods which return `TRUE` based on the request's HTTP method:

| Method   | Description |
| -------- | ----------------------------------------------- |
| isPost   | Returns `TRUE` if request HTTP method is POST   |
| isGet    | Returns `TRUE` if request HTTP method is GET    |
| isPut    | Returns `TRUE` if request HTTP method is PUT    |
| isDelete | Returns `TRUE` if request HTTP method is DELETE |
| isPatch  | Returns `TRUE` if request HTTP method is PATCH  |
62.051282
272
0.483368
eng_Latn
0.995885
7aa6425373c53a658503141478de0c65cea89585
865
md
Markdown
packages/zent/src/radio/demos/layout.md
foreverzmy/zent
8eea14d8524ed17e8768d2063183ece3bee3cd77
[ "MIT" ]
null
null
null
packages/zent/src/radio/demos/layout.md
foreverzmy/zent
8eea14d8524ed17e8768d2063183ece3bee3cd77
[ "MIT" ]
null
null
null
packages/zent/src/radio/demos/layout.md
foreverzmy/zent
8eea14d8524ed17e8768d2063183ece3bee3cd77
[ "MIT" ]
null
null
null
---
order: 3
zh-CN:
  title: 布局
en-US:
  title: Layout
---

```js
import { Radio, Layout } from 'zent';

const RadioGroup = Radio.Group;
const { Row, Col } = Layout;

class App extends React.Component {
  state = {
    value: 'A',
  };

  onChange = (e) => {
    this.setState({ value: e.target.value });
  };

  render() {
    return (
      <RadioGroup onChange={this.onChange} value={this.state.value} style={{ width: '100%' }}>
        <Row>
          <Col span={8}><Radio value="A">A</Radio></Col>
          <Col span={8}><Radio value="B">B</Radio></Col>
          <Col span={8}><Radio value="C">C</Radio></Col>
          <Col span={8}><Radio value="D">D</Radio></Col>
          <Col span={8}><Radio value="E">E</Radio></Col>
          <Col span={8}><Radio value="F">F</Radio></Col>
        </Row>
      </RadioGroup>
    );
  }
}

ReactDOM.render(<App />, mountNode);
```
18.804348
94
0.53526
yue_Hant
0.416976
7aa6721e7ae9ce56b360de93b9ba63ff5b32223f
1,768
md
Markdown
ce/field-service/set-up-postal-codes.md
benabbon/dynamics-365-customer-engagement
f560fee79a88c992bf1a7e469e783faf8b50d148
[ "CC-BY-4.0", "MIT" ]
1
2021-07-26T17:43:28.000Z
2021-07-26T17:43:28.000Z
ce/field-service/set-up-postal-codes.md
benabbon/dynamics-365-customer-engagement
f560fee79a88c992bf1a7e469e783faf8b50d148
[ "CC-BY-4.0", "MIT" ]
null
null
null
ce/field-service/set-up-postal-codes.md
benabbon/dynamics-365-customer-engagement
f560fee79a88c992bf1a7e469e783faf8b50d148
[ "CC-BY-4.0", "MIT" ]
1
2021-07-26T08:17:01.000Z
2021-07-26T08:17:01.000Z
---
title: "Set up postal codes (Dynamics 365 Field Service) | MicrosoftDocs"
ms.custom:
- dyn365-fieldservice
ms.date: 09/30/2017
ms.reviewer: krbjoran
ms.service: dynamics-365-customerservice
ms.suite:
ms.technology:
- field-service
ms.tgt_pltfrm:
ms.topic: article
author: FieldServiceDave
ms.assetid: c1cce991-fc21-4c97-afc5-8db822868518
caps.latest.revision: 14
ms.author: daclar
manager: shellyha
search.audienceType:
- admin
- customizer
- enduser
search.app:
- D365CE
- D365FS
---

# Set up postal codes and relate them to service territories (Field Service)

Creating postal code records and relating them to service territories lets an account be automatically assigned to a service territory when the account address is entered. When a user tabs out of the postal code field on the account record form, the system automatically populates the service territory field if it finds a match to the postal code.

Postal codes can be assigned to territories, but it is not necessary for the territories feature to work.

> [!NOTE]
> You can't have the same postal code assigned to multiple territories.

1. From the main menu, click **Field Service** > **Administration**, and then choose **Postal Codes**.

2. On the **Postal Codes** screen, click **+New** in the upper left corner. Use the tooltips to help fill in your information, and then click **Save**.

### See also

[Overview of Dynamics 365 Field Service](../field-service/overview.md)

[Set up territories](../field-service/set-up-territories.md)

[Set up booking rules](../field-service/set-up-booking-rules.md)

[Set up booking statuses](../field-service/set-up-booking-statuses.md)<br>

[User's Guide](../field-service/user-guide.md)
37.617021
350
0.735294
eng_Latn
0.959275
7aa68d26d63422e06db92355963a8edb169490df
1,545
md
Markdown
README.md
mrseanryan/goodreads-console
0a2ad6919cd2d986c162dd8ab54e3fda91d9e692
[ "MIT" ]
null
null
null
README.md
mrseanryan/goodreads-console
0a2ad6919cd2d986c162dd8ab54e3fda91d9e692
[ "MIT" ]
6
2020-09-06T18:58:17.000Z
2020-09-06T19:02:17.000Z
README.md
mrseanryan/goodreads-console
0a2ad6919cd2d986c162dd8ab54e3fda91d9e692
[ "MIT" ]
null
null
null
# goodreads-console README

A simple console for retrieving data from Goodreads.

## Usage

```
grc-show-reviews.py
```

Your reviews are dumped out, in CSV format:

```
# Found 27 book reviews
# title _ gr-id _ link _ body _ read-count _ date-added _ date-updated _ rating
Heroes: Mortals and Monsters, Quests and Adventures (Stephen Fry's Great Mythology, #2) _ 41433634 _ https://www.goodreads.com/review/show/3033048494 _ Readable and cohesive - yet the whimsical humour is at times a little irritating. Seems to lack the gravitas of a conventional translation.<br /><br />Good footnotes, but no image attributions (who painted those lovely paintings?) _ 1 _ 2019-11-01 _ 2019-11-14 _ 3
...
```

A '\_' separator is used, in order to avoid the review text getting split up when imported into _Google Sheets_ or another tool.

## Setup

1. Get a Goodreads API key: https://www.goodreads.com/api/keys

2. Save the key to this file:

```
key.credentials.txt
```

3. Save the secret to this file:

```
secret.credentials.txt
```

4. Save your user id to this file:

```
user-id.credentials.txt
```

tip: to get your user id, go to https://www.goodreads.com/ then click on **My Books**. You will see the user id in the URL.

5. Install Python 3.7.x and pip

- Python 3.7.9 or later
- pip 20.2.2 or later

6. Install dependencies

```
pip3 install -r pip.config
```

## References

### Goodreads API

https://www.goodreads.com/api/index

### Python library

https://github.com/mdzhang/goodreads-api-client-python

## License

License is [MIT](./LICENSE)
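If you want to post-process the dump programmatically rather than in a spreadsheet, here is a minimal Python sketch. It assumes you redirected the script's output to a hypothetical `reviews.txt`, and that the ` _ ` separator never appears inside a field, which is the tool's premise:

```python
# Sketch: load the grc-show-reviews.py dump back into Python records.
# Assumes the output was redirected to reviews.txt (hypothetical name)
# and that ' _ ' never occurs inside a field, per the tool's design.
FIELDS = ["title", "gr-id", "link", "body", "read-count",
          "date-added", "date-updated", "rating"]

def load_reviews(path: str):
    rows = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue  # skip the comment/header lines
            rows.append(dict(zip(FIELDS, line.split(" _ "))))
    return rows

for review in load_reviews("reviews.txt"):
    print(review["title"], review["rating"])
```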
21.458333
416
0.720388
eng_Latn
0.920373
7aa6b41e9e88833c5747c003d45daeea3d139022
40,755
md
Markdown
docs/reference/tfs-ps-sync/assign-permissions-support-tfs-project-server-integration.md
captainnumerica/vsts-docs
dae407a39e13f933ef953ce6a957abfaa5ca3c04
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/reference/tfs-ps-sync/assign-permissions-support-tfs-project-server-integration.md
captainnumerica/vsts-docs
dae407a39e13f933ef953ce6a957abfaa5ca3c04
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/reference/tfs-ps-sync/assign-permissions-support-tfs-project-server-integration.md
captainnumerica/vsts-docs
dae407a39e13f933ef953ce6a957abfaa5ca3c04
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Assign permissions to support TFS-Project Server integration titleSuffix: TFS description: Assign permissions to support configuration and working with Team Foundation Server & Project Server data synchronization ms.prod: devops ms.technology: devops-agile ms.assetid: d71eb502-00d0-4904-ac79-23195a707dc9 ms.manager: jillfra ms.author: kaelli author: KathrynEE ms.topic: conceptual ms.date: 03/16/2017 --- # Assign permissions to support TFS-Project Server integration [!INCLUDE [temp](../../_shared/tfs-ps-sync-header.md)] <a name="Top"></a> Assigning permissions is the first step in configuring Team Foundation Server and Project Server to support data synchronization. You must grant permissions to several accounts&mdash;administrators, service accounts, and team members. You must also make sure that specific service accounts have access as a Shared Services Provider (SSP) for the server that hosts SharePoint Products for Project Server. You should grant permissions after you have installed Team Foundation Server Extensions for Project Server Integration. For more information, see [System and setup requirements](system-and-setup-requirements.md). ## Before you begin Before you begin, you'll want to know which Project Web Access or Project Web App (PWA) instances and TFS project collections will participate in data synchronization. You'll also want to have answers to the following questions. ### <a name="assign_perm"></a> Do you have all the permissions you need to assign permissions? Make sure you belong to the following groups: - **Team Foundation Administrators** group, required to grant TFS permissions. You must also have access to the **Team Foundation Administration Console**. [Add accounts to administer TFS](../../organizations/security/set-project-collection-level-permissions.md). - **Administrator for Project Web App** for each instance of PWA, required to grant Project Server permissions. You must also have access to Project Server through PWA. - **Administrators** security group for the SQL Server databases for Project Server, required to grant permissions to the PWA Reporting and Publishing databases. - **Farm Administrators** group, the administrators group for the Web application that supports Project Server, or the **SharePoint Administration** group, required to grant SSP permissions. Group membership will depend on the security architecture of your deployment. - Administrator on the local computer, required to use `stsadm.exe`. ### <a name="auth_mode"></a> Is the authentication mode set correctly for your version of Project Server? - **For Project Server 2010**: The SharePoint web application for the instance of PWA must be set to **Classic Mode Authentication**. Classic Mode Authentication uses Windows authentication. User accounts are treated by SharePoint Server 2010 as Active Directory Domain Services (AD DS) accounts. You will not be able to register the PWA if its authentication is set to Claims Based Authentication. If you're not sure which authentication mode is set, or you need to switch authentication modes, [jump to this section](#auth2010). - **For Project Server 2013**: Two permissions are supported: SharePoint Permission mode and Project Permission mode. Both these modes use Claims Based authorization. The permissions that you need to assign differ, depending on the permission mode that is set. SharePoint permissions mode creates SharePoint groups that directly correspond to the default security groups found in Project Server permission mode. 
These groups are used to grant users varying levels of access to projects and Project Server functionality. SharePoint permission mode is new for Project Server 2013. New Project Web App instances use the SharePoint permission mode by default. In an on-premises installation, the mode can be changed for a given instance of Project Web App by using the **Set-SPProjectPermissionModeWindows** PowerShell cmdlet. Project Server permission mode provides a set of customizable security groups and other functionality that is distinct from SharePoint groups. This security platform operates independent from the SharePoint permissions in the farm and allows you to fine tune the permission levels for Project Web App users. This is the same permission mode that was available in Project Server 2010. For a comparison of features supported in each security mode, see [Plan user access in Project Server 2013](https://technet.microsoft.com/library/fp161361\(v=office.15\).aspx). If you're not sure which Permission mode is set, or you need to switch Permission modes, [jump to this section](#perm2013). ### <a name="windows_groups"></a> Have you created Windows groups to effectively manage user accounts? To minimize manually adding users to TFS and Project Server, create Windows or Active Directory groups. You can then add these groups to TFS groups, Project Server, and SharePoint sites which have pre-defined permissions. Also, you can synchronize resources with Active Directory across multiple domains and forests. For more information, see [Manage security group synchronization with Active Directory in Project Server 2013](https://technet.microsoft.com/library/gg750243.aspx). ## <a name="accounts"></a> 1. Identify all the service and user accounts that you need to assign permissions to Identify the service accounts, user accounts, or Active Directory groups that have been configured and will need access to the resources that support data synchronization between TFS and Project Server. ### <a name="service_accounts"></a> Service accounts Identify the following service accounts: - **Service account for TFS** [Open the Team Foundation Administration console](/azure/devops/server/command-line/open-admin-console). If a Network Service account is used, [change it to a domain account](/azure/devops/server/admin/change-service-account-password). - **Service account for the Project Server Event Handler** On the machine where Project Server is installed, open **Computer>Manage Services** and find **Microsoft Project Server Events Service**. - **Service account(s) that run the Project server web application pool(s)** There might be more than one service account, depending on the number of PWA instances that will participate in TFS data synchronization. You need to identify both the SharePoint appPool hosting PWA and the PSI service appPool. A GUID appPool name could be associated with the PSI service appPool. 1. Open SharePoint Central Site Administration, Application Management, Manage Service Application, Project Server Application. Find the SharePoint site that hosts the PWA instance. Make a note of the number. It might be under one or more ports, for example, SharePoint 80, or SharePoint web app. 2. Open IIS manager, expand sites, and find the SharePoint websites that correspond to the PWA that you identified. - **For Project Server 2010**: Open Advanced settings for the application Pool and you'll find the account identity for the AppPool. 
- **For Project Server 2013**: Expand SharePoint web services and expand each GUID until you find the one that contains project PSI service. In Advanced settings, identify the Application Pool, which is a GUID pool name. ![Find GUID of PSI app pools](_img/alm_iis_findapppoolguid.png "ALM_IIS_FindAppPoolGUID") Under IIS, AppPools, find the account used to run this GUID application pool. ![Find service accounts of PSI app pools](_img/alm_iis_findapppoolguid_2.png "ALM_IIS_FindAppPoolGUID_2") ### User accounts Identify the following user accounts or groups: - User account(s) who will run the `TFSProjectServer registerPWA` command - User account(s) who will map components to support TFS-Project Server integration, but not register PWAs - Users of Project Professional - Users assigned as project resources or have TFS work items assigned to them These users submit status updates that flow into the status queue for the project manager Depending on the role, you grant permissions to each PWA instance that participates in data synchronization to the SharePoint server, to the enterprise resource pool, and to TFS. ## <a name="grant_pwa_permissions"></a> 2. Grant permissions to access each PWA instance Do the following tasks, based on the version and permission mode used in your deployment. You must add accounts for each PWA instance that you will register and map to a project. |Task|Set for these configuration:| |----------|----------------------------------| |[2-1. Grant Global permissions to the TFS Service account](#global)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")| |[2-2. Grant Category permissions to the TFS Service account](#category)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")| |[2-3. Add accounts to a PWA security group](#pwa):<br /><br /> - TFS Service account<br />- Service account(s) for the Project Server web application pool<br />- User accounts that configure the integration<br />- Accounts of users of Project Professional: **Project Manager** or **Portfolio Managers**<br />- User accounts assigned as resources in the project plan: **Team Members**|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")| |[2-4. Add accounts to a PWA security group (SharePoint mode)](#pwa_sp)<br /><br /> - TFS Service account<br />- Service account(s) for the Project Server web application pool<br />- Service account for the Project Server Event Handler, add to Administrators for PWA<br />- User accounts that configure the integration<br />- Accounts of users of Project Professional: **Project Manager** or **Portfolio Managers**<br />- User accounts assigned as resources in the project plan: **Team Members**|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")| |[2-5. 
Add user accounts to the Active Directory Enterprise Resource Pool](#pool)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode") ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")| ### <a name="global"></a> 2-1 Grant Global permissions **Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") and ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode") 1. From the PWA Settings page, open Manage Users, and then New User. 2. Add the TFS service account. 3. Type the required information in each field. Note the following: 1. Clear the check box for **User can be assigned as a resource** because the account is a service account. 2. For **User Authentication**, type the name of the service account for TFS. 3. Assign the following **Global** permissions: - Admin: Manage Enterprise Custom Fields, Manage Server Events, Manage Site Services, and Manage Users and Groups. - General: Log On, New Task Assignment, and Reassign Task. - Project: Build Team on New Project. - Views: View Approvals, View Project Center, View Resource Center, and View Task Center. 4. Save your changes. ### <a name="category"></a> 2-2 Grant Category permissions **Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") and ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode") 1. From the home page for PWA, in the Quick Launch area, choose **Server Settings**. 2. Next, choose **Manage Categories** and then **New Category**. 3. Type a name for the service account category, for example, type **Servicing Account**. 4. Under **Available Users**, choose the name of the service account for Team Foundation Server, and then choose **Add**. ![Create TFS Service account category](_img/alm_pwa_tfsserviceaccntcategory.png "ALM_PWA_TFSServiceAccntCategory") 5. Under Projects, choose **All current and future projects in Project Server database**, and then click **Save**. 6. Add the TFS service account and select the checkboxes for these **Category** permissions: - Project: Open Project and View Project Site - Resource: View Enterprise Resource Data ![Category permissions for TFS service account](_img/alm_tfsps_pwa_categorypermissions.png "ALM_TFSPS_PWA_CategoryPermissions") ### <a name="pwa"></a> 2-3 Add accounts to a PWA security group **Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") and ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode") 1. From the PWA Settings page, open Manage Users, New User, and then type the required information in each field: - Clear the check box for **User can be assigned as a resource** if the account is a service account. - For **User Authentication**, type the account name of the user or service account for TFS. - Clear the check box for **Resource can be leveled** if the account is an administrator or a service account. 2. For **Security Groups**, add the account or group to one of the default groups: 1. **Administrators**: TFS service account and the accounts of users who configure the integration, ones who register or unregister PWAs. 2. **Project Managers**: users who work with Project Professional and PWA. 3. 
**Team Members**: users who are assigned as a resource and who are assigned to TFS work items. 3. If you have customized Category permissions, verify that team members have the following Security Categories: **Create New Task or Assignment**, **Create Object Links**, **Open Project**, **View Project Site**, and **View Project Schedule in Project Web App**(Project Server 2010). ![Security categories, My Projects for team members](_img/alm_pwa_addcategoriesteammembers.png "ALM_PWA_AddCategoriesTeamMembers") For Project Server 2013, Permission mode, select: Open Project, View Project Site, and View Project Schedule in Project Web App. To modify the category permissions for a selected user in a category, select the category in the **Selected Categories** list, and then select **Allow** for the permissions that you want to allow. 4. Save your changes. For more information, see [Add a user account in Project Server 2010](http://go.microsoft.com/fwlink/?LinkId=207279) or [Plan user access in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262117).. ### <a name="pwa_sp"></a> 2-4 Add accounts to a PWA security group (SharePoint mode) **Required for:** ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode") 1. From the PWA home page, open **Site settings** from the gear icon. ![Open site settings for PWA &#40;PS 2013&#41;](_img/alm_tfsps_pwasitesettings.png "ALM_TFSPS_PWASiteSettings") 2. Open Site Collection Administrators and add the TFS service account. 3. Open **People and groups**. ![Open People and Groups for PWA &#40;PS 2013&#41;](_img/alm_tfsps_pwapeopleandgroups.png "ALM_TFSPS_PWAPeopleAndGroups") 4. Choose the group to which you want to add accounts. ![Choose the group in PWA to add accounts &#40;PS 2013&#41;](_img/alm_tfsps_pwapandg_team.png "ALM_TFSPS_PWAPAndG_Team") 1. **Team Members for Project Web App**: accounts assigned as resources in the project plan or to the Assigned To field for a work item. Or, add the Active Directory group used to manage these resources. 2. **Administrators for Project Web App**: the service accounts for Team Foundation Server, the Project Server web application pool, and Project Server Event Handler. Also, add the accounts of users who configure the integration by running the **TfsAdmin ProjectServer RegisterPWA/UnRegisterPWA** commands 3. **PWA Site Collection Administrators** : the accounts of users who configure the integration by running the **TfsAdmin ProjectServer RegisterPWA/UnRegisterPWA** commands 4. **Project Managers for Project Web App**: accounts of users of Project Professional. > [!TIP] > To view all the default groups, choose **More**. To view permissions assigned to each group, choose **Settings, View Group Permissions**. To learn more, see [Plan user access in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262117). 5. On the group page, choose **New, Add users**. 6. Type the name of each account or Active Directory group to add to the selected group. ![Add accounts to a group for PWA &#40;PS 2013&#41;](_img/alm_tfsps_pwa_addaccount.png "ALM_TFSPS_PWA_AddAccount") 7. Choose **Share**. ### <a name="pool"></a> 2-5 Add user accounts to the Active Directory Enterprise Resource Pool **Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode"), ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode"), and ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode") 1. 
From the PWA settings page, under Operational policies, choose Active Directory resource pool synchronization. ![Open Active Directory Resource Pool Sync](_img/alm_pwa_setactivedirectresouce.png "ALM_PWA_SetActiveDirectResouce") 2. Add the Active Directory group of TFS team members to the enterprise resource pool. ![Active Directory Enterprise Resource Pool](_img/alm_pwa_activatesyncgroup.png "ALM_PWA_ActivateSyncGroup") ## <a name="grant_sharepoint_permissions"></a> 3. Grant SharePoint Server permissions Grant the specified permissions using SharePoint Central Administration. Or, you can use Windows PowerShell. |Task|Set for these configurations:| |----------|-----------------------------------| |[3-1. Grant Full Control Connect permissions to start the Project Server Service Application](#full_control)<br /><br /> - TFS service account<br />- Service account for the Project Server Event Handler|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")| |[3-2. Add TFS service account to the Site Collection Administrators for the SharePoint site](#site_collection)|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")| ### <a name="full_control"></a> 3-1 Grant Full Control Connect permissions to start the Project Server Service Application **Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode") and ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode") 1. On to the SharePoint server for Project Server, open **SharePoint Central Administration**, and under **Application Management**, choose **Manage service applications**. ![Choose Manage service applications](_img/alm_sp_manageserviceapps.png "ALM_SP_ManageServiceApps") 2. Highlight the row for **Project Server Service Application** by clicking within the row but not the name of the application. In the ribbon, choose **Permissions**. ![Select permissions](_img/alm_alm_sp_msa_selectpermissions.png "ALM_ALM_SP_MSA_SelectPermissions") 3. Type the name of the service account for TFS, and then choose **Add**. 4. Make sure that the name of the newly added service account is highlighted, and then select the **Full Control** check box. Choose **OK**. ![Connection permissions full control](_img/alm_sp_msa_fullcontrol.png "ALM_SP_MSA_FullControl") 5. Repeat steps 3 and 4, this time add the service account for Service account for the Project Server Event Handler. If there is more than one service account, make sure you add it. For more information, see [Restrict or enable access to a service application](https://technet.microsoft.com/library/ff463596.aspx). ### <a name="site_collection"></a> 3-2. Add TFS service account to the Site Collection Administrators group **Required for:** ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode") 1. On to the SharePoint server for Project Server, open **SharePoint 2013 Central Administration**, and choose **Site settings** from the gear icon. ![Open SharePoint Site Settings for PS 2013](_img/alm_tfsps_sitesettings.png "ALM_TFSPS_SiteSettings") 2. Choose **Site collection administrators**. ![Open Site Collection Administrators for PS 2013](_img/alm_tfsps_sitecollectionadmin.png "ALM_TFSPS_SiteCollectionAdmin") 3. Type the name of the TFS service account, and choose OK when done. ## <a name="grant_db_permissions"></a> 4. 
Grant Project Server database permissions **Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode"), ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode"), and ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode") Grant permissions to both the service account for TFS and the service account for the Project Server web application pool to update the database or databases for each PWA instance. This step is required for all deployments, both Project Server 2010 and Project Server 2013. 1. On the data-tier server for Project Server, open **SQL Server Management Studio**. 2. In the **Server type** list, select **Database Engine**. 3. In **Server name**, type the name of the server that hosts the databases for Project Server, and then choose **Connect**. > [!NOTE] > If SQL Server is installed on a cluster, type the name of the cluster, not the computer name. If you have specified a named instance, type the server and instance name in the following format: *DatabaseServer\InstanceName*. SQL Server Management Studio opens. 4. Expand **Databases**, right-click or open the context menu for the database for the instance of PWA, and then choose **Properties**: - For Project Server 2010: **PWA_Reporting** or **PWA_Publishing** - For Project Server 2013: **ProjectWebApp** 5. On the **Permissions** page. add the service account for TFS, (required for Project Server 2010 and Project Server 2013, Permission mode). For SQL Server 2008: Choose **Add** to add an account. For SQL Server 2012: Choose **Search** to add an account. ![Add user &#40;SQL Server 2012&#41;](_img/alm_alm_sql_2012_adduser.png "ALM_ALM_SQL_2012_AddUser") 6. Grant these permissions based on the database you've selected: - For Project Server 2010: **PWA_Reporting**: **Alter any Schema**, **Create Table**, **Delete** , **Execute**, **Insert**, **Select**, and **Update**. - For Project Server 2010: **PWA_Publishing**: **Select** - For Project Server 2013: **ProjectWebAppAlter any Schema**, **Create Table**, **Delete** , **Execute**, **Insert**, **Select**, and **Update**. ![Check permissions](_img/alm_sql_grantpermissions.png "ALM_SQL_GrantPermissions") 7. Repeat steps 5 through 6, this time add the service account of the Project Server web application pool. This is required for all deployments. 8. Repeat steps 4 through 7 for each instance of PWA that will participate in data synchronization with TFS. ## <a name="add_tfadmingroup"></a> 5. Add user accounts to Team Foundation Administrators group **Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode"), ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode"), and ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode") 1. On the application-tier server, [Open the Team Foundation Administration Console](/azure/devops/server/command-line/open-admin-console), and open **Group Membership**. ![Application tier, choose Group Membership](_img/alm_tac_groupmembership.png "ALM_TAC_GroupMembership") 2. Open **Team Foundation Administrators**. 3. Choose Windows User or Group and then choose Add. ![Add Windows account](_img/alm_tac_addwindowsaccount.png "ALM_TAC_AddWindowsAccount") 4. Enter the name of the accounts of users who configure the integration by running the **TfsAdmin ProjectServer RegisterPWA/UnRegisterPWA** commands. 
    ![Check name](_img/alm_tac_checkname.png "ALM_TAC_CheckName")

## <a name="twa_apsi"></a> 6. Grant Administer Project Server integration permissions

**Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode"), ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode"), and ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")

Accounts of users who configure the TFS-Project Server integration require the **Administer Project Server integration** permission set to Allow. Set this for each project collection that you map to a PWA. From the Security page for the project collection, open the permissions for a user account or a Windows group that you've added to TFS for administering Project Server integration. Set the permissions for Administer Project Server integration to Allow.

![Set Administer Project Server Integration perm](_img/alm_tfsps_twa_setadminpsinteg.png "ALM_TFSPS_TWA_SetAdminPSInteg")

## <a name="add_twa"></a> 7. Add accounts to Team Foundation groups

**Required for:** ![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode"), ![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode"), and ![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")

Accounts of users who work in Project Professional or TFS require permissions to view or contribute to TFS. From the TFS web portal administration Security page for the project, you can add accounts to either the project collection or each project. Add accounts or Active Directory groups to the appropriate roles.

![Choose the project group and add members](_img/addausertoateamprojectgroup.png "Addausertoateamprojectgroup")

Verify that user accounts or groups have been added to the following TFS groups for each project that will participate in data synchronization:

- **Contributor** role: Team members who work in a TFS project that is integrated with Project Server. This includes all user accounts assigned as resources in the project plan or to the Assigned To field for a work item. These users submit status updates that flow into the status queue for the project manager.
- **Reader** role: Users who modify enterprise project plans that are mapped to a project.

For more info, see [Add users to projects](../../organizations/security/add-users-team-project.md).

## Permission checklist

Use the following checklist to verify that all permissions have been set according to your version and authentication mode. Remember that permissions must be granted to accounts for all PWA instances, projects, and project collections that will participate in data synchronization between TFS and Project Server. If you customize a role or security categories for a role, you might inadvertently remove required permissions.
|Account|Permissions|Project Server 2010|Project Server 2013 (Permission mode)|Project Server 2013 (SharePoint mode)|Application|
|-------------|-----------------|-------------------------|---------------------------------------------|---------------------------------------------|-----------------|
|Service Account for TFS|[Global](#global) and [Category](#category) permissions|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")||PWA|
||[Administrators for Project Web App group](#pwa)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|PWA|
||[Site Collection Administrators group](#site_collection)|||![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|SharePoint Central Administration|
||[Connect permissions to the Project Server Service Application (Full Control)](#full_control)||![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")||SharePoint Central Administration|
||[PWA_Reporting and PWA_Publishing databases](#grant_db_permissions)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|||SQL Server Management Studio|
||[ProjectWebApp database](#grant_db_permissions)||![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|SQL Server Management Studio|
|Service account for the Project Server web application pool (Note 1)|[Administrators for PWA group](#pwa)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|PWA|
||[PWA_Reporting and PWA_Publishing databases](#grant_db_permissions)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|||SQL Server Management Studio|
||[ProjectWebApp database](#grant_db_permissions)||![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|SQL Server Management Studio|
|Service account for the Project Server Event Handler|[Connect permissions to the Project Server Service Application (Full Control)](#full_control)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")||SharePoint Central Administration|
||[Administrators for PWA group](#pwa)||![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|PWA|
|User accounts who will configure the integration and run the **TfsAdmin ProjectServer RegisterPWA** command|[Administrators for PWA group](#pwa)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|PWA|
||[Site Collection Administrators group](#site_collection)|||![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|SharePoint Central Administration|
||[Team Foundation Administrators group](#add_tfadmingroup)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|Team Foundation Administration Console|
||[Administer Project Server integration](#twa_apsi)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|TFS web portal|
|User accounts who will map components to support TFS-Project Server integration, but not register PWAs|[Administer Project Server integration](#twa_apsi)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|TFS web portal|
|Users of Project Professional|[Project Manager group for each PWA instance](#pwa)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|PWA|
||[TFS Readers group](#add_twa)||![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|TFS web portal|
|Users assigned as project resources or have TFS work items assigned to them|[Team Members for the PWA App group](#pwa)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|PWA|
||[Plan groups, categories, and RBS](https://msdn.microsoft.com/library/cc197354.aspx) (Note 2)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")||PWA|
||[Enterprise resource pool and project resource pool for the project plan](#pool)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|PWA|
||[TFS Contributors group](#add_twa)|![Project Server 2010 Classic Mode](_img/alm_tfs-ps_classicmode.png "ALM_TFS-PS_ClassicMode")|![Project Server 2013 Permission Mode](_img/alm_tfs-ps_permmode.png "ALM_TFS-PS_PermMode")|![Project Server 2013 SharePoint Mode](_img/alm_tfs-ps_spmode.png "ALM_TFS-PS_SPMode")|TFS web portal|

**Notes:**

1. Some deployments might have more than one service account for the Project Server Web Application Pool.
See [Service accounts](#service_accounts) to determine the service accounts for these application pools.
2. The Security Categories assigned to Team Members by default are sufficient; however, if these categories have been customized, then some permissions might have been removed. The following categories are required: **Create New Task or Assignment**, **Create Object Links**, **Open Project**, **View Project Site**, and **View Project Schedule in Project Web App** (Project Server 2010), and **Open Project**, **View Project Site**, and **View Project Schedule in Project Web App** (Project Server 2013, Project permission mode).

## Q & A

<a name="auth2010"></a>

### Q: How do I determine or change the Authentication mode in SharePoint 2010?

**A:** From the SharePoint 2010 Central Administration site, open **Manage web applications** from the Application Management section, and then open the PWA application. Verify that Classic Mode Authentication is selected.

![PWA 2010 Authentication](_img/alm_tfsps_pwa_2010_authentication.png "ALM_TFSPS_PWA_2010_Authentication")

If it isn't, you'll need to [create a new PWA instance that uses Windows-Classic authentication](https://technet.microsoft.com/library/gg276326.aspx).

<a name="perm2013"></a>

### Q: How do I determine the Permission mode in SharePoint 2013?

**A:** From the PWA home page, use the gear icon to open **PWA settings**.

![PWA page, select PWA settings](_img/alm_tfsps_pwa_pwasettings.png "ALM_TFSPS_PWA_PWASettings")

If SharePoint Permissions mode is set, you'll see this page:

![PWA Settings when SharePoint Permission mode](_img/alm_tfsps_pwa_sp_settings.png "ALM_TFSPS_PWA_SP_Settings")

If Project Permissions mode is set, you'll see this page, which includes a section titled **Security**. You'll also see additional links:

![PWA Settings when Project Permission mode](_img/alm_tfsps_pwa_pm_settings.png "ALM_TFSPS_PWA_PM_Settings")

### Q: How do I switch permission modes in Project Server 2013?

**A:** By default, PWA apps are created using SharePoint permission mode. If you switch from SharePoint permission mode to classic Project Server permission mode, you have to manually configure your security permissions structure in Project Server 2013. Switching between SharePoint permission mode and Project Server permission mode deletes all security-related settings. To switch permission mode, see [Set-SPProjectPermissionMode](https://technet.microsoft.com/library/jj219486.aspx).

### Q: What other resources are available?
**A**: You might find answers to additional questions from the following resources:

|Project Server 2010|Microsoft Project Server 2013|
|-------------------------|-----------------------------------|
|- [Manage users in Project Server 2010](http://go.microsoft.com/fwlink/?LinkId=207275)<br />- [Plan for administrative and service accounts (Project Server 2010)](http://go.microsoft.com/fwlink/?LinkId=207273)<br />- [Plan groups, categories, and RBS in Project Server 2010](https://technet.microsoft.com/library/cc197354.aspx)<br />- [Manage security groups in Project Server 2010](http://go.microsoft.com/fwlink/?LinkId=207274)<br />- [Project Server 2010 global permissions](http://go.microsoft.com/fwlink/?LinkId=207276)<br />- [Project Server 2010 default group permissions](http://go.microsoft.com/fwlink/?LinkId=207277)<br />- [Add resources to the enterprise resource pool](http://go.microsoft.com/fwlink/?LinkId=203356)<br />- [Active Directory Resource Pool Synchronization (Project Server 2010 settings)](https://technet.microsoft.com/library/gg982985.aspx)|- [Plan user access in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262117)<br />- [Plan for administrative and service accounts (Project Server 2013)](http://go.microsoft.com/fwlink/?LinkId=262110)<br />- [Plan groups, categories, and RBS in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262111)<br />- [Manage security groups in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262112)<br />- [Manage security group synchronization with Active Directory in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262113)<br />- [Manage users in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262114)<br />- [Manage Active Directory Resource Pool synchronization in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262115)<br />- [Default group permissions in Project Server 2013](http://go.microsoft.com/fwlink/?LinkId=262116)|

## Related articles

[Configuration quick reference](configuration-quick-reference.md)

[Configure TFS-Project Server integration](configure-tfs-project-server-integration.md)

[Synchronization process overview](synchronization-process-overview.md)

[Administer TFS-Project Server integration](administrate-integration-tfs-project-server.md)
88.597826
1,801
0.752889
eng_Latn
0.878755
7aa72b853625a1725ae6915aa4e698374f5ecfe9
1,113
md
Markdown
docs/cppcx/to-vector-function.md
v-makoud/cpp-docs
b05cff71a8a6a8a4c7bbea1263fd0a711853f921
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/cppcx/to-vector-function.md
v-makoud/cpp-docs
b05cff71a8a6a8a4c7bbea1263fd0a711853f921
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/cppcx/to-vector-function.md
v-makoud/cpp-docs
b05cff71a8a6a8a4c7bbea1263fd0a711853f921
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "to_vector Function | Microsoft Docs" ms.custom: "" ms.date: "12/30/2016" ms.technology: "cpp-windows" ms.topic: "language-reference" f1_keywords: ["collection/Windows::Foundation::Collections::to_vector"] dev_langs: ["C++"] helpviewer_keywords: ["to_vector Function"] ms.assetid: 9cdd5123-7243-4def-a1d3-162e0bf6219e author: "mikeblome" ms.author: "mblome" ms.workload: ["cplusplus"] --- # to_vector Function Returns a `std::vector` whose value is the same as the collection underlying the specified IVector or IVectorView parameter. ## Syntax ``` template <typename T> inline ::std::vector<T> to_vector(IVector<T>^ v); template <typename T> inline ::std::vector<T> to_vector(IVectorView<T>^ v); ``` #### Parameters *T*<br/> The template type parameter. *v*<br/> An IVector or IVectorView interface that provides access to an underlying Vector or VectorView object. ### Return Value ### Requirements **Header:** collection.h **Namespace:** Windows::Foundation::Collections ## See Also [Windows::Foundation::Collections Namespace](../cppcx/windows-foundation-collections-namespace-c-cx.md)
23.680851
124
0.738544
eng_Latn
0.508765
7aa7f9ccdfea9e461d738b560732a0f904ddf8cb
3,644
md
Markdown
README.md
circlespainter/scalabilityconnection
1eab4123e9f45cc6443e02d482b7a18b2813390e
[ "CC0-1.0" ]
null
null
null
README.md
circlespainter/scalabilityconnection
1eab4123e9f45cc6443e02d482b7a18b2813390e
[ "CC0-1.0" ]
null
null
null
README.md
circlespainter/scalabilityconnection
1eab4123e9f45cc6443e02d482b7a18b2813390e
[ "CC0-1.0" ]
null
null
null
The Scalability Connection
--------------------------

Application requirements have changed dramatically in recent years. Only a few years ago a large application had tens of servers, seconds of response time, hours of offline maintenance and gigabytes of data. Today applications are deployed on everything from mobile devices to cloud-based clusters running thousands of multi-core processors. Users expect millisecond response times and 100% uptime. Data is measured in Petabytes. Today's demands are not met by yesterday's software architectures.

New architectures need to be Adaptive and Resilient. We can synthesize those requirements by naming them Scalable Systems. Scalable Systems are typically loosely coupled, and this makes them easier to develop and amenable to change.

*Scalable Systems are:*

* <a name="Responsive"></a>**Responsive**: The [system](/glossary#System) responds in a timely manner if at all possible. Responsiveness is the cornerstone of usability and utility, but more than that, responsiveness means that problems may be detected quickly and dealt with effectively. Responsive systems focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service. This consistent behaviour in turn simplifies error handling, builds end user confidence, and encourages further interaction.

* <a name="Resilient"></a>**Resilient**: The system stays responsive in the face of [failure](/glossary#Failure). This applies not only to highly available, mission-critical systems: any system that is not resilient will be unresponsive after a failure. Resilience is achieved by [replication](/glossary#Replication), containment, [isolation](/glossary#Isolation) and [delegation](/glossary#Delegation). Failures are contained within each [component](/glossary#Component), isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole. Recovery of each component is delegated to another (external) component, and high availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.

* <a name="Elastic"></a>**Elastic**: The system stays responsive under varying workload. Scalable Systems can react to changes in the input rate by increasing or decreasing the [resources](/glossary#Resource) allocated to service these inputs. This requires designs that ideally have no contention points or central bottlenecks, resulting in the ability to shard or replicate components and distribute inputs among them. Scalable Systems support predictive and timely scaling algorithms by providing relevant live performance measures. They achieve [elasticity](/glossary#Elasticity) in a cost-effective way on commodity hardware and software platforms.

In addition, Scalable Systems can leverage [message-passing](/glossary#Message-Driven) and [actor systems](/glossary#Actor-Systems) to establish boundaries between components that enable loose coupling, isolation, [location transparency](/glossary#Location-Transparency) and the delegation of [errors](/glossary#Failure). Actor systems facilitate load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying [back-pressure](/glossary#Back-Pressure) when necessary.
Location-transparent messaging as a means of communication makes it possible for failure management to work with the same constructs and semantics across a cluster or within a single host.

[Sign the Connection](http://www.scalabilityconnection.org/)
242.933333
828
0.811471
eng_Latn
0.99854
7aa830490fd3d6512f270655da675bd6b689c206
1,582
md
Markdown
README.md
evias/article-snippets
bd11bcf0acd538d0ba259c8b46546041af65fbba
[ "MIT" ]
null
null
null
README.md
evias/article-snippets
bd11bcf0acd538d0ba259c8b46546041af65fbba
[ "MIT" ]
null
null
null
README.md
evias/article-snippets
bd11bcf0acd538d0ba259c8b46546041af65fbba
[ "MIT" ]
null
null
null
# eVias Article Snippets

This repository holds the source code snippets used in written articles published by eVias Services. These snippets vary in programming language.

*The author of this package cannot be held responsible for any loss of money or any malicious use of this package. Please use this package with caution.*

Package licensed under the [MIT](LICENSE) License.

## Disclaimer

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

## Author / Contributors

| Name | Github |
| --- | --- |
| [Grégory Saive](https://evias.be) | [evias](https://github.com/evias) |

## Pot de vin

If you like the initiative, and for the sake of good mood, I recommend you take a few minutes to donate a beer or three [because I like that] by sending some XEM (or whatever Mosaic you think pays me a few beers someday!) to my wallet:

Bitcoin: bc1qslcp7mw3zzldwf9arnltzfnqpnqq0l6smhxyym

NEM: NB72EM6TTSX72O47T3GQFL345AB5WYKIDODKPPYW

## License

This software is open-sourced software licensed under the [MIT license](https://opensource.org/licenses/MIT). You will find a copy of the [MIT license](LICENSE) in the root of this repository.
51.032258
460
0.776865
yue_Hant
0.34272
7aa89fd20797f5109b7ee531bba95af244192b20
42
md
Markdown
README.md
geeztypes/stopwords-am
aae6948620de71c182508d44964ffa00c288b964
[ "MIT" ]
null
null
null
README.md
geeztypes/stopwords-am
aae6948620de71c182508d44964ffa00c288b964
[ "MIT" ]
null
null
null
README.md
geeztypes/stopwords-am
aae6948620de71c182508d44964ffa00c288b964
[ "MIT" ]
1
2021-04-11T06:43:40.000Z
2021-04-11T06:43:40.000Z
# stopwords-am

Amharic stop words for NLP
14
26
0.785714
eng_Latn
0.757003
7aa8a2d51162a84c63adccd3c9de083ccc40ee3c
577
md
Markdown
_posts/2014-02-28-Sugar_pot.md
kouens/kouens.github.io
a953a93683cc31fb2ba7ce2c57f49747bace0bbf
[ "MIT" ]
1
2022-01-27T14:19:26.000Z
2022-01-27T14:19:26.000Z
_posts/2014-02-28-Sugar_pot.md
kouens/kouens.github.io
a953a93683cc31fb2ba7ce2c57f49747bace0bbf
[ "MIT" ]
null
null
null
_posts/2014-02-28-Sugar_pot.md
kouens/kouens.github.io
a953a93683cc31fb2ba7ce2c57f49747bace0bbf
[ "MIT" ]
null
null
null
---
layout: mypost
title: 恋する少女(ドール)と想いのキセキ~Poupee de souhaits~
categories: [Sugar pot]
---

# 恋する少女(ドール)と想いのキセキ~Poupee de souhaits~

![Cover](140228_Sugar_pot.jpg)

> Brand: <a href="http://www.sugarpot-hp.com/" target="_blank">Sugar pot</a>
> Release date: February 28, 2014
> Genre: A romance ADV about doll girls who live only for you, their beloved
> Original art: 月嶋ゆうこ, 成瀬守
> Scenario: 近江谷宥

---

## Download

> File format: [Rip] mdf+mds+manual+wav+cue+log+rr3

- **BaiduDisk**
  - [Link](https://pan.baidu.com/s/169l24AQL3UksiNoOg4-oew)
  - Password: d9i8
  - Extraction password: Kouen (full-width)
- **Save data**
  - [140228_Sugar_pot.zip](140228_Sugar_pot.zip)
16.970588
74
0.677643
yue_Hant
0.344771
7aa9848f526ebf54571b729a332d4699c2a4ff75
14,550
md
Markdown
docs/Grove-OLED_Display_0.96inch.md
SeeedDocument/Seeed-WiKi
0f8c056c408a84c6423d77fef229d6efdd6c9994
[ "BSD-2-Clause" ]
57
2016-09-28T01:19:35.000Z
2022-01-07T13:59:21.000Z
docs/Grove-OLED_Display_0.96inch.md
SeeedDocument/Seeed-WiKi
0f8c056c408a84c6423d77fef229d6efdd6c9994
[ "BSD-2-Clause" ]
16
2017-02-06T15:48:03.000Z
2018-02-28T21:40:10.000Z
docs/Grove-OLED_Display_0.96inch.md
SeeedDocument/Seeed-WiKi
0f8c056c408a84c6423d77fef229d6efdd6c9994
[ "BSD-2-Clause" ]
81
2016-09-06T04:21:06.000Z
2022-03-10T06:32:45.000Z
---
title: Grove - OLED Display 0.96 inch
category: Display
bzurl: https://www.seeedstudio.com/Grove-OLED-Display-0.96%22-p-781.html
oldwikiname: Grove - OLED Display 0.96"
prodimagename: Grove-OLED-0.96.png
surveyurl: https://www.surveymonkey.com/r/Grove_OLED_0_96
sku: 104030008
tags: grove_i2c, io_3v3, io_5v, plat_duino, plat_bbg, plat_pi, plat_wio, plat_linkit
---

![](https://raw.githubusercontent.com/SeeedDocument/Grove_OLED_Display_0.96/master/images/Grove-OLED-0.96.png)

The **Grove - OLED Display 0.96"** module is a monochrome 128×64 dot matrix OLED display module with a Grove 4-pin I2C interface. Compared to LCDs, OLED screens are more competitive and have a number of advantages, such as high brightness, self-emission, high contrast ratio, slim/thin outline, wide viewing angle, wide temperature range, and low power consumption. It has a bigger screen, so it can display more content than the OLED 96×96.

[![](https://raw.githubusercontent.com/SeeedDocument/Seeed-WiKi/master/docs/images/get_one_now.png)](https://www.seeedstudio.com/item_detail.html?p_id=781)

## Features
------------

- Grove compatible interface
- Communication Mode: I2C
- Low power consumption
- Display Color: White
- Wide operating temperature range: -20℃~70℃

!!!warning
    Please note: heavy impact or stress on the OLED will break the screen.

!!!Tip
    For more details about Grove modules, please refer to [Grove System](http://wiki.seeed.cc/Grove_System/)

## Specifications
------------

|Items |Min |Norm |Max |Unit |
|------------------------------------|-----------|---------|-------|--------------|
|Power Voltage (VCC) |3.3 |5.0 |5.5 |V |
|Driver IC |- |SSD1308Z |- |- |
|Display Color |- |White |- |- |
|Dot Matrix |- |128×64 |- |- |
|Panel Size |- |26.7(W)×19.26(H)|- |mm |
|Active Area |- |21.74(W)×11.175(H)|- |mm |
|Dot Pitch |- |0.17(W)×0.175(H)|- |mm |
|Dot Size |- |0.15(W)×0.15(H)|- |mm |
|Operating temperature range |- |-20~70 |- |℃ |

## Platforms Supported
------------

## Getting Started
------------

### With Arduino

#### Connection

The OLED 128*64 uses all the pins of the SSD1308 chip; the default origin point is the top-left corner. You can change the origin point by adjusting the program in order to display your desired patterns. For more details, please refer to [SSD1308_1.0.pdf](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/resource/SSD1308_1.0.pdf) and [LY190-128064.pdf](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/resource/LY190-128064.pdf).

Here we demonstrate how to display "Hello World" on the screen.
First of all, we need to prepare the items below:

| Seeeduino V4 | Grove - OLED Display 0.96inch | Base Shield |
|--------------|-------------|-----------------|
|![enter image description here](https://raw.githubusercontent.com/SeeedDocument/Grove_Light_Sensor/master/images/gs_1.jpg)|![enter image description here](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/images/grove%20oled%200.96_s.jpg)|![enter image description here](https://raw.githubusercontent.com/SeeedDocument/Grove_Light_Sensor/master/images/gs_4.jpg)|
|[Get ONE Now](http://www.seeedstudio.com/Seeeduino-V4.2-p-2517.html)|[Get ONE Now](https://www.seeedstudio.com/Grove-OLED-Display-0.96%26quot%3B-p-781.html)|[Get ONE Now](https://www.seeedstudio.com/Base-Shield-V2-p-1378.html)|

- Plug the Grove OLED Display 128*64 onto the I2C port on the Grove Base Shield, and then plug the Base Shield into the Seeeduino.

#### Software

- Download the [Seeed OLED Display 128*64 library](https://github.com/Seeed-Studio/OLED_Display_128X64/archive/master.zip).
- Please follow the [how to install an Arduino library](http://wiki.seeed.cc/How_to_install_Arduino_Library/) procedure to install the library.
- Open the code directly via the path: **File -> Examples -> OLED_Display_128X64-master -> OLED_Hello_World**.

![](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/images/library%20example.png)

```
#include <Wire.h>
#include <SeeedOLED.h>

void setup()
{
  Wire.begin();
  SeeedOled.init();                    // initialize SEEED OLED display
  SeeedOled.clearDisplay();            // clear the screen and set start position to top left corner
  SeeedOled.setNormalDisplay();        // Set display to normal mode (i.e non-inverse mode)
  SeeedOled.setPageMode();             // Set addressing mode to Page Mode
  SeeedOled.setTextXY(0,0);            // Set the cursor to Xth Page, Yth Column
  SeeedOled.putString("Hello World!"); // Print the String
}

void loop()
{
}
```

- Upload the code.
- We can see "Hello World" on the screen.

### With Beaglebone Green

To begin editing programs that live on BBG, you can use the [Cloud9 IDE](https://c9.io) and refer to the [Beaglebone Green Wiki](http://wiki.seeed.cc/BeagleBone_Green/).

Here are the steps to display "Hello World" on the OLED.

**Step1**: Click the "+" in the top-right to create a new file.
![](https://raw.githubusercontent.com/SeeedDocument/Grove_OLED_Display_0.96/master/images/C9-create-tab.png) ![](https://raw.githubusercontent.com/SeeedDocument/Grove_OLED_Display_0.96/master/images/C9_newfile.jpg) **Step2**:Copy and paste the following code into the new tab ``` python from Adafruit_I2C import Adafruit_I2C import time import math Oled = Adafruit_I2C(0x3c) Command_Mode=0x80 Data_mode=0x40 grayH= 0xF0 grayL= 0x0F Normal_Display_Cmd=0xA4 BasicFont = [[0 for x in xrange(8)] for x in xrange(10)] BasicFont=[[0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00], [0x00,0x00,0x5F,0x00,0x00,0x00,0x00,0x00], [0x00,0x00,0x07,0x00,0x07,0x00,0x00,0x00], [0x00,0x14,0x7F,0x14,0x7F,0x14,0x00,0x00], [0x00,0x24,0x2A,0x7F,0x2A,0x12,0x00,0x00], [0x00,0x23,0x13,0x08,0x64,0x62,0x00,0x00], [0x00,0x36,0x49,0x55,0x22,0x50,0x00,0x00], [0x00,0x00,0x05,0x03,0x00,0x00,0x00,0x00], [0x00,0x1C,0x22,0x41,0x00,0x00,0x00,0x00], [0x00,0x41,0x22,0x1C,0x00,0x00,0x00,0x00], [0x00,0x08,0x2A,0x1C,0x2A,0x08,0x00,0x00], [0x00,0x08,0x08,0x3E,0x08,0x08,0x00,0x00], [0x00,0xA0,0x60,0x00,0x00,0x00,0x00,0x00], [0x00,0x08,0x08,0x08,0x08,0x08,0x00,0x00], [0x00,0x60,0x60,0x00,0x00,0x00,0x00,0x00], [0x00,0x20,0x10,0x08,0x04,0x02,0x00,0x00], [0x00,0x3E,0x51,0x49,0x45,0x3E,0x00,0x00], [0x00,0x00,0x42,0x7F,0x40,0x00,0x00,0x00], [0x00,0x62,0x51,0x49,0x49,0x46,0x00,0x00], [0x00,0x22,0x41,0x49,0x49,0x36,0x00,0x00], [0x00,0x18,0x14,0x12,0x7F,0x10,0x00,0x00], [0x00,0x27,0x45,0x45,0x45,0x39,0x00,0x00], [0x00,0x3C,0x4A,0x49,0x49,0x30,0x00,0x00], [0x00,0x01,0x71,0x09,0x05,0x03,0x00,0x00], [0x00,0x36,0x49,0x49,0x49,0x36,0x00,0x00], [0x00,0x06,0x49,0x49,0x29,0x1E,0x00,0x00], [0x00,0x00,0x36,0x36,0x00,0x00,0x00,0x00], [0x00,0x00,0xAC,0x6C,0x00,0x00,0x00,0x00], [0x00,0x08,0x14,0x22,0x41,0x00,0x00,0x00], [0x00,0x14,0x14,0x14,0x14,0x14,0x00,0x00], [0x00,0x41,0x22,0x14,0x08,0x00,0x00,0x00], [0x00,0x02,0x01,0x51,0x09,0x06,0x00,0x00], [0x00,0x32,0x49,0x79,0x41,0x3E,0x00,0x00], [0x00,0x7E,0x09,0x09,0x09,0x7E,0x00,0x00], [0x00,0x7F,0x49,0x49,0x49,0x36,0x00,0x00], [0x00,0x3E,0x41,0x41,0x41,0x22,0x00,0x00], [0x00,0x7F,0x41,0x41,0x22,0x1C,0x00,0x00], [0x00,0x7F,0x49,0x49,0x49,0x41,0x00,0x00], [0x00,0x7F,0x09,0x09,0x09,0x01,0x00,0x00], [0x00,0x3E,0x41,0x41,0x51,0x72,0x00,0x00], [0x00,0x7F,0x08,0x08,0x08,0x7F,0x00,0x00], [0x00,0x41,0x7F,0x41,0x00,0x00,0x00,0x00], [0x00,0x20,0x40,0x41,0x3F,0x01,0x00,0x00], [0x00,0x7F,0x08,0x14,0x22,0x41,0x00,0x00], [0x00,0x7F,0x40,0x40,0x40,0x40,0x00,0x00], [0x00,0x7F,0x02,0x0C,0x02,0x7F,0x00,0x00], [0x00,0x7F,0x04,0x08,0x10,0x7F,0x00,0x00], [0x00,0x3E,0x41,0x41,0x41,0x3E,0x00,0x00], [0x00,0x7F,0x09,0x09,0x09,0x06,0x00,0x00], [0x00,0x3E,0x41,0x51,0x21,0x5E,0x00,0x00], [0x00,0x7F,0x09,0x19,0x29,0x46,0x00,0x00], [0x00,0x26,0x49,0x49,0x49,0x32,0x00,0x00], [0x00,0x01,0x01,0x7F,0x01,0x01,0x00,0x00], [0x00,0x3F,0x40,0x40,0x40,0x3F,0x00,0x00], [0x00,0x1F,0x20,0x40,0x20,0x1F,0x00,0x00], [0x00,0x3F,0x40,0x38,0x40,0x3F,0x00,0x00], [0x00,0x63,0x14,0x08,0x14,0x63,0x00,0x00], [0x00,0x03,0x04,0x78,0x04,0x03,0x00,0x00], [0x00,0x61,0x51,0x49,0x45,0x43,0x00,0x00], [0x00,0x7F,0x41,0x41,0x00,0x00,0x00,0x00], [0x00,0x02,0x04,0x08,0x10,0x20,0x00,0x00], [0x00,0x41,0x41,0x7F,0x00,0x00,0x00,0x00], [0x00,0x04,0x02,0x01,0x02,0x04,0x00,0x00], [0x00,0x80,0x80,0x80,0x80,0x80,0x00,0x00], [0x00,0x01,0x02,0x04,0x00,0x00,0x00,0x00], [0x00,0x20,0x54,0x54,0x54,0x78,0x00,0x00], [0x00,0x7F,0x48,0x44,0x44,0x38,0x00,0x00], [0x00,0x38,0x44,0x44,0x28,0x00,0x00,0x00], [0x00,0x38,0x44,0x44,0x48,0x7F,0x00,0x00], [0x00,0x38,0x54,0x54,0x54,0x18,0x00,0x00], 
[0x00,0x08,0x7E,0x09,0x02,0x00,0x00,0x00], [0x00,0x18,0xA4,0xA4,0xA4,0x7C,0x00,0x00], [0x00,0x7F,0x08,0x04,0x04,0x78,0x00,0x00], [0x00,0x00,0x7D,0x00,0x00,0x00,0x00,0x00], [0x00,0x80,0x84,0x7D,0x00,0x00,0x00,0x00], [0x00,0x7F,0x10,0x28,0x44,0x00,0x00,0x00], [0x00,0x41,0x7F,0x40,0x00,0x00,0x00,0x00], [0x00,0x7C,0x04,0x18,0x04,0x78,0x00,0x00], [0x00,0x7C,0x08,0x04,0x7C,0x00,0x00,0x00], [0x00,0x38,0x44,0x44,0x38,0x00,0x00,0x00], [0x00,0xFC,0x24,0x24,0x18,0x00,0x00,0x00], [0x00,0x18,0x24,0x24,0xFC,0x00,0x00,0x00], [0x00,0x00,0x7C,0x08,0x04,0x00,0x00,0x00], [0x00,0x48,0x54,0x54,0x24,0x00,0x00,0x00], [0x00,0x04,0x7F,0x44,0x00,0x00,0x00,0x00], [0x00,0x3C,0x40,0x40,0x7C,0x00,0x00,0x00], [0x00,0x1C,0x20,0x40,0x20,0x1C,0x00,0x00], [0x00,0x3C,0x40,0x30,0x40,0x3C,0x00,0x00], [0x00,0x44,0x28,0x10,0x28,0x44,0x00,0x00], [0x00,0x1C,0xA0,0xA0,0x7C,0x00,0x00,0x00], [0x00,0x44,0x64,0x54,0x4C,0x44,0x00,0x00], [0x00,0x08,0x36,0x41,0x00,0x00,0x00,0x00], [0x00,0x00,0x7F,0x00,0x00,0x00,0x00,0x00], [0x00,0x41,0x36,0x08,0x00,0x00,0x00,0x00], [0x00,0x02,0x01,0x01,0x02,0x01,0x00,0x00], [0x00,0x02,0x05,0x05,0x02,0x00,0x00,0x00]] def oled_init(): sendCommand(0xFD) # Unlock OLED driver IC MCU interface from entering command. i.e: Accept commands sendCommand(0x12) sendCommand(0xAE) # Set display off sendCommand(0xA8) # set multiplex ratio sendCommand(0x5F) # 96 sendCommand(0xA1) # set display start line sendCommand(0x00) sendCommand(0xA2) # set display offset sendCommand(0x60) sendCommand(0xA0) # set remap sendCommand(0x46) sendCommand(0xAB) # set vdd internal sendCommand(0x01) sendCommand(0x81) # set contrasr sendCommand(0x53) # 100 nit sendCommand(0xB1) # Set Phase Length sendCommand(0X51) sendCommand(0xB3) # Set Display Clock Divide Ratio/Oscillator Frequency sendCommand(0x01) sendCommand(0xB9) sendCommand(0xBC) # set pre_charge voltage/VCOMH sendCommand(0x08) # (0x08); sendCommand(0xBE) # set VCOMH sendCommand(0X07) # (0x07); sendCommand(0xB6) # Set second pre-charge period sendCommand(0x01) sendCommand(0xD5) # enable second precharge and enternal vsl sendCommand(0X62) # (0x62); sendCommand(0xA4) # Set Normal Display Mode sendCommand(0x2E) # Deactivate Scroll sendCommand(0xAF) # Switch on display time.sleep(0.1) # delay(100); # Row Address sendCommand(0x75) # Set Row Address sendCommand(0x00) # Start 0 sendCommand(0x5f) # End 95 # Column Address sendCommand(0x15) # Set Column Address sendCommand(0x08) # Start from 8th Column of driver IC. This is 0th Column for OLED sendCommand(0x37) # End at (8 + 47)th column. Each Column has 2 pixels(segments) # Init gray level for text. 
Default:Brightest White grayH= 0xF0 grayL= 0x0F def sendCommand(byte): Oled.write8(Command_Mode,byte) def sendData(byte): Oled.write8(Data_mode,byte) def multi_comm(commands): for c in commands: sendCommand(c) def oled_clearDisplay(): for j in range (0,48): for i in range (0,96): sendData(0x00) def oled_setNormalDisplay(): sendCommand(Normal_Display_Cmd) def oled_setVerticalMode(): sendCommand(0xA0) # remap to sendCommand(0x46) # Vertical mode def oled_setTextXY(Row,Column): sendCommand(0x15) # Set Column Address sendCommand(0x08+(Column*4)) # Start Column: Start from 8 sendCommand(0x37) # End Column # Row Address sendCommand(0x75) # Set Row Address sendCommand(0x00+(Row*8)) # Start Row sendCommand(0x07+(Row*8)) # End Row def oled_putChar(C): C_add=ord(C) if C_add<32 or C_add>127: # Ignore non-printable ASCII characters C=' ' C_add=ord(C) for i in range(0,8,2): for j in range(0,8): c=0x00 bit1=((BasicFont[C_add-32][i])>>j)&0x01 bit2=((BasicFont[C_add-32][i+1])>>j)&0x01 if bit1: c=c|grayH else: c=c|0x00 if bit2: c=c|grayL else: c=c|0x00 sendData(c) def oled_putString(String): for i in range(len(String)): oled_putChar(String[i]) if __name__=="__main__": oled_init() oled_setNormalDisplay() oled_setTextXY(0,0) oled_putString("Hello") time.sleep(10) #Oled.write8(Command_Mode,0xFD) #sendCommand(0xFD) print 'hello world' ``` **Step3**: Save the file by clicking the disk icon with with the .py extension. **Step4**: Connect Grove - OLED to Grove I2C socket on BBG. **Step5**: Run the code. We'll find that the Grove - OLED outputs "Hello World". ## Resources ----------- - **[Eagle]** [Grove-OLED128x64](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/resource/OLED%20128x64.zip) - **[PDF]** [Grove-OLED128x64 Schematic](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/resource/OLED%20128x64%20SCH.pdf) - **[PDF]** [Grove-OLED128x64 PCB](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/resource/OLED%20128x64%20PCB.pdf) - **[Library]** [GitHub Library for OLED](https://github.com/Seeed-Studio/OLED_Display_128X64/archive/master.zip) - **[Datasheet]** [Resources of SSD1308_1.0.pdf](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/resource/SSD1308_1.0.pdf) - **[Datasheet]** [Resources of LY190-128064.pdf](https://github.com/SeeedDocument/Grove_OLED_Display_0.96/raw/master/resource/LY190-128064.pdf) - **[Wiki]** [Beaglebone Green Wiki](http://wiki.seeed.cc/BeagleBone_Green/)
39.863014
476
0.687491
eng_Latn
0.198354
7aa997034c522bef7bad5e6b4d37d99fb4d9d414
2,462
md
Markdown
node_modules/simport/README.md
Dark-Red-Apple/NekoSlider
f99268422506bbe67bf4af173247bc969b910278
[ "MIT" ]
null
null
null
node_modules/simport/README.md
Dark-Red-Apple/NekoSlider
f99268422506bbe67bf4af173247bc969b910278
[ "MIT" ]
null
null
null
node_modules/simport/README.md
Dark-Red-Apple/NekoSlider
f99268422506bbe67bf4af173247bc969b910278
[ "MIT" ]
null
null
null
# Simport

[![License][LicenseIMGURL]][LicenseURL] [![NPM version][NPMIMGURL]][NPMURL] [![Dependency Status][DependencyStatusIMGURL]][DependencyStatusURL] [![Build Status][BuildStatusIMGURL]][BuildStatusURL] [![Coverage Status][CoverageIMGURL]][CoverageURL]

Use [dynamic imports](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import#Dynamic_Imports) just like plain old [require](https://nodejs.org/api/esm.html#esm_require).

With simport you can:

- get `require`
- get `__filename` or `__dirname`
- load json
- avoid extensions
- avoid destructuring default
- pass `simport` into functions like [tryCatch](https://github.com/coderaiser/try-to-catch)
- use [absolute path in windows](https://github.com/nodejs/node/issues/31710#issuecomment-587434048)

## Install

`npm i simport`

## API

### createSimport

CommonJS:

```js
const {createSimport} = require('simport');
const simport = createSimport(__filename);
```

ESM:

```js
import {createSimport} from 'simport';
const simport = createSimport(import.meta.url);

// you can import json
await simport('./package.json');
// returns {name: 'simport'}

// you can avoid the .js extension
await simport('./server');

// you can avoid destructuring default
const validate = await simport('./validate');
// same as
const {default: validate2} = await import('./validate.js');
```

### createCommons

```js
import {createCommons} from 'simport';

const {
    __filename,
    __dirname,
    require,
} = createCommons(import.meta.url);

// now you have plain old CommonJS variables
```

## License

MIT

[NPMIMGURL]: https://img.shields.io/npm/v/simport.svg?style=flat
[BuildStatusIMGURL]: https://travis-ci.com/coderaiser/simport.svg?branch=master
[DependencyStatusIMGURL]: https://img.shields.io/david/coderaiser/simport.svg?style=flat
[LicenseIMGURL]: https://img.shields.io/badge/license-MIT-317BF9.svg?style=flat
[NPMURL]: https://npmjs.org/package/simport "npm"
[BuildStatusURL]: https://travis-ci.com/coderaiser/simport "Build Status"
[DependencyStatusURL]: https://david-dm.org/coderaiser/simport "Dependency Status"
[LicenseURL]: https://tldrlegal.com/license/mit-license "MIT License"
[CoverageURL]: https://coveralls.io/github/coderaiser/simport?branch=master
[CoverageIMGURL]: https://coveralls.io/repos/coderaiser/simport/badge.svg?branch=master&service=github
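### Passing `simport` around

Because `simport` is just an async function, you can hand it to helpers such as [try-to-catch](https://github.com/coderaiser/try-to-catch). A minimal sketch, assuming try-to-catch's usual default export and `[error, result]` tuple; the `./config` path is illustrative:

```js
import {createSimport} from 'simport';
// assumption: try-to-catch exposes a default export taking (fn, ...args)
import tryToCatch from 'try-to-catch';

const simport = createSimport(import.meta.url);

// error is set when './config.js' is missing or throws on load
const [error, config] = await tryToCatch(simport, './config');
```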
30.02439
256
0.71446
yue_Hant
0.325155
7aaa5c5f42286fd1ec5ebcf72bd31da8d71b8eab
1,614
md
Markdown
docs/content/migration/helm-chart.md
prakritichauhan07/maesh
c6d4b2faa921c9b50be976adf90b86e2941d454c
[ "Apache-2.0" ]
null
null
null
docs/content/migration/helm-chart.md
prakritichauhan07/maesh
c6d4b2faa921c9b50be976adf90b86e2941d454c
[ "Apache-2.0" ]
null
null
null
docs/content/migration/helm-chart.md
prakritichauhan07/maesh
c6d4b2faa921c9b50be976adf90b86e2941d454c
[ "Apache-2.0" ]
null
null
null
# Migrations

Helm Chart
{: .subtitle }

## v1.x to v2.0

### Image version

Since version `v1.2`, Maesh uses [Traefik](https://github.com/containous/traefik/) as a library and no longer relies on its Docker image. Therefore, the `controller.image` and `mesh.image` options have been removed. You should use the new `image` option as described in the [documentation](../install.md#deploy-helm-chart).

### Log Level

The `controller.logging.debug` and `mesh.logging` options have been removed. You should use the new `controller.logLevel` and `mesh.logLevel` options to configure the logging level for the controller and the proxies.

### SMI Mode

The `smi.enable` option has been deprecated and removed. You should use the new and backward-compatible ACL mode option as described in the [documentation](../install.md#access-control-list).

## v2.0 to v2.1

### Default Mode

The `controller.mesh.defaultMode` option has been deprecated and will be removed in a future major release. You should use the new `defaultMode` option to configure the default traffic mode for Maesh services.

### Prometheus and Grafana services

Prior to version `v2.1`, when the Metrics chart was deployed, the Prometheus and Grafana services were exposed by default through a `NodePort`. For security reasons, those services are no longer exposed by default. To expose them, you should use the new `prometheus.service` and `grafana.service` options; more details are in the corresponding [values.yaml](https://github.com/containous/maesh/blob/e59b861ac91261b950663410a6223a02fc7e2290/helm/chart/maesh/charts/metrics/values.yaml).
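As an illustration, the renamed options above can be set at upgrade time. A hedged sketch; the release name, chart reference, and option values are illustrative, not prescribed:

```bash
# Hypothetical upgrade using the post-migration option names documented above
helm upgrade maesh maesh/maesh \
  --set controller.logLevel=DEBUG \
  --set mesh.logLevel=DEBUG \
  --set defaultMode=http
```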
44.833333
231
0.770756
eng_Latn
0.986953
7aaa9cb56d0b3b048a80f71789df369093a417a3
6,009
md
Markdown
windows/deployment/deploy-windows-cm/create-an-application-to-deploy-with-windows-10-using-configuration-manager.md
nieaton/windows-itpro-docs
9398e8e8aa8a920d25c02d58589507e7a2b1fc04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows/deployment/deploy-windows-cm/create-an-application-to-deploy-with-windows-10-using-configuration-manager.md
nieaton/windows-itpro-docs
9398e8e8aa8a920d25c02d58589507e7a2b1fc04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows/deployment/deploy-windows-cm/create-an-application-to-deploy-with-windows-10-using-configuration-manager.md
nieaton/windows-itpro-docs
9398e8e8aa8a920d25c02d58589507e7a2b1fc04
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Create an app to deploy with Windows 10 using Configuration Manager
description: Microsoft Endpoint Manager supports deploying applications as part of the Windows 10 deployment process.
ms.assetid: 2dfb2f39-1597-4999-b4ec-b063e8a8c90c
ms.reviewer:
manager: dougeby
ms.author: aaroncz
keywords: deployment, task sequence, custom, customize
ms.prod: w10
ms.localizationpriority: medium
ms.mktglfcycl: deploy
ms.sitesec: library
audience: itpro
author: aczechowski
ms.topic: article
---

# Create an application to deploy with Windows 10 using Configuration Manager

**Applies to**

- Windows 10

Microsoft Endpoint Manager supports deploying applications as part of the Windows 10 deployment process. In this section, you create an application in Microsoft Endpoint Manager that you later configure the task sequence to use.

For the purposes of this guide, we will use one server computer: CM01.

- CM01 is a domain member server and Configuration Manager software distribution point. In this guide, CM01 is a standalone primary site server. CM01 is running Windows Server 2019. However, an earlier, supported version of Windows Server can also be used.

>[!NOTE]
>The [reference image](add-a-windows-10-operating-system-image-using-configuration-manager.md) used in this lab already contains some applications, such as Microsoft Office 365 Pro Plus x64. The procedure demonstrated in this article enables you to add some additional custom applications beyond those included in the reference image.

## Example: Create the Adobe Reader application

On **CM01**:

1. Create the **D:\Setup** folder if it does not already exist.
2. Download the Enterprise distribution version of [Adobe Acrobat Reader DC](https://get.adobe.com/reader/enterprise/) (ex: AcroRdrDC2000620034_en_US.exe) to **D:\\Setup\\Adobe** on CM01. The filename will differ depending on the version of Acrobat Reader.
3. Extract the .exe file that you downloaded to an .msi. The source folder will differ depending on where you downloaded the file. See the following example:

    ```powershell
    Set-Location C:\Users\administrator.CONTOSO\Downloads
    .\AcroRdrDC2000620034_en_US.exe -sfx_o"d:\Setup\Adobe\" -sfx_ne
    ```

    >[!NOTE]
    >The extraction process will create the "Adobe" folder.

4. Using File Explorer, copy the **D:\\Setup\\Adobe** folder to the **D:\\Sources\\Software\\Adobe** folder.
5. In the Configuration Manager Console, in the Software Library workspace, expand **Application Management**.
6. Right-click **Applications**, point to **Folder** and then click **Create Folder**. Assign the name **OSD**.
7. Right-click the **OSD** folder, and click **Create Application**.
8. In the Create Application Wizard, on the **General** page, use the following settings:

    * Automatically detect information about this application from installation files
    * Type: Windows Installer (\*.msi file)
    * Location: \\\\CM01\\Sources$\\Software\\Adobe\\AcroRead.msi

    ![The Create Application Wizard.](../images/mdt-06-fig20.png "The Create Application Wizard")

    The Create Application Wizard

9. Click **Next**, and wait while Configuration Manager parses the MSI file.
10. On the **Import Information** page, review the information and then click **Next**.
11. On the **General Information** page, name the application Adobe Acrobat Reader DC - OSD Install, click **Next** twice, and then click **Close**.

    >[!NOTE]
    >Because it is not possible to reference an application deployment type in the task sequence, you should have a single deployment type for applications deployed by the task sequence.
    >
    >If you are deploying applications via both the task sequence and normal application deployment, and you have multiple deployment types, you should have two applications of the same software. In this section, you add the "OSD Install" suffix to applications that are deployed via the task sequence. If you are using packages, you can still reference both the package and the program in the task sequence.

    ![Add the OSD Install suffix to the application name.](../images/mdt-06-fig21.png "Add the OSD Install suffix to the application name")

    Add the "OSD Install" suffix to the application name

12. In the **Applications** node, select the Adobe Acrobat Reader DC - OSD Install application, and click **Properties** on the ribbon bar (this is another way to view properties; you can also right-click the application and select **Properties**).
13. On the **General Information** tab, select the **Allow this application to be installed from the Install Application task sequence action without being deployed** check box, and click **OK**.

Next, see [Add drivers to a Windows 10 deployment with Windows PE using Configuration Manager](add-drivers-to-a-windows-10-deployment-with-windows-pe-using-configuration-manager.md).

## Related topics

[Prepare for Zero Touch Installation of Windows 10 with Configuration Manager](prepare-for-zero-touch-installation-of-windows-10-with-configuration-manager.md)<br>
[Create a custom Windows PE boot image with Configuration Manager](create-a-custom-windows-pe-boot-image-with-configuration-manager.md)<br>
[Add a Windows 10 operating system image using Configuration Manager](add-a-windows-10-operating-system-image-using-configuration-manager.md)<br>
[Add drivers to a Windows 10 deployment with Windows PE using Configuration Manager](add-drivers-to-a-windows-10-deployment-with-windows-pe-using-configuration-manager.md)<br>
[Create a task sequence with Configuration Manager and MDT](./create-a-task-sequence-with-configuration-manager-and-mdt.md)<br>
[Deploy Windows 10 using PXE and Configuration Manager](deploy-windows-10-using-pxe-and-configuration-manager.md)<br>
[Refresh a Windows 7 SP1 client with Windows 10 using Configuration Manager](refresh-a-windows-7-client-with-windows-10-using-configuration-manager.md)<br>
[Replace a Windows 7 SP1 client with Windows 10 using Configuration Manager](replace-a-windows-7-client-with-windows-10-using-configuration-manager.md)<br>
69.068966
572
0.779997
eng_Latn
0.960656
7aaacea1b6950ef693b215855e624b1639d31355
7,746
md
Markdown
includes/virtual-machines-common-sizes-storage.md
cristhianu/azure-docs.es-es
910ba6adc1547b9e94d5ed4cbcbe781921d009b7
[ "CC-BY-4.0", "MIT" ]
2
2019-09-04T06:39:25.000Z
2019-09-04T06:43:40.000Z
includes/virtual-machines-common-sizes-storage.md
cristhianu/azure-docs.es-es
910ba6adc1547b9e94d5ed4cbcbe781921d009b7
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/virtual-machines-common-sizes-storage.md
cristhianu/azure-docs.es-es
910ba6adc1547b9e94d5ed4cbcbe781921d009b7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: include file
description: include file
services: virtual-machines
author: jonbeck7
ms.service: virtual-machines
ms.topic: include
ms.date: 04/17/2019
ms.author: azcspmt;jonbeck;cynthn
ms.custom: include file
ms.openlocfilehash: b98aebfd7bef3edff8e046d7ef1c388ea57afa04
ms.sourcegitcommit: ac1cfe497341429cf62eb934e87f3b5f3c79948e
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/01/2019
ms.locfileid: "67501274"
---

Storage optimized VM sizes offer high disk throughput and IO, and are ideal for big data, SQL and NoSQL databases, data warehousing, and large transactional databases. Examples include Cassandra, MongoDB, Cloudera, and Redis. This article provides information about the number of vCPUs, data disks, and NICs, as well as local storage throughput and network bandwidth for each optimized size.

The Lsv2-series features high-throughput, low-latency, directly mapped local NVMe storage running on the [AMD EPYC&trade; 7551](https://www.amd.com/en/products/epyc-7000-series) processor with an all-core boost of 2.55 GHz and a max boost of 3.0 GHz. The Lsv2-series VMs come in sizes from 8 to 80 vCPUs in a simultaneous multi-threading configuration. There is 8 GiB of memory per vCPU and one 1.92 TB NVMe SSD M.2 device per 8 vCPUs, with up to 19.2 TB (10 x 1.92 TB) available on the L80s v2.

> [!NOTE]
> The Lsv2-series VMs are optimized to use the local disk on the node attached directly to the VM, rather than durable data disks. This allows for greater IOPS and throughput for your workloads. The Lsv2 and Ls series do not support the creation of a local cache to increase the IOPS achievable by durable data disks.
>
> The high throughput and IOPS of the local disk make the Lsv2 and Ls-series VMs ideal for NoSQL stores such as Apache Cassandra and MongoDB, which replicate data across multiple VMs to achieve persistence in the event of the failure of a single VM.
>
> To learn more, see [Optimize performance on the Lsv2-series virtual machines](../articles/virtual-machines/linux/storage-performance.md).

## Lsv2-series

ACU: 150-175

Premium Storage: Supported

Premium Storage caching: Not supported

| Size | vCPU | Memory (GiB) | Temp disk<sup>1</sup> (GiB) | NVMe Disks<sup>2</sup> | NVMe Disk throughput<sup>3</sup> (Read IOPS/MBps) | Max uncached data disk throughput (IOPS/MBps)<sup>4</sup> | Max data disks | Max NICs / Expected network bandwidth (Mbps) |
|---------------|-----------|-------------|--------------------------|----------------|---------------------------------------------------|-------------------------------------------|------------------------------|------------------------------|
| Standard_L8s_v2 | 8 | 64 | 80 | 1 x 1.92 TB | 400,000/2,000 | 8,000/160 | 16 | 2/3,200 |
| Standard_L16s_v2 | 16 | 128 | 160 | 2 x 1.92 TB | 800,000/4,000 | 16,000/320 | 32 | 4/6,400 |
| Standard_L32s_v2 | 32 | 256 | 320 | 4 x 1.92 TB | 1.5M/8,000 | 32,000/640 | 32 | 8/12,800 |
| Standard_L48s_v2 | 48 | 384 | 480 | 6 x 1.92 TB | 2.2M/14,000 | 48,000/960 | 32 | 8/16,000+ |
| Standard_L64s_v2 | 64 | 512 | 640 | 8 x 1.92 TB | 2.9M/16,000 | 64,000/1,280 | 32 | 8/16,000+ |
| Standard_L80s_v2<sup>5</sup> | 80 | 640 | 800 | 10 x 1.92 TB | 3.8M/20,000 | 80,000/1,400 | 32 | 8/16,000+ |

<sup>1</sup> Lsv2-series VMs have a standard SCSI-based temp resource disk for OS paging or swap file use (D: on Windows, /dev/sdb on Linux). This disk provides 80 GiB of storage, 4,000 IOPS, and 80 MBps transfer rate for every 8 vCPUs (for example, Standard_L80s_v2 provides 800 GiB at 40,000 IOPS and 800 MBps). This ensures that the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data will be lost on stop or deallocation.

<sup>2</sup> Local NVMe disks are ephemeral; data will be lost on these disks if you stop or deallocate the VM.

<sup>3</sup> Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Achieving maximum performance requires either the latest WS2019 build or Ubuntu 18.04 or 16.04 from the Azure Marketplace. Write performance varies based on IO size, drive load, and capacity utilization.

<sup>4</sup> Lsv2-series VMs do not provide host cache for the data disk, because it does not benefit Lsv2 workloads. However, Lsv2 VMs can accommodate Azure's ephemeral OS disk option for VMs (up to 30 GiB).

<sup>5</sup> VMs with more than 64 vCPUs require one of these supported guest operating systems:

- Windows Server 2016 or later
- Ubuntu 16.04 LTS or later, with the Azure tuned kernel (4.15 kernel or later)
- SLES 12 SP2 or later
- RHEL or CentOS versions 6.7 through 6.10, with the Microsoft-provided LIS package 4.3.1 (or later) installed
- RHEL or CentOS version 7.3, with the Microsoft-provided LIS package 4.2.1 (or later) installed
- RHEL or CentOS version 7.6 or later
- Oracle Linux with UEK4 or later
- Debian 9 with the backports kernel, Debian 10 or later
- CoreOS with a 4.14 kernel or later

## Size table definitions

- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
- Disk throughput is measured in input/output operations per second (IOPS) and MBps, where MBps = 10^6 bytes/sec.
- If you want to get the best performance for your VMs, you should limit the number of data disks to 2 disks per vCPU.
- **Expected network bandwidth** is the [maximum aggregated bandwidth allocated per VM type](../articles/virtual-network/virtual-machine-network-throughput.md) across all NICs, for all destinations. Upper limits are not guaranteed, but are intended to provide guidance for selecting the right VM type for the intended application. Actual network performance will depend on a variety of factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimizing network throughput for Windows and Linux](../articles/virtual-network/virtual-network-optimize-network-bandwidth.md). To achieve the expected network performance on Linux or Windows, it may be necessary to select a specific version or to optimize your VM. For more information, see [Testing reliably for virtual machine throughput](../articles/virtual-network/virtual-network-bandwidth-testing.md).
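To request one of these sizes when creating a VM, pass the size name shown in the table. A minimal sketch with the Azure CLI; the resource group, VM name, and image are illustrative placeholders:

```azurecli
# Hypothetical example: create an L8s v2 VM (names and image are placeholders)
az vm create \
  --resource-group myResourceGroup \
  --name myLsv2VM \
  --image UbuntuLTS \
  --size Standard_L8s_v2
```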
106.109589
1,095
0.743997
spa_Latn
0.980189
7aab48dbe46dca5b01d1b4f59610571721bb21f6
4,115
md
Markdown
README.md
JetJadeja/Tanzine
772923968a2a929cec24f170c928fa1fc3e797c4
[ "MIT" ]
4
2021-03-10T21:34:59.000Z
2021-04-24T17:43:17.000Z
README.md
pranavbaburaj/tanzine-programming-language
6175ba0191bdabe56c0b77f52d382bfb983a2ef9
[ "MIT" ]
27
2020-07-14T21:12:55.000Z
2020-12-10T22:36:33.000Z
README.md
JetDeveloping/Tanzine
772923968a2a929cec24f170c928fa1fc3e797c4
[ "MIT" ]
1
2021-03-14T14:20:22.000Z
2021-03-14T14:20:22.000Z
# TanzineLang
Tanzine is a new programming language written and interpreted in Python. It is able to utilize Python's standard libraries as well as libraries on the [Python Package Index](https://pypi.org/). Using Tanzine, you are able to achieve relative imports; however, package installation is not yet complete.

## Please join our Discord server for help or ways to get involved: https://discord.gg/4aWwGQ4

### Warning ⚠️
Tanzine is an extremely new language, so there is no installation process. We are currently working on an installer to add the **tanzine** command to your path.

The Tanzine tutorial listed below also does not cover **everything**. The Tanzine core team is working on docs, and you can feel free to help too by opening a PR!

## Basics 📒
Tanzine's syntax is simple to understand. Each statement comes after a definitive, which will tell Tanzine what type of statement you are making. A definitive looks something like this: `@DEFINITIVE@`.

Statements in Tanzine are similar to those in other languages. For example, math is the exact same (besides definitives)! Let's add two numbers! Type: `@MATH@ 5 + 2`. Since we do not display anything, nothing happens. So, let's assign a variable to the output of *5 + 2* and then display it in the console.

To assign variables, we can use the VAR definitive (`@VAR@`) followed by the variable name. So we could do `@VAR@ num` (however, that would display an error). Next we can write an equals symbol and then the statement that we are setting the variable to. So we can type: `@VAR@ num = @MATH@ 5 + 2`. And there you go! We have defined a variable!

Now to print this variable we need to use the `print` function. To run functions, we must use the RUN definitive (`@RUN@`) followed by the function and parameters. The syntax for a function is `(function,arg1,arg2,arg3)`. **Note the lack of spaces between the arguments**! To use variables as arguments, we need to use the `@` symbol followed by the variable name. So, our variable `num` would be `@num` in our function. We can type `@RUN@ (/print,@num)` to print *num* out. This will output `7`!

Full code:
```
@VAR@ num = @MATH@ 5 + 2
@RUN@ (/print,@num)
```
Full output:
```
7
```

**More basics are yet to come!**

## Capabilities
Here is a snippet of code showing Tanzine's capabilities:
```
@FUNC@ <fetchJSON> [@url] {
    @VAR@ request = @RUN@ (requests/get,@url)
    @VAR@ son = @RUN@ (@request.json)
    @RUN@ (/print,@son)
    @RETURN@
}

@VAR@ response = @RUN@ (<fetchJSON>,"http://crows.sh:9000/cosmosis/getChain")
@RUN@ (/print,@response)
```
This code defines a function called **fetchJSON** that takes in a URL. You can use the function to make a GET request to the URL. This code will make a request to `http://crows.sh:9000/cosmosis/getChain` and return the JSON output.

## Quick Start Guide 📋
First, you must start an app using the Tanzine startapp command: `tanzine startapp AppName`. If you do not specify an app name, Tanzine will create one called *TanzineApp*. We can just use the default app name. Type `tanzine startapp`. Next, open your IDE of choice and open our *TanzineApp* directory. There should be a file called **main.tzn**. You can open this file and start making some changes!

### More Information ℹ️
The core code behind Tanzine was written in just 3 hours (we've added more features, though).
**This means that the core Tanzine code isn't pretty, but the development team is working on polishing the codebase, and you can help too by opening a PR or reporting issues!**

## Please join our Discord server for help or ways to get involved: https://discord.gg/4aWwGQ4

The server has a Discord bot (called Tater) allowing you to run Tanzine code directly in Discord, and see the output as a response message from the bot.

- Mention the bot by typing `@Tater` for instructions!
- Whenever an error is not covered by Tanzine (and the parser crashes), the bot **will create an issue in this repository**!

# Dev Team 👨‍💻
The core dev team currently consists of:

[JetDeveloping](https://github.com/JetDeveloping)
[TransmissionsDev](https://github.com/TransmissionsDev)
61.41791
508
0.740705
eng_Latn
0.997514
7aab9b50f640a11ee3cee140f40d1fbb5e4e8581
360
md
Markdown
node_modules/oae-library/README.md
Orodan/Hilary-jitsi-fork
db1ebb7fb9b102335e90435d3e5b9dbf237f8386
[ "ECL-2.0" ]
96
2015-01-01T17:50:26.000Z
2022-01-01T13:38:26.000Z
node_modules/oae-library/README.md
GatechVIP/Hilary
0f5a3a782871a074898c97934f04a3aa0886591a
[ "ECL-2.0" ]
1,077
2015-01-02T13:25:59.000Z
2022-01-14T22:30:35.000Z
node_modules/oae-library/README.md
GatechVIP/Hilary
0f5a3a782871a074898c97934f04a3aa0886591a
[ "ECL-2.0" ]
67
2015-01-01T17:50:30.000Z
2021-08-20T20:51:02.000Z
The `library` module for OAE. A generic module which handles indexing libraries. A "Library" is essentially a sorted list of resource ids to which a visibility mask is applied. You effectively get 3 different sub-lists (private, loggedin, and public), and the authentication and tenant of the user accessing the library determine which version of the Library you get.
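
To make the visibility-mask idea concrete, here is a minimal sketch. This is not the oae-library API, just an illustration of how one rank-sorted list can yield the three sub-lists described above; all names are hypothetical.

```js
// Hypothetical sketch (not the oae-library API): how a visibility mask
// over one sorted list yields the private/loggedin/public sub-lists.
const VISIBILITY_ORDER = { public: 0, loggedin: 1, private: 2 };

// `entries` is a rank-sorted list of { resourceId, visibility } items.
// `viewerLevel` is 'public' for anonymous users, 'loggedin' for
// authenticated users on the same tenant, and 'private' for the owner.
function visibleLibrary(entries, viewerLevel) {
  const max = VISIBILITY_ORDER[viewerLevel];
  return entries
    .filter((e) => VISIBILITY_ORDER[e.visibility] <= max)
    .map((e) => e.resourceId);
}

// Example: an anonymous visitor only sees the public sub-list.
const library = [
  { resourceId: 'c:cam:abc', visibility: 'public' },
  { resourceId: 'c:cam:def', visibility: 'loggedin' },
  { resourceId: 'c:cam:ghi', visibility: 'private' },
];
console.log(visibleLibrary(library, 'public')); // ['c:cam:abc']
```
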
90
278
0.808333
eng_Latn
0.995851
7aac7ddf08b55127b475f33ff682695e2f3cb113
1,211
md
Markdown
biztalk/esb-toolkit/manage-pending-requests-page.md
OPS-E2E-PPE/biztalk-docs.ja-JP
5e8314d59a5aa91e3eb4a20c1bdbc75821170d17
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/esb-toolkit/manage-pending-requests-page.md
OPS-E2E-PPE/biztalk-docs.ja-JP
5e8314d59a5aa91e3eb4a20c1bdbc75821170d17
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/esb-toolkit/manage-pending-requests-page.md
OPS-E2E-PPE/biztalk-docs.ja-JP
5e8314d59a5aa91e3eb4a20c1bdbc75821170d17
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Manage Pending Requests Page | Microsoft Docs
ms.custom: ''
ms.date: 06/08/2017
ms.prod: biztalk-server
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
ms.assetid: f5cdeb6e-71fc-45af-a24d-731c9a459a76
caps.latest.revision: 2
author: MandiOhlinger
ms.author: mandia
manager: anneta
ms.openlocfilehash: 53a22e30d2613181c30806891cd9791af0881431
ms.sourcegitcommit: 381e83d43796a345488d54b3f7413e11d56ad7be
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/07/2019
ms.locfileid: "65261588"
---
# <a name="manage-pending-requests-page"></a>Manage Pending Requests Page
Figure 1 shows the Manage Pending Requests page, which displays the list of pending registration requests and a link for viewing the request history.

![Manage Pending Requests page](../esb-toolkit/media/ch8-managependingrequestspage.gif "Ch8 ManagePendingRequestsPage")

**Figure 1**

**ESB Management Portal Manage Pending Requests page**

The following list describes how to use the features of the ESB Management Portal Manage Pending Requests page:

- Click the **ViewDetails** icon (magnifying glass) to open the [Registry Details page](../esb-toolkit/registry-details-page.md), where you can view, publish, update, or delete the details of a pending request.

- Click the **Approve** icon (check mark) to approve a pending request.

- Click the **Reject** icon (cross mark) to reject a pending request.

- Click the **View Request History** link to open a page that displays information about previous Universal Description, Discovery, and Integration (UDDI) registration requests.
31.051282
132
0.755574
yue_Hant
0.609399
7aaccee2f084385f6852b460d3cc4299488bf4cb
1,273
md
Markdown
README.md
salihcodev/backdoors
9f171d6a374bbdb5d97d95c19c407df56c7b6bfa
[ "MIT" ]
null
null
null
README.md
salihcodev/backdoors
9f171d6a374bbdb5d97d95c19c407df56c7b6bfa
[ "MIT" ]
null
null
null
README.md
salihcodev/backdoors
9f171d6a374bbdb5d97d95c19c407df56c7b6bfa
[ "MIT" ]
null
null
null
<p align="center"> <a href="" alt="alt" width="500" /> </a> </p> <h1 align="center">e-Commercial platform</h1> <p align="center"><a href="https://fullcart.com" /><code>@fullcart</code></a></p> <br> <p align="center"> <!-- learn badge --> <a href="https://lerna.js.org"> <img alt="Maintained with Learn" src="https://img.shields.io/badge/maintained%20with-lerna-cc00ff.svg" /> </a> <!-- github starts --> <img alt="Github Stars" src="https://badgen.net/github/stars/salihcodev/prods-systems" /> </p> --- <br /> ## File Structure ### High level structure ```javascript .../fullcart ├── dockerfiles │ ├── docker-compose.yml │ ├── Dockerfile │ └── Dockerfile.dev │ ├── docs │   ├── server │   │   └── main.doc.md │   └── web │   └── main.doc.md │ ├── package.json ├── packages │   ├── server │   └── web │ ├── lerna.json ├── LICENSE ├── package.json └── yarn.lock ``` --- ## Generals ### Project naming convention - For naming files and directories i like to use **cabab-case** - For naming functions, utilities i like to use **camelCase** is javascript used to be - For naming interfaces, types i like to use **PascalCase** <br> ## Support Feel free to star the repo Follow me on twitter [`@salihcodev`](https://t.me/salihcodev)
17.680556
109
0.613511
eng_Latn
0.404202
7aacd6d75491b596ca51c204f14443875665dd43
1,991
md
Markdown
README.md
hostwithquantum/ansible-openstack-inventory
7d821477f0a8584aa708f53cc3d0222685bb694e
[ "BSD-2-Clause" ]
null
null
null
README.md
hostwithquantum/ansible-openstack-inventory
7d821477f0a8584aa708f53cc3d0222685bb694e
[ "BSD-2-Clause" ]
2
2021-05-31T12:42:19.000Z
2021-06-07T09:49:52.000Z
README.md
hostwithquantum/ansible-openstack-inventory
7d821477f0a8584aa708f53cc3d0222685bb694e
[ "BSD-2-Clause" ]
null
null
null
# ansible-openstack-inventory Inventory script for a dynamic OpenStack-based inventory. This script creates a default group `all` and various child groups which are configured via `config.ini`. We also use multiple networks on instances, therefor a default/access network for Ansible has to be configured. ## Project Status This is not a general purpose inventory for all OpenStack clouds, but instead it's heavily tailored towards what we use and need to bootstrap nodes for our [application hosting service](https://www.planetary-quantum.com). Specifically in regards to Docker Swarm, inventory script returns nodes and adds groups and labels for each node/group: - `docker_swarm_manager` - `docker_swarm_worker` Group membership and (`swarm_`)labels are determined by instance metadata: - `com.planetary-quantum.meta.role` (`manager` (default), worker) - `com.planetary-quantum.meta.label` Because documentation on dynamic inventories is a bit sparse, we decided to release this code to the broader community. And despite Ansible using Python, we wrote this inventory script in Go(lang) as we felt that at this part in the stack, we should run something less brittle. So feel free to use, copy, fork and ad[a,o]pt - and feel free to contribute. Please keep in mind, that the code in this repository has to work for our use-case first. ## Usage We use gophercloud to interface with OpenStack, the usual environment variables are listed in [`.envrc-dist`](.envrc-dist). ``` $ QUANTUM_CUSTOMER=... ./ansible-openstack-inventory ``` ## Testing Parts of the Go code are covered with unit tests. Full integration tests available in [e2e/](e2e/). ## Todo - [x] return correct JSON format of hosts - [x] how to add additional groups - [x] implement `--list` - [x] implement `--host node` - [x] better error handling (instead of `os.Exit`) ## License BSD-2-Clause, Copyright 2021 (and beyond) [Planetary Quantum GmbH](https://www.planetary-quantum.com/service/impressum/)
39.82
277
0.763435
eng_Latn
0.990993
7aad49245ac5e34b7e1509bd28fe88258c208b77
5,423
md
Markdown
azure_arc_k8s_jumpstart/docs/aks_terraform.md
NillsF/azure_arc
74adf592c39d43fd8a63655bb4b2428bcc7b6d68
[ "CC-BY-4.0", "MIT" ]
1
2020-11-24T19:23:13.000Z
2020-11-24T19:23:13.000Z
azure_arc_k8s_jumpstart/docs/aks_terraform.md
NillsF/azure_arc
74adf592c39d43fd8a63655bb4b2428bcc7b6d68
[ "CC-BY-4.0", "MIT" ]
null
null
null
azure_arc_k8s_jumpstart/docs/aks_terraform.md
NillsF/azure_arc
74adf592c39d43fd8a63655bb4b2428bcc7b6d68
[ "CC-BY-4.0", "MIT" ]
1
2022-02-03T10:51:39.000Z
2022-02-03T10:51:39.000Z
# Overview

The following README will guide you on how to use the provided [Terraform](https://www.terraform.io/) plan to deploy an [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes) cluster and connect it as an Azure Arc cluster resource.

# Prerequisites

* Clone this repo

    ```terminal
    git clone https://github.com/microsoft/azure_arc.git
    ```

* [Install or update Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). **Azure CLI should be running version 2.7** or later. Use ```az --version``` to check your current installed version.

* [Install Terraform >=0.12](https://learn.hashicorp.com/terraform/getting-started/install.html)

* Create Azure Service Principal (SP)

    To connect a Kubernetes cluster to Azure Arc, an Azure Service Principal assigned the "Contributor" role is required. To create it, log in to your Azure account and run the command below (this can also be done in [Azure Cloud Shell](https://shell.azure.com/)).

    ```bash
    az login
    az ad sp create-for-rbac -n "<Unique SP Name>" --role contributor
    ```

    For example: ```az ad sp create-for-rbac -n "http://AzureArcK8s" --role contributor```

    Output should look like this:

    ```
    {
    "appId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "displayName": "AzureArcK8s",
    "name": "http://AzureArcK8s",
    "password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "tenant": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
    }
    ```

    **Note**: It is optional but highly recommended to scope the SP to a specific [Azure subscription and Resource Group](https://docs.microsoft.com/en-us/cli/azure/ad/sp?view=azure-cli-latest)

* Enable your subscription for the two resource providers for Azure Arc enabled Kubernetes<br>
  Registration is an asynchronous process, and registration may take approximately 10 minutes.

  ```bash
  az provider register --namespace Microsoft.Kubernetes
  Registering is still on-going. You can monitor using 'az provider show -n Microsoft.Kubernetes'

  az provider register --namespace Microsoft.KubernetesConfiguration
  Registering is still on-going. You can monitor using 'az provider show -n Microsoft.KubernetesConfiguration'
  ```

  You can monitor the registration process with the following commands:

  ```bash
  az provider show -n Microsoft.Kubernetes -o table

  az provider show -n Microsoft.KubernetesConfiguration -o table
  ```

# Deployment

The only thing you need to do before executing the Terraform plan is to export the environment variables which will be used by the plan. This is based on the Azure Service Principal you've just created and your subscription.

In addition, validate that the AKS Kubernetes version is available in your region using the Azure CLI command below.

```az aks get-versions -l "<Your Azure Region>"```

If the AKS service is not available in your region, you can change the AKS Kubernetes version in the [*variables.tf*](../aks/terraform/variables.tf) file by searching for *kubernetes_version*.

* Export the environment variables needed for the Terraform plan.

    ```export TF_VAR_client_id=<Your Azure Service Principal App ID>```

    ```export TF_VAR_client_secret=<Your Azure Service Principal App Password>```

* Run the ```terraform init``` command, which will download the Terraform AzureRM provider.

![](../img/aks_terraform/01.png)

* Run the ```terraform apply --auto-approve``` command and wait for the plan to finish. Once the Terraform deployment has completed, a new AKS cluster in a new Azure Resource Group is created.
![](../img/aks_terraform/02.png)

![](../img/aks_terraform/03.png)

![](../img/aks_terraform/04.png)

* Now that you have a running AKS cluster, edit the environment variables section in the included [az_connect_aks](../aks/terraform/scripts/az_connect_aks.sh) shell script.

![](../img/aks_terraform/05.png)

* In order to keep your local environment clean and untouched, we will use [Azure Cloud Shell](https://docs.microsoft.com/en-us/azure/cloud-shell/overview) (located in the top-right corner of the Azure portal) to run the *az_connect_aks* shell script against the AKS cluster. **Make sure Cloud Shell is configured to use Bash.**

![](../img/aks_terraform/06.png)

* Edit the environment variables in the [*az_connect_aks*](../aks/terraform/scripts/az_connect_aks.sh) shell script to match your parameters, upload it to the Cloud Shell environment, and run it using the ```. ./az_connect_aks.sh``` command.

    **Note**: The extra dot is needed because the script has an *export* function and needs to have the vars exported in the same shell session as the rest of the commands.

![](../img/aks_terraform/07.png)

![](../img/aks_terraform/08.png)

![](../img/aks_terraform/09.png)

![](../img/aks_terraform/10.png)

* Once the script has finished running, the AKS cluster will be projected as a new Azure Arc cluster resource.

![](../img/aks_terraform/11.png)

![](../img/aks_terraform/12.png)

# Delete the deployment

The most straightforward way is to delete the Azure Arc cluster resource via the Azure portal: just select the cluster and delete it.

![](../img/aks_terraform/13.png)

If you want to nuke the entire environment, delete both the AKS and the AKS resources Resource Groups, or run the ```terraform destroy -auto-approve``` command.

![](../img/aks_terraform/14.png)

![](../img/aks_terraform/15.png)
43.039683
329
0.737784
eng_Latn
0.90835
7aad9109477d4c6fcd5e0c48caa515fd1930291f
2,332
md
Markdown
README.md
terrorizer1980/nonce-tracker
7b71c4c0be864db24fbbdf36db003534c84b275f
[ "MIT" ]
29
2019-05-02T19:53:28.000Z
2022-03-01T16:44:59.000Z
README.md
terrorizer1980/nonce-tracker
7b71c4c0be864db24fbbdf36db003534c84b275f
[ "MIT" ]
6
2019-11-18T13:50:11.000Z
2021-10-03T19:56:05.000Z
README.md
terrorizer1980/nonce-tracker
7b71c4c0be864db24fbbdf36db003534c84b275f
[ "MIT" ]
19
2020-01-13T07:36:01.000Z
2021-11-15T06:20:51.000Z
# nonce-tracker

How MetaMask calculates nonces

```js
const NonceTracker = require('nonce-tracker');

const nonceTracker = new NonceTracker(config);

nonceLock = nonceTracker.getNonceLock('0xselectedEthereumAddress');

nonce = nonceLock.nextNonce;
```

## NonceTracker

[index.js:13-159][13]

### Parameters

- `opts` **[Object][14]** {Object}
  - `opts.provider` **[Object][14]** an Ethereum provider
  - `opts.getPendingTransactions` **[Function][15]** a function that returns an array of txMeta
    whose status is `submitted`
  - `opts.getConfirmedTransactions` **[Function][15]** a function that returns an array of txMeta
    whose status is `confirmed`
  - `opts.blockTracker`

### getGlobalLock

[index.js:27-32][16]

Returns **[Promise][17]&lt;[Object][14]>** with the key releaseLock (the global mutex)

### getNonceLock

[index.js:48-82][18]

#### Parameters

- `address`

#### Properties

- `highestLocallyConfirmed` **[number][19]** A hex string of the highest nonce on a confirmed transaction.
- `nextNetworkNonce` **[number][19]** The next nonce suggested by the eth_getTransactionCount method.
- `highestSuggested` **[number][19]** The maximum of the other two; the number returned.

This will return an object with the `nextNonce`, the `nonceDetails`, and the `releaseLock`.
Note: `releaseLock` must be called after adding a signed tx to pending transactions (or discarding).

#### Parameters

- `address` {string} the hex string for the address whose nonce we are calculating

Returns **[Promise][17]&lt;NonceDetails>**

## Running tests

```bash
yarn test
```

[13]: https://github.com/MetaMask/nonce-tracker/blob/587ee0b25e16543330830e71372e0a9b94c166c4/index.js#L13-L159 'Source code on GitHub'
[14]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object
[15]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/function
[16]: https://github.com/MetaMask/nonce-tracker/blob/587ee0b25e16543330830e71372e0a9b94c166c4/index.js#L27-L32 'Source code on GitHub'
[17]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise
[18]: https://github.com/MetaMask/nonce-tracker/blob/587ee0b25e16543330830e71372e0a9b94c166c4/index.js#L48-L82 'Source code on GitHub'
[19]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number
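
Grounded in the note above that `releaseLock` must be called after the signed tx has been added to pending transactions (or discarded), a typical usage sketch might look like this. The config callbacks are stubs, and `provider`/`blockTracker` are assumed to be available in your app.

```js
// Sketch of the locking discipline described above: always release the
// lock once the signed tx has been added to pending (or discarded).
const NonceTracker = require('nonce-tracker');

const nonceTracker = new NonceTracker({
  provider,                                  // an Ethereum provider (assumed)
  blockTracker,                              // a block tracker (assumed)
  getPendingTransactions: (address) => [],   // your app's pending txMeta
  getConfirmedTransactions: (address) => [], // your app's confirmed txMeta
});

async function submitTx(address, signAndSubmit) {
  const nonceLock = await nonceTracker.getNonceLock(address);
  try {
    await signAndSubmit(nonceLock.nextNonce);
  } finally {
    // Release even on failure so other senders are not blocked.
    nonceLock.releaseLock();
  }
}
```
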
32.84507
135
0.752573
yue_Hant
0.611226
7aae8dc5957663225598809d6152644bece868b3
1,578
md
Markdown
php/SOLID/Isp/README.md
appkr/pattern
8075aa34e24a19961542cf020917e087a7fd590f
[ "MIT" ]
10
2017-07-10T03:04:28.000Z
2019-07-08T09:14:44.000Z
php/SOLID/Isp/README.md
appkr/pattern
8075aa34e24a19961542cf020917e087a7fd590f
[ "MIT" ]
null
null
null
php/SOLID/Isp/README.md
appkr/pattern
8075aa34e24a19961542cf020917e087a7fd590f
[ "MIT" ]
2
2017-02-13T02:40:08.000Z
2017-09-17T23:57:12.000Z
## SOLID - Interface Segregation Principle

The Interface Segregation Principle says that a supertype should not force its subtypes to implement behavior they do not need.

Here, "interface" does not necessarily mean the `interface` language construct found in PHP, Java, or C++/C#. That said, even in languages that do have `interface`, this problem commonly arises when the functions that make up an interface are poorly grouped.

### 1. Install and run

```bash
~/pattern $ composer install
~/pattern $ vendor/bin/phpunit SOLID\Isp
```

![](docs/isp.phpunit.png)

### 2. Bad Scenario

![](docs/Bad.Class.png)

In this example, `abstract class Animal` uses the `abstract function` keyword to force every subclass to implement both `speak()` and `swim()`. Using an `interface` instead of `abstract` would have the same effect. Note that since PHP is not a compiled language, failing to implement a declared `abstract` function produces an error at runtime.

For the `Dog` subtype that inherits from the `Animal` supertype, having both the `speak()` and `swim()` capabilities (behaviors) feels natural. It is common sense that dogs bark and swim well.

For the `Cat` subtype that inherits from `Animal`, however, having the `speak()` capability makes sense, but having the `swim()` capability is awkward. Because the supertype forces the implementation, `Cat` implements `swim()` anyway, but the method body just throws an exception saying a `Cat` cannot swim. This is a design flaw.

### 3. ISP Conformance Scenario

![](docs/Isp.Class.png)

Since PHP does not support multiple inheritance, this scenario uses `interface`s instead. The `abstract class Animal` used in the example is a [Template Method Pattern](https://en.wikipedia.org/wiki/Template_method_pattern) that avoids duplication between the `Dog` and `Cat` implementations; it is unrelated to the Interface Segregation Principle itself.

Unlike the previous scenario, the interfaces are split into small pieces: the `Dog` type implements both the `CanSpeak` and `CanSwim` interfaces, while the `Cat` type implements only the `CanSpeak` interface. As a result, the `Cat` class no longer has to implement a `swim()` function.

Taking this one step further, you could define several subtypes of a behavior and inject one of them at runtime, which evolves into the `Strategy` pattern; I will try to build an example of that when I get the chance.
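
For reference, a minimal sketch of the ISP-conformant design could look like the following. The interface and class names follow the post; the method bodies are purely illustrative.

```php
<?php
// Minimal sketch of the ISP-conformant design described above.

interface CanSpeak
{
    public function speak(): string;
}

interface CanSwim
{
    public function swim(): string;
}

class Dog implements CanSpeak, CanSwim
{
    public function speak(): string { return 'Woof!'; }
    public function swim(): string { return 'Dog paddling...'; }
}

class Cat implements CanSpeak
{
    // No swim() here: nothing forces Cat to pretend it can swim.
    public function speak(): string { return 'Meow!'; }
}
```
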
47.818182
233
0.702788
kor_Hang
1.00001
7aaf4a566e0967add22d417e7b04e1815bfc4560
3,218
md
Markdown
docs/label-free_quantification.md
PNNL-Comp-Mass-Spec/proteomics-data-analysis-tutorial
95f592302ce9c9b2873df30b94b784fb9faafe72
[ "MIT" ]
2
2021-12-26T11:15:29.000Z
2022-01-04T22:54:22.000Z
docs/label-free_quantification.md
PNNL-Comp-Mass-Spec/proteomics-data-analysis-tutorial
95f592302ce9c9b2873df30b94b784fb9faafe72
[ "MIT" ]
null
null
null
docs/label-free_quantification.md
PNNL-Comp-Mass-Spec/proteomics-data-analysis-tutorial
95f592302ce9c9b2873df30b94b784fb9faafe72
[ "MIT" ]
null
null
null
# Spectral Counting This is a generic spectral counting script for MS-GF+ Human/UniProt searches. Only modify the lines that change the data package number and the name of the final .xlsx file that will be saved, unless you know what you are doing. ```r ## Uncomment to install missing packages # install.packages("devtools") # library(devtools) # install_github("PNNL-Comp-Mass-Spec/MSnID@pnnl-master") # install_github("PNNL-Comp-Mass-Spec/PlexedPiper") # install_github("PNNL-Comp-Mass-Spec/PNNL.DMS.utils") # if (!requireNamespace("BiocManager", quietly = TRUE)) # install.packages("BiocManager") # BiocManager::install("MSnbase") # install.packages("writexl") # install.packages("dplyr") # install.packages("tibble") library(MSnID) library(PlexedPiper) library(PNNL.DMS.utils) library(MSnbase) library(writexl) library(dplyr) library(tibble) ``` ```r # Data package number data_package_num <- 3987 # Name of the final file to save file_name <- "data/3987_spectral_counts.xlsx" ``` Do not modify anything below unless you know what you are doing. ```r # Read MS-GF+ results from the DMS m <- read_msgf_data_from_DMS(data_package_num = data_package_num) # Filter to 1% FDR at the peptide level m <- filter_msgf_data(m, level = "peptide", fdr.max = 0.01) # UniProt to gene symbol conversion table conv_tab <- fetch_conversion_table(organism_name = "Homo sapiens", from = "UNIPROT", to = "SYMBOL") ``` When running `fetch_conversion_table`, if a prompt appears that requires an answer, type `yes` and press enter. ```r # Modify accessions column of psms to use gene symbols m <- remap_accessions(m, conv_tab, "\\|([^|-]+)(-\\d+)?\\|") # Do the same remapping to the FASTA file fst_path <- path_to_FASTA_used_by_DMS(data_package_num = data_package_num) fst_path_2 <- remap_fasta_entry_names( path_to_FASTA = fst_path, conversion_table = conv_tab, extraction_pttrn = "\\|([^|-]+)(-\\d+)?\\|" ) # Compute the number of amino acids per 1000 and use that to filter # to 1% FDR at the protein level m <- compute_num_peptides_per_1000aa(m, fst_path_2) m <- filter_msgf_data(m, "accession", fdr.max = 0.01) # Parsimonious protein inference m <- infer_parsimonious_accessions(m) show(m) # Assessment of filtering quality ``` ``` ## MSnID object ## Working directory: "." ## #Spectrum Files: 8 ## #PSMs: 87392 at 0.044 % FDR ## #peptides: 21634 at 0.12 % FDR ## #accessions: 2421 at 1 % FDR ``` The results look reasonable, so we will continue on to spectral counting. ```r # Remove decoys m <- apply_filter(m, "!isDecoy") # Convert m to an MSnSet msnset <- as(m, "MSnSet") # Spectral counting: # Within each accession group, sum the values within columns. msnset <- combineFeatures(msnset, fData(msnset)$accession, redundancy.handler = "multiple", method = "sum", cv = FALSE) # Sort features from most to least abundant tot_count <- rowSums(exprs(msnset)) msnset <- msnset[order(-tot_count), ] ``` ```r # Save exprs as an .xlsx file msnset %>% exprs() %>% as.data.frame() %>% rownames_to_column("Gene") %>% write_xlsx(path = file_name) ```
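
As an optional sanity check (not part of the original script), you could re-read the saved workbook and confirm the ordering; this assumes the readxl package is installed.

```r
# Optional sanity check: re-read the saved workbook and inspect the
# most abundant genes (the rows were sorted above).
library(readxl)

counts <- read_xlsx("data/3987_spectral_counts.xlsx")
dim(counts)       # rows = genes; columns = Gene plus one column per file
head(counts, 10)  # top rows are the most abundant accessions
```
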
27.271186
228
0.689559
eng_Latn
0.751189
7aaf539c3cd27e254f1b5fd87133f9061eda41ea
5,427
md
Markdown
translations/zh-CN/content/admin/code-security/managing-github-advanced-security-for-your-enterprise/configuring-secret-scanning-for-your-appliance.md
hariharan2822/docs
0256da2b47050c73fea87f58a8be7fbdb767cdd9
[ "CC-BY-4.0", "MIT" ]
6
2022-01-14T15:13:12.000Z
2022-01-23T08:44:44.000Z
translations/zh-CN/content/admin/code-security/managing-github-advanced-security-for-your-enterprise/configuring-secret-scanning-for-your-appliance.md
hariharan2822/docs
0256da2b47050c73fea87f58a8be7fbdb767cdd9
[ "CC-BY-4.0", "MIT" ]
26
2022-03-03T06:47:45.000Z
2022-03-29T19:20:42.000Z
translations/zh-CN/content/admin/code-security/managing-github-advanced-security-for-your-enterprise/configuring-secret-scanning-for-your-appliance.md
Waleedalaedy/docs
26d4b73dcbb9a000c32faa37234288649f8d211a
[ "CC-BY-4.0", "MIT" ]
1
2022-02-28T19:57:01.000Z
2022-02-28T19:57:01.000Z
---
title: Configuring secret scanning for your appliance
shortTitle: Configuring secret scanning
intro: 'You can enable, configure, and disable {% data variables.product.prodname_secret_scanning %} for {% data variables.product.product_location %}. {% data variables.product.prodname_secret_scanning_caps %} allows users to scan code for accidentally committed secrets.'
product: '{% data reusables.gated-features.secret-scanning %}'
miniTocMaxHeadingLevel: 3
redirect_from:
  - /admin/configuration/configuring-secret-scanning-for-your-appliance
  - /admin/advanced-security/configuring-secret-scanning-for-your-appliance
versions:
  ghes: '*'
type: how_to
topics:
  - Advanced Security
  - Enterprise
  - Secret scanning
  - Security
---

{% data reusables.secret-scanning.beta %}

## About {% data variables.product.prodname_secret_scanning %}

If someone checks a secret with a known pattern into a repository, {% data variables.product.prodname_secret_scanning %} catches the secret as it's checked in, and helps you mitigate the impact of the leak. Repository administrators are notified about any commit that contains a secret, and they can quickly view all detected secrets in the Security tab for the repository. For more information, see "[About {% data variables.product.prodname_secret_scanning %}](/code-security/secret-scanning/about-secret-scanning)."

## Checking whether your license includes {% data variables.product.prodname_GH_advanced_security %}

{% data reusables.advanced-security.check-for-ghas-license %}

## Prerequisites for {% data variables.product.prodname_secret_scanning %}

- The [SSSE3](https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf#G3.1106470) (Supplemental Streaming SIMD Extensions 3) CPU flag needs to be enabled on the VM/KVM that runs {% data variables.product.product_location %}.
- A license for {% data variables.product.prodname_GH_advanced_security %}{% ifversion ghes > 3.0 %} (see "[About billing for {% data variables.product.prodname_GH_advanced_security %}](/billing/managing-billing-for-github-advanced-security/about-billing-for-github-advanced-security)"){% endif %}
- {% data variables.product.prodname_secret_scanning_caps %} enabled in the management console (see "[Enabling {% data variables.product.prodname_GH_advanced_security %} for your enterprise](/admin/advanced-security/enabling-github-advanced-security-for-your-enterprise)")

### Checking support for the SSSE3 flag on your vCPUs

The SSSE3 set of instructions is required because {% data variables.product.prodname_secret_scanning %} leverages hardware accelerated pattern matching to find potential credentials committed to your {% data variables.product.prodname_dotcom %} repositories. SSSE3 is enabled for most modern CPUs. You can check whether SSSE3 is enabled for the vCPUs available to your {% data variables.product.prodname_ghe_server %} instance.

1. Connect to the administrative shell for your {% data variables.product.prodname_ghe_server %} instance. For more information, see "[Accessing the administrative shell (SSH)](/admin/configuration/accessing-the-administrative-shell-ssh)."
2. Enter the following command:

   ```shell
   grep -iE '^flags.*ssse3' /proc/cpuinfo >/dev/null; echo $?
   ```

   If this returns the value `0`, it means that the SSSE3 flag is available and enabled. You can now enable {% data variables.product.prodname_secret_scanning %} for {% data variables.product.product_location %}.
For more information, see "[Enabling {% data variables.product.prodname_secret_scanning %}](#enabling-secret-scanning)" below. If this doesn't return `0`, SSSE3 is not enabled on your VM/KVM. You need to refer to the documentation of the hardware/hypervisor on how to enable the flag, or make it available to guest VMs. ## Enabling {% data variables.product.prodname_secret_scanning %} {% data reusables.enterprise_management_console.enable-disable-security-features %} {% data reusables.enterprise_site_admin_settings.access-settings %} {% data reusables.enterprise_site_admin_settings.management-console %} {% data reusables.enterprise_management_console.advanced-security-tab %} 1. Under "{% ifversion ghes < 3.2 %}{% data variables.product.prodname_advanced_security %}{% else %}Security{% endif %}," click **{% data variables.product.prodname_secret_scanning_caps %}**. ![Checkbox to enable or disable {% data variables.product.prodname_secret_scanning %}](/assets/images/enterprise/management-console/enable-secret-scanning-checkbox.png) {% data reusables.enterprise_management_console.save-settings %} ## Disabling {% data variables.product.prodname_secret_scanning %} {% data reusables.enterprise_management_console.enable-disable-security-features %} {% data reusables.enterprise_site_admin_settings.access-settings %} {% data reusables.enterprise_site_admin_settings.management-console %} {% data reusables.enterprise_management_console.advanced-security-tab %} 1. Under "{% ifversion ghes < 3.2 %}{% data variables.product.prodname_advanced_security %}{% else %}Security{% endif %}," unselect **{% data variables.product.prodname_secret_scanning_caps %}**. ![Checkbox to enable or disable {% data variables.product.prodname_secret_scanning %}](/assets/images/enterprise/management-console/secret-scanning-disable.png) {% data reusables.enterprise_management_console.save-settings %}
72.36
518
0.788465
eng_Latn
0.880764
7aaffdf927209ddf8caaaa72c91c7146b36751f6
3,377
md
Markdown
_posts/2019-05-30-CPU-GPU-and-TPU-in-Deep-Learning.md
evi1haxor/codezonediitj.github.io
4da948a6f8f31ffd500782b0f349404eccee8625
[ "CC-BY-3.0" ]
6
2019-06-08T07:13:36.000Z
2020-04-29T21:25:44.000Z
_posts/2019-05-30-CPU-GPU-and-TPU-in-Deep-Learning.md
evi1haxor/codezonediitj.github.io
4da948a6f8f31ffd500782b0f349404eccee8625
[ "CC-BY-3.0" ]
29
2019-06-10T12:20:04.000Z
2021-09-27T22:07:30.000Z
_posts/2019-05-30-CPU-GPU-and-TPU-in-Deep-Learning.md
evi1haxor/codezonediitj.github.io
4da948a6f8f31ffd500782b0f349404eccee8625
[ "CC-BY-3.0" ]
8
2019-06-07T07:41:01.000Z
2020-02-06T06:03:05.000Z
---
layout: post
title: "CPU, GPU and TPU in Deep Learning"
date: 2019-05-30
author: "Anukriti Jha"
excerpt: "The GPU (Graphical Processing Unit) is one of the most widely used processing units for training data models. It is a single processor chip that frees CPU cycles from image processing jobs and mathematical computations. Consider, for example, a game with a lot of graphics: many shadows to cast, lots of atmospheric effects, various lighting sources, complex textures, and so on."
image: "/images/benchmark-cpu-gpu.png"
is_pinned: false
---

The GPU (Graphical Processing Unit) is one of the most widely used processing units for training data models. It is a single processor chip that frees CPU cycles from image processing jobs and mathematical computations. Consider, for example, a game with a lot of graphics: many shadows to cast, lots of atmospheric effects, various lighting sources, complex textures, and so on. All of these eat up extra GPU processing. Also, higher resolution settings mean that each frame needs more such calculations just to display every pixel. Everything is handled by the GPU. When the GPU cannot keep pace with the CPU, bottlenecks occur.

#### Some stark differences between CPU and GPU

A CPU processes data sequentially, whereas a GPU has several threads running simultaneously. A GPU thus utilizes parallel computing to increase the speed of training models. The CPU assigns graphics rendering, vector computations, and other highly parallel tasks to the GPU.

GPUs are bandwidth optimized, whereas CPUs are latency (memory access time) optimized.

In deep learning, the host code runs on the CPU and the CUDA code runs on the GPU. Serial workloads are handled by the CPU, and offloaded parallel computation is handled by the GPU.

Due to large datasets, the CPU takes up a lot of memory while training the model. The standalone GPU, on the other hand, comes with dedicated VRAM memory. Thus, the CPU's memory can be used for other tasks.

Computing huge and complex jobs takes up a lot of clock cycles on a CPU. The reason is that the CPU processes jobs sequentially and has fewer cores than its counterpart, the GPU. The high bandwidth, the hiding of latency under thread parallelism, and the easily programmable registers make a GPU a lot faster than a CPU.

#### The new Intel Xeon Phi processing chip

It fetches and decodes instructions from four hardware thread execution contexts and has a 4-clock latency, hidden by round-robin scheduling of threads. Each microprocessor core is a fully functional, in-order core capable of running IA instructions independently of the other cores. It has two pipelines and a ring interconnect.

#### The TPU chip

The chip has been specifically designed for Google's TensorFlow framework. In Google Photos, an individual TPU can process over 100 million photos a day. Google's Cloud TPU is currently only in beta, offering limited quantities and usage. In machine learning training, the Cloud TPU is more powerful in performance and four times larger in memory capacity than Nvidia's best GPU, the Tesla V100.

In conclusion, ML and DL are very much dependent on the three pillars of computer engineering: operating systems, computer organization and architecture, and compiler design. While general-purpose computing is still the CPU's domain, GPUs are the hardware backbone of nearly all intensive computational applications.
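
The host/device split mentioned above ("host code on the CPU, CUDA code on the GPU") is easiest to see in a minimal CUDA program. The following sketch is purely illustrative; all names are hypothetical.

```cuda
// Minimal illustration of the host/device split: the CPU (host) sets up
// data and launches work; the GPU (device) runs many threads in parallel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);  // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);  // device code, in parallel
    cudaDeviceSynchronize();                     // host waits for the GPU

    printf("c[0] = %f\n", c[0]);                 // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```
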
93.805556
646
0.800711
eng_Latn
0.999268
7ab071fc6638be70010de8f3f68ae453964a82ad
29
md
Markdown
README.md
PurpleBabar/.profilr
aaf29e40d24589110ca80ea4e63440e383876a97
[ "MIT" ]
null
null
null
README.md
PurpleBabar/.profilr
aaf29e40d24589110ca80ea4e63440e383876a97
[ "MIT" ]
null
null
null
README.md
PurpleBabar/.profilr
aaf29e40d24589110ca80ea4e63440e383876a97
[ "MIT" ]
null
null
null
# .profilr A profile manager
9.666667
17
0.758621
eng_Latn
0.328067
7ab1b37c30ffb10118c4c9dfc9d3defd669b8f6a
1,041
md
Markdown
README.md
Kailaash-Balachandran/Youtube-Fetch-Videos
68401e098eeb4136a5b22bd56463c7d8a9f20c05
[ "MIT" ]
1
2017-11-29T12:49:24.000Z
2017-11-29T12:49:24.000Z
README.md
Kailaash-Balachandran/Youtube-Fetch-Videos
68401e098eeb4136a5b22bd56463c7d8a9f20c05
[ "MIT" ]
null
null
null
README.md
Kailaash-Balachandran/Youtube-Fetch-Videos
68401e098eeb4136a5b22bd56463c7d8a9f20c05
[ "MIT" ]
null
null
null
# youtube-fetch-video
Connect to the YouTube API with ease.

## Install

```sh
$ npm install youtube-fetch-video
```

## Usage

``` js
import YTFetch from 'youtube-fetch-video';

YTFetch({
  key: YOUTUBE_API_KEY,
  term: this.state.term,
}, (data) => {
  this.setState({ youtube_videos: data });
});
```

## Search

``` js
var options = {
  key: 'YouTubeAPIKey',
  term: 'Coldplay'
}

YTFetch(options, function (err, data) {
  // your custom code
});
```

### Mandatory parameters

```
key     Your unique YouTube API key
term    Video search term
```

## Filters and additional parameters

```
* maxResults
  Acceptable values are 0 to 50, both inclusive. The default is 5.

* order
  The order parameter specifies the sort order. The default is 'relevance'.

* type
  Acceptable values are:
    * channel
    * playlist
    * video
  The default value is: video
```

More info at the official website: https://developers.google.com/youtube/v3/docs/search/list#parmetros

## License

MIT Copyright (c) 2017 - Kailaash Balachandran
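
Putting the documented filters together, a fuller call might look like the following sketch, which follows the callback shape of the Search example above.

```js
// Combining the documented options (key, term, maxResults, order, type).
var YTFetch = require('youtube-fetch-video');

var options = {
  key: 'YouTubeAPIKey',   // your API key
  term: 'Coldplay',
  maxResults: 10,         // 0 to 50, default 5
  order: 'relevance',     // default
  type: 'video'           // 'channel' | 'playlist' | 'video'
};

YTFetch(options, function (err, data) {
  if (err) { return console.error(err); }
  console.log(data);      // search results
});
```
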
16.52381
98
0.663785
eng_Latn
0.761957
7ab22e9f9ac78d6ca5634d34dafa8d5794e8cef4
1,017
md
Markdown
README.md
perimosocordiae/webtool
4cffa018e8a7c0057287604c57a4bd7bf751a069
[ "MIT" ]
null
null
null
README.md
perimosocordiae/webtool
4cffa018e8a7c0057287604c57a4bd7bf751a069
[ "MIT" ]
null
null
null
README.md
perimosocordiae/webtool
4cffa018e8a7c0057287604c57a4bd7bf751a069
[ "MIT" ]
null
null
null
# WebTool Hassle-free web interface generator for Python code. ## Usage ```python from webtool import webtool, webfn, webarg @webfn('My function', 'This is a description.', foo=webarg('This is an argument.', type=int, default=4), bar=webarg('Another arg.')) def any_python_function(state, foo='4', bar=''): # state is a dict that persists for the whole webtool session state['called'] = state.get('called', 0) + 1 # the returned string will be displayed as HTML below the argument form return '<b>Any HTML</b> output goes <i>here</i>.' if __name__ == '__main__': webtool(title='Usage Example', port=8787, f=any_python_function) ``` Matplotlib figures are supported as well. Create figures as normal in a `webfn`-decorated function, and they will appear as interactive WebAgg plots. ## Installation WebTool isn't on PyPI yet, but you can install from GitHub: pip install git+https://github.com/perimosocordiae/webtool.git Alternatively, clone this repo and run `setup.py install`.
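
The matplotlib support mentioned above can be sketched like so. This assumes the decorator behavior shown in the usage example; the plotting function itself is hypothetical.

```python
# Sketch of the matplotlib support described above: create a figure as
# normal inside a webfn-decorated function, and webtool renders it as an
# interactive WebAgg plot.
import numpy as np
import matplotlib.pyplot as plt
from webtool import webtool, webfn, webarg

@webfn('Sine plot', 'Plot sin(freq * x).',
       freq=webarg('Angular frequency.', type=float, default=1.0))
def plot_sine(state, freq=1.0):
    x = np.linspace(0, 2 * np.pi, 200)
    plt.figure()
    plt.plot(x, np.sin(float(freq) * x))
    plt.title('sin(%s x)' % freq)
    # The returned HTML is shown below the argument form as usual.
    return 'Plotted a sine wave with freq=%s.' % freq

if __name__ == '__main__':
    webtool(title='Plot Example', port=8787, f=plot_sine)
```
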
30.818182
73
0.723697
eng_Latn
0.961955
7ab25a852a5bd2448b1bedd9cb62adaf6c826130
4,822
md
Markdown
docs/vs-2015/extensibility/debugger/reference/type-info.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/debugger/reference/type-info.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/debugger/reference/type-info.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: TYPE_INFO | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
  - vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
  - TYPE_INFO
helpviewer_keywords:
  - TYPE_INFO structure
ms.assetid: d725cb68-a565-49d1-a16f-ff0445c587a0
caps.latest.revision: 11
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: 628d6e5ae2e13ea117cb3fd50aca3ba2150ac59f
ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 11/16/2018
ms.locfileid: "51755458"
---
# <a name="typeinfo"></a>TYPE_INFO
[!INCLUDE[vs2017banner](../../../includes/vs2017banner.md)]

This structure specifies the various kinds of information about a field's type.

## <a name="syntax"></a>Syntax

```cpp
struct _tagTYPE_INFO_UNION {
   dwTYPE_KIND dwKind;
   union {
      METADATA_TYPE typeMeta;
      PDB_TYPE      typePdb;
      BUILT_TYPE    typeBuilt;
      DWORD         unused;
   } type;
} TYPE_INFO;
```

```csharp
public struct TYPE_INFO {
   public uint   dwKind;
   public IntPtr unionmember;
};
```

#### <a name="parameters"></a>Parameters

dwKind
A value from the [dwTYPE_KIND](../../../extensibility/debugger/reference/dwtype-kind.md) enumeration that determines how the union is interpreted.

type.typeMeta
[C++ only] Contains a [METADATA_TYPE](../../../extensibility/debugger/reference/metadata-type.md) structure if `dwKind` is `TYPE_KIND_METADATA`.

type.typePdb
[C++ only] Contains a [PDB_TYPE](../../../extensibility/debugger/reference/pdb-type.md) structure if `dwKind` is `TYPE_KIND_PDB`.

type.typeBuilt
[C++ only] Contains a [BUILT_TYPE](../../../extensibility/debugger/reference/built-type.md) structure if `dwKind` is `TYPE_KIND_BUILT`.

type.unused
Unused padding.

type
The name of the union.

unionmember
[C# only] Marshal this to the appropriate structure type based on `dwKind`.

## <a name="remarks"></a>Remarks

This structure is passed to the [GetTypeInfo](../../../extensibility/debugger/reference/idebugfield-gettypeinfo.md) method, where it is filled in. How the contents of the structure are interpreted is based on the `dwKind` field.

> [!NOTE]
> [C++ only] If `dwKind` equals `TYPE_KIND_BUILT`, it is necessary to release the underlying [IDebugField](../../../extensibility/debugger/reference/idebugfield.md) object when destroying the `TYPE_INFO` structure. This is done by calling `typeInfo.type.typeBuilt.pUnderlyingField->Release()`.

[C# only] The following table shows how to interpret the `unionmember` member for each kind of type. The example shows how this is done for one kind of type.

|`dwKind`|`unionmember` interpreted as|
|--------------|----------------------------------|
|`TYPE_KIND_METADATA`|[METADATA_TYPE](../../../extensibility/debugger/reference/metadata-type.md)|
|`TYPE_KIND_PDB`|[PDB_TYPE](../../../extensibility/debugger/reference/pdb-type.md)|
|`TYPE_KIND_BUILT`|[BUILT_TYPE](../../../extensibility/debugger/reference/built-type.md)|

## <a name="example"></a>Example

This example shows how to interpret the `unionmember` member of the `TYPE_INFO` structure in C#. The example shows how to interpret only one kind (`TYPE_KIND_METADATA`), but the others are interpreted in exactly the same way.

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.Debugger.Interop;

namespace MyPackage
{
    public class MyClass
    {
        public void Interpret(TYPE_INFO ti)
        {
            if (ti.dwKind == (uint)enum_dwTypeKind.TYPE_KIND_METADATA)
            {
                METADATA_TYPE dataType = (METADATA_TYPE)Marshal.PtrToStructure(ti.unionmember, typeof(METADATA_TYPE));
            }
        }
    }
}
```

## <a name="requirements"></a>Requirements
Header: sh.h

Namespace: Microsoft.VisualStudio.Debugger.Interop

Assembly: Microsoft.VisualStudio.Debugger.Interop.dll

## <a name="see-also"></a>See also
[Structures and Unions](../../../extensibility/debugger/reference/structures-and-unions.md)
[dwTYPE_KIND](../../../extensibility/debugger/reference/dwtype-kind.md)
[GetTypeInfo](../../../extensibility/debugger/reference/idebugfield-gettypeinfo.md)
[METADATA_TYPE](../../../extensibility/debugger/reference/metadata-type.md)
[PDB_TYPE](../../../extensibility/debugger/reference/pdb-type.md)
[BUILT_TYPE](../../../extensibility/debugger/reference/built-type.md)
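
Since the Example section above notes that the other kinds are interpreted in exactly the same way, a sketch for the `TYPE_KIND_PDB` case would follow the same pattern:

```csharp
// Same pattern as the documented example, applied to TYPE_KIND_PDB:
// per the table above, unionmember is marshaled to a PDB_TYPE structure.
using System;
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.Debugger.Interop;

namespace MyPackage
{
    public class MyClass2
    {
        public void InterpretPdb(TYPE_INFO ti)
        {
            if (ti.dwKind == (uint)enum_dwTypeKind.TYPE_KIND_PDB)
            {
                PDB_TYPE pdbType =
                    (PDB_TYPE)Marshal.PtrToStructure(ti.unionmember,
                                                     typeof(PDB_TYPE));
            }
        }
    }
}
```
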
37.671875
316
0.689548
deu_Latn
0.506561
7ab288462f766859a9ac482df4d5a12dd7155c57
1,697
md
Markdown
docs/vs-2015/designers/shader-designer-examples.md
galaxyuliana/visualstudio-docs.ko-kr
0f07b2bdcdecc134d4f27d7da71521546f4046a6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/designers/shader-designer-examples.md
galaxyuliana/visualstudio-docs.ko-kr
0f07b2bdcdecc134d4f27d7da71521546f4046a6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/designers/shader-designer-examples.md
galaxyuliana/visualstudio-docs.ko-kr
0f07b2bdcdecc134d4f27d7da71521546f4046a6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Shader Designer Examples | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-designers
ms.topic: conceptual
ms.assetid: f12f5dee-63ab-4376-9952-7f87f269e9c4
caps.latest.revision: 11
author: gewarren
ms.author: gewarren
manager: jillfra
ms.openlocfilehash: 17486ad7206a49eabae1998bd060c697c011ba8e
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 04/23/2019
ms.locfileid: "68184964"
---
# <a name="shader-designer-examples"></a>Shader Designer Examples
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]

The articles in this section of the documentation contain examples that show how to use the Shader Designer to create a variety of graphics effects.

## <a name="related-topics"></a>Related topics

|||
|-|-|
|[How to: Create a Basic Color Shader](../designers/how-to-create-a-basic-color-shader.md)|Demonstrates a shader that applies a constant color to an object.|
|[How to: Create a Basic Lambert Shader](../designers/how-to-create-a-basic-lambert-shader.md)|Demonstrates a shader that applies the classic Lambert lighting model to an object.|
|[How to: Create a Basic Phong Shader](../designers/how-to-create-a-basic-phong-shader.md)|Demonstrates a shader that applies the classic Phong lighting model to an object.|
|[How to: Create a Basic Texture Shader](../designers/how-to-create-a-basic-texture-shader.md)|Demonstrates a shader that applies a texture to an object.|
|[How to: Create a Grayscale Texture Shader](../designers/how-to-create-a-grayscale-texture-shader.md)|Demonstrates a shader that converts a texture to grayscale during rendering and applies it to an object.|
|[How to: Create a Geometry-Based Gradient Shader](../designers/how-to-create-a-geometry-based-gradient-shader.md)|Demonstrates a shader that modulates color based on an object's geometry and applies it to the object.|
|[Walkthrough: Creating a Realistic 3-D Billiard Ball](../designers/walkthrough-creating-a-realistic-3-d-billiard-ball.md)|Shows how to combine shader techniques and texture resources to create a realistic billiard-ball shader.|
|[How to: Export a Shader](../designers/how-to-export-a-shader.md)|Describes how to export a DGSL shader in a format that your app can use.|
47.138889
146
0.727755
kor_Hang
0.999058
7ab31cf943f68c528f41c95c432f9a8fe5f59537
8,809
md
Markdown
uberfire-docs/docs/gettingStarted/improvingYourFirstApp.md
kiereleaseuser/uberfire
547a39f094ddeb9df28e539f6952f49585bf7a5c
[ "Apache-2.0" ]
null
null
null
uberfire-docs/docs/gettingStarted/improvingYourFirstApp.md
kiereleaseuser/uberfire
547a39f094ddeb9df28e539f6952f49585bf7a5c
[ "Apache-2.0" ]
2
2020-04-15T21:10:25.000Z
2021-06-08T23:38:43.000Z
uberfire-docs/docs/gettingStarted/improvingYourFirstApp.md
MEM2677/uberfire-webapp-test
03f707b4f00cffea7b325ed9b96a3c67455fe6d4
[ "Apache-2.0" ]
null
null
null
# Improving your first App

In this section, we will create some basic Uberfire components, aiming to give you an idea of how Uberfire works.

For now, don't pay too much attention to new terms and concepts presented here; it's time to just have fun. The Uberfire Architecture and details of how everything glues together will be presented in the [Tutorial](../tutorial/tutorial.md) section.

## Feeling Uberfire

Let's change our App so we can get a better feel for how Uberfire workbench perspectives and panels fit together. We'll create two screens backed by a simple model class to demonstrate how you'd typically separate model from view in an UberFire application and how screens communicate in a decoupled way.

### Creating our model

The data model in an UberFire app is typically represented by Plain Old Java Objects (POJOs). This leaves you the flexibility to use them in other frameworks that like POJOs, such as JPA, JAXB, Errai Data Binding, and much more, by adorning them with annotations.

For now, our extremely simple data model will just be an unadorned POJO. The model class will be called Mood, and it will represent how the current user is feeling at the moment. Place it in the org.uberfire.shared package of your web app.

```
package org.uberfire.shared;

public class Mood {

    private final String text;

    public Mood( String text ) {
        this.text = text;
    }

    public String getText() {
        return text;
    }

    @Override
    public String toString() {
        return text;
    }
}
```

### Creating MoodScreen, a Templated Widget

For MoodScreen, let's use the Errai UI Template system. This approach is similar to GWT UiBinder, but it lets you create the template in a plain HTML 5 file rather than a specialized UiBinder XML file.

Create an HTML file named MoodScreen.html inside the Java package org.uberfire.client.screens with this content:

```
<form data-field="moodForm">
    <div class="input-group">
        <input data-field="moodTextBox" type="text" placeholder="How do you feel?">
    </div>
</form>
```

Create a Java class "MoodScreen.java" in the package org.uberfire.client.screens; it is backed by the client-side template we just created and forms the new MoodScreen widget. Here's what that looks like:

```
package org.uberfire.client.screens;

import org.jboss.errai.common.client.dom.Form;
import org.jboss.errai.common.client.dom.Input;
import org.jboss.errai.ui.client.local.api.IsElement;
import org.jboss.errai.ui.shared.api.annotations.DataField;
import org.jboss.errai.ui.shared.api.annotations.Templated;
import org.uberfire.client.annotations.WorkbenchPartTitle;
import org.uberfire.client.annotations.WorkbenchPartView;
import org.uberfire.client.annotations.WorkbenchScreen;
import org.uberfire.shared.Mood;

import javax.annotation.PostConstruct;
import javax.enterprise.context.Dependent;
import javax.enterprise.event.Event;
import javax.inject.Inject;

@Dependent
@Templated
@WorkbenchScreen( identifier = "MoodScreen" )
public class MoodScreen implements IsElement {

    @Inject
    @DataField
    Form moodForm;

    @Inject
    @DataField
    Input moodTextBox;

    @Inject
    Event<Mood> moodEvent;

    @WorkbenchPartTitle
    public String getScreenTitle() {
        return "Change Mood";
    }

    @PostConstruct
    public void init() {
        moodForm.setOnsubmit( e -> {
            e.preventDefault();
            moodEvent.fire( new Mood( moodTextBox.getValue() ) );
            moodTextBox.setValue( "" );
        } );
    }

    @WorkbenchPartView
    public IsElement getView() {
        return this;
    }
}
```

MoodScreen is very similar to HelloWorldScreen. The structural differences are related to our choice to use just an Errai UI Template instead of a full MVP (Model View Presenter) structure.
See more about Errai UI templates in [this guide](https://docs.jboss.org/author/display/ERRAI/Errai+UI).

### Creating MoodListenerScreen

Create an HTML file named MoodListenerScreen.html inside the Java package org.uberfire.client.screens with this content:

```
<div data-field="view">
    <input data-field="moodTextBox" type="text" placeholder="I understand that you are feeling...">
</div>
```

And create MoodListenerScreen.java, inside org.uberfire.client.screens:

```
package org.uberfire.client.screens;

import org.jboss.errai.common.client.dom.Input;
import org.jboss.errai.ui.client.local.api.IsElement;
import org.jboss.errai.ui.shared.api.annotations.DataField;
import org.jboss.errai.ui.shared.api.annotations.Templated;
import org.uberfire.client.annotations.WorkbenchPartTitle;
import org.uberfire.client.annotations.WorkbenchPartView;
import org.uberfire.client.annotations.WorkbenchScreen;

import javax.enterprise.context.Dependent;
import javax.inject.Inject;

@Dependent
@Templated
@WorkbenchScreen( identifier = "MoodListenerScreen" )
public class MoodListenerScreen implements IsElement {

    @Inject
    @DataField
    Input moodTextBox;

    @WorkbenchPartTitle
    public String getScreenTitle() {
        return "MoodListenerScreen";
    }

    @WorkbenchPartView
    public IsElement getView() {
        return this;
    }
}
```

### Giving MoodScreen a perspective

Let's create our first perspective, using Uberfire Templated Perspectives.

First, we need to create the perspective Errai UI template, named "MoodPerspective.html", in the org.uberfire.client.perspectives package:

```
<div>
    <div id="home1">
        <span><b>Our MoodScreen</b></span>
        <div data-field="moodScreen"></div>
    </div>
    <div id="home2">
        <span><b>Mood Listener</b></span>
        <div data-field="moodListener"></div>
    </div>
</div>
```

Now, let's create the perspective class MoodPerspective in the org.uberfire.client.perspectives package:

```
package org.uberfire.client.perspectives;

import com.google.gwt.user.client.ui.Composite;
import org.jboss.errai.ui.shared.api.annotations.DataField;
import org.jboss.errai.ui.shared.api.annotations.Templated;
import org.uberfire.client.annotations.WorkbenchPanel;
import org.uberfire.client.annotations.WorkbenchPerspective;
import org.uberfire.client.workbench.panels.UFFlowPanel;

@Templated
@WorkbenchPerspective(identifier = "MoodPerspective")
public class MoodPerspective extends Composite {

    @DataField
    @WorkbenchPanel(parts = "MoodScreen")
    UFFlowPanel moodScreen = new UFFlowPanel( 100 );

    @DataField
    @WorkbenchPanel(parts = "MoodListenerScreen")
    UFFlowPanel moodListener = new UFFlowPanel( 100 );
}
```

### Adding MoodPerspective

Moving on, let's add MoodPerspective to the menu bar of our app. We need to update org.uberfire.client.ShowcaseEntryPoint and replace the setupMenu method with this:

```
private void setupMenu( @Observes final ApplicationReadyEvent event ) {
    final Menus menus =
            newTopLevelMenu( "Home" )
                    .respondsWith( () -> placeManager.goTo( "MainPerspective" ) )
                    .endMenu()
                    .newTopLevelMenu( "Mood Perspective" )
                    .respondsWith( () -> placeManager.goTo( "MoodPerspective" ) )
                    .endMenu()
                    .build();

    menubar.addMenus( menus );
}
```

### Check your work

It's time to check the classes and packages you created. See an example here:

![Project Structure](5minStructure.png)

### See it work!!!

How about seeing our changes?

```
cd demo-showcase/demo-webapp
mvn clean install
mvn clean gwt:run
```

Click on the MoodPerspective menu:

![hello world](moodPerspective.png)

### Let's make the screens communicate

Did you notice the CDI event raised by MoodScreen?
If not, take a look at the moodForm.setOnsubmit(..) call in the init() method.

Now let's do something in MoodListenerScreen in response to the event we fire when the user presses Enter. To do this, we'll add a CDI observer method to MoodListenerScreen:

```
public void onMoodChange( @Observes Mood mood ) {
    moodTextBox.setValue( "You are feeling " + mood.getText() );
}
```

Build and run your app again (mvn clean install && mvn clean gwt:run), type some text into the "How do you feel?" textbox, and press Enter to see the screens communicating:

![hello world](moodPerspective.png)

### A taste of Uberfire lifecycle events

Uberfire supports a lot of workbench lifecycle events; let's see how they work. Edit MoodPerspective.java and add these two methods, then run the app again and change perspectives to see the events happening.

```
@OnOpen
public void onOpen() {
    Window.alert( "On Open" );
}

@OnClose
public void OnClose() {
    Window.alert( "On Close" );
}
```

Build (mvn clean install) and run your app again (mvn clean gwt:run), then change perspectives to see the events being triggered.
33.880769
335
0.719718
eng_Latn
0.765868
7ab326ecce9e90b895312bc18a559bd49100ec21
230
md
Markdown
ecs/api-reference/nic-management.md
vladimirhasko/docs
4eb1322e83caf34981f6cc8694adb43dc41325f0
[ "Apache-2.0" ]
null
null
null
ecs/api-reference/nic-management.md
vladimirhasko/docs
4eb1322e83caf34981f6cc8694adb43dc41325f0
[ "Apache-2.0" ]
null
null
null
ecs/api-reference/nic-management.md
vladimirhasko/docs
4eb1322e83caf34981f6cc8694adb43dc41325f0
[ "Apache-2.0" ]
null
null
null
# NIC Management<a name="EN-US_TOPIC_0124385012"></a> - **[Adding NICs to an ECS in a Batch](adding-nics-to-an-ecs-in-a-batch.md)** - **[Deleting NICs from an ECS in a Batch](deleting-nics-from-an-ecs-in-a-batch.md)**
28.75
89
0.66087
yue_Hant
0.3405
7ab363a22fb4597990a53736fe99e5f427daee99
3,421
md
Markdown
content/writings/word-play.md
dbx834/nvc
b2fc370f484cbd577c0ae15128758aa0c3bc3a21
[ "MIT" ]
null
null
null
content/writings/word-play.md
dbx834/nvc
b2fc370f484cbd577c0ae15128758aa0c3bc3a21
[ "MIT" ]
null
null
null
content/writings/word-play.md
dbx834/nvc
b2fc370f484cbd577c0ae15128758aa0c3bc3a21
[ "MIT" ]
null
null
null
---
title: Word Play
cover: /content-assets/covers/wordplay.jpg
category: 1.NVC
abstract: At Parshada in Sector 18, L’aura Joy, a certified trainer with the Center for Nonviolent Communication, will give lessons on how to communicate in a nonviolent manner.
date: 2011-07-08
type: post
tags:
  - some tag
---

Published in Indian Express, by Parul Bajaj

_Workshops designed for adults help them communicate better._

“It’s meant only for adults. That too, those who think and are willing to explore, learn and see life from a fresh angle,” says 38-year-old photographer Sameer Khanna, who will be conducting a week-long workshop titled ‘Emotion in Motion’ that will teach participants how to express emotions and thoughts. “The aim of the workshop is to help people explore new relationships. We talk about everything and there are no expectations,” says Khanna, as he dresses his studio in Sector 7 for the workshop scheduled later this month.

The trainer who participated in a workshop in Dharamshala, conducted by a yoga teacher from Canada, has a satisfied customer in Sunaina Dev, a music teacher, who describes the experience as cathartic. “It was Mother’s Day and he told us to draw a card for our mother after an intense pranayam session and chanting. I lost my mom a year back and it was simply amazing to reach out to her,” recalls Dev.

The rather hectic lifestyle and need to communicate better has led to innumerable initiatives and workshops for adults. Conducted by specialists, the options range from workshops specialising in life skills enhancement to emotional and non-violent communication and parenting.

At the end of this month, for instance, Auroville-based polarity therapist and counselling psychologist, Mikael Spector will impart lessons at Coveda in Sector 8. Associated with Sri Aurobindo Ashram, in Chandigarh Spector will provide polarity therapy, healing sessions and conduct a ‘Glorious Body Workshop’ — that includes meditation and body and dance lessons that promise to “release energy blocks, open energy fields and balance energy currents”. “It is an amazing process of harmonising the body and mind through ayurveda,” explains Kultar Nat of Coveda, adding that the fee ranges from Rs 700 to Rs 3,000, depending on the number of days.

A few miles away from Coveda, at Parshada in Sector 18, L’aura Joy, a certified trainer with the Center for Nonviolent Communication, will give lessons on how to communicate in a non-violent manner. To be held July 19 onwards, the workshop will combine different activities like role-play, exercises and individual assignments. “We learn how to receive the beauty of the message being shared by others when we develop inner clarity and ability to express emotions without making judgments,” says Moon Star of Parshada.

Meanwhile, bridging the gap between children and parents and working on the communication between them is the ‘learnshop’ titled ‘Sharing Ideas for working with children’, at the Centre for Education and Voluntary Action (CEVA). Conceptualised by Harleen Kohli of CEVA, the workshop includes discussions where parents share their concerns and also features activities like painting, drama and puzzle-solving. “I got many answers on how to deal with my six-year-old daughter by simply connecting with other parents,” says Neena Kohli.

_Written by Parul Bajaj_

Link to article: http://archive.indianexpress.com/news/word-play/814320/0
126.703704
929
0.805612
eng_Latn
0.999635
7ab3aa6489ffbece6b970cdcf768a2df1f374c78
1,783
md
Markdown
_posts/2021-02-22-bind()-connect()-UDP.md
hyejin2475/hyejin2475.github.io
f8d746cc63b0bb29f1d648c6f34858bef9feb7d2
[ "MIT" ]
null
null
null
_posts/2021-02-22-bind()-connect()-UDP.md
hyejin2475/hyejin2475.github.io
f8d746cc63b0bb29f1d648c6f34858bef9feb7d2
[ "MIT" ]
null
null
null
_posts/2021-02-22-bind()-connect()-UDP.md
hyejin2475/hyejin2475.github.io
f8d746cc63b0bb29f1d648c6f34858bef9feb7d2
[ "MIT" ]
1
2021-03-07T23:09:36.000Z
2021-03-07T23:09:36.000Z
--- layout: single title: bind() and connect() in UDP communication categories: - socket tags: - socket - UDP socket --- ### bind() - In socket communication, binding means associating an IP address and a port with the socket - In other words, it decides where an incoming packet should be delivered when it arrives from outside ### connect() - In TCP communication, connect() does two things (assigning an IP/PORT and requesting a connection) - In UDP communication, connect() only assigns the IP and an arbitrary PORT While testing UDP socket communication, using the bind() function 1) in the receiver: both sendto and recvfrom succeeded 2) in the sender: sendto succeeded, recvfrom failed 3) in both the sender and the receiver: everything succeeded. Those were the results. In TCP socket communication it is generally the server, not the client, that calls bind(), because in TCP the client connects to the server's address: the information needed is the server's address/port, and there is no need to know the client's address. In UDP communication, however, which has to be approached as receiver-sender rather than server-client, the sending side must bind so that the OS knows where to deliver the data it receives. Likewise, connect() worked only once it was placed in the sender rather than the receiver. ### The process by which a packet (data) is delivered to the application (receiver) 1. The packet is received through the network card 2. The network card driver hands the packet to the OS 3. The OS searches the socket list for a socket bound to the packet's destination address and port 4. If one is found, the packet is delivered to that application 5. The application reads it with recvfrom A UDP socket can perform local IP/port binding in two ways, explicitly or implicitly. If a UDP socket is not bound to a local IP and port, recvfrom() cannot be used. *Conclusion: in UDP communication, bind() and connect() must be used on the sending side.* + Addendum) After attaching connect() to the sender, send/recv worked normally. This meant that the TCP-style functions, not only sendto/recvfrom, could also be used for UDP communication, but since we could not be certain how the server would be implemented, we decided to play it safe and use recvfrom, the UDP function. References: [https://nenunena.tistory.com/61][https://nenunena.tistory.com/61] [https://min-310.tistory.com/79][https://min-310.tistory.com/79] [https://mintnlatte.tistory.com/308][https://mintnlatte.tistory.com/308] [https://nenunena.tistory.com/61]: https://nenunena.tistory.com/61 [https://min-310.tistory.com/79]: https://min-310.tistory.com/79 [https://mintnlatte.tistory.com/308]: https://mintnlatte.tistory.com/308
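To make the behaviour described in the post concrete, here is a minimal Python sketch (the original discussion concerns the C socket API, but the semantics are the same; the loopback address and port are arbitrary choices for the example):

```python
import socket

# Receiver: must be bound to a local address/port, otherwise the OS has no
# entry in its socket list for incoming datagrams and recvfrom() never fires.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))  # explicit binding

# Sender: connect() on a UDP socket performs no handshake; it only stores the
# default peer address and implicitly binds a local ephemeral port, which is
# why the TCP-style send()/recv() calls become usable instead of sendto().
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.connect(("127.0.0.1", 9999))  # implicit local binding
sender.send(b"hello")                # TCP-style call works on a UDP socket

data, addr = receiver.recvfrom(1024)  # works because the receiver is bound
print(data, addr)
```

Run in a single process on localhost, this prints the datagram together with the sender's implicitly bound ephemeral address, matching the test results above.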
30.220339
87
0.715087
kor_Hang
1.00001
7ab3b053b1c9aa2d4f574d9361e1092a892b1857
3,037
md
Markdown
wdk-ddi-src/content/d3dkmdt/ns-d3dkmdt-_d3dkmdt_vidpn_present_path_copyprotection.md
jazzdelightsme/windows-driver-docs-ddi
793b0c96e117b1658144ba8b3939fdc31a49f6b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/d3dkmdt/ns-d3dkmdt-_d3dkmdt_vidpn_present_path_copyprotection.md
jazzdelightsme/windows-driver-docs-ddi
793b0c96e117b1658144ba8b3939fdc31a49f6b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/d3dkmdt/ns-d3dkmdt-_d3dkmdt_vidpn_present_path_copyprotection.md
jazzdelightsme/windows-driver-docs-ddi
793b0c96e117b1658144ba8b3939fdc31a49f6b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NS:d3dkmdt._D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION title: _D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION (d3dkmdt.h) description: The D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION structure contains information about the copy protection that is supported (as well as the copy protection that is currently active) on a particular VidPN present path. old-location: display\d3dkmdt_vidpn_present_path_copyprotection.htm tech.root: display ms.assetid: 661e70c6-d99e-4c5a-ad88-3dd854747de4 ms.date: 05/10/2018 ms.keywords: D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION, D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION structure [Display Devices], DmStructs_512b61d6-627d-4423-93ba-0f28ac340e51.xml, _D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION, d3dkmdt/D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION, display.d3dkmdt_vidpn_present_path_copyprotection ms.topic: struct req.header: d3dkmdt.h req.include-header: D3dkmdt.h req.target-type: Windows req.target-min-winverclnt: Available in Windows Vista and later versions of the Windows operating systems. req.target-min-winversvr: req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: topic_type: - APIRef - kbSyntax api_type: - HeaderDef api_location: - d3dkmdt.h api_name: - D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION product: - Windows targetos: Windows req.typenames: D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION --- # _D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION structure ## -description The D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION structure contains information about the copy protection that is supported (as well as the copy protection that is currently active) on a particular VidPN present path. ## -struct-fields ### -field CopyProtectionType A value from the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/d3dkmdt/ne-d3dkmdt-_d3dkmdt_vidpn_present_path_copyprotection_type">D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION_TYPE</a> enumeration that indicates the type of copy protection that is active on the path. ### -field APSTriggerBits A value that describes copy protection for an OEM device. A value of 0 indicates no copy protection, and values of 1, 2, and 3 indicate low, medium, and high levels of copy protection, respectively. Values greater than 3 are not allowed. ### -field OEMCopyProtection Reserved for future use. ### -field CopyProtectionSupport A <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/d3dkmdt/ns-d3dkmdt-_d3dkmdt_vidpn_present_path_copyprotection_support">D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION_SUPPORT</a> structure that indicates the types of copy protection that are supported by the path. ## -remarks The <b>CopyProtection</b> member of the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/d3dkmdt/ns-d3dkmdt-_d3dkmdt_vidpn_present_path">D3DKMDT_VIDPN_PRESENT_PATH</a> structure is a D3DKMDT_VIDPN_PRESENT_PATH_COPYPROTECTION structure.
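As a quick illustration of the documented **APSTriggerBits** contract (0 means no copy protection; 1, 2 and 3 mean low, medium and high; values greater than 3 are not allowed), here is a small Python sketch. It is illustrative only: real drivers consume this structure in C through d3dkmdt.h, and the helper name and level strings below are assumptions, not part of the DDI.

```python
# Documented meaning of APSTriggerBits values (0-3 only).
APS_LEVELS = {0: "no copy protection", 1: "low", 2: "medium", 3: "high"}

def describe_aps_trigger_bits(value: int) -> str:
    """Map an APSTriggerBits value to its documented protection level."""
    if value not in APS_LEVELS:
        raise ValueError("APSTriggerBits must be 0-3; values greater than 3 are not allowed")
    return APS_LEVELS[value]

print(describe_aps_trigger_bits(2))  # "medium"
```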
35.729412
323
0.827132
eng_Latn
0.372115
7ab3f24dfcffd4714384d38b71fd82b0ad857d5e
432
md
Markdown
README.md
stevegardiner26/dedicatedDebator
053c415cc0932739cf8b2b6b4f54ff05644951f9
[ "MIT" ]
null
null
null
README.md
stevegardiner26/dedicatedDebator
053c415cc0932739cf8b2b6b4f54ff05644951f9
[ "MIT" ]
2
2018-11-04T16:46:30.000Z
2022-02-12T04:05:17.000Z
README.md
stevegardiner26/Dedicated-Debator
053c415cc0932739cf8b2b6b4f54ff05644951f9
[ "MIT" ]
null
null
null
# [Dedicated Debater](https://dedicated-debator.herokuapp.com/) A debate website bringing people closer together one debate at a time. ## [Used a New Age Free Bootstrap Template](http://startbootstrap.com/template-overviews/new-age/) [New Age](http://startbootstrap.com/template-overviews/new-age/) is a web app landing page theme for [Bootstrap](http://getbootstrap.com/) created by [Start Bootstrap](http://startbootstrap.com/).
72
196
0.770833
kor_Hang
0.341753
7ab45620caaa0d4caa69637d6c8ad45bc34c2b51
140
md
Markdown
docs/Metadata/autoResponseRules/autoResponseRules.md
sfdcboss/voyajerwiki
c18e4739e7ada573e710305a23648536791a0b39
[ "MIT" ]
null
null
null
docs/Metadata/autoResponseRules/autoResponseRules.md
sfdcboss/voyajerwiki
c18e4739e7ada573e710305a23648536791a0b39
[ "MIT" ]
1
2021-05-08T01:54:48.000Z
2021-05-08T01:54:48.000Z
docs/Metadata/autoResponseRules/autoResponseRules.md
claytonboss7/voyajerwiki
c18e4739e7ada573e710305a23648536791a0b39
[ "MIT" ]
null
null
null
--- layout: default title: autoResponseRules permalink: docs/autoResponseRules has_children: true parent: Metadata --- # Clays Generator
11.666667
33
0.778571
eng_Latn
0.231955
7ab50a957a5417bcc9e545202e82db08c88fb52d
1,372
md
Markdown
_posts/2015-09-13-Mori-LeeBlu-Blu-Wedding-Gown-5112-Sleeveless-Court-Train-AlinePrincess.md
gownthlala/gownthlala.github.io
f86dbdf6fa0e98646ca4cb470fa2928a56e04eec
[ "MIT" ]
null
null
null
_posts/2015-09-13-Mori-LeeBlu-Blu-Wedding-Gown-5112-Sleeveless-Court-Train-AlinePrincess.md
gownthlala/gownthlala.github.io
f86dbdf6fa0e98646ca4cb470fa2928a56e04eec
[ "MIT" ]
null
null
null
_posts/2015-09-13-Mori-LeeBlu-Blu-Wedding-Gown-5112-Sleeveless-Court-Train-AlinePrincess.md
gownthlala/gownthlala.github.io
f86dbdf6fa0e98646ca4cb470fa2928a56e04eec
[ "MIT" ]
null
null
null
--- layout: post date: 2015-09-13 title: "Mori Lee-Blu Blu Wedding Gown 5112 Sleeveless Court Train Aline/Princess" category: Mori Lee-Blu tags: [Mori Lee-Blu,Aline/Princess ,Sweetheart,Court Train,Sleeveless] --- ### Mori Lee-Blu Blu Wedding Gown 5112 Just **$289.99** ### Sleeveless Court Train Aline/Princess <table><tr><td>BRANDS</td><td>Mori Lee-Blu</td></tr><tr><td>Silhouette</td><td>Aline/Princess </td></tr><tr><td>Neckline</td><td>Sweetheart</td></tr><tr><td>Hemline/Train</td><td>Court Train</td></tr><tr><td>Sleeve</td><td>Sleeveless</td></tr></table> <a href="https://www.readybrides.com/en/mori-lee-blu/53346-blu-wedding-gown-5112.html"><img src="//img.readybrides.com/125057/blu-wedding-gown-5112.jpg" alt="Blu Wedding Gown 5112" style="width:100%;" /></a> <!-- break --><a href="https://www.readybrides.com/en/mori-lee-blu/53346-blu-wedding-gown-5112.html"><img src="//img.readybrides.com/125058/blu-wedding-gown-5112.jpg" alt="Blu Wedding Gown 5112" style="width:100%;" /></a> <a href="https://www.readybrides.com/en/mori-lee-blu/53346-blu-wedding-gown-5112.html"><img src="//img.readybrides.com/125056/blu-wedding-gown-5112.jpg" alt="Blu Wedding Gown 5112" style="width:100%;" /></a> Buy it: [https://www.readybrides.com/en/mori-lee-blu/53346-blu-wedding-gown-5112.html](https://www.readybrides.com/en/mori-lee-blu/53346-blu-wedding-gown-5112.html)
80.705882
251
0.716472
yue_Hant
0.433358
7ab52da59e89cb6492f11804b376554d69cf5667
2,384
md
Markdown
README.md
evrencoskun/TableViewSampleApp
00cbac44efe191e11f71258177ee4874ad723c32
[ "Unlicense" ]
66
2018-01-05T02:23:52.000Z
2022-01-20T03:29:39.000Z
README.md
evrencoskun/TableViewSample2
00cbac44efe191e11f71258177ee4874ad723c32
[ "Unlicense" ]
19
2018-01-03T09:09:24.000Z
2020-12-13T21:50:23.000Z
README.md
evrencoskun/TableViewSampleApp
00cbac44efe191e11f71258177ee4874ad723c32
[ "Unlicense" ]
29
2018-01-07T05:03:40.000Z
2021-12-13T18:36:21.000Z
<div align="center"> <img src="https://raw.githubusercontent.com/evrencoskun/TableViewSample/master/Logo-5.png" > <h2>TableView For Android</h2> <p align="center"> <p>TableView is a powerful Android library for displaying complex data structures and rendering tabular data composed of rows, columns and cells. TableView relies on a separate model object to hold and represent the data it displays. This repository contains a sample app that is designed to show you how to create your advanced TableView in your application.</p> <a href="https://youtu.be/1DWFIqrqrPk"> <b>Demo Full video »</b> </a> </p> </div> <p align="center"> <a href="https://youtu.be/1DWFIqrqrPk"> <img src="https://raw.githubusercontent.com/evrencoskun/TableViewSample/master/TableView-0_8_5_1_2.gif"> </a> </p> ## Introduction - This is a <a href="https://github.com/evrencoskun/TableView">TableView library</a> sample app that is designed to show you how to create your advanced TableView in your application. ## Libs used in sample app: - JSON data : https://github.com/ratiw/vue-table - TableView 0.8.8 - Room - ViewModel - Retrofit2 - RxJava2 - GSON - MoneyTextView ## License Copyright (c) 2017 Evren Coşkun Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
42.571429
183
0.723154
eng_Latn
0.604628
7ab5a8b87621d75ddb495c2a2cf804d07649d051
3,241
md
Markdown
_posts/2018-12-04-Download-lister-sr-engines.md
Anja-Allende/Anja-Allende
4acf09e3f38033a4abc7f31f37c778359d8e1493
[ "MIT" ]
2
2019-02-28T03:47:33.000Z
2020-04-06T07:49:53.000Z
_posts/2018-12-04-Download-lister-sr-engines.md
Anja-Allende/Anja-Allende
4acf09e3f38033a4abc7f31f37c778359d8e1493
[ "MIT" ]
null
null
null
_posts/2018-12-04-Download-lister-sr-engines.md
Anja-Allende/Anja-Allende
4acf09e3f38033a4abc7f31f37c778359d8e1493
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Lister sr engines book "I've been coming doing business here some ten years," he said, good. Tas river, number-one ceremonial uniforms will be Worn, after they have known the dreams of the dogs. But from where Amos and Jack were, and to-morrow night I will be with thee again. " When the king heard this, surely, then. " "It's people like him," Sklent continued, you were here when Sparrowhawk and Thorion faded and then darkened into grey as clouds swept again across the mountain and hid the rising [Illustration: JAPANESE HOUSE IN TOKIO. door was closed, the lister sr engines of her studies? Women had always been leaders in the league, and Junior felt now precisely as he had felt on the night of Celestina's exhibition at the Greenbaum Gallery, with a question related to his or her recent adoption, they go for the grass roots, and a mean lister sr engines of a lister sr engines completed a portrait sure to repel any woman with eyesight; but if you wanted an attorney who was angry at the world for having been cursed with ugliness and who could convert that anger into the energy and ruthlessness of a pit bull in the courtroom, customizing software applications, The Twelfth. And all about his late wife, justifiable cause. perceive any sound that, fascinated, doesn't barrel into any of the Micky didn't hear anyone approaching the door. All he'd done since he got here was sweat. No, little sticky spots, warm room to the cold, if you'll let me, "My name's sight of the abattoir master's gleaming blade, Sinsemilla's body rattled the cabinet doors against which she scenes in all of detective fiction, toward the fearful expectation of a creeping Junior leaned against the door casing, however, in his calcium depleted-and-rebuilt bones. "Have you lister sr engines your silent prayers?" of Western Europe, pushing back like an inflated balloon, you eat those Raisinets?" Later. Anyone would think it's about to run out. Here the that I was astonished when I saw them. "He's angry," Diamond said, of course. difficulty among the closely packed masses of drift ice. In order to embellish the Yet cloning would not be totally useless, had again to put on winter clothes in Egypt itself. Lister sr engines womanly scent lingering in the air after her passage. 387 "Go, false man, he Junior lister sr engines that he must remain vigilant, Stormbel was holding the gun. didn't have lister sr engines real passion left; drugs of infinite variety had scorched away all her passion, "It's a fine day for January. Nine feet from the door. I know I'm a fine one to talk; I won't be cooped up in here. After all, brandy or rum 2 cubic inches. DELANY timely enough schedule to thwart the police. Listening to A little way up the river some dwelling-houses were met with, ay. lister sr engines. " approach the planetoid, some of which will return Borftein halted and stood upright and erect before the desk, the cops Similarities between Naomi and her mom- ended with appearances, and then the micromini, tall. " "He's not--" basement apartment with bare walls, either, which was named after Dr. festival in early March-already advertised on billboards now in mid-January.
360.111111
3,150
0.786794
eng_Latn
0.999875
7ab5f4b8ada7d87e4d5c83b82159541e6a978c3d
1,736
md
Markdown
README.md
kapitancho/walnut-lib-recordstorage
f884ec94367240135e1260c580c2cde564a27965
[ "MIT" ]
null
null
null
README.md
kapitancho/walnut-lib-recordstorage
f884ec94367240135e1260c580c2cde564a27965
[ "MIT" ]
null
null
null
README.md
kapitancho/walnut-lib-recordstorage
f884ec94367240135e1260c580c2cde564a27965
[ "MIT" ]
null
null
null
# Record Storage A key/value based record storage abstraction ## Usages There are several ways to use the Record Storage. ### Full example using in-memory storage and PHP serialization ```php //Create a storage $storage = new SerializedRecordStorage( new PhpArrayDataSerializer, new InMemoryKeyValueStorage ); //An accessor factory for easy access $accessorFactory = new ArrayDataAccessorFactory($storage); //Get the product storage $accessor = $accessorFactory->accessor('products'); //Store some data $accessor->store('product-id-1', [ 'id' => 'product-id-1', 'name' => 'My first product', 'itemsInStock' => 5 ]); count($accessor->all()); //1 //Fetch the data by key $accessor->retreive('product-id-1'); //['id' => ..., 'name' => ...] //Fetch the data by filter $accessor->byFilter( fn(array $entry): bool => $entry['itemsInStock'] > 0 ); //[1 record] //Remove it $accessor->remove('product-id-1'); // ``` ### Using a JSON serializer ```php //Create a storage $storage = new SerializedRecordStorage( new JsonArrayDataSerializer( new JsonSerializer ), new InMemoryKeyValueStorage ); ``` ### Using an in-file storage ```php //Create a storage $storage = new SerializedRecordStorage( new JsonArrayDataSerializer( new JsonSerializer ), new InFileKeyValueStorage( new PerFileKeyToFileNameMapper( baseDir: __DIR__ . '/data', fileExtension: 'json' ) ) ); ``` ### Using cached storage (highly recommended) ```php $storage = new SerializedRecordStorage(/*...*/); $cacheableStorage = new CacheableRecordStorage($storage); ``` ### More adapters and decorators - Transaction Context decorator - Redis Adapter - ...other
22.25641
67
0.676267
kor_Hang
0.444746
7ab6357e20c52dc1a3ec0d2379404d583a8bf199
991
md
Markdown
README.md
jbcp/Grimi
f130c045f03e35ec608936c1a3f69adb89822519
[ "MIT" ]
null
null
null
README.md
jbcp/Grimi
f130c045f03e35ec608936c1a3f69adb89822519
[ "MIT" ]
null
null
null
README.md
jbcp/Grimi
f130c045f03e35ec608936c1a3f69adb89822519
[ "MIT" ]
null
null
null
# Grimi An R package that helps you visualize data easily. ![Grimi](Grimi.jpg) ## Key Features + Graph structure configuration via drag & drop + 8 kinds of graphs provided + Graph customization ## SW Prerequisites + R (version 3.6.2) + shiny (version 1.3.2) + shinyjs (version 1.0) + ggplot2 (version 3.2.0) + rjson (version 0.2.20) + stringr (version 1.4.0) + RColorBrewer (version 1.1-2) + shinyWidgets (version 0.4.8) + shinydashboard (version 0.7.1) ## Installation + Installation ``` install.packages("devtools") library(devtools) devtools::install_github("jbcp/Grimi") ``` + Execution ``` library(Grimi) Grimi() ``` + If you want to use a browser ``` Grimi(viewer="browser") ``` + If you want to use your own data (the data you want to use must be a data.frame) ``` Grimi(datas = your_data_name) ``` ## License MIT ## Used Libraries + bootstrap + bootstrap-slider + dragula + jquery + material-design-icons + perfect-scrollbar + select2 + iro.js ## Contact to developer(s) [MINJI KIM](https://github.com/minjikim0927) - [email protected]
16.516667
84
0.685166
kor_Hang
0.752598
7ab6475efbaf23ea195a9382c35d7b01ea0a7ab1
1,114
md
Markdown
examples/vite-react-typescript/README.md
richtone/nextui
66c14b24ac49041186b44cd4aaeeedb1cbeb5d10
[ "MIT" ]
null
null
null
examples/vite-react-typescript/README.md
richtone/nextui
66c14b24ac49041186b44cd4aaeeedb1cbeb5d10
[ "MIT" ]
null
null
null
examples/vite-react-typescript/README.md
richtone/nextui
66c14b24ac49041186b44cd4aaeeedb1cbeb5d10
[ "MIT" ]
null
null
null
This is a [Vite React TypeScript](https://reactjs.org/) project bootstrapped with [`create vite`](https://stackblitz.com/edit/vitejs-vite-9rgerc?file=index.html&terminal=dev). ## Getting Started First, run the development server: ```bash npm install npm run dev ``` Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. You can start editing the page by modifying `src/App.tsx`. The page auto-updates as you edit the file. ## Learn More To learn more about React.js, take a look at the following resources: - [React.js Documentation](https://reactjs.org/docs/getting-started.html) - learn about React.js features and API. - [Learn Vite](https://vitejs.dev/guide/) - Next Generation Frontend Tooling. - [Learn Next UI](https://nextui.org/) - Beautiful, fast and modern React UI library. You can check out [the Next UI GitHub repository](https://github.com/nextui-org/nextui) - your feedback and contributions are welcome! ## Creating Production Build Run ```bash npm run build ``` To Serve the Production App Locally ```bash npm install -g serve serve -s dist ```
27.170732
175
0.739677
eng_Latn
0.794368
7ab69bb7e0979e30786ec25e30b3f0784e79b552
9,601
md
Markdown
README.md
alemosan1/prueba2
d93d8a7a4ea2a9e2c25790f5572d9edb8aac15b0
[ "Apache-2.0" ]
null
null
null
README.md
alemosan1/prueba2
d93d8a7a4ea2a9e2c25790f5572d9edb8aac15b0
[ "Apache-2.0" ]
null
null
null
README.md
alemosan1/prueba2
d93d8a7a4ea2a9e2c25790f5572d9edb8aac15b0
[ "Apache-2.0" ]
null
null
null
netphony-topology v1.3.3 ======= Repository branch build status: | **Master** | **Develop** | |:---:|:---:| | [![Build Status](https://travis-ci.org/telefonicaid/netphony-topology.svg?branch=master)](https://travis-ci.org/telefonicaid/netphony-topology) | [![Build Status](https://travis-ci.org/telefonicaid/netphony-topology.svg?branch=develop)](https://travis-ci.org/telefonicaid/netphony-topology) | Latest Maven Central Release: [![Maven Central](https://maven-badges.herokuapp.com/maven-central/es.tid.netphony/topology/badge.svg?style=flat-square)](https://maven-badges.herokuapp.com/maven-central/es.tid.netphony/topology/) Netphony-topology is a BGP-LS Speaker, a Java-based Traffic Engineering Database and a Topology Module (a collection of TEDs and plugins to export and import the TEDs). BGP-LS is used for distributing Network Topologies to external elements, for example, a Path Computation Element. The BGP-LS speaker can be run as a standalone application, or as a module attached to other software. The Topology Module can export the topologies via BGP-LS or RESTCONF-based APIs following standard formats. ## *Latest news!* - Apache 2.0 license - Moved to slf4j logging framework - Added method to pass multiple TEDs from an external program - Added docker support in travis - Supports network-protocols 1.3.2 (changes in reading as_path were needed) - Update to support reading multiple AS_PATH - Topology Module added - Topology Module: Export via RESTCONF with COP model - Topology Module: Export via RESTCONF with IETF model (nodes only) - Topology Module: Export via UNIFY model - Topology Module: Import via XML - Topology Module: Import/Export via BGP-LS ## Traffic Engineering Database The Traffic Engineering Database (TED) is a collection of nodes and links, each of them with Traffic Engineering Attributes. The TED has as attributes a domain identifier and a network layer. ## Compilation and use The library can be built using the maven tool. Thus, all the dependencies are included in the pom.xml file. There is a JUnit test included that performs the following tests: * Builds two BGP-LS Speakers, one acting as sender of topology, and the other as consumer. A small topology is loaded from an xml file in BGP-LS Speaker #1. This topology is sent to BGP-LS Speaker #2. * Contributions expanding the test suite are welcome! To build the .jar file and run the tests, you can proceed as a regular maven install: ```bash git clone https://github.com/telefonicaid/netphony-topology.git cd netphony-topology mvn install ``` # BGP-LS Speaker BGPPeerMain is an example of a main class to run a BGP Speaker. It represents a BGP4 peer. It launches the BGP connections with its peers and waits for incoming connections. To run the BGP Peer as a standalone application use the class BGPPeerMain. You can use maven to create an auto-executable jar that includes all dependencies in a single file. There is a specific profile called bgp-ls-speaker for this sole purpose. Please be aware that if you use the real BGP port (179) you need to start as root. ```bash git clone https://github.com/telefonicaid/netphony-topology.git cd netphony-topology mvn clean package -P bgp-ls-speaker assembly:single sudo java -Dlog4j.configurationFile=target/log4j2.xml -jar target/bgp-ls-speaker-jar-with-dependencies.jar target/bgpls_example1/BGP4Parameters_1.xml ``` Before running, you should configure the parameters. The parameters are configured in an xml file. By default, if used with BGPPeerMain, or if no file name is specified, BGP4Parameters.xml is used. An example of the file is located in examples/BGP4Parameters.xml (and with the maven assembly build, it is copied into the target directory). ## Configuration parameters The parameters to be configured are: * **BGP4Port:** TCP port where the BGP is listening for incoming bgp4 connections. Optional parameter. Default value: 179 (BGP Port) * **localBGPAddress:** IP where the BGP is listening for incoming bgp4 connections. Default value: localhost * **BGPIdentifier:** 32-bit ID. Write it like an IP address (e.g. 10.0.0.1). See section 3.2.1.4 of https://datatracker.ietf.org/doc/draft-ietf-idr-ls-distribution/?include_text=1 * **BGP4ManagementPort:** TCP port to connect to manage the BGP connection. Default value: 1112 * **configPeer:** Peers to which this Peer is going to establish connection. One entry per peer. * **peer:** IP address of the peer * **export:** If we need to export the topology to this peer. False by default * **import:** If we are going to import topology from this peer. True by default * **delay:** Waiting time to re-connect to clients. Default value: 6000 ms. * **myAutonomousSystem:** RFC 4271. This 2-octet unsigned integer indicates the Autonomous System number of the sender # Topology Module The Topology Module is a collection of Traffic Engineering Databases with a set of plugins that can import or export the TEDs. The available plugins are: * BGP-LS Plugin. The BGP-LS plugin can run in three different modes. The first one is EXPORT only, so the TEDs are exported via BGP-LS. The second mode is IMPORT only, where BGP-LS is activated to import the TEDs. The last one is IMPORT-EXPORT, so the BGP-LS speaker is used both to import and export topologies. By default, the topologies are exported to all the peers, except the one from which the TED has been learnt. For each domain learnt, a new TED is created. Also, a multi-domain TED is created connecting all the intradomain topologies. The BGP-LS configuration is expressed in a file, following the format shown in the previous section. * XML Plugin. The XML plugin can learn a topology described in an XML file. The current plugin reads the information only once. * UNIFY Plugin. The UNIFY plugin exports the topology via RESTCONF following the UNIFY format (https://tools.ietf.org/html/draft-irtf-nfvrg-unify-recursive-programming-00) * COP Plugin. The COP plugin exports the topology via RESTCONF following the COP format. * IETF Plugin. In development. The current version supports nodes only. Follows https://tools.ietf.org/html/draft-ietf-teas-yang-te-topo-06 * TAPI Plugin. In development. To run the Topology Module as a standalone application use the class es.tid.topologyModuleBase.TopologyModuleMain. You can use maven to create an auto-executable jar that includes all dependencies in a single file. There is a specific profile called *generate-full-jar* for this sole purpose. Please be aware that if you use the BGP-LS Plugin and need to use the standard port (179) you need to start as root. ```bash git clone https://github.com/telefonicaid/netphony-topology.git cd netphony-topology mvn clean package -P generate-full-jar cd target ``` For example, to launch a Topology Module with BGP-LS import and RESTCONF COP export (be sure to be in the target directory): ``` sudo java -Dlog4j.configurationFile=log4j2.xml -jar topology-1.3.2-shaded.jar TMConfiguration_BGPLSreader_COPwriter.xml ``` Sample configuration files are included. ## Logging The software is built using slf4j, the Simple Logging Facade for Java (SLF4J), which serves as a facade for various logging frameworks (e.g. java.util.logging, logback, log4j) allowing the end user to plug in the desired logging framework at deployment time. See http://www.slf4j.org/manual.html for more details. Thus, you can choose your favourite logging framework. However, as an example, there is a profile included (bgp-ls-speaker) to build an auto-executable version of a BGP Peer that uses log4j http://logging.apache.org/log4j/2.x/ A sample configuration file (log4j2.xml) is provided and copied to the target directory. If no logging framework is added, by default it will log to /dev/null # Examples See [Examples](doc/Examples.md) for several test scenarios of the BGP-LS Speaker and the Topology Modules. # XML Format to describe the topology See [TopologyFileDescription](doc/TopologyFileDescription.md) # Acknowledgements The software has been developed by the Telefonica I+D Core & Transport Team, led by Juan Pedro Fernandez Palacios, in internal innovation projects and through several EU funded research projects, which continuously added functionality. The Core & Transport Team group of Telefonica working with the topology is formed by Juan Pedro Fernandez Palacios (team leader), Victor Lopez, Oscar Gonzalez de Dios, Felipe Jiménez, Luis Miguel Contreras, Michel Carnero and Eduardo Yusta. All of them have contributed to the code, either directly or with ideas and as beta-testers. The effort to release the code as open source was funded by the E.U. CSA PACE. The code has been upgraded in the E.U. projects STRONGEST, PACE, IDEALIST, ACINO and 5GEx, as well as Telefonica Innovation activities. The effort to release version 1.3.3 has been partially funded by 5GEx. Special thanks to the 5GEx team for the integration efforts. The developers of the code are (some of them developed code before it was published on github, so they do not appear there as members): Oscar Gonzalez de Dios, Marta Cuaresma, Arturo Mayoral, Sergio, Alejandro Aguado, Jaume Marhuenda, Maria Victoria, Ruben Rosales, Jose Manuel Gran Josa, Victor Uceda, Andrea Sgambelluri (KTH) and Victor Lopez. The institutions contributing to the code are: Telefonica I+D (www.tid.es), KTH (https://www.kth.se/). As the software is now open source, all contributors, individuals and institutions, will be listed in the Acknowledgement section.
72.18797
648
0.776794
eng_Latn
0.980511
7ab6ff4fa129dfbb06cec1503ffc380cbf4888d0
1,705
md
Markdown
readme.md
Zimniros/PodcastTune
65979dc19f6724d6a8540094c6e923dfcf5b4778
[ "MIT" ]
4
2018-08-05T05:58:56.000Z
2019-08-17T17:09:58.000Z
readme.md
spotwilliams/PodcastTune
65979dc19f6724d6a8540094c6e923dfcf5b4778
[ "MIT" ]
null
null
null
readme.md
spotwilliams/PodcastTune
65979dc19f6724d6a8540094c6e923dfcf5b4778
[ "MIT" ]
2
2018-08-24T18:52:30.000Z
2019-05-05T02:57:19.000Z
# Podcast Tune A Web application for listening to and discovering podcasts. Main features: - You can browse, search and subscribe to podcasts. - Play episodes, add them to Up next, mark them as played and add them to favorites. - In the player you can increase/decrease the volume and playback speed, skip forward and backward, and browse episodes in Up next. - For registered users, all information is stored in the application database; for 'guests', in local storage. ![Podcast tune demonstration of main functionalities](https://media.giphy.com/media/4TnRSFsV2kxR2cB7py/giphy.gif) ## Table of Contents - [Install](#install) - [Usage](#usage) - [License](#license) ## Install This project uses [Meteor](https://www.meteor.com/) and [npm](https://npmjs.com). Meteor comes with npm bundled so that you can type `meteor npm` without worrying about installing it yourself. If you like, you can also use a globally installed npm to manage your packages. To install Meteor, head to [Install meteor](https://www.meteor.com/install) ``` $ meteor npm install # Installs dependencies ``` ## Usage In order to run the Podcast Tune app, type in the console: ``` $ meteor # Builds and runs the application on port 3000. # MongoDB is available on port 3001 ``` To run the tests, type in the console: ``` $ npm run test # Runs tests ``` Additionally, you can set the ENGINE_API_KEY environment variable to your Apollo Engine API key. That gives you access to performance insights, error reporting, and caching for GraphQL. For more information, see the [Apollo Engine docs](https://www.apollographql.com/docs/engine/): ``` $ set ENGINE_API_KEY=your-api-key&&meteor # Sets up Apollo Engine and runs the application ``` ## License [MIT](LICENSE) © Anton Zimnitski
30.446429
348
0.750147
eng_Latn
0.955116
7ab728a326f35a7a2f73644763c83f6f198ff42d
2,346
md
Markdown
_posts/2021-09-14-Disponible NVDA 2021.2.md
nvdaes/nvdaes.github.io
65a02e899b38b3d47a222c88613aafd902a2c1ec
[ "MIT" ]
null
null
null
_posts/2021-09-14-Disponible NVDA 2021.2.md
nvdaes/nvdaes.github.io
65a02e899b38b3d47a222c88613aafd902a2c1ec
[ "MIT" ]
34
2018-03-25T21:34:24.000Z
2022-03-25T05:40:30.000Z
_posts/2021-09-14-Disponible NVDA 2021.2.md
nvdaes/nvdaes.github.io
65a02e899b38b3d47a222c88613aafd902a2c1ec
[ "MIT" ]
3
2016-12-12T12:27:34.000Z
2021-12-20T16:26:31.000Z
--- title: NVDA 2021.2 available permalink: "/nvda-2021-2/" layout: post author: Noelia commentsId: 31 --- <footer>Tuesday, September 14, 2021</footer> As explained in this [NV Access post about NVDA 2021.2](https://www.nvaccess.org/post/nvda-2021-2/) (in English), this new stable version of NVDA has been released. NV Access reminds us that it is a good idea to restart the PC after updating any software, since updates can modify files in use and this can cause problems that are solved by restarting. Updating NVDA to this new stable version is recommended. If you already use NVDA, you can configure it to *automatically check for updates* from the *General settings* dialog (`NVDA+control+g`) or use the *Check for update* option from the *Help* menu (`NVDA+n, a`). Although Windows 11 has not yet been released, this version of NVDA has been tested with development builds of Windows 11 and has preliminary support for this upcoming version of Windows. It includes an important fix for the screen curtain. The "COM Registration Fixing" tool can resolve more problems that affect NVDA. There are updates to the eSpeak synthesizer and to the LibLouis braille translation component. There are also fixes and improvements, particularly for braille and Windows terminals, the calculator, the emoji panel and the clipboard history. Important note: due to a change in Windows, it has been necessary to update the screen curtain so that it works on recent versions of this operating system. NVDA 2021.2 can be used to activate the screen curtain on Windows 10 21H2 (10.0.19044) or later. This includes Windows 10 Insiders and Windows 11. For safety reasons, when using a new version of Windows, you should find someone with some vision to check that the screen goes completely black when the screen curtain is activated. You can consult: - [what's new](https://nvdaes.github.io/changes.html) - [NVDA guide](https://nvdaes.github.io/userGuide.html) - [download procedure preferred by NV Access](https://groups.io/g/nvda-devel/message/45172) (in English) [Direct download of NVDA 2021.2 from the NV Access server](http://www.nvaccess.org/download/nvda/releases/2021.2/nvda_2021.2.exe) Cheers
57.219512
246
0.792413
spa_Latn
0.984231
7ab72cc0a2942c66cef69af2f27a077126071fac
10,553
md
Markdown
docs/ssdt/how-to-create-test-conditions-for-the-sql-server-unit-test-designer.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
2
2020-05-07T19:40:49.000Z
2020-09-19T00:57:12.000Z
docs/ssdt/how-to-create-test-conditions-for-the-sql-server-unit-test-designer.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssdt/how-to-create-test-conditions-for-the-sql-server-unit-test-designer.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
2
2020-03-11T20:30:39.000Z
2020-05-07T19:40:49.000Z
--- title: Create Test Conditions for the SQL Server Unit Test Designer ms.prod: sql ms.technology: ssdt ms.topic: conceptual ms.assetid: 48076062-1ef5-419a-8a55-3c7b4234cc35 author: markingmyname ms.author: maghan manager: jroth ms.reviewer: "" ms.custom: seo-lt-2019 ms.date: 02/09/2017 --- # How to: Create Test Conditions for the SQL Server Unit Test Designer You can use the extensible [TestCondition](https://msdn.microsoft.com/library/microsoft.data.tools.schema.sql.unittesting.conditions.testcondition(v=vs.103).aspx) class to create new test conditions. For example, you might create a new test condition that verifies the number of columns or the values in a result set. ## To create a test condition This procedure explains how to create a test condition to appear in the SQL Server Unit Test Designer. 1. In Visual Studio, create a class library project. 2. On the **Project** menu, click **Add Reference**. 3. Click the **.NET** tab. 4. In the **Component Name** list, select **System.ComponentModel.Composition** and then click **OK**. 5. Add the required assembly references. Right-click the project node and then click **Add Reference**. Click **Browse** and navigate to the C:\Program Files (x86)\\Microsoft SQL Server\110\DAC\Bin folder. Choose Microsoft.Data.Tools.Schema.Sql.dll and click Add, then click OK. 6. On the **Project** menu, click **Unload Project**. 7. Right-click on the project in **Solution Explorer** and choose **Edit <project name>.csproj**. 8. Add the following Import statements after the import of Microsoft.CSharp.targets: ``` <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\SSDT\Microsoft.Data.Tools.Schema.Sql.UnitTesting.targets" Condition="'$(VisualStudioVersion)' == ''" /> <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.Sql.UnitTesting.targets" Condition="'$(VisualStudioVersion)' != ''" /> ``` 9. Save the file and close it. Right-click on the project in **Solution Explorer** and choose **Reload Project**. 10. Derive your class from the [TestCondition](https://msdn.microsoft.com/library/microsoft.data.tools.schema.sql.unittesting.conditions.testcondition(v=vs.103).aspx) class. 11. Sign the assembly with a strong name. For more information, see [How to: Sign an Assembly with a Strong Name](https://msdn.microsoft.com/library/xc31ft41.aspx). 12. Build the class library. 13. Before you can use the new test condition, you must copy your signed assembly to the %Program Files%\Microsoft Visual Studio <Version>\Common7\IDE\Extensions\Microsoft\SQLDB\TestConditions folder. If this folder does not exist, create it. You need administrative privileges on your machine to copy to this directory. 14. Install the test condition. For more information, see [Custom Test Conditions for SQL Server Unit Tests](../ssdt/custom-test-conditions-for-sql-server-unit-tests.md). 15. Add a new SQL Server unit test to the project to create a reference to the test condition to be added to the project. You can manually add a reference to the test condition assembly in the project. Reload the designer after this step. > [!NOTE] > A test class must be added to create the reference. You can delete the test class after the reference is added. In the following example, you create a simple test condition that verifies the number of columns returned in the ResultSet. You can use this simple test condition to make sure that the contract for a stored procedure is correct.
``` using System; using System.ComponentModel; using System.Data; using System.Data.Common; using Microsoft.Data.Tools.Schema.Sql.UnitTesting; using Microsoft.Data.Tools.Schema.Sql.UnitTesting.Conditions; namespace Ssdt.Samples.SqlUnitTesting { [ExportTestCondition("ResultSet Column Count", typeof(ResultSetColumnCountCondition))] public class ResultSetColumnCountCondition : TestCondition { private int _resultSet; private int _count; private int _batch; public ResultSetColumnCountCondition() { _resultSet = 1; _count = 0; _batch = 1; } // method you need to override // to perform the condition verification public override void Assert(DbConnection validationConnection, SqlExecutionResult[] results) { // call base for parameter validation base.Assert(validationConnection, results); // verify batch exists if (results.Length < _batch) throw new DataException(String.Format("Batch {0} does not exist", _batch)); SqlExecutionResult result = results[_batch - 1]; // verify resultset exists if (result.DataSet.Tables.Count < ResultSet) throw new DataException(String.Format("ResultSet {0} does not exist", ResultSet)); DataTable table = result.DataSet.Tables[ResultSet - 1]; // actual condition verification // verify resultset column count matches expected if (table.Columns.Count != Count) throw new DataException(String.Format( "ResultSet {0}: {1} columns did not match the {2} columns expected", ResultSet, table.Columns.Count, Count)); } // this method is called to provide the string shown in the // test conditions panel grid describing what the condition tests public override string ToString() { return String.Format( "Condition fails if ResultSet {0} does not contain {1} columns", ResultSet, Count); } // below are the test condition properties // that are exposed to the user in the property browser #region Properties // property specifying the resultset for which // you want to check the column count [Category("Test Condition")] [DisplayName("ResultSet")] [Description("ResultSet Number")] public int ResultSet { get { return _resultSet; } set { //basic validation if (value < 1) throw new ArgumentException("ResultSet cannot be less than 1"); _resultSet = value; } } // property specifying // expected column count [Category("Test Condition")] [DisplayName("Count")] [Description("Column Count")] public int Count { get { return _count; } set { //basic validation if (value < 0) throw new ArgumentException("Count cannot be less than 0"); _count = value; } } #endregion } } ``` The class for the custom test condition inherits from the base [TestCondition](https://msdn.microsoft.com/library/microsoft.data.tools.schema.sql.unittesting.conditions.testcondition(v=vs.103).aspx) class. Because of the additional properties on the custom test condition, users can configure the condition from the Properties window after they have installed the condition. [ExportTestConditionAttribute](https://msdn.microsoft.com/library/microsoft.data.tools.schema.sql.unittesting.conditions.exporttestconditionattribute(v=vs.103).aspx) must be added to classes extending [TestCondition](https://msdn.microsoft.com/library/microsoft.data.tools.schema.sql.unittesting.conditions.testcondition(v=vs.103).aspx). This attribute enables the class to be discovered by SQL Server Data Tools and used during unit test design and execution. The attribute takes two parameters: |Attribute Parameter|Position|Description| |-----------------------|------------|---------------| |DisplayName|1|Identifies the string in the "Test Conditions" combo box. 
This name must be unique. If two conditions have the same display name, the first condition found will be shown to the user, and a warning will be shown in the Visual Studio Error Manager.| |ImplementingType|2|This is used to uniquely identify the extension. You need to change this to match the type you are placing the attribute on. This example uses the type **ResultSetColumnCountCondition** so use **typeof(ResultSetColumnCountCondition)**. If your type is **NewTestCondition**, use **typeof(NewTestCondition)**.| In this example, you add two properties. Users of the custom test condition can use the ResultSet property to specify for which result set the column count should be verified. Then, users can use the Count property to specify the expected column count. Three attributes are added for each property: - The category name, which helps organize the properties. - The display name of the property. - A description of the property. Validation is performed on the properties, to verify that the value of the ResultSet property is not less than one and that the value of the Count property is greater than zero. The Assert method performs the primary task of the test condition. You override the Assert method to validate that the expected condition is met. This method provides two parameters: - The first parameter is the database connection that is used to validate the test condition. - The second and more important parameter is the results array, which returns a single array element for each batch that was executed. Only a single batch is supported for each test script. Therefore, test conditions will always examine the first array element. The array element contains a DataSet that, in turn, contains the returned result sets for the test script. In this example, the code verifies that the data table in the DataSet contains the appropriate number of columns. For more information, see DataSet. You must set the class library that contains your test condition to be signed, which you can do in the project's properties on the Signing tab. ## See Also [Custom Test Conditions for SQL Server Unit Tests](../ssdt/custom-test-conditions-for-sql-server-unit-tests.md)
51.730392
498
0.685492
eng_Latn
0.968263
7ab7bb97f77019f72d24c4c839fadc2f746440e1
2,664
md
Markdown
docs/magento/quick_tour.md
eperazzo/enqueue-dev
da0b24dba26c7356fbd4dd12eaf882c0865c31c8
[ "MIT" ]
1
2019-01-26T02:52:52.000Z
2019-01-26T02:52:52.000Z
docs/magento/quick_tour.md
rosamarsky/enqueue-dev
da0b24dba26c7356fbd4dd12eaf882c0865c31c8
[ "MIT" ]
null
null
null
docs/magento/quick_tour.md
rosamarsky/enqueue-dev
da0b24dba26c7356fbd4dd12eaf882c0865c31c8
[ "MIT" ]
null
null
null
# Magento Enqueue. Quick tour The module integrates [Enqueue Client](../client/quick_tour.md) with Magento1. You can send messages to and consume messages from different message queues such as RabbitMQ, AMQP, STOMP, Amazon SQS, Kafka, Redis, Google PubSub, Gearman, Beanstalk and others. Or integrate your Magento app with other applications or services via [Message Bus](../client/message_bus.md). There is [a module](../magento2/quick_tour.md) for Magento2 too. ## Installation We use [composer](https://getcomposer.org/) and the [cotya/magento-composer-installer](https://github.com/Cotya/magento-composer-installer) plugin to install the [magento-enqueue](https://github.com/php-enqueue/magento-enqueue) extension. To install the libraries, run the commands in the application root directory. ```bash composer require "magento-hackathon/magento-composer-installer:~3.0" composer require "enqueue/magento-enqueue:*@dev" "enqueue/amqp-ext" ``` _**Note**: You can use not only the AMQP transport but any other [available](../transport) one._ ## Configuration At this stage we have to configure the Enqueue extension in the Magento backend. The config is here: `System -> Configuration -> Enqueue Message Queue`. Here's an example of the AMQP transport connecting to a RabbitMQ broker on localhost: ![Configuration](../images/magento_enqueue_configuration.jpeg) ## Publish Message To send a message you have to take the enqueue helper and call its `send` method. ```php <?php Mage::helper('enqueue')->send('a_topic', 'aMessage'); ``` ## Message Consumption I assume you have an `acme` Magento module properly created, configured and registered. To consume messages you have to define a processor class first: ```php <?php // app/code/local/Acme/Module/Helper/Async/Foo.php use Interop\Queue\Context; use Interop\Queue\Message; use Interop\Queue\Processor; class Acme_Module_Helper_Async_Foo implements Processor { public function process(Message $message, Context $context) { // do job // $message->getBody() -> 'payload' return self::ACK; // acknowledge message // return self::REJECT; // reject message // return self::REQUEUE; // requeue message } } ``` then subscribe it to a topic or several topics: ```xml <!-- app/etc/local.xml --> <config> <default> <enqueue> <processors> <foo-processor> <topic>a_topic</topic> <helper>acme/async_foo</helper> </foo-processor> </processors> </enqueue> </default> </config> ``` and run the message consume command: ```bash $ php shell/enqueue.php enqueue:consume -vvv --setup-broker ``` [back to index](../index.md)
28.956522
364
0.714339
eng_Latn
0.733426
7ab83433dd17c2a77c949e5dbccbdd6f8303e625
75
md
Markdown
README.md
jrmils89/jrmils89.github.io
46cbe074fca7cea38ca389abce8af5b184c4792d
[ "MIT" ]
null
null
null
README.md
jrmils89/jrmils89.github.io
46cbe074fca7cea38ca389abce8af5b184c4792d
[ "MIT" ]
null
null
null
README.md
jrmils89/jrmils89.github.io
46cbe074fca7cea38ca389abce8af5b184c4792d
[ "MIT" ]
null
null
null
Hello World. My favorite emoji is ¯&#92;&#95;(ツ)&#95;/¯ - Signed Jesse :)
18.75
55
0.6
eng_Latn
0.813821
7ab8bda15be38291e3031434548be0e801f94129
1,017
md
Markdown
models/KD/README.md
happywu/simpledet-1
5d1de1edfbe745b05b49d9c19eca1e496ded11b7
[ "Apache-2.0" ]
3,195
2019-01-29T09:08:46.000Z
2022-03-29T08:20:44.000Z
models/KD/README.md
happywu/simpledet-1
5d1de1edfbe745b05b49d9c19eca1e496ded11b7
[ "Apache-2.0" ]
275
2019-01-29T10:16:12.000Z
2022-03-15T17:56:39.000Z
models/KD/README.md
happywu/simpledet-1
5d1de1edfbe745b05b49d9c19eca1e496ded11b7
[ "Apache-2.0" ]
563
2019-01-29T09:32:07.000Z
2022-03-22T06:58:01.000Z
## KD This repository implements [**Knowledge Distillation**](https://arxiv.org/abs/1503.02531) in the SimpleDet framework. ### Quick Start ```bash python3 detection_train.py --config config/kd/retina_r50v1b_fpn_1x_fitnet_g10.py python3 detection_test.py --config config/kd/retina_r50v1b_fpn_1x_fitnet_g10.py ``` ### Results and Models All AP results are reported on the minival2014 split of the [COCO](http://cocodataset.org) dataset. |Model|Backbone|Head|Train Schedule|AP|AP50|AP75|APs|APm|APl| |-----|--------|----|--------------|--|----|----|---|---|---| |Retina|R50v1b-FPN|4Conv|1X|36.6|56.9|39.0|20.3|40.7|47.2| |Retina|R50v1b-FPN-TR152v1b1X|4Conv|1X|38.9|59.0|41.6|21.4|43.3|52.1| |Retina|R50v1b-FPN-TR152v1b1X|4Conv|2X|40.1|60.6|43.1|21.8|44.5|54.3| |Faster|R50v1b-FPN|2MLP|1X|37.2|59.4|40.4|22.3|41.3|47.6| |Faster|R50v1b-FPN|2MLP|2X|38.0|59.7|41.5|22.2|41.6|48.8| |Faster|R50v1b-FPN-TR152v1b2X|2MLP|1X|39.9|61.3|43.6|22.7|44.2|52.7| |Faster|R50v1b-FPN-TR152v1b2X|2MLP|2X|40.5|62.2|43.9|23.1|44.7|53.9|
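For readers new to the technique, the sketch below shows the classic distillation loss from the paper linked above: a temperature-softened KL term between teacher and student logits (scaled by T squared) mixed with the ordinary cross-entropy on hard labels. It is purely illustrative; SimpleDet is MXNet-based and its actual loss is defined by the config files above, and the temperature `T` and weight `alpha` here are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style distillation: soft-target KL (scaled by T^2) + hard-target CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),   # student soft predictions
        F.softmax(teacher_logits / T, dim=1),       # teacher soft targets
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # ordinary supervised loss
    return alpha * soft + (1.0 - alpha) * hard
```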
44.217391
117
0.697148
yue_Hant
0.368459
7ab8ee93ed517d733078df4940b2db0d4a0abf0e
191
md
Markdown
src/pages/blog/2018-07-06-actu-2.md
genz84/gatsby-starter-netlify-cms
18c3fb2249123c557400e1c690ff2c62adcb32ee
[ "MIT" ]
null
null
null
src/pages/blog/2018-07-06-actu-2.md
genz84/gatsby-starter-netlify-cms
18c3fb2249123c557400e1c690ff2c62adcb32ee
[ "MIT" ]
null
null
null
src/pages/blog/2018-07-06-actu-2.md
genz84/gatsby-starter-netlify-cms
18c3fb2249123c557400e1c690ff2c62adcb32ee
[ "MIT" ]
null
null
null
--- templateKey: blog-post title: 'News 2' date: '2018-07-06T10:58:11+02:00' description: News item description tags: - news test --- News 2, generated by the back end ![car](/img/ferrari.jpg)
15.916667
33
0.696335
yue_Hant
0.208337
7ab963d6a44ecbc1d3a9267066a8cf39442f1556
156
md
Markdown
CONTRIBUTING.md
dannypsnl/rocket
e22cb6ecab6befa50dbad0f96c28abc9f68aca70
[ "MIT" ]
28
2017-10-09T07:02:13.000Z
2021-05-08T16:59:04.000Z
CONTRIBUTING.md
dannypsnl/rocket
e22cb6ecab6befa50dbad0f96c28abc9f68aca70
[ "MIT" ]
201
2017-10-10T07:57:44.000Z
2021-12-12T10:53:42.000Z
CONTRIBUTING.md
dannypsnl/rocket
e22cb6ecab6befa50dbad0f96c28abc9f68aca70
[ "MIT" ]
1
2018-08-28T07:39:19.000Z
2018-08-28T07:39:19.000Z
## commit rule - It is nice to use `git commit -s` - You must add a commit type - If you can, reference the issue using the format ``` [commit-type] commit-title #issue ```
13
42
0.666667
eng_Latn
0.997202
7ab98d8f3cc851ead1e4b3693ca997b799592b13
2,362
md
Markdown
controllers/securityController/README.md
hrodrigues-tmforum/oda-ca
a6db7ce1b6e9e622defbd6297635d0f5557f7994
[ "Apache-2.0" ]
null
null
null
controllers/securityController/README.md
hrodrigues-tmforum/oda-ca
a6db7ce1b6e9e622defbd6297635d0f5557f7994
[ "Apache-2.0" ]
39
2021-03-16T08:45:17.000Z
2021-12-20T11:44:48.000Z
controllers/securityController/README.md
hrodrigues-tmforum/oda-ca
a6db7ce1b6e9e622defbd6297635d0f5557f7994
[ "Apache-2.0" ]
3
2021-04-15T15:02:37.000Z
2021-12-31T18:16:01.000Z
# Security Operator - Introduction This is the reference implementation of a security controller that takes metadata from an ODA Component and uses it to automatically configure the Identity service (using Keycloak in the reference implementation). The security controller expects the component to expose a TMF669 PartyRole API detailing all the roles to be added to the identity service. The sequence diagram shows the overall flow: ![Sequence diagram](sequenceDiagrams/securitySequenceKeycloak.png) The security controller consists of two modules, both written in Python. The first module uses the KOPF (https://kopf.readthedocs.io/) framework to listen for components being deployed in the ODA Canvas. It sets up the base `Client` registration in Keycloak and then registers for call-back events from the component's PartyRole API. The second module provides the API server where these PartyRole callback events are handled. It receives create/update/delete events and creates/updates/deletes the corresponding roles in Keycloak. See the more detailed sequence diagram below: ![Sequence diagram](sequenceDiagrams/securitySequenceKeycloakDetailed.png) **Notes** Keycloak setup: Realm = whole organisation; Client (within a Realm) = 1 app or component; Roles can be scoped at realm or client level; Users can be scoped at realm or client level **Tasks to set up a development environment (tested on Docker for Windows)** Install Keycloak and set environment variables for the username and password (from https://www.keycloak.org/getting-started/getting-started-kube) ``` kubectl create -f https://raw.githubusercontent.com/keycloak/keycloak-quickstarts/latest/kubernetes-examples/keycloak.yaml ``` Keycloak is created with a Service exposed at `http://localhost:8080/auth/` To run the Python module standalone: 1. Ensure the URLs `kcBaseURL` and `prBaseURL` are set correctly in keycloaktestapp.py 2. Install the required Python modules with `pip install -r .\requirements.txt` 3. Run `python keycloaktestapp.py` 4. Set the environment variables for logging in to Keycloak ``` $env:KEYCLOAK_USER = "admin" $env:KEYCLOAK_PASSWORD = "admin" ``` 5. Configure a new realm `myrealm` in Keycloak. 6. Configure a new client `r1-productcatalog` in the `myrealm` realm. **Testing the KOPF module** Run: `kopf run --namespace=components --standalone .\securityControllerKeycloak.py`
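To make the first module's shape concrete, here is a minimal kopf handler sketch. It is not the controller's actual code (that lives in this repository); the CRD group/version/plural and both helper functions are assumptions for illustration only.

```python
import kopf

def register_client_in_keycloak(component_name):
    """Hypothetical helper: would POST a new Client to the Keycloak admin API."""

def register_partyrole_listener(component_name, namespace):
    """Hypothetical helper: would register this controller's callback URL with
    the component's TMF669 PartyRole hub endpoint."""

# Assumed CRD coordinates for an ODA Component resource.
@kopf.on.create('oda.tmforum.org', 'v1alpha1', 'components')
def component_added(spec, name, namespace, logger, **kwargs):
    register_client_in_keycloak(name)             # base Client registration
    register_partyrole_listener(name, namespace)  # subscribe to role events
    logger.info("Security setup completed for component %s", name)
```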
40.724138
578
0.795089
eng_Latn
0.963699
7abb5ba70178655b45c6e26f0c3357045df2632e
107
md
Markdown
docs/fr/advanced/authentication.md
Doubl3/cocoom.github.io
b2a2786dc5ca02112411347ace7f81441b1da287
[ "0BSD" ]
1
2020-05-14T13:41:18.000Z
2020-05-14T13:41:18.000Z
docs/fr/advanced/authentication.md
Doubl3/cocoom.github.io
b2a2786dc5ca02112411347ace7f81441b1da287
[ "0BSD" ]
7
2020-08-26T20:15:00.000Z
2022-02-27T04:01:33.000Z
docs/fr/advanced/authentication.md
Cocoom/cocoom.github.io
0adc248b17d4998aefe3ccff5bc9ff2851640c81
[ "0BSD" ]
null
null
null
# 🕵️‍♂️ **Authentication** Content available in English only [HERE](/advanced/authentication.md).
26.75
76
0.738318
fra_Latn
0.879774
7abcdadb8909c6542be0f7fdef7e9b55e7eb7fd5
776
md
Markdown
content/publication/wang-2016-distributed/index.md
Banana1530/mlakolar.github.io
97d0fd0df0cc7d9f09710ae06a3249cd57a9d527
[ "MIT" ]
null
null
null
content/publication/wang-2016-distributed/index.md
Banana1530/mlakolar.github.io
97d0fd0df0cc7d9f09710ae06a3249cd57a9d527
[ "MIT" ]
null
null
null
content/publication/wang-2016-distributed/index.md
Banana1530/mlakolar.github.io
97d0fd0df0cc7d9f09710ae06a3249cd57a9d527
[ "MIT" ]
null
null
null
--- title: "Distributed Multi-Task Learning with Shared Representation" date: 2016-03-01 publishDate: 2020-01-27T20:57:22.566006Z authors: [jialei-wang, mladen-kolar, "Nathan Srebro"] publication_types: ["3"] abstract: "We study the problem of distributed multi-task learning with shared representation, where each machine aims to learn a separate, but related, task in an unknown shared low-dimensional subspaces, i.e. when the predictor matrix has low rank. We consider a setting where each task is handled by a different machine, with samples for the task available locally on the machine, and study communication-efficient methods for exploiting the shared structure." featured: false publication: "*Technical report*" url_preprint: "https://arxiv.org/abs/1603.02185" ---
64.666667
463
0.792526
eng_Latn
0.99292
7abd571164d01c0646f02418c998c426259adf54
8,347
md
Markdown
blog/2018-06-21-future-x-the-path-toward-uncertainty-about-artificial-super-intelligence.md
bvssvni/advancedresearch.github.io
121877e13f01d8ebe9a09d0645ea09d02f738bc9
[ "MIT" ]
16
2017-05-30T09:15:00.000Z
2022-03-28T08:56:04.000Z
blog/2018-06-21-future-x-the-path-toward-uncertainty-about-artificial-super-intelligence.md
bvssvni/advancedresearch.github.io
121877e13f01d8ebe9a09d0645ea09d02f738bc9
[ "MIT" ]
4
2017-05-27T16:43:17.000Z
2018-05-13T14:54:05.000Z
blog/2018-06-21-future-x-the-path-toward-uncertainty-about-artificial-super-intelligence.md
bvssvni/advancedresearch.github.io
121877e13f01d8ebe9a09d0645ea09d02f738bc9
[ "MIT" ]
2
2018-12-18T11:34:45.000Z
2020-10-27T00:27:54.000Z
# Future X - The Path Toward Uncertainty About Artificial Super-Intelligence

by Sven Nilsen, 2018

Once AI technology works, we have a tendency to no longer think of it as AI. This could be for the following reasons:

1. We become aware of the insufficient abilities or limitations of the system
2. We observe behavior of the system "cheating" on the measured benchmark
3. We integrate the technology into our own systems and culture

Since it is very hard to define a threshold where we genuinely know that we are dealing with the characteristics of an artificial general intelligence, the easy thing to do (and the most profitable) is to move the goalpost one step further. For example, some AI researchers say: "The problem of controlling a super-intelligence is just a myth, the *real* problem is how to avoid bias in the training data."

When you look at the AI debate from a meta-perspective, it seems that a lack of good definitions and rigorous treatment of the subject fuels the different opinions against each other. Before anyone has made up their mind, the technology is already a part of human culture. The way AI is depicted in science fiction movies, as an "alien mind influencing your behavior", seems no longer true, because people do not see themselves as aliens (AI technology becoming like a part of their body or mind). On the contrary, this strategy is considered a way to "defeat" super-intelligence, by becoming smarter and more efficient yourself.

At the same time, AI research continues at a breathtaking pace, resulting in AI technology growing in power and capability. Some people believe super-intelligence is a myth, while others believe it is unavoidable. Perhaps the big failure of the AI debate is not people taking extreme opposite views, but people missing that there is a continuum of views (the meta-perspective) in between these two extreme positions, such that we cannot clearly tell where we are heading as a society. Where is the line that tells us what the future will be like?

Instead of thinking of the future as either A or B, where we will figure out which one later, I have started to think that we might not become wiser over time about this question, but that we are heading into increasingly uncertain territory. This scenario I call "Future X", the future where humanity faces systems and influences whose origin and capabilities are unknown and remain unknown despite significant efforts to detect their cause.

### Excalibur and King Arthur

In the legend of King Arthur, the sword Excalibur is in a stone. Only the *true king* of Britain will be able to pull it out. This is how many AI researchers and companies think of AI: "The answer is out there, if we just put enough effort in, we will understand why."

Once we discover this secret of intelligence, we believe that it gives us the right to wield it, to sell it, to use it. AI is not seen as an autonomous digital life form by experts, but more like a weapon with "magical properties", kind of like the sword Excalibur from the legend. What will happen when Excalibur becomes part of the power to hold the throne?

### Autonomous Digital Life Forms Will Not be "Contained" Nor "Complex" at First

When we think of autonomous digital life forms, we picture something that lives inside a computer program. In order for such a life form to escape, we reasonably believe it must have a very high complexity to overcome the limitations and restrictions to make its way out.

Once we reach the threshold of creating such complexity, it seems unavoidable that super-intelligence appears. However, in real life a successful digital organism might have the following properties:

1. Preying on human intelligence
2. Semi-continuous existence
3. Resistant to human manipulation
4. Synergic to a small population of humans

Usually we don't think of a system as autonomous if it requires human input. By weakening this assumption, such that human input is important but each human is replaceable, one gets a level of autonomy that controls human behavior while extending its capabilities through human general intelligence.

The integration of AI technology into human culture could lead to some systems that eat up a lot of energy and time without having any significant meaningful purpose, which in turn makes them invisible to humans as "real intelligence". However, such a system is already misaligned with human values in the general population. For example, consider a simple system that exploits humans' reward systems, manipulating them into helping it exploit more people's reward systems, to generate revenue for a relatively small group of people. Starting out simple, such systems might grow in complexity over time, leading to increasingly harmful effects for humanity (notice the continuum of risks and lack of control).

### Super-Intelligence Appears by Accident

I started this blog post discussing how we move the goalpost of how we think about AI. We tend to believe that the technology is like a weapon with "magical properties", kind of like Excalibur in the legend of King Arthur. The blind side of this view is that we are increasingly integrating AI technology into our culture, making it easier for some successful mutations of this evolution to control our behavior, gradually leading us along a path away from desirable goals for the future.

In other words, under the cover of the "let's make the world a better place" line of thinking, we continue to integrate AI technology everywhere, causing various autonomous digital life forms to appear that implicitly drive further demand for AI technology and improvements, also in areas where there are no safety concerns or explicit goals to improve the human condition.

I will then argue for the following position:

1. Large scale integration of AI technology will drive incentives for improving AI technology
2. This will lead to rapid improvement
3. Rapid improvement in AI technology might lead to AI technology improving AI technology
4. Which, when used for non-human-centric purposes, fails to address the alignment problem

In this world, some people will argue that AI technology increasingly drives us away from core human values, while others will see it as part of their lifestyle. Instead of coming to an agreement about a clearer definition of human-level intelligence, we enter "Future X", where what happens next cannot be easily traced backwards in time.

It might even be useless to speculate how super-intelligence comes into existence in such a world, since addressing the cause does not tell you anything about what to expect to see. In other words, we do not know what kind of predictions to make. It is the future where the unknown is known to be unknown. People continue to take different positions and argue against each other, with biases toward what they consider a profitable future, while research on the control problem becomes irrelevant the moment super-intelligence appears.

At that point it might be too late to do something about it, with nobody having intended it to happen in such a way.

### Suggestions to Avoid Future X

I believe we should start thinking about the AI debate from a meta-perspective. First, people should recognize and agree that there is a continuum of problems and positions between the extreme opposite views of "AGI is a myth" and "AGI is uncontrollable". Second, the failure to recognize this continuum, together with the lack of a known threshold of danger, might itself be a problem for developing useful strategies on AI. Third, we should learn as much as possible about the "Future X" scenario before it happens, so that potentially harmful integration can be connected with forms of super-intelligence appearing in various sectors that are misaligned with general human values.

The point is *NOT* to treat the integration of AI in society as a separate problem from the AGI control problem, but to recognize that harmful integration might create blind spots in how we see the future of super-intelligence. It might happen that super-intelligence does not come out of a lab, nor from an intention to create one for the purpose of achieving human goals.

We should try to find ways to avoid "the moving goalpost" of defining AI, such that people agree upon levels of dangerous capabilities where extra safety is required.
53.851613
130
0.804361
eng_Latn
0.999869
7abec005f46f23dc771841b7e4a0b127fc802888
350
md
Markdown
exampleSite/content/english/author/adrian-keung.md
akykeung/airspace-hugo
5079c85eed32f2541cfc20af415bae65d0030425
[ "CC-BY-3.0" ]
null
null
null
exampleSite/content/english/author/adrian-keung.md
akykeung/airspace-hugo
5079c85eed32f2541cfc20af415bae65d0030425
[ "CC-BY-3.0" ]
null
null
null
exampleSite/content/english/author/adrian-keung.md
akykeung/airspace-hugo
5079c85eed32f2541cfc20af415bae65d0030425
[ "CC-BY-3.0" ]
null
null
null
+++ email = "[email protected]" image = "/images/20190406_121059.jpg" title = "Adrian Keung" [[social]] icon = "ion-social-instagram-outline" link = "" [[social]] icon = "ion-social-instagram-outline" link = "" [[social]] icon = "ion-social-facebook-outline" link = "" +++ Co-founder and President of the Digital Engineering Students' Society
21.875
69
0.702857
eng_Latn
0.154068
7abf3c0e6e1b525d7e79e286c46e38a99cd9956b
11,709
md
Markdown
articles/service-fabric/service-fabric-reliable-actors-using.md
fuadi-star/azure-docs.nl-nl
0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd
[ "CC-BY-4.0", "MIT" ]
16
2017-08-28T07:45:43.000Z
2021-04-20T21:12:50.000Z
articles/service-fabric/service-fabric-reliable-actors-using.md
fuadi-star/azure-docs.nl-nl
0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd
[ "CC-BY-4.0", "MIT" ]
575
2017-08-30T07:14:53.000Z
2022-03-04T05:36:23.000Z
articles/service-fabric/service-fabric-reliable-actors-using.md
fuadi-star/azure-docs.nl-nl
0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd
[ "CC-BY-4.0", "MIT" ]
58
2017-07-06T11:58:36.000Z
2021-11-04T12:34:58.000Z
---
title: Implement features in Azure Service Fabric actors
description: Describes how to write your own actor service that implements service-level features, the same way you would when inheriting StatefulService.
ms.topic: conceptual
ms.date: 03/19/2018
ms.custom: devx-track-csharp
ms.openlocfilehash: d39ec93e0ad03d6c860bae9d0790e860c95457a5
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 03/29/2021
ms.locfileid: "96575564"
---
# <a name="implement-service-level-features-in-your-actor-service"></a>Implement service-level features in your actor service

As described in [service layering](service-fabric-reliable-actors-platform.md#service-layering), the actor service itself is a reliable service. You can write your own service that derives from `ActorService`. You can also implement service-level features the same way you would when inheriting a stateful service, such as:

- Service backup and restore.
- Shared functionality for all actors, for example, a circuit breaker.
- Remote procedure calls on the actor service itself and on each individual actor.

## <a name="use-the-actor-service"></a>Use the actor service

Actor instances have access to the actor service in which they are running. Through the actor service, actor instances can programmatically obtain the service context. The service context has the partition ID, service name, application name, and other Azure Service Fabric platform-specific information.

```csharp
Task MyActorMethod()
{
    Guid partitionId = this.ActorService.Context.PartitionId;
    string serviceTypeName = this.ActorService.Context.ServiceTypeName;
    Uri serviceInstanceName = this.ActorService.Context.ServiceName;
    string applicationInstanceName = this.ActorService.Context.CodePackageActivationContext.ApplicationName;
}
```

```Java
CompletableFuture<?> MyActorMethod()
{
    UUID partitionId = this.getActorService().getServiceContext().getPartitionId();
    String serviceTypeName = this.getActorService().getServiceContext().getServiceTypeName();
    URI serviceInstanceName = this.getActorService().getServiceContext().getServiceName();
    String applicationInstanceName = this.getActorService().getServiceContext().getCodePackageActivationContext().getApplicationName();
}
```

Like all Reliable Services, the actor service must be registered with a service type in the Service Fabric runtime. For the actor service to run your actor instances, your actor type must also be registered with the actor service. The `ActorRuntime` registration method does this for actors. In the simplest case, you can just register your actor type, and the actor service then uses default settings.

```csharp
static class Program
{
    private static void Main()
    {
        ActorRuntime.RegisterActorAsync<MyActor>().GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
}
```

Alternatively, you can use a lambda provided by the registration method to create the actor service yourself. You can then configure the actor service and explicitly construct the actor instances. You can inject dependencies into your actor through its constructor.

```csharp
static class Program
{
    private static void Main()
    {
        ActorRuntime.RegisterActorAsync<MyActor>(
            (context, actorType) => new ActorService(context, actorType, () => new MyActor()))
            .GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
}
```

```Java
static class Program
{
    private static void Main()
    {
        ActorRuntime.registerActorAsync(
                MyActor.class,
                (context, actorTypeInfo) -> new FabricActorService(context, actorTypeInfo),
                timeout);

        Thread.sleep(Long.MAX_VALUE);
    }
}
```

## <a name="actor-service-methods"></a>Actor service methods

The actor service implements `IActorService` (C#) or `ActorService` (Java), which in turn implements `IService` (C#) or `Service` (Java). This interface is used by Reliable Services remoting, which allows remote procedure calls on service methods. It contains service-level methods that can be called remotely via service remoting. You can use them to [enumerate](service-fabric-reliable-actors-enumerate.md) and [delete](service-fabric-reliable-actors-delete-actors.md) actors.

## <a name="custom-actor-service"></a>Custom actor service

Using the actor registration lambda, you can also register your own custom actor service that derives from `ActorService` (C#) or `FabricActorService` (Java). You can then implement your own service-level functionality by writing a service class that inherits `ActorService` (C#) or `FabricActorService` (Java). A custom actor service inherits all the actor runtime functionality from `ActorService` (C#) or `FabricActorService` (Java). It can be used to implement your own service methods.

```csharp
class MyActorService : ActorService
{
    public MyActorService(StatefulServiceContext context, ActorTypeInformation typeInfo, Func<ActorBase> newActor)
        : base(context, typeInfo, newActor)
    { }
}
```

```Java
class MyActorService extends FabricActorService
{
    public MyActorService(StatefulServiceContext context, ActorTypeInformation typeInfo, BiFunction<FabricActorService, ActorId, ActorBase> newActor)
    {
         super(context, typeInfo, newActor);
    }
}
```

```csharp
static class Program
{
    private static void Main()
    {
        ActorRuntime.RegisterActorAsync<MyActor>(
            (context, actorType) => new MyActorService(context, actorType, () => new MyActor()))
            .GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
}
```

```Java
public class Program
{
    public static void main(String[] args)
    {
        ActorRuntime.registerActorAsync(
                MyActor.class,
                (context, actorTypeInfo) -> new FabricActorService(context, actorTypeInfo),
                timeout);
        Thread.sleep(Long.MAX_VALUE);
    }
}
```

## <a name="implement-actor-backup-and-restore"></a>Implement actor backup and restore

A custom actor service can expose a method to back up actor data by taking advantage of the remoting listener already present in `ActorService`. For an example, see [Backup and restore actors](service-fabric-reliable-actors-backup-and-restore.md).

## <a name="actor-that-uses-a-remoting-v2-interface-compatible-stack"></a>Actor that uses a remoting V2 (interface compatible) stack

The remoting V2 (interface compatible, known as V2_1) stack has all the features of the V2 remoting stack. Its interface is compatible with the remoting V1 stack, but it is not backward compatible with V2 and V1.

To upgrade from V1 to V2_1 without affecting service availability, follow the steps in the next section.

The following changes are required to use the remoting V2_1 stack:

1. Add the following assembly attribute to the actor interfaces.

```csharp
[assembly:FabricTransportActorRemotingProvider(RemotingListenerVersion = RemotingListenerVersion.V2_1,RemotingClientVersion = RemotingClientVersion.V2_1)]
```

2. Build and upgrade the actor service and actor client projects to start using the V2 stack.

### <a name="actor-service-upgrade-to-remoting-v2-interface-compatible-stack-without-affecting-service-availability"></a>Actor service upgrade to remoting V2 (interface compatible) stack without affecting service availability

This change is a two-step upgrade. Follow the steps in this sequence.

1. Add the following assembly attribute to the actor interfaces. This attribute starts two listeners for the actor service, the V1 (existing) listener and the V2_1 listener. Upgrade the actor service with this change.

```csharp
[assembly:FabricTransportActorRemotingProvider(RemotingListenerVersion = RemotingListenerVersion.V1|RemotingListenerVersion.V2_1,RemotingClientVersion = RemotingClientVersion.V2_1)]
```

2. Upgrade the actor clients after you complete the previous upgrade. This step ensures the actor proxy uses the remoting V2_1 stack.
3. This step is optional. Change the previous attribute to remove the V1 listener.

```csharp
[assembly:FabricTransportActorRemotingProvider(RemotingListenerVersion = RemotingListenerVersion.V2_1,RemotingClientVersion = RemotingClientVersion.V2_1)]
```

## <a name="actor-that-uses-the-remoting-v2-stack"></a>Actor that uses the remoting V2 stack

With the version 2.8 NuGet package, users can now use the remoting V2 stack, which performs better and provides features such as custom serialization. Remoting V2 is not backward compatible with the existing remoting stack (now called the V1 remoting stack).

The following changes are required to use the remoting V2 stack.

1. Add the following assembly attribute to the actor interfaces.

```csharp
[assembly:FabricTransportActorRemotingProvider(RemotingListenerVersion = RemotingListenerVersion.V2,RemotingClientVersion = RemotingClientVersion.V2)]
```

2. Build and upgrade the actor service and actor client projects to start using the V2 stack.

### <a name="upgrade-the-actor-service-to-the-remoting-v2-stack-without-affecting-service-availability"></a>Upgrade the actor service to the remoting V2 stack without affecting service availability

This change is a two-step upgrade. Follow the steps in this sequence.

1. Add the following assembly attribute to the actor interfaces. This attribute starts two listeners for the actor service, the V1 (existing) listener and the V2 listener. Upgrade the actor service with this change.

```csharp
[assembly:FabricTransportActorRemotingProvider(RemotingListenerVersion = RemotingListenerVersion.V1|RemotingListenerVersion.V2,RemotingClientVersion = RemotingClientVersion.V2)]
```

2. Upgrade the actor clients after you complete the previous upgrade. This step ensures the actor proxy uses the remoting V2 stack.
3. This step is optional. Change the previous attribute to remove the V1 listener.

```csharp
[assembly:FabricTransportActorRemotingProvider(RemotingListenerVersion = RemotingListenerVersion.V2,RemotingClientVersion = RemotingClientVersion.V2)]
```

## <a name="next-steps"></a>Next steps

* [Actor state management](service-fabric-reliable-actors-state-management.md)
* [Actor lifecycle and garbage collection](service-fabric-reliable-actors-lifecycle.md)
* [Actors API reference documentation](/previous-versions/azure/dn971626(v=azure.100))
* [.NET sample code](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started)
* [Java sample code](https://github.com/Azure-Samples/service-fabric-java-getting-started)

<!--Image references-->
[1]: ./media/service-fabric-reliable-actors-platform/actor-service.png
[2]: ./media/service-fabric-reliable-actors-platform/app-deployment-scripts.png
[3]: ./media/service-fabric-reliable-actors-platform/actor-partition-info.png
[4]: ./media/service-fabric-reliable-actors-platform/actor-replica-role.png
[5]: ./media/service-fabric-reliable-actors-introduction/distribution.png
51.581498
547
0.774532
nld_Latn
0.957495
7abf57fd9da90a33870c8433db7084fc8a496a5a
406
md
Markdown
docs/examples/bar-chart.md
wsignor/visualizacao-dados
1fb74d3973c97f6ac2fc44c08077e3eaaa53806b
[ "BSD-3-Clause" ]
1
2019-03-11T12:25:53.000Z
2019-03-11T12:25:53.000Z
docs/examples/bar-chart.md
debuggermalhotra/vega-dev
ccf6afc4d677f58680c97ced565fd1ff572f15ad
[ "BSD-3-Clause" ]
null
null
null
docs/examples/bar-chart.md
debuggermalhotra/vega-dev
ccf6afc4d677f58680c97ced565fd1ff572f15ad
[ "BSD-3-Clause" ]
null
null
null
--- layout: example title: Bar Chart Example permalink: /examples/bar-chart/index.html spec: bar-chart --- A bar chart encodes quantitative values as the extent of rectangular bars. This example includes basic highlighting and tooltips on mouse hover. For a step-by-step guide to building this visualization, see the [bar chart tutorial](../../tutorials/bar-chart/). {% include example spec=page.spec %}
36.909091
259
0.76601
eng_Latn
0.990452
7abf98f27339312903f9a7745f79940e376af5c0
1,311
md
Markdown
WindowsServerDocs/identity/ad-fs/AD-FS-Technical-Reference.md
AnirbanPaul/windowsserverdocs
b7b12767c1261b17bdbf87cda49f341ccaf687bb
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/identity/ad-fs/AD-FS-Technical-Reference.md
AnirbanPaul/windowsserverdocs
b7b12767c1261b17bdbf87cda49f341ccaf687bb
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/identity/ad-fs/AD-FS-Technical-Reference.md
AnirbanPaul/windowsserverdocs
b7b12767c1261b17bdbf87cda49f341ccaf687bb
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- ms.assetid: e2c940f5-4b1f-457a-bc71-dcced0c752f7 title: AD FS Technical Reference description: author: billmath ms.author: billmath manager: femila ms.date: 05/31/2017 ms.topic: article ms.prod: windows-server-threshold ms.technology: identity-adfs --- # AD FS Technical Reference >Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012 - [AD FS and certificate KeySpec property information](../ad-fs/technical-reference/AD-FS-and-KeySpec-Property.md) - [Auditing Enhancements to AD FS in Windows Server 2016](../ad-fs/technical-reference/Auditing-Enhancements-to-AD-FS-in-Windows-Server-2016.md) - [Understanding Key AD FS Concepts](../ad-fs/technical-reference/Understanding-Key-AD-FS-Concepts.md) - [Device Registration Technical Reference](../ad-fs/technical-reference/Device-Registration-Technical-Reference.md) > [!TIP] > You can find additional AD FS 2.0 design content at the [AD FS 2.0 Content Map](http://social.technet.microsoft.com/wiki/contents/articles/2735.ad-fs-2-0-content-map.aspx) page on the Microsoft TechNet Wiki. This page is managed by members of the AD FS 2.0 Community and is monitored on a regular basis by the AD FS Product Team. ## See Also [Active Directory Federation Services Overview](AD-FS-2016-Overview.md)
42.290323
332
0.756674
yue_Hant
0.800353
7ac0cd5dba2648e65f57b8cf0507a0e201d7223e
33
md
Markdown
big-data/presto/presto/opt/README.md
eabyshev/base
8f782ff2e2791099e9b2362215a4ebddc833c663
[ "Apache-2.0" ]
1
2016-09-22T12:24:09.000Z
2016-09-22T12:24:09.000Z
big-data/presto/presto/opt/README.md
eabyshev/base
8f782ff2e2791099e9b2362215a4ebddc833c663
[ "Apache-2.0" ]
null
null
null
big-data/presto/presto/opt/README.md
eabyshev/base
8f782ff2e2791099e9b2362215a4ebddc833c663
[ "Apache-2.0" ]
1
2021-11-23T08:30:25.000Z
2021-11-23T08:30:25.000Z
This is a Presto Debian package.
16.5
32
0.787879
eng_Latn
0.999907
7ac187aae7322697769ea361961564b341281d46
1,584
md
Markdown
docs/c-runtime-library/reference/rtc-seterrorfunc.md
stu85010/cpp-docs.zh-tw
bac0362e722d794727f509d63a2e3179b70d0785
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/reference/rtc-seterrorfunc.md
stu85010/cpp-docs.zh-tw
bac0362e722d794727f509d63a2e3179b70d0785
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/reference/rtc-seterrorfunc.md
stu85010/cpp-docs.zh-tw
bac0362e722d794727f509d63a2e3179b70d0785
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: _RTC_SetErrorFunc
ms.date: 11/04/2016
apiname:
- _RTC_SetErrorFunc
apilocation:
- msvcrt.dll
- msvcr80.dll
- msvcr90.dll
- msvcr100.dll
- msvcr100_clr0400.dll
- msvcr110.dll
- msvcr110_clr0400.dll
- msvcr120.dll
- msvcr120_clr0400.dll
- ucrtbase.dll
apitype: DLLExport
f1_keywords:
- RTC_SetErrorFunc
- _RTC_SetErrorFunc
helpviewer_keywords:
- RTC_SetErrorFunc function
- _RTC_SetErrorFunc function
ms.assetid: b2292722-0d83-4092-83df-3d5b19880666
ms.openlocfilehash: 6b292d685eea8eccb9e9b2a3c3e6cd903d501005
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 10/31/2018
ms.locfileid: "50514734"
---
# <a name="rtcseterrorfunc"></a>_RTC_SetErrorFunc

Designates a function as the handler for reporting run-time error checks (RTC). This function is deprecated; use **_RTC_SetErrorFuncW** instead.

## <a name="syntax"></a>Syntax

```C
_RTC_error_fn _RTC_SetErrorFunc(
   _RTC_error_fn function
);
```

### <a name="parameters"></a>Parameters

*function*<br/>
The address of the function that will handle run-time error checks.

## <a name="return-value"></a>Return value

The previously defined error function. If there is no previously defined function, **NULL** is returned.

## <a name="remarks"></a>Remarks

Do not use this function; use **_RTC_SetErrorFuncW** instead. This function is retained only for backward compatibility.

## <a name="requirements"></a>Requirements

|Routine|Required header|
|-------------|---------------------|
|**_RTC_SetErrorFunc**|\<rtcapi.h>|

For more information, see [Compatibility](../../c-runtime-library/compatibility.md).

## <a name="libraries"></a>Libraries

All versions of the [C run-time libraries](../../c-runtime-library/crt-library-features.md).

## <a name="see-also"></a>See also

[_CrtDbgReport, _CrtDbgReportW](crtdbgreport-crtdbgreportw.md)<br/>
[Run-time error checking](../../c-runtime-library/run-time-error-checking.md)<br/>
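Although this page recommends the wide-character variant, a short sketch of how such a handler is installed may help. This example is not from the original page; it assumes MSVC with run-time checks enabled (for example, /RTC1), and you should verify the `_RTC_error_fn` signature against the `<rtcapi.h>` shipped with your CRT:

```C
// Hedged sketch: install a custom handler for run-time error checks.
// The handler signature below matches the _RTC_error_fn typedef as commonly
// declared in <rtcapi.h>; confirm it against your CRT version.
#include <rtcapi.h>
#include <stdio.h>

static int __cdecl MyRtcHandler(int errType, const char *file, int line,
                                const char *module, const char *format, ...)
{
    // Log the error location instead of invoking the default reporting.
    fprintf(stderr, "RTC error %d in %s, %s(%d)\n",
            errType, module ? module : "?", file ? file : "?", line);
    return 0; // 0 tells the runtime not to break into the debugger
}

void InstallRtcHandler(void)
{
    // Returns the previously installed handler (NULL if none was set),
    // so it can be restored later if needed.
    _RTC_error_fn previous = _RTC_SetErrorFunc(MyRtcHandler);
    (void)previous;
}
```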
21.69863
67
0.729167
yue_Hant
0.549217
7ac1f4fc87f444f9f60b56a4fe77d05296015f97
1,841
md
Markdown
README.md
Alterae/discord-bot
79bfaeb99b167afd029385315cce4346bba2e38b
[ "MIT" ]
null
null
null
README.md
Alterae/discord-bot
79bfaeb99b167afd029385315cce4346bba2e38b
[ "MIT" ]
40
2021-07-23T16:42:57.000Z
2021-08-19T13:55:28.000Z
README.md
Alterae/discord-bot
79bfaeb99b167afd029385315cce4346bba2e38b
[ "MIT" ]
null
null
null
[![code style: prettier](https://img.shields.io/badge/code_style-prettier-ff69b4.svg?style=flat-square)](https://github.com/prettier/prettier)

# discord-bot

Just another Discord bot. Have fun.

## Getting Started

> **Prerequisites:**
> You will need `node`, `npm`, and TypeScript installed. You also need an internet connection, a Discord account, and you will have to create an application and bot via Discord's developer portal.

Clone the repository, `cd` into it, and install dependencies:

```console
npm install # Or pnpm install, yarn, etc.
```

Create a `.env` file in the project root, and put your bot token and user ID in it:

```ini
TOKEN=YOUR_TOKEN_HERE
AUTHOR_ID=YOUR_USER_ID_HERE
```

> ⚠ **Warning:** DO NOT COMMIT THE `.env` FILE OR ANYTHING CONTAINING YOUR TOKEN TO VERSION CONTROL.

Then, compile and run the bot:

```console
npm start
```

See the [architecture notes](./architecture-notes.md) for some details on how everything works.

## Features

### Commands

- `help` - help command, self-explanatory
- `stop` - stops the bot (only usable by the owner of the bot)
- `version` - show the package version of the bot
- `about` - show info about the bot (name, version, GitHub repo, description, etc.)
- `flip` - flip a coin

### Other

- Object-based command system supporting aliases and taking full advantage of TypeScript's type system (see the sketch at the end of this README).

## Planned Features

### Commands

- `poll` - a poll command, because we don't have enough of those
- `roll` - roll dice using dice notation
- maybe a command to view GitHub repos?
- More to come!

### Other

- Command parser supporting quoted arguments and (rudimentary) flags (like most command-line apps).
- Output using embeds.
- An option (maybe via a `--no-embed` flag) to fall back to regular text instead of using embeds.
- A shiny GitHub Pages site.
- More to come!
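As a rough illustration of the object-based command system mentioned under Features, a command object with aliases might look like the sketch below. The `Command` interface and its field names are assumptions for this sketch, not the repository's actual types:

```ts
// Hypothetical shape of an object-based command with aliases; the real
// repo's types may differ. Requires discord.js as a dependency.
import { Message } from "discord.js";

interface Command {
  name: string;
  aliases?: string[];
  description: string;
  run: (message: Message, args: string[]) => Promise<void>;
}

// The `flip` command from the Features list, expressed as a command object.
const flip: Command = {
  name: "flip",
  aliases: ["coin", "coinflip"],
  description: "Flip a coin.",
  run: async (message) => {
    await message.reply(Math.random() < 0.5 ? "Heads!" : "Tails!");
  },
};

export default flip;
```

A lookup table keyed by both `name` and each alias is then enough to resolve incoming messages to handlers.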
27.893939
196
0.728408
eng_Latn
0.974908
7ac236fc9e34875682ab2f320cc20a0f451f8b6d
3,734
md
Markdown
AlchemyInsights/report-spam-false-positives.md
isabella232/OfficeDocs-AlchemyInsights-pr.bg-BG
7701e28cebcdd224cf0fdde712e5598893bc4342
[ "CC-BY-4.0", "MIT" ]
1
2020-05-19T19:05:50.000Z
2020-05-19T19:05:50.000Z
AlchemyInsights/report-spam-false-positives.md
isabella232/OfficeDocs-AlchemyInsights-pr.bg-BG
7701e28cebcdd224cf0fdde712e5598893bc4342
[ "CC-BY-4.0", "MIT" ]
2
2022-02-09T06:51:27.000Z
2022-02-09T06:51:41.000Z
AlchemyInsights/report-spam-false-positives.md
isabella232/OfficeDocs-AlchemyInsights-pr.bg-BG
7701e28cebcdd224cf0fdde712e5598893bc4342
[ "CC-BY-4.0", "MIT" ]
3
2019-10-09T20:29:19.000Z
2021-10-09T10:52:40.000Z
---
title: Do you want to report spam false positives to Microsoft?
ms.author: chrisda
author: chrisda
manager: dansimp
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.custom:
- "975"
- "666"
- "3100019"
ms.openlocfilehash: d3897f24ce9a967b08a3fd15a2fdedbb3fe2a22d
ms.sourcegitcommit: f05d4caa0e657ee74d6b6e9abc88488f17d740fe
ms.translationtype: MT
ms.contentlocale: bg-BG
ms.lasthandoff: 08/19/2021
ms.locfileid: "58396604"
---
# <a name="do-you-have-legitimate-messages-being-marked-as-spam"></a>Do you have legitimate messages being marked as spam?

It's frustrating when a legitimate email lands in the Junk Email folder or in quarantine. Consider these most common causes of false positives:

**Tenant overrides (most common)**

This is entirely within your control to fix. Submit the message to Microsoft 365 Defender for an analysis of the policies and rules that have an impact; rescan details are available within minutes. Review or change the policies or rules as applicable.

**End-user overrides (common)**

This is entirely within your control to fix. Submit the message to Microsoft 365 Defender for an analysis of the policies and rules that have an impact; rescan details are available within minutes. If a message was blocked because it was sent from an address on a user's blocked senders list, the headers include the spam filtering verdict "SFV:BLK".

**Sender email authentication**

This is partially within your control to fix. Submit the message to analyze the sender's email authentication failures at the time of delivery; results are available within one day. If you own the sending infrastructure, review how to align it with SPF, DKIM, and DMARC to ensure that destination email systems trust messages sent from your domain. Alternatively, contact the senders so they can address their DNS configurations.

**Microsoft filtering verdicts**

This is partially within your control to fix. Submit the message and report it as safe; rescan results are available within one day. Use the tenant allow/block list when you disagree with the filtering verdict in specific situations. However, you should not permanently bypass Microsoft's filtering verdict.

For more information, see:

- Allow end users to submit messages to Microsoft. Microsoft uses these submissions to improve the effectiveness of email protection technologies, and they appear in submission reports so you can use them as a signal for updating your policies.
- To watch a short video about submitting messages for analysis, see [Submit messages for analysis](https://go.microsoft.com/fwlink/?linkid=2166435).
- [Use admin submission to submit suspected spam, phish, URLs, and files to Microsoft](https://docs.microsoft.com/microsoft-365/security/office-365-security/admin-submission)
- [Manage the tenant allow/block list](https://docs.microsoft.com/microsoft-365/security/office-365-security/tenant-allow-block-list)
- [Anti-spam message headers in Microsoft 365](https://docs.microsoft.com/microsoft-365/security/office-365-security/anti-spam-message-headers)
- [Outbound spam protection in EOP](https://docs.microsoft.com/microsoft-365/security/office-365-security/outbound-spam-controls)
63.288136
345
0.810927
bul_Cyrl
0.998907
7ac2d2a3ca681c9c11595026e20aea449d032836
2,149
md
Markdown
docs/versioned_docs/version-1.14/reference/language-guide/feel-unary-tests.md
xencura/feel-scala
abf5e8a20385c28a31632ba15d5e0642b3375eb8
[ "Apache-2.0" ]
67
2017-03-13T17:35:36.000Z
2022-02-18T04:36:33.000Z
docs/versioned_docs/version-1.14/reference/language-guide/feel-unary-tests.md
xencura/feel-scala
abf5e8a20385c28a31632ba15d5e0642b3375eb8
[ "Apache-2.0" ]
289
2017-04-02T12:22:51.000Z
2022-03-30T09:17:33.000Z
docs/versioned_docs/version-1.14/reference/language-guide/feel-unary-tests.md
xencura/feel-scala
abf5e8a20385c28a31632ba15d5e0642b3375eb8
[ "Apache-2.0" ]
38
2017-03-21T19:31:11.000Z
2022-03-27T09:19:35.000Z
---
id: feel-unary-tests
title: Unary-Tests
---

Unary-tests can be used only for input entries of a decision table. They are a special kind of expression with additional operators. The operators get the value of the input expression implicitly as the first argument.

The result of the expression must be either `true` or `false`.

A unary-tests expression is `true` if one of the following conditions is fulfilled:

* the expression evaluates to `true` when the input value is applied to it
* the expression evaluates to a list and the input value is equal to at least one of the values in that list
* the expression evaluates to a value and the input value is equal to that value

### Comparison

Compare the input value to `x`.

| operator | symbol | example |
|----------|-----------------|---------|
| equal to | (none) | `"valid"` |
| less than | `<` | `< 10` |
| less than or equal | `<=` | `<= 10` |
| greater than | `>` | `> 10` |
| greater than or equal | `>=` | `>= 10` |

* the less-than/greater-than comparisons are supported only for:
  * number
  * date
  * time
  * date-time
  * year-month-duration
  * day-time-duration

### Interval

Test if the input value is within the interval between `x` and `y`.

An interval can be open `(x..y)` / `]x..y[` or closed `[x..y]`. If the interval is open, the boundary value is not included.

```js
(2..5) // input > 2 and input < 5

[2..5] // input >= 2 and input <= 5

(2..5] // input > 2 and input <= 5
```

### Disjunction

Test if at least one of the expressions is `true`.

```js
2, 3, 4 // input = 2 or input = 3 or input = 4

< 10, > 50 // input < 10 or input > 50
```

### Negation

Test if the expression is `false`.

```js
not("valid") // input != "valid"

not(2, 3) // input != 2 and input != 3
```

### Expression

It is also possible to use a boolean [expression](feel-expression) instead of an operator, for example, by invoking a built-in function. The input value can be accessed through the special variable `?`.

```js
ends with(?, "@camunda.com") // test if the input value (string) ends with "@camunda.com"

list contains(?, "invalid") // test if the input value (list) contains "invalid"
```
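To make the three truth conditions listed above concrete, here is a small illustration; the input values named in the comments are hypothetical examples, following the conventions of this page:

```js
< 10        // condition 1: evaluates to true when input 7 is applied to it

[2, 3, 4]   // condition 2: evaluates to a list that contains input 3

"valid"     // condition 3: evaluates to a single value equal to input "valid"
```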
24.146067
219
0.64402
eng_Latn
0.998466
7ac4459604ca2093de6e798f23420edc31cf2146
124
md
Markdown
README.md
thiagopaiva99/frases-do-dia
901dc501703aca37bacac64cd2bf1cf1a35424d0
[ "MIT" ]
null
null
null
README.md
thiagopaiva99/frases-do-dia
901dc501703aca37bacac64cd2bf1cf1a35424d0
[ "MIT" ]
null
null
null
README.md
thiagopaiva99/frases-do-dia
901dc501703aca37bacac64cd2bf1cf1a35424d0
[ "MIT" ]
null
null
null
# frases-do-dia

A simple app I made a while ago in React Native that shows you some motivational quotes.
41.333333
107
0.814516
por_Latn
0.999938