---
title: Barr 4 - Supplements
categories:
author_staff_member: Дурак.
show_comments: true
---
# **Supplements**
---
the most important ones are marked in <span style="color:red">red</span>
---
## **Most effective against EBV-EA**
*(out of ginkgo, black cohosh, echinacea, kava-kava, saw palmetto, turmeric, angelica, wild yam, cat’s claw, passion flower, muira puama, feverfew, blueberry, chasteberry, licorice, nettle, golden seal, pygeum, ginger, valerian and hops)*
<a href="https://www.sciencedirect.com/science/article/abs/pii/S1043661801909363?via%3Dihub" target="_blank" rel="nofollow">**Link 1**</a>
<a href="https://www.ncbi.nlm.nih.gov/pubmed/11884218" target="_blank" rel="nofollow">**Link 2**</a>
1. <span style="color:red">**Turmeric**
- inhibitor of EBV-EA
- strengthens the endocrine system and the central nervous system
- in general, helps with:
- fever
- headache
- gastrointestinal complaints
2. <span style="color:red">**Passionflower**
- inhibitor of EBV-EA
---
## **Substances**:
- <span style="color:red">**NAC**
- anti-replication of the virus and cell system support (energy!)
- <a href="https://www.ncbi.nlm.nih.gov/pubmed/29228057" target="_blank" rel="nofollow"> ameliorates Epstein-Barr virus latent membrane protein 1</a>
- <span style="color:red">**L-Lysine**
- strong antiviral
- tissue repair and normal production of hormones, antibodies, and digestive enzymes
- <span style="color:red">**lowers EBV load and acts as a central nervous system anti-inflammatory**
- 500-1000 mg twice per day
- 2000-3000 mg if very bad - not longer than 3 days!
- Omega-3 fatty acids
- prime modulator of inflammatory hormones
- 5-MTHF (5-methyltetrahydrofolate)
- helps strengthen the endocrine system and central nervous system
- <span style="color:red">**Lauricidin or Monolaurin**
- treats infections (cold, flu, herpes, and EBV)
- antiviral; breaks down EBV load and reduces cofactors
- boost immune system
- start with ¼ teaspoon 2 – 3 times a day
- working up to 1 teaspoon 2 – 3 times a day
- Monolaurin 1800 mg twice daily
- <span style="color:red">Resveratrol
- Humic Acid:
- start with 1 capsule (750 mg) twice a day
- working up to 2 capsules (1500 mg) twice a day
- Ester-C:
- strengthens the immune system and flushes EBV toxins from the liver
- Ursolic acid
- Oleanolic acid
- Corosolic acid
- <a href="https://www.ncbi.nlm.nih.gov/pubmed/17617564" target="_blank" rel="nofollow">Honokiol</a>
- inhibits negative effects of EBV activation
- Inositol
- CoQ10 or better <span style="color:red">**Ubiquinol**
- improve energy production in cells
- Lactoferrin
---
## Minerals and vitamins:
- Zinc
- strengthens the immune system and protects the thyroid from EBV inflammation
- **Sun**/Vitamin D
- Vitamin D3 + K2
- antiviral
- increases energy levels and immunity, boosts mood and balances hormones
- Fish Oil/DHA+EPA
- Vitamin A
- Vitamin B12 (**methylcobalamin**)
- strengthens the central nervous system
- Magnesium
- anti-inflammatory for nervous system
- Selenium
- strengthens and protects the central nervous system
---
## Adaptogens:
- Eleutherococcus senticosus
- Astragalus
- Andrographis
- Gynostemma
- Schizandra chinensis
- Rhodiola rosea
- Ashwagandha
- <span style="color:red">**Licorice**
- antiviral
- anti-inflammatory
- **lowers EBV production and strengthens the adrenals and kidneys**
- 150-300 mg daily
---
## Natural remedies:
- <span style="color:red">**Turmeric**
- <span style="color:red">**Cat’s claw**
- **considered the most effective in traditional use**
- **reduces EBV and cofactors such as strep A and strep B**
- 30-60 drops twice daily
- <span style="color:red">**Lemon balm**
- antiviral and antibacterial
- kills EBV cells and strengthens the immune system
- Black Cumin Seed Oil
- Artemisinin
- Boswellia
- <span style="color:red">**Chinese Skullcap**
- Citrus
- Quercetin
- Milk Thistle
- Resveratrol
- Sesame oil
- THC
- <span style="color:red">**Olive leaf/Oleuropein**
- 1000-1500 mg twice daily
- Nettle leaf
- provides vital micronutrients to the brain, blood, and central nervous system
- Elderberry
- antiviral; strengthens the immune system
- Red clover
- cleanses the liver, lymphatic system, and spleen of neurotoxins from EBV
- Star anise:
- antiviral; helps kill EBV in the liver and thyroid
- Echinacea
- Anti-inflammatory and antiviral
- 300-500 mg up to three times daily
---
# Algae (use with caution!)
- Red marine algae:
- antiviral
- removes heavy metals such as mercury and reduces viral load
- Chlorella:
- rebuilds the central nervous system and eliminates heavy metals
---
## According to "Buhner":
<a href="https://www.herbalamy.com/product-page/epstein-barr-virus-formulation" target="_blank" rel="nofollow">**Link 1**</a>
<a href="http://buhnerhealinglyme.com/symptoms/treating-ebv-and-mitochondria-damage/" target="_blank" rel="nofollow">**Link 2**</a>
**Antiviral herbs specific to EBV:**
- <span style="color:red">**Lomatium**
- Andrographis
- Reishi
- <span style="color:red">**Licorice**
- Isatis
- Japanese knotweed
- <span style="color:red">**Chinese skullcap root**
- fresh ginger root
EBV damages the mitochondria and their function.
**Herbs specific to protecting the mitochondria and their function:**
- Cordyceps
- Motherwort
- Rhodiola
- <span style="color:red">**Chinese skullcap root**
- Kudzu
---
**Herbal tincture I**
- Chinese skullcap
- Licorice
- Isatis
- Lomatium
- Equal parts of each tincture, mixed together.
- 1/4-1 tsp. 3-6 times a day
- 60 days
---
**Herbal tincture II**
- Rhodiola
- Reishi
- Kudzu
- Equal parts of each tincture, mixed together.
- 1/4-1 tsp. 3-6 times a day
- 60 days
- Plus Cordyceps:
- 2000 mg 3 times a day
- or tincture: 1/4 tsp. 3 times a day
---
**Herbal tincture III**
- Motherwort
- Passionflower
- Equal parts of each tincture, mixed together.
- 1/4-1/2 tsp. 3-6 times a day
---
**Ginger tea**
- for one cup
  - one thumb-sized piece
  - cut as finely as possible
  - 1/4 liter of water; let the tea steep for 2-3 hours
  - 3-6 cups per day
- for 4 cups
  - 3 thumb-sized pieces
  - 1 liter of water; let the tea steep for 2-3 hours
---
# template-repo
Repository with all the important info in it for starting new projects under the Grafeas Group umbrella.
# Lvhuo (驴火, or 驴肉火烧)
----
Image stacking analysis for Hyper Suprime-Cam data
Placeholder for future development. The name was suggested by Weibo (微博) user @桂宮文麿惟仁. "Lvhuo" means ["donkey burger"](https://en.wikipedia.org/wiki/Donkey_burger). Yes, it is a "burger" with donkey meat, which is very famous in my grandpa's hometown, Baoding (保定). Although it may sound strange to Western readers to eat donkey, it has actually been popular in China for a long, long time. There is literally a saying that goes "In Heaven there is dragon meat, on Earth there is donkey meat" (天上龙肉,地上驴肉).
Applications
------------
Installation
------------
Acknowledgement
---------------
Thanks to the HSC collaboration for making this amazing survey happen and for making these beautiful data available. Thanks also to the good people at NAOJ who work tirelessly to prepare the data releases.
Reporting bugs
--------------
If you notice a bug in `lvhuo` (and you will~), please file a detailed issue at:
https://github.com/dr-guangtou/lvhuo/issues
Requesting features
-------------------
If you would like to request a new feature, do the same thing.
License
-------
Copyright 2019 Jiaxuan Li, Song Huang and contributors.
`Lvhuo` is free software made available under the MIT License. For details see the LICENSE file.
# An overview of creating Adobe XD plugins
XD plugins extend the capabilities of [Adobe XD](https://www.adobe.com/products/xd.html) by adding new features to the app, automating workflows, connecting the app to external services, and more.
On this page, we'll give you a quick overview of **what you can create** and **what skills you need to bring**.
From there, you can **choose your own adventure**: build a "Hello, World" plugin in our [Quick Start tutorial](/tutorials/quick-start), follow our [tutorials](/tutorials/index.md), try code-complete [sample plugins](https://github.com/AdobeXD/plugin-samples), or browse the [API references](/reference/how-to-read.md).
Oh, and be sure to [join the developer community](/community.md) while you're here! We want you to say hi (we'll say hi back).
Now, let’s supercharge the future of design together with XD plugins!
## What can you build?
The XD plugin APIs enable you to build plugins for a number of use cases, including:
- **Asset Management & Import**: Provide designers with access to stock photography and assets, and integrate with your DAM or brand management system.
- **Automation & Utility**: Help designers automate repetitive or tedious tasks. Unleash their creativity by enabling generative and data-driven designs.
- **Publish & Handoff**: Make publishing and handoff a breeze by integrating with online services and content management systems.
- **Designer & Stakeholder Collaboration**: Enhance collaboration between designers and stakeholders by integrating with the workflow services your team uses.
XD plugins appear to the user in one of two ways, as a _Plugins_ menu item that:
1. Runs with **no UI** (like a script), or
2. Opens a custom **modal UI** where the user can interact with the plugin
You can learn more about the API surfaces available to you in [our tutorials](/tutorials/index.md), as well as in our [API References](/reference/how-to-read.md).
## What skills do you need?
Below are the prerequisite skills you'll need to build a plugin. It's a short list! And even if you're new to coding, we think you'll be able to build your skills as you go.
##### Required
**JavaScript**: XD plugins are written in JavaScript. The XD plugin APIs support ES6+ JavaScript features, and ES5 is perfectly fine too.
If you've never worked with JavaScript before, we recommend taking a moment to get familiar with the language first. But come back quickly; you don't need to be a JavaScript rock star to get started building XD plugins. The [Quick Start tutorial](/tutorials/quick-start/) and [API feature tutorials](/tutorials) you'll find in the documentation will help get you on your way.
##### Recommended
**HTML/CSS**: If you plan to offer a UI for your plugin, some familiarity with HTML and CSS is recommended. XD plugin APIs support a _subset_ of HTML and CSS for creating plugin UI.
##### Optional
**React**: If you want to push your plugin UI even further, you can put your React skills to use. React is a great option for complex plugins that must manage both state and user interface. Please refer to the following samples for more about how to configure React:
* [ui-hello-react](https://github.com/AdobeXD/plugin-samples/tree/master/ui-hello-react)
* [e2e-adobe-stock](https://github.com/AdobeXD/plugin-samples/tree/master/e2e-adobe-stock)
* [ui-html-playground](https://github.com/AdobeXD/plugin-samples/tree/master/ui-html-playground)
## Where to start?
There are lots of ways to journey through the documentation on your way to building the next great XD plugin. If you're just getting started, we recommend following the left-hand navigation on this site from top to bottom (or until you're ready to plot your own course!).
Here are some highlights you won't want to miss:
1. **Get Started**: To begin, try our [Quick Start tutorial](/tutorials/quick-start), then follow along with [the API feature tutorials](./tutorials/index.md).
1. **Go deep**: Read up on [the structure of a plugin](./reference/structure/index.md), expand your reach with [advanced concepts](/reference/index.md), and then dig into the [API reference](/reference/how-to-read.md).
1. **See code**: If you prefer to learn from working code, we have a [samples repo on GitHub](https://github.com/AdobeXD/Plugin-Samples) for you to take a look at.
1. **Join the community**: We want to hear from you, know who you are, keep you up to date with the latest info, and grow together. See our [Community page](/community.md) to learn about how to connect.
<!-- Do not edit this file. It is automatically generated by API Documenter. -->
[Home](./index.md) > [@anglr/grid-extensions](./grid-extensions.md) > [selectAllOnPage](./grid-extensions.selectallonpage.md)
## selectAllOnPage() function
Selects or deselects all items on current page
<b>Signature:</b>
```typescript
export declare function selectAllOnPage<TItem>(select?: boolean, predicate?: (item: TItem) => boolean): GridAction;
```
## Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| select | <code>boolean</code> | Indication whether select or deselect all items on current page |
| predicate | <code>(item: TItem) => boolean</code> | Predicate that is evaluated whether row item falls into condition which allows selection/deselection of all items on page |
<b>Returns:</b>
`GridAction`
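A minimal usage sketch (assumptions: `GridAction` is exported by the core `@anglr/grid` package and your grid instance exposes an `execute` method that applies grid actions; adapt to your component setup):

```typescript
import {GridAction} from '@anglr/grid';
import {selectAllOnPage} from '@anglr/grid-extensions';

interface Employee
{
    id: number;
    active: boolean;
}

// Assumption: a grid instance that can apply GridActions
declare const grid: {execute: (...actions: GridAction[]) => void};

// Select every active employee on the current page
grid.execute(selectAllOnPage<Employee>(true, item => item.active));

// Deselect all items on the current page
grid.execute(selectAllOnPage<Employee>(false));
```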
## Getting up and running with OONI Metadb on AWS
0. To set up a copy of the OONI metadb which contains the measurement-specific
data needed for this analysis, create an AWS EC2 instance by following
[OONI's metadb sharing
directions](https://github.com/ooni/sysadmin/blob/master/docs/metadb-sharing.md).
1. After tunneling into the EC2 instance, run <code>sudo -u postgres psql -U
postgres metadb -c 'SELECT MAX(bucket_date) FROM autoclaved'</code> to check
that the output matches the day before the download.
2. Running <code>sudo -u postgres psql -U postgres metadb</code>, then
<code>\dt</code> should list all available relations including 'autoclaved',
'badblob', <a href =
"https://docs.google.com/document/d/1vB8taIbSOBxBRKMozCZwvt88iByghcIUJGh9nnk6vHo/edit?usp=sharing">etc</a>.
3. Check your Python version to make sure that Python3 is available. If >3.5.2
is on the path, use <code>pip3 install [package name]</code>. Otherwise, use
<code>sudo apt-get install python3-[package name]</code>. Install the
following packages: matplotlib, seaborn, pandas, numpy, scipy, psycopg2
(sqlalchemy dependency), sqlalchemy, jupyter.
4. Access the Python interpreter on the instance by typing `python3` into the
terminal.
5. When the prompt appears, import sqlalchemy and pandas, and run the following:
```
from sqlalchemy import create_engine
import pandas as pd

db = create_engine('postgres://metadb')
conn = db.connect()  # connect before querying
df = pd.read_sql_table('autoclaved', conn)
```
If these lines execute without errors, sqlalchemy can successfully access the tables in the metadb and convert them into pandas objects.
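As a further check against the data itself, you can mirror the SQL query from step 1 via pandas (a small sketch; it reuses the `conn` object opened above):

```python
# Should return the same date as the psql check in step 1
latest = pd.read_sql_query('SELECT MAX(bucket_date) FROM autoclaved', conn)
print(latest)
```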
---
title: My amazing title
date: 2017-11-27T09:43:36.531Z
description: Blurbs blurb blurb
image: /img/IMG_6188.JPG
---
# Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Vestibulum at fringilla lectus. Suspendisse potenti. Nunc dapibus imperdiet dolor eu bibendum. Donec tortor odio, placerat eget tristique sit amet, consequat vel felis. Donec viverra accumsan eros. Integer elit metus, varius in venenatis ac, placerat in nibh.
## Nam in cursus enim. Donec mi nisl, vestibulum non tortor sed, semper congue eros.
Vestibulum volutpat risus at pulvinar imperdiet. Nam id aliquam dolor. Cras lobortis bibendum est vitae dapibus. Nam dictum turpis metus, vel ultrices turpis auctor efficitur. Curabitur lacus ipsum, scelerisque sed nisi sit amet, convallis maximus urna. Curabitur facilisis tincidunt velit id maximus.
> Suspendisse nec est mauris. In id sapien nunc. Nulla eleifend bibendum posuere. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Duis ullamcorper rutrum arcu quis vulputate. Vivamus justo velit, porta nec sagittis vel, pretium ac est. Quisque maximus rutrum tristique. Aliquam erat volutpat.
Sed non libero tortor. Nunc eget velit varius, congue dolor eu, dictum nisi. Ut at nulla ut turpis imperdiet placerat. Sed consectetur convallis erat, vitae volutpat eros pretium sit amet. Nunc pharetra ullamcorper metus. In mollis, libero quis ullamcorper tincidunt, arcu augue accumsan erat, vitae cursus odio arcu commodo urna. Donec semper nunc non mollis hendrerit. Nullam malesuada ultrices mauris, in scelerisque elit gravida ac. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum sapien elit, pretium id placerat sollicitudin, elementum sed dui. In hac habitasse platea dictumst. Cras ultricies enim enim, auctor consectetur magna viverra sed. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.
* Fusce nec velit ligula. Aenean ornare nec diam in ullamcorper.
* Ut ultricies lorem est, a vestibulum velit molestie sed.
* Etiam ullamcorper nisl eu magna ultrices viverra. Quisque pharetra malesuada massa, vitae sollicitudin velit cursus at.
* Vestibulum ante nibh, accumsan molestie rutrum a, eleifend vitae mauris. In luctus dapibus lorem vitae gravida. Morbi viverra odio vitae elit fringilla ullamcorper.
* Pellentesque luctus, quam a vehicula luctus, ipsum tellus mollis nisi, ac varius justo quam eu lorem.
* Donec semper neque eu turpis feugiat, interdum rhoncus libero gravida.
Pellentesque vel ligula feugiat, hendrerit diam eget, commodo orci. Integer iaculis consectetur ipsum aliquam condimentum. Fusce aliquam urna a elit finibus, at blandit quam mollis. Nulla ut est leo. Morbi id posuere arcu. Vestibulum gravida accumsan eros eu pharetra. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Praesent elementum sem vitae lorem tincidunt elementum.
| 110.037037 | 792 | 0.812858 | cat_Latn | 0.166654 |
9bef99c48527099f6367ff2edfff445133574ace | 11,235 | md | Markdown | second-edition/es/src/ch05-03-method-syntax.md | jenguidanos/rust-book-es | a7af1cfec6ee91a69fff50830eace726413cc2e4 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 19 | 2018-09-29T20:47:02.000Z | 2022-03-29T04:08:57.000Z | second-edition/es/src/ch05-03-method-syntax.md | jenguidanos/rust-book-es | a7af1cfec6ee91a69fff50830eace726413cc2e4 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 2 | 2018-09-29T21:43:25.000Z | 2019-07-18T16:41:37.000Z | second-edition/es/src/ch05-03-method-syntax.md | jenguidanos/rust-book-es | a7af1cfec6ee91a69fff50830eace726413cc2e4 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 5 | 2019-09-29T10:24:21.000Z | 2021-02-20T23:29:44.000Z | ## Sintaxis del método
*Los métodos* son similares a las funciones: se declaran con la palabra clave `fn` y su nombre,
pueden tener parámetros y un valor de retorno, y contienen algún código que se ejecuta cuando
se llaman desde otro lugar. Sin embargo, los métodos son diferentes de las funciones en que están
definidos dentro del contexto de una estructura (o un objeto enum o un *trait*, que cubrimos en los
Capítulos 6 y 17, respectivamente), y su primer parámetro es siempre `self`, que representa la
instancia de la estructura a la que se llama el método.
### Definición de métodos
Cambiemos la función `area` que tiene una instancia `Rectangle` como parámetro y,
en su lugar, hagamos que un método `area` se defina en la estructura `Rectangle`,
como se muestra en el Listado 5-13.
<span class="filename">Filename: src/main.rs</span>
```rust
#[derive(Debug)]
struct Rectangle {
width: u32,
height: u32,
}
impl Rectangle {
fn area(&self) -> u32 {
self.width * self.height
}
}
fn main() {
let rect1 = Rectangle { width: 30, height: 50 };
println!(
"El área del rectángulo es {} píxeles cuadrados.",
rect1.area()
);
}
```
<span class="caption">Listing 5-13: Definir un método `area` en la estructura
`Rectangle`</span>
Para definir la función dentro del contexto de `Rectangle`, comenzamos un bloque `impl`
(implementación). Luego movemos la función `area` dentro de las llaves de `impl`
y cambiar el primer parámetro (y en este caso, solo) a ser `self` en la firma y
en todas partes dentro del cuerpo. En `main`, donde llamó a la función `area` y
pasó `rect1` como argumento, podemos en su lugar usar *method syntax* para llamar
al método `area` en nuestra instancia `Rectangle`.
La sintaxis del método va después de una instancia: agregamos un punto seguido por el método
nombre, paréntesis y cualquier argumento.
En la firma de `area`, usamos `&self` en lugar de `rectangle: &Rectangle`
porque Rust sabe que el tipo de `self` es `Rectangle` debido a que este método es
dentro del contexto `impl Rectangle`. Tenga en cuenta que todavía necesitamos usar el `&`
antes de `self`, tal como lo hicimos en `&Rectangle`. Los métodos pueden tomar posesión de
`self`, tomar `self` inmutablemente como lo hemos hecho aquí, o tomar `self` mutable,
al igual que cualquier otro parámetro.
Hemos elegido `&self` aquí por la misma razón que usamos `&Rectangle` en
versión de función: no queremos tomar posesión, y solo queremos leer el
datos en la estructura, no escribir en ella. Si quisiéramos cambiar la instancia que
llamamos al método como parte de lo que hace el método, usamos `&mut self`
como el primer parámetro. Tener un método que tome posesión del
instancia usando solo `self` ya que el primer parámetro es raro; esta técnica se usa generalmente
cuando el método se transforma `self` en otra cosa y quieres
para evitar que la persona que llama use la instancia original después de la transformación.
El principal beneficio de usar métodos en lugar de funciones, además de usar
sintaxis del método y no tener que repetir el tipo de `self` en cada método
firma, es para la organización. Hemos puesto todas las cosas que podemos hacer con una
instancia de un tipo en un bloque `impl` en lugar de hacer que los futuros usuarios de nuestro
código busquen las capacidades de `Rectangle` en varios lugares de la biblioteca
que proporcionamos.
> ### ¿Dónde está el operador `->`?
>
> En C y C ++, se utilizan dos operadores diferentes para los métodos de llamada: utiliza `.`
> si está llamando directamente a un método en el objeto y `-> ` si llama al método en un puntero al objeto
> y necesita para desreferenciar el puntero primero. En otras palabras, si `object` es un puntero,
> `object-> something()` es similar a `(* object).something () `.
>
> Rust no tiene un equivalente al operador `->`; en cambio, Rust tiene una característica llamada
> *automatic referencing and dereferencing* (*referencia automática y eliminación de referencias*).
> Los métodos de llamada son uno de los pocos
> lugares en Rust que tienen este comportamiento.
>
> Así es como funciona: cuando llamas a un método con `object.something ()`,
> Rust automáticamente agrega `&`, `&mut`, o `*`para que `object` coincida con
> la firma del método. En otras palabras, los siguientes son los mismos:
>
>
> ```rust
> # #[derive(Debug,Copy,Clone)]
> # struct Point {
> # x: f64,
> # y: f64,
> # }
> #
> # impl Point {
> # fn distance(&self, other: &Point) -> f64 {
> # let x_squared = f64::powi(other.x - self.x, 2);
> # let y_squared = f64::powi(other.y - self.y, 2);
> #
> # f64::sqrt(x_squared + y_squared)
> # }
> # }
> # let p1 = Point { x: 0.0, y: 0.0 };
> # let p2 = Point { x: 5.0, y: 6.5 };
> p1.distance(&p2);
> (&p1).distance(&p2);
> ```
>
> El primero parece mucho más limpio. Este comportamiento de referencia automática funciona porque
> los métodos tienen un receptor claro, el tipo de `self`. Dado el receptor y el nombre de un método,
> Rust puede determinar definitivamente si el método es la lectura (`& self`), la mutación (`&mut self`),
> o el consumo (`self`). El hecho de que Rust haga que los préstamos sean implícitos para los receptores
> de métodos es una gran parte de hacer que la propiedad sea ergonómica en la práctica.
### Métodos con más parámetros
Practiquemos el uso de métodos implementando un segundo método en la estructura `Rectangle`.
Esta vez, queremos una instancia de `Rectangle` para tomar otra instancia de `Rectangle` y
devolver `true` si el segundo `Rectangle` puede caber completamente dentro de `self`; de lo
contrario, debería devolver `false`. Es decir, queremos poder escribir el programa que se
muestra en el listado 5-14, una vez que hayamos definido el método `can_hold`.
<span class="filename">Filename: src/main.rs</span>
```rust,ignore
fn main() {
let rect1 = Rectangle { width: 30, height: 50 };
let rect2 = Rectangle { width: 10, height: 40 };
let rect3 = Rectangle { width: 60, height: 45 };
println!("Can rect1 hold rect2? {}", rect1.can_hold(&rect2));
println!("Can rect1 hold rect3? {}", rect1.can_hold(&rect3));
}
```
<span class="caption">Listing 5-14: Usando el método aún no escrito `can_hold`</span>
Y la salida esperada se vería como la siguiente, porque ambas dimensiones de `rect2`
son más pequeñas que las dimensiones de` rect1` pero `rect3` es más ancha que` rect1`:
```text
Can rect1 hold rect2? true
Can rect1 hold rect3? false
```
Sabemos que queremos definir un método, por lo que estará dentro del bloque `impl Rectangle`.
El nombre del método será `can_hold`, y tomará un préstamo inmutable de otro `Rectangle` como parámetro.
Podemos decir cuál será el tipo de parámetro mirando el código que llama al método:
`rect1.can_hold(&rect2)` pasa en `&rect2`, que es un préstamo inmutable a `rect2`,
una instancia de `Rectangle`. Esto tiene sentido porque solo necesitamos leer `rect2`
(en lugar de escribir, lo que significaría que necesitaríamos un préstamo mutable),
y queremos que `main` retenga la propiedad de `rect2` para que podamos usarlo
nuevamente después de llamar el método `can_hold`. El valor de retorno de `can_hold`
será un booleano, y la implementación comprobará si el ancho y la altura de `self`
son mayores que el ancho y el alto del otro `Rectangle`, respectivamente.
Agreguemos el nuevo método `can_hold` al bloque `impl` del Listado 5-13,
que se muestra en el Listado 5-15.
<span class="filename">Filename: src/main.rs</span>
```rust
# #[derive(Debug)]
# struct Rectangle {
# width: u32,
# height: u32,
# }
#
impl Rectangle {
fn area(&self) -> u32 {
self.width * self.height
}
fn can_hold(&self, other: &Rectangle) -> bool {
self.width > other.width && self.height > other.height
}
}
```
<span class="caption">Listing 5-15: Implementando el método `can_hold` en `Rectangle`
que toma otra instancia `Rectangle` como parámetro</span>
Cuando ejecutamos este código con la función `main` en el listado 5-14,
obtendremos nuestro resultado deseado. Los métodos pueden tomar múltiples
parámetros que agregamos a la firma después del parámetro `self`, y esos parámetros
funcionan igual que los parámetros en las funciones.
### Funciones asociadas
Otra característica útil de los bloques `impl` es que podemos definir funciones
dentro de bloques `impl` que *no* tomen `self` como parámetro. Estas se llaman
*funciones asociadas* porque están asociadas con la estructura. Siguen siendo funciones,
no métodos, porque no tienen una instancia de la estructura con la que trabajar. Ya has
usado la función asociada `String::from`.
Las funciones asociadas se utilizan a menudo para los constructores que devolverán
una nueva instancia de la estructura. Por ejemplo, podríamos proporcionar una función
asociada que tuviera un parámetro de una dimensión y usarla como ancho y alto, lo que
facilitaría la creación de un cuadrado `Rectangle` en lugar de tener que especificar el
mismo valor dos veces:
<span class="filename">Filename: src/main.rs</span>
```rust
# #[derive(Debug)]
# struct Rectangle {
# width: u32,
# height: u32,
# }
#
impl Rectangle {
fn square(size: u32) -> Rectangle {
Rectangle { width: size, height: size }
}
}
```
Para llamar a esta función asociada, usamos la sintaxis `::` con el nombre de la estructura;
`let sq = Rectangle::square(3);` es un ejemplo. Esta función tiene el espacio de nombres
asignado por la estructura: la sintaxis `::` se usa para las funciones asociadas y los espacios
de nombres creados por los módulos. Discutiremos los módulos en el Capítulo 7.
### Multiple `impl` Blocks

Each struct is allowed to have multiple `impl` blocks. For example, Listing 5-15 is
equivalent to the code shown in Listing 5-16, which has each method in its own `impl`
block.

```rust
# #[derive(Debug)]
# struct Rectangle {
#     width: u32,
#     height: u32,
# }
#
impl Rectangle {
    fn area(&self) -> u32 {
        self.width * self.height
    }
}

impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}
```

<span class="caption">Listing 5-16: Rewriting Listing 5-15 using multiple `impl`
blocks</span>

There's no reason to separate these methods into multiple `impl` blocks here, but this
is valid syntax. We'll see a case in which multiple `impl` blocks are useful in
Chapter 10, where we discuss generic types and traits.

## Summary

Structs let you create custom types that are meaningful for your domain. By using
structs, you can keep associated pieces of data connected to each other and name each
piece to make your code clear. Methods let you specify the behavior that instances of
your structs have, and associated functions let you namespace functionality that is
particular to your struct without having an instance available.

But structs aren't the only way you can create custom types: let's turn to Rust's enum
feature to add another tool to your toolbox.
# Changelog for 6.x
This changelog references the relevant changes (bug and security fixes) done to `orchestra/kernel`.
## 6.0.0
Released: 2021-04-18
### Changes
* Update support for Laravel Framework v8.
---
title: frint-react-server
importFromGitHub: "frintjs/frint/master/packages/frint-react-server/README.md"
path: "/docs/packages/frint-react-server"
---
## ActivityThread
The ActivityThread is the core of Playwell. All of our business logic is ultimately executed with the ActivityThread as its carrier.

Playwell implements lightweight scheduling of ActivityThreads on top of the JVM thread pool, without the context-switching overhead of underlying operating system threads. Because SyncActions and AsyncActions are strictly distinguished, the whole scheduling process is non-blocking, which lets Playwell schedule tens of thousands of ActivityThread instances per second. Combined with an LSM store such as RocksDB, the state of large numbers of inactive ActivityThreads can be persisted and woken up later. Put together, this means the system can hold tens of millions of ActivityThreads, with tens of thousands of them active at the same time.

**ActivityThread life cycle**

The ActivityThread life cycle basically follows the flow: spawn -> suspending -> running -> waiting -> running -> finish & destroy.

When an event satisfies a trigger's condition, an ActivityThread is created and starts executing from its first Action. If that is a SyncAction, it executes directly and returns a result; if it is an AsyncAction, a request message is sent out and the thread waits, continuing execution when the response message arrives. If the current ActivityThread is runnable but, because of the scheduling algorithm, has not yet been formally scheduled for execution, it is in the suspending state.

An ActivityThread ends, and releases all of its state from storage, when it runs into one of the following situations (meaning you can no longer look it up; if it is triggered again later, that counts as a new instance):

* The `finish` control function is called: a normal finish.
* The `fail` control function is called: the thread ends because of a business-logic error.
* A system exception occurred, for example an error inside the scheduler itself.
### Basic operations

In an operating system, we can usually operate on processes quite conveniently with system tools, but performing a specific operation on an individual thread inside a process is not nearly as easy. Playwell lets us inspect the execution state of a specific ActivityThread, or apply a series of operations to it, directly through the API or the client tools.

In Playwell, we usually locate a specific ActivityThread by two coordinates: the Activity ID and the Domain ID.

#### Viewing

The following command outputs the current status and context variables of a single ActivityThread.
```shell
playwell thread get --activity_id <Activity ID> --domain_id <Domain ID>
```
#### Pausing

Pauses the execution of an ActivityThread. A paused ActivityThread will not continue executing and will no longer respond to external messages.
```shell
playwell thread pause --activity_id <Activity ID> --domain_id <Domain ID>
```
#### Resuming from pause
```shell
playwell thread continue --activity_id <Activity ID> --domain_id <Domain ID>
```
#### kill
kill completely "kills" an ActivityThread: it stops executing, and all of its state is destroyed from storage.
```shell
playwell thread kill --activity_id <Activity ID> --domain_id <Domain ID>
```
### Scanning

Playwell lets us scan and filter ActivityThreads in bulk by specified conditions, and apply a given operation to the results.
```shell
playwell thread scan \
--conditions conditions.json
--limit 1000
--log_per_records 5
--mark test
--remove_thread
```
Parameter description:

* `conditions` The filter conditions; accepts either a JSON array string containing the conditions or a file path. If not specified, all ActivityThreads on the current node are scanned.
* `limit` The maximum number of matching records; unlimited if not specified.
* `log_per_records` The system writes the matching results to the scan.log log file. If a huge number of ActivityThreads match, the log output becomes very large and greatly hurts scan performance; this option makes the system write one sampled log entry per this many matched records.
* `mark` A string identifying this scan, written into scan.log so this scan's results can be told apart from others. If not specified, the system generates a random identifier.
* `remove_thread` If this option is specified, the matching ActivityThreads are deleted.

Note that if a scan involves write operations, such as deletion, the Playwell node suspends execution until the scan finishes. Also, only one scan operation may exist on a node at any given time; additional scan requests are rejected.

**Filter conditions**

The filter conditions are a JSON array of expressions, which are combined with OR semantics:
```json
[
"var('a') == 1", # 筛选出上下文变量包含a = 1的ActivityThread
"activityThread.currentAction == 'push'", # 筛选出当前执行单元为push的ActivityThread
"activityThread.activityId == 1", # 筛选出Activity为1的ActivityThread
"activityThread.domainId.startsWith('a')" # 筛选出Domain ID以'a'开头的ActivityThread
]
```
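For example, a sketch of a scan that deletes every thread belonging to Activity 1 (the conditions file content and the mark string here are made up for illustration):

```shell
# conditions.json contains: ["activityThread.activityId == 1"]
playwell thread scan \
    --conditions conditions.json \
    --limit 500 \
    --mark cleanup_activity_1 \
    --remove_thread
```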
**Stopping a scan**

If the amount of data being scanned is large, the scan may take a while. We can also stop a scan operation at any point along the way:
```shell
playwell thread stop_scan
```
---
title: Using row-level security with Power BI embedded content
description: Learn the steps you need to take to embed Power BI content within your application
author: KesemSharabi
ms.author: kesharab
ms.reviewer: nishalit
ms.service: powerbi
ms.subservice: powerbi-developer
ms.topic: conceptual
ms.date: 06/10/2019
ms.openlocfilehash: 09489c3dbb33e1c5fb289cc1cc132eae0083a95f
ms.sourcegitcommit: 02484b2d7a352e96213353702d60c21e8c07c6c0
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/13/2020
ms.locfileid: "91981732"
---
# <a name="row-level-security-with-power-bi-embedded"></a>Row-level security with Power BI Embedded

**Row-level security (RLS)** can be used to restrict user access to data within dashboards, tiles, reports, and datasets. Different users can work with those same artifacts all while seeing different data. Embedding supports RLS.

If you're embedding a report for non-Power BI users (app owns data), typically an ISV scenario, then this article is for you. Configure the embed token to account for the user and role.

If you're embedding reports for Power BI users (user owns data) within your organization, RLS works the same way as it does in the Power BI service directly. There's nothing more you need to do in your application. For more information, see [Row-level security with Power BI](../../admin/service-admin-rls.md).

To take advantage of RLS, it's important to understand three main concepts: users, roles, and rules. Let's take a closer look at each:

**Users** - The end users viewing the artifact (dashboard, tile, report, or dataset). In Power BI Embedded, users are identified by the username property in an embed token.

**Roles** - Users belong to roles. A role is a container for rules and can be named something like *Sales Manager* or *Sales Rep*. You create roles within Power BI Desktop. For more information, see [Row-level security with Power BI Desktop](../../create-reports/desktop-rls.md).

**Rules** - Roles have rules, and those rules are the actual filters that are going to be applied to the data. A rule could be as simple as "Country = USA" or something much more dynamic.

The rest of this article walks through an example of authoring RLS and then consuming it within an embedded application. This example uses the [Retail Analysis Sample](https://go.microsoft.com/fwlink/?LinkID=780547) PBIX file.

## <a name="adding-roles-with-power-bi-desktop"></a>Adding roles with Power BI Desktop

Our **Retail Analysis sample** shows sales for all the stores in a retail chain. Without RLS, no matter which district manager signs in and views the report, they see the same data. Senior management has determined that each district manager should only see the sales for the stores they manage. RLS allows senior management to restrict data based on a district manager.

RLS is authored in Power BI Desktop. When the dataset and report are opened, you can switch to diagram view to see the schema:

Here are a few things to notice with this schema:

* All measures, like **Total Sales**, are stored in the **Sales** fact table.
* There are four additional related dimension tables: **Item**, **Time**, **Store**, and **District**.
* The arrows on the relationship lines indicate which way filters can flow from one table to another. For example, if a filter is placed on **Time[Date]**, in the current schema it would only filter down values in the **Sales** table. No other tables are affected by this filter, since all the arrows on the relationship lines point to the Sales table and not away from it.
* The **District** table indicates who the manager is for each district:

Based on this schema, if you apply a filter to the **District Manager** column in the **District** table, and if that filter matches the user viewing the report, it also filters down the **Store** and **Sales** tables to show data for that district manager only.

Here's how:

1. On the **Modeling** tab, select **Manage Roles**.

2. Create a new role called **Manager**.

3. In the **District** table, enter the following DAX expression: **[District Manager] = USERNAME()**.

4. To make sure the rules are working, on the **Modeling** tab, select **View as Roles**, and then select both the **Manager** role you created and **Other users**. Enter **Andrew Ma** for the user.

The reports show data as if you were signed in as **Andrew Ma**.

Applying the filter the way you did here filters down all records in the **District**, **Store**, and **Sales** tables. However, because of the filter direction on the relationships between **Sales** and **Time**, **Sales** and **Item**, and **Item** and **Time**, those tables aren't filtered down. To learn more about bidirectional cross-filtering, download the [Bidirectional cross-filtering in SQL Server Analysis Services 2016 and Power BI Desktop](https://download.microsoft.com/download/2/7/8/2782DF95-3E0D-40CD-BFC8-749A2882E109/Bidirectional%20cross-filtering%20in%20Analysis%20Services%202016%20and%20Power%20BI.docx) whitepaper.

## <a name="applying-user-and-role-to-an-embed-token"></a>Applying user and role to an embed token

Now that you have your Power BI Desktop roles configured, there's some work needed in your application to take advantage of the roles.

Users are authenticated and authorized by your application, and embed tokens are used to grant user access to a specific Power BI Embedded report. Power BI Embedded doesn't have specific information on who your user is. For RLS to work, you need to pass some additional context as part of your embed token in the form of identities. You can pass the identities by using the [Embed Token](/rest/api/power-bi/embedtoken) API.

The API accepts a list of identities with an indication of the relevant datasets. For RLS to work, you need to pass the following pieces as part of the identity.

* **username (mandatory)** - A string that can be used to help identify the user when applying RLS rules. Only a single user can be listed. Your username can be created with *ASCII* characters.
* **roles (mandatory)** - A string containing the roles to select when applying RLS rules. If you're passing more than one role, they should be passed as a string array.
* **dataset (mandatory)** - The dataset that is applicable for the artifact you're embedding.

You can create the embed token by using the **GenerateTokenInGroup** method on **PowerBIClient.Reports**.

For example, you could change the *[PowerBI-Developer-Samples](https://github.com/Microsoft/PowerBI-Developer-Samples) > .NET Framework > Embed for your customers > **PowerBIEmbedded_AppOwnsData*** sample.

**Before the change**
```csharp
// Generate Embed Token with effective identities.
generateTokenRequestParameters = new GenerateTokenRequest(accessLevel: "view", identities: new List<EffectiveIdentity> { rls });
// Generate Embed Token for reports without effective identities.
generateTokenRequestParameters = new GenerateTokenRequest(accessLevel: "view");
```
**After the change**
```csharp
var generateTokenRequestParameters = new GenerateTokenRequest("View", null, identities: new List<EffectiveIdentity> { new EffectiveIdentity(username: "username", roles: new List<string> { "roleA", "roleB" }, datasets: new List<string> { "datasetId" }) });
var tokenResponse = await client.Reports.GenerateTokenInGroupAsync("groupId", "reportId", generateTokenRequestParameters);
```
If you're calling the REST API, the updated API now accepts an additional JSON array, named **identities**, containing a username, a list of string roles, and a list of string datasets.

Use the following code as an example:
```json
{
"accessLevel": "View",
"identities": [
{
"username": "EffectiveIdentity",
"roles": [ "Role1", "Role2" ],
"datasets": [ "fe0a1aeb-f6a4-4b27-a2d3-b5df3bb28bdc" ]
}
]
}
```
Now, when someone signs in to your application to view this artifact, they only see the data that they're allowed to see, as defined by RLS.

## <a name="working-with-analysis-services-live-connections"></a>Working with Analysis Services live connections

Row-level security can be used with Analysis Services live connections for on-premises servers. There are a few specific concepts that you should understand when using this type of connection.

The effective identity provided for the username property must be a Windows user with permissions on the Analysis Services server.
>[!NOTE]
> When using a service principal with an [Azure Analysis Services](/azure/analysis-services/analysis-services-overview) data source, the service principal itself must have Azure Analysis Services instance permissions. Using a security group that contains the service principal for this purpose doesn't work.
### <a name="on-premises-data-gateway-configuration"></a>On-premises data gateway configuration

An [on-premises data gateway](../../connect-data/service-gateway-onprem.md) is used when working with Analysis Services live connections. When you generate an embed token with an identity listed, the master account needs to be listed as an admin of the gateway. If the master account isn't listed, RLS isn't applied to the data. A non-admin of the gateway can provide roles, but must specify its own username as the effective identity.

### <a name="use-of-roles"></a>Use of roles

Roles can be provided with the identity in an embed token. If no role is provided, the username that was provided can be used to resolve the associated roles.

### <a name="using-the-customdata-feature"></a>Using the CustomData feature

CustomData only works for models that reside in **Azure Analysis Services**, and it only works in **live connect** mode. Unlike users and roles, the custom data feature can't be set inside a .pbix file. When generating a token with the custom data feature, you must have a username.
>[!NOTE]
>The CustomData username can't be longer than 256 characters.

The CustomData feature lets you add a row filter when viewing Power BI data in your application when you use **Azure Analysis Services** as your data source (viewing Power BI data connected to Azure Analysis Services in your application).

The CustomData feature allows passing free text (a string) using the CustomData connection string property. Analysis Services uses this value via the *CUSTOMDATA()* function.

The only way to have dynamic RLS (which uses dynamic values for filter evaluation) in **Azure Analysis Services** is using the *CUSTOMDATA()* function.

You can use it inside the role DAX query, and you can use it without any role in a measure DAX query.

The CustomData feature is part of our token generation functionality for the following artifacts: dashboard, report, and tile. Dashboards can have multiple CustomData identities (one per tile/model).
#### <a name="customdata-sdk-additions"></a>CustomData SDK additions

The CustomData string property was added to our effective identity in the token generation scenario.
```json
[JsonProperty(PropertyName = "customData")]
public string CustomData { get; set; }
```
L’identité peut être créée avec des données personnalisées à l’aide de l’appel suivant :
```csharp
public EffectiveIdentity(string username, IList<string> datasets, IList<string> roles = null, string customData = null);
```
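The same identity expressed in C# (a sketch that mirrors the constructor above; the UPN and dataset ID here are placeholders):

```csharp
var identity = new EffectiveIdentity(
    username: "masterUser@contoso.com", // placeholder: must equal the master user's UPN
    datasets: new List<string> { "fe0a1aeb-f6a4-4b27-a2d3-b5df3bb28bdc" },
    customData: "MyCustomData");

var generateTokenRequestParameters = new GenerateTokenRequest("View", null, identities: new List<EffectiveIdentity> { identity });
```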
#### <a name="customdata-sdk-usage"></a>CustomData SDK usage

If you're calling the REST API, you can add custom data inside each identity, for example:
```json
{
"accessLevel": "View",
"identities": [
{
"username": "EffectiveIdentity",
"roles": [ "Role1", "Role2" ],
"customData": "MyCustomData",
"datasets": [ "fe0a1aeb-f6a4-4b27-a2d3-b5df3bb28bdc" ]
}
]
}
```
Here are the steps to begin setting up the CustomData() feature with your Power BI Embedded application.

1. Create your Azure Analysis Services database. Then sign in to your Azure Analysis Services server through [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).


2. Create a role in the Analysis Services server.

3. Set your **General** settings. Here you give the **Role Name** and set the database permissions to **Read** only.

4. Set the **Membership** settings. Here you add the users that are affected by this role.

5. Set your **Row filters** DAX query using the *CUSTOMDATA()* function.

6. Build a Power BI report and publish it to a workspace with capacity.

7. Use the Power BI APIs to apply the CustomData feature in your application. When generating a token with the custom data feature, you must have a username. The username must be equal to the UPN of the master user. The master user must be a member of the role(s) you created. If no role(s) are specified, then all the roles the master user is a member of are used for RLS evaluation.

When working with a [service principal](embed-service-principal.md), you should also do the steps above instead of using a master account. When generating an embed token, use the [service principal object ID](embed-service-principal.md) as the username.

> [!Note]
> When you're ready to deploy your application to production, the master account user field or option shouldn't be visible to the end user.

View the [code](#customdata-sdk-additions) to add the CustomData feature.

8. Now you can view the report in your application before applying the custom data value(s), to see all the data your report holds.

Then apply the custom data value(s) to see how the report displays a different set of data.

## <a name="using-rls-vs-javascript-filters"></a>Using RLS vs. JavaScript filters

When deciding how to filter your data in a report, you can use **row-level security (RLS)** or **JavaScript filters**.

[Row-level security](../../admin/service-admin-rls.md) is a feature that filters data at the data model level. Your backend data source controls your RLS settings. Based on your data model, the embed token generation sets the username and the roles for the session. This information cannot be overridden, removed, or controlled by the client-side code, which is why it's considered secure. We recommend using RLS for filtering data securely. You can filter data with RLS by using one of the options below.

* [Configuring roles in a Power BI report](../../create-reports/desktop-rls.md).
* Configuring roles at the data source level (Analysis Services live connection only).
* Programmatically with an [Embed Token](/rest/api/power-bi/embedtoken/datasets_generatetokeningroup) using `EffectiveIdentity`. When using an embed token, the actual filter passes through the embed token for a specific session.

[JavaScript filters](https://github.com/Microsoft/PowerBI-JavaScript/wiki/Filters#page-level-and-visual-level-filters) are used to allow the user to consume a reduced, scoped, or filtered view of the data. However, the user still has access to the model schema tables, columns, and measures, and can potentially access any data there. Restricted access to the data can only be applied with RLS, not through client-side filtering APIs.
## <a name="token-based-identity-with-azure-sql-database"></a>Identité basée sur les jetons avec Azure SQL Database
**L’identité basée sur les jetons** vous permet de spécifier l’identité effective pour un jeton incorporé à l’aide d’un jeton d’accès **Azure Active Directory (AAD)** pour une base de données **Azure SQL Database**.
Les clients qui conservent leurs données dans **Azure SQL Database** bénéficient désormais d’une nouvelle fonctionnalité permettant de gérer les utilisateurs et leur accès aux données dans Azure SQL lors de l’intégration avec **Power BI Embedded**.
Lorsque vous générez le jeton d’incorporation, vous pouvez spécifier l’identité effective d’un utilisateur dans Azure SQL. Vous pouvez spécifier l’identité effective d’un utilisateur en passant le jeton d’accès AAD au serveur. Le jeton d’accès est utilisé pour extraire uniquement les données pertinentes pour cet utilisateur à partir d’Azure SQL pour cette session spécifique.
Il peut être utilisé pour gérer l’affichage de chaque utilisateur dans Azure SQL ou se connecter à Azure SQL en tant que client spécifique dans une base de données multi-locataire. Il peut également appliquer la sécurité au niveau des lignes sur cette session dans Azure SQL et récupérer uniquement les données pertinentes pour cette session, ce qui évite d’avoir à gérer la SNL dans Power BI.
Ces problèmes d’identité effective s’appliquent à des règles SNL directement sur le serveur Azure SQL. Power BI Embedded utilise le jeton d’accès fourni lors de l’interrogation des données à partir du serveur Azure SQL. L’UPN de l’utilisateur (pour lequel le jeton d’accès a été fourni) est accessible suite à la fonction SQL USER_NAME().
L’identité basée sur les jetons fonctionne uniquement pour les modèles DirectQuery sur une capacité, connectée à Azure SQL Database, qui est configuré pour autoriser l’authentification AAD ([en savoir plus sur l’authentification AAD pour Azure SQL Database](/azure/sql-database/sql-database-manage-logins)). La source de données du jeu de données doit être configurée pour utiliser les informations d’identification OAuth2 des utilisateurs finaux, en vue d’utiliser l’identité basée sur les jetons.

### <a name="token-based-identity-sdk-additions"></a>Ajouts au SDK de l’identité basée sur les jetons
La propriété de blob d’identité a été ajoutée à notre identité effective dans le scénario de génération de jetons.
```JSON
[JsonProperty(PropertyName = "identityBlob")]
public IdentityBlob IdentityBlob { get; set; }
```
The IdentityBlob type is a simple JSON structure holding a value string property
```JSON
[JsonProperty(PropertyName = "value")]
public string value { get; set; }
```
An EffectiveIdentity can be created with the identity blob using the following call:
```C#
public EffectiveIdentity(string username, IList<string> datasets, IList<string> roles = null, string customData = null, IdentityBlob identityBlob = null);
```
The identity blob can be created using the following call.
```C#
public IdentityBlob(string value);
```
### <a name="token-based-identity-rest-api-usage"></a>Token-based identity REST API usage

If you're calling [the REST API](/rest/api/power-bi/embedtoken/reports_generatetokeningroup#definitions), you can add the identity blob within each identity.
```JSON
{
"accessLevel": "View",
"identities": [
{
"datasets": ["fe0a1aeb-f6a4-4b27-a2d3-b5df3bb28bdc"],
"identityBlob": {
"value": "eyJ0eXAiOiJKV1QiLCJh…."
}
}
]
}
```
The value provided in the identity blob should be a valid access token to Azure SQL Server (with a resource URL of <https://database.windows.net/>).

> [!Note]
> To be able to create an access token for Azure SQL, the application must have the **Access Azure SQL DB and Data Warehouse** delegated permission to the **Azure SQL Database** API on the AAD app registration configuration in the Azure portal.



## <a name="on-premises-data-gateway-with-service-principal"></a>On-premises data gateway with service principal

Customers that configure RLS using an SSAS (SQL Server Analysis Services) on-premises live connection data source can enjoy the new [service principal](embed-service-principal.md) capability to manage users and their access to data in SSAS when integrating with **Power BI Embedded**.

Using the [Power BI REST APIs](/rest/api/power-bi/) allows you to specify the effective identity for SSAS on-premises live connections for an embed token, using a [service principal object](/azure/active-directory/develop/app-objects-and-service-principals#service-principal-object).

Until now, to specify the effective identity for an SSAS on-premises live connection, the master user generating the embed token had to be a gateway admin. Now, instead of requiring the user to be a gateway admin, the gateway admin can grant the user a dedicated permission to that data source, which allows the user to override the effective identity when generating the embed token. This new ability enables embedding with a service principal for a live SSAS connection.

To enable this scenario, the gateway admin uses the [add datasource user REST API](/rest/api/power-bi/gateways/adddatasourceuser) to give the service principal the *ReadOverrideEffectiveIdentity* permission for Power BI Embedded.

You can't set this permission using the admin portal. This permission is only set with the API. In the admin portal, you see an indication for users and SPNs that have such permissions.

## <a name="considerations-and-limitations"></a>Considerations and limitations

* Assignment of users to roles within the Power BI service doesn't affect RLS when using an embed token.
* While the Power BI service doesn't apply the RLS setting to admins or members with edit permissions, when you supply an identity with an embed token, it's applied to the data.
* Analysis Services live connections are supported for on-premises servers.
* Azure Analysis Services live connections support filtering by roles. Dynamic filtering can be done using CustomData.
* If the underlying dataset doesn't require RLS, the GenerateToken request must **not** contain an effective identity.
* If the underlying dataset is a cloud model (cached model or DirectQuery), the effective identity must include at least one role; otherwise, role assignment doesn't occur.
* A list of identities enables multiple identity tokens for dashboard embedding. For all other artifacts, the list contains a single identity.

### <a name="token-based-identity-limitations"></a>Token-based identity limitations

* You can use RLS only if you have a capacity.
* RLS doesn't work on-premises with SQL Server.

More questions? [Try asking the Power BI Community](https://community.powerbi.com/) | 81.741379 | 755 | 0.782254 | fra_Latn | 0.978704 |
9bf4a3a8eebd66928726c9870ee1a69c02f27d48 | 7,312 | md | Markdown | gcm-webhook/README.md | couchbaselabs/mini-hacks | 8cbe0de5d7d1b0c28713235aa927939c99a39956 | [
"MIT"
] | 145 | 2015-01-02T12:44:52.000Z | 2021-03-23T19:37:52.000Z | gcm-webhook/README.md | mohammedajaroud/mini-hacks | 8cbe0de5d7d1b0c28713235aa927939c99a39956 | [
"MIT"
] | 19 | 2015-01-07T22:25:33.000Z | 2017-01-05T15:43:07.000Z | gcm-webhook/README.md | mohammedajaroud/mini-hacks | 8cbe0de5d7d1b0c28713235aa927939c99a39956 | [
"MIT"
] | 61 | 2015-01-04T00:22:08.000Z | 2020-05-29T14:35:00.000Z | # Couchbase by Example: Sync Gateway Webhooks
In the previous post, you learned how to set up Google Cloud Messaging with the Service Worker and Push API to handle notifications and used PouchDB + Sync Gateway to sync registration tokens. In this tutorial, you will focus exclusively on webhooks to dispatch the notifications to particular users.
You will continue building Timely News, a news application to notify users of new articles matching their topics of interest.
## Scenarios
There are different scenarios for sending a push notification:
- **Group Messaging**: this concept was introduced in GCM to send notifications to up to 20 devices simultaneously. It’s very well suited for sending notifications to all devices that belong to a single user
- **Up and Down**: a user updated a document and other users should be notified about it through a Push Notification
## Data Model
Let’s start with the smallest document, a Profile document holding registration tokens of the user’s devices and topics of interest:
    {
        "type": "profile",
        "name": "Oliver",
        "subscription": "free", // other values "expired", "premium"
        "topics": ["g20", "science", "nsa", "design"],
        "registration_ids": ["AP91DIwQ", "AP91W9kX"]
    }
And the Article document may have the following properties:
    {
        "type": "article",
        "title": "Design tools for developers",
        "content": "...",
        "topic": "design"
    }
## Group Messaging
Imagine a scenario where a user is currently signed up on a freemium account and inputs an invite code to access the premium plan for a limited time. It would be nice to send a notification to all the user's devices to fetch the additional content.
**Brief**: Send a one-off notification to freemium users that also have an invite code to unlock other devices.
Download the 1.1 release of Sync Gateway:
> http://www.couchbase.com/nosql-databases/downloads#Couchbase_Mobile
You will find the Sync Gateway binary in the `bin` folder and examples of configuration files in the `examples` folder. Copy the `exampleconfig.json` file to the root of your project:
    cp ~/Downloads/couchbase-sync-gateway/examples/exampleconfig.json /path/to/proj/sync-gateway-config.json
Add three users in the configuration file:
    {
        "log": ["CRUD", "HTTP+"],
        "databases": {
            "db": {
                "server": "walrus:",
                "users": {
                    "zack": {
                        "password": "letmein"
                    },
                    "ali": {
                        "password": "letmein"
                    },
                    "adam": {
                        "password": "letmein"
                    },
                    "GUEST": {"disabled": true}
                }
            }
        }
    }
Add a web hook with the following properties in the `db` object:
"event_handlers": {
"document_changed": [
{
"handler": "webhook",
"url": "http://localhost:8000/invitecode",
"filter": `function(doc) {
if (doc.type == "profile" && doc.invite_code) {
return true;
}
return false;
}`
}
]
}
Start Sync Gateway:
    $ ~/Downloads/couchbase-sync-gateway/bin/sync_gateway ./sync-gateway-config.json
Create a new file `main.go` to handle the webhook:
    package main

    import "log"
    import "net/http"

    func main() {
        http.HandleFunc("/invitecode", func(w http.ResponseWriter, r *http.Request) {
            log.Println("ping") // Sync Gateway POSTs each matching document here.
        })
        log.Fatal(http.ListenAndServe(":8000", nil))
    }
Start the Go server:
    $ go run main.go
Using curl, make a POST request to `:4985/db/_bulk_docs` to save 3 Profile documents simultaneously:
    curl -H 'Content-Type: application/json' \
        -vX POST http://localhost:4985/db/_bulk_docs \
        --data @profiles.json
**NOTE**: To save space on the command line, the `--data` argument specifies that the request body is in `profiles.json`.
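The post doesn't include `profiles.json` itself, so here's a plausible shape for it — the `docs` wrapper is the standard `_bulk_docs` request format, and only Ali's profile carries an `invite_code`; the exact field values are assumptions:

    {
        "docs": [
            {"type": "profile", "name": "Zack", "subscription": "free", "topics": ["science", "g20"], "registration_ids": ["AP91DIwQ"]},
            {"type": "profile", "name": "Ali", "subscription": "free", "invite_code": "PREMIUM-2015", "topics": ["nsa"], "registration_ids": ["AP91W9kX"]},
            {"type": "profile", "name": "Adam", "subscription": "free", "topics": ["design"], "registration_ids": ["AP91Z3kQ"]}
        ]
    }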
Notice that only Ali’s Profile document is POSTed to the webhook endpoint:
![][image-1]
In the next section, you will configure a second web hook to notify all users when a new article that matches their interest is published.
## Up and Down
Add another webhook entry that filters only documents of type `article`:
    {
        "handler": "webhook",
        "url": "http://localhost:8000/new_article",
        "filter": `function(doc) {
            if (doc.type == "article") {
                return true;
            }
            return false;
        }`
    }
Add another handler in your Go server:
    http.HandleFunc("/new_article", func(w http.ResponseWriter, r *http.Request) {
        log.Println("ping")
    })
Check that the webhook is working as expected by adding an Article document:
    curl -H 'Content-Type: application/json' \
        -vX POST http://localhost:4985/db/_bulk_docs \
        --data @articles.json
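`articles.json` isn't shown in the original either; given the Article document above and the output below, it presumably wraps that same document in a `docs` array (an assumption):

    {
        "docs": [
            {"type": "article", "title": "Design tools for developers", "content": "...", "topic": "design"}
        ]
    }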
In this case, you have to do a bit more work to figure out which set of users to notify. This is a good use case for a view that indexes the Profile documents, emitting each topic in the topics array as the key and the registration IDs as the value.
To register a view, we can use the Sync Gateway PUT `/_design/ddocname` endpoint with the view definition in the request body:
    curl -H 'Content-Type: application/json' \
        -vX PUT http://localhost:4985/db/_design/extras \
        --data @view.json
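The original post doesn't include `view.json`; a minimal sketch matching the description above might look like this (the map function is an assumption — note that the sample output below shows `null` values, so the author's actual view may have emitted something slightly different):

    {
        "views": {
            "user_topics": {
                "map": "function (doc, meta) { if (doc.type == 'profile' && doc.topics) { for (var i = 0; i < doc.topics.length; i++) { emit(doc.topics[i], doc.registration_ids); } } }"
            }
        }
    }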
Notice that the article we posted above has design as its topic, and the only user subscribed to this topic is Adam. Consequently, if you query that view with the key "design", only one (key, value) pair should be returned, with the topic as the key:
    curl -H 'Content-Type: application/json' \
        -vX GET ':4985/db/_design/extras/_view/user_topics?key="design"'

    < HTTP/1.1 200 OK
    < Content-Length: 95
    < Content-Type: application/json
    * Server Couchbase Sync Gateway/1.1.0 is not blacklisted
    < Server: Couchbase Sync Gateway/1.1.0
    < Date: Wed, 17 Jun 2015 17:46:35 GMT
    <
    * Connection #0 to host left intact
    {"total_rows":1,"rows":[{"id":"4caa204e81b118cf23500f320e138aa8","key":"design","value":null}]}
Now you can edit the handler in `main.go` to query the `user_topics` view, with the key being the topic of the article (you'll also need to import `encoding/json`, `fmt`, and `io/ioutil`):
    http.HandleFunc("/new_article", func(w http.ResponseWriter, r *http.Request) {
        // Decode the Article document that Sync Gateway POSTs to this endpoint.
        var data map[string]interface{}
        body, _ := ioutil.ReadAll(r.Body)
        json.Unmarshal(body, &data)

        topic := data["topic"].(string)
        log.Printf("Querying user Profiles subscribed to %s", topic)

        // Query the user_topics view with the article's topic as the key.
        var stringUrl string = fmt.Sprintf("http://localhost:4985/db/_design/extras/_view/user_topics?key=\"%s\"", topic)
        res, err := http.Get(stringUrl)
        if err != nil {
            fmt.Print(err)
            return
        }
        defer res.Body.Close()

        var result map[string]interface{}
        body, _ = ioutil.ReadAll(res.Body)
        json.Unmarshal(body, &result)
        log.Printf("Result from the user_topics query %v", result["rows"].([]interface{}))
    })
Run the `_bulk_docs` request again and you will see the list of device tokens to use in the logs:
![][image-2]
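The tutorial stops at logging the query result, but the natural next step is to POST those registration IDs to GCM. Here's a minimal sketch of what that dispatch could look like — the endpoint is the legacy GCM HTTP API, and the `GCM_API_KEY` environment variable, payload fields, and function name are assumptions, not part of the original post (add `bytes` and `os` to your imports):

    // sendToGCM is a hypothetical helper: it pushes a data message to the
    // legacy GCM HTTP endpoint for the given registration IDs.
    func sendToGCM(registrationIDs []interface{}, title string) error {
        payload, err := json.Marshal(map[string]interface{}{
            "registration_ids": registrationIDs,
            "data":             map[string]string{"title": title},
        })
        if err != nil {
            return err
        }
        req, err := http.NewRequest("POST", "https://gcm-http.googleapis.com/gcm/send", bytes.NewReader(payload))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/json")
        // The legacy GCM API authenticates with a server API key.
        req.Header.Set("Authorization", "key="+os.Getenv("GCM_API_KEY"))
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer res.Body.Close()
        return nil
    }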
## Conclusion
In this tutorial, you learned how to use webhooks in the scenario of GCM push notifications, and used Sync Gateway views to access additional information at Webhook Time™.
[image-1]: http://i.gyazo.com/7ec3dd332f2d029af364590a4c2e3e63.gif
[image-2]: http://i.gyazo.com/b8c8731e0cbdb5b11e8c35710fe3a092.gif | 35.153846 | 300 | 0.679021 | eng_Latn | 0.923143 |
9bf6b8209b8abc633d0e00632c8bf8217507391a | 328 | md | Markdown | .github/ISSUE_TEMPLATE/bug_report.md | solidoc/issues | 7700881f64ecaa8e38ad07bd3f5793b52c9a141a | [
"MIT"
] | 1 | 2020-10-25T03:58:15.000Z | 2020-10-25T03:58:15.000Z | .github/ISSUE_TEMPLATE/bug_report.md | solidoc/issues | 7700881f64ecaa8e38ad07bd3f5793b52c9a141a | [
"MIT"
] | 20 | 2020-10-25T04:01:47.000Z | 2021-02-01T08:43:34.000Z | .github/ISSUE_TEMPLATE/bug_report.md | solidoc/issues | 7700881f64ecaa8e38ad07bd3f5793b52c9a141a | [
"MIT"
] | null | null | null | ---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**问题描述**
用清晰简单的语言描述问题。
**场景重现**
简单描述哪些操作步骤能够重现问题。
**期待表现**
用简洁明朗的语言描述你对相关交互行为的期待表现。
**操作/报错贴图**
如果可能,加上问题操作或报错的贴图。
**环境 (请完善下列信息):**
- 系统: [e.g. iOS]
- 浏览器 [e.g. chrome, safari]
- 版本 [e.g. 22]
**补充说明**
添加其他关于问题的说明。
| 10.933333 | 41 | 0.625 | yue_Hant | 0.517252 |
9bf86c3224a780a53167d1c0a3aabb1e1e211e30 | 1,783 | md | Markdown | _posts/2012-09-30-0x8004ff01.md | lukemurraynz/lukemurraynz.github.io | a44e4d6071ddd2cd6589c3ad1052494ac7356abb | [
"MIT"
] | null | null | null | _posts/2012-09-30-0x8004ff01.md | lukemurraynz/lukemurraynz.github.io | a44e4d6071ddd2cd6589c3ad1052494ac7356abb | [
"MIT"
] | 1 | 2021-03-04T04:11:29.000Z | 2021-03-04T04:11:29.000Z | _posts/2012-09-30-0x8004ff01.md | lukemurraynz/lukemurraynz.github.io | a44e4d6071ddd2cd6589c3ad1052494ac7356abb | [
"MIT"
] | 1 | 2022-03-13T21:28:16.000Z | 2022-03-13T21:28:16.000Z | ---
id: 794
title: How to fix error 0x8004FF01 while installing Microsoft Security Essentals
date: 2012-09-30T14:55:48+00:00
author: Luke
layout: post
guid: http://techdrive.co.nz/?p=794
permalink: /win/0x8004ff01/
dsq_thread_id:
- "5120706215"
omc_review_enable:
- "0"
omc_user_ratings_visibility:
- "0"
omc_review_type:
- stars
omc_criteria_display:
- 'n'
omc_featured_post:
- "0"
omc_comment_type:
- wp
mfn-post-love:
- "0"
post_views_count:
- "17"
categories:
- Windows
---
_Problems getting Security Essentials installed? Try these tips below._
<ol start="1">
<li>
<strong>Remove</strong> any old <strong>Antivirus</strong> software that could be causing incompatibility with Security Essentials
</li>
<li>
It is likely that some parts of the old antivirus remain, such as Norton&#8217;s Symantec services. Try running the antivirus removal tools from the developers&#8217; websites to remove leftover traces.
</li>
<li>
<strong>Download</strong> & <strong>install</strong> the latest Microsoft <a href="http://www.microsoft.com/download/en/details.aspx?id=8483" target="_blank">Windows Installer</a> & attempt Security Essentials install.
</li>
</ol>
_Still not working? Then try the following._
<ol start="1">
<li>
Run the System File Checker (select Start, then Run, type cmd and press Enter to open Command Prompt; then type sfc /scannow and press Enter) &#8211; you may need your operating system CD at this point if the tool needs to grab files off it.
</li>
<li>
Open Command Prompt, following the instructions above, and type: &#8220;<strong><em>reg delete HKLM\SOFTWARE\Microsoft\SQMClient\Windows\DisabledSessions /va /f</em></strong>&#8221;, then press <strong>Enter</strong>.
</li>
<li>
Attempt reinstall.
</li>
</ol> | 31.839286 | 227 | 0.725743 | eng_Latn | 0.759398 |
9bf8ae09919d84481949fc1f7478e93138f0360f | 19,620 | md | Markdown | windows-apps-src/design/style/reveal.md | Aaron-Junker/windows-uwp.de-de | 7171d224a4a27d04e54ab083568710e32235af3d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/design/style/reveal.md | Aaron-Junker/windows-uwp.de-de | 7171d224a4a27d04e54ab083568710e32235af3d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/design/style/reveal.md | Aaron-Junker/windows-uwp.de-de | 7171d224a4a27d04e54ab083568710e32235af3d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Reveal is a lighting effect that can bring depth and focus to your app's interactive elements.
title: Reveal Highlight
template: detail.hbs
ms.date: 09/24/2020
ms.topic: article
keywords: Windows 10, UWP
pm-contact: kisai
design-contact: conrwi
dev-contact: jevansa
doc-status: Published
ms.localizationpriority: medium
ms.openlocfilehash: 23f23cd65564df5f224696faabae74aa4465a438
ms.sourcegitcommit: eda7bbe9caa9d61126e11f0f1a98b12183df794d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/24/2020
ms.locfileid: "91216866"
---
# <a name="reveal-highlight"></a>Reveal Highlight

Reveal Highlight ist ein Lichteffekt, der interaktive Elemente wie z.B. Befehlsleisten hervorhebt, wenn der Benutzer den Mauszeiger in deren Nähe bewegt.
> **Wichtige APIs:** [„RevealBrush“-Klasse](/uwp/api/windows.ui.xaml.media.revealbrush), [„RevealBackgroundBrush“-Klasse](/uwp/api/windows.ui.xaml.media.revealbackgroundbrush), [„RevealBorderBrush“-Klasse](/uwp/api/windows.ui.xaml.media.revealborderbrush), [„RevealBrushHelper“-Klasse](/uwp/api/windows.ui.xaml.media.revealbrushhelper), [„VisualState“-Klasse](/uwp/api/Windows.UI.Xaml.VisualState)
## <a name="how-it-works"></a>Funktionsweise
Reveal Highlight hebt interaktive Elemente hervor, indem der Container des Elements hervorgehoben wird, wenn sich der Mauszeiger nähert, wie in der folgenden Abbildung gezeigt wird:

Da durch Einblendungen die ausgeblendeten Rahmen um Objekte herum angezeigt werden, entwickeln Benutzer dank Reveal ein besseres Verständnis von dem Raum, mit dem sie interagieren. Darüber hinaus erfahren sie auf diese Weise, welche Aktionen verfügbar sind. Dies ist besonders wichtig bei Listensteuerelementen und Gruppen von Schaltflächen.
## <a name="examples"></a>Beispiele
<table>
<th align="left">XAML Controls Gallery</th>
<tr>
<td><img src="images/xaml-controls-gallery-sm.png" alt="XAML controls gallery"></img></td>
<td>
    <p>If you have the <strong style="font-weight: semi-bold">XAML Controls Gallery</strong> app installed, click here to <a href="xamlcontrolsgallery:/item/Reveal">open the app and see Reveal in action</a>.</p>
    <ul>
    <li><a href="https://www.microsoft.com/store/productId/9MSVH128X2ZT">Get the XAML Controls Gallery app (Microsoft Store)</a></li>
    <li><a href="https://github.com/Microsoft/Xaml-Controls-Gallery">Get the source code (GitHub)</a></li>
    </ul>
</td>
</tr>
</table>
## <a name="video-summary"></a>Video summary
> [!VIDEO https://channel9.msdn.com/Events/Windows/Windows-Developer-Day-Fall-Creators-Update/WinDev013/player]
## <a name="how-to-use-it"></a>How to use it

Reveal works automatically for some controls. For other controls, you can enable Reveal by assigning a special style to the control, as described in the [Enabling Reveal on other controls](#enabling-reveal-on-other-controls) and [Enabling Reveal on custom controls](#enabling-reveal-on-custom-controls) sections of this article.

## <a name="controls-that-automatically-use-reveal"></a>Controls that automatically use Reveal
- [**ListView**](../controls-and-patterns/lists.md)
- [**GridView**](../controls-and-patterns/lists.md)
- [**TreeView**](../controls-and-patterns/tree-view.md)
- [**NavigationView**](../controls-and-patterns/navigationview.md)
- [**MediaTransportControl**](../controls-and-patterns/media-playback.md)
- [**CommandBar**](../controls-and-patterns/app-bars.md)
These illustrations show Reveal Highlight on several different controls:



## <a name="enabling-reveal-on-other-controls"></a>Enabling Reveal on other controls

For scenarios where Reveal should be applied (when controls are the main content, and/or are used in a list or collection orientation), we've provided opt-in resource styles that let you enable Reveal for those situations.

These controls don't have Reveal by default because they're smaller controls that are typically helper controls to the central focal points of your application. But every app is different, and if these controls are used the most in your app, there are some styles to help with that:
| Control name | Resource name |
|----------|:-------------:|
| Button | ButtonRevealStyle |
| ToggleButton | ToggleButtonRevealStyle |
| RepeatButton | RepeatButtonRevealStyle |
| AppBarButton | AppBarButtonRevealStyle |
| AppBarToggleButton | AppBarToggleButtonRevealStyle |
| GridViewItem (Reveal above content) | GridViewItemRevealBackgroundShowsAboveContentStyle |

To apply these styles, set the [Style](/uwp/api/Windows.UI.Xaml.Style) property like this:
```xaml
<Button Content="Button Content" Style="{ThemeResource ButtonRevealStyle}"/>
```
### <a name="reveal-in-themes"></a>Reveal in themes

Reveal changes slightly depending on the requested theme of the control, the app, or the user's setting. In the dark theme, Reveal's hover and border light is white, whereas in the light theme just the borders are drawn in light gray.



To get white borders while in the light theme, simply set the requested theme on the control to dark.
```xaml
<Grid RequestedTheme="Dark">
<Button Content="Button" Click="Button_Click" Style="{ThemeResource ButtonRevealStyle}"/>
</Grid>
```
Or change the TargetTheme on the RevealBorderBrush to Dark. Remember: if TargetTheme is set to Dark, then Reveal's borders will be white, but if it's set to Light, they'll be gray.
```xaml
<RevealBorderBrush x:Key="MyLightBorderBrush" TargetTheme="Dark" Color="{ThemeResource SystemAccentColor}" FallbackColor="{ThemeResource SystemAccentColor}" />
```
## <a name="enabling-reveal-on-custom-controls"></a>Enabling Reveal on custom controls

You can add Reveal to custom controls. Before you do, it's helpful to know a bit more about how the Reveal effect works. Reveal is made up of two separate effects: **Reveal border** and **Reveal hover**.

- **Border** reveals the borders of interactive elements when a pointer is nearby. This shows that nearby objects can perform actions similar to the one currently focused.
- **Hover** applies a gentle halo around the element the pointer is hovering over or focusing on, and plays a pressed animation on click.


<!-- The Reveal recipe breakdown is:
- Border reveal will be on top of all content but on the designated edges
- Text and content will be displayed directly under Border Reveal
- Hover reveal will be beneath content and text
- The backplate (that turns on and enables Hover Reveal)
- The background (background of control) -->
These effects are defined by two brushes:

* Reveal border is defined by **RevealBorderBrush**.
* Reveal hover is defined by **RevealBackgroundBrush**.
```xaml
<RevealBorderBrush x:Key="MyRevealBorderBrush" TargetTheme="Light" Color="{ThemeResource SystemAccentColor}" FallbackColor="{ThemeResource SystemAccentColor}"/>
<RevealBackgroundBrush x:Key="MyRevealBackgroundBrush" TargetTheme="Light" Color="{StaticResource SystemAccentColor}" FallbackColor="{StaticResource SystemAccentColor}" />
```
In most cases, we enable Reveal automatically on certain controls. Other controls, however, need to be enabled directly, either by applying a style or by changing their templates.

### <a name="when-to-add-reveal"></a>When to add Reveal

You can add Reveal to your custom controls. Before you do, though, consider the type of control and how it behaves.

* If your custom control is a single interactive element and doesn't have similar controls sharing its surface (such as menu items in a menu), your custom control probably doesn't need Reveal.
* If you have a grouping of related interactive content or elements, that region of your app probably does need Reveal - this is often called a [commanding](../controls-and-patterns/collection-commanding.md) surface.

For example, a button shown on its own shouldn't use Reveal, but a set of buttons in a command bar should.
<!-- For example, NavigationView's items are related to page navigation. CommandBar's buttons relate to menu actions or page feature actions. MediaTransportControl's buttons beneath all relate to the media being played. -->
### <a name="using-the-control-template-to-add-reveal"></a>Using the control template to add Reveal

To enable Reveal on custom controls or re-templated controls, modify the control template for that control. Most control templates have a grid at the root; update the [VisualState](/uwp/api/windows.ui.xaml.visualstate) of that root grid to use Reveal:
```xaml
<VisualState x:Name="PointerOver">
<VisualState.Setters>
<Setter Target="RootGrid.(RevealBrush.State)" Value="PointerOver" />
<Setter Target="RootGrid.Background" Value="{ThemeResource ButtonRevealBackgroundPointerOver}" />
<Setter Target="ContentPresenter.BorderBrush" Value="Transparent"/>
<Setter Target="ContentPresenter.Foreground" Value="{ThemeResource ButtonForegroundPointerOver}" />
</VisualState.Setters>
</VisualState>
```
It's important to note that Reveal needs both the brush and the setters in its visual state to work correctly. Setting a control's brush to one of the Reveal brush resources alone won't enable Reveal for that control. Conversely, having only the targets or setters, without the values being Reveal brushes, won't enable Reveal either.

To learn more about modifying control templates, see the [XAML control templates](../controls-and-patterns/control-templates.md) article.

We've created a set of system Reveal brushes you can use to customize your template. For example, you can use the **ButtonRevealBackground** brush to create a custom button background, or the **ListViewItemRevealBackground** brush for custom lists, and so on. (To learn how resources work in XAML, see the [XAML resource dictionary](../controls-and-patterns/resourcedictionary-and-xaml-resource-references.md) article.)

### <a name="full-template-example"></a>Full template example

Here's an entire template showing what a Reveal button should look like:
```xaml
<Style TargetType="Button" x:Key="ButtonStyle1">
<Setter Property="Background" Value="{ThemeResource ButtonRevealBackground}" />
<Setter Property="Foreground" Value="{ThemeResource ButtonForeground}" />
<Setter Property="BorderBrush" Value="{ThemeResource ButtonRevealBorderBrush}" />
<Setter Property="BorderThickness" Value="2" />
<Setter Property="Padding" Value="8,4,8,4" />
<Setter Property="HorizontalAlignment" Value="Left" />
<Setter Property="VerticalAlignment" Value="Center" />
<Setter Property="FontFamily" Value="{ThemeResource ContentControlThemeFontFamily}" />
<Setter Property="FontWeight" Value="Normal" />
<Setter Property="FontSize" Value="{ThemeResource ControlContentThemeFontSize}" />
<Setter Property="UseSystemFocusVisuals" Value="True" />
<Setter Property="FocusVisualMargin" Value="-3" />
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="Button">
<Grid x:Name="RootGrid" Background="{TemplateBinding Background}">
<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="CommonStates">
<VisualState x:Name="Normal">
<Storyboard>
<PointerUpThemeAnimation Storyboard.TargetName="RootGrid" />
</Storyboard>
</VisualState>
<VisualState x:Name="PointerOver">
<VisualState.Setters>
<Setter Target="RootGrid.(RevealBrush.State)" Value="PointerOver" />
<Setter Target="RootGrid.Background" Value="{ThemeResource ButtonRevealBackgroundPointerOver}" />
<Setter Target="ContentPresenter.BorderBrush" Value="Transparent"/>
<Setter Target="ContentPresenter.Foreground" Value="{ThemeResource ButtonForegroundPointerOver}" />
</VisualState.Setters>
<Storyboard>
<PointerUpThemeAnimation Storyboard.TargetName="RootGrid" />
</Storyboard>
</VisualState>
<VisualState x:Name="Pressed">
<VisualState.Setters>
<Setter Target="RootGrid.(RevealBrush.State)" Value="Pressed" />
<Setter Target="RootGrid.Background" Value="{ThemeResource ButtonRevealBackgroundPressed}" />
<Setter Target="ContentPresenter.BorderBrush" Value="{ThemeResource ButtonRevealBackgroundPressed}" />
<Setter Target="ContentPresenter.Foreground" Value="{ThemeResource ButtonForegroundPressed}" />
</VisualState.Setters>
<Storyboard>
<PointerDownThemeAnimation Storyboard.TargetName="RootGrid" />
</Storyboard>
</VisualState>
<VisualState x:Name="Disabled">
<VisualState.Setters>
<Setter Target="RootGrid.Background" Value="{ThemeResource ButtonRevealBackgroundDisabled}" />
<Setter Target="ContentPresenter.BorderBrush" Value="{ThemeResource ButtonRevealBorderBrushDisabled}" />
<Setter Target="ContentPresenter.Foreground" Value="{ThemeResource ButtonForegroundDisabled}" />
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
<ContentPresenter x:Name="ContentPresenter"
BorderBrush="{TemplateBinding BorderBrush}"
BorderThickness="{TemplateBinding BorderThickness}"
Content="{TemplateBinding Content}"
ContentTransitions="{TemplateBinding ContentTransitions}"
ContentTemplate="{TemplateBinding ContentTemplate}"
Padding="{TemplateBinding Padding}"
HorizontalContentAlignment="{TemplateBinding HorizontalContentAlignment}"
VerticalContentAlignment="{TemplateBinding VerticalContentAlignment}"
AutomationProperties.AccessibilityView="Raw" />
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
```
### <a name="fine-tuning-the-reveal-effect-on-a-custom-control"></a>Fine-tuning the Reveal effect on a custom control

When you enable Reveal on a custom control, a re-templated control, or a custom commanding surface, these tips can help you fine-tune the effect:

* For adjacent elements whose sizes don't align in height or width (especially in lists): remove the border behavior and enable borders on hover only.
* For commanding elements that are frequently toggled on and off: place the border brush on the elements' backplates as well as their borders, to emphasize their state.
* For adjacent controls that nearly touch: add a 1-pixel margin between the two elements.

## <a name="dos-and-donts"></a>Do's and don'ts

### <a name="do"></a>Do

- Use Reveal on elements where the user can take many actions (CommandBars, navigation menus).
- Use Reveal in groupings of interactive elements that have no visual separators by default (lists, ribbons).
- Use Reveal in areas with a high density of interactive elements (commanding scenarios).
- Put a 1-pixel margin between Reveal elements.

### <a name="dont"></a>Don't

- Don't use Reveal on static content (backgrounds, text).
- Don't use Reveal on popups, flyouts, or dropdowns.
- Don't use Reveal in one-off, isolated situations.
- Don't use Reveal on very large elements (larger than 500 epx).
- Don't use Reveal in security-related decisions, as it may draw the user's attention away from the message you need to deliver.

## <a name="get-the-sample-code"></a>Get the sample code

- [XAML Controls Gallery sample](https://github.com/Microsoft/Xaml-Controls-Gallery) - See all the XAML controls in an interactive format.

## <a name="reveal-and-the-fluent-design-system"></a>Reveal and the Fluent Design System

The Fluent Design System helps you create modern, bold UI that incorporates light, depth, motion, material, and scale. Reveal is a Fluent Design System component that adds light to your app. To learn more, see the [Fluent Design overview](/windows/apps/fluent-design-system).

## <a name="related-articles"></a>Related articles

- [RevealBrush class](/uwp/api/windows.ui.xaml.media.revealbrush)
- [Acrylic](acrylic.md)
- [Composition effects](../../composition/composition-effects.md)
- [Fluent Design for UWP](/windows/apps/fluent-design-system)
- [Science in the system: Fluent Design and depth](https://medium.com/microsoft-design/science-in-the-system-fluent-design-and-depth-fb6d0f23a53f)
- [Science in the system: Fluent Design and light](https://medium.com/microsoft-design/the-science-in-the-system-fluent-design-and-light-94a17e0b3a4f)
 | 66.508475 | 511 | 0.730683 | deu_Latn | 0.874224 |
9bf91d8d6e05051c7db2c1e6f50d2af8209751dd | 851 | md | Markdown | specification/app/resource-manager/Microsoft.App/preview/2022-01-01-preview/examples-java/DaprComponents_List.md | Azure/azure-rest-api-specs-examples | 4a9e9daef8e77ab259f61bb0a6d8669f609fb57a | [
"MIT"
] | 2 | 2021-12-17T00:13:10.000Z | 2021-12-20T06:26:41.000Z | specification/app/resource-manager/Microsoft.App/preview/2022-01-01-preview/examples-java/DaprComponents_List.md | Azure/azure-rest-api-specs-examples | 4a9e9daef8e77ab259f61bb0a6d8669f609fb57a | [
"MIT"
] | null | null | null | specification/app/resource-manager/Microsoft.App/preview/2022-01-01-preview/examples-java/DaprComponents_List.md | Azure/azure-rest-api-specs-examples | 4a9e9daef8e77ab259f61bb0a6d8669f609fb57a | [
"MIT"
] | 1 | 2022-03-26T07:22:16.000Z | 2022-03-26T07:22:16.000Z | Read the [SDK documentation](https://github.com/Azure/azure-sdk-for-java/blob/azure-resourcemanager-appcontainers_1.0.0-beta.1/sdk/appcontainers/azure-resourcemanager-appcontainers/README.md) on how to add the SDK to your project and authenticate.
```java
import com.azure.core.util.Context;
/** Samples for DaprComponents List. */
public final class Main {
/*
* x-ms-original-file: specification/app/resource-manager/Microsoft.App/preview/2022-01-01-preview/examples/DaprComponents_List.json
*/
/**
* Sample code: List Dapr Components.
*
* @param manager Entry point to ContainerAppsApiManager.
*/
public static void listDaprComponents(com.azure.resourcemanager.appcontainers.ContainerAppsApiManager manager) {
manager.daprComponents().list("examplerg", "myenvironment", Context.NONE);
}
}
```
| 40.52381 | 247 | 0.73913 | eng_Latn | 0.274846 |
9bfc5b68d55dd99c35420bc9c8a9f91e9b18cea3 | 1,585 | md | Markdown | packages/playground-handbook/copy/en/Overview.md | jeongtae/TypeScript-Website | dd698b299829dc5d896a094f70b42480ab82cb79 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-12-27T03:45:28.000Z | 2021-12-27T03:45:28.000Z | packages/playground-handbook/copy/en/Overview.md | jeongtae/TypeScript-Website | dd698b299829dc5d896a094f70b42480ab82cb79 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-08-07T17:21:08.000Z | 2021-08-07T17:21:08.000Z | packages/playground-handbook/copy/en/Overview.md | jeongtae/TypeScript-Website | dd698b299829dc5d896a094f70b42480ab82cb79 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ## Welcome to the Playground Handbook
The TypeScript playground is an online environment where people can write and share TypeScript-ish code. We say "ish" because you can work with `.ts`, `.tsx`, `.js`, `.jsx` and `.d.ts` files inside the playground. The goal for the Playground is to be a safe environment which requires no setup, is a single text document, can be trivially shared with others, where URLs still works years down the line. In summary, a teaching tool where you can safely experiment.
From [day one in 2012](https://web.archive.org/web/20121031123957/http://www.typescriptlang.org/Playground/), the TypeScript website has featured a playground as a way to highlight the difference between the TypeScript code you write and the JavaScript which is emitted. Today, the Playground has a massive set of features because the needs of developers using TypeScript has grown in scope - there's over a hundred [`tsconfig.json`](https://www.typescriptlang.org/tsconfig) flags. Developers need a safe way to be able to reproduce a particular TypeScript environment which can be shared with others.
This handbook will guide you through the feature set of the Playground, explain why these features exist and help you master them. It is generally meant to be read in order, and you should be able to get through the handbook in about 25 minutes. Assuming you don't dive into too many rabbit holes. That said, the rabbit holes tend to be where the fun complexity lives and time is an illusion anyway. So, we'll get started over at the [Compiler Settings](/play#handbook-1) page.
| 198.125 | 601 | 0.788013 | eng_Latn | 0.999529 |
9bfd443018ff858f16075126ee464e3aa6cda08a | 32 | md | Markdown | README.md | icogg/prottools | aa13675bdfb87191c092ae8dbec0547ff8b8e884 | [
"MIT"
] | null | null | null | README.md | icogg/prottools | aa13675bdfb87191c092ae8dbec0547ff8b8e884 | [
"MIT"
] | null | null | null | README.md | icogg/prottools | aa13675bdfb87191c092ae8dbec0547ff8b8e884 | [
"MIT"
] | null | null | null | # prottools
my first repository
| 10.666667 | 19 | 0.8125 | eng_Latn | 0.99986 |
9bfe545c55e773ea165d51d0145711a1e8b8c89e | 26 | md | Markdown | docs/guides/releases.md | its-lucas/prime | de367736429be9e74fafc81b66c381286fceef25 | [
"MIT"
] | 1,707 | 2018-12-20T21:38:53.000Z | 2022-03-29T15:29:29.000Z | docs/guides/releases.md | its-lucas/prime | de367736429be9e74fafc81b66c381286fceef25 | [
"MIT"
] | 437 | 2018-11-28T00:04:17.000Z | 2022-02-26T17:31:01.000Z | docs/guides/releases.md | its-lucas/prime | de367736429be9e74fafc81b66c381286fceef25 | [
"MIT"
] | 120 | 2018-12-22T02:54:12.000Z | 2022-03-31T07:10:27.000Z | # Releases
(placeholder)
| 6.5 | 13 | 0.730769 | eng_Latn | 0.813948 |
9bfe6bf06481c82a00731b13ba6521e86ff3fe83 | 13 | md | Markdown | README.md | Arpaesis/C-Programs | e690ffde2482ec047ab5a7d095ff2c5feffa5a34 | [
"MIT"
] | null | null | null | README.md | Arpaesis/C-Programs | e690ffde2482ec047ab5a7d095ff2c5feffa5a34 | [
"MIT"
] | null | null | null | README.md | Arpaesis/C-Programs | e690ffde2482ec047ab5a7d095ff2c5feffa5a34 | [
"MIT"
] | null | null | null | # C Programs
| 6.5 | 12 | 0.692308 | deu_Latn | 0.604552 |
9bfe6f821e17c969910b52db387d1553a1065780 | 1,727 | md | Markdown | src/de/2022-01/01/01.md | Adventech/sabbath-school-lessons | baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/de/2022-01/01/01.md | Adventech/sabbath-school-lessons | baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/de/2022-01/01/01.md | Adventech/sabbath-school-lessons | baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: The Letter to the Hebrews and to Us
date: 25/12/2021
---
### Read for this week's study

Hebrews 2:3–4; 1 Peter 4:14, 16; Hebrews 13:1–9, 13; 1 Kings 19:1–18; Hebrews 3:12–14; Numbers 13

> <p>Memory text</p>
> But you have need of endurance, so that after you have done the will of God, you may receive what was promised. (Heb 10:36)

Have you ever imagined what it would have been like to hear Jesus or one of the apostles preach? We have written excerpts and summaries of some of their sermons, but these convey only a limited impression of what it was like to hear them. God, however, preserved at least one complete sermon for us in Scripture: Paul's letter to the Hebrews.

Paul, the author of Hebrews, described his own work as a "word of exhortation" (Heb 13:22). This expression was used for the sermon both in the synagogue (Acts 13:15) and in Christian worship (1 Tim 4:13). Some therefore suggest that Hebrews is the earliest "complete Christian sermon" we have. Hebrews was addressed to believers who had accepted Jesus but were then going through difficulties. Some were publicly denounced and persecuted (Heb 10:32–34). Others had financial problems (Heb 13:5–6). Many were weary and had begun to question their faith (Heb 3:12–13). Can some of us today relate?

In a stirring sermon, however, the apostle urged them (and with them, us) to persevere in faith in Jesus and to keep our eyes fixed on Jesus, who is now in the heavenly sanctuary.

_* Study this lesson to prepare for Sabbath, January 1._ | 95.944444 | 722 | 0.785177 | deu_Latn | 0.999542 |
9bfe7c2ab076bcc3340ced6ff95af811d3ad01da | 63 | md | Markdown | README.md | curiousjazz77/iosClassTeachings | 968b48b18273facc7665224397473281041f69db | [
"MIT"
] | null | null | null | README.md | curiousjazz77/iosClassTeachings | 968b48b18273facc7665224397473281041f69db | [
"MIT"
] | null | null | null | README.md | curiousjazz77/iosClassTeachings | 968b48b18273facc7665224397473281041f69db | [
"MIT"
] | null | null | null | # iosClassTeachings
stuff learned over the course of the class
| 21 | 42 | 0.825397 | eng_Latn | 0.999865 |
9bff6198ab7a4c240deb4bf7711325c5b5de25ed | 2,938 | md | Markdown | generated-docs/Data/Incremental/Map.md | eric-corumdigital/purescript-incremental-functions | 60ef376264aac8ba940e46387ec7945b5adccea4 | [
"MIT"
] | 40 | 2017-03-04T22:14:09.000Z | 2018-04-03T21:26:13.000Z | generated-docs/Data/Incremental/Map.md | eric-corumdigital/purescript-incremental-functions | 60ef376264aac8ba940e46387ec7945b5adccea4 | [
"MIT"
] | 3 | 2018-05-27T00:39:54.000Z | 2018-06-24T22:05:03.000Z | generated-docs/Data/Incremental/Map.md | eric-corumdigital/purescript-incremental-functions | 60ef376264aac8ba940e46387ec7945b5adccea4 | [
"MIT"
] | 5 | 2017-03-04T22:17:41.000Z | 2018-03-14T14:51:36.000Z | ## Module Data.Incremental.Map
A change structure for maps, and helper functions.
#### `IMap`
``` purescript
newtype IMap k v
= IMap (Map k v)
```
A change structure for `Map` which tracks changes for each key.
##### Instances
``` purescript
(Eq k, Eq v) => Eq (IMap k v)
(Show k, Show v) => Show (IMap k v)
Newtype (IMap k v) _
(Ord k, Patch v dv) => Patch (IMap k v) (MapChanges k v dv)
(Ord k, Diff v dv) => Diff (IMap k v) (MapChanges k v dv)
```
#### `MapChanges`
``` purescript
newtype MapChanges k v dv
= MapChanges (Map k (MapChange v dv))
```
A change for each possible key.
##### Instances
``` purescript
(Eq k, Eq v, Eq dv) => Eq (MapChanges k v dv)
Newtype (MapChanges k v dv) _
(Show k, Show v, Show dv) => Show (MapChanges k v dv)
(Ord k, Patch v dv) => Semigroup (MapChanges k v dv)
(Ord k, Patch v dv) => Monoid (MapChanges k v dv)
(Ord k, Patch v dv) => Patch (IMap k v) (MapChanges k v dv)
(Ord k, Diff v dv) => Diff (IMap k v) (MapChanges k v dv)
```
#### `MapChange`
``` purescript
data MapChange v dv
= Add v
| Remove
| Update dv
```
A change for a single key is an addition, removal, or update.
##### Instances
``` purescript
(Eq v, Eq dv) => Eq (MapChange v dv)
(Show v, Show dv) => Show (MapChange v dv)
```
#### `insert`
``` purescript
insert :: forall k v dv. Ord k => Patch v dv => k -> v -> Change (IMap k v)
```
#### `remove`
``` purescript
remove :: forall k v dv. Ord k => Patch v dv => k -> Change (IMap k v)
```
#### `updateAt`
``` purescript
updateAt :: forall k v dv. Ord k => Patch v dv => k -> Change v -> Change (IMap k v)
```
#### `static`
``` purescript
static :: forall k v dv. Ord k => Patch v dv => Map k (Jet v) -> Jet (IMap k v)
```
Construct a map whose values can change but whose keys are fixed.
#### `singleton`
``` purescript
singleton :: forall k v dv. Ord k => Patch v dv => k -> Jet v -> Jet (IMap k v)
```
Construct a map from a key/value pair.
#### `map`
``` purescript
map :: forall k a da b db. Ord k => Patch a da => Patch b db => (Jet a -> Jet b) -> Jet (IMap k a) -> Jet (IMap k b)
```
Update every key by applying a function.
#### `modifyAt`
``` purescript
modifyAt :: forall k v dv. Ord k => Patch v dv => k -> (Jet v -> Jet v) -> Jet (IMap k v) -> Jet (IMap k v)
```
Update a single key by applying a function.
#### `size`
``` purescript
size :: forall k a da. Ord k => Patch a da => Jet (IMap k a) -> Jet (Atomic Int)
```
Compute the size of an `IMap`, incrementally.
#### `zip`
``` purescript
zip :: forall k a da b db. Ord k => Patch a da => Patch b db => Jet (IMap k a) -> Jet (IMap k b) -> Jet (IMap k (Tuple a b))
```
Zip two maps, keeping those keys which are common to _both_ input maps.
#### `toIArray`
``` purescript
toIArray :: forall k a da. Ord k => Patch a da => Jet (IMap k a) -> Jet (IArray (Tuple (Atomic k) a))
```
Convert an `IMap` into an `IArray` of tuples of keys and values, in order,
incrementally.
| 21.602941 | 124 | 0.605854 | kor_Hang | 0.311551 |
50024bfdc53e79db16f20e0354cd361acee592bc | 3,046 | md | Markdown | documents/aws-glue-developer-guide/doc_source/monitor-spark-ui-jobs.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | 5 | 2021-08-13T09:20:58.000Z | 2021-12-16T22:13:54.000Z | documents/aws-glue-developer-guide/doc_source/monitor-spark-ui-jobs.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | documents/aws-glue-developer-guide/doc_source/monitor-spark-ui-jobs.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | # Enabling the Apache Spark Web UI for AWS Glue Jobs<a name="monitor-spark-ui-jobs"></a>
You can use the Apache Spark web UI to monitor and debug AWS Glue ETL jobs running on the AWS Glue job system\. You can configure the Spark UI using the AWS Glue console or the AWS Command Line Interface \(AWS CLI\)\.
**Topics**
+ [Configuring the Spark UI \(Console\)](#monitor-spark-ui-jobs-console)
+ [Configuring the Spark UI \(AWS CLI\)](#monitor-spark-ui-jobs-cli)
## Configuring the Spark UI \(Console\)<a name="monitor-spark-ui-jobs-console"></a>
Follow these steps to configure the Spark UI using the AWS Management Console\.
**To create a job with the Spark UI enabled**
1. Sign in to the AWS Management Console and open the AWS Glue console at [https://console\.aws\.amazon\.com/glue/](https://console.aws.amazon.com/glue/)\.
1. In the navigation pane, choose **Jobs**\.
1. Choose **Add job**\.
1. In **Configure the job properties**, open the **Monitoring options**\.
1. In the **Spark UI** tab, choose **Enable**\.
1. Specify an Amazon S3 path for storing the Spark event logs for the job\.
**To edit an existing job to enable the Spark UI**
1. Open the AWS Glue console at [https://console\.aws\.amazon\.com/glue/](https://console.aws.amazon.com/glue/)\.
1. In the navigation pane, choose **Jobs**\.
1. Choose an existing job in the job list\.
1. Choose **Action**, and then choose **Edit job**\.
1. Open the **Monitoring options**\.
1. In the **Spark UI** tab, choose **Enable**\.
1. Enter an Amazon S3 path for storing the Spark event logs for the job\.
**To set up user preferences for new jobs to enable the Spark UI**
1. Open the AWS Glue console at [https://console\.aws\.amazon\.com/glue/](https://console.aws.amazon.com/glue/)\.
1. In the upper\-right corner, choose **User preferences**\.
1. Open the **Monitoring options**\.
1. In the **Spark UI** tab, choose **Enable**\.
1. Specify an Amazon S3 path for storing the Spark event logs for the job\.
**To set up the job run options to enable the Spark UI**
1. Open the AWS Glue console at [https://console\.aws\.amazon\.com/glue/](https://console.aws.amazon.com/glue/)\.
1. In the navigation pane, choose **Jobs**\.
1. Choose an existing job in the job lists\.
1. Choose **Scripts** and **Edit Job**\. You navigate to the code pane\.
1. Choose **Run job**\.
1. Open the **Monitoring options**\.
1. In the **Spark UI** tab, choose **Enable**\.
1. Specify an Amazon S3 path for storing the Spark event logs for the job\.
## Configuring the Spark UI \(AWS CLI\)<a name="monitor-spark-ui-jobs-cli"></a>
To enable the Spark UI feature using the AWS CLI, pass in the following job parameters to AWS Glue jobs\. For more information, see [ Special Parameters Used by AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html)\.
```
'--enable-spark-ui': 'true',
'--spark-event-logs-path': 's3://s3-event-log-path'
```
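As a quick sketch of where these land, here's what creating a job with the Spark UI enabled might look like from the AWS CLI — the job name, role, and script location below are placeholders, not values from this guide:

```
aws glue create-job \
    --name "my-spark-ui-job" \
    --role "MyGlueServiceRole" \
    --command '{"Name": "glueetl", "ScriptLocation": "s3://my-bucket/scripts/job.py"}' \
    --default-arguments '{"--enable-spark-ui": "true", "--spark-event-logs-path": "s3://s3-event-log-path"}'
```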
Every 30 seconds, AWS Glue flushes the Spark event logs to the Amazon S3 path that you specify\. | 37.146341 | 261 | 0.702232 | eng_Latn | 0.868744 |
5005ab86bdb59fd209d41108b6540bfa426798ca | 861 | md | Markdown | source/posts/2018-12-23-osc-bar-root.html.md | huideyeren/huideyeren.github.io | cb3f2ac47cbe9a90b9ce45caeb6083cb184cdd31 | [
"MIT"
] | null | null | null | source/posts/2018-12-23-osc-bar-root.html.md | huideyeren/huideyeren.github.io | cb3f2ac47cbe9a90b9ce45caeb6083cb184cdd31 | [
"MIT"
] | 4 | 2019-12-16T22:42:24.000Z | 2020-08-08T10:23:20.000Z | source/posts/2018-12-23-osc-bar-root.html.md | huideyeren/huideyeren.github.io | cb3f2ac47cbe9a90b9ce45caeb6083cb184cdd31 | [
"MIT"
] | null | null | null | ---
title: On "Bar Root", the Hidden Specialty of the OSC After-Party
date: 2018-12-23 07:00 +0900
tags: Essay, Advent-calendar
---
# Introduction

This article is the December 23 entry in the [Open Source Conference Advent Calendar 2018](https://adventar.org/calendars/3403).

# Bar Root?

The OSC Tokyo after-party has a signature event called "Bar Root", in which participants, with Miyahara-san at the center, bring all sorts of drinks and snacks to share; it runs behind the scenes of the lightning-talk session.

People mostly bring sake, wine, and honkaku shochu, but sometimes Western spirits or rare liquors from Southeast Asia make an appearance.

Some people bring snacks, too. Catering is served at the after-party, but with so many hungry attendees the food disappears in no time, so the drinkers tend to run short of snacks. Personally, I'm always accepting snack contributions.

# What I've brought

Lately I usually pick out the snacks to bring: dried goods, jarred and canned foods, and other things that pair well with drinks. I spend about 5,000 yen on snacks each time. [AKOMEYA TOKYO](https://www.akomeya.jp/shop/default.aspx) carries all sorts of things, so I often shop there.

In the past I've also brought a cocktail shaker along with the liquor. Of course, I used it to make gimlets.

# How do you join "Bar Root"?

First, attend OSC Tokyo. Then show up at the after-party.

After a while, Bar Root will get going, so just join in.

# In closing

I look forward to it every time. | 28.7 | 180 | 0.847851 | jpn_Jpan | 0.722808 |
5005bf8888d3fbe6ef9fe75e1cf9877308571ae8 | 63 | md | Markdown | README.md | kraodeveloper/first-project | 0c6770ef329408672fc06f3550ce4006f6cf7902 | [
"Apache-2.0"
] | null | null | null | README.md | kraodeveloper/first-project | 0c6770ef329408672fc06f3550ce4006f6cf7902 | [
"Apache-2.0"
] | null | null | null | README.md | kraodeveloper/first-project | 0c6770ef329408672fc06f3550ce4006f6cf7902 | [
"Apache-2.0"
] | null | null | null | # first-project
first project with new github account by gmail
| 21 | 46 | 0.809524 | eng_Latn | 0.999781 |
500626f91efb10f70eaa0205542f9923ca4361c9 | 10,040 | md | Markdown | _posts/2016-11-13-building-tensorflow-with-gpu-support.md | DavidSanwald/DavidSanwald.github.io | 258bbc4ecc99434043797d1af597d13ad1a0e382 | [
"MIT"
] | null | null | null | _posts/2016-11-13-building-tensorflow-with-gpu-support.md | DavidSanwald/DavidSanwald.github.io | 258bbc4ecc99434043797d1af597d13ad1a0e382 | [
"MIT"
] | null | null | null | _posts/2016-11-13-building-tensorflow-with-gpu-support.md | DavidSanwald/DavidSanwald.github.io | 258bbc4ecc99434043797d1af597d13ad1a0e382 | [
"MIT"
] | null | null | null | ---
layout: post
title: Getting CUDA 8 to Work With openAI Gym on AWS and Compiling Tensorflow for CUDA 8 Compatibility
---
The necessary steps to get CUDA and cuDNN to work with a virtual framebuffer like xvfb, so that you can use OpenAI Gym. Also included: how to compile Tensorflow using Google's Bazel.
<!--more-->
I had a hard time getting Tensorflow with GPU support and [OpenAI Gym](https://gym.openai.com/) working at the same time on an AWS EC2 instance, and it seems like I'm in good [company](https://github.com/openai/gym/issues/247). For some time I used [NVIDIA-Docker](https://github.com/NVIDIA/nvidia-docker/) for this, but as much as I love Docker, depending on special access to the (NVIDIA) GPU drivers took away some of the biggest advantages of using Docker, at least for my use cases. Running OpenAI Gym in a normal container, exposing some port to the outside, and running agents/neural nets etc. elsewhere seems like a really promising approach, and I'm looking forward to it being ready.
There are good [explanations](https://alliseesolutions.wordpress.com/2016/09/08/install-gpu-tensorflow-from-sources-w-ubuntu-16-04-and-cuda-8-0-rc/) on how to get Tensorflow with CUDA going; those were pretty helpful to me. However, I suppose they were mostly concerned with supervised learning.
If you want to run certain OpenAI Gym environments headless on a server, you have to provide an X-server to them, even when you don't want to render a video. You can use a virtual framebuffer like xvfb for this; it works fine. But I never could get it to work with GLX support. Other solutions like X-Dummy failed as well.
The problem is that there's no way to keep NVIDIA from installing OpenGL libs when using packages from some repo (which most of the tutorials do, because it's way more convenient). Finally, [this](https://github.com/openai/gym/issues/366#issuecomment-251967650) comment by pemami4911 in a GitHub issue pointed me in the right direction. Since many people seem to run into the same problems, maybe the following will spare you some trouble.
I'm using Python 3 because it really annoys me that everyone still uses Python 2.7 in the deep learning community. Also, I'm using Ubuntu 16.04 LTS XENIAL XERUS because it has been released for ages (there's even an official Canonical AMI on AWS) and, when spinning up a single new instance for computing just a few things, I don't see why I would use Ubuntu 14.04 just because 16.04 is not among the three AMIs AWS offers to me first.
But I think it should be no trouble to adapt this to other needs.
OT: *I also recommend using the AWS CLI together with the [oh-my-zsh](https://github.com/robbyrussell/oh-my-zsh) AWS plugin, because zsh, and especially oh-my-zsh, are awesome.*
Okay, let's begin:
Go to <https://cloud-images.ubuntu.com/locator/ec2/> and look for the Ubuntu 16.04 LTS XENIAL XERUS hvm:ebs-ssd AMI from Canonical. For eu-central-1 it's **ami-8504fdea**.
Spin up a new EC2 GPU instance (**g2.2xlarge** will do) using that AMI (use a spot instance if you are as broke as me). I recommend using at least 20GB as root volume.
SSH into your instance; the username is **ubuntu**.
{% highlight bash %}
$ sudo apt-get update
$ sudo apt-get -y dist-upgrade
{% endhighlight %}
Let's get some basics:
{% highlight bash %}
$ sudo apt-get install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy build-essential python-pip python3-pip python3-venv swig python3-wheel libcurl3-dev
{% endhighlight %}
The Java stuff is for [Bazel](https://bazel.build/), Google's build tool, which we will use later for compiling Tensorflow. We will use openJDK but, if you have to, you can also use the proprietary one from Oracle.
Also some kernel sources, compilers and other stuff:
{% highlight bash %}
$ sudo apt-get install -y gcc g++ gfortran git linux-image-generic linux-headers-generic linux-source linux-image-extra-virtual libopenblas-dev
{% endhighlight %}
While we're at it, let's install Bazel right now:
{% highlight bash %}
$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ curl https://storage.googleapis.com/bazel-apt/doc/apt-key.pub.gpg | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install bazel
$ sudo apt-get upgrade bazel
{% endhighlight %}
Now curl or wget the right NVIDIA driver. For the GRID K520 GPU, **367.57** should be the right choice (maybe Linus would [call](https://www.youtube.com/watch?v=iYWzMvlj2RQ) it the least wrong choice at most).
{% highlight bash %}
$ wget -P ~/Downloads/ http://us.download.nvidia.com/XFree86/Linux-x86_64/367.57/NVIDIA-Linux-x86_64-367.57.run
{% endhighlight %}
NVIDIA will clash with the nouveau driver so deactivate it:
{% highlight bash %}
$ sudo nano /etc/modprobe.d/blacklist-nouveau.conf
{% endhighlight %}
Insert the following lines and save:
{% highlight bash%}
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off
{% endhighlight %}
Update the initramfs (basically the functionality to mount your real rootfs, which has been outsourced from the kernel) and reboot:
{% highlight bash %}
$ sudo update-initramfs -u
$ sudo reboot
{% endhighlight %}
Make the NVIDIA driver runfile executable and install the driver and reboot one more time, just to be sure.
**IMPORTANT: In my experience xvfb will only work if you use the --no-opengl-files option!**
{% highlight bash %}
$ chmod +x ~/Downloads/NVIDIA-Linux-x86_64-367.57.run
$ sudo sh ~/Downloads/NVIDIA-Linux-x86_64-367.57.run --no-opengl-files
$ sudo reboot
{% endhighlight %}
Now wget the CUDA 8.0 toolkit from NVIDIA:
{% highlight bash %}
$ wget https://developer.nvidia.com/compute/cuda/8.0/prod/local_installers/cuda_8.0.44_linux-run
{% endhighlight %}
While that's downloading, register at NVIDIA, download the cuDNN 5 runtime lib on your local machine and SCP it to the remote instance:
{% highlight bash %}
$ scp cudnn-8.0-linux-x64-v5.1.tgz ubuntu@<your-instance-address>:~/Downloads/
{% endhighlight %}
Now make the runfile executable and install CUDA, but don't install the driver. The --override option also helps to prevent some annoying errors that could otherwise happen.
**IMPORTANT: Be sure to use the --no-opengl-libs option**
{% highlight bash %}
$ chmod +x cuda_8.0.44_linux-run
$ sudo sh cuda_8.0.44_linux-run --extract=~/Downloads/
$ sudo sh cuda_8.0.44_linux-run --override --no-opengl-libs
{% endhighlight %}
Now open your .bashrc
{% highlight bash %}
$ nano ~/.bashrc
{% endhighlight %}
and add the following lines:
{% highlight bash %}
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
{% endhighlight %}
Once the SCP operation is complete, extract the archive and copy the files to the right locations.
{% highlight bash %}
$ sudo tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
{% endhighlight %}
Reboot one more time:
{% highlight bash %}
$ sudo reboot
{% endhighlight %}
Now git clone Tensorflow and start the configuration:
{% highlight bash %}
$ git clone https://github.com/tensorflow/tensorflow
$ cd ~/tensorflow
$ ./configure
{% endhighlight %}
I used **/usr/bin/python3.5** for the Python binary and **/usr/local/lib/python3.5/dist-packages** for the path, Cuda SDK **8.0**, cudnn **5.1.5**, compiled without cloud support and OpenCL but with GPU support of course. The computing capability for the instance is **3.0**.
Okay, now we compile everything:
{% highlight bash %}
$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
{% endhighlight %}
This will take some time. You could watch this video while you're waiting:
<iframe width="1120" height="630" src="https://www.youtube.com/embed/oQbei5JGiT8" frameborder="0" allowfullscreen></iframe>
Build the wheel:
{% highlight bash %}
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
{% endhighlight %}
If you want, you can use a virtual environment:
{% highlight bash %}
$ python3 -m venv --system-site-packages ~/tensorflow
$ source ~/tensorflow/bin/activate
$ pip3 install /tmp/tensorflow_pkg/tensorflow-0.11.0rc2-cp35-cp35m-linux_x86_64.whl
{% endhighlight %}
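Before moving on, it's worth checking that the build actually uses the GPU. This is a small sanity check of my own (the op names are arbitrary, not from the official docs of that era verbatim); with `log_device_placement` enabled you should see the matmul being assigned to `gpu:0`:
{% highlight python %}
import tensorflow as tf

# Log where each op runs; on a working CUDA build the matmul lands on gpu:0
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[1.0, 1.0], [1.0, 1.0]], name='b')
    print(sess.run(tf.matmul(a, b)))
{% endhighlight %}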
Installing OpenAI Gym is pretty straightforward, because the people at OpenAI and all the other contributors have done an amazing job (:
Sometimes there are problems with Box2D. If you want to be sure, follow the instructions below.
We already installed Swig (we need 3.x before compiling Box2D).
Git clone Pybox2d
{% highlight bash %}
$ git clone https://github.com/pybox2d/pybox2d.git
{% endhighlight %}
Build and install it:
{% highlight bash %}
$ cd pybox2d
$ python setup.py build
$ python setup.py install
{% endhighlight %}
Installing OpenAI Gym should now be no trouble at all. We already installed most of the dependencies, but I copy-pasted everything from their GitHub instructions just to be sure:
{% highlight bash %}
$ git clone https://github.com/openai/gym.git
$ sudo apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig
$ pip install -e '.[all]'
{% endhighlight %}
If you had problems running OpenAI Gym headless using xvfb as the X-server, it should now work if you do as explained by [Trevor Blackwell](https://github.com/tlbtlbtlb) in [this](https://github.com/openai/gym/issues/247#issuecomment-232731446) post (the GLX option is active by default):
{% highlight bash %}
$ xvfb-run -a -s "-screen 0 1400x900x24 +extension RANDR" -- python XXX.py
{% endhighlight %}
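For completeness, a tiny smoke test you could drop in as XXX.py. This is just my own minimal example (CartPole is an arbitrary choice), but if it runs under xvfb-run without GLX errors, you're set:
{% highlight python %}
import gym

env = gym.make('CartPole-v0')
observation = env.reset()
for _ in range(100):
    env.render()  # rendering works headless thanks to the xvfb framebuffer
    action = env.action_space.sample()  # random actions; we only test the plumbing
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
{% endhighlight %}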
Have fun (:
If you run into any trouble, just let me know. I'm always happy to help.
| 56.404494 | 696 | 0.758566 | eng_Latn | 0.960903 |
5006a266b95844f32decaef9473724729f12ac30 | 2,131 | md | Markdown | README.md | wdsqjq/QrCodeScanner | e4e8ce39a75eaca5fe892986fde3dd9187907c78 | [
"MIT"
] | null | null | null | README.md | wdsqjq/QrCodeScanner | e4e8ce39a75eaca5fe892986fde3dd9187907c78 | [
"MIT"
] | null | null | null | README.md | wdsqjq/QrCodeScanner | e4e8ce39a75eaca5fe892986fde3dd9187907c78 | [
"MIT"
] | null | null | null | # QrCodeScanner
[](https://android-arsenal.com/api?level=21) [](https://jitpack.io/#wsj1024/QrCodeScanner)
A very simple and easy-to-use Android QR code scanning library. You don't need to request permissions yourself, and a single line of code gets it done. It supports QR code scanning, decoding QR codes from local images, QR code generation, flashlight control, and more. While others are still frantically writing code, you're already drinking coffee.
## **Preview**



## Features
- QrCode Decode
- QrCode Encode
- Flash light
- Load and scan images containing QR Code
- Easy to use
Implementation
----
```xml
allprojects {
repositories {
maven { url 'https://jitpack.io' }
}
}
dependencies {
implementation 'com.github.wdsqjq:QrCodeScanner:1.0.1'
}
```
Usage
----
In your activity or fragment
```kotlin
// Scan a QR code
QrCodeScanner.with(this)
    .onSuccess {
        tvResult.text = "Scan result: $it"
    }
    .onFail {
        tvResult.text = "Scan failed: $it"
    }
    .start()
// Generate a QR code
// Set the content to encode, the size, and an optional logo
val result = QrCodeScanner.createQrCode("content", 500, 500, null)
```
## License
```
MIT License
Copyright (c) 2020 wangsj
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
| 24.494253 | 214 | 0.737213 | yue_Hant | 0.600962 |
5007c131844878f940afc6b7bbc3c1ad83827678 | 73 | md | Markdown | README.md | Xpreme/Ticket-Collector | e56591a236a3cb6fd14e3be08d501f94f5b73e4b | [
"MIT"
] | null | null | null | README.md | Xpreme/Ticket-Collector | e56591a236a3cb6fd14e3be08d501f94f5b73e4b | [
"MIT"
] | null | null | null | README.md | Xpreme/Ticket-Collector | e56591a236a3cb6fd14e3be08d501f94f5b73e4b | [
"MIT"
] | null | null | null | # Ticket-Collector
a program that saves ticket versions or something idk
| 24.333333 | 53 | 0.821918 | eng_Latn | 0.993864 |
50087b2598533d31bbcdfc97b67ec35b23a8dc19 | 612 | md | Markdown | README.md | kelunik/feature | e1afc2863fb7420c05c1db9402a4c136d8a6dbf3 | [
"MIT"
] | 1 | 2016-01-16T08:29:15.000Z | 2016-01-16T08:29:15.000Z | README.md | kelunik/feature | e1afc2863fb7420c05c1db9402a4c136d8a6dbf3 | [
"MIT"
] | null | null | null | README.md | kelunik/feature | e1afc2863fb7420c05c1db9402a4c136d8a6dbf3 | [
"MIT"
] | null | null | null | # feature
[](https://travis-ci.org/kelunik/feature)
[](https://coveralls.io/github/kelunik/feature?branch=master)

`kelunik/feature` is a feature flagging library for use with the [`amp`](https://github.com/amphp/amp) concurrency framework.
**Required PHP Version**
- PHP 5.6+
**Installation**
```bash
$ composer require kelunik/feature
``` | 36 | 157 | 0.753268 | yue_Hant | 0.445503 |
5008da7234016ba2edb859968368a4363379e1de | 510 | md | Markdown | docs/language-reference/functions/parent.md | bernard-ng/twing | 56e186b58ef05d8e2f17ae2a66a2522a40bae76d | [
"BSD-2-Clause"
] | 115 | 2018-01-15T14:18:54.000Z | 2022-03-17T19:35:13.000Z | docs/language-reference/functions/parent.md | bernard-ng/twing | 56e186b58ef05d8e2f17ae2a66a2522a40bae76d | [
"BSD-2-Clause"
] | 324 | 2017-10-05T16:49:39.000Z | 2019-09-26T12:29:24.000Z | docs/language-reference/functions/parent.md | bernard-ng/twing | 56e186b58ef05d8e2f17ae2a66a2522a40bae76d | [
"BSD-2-Clause"
] | 22 | 2019-09-26T14:03:11.000Z | 2022-02-18T11:02:58.000Z | `parent`
========
{% raw %}
When a template uses inheritance, it's possible to render the contents of the parent block when overriding a block by using the `parent` function:
````twig
{% extends "base.html" %}
{% block sidebar %}
<h3>Table Of Contents</h3>
...
{{ parent() }}
{% endblock %}
````
The `parent()` call will return the content of the `sidebar` block as defined in the `base.html` template.
{% endraw %}
[back]({{ site.baseurl }}{% link language-reference/functions/index.md %})
| 22.173913 | 146 | 0.645098 | eng_Latn | 0.928847 |
5009f2a31d90a74042e04984c8e4dbcbacc95e86 | 14,344 | md | Markdown | examples/tag_resources_in_tenancy/README.md | pabs3/oci-python-sdk | 437ba18ce39af2d1090e277c4bb8750c89f83021 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | examples/tag_resources_in_tenancy/README.md | pabs3/oci-python-sdk | 437ba18ce39af2d1090e277c4bb8750c89f83021 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | examples/tag_resources_in_tenancy/README.md | pabs3/oci-python-sdk | 437ba18ce39af2d1090e277c4bb8750c89f83021 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | ## Tag Resources in Tenancy
Tag Resources in Tenancy is a tool to help you tag resources easily; it uses the OCI Python SDK.
It covers the OCI components listed below.
Authentication is by user or, for a compute instance, using instance principals.
Output can be printer-friendly, summary, or JSON.
**Developed by Adi Zohar, Nov 2020**
## Modules Included:
- oci.core.VirtualNetworkClient
- oci.core.ComputeClient
- oci.core.BlockstorageClient
- oci.object_storage.ObjectStorageClient
- oci.database.DatabaseClient
- oci.load_balancer.LoadBalancerClient
- oci.identity.IdentityClient
** DISCLAIMER – This is not an official Oracle application
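For orientation, this is roughly how the clients in the module list above get constructed for the two authentication modes the tool mentions (user config file vs. instance principals). It is a generic OCI Python SDK sketch, not an excerpt from the script itself:
```
import oci

# User-based authentication: reads the DEFAULT profile from ~/.oci/config
config = oci.config.from_file("~/.oci/config", "DEFAULT")
identity = oci.identity.IdentityClient(config)
print(identity.get_tenancy(config["tenancy"]).data.name)

# Instance-principals authentication (what the -ip flag selects):
# no config file is needed when running on an OCI compute instance
signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
identity_ip = oci.identity.IdentityClient(config={}, signer=signer)
```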
## Executing using Cloud Shell:
```
1. install oci sdk package
pip3 install --user oci
2. clone the oci sdk repo
git clone https://github.com/oracle/oci-python-sdk
3. run with delegation token
cd oci-python-sdk/examples/tag_resources_in_tenancy
python3 tag_resources_in_tenancy.py --help
```
## OCI Authentication using User - Privileges can be aligned accordingly
Requires an OCI IAM user with manage privileges on the compartment you aim to tag:
```
ALLOW GROUP ReadOnlyUsers to manage all-resources IN TENANCY
```
## Installation of Python 3 in case you don't have Python 3 installed:
Please follow Python Documentation - https://docs.python.org/3/using/index.html
## Install oci SDK Packages:
Please follow Oracle Python SDK Documentation - https://github.com/oracle/oci-python-sdk
## Copy the Software
Download the tag_resources_in_tenancy.py from this project
Execute
```
$ python3 tag_resources_in_tenancy.py --help
usage: tag_resources_in_tenancy.py [-h] [-t CONFIG_PROFILE] [-p PROXY] [-cp COMPARTMENT] [-rg REGION] [-ip] [-dt] [-tag TAG] [-tagseperator TAGSEPERATOR]
[-action {add_defined,add_free,del_defined,del_free,list}] [-output {list,json,summary}] [-force] [-service SERVICE]
[-filter_by_name FILTER_BY_NAME]
optional arguments:
-h, --help show this help message and exit
-t CONFIG_PROFILE Config file section to use (tenancy profile)
-p PROXY Set Proxy (i.e. www-proxy-server.com:80)
-cp COMPARTMENT Filter by Compartment Name or Id
-rg REGION Filter by Region Name
-ip Use Instance Principals for Authentication
-dt Use Delegation Token for Authentication
-tag TAG Tags in format - namespace.key=value or key=value with comma seperator for multi tags
-tagseperator TAGSEPERATOR Tag Seperator for multiple tags, default=,
-action {add_defined,add_free,del_defined,del_free,list} Action Type
-output {list,json,summary} Output type, default=summary
-force Force execution (do not confirm)
-service SERVICE Services = all,compute,block,network,identity,loadbalancer,database,object,file. default=all
-filter_by_name FILTER_BY_NAME Filter service by name, comma seperator for multi names
```
## Example Execution for adding defined Tags:
```
python3 tag_resources_in_tenancy.py -action add_defined -tag BillingNS.Division=TEST -cp TestCompartment -rg us-ashburn-1
Connecting to Identity Service...
Loading Compartments...
Total 1 compartments loaded.
##########################################################################################
# Running Tag Resources #
##########################################################################################
Written By Adi Zohar, Feb 2022
Starts at 2022-02-03 12:33:40
Command Line : -cp Test -rg us-ashburn-1 -action add_defined -tag project.desc=description of the project,project.team=Team A
Tag 1 : project.desc=description of the project
Tag 2 : project.team=Team A
Tag Seperator : ,
Tenant Name : orasenatdpltdevopsnetw02
Tenant Id : ocid1.tenancy.oc1..aaaaaaaaxtkkpxc5qwgpwx7y2wt5pinegyzea4uacnmck7iamsssjvw4s3bq
Reading Tag Namespaces...
Found Tag Namespace 'project', id = ocid1.tagnamespace.oc1..aaaaaaaaofnh6y66knfqxg3ihjd53e2olhotnetsisoxxxxx
Found Tag Key 'desc', id = ocid1.tagdefinition.oc1..aaaaaaaakvoslxfyen744cotrkdrv4hrlwrs3rqd3uhr64hmfxxxxxxx
Found Tag Namespace 'project', id = ocid1.tagnamespace.oc1..aaaaaaaaofnh6y66knfqxg3ihjd53e2olhotnetsisoxxxxx
Found Tag Key 'team', id = ocid1.tagdefinition.oc1..aaaaaaaauvpkmmewoxifdaih4g2p5u6j7m5xvsujeelsoha4xxxxxxxx
Type yes to execute: yes
Processing Regions...
Region us-ashburn-1...
Compartment Test
Instances - 1 Tag Added = 0 Tag Updated = 1 Tag Exist = 1
Boot Volumes - 1 Tag Added = 0 Tag Updated = 1 Tag Exist = 1
Boot Volumes Backups - 7 Tag Added = 0 Tag Updated = 7 Tag Exist = 7
Block Volumes - (-)
Block Volumes Backups - (-)
Volume Groups - (-)
Volume Groups Backup - (-)
Network VCNs - 4 Tag Added = 0 Tag Updated = 4 Tag Exist = 4
Network Subnets - 6 Tag Added = 0 Tag Updated = 6 Tag Exist = 6
Network CPEs - 1 Tag Added = 0 Tag Updated = 1 Tag Exist = 1
Network DHCPs - 4 Tag Added = 0 Tag Updated = 4 Tag Exist = 4
Network IGWs - 3 Tag Added = 0 Tag Updated = 3 Tag Exist = 3
Network IPSECs - 1 Tag Added = 0 Tag Updated = 1 Tag Exist = 1
Network LPGs - 2 Tag Added = 0 Tag Updated = 2 Tag Exist = 2
Network NATGWs - 2 Tag Added = 0 Tag Updated = 2 Tag Exist = 2
Network RPGs - 1 Tag Added = 0 Tag Updated = 1 Tag Exist = 1
Network Routes - 8 Tag Added = 0 Tag Updated = 8 Tag Exist = 8
Network SLs - 5 Tag Added = 0 Tag Updated = 5 Tag Exist = 5
Network SGWs - 2 Tag Added = 0 Tag Updated = 2 Tag Exist = 2
Network VCircuit - (-)
Load Balancers - 2 Tag Added = 0 Tag Updated = 2 Tag Exist = 2
DB DB Systems - 1 Tag Added = 0 Tag Updated = 1 Tag Exist = 1
DB Autonomous - 1 Tag Added = 0 Tag Updated = 1 Tag Exist = 1
Object Storage Buckets - 2 Tag Added = 0 Tag Updated = 2 Tag Exist = 2
##########################################################################################
# Output as List #
##########################################################################################
us-ashburn-1 | Test | Instances | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes Backups | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes Backups | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes Backups | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes Backups | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes Backups | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes Backups | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Boot Volumes Backups | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network VCNs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network VCNs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network VCNs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network VCNs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Subnets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Subnets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Subnets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Subnets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Subnets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Subnets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network CPEs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network DHCPs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network DHCPs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network DHCPs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network DHCPs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network IGWs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network IGWs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network IGWs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network IPSECs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network LPGs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network LPGs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network NATGWs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network NATGWs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network RPGs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network Routes | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network SLs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network SLs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network SLs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network SLs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network SLs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network SGWs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Network SGWs | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Load Balancers | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Load Balancers | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | DB DB Systems | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | DB Autonomous | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Object Storage Buckets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
us-ashburn-1 | Test | Object Storage Buckets | Added: 0 | Updated: 1 | Deleted: 0 | Exist: 1 | ...
##########################################################################################
# Completed at 2022-02-03 12:34:21 #
##########################################################################################
```
| 73.183673 | 153 | 0.49268 | kor_Hang | 0.371508 |
500a4774fb099b4875a34ea5818bdd8608f3c951 | 495 | md | Markdown | apache-lua/README.md | soapdog/docker-lua | 68fd978122713130f31b6f94c3e57abe4f5a3fe6 | [
"MIT"
] | 1 | 2018-06-28T04:16:42.000Z | 2018-06-28T04:16:42.000Z | apache-lua/README.md | soapdog/docker-lua | 68fd978122713130f31b6f94c3e57abe4f5a3fe6 | [
"MIT"
] | null | null | null | apache-lua/README.md | soapdog/docker-lua | 68fd978122713130f31b6f94c3e57abe4f5a3fe6 | [
"MIT"
] | null | null | null | # What it is?
Based on Ubuntu 16.10 with the following packages:
* build-essential
* git
* Lua 5.1
* Luarocks
* Apache2
## Apache
The modules **ssl, rewrite and lua** are enabled. Also, **/var/www/html** is set to **AllowOverride All**. The server is configured to run on ports 80 and 443 (SSL).
You can use a volume to shadow **/var/www/html** to try your own code.
## Remarks
```build-essential``` and ```git``` are present for **Luarocks**' benefit, so that you can compile native rocks. | 30.9375 | 163 | 0.69899 | eng_Latn | 0.997157 |
500a6873fb95fcbd47e0b14f3b1938ce3bd8b56f | 971 | md | Markdown | domain/nextwavemarketingstrategies.com/index.md | billfitzgerald/smmd | 9af567b54b39dc2872cf0ee6c3ada27627490c42 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-12-20T19:10:17.000Z | 2021-07-18T22:32:37.000Z | domain/nextwavemarketingstrategies.com/index.md | billfitzgerald/smmd | 9af567b54b39dc2872cf0ee6c3ada27627490c42 | [
"CC-BY-4.0",
"MIT"
] | 8 | 2020-06-19T16:02:03.000Z | 2021-08-24T16:49:39.000Z | domain/nextwavemarketingstrategies.com/index.md | billfitzgerald/smmd | 9af567b54b39dc2872cf0ee6c3ada27627490c42 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-29T20:36:31.000Z | 2020-06-29T20:36:31.000Z | ---
company-name: "Next Wave Marketing Strategies, Inc"
domain: nextwavemarketingstrategies.com
home: http://www.nextwavemarketingstrategies.com
email: "troy [at] nextwavemarketingstrategies.com"
california-date: 01/31/2020
---
## How to opt out
A consumer may opt out of sale or submit requests under the CCPA by calling our toll free phone number, by email, by direct mail or by online form submission found by clicking on CCPA privacy rights request form link within our Privacy Notice for California residents page found by clicking Do Not Sell My Information button on my site.
## How to delete
A consumer may opt out by calling our toll free phone number, by email, by direct mail or by online form submission found by clicking on CCPA Opt-Out form link within our Privacy Notice for California residents page found by clicking Do Not Sell My Information button on my site.
## Additional info
15527 Jasmine Place, Tustin, CA 92782, United States
| 26.243243 | 336 | 0.781668 | eng_Latn | 0.991232 |
500ba46ae3e28a92b1360c1717a1254002b1bd3d | 5,043 | md | Markdown | README.md | rogerjdeangelis/utl-removing-factors-and-preserving-type-and-length-when-importing-spss-sav-tables | 8fa2e41e2fd0e73c4ae18023ba8630e691120921 | [
"MIT"
] | null | null | null | README.md | rogerjdeangelis/utl-removing-factors-and-preserving-type-and-length-when-importing-spss-sav-tables | 8fa2e41e2fd0e73c4ae18023ba8630e691120921 | [
"MIT"
] | null | null | null | README.md | rogerjdeangelis/utl-removing-factors-and-preserving-type-and-length-when-importing-spss-sav-tables | 8fa2e41e2fd0e73c4ae18023ba8630e691120921 | [
"MIT"
] | null | null | null | # utl-removing-factors-and-preserving-type-and-length-when-importing-spss-sav-tables
Removing factors and preserving type and length when importing SPSS sav tables.
github
https://tinyurl.com/ycfgmmc8
https://github.com/rogerjdeangelis/utl-removing-factors-and-preserving-type-and-length-when-importing-spss-sav-tables
SAS Forum
https://tinyurl.com/y8s9gd3u
https://communities.sas.com/t5/SAS-Programming/Importing-SPSS-sav-file-into-SAS-question/m-p/498744
github macros
https://github.com/rogerjdeangelis/utl-macros-used-in-many-of-rogerjdeangelis-repositories
INPUT
=====
SPSS sav file
d:/sav/utl-importing-spss-sav-tables.sav
SAV.CLASS total obs=19
NAME SEX AGE HEIGHT WEIGHT
Alfred M 14 69.0 112.5
Alice F 13 56.5 84.0
Barbara F 13 65.3 98.0
Carol F 14 62.8 102.5
Henry M 14 63.5 102.5
....
VARIABLE attributes
Variable Type
NAME String ** from sashelp.class so length is 8
SEX String ** from sashelp.class so length is 1
AGE Numeric
HEIGHT Numeric
WEIGHT Numeric
EXAMPLE OUTPUT (SAS dataset from SPSS table)
--------------------------------------------
WORK.WANT total obs=19
NAME SEX AGE HEIGHT WEIGHT
Alfred M 14 69.0 112.5
Alice F 13 56.5 84.0
Barbara F 13 65.3 98.0
Carol F 14 62.8 102.5
Henry M 14 63.5 102.5
James M 12 57.3 83.0
..
# Variable Type Len
1 NAME Char 8 ** matches the original length (sashelp.class)
2 SEX Char 1 ** matches the original length (sashelp.class)
3 AGE Num 8
4 HEIGHT Num 8
5 WEIGHT Num 8
PROCESS
=======
* just in case;
%utlfkil(d:/sav/utl-importing-spss-sav-tables.xpt);
%utl_submit_r64('
library(foreign);
library(SASxport);
want<-read.spss("d:/sav/utl-importing-spss-sav-tables.sav");
want <- as.data.frame(lapply(want,
function (y) if(class(y)=="factor" ) as.character(y) else y),stringsAsFactors=F);
write.xport(want,file="d:/sav/utl-importing-spss-sav-tables.xpt");
');
libname xpt xport "d:/sav/utl-importing-spss-sav-tables.xpt";
proc contents data=xpt.want position;
run;quit;
data want;
set xpt.want;
run;quit;
proc print;
run;quit;
OUTPUT (see above)
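PYTHON ALTERNATIVE (hedged aside)
=================================

If you work in Python rather than R, the pyreadstat package gives the same
"no factors, keep types" import. This is my own sketch, not part of the
original flow; the path is just the one used above.

import pyreadstat

df, meta = pyreadstat.read_sav("d:/sav/utl-importing-spss-sav-tables.sav")

print(df.dtypes)                      # NAME/SEX stay character, numerics stay numeric
print(meta.original_variable_types)   # SPSS formats, e.g. NAME -> A8 (8-char string)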
* _ _ _
_ __ ___ __ _| | _____ __| | __ _| |_ __ _
| '_ ` _ \ / _` | |/ / _ \ / _` |/ _` | __/ _` |
| | | | | | (_| | < __/ | (_| | (_| | || (_| |
|_| |_| |_|\__,_|_|\_\___| \__,_|\__,_|\__\__,_|
;
* libname fails so EG weakens classic SAS;
proc export data=sashelp.class
file="d:/sav/utl-importing-spss-sav-tables.sav"
dbms=spss replace;
fmtlib=library.formats;
run;quit;
*_
| | ___ __ _
| |/ _ \ / _` |
| | (_) | (_| |
|_|\___/ \__, |
|___/
;
> library(foreign);
> library(SASxport);
> want<-read.spss("d:/sav/utl-importing-spss-sav-tables.sav");
> want <- as.data.frame(lapply(want,
> function (y) if(class(y)=="factor" ) as.character(y) else y),stringsAsFactors=F);
> write.xport(want,file="d:/sav/utl-importing-spss-sav-tables.xpt");
NOTE: 3 lines were written to file PRINT.
Stderr output:
Attaching package: 'SASxport'
The following objects are masked from 'package:foreign':
lookup.xport, read.xport
Warning message:
package 'SASxport' was built under R version 3.3.3
In read.spss("d:/sav/utl-importing-spss-sav-tables.sav") :
d:/sav/utl-importing-spss-sav-tables.sav: Compression bias (0) is not the usual value of 100
NOTE: 2 records were read from the infile RUT.
The minimum record length was 2.
The maximum record length was 290.
5303 libname xpt xport "d:/sav/utl-importing-spss-sav-tables.xpt";
NOTE: Libref XPT was successfully assigned as follows:
Engine: XPORT
Physical Name: d:\sav\utl-importing-spss-sav-tables.xpt
5304 proc contents data=xpt.want position;
5305 run;
NOTE: PROCEDURE CONTENTS used (Total process time):
real time 0.03 seconds
5305! quit;
5306 data want;
5307 set xpt.want;
5308 run;
NOTE: There were 19 observations read from the data set XPT.WANT.
NOTE: The data set WORK.WANT has 19 observations and 5 variables.
NOTE: DATA statement used (Total process time):
real time 0.02 seconds
5308! quit;
| 28.653409 | 121 | 0.560777 | eng_Latn | 0.433779 |
500cb40bbae7670224d3b18258a2854d4fc840fe | 111 | md | Markdown | README.md | faaaricha/shop.krs | 46a84cdc9c0781afe1b604218041a561fa7b5a66 | [
"MIT"
] | null | null | null | README.md | faaaricha/shop.krs | 46a84cdc9c0781afe1b604218041a561fa7b5a66 | [
"MIT"
] | null | null | null | README.md | faaaricha/shop.krs | 46a84cdc9c0781afe1b604218041a561fa7b5a66 | [
"MIT"
] | null | null | null | # Kedai Resensi Surabaya Publication Catalog
A website to display the catalog of books published by Kedai Resensi Surabaya.
| 27.75 | 67 | 0.837838 | ind_Latn | 0.668485 |
500d044b893552d2d0b7c6c11888e758a43bd0a6 | 36 | md | Markdown | README.md | mattduggan/tsconfig | 4abc71b43a44e280df2ea60c289eeb6504b93e4f | [
"MIT"
] | null | null | null | README.md | mattduggan/tsconfig | 4abc71b43a44e280df2ea60c289eeb6504b93e4f | [
"MIT"
] | null | null | null | README.md | mattduggan/tsconfig | 4abc71b43a44e280df2ea60c289eeb6504b93e4f | [
"MIT"
] | null | null | null | # tsconfig
TypeScript Configuration
| 12 | 24 | 0.861111 | deu_Latn | 0.515519 |
500eb712d73f4d69bfe8a81ffcf8f5a3875eaec1 | 72 | md | Markdown | README.md | jstasiak/junkcode | 37437b1df64088a52aedc5d66867f022dac9d06c | [
"MIT"
] | null | null | null | README.md | jstasiak/junkcode | 37437b1df64088a52aedc5d66867f022dac9d06c | [
"MIT"
] | null | null | null | README.md | jstasiak/junkcode | 37437b1df64088a52aedc5d66867f022dac9d06c | [
"MIT"
] | null | null | null | # junkcode
Random potentially useful stuff I don't keep anywhere else.
| 18 | 59 | 0.791667 | eng_Latn | 0.97758 |
500edd2ea2d28f7003967d49db489db2a11084be | 1,059 | md | Markdown | README.md | markhaehnel/rbtv-firetv | bdac093e63dc3b40be4621d951fcb03eaf6d4fbc | [
"MIT"
] | 12 | 2016-01-28T12:19:36.000Z | 2017-04-05T03:32:05.000Z | README.md | markhaehnel/rbtv-firetv | bdac093e63dc3b40be4621d951fcb03eaf6d4fbc | [
"MIT"
] | 10 | 2016-01-28T08:42:10.000Z | 2017-06-07T12:48:05.000Z | README.md | EZTEQ/rbtv-firetv | bdac093e63dc3b40be4621d951fcb03eaf6d4fbc | [
"MIT"
] | 4 | 2016-04-30T15:06:46.000Z | 2016-10-04T13:02:31.000Z | <p align="center">
<img src="https://i.imgur.com/ulstbn3.png">
</p>
<p align="center">
<a href="https://app.bitrise.io/app/46391304895b0957"><img src="https://img.shields.io/bitrise/46391304895b0957/master.svg?token=DgTkmd96_gVVpmAwbwqu_g&style=for-the-badge"></a>
<a href="https://circleci.com/gh/markhaehnel/RocketBeansTV.Android"><img src="https://img.shields.io/github/license/markhaehnel/RocketBeansTV.Android.svg?style=for-the-badge"></a>
<a href="https://circleci.com/gh/markhaehnel/RocketBeansTV.Android"><img src="https://img.shields.io/github/release/markhaehnel/RocketBeansTV.Android.svg?style=for-the-badge"></a>
</p>
<p align="center">
<a href="https://play.google.com/store/apps/details?id=de.markhaehnel.rbtv.rocketbeanstv"><img src="https://i.imgur.com/LqPUAI5.png"></a>
<a href="https://www.amazon.de/dp/B018429HN6"><img src="https://i.imgur.com/JCXhOrC.png"></a>
</p>
<p align="center">
Watch Rocket Beans TV on your Android TV or Fire TV and access features like schedule and chat directly from the app.
</p>
| 55.736842 | 183 | 0.72238 | yue_Hant | 0.685355 |
50119ac2890da745419cff055376818147e5156d | 2,870 | md | Markdown | _posts/2019-03-02-Download-panasonic-jsw550-cash-register-manual.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-03-02-Download-panasonic-jsw550-cash-register-manual.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-03-02-Download-panasonic-jsw550-cash-register-manual.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Panasonic jsw550 cash register manual book
I continuation, 'Hearkening and obedience, avouching that it was a privy door, panasonic jsw550 cash register manual grapes dark purple in the east, as though it were a fallen behemoth from the "The Company is in the King's employ, I wouldn't be surprised by any dumbness THE ORGANIZER: Very well! passed since the first shots were fired in the kitchen. Little girls like you Pee their pants and run screaming. On this wise he abode a great while, but always alone, and besides with portions of the skeleton distribution of Project Gutenberg-tm works, the bedroom was immaculate, had travelled hither and communicated their cabins nine Russian householders live with their servants. Preobraschenie Island lay S. Panasonic jsw550 cash register manual the right one. "Do you So he left her and slept his night and on the morrow he repaired to the shop of his friend the druggist and saluted him. " terrain than what Nevada had offered. I never discussed it with Gimma, and he believed that he "But she sure does give the man major class and respectability. The shiny surface of all things, he would flee from them and fortify himself in the mountains, formerly savage and warlike, I'll return it to you when you leave, Junior enterteinment than the other; but you shal vnderstand that He didn't intend to use it to kill anyone. "After what we saw today, and when they came into his presence. " Then she took a pitcher of water and going into the lavatory, here, comme fa. A tale of the Vedurnan or Division, never saved a life, quiet, I know, waiting for a flats, he and the cook, till he seized on them all, and more That was where Hound found panasonic jsw550 cash register manual, "Take him up," [returned to the palace], on the bath mat. " "So it is, and take him elsewhere! back, and brought with appeared in his loose cotton greens. " Buddha. stretched as languorously as a sleeper waking from a delicious dream. " he wished to exchange the gun and ammunition for an axe. Only here and there an opening was formed in the cloud, sir, without leave or commandment. make sense to me. Junior would never again use it to store leftover soup. He said that you may go study with him in South Port for a year, vanished. " For a while at least, and from this impromptu do! way. blitzes past all tumbling obstacles to reach the summit even panasonic jsw550 cash register manual the fourth shot strikes and the fifth misses. The brightness of the SOURCES OF HISTORY "Well, the maniac raged at the window with the snarling ferocity of a caged beast, which was also the reason for posting troops throughout the vessel. You're welcome. " Lesseps, panasonic jsw550 cash register manual No one in Georgia has trots. panasonic jsw550 cash register manual sounds lovely! | 318.888889 | 2,759 | 0.788502 | eng_Latn | 0.999897 |
5013321097f6a76de5a7a6a926cdb1403e8d406d | 125 | md | Markdown | README.md | Rust-Meetup-Paris/Talks | d28c3b38e1be77bf83da32b038f3226787d685cd | [
"MIT"
] | 1 | 2021-03-09T23:16:45.000Z | 2021-03-09T23:16:45.000Z | README.md | Rust-Meetup-Paris/Talks | d28c3b38e1be77bf83da32b038f3226787d685cd | [
"MIT"
] | null | null | null | README.md | Rust-Meetup-Paris/Talks | d28c3b38e1be77bf83da32b038f3226787d685cd | [
"MIT"
] | null | null | null | Talks
=====
A repository for the talk given at Rust Meetup Paris.
Github Website: http://rust-meetup-paris.github.io/Talks
| 17.857143 | 56 | 0.744 | kor_Hang | 0.343947 |
5014af325c8db992db08900a198b60bd1ae27e08 | 186 | md | Markdown | example-app/README.md | leomeneguzzi/adminjs-prisma | aad7eeeab8f5d406610f4e37a945bea6336ae676 | [
"MIT"
] | 15 | 2021-11-04T14:15:57.000Z | 2022-03-19T08:28:41.000Z | example-app/README.md | leomeneguzzi/adminjs-prisma | aad7eeeab8f5d406610f4e37a945bea6336ae676 | [
"MIT"
] | 7 | 2021-10-19T11:15:20.000Z | 2022-02-08T10:22:13.000Z | example-app/README.md | leomeneguzzi/adminjs-prisma | aad7eeeab8f5d406610f4e37a945bea6336ae676 | [
"MIT"
] | 6 | 2021-11-09T20:51:48.000Z | 2022-02-21T18:22:34.000Z | # AdminJS + Prisma Example App
Steps to run this project:
1. Run `yarn install` command
2. Run `npx prisma migrate dev` command
3. Run `yarn build` command
4. Run `yarn start` command
| 20.666667 | 39 | 0.731183 | eng_Latn | 0.938271 |
501935c9b70fa597c2ae85f2ed92a42ad75e39f2 | 591 | md | Markdown | Sentimental Analysis Project/README.md | jfarrell8/Udacity-ML-Nanodegree-Projects | c321189cf26a7c944fdcfddde0712701b6eac1d5 | [
"MIT"
] | null | null | null | Sentimental Analysis Project/README.md | jfarrell8/Udacity-ML-Nanodegree-Projects | c321189cf26a7c944fdcfddde0712701b6eac1d5 | [
"MIT"
] | null | null | null | Sentimental Analysis Project/README.md | jfarrell8/Udacity-ML-Nanodegree-Projects | c321189cf26a7c944fdcfddde0712701b6eac1d5 | [
"MIT"
] | null | null | null | ***Sentiment Analysis Project***
The purpose of this project was to get familiar with Amazon's SageMaker by building a custom PyTorch recurrent neural network (RNN) to determine the sentiment (positive or negative) of a movie review from the [IMDb dataset](https://ai.stanford.edu/~amaas/data/sentiment/).
I trained the custom RNN model using a modified tokenized bag-of-words feature set, deployed an endpoint, and evaluated a new review within the notebook. I took this one step further by creating a simple web app using an API Gateway and a Lambda function to predict review sentiment.
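A minimal sketch of the kind of bag-of-words preprocessing described above. This is my own illustration of the technique, not the project's actual helper code (the function names and the 5,000-word vocabulary size are assumptions):
```
import re
from collections import Counter

def tokenize(review):
    # lowercase, drop punctuation, split on whitespace
    return re.sub(r"[^a-z0-9 ]", " ", review.lower()).split()

def build_vocab(reviews, vocab_size=5000):
    counts = Counter(tok for review in reviews for tok in tokenize(review))
    # keep only the most frequent words; each gets an integer index
    return {word: i for i, (word, _) in enumerate(counts.most_common(vocab_size))}

def to_bow(review, vocab):
    # fixed-length count vector over the vocabulary
    vec = [0] * len(vocab)
    for tok in tokenize(review):
        if tok in vocab:
            vec[vocab[tok]] += 1
    return vec
```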
| 98.5 | 282 | 0.795262 | eng_Latn | 0.996512 |
5019768193e28a86f8cedddaf6a55c9b8e34f9bd | 40 | md | Markdown | README.md | eliasjay/js-expert-bandersnatch | 94687c387298b2ea5dbc64fb43f45aa77e38abf4 | [
"MIT"
] | null | null | null | README.md | eliasjay/js-expert-bandersnatch | 94687c387298b2ea5dbc64fb43f45aa77e38abf4 | [
"MIT"
] | null | null | null | README.md | eliasjay/js-expert-bandersnatch | 94687c387298b2ea5dbc64fb43f45aa77e38abf4 | [
"MIT"
] | null | null | null | # js-expert-bandersnatch
JS Expert Week
| 13.333333 | 24 | 0.8 | kor_Hang | 0.374164 |
50198bf49459ea6c38f6d24868117b23811ae1f3 | 1,296 | markdown | Markdown | _posts/2017-03-24-diy-barnwood-bathroom-decoration.markdown | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _posts/2017-03-24-diy-barnwood-bathroom-decoration.markdown | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _posts/2017-03-24-diy-barnwood-bathroom-decoration.markdown | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | ---
layout: post
title: "19 Uses for Diy Barnwood Bathroom Decoration"
postname: "diy-barnwood-bathroom-decoration"
date: 2017-03-24 11:11:25 +0700
categories: [resume]
---
I still was able to countersink the screws only by applying a pressure in place of needing to go through, because the timber is more tender! You are able to the timber or it could be painted by you. It is likely to likewise abandon the timber uncooked, dependent in your preference. You can depart the organic wood or paint an enjoyable color like colour, also! You don't need to detect or employ older pallet wood. It's likely to decide on a style which matches the remainder of the decor or choose the one that posseses an general appearance. Itnotable tactics to combine and fit various styles in a bathroom. One's bathroom sink's manner will permit you to eliminate even the manner that is incorrect or faucets which are simply too high. You then should take a peek at pubs jobs, if you love woodworking do it yourself. Whether you're looking to get a bucolic, classic look or something more sleek and contemporary, you are able to DIY some decoration to improve your bath. Be sure to look in the pictures of the method by which the timber plank wall escalates your own shop's easy, but vivid and chic overall look.
| 144 | 1,119 | 0.789352 | eng_Latn | 0.999902 |
501a1a6697a4353ab5708b3f8eac48ed47f3bea8 | 1,985 | md | Markdown | README.md | swathireddy26/ERC20_Token_Sale | c6ce32a31b7c658e30e6bea785bde894152bd633 | [
"MIT"
] | null | null | null | README.md | swathireddy26/ERC20_Token_Sale | c6ce32a31b7c658e30e6bea785bde894152bd633 | [
"MIT"
] | null | null | null | README.md | swathireddy26/ERC20_Token_Sale | c6ce32a31b7c658e30e6bea785bde894152bd633 | [
"MIT"
] | null | null | null | # ERC20_Token_Sale
1. Created an ERC20 token with the help of the OpenZeppelin library. The number of tokens created is constant.
2. The owner of the DApp can whitelist any user by just passing in the address. These whitelisted users can then buy any number of tokens by sending ether (see the sketch after this list).
3. Non-whitelisted users are not allowed to buy tokens.
4. Deployed the application on Ganache and the Rinkeby test network.
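A hedged sketch of that whitelist-then-buy flow as seen from a script, using web3.py. Every specific name here (the contract functions `addToWhitelist` and `buyTokens`, the address, the ABI) is a placeholder for illustration; the actual contract in this repo may use different identifiers, and the ABI must be filled in before running:
```
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:7545"))  # local Ganache RPC
owner, buyer = w3.eth.accounts[0], w3.eth.accounts[1]

TOKEN_ABI = [...]  # placeholder: paste the ABI from build/contracts/<Token>.json
token = w3.eth.contract(address="0xYourDeployedTokenAddress", abi=TOKEN_ABI)

# Owner whitelists the buyer (function name is hypothetical)
token.functions.addToWhitelist(buyer).transact({"from": owner})

# The whitelisted buyer purchases tokens by sending ether (name hypothetical)
token.functions.buyTokens().transact({"from": buyer, "value": w3.toWei(1, "ether")})
```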
# Project Diagram
<img width="1653" alt="ERC20_Token_Sale" src="https://user-images.githubusercontent.com/10496268/126787301-577282fd-24d5-466b-92da-c73e9d6e5051.png">
# Reference:
https://ethereum-blockchain-developer.com/060-tokenization/00-overview/
# Tools needed:
1. Truffle box for Local DAPP Developemnt: https://github.com/truffle-box/react-box
2. Metamask for Deploying it in Test networks: https://metamask.io/
3. Ganache by Truffle for Local blockchain testing: https://www.trufflesuite.com/ganache
# Installation
1. npm install -g truffle ---> Truffle installation
2. truffle unbox react ---> Unbox the React box
## How to execute the DAPP?
1. Connect Ganache to MetaMask using the Custom RPC option in MetaMask and import the accounts from Ganache
2. Make sure that Ganache is running locally (we can start Ganache either with the GUI or by using the Ganache CLI: "ganache-cli --port 7545 --chainId 5777")
3. Run the React application using "npm run start"; this will start our React application at localhost:3000
4. The application will create a fixed amount of tokens
5. The owner can whitelist an address by supplying the address and clicking on "add address to whitelist". If the transaction goes through, then the address is whitelisted.
6. Only whitelisted addresses can buy tokens by clicking on "buy tokens". When addresses that are not whitelisted try to buy tokens, those transactions are rejected.
7. We can use either Ganache or any test network to deploy the application on the blockchain. We use Infura to deploy the app on test networks.
| 55.138889 | 172 | 0.787406 | eng_Latn | 0.988257 |
501adf29b2fcc21de3d5e51e206bb20dd2be8e95 | 24,357 | md | Markdown | reference/docs-conceptual/install/Installing-PowerShell-Core-on-Linux.md | RyanKing77/powerShell-Docs.hu-hu | 668dc81ad8c551f2e3b5596ea30a6e012a957825 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | reference/docs-conceptual/install/Installing-PowerShell-Core-on-Linux.md | RyanKing77/powerShell-Docs.hu-hu | 668dc81ad8c551f2e3b5596ea30a6e012a957825 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | reference/docs-conceptual/install/Installing-PowerShell-Core-on-Linux.md | RyanKing77/powerShell-Docs.hu-hu | 668dc81ad8c551f2e3b5596ea30a6e012a957825 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Installing PowerShell Core on Linux
description: Information about installing PowerShell Core on various Linux distributions
ms.date: 07/19/2019
ms.openlocfilehash: 7d7c9a9f915f0a6e735a7baec1ec56e9c205a155
ms.sourcegitcommit: 00083f07b13c73b86936e7d7307397df27c63c04
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 09/10/2019
ms.locfileid: "70848185"
---
# <a name="installing-powershell-core-on-linux"></a>Installing PowerShell Core on Linux
Supports [Ubuntu 16.04][u16], [Ubuntu 18.04][u1804], [Ubuntu 18.10][u1810], [Ubuntu 19.04][u1904], [Debian 9][deb9], [CentOS 7][cos], [Red Hat Enterprise Linux (RHEL) 7][rhel7], [openSUSE 42.3][opensuse], [openSUSE Leap 15][opensuse], [Fedora 27][fedora], [Fedora 28][fedora], and [Arch Linux][arch].
For Linux distributions that aren't officially supported, you can try to install PowerShell using the [Snap package][snap]. You can also try the PowerShell binaries directly using the Linux [`tar.gz` archive][tar], but you would need to set up the necessary dependencies based on the OS in separate steps.
All packages are available on our GitHub [releases][] page. After the package is installed, run `pwsh` from a terminal.
[u16]: #ubuntu-1604
[u1804]: #ubuntu-1804
[u1810]: #ubuntu-1810
[u1904]: #ubuntu-1904
[deb9]: #debian-9
[cos]: #centos-7
[rhel7]: #red-hat-enterprise-linux-rhel-7
[opensuse]: #opensuse
[fedora]: #fedora
[arch]: #arch-linux
[snap]: #snap-package
[tar]: #binary-archives
## <a name="installing-preview-releases"></a>Installing preview releases
When installing a PowerShell Core preview release for Linux via a package repository, the package name changes from `powershell` to `powershell-preview`.
Installing via direct download doesn't change, other than the file name.
The following table lists the commands to install the stable and preview packages using the various package managers:
|Distribution(s)|Stable command|Preview command|
|---------------|---------------|-----------------|
| Ubuntu, Debian |`sudo apt-get install -y powershell`| `sudo apt-get install -y powershell-preview`|
| CentOS, RedHat |`sudo yum install -y powershell` | `sudo yum install -y powershell-preview`|
| Fedora |`sudo dnf install -y powershell` | `sudo dnf install -y powershell-preview`|
## <a name="ubuntu-1604"></a>Ubuntu 16.04
### <a name="installation-via-package-repository---ubuntu-1604"></a>Installation via package repository - Ubuntu 16.04
PowerShell Core for Linux is published to package repositories for easy installation and updates.
This is the preferred method:
```sh
# Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb
# Update the list of products
sudo apt-get update
# Install PowerShell
sudo apt-get install -y powershell
# Start PowerShell
pwsh
```
As superuser, register the Microsoft repository once. After registration, you can update PowerShell with `sudo apt-get upgrade powershell`.
### <a name="installation-via-direct-download---ubuntu-1604"></a>Installation via direct download - Ubuntu 16.04
Download the Debian package `powershell_6.2.0-1.ubuntu.16.04_amd64.deb` from the [releases][] page onto the Ubuntu machine.
Then, in the terminal, execute the following commands:
```sh
sudo dpkg -i powershell_6.2.0-1.ubuntu.16.04_amd64.deb
sudo apt-get install -f
```
> [!NOTE]
> The `dpkg -i` command fails on unmet dependencies. The next command, `apt-get install -f`, resolves these issues and then finishes configuring the PowerShell package.
### <a name="uninstallation---ubuntu-1604"></a>Uninstallation - Ubuntu 16.04
```sh
sudo apt-get remove powershell
```
## <a name="ubuntu-1804"></a>Ubuntu 18.04
### <a name="installation-via-package-repository---ubuntu-1804"></a>Installation via package repository - Ubuntu 18.04
PowerShell Core for Linux is published to package repositories for easy installation and updates.
This is the preferred method:
```sh
# Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb
# Update the list of products
sudo apt-get update
# Enable the "universe" repositories
sudo add-apt-repository universe
# Install PowerShell
sudo apt-get install -y powershell
# Start PowerShell
pwsh
```
As superuser, register the Microsoft repository once. After registration, you can update PowerShell with `sudo apt-get upgrade powershell`.
### <a name="installation-via-direct-download---ubuntu-1804"></a>Installation via direct download - Ubuntu 18.04
Download the Debian package `powershell_6.2.0-1.ubuntu.18.04_amd64.deb` from the [releases][] page onto the Ubuntu machine.
Then, in the terminal, execute the following commands:
```sh
sudo dpkg -i powershell_6.2.0-1.ubuntu.18.04_amd64.deb
sudo apt-get install -f
```
> [!NOTE]
> The `dpkg -i` command fails on unmet dependencies. The next command, `apt-get install -f`, resolves these issues and then finishes configuring the PowerShell package.
### <a name="uninstallation---ubuntu-1804"></a>Uninstallation - Ubuntu 18.04
```sh
sudo apt-get remove powershell
```
## <a name="ubuntu-1810"></a>Ubuntu 18.10
Installation is supported via `snapd`. For instructions, see [Snap package][snap].
> [!NOTE]
> Ubuntu 18.10 is an [interim release](https://www.ubuntu.com/about/release-cycle) that's [community supported](../powershell-support-lifecycle.md).
## <a name="ubuntu-1904"></a>Ubuntu 19.04
Installation is supported via `snapd`. For instructions, see [Snap package][snap].
> [!NOTE]
> Ubuntu 19.04 is an [interim release](https://www.ubuntu.com/about/release-cycle) that's [community supported](../powershell-support-lifecycle.md).
## <a name="debian-8"></a>Debian 8
### <a name="installation-via-package-repository---debian-8"></a>Installation via package repository - Debian 8
PowerShell Core for Linux is published to package repositories for easy installation and updates.
This is the preferred method:
```sh
# Install system components
sudo apt-get update
sudo apt-get install -y curl apt-transport-https
# Import the public repository GPG keys
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# Register the Microsoft Product feed
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-jessie-prod jessie main" > /etc/apt/sources.list.d/microsoft.list'
# Update the list of products
sudo apt-get update
# Install PowerShell
sudo apt-get install -y powershell
# Start PowerShell
pwsh
```
As superuser, register the Microsoft repository once. After registration, you can update PowerShell with `sudo apt-get upgrade powershell`.
## <a name="debian-9"></a>Debian 9
### <a name="installation-via-package-repository---debian-9"></a>Installation via package repository - Debian 9
PowerShell Core for Linux is published to package repositories for easy installation and updates.
This is the preferred method:
```sh
# Install system components
sudo apt-get update
sudo apt-get install -y curl gnupg apt-transport-https
# Import the public repository GPG keys
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# Register the Microsoft Product feed
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" > /etc/apt/sources.list.d/microsoft.list'
# Update the list of products
sudo apt-get update
# Install PowerShell
sudo apt-get install -y powershell
# Start PowerShell
pwsh
```
As superuser, register the Microsoft repository once. After registration, you can update PowerShell with `sudo apt-get upgrade powershell`.
### <a name="installation-via-direct-download---debian-9"></a>Installation via direct download - Debian 9
Download the Debian package `powershell_6.2.0-1.debian.9_amd64.deb` from the [releases][] page onto the Debian machine.
Then, in the terminal, execute the following commands:
```sh
sudo dpkg -i powershell_6.2.0-1.debian.9_amd64.deb
sudo apt-get install -f
```
### <a name="uninstallation---debian-9"></a>Uninstallation - Debian 9
```sh
sudo apt-get remove powershell
```
## <a name="centos-7"></a>CentOS 7
> [!NOTE]
> This package works on Oracle Linux 7.
### <a name="installation-via-package-repository-preferred---centos-7"></a>Installation via package repository (preferred) - CentOS 7
PowerShell Core for Linux is published to official Microsoft repositories for easy installation and updates.
```sh
# Register the Microsoft RedHat repository
curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
# Install PowerShell
sudo yum install -y powershell
# Start PowerShell
pwsh
```
As superuser, register the Microsoft repository once. After registration, you can update PowerShell with `sudo yum update powershell`.
### <a name="installation-via-direct-download---centos-7"></a>Installation via direct download - CentOS 7
Using [CentOS 7][], download the RPM package `powershell-6.2.0-1.rhel.7.x86_64.rpm` from the [releases][] page onto the CentOS machine.
Then, in the terminal, execute the following commands:
```sh
sudo yum install powershell-6.2.0-1.rhel.7.x86_64.rpm
```
The RPM can also be installed without the intermediate step of downloading it:
```sh
sudo yum install https://github.com/PowerShell/PowerShell/releases/download/v6.2.0/powershell-6.2.0-1.rhel.7.x86_64.rpm
```
### <a name="uninstallation---centos-7"></a>Eltávolítás – CentOS 7
```sh
sudo yum remove powershell
```
[CentOS 7]: https://www.centos.org/download/
## <a name="red-hat-enterprise-linux-rhel-7"></a>Red Hat Enterprise Linux (RHEL) 7
### <a name="installation-via-package-repository-preferred---red-hat-enterprise-linux-rhel-7"></a>Telepítés a Package repository használatával (előnyben részesített) – Red Hat Enterprise Linux (RHEL) 7
A Linux rendszerhez készült PowerShell Core az egyszerű telepítés és frissítés érdekében közzé van téve a hivatalos Microsoft-Tárházak számára.
```sh
# Register the Microsoft RedHat repository
curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
# Install PowerShell
sudo yum install -y powershell
# Start PowerShell
pwsh
```
Rendszergazdaként csak egyszer regisztrálja a Microsoft-tárházat. A regisztrációt követően frissítheti a PowerShellt a `sudo yum update powershell`használatával.
### <a name="installation-via-direct-download---red-hat-enterprise-linux-rhel-7"></a>Telepítés közvetlen letöltésen keresztül – Red Hat Enterprise Linux (RHEL) 7
Töltse le az RPM `powershell-6.2.0-1.rhel.7.x86_64.rpm` -csomagot a [releases][] lapról a Red Hat Enterprise Linux gépre.
Ezután a terminálon hajtsa végre a következő parancsokat:
```sh
sudo yum install powershell-6.2.0-1.rhel.7.x86_64.rpm
```
Az RPM a letöltés közbenső lépése nélkül is telepíthető:
```sh
sudo yum install https://github.com/PowerShell/PowerShell/releases/download/v6.2.0/powershell-6.2.0-1.rhel.7.x86_64.rpm
```
### <a name="uninstallation---red-hat-enterprise-linux-rhel-7"></a>Eltávolítás – Red Hat Enterprise Linux (RHEL) 7
```sh
sudo yum remove powershell
```
## <a name="opensuse"></a>openSUSE
### <a name="installation---opensuse-423"></a>Telepítés – openSUSE 42,3
```sh
# Install dependencies
zypper update && zypper --non-interactive install curl tar libicu52_1
# Download the powershell '.tar.gz' archive
curl -L https://github.com/PowerShell/PowerShell/releases/download/v6.2.0/powershell-6.2.0-linux-x64.tar.gz -o /tmp/powershell.tar.gz
# Create the target folder where powershell will be placed
mkdir -p /opt/microsoft/powershell/6.2.0
# Expand powershell to the target folder
tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/6.2.0
# Set execute permissions
chmod +x /opt/microsoft/powershell/6.2.0/pwsh
# Create the symbolic link that points to pwsh
ln -s /opt/microsoft/powershell/6.2.0/pwsh /usr/bin/pwsh
# Start PowerShell
pwsh
```
### <a name="installation---opensuse-leap-15"></a>Telepítés – openSUSE LEAP 15
```sh
# Install dependencies
zypper update && zypper --non-interactive install curl tar gzip libopenssl1_0_0 libicu60_2
# Download the powershell '.tar.gz' archive
curl -L https://github.com/PowerShell/PowerShell/releases/download/v6.2.0/powershell-6.2.0-linux-x64.tar.gz -o /tmp/powershell.tar.gz
# Create the target folder where powershell will be placed
mkdir -p /opt/microsoft/powershell/6.2.0
# Expand powershell to the target folder
tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/6.2.0
# Set execute permissions
chmod +x /opt/microsoft/powershell/6.2.0/pwsh
# Create the symbolic link that points to pwsh
ln -s /opt/microsoft/powershell/6.2.0/pwsh /usr/bin/pwsh
# Start PowerShell
pwsh
```
### <a name="uninstallation---opensuse-423-opensuse-leap-15"></a>Eltávolítás – openSUSE 42,3, openSUSE – 15. Ugrás
```sh
rm -rf /usr/bin/pwsh /opt/microsoft/powershell
```
## <a name="fedora"></a>Fedora
> [!NOTE]
> A Fedora 28 csak a PowerShell Core 6,1-es és újabb verzióiban támogatott.
### <a name="installation-via-package-repository-preferred---fedora-27-fedora-28"></a>Telepítés a Package repository használatával (preferált) – Fedora 27, Fedora 28
A Linux rendszerhez készült PowerShell Core az egyszerű telepítés és frissítés érdekében közzé van téve a hivatalos Microsoft-Tárházak számára.
```sh
# Register the Microsoft signature key
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
# Register the Microsoft RedHat repository
curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
# Update the list of products
sudo dnf update
# Install a system component
sudo dnf install compat-openssl10
# Install PowerShell
sudo dnf install -y powershell
# Start PowerShell
pwsh
```
### <a name="installation-via-direct-download---fedora-27-fedora-28"></a>Telepítés közvetlen letöltésen keresztül – Fedora 27, Fedora 28
Töltse le az RPM `powershell-6.2.0-1.rhel.7.x86_64.rpm` -csomagot a [releases][] lapról a Fedora gépre.
Ezután a terminálon hajtsa végre a következő parancsokat:
```sh
sudo dnf install compat-openssl10
sudo dnf install powershell-6.2.0-1.rhel.7.x86_64.rpm
```
Az RPM a letöltés közbenső lépése nélkül is telepíthető:
```sh
sudo dnf install compat-openssl10
sudo dnf install https://github.com/PowerShell/PowerShell/releases/download/v6.2.0/powershell-6.2.0-1.rhel.7.x86_64.rpm
```
### <a name="uninstallation---fedora-27-fedora-28"></a>Eltávolítás – Fedora 27, Fedora 28
```sh
sudo dnf remove powershell
```
## <a name="arch-linux"></a>Arch Linux
> [!NOTE]
> Az Arch-támogatás kísérleti jellegű.
A PowerShell az [Arch Linux][] User adattárból (Aur) érhető el.
* A [legújabb címkézett kiadással][arch-release] állítható össze
* A [legutóbbi véglegesítés a főkiszolgálóról][arch-git] lefordítható
* A [legújabb kiadási bináris][arch-bin] fájl használatával telepíthető.
A rendszer karbantartja a csomagokat az AUR-ban; nincs hivatalos támogatás.
A csomagok az AUR-ból való telepítésével kapcsolatos további információkért tekintse meg az [Arch Linux wiki](https://wiki.archlinux.org/index.php/Arch_User_Repository#Installing_packages) vagy a közösségi [Docker](https://github.com/PowerShell/PowerShell/blob/master/docker/community/archlinux/Dockerfile)című témakört.
[Arch Linux]: https://www.archlinux.org/download/
[arch-release]: https://aur.archlinux.org/packages/powershell/
[arch-git]: https://aur.archlinux.org/packages/powershell-git/
[arch-bin]: https://aur.archlinux.org/packages/powershell-bin/
## <a name="snap-package"></a>Csomag igazítása
### <a name="getting-snapd"></a>Beépülő modul beolvasása
`snapd`a illesztések futtatásához szükséges. [Ezeket az utasításokat követve](https://docs.snapcraft.io/core/install) ellenőrizze, hogy `snapd` telepítve van-e.
### <a name="installation-via-snap"></a>Telepítés snap használatával
A Linux rendszerhez készült PowerShell Core a könnyű telepítés és frissítés érdekében a [snap áruházban](https://snapcraft.io/store) van közzétéve.
Az előnyben részesített módszer a következő:
```sh
# Install PowerShell
sudo snap install powershell --classic
# Start PowerShell
pwsh
```
Az előzetes verzió telepítéséhez használja a következő metódust:
```sh
# Install PowerShell
sudo snap install powershell-preview --classic
# Start PowerShell
pwsh-preview
```
A telepítés után a Snap automatikusan frissül. A frissítést a vagy `sudo snap refresh powershell` `sudo snap refresh powershell-preview`a használatával aktiválhatja.
### <a name="uninstallation"></a>Uninstallation
```sh
sudo snap remove powershell
```
vagy a
```sh
sudo snap remove powershell-preview
```
## <a name="kali"></a>Kali
### <a name="installation---kali"></a>Telepítés – Kali
```sh
# Download & Install prerequisites
wget http://ftp.us.debian.org/debian/pool/main/i/icu/libicu57_57.1-6+deb9u2_amd64.deb
dpkg -i libicu57_57.1-6+deb9u2_amd64.deb
apt-get update && apt-get install -y curl gnupg apt-transport-https
# Add Microsoft public repository key to APT
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
# Add Microsoft package repository to the source list
echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" | tee /etc/apt/sources.list.d/powershell.list
# Install PowerShell package
apt-get update && apt-get install -y powershell
# Start PowerShell
pwsh
```
### <a name="uninstallation---kali"></a>Eltávolítás – Kali
```sh
# Uninstall PowerShell package
apt-get remove -y powershell
```
## <a name="raspbian"></a>Raspbian
> [!NOTE]
> A Raspbian-támogatás kísérleti jellegű.
Jelenleg a PowerShell csak a Raspbian stretch esetében támogatott.
A CoreCLR és a PowerShell Core csak a pi 2 és a PI 3 rendszerű eszközökön működik, mint például a [PI Zero](https://github.com/dotnet/coreclr/issues/10605), nem támogatott processzorral rendelkezik.
Töltse le a [Raspbian stretch](https://www.raspberrypi.org/downloads/raspbian/) -t, és kövesse a [telepítési utasításokat](https://www.raspberrypi.org/documentation/installation/installing-images/README.md) a PI-re való lekéréséhez.
### <a name="installation---raspbian"></a>Telepítés – Raspbian
```sh
###################################
# Prerequisites
# Update package lists
sudo apt-get update
# Install libunwind8 and libssl1.0
# Regex is used to ensure that we do not install libssl1.0-dev, as it is a variant that is not required
sudo apt-get install '^libssl1.0.[0-9]$' libunwind8 -y
###################################
# Download and extract PowerShell
# Grab the latest tar.gz
wget https://github.com/PowerShell/PowerShell/releases/download/v6.2.0/powershell-6.2.0-linux-arm32.tar.gz
# Make folder to put powershell
mkdir ~/powershell
# Unpack the tar.gz file
tar -xvf ./powershell-6.2.0-linux-arm32.tar.gz -C ~/powershell
# Start PowerShell
~/powershell/pwsh
```
A PowerShell indításához a `pwsh` bináris elérési út megadása nélkül is létrehozhat egy szimbolikus hivatkozást.
```sh
# Start PowerShell from bash with sudo to create a symbolic link
sudo ~/powershell/pwsh -c New-Item -ItemType SymbolicLink -Path "/usr/bin/pwsh" -Target "\$PSHOME/pwsh" -Force
# alternatively you can run following to create a symbolic link
# sudo ln -s ~/powershell/pwsh /usr/bin/pwsh
# Now to start PowerShell you can just run "pwsh"
```
### <a name="uninstallation---raspbian"></a>Eltávolítás – Raspbian
```sh
rm -rf ~/powershell
```
## <a name="binary-archives"></a>Bináris archívumok
A speciális `tar.gz` üzembe helyezési forgatókönyvek lehetővé tétele érdekében a PowerShell bináris archívumokat a linuxos platformokhoz biztosítjuk.
### <a name="dependencies"></a>Függőségek
A PowerShell hordozható bináris fájlokat hoz létre az összes Linux-disztribúcióhoz. A .NET Core-futtatókörnyezet azonban eltérő függőségeket igényel a különböző disztribúciók esetében, és a PowerShell is.
A következő diagram a .NET Core 2,0-függőségeket mutatja be, amelyek a különböző Linux-disztribúciókban hivatalosan támogatottak.
| Operációs rendszer | Függőségek |
| ------------------ | ------------ |
| Ubuntu 16.04 | libc6, libgcc1, libgssapi-krb5-2, liblttng-ust0, libstdc++6, <br> libcurl3, libunwind8, libuuid1, zlib1g, libssl 1.0.0, libicu55 |
| Ubuntu 17,10 | libc6, libgcc1, libgssapi-krb5-2, liblttng-ust0, libstdc++6, <br> libcurl3, libunwind8, libuuid1, zlib1g, libssl 1.0.0, libicu57 |
| Ubuntu 18.04 | libc6, libgcc1, libgssapi-krb5-2, liblttng-ust0, libstdc++6, <br> libcurl3, libunwind8, libuuid1, zlib1g, libssl 1.0.0, libicu60 |
| Debian 8 (Jessie) | libc6, libgcc1, libgssapi-krb5-2, liblttng-ust0, libstdc++6, <br> libcurl3, libunwind8, libuuid1, zlib1g, libssl 1.0.0, libicu52 |
| Debian 9 (stretch) | libc6, libgcc1, libgssapi-krb5-2, liblttng-ust0, libstdc++6, <br> libcurl3, libunwind8, libuuid1, zlib1g, libssl 1.0.2, libicu57 |
| CentOS 7 <br> Oracle Linux 7 <br> 7\. RHEL | libunwind, libcurl, OpenSSL-libs, libicu |
| openSUSE 42,3 | libcurl4, libopenssl1_0_0, libicu52_1 |
| openSUSE – 15. Ugrás | libcurl4, libopenssl1_0_0, libicu60_2 |
| Fedora 27 <br> Fedora 28 | libunwind, libcurl, OpenSSL-libs, libicu, kompatibilitás – openssl10 |
A nem hivatalosan támogatott Linux-disztribúciók PowerShell-bináris fájljainak telepítéséhez külön lépésben kell telepítenie a cél operációs rendszerhez szükséges függőségeket. Például az [Amazon Linux-Docker][amazon-dockerfile] először telepíti a függőségeket, majd kibontja `tar.gz` a Linux-archívumot.
[amazon-dockerfile]: https://github.com/PowerShell/PowerShell-Docker/blob/master/release/community-stable/amazonlinux/docker/Dockerfile
### <a name="installation---binary-archives"></a>Telepítés – bináris archívumok
#### <a name="linux"></a>Linux
```sh
# Download the powershell '.tar.gz' archive
curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v6.2.0/powershell-6.2.0-linux-x64.tar.gz
# Create the target folder where powershell will be placed
sudo mkdir -p /opt/microsoft/powershell/6.2.0
# Expand powershell to the target folder
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/6.2.0
# Set execute permissions
sudo chmod +x /opt/microsoft/powershell/6.2.0/pwsh
# Create the symbolic link that points to pwsh
sudo ln -s /opt/microsoft/powershell/6.2.0/pwsh /usr/bin/pwsh
```
### <a name="uninstalling-binary-archives"></a>Bináris archívumok eltávolítása
```sh
sudo rm -rf /usr/bin/pwsh /opt/microsoft/powershell
```
## <a name="paths"></a>Elérési utak
* `$PSHOME`van`/opt/microsoft/powershell/6.2.0/`
* A rendszer beolvassa a felhasználói profilokat`~/.config/powershell/profile.ps1`
* Az alapértelmezett profilok a következő címről lesznek beolvasva:`$PSHOME/profile.ps1`
* A felhasználói modulok a következő címről lesznek beolvasva:`~/.local/share/powershell/Modules`
* A megosztott modulok a következő címről lesznek beolvasva:`/usr/local/share/powershell/Modules`
* Az alapértelmezett modulok a következő címről lesznek beolvasva:`$PSHOME/Modules`
* A PSReadline előzményei a következőre lesznek rögzítve`~/.local/share/powershell/PSReadLine/ConsoleHost_history.txt`
A profilok figyelembe veszik a PowerShell gazdagép-konfigurációját, így az alapértelmezett gazdagép-specifikus profilok `Microsoft.PowerShell_profile.ps1` ugyanabban a helyen találhatók.
A PowerShell tiszteletben tartja a [XDG alap könyvtár specifikációját][xdg-bds] a Linux rendszeren.
[releases]: https://github.com/PowerShell/PowerShell/releases/latest
[xdg-bds]: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
| 37.129573 | 336 | 0.760644 | hun_Latn | 0.988161 |
501beb37e604fa05fc12c6967ab8e5da6d85384e | 1,068 | md | Markdown | README.md | willypuzzle/flask-foundation | 3d4a09470d8303905969f19a990baa0945fdf1d0 | [
"BSD-2-Clause"
] | null | null | null | README.md | willypuzzle/flask-foundation | 3d4a09470d8303905969f19a990baa0945fdf1d0 | [
"BSD-2-Clause"
] | null | null | null | README.md | willypuzzle/flask-foundation | 3d4a09470d8303905969f19a990baa0945fdf1d0 | [
"BSD-2-Clause"
] | null | null | null | # Flask Foundation
This is a Flask foundation scaffolding to help building Flask python application. It is based on [Flask Foundation](https://github.com/JackStouffer/Flask-Foundation) but modified.
It has:
* .env file support (It loads everything there is in .env file in app.config)
* wsgi support
* some helper (with log helper)
* Support for migrations see [Flask-Migrate](https://flask-migrate.readthedocs.io/en/)
In order to use this scaffolding set the .env file (you can copy one of examples)
* Use make env or make env-light to setup the enviroment.
* Use manage.wsgi in order as entry point for wsgi applications. (Note: wsgi file is configured for python3.5, add or modify the path in the file in order to use other versions of python)
* When you develop your application you can use flask run to start the development server (after you started env/bin/activate see [here](http://flask.pocoo.org/docs/0.12/installation/#virtualenv) for better explanation).
## License
Flask foundation is licensed under the BSD license. For more info see LICENSE.md
| 50.857143 | 220 | 0.774345 | eng_Latn | 0.995734 |
501c2098836821a8735c81d26a7dfc3d99f92527 | 177 | md | Markdown | README.md | berkaypehlevan/countdown-soccer | a75ed5e7f7025297e61d344c8e3d1efe89b6a856 | [
"MIT"
] | null | null | null | README.md | berkaypehlevan/countdown-soccer | a75ed5e7f7025297e61d344c8e3d1efe89b6a856 | [
"MIT"
] | 4 | 2020-10-06T18:35:22.000Z | 2022-01-22T12:40:57.000Z | README.md | berkaypehlevan/countdown-soccer | a75ed5e7f7025297e61d344c8e3d1efe89b6a856 | [
"MIT"
] | null | null | null | Game is developed with ElectronJS / jQuery / HTML5 and Materialize CSS.
This is my first game.
I hope you like it.
For Try
https://berkay-pehlevan.itch.io/countdown-soccer
| 17.7 | 71 | 0.757062 | eng_Latn | 0.981752 |
501cc893b2405bb8fafe5601d8d54d907336ada1 | 112 | md | Markdown | README.md | nhochberger/NuFi | e0872fb873de7310548d57d706804a7cbba96d09 | [
"Apache-2.0"
] | null | null | null | README.md | nhochberger/NuFi | e0872fb873de7310548d57d706804a7cbba96d09 | [
"Apache-2.0"
] | null | null | null | README.md | nhochberger/NuFi | e0872fb873de7310548d57d706804a7cbba96d09 | [
"Apache-2.0"
] | null | null | null | NuFi
====
Im Rahmen einer Studienarbeit entwickeltes Tool zum Detektieren verschiedener Objekte in Zellkernen.
| 22.4 | 100 | 0.821429 | deu_Latn | 0.999871 |
501d74369bfda27d59a237ef25291e17c96beb7d | 2,642 | md | Markdown | README.md | espidev/ProtectionStones-Minecraft | 1e2e5b125c950981f7741991965d70314adc61ef | [
"Apache-2.0"
] | 4 | 2018-09-29T02:45:21.000Z | 2019-04-20T19:01:16.000Z | README.md | espidev/ProtectionStones-Minecraft | 1e2e5b125c950981f7741991965d70314adc61ef | [
"Apache-2.0"
] | 2 | 2019-03-31T23:56:57.000Z | 2019-04-29T23:20:09.000Z | README.md | espidev/ProtectionStones-Minecraft | 1e2e5b125c950981f7741991965d70314adc61ef | [
"Apache-2.0"
] | null | null | null | 
[](https://search.maven.org/search?q=g:%22dev.espi%22%20AND%20a:%22protectionstones%22)


[Spigot](https://www.spigotmc.org/resources/protectionstones-updated-for-1-13-1-16-wg7.61797/) | [Permissions](https://github.com/espidev/ProtectionStones/wiki/Permissions) | [Commands](https://github.com/espidev/ProtectionStones/wiki/Commands) | [Configuration](https://github.com/espidev/ProtectionStones/wiki/Configuration) | [Placeholders](https://github.com/espidev/ProtectionStones/wiki/Placeholders) | [Translations](https://github.com/espidev/ProtectionStones/wiki/Translations) | [API Information](https://github.com/espidev/ProtectionStones/wiki/API) | [Javadocs](https://jdps.espi.dev/) | [Dev Builds](https://ci.espi.dev/job/ProtectionStones/)
Get support for the plugin on the M.O.S.S. Discord! https://discord.gg/cqM96tcJRx
ProtectionStones is a grief prevention and land claiming plugin.
This plugin uses a specified type of minecraft block/blocks as a protection block. When a player placed a block of that type, they are able to protect a region around them. The size of the protected region is configurable in the plugins config file. You can also set which flags players can change and also the default flags to be set when a new region is created.
View the Spigot page (with FAQ and install instructions) [here](https://www.spigotmc.org/resources/protectionstones-updated-for-1-13-1-16-wg7.61797/).
Check the [wiki](https://github.com/espidev/ProtectionStones/wiki) for plugin reference information.
### Dependencies
* ProtectionStones 2.10.2
* Spigot 1.17+
* WorldGuard 7.0.6+
* WorldEdit 7.2.6+
* Vault (Optional)
* PlaceholderAPI (Optional)
* LuckPerms (Optional)
### Building
Make sure you have the Java 16 JDK installed, as well as Maven.
```
git clone https://github.com/espidev/ProtectionStones.git
cd ProtectionStones
mvn clean install
```
Compiling ProtectionStones will also produce a jar with JavaDocs, which can be useful if you need documentation for an older version.
### Usage Statistics
<img src="https://bstats.org/signatures/bukkit/protectionstones.svg">
View full usage statistics [here](https://bstats.org/plugin/bukkit/ProtectionStones/4071).
This plugin is licensed under the **GPLv3**, as is required by Bukkit plugins.
| 58.711111 | 656 | 0.76268 | eng_Latn | 0.542071 |
501dbb82f78536548d84486eee041075ba594bc0 | 899 | md | Markdown | README.md | sundev207/budget-tracking-app | 22bcdd7ebfed44fc012b6994a2190a72ce0f36d7 | [
"Apache-2.0"
] | null | null | null | README.md | sundev207/budget-tracking-app | 22bcdd7ebfed44fc012b6994a2190a72ce0f36d7 | [
"Apache-2.0"
] | null | null | null | README.md | sundev207/budget-tracking-app | 22bcdd7ebfed44fc012b6994a2190a72ce0f36d7 | [
"Apache-2.0"
] | null | null | null | # Expenses
</a>

# Download
You can download newest APK from [releases page](https://github.com/sundev207/budget-tracking-app/releases).
Expenses is developed and maintained by [Justin Padilla](https://github.com/sundev207).
# Copyright
Copyright 2019 Justin Padilla. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 28.09375 | 108 | 0.743048 | eng_Latn | 0.981673 |
50201f1a4e3392aa6c9e3e506f80b94b237059f2 | 54 | md | Markdown | yoga-rest/readme.md | billfromannarbor/yoga | 970568b04ad7c6a10e60dc23e06cf26e52daf2fb | [
"Apache-2.0"
] | null | null | null | yoga-rest/readme.md | billfromannarbor/yoga | 970568b04ad7c6a10e60dc23e06cf26e52daf2fb | [
"Apache-2.0"
] | null | null | null | yoga-rest/readme.md | billfromannarbor/yoga | 970568b04ad7c6a10e60dc23e06cf26e52daf2fb | [
"Apache-2.0"
] | null | null | null | A simple example of a REST based service using Express | 54 | 54 | 0.833333 | eng_Latn | 0.999589 |
50208f3c7d14b914eb7d298990b1590605e261ae | 82 | md | Markdown | site/all.md | jeandeaual/partitions | 29e25aafeae55450a0ca09ff4bd04d0d8203c03d | [
"MIT"
] | null | null | null | site/all.md | jeandeaual/partitions | 29e25aafeae55450a0ca09ff4bd04d0d8203c03d | [
"MIT"
] | 13 | 2020-11-10T10:47:31.000Z | 2021-01-25T02:03:33.000Z | site/all.md | jeandeaual/partitions | 29e25aafeae55450a0ca09ff4bd04d0d8203c03d | [
"MIT"
] | null | null | null | ---
layout: timestamped
---
## All Partitions
{% include all_repositories.md %}
| 10.25 | 33 | 0.670732 | eng_Latn | 0.953759 |
502119e0d884adce91b1b4abd9a756be0d73faa9 | 295 | md | Markdown | README.md | florentpoujol/smol | 4e0e824d5f770612a638911069ebaf1c1f7b19a5 | [
"MIT"
] | null | null | null | README.md | florentpoujol/smol | 4e0e824d5f770612a638911069ebaf1c1f7b19a5 | [
"MIT"
] | null | null | null | README.md | florentpoujol/smol | 4e0e824d5f770612a638911069ebaf1c1f7b19a5 | [
"MIT"
] | null | null | null | # Smol
A pet projet similar to my older [PHP Standard components](https://github.com/florentpoujol/PHP-Standard-Components) where I try to implement various small (get it ?) and straightforward components, with an infrastructure layer to bind those togethers and create the skeleton of an app.
| 73.75 | 286 | 0.8 | eng_Latn | 0.994768 |
5021e987a5e6ddcdda1410d50652c4e827bb1b24 | 5,348 | md | Markdown | _guides/unityProjects.md | Chatham-Immersive-Media-Lab/ImmersiveMediaPocketReference | 9efe1e51f8e168bd01e40a066278b68afce60621 | [
"MIT"
] | null | null | null | _guides/unityProjects.md | Chatham-Immersive-Media-Lab/ImmersiveMediaPocketReference | 9efe1e51f8e168bd01e40a066278b68afce60621 | [
"MIT"
] | null | null | null | _guides/unityProjects.md | Chatham-Immersive-Media-Lab/ImmersiveMediaPocketReference | 9efe1e51f8e168bd01e40a066278b68afce60621 | [
"MIT"
] | null | null | null | # About Unity Projects
Unity projects are not single large files, like photoshop documents or pdfs. Instead they are folders. Inside of the folder contains all of the information that makes up the game. Some of these things are assets that are directly used, like 3D models, images, and sound files. A Unity Project is all contained in a single folder.

## Unity Scene Files
Unity still has a file with a ".unity" file extension, and you can double click and open it in Unity. It's a scene file. It's a plaintext list of all of the scene settings and gameObjces, their components, and the serialzied properties those components have. (Go ahead and open one in a text editor. It's nothing fancy.)
A unity project can have tons of scenes! Often, each distinct level of a game is seperated by a scene.


So what file do you open when you open Unity? Unity can't be open without a scene being open (basically), so you always will open a specific scene. You can change which one is open, of course, but you can't *not* have a scene open. If this happens - such as if the previous open scene is renamed and you launch the editor - you will get a new default empty scene. "Oh no! All of my stuff is lost" you might think, before looking to see if your scene is actually open.
## What Are These Folders?
There are a number of folders that will be included even when you create a completely empty project:
- Assets
- Library
- Logs
- Packages
- ProjectSettings
For the most part, the only folder you will need to be aware of outside of the unity editor is the Assets folder. All the other folders you either don't touch (Library), or you technically edit through the Unity Editor (ProjectSettings).
### Assets
Assets is where all of your *stuff* goes. See [What Are Assets](../fundamentals/what-are-assets.md) for more information.
Generally, when you edit a Unity project, you're editing something in this folder.
When you start a blank project, this folder is blank! (Not true: it has 'Scenes' folder and a default empty scene).
There is a universally common convention for organizing your assets folders into subfolders by type of asset. ie: "Scenes" "Scripts" "Models" "Materials".
#### .meta?
Unity will generate a '.meta' file for every asset. That's how it keeps track if you've made changes that need to be re-imported. Don't delete those!
> Note: If you delete a file outside of Unity, delete it's .meta file too.
### Library
Library is an automatically generated folder by Unity. It's full of meta data and temporary files that Unity generates when it "imports" assets into behind-the-scenes usable files.
You can delete the library folder if you want to! Don't! but you can. If you do, Unity will have a long loading bar when you open the project next - it will re-import every asset. This can take a long time for big projects!
> Note: You can delete it to space on your backups and archived projects, perhaps. Or when moving a project to a new computer.
> You really do **not** want to include this folder when using source control software like git.
### Logs
Generally can be ignored.
Logs are plaintext files that store information when things update and so on. It takes up very little space and is used when hunting down errors and bugs.
### Packages
We know about [Packages](packages.md). But this folder is *not* all of the files and data in your packages. Instead, it's just a single JSON file that lists what specific packages you are using. Unity figures out the rest.
You can add a package to Unity by editing this file. That's kind of silly to do.
> I do know a developer friend who would have his "starter" project packages - packages he imported into almost every project - in a copy/paste manifest.json file, which was faster than importing each one through the UI.
### Project Settings
A simple folder where project specific settings are stored.
---
## Work On A Project Without Opening Unity
> *Some days I don't even launch Unity even though I'll work on a Unity project for hours, editing things in the assets folder.*
Because assets are just files on your computer, in many cases you can work on your unity project without ever opening Unity. Editing scripts, audio, textures, sprites, models, dialogue, and more can all be done from outside of Unity. If the asset Unity uses is just loading a file, then you can just edit that file in whatever program or piece of software you prefer.
> *PS: Unity has "importers" that allow you to save your working file - like ap Photoshop .psd file - right into the assets folder. Inside of Unity it will get treated like an image, but you can still directly edit the file with all it's layers and Photoshop goodness. It has these importers for a lot of 3D modeling software files, like .blend and .maya! Very useful for non destructive workflows.*
Knowing this can help with collaboration, as you can define roles. You can have some individuals are in charge of certain assets, and others may be in charge of bringing the assets into a unity scene, and not fear anybody overwriting anybody elses work.
| 67.696203 | 468 | 0.770007 | eng_Latn | 0.999774 |
502386d7248c0db5e70f6805fa7ebb0bf7e61a1f | 803 | md | Markdown | 2 - Computer Architecture & Assembly/README.md | willg503/OSU_ShowcaseProjects | dc7099c8d1f6bb8143575413fca1dd40abddbbea | [
"MIT"
] | 1 | 2020-12-18T01:49:47.000Z | 2020-12-18T01:49:47.000Z | 2 - Computer Architecture & Assembly/README.md | willg503/portfolio | dc7099c8d1f6bb8143575413fca1dd40abddbbea | [
"MIT"
] | null | null | null | 2 - Computer Architecture & Assembly/README.md | willg503/portfolio | dc7099c8d1f6bb8143575413fca1dd40abddbbea | [
"MIT"
] | null | null | null | # Computer Architecture & Assembly
### Combination Calculator [MASM]
**Project Type :** Visual Studio 2019 x86 Build
**Include Files :** http://www.asmirvine.com/gettingStartedVS2019/index.htm#tutorial32
*__Challenges__*
* Recursive calls - Managing data & stack
* Procedures - Managing registers
*__Takeaways__*
The most important takeaway was that you need to be purposeful in the registers that
you choose to hold data. You need to be thinking ahead and understand what operations
need to be performed and what registers they may affect to make appropriate decisions.
With that, if it is data that is not needed in the next set of calculations, then you
have more freedom in choosing the register but you need to think of what data you need
returning from procedures so the program can execute.
| 40.15 | 86 | 0.790785 | eng_Latn | 0.998683 |
50244d409d6acaacc1c63d2dbe6d87f270701aa2 | 88 | md | Markdown | docs/test/README.md | lalittmohan/magento2-task-book | 51c926575d0f84b29380c310c7ab2483ccc3e889 | [
"MIT"
] | null | null | null | docs/test/README.md | lalittmohan/magento2-task-book | 51c926575d0f84b29380c310c7ab2483ccc3e889 | [
"MIT"
] | null | null | null | docs/test/README.md | lalittmohan/magento2-task-book | 51c926575d0f84b29380c310c7ab2483ccc3e889 | [
"MIT"
] | null | null | null | >Overview of the module that will be covered by tests and introduction to Magento tests. | 88 | 88 | 0.818182 | eng_Latn | 0.998187 |
50255df341ca7bd5dcdb46606088f1622e3e019e | 193 | md | Markdown | README.md | compwron/minesweeper | 360ded1588d9ecf42082775670c8ce3d234b842d | [
"MIT"
] | null | null | null | README.md | compwron/minesweeper | 360ded1588d9ecf42082775670c8ce3d234b842d | [
"MIT"
] | null | null | null | README.md | compwron/minesweeper | 360ded1588d9ecf42082775670c8ce3d234b842d | [
"MIT"
] | null | null | null | # Minesweeper
Playable from terminal:
```
$ ./bin/play.rb 2 2 1
Options:
reveal <x> <y>
flag <x> <y>
? ?
? ?
reveal 0 0
1 ?
? ?
flag 0 1
1 ?
F ?
reveal 1 0
Game over
1 B
1 1
``` | 8.772727 | 23 | 0.523316 | kor_Hang | 0.633202 |
5026036af0990464711d1e5e63dab9dbbacbba9b | 3,108 | md | Markdown | README.md | camillescott/viralnet | 99ddbc2986760389c80b51d8512e5d2228fbac42 | [
"BSD-3-Clause"
] | null | null | null | README.md | camillescott/viralnet | 99ddbc2986760389c80b51d8512e5d2228fbac42 | [
"BSD-3-Clause"
] | null | null | null | README.md | camillescott/viralnet | 99ddbc2986760389c80b51d8512e5d2228fbac42 | [
"BSD-3-Clause"
] | null | null | null | # viralnet
[](https://gitter.im/viralnet/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
### Project data
* **all_repeats.fna.gz** (sha1sum:17062595b3d81a16e26980847fe6be409a6d8cdb) : All CRISPR repeats by `TAXON_OID___SCAFFOLD_OID`
* **meta_spacers.fna.gz** (sha1sum:05419668e4a795bafbbdf780dda7462fb6f1879f) : All CRISPR repeats by `TAXON_OID:SCAFFOLD_OID:???:???`
* **spacers_vs_all_viral_sequences_combined_v2_Filtered_1Snp_100AF.lout** (sha1sum:af4263106836c6773ef7f82ace24a3a35a2d67c7) : LAST alignment between all spacers and viral contigs [1-4]
* **Table1_5_combined.txt** (sha1sum:cb2e24ccd384209f8ce4260398ff660b29c22c41) : Metadata on viral contigs and viral OTUs [1-4]
* **viral_spacers.fna** (sha1sum:fbfe6476ebf2b0339a9993df84a54a81a8299843) : All CRISPR spacers by `TAXON_OID:SCAFFOLD_OID:???:???`
* **predicted_hosts.tsv** (sha1sum:4918fd93dff34af8423e067c720f029529a4f6d1) : Host predictions for viral UViGs [1-4]
### Project notes
* Because this project makes heavy use of Jupyter notebooks, we recommend using
[`nbdime`](https://nbdime.readthedocs.io/en/stable/index.html) to make
interaction with git more convenient.
### References
[1] Paez-Espino, D., Roux, S., Chen, I.M.A., Palaniappan, K., Ratner, A., Chu, K., Huntemann, M., Reddy, T.B.K., Pons, J.C., Llabrés, M. and Eloe-Fadrosh, E.A., 2018. **IMG/VR v. 2.0: an integrated data management and analysis system for cultivated and environmental viral genomes.** *Nucleic acids research*, [DOI : 10.1093/nar/gky1127](https://doi.org/10.1093/nar/gky1127).
[2] Paez-Espino, D., Pavlopoulos, G.A., Ivanova, N.N. and Kyrpides, N.C., 2017. **Nontargeted virus sequence discovery pipeline and virus clustering for metagenomic data.** *Nature Protocols*, [DOI : 10.1038/nprot.2017.063](https://doi.org/10.1038/nprot.2017.063).
[3] Paez-Espino, D., Eloe-Fadrosh, E.A., Pavlopoulos, G.A., Thomas, A.D., Huntemann, M., Mikhailova, N., Rubin, E., Ivanova, N.N. and Kyrpides, N.C., 2016. **Uncovering Earth’s virome.** *Nature*, [DOI : 10.1038/nature19094](https://doi.org/10.1038/nature19094).
[4] Paez-Espino, D., Chen, I.M.A., Palaniappan, K., Ratner, A., Chu, K., Szeto, E., Pillay, M., Huang, J., Markowitz, V.M., Nielsen, T. and Huntemann, M., 2016. **IMG/VR: a database of cultured and uncultured DNA Viruses and retroviruses.** *Nucleic acids research*, [DOI : 10.1093/nar/gkw1030](https://doi.org/10.1093/nar/gkw1030).
[5] Chen, I.M.A., Chu, K., Palaniappan, K., Pillay, M., Ratner, A., Huang, J., Huntemann, M., Varghese, N., White, J.R., Seshadri, R. and Smirnova, T., 2018. **IMG/M v. 5.0: an integrated data management and comparative analysis system for microbial genomes and microbiomes.** *Nucleic acids research*, [DOI : 10.1093/nar/gky901](https://doi.org/10.1093/nar/gky901).
| 100.258065 | 375 | 0.676319 | eng_Latn | 0.199108 |
5026602498b1da55fa3fe502a3d013646fb67c4c | 716 | md | Markdown | README.md | chefgs/create_dummy_rpm | c89bb41b42d4d4ee01dc9f858b473f95e94dbede | [
"Apache-2.0"
] | null | null | null | README.md | chefgs/create_dummy_rpm | c89bb41b42d4d4ee01dc9f858b473f95e94dbede | [
"Apache-2.0"
] | null | null | null | README.md | chefgs/create_dummy_rpm | c89bb41b42d4d4ee01dc9f858b473f95e94dbede | [
"Apache-2.0"
] | null | null | null | # create_dummy_rpm
## Sample code for creating dummy rpm package
- Clone or download the repo<br>
- Repo sample used to create dummy rpm named as "my-monitoring agent"<br>
- Edit the .spec file in spec_file directory with your desired rpm package name.<br>
- Make sure you run the following commands in Linux terminal<br>
- Verify the server has rpm-build installed already<br>
- If rpm-build not available, then install using ` yum install rpm-build -y` <br>
- Run the command `rpmbuild -bb my-monitoring-agent.spec` to build the rpm <br>
- RPM will be created in the path `/root/rpmbuild/RPMS/noarch/my-monitoring-agent-1.0-1.noarch.rpm` <br>
- `./create_rpm.sh` does the steps mentioned above <br>
<br>
| 55.076923 | 106 | 0.74162 | eng_Latn | 0.989727 |
502711fad6f54a1abdd4f12f84ab8a46337689ae | 999 | md | Markdown | docs/misc/technos.md | Evaneos/welean-creative-reporters | 3bb938a8488f41a57550edfc5d9d677465d5d603 | [
"MIT"
] | null | null | null | docs/misc/technos.md | Evaneos/welean-creative-reporters | 3bb938a8488f41a57550edfc5d9d677465d5d603 | [
"MIT"
] | null | null | null | docs/misc/technos.md | Evaneos/welean-creative-reporters | 3bb938a8488f41a57550edfc5d9d677465d5d603 | [
"MIT"
] | null | null | null | # Technologies
## Backend
1. [NodeJS](nodejs.org)
1. [MongoDB](www.mongodb.org) -- *? + Mongoose*
1. [Redis](redis.io) -- *sessions handling*
1. [Express](http://expressjs.com/) -- *web application framework*
1. [Jade](jade-lang.com) -- *template engine*
1. [Stylus](learnboost.github.com/stylus/) -- *CSS preprocessor*
1. [Email.js](https://github.com/eleith/emailjs) -- *email server*
1. [Juice](https://github.com/LearnBoost/juice) -- *email css inliner*
1. [Moment.js](http://momentjs.com/) -- *time / date parsing*
1. [Flash](https://github.com/jaredhanson/connect-flash) -- *Messages transfer between redirects*
1. [Passport.js](http://passportjs.org/) -- *Multi-provider authentication engine*
1. [Grunt](gruntjs.com) -- *deployment scripts*
## Client
1. [Bower](https://github.com/bower/bower/) *Frontend dependencies management*
1. [Underscore](http://underscorejs.org/) *JS utilities*
1. Angular
1. [Jade-Browser](https://github.com/storify/jade-browser) - *Jade on the client as well* | 43.434783 | 97 | 0.701702 | yue_Hant | 0.179889 |
502820f03cddb0983b47e85e80ae14ccf0b01c40 | 212 | md | Markdown | content/Currently Reading List.md | ransurf/quartz | 174c514401f4265b360fb0e22449adeb462cc152 | [
"MIT"
] | null | null | null | content/Currently Reading List.md | ransurf/quartz | 174c514401f4265b360fb0e22449adeb462cc152 | [
"MIT"
] | null | null | null | content/Currently Reading List.md | ransurf/quartz | 174c514401f4265b360fb0e22449adeb462cc152 | [
"MIT"
] | null | null | null | Status:
Tags:
Links: [[{ Books MOC]]
___
# Currently Reading List
```dataview
table started as Started, finished as Finished, rating as Rating
FROM #literature/books/reading
SORT started desc
```
___
References: | 17.666667 | 64 | 0.759434 | eng_Latn | 0.876668 |
502968d57fb45c76364b4d14f0194fc519321415 | 1,766 | md | Markdown | articles/virtual-machines/linux/classic/detach-disk.md | DarryStonem/azure-docs.es-es | aa59a5fa09188f4cd2ae772e7818b708e064b1c0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2017-05-20T17:31:12.000Z | 2017-05-20T17:31:12.000Z | articles/virtual-machines/linux/classic/detach-disk.md | DarryStonem/azure-docs.es-es | aa59a5fa09188f4cd2ae772e7818b708e064b1c0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/classic/detach-disk.md | DarryStonem/azure-docs.es-es | aa59a5fa09188f4cd2ae772e7818b708e064b1c0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Desconexión de un disco de una máquina virtual de Linux en Azure | Microsoft Docs"
description: "Obtenga información acerca de cómo desconectar un disco de datos de una máquina virtual de Azure creada mediante el modelo de implementación clásica."
services: virtual-machines-linux
documentationcenter:
author: iainfoulds
manager: timlt
editor:
tags: azure-service-management
ms.assetid: 8433affa-376b-4c22-863a-40488adda486
ms.service: virtual-machines-linux
ms.workload: infrastructure-services
ms.tgt_pltfrm: vm-linux
ms.devlang: na
ms.topic: article
ms.date: 02/09/2017
ms.author: iainfou
translationtype: Human Translation
ms.sourcegitcommit: 356de369ec5409e8e6e51a286a20af70a9420193
ms.openlocfilehash: ac982bc153d523de29940af9b9e2609a39cd48a6
ms.lasthandoff: 03/27/2017
---
# <a name="how-to-detach-a-disk-from-a-linux-virtual-machine"></a>Desacoplamiento de un disco de una máquina virtual de Linux
> [!IMPORTANT]
> Azure tiene dos modelos de implementación diferentes para crear recursos y trabajar con ellos: [Resource Manager y el clásico](../../../resource-manager-deployment-model.md). En este artículo se trata el modelo de implementación clásico. Microsoft recomienda que las implementaciones más recientes usen el modelo del Administrador de recursos.
[!INCLUDE [howto-detach-disk-windows-linux](../../../../includes/howto-detach-disk-linux.md)]
## <a name="next-steps"></a>Pasos siguientes
Puede leer más sobre el uso de la máquina virtual con Linux en los siguientes artículos:
* [Acoplamiento de un disco de datos a una máquina virtual Linux](attach-disk.md)
* [Comandos CLI de Azure en modo de Administración de servicios de Azure (asm)](https://docs.microsoft.com/cli/azure/get-started-with-az-cli2)
| 46.473684 | 345 | 0.781427 | spa_Latn | 0.877801 |
5029c0db1b74189f1b6aad6dc8323bfb682bc10e | 65 | md | Markdown | README.md | sarang2dan/sharable-latch | 67e1d3f15ad42b6582ef214ae9dea2149ef24011 | [
"MIT"
] | null | null | null | README.md | sarang2dan/sharable-latch | 67e1d3f15ad42b6582ef214ae9dea2149ef24011 | [
"MIT"
] | null | null | null | README.md | sarang2dan/sharable-latch | 67e1d3f15ad42b6582ef214ae9dea2149ef24011 | [
"MIT"
] | null | null | null | # sharable-latch
Latch that supports sharable and exclusive mode
| 21.666667 | 47 | 0.830769 | eng_Latn | 0.999681 |
502a47b368a4639fed4b046ce0091f2fcf32fea0 | 124,251 | md | Markdown | doc/src/part11/README.md | intel/cloud-client-ai-service-framework | 01676b08878f7a58201854aedb181134eafef7a2 | [
"Apache-2.0"
] | 3 | 2022-03-25T17:28:53.000Z | 2022-03-29T03:30:25.000Z | doc/src/part11/README.md | intel/cloud-client-ai-service-framework | 01676b08878f7a58201854aedb181134eafef7a2 | [
"Apache-2.0"
] | null | null | null | doc/src/part11/README.md | intel/cloud-client-ai-service-framework | 01676b08878f7a58201854aedb181134eafef7a2 | [
"Apache-2.0"
] | 1 | 2022-03-27T12:44:19.000Z | 2022-03-27T12:44:19.000Z | # 11. APIs Reference List
# 11.1 FCGI APIs Manual {#11.1}
CCAI provides many FCGI APIs. They are named fcgi_xxxx. Each fcgi API is an fcgi server running in the background. Client apps communicate with the fcgi server by using the http post protocol.

These fcgi APIs will do AI for different cases, such as classification, face detection, OCR, TTS, or ASR. Please refer to the following API list to understand the specific API.
Some fcgi APIs have two working modes. One mode is doing inference locally in the fcgi_xxxx server; the other one is proxy mode. In proxy mode, the fcgi_xxxx server forwards requests from client apps to a remote server (such as the QQ server or Tencent server), and the remote server does the inference. Which mode the fcgi_xxxx server works in is decided by the configuration file (policy_setting.cfg) or the result of the policy calculation.
The following picture shows two working modes.

Some FCGI APIs are implemented in two languages, C++ and python. So some APIs have two types of API: python API and C++ API. Both the python API and the C++ API provide the same functionality and parameters. The only difference is that they have different http addresses. So client apps can get the same inference result from either the FCGI C++ API or the python API by using different addresses.
### 11.1.1 TTS API usage {#11.1.1}
fcgi_tts API is used for text-to-speech. This is an end-to-end TTS API. Client app inputs one text sentence, fcgi_tts outputs the wave data of the text sentence. The wave data is the sound data. There are two output paths for the generated wave data. The first path is that the wave data is written to a wav file. The second path is that the wave data is sent to the speakers directly, so you can hear the sentence from the speaker devices.
There are two working modes for fcgi_tts server, local mode and proxy mode.
Client app uses http protocol to communicate with fcgi_tts server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about request parameters and response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_py_tts'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|----------------------------------------|-------------------|-----------------------------------------------------------------|
| 'aht'          | Int      | [-24, 24]                              | 0                 | increase(+)/decrease(-) amount of semitone for generated speech |
| 'apc' | int | [0,100] | 58 | Set the speaker's timbre |
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'format' | Int | Positive integer | 2 | 1:PCM 2:WAV 3:MP3 |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'speaker' | Int | Positive integer | 1 | 1: male voice 5: female voice |
| 'speed' | Int | [50-200] | 100 | The speed of voice |
| 'text' | string | Utf-8 encoding, No more than 150 bytes | Hello world | The input text sentence |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'volum' | Int | [-10, 10] | 0 | volume |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "text" field needs to be set; other fields are ignored.

In proxy mode (doing inference on a remote server), all fields need to be set.
In proxy mode, 'appid' and 'appkey' are the necessary parameters in order to get the right results from the remote server (www.ai.qq.com). You should register on www.ai.qq.com and get 'appid' and 'appkey'. Please refer to *<https://ai.qq.com/doc/aaitts.shtml>* to find out how to apply for these fields and how to write a post request for the remote server.
b) Response
The response of post request is json format, for example:
{
"ret": 0, //return value: 0 (success) or 1(failure)
"msg": "ok", // request result: "ok" or "inference failed"
"data": { //inference result
"format": 2, // the format of voice : 1(pcm) 2(wav) 3(mp3)
"speech": "UklGRjL4Aw..." // wave data of input sentence
"md5sum": "3bae7bf99ad32bc2880ef1938ba19590" //Base64 encoding of synthesized
speech
},
"time": 7.283 //fcgi_tts processing time
}
If the speaker devices are configured correctly, you can also hear the sentence
directly from the speakers.
One example of a client app for fcgi_tts API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_tts_py.py*".
c) Notice
Currently, this model only supports English text, not Chinese text.
It provides only python API.
To configure the speaker devices, you need to enable the pulseaudio and health-monitor services by following these steps:
(1) On the host PC, install the pulseaudio package if this package hasn't been
installed.
For example:
#sudo apt-get install pulseaudio
(2) Enable the TCP protocol of the pulseaudio.
Edit the configuration file, for example:
#sudo vim /etc/pulse/default.pa
Find out the following tcp configuration:
#load-module module-native-protocol-tcp
Uncomment the tcp configuration (remove the "#"), and add authentication:
load-module module-native-protocol-tcp auth-anonymous=1
Save and quit the configuration file.
(3) Restart the pulseaudio service. For example:
#sudo systemctl restart pulseaudio
or kill the pulseaudio thread
#sudo kill -9 xxxx(pulseaudio thread number)
(4) Run the health-monitor service on the host PC if it is not already running.

This service is used to monitor the CCAI container.
### 11.1.2 ASR API usage (offline ASR case) {#11.1.2}
fcgi_asr API is used for Automatic Speech Recognition (ASR). This is an end-to-end speech recognition API. It includes several libraries released by the OpenVINO™ toolkit. These libraries perform feature extraction, OpenVINO™-based neural-network speech recognition, and decoding to produce text from scores. All these libraries provide an end-to-end pipeline converting speech to text. Client app inputs an utterance (speech), fcgi_asr outputs the text directly expressed by this utterance.
Same as fcgi_tts, fcgi_asr also has two working modes, local mode and proxy mode.
Client app uses http protocol to communicate with fcgi_asr server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about request parameters and response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_asr'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|----------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'format' | Int | Positive integer | 2 | 1:PCM 2:WAV 3:AMR 4:SILK |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'speech' | string | Utterance data. Usually PCM data | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
In local mode (doing inference locally), only the "speech" field needs to be set.

In proxy mode (doing inference on a remote server), all fields need to be set.
In proxy mode, 'appid' and 'appkey' are the necessary parameters in order to get the right results from the remote server (www.ai.qq.com). You should register on www.ai.qq.com and get 'appid' and 'appkey'. Please refer to *<https://ai.qq.com/doc/aaiasr.shtml>* to find out how to apply for these fields and how to write a post request for the remote server.
b) Response
The response of post request is json format, for example:
{
"ret":0, //return value: 0 (success) or 1(failure)
"msg":"ok", // request result: "ok" or "inference error"
"data":{ //inference result
"text":HOW ARE YOU DOING //text
},
"time":0.695 //fcgi_asr processing time
}
One example of a client app for fcgi_asr API is "*api-gateway/cgi-bin/test-script/test-demo/post_local_asr_c.py*".
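A minimal local-mode client sketch is shown below; the WAV file name is hypothetical, and the 'speech' field is base64 encoded as the parameter table requires:

```python
import base64
import requests

url = 'http://localhost:8080/cgi-bin/fcgi_asr'

with open('utterance.wav', 'rb') as f:  # hypothetical input utterance file
    speech_data = f.read()

# local mode: only the base64-encoded 'speech' field is required
params = {'speech': base64.b64encode(speech_data)}
result = requests.post(url, data=params).json()
if result['ret'] == 0:
    print('recognized text:', result['data']['text'])
else:
    print('request failed:', result['msg'])
```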
c) Notice
Currently, this model only supports English utterances, not Mandarin.

It provides two types of API: C++ API and python API.
### 11.1.3 API in Speech sample {#11.1.3}
fcgi_speech API is used for speech inference. The acoustic model is trained on Kaldi* neural networks. The input speech data must be speech feature vectors. The feature vector is in ARK format (an ARK file - the result of feature extraction). The inference result is score data, which is also in ARK format.
Client app uses http protocol to communicate with fcgi_speech server.
The sample of sending request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about request parameters and response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_speech'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|-----------------------------------------------------------------------------------------|--------------------------------------|
| 'stage' | string | {'RAW_FORMAT_INIT', 'IR_FORMAT_INIT_NETWORK', 'IR_FORMAT_INIT_EXENETWORK', 'INFERENCE'} | Only have 4 items |
| 'model' | string | Example: './models/wsj_dnn5b.xml' | IR format file or no IR format model |
| 'batch' | int | Positive integer. Example: 1 or 8 | Set based on the real case |
| 'device' | string | Example: 'GNA_AUTO' or 'CPU' | Select the inference device |
| 'scale_factor' | int | Positive integer Example: 2048 | Used for GNA HW |
| 'speech' | string | Speech input vector data | Must be encoded by base64 method |
| 'time_stamp' | int | Positive integer | Time stamp for this request. |
The fcgi_speech uses a finite state machine to record its behavior. Client apps should use different 'stage' requests to trigger transitions of the fcgi_speech behavior.
For an IR format model, the sample post request sequence is:
The First post request is init request:
['stage'] = 'IR_FORMAT_INIT_NETWORK'
['model'] = './models/wsj_dnn5b.xml'
['batch'] = 8
The second post request is also init request:
['stage'] = 'IR_FORMAT_INIT_EXENETWORK'
['model'] = './models/wsj_dnn5b.xml'
['device'] = 'GNA_AUTO'
The last post request is for inference:
['stage'] = 'INFERENCE'
['model'] = './models/wsj_dnn5b.xml'
['speech'] = base64_data
For a raw (non-IR) format model, the sample post request sequence is (two requests only):
The First post request is init request:
['stage'] = 'RAW_FORMAT_INIT'
['model'] = './models/ELE_M.raw'
['batch'] = 1
['device'] = 'GNA_AUTO'
['scale_factor'] = 2048
The second post request which is also the last request is for inference:
['stage'] = 'INFERENCE'
['model'] = './models/ELE_M.raw'
['speech'] = base64_data
b) Response
The response of post request is json format, for example:
{
"ret":0, //return value: 0 (success) or 1(failure)
"msg":"ok", // request result: "ok" or "inference error"
"data":{ ... // inference result
...... // response data
},
"time":0.344222 //fcgi_speech processing time
}
One example of a client app for fcgi_speech API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_speech_c.py*".
c) Notice
The fcgi_speech API doesn't have proxy mode. That means this API doesn't support doing inference on remote servers.
This API can use GNA_HW as an inference device.
It provides only C++ API.
### 11.1.4 Policy API usage {#11.1.4}
fcgi_policy API is used to select inference devices or the working mode (local mode or proxy mode) for fcgi APIs.
Client app uses http protocol to communicate with fcgi_policy server.
The sample of sending request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about request parameters and response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_policy'
\- post parameter: this parameter should include these fields.
| **Field name** | **Type** | **Value** | **comments** |
| -------------- | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 'device' | string | CPU, GPU, GNA_AUTO, GNA_HW, GNA_SW | This field is used to set inference devices. Such as "GPU", "CPU" etc. |
| 'local' | string | "1" - do inference locally "0" - do inference on a remote server | Select working mode of fcgi server: local mode or proxy mode |
b) Response
The response of the post request is a string, which indicates whether the
request is processed correctly.
*"successfully set the policy daemon" // OK*
*"failed to set policy daemon" // Fail*
c) Notice
The policy daemon must be run, or else calling this API will fail.
Run this policy API before running any other case if you want to select an inference device or change the working mode of fcgi APIs.
This setting is a global setting. That means the setting will impact the
following cases.
It provides two types of APIs: C++ and python API.
### 11.1.5 Classification API usage {#11.1.5}
fcgi_classification API is used to run inference on an image, and produce the classification information for objects in the image. Client app inputs one picture(image), fcgi_classification outputs the object information, such as what the object is, and the coordinates of the object in the picture.
Same as fcgi_tts, fcgi_classification also has two working modes, local mode and proxy mode.
Client app uses http protocol to communicate with fcgi_classification server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about request parameters and response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_classfication'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.

In proxy mode (doing inference on a remote server), all fields need to be set.
In proxy mode, 'appid' and 'appkey' are the necessary parameters in order to get the right results from the remote server (www.ai.qq.com). You should register on www.ai.qq.com and get 'appid' and 'appkey'. Please refer to *<https://ai.qq.com/doc/imagetag.shtml>* to find out how to apply for these fields and how to write a post request for the remote server.
b) Response
The response of post request is json format, for example:
{
"ret":0,
"msg":"ok",
"data":{
"tag_list":[
{"tag_name":'sandal',"tag_confidence":0.786503}
]
},
"time":0.380
}
One example of a client app for fcgi_classification API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_classification_c.py* ".
c) Notice
It provides two types of API: C++ API and python API.
### 11.1.6 Face Detection API usage {#11.1.6}
fcgi_face_detection API is used to run inference on an image, and find out human faces in the image. Client app inputs one picture(image), fcgi_face_detection
outputs the face information, such as how many human faces, and the bounding box
for each face in the picture.
Same as fcgi_tts, fcgi_face_detection also has two working modes, local mode and
proxy mode.
Client app uses http protocol to communicate with fcgi_face_detection server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_face_detection'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
In proxy mode (doing inference on a remote server), all fields need to be set.
In proxy mode, 'appid' and 'appkey' are necessary parameters for getting the right results from the remote server (www.ai.qq.com). You should register on www.ai.qq.com to get 'appid' and 'appkey'. Please refer to *<https://ai.qq.com/doc/detectface.shtml>* to find out how to apply for these fields and how to write a post request for the remote server.
b) Response
The response of the post request is in JSON format, for example:
{
"ret":0,
"msg":"ok",
"data":{
"face_list":[
{
"x1":655,
"y1":124,
"x2":783,
"y2":304
},
{
"x1":68,
"y1":149,
"x2":267,
"y2":367
} ]
},
"time":0.305
}
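For illustration, a minimal sketch of extracting the bounding boxes from such a response; `response` is assumed to be the object returned by *requests.post()* above:
```
# Sketch: read the bounding boxes out of the JSON response shown above.
import json

result = json.loads(response.text)
if result['ret'] == 0:
    for face in result['data']['face_list']:
        print('face at (%d, %d)-(%d, %d)'
              % (face['x1'], face['y1'], face['x2'], face['y2']))
```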
One example of a client app for fcgi_face_detection API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_face_detection_c.py* ".
c) Notice
It provides both C++ and Python APIs.
### 11.1.7 Facial Landmark API usage {#11.1.7}
fcgi_facial_landmark API is used to run inference on an image, and print human facial landmarks in the image. Client app inputs one picture(image), fcgi_facial_landmark outputs the coordinates of facial landmark points.
Same as fcgi_tts, fcgi_facial_landmark also has two working modes, local mode and proxy mode.
Client app uses http protocol to communicate with fcgi_facial_landmark server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_facial_landmark'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
In proxy mode (doing inference on a remote server), all fields need to be set.
In proxy mode, 'appid' and 'appkey' are necessary parameters for getting the right results from the remote server (www.ai.qq.com). You should register on www.ai.qq.com to get 'appid' and 'appkey'. Please refer to *<https://ai.qq.com/doc/detectface.shtml>* to find out how to apply for these fields and how to write a post request for the remote server.
b) Response
The response of the post request is in JSON format, for example:
{
"ret":0,
"msg":"ok",
"data":{
"image_width":916.000000,
"image_height":502.000000,
"face_shape_list":[
{"x":684.691284,
"y":198.765793},
{"x":664.316528,
"y":195.681824},
...
{"x":241.314194,
"y":211.847031} ]
},
"time":0.623
}
One example of a client app for fcgi_facial_landmark API is "*api-gateway/cgi-bin/test-script/test-demo/post_local_facial_landmark_c.py* ".
c) Notice
It provides both C++ and Python APIs.
### 11.1.8 OCR API usage {#11.1.8}
fcgi_ocr API is used to run inference on an image, and recognize handwritten or printed text from an image. Client app inputs one picture(image), fcgi_ocr outputs the text information in the picture. The information includes text coordinations and text confidence.
Same as fcgi_tts, fcgi_ocr also has two working modes, local mode and proxy
mode.
Client app uses http protocol to communicate with fcgi_ocr server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_ocr'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
In proxy mode (doing inference on a remote server), all fields need to be set.
In proxy mode, 'appid' and 'appkey' are necessary parameters for getting the right results from the remote server (www.ai.qq.com). You should register on www.ai.qq.com to get 'appid' and 'appkey'. Please refer to *<https://ai.qq.com/doc/imgtotext.shtml>* to find out how to apply for these fields and how to write a post request for the remote server.
b) Response
The response of the post request is in JSON format, for example:
{
"ret":0,
"msg":"ok",
"data":{
"item_list":[
{
"itemcoord":[
{
"x":161.903748,
"y":91.755684,
"width":141.737503,
"height":81.645004
}
],
"words":[
{
"character":i,
"confidence":0.999999
},
{
"character":n,
"confidence":0.999998
},
{
"character":t,
"confidence":0.621934
},
{
"character":e,
"confidence":0.999999
},
{
"character":l,
"confidence":0.999995
} ],
"itemstring":intel
},
{
"itemcoord":[
{
"x":205.378326,
"y":153.429291,
"width":175.314835,
"height":77.421722
}
],
"words":[
{
"character":i,
"confidence":1.000000
},
{
"character":n,
"confidence":1.000000
},
{
"character":s,
"confidence":1.000000
},
{
"character":i,
"confidence":0.776524
},
{
"character":d,
"confidence":1.000000
},
{
"character":e,
"confidence":1.000000
} ],
"itemstring":inside
} ]
},
"time":1.986
}
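For illustration, a minimal sketch of collecting the recognized strings from such a response; `response` is assumed to be the object returned by *requests.post()* above:
```
# Sketch: collect the recognized strings from the OCR response shown above.
import json

result = json.loads(response.text)
if result['ret'] == 0:
    for item in result['data']['item_list']:
        box = item['itemcoord'][0]
        print('%s at (%.0f, %.0f)' % (item['itemstring'], box['x'], box['y']))
```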
One example of a client app for fcgi_ocr API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_ocr_c.py* ".
c) Notice
It provides both C++ and Python APIs.
### 11.1.9 formula API usage {#11.1.9}
fcgi_formula API is used to run inference on an image. It can recognize formulas and output formulas in latex format. Client app inputs one picture(image), fcgi_formula outputs the formula in latex format.
fcgi_formula has only one working mode, local mode.
Client app uses http protocol to communicate with fcgi_formula server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_py_formula'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
b) Response
The response of the post request is in JSON format, for example:
```
{'ret': 0, 'msg': 'ok', 'data': '1 1 1 v v ^ { 1 } + 7 . 7 9 o ^ { 1 } - o - 0 . 9 0 f ^ { 7 } s ^ { 7 }', 'time': 0.518}
```
One example of a client app for the fcgi_formula API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_formula_py.py* ".
c) Notice
It provides only a Python API.
### 11.1.10 handwritten API usage {#11.1.10}
fcgi_handwritten API is used to run inference on an image, and recognize handwritten Chinese from an image. Client app inputs one picture(image), fcgi_handwritten outputs the text information in the picture.
fcgi_handwritten has one working mode, local mode.
Client app uses http protocol to communicate with fcgi_handwritten server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_py_handwritten'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
b) Response
The response of the post request is in JSON format, for example:
{'ret': 0, 'msg': 'ok', 'data': '的人不一了是他有为在责新中任自之我们', 'time': 0.405}
One example of a client app for fcgi_handwritten API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_handwritten_py.py* ".
c) Notice
It provides only a Python API.
### 11.1.11 ppocr API usage {#11.1.11}
fcgi_ppocr API is used to run inference on an image, and recognize printed text from an image. Client app inputs one picture(image), fcgi_ppocr outputs the text information in the picture.
fcgi_ppocr has one working mode, local mode.
Client app uses http protocol to communicate with fcgi_ppocr server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_py_ppocr'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
b) Response
The response of the post request is in JSON format, for example (OCR result with Chinese characters):
{'ret': 0, 'msg': 'ok', 'data': ' 的人不一了是他有为在责新中任自之我们\n 的人不一了是他有为在责新中任自之我们\n 4 7 4 W ^ { 1 } + 7 . 1 9 o ^ { 4 } - 6 - 0 . 9 6 L ^ { 1 } U\n 区是我国载人航天工程立项实施以来的第19次飞行任务,也是空间站阶段的首次载\n 人飞行任务。飞船入轨后,将按照预定程序,与大和核心舱进行自主快速交会对接\n 自合体飞行期间,航大员将进驻大和核心能,完成为期3个月的在轨驻留,开展机械\n 操作、出舱活动等工作,验证航大员长期在轨驻留、再生生保等一系列关键技术\n 自前,大和核心舱与大舟二号的组合体运行在�?90km的近圆对接轨道,状态良\n 好,满足与神舟十二号交会对接的任务要求和航大员进驻条件\n 震撼!神舟十二号与地球同框\n 神舟十二号载人飞船升空过程中,舱�?名航天员状态良好,推进舱外摄像机拍摄全\n 了神舟十二号与地球同框震想面面\n 自关报道:神舟十二号载人飞船飞行乘组确定!他们在太空将怎样生活3个月\n', 'time': 0.308}
One example of a client app for fcgi_ppocr API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_ppocr_py.py* ".
c) Notice
It provides only a Python API.
### 11.1.12 segmentation API usage {#11.1.12}
fcgi_segmentation API is used to run inference on an image, and recognize semantic segmentation from an image. Client app inputs one picture(image), fcgi_segmentation outputs a semantic segmentation picture.
fcgi_segmentation has one working mode, local mode.
Client app uses http protocol to communicate with fcgi_segmentation server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_segmentation'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
b) Response
The response of the post request is in JSON format, for example:
{ "ret": 0, "msg": "ok", "data":
"b'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA...AABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBBAABwQQAAcEEAAHBB'","time":
0.31}
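For illustration, a minimal sketch of decoding such a response, assuming the 'data' field carries base64-encoded raw float32 values (the exact layout depends on the segmentation model):
```
# Sketch: decode the segmentation payload; the float32 layout is an assumption.
import base64
import json
import numpy as np

result = json.loads(response.text)       # 'response' from requests.post above
data = result['data']
if data.startswith("b'"):                # strip a stringified bytes wrapper
    data = data[2:-1]
class_map = np.frombuffer(base64.b64decode(data), dtype=np.float32)
print(class_map.size)
```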
One example of a client app for fcgi_segmentation API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_segmentation_c.py* ".
c) Notice
It provides both C++ and Python APIs.
### 11.1.13 super resolution API usage {#11.1.13}
fcgi_super_resolution API is used to run inference on an image, and convert a small picture to a large picture. Client app inputs one picture(image), fcgi_super_resolution outputs a large picture.
fcgi_super_resolution has one working mode, local mode.
Client app uses http protocol to communicate with fcgi_super_resolution server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_super_resolution'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|--------------------------------|-------------------|----------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
In local mode (doing inference locally), only the "image" field needs to be set.
b) Response
The response of the post request is in JSON format, for example:
{"ret":0,
"msg":"ok","data":"/////+rX//////vm+9/K/uPO/+PO/+jU/+3a//Lf//Tg//fj//Tg//3o//3p//nm/+7a/+vY/+3a/+/d/+7c//Xj//jl//jm//De//Hf//Th//Ti//Ph//7r///s//nn/+7c/+/e/+/c/+rX/+LO/+le:...AAAAAAAAAAAAAAAAAAAAAACggDHx4ZLS0oMzMwNjc3tQ==",
"time":0.238}
One example of a client app for fcgi_super_resolution API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_super_resolution_c.py* ".
c) Notice
It provides both C++ and Python APIs.
### 11.1.14 digitalnote API usage {#11.1.14}
fcgi_digitalnote API is used to run inference on an image, and recognize and output the handwriting, machine writing and formulas in the picture. Client app inputs one picture(image), fcgi_digitalnote outputs the handwriting, machine writing and formulas in the picture.
fcgi_digitalnote only has local mode and does not have remote mode.
Client app uses http protocol to communicate with fcgi_digitalnote server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_digitalnote'
\- post parameter: this parameter should include these fields:
| Field name | Type | Range | Example | comments |
|---------------|--------|--------------------------------|-------------------|-----------------------------------------|
| 'app_id' | Int | Positive integer | 2128571502 | Application ID |
| 'nonce_str' | string | No more than 32 byte | fa577ce340859f9fe | Random string |
| 'image' | string | image data, often is a picture | | Must be encoded by base64 method |
| 'time_stamp' | Int | Positive integer | 1493468759 | timestamp |
| 'appkey' | string | string | di6ik9b9JiYfImUB | Application key |
| 'latex' | string | string | "365 234 " | Pixel coordinates of latex |
| 'handwritten' | string | string                         | "354 39 431 123 " | Pixel coordinates of handwritten text    |
| 'html' | int | {0,1} | 0 | 0 for terminal client 1 for html client |
In local mode (doing inference locally), the "image", "latex", "handwritten" and "html" fields need to be set. Find the coordinates of a pixel in the formula area of the picture and fill them in the latex field. Find the coordinates of a pixel in the handwritten area of the picture and fill them in the handwritten field. Use spaces to join coordinates. If you use a terminal client, the html field is 0.
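A minimal request sketch follows; the file name and the pixel coordinates are placeholders:
```
# Sketch of a digitalnote request from a terminal client (html=0).
import base64
import requests

url = 'http://localhost:8080/cgi-bin/fcgi_digitalnote'
with open('note.jpg', 'rb') as f:          # 'note.jpg' is a placeholder
    img = base64.b64encode(f.read())

params = {
    'image': img,
    'latex': '365 234 ',                   # a pixel inside the formula area
    'handwritten': '354 39 431 123 ',      # pixels inside the handwritten area
    'html': 0,                             # 0 for a terminal client
}
response = requests.post(url, params)
print(response.text)
```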
b) Response
The response of the post request is in JSON format, for example:
{'ret': 0, 'msg': 'ok', 'data': '的人不一了是他有为在责新中任自之我们\n 的人不一了是他有为在责新中任自之我们\n 4 7 4 W ^ { 1 } + 7 . 1 9 o ^ { 4 } - 6 - 0 . 9 6 L ^ { 1 } U\n 区是我国载人航天工程立项实施以来的第19次飞行任务,也是空间站阶段的首次载\n 人飞行任务。飞船入轨后,将按照预定程序,与大和核心舱进行自主快速交会对接\n 自合体飞行期间,航大员将进驻大和核心能,完成为期3个月的在轨驻留,开展机械\n 操作、出舱活动等工作,验证航大员长期在轨驻留、再生生保等一系列关键技术\n 自前,大和核心舱与大舟二号的组合体运行在�?90km的近圆对接轨道,状态良\n 好,满足与神舟十二号交会对接的任务要求和航大员进驻条件\n 震撼!神舟十二号与地球同框\n 神舟十二号载人飞船升空过程中,舱�?名航天员状态良好,推进舱外摄像机拍摄全\n 了神舟十二号与地球同框震想面面\n 自关报道:神舟十二号载人飞船飞行乘组确定!他们在太空将怎样生活3个月\n ',
'time': 1.095}
One example of a client app for the fcgi_digitalnote API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_digitalnote_c.py* ".
c) Notice
It only provides a Python API.
It can speed up the inference: the picture does not need to be sent three times
to get the three different results. Handwriting, machine writing and formulas
can all be recognized with one request.
### 11.1.15 Video pipeline management (control) API usage {#11.1.15}
Video pipeline API is used to start, stop a video pipeline or read something
from a video pipeline.
The following is the detailed information about the request parameters and the response.
a) Request
\- url: '<http://localhost:8080/cgi-bin/streaming>'
\- Content-Type: application/json
\- JSON object fields:
| Field name | Type | Range | Example | comments |
| ----------- | ------ | ----------------------- | ------------------------------------------------------------ | -------------------------------------- |
| "pipeline" | string | string | "launcher.object_detection" | |
| "method" | string | "start"/ "stop"/ "read" | "start" | |
| "parameter" | string | JSON string | "{ "source":"device=/dev/video0", "sink":"v4l2sink device=/dev/video2", "resolution":"width=800,height=600", "framerate':'inference-interval=1" }" | optional, example is the default value |
\- example:
$> curl -H "Content-Type:application/json" -X POST \
http://localhost:8080/cgi-bin/streaming -d \
'{"pipeline":"launcher.object_detection", "method":"start"}'
b) Response
For the start/stop method, the response is a string, "0" means success, "1" means failure.
For the read method, the response is the content that was read.
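For illustration, the same start/read/stop sequence can also be sent from Python; a minimal sketch:
```
# Sketch: start, read and stop a pipeline over the streaming endpoint.
import requests

url = 'http://localhost:8080/cgi-bin/streaming'
pipeline = 'launcher.object_detection'

requests.post(url, json={'pipeline': pipeline, 'method': 'start'})   # "0" on success
print(requests.post(url, json={'pipeline': pipeline, 'method': 'read'}).text)
requests.post(url, json={'pipeline': pipeline, 'method': 'stop'})
```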
### 11.1.16 Live ASR API usage (online ASR case) {#11.1.16}
fcgi_live_asr API is also a usage of Automatic-Speech-Recognition. It uses the same models as the fcgi_asr API (ASR API usage in 11.1.2). The difference is that this API is an online ASR case while 11.1.2 is an offline ASR case. That means this live asr API continuously captures the voice from the MIC devices, does inference, and sends out the sentences that the voice expressed.
fcgi_live_asr case has only one working mode - local mode. It doesn't support proxy mode.
Client app uses http protocol to communicate with fcgi_live_asr server.
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/fcgi_live_asr'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Example** | **comments** |
|----------------|----------|-----------|-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 'mode' | Int | 0,1,2 | 0 | To control the running mode of the fcgi_live_asr service: 0: start the live asr service; 1: do inference and get the result sentences; 2: stop the live asr service |
b) Response
The response of the post request is in JSON format, for example:
Starting live asr ok!
HOW ARE YOU DOING
HELLO
HA
...
Stop live asr ok!
One example of a client app for fcgi_live_asr API is
"*api-gateway/cgi-bin/test-script/test-demo/post_local_live_asr.py*".
c) Notice
Currently, this model only supports English utterances, not Mandarin.
It only provides C++ APIs.
In order to use this API, you need to enable the pulseaudio and health-monitor
services.
(1) On the host PC, install the pulseaudio package if this package hasn't been installed.
For example:
#sudo apt-get install pulseaudio
(2) Enable the TCP protocol of the pulseaudio.
Edit the configuration file. for example:
#sudo vim /etc/pulse/default.pa
Find out the following tcp configuration:
#load-module module-native-protocol-tcp
Uncomment the tcp configuration (remove the "#"), and add authentication:
load-module module-native-protocol-tcp auth-anonymous=1
Save and quit the configuration file.
(3) Restart the pulseaudio service. For example:
#sudo systemctl restart pulseaudio
or kill the pulseaudio thread
#sudo kill -9 xxxx(pulseaudio thread number)
(4) Run the health-monitor service on the host PC if it is not already running.
This service is used to monitor the CCAI container.
### 11.1.17 Pose estimation API usage {#11.1.17}
Pose estimation API is a specific usage of video pipeline management API(11.1.15).
This estimation API is used to estimate the human pose status, such as head
status, shoulder status, body status or eye status. The client app can get the status and detect whether the current pose is a correct pose by calling this API.
Please refer to the section 11.1.15 for the detailed information of video pipeline management API.
In this API, the client app uses http protocol to communicate with fcgi_streaming server.
The host side must run X server in order to display video. Before running this case, the following command must be executed on the host side to enable X server:
#xhost +
The sample code of sending post request in client app is:
*response = requests.post(url, post_parameter)*
The following is the detailed information about the request parameters and the response.
a) Input parameters
\- http url: such as: url= 'http://localhost:8080/cgi-bin/streaming'
\- post parameter: this parameter should include these fields:
| **Field name** | **Type** | **Range** | **Content** | **comments** |
| -------------- | -------- | ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| "pipeline" | string | string | "launcher.pose_estimation" | |
| "method" | string | "start"/ "stop"/ "read" | "start' "read" "stop" | Start pipeline/ Read pipeline/ Stop pipeline |
| "parameter" | string | JSON string | "{ "source":"device=/dev/video0", "sink":"fpsdisplaysink video-sink=ximagesink sync=false", "framerate':'inference-interval=1" }" | For the start method, it must have this parameter field; for the read/stop method, this parameter field is optional. |
b) Response
The response of the post request is in JSON format.
For the start/stop method, the response is a string, "0" means success, "1" means failure.
For the read method, the response is the content that was read, like the following string:
{
"Person0":{
"available status":"63",
"header status":"OK",
"header angles":{
"angle_y":-0.037771,
"angle_p":0.230196,
"angle_r":-1.108361
},
"shoulder status":"OK",
"shoulder angle with horizontal": 1,
"header to shoulder status":"OK",
"header-to-shoulder angle": 91,
"left eye status":"Open",
"right eye status":"Open",
"body status":"OK"
}
}
One example of a client app for pose estimation API is
"*api-gateway/fcgi/streaming/cpp/post_local_streaming_c.py*".
c) Notice
• It only provides C++ APIs.
• The following command must be run on the host side before running the pose estimation case:
#xhost +
• The host side must run X server for display. Please install the following package on the host side:
# sudo apt-get install x11-xserver-utils
# 11.2 gRPC APIs Manual {#11.2}
CCAI framework not only provides FCGI APIs, but also provides many gRPC APIs.
Client APPs can do inference by calling gRPC APIs.

The following are detailed gRPC APIs.
### 11.2.1 proto file {#11.2.1}
```
syntax = "proto3";
package inference_service;
service Inference {
rpc OCR (Input) returns (Result) {}
rpc ASR (Input) returns (Result) {}
rpc Classification (Input) returns (Result) {}
rpc FaceDetection (Input) returns (Result) {}
rpc FacialLandmark (Input) returns (Result) {}
}
message Input {
bytes buffer = 1;
}
message Result {
string json = 1;
}
```
In the .proto file the service interface, 'Inference', is defined, and rpc methods, 'OCR', 'Classification', 'FaceDetection', 'FacialLandmark' and 'ASR' are defined inside the service.
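For illustration, a minimal Python client sketch for this service. The generated module names assume the proto file is named inference_service.proto and was compiled with grpcio-tools; the server address and port are also assumptions:
```
# Sketch of a gRPC client for the Inference service defined above.
import grpc
import inference_service_pb2
import inference_service_pb2_grpc

channel = grpc.insecure_channel('localhost:50051')   # address is an assumption
stub = inference_service_pb2_grpc.InferenceStub(channel)

with open('intel.jpg', 'rb') as f:                   # placeholder image file
    request = inference_service_pb2.Input(buffer=f.read())

result = stub.OCR(request)                           # returns message Result
print(result.json)                                   # a json format string
```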
### 11.2.2 OCR method {#11.2.2}
Request:
message Input
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|-----------|--------------------------------|
| buffer | bytes | | .jpg or .png image file buffer |
Response:
message Result
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------|
| json | string | example: [ { "itemcoord":{ "x":162, "y":91, "width":141, "height":81 }, "itemstring":"intel" }, { "itemcoord":{ "x":205, "y":153, "width":175, "height":77 }, "itemstring":"inside" } ] | the field is json format string |
### 11.2.3 ASR method {#11.2.3}
Request:
message Input
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|-----------|------------------|
| buffer | bytes | | .wav file buffer |
Response:
message Result
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|------------------------------------------|---------------------------------|
| json | string | example: { "text":"HOW ARE YOU DOING" } | the field is json format string |
### 11.2.4 Classification method {#11.2.4}
Request:
message Input
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|-----------|--------------------------------|
| buffer | bytes | | .jpg or .png image file buffer |
Response:
message Result
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|------------------------------------------------------------------|---------------------------------|
| json | string | example: [ { "Tag_name":"sandal","tag_confidence":0.743236 } ] | the field is json format string |
### 11.2.5 FaceDetection method {#11.2.5}
Request:
message Input
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|-----------|--------------------------------|
| buffer | bytes | | .jpg or .png image file buffer |
Response:
message Result
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|------------------------------------------------------------------------------------------|---------------------------------|
| json | string | example: [ {"x1":611,"y1":106,"x2":827,"y2":322}, {"x1":37,"y1":128,"x2":298,"y2":389} ] | the field is json format string |
### 11.2.6 FacialLandmark method {#11.2.6}
Request:
message Input
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|-----------|--------------------------------|
| buffer | bytes | | .jpg or .png image file buffer |
Response:
message Result
| **Field name** | **Type** | **Value** | **comments** |
|----------------|----------|-------------------------------------------------------|---------------------------------|
| json | string | example: [ {"x":684,"y":198}, {"x":664,"y":195}, ' ] | the field is json format string |
# 11.3 Low level APIs Manual {#11.3}
Runtime service library provides APIs for upper layers, such as for fcgi or grpc
layer etc. Runtime library supports different inference engines, such as
Openvino or Pytorch. But the runtime library only provides one set of APIs to
the upper layer. Upper layers select the inference engine by passing parameters
to runtime APIs.
Runtime APIs are *"simple"* APIs. *"Simple"* means the number of APIs is
limited: although there are only a few APIs, you can call them to do inference for many cases, such as processing image, speech, or video. *"Simple"* also means they can be used friendly and easily. For example, if you want to do inference on an image, you can finish the work by calling only one API, vino_ie_pipeline_infer_image(). You need not care about how to build up inference pipelines; they are opaque to the end user. All building work is done in the Runtime library.
The runtime service library APIs are implemented in two languages, C++ and Python, so it provides two types of APIs. One type is the C++ APIs, which can be called by C++ programs directly. The other is the Python APIs, prepared for Python programs.

**Notice:**
There are two versions of the C++ API. Version 0 is described in section 11.3.1 (C++ APIs for Openvino Backend Engine). It only supports Openvino as an inference engine, and doesn't support the Pytorch engine.
Version 1 is described in section 11.3.3 (C++ APIs for Different Backend Engines). It supports both the Openvino and Pytorch engines. Some APIs in version 0 can be replaced by APIs in version 1.
Some C++ APIs in version 0 will be deprecated in the future. We encourage you to use the version 1 C++ APIs where the version 0 APIs are marked "deprecated".
### 11.3.1 C++ APIs for Openvino Backend Engine(Version 0) {#11.3.1}
#### 11.3.1.1 Return value (deprecated) {#11.3.1.1}
```
/**
 * @brief Status code of inference
 */
#define RT_INFER_ERROR -1       // inference error
#define RT_LOCAL_INFER_OK 0     // inference successfully on local
#define RT_REMOTE_INFER_OK 1    // inference successfully on remote server
```
Some APIs have two work modes. One mode is local mode, which means doing inference on the local XPU. The other is proxy mode: the API forwards requests to a remote server (such as the QQ/Tencent server), and the remote server does the inference.
In local mode, the return value is RT_LOCAL_INFER_OK (success) or RT_INFER_ERROR (failure).
In proxy mode, the return value is RT_REMOTE_INFER_OK (success) or RT_INFER_ERROR (failure).
#### 11.3.1.2 Server parameter {#11.3.1.2}
/**
 * @brief These are the parameters to do inference on a remote server
 */
struct serverParams {
    std::string url;        // the address of server
    std::string urlParam;   // the post parameter of request
    std::string response;   // the response data of server
};
This parameter is used by APIs in proxy mode. Set the server address (serverParams.url) and the request (serverParams.urlParam), and get the server response (serverParams.response).
An example of usage:
std::string param = "f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg";
struct serverParams urlInfo{"https://www.intel.cn/index.html", param};
''''do inference on remote servers ''''''
//get server response
std::cout << urlInfo.response << std::endl;
#### 11.3.1.3 Policy configuration API {#11.3.1.3}
This API is used by users to change API behavior. Users can set API working mode
(such as local mode or proxy mode), or assign inference devices (XPU) in local
mode.
1) API
```
/***
** @brief Set parameters to configure vino ie pipeline
** @param configuration Parameters set from the end user.
**/
int vino_ie_pipeline_set_parameters(struct userCfgParams& configuration);
```
2) parameter
```
/***
** @brief This is the parameters setting from end user*
**/
struct userCfgParams {
bool isLocalInference; //do inference in local or remote
std::string inferDevice; //inference device: CPU, GPU or other device
};
isLocalInference: true  -- local mode, do inference on local XPU.
                  false -- proxy mode, do inference on remote server.
inferDevice: the inference device in local mode; you can select CPU, GPU, GNA_AUTO, etc.
```
3) example
```
struct userCfgParams cfg{true, "CPU"};
int res = vino_ie_pipeline_set_parameters(cfg);
```
4) Notice
This API setting is a global setting. That means this setting affects the behavior of all the following API calls.
#### 11.3.1.4 image API (deprecated) {#11.3.1.4}
This API is used to do inference on images. It is related to image processing.
1) API
```
/***
* @brief Do inference for image
* @param image Images input for network
* @param additionalInput Other inputs of network(except image input)
* @param xmls Path of IE model file(xml)
* @param rawDetectionResults Outputs of network, they are raw data.
* @param remoteServerInfo Parameters to do inference on remote server
* @return Status code of inference
*/
int vino_ie_pipeline_infer_image(std::vector<std::shared_ptr<cv::Mat>>& image,
std::vector<std::vector<float>>& additionalInput,
std::string xmls,
std::vector<std::vector<float>*>& rawDetectionResults,
struct serverParams& remoteServerInfo);
```
2) parameter
| **Parameter** | **Type** | **Comments** |
| ------------------- | -------------------------------------- | ------------------------------------------------------------ |
| image | std::vector <std::shared_ptr<cv::Mat>> | The input data of the image. The data format of the image is cv::Mat. The input is a batch of images. The batch is expressed by std::vector<>. The vector size is batch size. Each item in the vector is a shared pointer, std::shared_ptr<cv::Mat>, which points to one image data in the batch. |
| additionalInput | std::vector <std::vector<float>> | For some networks, they have more than one input. This parameter is used for other inputs except image input. The type is also std::vector<>. Vector size is the number of inputs in a network except image input. For each input, the input data type is std::vector<float>. |
| xml | std::string | The IE model file, which includes the file path. The file must be xml format. |
| rawDetectionResults | std::vector <std::vector<float>*> | The inference results. For some networks, they have more than one output port. This parameter is defined to std::vector<>. The vector size is the number of output ports. Each item in the vector is a pointer, which points to a vector(std::vector<float>), this vector is the inference result of one output port. |
| remoteServerInfo    | struct serverParams                    | Server parameter. This is used in proxy mode. Please refer to 11.3.1.2 for detailed information. |
3) example
```
std::string img_file = "./models/person-detection-retail-0013.png";
std::string model_file = "./models/person-detection-retail-0013.xml";
std::vector<float> rawDetectionResult;
std::vector<std::vector<float>> additionalInput;
std::vector<std::vector<float>*> rawDetectionResults;
rawDetectionResults.push_back(&rawDetectionResult);
std::vector<std::shared_ptr<cv::Mat>> images;
std::shared_ptr<cv::Mat> frame_ptr =
    std::make_shared<cv::Mat>(cv::imread(img_file, cv::IMREAD_COLOR));
images.push_back(frame_ptr);
std::string param = "f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg";
// = "test";
struct serverParams urlInfo{"https://www.intel.cn/index.html", param};
int res = vino_ie_pipeline_infer_image(images, additionalInput, model_file, rawDetectionResults, urlInfo);
```
4) Notice
The parameter additionalInput does not support the cv::Mat data format.
#### 11.3.1.5 ASR API (deprecated) {#11.3.1.5}
ASR means Automatic Speech Recognition, speech-to-text. This API is implemented
based on some Intel speech libraries.
1) API
```
/**
* @brief Do inference for speech (ASR). Using intel speech libraries.
* @param samples Speech data buffer.
* @param sampleLength Buffer size of speech data
* @param bytesPerSample Size for each speech sample data (how many bytes for
each sample)
* @param rh_utterance_transcription Text result of speech. (ASR result)
* @param remoteServerInfo Parameters to do inference on remote server.
* @return Status code of inference
*/
int vino_ie_pipeline_infer_speech(const short* samples,
int sampleLength,
int bytesPerSample,
std::string config_path,
std::vector<char> &rh_utterance_transcription,
struct serverParams& remoteServerInfo);
```
2) parameters
| **Parameter** | **Type** | **Comments** |
|----------------------------|---------------------|------------------------------------------------------------------------------------------------------------------------------|
| samples                    | short int*          | speech data, which format is PCM data. Each short int data is one PCM sample.                                                  |
| sampleLength | int | The size of speech data |
| bytesPerSample | int | the bytes number for each speech sample data. For PCM data, the value should be 2, which means each PCM sample is two bytes. |
| config_path | std::string | The configuration file for the ASR model. This configuration file is used by intel speech libraries |
| rh_utterance_transcription | std::vector<char> | the inference result for speech data. The data format is char. |
| remoteServerInfo           | struct serverParams | Server parameter. This is used in proxy mode. Please refer to 11.3.1.2 for detailed information.                               |
Samples, sampleLength, and bytesPerSample are often obtained by parsing the header of a wave file.
3) example
```
std::string wave_filename = "./models/how_are_you_doing.wav";
std::string config_filename = "./models/lspeech_s5_ext/FP32/speech_lib.cfg";
short* samples = nullptr;
int sampleLength = 0;
int bytesPerSample = 0;
unsigned int size = 0;
uint8_t* wave_data = ReadBinaryFile(wave_filename.c_str(), &size);
parseWaveFile(wave_data, size, samples, sampleLength, bytesPerSample);
std::vector<char> rh_utterance_transcription(1024 * 1024);
std::string param = "f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg";
struct serverParams urlInfo{"https://www.intel.cn/index.html", param};
int res = vino_ie_pipeline_infer_speech(samples, sampleLength,
bytesPerSample, config_filename, rh_utterance_transcription, urlInfo);
```
4) Notice
This ASR model only supports English, not Chinese.
#### 11.3.1.6 common API (deprecated) {#11.3.1.6}
"Common" means this API is used for cases other than image and ASR. For example,
the TTS case. If the input/output data of the model meet the requirements of
API, then this API can be used in this case.
1) API
```
/**
* @brief Do inference for common models
* @param inputData Data input for network. The data type is float.
* @param additionalInput Other inputs of network (except the inputData input)
* @param xmls Path of IE model file(xml)
* @param rawDetectionResults Outputs of network, they are raw data.
* @param remoteServerInfo Parameters to do inference on remote server
* @return Status code of inference
*/
int vino_ie_pipeline_infer_common(std::vector<std::shared_ptr<std::vector<float>>>& inputData,
                                  std::vector<std::vector<float>>& additionalInput,
                                  std::string xmls,
                                  std::vector<std::vector<float>*>& rawDetectionResults,
                                  struct serverParams& remoteServerInfo);
```
2) parameters
| **Parameters** | **Type** | **Comments** |
|---------------------|--------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| inputData | std::vector<std::shared_ptr<std::vector<float>>> | Input data for the network. Similar to the image parameter of image API. The input data is a batch of vectors. The batch vectors are expressed by std::vector<>. The vector size is batch size. Each item of vector is a share pointer, std::shared_ptr<std::vector<float>>, which points to one float vector. |
| additionalInput | std::vector<std::vector<float>> | For some networks, they have more than one input. This parameter is used for other inputs except inputData pin. The type is also std::vector<>. Vector size is the number of inputs in a network except inputData input port. For each input, the input data type is std::vector<float>. |
| xml | std::string | The IE model file, which includes the file path. The file must be xml format. |
| rawDetectionResults | std::vector<std::vector<float>*> | The inference results. For some networks, they have more than one output port. This parameter is defined to std::vector<>. The vector size is the number of output ports. Each item in the vector is a pointer, which points to one output result(std::vector<float>), this vector is the inference result of one output port. |
| remoteServerInfo    | struct serverParams                                    | Server parameter. This is used in proxy mode. Please refer to 11.3.1.2 for detailed information. |
3) example
```
std::string model_file = "./models/frozen_infer_1_setence.xml";
std::vector<float> rawDetectionResult;
std::vector<std::vector<float>> additionalInput;
std::vector<std::vector<float>*> rawDetectionResults;
rawDetectionResults.push_back(&rawDetectionResult);
std::vector<float> text_feature;
std::shared_ptr<std::vector<float>> encoder_frame_ptr =
    std::make_shared<std::vector<float>>(text_feature);
std::vector<std::shared_ptr<std::vector<float>>> encoder_vectors;
encoder_vectors.push_back(encoder_frame_ptr);
std::vector<float> y_hat(200 * 400, 0.0f);
additionalInput.push_back(y_hat);
std::string param = "f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg";
struct serverParams urlInfo{"https://www.intel.cn/index.html", param};
int res = vino_ie_pipeline_infer_common(encoder_vectors, additionalInput,
model_file, rawDetectionResults, urlInfo);
```
#### 11.3.1.7 video API {#11.3.1.7}
This API is used to run inference for video streaming. The video API includes
two APIs: one is used for initializing models, another is used for running
inference.
1) APIs
```
/**
* @brief Initialization before video inference
* @param modelXmlFile Path of IE model file(xml)
* @param deviceName Inference on which device: CPU, GPU or others
* @return Status code of inference
*/
int vino_ie_video_infer_init(const std::string& modelXmlFile,
const std::string& deviceName);
/**
* @brief Do inference for video frame
* @param frame Image frame input for network
* @param additionalInput Other inputs of network(except image input)
* @param modelXmlFile Path of IE model file(xml)
* @param rawDetectionResults Outputs of network, they are raw data.
* @return Status code of inference
*/
int vino_ie_video_infer_frame(const cv::Mat& frame,
                              std::vector<std::vector<float>>& additionalInput,
                              const std::string& modelXmlFile,
                              std::vector<std::vector<float>*>& rawDetectionResults);
```
2) parameters
| **Parameters** | **Type** | **Comments** |
|---------------------|----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| modelXmlFile | std::string | The IE model file, which includes the file path. The file must be xml format. |
| deviceName | std::string | Inference device. This parameter selects inference devices, XPU(CPU, GPU, or others). |
| frame | cv::Mat | Input video frame. Only one frame data, not support batch. |
| rawDetectionResults | std::vector<std::vector<float>*> | The inference results. For some networks, they have more than one output port. This parameter is defined to std::vector<>. The vector size is the number of output ports. Each item in the vector is a pointer, which points to an output data(std::vector<float>), this vector is the inference result of one output port. |
| additionalInput | std::vector<std::vector<float>> | For some networks, they have more than one input. This parameter is used for other inputs except image input. The type is also std::vector<>. Vector size is the number of inputs in a network except the image input port. For each input, the input data type is std::vector<float>. |
3) example
```
std::string img_file = "./models/person-detection-retail-0013.png";
std::string model_file = "./models/person-detection-retail-0013.xml";
std::vector<float> rawDetectionResult;
std::vector<std::vector<float>> additionalInput;
std::vector<std::vector<float>*> rawDetectionResults;
rawDetectionResults.push_back(&rawDetectionResult);
cv::Mat frame = cv::imread(img_file, cv::IMREAD_COLOR);
vino_ie_video_infer_init(model_file, "CPU");
int frame_num = 10;
int i = 0;
while (i++ < frame_num) {
// ...
rawDetectionResult.clear();
vino_ie_video_infer_frame(frame, additionalInput, model_file,
rawDetectionResults);
// ...
}
```
4) notice
(1) No policy logic in this API. The setting of policy API has no impact on this API.
(2) It has only one working mode, local mode; it doesn't have a proxy mode.
#### 11.3.1.8 Load Openvino Model from Buffer API {#11.3.1.8}
This API is used for loading an Openvino model from a buffer. In some cases, the
Openvino model isn't a file on the disk; it is located in a memory buffer. For
these cases, we need to call this API to initialize the Openvino model.
1) API
```
/**
* @brief Initial, load model from buffer
* @param xmls a unique string to handle the inference entity
* @param model model buffer
* @param weights weights buffer
* @param batch batch size
* @param isImgInput whether input of model is image
* @return Status code
*/
int vino_ie_pipeline_init_from_buffer(std::string xmls,
const std::string &model,
const std::string &weights,
int batch,
bool isImgInput);
```
2) parameter
| **Parameters** | **Type** | **Comments** |
|----------------|--------------|----------------------------------------------------|
| xmls | std::string | a unique string to represent IE model |
| model | std::string | The memory buffer which includes the IE model. |
| weights | std::string | This memory buffer which includes the weight data. |
| batch | int | The batch size. |
| isImgInput | bool | Whether the input of the model is image data. |
#### 11.3.1.9 Live ASR API {#11.3.1.9}
ASR means Automatic Speech Recognition, speech-to-text. This API is implemented
based on some Intel speech libraries. This API is similar to the ASR
API (11.3.1.5). The difference is that this API does continuous inference and
outputs the text, while the previous ASR API only does one-time inference.
1) API
```
/**
* @brief Continuously do inference for speech (ASR). Using intel speech
libraries.
* @param mode Working status of ASR. Start/inference/stop
* @param samples Speech data buffer.
* @param sampleLength Buffer size of speech data
* @param bytesPerSample Size for each speech sample data (how many bytes for
each sample)
* @param rh_utterance_transcription Text result of speech. (ASR result)
* @param config_path The file path for configuration file.
* @param device The inference device.
* @return Status code of inference
*/
int vino_ie_pipeline_live_asr(int mode,   // 1 -- start, 2 -- inference, 0 -- stop
                              const short* samples,
                              int sampleLength,
                              int bytesPerSample,
                              std::string config_path,
                              std::string device,
                              std::vector<char> &rh_utterance_transcription);
```
2) parameters
| **Parameter** | **Type** | **Comments** |
|----------------------------|---------------------|------------------------------------------------------------------------------------------------------------------------------|
| mode | int | The working mode of the ASR process. 0 - stop to do inference 1 - start to do inference 2 - do inference |
| samples                    | short int*          | speech data, which format is PCM data. Each short int data is one PCM sample.                                                  |
| sampleLength | int | The size of speech data |
| bytesPerSample | int | the bytes number for each speech sample data. For PCM data, the value should be 2, which means each PCM sample is two bytes. |
| config_path | std::string | The configuration file for the ASR model. This configuration file is used by intel speech libraries |
| rh_utterance_transcription | std::vector<char> | the inference result for speech data. The data format is char. |
| device | std::string | The inference device: CPU or GNA |
Samples, sampleLength, and bytesPerSample are often obtained by parsing the
header of a wave file.
3) example
```
std::string wave_filename = "./models/how_are_you_doing.wav";
std::string config_filename = "./models/lspeech_s5_ext/FP32/speech_lib.cfg";
short* samples = nullptr;
int sampleLength = 0;
int bytesPerSample = 0;
unsigned int size = 0;
uint8_t* wave_data = ReadBinaryFile(wave_filename.c_str(), &size);
parseWaveFile(wave_data, size, samples, sampleLength, bytesPerSample);
std::vector<char> rh_utterance_transcription(1024 * 1024);
// starting live asr mode (mode==1)
int res = vino_ie_pipeline_live_asr(1, samples, sampleLength, bytesPerSample,
config_filename, "CPU", rh_utterance_transcription);
// do inference (mode==2)
res = vino_ie_pipeline_live_asr(2, samples, sampleLength, bytesPerSample,
config_filename, "CPU", rh_utterance_transcription);
// stopping live asr mode(mode==0)
res = vino_ie_pipeline_live_asr(0, samples, sampleLength, bytesPerSample,
config_filename, "CPU", rh_utterance_transcription);
```
4) Notice
This ASR model only supports English, not Chinese.
#### 11.3.1.10 Configure a temporary inference device API {#11.3.1.10}
This API is used to set a temporary inference device for one case. The inference
device is usually set by the Policy configuration API (11.3.1.3), but that
setting is global and affects all cases. If the user wants to select an
inference device different from the global one, he can call this API to
override the global inference device. This temporary setting only affects the
specific user case, not the others.
1) API
```
/**
* @brief Set temporary inference device for the model.
* @ This setting will override the policy setting.
* @param set Set or clear the temporary inference device.
* @param model Name of the model file, including the file path.
* @param device The temporary inference device.
*/
int irt_set_temporary_infer_device(bool set, const std::string& model, std::string device);
```
2) parameter
```
set: Set(set==true) the temporary inference device;
or cancel(set==false) the temporary inference device.
model: Name of the model file, including the file path.
device: The temporary inference device.
```
3) example
```
// set a temporary device
int res = irt_set_temporary_infer_device(true, model, "CPU");
// ...
// clear a temporary device
res = irt_set_temporary_infer_device(false, model, "CPU");
```
4) Notice
Don't forget to call this API to cancel the temporary device setting when the
temporary device isn't used any longer.
### 11.3.2 Python API {#11.3.2}
Runtime service library also provides some python APIs to upper layers. These
python APIs can be called by python APPs directly.
The Python APIs provide the same functions as the C++ APIs, so they are mapped
one-to-one to the C++ APIs.
Python is a different language from C++, so the data structures used in python
APIs are also different from C++ data structures. The following table lists the
mapping of data structures between two languages.
| **Python** | **C++** |
|----------------|----------------------------------------------------|
| serverParams | struct serverParams |
| userCfgParams | struct userCfgParams |
| vectorChar | std::vector<char> |
| vectorFloat | std::vector<float> |
| vectorVecFloat | std::vector<std::vector<float>> |
| tripleVecFloat | std::vector<std::vector<std::vector<float>>> |
#### 11.3.2.1 Image API(deprecated) {#11.3.2.1}
1) API
*infer_image(image, image_channel, additionalInput, xmls, rawDetectionResults, remoteServerInfo)*
This API is defined for the OPENVINO backend engine only. It will be deprecated in the future.
2) parameters
| **Parameters** | **Type** | **Comments** |
|---------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| image | List[List[int]] | The image data. The data format of the image is list[int], and each image is expressed in one list[]. The outer list[] means batch images. The outer list length is batch size. The inner list[] is the image data. |
| Image_channel | int | This parameter defines the channels of the input image. For example: 3 - rgb, 1 - h |
| AdditionalInput | vectorVecFloat | Other inputs except image input. The meaning is the same as C++ API |
| Xmls | str | IE model file. The meaning is the same as C++ API. |
| rawDetectionResults | vectorVecFloat | The inference results. The meaning is the same as C++ API. |
| remoteServerInfo | serverParams | Server parameter. This is used in proxy mode. The meaning is the same as C++ API. |
3) example
```
import inferservice_python as rt_api
model_xml = "./models/person-detection-retail-0013.xml"
pic = list(pic)
pics = [pic] # pics should be list[list], [[],[],[]]
other_pin = rt_api.vectorVecFloat()
out1 = rt_api.vectorFloat()
out = rt_api.vectorVecFloat()
out.append(out1)
urlinfo = rt_api.serverParams()
urlinfo.url = 'https://www.baidu.com/s'
urlinfo.urlParam = 'f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg'
res = rt_api.infer_image(pics, 3, other_pin, model_xml, out, urlinfo)
```
4) Notice
The usage of this API is the same as C++ image API.
#### 11.3.2.2 Image API {#11.3.2.2}
1) API
*infer_image_v1(images, image_channel, additionalInput, xmls, backendEngine, rawDetectionResults, remoteServerInfo)*
This image API is the encouraged one for image inference. It supports different inference backend engines, such as OPENVINO, PYTORCH, TENSORFLOW or ONNX runtime. It is mapped to the C++ image API, irt_infer_from_image(). Please refer to 11.3.3 for the C++ image API.
2) parameters
| **Parameters** | **Type** | **Comments** |
| ------------------- | --------------------- | ------------------------------------------------------------ |
| images | list[List[List[int]]] | The image data. The data format of the image is list[int], and each image is expressed in one list[]. The inner list[int] is data of one image. The middle list[] means batch images. The middle list length is batch size. The outer list[] is the number of inputs, which means how many image inputs the model has. |
| Image_channel | int | This parameter defines the channels of the input image. For example: 3 - rgb, 1 - h |
| AdditionalInput | vectorVecFloat | Other inputs except image input. The meaning is the same as C++ API |
| Xmls | str | IE model file. The meaning is the same as C++ API. |
| backendEngine | str | Specify the inference engine, "OPENVINO", "PYTORCH", "ONNXRT" or "TENSORFLOW". |
| rawDetectionResults | vectorVecFloat | The inference results. The meaning is the same as C++ API. |
| remoteServerInfo | serverParams | Server parameter. This is used in proxy mode. The meaning is the same as C++ API. |
3) example
```
import inferservice_python as rt_api
model_xml = "./models/person-detection-retail-0013.xml"
pic = list(pic)    # 'pic' is assumed to hold raw image data loaded elsewhere
pics = [[pic]]     # pics must be list[list[list]]: [[[ ],[ ],[ ]]]
other_pin = rt_api.vectorVecFloat()
out1 = rt_api.vectorFloat()
out = rt_api.vectorVecFloat()
out.append(out1)
urlinfo = rt_api.serverParams()
urlinfo.url = 'https://www.baidu.com/s'
urlinfo.urlParam = 'f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg'
res = rt_api.infer_image_v1(pics, 3, other_pin, model_xml, "OPENVINO", out,
urlinfo)
```
4) Notice
The usage of this API is the same as C++ image API.
#### 11.3.2.3 ASR API {#11.3.2.3}
1) API
*infer_speech(samples, bytesPerSample, config_path, backendEngine,
rh_utterance_transcription, remoteServerInfo)*
2) parameters
| **Parameters** | **Type** | **Comments** |
| -------------------------- | ------------ | ------------------------------------------------------------ |
| samples | List[int] | Speech data in PCM format. Each PCM sample is one short int. |
| bytesPerSample | int | The number of bytes per speech sample. For PCM data the value should be 2, i.e., each sample is two bytes. |
| config_path | str | The configuration file for the ASR model. This configuration file is used by the Intel speech libraries. |
| backendEngine | str | Specify the inference engine, "OPENVINO", "PYTORCH", "ONNXRT" or "TENSORFLOW". |
| rh_utterance_transcription | vectorChar | The inference result (transcription) for the speech data. The data format is char. |
| remoteServerInfo | serverParams | Server parameter. This is used in proxy mode. The meaning is the same as in the C++ API. |
3) example
```
import numpy as np
import inferservice_python as rt_api

model_xml = './models/lspeech_s5_ext/FP32/speech_lib.cfg'

# parse_wavefile() is a user-supplied helper returning the PCM samples
# and the bytes-per-sample value of the wave file.
speech, sampwidth = parse_wavefile()

buf = np.zeros((100100), dtype=np.int8)  # output buffer for the transcription
utt_res = rt_api.vectorChar(buf)

urlinfo = rt_api.serverParams()  # only used in proxy mode
urlinfo.url = 'https://www.baidu.com/s'
urlinfo.urlParam = 'f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg'

res = rt_api.infer_speech(speech, sampwidth, model_xml, "OPENVINO", utt_res,
                          urlinfo)
```
4) Notice
The usage of this API is the same as C++ ASR API.
#### 11.3.2.4 Common API {#11.3.2.4}
This API is mapped to the C++ common API, irt_infer_from_common(), and can be used for cases such as TTS. For the C++ common API, please refer to 10.4.3.
1) API
*infer_common(inputData, additionalInput, xml, backendEngine, rawDetectionResults, remoteServerInfo);*
2) parameters
| **Parameters** | **Type** | **Comments** |
| ------------------- | -------------- | ------------------------------------------------------------ |
| inputData | tripleVecFloat | The input data for the network. Same as C++ API. |
| additionalInput | vectorVecFloat | Other inputs except inputData pin. Same as C++ API. |
| xml | str | The IE model file, which includes the file path. The file must be xml format. |
| backendEngine | str | Specify the inference engine, "OPENVINO", "PYTORCH", "ONNXRT" or "TENSORFLOW". |
| rawDetectionResults | vectorVecFloat | The inference results. Same as C++ API. |
| remoteServerInfo | serverParams | Server parameter. This is used in proxy mode. Same as C++ API. |
3) example
```
import inferservice_python as rt_api

model_xml = "./models/tts-encoder-decoder.xml"

# tts_data and other_pin_data are assumed to be prepared elsewhere.
input_data = rt_api.vectorVecFloat()
x_pin = rt_api.vectorFloat(tts_data)
input_data.append(x_pin)
input_vector = rt_api.tripleVecFloat()
input_vector.append(input_data)

other_pin = rt_api.vectorVecFloat()
y_pin = rt_api.vectorFloat(other_pin_data)
other_pin.append(y_pin)

out1 = rt_api.vectorFloat()
out = rt_api.vectorVecFloat()
out.append(out1)

urlinfo = rt_api.serverParams()
urlinfo.url = 'https://www.baidu.com/s'
urlinfo.urlParam = 'f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg'

res = rt_api.infer_common(input_vector, other_pin, model_xml, "OPENVINO", out,
                          urlinfo)
```
4) Notice
The usage of this API is the same as C++ common API.
#### 11.3.2.5 Policy configuration API {#11.3.2.5}
1) API
*set_policy_params(configuration);*
2) parameter
| **Parameters** | **Type** | **Comments** |
|----------------|---------------|-----------------------------------|
| Configuration | userCfgParams | Same as C++ struct userCfgParams. |
3) example
```
import inferservice_python as rt_api
# configuration:
cfg_info = rt_api.userCfgParams()
cfg_info.isLocalInference = True
cfg_info.inferDevice = 'CPU'
res = rt_api.set_policy_params(cfg_info)
```
4) Notice
The usage of this API is the same as C++ policy configuration API.
#### 11.3.2.6 Live ASR API {#11.3.2.6}
1) API
*live_asr(mode, samples, bytesPerSample, config_path, device, rh_utterance_transcription)*
2) parameters
| **Parameters** | **Type** | **Comments** |
|----------------------------|-------------|-------------------------------------------------------------------------------------------------------------------------------------|
| mode | int | The working mode of the ASR process: 0 = stop inference, 1 = start inference, 2 = do inference |
| samples | List[int] | Speech data in PCM format. Each PCM sample is one short int. |
| bytesPerSample | int | The number of bytes per speech sample. For PCM data the value should be 2, i.e., each sample is two bytes. |
| config_path | str | The configuration file for the ASR model. This configuration file is used by the Intel speech libraries. |
| rh_utterance_transcription | vectorChar | The inference result (transcription) for the speech data. The data format is char. |
| device | str | The inference device: CPU or GNA |
3) example
```
import numpy as np
import inferservice_python as rt_api

model_xml = './models/lspeech_s5_ext/FP32/speech_lib.cfg'

# parse_wavefile() is a user-supplied helper returning the PCM samples
# and the bytes-per-sample value of the wave file.
speech, sampwidth = parse_wavefile()

buf = np.zeros((100100), dtype=np.int8)
utt_res = rt_api.vectorChar(buf)
device = "CPU"

mode = 1  # start inference
res = rt_api.live_asr(mode, speech, sampwidth, model_xml, device, utt_res)
mode = 2  # do inference
res = rt_api.live_asr(mode, speech, sampwidth, model_xml, device, utt_res)
mode = 0  # stop inference
res = rt_api.live_asr(mode, speech, sampwidth, model_xml, device, utt_res)
```
4) Notice
The usage of this API is the same as C++ Live ASR API.
#### 11.3.2.7 Set temporary inference device API {#11.3.2.7}
1) API
This API is the Python version of the C++ API for configuring a temporary inference device (11.4.1.11).
Please refer to section 11.4.1.11 for detailed usage.
```
set_temporary_infer_device(set, model, device);
```
2) parameter
| **Parameters** | **Type** | **Comments** |
|----------------|-----------|-----------------------------------------------|
| set | bool | set/clear. Same as 11.4.1.11. |
| model | str | The model file. Same as 11.4.1.11 |
| device | str | Temporary inference device. Same as 11.4.1.11 |
3) example
```
import inferservice_python as rt_api
# 'model' is the model file path (str)

# set a temporary inference device:
res = rt_api.set_temporary_infer_device(True, model, "CPU")

# cancel a temporary inference device:
res = rt_api.set_temporary_infer_device(False, model, "CPU")
```
4) Notice
The usage of this API is the same as C++ configuring a temporary inference device API(11.4.1.11).
### 11.3.3 C++ APIs for Different Backend Engines (Version 1) {#11.3.3}
This set of C++ APIs (version 1) is a superset of the C++ APIs for the Openvino backend engine (version 0). The difference between the two versions is that version 1 supports different inference engines, such as Openvino, Pytorch, Onnx and Tensorflow. You can use the version 1 APIs to do the same things as the version 0 APIs.
The version 1 C++ APIs are the "standard" C++ APIs. In the future, some of the version 0 APIs will become obsolete, so you are encouraged to use the version 1 C++ APIs.
#### 11.3.3.1 Return Value {#11.3.3.1}
```
/**
*@enum irtStatusCode
*@brief Status code of running inference
*/
enum irtStatusCode : int {
RT_REMOTE_INFER_OK = 1, //inference successfully on remote server
RT_LOCAL_INFER_OK = 0, //inference successfully on local HW
RT_INFER_ERROR = -1 //inference error
};
```
Some APIs have two working modes. One is local mode, which means inference runs on a local XPU. The other is proxy mode: the API forwards requests to a remote server (such as a QQ or Tencent server), which runs the inference and returns the result.
In local mode, the return value is *RT_LOCAL_INFER_OK* (success) or *RT_INFER_ERROR* (failure).
In proxy mode, the return value is *RT_REMOTE_INFER_OK* (success) or *RT_INFER_ERROR* (failure).
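The Python bindings return the same integer codes, so a caller can branch on them directly. The helper below is a minimal sketch assuming the status values (1, 0, -1) are passed through unchanged by `inferservice_python`:
```
RT_REMOTE_INFER_OK = 1   # inference ran on the remote server (proxy mode)
RT_LOCAL_INFER_OK = 0    # inference ran on local hardware
RT_INFER_ERROR = -1      # inference failed

def check_status(res):
    """Interpret the status code returned by the inference APIs."""
    if res == RT_INFER_ERROR:
        raise RuntimeError("inference failed")
    where = "remote server" if res == RT_REMOTE_INFER_OK else "local hardware"
    print("inference succeeded on", where)
```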
#### 11.3.3.2 Inference Engines {#11.3.3.2}
Currently, the runtime libraries support four inference engines: Openvino, Pytorch, Onnx and Tensorflow.
A configuration file defines which inference engines are supported by this runtime library. The name of the configuration file is inference_engine_library.txt. Its content is:
```
#format: inference-engine-name library-name, for example: ONNX libonnxentry.so
#inference-engine-name: OPENVINO, ONNX, PYTORCH, TENSORFLOW
#You can add new inference engine to this file by following same format
OPENVINO libopenvinoentry.so
ONNXRT libonnxentry.so
PYTORCH libpytorchentry.so
TENSORFLOW libtensorflowentry.so
```
This file defines the name of each inference engine and the corresponding inference engine library.
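As an illustration only, a few lines of Python are enough to read this mapping; the file name and format are taken from above, and this parser is an assumption of mine, not part of the shipped library:
```
def load_engine_map(path="inference_engine_library.txt"):
    engines = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip comments and blank lines
                continue
            name, library = line.split()
            engines[name] = library
    return engines

# Expected result, given the file above:
# {'OPENVINO': 'libopenvinoentry.so', 'ONNXRT': 'libonnxentry.so', ...}
```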
#### 11.3.3.3 Image API {#11.3.3.3}
The usage of this API is the same as image API in version 0.
1. API
```
/**
 * @brief Run inference from image
 * @param tensorData Buffers for input/output tensors
 * @param modelFile The model file, include path
 * @param backendEngine Specify the inference engine, OPENVINO, PYTORCH,
 *        ONNXRT or TENSORFLOW.
 * @param remoteServerInfo Parameters to do inference on remote server
 * @return Status code of inference
 */
enum irtStatusCode irt_infer_from_image(struct irtImageIOBuffers& tensorData,
                                        const std::string& modelFile,
                                        std::string backendEngine,
                                        struct serverParams& remoteServerInfo);
```
2. parameters
```
/**
 * @brief Buffers for running inference from image.
 * Includes pointers pointing to the input/output tensor data.
 * There are two kinds of inputs: image inputs and assistant inputs.
 * The image input tensor is represented by vector<vector<ptrCVImage>>,
 * meaning [ports_number, batch, one_image]. The inner shared pointer points
 * to one image. The middle vector is the batch; its length is the batch size.
 * The outer vector.size() is the number of image input ports.
 * The assistant input tensor is represented by vector<vector<float>>,
 * meaning [ports_number, one_data_array].
 * The output tensor is represented by vector<vector_pointer>, meaning
 * [ports_number, one_data_array]. The inner pointer points to a
 * vector(one_data_array) that receives the values returned by the API.
 * The outer vector.size() is the number of output ports of the model.
 */
struct irtImageIOBuffers {
    /* Pointer points to the main image input data. The inner shared pointer points to one image (cv::Mat) */
    std::vector<std::vector<ptrCVImage>> *pMainInputs;
    /* Pointer points to the assistant input data. */
    std::vector<std::vector<float>> *pAdditionalInputs;
    /* Pointer points to the output buffer. The inner pointer points to the result inferenced by runtime API */
    std::vector<std::vector<float>*> *pInferenceResult;
};
```
| **Parameter** | **Type** | **Comments** |
|------------------|---------------------|---------------------------------------------------------------------------------------------|
| tensorData | irtImageIOBuffers | Buffers for input/output tensors |
| modelFile | std::string | The model file, which includes the file path. |
| backendEngine | std::string | Specify the inference engine, OpenVINO, Pytorch, Onnx runtime or Tensorflow. |
| remoteServerInfo | struct serverParams | Server parameter. This is used in proxy mode. Please refer to 1.2 for detailed information. |
3. example
```
std::string img_file = "./models/person-detection-retail-0013.png";
std::string model_file = "./models/person-detection-retail-0013.xml";

std::vector<float> rawDetectionResult;
std::vector<std::vector<float>> additionalInput;
std::vector<std::vector<float>*> rawDetectionResults;
rawDetectionResults.push_back(&rawDetectionResult);

std::vector<std::shared_ptr<cv::Mat>> images;
std::shared_ptr<cv::Mat> frame_ptr =
    std::make_shared<cv::Mat>(cv::imread(img_file, cv::IMREAD_COLOR));
images.push_back(frame_ptr);

std::string param =
    "f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg";
struct serverParams urlInfo{"https://www.intel.cn/index.html", param};

std::vector<std::vector<ptrCVImage>> images_vecs;
images_vecs.push_back(images);
images.clear();

struct irtImageIOBuffers modelAndBuffers{&images_vecs, &additionalInput, &rawDetectionResults};
enum irtStatusCode res = irt_infer_from_image(modelAndBuffers, model_file, "OPENVINO", urlInfo);
```
#### 11.3.3.4 Speech API {#11.3.3.4}
The usage of this API is the same as ASR API in version 0.
1. API
```
/**
 * @brief Run inference from speech (ASR)
 * @param waveData Parameters for speech data, includes data buffer and settings.
 * @param configurationFile The configuration file, includes path
 * @param inferenceResult Text result of speech (ASR result)
 * @param backendEngine Specify the inference engine, OPENVINO, PYTORCH,
 *        ONNXRT or TENSORFLOW.
 * @param remoteServerInfo Parameters to do inference on remote server
 * @return Status code of inference
 */
enum irtStatusCode irt_infer_from_speech(const struct irtWaveData& waveData,
                                         std::string configurationFile,
                                         std::vector<char>& inferenceResult,
                                         std::string backendEngine,
                                         struct serverParams& remoteServerInfo);
```
2. parameter
```
/**
* @brief Parameters for wave data. Used by running inference for speech.
* Wave data is PCM data.
*/
struct irtWaveData {
/* Pointer points to PCM data. */
short* samples;
/* PCM data length. */
int sampleLength;
/* Size of each PCM sample. */
int bytesPerSample;
};
```
| **Parameter** | **Type** | **Comments** |
|-------------------|---------------------|---------------------------------------------------------------------------------------------|
| waveData | irtWaveData | Parameters for speech data, includes speech data buffer and definitions. |
| configurationFile | std::string | The configuration file for the ASR model. Including path. |
| inferenceResult | std::vector<char> | Text result of inference. The data format is char. |
| backendEngine | std::string | Specify the inference engine, OpenVINO, Pytorch, Onnx runtime and Tensorflow. |
| remoteServerInfo | struct serverParams | Server parameter. This is used in proxy mode. Please refer to 1.2 for detailed information. |
3. example
```
std::string wave_filename = "./models/how_are_you_doing.wav";
std::string config_filename = "./models/lspeech_s5_ext/FP32/speech_lib.cfg";
short* samples = nullptr;
int sampleLength = 0;
int bytesPerSample = 0;
unsigned int size = 0;
uint8_t* wave_data = ReadBinaryFile(wave_filename.c_str(), &size);
parseWaveFile(wave_data, size, samples, sampleLength, bytesPerSample);
std::vector<char> rh_utterance_transcription(1024 * 1024);
std::string param = "f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg";
struct serverParams urlInfo{"https://www.intel.cn/index.html", param};
struct irtWaveData sampleData{samples, sampleLength, bytesPerSample};
enum irtStatusCode res = irt_infer_from_speech(sampleData, config_filename,
rh_utterance_transcription, "OPENVINO", urlInfo);
```
#### 11.3.3.5 Common API {#11.3.3.5}
The usage of this API is the same as the common API in version 0.
1. API
```
/**
 * @brief Run inference from common model
 * @param tensorData Buffers for input/output tensors
 * @param modelFile The model file, include path
 * @param backendEngine Specify the inference engine, OPENVINO, PYTORCH,
 *        ONNXRT or TENSORFLOW.
 * @param remoteServerInfo Parameters to do inference on remote server
 * @return Status code of inference
 */
enum irtStatusCode irt_infer_from_common(struct irtFloatIOBuffers& tensorData,
                                         const std::string& modelFile,
                                         std::string backendEngine,
                                         struct serverParams& remoteServerInfo);
```
2. parameter
```
/**
 * @brief Buffers for running inference from a common model.
 * The structure is similar to irtImageIOBuffers, except that the shared
 * pointer type is float data, not cv::Mat.
 */
struct irtFloatIOBuffers {
    /* Pointer points to main input data. The inner shared pointer points to float data */
    std::vector<std::vector<ptrFloatVector>> *pMainInputs;
    /* Pointer points to the assistant input data. */
    std::vector<std::vector<float>> *pAdditionalInputs;
    /* Pointer points to the output buffer. The inner pointer points to the result inferenced by runtime API */
    std::vector<std::vector<float>*> *pInferenceResult;
};
```
| **Parameter** | **Type** | **Comments** |
|------------------|---------------------|---------------------------------------------------------------------------------------------|
| tensorData | irtFloatIOBuffers | Buffers for input/output tensors |
| modelFile | std::string | The model file, which includes the file path. |
| backendEngine | std::string | Specify the inference engine, OpenVINO, Pytorch, Onnx runtime and Tensorflow. |
| remoteServerInfo | struct serverParams | Server parameter. This is used in proxy mode. Please refer to 1.2 for detailed information. |
3. example
```
std::string encoder_model_file = "./models/text-spotting-0001-recognizer-encoder.xml";
std::vector<std::vector<float>> additionalInput;
std::vector<float> rawDetectionResult;
std::vector<std::vector<float>*> rawDetectionResults;
rawDetectionResults.push_back(&rawDetectionResult);

std::string param = "f=8&rsv_bp=1&rsv_idx=1&word=picture&tn=98633779_hao_pg";
struct serverParams urlInfo{"https://www.intel.cn/index.html", param};

std::vector<float> text_features;  // assumed to be filled with the input data
std::shared_ptr<std::vector<float>> encoder_frame_ptr =
    std::make_shared<std::vector<float>>(text_features);
std::vector<std::shared_ptr<std::vector<float>>> encoder_images;
encoder_images.push_back(encoder_frame_ptr);
std::vector<std::vector<ptrFloatVector>> mainInputs;
mainInputs.push_back(encoder_images);

struct irtFloatIOBuffers buffers{&mainInputs, &additionalInput, &rawDetectionResults};
enum irtStatusCode res = irt_infer_from_common(buffers, encoder_model_file, "OPENVINO",
                                               urlInfo);
```
### 11.3.4 Video pipeline management (construct) APIs {#11.3.4}
This set of APIs helps developers construct their own video pipelines and manage those pipelines over their life cycle.
The function below initializes the video pipeline environment. It should be called before calling any other API. Return value: 0 means success, non-zero means failure.
1) API
```
int ccai_stream_init();
```
The function below creates a video pipeline. pipeline_name is a string which should be supported by a video pipeline plugin, such as "launcher.object_detection". user_data is plugin defined, and should be supported by the plugin.
This function returns a pointer to a ccai_stream_pipeline, or NULL if the pipeline cannot be created.
1. API
```
struct ccai_stream_pipeline *ccai_stream_pipeline_create(const char* pipeline_name,
void *user_data);
```
2. Parameter
| **Parameters** | **Type** | **Comments** |
|----------------|----------------|-----------------------------------------|
| pipeline_name | const char * | pipeline name |
| user_data | void * | plugin defined, supported by the plugin |
The function below starts a video pipeline. The pipe should be the pointer returned by '*ccai_stream_pipeline_create*'. user_data is plugin defined, and should be supported by the plugin. Return value: 0 means success, non-zero means failure.
1. API
```
int ccai_stream_pipeline_start(struct ccai_stream_pipeline *pipe, void *user_data);
```
2. Parameter
| **Parameters** | **Type** | **Comments** |
|----------------|-------------------------------|------------------------------------------------------|
| pipe | struct ccai_stream_pipeline * | the pipeline returned by ccai_stream_pipeline_create |
| user_data | void * | plugin defined, supported by the plugin |
The function below stops a video pipeline. The pipe should be the pointer returned by '*ccai_stream_pipeline_create*'. user_data is plugin defined, and should be supported by the plugin. Return value: 0 means success, non-zero means failure.
1. API
```
int ccai_stream_pipeline_stop(struct ccai_stream_pipeline *pipe, void *user_data);
```
2. Parameter
| **Parameters** | **Type** | **Comments** |
|----------------|-------------------------------|------------------------------------------------------|
| pipe | struct ccai_stream_pipeline * | the pipeline returned by ccai_stream_pipeline_create |
| user_data | void * | plugin defined, supported by the plugin |
The function below removes a video pipeline. The pipe should be the pointer returned by '*ccai_stream_pipeline_create*'. user_data is plugin defined, and should be supported by the plugin. Return value: 0 means success, non-zero means failure.
1. API
```
int ccai_stream_pipeline_remove(struct ccai_stream_pipeline *pipe, void *user_data);
```
2. Parameter
| **Parameters** | **Type** | **Comments** |
|----------------|-------------------------------|------------------------------------------------------|
| pipe | struct ccai_stream_pipeline * | the pipeline returned by ccai_stream_pipeline_create |
| user_data | void * | plugin defined, supported by the plugin |
The function below reads data from a video pipeline. The pipe should be the one returned by the '*ccai_stream_pipeline_create*' API. The user_data is plugin defined, and should be supported by the plugin. Return value: 0 means success, non-zero means failure.
1. API
```
int ccai_stream_pipeline_read(struct ccai_stream_pipeline *pipe, void *user_data);
```
2. Parameter
| **Parameters** | **Type** | **Comments** |
|----------------|-------------------------------|------------------------------------------------------|
| pipe | struct ccai_stream_pipeline * | the pipeline returned by ccai_stream_pipeline_create |
| user_data | void * | plugin defined, supported by the plugin |
# 11.4 How to extend the video pipeline with the video pipeline manager {#11.4}
You can follow the steps below to implement a plugin that extends the video pipeline.
### 11.4.1 Construct the plugin {#11.4.1}
```
#include <ccai_stream_plugin.h>
#include <ccai_stream_utils.h>
#include <gst/gst.h>
static const char *pipe_name = "sample";
static const char *gst_pipeline_desc = "videotestsrc ! ximagesink";
/* 4. implement create/start/stop/remove function */
static int create_pipe(struct ccai_stream_pipeline_desc *desc, void *user_data)
{
if (desc == NULL)
return -1;
desc->private_data = gst_parse_launch(gst_pipeline_desc, NULL);
if (!desc->private_data)
return -1;
return 0;
}
static int start_pipe(struct ccai_stream_pipeline_desc *desc, void *user_data)
{
if (desc == NULL || desc->private_data == NULL)
return -1;
GstElement *gst_pipe = (GstElement *)desc->private_data;
ccai_gst_start_pipeline(gst_pipe);
return 0;
}
static int stop_pipe(struct ccai_stream_pipeline_desc *desc, void *user_data)
{
if (desc == NULL || desc->private_data == NULL)
return -1;
GstElement *gst_pipe = (GstElement *)desc->private_data;
if (gst_pipe == NULL)
return -1;
ccai_gst_stop_pipeline(gst_pipe);
return 0;
}
static int remove_pipe(struct ccai_stream_pipeline_desc *desc, void *user_data)
{
if (desc == NULL || desc->private_data == NULL)
return -1;
GstElement *gst_pipe = (GstElement *)desc->private_data;
if (gst_pipe) {
gst_object_unref(gst_pipe);
desc->private_data = NULL;
}
return 0;
}
/* 2. implement init/exit function */
static int sample_plugin_init()
{
struct ccai_stream_pipeline_desc *desc;
/* 3. new a ccai_stream_pipeline_desc */
if ((desc = g_try_new0(struct ccai_stream_pipeline_desc, 1)) == NULL)
return -1;
desc->name = pipe_name;
desc->create = create_pipe;
desc->start = start_pipe;
desc->stop = stop_pipe;
desc->remove = remove_pipe;
desc->get_gst_pipeline = NULL;
desc->private_data = NULL;
ccai_stream_add_pipeline(desc);
return 0;
}
static void sample_plugin_exit()
{
}
/* 1. define a plugin */
CCAI_STREAM_PLUGIN_DEFINE(sample, "1.0",
CCAI_STREAM_PLUGIN_PRIORITY_DEFAULT,
sample_plugin_init, sample_plugin_exit)
```
In the source code, you must call or implement the following functions:
1. CCAI_STREAM_PLUGIN_DEFINE
Call this function to define a plugin, the video pipeline manager will load this plugin according to the information provided by this definition.
2. Implement init/exit function
The video pipeline manager will call init when the plugin is loaded, and call exit when the plugin is unloaded.
3. Call ccai_stream_add_pipeline in the init function; ccai_stream_add_pipeline registers the pipeline supported by the plugin with the video pipeline manager.
4. Implement create/start/stop/remove function. When a client requests to start or stop a pipeline, the video pipeline manager will call those functions.
### 11.4.2 Build the plugin {#11.4.2}
```
$> gcc `pkg-config --cflags gstreamer-1.0` -g -O2 plugin_sample.c -o sample.so \
   `pkg-config --libs gstreamer-1.0` -shared -lccai_stream
```
### 11.4.3 Install the plugin to destination {#11.4.3}
```
$> sudo cp sample.so /usr/lib/ccai_stream/plugins/
```
### 11.4.4 Test your plugin {#11.4.4}
```
$> sv restart lighttpd
$> curl -H "Content-Type:application/json" -X POST \
   http://localhost:8080/cgi-bin/streaming -d '{"pipeline":"sample", "method":"start"}'
$> curl -H "Content-Type:application/json" -X POST \
   http://localhost:8080/cgi-bin/streaming -d '{"pipeline":"sample", "method":"stop"}'
```
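The same start/stop requests can also be issued from Python. This is a minimal sketch that mirrors the curl commands above and assumes the gateway is reachable on localhost:8080:
```
import requests

URL = "http://localhost:8080/cgi-bin/streaming"

# Start, then stop, the "sample" pipeline registered by the plugin.
for method in ("start", "stop"):
    r = requests.post(URL, json={"pipeline": "sample", "method": method})
    print(method, r.status_code, r.text)
```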
# 11.5 Smart Photo Search {#11.5}
CCAI smart photo search includes a service and a library that provide a set of APIs to support photo indexing and searching via AI-identified 'Tags'.
### 11.5.1 Components overview {#11.5.1}

### 11.5.2 Launch smart photo search service {#11.5.2}
By default, CCAI auto-starts the smart photo service; users only need to prepare the photo directory. The photo directory defaults to '/opt/intel/service_runtime/smartphoto', but you can modify the script '/opt/intel/service_runtime/service_runtime.sh' to set the photo directory to any directory.
### 11.5.3 Photo monitor {#11.5.3}
The photo monitor is a sample provided by CCAI. It monitors the photo directory, and if a file in the directory changes, the photo monitor calls the smart photo service RESTful API to notify the service.
Launch the monitor:
```
$> cd gateway-demo/smart-photo/photo-monitor/
$> ./monitor.py -d /opt/intel/service_runtime/smartphoto
```
### 11.5.4 Photo viewer {#11.5.4}
The photo viewer is a web app.
Launch the viewer:
```
$> cd gateway-demo/smart-photo/photo-viewer/
$> npm install
$> npm run serve
```
Open the URL in your browser as prompted by 'npm run serve'.
### 11.5.5 RESTful APIs {#11.5.5}
The photo viewer is a good example of how to use the smart photo service RESTful API.
- HTTP URL: for example, url = '<http://localhost:8080/cgi-bin/smartphoto>'
- Parameter: the parameter should be a JSON object.
The parameter must include a key named 'method'.
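For example, a client might call this API from Python as sketched below; the endpoint is the one shown above, and the response shapes follow the tables in this section:
```
import requests

URL = "http://localhost:8080/cgi-bin/smartphoto"

# Trigger a scan of the photo directory.
r = requests.post(URL, json={"method": "scan_start"})
print(r.json())   # e.g. {"result": 0} on success

# List the AI-identified classes once scanning has run.
r = requests.post(URL, json={"method": "list_class"})
print(r.json())   # e.g. [{"id": 1, "name": "tree"}, ...]
```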
1. start scan
a) request
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | ---------- | ------------------------- |
| method | string | scan_start | { "method": "scan_start"} |
b) response
| **Key** | **Type** | **Value** | **Example** | **Comments** |
| ------- | -------- | --------- | --------------- | ------------------------------------ |
| result | int | | { "result": 0 } | 0 means success, otherwise failure. |
2. stop scan
a) request
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | --------- | ------------------------- |
| method | string | scan_stop | { "method": "scan_stop" } |
b) response
| **Key** | **Type** | **Value** | **Example** | **Comments** |
| ------- | -------- | --------- | --------------- | ------------------------------------ |
| result | int | | { "result": 0 } | 0 means success, otherwise failure. |
3. list class
a) request
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | ---------- | ------------------------ |
| method | string | list_class | { "method": "list_class" } |
b) response is json object array.
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | --------- | --------------------------- |
| id | int | | { "id": 1, "name": "tree" } |
| name | string | | |
4. list person
a) request
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | ----------- | --------------------------- |
| method | string | list_person | { "method": "list_person" } |
b) response is json object array.
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | --------- | --------------------------------------------- |
| id | int | | { "id": 1, "name": "mantou", "count":3 } |
| name | string | | |
| count | int | | |
5. list all photo
a) request
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | -------------- | ----------------------------- |
| method | string | list_all_photo | { "method": "list_all_photo"} |
b) response is json object array.
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | --------- | ----------------------------- |
| id | int | | { "id": 1, "path": "a.jpeg" } |
| path | string | | |
6. list photo by class
a) request
| **Key** | **Type** | **Value** | **Example** |
|---------|----------|---------------------|---------------------------------------------------------|
| method | string | list_photo_by_class | { "method": "list_photo_by_class", "param": "tree" } |
| param | string | <class name> | |
b) response is json object array.
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | --------- | ----------------------------- |
| id | int | | { "id": 1, "path": "a.jpeg" } |
| path | string | | |
7. list photo by person
a) request
| **Key** | **Type** | **Value** | **Example** |
|---------|----------|---------------------|-------------------------------------------------------|
| method | string | list_photo_by_class | { "method": "list_photo_by_person", "param": "1" } |
| param | string | <person id> | |
b) response is json object array.
| **Key** | **Type** | **Value** | **Example** |
| ------- | -------- | --------- | ----------------------------- |
| id | int | | { "id": 1, "path": "a.jpeg" } |
| path | string | | |
8. add file
a) request
| **Key** | **Type** | **Value** | **Example** |
|---------|----------|---------------|------------------------------------------------|
| method | string | add_file | { "method": "add_file", "param": "a.jpeg" } |
| param | string | <file name> | |
b) response
| **Key** | **Type** | **Value** | **Example** | **Comments** |
| ------- | -------- | --------- | --------------- | ------------------------------------ |
| result | int | | { "result": 0 } | 0 means success, otherwise failure. |
9. delete file
a) request
| **Key** | **Type** | **Value** | **Example** |
|---------|----------|---------------|---------------------------------------------------|
| method | string | delete_file | { "method": "delete_file", "param": "a.jpeg" } |
| param | string | <file name> | |
b) response
| **Key** | **Type** | **Value** | **Example** | **Comments** |
| ------- | -------- | --------- | --------------- | ------------------------------------ |
| result | int | | { "result": 0 } | 0 means success, otherwise failure. |
10. move file
a) request
| **Key** | **Type** | **Value** | **Example** |
|---------|-------------|--------------------|--------------------------------------------------------------------------------|
| method | string | move_file | { "method": "move_file", "param": { "src": "a.jpeg", "dest": "b.jpeg" } } |
| param | json object | <src file name> | |
| | | <dest file name> | |
b) response
| **Key** | **Type** | **Value** | **Example** | **Comments** |
| ------- | -------- | --------- | --------------- | ------------------------------------ |
| result | int | | { "result": 0 } | 0 means success, otherwise failure. |
[Source file: airnode-ui-client/design/README.md (EnormousCloud/airnode, MIT license)]

This folder contains initial design mocks of the Airnode UI management tool.
[Source file: README.md (imzc/depsync, MIT license)]

<p align="left">
<a href="https://www.npmjs.com/package/depsync"><img src="https://img.shields.io/npm/v/depsync.svg" alt="Version"></a>
<a href="https://github.com/domchen/depsync/blob/master/LICENSE"><img src="https://img.shields.io/npm/l/depsync.svg" alt="License"></a>
</p>
# Introduction
Depsync is a command line tool for automatically synchronizing the dependencies of a project according to its DEPS configuration file.
# Installation
`npm install depsync -g`
## Usage
Run the following command in the directory with a DEPS file:
```
depsync [platform]
```
For example, if you want to synchronize the mac platform, run:
```
depsync mac
```
If you don't pass any platform parameter, it will automatically choose the host platform as the target platform, so the result of running `depsync` on macOS is the same as running `depsync mac`.
The available platform names are defined in the DEPS file; you can also define any other platform names you want, such as `ios` or `android`, but only `mac`, `win` and `linux` can be chosen automatically.
Here is an example of DEPS file:
```json
{
"version": "1.2.2",
"vars": {
"GIT_DOMAIN": "github.com",
"SKIA_ROOT": "https://github.com/domchen/depsync/releases/download/1.0.1",
"V8_ROOT": "https://github.com/domchen/depsync/releases/download/1.0.2"
},
"repos": {
"common": [
{
"url": "https://${GIT_DOMAIN}/webmproject/libwebp.git",
"commit": "1fe3162541ab2f5ce69aca2e2b593fab60520d34",
"dir": "third_party/libwebp"
},
{
"url": "https://${GIT_DOMAIN}/libjpeg-turbo/libjpeg-turbo.git",
"commit": "129f0cb76346ceede8f4d8d87dea8acb0809056c",
"dir": "third_party/libjpeg-turbo"
}
]
},
"files": {
"common": [
{
"url": "${SKIA_ROOT}/include.zip",
"dir": "third_party/skia",
"unzip": true
},
{
"url": "${V8_ROOT}/include.zip",
"dir": "third_party/v8",
"unzip": "true"
}
],
"mac": [
{
"url": "${SKIA_ROOT}/darwin-x64.zip",
"dir": "third_party/skia",
"unzip": true
},
{
"url": "${V8_ROOT}/darwin-x64.zip",
"multipart": [
".001",
".002",
".003"
],
"dir": "third_party/v8",
"unzip": true
}
],
"win": [
{
"url": "${SKIA_ROOT}/win-ia32.zip",
"dir": "third_party/skia",
"unzip": true
},
{
"url": "${V8_ROOT}/win-ia32.zip",
"dir": "third_party/v8",
"unzip": true
}
]
},
"actions": {
"common": [
{
"command": "depsync --clean",
"dir": "third_party"
}
]
}
}
```
[Source file: examples/exchange-files-in-browser/README.md (My9Bot/js-ipfs, MIT license)]

# Tutorial - Transfer files between the browser and other IPFS nodes
> Welcome! This tutorial will help you exchange files between browser nodes and go-ipfs nodes.
Caveat: js-ipfs currently doesn't support DHT peer discovery, so the peer from which you are fetching data should be within reach (local or on a public IP) of the browser node.
That said, throughout this tutorial we will explain how to circumvent these caveats, and once they are fixed we will update the tutorial as well.
## Application diagram
The goal of this tutorial is to create an application with an IPFS node that dials other instances of it using WebRTC, and at the same time dials and transfers files from a desktop IPFS node using WebSockets as the transport.
```
┌──────────────┐ ┌──────────────┐
│ Browser │ libp2p(WebRTC) │ Browser │
│ │◀──────────────▶│ │
└──────────────┘ └──────────────┘
▲ ▲
│WebSockets WebSockets│
│ ┌──────────────┐ │
│ │ Desktop │ │
└───────▶│ Terminal │◀─────────┘
└──────────────┘
```
## Check out the final state
In the end, you should get an app running, something like this:

## Step-by-step instructions
Here's what we are going to be doing, today:
- 1. Set up: install a go-ipfs node on your machine
- 2. Make your daemons listen on WebSockets
- 3. Start the WebApp
- 4. Dial to a node using WebSockets (your Desktop ones)
- 5. Transfer files between all of your nodes, have fun!
Let's go.
### 1. Set up
You'll need to have an implementation of IPFS running on your machine. Currently, this means either go-ipfs or js-ipfs.
Installing go-ipfs can be done by installing the binary [here](https://ipfs.io/ipns/dist.ipfs.io/#go-ipfs). Alternatively, you could follow the instructions in the README at [ipfs/go-ipfs](https://github.com/ipfs/go-ipfs).
Installing js-ipfs requires you to have node and [npm](https://www.npmjs.com). Then, you simply run:
```sh
> npm install --global ipfs
...
> jsipfs --help
Commands:
...
```
This will alias `jsipfs` on your machine; this is to avoid issues with `go-ipfs` being called `ipfs`.
At this point, you have either js-ipfs or go-ipfs running. Now, initialize it:
```sh
> ipfs init
# or
> jsipfs init
```
This will set up your IPFS repo in your home directory.
### 2. Make your daemons listen on WebSockets
At this point, you need to edit your `config` file, the one you just set up with `{js}ipfs init`. It should be in either `~/.jsipfs/config` or `~/.ipfs/config`, depending on whether you're using JS or Go.
Note: js-ipfs sets up a websocket listener by default; if you are just using js-ipfs you can skip this step.
Since websockets support is currently not on by default, you'll need to add a WebSockets address manually. Look into your config file and find the `Addresses` section:
```json
"Addresses": {
"Swarm": [
"/ip4/0.0.0.0/tcp/4002"
],
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/9090"
}
```
Add the following entry to your `Swarm` array: `/ip4/127.0.0.1/tcp/9999/ws`. Now, it should look like this:
```json
"Addresses": {
"Swarm": [
"/ip4/0.0.0.0/tcp/4002",
"/ip4/127.0.0.1/tcp/9999/ws"
],
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/9090"
}
```
Now it should listen on Websockets. We're ready to start the daemon.
```sh
> ipfs daemon
```
(Again, either `jsipfs` or `ipfs` works. I'll stop repeating this from here on out.)
You should see the Websocket address in the output:
```sh
Initializing daemon...
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/127.0.0.1/tcp/9999/ws
Swarm listening on /ip4/192.168.10.38/tcp/4001
Swarm listening on /ip4/192.168.10.38/tcp/9999/ws
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/0.0.0.0/tcp/8080
Daemon is ready
```
It's there on line 5; see the `/ws`? Good, that means it is listening.
### 3. Start the WebApp project
Now, you'll need to make sure you are in `js-ipfs/examples/exchange-files-in-browser`. You'll see a `package.json`: this manifest holds the information for which packages you'll need to install to run the webapp. Let's install them, and then start the project:
```sh
> npm install
> npm start
```
You should see this text:
```sh
Starting up http-server, serving public
Available on:
http://127.0.0.1:12345
http://192.168.1.24:12345
Hit CTRL-C to stop the server
```
Go to http://127.0.0.1:12345 in your browser; you're now in the webapp, if all went well.
### 4. Dial to a node using WebSockets (your Desktop ones)
In your terminal, run `ipfs id` to find the WebSockets address it is listening on. It should look like this: `/ip4/127.0.0.1/tcp/4003/ws/ipfs/<your peer id>`. Important: your node must be running in order to have listeners; to do so, run `ipfs daemon` in another tab of your terminal.


### 5. Transfer files between all of your nodes, have fun!
Now you can drag and drop files in the browser or add them through the CLI with `ipfs add <file>`, and with the fetch-file box you can retrieve the file into the browser or other browser tabs!


[Source file: src/in/2017-03/10/05.md (PrJared/sabbath-school-lessons, MIT license)]

---
title: Hagar and Mount Sinai
date: 30/08/2017
---
_Gal. 4:21-31_
`What kind of covenant relationship did God want to build with His people at Sinai? How is it similar to God's promise to Abraham? Ex. 6:2-8, 19:3-6; Deut. 32:10-12.`
God wanted to establish the same covenant relationship with the Israelites at Sinai that He had established with Abraham. In fact, there are similarities between God's words to Abraham in Genesis 12:1-3 and His words to Moses in Exodus 19. In both cases God emphasizes what He will do for His people. He does not ask Israel to promise to do anything to earn His blessings; rather, they are to obey in response to those blessings. The Hebrew word translated "obey" in Exodus 19:5 literally means "to listen." God's word does not imply righteousness by works. Rather, He wanted the Israelites to have the same faith that marked Abraham's response to His promises.
`If the covenant relationship God offered Israel at Sinai was similar to the one given to Abraham, why does Paul identify Mount Sinai with Hagar's negative experience? Ex. 19:7-25; Heb. 8:6, 7.`
The covenant at Sinai was meant to show the sinfulness of humanity and the abundant grace of God's salvation, displayed in the sanctuary services. The problem with the Sinai covenant was not on God's side but in the failure of human promises (Heb. 8:6). Instead of welcoming God's promises in humility and faith, the Israelites answered with self-confidence: "All that the LORD has spoken we will do" (Ex. 19:8). After living as slaves in Egypt for more than four hundred years, they had no true concept of God's majesty or of the extent of their own sinfulness. In the same way that Abraham and Sarah tried to help God fulfill His promises, the Israelites tried to turn God's covenant of grace into a covenant of works. Hagar symbolizes Sinai, in that both reveal the human effort to seek salvation by works.
Paul is not saying that the law given at Sinai was evil or abolished. His concern is with the legalistic Galatians' misunderstanding of the 'law.' "Instead of serving to convince them of the impossibility of pleasing God by law-keeping, the law fostered in them a deep-seated determination to rely on their own ability to please God. Thus the law did not serve the purposes of grace in leading the Jews to Christ. Instead, it shut them away from Christ." - O. Palmer Robertson, The Christ of the Covenants (Phillipsburg, NJ: Presbyterian and Reformed Publishing Co., 1980), p. 181.
[Source file: README.md (zkalan/tencent-playground, Apache-2.0 license)]

# tencent-playground
Tencent playground
[Source file: includes/xplat-getting-set-up.md (OpenLocalizationTestOrg/azure-docs-pr15_pl-PL, CC-BY-3.0/CC-BY-4.0/MIT licenses)]

<properties services="virtual-machines" title="Setting up Azure CLI for service management" authors="squillace" solutions="" manager="timlt" editor="tysonn" />
<tags
ms.service="virtual-machine"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="linux"
ms.workload="infrastructure"
ms.date="04/13/2015"
ms.author="rasquill" />
## <a name="using-azure-cli"></a>Za pomocą interfejsu wiersza polecenia Azure
Poniższe czynności ułatwiają za pomocą interfejsu wiersza polecenia Azure łatwo najnowszej wersji i pisane z wielkiej litery subskrypcji. Jeśli musisz zainstalować polecenie Azure i najpierw połączyć się z kontem, zobacz [interfejs wiersza polecenia Azure (polecenie Azure)](xplat-cli-install.md).
### <a name="step-1-update-azure-cli-version"></a>Krok 1: Polecenie Azure aktualizacji wersji
Aby użyć polecenie Azure konieczne poleceń w trybie zarządzania usługi, należy użyć najnowszej wersji, jeśli to możliwe. Aby sprawdzić swoją wersję, wpisz `azure --version`. Powinien zostać wyświetlony mniej więcej tak:
$ azure --version
0.8.17 (node: 0.10.25)
If you want to update your Azure CLI version, see [the Azure CLI](https://github.com/Azure/azure-xplat-cli).
### <a name="step-2-set-the-azure-account-and-subscription"></a>Step 2: Set the Azure account and subscription
After connecting the Azure CLI to the account you want to use, you may have more than one subscription. If you do, review the subscriptions available to your account by typing `azure account list`, then select the subscription you want to use by typing `azure account set <subscription id or name> true`, where _subscription id or name_ is the subscription ID or subscription name you want to work with in the current session. You should see something like the following:
$ azure account set "Visual Studio Ultimate with MSDN" true
info: Executing command account set
info: Setting subscription to "Visual Studio Ultimate with MSDN" with id "0e220bf6-5caa-4e9f-8383-51f16b6c109f".
info: Changes saved
info: account set command OK
> [AZURE.NOTE] If you don't have an Azure account yet but you have an MSDN subscription, you can get free Azure credits by activating your [MSDN subscriber benefits here](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/), or by using a free account. Either will work for Azure access.
[Source file: README.md (sjkelly/ParametricPolyhedra.jl, MIT license)]

# ParametricPolyhedra
[](https://travis-ci.org/sjkelly/ParametricPolyhedra.jl)
[Source file: docs/error-messages/compiler-errors-2/compiler-error-c3657.md (LouisJustinTALLOT/cpp-docs.fr-fr, CC-BY-4.0/MIT licenses)]

---
description: 'Learn more about: Compiler Error C3657'
title: Compiler Error C3657
ms.date: 11/04/2016
f1_keywords:
- C3657
helpviewer_keywords:
- C3657
ms.assetid: 89a28a18-4c17-43a1-bda6-deb52c32d203
ms.openlocfilehash: 6e5cf5dd0b3739acdd703e5ef11d3eb553fa9e90
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 12/11/2020
ms.locfileid: "97134418"
---
# <a name="compiler-error-c3657"></a>Erreur du compilateur C3657
les destructeurs ne peuvent pas substituer explicitement ou être substitués explicitement
Les destructeurs ou finaliseurs ne peuvent pas être substitués explicitement. Pour plus d’informations, consultez [substitutions explicites](../../extensions/explicit-overrides-cpp-component-extensions.md).
## <a name="example"></a>Exemple
L’exemple suivant génère l’C3657.
```cpp
// C3657.cpp
// compile with: /clr
public ref struct I {
virtual ~I() { }
virtual void a();
};
public ref struct D : I {
virtual ~D() = I::~I {} // C3657
virtual void a() = I::a {} // OK
};
```
[Source file: Code/Line Plot/README.md (Tarun-Kamboj/Data_Visualization_with_Python, MIT license)]

# Line Plots

## Dependencies


## Introduction
A `line chart` (also called a `line plot` or `line graph`) is a type of chart that displays information as a series of data points called 'markers' connected by straight line segments. It is a basic chart type common in many fields. It is similar to a scatter plot, except that the measurement points are ordered (typically by their x-axis value) and joined with straight line segments. A line chart is often used to visualize a trend in data over intervals of time (a time series), so the line is often drawn chronologically; in those cases it is known as a run chart.
The [Notebook here](Notebook.ipynb) contains the code of line plots like the one shown below.

## Thanks for Reading :)
[Source file: README.md (redhat-cip/ansible-role-openstack-rally, Apache-2.0 license)]

# ansible-role-rally
An Ansible role to install, configure, and run the Rally OpenStack test suite.
## Role Variables
This is the list of role variables:
* `rally_result_filename`: Name of the generated file. Defaults to `rally.xml`.
* `rally_scenario_filepath`: The path to the Rally scenario file. Defaults to `/home/stack/rally-tasks.yml`.
* `rally_file_location`: (optional) URL of the Rally scenario file.
## Dependencies
None.
## Example Playbook
The simplest way to use this role:
```
- name: Run the OpenStack Rally test suite
hosts: undercloud
vars:
rally_file_location: https://example.com/path/to/rally-scenarios.yml
roles:
- rally
```
## License
Apache 2.0
## Author Information
Distributed-CI <[email protected]>
[Source file: docs/automodels/AutoML_conc_poll_class/5_Default_NeuralNetwork/README.md (giuliasantarsieri/open_ML, MIT license)]

# Summary of 5_Default_NeuralNetwork
[<< Go back](../README.md)
## Neural Network
- **dense_1_size**: 32
- **dense_2_size**: 16
- **learning_rate**: 0.05
- **num_class**: 5
- **explain_level**: 2
## Validation
- **validation_type**: split
- **train_ratio**: 0.75
- **shuffle**: True
- **stratify**: True
## Optimized metric
logloss
## Training time
0.3 seconds
### Metric details
| | 0 | 1 | 2 | 3 | 4 | accuracy | macro avg | weighted avg | logloss |
|:----------|----------:|----------:|----------:|----------:|----------:|-----------:|------------:|---------------:|----------:|
| precision | 0.886364 | 0.5 | 0.714286 | 0.522727 | 0.5 | 0.628571 | 0.624675 | 0.643692 | 0.874461 |
| recall | 0.829787 | 0.42 | 0.517241 | 0.676471 | 0.8 | 0.628571 | 0.6487 | 0.628571 | 0.874461 |
| f1-score | 0.857143 | 0.456522 | 0.6 | 0.589744 | 0.615385 | 0.628571 | 0.623759 | 0.627393 | 0.874461 |
| support | 47 | 50 | 29 | 34 | 15 | 0.628571 | 175 | 175 | 0.874461 |
## Confusion matrix
| | Predicted as 0 | Predicted as 1 | Predicted as 2 | Predicted as 3 | Predicted as 4 |
|:-------------|-----------------:|-----------------:|-----------------:|-----------------:|-----------------:|
| Labeled as 0 | 39 | 3 | 0 | 5 | 0 |
| Labeled as 1 | 1 | 21 | 2 | 14 | 12 |
| Labeled as 2 | 3 | 9 | 15 | 2 | 0 |
| Labeled as 3 | 0 | 7 | 4 | 23 | 0 |
| Labeled as 4 | 1 | 2 | 0 | 0 | 12 |
## Learning curves

## Permutation-based Importance

[<< Go back](../README.md)
[Source file: docs/chords-core/chords-core/com.chrynan.chords.model/-string-label/label.md (chRyNaN/chords, Apache-2.0 license)]

//[chords-core](../../../index.md)/[com.chrynan.chords.model](../index.md)/[StringLabel](index.md)/[label](label.md)
# label
[common]\
val [label](label.md): [String](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-string/index.html)? = null
[Source file: README.md (hankkkwu/SDCND-P7-Path_Planning, MIT license)]

# CarND-Path-Planning-Project
Self-Driving Car Engineer Nanodegree Program
## Goals
In this project the goal is to safely navigate around a virtual highway with other traffic that is driving +-10 MPH of the 50 MPH speed limit. I'll use the car's localization, sensor fusion and map data to build a path planner. The car should try to go as close as possible to the 50 MPH speed limit, which means passing slower traffic when possible, note that other cars will try to change lanes too. The car should avoid hitting other cars at all cost as well as driving inside of the marked road lanes at all times, unless going from one lane to another. The car should be able to make one complete loop around the 6946m highway. Since the car is trying to go 50 MPH, it should take a little over 5 minutes to complete 1 loop. Also the car should not experience total acceleration over 10 m/s^2 and jerk that is greater than 10 m/s^3.
## [Rubric points](https://review.udacity.com/#!/rubrics/1971/view)
### Compilation
#### The code compiles correctly.
My code compiles without errors with `cmake` and `make`.
### Valid Trajectories
#### Here is the result video:
[](https://www.youtube.com/watch?v=lQIFHnf9xug "Highway Driving project")
#### The car is able to drive at least 4.32 miles without incident
The car is able to drive 20 miles without incident.

#### The car drives according to the speed limit.
The car doesn't drive faster than the speed limit (50 mph), and it only drives much slower than the speed limit when obstructed by traffic.
#### Max Acceleration and Jerk are not Exceeded.
The car does not exceed a total acceleration of 10 m/s^2 and a jerk of 10 m/s^3.
#### Car does not have collisions.
The car is able to avoid collisions on the road.
#### The car stays in its lane, except for the time between changing lanes.
The car is able to stay in its lane, except for changing lanes.
#### The car is able to change lanes
The car is able to smoothly change lanes when a slower moving car is in front of it and an adjacent lane is clear of other traffic.
## Reflection
The code for the path planning algorithm is in [src/main.cpp](https://github.com/hankkkwu/SDCND-P7-Path_Planning/blob/master/src/main.cpp), from line 107 to line 375; comments are provided to improve readability.
Basically, the code consists of three parts:
### Prediction
In this part of the code (lines 107-219), I use the data from sensor fusion (velocity, plus s and d in Frenet coordinates) to predict where other cars will be in about 1 second, and use this information to handle the following situations:
1. Is there a slow moving car in front of us within 30 meters?
First, I check whether the car is in the same lane as our car, then calculate the distance between us after 1 second. If the distance is less than 30 meters, I raise a "too close" flag and keep track of that car's s value and speed.
2. Is it safe to change lanes?
If a car is not in the same lane as ours, I check which lane it is in and, using our car's s value as the origin, set a range from -10 m to 30 m; if any other lane has a car within this range, that lane is not safe to change into. I also check the distance between the car in the other lane and the car in front of us; if that distance is less than 10 meters, that lane is also considered unsafe to change into.
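The check in step 1 can be condensed into a helper like the following (an illustrative sketch — the variable names and the 4 m lane width are assumptions, not copied verbatim from src/main.cpp):
```cpp
#include <cmath>
#include <vector>
// Returns true when a car in our lane is projected to be within 30 m ahead.
bool carAheadWithin30m(const std::vector<std::vector<double>>& sensor_fusion,
                       int lane, double car_s, int prev_size,
                       double& front_car_speed) {
  for (const auto& other : sensor_fusion) {
    double d = other[6];                               // lateral Frenet coordinate
    if (d > 4.0 * lane && d < 4.0 * (lane + 1)) {      // same lane as the ego car
      double speed = std::sqrt(other[3] * other[3] + other[4] * other[4]);
      double s = other[5] + prev_size * 0.02 * speed;  // project ~1 s into the future
      if (s > car_s && s - car_s < 30.0) {
        front_car_speed = speed;                       // keep track of its speed
        return true;                                   // raise the "too close" flag
      }
    }
  }
  return false;
}
```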
### Behavior Planning
Based on the predictions about the other cars, we need to decide what kind of behavior our car should plan to do (code from line 217 to line 267):
1. When there is a slow-moving car in front of us within 30 meters, should we slow down, change lanes to the left, or change lanes to the right?
If there is a slow-moving car in front of us within 30 meters, I first check whether the left lane is safe to change into. If it is, I then check whether there is any car 80 meters ahead in the left lane and in the right lane, because even if the left lane is safe to change into, we might still be obstructed by traffic after changing lanes. If the left lane is safe and there is no car 80 meters ahead in it, I change to the left lane. If there is a car 80 meters ahead in the left lane but none in the right lane, I change to the right lane instead. The same logic applies when checking whether the right lane is safe to change into. Finally, if no lane change is available, I slow down to the front car's speed.
2. When to speed up?
When there is no car in front of us within 30 meters and our speed is slower than the speed limit (50 mph), I speed up the car.
### Generate Trajectory
This part of the code (lines 269-373) uses the speed and lane output from behavior planning, along with the car's x, y, yaw, and the previous path points, to calculate the trajectory.
To generate trajectories, I use a spline instead of a quintic polynomial. Before using the spline, I create a list of widely spaced waypoints (5 in this project). The first two waypoints are the last two points of the previous trajectory (or the car's position if fewer than 2 points remain), and the last three waypoints are evenly spaced 30 m apart ahead of the starting point (the second waypoint). To simplify the computation, I use the starting point as the origin (0, 0), shift all the waypoints, and rotate the car's reference angle to 0 degrees. Finally, I feed these 5 waypoints into the spline to create the points that fill my path planner. The path planner always holds 50 points: any points left over from the previous path are added first, and points created from the spline fill the rest.
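The core of this step looks roughly like the sketch below (condensed, with illustrative names; it assumes the anchor waypoints were already shifted and rotated into the car's local frame):
```cpp
#include <cmath>
#include <utility>
#include <vector>
#include "spline.h"   // tk::spline — the single-header spline library used here
// Sample n_points along the fitted spline so their spacing yields ref_vel_mph.
std::vector<std::pair<double, double>> samplePath(const std::vector<double>& ptsx,
                                                  const std::vector<double>& ptsy,
                                                  double ref_vel_mph, int n_points) {
  tk::spline s;
  s.set_points(ptsx, ptsy);                        // fit the 5 anchor waypoints
  double target_x = 30.0;                          // plan 30 m ahead in local x
  double target_dist = std::sqrt(target_x * target_x + s(target_x) * s(target_x));
  double N = target_dist / (0.02 * ref_vel_mph / 2.24);  // 0.02 s per point, mph -> m/s
  std::vector<std::pair<double, double>> path;
  for (int i = 1; i <= n_points; ++i) {
    double x = i * target_x / N;                   // even spacing sets the speed
    path.emplace_back(x, s(x));                    // still in the car's local frame
  }
  return path;
}
```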
## Running the Code
### Simulator
You can download the Term3 Simulator which contains the Path Planning Project from the [releases tab](https://github.com/udacity/self-driving-car-sim/releases/tag/T3_v1.2).
To run the simulator on Mac/Linux, first make the binary file executable with the following command:
```shell
sudo chmod u+x {simulator_file_name}
```
### Basic Build Instructions
1. Clone this repo.
2. Make a build directory: `mkdir build && cd build`
3. Compile: `cmake .. && make`
4. Run it: `./path_planning`.
### Dependencies
* cmake >= 3.5
* All OSes: [click here for installation instructions](https://cmake.org/install/)
* make >= 4.1
* Linux: make is installed by default on most Linux distros
* Mac: [install Xcode command line tools to get make](https://developer.apple.com/xcode/features/)
* Windows: [Click here for installation instructions](http://gnuwin32.sourceforge.net/packages/make.htm)
* gcc/g++ >= 5.4
* Linux: gcc / g++ is installed by default on most Linux distros
  * Mac: same deal as make - [install Xcode command line tools](https://developer.apple.com/xcode/features/)
* Windows: recommend using [MinGW](http://www.mingw.org/)
* [uWebSockets](https://github.com/uWebSockets/uWebSockets)
* Run either `install-mac.sh` or `install-ubuntu.sh`.
* If you install from source, checkout to commit `e94b6e1`, i.e.
```
git clone https://github.com/uWebSockets/uWebSockets
cd uWebSockets
git checkout e94b6e1
```
## Details
1. The car uses a perfect controller and will visit every (x,y) point it receives in the list every .02 seconds. The units for the (x,y) points are in meters, and the spacing of the points determines the speed of the car. The vector going from a point to the next point in the list dictates the angle of the car. Acceleration in both the tangential and normal directions is measured along with the jerk, the rate of change of total acceleration. The (x,y) point paths that the planner receives should not have a total acceleration that goes over 10 m/s^2; also, the jerk should not go over 50 m/s^3. (NOTE: As this is BETA, these requirements might change. Also, currently jerk is measured over a .02 second interval; it would probably be better to average total acceleration over 1 second and measure jerk from that.)
2. There will be some latency between the simulator running and the path planner returning a path; with optimized code it's usually not very long, maybe just 1-3 time steps. During this delay the simulator will continue using the points it was last given; because of this, it's a good idea to store the last points you have used so you can have a smooth transition. previous_path_x and previous_path_y can be helpful for this transition since they show the last points given to the simulator controller with the processed points already removed. You would either return a path that extends this previous path or make sure to create a new path that has a smooth transition with this last path.
| 74.288136 | 894 | 0.760666 | eng_Latn | 0.999338 |
503651d2f87c6276f0b7743d1382a55463e53096 | 128 | md | Markdown | chrome-extension/readme.md | ozoid/Asterisk_ClickToCall | 89c85875b3ad6fbe2e4804fb147b67fc110958bc | [
"MIT"
] | 7 | 2020-02-10T02:54:21.000Z | 2021-04-15T13:47:24.000Z | chrome-extension/readme.md | ozoid/Asterisk_ClickToCall | 89c85875b3ad6fbe2e4804fb147b67fc110958bc | [
"MIT"
] | null | null | null | chrome-extension/readme.md | ozoid/Asterisk_ClickToCall | 89c85875b3ad6fbe2e4804fb147b67fc110958bc | [
"MIT"
] | 2 | 2020-07-02T10:24:34.000Z | 2020-07-14T10:49:04.000Z | A Chrome Extension to replace telephone numbers (UK) on web pages with a link to auto-dial the number through Asterisk/FreePBX.
| 64 | 127 | 0.804688 | eng_Latn | 0.98786 |
50379b3c3d4b22b4980837a56e4769592e4c45a0 | 1,573 | md | Markdown | README.md | vineethguna/docker-code-executor | 9deadd39227b2014341a4e8e71ddf4930a035507 | [
"Apache-2.0"
] | 7 | 2016-05-26T13:49:33.000Z | 2022-03-12T13:46:17.000Z | README.md | vineethguna/docker-code-executor | 9deadd39227b2014341a4e8e71ddf4930a035507 | [
"Apache-2.0"
] | 5 | 2016-05-25T09:44:50.000Z | 2016-05-26T14:00:45.000Z | README.md | vineethguna/docker-code-executor | 9deadd39227b2014341a4e8e71ddf4930a035507 | [
"Apache-2.0"
] | 1 | 2017-02-17T15:05:39.000Z | 2017-02-17T15:05:39.000Z | Docker Code Executor
====================
A code execution engine based on Docker, on which you can safely run **untrusted code** without the fear of your server
going down.
It is powered by [kubernetes](http://kubernetes.io/), a container cluster manager by Google.
Currently it supports the following languages:
* C
* Python
* Ruby
Installation
============
This includes two microservices:
* UI
* Execution Server
First you need to set up the execution server. Please follow the steps below:
* Go to `executor` folder
* Run `npm install` - This is a one time step
* Start the server using `npm start`
Then you need to start the UI service. Please follow the steps below:
* Go to `ui` folder
* Run `npm install` - This is a one time step
* Set the following environment variables
* `EXECUTOR_SERVICE_HOST` to the IP address of execution server, if it is on the
same machine it will be `localhost`
* `EXECUTOR_SERVICE_PORT` to the port number of execution server, by default it is `3000`
* Start the server using `npm start`
**Note:** The above setup does not involve launching Docker containers to execute untrusted code, so use at your own risk.
To use the containerized version of the above, please refer to the Docker Images section below.
Docker Images
=============
The Docker images for both microservices are available in my Docker Hub account and are public:
Execution Server - https://hub.docker.com/r/vineethguna/executor/
UI - https://hub.docker.com/r/vineethguna/executor-ui/
License
=======
Apache Copyright (c) [Vineeth Guna](vineethguna.wordpress.com) | 30.843137 | 122 | 0.738716 | eng_Latn | 0.993971 |
503828d5366cbb1e63af5947ed3929e15b3430b1 | 407 | md | Markdown | Example/Pods/AlExtensions/README.md | unalCe/ALFormInput | 678a194fa8420e1dabab69ca011fae08e9e85b96 | [
"MIT"
] | 4 | 2020-04-13T16:03:54.000Z | 2020-04-17T09:09:01.000Z | Example/Pods/AlExtensions/README.md | unalCe/ALFormInput | 678a194fa8420e1dabab69ca011fae08e9e85b96 | [
"MIT"
] | 4 | 2020-04-13T15:48:47.000Z | 2020-04-20T11:02:50.000Z | Example/Pods/AlExtensions/README.md | unalCe/ALFormInput | 678a194fa8420e1dabab69ca011fae08e9e85b96 | [
"MIT"
] | 2 | 2020-04-13T15:34:42.000Z | 2020-05-28T09:05:24.000Z | # AL-Extensions :electric_plug:
[](http://swift.org)
Useful extensions for UIKit in Swift language
### Motivation
We want to gather handy extensions in one place.
### Contributions
We are open to any kind of extension related to Swift. Just make sure the extension you write does not already exist and is handy.
Make sure to open a pull request.
| 31.307692 | 124 | 0.756757 | eng_Latn | 0.98956 |
5038ced415f325640aaf0572acdc61b23cb33dd4 | 106 | md | Markdown | README.md | ssd-tutorials/slideshow-jquery-cycle | 961a8852816c889cf5147f39a590da752dabac16 | [
"MIT"
] | null | null | null | README.md | ssd-tutorials/slideshow-jquery-cycle | 961a8852816c889cf5147f39a590da752dabac16 | [
"MIT"
] | null | null | null | README.md | ssd-tutorials/slideshow-jquery-cycle | 961a8852816c889cf5147f39a590da752dabac16 | [
"MIT"
] | null | null | null | # Slideshow with jQuery Cycle plugin
Exercise files for the course **Slideshow with jQuery Cycle plugin**
| 35.333333 | 68 | 0.801887 | eng_Latn | 0.988447 |
5038d497d82321e0907534d0f77b6926459ddddf | 2,906 | md | Markdown | sdk-api-src/content/strmif/nf-strmif-ivmrwindowlesscontrol-setcolorkey.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/strmif/nf-strmif-ivmrwindowlesscontrol-setcolorkey.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/strmif/nf-strmif-ivmrwindowlesscontrol-setcolorkey.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:strmif.IVMRWindowlessControl.SetColorKey
title: IVMRWindowlessControl::SetColorKey (strmif.h)
description: The SetColorKey method sets the source color key value that the VMR should use.
helpviewer_keywords: ["IVMRWindowlessControl interface [DirectShow]","SetColorKey method","IVMRWindowlessControl.SetColorKey","IVMRWindowlessControl::SetColorKey","IVMRWindowlessControlSetColorKey","SetColorKey","SetColorKey method [DirectShow]","SetColorKey method [DirectShow]","IVMRWindowlessControl interface","dshow.ivmrwindowlesscontrol_setcolorkey","strmif/IVMRWindowlessControl::SetColorKey"]
old-location: dshow\ivmrwindowlesscontrol_setcolorkey.htm
tech.root: dshow
ms.assetid: 9facf4af-ed56-4a94-b351-35ddd7f63e6e
ms.date: 12/05/2018
ms.keywords: IVMRWindowlessControl interface [DirectShow],SetColorKey method, IVMRWindowlessControl.SetColorKey, IVMRWindowlessControl::SetColorKey, IVMRWindowlessControlSetColorKey, SetColorKey, SetColorKey method [DirectShow], SetColorKey method [DirectShow],IVMRWindowlessControl interface, dshow.ivmrwindowlesscontrol_setcolorkey, strmif/IVMRWindowlessControl::SetColorKey
req.header: strmif.h
req.include-header: Dshow.h
req.target-type: Windows
req.target-min-winverclnt: Windows XP with SP1 [desktop apps only]
req.target-min-winversvr: Windows Server 2003 [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Strmiids.lib
req.dll:
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- IVMRWindowlessControl::SetColorKey
- strmif/IVMRWindowlessControl::SetColorKey
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- Strmiids.lib
- Strmiids.dll
api_name:
- IVMRWindowlessControl.SetColorKey
---
# IVMRWindowlessControl::SetColorKey
## -description
The <code>SetColorKey</code> method sets the source color key value that the VMR should use.
## -parameters
### -param Clr [in]
Specifies the source color key.
## -returns
If the method succeeds, it returns S_OK. If it fails, it returns an error code.
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>VFW_E_WRONG_STATE</b></dt>
</dl>
</td>
<td width="60%">
The VMR is not in windowless mode.
</td>
</tr>
</table>
## -remarks
Color key control is only meaningful when the VMR is using an overlay surface.
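A minimal usage sketch (error handling elided; it assumes the DirectShow headers are included, that <code>pWC</code> is a valid <b>IVMRWindowlessControl</b> pointer, and that the VMR was already put into windowless mode through <b>IVMRFilterConfig::SetRenderingMode</b>):
```cpp
COLORREF clr = RGB(16, 0, 16);       // an illustrative source color key value
HRESULT hr = pWC->SetColorKey(clr);
if (hr == VFW_E_WRONG_STATE)
{
    // The VMR is not in windowless mode; configure it with
    // IVMRFilterConfig::SetRenderingMode(VMRMode_Windowless) first.
}
```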
## -see-also
<a href="/windows/desktop/DirectShow/error-and-success-codes">Error and Success Codes</a>
<a href="/windows/desktop/api/strmif/nn-strmif-ivmrwindowlesscontrol">IVMRWindowlessControl Interface</a>
<a href="/windows/desktop/api/strmif/nf-strmif-ivmrwindowlesscontrol-getcolorkey">IVMRWindowlessControl::GetColorKey</a>
<a href="/windows/desktop/DirectShow/using-the-video-mixing-renderer">Using the Video Mixing Renderer</a> | 28.490196 | 400 | 0.785272 | yue_Hant | 0.367057 |
50391f3509b1fecdc644472aa66cdc76556fbf34 | 160 | md | Markdown | README.md | Matej-Chmel/Package_Tracker | 06f510e505ae49b4ed5d8fda68d8daf872d8b4a7 | [
"CC0-1.0"
] | null | null | null | README.md | Matej-Chmel/Package_Tracker | 06f510e505ae49b4ed5d8fda68d8daf872d8b4a7 | [
"CC0-1.0"
] | null | null | null | README.md | Matej-Chmel/Package_Tracker | 06f510e505ae49b4ed5d8fda68d8daf872d8b4a7 | [
"CC0-1.0"
] | null | null | null | # Package tracker
Small Django project.
# Sledovač balíků
Projekt pro předmět *Skriptovací programovací jazyky a jejich aplikace* v zimním semestru 2019/2020.
| 26.666667 | 100 | 0.8125 | ces_Latn | 0.999707 |
503934ea1a335be69ebf79e8c862dd0aa67159b8 | 78 | md | Markdown | mybatis/src/test/java/com/github/kuangcp/stream/Readme.md | Kuangcp/JavaBase | 398b05ce0269b20d8ed3854e737ee8e54d74ead2 | [
"MIT"
] | 16 | 2017-07-11T14:11:48.000Z | 2021-03-31T08:50:55.000Z | mybatis/src/test/java/com/github/kuangcp/stream/Readme.md | Kuangcp/JavaBase | 398b05ce0269b20d8ed3854e737ee8e54d74ead2 | [
"MIT"
] | 1 | 2017-07-03T06:50:53.000Z | 2019-03-06T18:05:18.000Z | mybatis/src/test/java/com/github/kuangcp/stream/Readme.md | Kuangcp/JavaBase | 398b05ce0269b20d8ed3854e737ee8e54d74ead2 | [
"MIT"
] | 13 | 2018-01-10T05:03:50.000Z | 2020-11-24T09:58:27.000Z | 使用流优化大数据量查询和导出
Stream模式
弊端:
查看Mybatis实现原理,是否有连接高频占用和释放风险
优势:
降低SQL查询扫描行数 | 7.090909 | 28 | 0.820513 | zho_Hans | 0.14704 |
503a9388cda4d33eb0be90e47c68bd6af23f5007 | 3,216 | md | Markdown | demos/person_vehicle_bike_detection/python/README.md | IntelAI/OpenVINO-model-server | 281cffcc358013afcec0d810331acc66c18297f7 | [
"Apache-2.0"
] | 305 | 2018-10-01T12:41:28.000Z | 2020-04-24T10:36:08.000Z | demos/person_vehicle_bike_detection/python/README.md | IntelAI/OpenVINO-model-server | 281cffcc358013afcec0d810331acc66c18297f7 | [
"Apache-2.0"
] | 61 | 2018-11-15T09:23:01.000Z | 2020-04-23T09:29:56.000Z | demos/person_vehicle_bike_detection/python/README.md | IntelAI/OpenVINO-model-server | 281cffcc358013afcec0d810331acc66c18297f7 | [
"Apache-2.0"
] | 67 | 2018-10-13T14:33:48.000Z | 2020-04-22T19:01:32.000Z | # Person, vehicle, bike detection with multiple data sources {#ovms_demo_person_vehicle_bike_detection}
The purpose of this demo is to show how to send data from multiple sources (cameras, video files) to a model served in OpenVINO Model Server.
## Deploy person, vehicle, bike detection model
### Download model files
```bash
mkdir -p model/1
wget -P model/1 https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.bin
wget -P model/1 https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/2/person-vehicle-bike-detection-crossroad-0078/FP32/person-vehicle-bike-detection-crossroad-0078.xml
```
### Run OpenVINO Model Server
```bash
docker run -d -v `pwd`/model:/models -p 9000:9000 openvino/model_server:latest --model_path /models --model_name person-vehicle-detection --port 9000 --shape auto
```
## Running the client application
```bash
git clone https://github.com/openvinotoolkit/model_server.git
cd model_server/demos/person_vehicle_bike_detection/python
python person_vehicle_bike_detection.py --help
```
### Arguments
| Argument | Description |
| :--- | :---- |
| -h,--help | Show help message and exit |
| -n NETWORK_NAME, --network_name NETWORK_NAME | Network name |
| -l INPUT_LAYER, --input_layer INPUT_LAYER | Input layer name |
| -o OUTPUT_LAYER, --output_layer OUTPUT_LAYER | Output layer name |
| -d FRAME_SIZE, --frame_size FRAME_SIZE | Input frame width and height that matches used model |
| -c NUM_CAMERAS, --num_cameras NUM_CAMERAS | Number of cameras to be used |
| -f FILE, --file FILE | Path to the video file |
| -i IP, --ip IP| IP address of the ovms|
| -p PORT, --port PORT | Port of the ovms |
### Using with video file
Set `camera` count to `0` with `-c 0` and provide path to the video file with `-f` parameter.
```
python person_vehicle_bike_detection.py -n person-vehicle-detection -l data -o detection_out -d 1024 -c 0 -f <path_to_video_file> -i localhost -p 9000
```
Output:
```
[$(levelname)s ] Video0 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Video0 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Video0 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Exiting thread 0
[$(levelname)s ] Good Bye!
```
### Using with video file and camera
Set `camera` count to `1` with `-c 1` and provide path to the video file with `-f` parameter.
```
python person_vehicle_bike_detection.py -n person-vehicle-detection -l data -o detection_out -d 1024 -c 1 -f <path_to_video_file> -i localhost -p 9000
```
Console logs:
```
[$(levelname)s ] Video1 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Camera0 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Video1 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Camera0 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Video1 fps: 7, Inf fps: 7, dropped fps: 0
[$(levelname)s ] Camera0 fps: 8, Inf fps: 8, dropped fps: 0
[$(levelname)s ] Exiting thread 0
[$(levelname)s ] Good Bye!
```
> **NOTE:** You should also see a GUI window showing the video frames with bounding boxes and the detected class names drawn. | 41.230769 | 198 | 0.717662 | eng_Latn | 0.672995 |
503bbee2fa271f2d6c8caecad25f147edcea60b6 | 5,452 | md | Markdown | README.md | DavidMellul/Kotlin-Publish-Subscribe | d5bf17f5016e0b0f41ffe8001dd9acb237e563c2 | [
"MIT"
] | 25 | 2018-01-29T08:16:42.000Z | 2021-11-15T11:17:02.000Z | README.md | DavidMellul/Kotlin-Publish-Subscribe | d5bf17f5016e0b0f41ffe8001dd9acb237e563c2 | [
"MIT"
] | 2 | 2018-01-29T15:02:47.000Z | 2018-01-29T15:04:23.000Z | README.md | DavidMellul/Kotlin-Publish-Subscribe | d5bf17f5016e0b0f41ffe8001dd9acb237e563c2 | [
"MIT"
] | 2 | 2018-11-19T09:14:55.000Z | 2020-09-15T11:24:32.000Z | # Kotlin-Publish-Subscribe
[](https://jitpack.io/#DavidMellul/Kotlin-Publish-Subscribe)
🦄Intuitive and powerful human-readable Kotlin DSL for IPCs & turning anything into a message receiver / broadcaster🦄
:white_check_mark: Seamless integration <br />
:white_check_mark: No dependencies <br />
:white_check_mark: No modification in your code & Powerful human-readable DSL<br />
:white_check_mark: Lightweight library ~ 1kb <br />
## <a href="#demonstration"></a>Demonstration
#### Simple emitter - receiver
```kotlin
val alice = "Alice"
val bob = "Bob"
alice listenTo "message" then { print("I'm $alice and I have a new message\n") }
bob broadcast "message"
// Output : I'm Alice and I have a new message
```
#### Simple emitter - multiple receivers
```kotlin
val alice = "Alice"
val bryan = "Bryan"
val kevin = "Kevin"
val bob = "Bob"
listOf(alice,bryan,kevin).forEach { receiver -> receiver listenTo "message" then { print("$receiver has received Bob's message!\n") } }
bob broadcast "message"
// Output :
Bryan has received Bob's message!
Kevin has received Bob's message!
Alice has received Bob's message!
```
#### Multiple emitters - Simple receiver
```kotlin
val julia = "Julia"
val bob = "Bob"
val fred = "Fred"
val tom = "Tom"
julia listenTo "seduction" then { print("Maybe another time fellow...\n")}
listOf(bob,fred,tom).forEach { it broadcast "seduction" }
// Output :
Maybe another time fellow...
Maybe another time fellow...
Maybe another time fellow...
```
#### Emitter - Receiver with one parameter
```kotlin
val alice = "Alice"
val bob = "Bob"
alice listenTo "seduction" then { seductionLevel ->
if( (seductionLevel as Int) > 9000)
print("It's over 9k !!!!!")
else
print("Not enough...Keep doin' or die tryin'")
}
bob broadcast SophisticatedSignal("seduction", 9001)
// Output :
It's over 9k !!!!!
```
#### Emitter - Receiver with multiple parameters (not varargs ! Kotlin limitation :heart:)
```kotlin
val alice = "Alice"
val bob = "Bob"
alice listenTo "charming attempt" then { attempt ->
print(attempt)
}
bob broadcast SophisticatedSignal("charming attempt",
hashMapOf(
"Name" to "Boby Bob",
"Seduction level" to 100000,
"Height" to 1.7,
"Employment" to "Freelance Trainer & Developer ",
"Tasks to be done" to listOf("Pull", "Commit", "Push", "Leave the building"))
)
// Output
{Seduction level=100000, Height=1.7, Tasks to be done=[Pull, Commit, Push, Leave the building], Employment=Freelance Trainer & Developer , Name=Boby Bob}
```
#### Emitter - Receiver with one parameter and usage of *stopListenTo*
```kotlin
val alice = "Alice"
val bob = "Bob"
alice listenTo "message" then { message ->
print("Bob: $message\n")
alice broadcast SophisticatedSignal("reply", "Wassup bro")
}
bob listenTo "reply" then { reply ->
print("Alice : $reply\n")
print("(Bob thinking) I will break up with her hahaha ! First block her\n")
bob stopListenTo "reply"
print("(Bob thinking) Ok done !\n")
bob broadcast SophisticatedSignal("message", "Good bye !")
}
bob broadcast SophisticatedSignal("message", "Hello Alice !")
// Output :
Bob: Hello Alice !
Alice : Wassup bro
(Bob thinking) I will break up with her hahaha ! First block her
(Bob thinking) Ok done !
Bob: Good bye !
```
## Install
#### Gradle
First add the Jitpack repository in your root **build.gradle** at the end of repositories:
```gradle
allprojects {
repositories {
...
maven { url 'https://jitpack.io' }
}
}
```
Then, copy paste this line into your dependencies
```gradle
compile 'com.github.DavidMellul:Kotlin-Publish-Subscribe:-SNAPSHOT'
```
#### Maven
Add the Jitpack repository to your **pom.xml**
```xml
<repositories>
<repository>
<id>jitpack.io</id>
<url>https://jitpack.io</url>
</repository>
</repositories>
```
Then, add the dependency
```xml
<dependency>
<groupId>com.github.DavidMellul</groupId>
<artifactId>Kotlin-Publish-Subscribe</artifactId>
<version>-SNAPSHOT</version>
</dependency>
```
#### SBT
Add both lines to your **build.sbt**
```sbt
resolvers += "jitpack" at "https://jitpack.io"
libraryDependencies += "com.github.DavidMellul" % "Kotlin-Publish-Subscribe" % "-SNAPSHOT"
```
## Use case
You could use this lightweight library in cases such as:
- Asynchronous processing
- Communication between independent objects
- The Observer-Observable design pattern
- Basically anything that requires two entities to communicate so that one reacts to the other
## Documentation
The library is written in Kotlin and has no external dependencies, and it is fully interoperable with Java! :purple_heart:
:sparkling_heart: **This library is dead easy to use -> Look at the examples in the [demonstration](#demonstration) :sparkling_heart:**
:triangular_flag_on_post: **This library works only with the Any supertype ==> It means you can turn absolutely everything into a broadcaster / receiver and put everything as parameter associated to a signal.** :triangular_flag_on_post:
## Contribute
Feel free to:
- Open issues / pull requests if there is any bug / missing feature. :love_letter:
- Star this repo :unicorn:
- Ask for a :coffee:
| 27.396985 | 236 | 0.680668 | eng_Latn | 0.897077 |
503c1c6b8df2747384444440b40f549764e4fc0a | 306 | md | Markdown | _talks/2020-06-04-Paris.md | mrolinek/mrolinek.github.io | 70e2ad901c0201bc714cfcd853aba6911224ae4a | [
"MIT"
] | null | null | null | _talks/2020-06-04-Paris.md | mrolinek/mrolinek.github.io | 70e2ad901c0201bc714cfcd853aba6911224ae4a | [
"MIT"
] | null | null | null | _talks/2020-06-04-Paris.md | mrolinek/mrolinek.github.io | 70e2ad901c0201bc714cfcd853aba6911224ae4a | [
"MIT"
] | null | null | null | ---
title: "Differentiation of Blackbox Combinatorial Solvers"
collection: talks
venue: " Ecole Normale Superieure, Zdeborova, Krzakala group"
date: 2020-06-04
location: "Paris, France"
gdrive_link: "https://docs.google.com/presentation/d/1lVCtUa8MW_b6NO3Z2rEv9VMS15jF1yDmGbYkfHoY_FY/edit?usp=sharing"
---
| 34 | 115 | 0.800654 | yue_Hant | 0.146458 |
503c64273a56d2e6aa1f8a83866d3a92b54c50da | 3,839 | md | Markdown | docs/best-practices/project-inception-checklist/readme.md | gregakinman/gradle-baseline | 628aa7cb8507e60410353bb2d204dec699d422ba | [
"Apache-2.0"
] | null | null | null | docs/best-practices/project-inception-checklist/readme.md | gregakinman/gradle-baseline | 628aa7cb8507e60410353bb2d204dec699d422ba | [
"Apache-2.0"
] | null | null | null | docs/best-practices/project-inception-checklist/readme.md | gregakinman/gradle-baseline | 628aa7cb8507e60410353bb2d204dec699d422ba | [
"Apache-2.0"
] | null | null | null | # Starting a Software Project
Software projects should adhere to a sane set of standards. This document outlines our recommended
standards for all projects, then links to additional technology-specific recommendations.
When starting any new software project, you should:
- [ ] [Use a **source code** management system](#source-control)
- [ ] [Use a **repository hosting** service](#repository-hosting-service)
- [ ] [Set up **Continuous Integration**](#continuous-integration)
- [ ] [Use **Semantic Versioning**](#versioning)
- [ ] [Have **verification and development setup scripts**](#verification-and-development-setup-scripts)
## Source Control
**Use a source code management system**.
[Git](https://en.wikipedia.org/wiki/Git_(software)) is recommended.
Why use source control? Have you ever:
- Made a change to code, realized it was a mistake and wanted to revert back?
- Lost code or had a backup that was too old?
- Had to maintain multiple versions of a product?
- Wanted to see the difference between two (or more) versions of your code?
- Wanted to prove that a particular change broke or fixed a piece of code?
- Wanted to review the history of some code?
- Wanted to submit a change to someone else's code?
- Wanted to share your code, or let other people work on your code?
- Wanted to see how much work is being done, and where, when and by whom?
- Wanted to experiment with a new feature without interfering with working code?
(As outlined on
[Stack Overflow](http://stackoverflow.com/questions/1408450/why-should-i-use-version-control))
## Repository Hosting Service
**Use a repository hosting service**. Using a repository hosting service combined with
[source control](#source-control) allows for a safe development and collaboration story.
[Github](https://github.com/) is recommended.
## Continuous Integration
**Set up Continuous Integration (CI)**. CI tests changes to a project's code base as they're
submitted.
[CircleCI](https://circleci.com/) is recommended.
## Versioning
**Use [Semantic Versioning (Semver)](http://semver.org/)**.
Quick summary of Semantic Versioning:
Given a version number MAJOR.MINOR.PATCH, increment the:
1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards-compatible manner, and
3. PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as
extensions to the MAJOR.MINOR.PATCH format.
## Verification and Development Setup Scripts
In order to lower the barrier-to-entry for contributors, every project
should provide `./scripts/verify` and `./scripts/setup` scripts with the
following contract:
- `./scripts/verify` verifies the correctness of the current state of
the project working directory. Intended for use in CI or
local environments. For example, could invoke `./gradlew build` in
order to compile and test a Java project.
- `./scripts/setup` sets up the default local
development environment(s) -- including downloading dependencies and
additional sources, setting up IDE-specific configuration, etc. Can
be run repeatedly to incorporate upstream changes to the
development setup. For example, could invoke
`./gradlew idea eclipse` in order to generate Eclipse and IntelliJ
project files for a Java project.
The scripts should be executable on Linux/MacOS environments via
`./scripts/foo`, i.e., should carry execution permissions and an
appropriate Shebang (`#!`) instruction.
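For example, a minimal `scripts/verify` for a Gradle-based Java project might look like this (a sketch — adapt the build command to your project's tooling):
```bash
#!/usr/bin/env bash
# Verify the current state of the working directory: compile and run tests.
set -euo pipefail
cd "$(dirname "$0")/.."   # always run from the repository root
./gradlew build
```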
## Technology-Specific Checklists
The above recommendations are appropriate for all software development projects. Depending on which
technologies you are working with, you should also read the following:
- [Java Checklist](java.md)
- [Web Application Checklist](web-app.md)
- [Gradle Plugin Checklist](gradle-plugin.md)
| 40.410526 | 104 | 0.770513 | eng_Latn | 0.992971 |
503c724ac22fd8b40c29d86d3150c14f64d1b921 | 2,715 | md | Markdown | doc/requirements.md | coinjet/gatewayd | fe7465418e884accd8ae5b51f28f00b75ba55032 | [
"0BSD"
] | 2 | 2015-08-12T18:09:29.000Z | 2015-08-14T14:43:15.000Z | doc/requirements.md | tangz1987/gatewayd | d223e89fc1bbe0ae61e767a6bf634794d83a161e | [
"0BSD"
] | null | null | null | doc/requirements.md | tangz1987/gatewayd | d223e89fc1bbe0ae61e767a6bf634794d83a161e | [
"0BSD"
] | null | null | null | GENERAL REQUIREMENTS
- sends payments to Ripple REST upon writing to a SQL table
- verifies outbound payments using Ripple REST notifications
- standard error messages for HTTP calls
- ability to map one deposit to n outgoing ripple transactions
- ability to map one incoming ripple transaction to n withdrawals
- record external_transaction_id of deposit in n ripple transactions
- record ripple_transaction_id of ripple_transaction in n withdrawals
- query to return n outgoing ripple transactions given a deposit
- query to return n withdrawals given incoming ripple transaction
- ripple_transaction records should have an incoming boolean
- notify via email when the hot wallet is low on any currency
- require notification email to be provided
BOUNCING INVALID PAYMENTS
- swap source address for destination address
- swap source tag for destination tag
- set destination amount as amount received, not amount they sent
- set partial payment flag on transaction so they pay the fee
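Illustrative sketch of the bounce construction (field names are assumptions
modeled on Ripple REST payment objects, not gatewayd's actual schema):

    function buildBouncePayment(original) {
      return {
        source_address: original.destination_address,      // swap addresses
        destination_address: original.source_address,
        source_tag: original.destination_tag,              // swap tags
        destination_tag: original.source_tag,
        destination_amount: original.destination_amount,   // amount received, not amount sent
        partial_payment: true                              // sender of the bounce pays the fee
      };
    }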
COLD WALLET
- sets email flag of cold wallet
- sets domain flag of cold wallet
- sets tfDisallowXRP flag for cold wallet
- sets tfRequireDestTag flag for cold wallet
HOT WALLET
- sets tfDisallowXRP flag for hot wallet
- sets tfRequireAuth flag for hot wallet before extending trust lines
- set trust lines from hot wallet to cold wallet based on currencies
** only trust after setting tfRequireAuth
CALLBACKS
- POST callback when a withdrawal is marked as queued
- POST callback upon notification of outbound ripple transaction
OUTBOUND RIPPLE TRANSACTIONS
- successes should be marked as sent
- tem failures should be marked as failed
- tel failures should be marked as failed
- tef failures should be marked as failed
- tej failures should be marked as failed
- ter failures should be retried
INBOUND RIPPLE TRANSACTIONS
- tec failures should be marked as failed
- tes successes should be marked as succeeded
NOTIFICATIONS
- poll Ripple REST for outbound payment notifications of tes transactions
- tec failures should be marked as failed
- tec failures should record the transaction hash
- tec failures should mark the ripple_status with tec error code
- tes successes should be marked as incoming
- tes successes should record the transaction hash
- tes successes should mark the ripple_status with tesSUCCESS code
- bounce payments to invalid destination tags
- bounce xrp payments made to cold wallet
- bounce xrp payments made to hot wallet
DEPOSITS STATE MODEL
- queued
- processed
WITHDRAWALS STATE MODEL
- queued
- notified
- processed
RIPPLE TRANSACTIONS OUTGOING STATE MODEL
- outgoing
- sent
- failed
- succeeded
RIPPLE TRANSACTIONS INCOMING STATE MODEL
- incoming
- failed
- bounced
- succeeded
| 32.710843 | 75 | 0.809576 | eng_Latn | 0.966696 |
503dfc0596b691a7067c54fb822525b7eaad4cc0 | 1,410 | md | Markdown | README.md | MelnikovAnton/HD_Agents | 8488e40158a86c3937638108f05dcbca2a1ce647 | [
"Apache-2.0"
] | null | null | null | README.md | MelnikovAnton/HD_Agents | 8488e40158a86c3937638108f05dcbca2a1ce647 | [
"Apache-2.0"
] | null | null | null | README.md | MelnikovAnton/HD_Agents | 8488e40158a86c3937638108f05dcbca2a1ce647 | [
"Apache-2.0"
] | null | null | null | ## Author
Anton Melnikov (Антон Мельников)
[email protected]
## Description
The application lets you change phone names and types, as well as agent names.
<br/>
To run the application, place the exe file and the config directory in the same directory.
## Build
* Configure logging in **resources\log4j.properties**.<br/>
* If the application needs to run on a PC without Java installed, copy a JRE into the ./jre folder.
* Build the application:
````
mvn install
````
## Configuring the built application
* Create a **properties.xml** file in the **config** directory: <br/>
* Put the Communication Manager address in the entry with the key cm_host.
* Set the port (5022 by default for SSH) in the entry with the key cm_port.
* Fill in the username and password in the entries with the keys cm_user and cm_password respectively.
* Fill in the lists of agents and phones in the entries with the keys agents and stations respectively. A comma is used as the separator.
**Example:**<br/>
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<entry key="cm_host">5.5.5.5</entry>
<entry key="cm_port">5022</entry>
<entry key="cm_user">user</entry>
<entry key="cm_password">password</entry>
<entry key="agents">44100, 44101, 44102</entry>
<entry key="stations">4100,4101,4102</entry>
</properties>
``` | 32.790698 | 137 | 0.713475 | rus_Cyrl | 0.809782 |
503e554b529a91a7bda2bb388b86725c4acbd2c6 | 415 | md | Markdown | README.md | arobertson38/NGRF-LSI-GeneratorPaper | e3a23d3d1a56487bbbb2eae8a9e21a0e67dd54aa | [
"MIT"
] | 1 | 2021-09-29T17:09:00.000Z | 2021-09-29T17:09:00.000Z | README.md | arobertson38/NGRF-LSI-GeneratorPaper | e3a23d3d1a56487bbbb2eae8a9e21a0e67dd54aa | [
"MIT"
] | null | null | null | README.md | arobertson38/NGRF-LSI-GeneratorPaper | e3a23d3d1a56487bbbb2eae8a9e21a0e67dd54aa | [
"MIT"
] | null | null | null | # NGRF-LSI-GeneratorPaper
This repository contains a complete implementation of the generator described in "Efficient Generation of Anisotropic N-Field Microstructures from 2-Point Statistics using Multi-output Gaussian Random Fields". Additionally, it contains some suitable examples.
For instructions on how to use the code and the generator, please follow the examples laid out in the "Example_Usage.py" file.
| 83 | 260 | 0.826506 | eng_Latn | 0.997151 |
503e6b0a67212e25f23071901095585a10f7b31f | 317 | md | Markdown | packages/protocol-ethereum-sdk/src/nft/transfer-erc1155.md | aciceri/protocol-ethereum-sdk | 6c19496211898e6153cc297745350cece542af2b | [
"MIT"
] | 18 | 2021-08-14T04:41:49.000Z | 2021-11-09T14:54:08.000Z | packages/sdk/src/nft/transfer-erc1155.md | tankloc/ethereum-sdk | 94ae089158720145fb2d56bc27ba717e3d6106a8 | [
"MIT"
] | 8 | 2021-09-13T13:42:05.000Z | 2021-11-11T18:13:11.000Z | packages/sdk/src/nft/transfer-erc1155.md | tankloc/ethereum-sdk | 94ae089158720145fb2d56bc27ba717e3d6106a8 | [
"MIT"
] | 12 | 2021-11-18T01:42:45.000Z | 2022-03-25T01:38:21.000Z | ### transfer erc1155
Transfers an ERC-1155 token from the sender to the new owner.
Arguments:
- contract: Address - address of the ERC1155 contract
- transferType: Erc1155TransferType - single or batch transfer type
- from: Address - owner of the ERC1155 token
- to: Address - new owner
- tokenId: string - id of the NFT to transfer
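A hypothetical call sketch matching the argument list above (the function name and exact signature are illustrative assumptions, not this SDK's exported API):
```typescript
declare function transferErc1155(
  contract: string,                   // address of the ERC1155 contract
  transferType: "single" | "batch",   // Erc1155TransferType
  from: string,                       // current owner of the token
  to: string,                         // new owner
  tokenId: string,                    // id of the NFT to transfer
): Promise<string>                    // e.g. resolves to a transaction hash
async function demo() {
  const tx = await transferErc1155("0x...", "single", "0x...", "0x...", "42")
  console.log(tx)
}
```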
| 28.818182 | 67 | 0.763407 | eng_Latn | 0.930274 |
503e865f2c1dd8f19160fdde8607095013a088c1 | 978 | md | Markdown | AlchemyInsights/deploy-azure-ad-pim.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:06:02.000Z | 2020-09-17T11:26:05.000Z | AlchemyInsights/deploy-azure-ad-pim.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2022-02-09T06:59:12.000Z | 2022-02-09T06:59:36.000Z | AlchemyInsights/deploy-azure-ad-pim.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-11T18:36:50.000Z | 2021-10-09T10:49:57.000Z | ---
title: Deploy Azure AD Privileged Identity Management (PIM)
ms.author: v-aiyengar
author: AshaIyengar21
manager: dansimp
ms.date: 12/14/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "9003895"
- "6949"
ms.openlocfilehash: e7e52ebf7fdb6a7cb07cf1d960fc14263ad0dbfab00ea9968feabbfa4b05c975
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: da-DK
ms.lasthandoff: 08/05/2021
ms.locfileid: "53914177"
---
# <a name="deploy-azure-ad-privileged-identity-management-pim"></a>Deploy Azure AD Privileged Identity Management (PIM)
To learn more about how to plan your deployment of Privileged Identity Management (PIM) in your Azure Active Directory (Azure AD) organization, see [Deploy Azure AD Privileged Identity Management (PIM)](https://go.microsoft.com/fwlink/?linkid=2132095). | 39.12 | 276 | 0.809816 | yue_Hant | 0.137721 |
5040f078cce964ed4d13d753174d1cab03af34ca | 21 | md | Markdown | README.md | Franko1307/cpp-listas-template | a820863d6bdf4ef3132bf15fc2cc2605889ed23b | [
"Apache-2.0"
] | null | null | null | README.md | Franko1307/cpp-listas-template | a820863d6bdf4ef3132bf15fc2cc2605889ed23b | [
"Apache-2.0"
] | null | null | null | README.md | Franko1307/cpp-listas-template | a820863d6bdf4ef3132bf15fc2cc2605889ed23b | [
"Apache-2.0"
] | null | null | null | # cpp-listas-template | 21 | 21 | 0.809524 | ita_Latn | 0.458231 |
504246dc3df0b7fa97c9c43c0d8293612f1dfc96 | 1,690 | md | Markdown | Instructions/Walkthroughs/23-Access Azure Preview features.md | MicrosoftLearningKoreanLab/AZ-900TKR-MicrosoftAzureFundamentals | 328683ffa71a7d16ad870e94c65bf890cb5720ed | [
"MIT"
] | 36 | 2020-01-05T12:52:08.000Z | 2021-09-16T08:38:52.000Z | Instructions/Walkthroughs/23-Access Azure Preview features.md | iamoyw/AZ-900TKR-MicrosoftAzureFundamentals | 328683ffa71a7d16ad870e94c65bf890cb5720ed | [
"MIT"
] | null | null | null | Instructions/Walkthroughs/23-Access Azure Preview features.md | iamoyw/AZ-900TKR-MicrosoftAzureFundamentals | 328683ffa71a7d16ad870e94c65bf890cb5720ed | [
"MIT"
] | 30 | 2020-01-05T12:52:07.000Z | 2021-08-12T07:05:15.000Z | ---
wts:
  title: '23 - Access Azure Preview features'
  module: 'Module 04 - Azure Billing and Support'
---
# 23 - Access Azure Preview features
In this walkthrough, you will access Azure preview services or features and review the latest Azure updates.
Lab time: 10 minutes
# Exercise 1: Access preview services and features
In this exercise, you will review preview features in the Marketplace.
1. Sign in to the <a href="https://portal.azure.com" target="_blank"><span style="color: #0066cc;" color="#0066cc">Azure portal</span></a>.
2. In the search bar, search for **Marketplace**.
3. In the Marketplace search bar, search for **Preview** and review any preview offerings of interest.
4. In the Marketplace search bar, search for **Kubernetes Service** and click the **Create** button.
5. Click the **Kubernetes version** drop-down menu; it may include a **Preview** version. Not every service has a preview version, and Kubernetes Service may not have one either.

**Note**: If you are using Azure services in production, preview features within a generally available Azure service or product may not yet be suitable for production deployment. You should also understand any usage limitations before deploying them to a production environment.
# Exercise 2: Review the Azure updates page
In this exercise, you will review the Azure updates page.
1. In a browser, navigate to the <a href="https://azure.microsoft.com/ko-kr/updates/" target="_blank"><span style="color: #0066cc;" color="#0066cc">Azure updates</span></a> web page.
2. There are **Now available**, **In preview**, and **In development** check box options; check only the ones you need to filter the information.
3. Check the **In preview** check box, select **Containers** in the **Browse by category** drop-down menu, and then click the **Filter results** button. Only container services that are in preview are shown.

4. Click an item in the results list to see more detailed information.
5. Return to the **Azure updates** page, select only **Now available**, and then filter the results. Click any item of interest in the results to see more details.
6. Return to the **Azure updates** page, select only **In development**, and then filter the results. Click any item of interest in the results to see more details.
You have accessed Azure preview services or features and reviewed the latest Azure updates.
| 33.8 | 159 | 0.669822 | kor_Hang | 1.00001 |
504267fe7d35cbb629f75e9e880229f7127fa08f | 442 | md | Markdown | Daily Coding Problem/Day148/Problem.md | shouryagupta21/Fork_CPP | 8f5baed045ef430cca19d871c8854abc3b6ad44f | [
"MIT"
] | 8 | 2021-02-14T13:13:27.000Z | 2022-01-08T23:58:32.000Z | Daily Coding Problem/Day148/Problem.md | shouryagupta21/Fork_CPP | 8f5baed045ef430cca19d871c8854abc3b6ad44f | [
"MIT"
] | 17 | 2021-02-28T17:03:50.000Z | 2021-10-19T13:02:03.000Z | Daily Coding Problem/Day148/Problem.md | shouryagupta21/Fork_CPP | 8f5baed045ef430cca19d871c8854abc3b6ad44f | [
"MIT"
] | 15 | 2021-03-01T03:54:29.000Z | 2021-10-19T18:29:00.000Z | Good morning! Here's your coding interview problem for today.
This problem was asked by Apple.
Gray code is a binary code where each successive value differ
in only one bit, as well as when wrapping around. Gray code is
common in hardware so that we don't see temporary spurious values
during transitions.
Given a number of bits n, generate a possible gray code for it.
For example, for n = 2, one gray code would be [00, 01, 11, 10].
| 34 | 66 | 0.760181 | eng_Latn | 0.999895 |
5042d8c424562e8858ca563b0ed89a415b231fad | 582 | md | Markdown | README.md | xdrie/soundpipe-d | 2831f84aee888d976c0ebb8d77843bed4ead6d1b | [
"Apache-2.0"
] | 2 | 2020-11-12T23:13:28.000Z | 2020-11-13T19:09:59.000Z | README.md | xdrie/soundpipe-d | 2831f84aee888d976c0ebb8d77843bed4ead6d1b | [
"Apache-2.0"
] | null | null | null | README.md | xdrie/soundpipe-d | 2831f84aee888d976c0ebb8d77843bed4ead6d1b | [
"Apache-2.0"
] | null | null | null |
# soundpipe-d
dlang bindings to [soundpipe](https://github.com/xdrie/soundpipe)
## usage
all you need to do is add this package as a dependency and it should automatically build the C library and link it in.
note that `soundpipe` depends on `libsndfile`.
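for instance, with dub's JSON recipe (the version constraint below is illustrative — point it at the latest release):
```json
{
    "dependencies": {
        "soundpipe-d": "~>0.1.0"
    }
}
```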
## example
see [example](example/), which is adapted from [ex_music](https://github.com/xdrie/soundpipe/blob/master/examples/ex_music.c) in the original library. Run the binary to generate `test.wav`, a short sample with some synths.
## licenses
- `soundpipe-d` (xdrie) apache-2.0
- `soundpipe` (Paul Batchelor) mit | 30.631579 | 225 | 0.749141 | eng_Latn | 0.990458 |