hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
sequencelengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
sequencelengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
sequencelengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
c96735baba789aefb462311634f969367dc0e7cf
10,649
md
Markdown
_posts_example/blog/2013-11-10-what-is-devops.md
azhurbilo/devops.by
6a113ce14cb3d49fcee3c2b9ffb0538acd60fb70
[ "MIT" ]
null
null
null
_posts_example/blog/2013-11-10-what-is-devops.md
azhurbilo/devops.by
6a113ce14cb3d49fcee3c2b9ffb0538acd60fb70
[ "MIT" ]
null
null
null
_posts_example/blog/2013-11-10-what-is-devops.md
azhurbilo/devops.by
6a113ce14cb3d49fcee3c2b9ffb0538acd60fb70
[ "MIT" ]
null
null
null
---
layout: post
title: What is DevOps?
excerpt: "Agile played an important role in development by restoring the business's trust, but it inadvertently left IT Operations behind. DevOps is a way to restore trust in the IT organization as a whole."
modified: 2013-11-10
categories: articles
tags: [theory, management, team, version-control-system, infrastructure-as-code, configuration-management]
comments: true
share: true
---

![1 article logo]({{ site.url }}/images/what-is-devops/avatar-1.png)
{: .center}

### Definition:

Let's see what the glossary of the well-known company Gartner has to say:

> The DevOps movement was born of the need to improve IT service delivery agility and found initial traction within
> many large public cloud services providers. Underpinning DevOps is the philosophy found in the Agile Manifesto,
> which emphasizes people (and culture) and seeks to improve collaboration between operations and development teams.
> DevOps implementers also attempt to better utilize technology—especially automation tools that can leverage
> an increasingly programmable and dynamic infrastructure from a life cycle perspective.

DevOps is fashionable. But as Tim Park puts it, DevOps is the new standard in development: the infrastructure of modern systems is so complex that a human can no longer maintain it by hand. Everything should be automated.

![Tim Park]({{ site.url }}/images/what-is-devops/Tim-Park-about-devops.png)
{: .center}

Just don't treat DevOps as nothing but automation; we will come back to this below. One more definition, from the wiki:

> DevOps (a blend of the English words Development and Operations) is a new software development
> methodology focused on communication, collaboration, and integration between
> the development and operations departments.
Let us turn to the Gartner glossary again to clarify the notion of Operations:

> Gartner defines IT operations as the people and management processes associated with IT
> service management to deliver the right set of services at the right quality and at
> competitive costs for customers.

In the rest of this article, by Operations I will mean administrators (the operations staff / the team supporting the IT infrastructure).

### Dev vs Operations:

Developing and deploying software used to require no deep integration between departments. Today, however, close cooperation between all departments (Development, IT Operations, Quality Assurance, and so on) is essential.

When people talk about DevOps, they most often mention some kind of "walls of confusion" (walls/silos) between developers and administrators. The problem is that development keeps adding new changes, while IT operations must, on the contrary, keep the system stable. Hence the contradiction :)

I really liked the article and illustrations from the dev2ops.org blog on this conflict of roles:

![Wall of confusion]({{ site.url }}/images/what-is-devops/devops-wall-of-confusion.png)
{: .center}

For developers, change is what they are paid for. The business always needs change to keep up with the modern world, and this understanding pushes developers to produce as many changes as possible. IT specialists see it differently: for them, change is harm. Each side believes it is doing the right thing and benefiting the business, and indeed, taken separately, both are right.

On top of this different understanding of the common goal, developers and administrators use different tools.
![Dev vs Operations tools]({{ site.url }}/images/what-is-devops/dev-vs-ops-tools.png)
{: .center}

### One goal - one team:

![Business goal]({{ site.url }}/images/what-is-devops/business-dev-operations.png)
{: .center}

This picture shows that the "wall" between the business and development was torn down by adopting agile methodologies, but the other "wall", between development and IT operations, was forgotten. Clyde Logue, founder of StreamStep, puts it this way:

> Agile played an important role in development by restoring the business's trust,
> but it inadvertently left IT Operations behind. DevOps is a way to restore
> trust in the IT organization as a whole.

All employees must understand that they are part of a single process. DevOps cultivates a mindset in which everyone's personal decisions and actions are directed toward a common goal, and success is measured across the whole develop-to-deliver cycle rather than by the success of individual roles.

### What DevOps is not:

At OpsCamp Austin, Adam Jacob of Chef spoke out against system administrators renaming their job title to DevOps. It is dangerous to treat DevOps as a profession or a special kind of activity, because that makes DevOps someone else's problem rather than the whole team's:

> You are a DBA? Don't worry about DevOps, that's the DevOps team's problem. You are a security expert? Don't worry
> about DevOps, that's the DevOps team's problem.

### Collaboration:

On the IBM site I came across a series of articles by Paul Duvall; each article contains an "About this series" block that begins with the words:

> Developers can learn a lot from operations, and operations can learn a lot from developers

A fairly standard phrase that could apply in many contexts, yet in the IT world the exchange of experience between different roles is often avoided.
Close collaboration between developers and operations specialists produces a new generation of engineers who take the best of both disciplines and combine them for the benefit of the user. This shows up as cross-functional teams with experience in development, configuration management, databases, testing, and infrastructure management.

In one of his articles Paul Duvall mentions the notion of "The flattening" and compares it with the "flattening" from Thomas Friedman's 2005 book The World is Flat. The idea is that factors such as globalization, information technology, the opening of borders in developing countries, and the internet lead to "flattening", removing the traditional barriers that kept people from participating in the world economy. Today the software industry shows the same tendency toward "flattening" inside companies seeking a competitive advantage: flattening both software releases and organizational structures.

Let's look at several categories that belong to DevOps:

* scripted environments
* test-driven infrastructures
* versioning everything
* monitoring
* cross-functional teams

### Scripted environments:

Fully automating environment provisioning reduces the risk of deployment errors. One of the biggest problems on software projects is teams with unique, hand-crafted system instances that nobody else can handle, because days, weeks, or months of effort by different team members went into configuring them. Once an environment is fully described by scripts, it is no longer unique.
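The scripted-environments idea can be illustrated with a minimal sketch. The paths and settings below are hypothetical, and a real team would use one of the infrastructure-automation tools mentioned in this article (Chef, Puppet, Ansible, and so on); the point is only that running the script twice leaves the environment in the same known state.

```python
# Minimal sketch of an idempotent environment script (hypothetical
# paths and settings): running it twice yields the same known state,
# which is what makes a scripted environment reproducible.
import json
import tempfile
from pathlib import Path

DESIRED_STATE = {
    "app_dir": "/opt/myapp",              # hypothetical install location
    "config": {"port": 8080, "workers": 4},
}

def provision(root: Path, state: dict = DESIRED_STATE) -> dict:
    """Bring the environment under `root` to the desired state."""
    app_dir = root / state["app_dir"].lstrip("/")
    app_dir.mkdir(parents=True, exist_ok=True)       # safe to repeat
    config_file = app_dir / "config.json"
    config_file.write_text(json.dumps(state["config"], indent=2))
    return {"app_dir": str(app_dir), "config": state["config"]}

if __name__ == "__main__":
    root = Path(tempfile.mkdtemp())
    first = provision(root)
    second = provision(root)                          # idempotent: same result
    print(first == second)                            # True
```

A new team member could run the same script against a fresh machine and get an identical environment, which is the whole point of the approach.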
The advantages of this approach:

* the environment is always in a known state
* the chance that knowledge is tied to one specific person is reduced
* the installation process becomes more predictable and repeatable

### Test-driven infrastructures:

The test-driven approach means writing tests along with the code; its roots lie in software development (TDD). Benefits:

* problems surface earlier
* the tests become documentation
* it is easier to isolate breaking changes

In essence the advantages are the same as using TDD in development, except that mistakes at the configuration stage cost far more than mistakes in the application itself, so do not neglect tests for your configuration automation.

### Versioning everything:

This model is simple: everything is under version control. Absolutely everything: infrastructure, configuration, application code, and database scripts. It is easy to understand, but teams that really version all artifacts are rare. There is a great way to find out whether everything is under version control: a new team member gets a new machine and must be able to check out a complete working software system from version control with a single command.

### Monitoring:

You need to monitor how every change affects the whole system at every stage of the process, all the way to production. Any team member should be able, at any time, to open a dashboard that shows the current state of the application.

### Cross-functional teams:

Every member of a cross-functional team is equally responsible for the delivery process, and anyone on the team can change any part of the software system. A cross-functional team weakens the "it's not my job" syndrome that stifles collaboration.

### Tools:

Tools are an important component of the DevOps model.
Here are examples of tools of various types:

Tool Type | Tools
--- | ---
Infrastructure automation | Bcfg2, Ansible, CFEngine, Chef, Puppet, Salt
Deployment automation | Capistrano, ControlTier, Fabric, Func
Infrastructure as a Service | Amazon Web Services, CloudStack, OpenStack, Rackspace
Build automation | Ant, Maven, Rake, MsBuild, Gradle
Version control | Subversion, Git, Mercurial
Continuous Integration | Jenkins, CruiseControl, TeamCity
Monitoring | Nagios, Zabbix, Cacti

### Lifecycle:

The CollabNet site (the company behind SCM - Subversion - and Application Lifecycle Management (ALM) solutions) has excellent illustrations of how DevOps affects the application delivery lifecycle:

![Development lifecycle]({{ site.url }}/images/what-is-devops/devops-lifecycle.png)
{: .center}

Which gives us iterations of the following form:

![Development loops]({{ site.url }}/images/what-is-devops/devops-loop.png)
{: .center}

### Sources:

* [http://www.gartner.com/it-glossary/devops](http://www.gartner.com/it-glossary/devops)
* [http://habrahabr.ru/company/scrumtrek/blog/166039/](http://habrahabr.ru/company/scrumtrek/blog/166039/)
* [http://devopswiki.net/index.php/DevOps](http://devopswiki.net/index.php/DevOps)
* [https://www.ibm.com/developerworks/ru/library/a-devops1/](https://www.ibm.com/developerworks/ru/library/a-devops1/)
* [http://dev2ops.org/2010/02/what-is-devops/](http://dev2ops.org/2010/02/what-is-devops/)
56.643617
445
0.808433
rus_Cyrl
0.964105
c96749c434142e9691f25cfd1b011c47a0b77281
2,630
md
Markdown
articles/redis-cache/TOC.md
ShuheiUda/azure-docs.ja-jp
7b9a06f297905012d007c3b09a021dfdf7395206
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/redis-cache/TOC.md
ShuheiUda/azure-docs.ja-jp
7b9a06f297905012d007c3b09a021dfdf7395206
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/redis-cache/TOC.md
ShuheiUda/azure-docs.ja-jp
7b9a06f297905012d007c3b09a021dfdf7395206
[ "CC-BY-4.0", "MIT" ]
null
null
null
# Overview
## [Why use Redis Cache?](https://azure.microsoft.com/services/cache/)
## [Explore Premium tier features](cache-premium-tier-intro.md)
## Scenarios
### [Access items in the cache](cache-dotnet-how-to-use-azure-redis-cache.md#add-and-retrieve-objects-from-the-cache)
### [Configure high availability](https://azure.microsoft.com/pricing/details/cache/)
### [Connect securely with SSL](cache-dotnet-how-to-use-azure-redis-cache.md#connect-to-the-cache)
### [Migrate from Managed Cache Service](cache-migrate-to-redis.md)
### [Patterns and practices caching guidance](../best-practices-caching.md?toc=%2fazure%2fredis-cache%2ftoc.json)
# Get started
## [ASP.NET](cache-web-app-howto.md)
## [.NET](cache-dotnet-how-to-use-azure-redis-cache.md)
## [WordPress](../app-service-web/web-sites-connect-to-redis-using-memcache-protocol.md?toc=%2fazure%2fredis-cache%2ftoc.json)
## [Node](cache-nodejs-get-started.md)
## [Java](cache-java-get-started.md)
## [Python](cache-python-get-started.md)
## [Redis Cache FAQ](cache-faq.md)
# How to
## Plan
### [Choose a cache tier](cache-faq.md#what-redis-cache-offering-and-size-should-i-use)
### [Persist your cache with Redis data persistence](cache-how-to-premium-persistence.md)
### [Secure your cache in a virtual network](cache-how-to-premium-vnet.md)
### [Distribute your cache with clustering](cache-how-to-premium-clustering.md)
## Automate
### [Deploy and manage with PowerShell](cache-howto-manage-redis-cache-powershell.md)
### [Deploy and manage with the Azure CLI](cli-samples.md)
### [Provision a Redis Cache](cache-redis-cache-arm-provision.md)
### [Provision a web app with Redis Cache](cache-web-app-arm-with-redis-cache-provision.md)
## Integrate with ASP.NET
### [Session state provider](cache-aspnet-session-state-provider.md)
### [Output cache provider](cache-aspnet-output-cache-provider.md)
## Manage
### [Configure in the portal](cache-configure.md)
### [Import and export data](cache-how-to-import-export-data.md)
### [Reboot](cache-administration.md#reboot)
### [Schedule updates](cache-administration.md#schedule-updates)
## Monitor and troubleshoot
### [Monitor in the portal](cache-how-to-monitor.md)
### [Troubleshoot cache issues](cache-how-to-troubleshoot.md)
### [Set alerts for exceptions](cache-how-to-monitor.md#operations-and-alerts)
## Scale
### [Update to a different size and tier](cache-how-to-scale.md)
### [Scale in/out with Redis cluster](cache-how-to-premium-clustering.md)
# Reference
## [PowerShell](/powershell/module/azurerm.rediscache)
## [Azure CLI 2.0 Preview](/cli/azure/redis)
## [.NET](/dotnet/api/microsoft.azure.management.redis)
## [Java](/java/api/com.microsoft.azure.management.redis._redis_cache)
## [Redis clients](http://redis.io/clients)
## [Redis commands](http://redis.io/commands#)
## [REST](https://docs.microsoft.com/rest/api/redis/)
# Resources
## [Redis Cache samples](cache-redis-samples.md)
## [Pricing](https://azure.microsoft.com/pricing/details/cache/)
43.114754
125
0.741825
yue_Hant
0.472952
c9678a2c5154e32134158e1be66fccc4d7bc940d
548
md
Markdown
README.md
andris9/sandpress
4d9ef7c1c21f9f2bf8ca90f42e96b62b4801146e
[ "MIT" ]
1
2019-02-08T21:32:36.000Z
2019-02-08T21:32:36.000Z
README.md
andris9/sandpress
4d9ef7c1c21f9f2bf8ca90f42e96b62b4801146e
[ "MIT" ]
null
null
null
README.md
andris9/sandpress
4d9ef7c1c21f9f2bf8ca90f42e96b62b4801146e
[ "MIT" ]
null
null
null
# Sandpress

[Sandpress](http://www.sandpress.org/) is a simple secure storage application. You provide an encryption key and a [Yubikey](http://yubico.com/) one-time password, and you can save and load encrypted data. All encryption is done on the browser side, and the encryption key is never revealed to the server. Additionally, nobody without the proper Yubikey can access the encrypted data, so storing something with Sandpress should be pretty secure: even the service provider that stores the encrypted data cannot read it.

## License

**MIT**
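The client-side model described above can be sketched in a few lines. This is a deliberately toy illustration, not Sandpress's actual code and not real cryptography (a real client would use an audited library); it only shows the architecture: the passphrase stays with the client, and the server only ever stores an opaque blob.

```python
# Toy sketch (NOT real cryptography) of client-side encryption: the key
# never leaves the client, and the server stores only ciphertext.
import hashlib
import hmac

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: str, plaintext: bytes) -> bytes:
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"demo-salt", 100_000)
    body = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    tag = hmac.new(key, body, hashlib.sha256).digest()   # integrity check
    return tag + body

def decrypt(passphrase: str, blob: bytes) -> bytes:
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"demo-salt", 100_000)
    tag, body = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        raise ValueError("wrong key or tampered data")
    return bytes(a ^ b for a, b in zip(body, _keystream(key, len(body))))

# The server would store only this opaque blob:
blob = encrypt("correct horse", b"my secret note")
assert decrypt("correct horse", blob) == b"my secret note"
```

Because decryption with the wrong passphrase fails the integrity check, the storage provider holding `blob` learns nothing about its contents.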
60.888889
308
0.786496
eng_Latn
0.997667
c96792d7d12eabf6a7b45675e742386423fef1d9
175
md
Markdown
iris_example/README.md
praisethemoon/whale-core
a62437dc4236abe1bd5c421ec4084ec69971662d
[ "BSD-3-Clause" ]
4
2016-10-03T18:16:24.000Z
2018-07-17T13:10:51.000Z
iris_example/README.md
praisethemoon/whale-core
a62437dc4236abe1bd5c421ec4084ec69971662d
[ "BSD-3-Clause" ]
null
null
null
iris_example/README.md
praisethemoon/whale-core
a62437dc4236abe1bd5c421ec4084ec69971662d
[ "BSD-3-Clause" ]
1
2021-11-09T08:37:00.000Z
2021-11-09T08:37:00.000Z
# Iris example

Run from the root directory:

`lua iris_example/iris.lua`

Iris dataset from Kaggle: [https://www.kaggle.com/uciml/iris](https://www.kaggle.com/uciml/iris)
25
95
0.737143
eng_Latn
0.346499
c96804c79cce79d6339dd90d8f20d0560c1b5857
9,645
md
Markdown
README.ja.md
yogesh-nextgen/ScorpioBroker
e14f53e217da090021f75a897fe071568940230d
[ "BSD-3-Clause" ]
35
2019-10-04T15:24:53.000Z
2022-03-31T09:42:43.000Z
README.ja.md
yogesh-nextgen/ScorpioBroker
e14f53e217da090021f75a897fe071568940230d
[ "BSD-3-Clause" ]
185
2019-09-30T15:46:30.000Z
2022-03-30T11:40:03.000Z
README.ja.md
yogesh-nextgen/ScorpioBroker
e14f53e217da090021f75a897fe071568940230d
[ "BSD-3-Clause" ]
31
2019-09-30T17:04:15.000Z
2022-03-09T04:33:22.000Z
# <img src="./img/ScorpioLogo.svg" width="140" align="middle"> Scorpio NGSI-LD Broker

[![FIWARE Core](https://nexus.lab.fiware.org/static/badges/chapters/core.svg)](https://www.fiware.org/developers/catalogue/)
[![License: BSD-4-Clause](https://img.shields.io/badge/license-BSD%204%20Clause-blue.svg)](https://spdx.org/licenses/BSD-4-Clause.html)
[![Docker](https://img.shields.io/docker/pulls/scorpiobroker/scorpio.svg)](https://hub.docker.com/r/scorpiobroker/scorpio/)
[![fiware](https://nexus.lab.fiware.org/repository/raw/public/badges/stackoverflow/fiware.svg)](https://stackoverflow.com/questions/tagged/fiware)
[![NGSI LD](https://img.shields.io/badge/NGSI-LD-red.svg)](https://www.etsi.org/deliver/etsi_gs/CIM/001_099/009/01.02.02_60/gs_CIM009v010202p.pdf)
<br>
[![Documentation badge](https://img.shields.io/readthedocs/scorpio.svg)](https://scorpio.readthedocs.io/en/latest/?badge=latest)
![Status](https://nexus.lab.fiware.org/static/badges/statuses/full.svg)
![Travis-CI](https://travis-ci.org/ScorpioBroker/ScorpioBroker.svg?branch=master)

Scorpio is an NGSI-LD compliant context broker developed by NEC Laboratories Europe and NEC Technologies India. It implements the full [NGSI-LD API](https://www.etsi.org/deliver/etsi_gs/CIM/001_099/009/01.02.02_60/gs_CIM009v010202p.pdf) as specified by the ETSI Industry Specification Group (ETSI ISG) on cross-cutting Context Information Management ([ETSI ISG CIM](https://www.etsi.org/committee/cim)).

The NGSI-LD API enables the management, access, and discovery of context information. Context information consists of entities (such as buildings) together with their properties (such as address and geographic location) and relationships (such as owner). With Scorpio, applications and services can request context information: what they need, when they need it, and how they need it.

The NGSI-LD API makes it possible to:

- Create, update, append, and delete context information.
- Query context information, with filtering, geographic scoping, paging, and more.
- Subscribe to changes in context information and receive asynchronous notifications.
- Register and discover sources of context information, which allows distributed and federated deployments to be built.

Scorpio is a FIWARE Generic Enabler, so it can be integrated as part of any platform "Powered by FIWARE". FIWARE is a curated framework of open source platform components which can be assembled together with other third-party platform components to accelerate the development of Smart Solutions. The roadmap of this FIWARE GE is described [here](./docs/roadmap.ja.md).

For more information, see the [FIWARE developers](https://developers.fiware.org/) website and the [FIWARE](https://fiware.org/) website. The complete list of FIWARE GEs and Incubated FIWARE GEs can be found in the [FIWARE Catalogue](https://catalogue.fiware.org/).

| :books: [Documentation](https://scorpio.rtfd.io/) | :mortar_board: [Academy](https://fiware-academy.readthedocs.io/en/latest/core/scorpio) | :whale: [Docker Hub](https://hub.docker.com/r/scorpiobroker/scorpio/) | :dart: [Roadmap](./docs/roadmap.ja.md) |
| ------------------------------------------------- | --------------------------------------------------------------------- | --------------------------------------------------------------------- | --------------------------------------------------------------------- |

## Contents

- [Background](#background)
- [Installation and building](#installation-and-building)
- [Usage](#usage)
- [API walkthrough](#api-walkthrough)
- [Tests](#tests)
- [Further resources](#further-resources)
- [Acknowledgements](#acknowledgements)
- [Credit where credit is due](#credit-where-credit-is-due)
- [Code of conduct](#code-of-conduct)
- [License](#license)

<a name="background">

## Background

Scorpio is an NGSI-LD broker that enables the management of and requests for context information. It supports the following functionality:

- Context producers can manage context information, i.e. create, update, append, and delete it.
- Context consumers can discover relevant entities, either by identifying them or by providing an entity type, and request the context information they need by filtering according to property values, existing relationships, and geographic scope provided as GeoJSON features.
- Two interaction styles are supported: synchronous query/response and asynchronous subscribe/notify, where notifications can be based on changes to properties or relationships, or on a fixed time interval.
- Scorpio implements NGSI-LD's optional temporal interface for requesting historical information, such as the property values measured within a given time interval.
- Scorpio supports multiple deployment configurations, including centralized, distributed, and federated ones. In addition to the context producers described above, there can be context sources that themselves implement the NGSI-LD interface. These context sources can register themselves with the information they can provide on request (not the information (values) themselves). A Scorpio Broker in a distributed setup can use the registrations to discover the context sources that may have information for answering a request, and then request and aggregate the information from the different context sources and provide it to the requesting context consumer.
- In a federated setup, the context sources can themselves be NGSI-LD brokers. Federation can be used to combine the information of multiple providers that want to (partially) share their information. The important difference typically lies in the granularity of the registrations, e.g. "I have information about entities of type building within a geographic area" instead of "I have information about building A".
- Scorpio supports all of the deployment configurations described above, and thus offers scalability and the possibility to extend scenarios in an evolutionary way, e.g. combining two formerly separate deployments, or using different brokers for scalability reasons. This is fully transparent to context consumers, who can still use a single access point.

<a name="installation-and-building">

## Installation and building

Scorpio is developed in Java using Spring Cloud as the microservice framework and Apache Maven as the build tool. It requires Apache Kafka as the message bus and Postgres with the PostGIS extension as the database.

Information on how to install the software components required by Scorpio can be found in the [installation guide](./docs/ja/source/installationGuide.rst). For building and running Scorpio, see the [build and run Scorpio guide](./docs/ja/source/buildScorpio.rst).

<a name="usage">

## Usage

By default, the broker runs on port 9090, and the base URL for interacting with it is http://localhost:9090/ngsi-ld/v1/.

### A simple example

In general, you can create an entity by sending an HTTP POST request to http://localhost:9090/ngsi-ld/v1/entities/ with a payload like this:

```json
{
    "id": "urn:ngsi-ld:testunit:123",
    "type": "AirQualityObserved",
    "dateObserved": {
        "type": "Property",
        "value": {
            "@type": "DateTime",
            "@value": "2018-08-07T12:00:00Z"
        }
    },
    "NO2": {
        "type": "Property",
        "value": 22,
        "unitCode": "GP",
        "accuracy": {
            "type": "Property",
            "value": 0.95
        }
    },
    "refPointOfInterest": {
        "type": "Relationship",
        "object": "urn:ngsi-ld:PointOfInterest:RZ:MainSquare"
    },
    "@context": [
        "https://schema.lab.fiware.org/ld/context",
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
    ]
}
```

In this example the `@context` is in the payload, so the `Content-Type` header has to be set to `application/ld+json`.

To receive an entity, send an HTTP GET like this:

`http://localhost:9090/ngsi-ld/v1/entities/<entityId>`

or query by sending a GET like this:

```text
http://localhost:9090/ngsi-ld/v1/entities/?type=Vehicle&limit=2
Accept: application/ld+json
Link: <http://<HOSTNAME_OF_WHERE_YOU_HAVE_AN_ATCONTEXT>/aggregatedContext.jsonld>; rel="http://www.w3.org/ns/json-ld#context";type="application/ld+json"
```

<a name="api-walkthrough">

## API walkthrough

A detailed example of what you can do with the NGSI-LD API provided by Scorpio can be found in the [API walkthrough](./docs/ja/source/API_walkthrough.rst).

<a name="tests">

## Tests

Scorpio has two sets of tests: JUnit for unit tests, and the npm-test-based FIWARE NGSI-LD test suite for system tests. For more details on testing, see the [testing guide](./docs/ja/source/testing.rst).

<a name="further-resources">

## Further resources

For more details on NGSI-LD or JSON-LD, see:

- [ETSI NGSI-LD specification](https://www.etsi.org/deliver/etsi_gs/CIM/001_099/009/01.02.02_60/gs_CIM009v010202p.pdf)
- [ETSI NGSI-LD primer](https://www.etsi.org/deliver/etsi_gr/CIM/001_099/008/01.01.01_60/gr_CIM008v010101p.pdf)
- [JSON-LD website](https://json-ld.org/)
- [FIWARE Academy Scorpio](https://fiware-academy.readthedocs.io/en/latest/core/scorpio/index.html)
- [FIWARE 601: Introduction to Linked Data](https://fiware-tutorials.readthedocs.io/en/latest/linked-data)
- [FIWARE 602: Linked Data Relationships and Data Models](https://fiware-tutorials.readthedocs.io/en/latest/relationships-linked-data)
- [FIWARE global summit: The Scorpio NGSI-LD Broker. Features and supported architectures](https://www.slideshare.net/FI-WARE/fiware-global-summit-the-scorpio-ngsild-broker-features-and-supported-architectures)
- [FIWARE global summit: NGSI-LD. An evolution from NGSI V2](https://www.slideshare.net/FI-WARE/fiware-global-summit-ngsild-an-evolution-from-ngsiv2)

A set of example calls is available as a Postman collection in the Examples folder. The examples use two variables:

- gatewayServer, which should be `<brokerIP>:<brokerPort>`; if you run with the default settings locally, this is localhost:9090.
- link, for the examples that provide the @context via the Link header. We host an example @context for the examples; set the link to https://raw.githubusercontent.com/ScorpioBroker/ScorpioBroker/master/Examples/index.json.

<a name="acknowledgements">

## Acknowledgements

### EU Acknowledgement

This activity has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No. 731993 (Autopilot), No. 814918 (Fed4IoT), and No. 767498 (MIDIH, Open Call (MoLe)).

<img src="https://raw.githubusercontent.com/ScorpioBroker/ScorpioBroker/master/img/flag_yellow_low.jpg" width="160">

- [AUTOPILOT project: Automated driving Progressed by Internet Of Things](https://autopilot-project.eu/)

<img src="https://raw.githubusercontent.com/ScorpioBroker/ScorpioBroker/master/img/autopilot.png" width="160">

- [Fed4IoT project](https://fed4iot.org/)
- [MIDIH Project](https://midih.eu/), Open Call (MoLe)

<a name="credit-where-credit-is-due">

## Credit where credit is due

Thank you to everyone who has contributed to Scorpio: the whole Scorpio development team and all external contributors. For the full list, see the [CREDITS](./CREDITS) file.

<a name="code-of-conduct">

## Code of conduct

As part of the FIWARE community, we do our best to follow the [FIWARE code of conduct](https://www.fiware.org/foundation/code-of-conduct/) and expect the same from our contributors. This includes pull requests, issues, comments, code, and in-code comments.

As the owners of this repository, we restrict communication here strictly to Scorpio and NGSI-LD related topics. We are all humans from different cultural backgrounds, with different quirks, habits, and manners, so misunderstandings can happen. To move Scorpio and NGSI-LD forward, there shall be no doubt that communication is done in good faith, and we expect the same from contributors. However, if someone repeatedly tries to provoke, attack, derail discussions, or mock anyone, we will exercise our house rights and put an end to it.

If there is a dispute to settle, we as the owners of this repository have the final word.

<a name="license">

## License

Scorpio is licensed under [BSD-4-Clause](https://spdx.org/licenses/BSD-4-Clause.html). The [Contribution license](CONTRIBUTING.ja.md) applies to contributions.

© 2020 NEC Laboratories Europe, NEC Technologies India
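As an appendix to the simple example earlier in this README, the entity-creation request can be sketched with the Python standard library. This is an illustrative client-side sketch, not part of Scorpio itself: the broker URL assumes a default local instance, and `urlopen()` is deliberately left to the caller so the request can be inspected without a running broker.

```python
# Sketch of the entity-creation call from the simple example above,
# using only the Python standard library.
import json
import urllib.request

BROKER = "http://localhost:9090/ngsi-ld/v1"   # default local Scorpio instance

def build_create_entity_request(entity: dict) -> urllib.request.Request:
    # An @context inside the payload requires the application/ld+json content type.
    return urllib.request.Request(
        url=f"{BROKER}/entities/",
        data=json.dumps(entity).encode("utf-8"),
        headers={"Content-Type": "application/ld+json"},
        method="POST",
    )

entity = {
    "id": "urn:ngsi-ld:testunit:123",
    "type": "AirQualityObserved",
    "NO2": {"type": "Property", "value": 22, "unitCode": "GP"},
    "@context": [
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
    ],
}

req = build_create_entity_request(entity)
print(req.get_method(), req.full_url)
# POST http://localhost:9090/ngsi-ld/v1/entities/
# To actually send it against a running broker: urllib.request.urlopen(req)
```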
40.020747
269
0.739139
yue_Hant
0.707658
c968d0fa7f32452e446c9025cb5157e0ce1d6d44
632
md
Markdown
src/markdown-pages/personal projects/QUT Linker.md
matt-downs/resume
96e6b8b4c71fedde696d663c4d52f12e50c2289e
[ "MIT" ]
null
null
null
src/markdown-pages/personal projects/QUT Linker.md
matt-downs/resume
96e6b8b4c71fedde696d663c4d52f12e50c2289e
[ "MIT" ]
1
2020-01-09T08:51:19.000Z
2020-03-28T00:41:08.000Z
src/markdown-pages/personal projects/QUT Linker.md
matt-downs/resume
96e6b8b4c71fedde696d663c4d52f12e50c2289e
[ "MIT" ]
null
null
null
---
type: project
title: QUT Linker Chrome extension
technologies:
  - vue
  - jquery
  - chrome
projectUrl: https://chrome.google.com/webstore/detail/qut-linker/mijlekehieejblbkoanokjhbokikhddo?hl=en
sourceUrl:
startDate: 2017-01-01
---

Throughout my studies I found myself regularly wasting time searching for commonly used services on the QUT website. I created this simple Chrome extension to solve the problem by providing a customisable dropdown of helpful links to QUT services such as email and Blackboard. I released it on the Chrome Web Store for a few friends to use, and it is now regularly used by over 300 people. Cool!
45.142857
392
0.795886
eng_Latn
0.994248
c968fc7d55b8f6e92bcefe7a5834f07b972a9572
1,134
md
Markdown
docs/install/includes/install_get_support_md.md
ailen0ada/visualstudio-docs.ja-jp
12f304d1399580c598406cf74a284144471e88c0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/install/includes/install_get_support_md.md
ailen0ada/visualstudio-docs.ja-jp
12f304d1399580c598406cf74a284144471e88c0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/install/includes/install_get_support_md.md
ailen0ada/visualstudio-docs.ja-jp
12f304d1399580c598406cf74a284144471e88c0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
ms.topic: include
ms.openlocfilehash: 0c7400338692b5f0eb8608c82d5d468eaec8ea1f
ms.sourcegitcommit: db94ca7a621879f98d4c6aeefd5e27da1091a742
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/13/2018
ms.locfileid: "40100247"
---
## <a name="get-support"></a>Get support

Sometimes, things can go wrong. If your Visual Studio installation fails, see [Troubleshoot Visual Studio 2017 installation and upgrade issues](../troubleshooting-installation-issues.md) for step-by-step guidance.

We also offer a [**live chat**](https://visualstudio.microsoft.com/vs/support/#talktous) (English only) support option for installation-related issues.

Here are a few more support options:

* Report product issues to Microsoft via the [Report a Problem](../../ide/how-to-report-a-problem-with-visual-studio-2017.md) tool that appears both in the Visual Studio Installer and in the Visual Studio IDE.
* Share a product suggestion on [UserVoice](https://visualstudio.uservoice.com/forums/121579).
* Track product issues and find answers in the [Visual Studio Developer Community](https://developercommunity.visualstudio.com/).
* Engage with Microsoft and other Visual Studio developers in the [Visual Studio conversation in the Gitter community](https://gitter.im/Microsoft/VisualStudio), using your [GitHub](https://github.com/) account.
51.545455
179
0.804233
yue_Hant
0.348416
c969369d391ecfb7d5f46a857d7fcdc68de70fe2
5,932
md
Markdown
docs/framework/winforms/controls/how-to-position-controls-on-windows-forms.md
AlejandraHM/docs.es-es
5f5b056e12f9a0bcccbbbef5e183657d898b9324
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/winforms/controls/how-to-position-controls-on-windows-forms.md
AlejandraHM/docs.es-es
5f5b056e12f9a0bcccbbbef5e183657d898b9324
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/winforms/controls/how-to-position-controls-on-windows-forms.md
AlejandraHM/docs.es-es
5f5b056e12f9a0bcccbbbef5e183657d898b9324
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'How to: Position controls on Windows Forms'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
- cpp
f1_keywords:
- Location
- Location.Y
- Location.X
helpviewer_keywords:
- controls [Windows Forms]
- controls [Windows Forms], moving
- snaplines
- controls [Windows Forms], positioning
ms.assetid: 4693977e-34a4-4f19-8221-68c3120c2b2b
ms.openlocfilehash: 2baf311f04209e988f2f5dd562e247ee13ed59ef
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 01/23/2019
ms.locfileid: "54607165"
---
# <a name="how-to-position-controls-on-windows-forms"></a>How to: Position controls on Windows Forms

To position controls, use the Windows Forms Designer or specify the <xref:System.Windows.Forms.Control.Location%2A> property.

> [!NOTE]
> The dialog boxes and menu commands you see might differ from those described in Help, depending on your active settings or edition. To change your settings, choose **Import and Export Settings** on the **Tools** menu. For more information, see [Personalize the Visual Studio IDE](/visualstudio/ide/personalizing-the-visual-studio-ide).

### <a name="to-position-a-control-on-the-design-surface-of-the-windows-forms-designer"></a>To position a control on the design surface of the Windows Forms Designer

- Drag the control to the appropriate location with the mouse.

  > [!NOTE]
  > Select the control and move it with the arrow keys to position it more precisely. Also, *snaplines* help you position controls precisely on the form. For more information, see [Walkthrough: Arranging Controls on Windows Forms Using Snaplines](../../../../docs/framework/winforms/controls/walkthrough-arranging-controls-on-windows-forms-using-snaplines.md).
### <a name="to-position-a-control-using-the-properties-window"></a>To position a control using the Properties window

1. Click the control you want to position.

2. In the **Properties** window, enter values for the <xref:System.Windows.Forms.Control.Location%2A> property, separated by a comma, to position the control within its container. The first number (X) is the distance from the left border of the container; the second number (Y) is the distance from the top border of the container area, measured in pixels.

> [!NOTE]
> You can expand the <xref:System.Windows.Forms.Control.Location%2A> property to type the **X** and **Y** values individually.

### <a name="to-position-a-control-programmatically"></a>To position a control programmatically

1. Set the <xref:System.Windows.Forms.Control.Location%2A> property of the control to a <xref:System.Drawing.Point>.

    ```vb
    Button1.Location = New Point(100, 100)
    ```

    ```csharp
    button1.Location = new Point(100, 100);
    ```

    ```cpp
    button1->Location = Point(100, 100);
    ```

2. Change the X coordinate of the control's location using the <xref:System.Windows.Forms.Control.Left%2A> subproperty.

    ```vb
    Button1.Left = 300
    ```

    ```csharp
    button1.Left = 300;
    ```

    ```cpp
    button1->Left = 300;
    ```

### <a name="to-increment-a-controls-location-programmatically"></a>To increment a control's location programmatically

1. Set the <xref:System.Windows.Forms.Control.Left%2A> subproperty to increment the X coordinate of the control.

    ```vb
    Button1.Left += 200
    ```

    ```csharp
    button1.Left += 200;
    ```

    ```cpp
    button1->Left += 200;
    ```

> [!NOTE]
> Use the <xref:System.Windows.Forms.Control.Location%2A> property to set a control's X and Y positions at the same time. To set a position individually, use the control's <xref:System.Windows.Forms.Control.Left%2A> (**X**) or <xref:System.Windows.Forms.Control.Top%2A> (**Y**) subproperty.
Do not try to implicitly set the X and Y coordinates of the <xref:System.Drawing.Point> structure that represents the button's location, because this structure contains a copy of the button's coordinates.

## <a name="see-also"></a>See also

- [Windows Forms Controls](../../../../docs/framework/winforms/controls/index.md)
- [Walkthrough: Arrange controls on Windows Forms using snaplines](../../../../docs/framework/winforms/controls/walkthrough-arranging-controls-on-windows-forms-using-snaplines.md)
- [Walkthrough: Arrange controls on Windows Forms using TableLayoutPanel](../../../../docs/framework/winforms/controls/walkthrough-arranging-controls-on-windows-forms-using-a-tablelayoutpanel.md)
- [Walkthrough: Arrange controls on Windows Forms using FlowLayoutPanel](../../../../docs/framework/winforms/controls/walkthrough-arranging-controls-on-windows-forms-using-a-flowlayoutpanel.md)
- [Arrange controls on Windows Forms](../../../../docs/framework/winforms/controls/arranging-controls-on-windows-forms.md)
- [Label individual Windows Forms controls and provide shortcuts to them](../../../../docs/framework/winforms/controls/labeling-individual-windows-forms-controls-and-providing-shortcuts-to-them.md)
- [Controls to use on Windows Forms](../../../../docs/framework/winforms/controls/controls-to-use-on-windows-forms.md)
- [Windows Forms controls by function](../../../../docs/framework/winforms/controls/windows-forms-controls-by-function.md)
- [How to: Set the screen location of Windows Forms](https://msdn.microsoft.com/library/cb023ab7-dea7-4284-9aa6-8c03c59b60c6)
54.925926
534
0.733648
spa_Latn
0.784539
c9693939a876ebaeb21de590e7d5fbf72b96c05d
10,320
md
Markdown
articles/iot-hub/iot-hub-python-python-file-upload.md
pmsousa/azure-docs.pt-pt
bc487beff48df00493484663c200e44d4b24cb18
[ "CC-BY-4.0", "MIT" ]
15
2017-08-28T07:46:17.000Z
2022-02-03T12:49:15.000Z
articles/iot-hub/iot-hub-python-python-file-upload.md
pmsousa/azure-docs.pt-pt
bc487beff48df00493484663c200e44d4b24cb18
[ "CC-BY-4.0", "MIT" ]
407
2018-06-14T16:12:48.000Z
2021-06-02T16:08:13.000Z
articles/iot-hub/iot-hub-python-python-file-upload.md
pmsousa/azure-docs.pt-pt
bc487beff48df00493484663c200e44d4b24cb18
[ "CC-BY-4.0", "MIT" ]
17
2017-10-04T22:53:31.000Z
2022-03-10T16:41:59.000Z
---
title: Upload files from devices to Azure IoT Hub with Python | Microsoft Docs
description: How to upload files from a device to the cloud using the Azure IoT device SDK for Python. Uploaded files are stored in an Azure storage blob container.
author: robinsh
ms.service: iot-hub
services: iot-hub
ms.devlang: python
ms.topic: conceptual
ms.date: 03/31/2020
ms.author: robinsh
ms.custom: mqtt, devx-track-python
ms.openlocfilehash: 77d51b2c839a64567838fa4d6308d203a6bb8b82
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 03/29/2021
ms.locfileid: "102501146"
---
# <a name="upload-files-from-your-device-to-the-cloud-with-iot-hub-python"></a>Upload files from your device to the cloud with IoT Hub (Python)

[!INCLUDE [iot-hub-file-upload-language-selector](../../includes/iot-hub-file-upload-language-selector.md)]

This article shows how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml). This tutorial shows you how to:

* Securely provide a storage container for uploading a file.
* Use the Python client to upload a file through your IoT hub.

The [Send telemetry from a device to an IoT hub](quickstart-send-telemetry-python.md) quickstart demonstrates the basic device-to-cloud messaging functionality of IoT Hub. However, in some scenarios you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. When you need to upload files from a device, you can still use the security and reliability of IoT Hub.
At the end of this tutorial you run the Python console app:

* **FileUpload.py**, which uploads a file to storage using the Python Device SDK.

[!INCLUDE [iot-hub-include-python-sdk-note](../../includes/iot-hub-include-python-sdk-note.md)]

[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)]

## <a name="prerequisites"></a>Prerequisites

[!INCLUDE [iot-hub-include-python-v2-async-installation-notes](../../includes/iot-hub-include-python-v2-async-installation-notes.md)]

* Make sure that port 8883 is open in your firewall. The device sample in this article uses the MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).

[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-associate-storage.md)]

## <a name="upload-a-file-from-a-device-app"></a>Upload a file from a device app

In this section, you create the device app to upload a file to your IoT hub.

1. At your command prompt, run the following command to install the **azure-iot-device** package. You use this package to coordinate the file upload with your IoT hub.

    ```cmd/sh
    pip install azure-iot-device
    ```

1. At your command prompt, run the following command to install the [**azure.storage.blob**](https://pypi.org/project/azure-storage-blob/) package. You use this package to perform the file upload.

    ```cmd/sh
    pip install azure.storage.blob
    ```

1. Create a test file that you'll upload to blob storage.

1. Using a text editor, create a **FileUpload.py** file in your working folder.

1.
Add the following `import` statements and variables at the start of the **FileUpload.py** file.

    ```python
    import os
    import asyncio
    from azure.iot.device.aio import IoTHubDeviceClient
    from azure.core.exceptions import AzureError
    from azure.storage.blob import BlobClient

    CONNECTION_STRING = "[Device Connection String]"
    PATH_TO_FILE = r"[Full path to local file]"
    ```

1. In your file, replace `[Device Connection String]` with the connection string of your IoT hub device. Replace `[Full path to local file]` with the path to the test file that you created, or any file on your device that you want to upload.

1. Create a function to upload the file to blob storage:

    ```python
    async def store_blob(blob_info, file_name):
        try:
            sas_url = "https://{}/{}/{}{}".format(
                blob_info["hostName"],
                blob_info["containerName"],
                blob_info["blobName"],
                blob_info["sasToken"]
            )

            print("\nUploading file: {} to Azure Storage as blob: {} in container {}\n".format(file_name, blob_info["blobName"], blob_info["containerName"]))

            # Upload the specified file
            with BlobClient.from_blob_url(sas_url) as blob_client:
                with open(file_name, "rb") as f:
                    result = blob_client.upload_blob(f, overwrite=True)
                    return (True, result)

        except FileNotFoundError as ex:
            # catch file not found and add an HTTP status code to return in notification to IoT Hub
            ex.status_code = 404
            return (False, ex)

        except AzureError as ex:
            # catch Azure errors that might result from the upload operation
            return (False, ex)
    ```

    This function parses the *blob_info* structure passed into it to create a URL that it uses to initialize an [azure.storage.blob.BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient). Then it uploads your file to Azure blob storage using this client.

1.
Add the following code to connect the client and upload the file:

    ```python
    async def main():
        try:
            print ( "IoT Hub file upload sample, press Ctrl-C to exit" )

            conn_str = CONNECTION_STRING
            file_name = PATH_TO_FILE
            blob_name = os.path.basename(file_name)

            device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)

            # Connect the client
            await device_client.connect()

            # Get the storage info for the blob
            storage_info = await device_client.get_storage_info_for_blob(blob_name)

            # Upload to blob
            success, result = await store_blob(storage_info, file_name)

            if success == True:
                print("Upload succeeded. Result is: \n")
                print(result)
                print()

                await device_client.notify_blob_upload_status(
                    storage_info["correlationId"], True, 200, "OK: {}".format(file_name)
                )

            else :
                # If the upload was not successful, the result is the exception object
                print("Upload failed. Exception is: \n")
                print(result)
                print()

                await device_client.notify_blob_upload_status(
                    storage_info["correlationId"], False, result.status_code, str(result)
                )

        except Exception as ex:
            print("\nException:")
            print(ex)

        except KeyboardInterrupt:
            print ( "\nIoTHubDeviceClient sample stopped" )

        finally:
            # Finally, disconnect the client
            await device_client.disconnect()

    if __name__ == "__main__":
        asyncio.run(main())
        #loop = asyncio.get_event_loop()
        #loop.run_until_complete(main())
        #loop.close()
    ```

    This code creates an async **IoTHubDeviceClient** and uses the following APIs to manage the file upload with your IoT hub:

    * **get_storage_info_for_blob** gets information from your IoT hub about the linked Storage Account that you created previously. This information includes the hostname, container name, blob name, and a SAS token. The storage info is passed to the **store_blob** function (created in the previous step), so the **BlobClient** in that function can authenticate with Azure storage.
The **get_storage_info_for_blob** method also returns a correlation_id, which is used in the **notify_blob_upload_status** method. The correlation_id is IoT Hub's way of marking which blob you're working on.

* **notify_blob_upload_status** notifies IoT Hub of the status of your blob storage operation. You pass in the correlation_id obtained by the **get_storage_info_for_blob** method. It's used by IoT Hub to notify any service that may be listening for a notification on the status of the file upload task.

1. Save and close the **UploadFile.py** file.

## <a name="run-the-application"></a>Run the application

Now you're ready to run the application.

1. At a command prompt in your working folder, run the following command:

    ```cmd/sh
    python FileUpload.py
    ```

2. The following image shows the output from the **FileUpload** app:

    ![Output from simulated device app](./media/iot-hub-python-python-file-upload/run-device-app.png)

3. You can use the portal to view the uploaded file in the storage container you configured:

    ![Uploaded file](./media/iot-hub-python-python-file-upload/view-blob.png)

## <a name="next-steps"></a>Next steps

In this tutorial, you learned how to use the file upload capabilities of IoT Hub to simplify file uploads from devices. You can continue to explore IoT Hub features and scenarios with the following articles:

* [Create an IoT hub programmatically](iot-hub-rm-template-powershell.md)
* [Introduction to C SDK](iot-hub-device-sdk-c-intro.md)
* [Azure IoT SDKs](iot-hub-devguide-sdks.md)

Learn more about Azure Blob Storage with the following links:

* [Azure Blob Storage documentation](../storage/blobs/index.yml)
* [Azure Blob Storage for Python API documentation](/python/api/overview/azure/storage-blob-readme)
48.224299
611
0.712209
por_Latn
0.96757
c969ec889ca3e8de038ba06be3ef474e0081c1ee
13,049
md
Markdown
articles/active-directory/hybrid/how-to-connect-import-export-config.md
FernandaOchoa/azure-docs.es-es
be97939ec75bbf148375cc2036b825efecb8a490
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/hybrid/how-to-connect-import-export-config.md
FernandaOchoa/azure-docs.es-es
be97939ec75bbf148375cc2036b825efecb8a490
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/hybrid/how-to-connect-import-export-config.md
FernandaOchoa/azure-docs.es-es
be97939ec75bbf148375cc2036b825efecb8a490
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: How to import and export Azure AD Connect configuration settings
description: This article describes frequently asked questions about cloud provisioning.
services: active-directory
author: billmath
manager: daveba
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
ms.date: 07/13/2020
ms.subservice: hybrid
ms.author: billmath
ms.collection: M365-identity-device-management
ms.openlocfilehash: da80af9fe598186fa25d59601c9fa4faccb4286a
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/09/2020
ms.locfileid: "87447050"
---
# <a name="import-and-export-azure-ad-connect-configuration-settings-public-preview"></a>Import and export Azure AD Connect configuration settings (public preview)

Azure Active Directory (Azure AD) Connect deployments vary from a single-forest Express-mode installation to complex deployments that synchronize across multiple forests by using custom synchronization rules. Because of the large number of configuration options and mechanisms, it's essential to understand what settings are in effect so that you can quickly deploy a server with an identical configuration. This feature introduces the ability to catalog the configuration of a given synchronization server and to import the settings into a new deployment. Different synchronization configuration snapshots can be compared to easily visualize the differences between two servers, or of the same server over time.

Each time the configuration is changed from the Azure AD Connect wizard, a new time-stamped JSON settings file is automatically exported to **%ProgramData%\AADConnect**.
The settings file name is of the form **Applied-SynchronizationPolicy-*.JSON**, where the last part of the file name is a time stamp.

> [!IMPORTANT]
> Only changes made by Azure AD Connect are automatically exported. Any change made by using PowerShell, the Synchronization Service Manager, or the Synchronization Rules Editor must be exported on demand as needed to maintain an up-to-date copy. Export on demand can also be used to place a copy of the settings in a secure location for disaster recovery purposes.

## <a name="export-azure-ad-connect-settings"></a>Export Azure AD Connect settings

To view a summary of your configuration settings, open the Azure AD Connect tool and select the additional task named **View or export current configuration**. A quick summary of your settings is shown, along with the ability to export the full configuration of your server.

By default, the settings are exported to **%ProgramData%\AADConnect**. You can also choose to save the settings to a protected location to ensure availability in a disaster. Settings are exported using the JSON file format and shouldn't be created or edited by hand, to ensure logical consistency. Importing a hand-created or hand-edited file isn't supported and might lead to unexpected results.

## <a name="import-azure-ad-connect-settings"></a>Import Azure AD Connect settings

To import previously exported settings:

1. Install **Azure AD Connect** on a new server.
1. Select the **Customize** option after the **Welcome** page.
1. Select **Import synchronization settings**. Browse for the previously exported JSON settings file.
1. Select **Install**.
![Screenshot that shows the screen for installing required components](media/how-to-connect-import-export-config/import1.png)

> [!NOTE]
> Override settings on this page, such as the use of SQL Server instead of LocalDB or the use of an existing service account instead of the default VSA. These settings aren't imported from the configuration settings file. They're there for information and comparison purposes.

### <a name="import-installation-experience"></a>Import installation experience

The import installation experience is intentionally kept simple, with minimal inputs from the user, to facilitate easy reproducibility of an existing server. Here are the only changes that can be made during the installation experience. All other changes can be made after installation from the Azure AD Connect wizard:

- **Azure Active Directory credentials**: The account name of the Azure global administrator used to configure the original server is suggested by default. It *must* be changed if you want to synchronize information to a new directory.
- **User sign-in**: The sign-on options configured for your original server are selected by default and will automatically prompt for credentials or other information needed during configuration. In rare cases, you might need to set up a server with different options to avoid changing the behavior of the active server. Otherwise, select **Next** to use the same settings.
- **On-premises directory credentials**: For each on-premises directory included in your synchronization settings, you must provide credentials to create a synchronization account, or supply a custom synchronization account that was created previously.
This procedure is identical to the clean install experience, except that you can't add or remove directories.
- **Configuration options**: As with a clean install, you can choose to configure the initial settings for whether to start automatic synchronization or to enable Staging mode. The main difference is that Staging mode is intentionally enabled by default, to allow comparison of the configuration and synchronization results before actively exporting the results to Azure.

![Screenshot that shows the screen for connecting your directories](media/how-to-connect-import-export-config/import2.png)

> [!NOTE]
> Only one synchronization server can be in the primary role and actively export configuration changes to Azure. All other servers must be placed in Staging mode.

## <a name="migrate-settings-from-an-existing-server"></a>Migrate settings from an existing server

If an existing server doesn't support settings management, you can either choose to upgrade the server in place or migrate the settings for use on a new staging server.

Migration requires running a PowerShell script that extracts the existing settings for use in a new installation. Use this method to catalog the settings of your existing server and then apply them to a newly installed staging server. Comparing the settings of the original server with those of the newly created server lets you quickly visualize the changes between servers. As always, follow your organization's certification process to make sure no additional configuration is required.

### <a name="migration-process"></a>Migration process

To migrate the settings:

1.
Start **AzureADConnect.msi** on the new staging server, and stop at the Azure AD Connect **Welcome** page.

1. Copy **MigrateSettings.ps1** from the Microsoft Azure AD Connect\Tools directory to a location on the existing server. An example is C:\setup, where "setup" is a directory created on the existing server.

    ![Screenshot that shows Azure AD Connect directories.](media/how-to-connect-import-export-config/migrate1.png)

1. Run the script as shown below, and save the entire down-level server configuration directory. Copy this directory to the new staging server. You must copy the entire **Exported-ServerConfiguration-** * folder to the new server.

    ![Screenshot that shows the script in Windows PowerShell.](media/how-to-connect-import-export-config/migrate2.png)
    ![Screenshot that shows copying the Exported-ServerConfiguration-* folder.](media/how-to-connect-import-export-config/migrate3.png)

1. Start **Azure AD Connect** by double-clicking its icon on the desktop. Accept the Microsoft Software License Terms, and on the next page select **Customize**.

1. Select the **Import synchronization settings** check box. Select **Browse** to browse for the copied-over Exported-ServerConfiguration-* folder. Select MigratedPolicy.json to import the migrated settings.

    ![Screenshot that shows the option to import synchronization settings.](media/how-to-connect-import-export-config/migrate4.png)

## <a name="post-installation-verification"></a>Post-installation verification

Comparing the originally imported settings file with the exported settings file of the newly deployed server is an essential step in understanding any differences between the intended and the resulting deployment.
Using your favorite side-by-side text comparison application gives you an instant visualization that quickly highlights any desired or accidental changes.

While many formerly manual configuration steps have now been eliminated, you should still follow your organization's certification process to make sure no additional configuration is required. This situation can arise if you use advanced settings, which aren't currently captured in the public preview release of settings management.

Here are the known limitations:

- **Synchronization rules**: The precedence of a custom rule must be in the reserved range of 0 to 99 to avoid conflicts with Microsoft's standard rules. Placing a custom rule outside the reserved range can result in your custom rule being shifted around as standard rules are added to the configuration. A similar issue will occur if your configuration contains modified standard rules. Modifying a standard rule is discouraged, and the rule placement is likely to be incorrect.
- **Device writeback**: These settings are cataloged. They aren't currently applied during configuration. If device writeback was enabled on your original server, you must manually configure the feature on the newly deployed server.
- **Synchronized object types**: Although it's possible to constrain the list of synchronized object types (users, contacts, groups, and so on) by using the Synchronization Service Manager, this feature isn't currently supported via synchronization settings. After you complete the installation, you must manually reapply the advanced configuration.
- **Custom run profiles**: Although it's possible to modify the default set of run profiles by using the Synchronization Service Manager, this feature isn't currently supported via synchronization settings. After you complete the installation, you must manually reapply the advanced configuration.
- **Configuring the provisioning hierarchy**: This advanced feature of the Synchronization Service Manager isn't supported via synchronization settings. It must be manually reconfigured after you complete the initial deployment.
- **Active Directory Federation Services (AD FS) and PingFederate authentication**: The sign-on methods associated with these authentication features are automatically preselected. You must interactively supply all other required configuration parameters.
- **A disabled custom synchronization rule will be imported as "enabled"**: A disabled custom synchronization rule is imported as "enabled". Make sure to disable it on the new server as well.

## <a name="next-steps"></a>Next steps

- [Hardware and prerequisites](how-to-connect-install-prerequisites.md)
- [Express settings](how-to-connect-install-express.md)
- [Customized settings](how-to-connect-install-custom.md)
- [Install Azure AD Connect Health agents](how-to-connect-health-agent-install.md)
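The post-installation verification above boils down to diffing two exported JSON settings files. As a minimal, hypothetical sketch (the flattening helper, function names, and sample settings are illustrative — they are not part of Azure AD Connect, whose real exports live under %ProgramData%\AADConnect), a side-by-side comparison could look like:

```python
def flatten(value, prefix=""):
    """Flatten nested dicts/lists into {dotted-path: leaf-value} pairs."""
    items = {}
    if isinstance(value, dict):
        for k, v in value.items():
            items.update(flatten(v, f"{prefix}{k}."))
    elif isinstance(value, list):
        for i, v in enumerate(value):
            items.update(flatten(v, f"{prefix}{i}."))
    else:
        items[prefix.rstrip(".")] = value
    return items

def diff_settings(old, new):
    """Return {path: (old_value, new_value)} for every setting that differs."""
    fo, fn = flatten(old), flatten(new)
    return {k: (fo.get(k), fn.get(k))
            for k in sorted(set(fo) | set(fn))
            if fo.get(k) != fn.get(k)}

# Illustrative settings structures, not the real export schema.
original = {"sync": {"stagingMode": False, "forests": ["contoso.com"]}}
migrated = {"sync": {"stagingMode": True, "forests": ["contoso.com"]}}
for path, (a, b) in diff_settings(original, migrated).items():
    print(f"{path}: {a!r} -> {b!r}")
```

In practice you would `json.load` the two exported files before calling `diff_settings`; a dedicated diff tool works just as well, which is why the article only asks for "your favorite side-by-side text comparison application".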
117.558559
801
0.820063
spa_Latn
0.989895
c96a054aacb07593b8b53f8332a410a34a1bf836
149
md
Markdown
docs/raise_issue/ri_index.md
gld-docs/gld-docs.github.io
a26b0221b02a0f99ebde849fd5b716aa7619f71a
[ "MIT" ]
null
null
null
docs/raise_issue/ri_index.md
gld-docs/gld-docs.github.io
a26b0221b02a0f99ebde849fd5b716aa7619f71a
[ "MIT" ]
null
null
null
docs/raise_issue/ri_index.md
gld-docs/gld-docs.github.io
a26b0221b02a0f99ebde849fd5b716aa7619f71a
[ "MIT" ]
null
null
null
---
layout: default
title: Raise an Issue
nav_order: 7
---

## What is it, how does it help?

For anything else, contact the GLD Focal Point.
16.555556
47
0.66443
eng_Latn
0.995787
c96a12044fb1406681fd60c8af55d212c3161eed
1,150
md
Markdown
catalog/hakuouki/en-US_hakuouki-otogisoushi-special.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/hakuouki/en-US_hakuouki-otogisoushi-special.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/hakuouki/en-US_hakuouki-otogisoushi-special.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# Hakuouki: Otogisoushi Special

![hakuouki-otogisoushi-special](https://cdn.myanimelist.net/images/anime/2/84387.jpg)

- **type**: special
- **episodes**: 1
- **original-name**: 薄桜鬼~御伽草子~ 特別篇
- **start-date**: 2016-06-21
- **rating**: PG-13 - Teens 13 or older

## Tags

- historical
- samurai
- fantasy
- josei

## Synopsis

A Hakuouki: Otogisoushi special episode that aired between episode 11 and episode 12; it was not included in the normal episode count of the main show. It had a TV and web release on the same day and is included on the DVD and BD release.

## Links

- [My Anime list](https://myanimelist.net/anime/34948/Hakuouki__Otogisoushi_Special)
- [Official Site](http://www.hakuoki-otogi.com/episode/)
- [AnimeDB](http://anidb.info/perl-bin/animedb.pl?show=anime&aid=11754)
- [AnimeNewsNetwork](http://www.animenewsnetwork.com/encyclopedia/anime.php?id=17909)
- [Wikipedia](https://ja.wikipedia.org/wiki/%E8%96%84%E6%A1%9C%E9%AC%BC_%E3%80%9C%E6%96%B0%E9%81%B8%E7%B5%84%E5%A5%87%E8%AD%9A%E3%80%9C#.E3.83.86.E3.83.AC.E3.83.93.E3.82.A2.E3.83.8B.E3.83.A1.EF.BC.88.E5.BE.A1.E4.BC.BD.E8.8D.89.E5.AD.90.EF.BC.89)
39.655172
247
0.702609
eng_Latn
0.41401
c96aa1a792b78a3e1feeb033548de9e149fd40d0
8,810
md
Markdown
on_hold/2018-08-30-compressed-sensing.md
act65/act65.github.io
879d312813839b65296c16f431e03ff9a9c80ad8
[ "MIT" ]
3
2017-11-12T16:54:06.000Z
2020-09-01T08:17:38.000Z
on_hold/2018-08-30-compressed-sensing.md
act65/act65.github.io
879d312813839b65296c16f431e03ff9a9c80ad8
[ "MIT" ]
1
2017-04-21T22:51:34.000Z
2018-03-29T09:28:06.000Z
on_hold/2018-08-30-compressed-sensing.md
act65/act65.github.io
879d312813839b65296c16f431e03ff9a9c80ad8
[ "MIT" ]
15
2016-07-13T07:09:44.000Z
2020-09-01T08:21:45.000Z
---
layout: post
title: Compressed sensing and machine learning
---

This post is about;

* how to make maximally informative queries
* ??
* ??

***

Only sample your environment to tell you what you need to know. The problem is that we only have a few samples, and they could have come from many places.

- how can we select a candidate from many?
- how can we pick samples to minimise the set of candidates?
- ? (how general is this to partial information -- is it possible to apply it to Atari?)

$$
\begin{align}
\text{min} \parallel R(x) \parallel_1 \;\;\text{s.t.}\;\; \parallel f(x) - y \parallel_2 \le \epsilon \\
\end{align}
$$

$$
\begin{align}
f \perp g \\
\text{min} \parallel g(z) \parallel_1 + \lambda \parallel f(g(z)) - y \parallel_2 \\
\end{align}
$$

<side>The fact that we must mix the pixels feels like a cheat. We still need to physically interact with all the pixels, many times... But maybe mixing is cheap and measuring is not!?</side>

* Why doesn't the order matter?!?

The problem is: we have an image that may be sparse in some basis. How do we find which elements are non-zero?

- Could search through them all,
-

> If your image consists of a few sparse dots or a few sharp lines, the worst way to sample it is by capturing individual pixels (the way a regular camera works!). The best way to sample the image is to compare it with widely spread-out noise functions. One could draw an analogy with the game of "20 questions." If you have to find a number between 1 and $N$, the worst way to proceed is to guess individual numbers (the analog of measuring individual pixels). On average, it will take you $N/2$ guesses. By contrast, if you ask questions like, "Is the number less than $N/2$?" and then "Is the number less than $N/4$?" and so on, you can find the concealed number with at most $\log_2 N$ questions.

Hmmm. How is asking "is the number less than $N/4$?" like comparing against a random signal? "How much of the image vector points in this, random, direction?"
If you know after 2 measurements that an image is partially similar to up/left, and up/right, then you should conclude up.

> Notice that the "20 questions" strategy is adaptive: you are allowed to adapt your questions in light of the previous answers. To be practically relevant, Candes and Tao needed to make the measurement process nonadaptive, yet with the same guaranteed performance as the adaptive strategy just described. In other words, they needed to find out ahead of time what would be the most informative questions about the signal x.

How is this related to a hash fn?!

## Assumption of structure

What other possible kinds of structure are there?!

## Measures of sparsity

What we need is a measure of sparsity. Better than that, we want one that can handle approximate sparsity/noise.

$$
\begin{align}
\parallel x \parallel_0 &= \sum_{i=0}^n x_i^0 \tag{assuming $0^0=0$}\\
\parallel x \parallel_1 &= \sum_{i=0}^n \mid x_i \mid \\
\parallel x \parallel_2 &= \sqrt{ \sum_{i=0}^n x_i^2 } \\
\end{align}
$$

The problem with $L_0$ is that it doesn't tell us how close we are to a sparse solution. One of the elements might be $= 10^{-8}$, yet it is still counted as a non-zero element.

<side>Does this relationship generalise to higher $L_p$ norms?</side>

There is a nice relationship between $L_0$ and $L_1$ minimisers:

$$
\begin{align}
\forall b_i \in \{?!? \}, A \in \{?!? \} \\
\mathop{\text{min}}_{A, b} \parallel Ax + b \parallel_0 &= \mathop{\text{min}}_{A, b} \parallel Ax + b \parallel_1
\end{align}
$$

(Think of the probability of sampling a line that intersects the origin.) How can this be proved? It can be easily understood geometrically as ... (insert image)

## Randomness and mixing

Two uses: $L_0 = L_1$, AND dissimilarity to the image basis (make sure you get a good mixture).

I wonder if existing physical processes might do the mixing for us? E.g. an EEG of the brain represents a local mixture of electric potentials. Can this be used to do compressed sensing?
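To see the $L_1$ relaxation actually doing the work, here is a minimal sketch (my own toy example, not the post's code; ISTA soft-thresholding is just one standard solver for the $L_1$-penalised problem): minimise $\parallel Ax - y \parallel_2^2 + \lambda \parallel x \parallel_1$ given random Gaussian "mixing" measurements of a sparse signal.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 60, 5  # signal length, number of measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random "mixing" measurements
y = A @ x_true

# ISTA: proximal gradient descent on ||Ax - y||_2^2 + lam * ||x||_1.
lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(10_000):
    z = x - step * A.T @ (A @ x - y)  # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Despite having only 60 linear measurements of a length-200 signal, the $L_1$ solution lands (up to the small $\lambda$-induced bias) close to the 5-sparse ground truth, whereas a plain least-squares solve spreads energy over all 200 coordinates.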
## Restricted Isometry

(the formalisation of our concept of mixing!?)

$$
\begin{align}
d_Y(f(a), f(b)) = d_X(a, b)
\end{align}
$$

$$
\begin{align}
(1-\delta)\parallel y \parallel_2^2 \le \parallel Ay \parallel_2^2 \le (1+\delta)\parallel y \parallel^2_2
\end{align}
$$

<side>What goes wrong if the signals can be scaled?</side>

$A$ doesn't tend to change the magnitude of the vector $y$, aka $A$ must be approximately a rotation (no scaling), i.e. an orthogonal transformation. What does it mean to do a rotation? The basis you use to describe $Y$ is ?!?! relation to $X$. But RIP is (how much, if at all!?!?) weaker than requiring orthogonality.

> This property essentially requires that every set of columns with cardinality less than S approximately behaves like an orthonormal system

I see how it requires every set of columns with cardinality less than $S$ to approximately behave like a normal system, but not orthonormal.

$$
\begin{align}
(1-\delta_S)\parallel y_T \parallel_2^2 \;\; &\le \;\;\parallel A_{:, T}y_T \parallel_2^2 \;\; \le \;\; (1+\delta_S)\parallel y_T \parallel^2_2 \\
\forall T &\subseteq \{1, \dots n\} \;\; \text{s.t.} \;\; \mid T \mid \; = S \\
\end{align}
$$

Questions:

- why forall $T$?
- what value of $y$ is used?
- ?

***

> There are no known large matrices with bounded restricted isometry constants (computing these constants is strongly NP-hard, and is hard to approximate as well)

- https://en.wikipedia.org/wiki/Restricted_isometry_property

## Stable recovery from incomplete and noisy measurements

> How can we possibly hope to recover our signal when not only the available information is severely incomplete but in addition, the few available observations are also inaccurate?

> To be broadly applicable, our recovery procedure must be stable: small changes in the observations should result in small changes in the recovery.

__Theorem 1.__ (from [2](#2)) Let $S$ be such that $\delta_{3S} + 3\delta_{4S} < 2$.
Then for any signal $x_0$ supported on $T_0$ with $\mid T_0 \mid \le S$ and any perturbation $e$ with $\parallel e \parallel_2 \le \epsilon$, the solution $\hat x$ to ($P_2$) obeys

$$
\parallel \hat x - x_0 \parallel_2 \le C_S \cdot \epsilon \\
$$

__Q__ What is $C_S$? How does it depend on $S$?

Note:

* Any $\epsilon$. Hmm. Generalising this to ML, we might have some issues. It needs to be stable to perturbations, but adversarial examples would break this...
* Where does $\delta_{3S} + 3\delta_{4S} < 2$ come from? It seems rather weird!?
* > The fact that the severely ill-posed matrix inversion keeps the perturbation from "blowing up" is rather remarkable and perhaps unexpected

<aside>
Why is the inversion of a rectangular (and/or low rank) matrix sensitive to perturbations? ...?!?
</aside>

> no recovery method can perform fundamentally better for arbitrary perturbations of size $\epsilon$ ... obtaining a reconstruction with an error term whose size is guaranteed to be proportional to the noise level is the best one can hope for.

## Relationship to machine learning

To recover some ground truth, we are allowed to take measurements. But which measurements should be taken? The above tells us they should be maximally incoherent with ???. Then $A$ is our data, $y$ the labels, and $x$ is the function we want to recover.

$$
\begin{align}
x &\in \mathbb R^n, y_i \in \mathbb R, A \in \mathbb R^{m \times n} \\
y_i &= \langle A_i, x \rangle \tag{compressed sensing}\\
\mathop{\text{min}}_x & \parallel Ax - y \parallel_2 + R(x) \\
\\
x_i &\in \mathbb R^n, y_i \in \mathbb R^m, A \in \mathbb R^{m \times n}\\
y_i &= \langle A, x_i \rangle \tag{machine learning} \\
\mathop{\text{min}}_A & \parallel Ax - y \parallel_2 + R(A) \\
\end{align}
$$

Neither can be solved by Gaussian elimination (or other methods -- which are!?!?) because they are ...?

* Is there a way to incorporate beliefs about noise in the data into ML?

***

> We note that the factorized parameterization also plays an important role here.
> The intuition above would still apply if we replace $UU^\top$ with a single variable $X$ and run gradient descent in the space of $X$ with small enough initialization. However, it will converge to a solution that doesn't generalize. The discrepancy comes from another crucial property of the factorized parameterization: it provides a certain denoising effect that encourages the empirical gradient to have a smaller eigenvalue tail.

## References

1. <a name="1">[Compressed Sensing Makes Every Pixel Count](https://www.ams.org/publicoutreach/math-history/hap7-pixel.pdf)</a>
2. <a name="2">[Stable Signal Recovery from Incomplete and Inaccurate Measurements](https://statweb.stanford.edu/~candes/papers/StableRecovery.pdf)</a>
3. [Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations](https://arxiv.org/abs/1712.09203)
41.753555
700
0.730988
eng_Latn
0.997027
c96abaab5b1734999bc2e6ce151117d78386e8e0
1,718
md
Markdown
README.md
MaximLukianchuk/WebStore
a20fbbfbcaf90af16752b2046fb28d2651f3be10
[ "MIT" ]
3
2018-11-14T13:41:40.000Z
2018-11-18T23:14:51.000Z
README.md
MaximLukianchuk/WebStore
a20fbbfbcaf90af16752b2046fb28d2651f3be10
[ "MIT" ]
null
null
null
README.md
MaximLukianchuk/WebStore
a20fbbfbcaf90af16752b2046fb28d2651f3be10
[ "MIT" ]
2
2018-11-11T14:01:32.000Z
2018-11-18T23:36:20.000Z
[![WebStore](http://www.imageup.ru/img158/3216191/web-store4x.png)](https://mark-and-max.store)

A web service designed to simplify shopping from different stores at one time

[![MIT Licence](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/MaximLukianchuk/WebStore/blob/master/LICENSE) [![Maven](https://img.shields.io/badge/maven-v4.0.0-blue.svg)](https://maven.apache.org/) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/f7475736b9d74699b7e1239a4bf13791)](https://www.codacy.com/app/MaximLukianchuk/WebStore?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=MaximLukianchuk/WebStore&amp;utm_campaign=Badge_Grade) [![Author](https://img.shields.io/badge/author-MarkSmirnov13-lightgrey.svg)](https://github.com/MarkSmirnov13) [![Author](https://img.shields.io/badge/author-MaximLukianchuk-lightgrey.svg)](https://github.com/MaximLukianchuk)

[![Preview](https://github.com/MaximLukianchuk/WebStore/blob/master/webStorePreviw.gif)](https://mark-and-max.store)

## Built With

* [Spring Boot](https://spring.io/projects/spring-boot) - Makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run"
* [Maven](https://maven.apache.org/) - Dependency Management
* [Postgresql](https://www.postgresql.org/) - The World's Most Advanced Open Source Relational Database
* [Freemarker](https://freemarker.apache.org/) - a Java library to generate text output (HTML web pages, e-mails, configuration files, source code, etc.)

## Authors

* **Mark Smirnov** - *Initial work*
* **Maksim Lukianchuk** - *Initial work*

## License

MIT license

## Acknowledgments

* Hat tip to anyone whose code was used
* Inspiration
* etc
53.6875
263
0.756694
yue_Hant
0.374567
c96bf58086081efc8f0e064cd51efbfff86db979
2,743
md
Markdown
guide/russian/react-native/basic-commands/index.md
danielpoehle/freeCodeCamp
5bc4118848ca6d145401eabee7ef97176a7b1196
[ "BSD-3-Clause" ]
6
2018-07-16T01:04:28.000Z
2020-08-25T05:21:44.000Z
guide/russian/react-native/basic-commands/index.md
danielpoehle/freeCodeCamp
5bc4118848ca6d145401eabee7ef97176a7b1196
[ "BSD-3-Clause" ]
2,056
2019-08-25T19:29:20.000Z
2022-02-13T22:13:01.000Z
guide/russian/react-native/basic-commands/index.md
danielpoehle/freeCodeCamp
5bc4118848ca6d145401eabee7ef97176a7b1196
[ "BSD-3-Clause" ]
5
2018-10-18T02:02:23.000Z
2020-08-25T00:32:41.000Z
---
title: Basic Commands
localeTitle: Basic Commands
---

## Basic Commands

Here you will find a list of basic commands to start developing iOS and Android apps with React Native. If you haven't installed it yet, it is strongly recommended to follow the [official guide](https://facebook.github.io/react-native/docs/getting-started.html).

### Starting a new project

There are different ways to bootstrap an initial app. You can use the **Expo** app or `create-react-native-app` (which in turn uses Expo-Cli) to start a new project, but with the following method you have more control over what happens in your project and can interact with, configure, and write your own modules against the native libraries of the iOS and Android platforms.

```
react-native init [PROJECT-NAME]
cd [PROJECT-NAME]
```

**Run the app in the Android emulator**

This command launches the Android emulator and installs the app you just created. You must be at the project root to run this command.

```
react-native run-android
```

**Run the app in the iOS simulator**

This command does the same as `react-native run-android`, but instead of the Android emulator it opens the iPhone simulator.

```
react-native run-ios
```

**Linking libraries with native projects**

Some libraries have dependencies that need to be linked in the native code generated for React Native. If something doesn't work after installing a new library, it may be because you missed this step.

```
react-native link [LIBRARY-NAME]
```

**Clean the bundle**

If something isn't working as expected, you may need to clean up and create a new bundle with this command.

```
watchman watch-del-all
```

**Decorator support**

JSX doesn't support decorators by default, so you need to install a **Babel** plugin to make them work.
```
npm install babel-plugin-transform-decorators-legacy --save
npm install babel-plugin-transform-class-properties --save
```

### Exporting an APK to run on a device

With the following commands you will get an unsigned APK, which you can install and share with colleagues for testing purposes. Just remember that this APK is not ready to be uploaded to the App Store or released publicly. You will find your new APK in _android/app/build/output/apk/app-debug.apk_

**1\. Bundle the debug package**

```
react-native bundle --dev false --platform android --entry-file index.android.js --bundle-output ./android/app/build/intermediates/assets/debug/index.android.bundle --assets-dest ./android/app/build/intermediates/res/merged/debug
```

**2\. Create the debug build**

```
cd android
./gradlew assembleDebug
```
39.753623
411
0.775793
rus_Cyrl
0.9456
c96c0ab1904e1c4cff5003b96f5646c2c56f12d5
1,804
md
Markdown
docs/c-runtime-library/reference/strlwr-wcslwr.md
POMATOpl/cpp-docs.pl-pl
ae1925d41d94142f6a43c4e721d45cbbbfeda4c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/reference/strlwr-wcslwr.md
POMATOpl/cpp-docs.pl-pl
ae1925d41d94142f6a43c4e721d45cbbbfeda4c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/reference/strlwr-wcslwr.md
POMATOpl/cpp-docs.pl-pl
ae1925d41d94142f6a43c4e721d45cbbbfeda4c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: 'Learn more about: strlwr, wcslwr'
title: strlwr, wcslwr
ms.date: 12/16/2019
api_name:
- strlwr
- wcslwr
api_location:
- msvcrt.dll
- msvcr80.dll
- msvcr90.dll
- msvcr100.dll
- msvcr100_clr0400.dll
- msvcr110.dll
- msvcr110_clr0400.dll
- msvcr120.dll
- msvcr120_clr0400.dll
- ucrtbase.dll
api_type:
- DLLExport
topic_type:
- apiref
f1_keywords:
- wcslwr
- strlwr
helpviewer_keywords:
- strlwr function
- wcslwr function
ms.assetid: b9274824-4365-4674-b656-823c89653656
ms.openlocfilehash: 522afe9af4da37fdf6f1a22335558edcf24cebee
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 12/11/2020
ms.locfileid: "97344799"
---
# <a name="strlwr-wcslwr"></a>strlwr, wcslwr

The Microsoft-specific function names `strlwr` and `wcslwr` are deprecated aliases for the [_strlwr and _wcslwr](strlwr-wcslwr-mbslwr-strlwr-l-wcslwr-l-mbslwr-l.md) functions. By default, they generate [Compiler warning (level 3) C4996](../../error-messages/compiler-warnings/compiler-warning-level-3-c4996.md). The names are deprecated because they don't follow the Standard C rules for implementation-specific names. However, the functions are still supported.

We recommend you use [_strlwr or _wcslwr](strlwr-wcslwr-mbslwr-strlwr-l-wcslwr-l-mbslwr-l.md) or the security-enhanced [_strlwr_s and _wcslwr_s](strlwr-s-strlwr-s-l-mbslwr-s-mbslwr-s-l-wcslwr-s-wcslwr-s-l.md) functions instead. Or, you can continue to use these function names and disable the warning. For more information, see [Turn off the warning](../../error-messages/compiler-warnings/compiler-warning-level-3-c4996.md#turn-off-the-warning) and [POSIX function names](../../error-messages/compiler-warnings/compiler-warning-level-3-c4996.md#posix-function-names).
42.952381
569
0.797118
pol_Latn
0.96268
c96c1bca0beeb4d97b5d9b3c1aed26bd62961edb
558
md
Markdown
_posts/2013-12-27-legend-of-equip-pants.md
SteveBarnett/Bullet-Hell
20710379062e5f61e570fd0f11645451154fb925
[ "MIT" ]
1
2017-04-05T18:54:39.000Z
2017-04-05T18:54:39.000Z
_posts/2013-12-27-legend-of-equip-pants.md
SteveBarnett/Bullet-Hell
20710379062e5f61e570fd0f11645451154fb925
[ "MIT" ]
null
null
null
_posts/2013-12-27-legend-of-equip-pants.md
SteveBarnett/Bullet-Hell
20710379062e5f61e570fd0f11645451154fb925
[ "MIT" ]
null
null
null
---
layout: post-no-feature
title: Legend of Equip Pants
category: articles
tags: [iOS, RPG, retro, indie]
---

<a href="http://gogetyourpants.com/">![{{ page.title }}](/images/legend-of-equip-pants.gif)</a>

* Retro-cool pixel art graphics, spooky soundtrack.
* Full of tropes from classic RPGs and from horror movies.
* Interesting (but sparse) use of In-App Purchases: used for choice trees / morality.
* Full of pants-based puns. (For me, that's a big plus :] )
* Nicely short episodes: little bursts of fun.

[{{ page.title }}](http://gogetyourpants.com/)
34.875
95
0.707885
eng_Latn
0.813642
c96c38358c43c69da6b46bdcb75653247ea79aa4
13,536
md
Markdown
docs/UserInterfaceApi.md
bahrmichael/eve-esi-client
55e5319b2e2feb25339db2c6428c93217321e9a5
[ "Apache-2.0" ]
null
null
null
docs/UserInterfaceApi.md
bahrmichael/eve-esi-client
55e5319b2e2feb25339db2c6428c93217321e9a5
[ "Apache-2.0" ]
null
null
null
docs/UserInterfaceApi.md
bahrmichael/eve-esi-client
55e5319b2e2feb25339db2c6428c93217321e9a5
[ "Apache-2.0" ]
null
null
null
# UserInterfaceApi All URIs are relative to *https://esi.tech.ccp.is* Method | HTTP request | Description ------------- | ------------- | ------------- [**postUiAutopilotWaypoint**](UserInterfaceApi.md#postUiAutopilotWaypoint) | **POST** /v2/ui/autopilot/waypoint/ | Set Autopilot Waypoint [**postUiOpenwindowContract**](UserInterfaceApi.md#postUiOpenwindowContract) | **POST** /v1/ui/openwindow/contract/ | Open Contract Window [**postUiOpenwindowInformation**](UserInterfaceApi.md#postUiOpenwindowInformation) | **POST** /v1/ui/openwindow/information/ | Open Information Window [**postUiOpenwindowMarketdetails**](UserInterfaceApi.md#postUiOpenwindowMarketdetails) | **POST** /v1/ui/openwindow/marketdetails/ | Open Market Details [**postUiOpenwindowNewmail**](UserInterfaceApi.md#postUiOpenwindowNewmail) | **POST** /v1/ui/openwindow/newmail/ | Open New Mail Window <a name="postUiAutopilotWaypoint"></a> # **postUiAutopilotWaypoint** > postUiAutopilotWaypoint(addToBeginning, clearOtherWaypoints, destinationId, datasource, token, userAgent, xUserAgent) Set Autopilot Waypoint Set a solar system as autopilot waypoint --- ### Example ```java // Import classes: //import enterprises.orbital.eve.esi.client.invoker.ApiClient; //import enterprises.orbital.eve.esi.client.invoker.ApiException; //import enterprises.orbital.eve.esi.client.invoker.Configuration; //import enterprises.orbital.eve.esi.client.invoker.auth.*; //import enterprises.orbital.eve.esi.client.api.UserInterfaceApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure OAuth2 access token for authorization: evesso OAuth evesso = (OAuth) defaultClient.getAuthentication("evesso"); evesso.setAccessToken("YOUR ACCESS TOKEN"); UserInterfaceApi apiInstance = new UserInterfaceApi(); Boolean addToBeginning = false; // Boolean | Whether this solar system should be added to the beginning of all waypoints Boolean clearOtherWaypoints = false; // Boolean | Whether clean other waypoints beforing adding this one 
Long destinationId = 789L; // Long | The destination to travel to, can be solar system, station or structure's id String datasource = "tranquility"; // String | The server name you would like data from String token = "token_example"; // String | Access token to use if unable to set a header String userAgent = "userAgent_example"; // String | Client identifier, takes precedence over headers String xUserAgent = "xUserAgent_example"; // String | Client identifier, takes precedence over User-Agent try { apiInstance.postUiAutopilotWaypoint(addToBeginning, clearOtherWaypoints, destinationId, datasource, token, userAgent, xUserAgent); } catch (ApiException e) { System.err.println("Exception when calling UserInterfaceApi#postUiAutopilotWaypoint"); e.printStackTrace(); } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **addToBeginning** | **Boolean**| Whether this solar system should be added to the beginning of all waypoints | [default to false] **clearOtherWaypoints** | **Boolean**| Whether clean other waypoints beforing adding this one | [default to false] **destinationId** | **Long**| The destination to travel to, can be solar system, station or structure&#39;s id | **datasource** | **String**| The server name you would like data from | [optional] [default to tranquility] [enum: tranquility, singularity] **token** | **String**| Access token to use if unable to set a header | [optional] **userAgent** | **String**| Client identifier, takes precedence over headers | [optional] **xUserAgent** | **String**| Client identifier, takes precedence over User-Agent | [optional] ### Return type null (empty response body) ### Authorization [evesso](../README.md#evesso) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json <a name="postUiOpenwindowContract"></a> # **postUiOpenwindowContract** > postUiOpenwindowContract(contractId, datasource, token, userAgent, xUserAgent) Open Contract 
Window Open the contract window inside the client --- ### Example ```java // Import classes: //import enterprises.orbital.eve.esi.client.invoker.ApiClient; //import enterprises.orbital.eve.esi.client.invoker.ApiException; //import enterprises.orbital.eve.esi.client.invoker.Configuration; //import enterprises.orbital.eve.esi.client.invoker.auth.*; //import enterprises.orbital.eve.esi.client.api.UserInterfaceApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure OAuth2 access token for authorization: evesso OAuth evesso = (OAuth) defaultClient.getAuthentication("evesso"); evesso.setAccessToken("YOUR ACCESS TOKEN"); UserInterfaceApi apiInstance = new UserInterfaceApi(); Integer contractId = 56; // Integer | The contract to open String datasource = "tranquility"; // String | The server name you would like data from String token = "token_example"; // String | Access token to use if unable to set a header String userAgent = "userAgent_example"; // String | Client identifier, takes precedence over headers String xUserAgent = "xUserAgent_example"; // String | Client identifier, takes precedence over User-Agent try { apiInstance.postUiOpenwindowContract(contractId, datasource, token, userAgent, xUserAgent); } catch (ApiException e) { System.err.println("Exception when calling UserInterfaceApi#postUiOpenwindowContract"); e.printStackTrace(); } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **contractId** | **Integer**| The contract to open | **datasource** | **String**| The server name you would like data from | [optional] [default to tranquility] [enum: tranquility, singularity] **token** | **String**| Access token to use if unable to set a header | [optional] **userAgent** | **String**| Client identifier, takes precedence over headers | [optional] **xUserAgent** | **String**| Client identifier, takes precedence over User-Agent | [optional] ### Return type null (empty response 
body) ### Authorization [evesso](../README.md#evesso) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json <a name="postUiOpenwindowInformation"></a> # **postUiOpenwindowInformation** > postUiOpenwindowInformation(targetId, datasource, token, userAgent, xUserAgent) Open Information Window Open the information window for a character, corporation or alliance inside the client --- ### Example ```java // Import classes: //import enterprises.orbital.eve.esi.client.invoker.ApiClient; //import enterprises.orbital.eve.esi.client.invoker.ApiException; //import enterprises.orbital.eve.esi.client.invoker.Configuration; //import enterprises.orbital.eve.esi.client.invoker.auth.*; //import enterprises.orbital.eve.esi.client.api.UserInterfaceApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure OAuth2 access token for authorization: evesso OAuth evesso = (OAuth) defaultClient.getAuthentication("evesso"); evesso.setAccessToken("YOUR ACCESS TOKEN"); UserInterfaceApi apiInstance = new UserInterfaceApi(); Integer targetId = 56; // Integer | The target to open String datasource = "tranquility"; // String | The server name you would like data from String token = "token_example"; // String | Access token to use if unable to set a header String userAgent = "userAgent_example"; // String | Client identifier, takes precedence over headers String xUserAgent = "xUserAgent_example"; // String | Client identifier, takes precedence over User-Agent try { apiInstance.postUiOpenwindowInformation(targetId, datasource, token, userAgent, xUserAgent); } catch (ApiException e) { System.err.println("Exception when calling UserInterfaceApi#postUiOpenwindowInformation"); e.printStackTrace(); } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **targetId** | **Integer**| The target to open | **datasource** | **String**| The server name you would like data from | [optional] [default 
to tranquility] [enum: tranquility, singularity] **token** | **String**| Access token to use if unable to set a header | [optional] **userAgent** | **String**| Client identifier, takes precedence over headers | [optional] **xUserAgent** | **String**| Client identifier, takes precedence over User-Agent | [optional] ### Return type null (empty response body) ### Authorization [evesso](../README.md#evesso) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json <a name="postUiOpenwindowMarketdetails"></a> # **postUiOpenwindowMarketdetails** > postUiOpenwindowMarketdetails(typeId, datasource, token, userAgent, xUserAgent) Open Market Details Open the market details window for a specific typeID inside the client --- ### Example ```java // Import classes: //import enterprises.orbital.eve.esi.client.invoker.ApiClient; //import enterprises.orbital.eve.esi.client.invoker.ApiException; //import enterprises.orbital.eve.esi.client.invoker.Configuration; //import enterprises.orbital.eve.esi.client.invoker.auth.*; //import enterprises.orbital.eve.esi.client.api.UserInterfaceApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure OAuth2 access token for authorization: evesso OAuth evesso = (OAuth) defaultClient.getAuthentication("evesso"); evesso.setAccessToken("YOUR ACCESS TOKEN"); UserInterfaceApi apiInstance = new UserInterfaceApi(); Integer typeId = 56; // Integer | The item type to open in market window String datasource = "tranquility"; // String | The server name you would like data from String token = "token_example"; // String | Access token to use if unable to set a header String userAgent = "userAgent_example"; // String | Client identifier, takes precedence over headers String xUserAgent = "xUserAgent_example"; // String | Client identifier, takes precedence over User-Agent try { apiInstance.postUiOpenwindowMarketdetails(typeId, datasource, token, userAgent, xUserAgent); } catch (ApiException e) { 
System.err.println("Exception when calling UserInterfaceApi#postUiOpenwindowMarketdetails"); e.printStackTrace(); } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **typeId** | **Integer**| The item type to open in market window | **datasource** | **String**| The server name you would like data from | [optional] [default to tranquility] [enum: tranquility, singularity] **token** | **String**| Access token to use if unable to set a header | [optional] **userAgent** | **String**| Client identifier, takes precedence over headers | [optional] **xUserAgent** | **String**| Client identifier, takes precedence over User-Agent | [optional] ### Return type null (empty response body) ### Authorization [evesso](../README.md#evesso) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json <a name="postUiOpenwindowNewmail"></a> # **postUiOpenwindowNewmail** > postUiOpenwindowNewmail(newMail, datasource, token, userAgent, xUserAgent) Open New Mail Window Open the New Mail window, according to settings from the request if applicable --- ### Example ```java // Import classes: //import enterprises.orbital.eve.esi.client.invoker.ApiClient; //import enterprises.orbital.eve.esi.client.invoker.ApiException; //import enterprises.orbital.eve.esi.client.invoker.Configuration; //import enterprises.orbital.eve.esi.client.invoker.auth.*; //import enterprises.orbital.eve.esi.client.api.UserInterfaceApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure OAuth2 access token for authorization: evesso OAuth evesso = (OAuth) defaultClient.getAuthentication("evesso"); evesso.setAccessToken("YOUR ACCESS TOKEN"); UserInterfaceApi apiInstance = new UserInterfaceApi(); PostUiOpenwindowNewmailNewMail newMail = new PostUiOpenwindowNewmailNewMail(); // PostUiOpenwindowNewmailNewMail | The details of mail to create String datasource = "tranquility"; // String | The server name you 
would like data from String token = "token_example"; // String | Access token to use if unable to set a header String userAgent = "userAgent_example"; // String | Client identifier, takes precedence over headers String xUserAgent = "xUserAgent_example"; // String | Client identifier, takes precedence over User-Agent try { apiInstance.postUiOpenwindowNewmail(newMail, datasource, token, userAgent, xUserAgent); } catch (ApiException e) { System.err.println("Exception when calling UserInterfaceApi#postUiOpenwindowNewmail"); e.printStackTrace(); } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **newMail** | [**PostUiOpenwindowNewmailNewMail**](PostUiOpenwindowNewmailNewMail.md)| The details of mail to create | **datasource** | **String**| The server name you would like data from | [optional] [default to tranquility] [enum: tranquility, singularity] **token** | **String**| Access token to use if unable to set a header | [optional] **userAgent** | **String**| Client identifier, takes precedence over headers | [optional] **xUserAgent** | **String**| Client identifier, takes precedence over User-Agent | [optional] ### Return type null (empty response body) ### Authorization [evesso](../README.md#evesso) ### HTTP request headers - **Content-Type**: Not defined - **Accept**: application/json
42.566038
152
0.738992
eng_Latn
0.30723
c96c60fbd3becf59be9a89993fc5752bfa264f0c
7,027
md
Markdown
articles/virtual-machines/workloads/mainframe-rehosting/mainframe-white-papers.md
sergibarca/azure-docs.es-es
dabecf2b983b0b41215571b8939077861f0c2667
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/workloads/mainframe-rehosting/mainframe-white-papers.md
sergibarca/azure-docs.es-es
dabecf2b983b0b41215571b8939077861f0c2667
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/workloads/mainframe-rehosting/mainframe-white-papers.md
sergibarca/azure-docs.es-es
dabecf2b983b0b41215571b8939077861f0c2667
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure white papers about mainframe topics with Azure Virtual Machines and Azure Storage
description: Get resources about mainframe migration, and about moving IBM Z based systems to Microsoft Azure and rehosting them on that platform.
services: multiple
documentationcenter: ''
author: njray
ms.author: larryme
ms.date: 04/02/2019
ms.topic: article
ms.service: multiple
ms.openlocfilehash: dd91b4331a6093d1cf208893d5d88746707b473b
ms.sourcegitcommit: d6b68b907e5158b451239e4c09bb55eccb5fef89
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/20/2019
ms.locfileid: "74224740"
---
# <a name="azure-white-papers-about-mainframe-topics"></a>Azure white papers about mainframe topics

Get resources about mainframe migration, and about moving IBM Z based systems to Microsoft Azure and rehosting them on that platform.

### <a name="demystifying-mainframe-to-azure-migrationhttpsazuremicrosoftcomresourcesdemystifying-mainframe-to-azure-migration"></a>[Demystifying mainframe to Azure migration](https://azure.microsoft.com/resources/demystifying-mainframe-to-azure-migration/)

Azure offers a mainframe alternative capable of delivering equivalent features and functionality. This paper, written by Larry Mead of the AzureCAT team, covers the major IBM z/OS mainframe components and their Azure equivalents. It also provides a roadmap for starting the conversation with IT decision makers who subscribe to outdated mainframe philosophies.
### <a name="move-mainframe-compute-and-storage-to-azurehttpsazuremicrosoftcomresourcesmove-mainframe-compute-and-storage-to-azure"></a>[Move mainframe compute and storage to Azure](https://azure.microsoft.com/resources/move-mainframe-compute-and-storage-to-azure/)

To run mainframe workloads on Microsoft Azure, you need to know how mainframe capabilities compare to Azure. Based on the IBM z14 mainframe, this guide, written by Larry Mead of AzureCAT, explores how to get comparable results on Azure. The massively scalable compute and storage resources in Azure provide immediate benefits to organizations that run mainframe workloads.

### <a name="microsoft-azure-government-cloud-for-mainframe-applicationshttpsazuremicrosoftcomresourcesmicrosoft-azure-government-cloud-for-mainframe-applications"></a>[Microsoft Azure Government cloud for mainframe applications](https://azure.microsoft.com/resources/microsoft-azure-government-cloud-for-mainframe-applications/)

Planning an application migration is the ideal time to add value and agility even to well-established mainframe workloads. In this quick guide, written by Larry Mead of AzureCAT, you'll see how US government agencies and their partners can use Azure Government for mainframe applications, and the migration may not be as difficult as you think. Azure Government delivers the benefits of a mainframe in a more agile and cost-effective environment. In addition, Azure Government has received an initial Provisional Authority to Operate (P-ATO) for the FedRAMP High impact level.
### <a name="deploy-ibm-db2-purescale-on-azurehttpsazuremicrosoftcomresourcesdeploy-ibm-db2-purescale-on-azure"></a>[Implementación de IBM DB2 pureScale en Azure](https://azure.microsoft.com/resources/deploy-ibm-db2-purescale-on-azure/) Aprenda de nuestra experiencia con una empresa que recientemente ha vuelto a hospedar su entorno de IBM DB2 en Azure. Esta guía la escribieron los miembros de los equipos AzureCAT y DMJ que estaban ahí: Larry Mead, Benjamin Guinebertière, Alessandro Vozza y Jonathan Frost. Y describen los pasos que realizaron durante esta migración. Sus hallazgos fueron revisados por los miembros del equipo pureScale de IBM DB2. Los scripts de instalación, disponibles en GitHub, se basan en la arquitectura que usó el equipo con una carga de trabajo OLTP típica de tamaño medio. ### <a name="install-tmaxsoft-openframe-on-azurehttpsazuremicrosoftcomresourcesinstall-tmaxsoft-openframe-on-azure"></a>[Install TmaxSoft OpenFrame on Azure](https://azure.microsoft.com/resources/install-tmaxsoft-openframe-on-azure/) (Instalación de TmaxSoft OpenFrame en Azure) Modernice su infraestructura a escala de nube. TmaxSoft OpenFrame facilita el cambio de los recursos del sistema central existentes a Azure. En este artículo, Steve Read (AzureCAT) y Manoj Aerroju (TmaxSoft) explican cómo configurar un entorno de OpenFrame adecuado para cargas de trabajo de desarrollo, demostración, pruebas y producción. ### <a name="ibm-mainframe-to-microsoft-azure-reference-architecturehttpswwwastadiacomwhitepaperibm-mainframe-to-microsoft-azure"></a>[IBM mainframe to Microsoft Azure reference architecture](https://www.astadia.com/whitepaper/ibm-mainframe-to-microsoft-azure) (Arquitectura de referencia para la migración de sistemas centrales de IBM a Microsoft Azure) En estas notas del producto se reflejan los más de 25 años de experiencia de Astadia en la modernización de la plataforma de sistema central. Se explican las ventajas y desafíos de las labores de modernización. 
Esta guía proporciona información general de los sistemas centrales de IBM y una arquitectura de referencia de sistema central de IBM a Azure. También se proporciona un vistazo a la metodología de éxito Astadia. ### <a name="deploying-mainframe-applications-to-microsoft-azurehttpswwwmicrofocuscommediawhite-paperdeploying_mainframe_applications_to_microsoft_azure_wppdf"></a>[Deploying mainframe applications to Microsoft Azure](https://www.microfocus.com/media/white-paper/deploying_mainframe_applications_to_microsoft_azure_wp.pdf) (Implementación de aplicaciones de sistema central en Microsoft Azure) Las soluciones de Micro Focus le liberan de las restricciones del hardware y el software de sistema central propietario. En esta guía, Micro Focus explica cómo implementar sus aplicaciones de COBOL y PL/I que se ejecutan en sistemas centrales de IBM en la nube. ### <a name="breathe-new-life-into-mainframeshttpswwwinfosyscomservicesmodernizationbreathe-new-life-mainframeshtml"></a>[Breathe new life into mainframes](https://www.infosys.com/services/modernization/breathe-new-life-mainframes.html) (Revitalización de los sistemas centrales) Los sistemas centrales son cada vez más difíciles de mantener para las empresas. En estas notas del producto, cuyos autores son Infosys y Microsoft, se resalta la estrategia ganadora para una migración correcta de los sistemas centrales. Se ilustran las opciones con casos de uso y ejemplos.
135.134615
655
0.825103
spa_Latn
0.941116
c96c64e9e9ec6c2e37ab637653ac6fc9f9009685
14,621
md
Markdown
articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
MaraSteiu/azure-docs
efbd0c483577a87f5849944b61d1ad3c234344f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
MaraSteiu/azure-docs
efbd0c483577a87f5849944b61d1ad3c234344f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
MaraSteiu/azure-docs
efbd0c483577a87f5849944b61d1ad3c234344f1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Tutorial: Azure Active Directory integration with Atlassian Cloud | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and Atlassian Cloud.
services: active-directory
author: jeevansd
manager: CelesteDG
ms.reviewer: celested
ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
ms.date: 08/04/2020
ms.author: jeedes
---

# Tutorial: Integrate Atlassian Cloud with Azure Active Directory

In this tutorial, you'll learn how to integrate Atlassian Cloud with Azure Active Directory (Azure AD). When you integrate Atlassian Cloud with Azure AD, you can:

* Control in Azure AD who has access to Atlassian Cloud.
* Enable your users to be automatically signed in to Atlassian Cloud with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on).

## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
* An Atlassian Cloud single sign-on (SSO) enabled subscription.
* To enable Security Assertion Markup Language (SAML) single sign-on for Atlassian Cloud products, you need to set up Atlassian Access. Learn more about [Atlassian Access](https://www.atlassian.com/enterprise/cloud/identity-manager).

> [!NOTE]
> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.

## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.

* Atlassian Cloud supports **SP and IDP** initiated SSO.
* Atlassian Cloud supports [automatic user provisioning and deprovisioning](atlassian-cloud-provisioning-tutorial.md).
* Once you configure Atlassian Cloud, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).

## Adding Atlassian Cloud from the gallery

To configure the integration of Atlassian Cloud into Azure AD, you need to add Atlassian Cloud from the gallery to your list of managed SaaS apps.

1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Atlassian Cloud** in the search box.
1. Select **Atlassian Cloud** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO

Configure and test Azure AD SSO with Atlassian Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Atlassian Cloud.

To configure and test Azure AD SSO with Atlassian Cloud, complete the following building blocks:

1. **[Configure Azure AD with Atlassian Cloud SSO](#configure-azure-ad-sso)** - to enable your users to use Azure AD based SAML SSO with Atlassian Cloud.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Create Atlassian Cloud test user](#create-atlassian-cloud-test-user)** - to have a counterpart of B.Simon in Atlassian Cloud that is linked to the Azure AD representation of the user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

### Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.

1. To automate the configuration within Atlassian Cloud, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.

    ![My apps extension](common/install-myappssecure-extension.png)

1. After adding the extension to the browser, clicking **Set up Atlassian Cloud** will direct you to the Atlassian Cloud application. From there, provide the admin credentials to sign in to Atlassian Cloud. The browser extension will automatically configure the application for you.

    ![Setup configuration](common/setup-sso.png)

1. If you want to set up Atlassian Cloud manually, log in to your Atlassian Cloud company site as an administrator and perform the following steps.

1. Before you start, go to your Atlassian product instance and copy/save the instance URL.

    > [!NOTE]
    > The URL should fit the `https://<instancename>.atlassian.net` pattern.

    ![image](./media/atlassian-cloud-tutorial/get-atlassian-instance-name.png)

1. Open the [Atlassian Admin Portal](https://admin.atlassian.com/) and click on your organization name.

    ![image](./media/atlassian-cloud-tutorial/click-on-organization-in-atlassian-access.png)

1. You need to verify your domain before configuring single sign-on. For more information, see the [Atlassian domain verification](https://confluence.atlassian.com/cloud/domain-verification-873871234.html) document.

1. From the Atlassian Admin Portal screen, select **Security** from the left drawer.

    ![image](./media/atlassian-cloud-tutorial/click-on-security-in-atlassian-access.png)

1. From the Atlassian Admin Portal security screen, select **SAML single sign-on** from the left drawer.

    ![image](./media/atlassian-cloud-tutorial/click-on-saml-sso-in-atlassian-access-security.png)

1. Click on **Add SAML Configuration** and keep the page open.

    ![image](./media/atlassian-cloud-tutorial/saml-configuration-in-atlassian-access-security-saml-sso.png)

    ![image](./media/atlassian-cloud-tutorial/add-saml-configuration.png)

1. In the [Azure portal](https://portal.azure.com/), on the **Atlassian Cloud** application integration page, find the **Manage** section and select **Set up single sign-on**.

    ![image](./media/atlassian-cloud-tutorial/set-up-sso.png)

1. On the **Select a single sign-on method** page, select **SAML**.

    ![image](./media/atlassian-cloud-tutorial/saml-in-azure.png)

1. On the **Set up Single Sign-On with SAML** page, scroll down to **Set Up Atlassian Cloud**.

    a. Click on **Configuration URLs**.

    ![image](./media/atlassian-cloud-tutorial/configuration-urls.png)

    b. Copy the **Azure AD Identifier** value from the Azure portal and paste it in the **Identity Provider Entity ID** textbox in Atlassian.

    c. Copy the **Login URL** value from the Azure portal and paste it in the **Identity Provider SSO URL** textbox in Atlassian.

    ![image](./media/atlassian-cloud-tutorial/configuration-urls-azure.png)

    ![image](./media/atlassian-cloud-tutorial/entity-id-and-ss.png)

1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.

    ![image](./media/atlassian-cloud-tutorial/certificate.png)

    ![image](./media/atlassian-cloud-tutorial/certificate-1.png)

1. **Add/Save** the SAML configuration in Atlassian.

1. If you wish to configure the application in **IDP** initiated mode, edit the **Basic SAML Configuration** section of the **Set up Single Sign-On with SAML** page in Azure and open the **SAML single sign-on page** on the Atlassian Admin Portal.

    a. Copy the **SP Entity ID** value from Atlassian, paste it in the **Identifier (Entity ID)** box in Azure, and set it as default.

    b. Copy the **SP Assertion Consumer Service URL** value from Atlassian, paste it in the **Reply URL (Assertion Consumer Service URL)** box in Azure, and set it as default.

    c. Copy your **Instance URL** value, which you copied at step 1, and paste it in the **Relay State** box in Azure.

    ![image](./media/atlassian-cloud-tutorial/copy-urls.png)

    ![image](./media/atlassian-cloud-tutorial/edit-button.png)

    ![image](./media/atlassian-cloud-tutorial/urls.png)

1. If you wish to configure the application in **SP** initiated mode, edit the **Basic SAML Configuration** section of the **Set up Single Sign-On with SAML** page in Azure. Copy your **Instance URL** (from step 1) and paste it in the **Sign On URL** box in Azure.

    ![image](./media/atlassian-cloud-tutorial/edit-button.png)

    ![image](./media/atlassian-cloud-tutorial/sign-on-URL.png)

1. Your Atlassian Cloud application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. You can edit the attribute mapping by clicking on the **Edit** icon.

    ![image](./media/atlassian-cloud-tutorial/default-attributes.png)

1. Attribute mapping for an Azure AD tenant with a Microsoft 365 license:

    a. Click on the **Unique User Identifier (Name ID)** claim.

    ![image](./media/atlassian-cloud-tutorial/user-attributes-and-claims.png)

    b. Atlassian Cloud expects the **nameidentifier** (**Unique User Identifier**) to be mapped to the user's email (**user.mail**). Edit the **Source attribute** and change it to **user.mail**. Save the changes to the claim.

    ![image](./media/atlassian-cloud-tutorial/unique-user-identifier.png)

    c. The final attribute mappings should look as follows.

    ![image](common/default-attributes.png)

1. Attribute mapping for an Azure AD tenant without a Microsoft 365 license:

    a. Click on the `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` claim.

    ![image](./media/atlassian-cloud-tutorial/email-address.png)

    b. Azure does not populate the **user.mail** attribute for users created in Azure AD tenants without Microsoft 365 licenses; it stores the email for such users in the **userprincipalname** attribute. Atlassian Cloud expects the **nameidentifier** (**Unique User Identifier**) to be mapped to the user's email (**user.userprincipalname**). Edit the **Source attribute** and change it to **user.userprincipalname**. Save the changes to the claim.

    ![image](./media/atlassian-cloud-tutorial/set-email.png)

    c. The final attribute mappings should look as follows.

    ![image](common/default-attributes.png)

### Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called B.Simon.

1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
1. Select **New user** at the top of the screen.
1. In the **User** properties, follow these steps:
   1. In the **Name** field, enter `B.Simon`.
   1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
   1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
   1. Click **Create**.

### Assign the Azure AD test user

In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Atlassian Cloud.

1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Atlassian Cloud**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.

   ![The "Users and groups" link](common/users-groups-blade.png)

1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.

   ![The Add User link](./media/atlassian-cloud-tutorial/add-assign-user.png)

1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.

### Create Atlassian Cloud test user

To enable Azure AD users to sign in to Atlassian Cloud, provision the user accounts manually in Atlassian Cloud by doing the following:

1. In the **Administration** pane, select **Users**.

   ![The Atlassian Cloud Users link](./media/atlassian-cloud-tutorial/tutorial-atlassiancloud-14.png)

1. To create a user in Atlassian Cloud, select **Invite user**.

   ![Create an Atlassian Cloud user](./media/atlassian-cloud-tutorial/tutorial-atlassiancloud-15.png)

1. In the **Email address** box, enter the user's email address, and then assign the application access.

   ![Create an Atlassian Cloud user](./media/atlassian-cloud-tutorial/tutorial-atlassiancloud-16.png)

1. To send an email invitation to the user, select **Invite users**. An email invitation is sent to the user and, after accepting the invitation, the user is active in the system.

> [!NOTE]
> You can also bulk-create users by selecting the **Bulk Create** button in the **Users** section.

### Test SSO

When you select the Atlassian Cloud tile in the Access Panel, you should be automatically signed in to the Atlassian Cloud instance for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).

## Additional Resources

- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list)
- [What is application access and single sign-on with Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on)
- [What is conditional access in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview)
- [Try Atlassian Cloud with Azure AD](https://aad.portal.azure.com/)
- [How to protect Atlassian Cloud with advanced visibility and controls](https://docs.microsoft.com/cloud-app-security/proxy-intro-aad)
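The license-dependent claim mapping described in the tutorial (map NameID to **user.mail** when the tenant has Microsoft 365 licenses, to **user.userprincipalname** otherwise) boils down to a simple fallback rule. The helper below is purely illustrative — it is not part of Azure AD, Atlassian Cloud, or any SDK:

```python
def name_id_source(user: dict) -> str:
    """Pick the value to send as the SAML NameID claim.

    Atlassian Cloud expects NameID to carry the user's email. Tenants
    without Microsoft 365 licenses leave `mail` unset and keep the
    email in `userPrincipalName` instead, so fall back to that.
    """
    mail = user.get("mail")
    return mail if mail else user["userPrincipalName"]


licensed = {"mail": "b.simon@contoso.com",
            "userPrincipalName": "b.simon@contoso.onmicrosoft.com"}
unlicensed = {"mail": None,
              "userPrincipalName": "b.simon@contoso.onmicrosoft.com"}

print(name_id_source(licensed))    # b.simon@contoso.com
print(name_id_source(unlicensed))  # b.simon@contoso.onmicrosoft.com
```

The user dictionaries and field names here mirror the Azure AD attribute names the tutorial mentions, but the function itself is a sketch of the mapping rule, not code you deploy.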
57.790514
454
0.748991
eng_Latn
0.936579
c96ca822ab37da427d531ad4f94e8ea4b7641c27
426
md
Markdown
_journals/Generating.md
tanmingkui/tanmingkui.github.io
65e7edc7c38c3fb299320b3be60d3547ff6a77cf
[ "MIT" ]
null
null
null
_journals/Generating.md
tanmingkui/tanmingkui.github.io
65e7edc7c38c3fb299320b3be60d3547ff6a77cf
[ "MIT" ]
null
null
null
_journals/Generating.md
tanmingkui/tanmingkui.github.io
65e7edc7c38c3fb299320b3be60d3547ff6a77cf
[ "MIT" ]
6
2019-01-06T04:52:27.000Z
2022-03-30T01:46:29.000Z
---
title: "Generating Visually Aligned Sound from Videos"
collection: journals
permalink: /publication/Generating
date: 2020-07-5
year: "2020"
venue: "TIP"
city:
state: ""
thumbnail: "Generating.png"
teaser:
authors: "Peihao Chen, Yang Zhang, Mingkui Tan, Hongdong Xiao, Deng Huang, and Chuang Gan"
bibtex: Generating.txt
uri: Generating.pdf
arxiv:
project:
source: https://github.com/PeihaoChen/regnet
poster:
data:
---
21.3
90
0.748826
eng_Latn
0.263472
c96d220c1132477566ba6de589394bbe7ffb2f6c
3,369
md
Markdown
docs/csharp/programming-guide/classes-and-structs/interface-properties.md
eOkadas/docs.fr-fr
64202ad620f9bcd91f4360ec74aa6d86e1d4ae15
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/programming-guide/classes-and-structs/interface-properties.md
eOkadas/docs.fr-fr
64202ad620f9bcd91f4360ec74aa6d86e1d4ae15
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/programming-guide/classes-and-structs/interface-properties.md
eOkadas/docs.fr-fr
64202ad620f9bcd91f4360ec74aa6d86e1d4ae15
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Interface Properties - C# Programming Guide
ms.custom: seodec18
ms.date: 07/20/2015
helpviewer_keywords:
- properties [C#], on interfaces
- interfaces [C#], properties
ms.assetid: 6503e9ed-33d7-44ec-b4c1-cc16c084b795
ms.openlocfilehash: cdd425970442e284d6fd6488bbb13394c12e939a
ms.sourcegitcommit: 986f836f72ef10876878bd6217174e41464c145a
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/19/2019
ms.locfileid: "69596455"
---
# <a name="interface-properties-c-programming-guide"></a>Interface properties (C# Programming Guide)

Properties can be declared on an [interface](../../language-reference/keywords/interface.md). The following example shows an interface property accessor:

[!code-csharp[csProgGuideProperties#14](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideProperties/CS/Properties.cs#14)]

The accessor of an interface property does not have a body. Thus, the purpose of the accessors is to indicate whether the property is read-write, read-only, or write-only.

## <a name="example"></a>Example

In this example, the interface `IEmployee` has a read-write property, `Name`, and a read-only property, `Counter`. The class `Employee` implements the `IEmployee` interface and uses these two properties. The program reads the name of a new employee and the current number of employees, then displays the employee's name and the computed employee number.

You can use the fully qualified name of the property, which references the interface in which the member is declared. For example:

[!code-csharp[csProgGuideProperties#16](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideProperties/CS/Properties.cs#16)]

This is called an [explicit interface implementation](../interfaces/explicit-interface-implementation.md). For example, if the class `Employee` implements two interfaces, `ICitizen` and `IEmployee`, and both interfaces have a `Name` property, explicit interface member implementation is necessary. That is, the following property declaration:

[!code-csharp[csProgGuideProperties#16](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideProperties/CS/Properties.cs#16)]

implements the `Name` property on the `IEmployee` interface, while the following declaration:

[!code-csharp[csProgGuideProperties#17](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideProperties/CS/Properties.cs#17)]

implements the `Name` property on the `ICitizen` interface.

[!code-csharp[csProgGuideProperties#15](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideProperties/CS/Properties.cs#15)]

**`210 Hazem Abolrous`**

## <a name="sample-output"></a>Sample output

`Enter number of employees: 210`

`Enter the name of the new employee: Hazem Abolrous`

`The employee information:`

`Employee number: 211`

`Employee name: Hazem Abolrous`

## <a name="see-also"></a>See also

- [C# Programming Guide](../index.md)
- [Properties](./properties.md)
- [Using Properties](./using-properties.md)
- [Comparison Between Properties and Indexers](../indexers/comparison-between-properties-and-indexers.md)
- [Indexers](../indexers/index.md)
- [Interfaces](../interfaces/index.md)
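The read-write versus read-only distinction that `IEmployee` illustrates is not specific to C#. As a rough analogue only (this is a Python sketch using abstract properties, not the C# interface mechanism, and it has no equivalent of explicit interface implementation):

```python
from abc import ABC, abstractmethod


class IEmployee(ABC):
    """Analogue of the IEmployee interface: name is read-write, counter read-only."""

    # Read-write: abstract getter and setter.
    @property
    @abstractmethod
    def name(self): ...

    @name.setter
    @abstractmethod
    def name(self, value): ...

    # Read-only: abstract getter, no setter.
    @property
    @abstractmethod
    def counter(self): ...


class Employee(IEmployee):
    number_of_employees = 0

    def __init__(self):
        Employee.number_of_employees += 1
        self._counter = Employee.number_of_employees
        self._name = ""

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

    @property
    def counter(self):
        return self._counter


e = Employee()
e.name = "Hazem Abolrous"
print(f"Employee number: {e.counter}")
print(f"Employee name: {e.name}")
```

Because `counter` defines no setter, assigning to it raises `AttributeError`, which mirrors the read-only `Counter` property in the C# example.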
54.33871
381
0.766696
fra_Latn
0.727816
c96da55223a203d38c9d35869a5332fa2e5d6193
3,165
md
Markdown
README.md
ucam-comparch/pcie-sata-adaptor-board
ba2d5281ff882ba618182c09bc0b7754be5b5ee3
[ "Unlicense" ]
13
2015-12-12T19:12:31.000Z
2022-03-18T09:47:50.000Z
README.md
ucam-comparch/pcie-sata-adaptor-board
ba2d5281ff882ba618182c09bc0b7754be5b5ee3
[ "Unlicense" ]
null
null
null
README.md
ucam-comparch/pcie-sata-adaptor-board
ba2d5281ff882ba618182c09bc0b7754be5b5ee3
[ "Unlicense" ]
10
2015-12-12T19:12:35.000Z
2021-06-21T10:48:50.000Z
This is a printed circuit board to breakout an 8-lane PCI Express socket into 8x SATA channels, which we have used with Terasic DE4 FPGA boards.

THIS IS NOT A SATA CONTROLLER, it merely uses SATA sockets for the physical connections as the sockets and cables are cheap, and have good signal integrity. What you put down the PCIe lanes is up to your FPGA configuration.

Images (the first version with a few hand bugfixes):

![Front board image](http://www.cl.cam.ac.uk/research/comparch/opensource/pcie-sata/PCIeSATA-front-scaled.jpg)

![Rear board image](http://www.cl.cam.ac.uk/research/comparch/opensource/pcie-sata/PCIeSATA-reverse-scaled.jpg)

The design was created using Eagle PCB, but the gerber files should be usable even if you don't have Eagle.

The 4-layer board was fabricated by rak.co.uk using a standard-ish stackup:

    0.2mm double sided
    2 pre-pregs (0.15mm)
    0.8mm unclad board
    2 pre-pregs (0.15mm)
    0.2mm double sided
    Total 1.5-1.6mm

They don't provide controlled impedance boards but our design is sized to have the correct impedances according to this stackup.

In addition, ports are provided for:

* SATA power (3.3V and 12V)
* Altera USB Blaster-compatible JTAG port

Since we couldn't source SATA power connectors without data, the data part of the SATA power connector is connected to some I/Os on the PCIe connector. This can be used for low-speed signalling from the FPGA.

If you connect a USB Blaster to the JTAG port you need to fit components to VREF_SRC to supply the correct VREF. The DE4 uses 2.5V - we fit resistors 68R from 3V3 to VREF and 220R from VREF to ground. R1/R2/R3 provide access to the unused lines on the JTAG cable, or a means to ground them to reduce noise.

Computer Architecture Group, University of Cambridge Computer Laboratory
May 2013
http://www.cl.cam.ac.uk/research/comparch/

Copyright (c) 2013, Simon Moore
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
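The VREF divider values quoted in the README (68R from 3V3 to VREF, 220R from VREF to ground) can be sanity-checked with the standard unloaded-divider formula. The snippet below is just that arithmetic, not part of the board files:

```python
def divider_vout(vin: float, r_top: float, r_bottom: float) -> float:
    """Output voltage of an unloaded resistive divider:
    vout = vin * r_bottom / (r_top + r_bottom)."""
    return vin * r_bottom / (r_top + r_bottom)


# 68R from 3.3V to VREF, 220R from VREF to ground
vref = divider_vout(3.3, 68, 220)
print(f"VREF = {vref:.2f} V")  # VREF = 2.52 V
```

That lands close to the DE4's 2.5V JTAG VREF, as the README intends; the JTAG pin itself draws very little current, so the unloaded approximation is reasonable.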
43.958333
111
0.786414
eng_Latn
0.755169
c96dfa7b7d3dacc27802c60e6b47346b0e425829
8,037
md
Markdown
docs/standard/datetime/saving-and-restoring-time-zones.md
douglasbreda/docs.pt-br
f92e63014d8313d5e283db2e213380375cea9a77
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/datetime/saving-and-restoring-time-zones.md
douglasbreda/docs.pt-br
f92e63014d8313d5e283db2e213380375cea9a77
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/datetime/saving-and-restoring-time-zones.md
douglasbreda/docs.pt-br
f92e63014d8313d5e283db2e213380375cea9a77
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Saving and restoring time zones
ms.date: 04/10/2017
ms.technology: dotnet-standard
dev_langs:
- csharp
- vb
helpviewer_keywords:
- restoring time zones
- deserialization [.NET Framework], time zones
- serialization [.NET Framework], time zones
- time zone objects [.NET Framework], restoring
- saving time zones
- time zone objects [.NET Framework], deserializing
- time zones [.NET Framework], saving
- time zones [.NET Framework], restoring
- time zone objects [.NET Framework], serializing
- time zone objects [.NET Framework], saving
ms.assetid: 4028b310-e7ce-49d4-a646-1e83bfaf6f9d
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 1dc983f1f0b2405f207d69c62b800ee854fcd409
ms.sourcegitcommit: 64f4baed249341e5bf64d1385bf48e3f2e1a0211
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/07/2018
ms.locfileid: "44081762"
---
# <a name="saving-and-restoring-time-zones"></a>Saving and restoring time zones

The <xref:System.TimeZoneInfo> class relies on the registry to retrieve predefined time zone data. However, the registry is a dynamic structure. Additionally, the time zone information that the registry contains is used by the operating system primarily to handle time conversions and adjustments for the current year. This has two major implications for applications that rely on accurate time zone data:

* A time zone that is required by an application may not be defined in the registry, or it may have been renamed or removed from the registry.
* A time zone that is defined in the registry may lack information about the specific adjustment rules that are necessary for historical time conversions.

The <xref:System.TimeZoneInfo> class addresses these limitations through its support for serializing (saving) and deserializing (restoring) time zone data.

## <a name="time-zone-serialization-and-deserialization"></a>Time zone serialization and deserialization

Saving and restoring a time zone by serializing and deserializing time zone data involves just two method calls:

* You can serialize a <xref:System.TimeZoneInfo> object by calling that object's <xref:System.TimeZoneInfo.ToSerializedString%2A> method. The method takes no parameters and returns a string that contains time zone information.
* You can deserialize a <xref:System.TimeZoneInfo> object from a serialized string by passing that string to the `static` (`Shared` in Visual Basic) <xref:System.TimeZoneInfo.FromSerializedString%2A?displayProperty=nameWithType> method.

## <a name="serialization-and-deserialization-scenarios"></a>Serialization and deserialization scenarios

The ability to save (or serialize) a <xref:System.TimeZoneInfo> object to a string and to restore (or deserialize) it for later use increases both the utility and the flexibility of the <xref:System.TimeZoneInfo> class. This section examines some of the situations in which serialization and deserialization are most useful.

### <a name="serializing-and-deserializing-time-zone-data-in-an-application"></a>Serializing and deserializing time zone data in an application

A serialized time zone can be restored from a string when it is needed. An application might do this if a time zone retrieved from the registry is unable to correctly convert a date and time within a particular date range. For example, time zone data in the Windows XP registry supports a single adjustment rule, while time zones defined in the Windows Vista registry typically provide information about two adjustment rules. This means that historical time conversions may be inaccurate. Serialization and deserialization of time zone data can handle this limitation. In the following example, a custom <xref:System.TimeZoneInfo> class that has no adjustment rules is defined to represent the United States Eastern Standard Time zone from 1883 to 1917, before the introduction of daylight saving time in the United States. The custom time zone is serialized to a variable that has global scope. The time zone conversion method, `ConvertUtcTime`, is passed the Coordinated Universal Time (UTC) times to convert. If the date and time occurs in 1917 or earlier, the custom Eastern Standard Time zone is restored from a serialized string and replaces the time zone retrieved from the registry.

[!code-csharp[System.TimeZone2.Serialization.1#1](../../../samples/snippets/csharp/VS_Snippets_CLR_System/system.TimeZone2.Serialization.1/cs/Serialization.cs#1)]
[!code-vb[System.TimeZone2.Serialization.1#1](../../../samples/snippets/visualbasic/VS_Snippets_CLR_System/system.TimeZone2.Serialization.1/vb/Serialization.vb#1)]

### <a name="handling-time-zone-exceptions"></a>Handling time zone exceptions

Because the registry is a dynamic structure, its contents are subject to accidental or deliberate modification. This means that a time zone that should be defined in the registry, and that is required for an application to run successfully, may be absent. Without support for time zone serialization and deserialization, you have no choice but to handle the resulting <xref:System.TimeZoneNotFoundException> by ending the application. However, by using time zone serialization and deserialization, you can handle an unexpected <xref:System.TimeZoneNotFoundException> by restoring the required time zone from a serialized string, and the application will continue to run.

The following example creates and serializes a custom Central Standard Time zone. It then tries to retrieve the Central Standard Time zone from the registry. If the retrieval operation throws either a <xref:System.TimeZoneNotFoundException> or an <xref:System.InvalidTimeZoneException>, the exception handler deserializes the time zone.

[!code-csharp[System.TimeZone2.Serialization.2#1](../../../samples/snippets/csharp/VS_Snippets_CLR_System/system.TimeZone2.Serialization.2/cs/Serialization2.cs#1)]
[!code-vb[System.TimeZone2.Serialization.2#1](../../../samples/snippets/visualbasic/VS_Snippets_CLR_System/system.TimeZone2.Serialization.2/vb/Serialization2.vb#1)]

### <a name="storing-a-serialized-string-and-restoring-it-when-needed"></a>Storing a serialized string and restoring it when needed

The previous examples have stored time zone information to a string variable and restored it when needed. However, the string that contains time zone information can itself be stored in some storage medium, such as an external file, a resource file embedded in the application, or the registry. (Note that information about custom time zones should be stored apart from the system time zone keys in the registry.) Storing a serialized time zone string in this manner also separates the time zone creation routine from the application itself. For example, a time zone creation routine can execute and create a data file that contains historical time zone information that an application can use. The data file can then be installed with the application, and it can be opened and one or more of its time zones can be deserialized when the application requires them.

For an example that uses an embedded resource to store serialized time zone data, see [How to: Save time zones to an embedded resource](../../../docs/standard/datetime/save-time-zones-to-an-embedded-resource.md) and [How to: Restore time zones from an embedded resource](../../../docs/standard/datetime/restore-time-zones-from-an-embedded-resource.md).

## <a name="see-also"></a>See also

* [Dates, times, and time zones](../../../docs/standard/datetime/index.md)
# Lua and Love2d for games - Workshop

------------

Material and code for the 27 April '17 Game programming in Lua and Love2d workshop at UPV

Check out the course I wrote in Spanish about Lua and Love2d at libre.university.

* Basics of Lua: <https://es.libre.university/lesson/B1Pll0oCg>
* Create your first platform game with Love2d: <https://es.libre.university/lesson/rJQx-As0x>
# Scala on Android Writing your Android project in [Scala](http://scala-lang.org/) is becoming smoother and smoother. This is a short guide that will help you get started. ## Pros/cons * (+) Concise code * (+) Fantastic concurrency support (futures/promises, scala-async, actors, ...) * (+) Advanced DSLs (like Macroid ;) ) * (−) Build time (due to ProGuard + dex taking up to 1 minute on big projects) ## Tutorials If the instructions below are too dry for your taste, check out these tutorials: * [Android & Scala (part 1)](http://blog.eigengo.com/2014/06/21/android-and-scala/) by [Jan Macháček](https://twitter.com/honzam399) * [Scala on Android — Preparing the environment (part 1)](http://www.47deg.com/blog/scala-on-android-preparing-the-environment) by [Federico Fernández ](https://twitter.com/@fede_fdz) ## The Android SDK The SDK can be downloaded from the [Android website](http://developer.android.com/sdk/index.html). You will also need to configure `ANDROID_HOME` environment variable to point to the installation. To use the bundled libraries, such as the support library, make sure you install the following items in the SDK manager: ![SDK manager screenshot](SDK-manager.png) Alternatively, you can find a UNIX install script in [*Macroid*’s Travis config](https://github.com/macroid/macroid/blob/master/.travis.yml#L7). ## The build system The most important part of the equation is the build system (in mainland Android you are likely to use one anyway). After configuring the build you will be able to compile and *run the project from the command line* and *generate IDE project files automatically*. Currently there are three options: * **Recommended**. Use [sbt](http://www.scala-sbt.org/), Scala’s de-facto standard build system. You’ll have to install it from its website. To compile and run Android projects you’ll need the [Android plugin](https://github.com/pfn/android-sdk-plugin). Follow the plugin’s readme page to set things up. 
* Use [Gradle](http://www.gradle.org/) and the [Android+Scala plugin](https://github.com/saturday06/gradle-android-scala-plugin)
* **Not recommended**. Just use an IDE without the build system. For Eclipse you'll need the [ADT plugin](http://developer.android.com/tools/sdk/eclipse-adt.html), [Scala IDE](http://scala-ide.org/) and the [AndroidProguardScala plugin](https://github.com/banshee/AndroidProguardScala). For [IntelliJ IDEA](http://www.jetbrains.com/idea/) or [Android Studio](http://developer.android.com/sdk/installing/studio.html), which is the same, see [this guide](https://github.com/yareally/android-scala-intellij-no-sbt-plugin).

## The IDE

The recommended IDE is [IntelliJ IDEA](http://www.jetbrains.com/idea/) or [Android Studio](http://developer.android.com/sdk/installing/studio.html), which is the same. You will need to have the [Scala](http://plugins.jetbrains.com/plugin/?id=1347) and [sbt](http://plugins.jetbrains.com/plugin/5007?pr=idea) plugins installed.

Below I am assuming that you are using the first build option, i.e. sbt. Once you have the build running from the command line, <s>follow the instructions for the sbt plugin to create IDE project files</s>. Since IDEA 14, you just need to import the project and specify the Android SDK as its SDK.

Inside IDEA, you can use the sbt console to run the project. Alternatively, the “run” button can be reconfigured for that purpose: go to Run → Edit Configurations → (select a configuration) → Before launch. You need to configure it to look like this:

![before launch](before-launch.png)

Make sure that the paths to `AndroidManifest.xml` and the `APK` are configured properly. Go to Project settings → Modules → (select main module) → Android → Structure/Packaging.
* Manifest file should be `/path-to-project/src/main/AndroidManifest.xml`: ![manifest path](manifest-path.png) * APK path should be `path-to-project/target/android-bin/build_integration/{module name}-BUILD-INTEGRATION.apk`: ![apk path](apk-path.png) ## Additional steps It is possible to preload Scala standard library on the emulator to reduce build times. Here is [a tool](https://github.com/svenwiegand/scala-libs-for-android-emulator) to do that. ## Useful resources * When in doubt, don’t hesitate to use the [Scala-on-Android mailing list](https://groups.google.com/forum/#!forum/scala-on-android). * See the [talks](Talks.html) section, as I also cover the matter in those talks.
Basic Use

```dart
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Argo Example App',
      debugShowCheckedModeBanner: false,
      home: ResponsiveWrapper(
        wrapConfig: WrapperConfig(
          globalBreakpoints: ScreenBreakpoints(
            mobile: 321,
            tablet: 650,
            desktop: 1100,
          ),
        ),
        child: MainScreen(),
      ),
    );
  }
}

// ResponsiveLayoutBuilder
ResponsiveLayoutWidget.builder(
  mobile: (ctx, info) => MobileChild(),
  tablet: (ctx, info) => TabletChild(),
  desktop: (ctx, info) => DesktopChild(),
)

// ResponsiveVisibility
ResponsiveVisibility.screen(
  conditionScreen: ConditionScreen(
    mobile: true,
    tablet: true,
    desktop: false,
  ),
  child: Container(color: Colors.red, width: 50, height: 50),
),

// ResponsiveBuilder
ResponsiveBuilder(
  builder: (ctx, info) {
    if (info.localSize.width > 300 && info.deviceScreen.isTablet()) {
      return Container(color: Colors.red, width: 50, height: 50);
    } else {
      return const SizedBox();
    }
  },
)

// ResponsiveTheme
ResponsiveTheme.screen(
  conditionScreen: ConditionScreen(
    mobile: MyThemesApp(),
    tablet: MyThemesTablet(),
    desktop: MyThemesWeb(),
  ),
)
```

An example `IThemeDataRule` implementation using the condition for `ResponsiveTheme`:

```dart
class MyThemesApp with IThemeDataRule {
  @override
  ThemeData getThemeByRule(ThemeRule rule) {
    switch (rule) {
      case ThemeRule.light:
        return lightTheme;
      case ThemeRule.dark:
        return darkTheme;
      case ThemeRule.custom:
        return darkerTheme;
      default:
        return lightTheme;
    }
  }
}
```
# Java Core Technology

## Core parameters for creating a thread pool

```java
/**
 * Creates a new {@code ThreadPoolExecutor} with the given initial
 * parameters.
 *
 * @param corePoolSize the number of threads to keep in the pool, even
 *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
 * @param maximumPoolSize the maximum number of threads to allow in the
 *        pool
 * @param keepAliveTime when the number of threads is greater than
 *        the core, this is the maximum time that excess idle threads
 *        will wait for new tasks before terminating.
 * @param unit the time unit for the {@code keepAliveTime} argument
 * @param workQueue the queue to use for holding tasks before they are
 *        executed. This queue will hold only the {@code Runnable}
 *        tasks submitted by the {@code execute} method.
 * @param threadFactory the factory to use when the executor
 *        creates a new thread
 * @param handler the handler to use when execution is blocked
 *        because the thread bounds and queue capacities are reached
 * @throws IllegalArgumentException if one of the following holds:<br>
 *         {@code corePoolSize < 0}<br>
 *         {@code keepAliveTime < 0}<br>
 *         {@code maximumPoolSize <= 0}<br>
 *         {@code maximumPoolSize < corePoolSize}
 * @throws NullPointerException if {@code workQueue}
 *         or {@code threadFactory} or {@code handler} is null
 */
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {...}
```
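A minimal sketch of wiring these parameters together (the pool sizes and queue capacity below are illustrative values, not recommendations):

```java
import java.util.concurrent.*;

public class ThreadPoolDemo {
    public static void main(String[] args) throws Exception {
        // Bounded queue: at most 10 tasks wait here before extra
        // threads (up to maximumPoolSize) are created.
        BlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<>(10);

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                      // corePoolSize: threads kept even when idle
                4,                      // maximumPoolSize: upper bound when the queue is full
                60L, TimeUnit.SECONDS,  // keepAliveTime for threads above the core size
                workQueue,
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy()); // reject with RejectedExecutionException

        Future<Integer> result = pool.submit(() -> 21 * 2);
        System.out.println(result.get()); // prints 42

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that tasks beyond `maximumPoolSize + queue capacity` are handed to the `RejectedExecutionHandler`; `AbortPolicy` (the default) simply throws.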
# country

Display the flag, name & population of world countries.
## Basic definitions - String `s` *matches* the regex pattern `/p/` whenever `s` contains the pattern 'p'. - Example: `abc` matches `/a/`, `/b/`, `/c/` - For simplicity, we will use the *matches* verb loosely in a sense that - a string can *match* a regex (e.g. 'a' matches /a/) - a regex can *match* a string (e.g. /a/ matches 'a') - A regex pattern consists of *literal characters* that match itself, and *metasyntax characters* - Literal characters can be *concatenated* in a regular expression. String `s` matches `/ab/` if there is an `a` character *directly followed by a `b` character. - Example: `abc` matches `/ab/`, `/bc/`, `/abc/` - Example: `abc` does not match `/ac/`, `/cd/`, `/abcd/` - *Alternative execution* can be achieved with the metasyntax character `|` - `/a|b/` means: match either `a` or `b` - Example: 'ac', 'b', 'ab' match `/a|b/` - Example: 'c' does not match `/a|b/` - Iteration is achieved using repeat modifiers. One repeat modifier is the `*` (asterisk) metasyntax character. - Example: `/a*/` matches any string containing any number of `a` characters - Example: `/a*/` matches any string, including `''`, because they all contain at least zero `a` characters - Matching is *greedy*. A *greedy* match attempts to stay in iterations as long as possible. - Example: `s = 'baaa'` matches `/a*a/` in the following way: - `s[0]`: `'b'` is discarded - `s[1]`: `'a'` matches the pattern `a*` - `s[1] - s[2]`: `'aa'` matches the pattern `a*` - `s[1] - s[3]`: `'aaa'` matches the pattern `a*` - as there are no more characters in `s` and there is a character yet to be matched in the regex, we *backtrack* one character - `s[1] - s[2]`: `'aa'` matches the pattern `a*`, and we end investigating the `a*` pattern - `s[3]`: `'a'` matches the `a` pattern - there is a complete match, `s[1] - s[2]` match the `a*` pattern, and `s[3]` matches the `a` pattern. The returned match is `aaa` starting at index `1` of string `s` - Backtracking is *minimal*. 
We attempt to backtrack one character at a time in the string, and attempt to interpret the rest of the regex pattern on the remainder of the string. ## Constructing a regex * literal form: `/regex/` * constructor: `new RegExp( 'regex' );` - escaping: `/\d/` becomes `new RegExp( '\\d' )` - argument list: `new RegExp( pattern, modifiers );` Applying modifiers in literal form: ``` const regex1 = new RegExp( 'regex', 'ig' ); const regex2 = /regex/ig; ``` * `RegExp` constructor also accepts a regular expression: ``` > new RegExp( /regex/, 'i' ); /regex/i ``` ## List of JavaScript regex modifiers | Modifier | Description | | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `i` | non-case sensitive matching. Upper and lower cases don't matter. | | | | | `g` | global match. We attempt to find all matches instead of just returning the first match. The internal state of the regular expression stores where the last match was located, and matching is resumed where it was left in the previous match. | | | | | `m` | multiline match. It treats the `^` and `$` characters to match the beginning and the end of each line of the tested string. A newline character is determined by `\n` or `\r`. | | | | | `u` | unicode search. The regex pattern is treated as a unicode sequence | | | | | `y` | Sticky search | Example: ``` > const str = 'Regular Expressions'; > /re/.test( str ); false > /re/i.test( str ); // matches: 're', 'Re', 'rE', 'RE' true ``` ## Regex API - `regex.exec( str )`: returns information on the first match. 
Exec allows iteration on the regex for all matches if the `g` modifier is set for the regex - `regex.test( str )`: true iff regex *matches* a string ``` > const regex = /ab/; > const str = 'bbababb'; > const noMatchStr = 'c'; > regex.exec( str ); // first match is at index 2 [ 0: "ab", index: 2, input: "bbababb" ] > regex.exec( noMatchStr ); null > regex.test( str ); true > regex.test( noMatchStr ); false > regex.exec( noMatchStr ); > const globalRegex = /ab/g; > globalRegex.exec( str ); > globalRegex.exec( str ); [ 0: "ab", index: 2, input: "bbababb" ] > globalRegex.exec( str ); [ 0: "ab", index: 4, input: "bbababb" ] > globalRegex.exec( str ); null > let result; > while ( ( result = globalRegex.exec( str ) ) !== null ) { console.log( result ); } [ 0: "ab", index: 2, input: "bbababb" ] [ 0: "ab", index: 4, input: "bbababb" ] ``` ## String API - `str.match( regex )`: for non-global regular expression arguments, it returns the same return value as `regex.exec( str )`. For global regular expressions, the return value is an array containing all the matches. Returns `null` if no match has been found. - `str.replace( regex, newString )`: replaces the first full match with `newString`. If `regex` has a global modifier, `str.replace( regex, newString )` replaces all matches with `newString`. Does not mutate the original string `str`. - `str.search( regex )`: returns the index of the first match. Returns `-1` when no match is found. Does not consider the global modifier. - `str.split( regex )`: does not consider the global modifier. Returns an array of strings containing strings in-between matches. 
``` > const regex = /ab/; > const str = 'bbababb'; > const noMatchStr = 'c'; > str.match( regex ); ["ab", index: 2, input: "bbababb"] > str.match( globalRegex ); ["ab", "ab"] > noMatchStr.match( globalRegex ); null > str.replace( regex, 'X' ); "bbXabb" > str.replace( globalRegex, 'X' ) "bbXXb" > str.search( regex ); 2 > noMatchStr.search( regex ); -1 > str.split( regex ); ["bb", "", "b"] > noMatchStr.split( regex ); ["c"] ``` ## Literal characters A regex *literal character* matches itself. The expression `/a/` matches any string that contains the `a` character. Example: ``` /a/.test( 'Andrea' ) // true, because the last character is 'a' /a/.test( 'André' ) // false, because there is no 'a' in the string /a/i.test( 'André') // true, because 'A' matches /a/i with the case-insensitive flag `i` applied on `/a/` ``` Literal characters are: all characters except metasyntax characters such as: `.`, `*`, `^`, `$`, `[`, `]`, `(`, `)`, `{`, `}`, `|`, `?`, `+`, `\` When you need a metasyntax character, place a backslash in front of it. Examples: `\.`, `\\`, `\[`. Whitespaces: - behave as literal characters, exact match is required - use character classes for more flexibility, such as: - `\n` for a newline - `\t` for a tab - `\b` for word boundaries ## Metasyntax characters | Metasyntax character | Semantics | | -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `.` | arbitrary character class | | | | | `[]` | character sets, `[012]` means 0, or 1, or 2 | | | | | `^` | (1) negation, e.g. 
in a character set `[^890]` means not 8, not 9, and not 0, (2) anchor matching the start of the string or line |
| | |
| `$` | anchor matching the end of the string or line |
| | |
| `\|` | alternative execution (or) |
| | |
| `*` | iteration: match any number of times |
| | |
| `?` | optional parts in the expression |
| | |
| `+` | match at least once |
| | |
| `{}` and `{,}` | specify a range for the number of times an expression has to match the string. Forms: `{3}` exactly 3 times, `{3,}` at least 3 times, `{3,5}` between 3 and 5 times. |
| | |
| `()` | (1) overriding precedence through grouping, (2) extracting substrings |
| | |
| `\` | (1) before a metasyntax character: the next character becomes a literal character (e.g. `\\`). (2) before a special character: the sequence is interpreted as a special character sequence (e.g. `\d` as digit). |
| | |
| `(?:`, `)` | non-capturing parentheses. Anything in-between `(?:` and `)` is matched, but not captured. Should be used to achieve only functionality (1) of `()` parentheses. |
| | |
| `(?=`, `)` | lookahead. E.g. `.(?=end)` matches an arbitrary character if it is followed by the characters `end`. Only the character is returned in the match, `end` is excluded. |
| | |
| `(?!`, `)` | negative lookahead. E.g. `.(?!end)` matches an arbitrary character if it is **not** followed by the characters `end`. Only the character is returned in the match, `end` is excluded. |
| | |
| `\b` | word boundary. Zero length assertion. Matches the start or end position of a word. E.g. `\bworld\b` matches the string `'the world'` |
| | |
| `[\b]` | matches a backspace character. This is **not** a character set including a word boundary. |
| | |
| `\B` | not a word boundary. |
| | |
| `\c` | `\c` is followed by character `x`. `\cx` matches `CTRL + x`. |
| | |
| `\d` | digit. `[0-9]` |
| | |
| `\D` | non-digit.
`[^0-9]` | | | | | `\f` | form feed character | | | | | `\n` | newline character (line feed) | | | | | `\r` | carriage return character | | | | | `\s` | one arbitrary white space character | | | | | `\S` | one non-whitespace character | | | | | `\t` | tab character | | | | | `\u` | `\u` followed by four hexadecimal digits matches a unicode character described by those four digits when the `u` flag is not set. When the `u` flag is set, use the format `\u{0abc}`. | | | | | `\v` | vertical tab character | | | | | `\w` | alphanumeric character. `[A-Za-z0-9_]` | | | | | `\W` | non-alphanumeric character | | | | | `\x` | `\x` followed by two hexadecimal digits matches the character described by those two digits. | | | | | `\0` | NULL character. Equivalent to `\x00` and `\u0000`. When the `u` flag is set for the regex, it is equivalent to `\u{0000}`. | | | | | `\1`, `\2`, ... | backreference. `\i` is a reference to the matched contents of the *i*th capture group. | Examples: ``` /.../.test( 'ab' ) // false, we need at least three arbitrary characters /.a*/.test( 'a' ) // true, . 
matches 'a', and a* matches ''
/^a/.test( 'ba' )   // false, 'ba' does not start with a
/^a/.test( 'ab' )   // true, 'ab' starts with a
/a$/.test( 'ba' )   // true, 'ba' ends with a
/a$/.test( 'ab' )   // false, 'ab' does not end with a
/^a$/.test( 'ab' )  // false, 'ab' does not fully match a
/^a*$/.test( 'aa' ) // true, 'aa' fully matches a pattern consisting of
                    // a characters only
/[ab]/.test( 'b' )  // true, 'b' contains a character that is a
                    // member of the character class `[ab]`
/a|b/.test( 'b' )   // true, 'b' contains a character that is
                    // either `a` or `b`
/ba?b/.test( 'bb' ) // true, the optional a is not included
/ba?b/.test( 'bab') // true, the optional a is included
/ba?b/.test( 'bcb') // false, only matches 'bb' or 'bab'
/a+/.test( '' )     // false, at least one `a` character is needed
/a+/.test( 'ba' )   // true, the `a` character was found
/a{3}/.test('baaab')// true, three consecutive 'a' characters were found
/(a|b)c/.test('abc')// true, a `b` character is followed by `c`
/a(?=b)/.test('ab') // true. It matches 'a', because 'a' is followed by 'b'
/a(?!b)/.test('ab') // false, because 'a' is not followed by 'b'
/\ba/.test('bab')   // false, because 'a' is not the first character of a word
/\Ba/.test('bab')   // true, because 'a' is not the first character of a word
/(\d)\1/.test('55') // true. It matches two consecutive digits with the same value
```

In the last example, notice the parentheses. As the `|` operator has the lowest precedence of all operators, the parentheses raise the precedence of the alternative execution `a|b` above that of concatenation in `(a|b)c`.
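That precedence rule is easy to verify in a JavaScript console; a small illustrative snippet:

```javascript
// Without parentheses, | has the lowest precedence:
// /^a|bc$/ means (^a)|(bc$), not ^(a|b)c$.
console.log(/^a|bc$/.test('ax'));   // true: the ^a alternative matches
console.log(/^(a|b)c$/.test('ax')); // false: grouping binds the alternatives to c
console.log(/^(a|b)c$/.test('ac')); // true
console.log(/^(a|b)c$/.test('bc')); // true
```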
## Character sets, character classes - `[abc]` is `a|b|c` - `[a-c]` is `[abc]` - `[0-9a-fA-F]` is a case-insensitive hexadecimal digit - `[^abc]` is an arbitrary character that is not `a`, not `b`, and not `c` - `.`: arbitrary character class - Example: `/..e/`: three character sequence ending with `e` - other character classes such as digit (`\d`), not a digit (`\D`), word (`\w`), not a word (`\W`), whitespace character (`\s`): check out the section on metasyntax characters ## Basic (greedy) Repeat modifiers Matching is maximal. Backtracking is minimal, goes character by character. | Repeat modifier | Description | | --------------- | --------------------------------------------------- | | `+` | Match at least once | | | | | `?` | Match at most once | | | | | `*` | Match any number of times | | | | | `{min,max}` | match at least `min` times, and at most `max` times | | | | | `{n}` | Match exactly `n` times | Examples: - `/^a+$/` matches any string consisting of one or more `'a'` characters and nothing else - `/^a?$/` matches `''` or `'a'`. The string may contain at most one `'a'` character - `/^a*$/` matches the empty string and everything matched by `/^a+$/` - `/^a{3,5}$/` matches `'aaa'`, `'aaaa'`, and `'aaaaa'` - `/(ab){3}/` matches any string containing the substring `'ababab'` ## Lazy Repeat Modifiers Matching is minimal. During backtracking, we add one character at a time. | Repeat modifier (PCRE) | Description | | ---------------------- | --------------------------------------------------- | | `+?` | Match at least once | | | | | `??` | Match at most once | | | | | `*?` | Match any number of times | | | | | `{min,max}?` | match at least `min` times, and at most `max` times | | | | | `{n}?` | Match exactly `n` times | Examples for lazy matching: - `/^a+?$/` lazily matches any string consisting of one or more `'a'` characters and nothing else - `/^a??$/` lazily matches `''` or `'a'`. 
The string may contain at most one `'a'` character
- `/^a*?$/` lazily matches the empty string and everything matched by `/^a+$/`
- `/^a{3,5}?$/` lazily matches `'aaa'`, `'aaaa'`, and `'aaaaa'`
- `/(ab){3}?/` lazily matches any string containing the substring `'ababab'`

## Capture groups

- `(` and `)` capture a substring inside a regex
- Capture groups are numbered by the order of their opening parentheses, starting with `1`
- `(?:` and `)` act as non-capturing parentheses; they are not included in the capture group numbering

Example:

```
/a(b|c(d|(e))(f))$/
  ^   ^  ^   ^
  |   |  |   |
  1   2  3   4
```

```
> console.table( /^a(b|c(d|(e))(f+))$/.exec( 'ab' ) )
(index)  Value
0        "ab"
1        "b"
2        undefined
3        undefined
4        undefined
index    0
input    "ab"

> console.table( /^a(b|c(d|(e))(f+))$/.exec( 'aceff' ) )
(index)  Value
0        "aceff"
1        "ceff"
2        "e"
3        "e"
4        "ff"
index    0
input    "aceff"

> console.table( /^a(b|c(?:d|(e))(f+))$/.exec( 'aceff' ) )
(index)  Value
0        "aceff"
1        "ceff"
2        "e"
3        "ff"
index    0
input    "aceff"
```

## Lookahead and lookbehind

| Lookahead type       | JavaScript syntax | Remark                                |
| -------------------- | ----------------- | ------------------------------------- |
| positive lookahead   | `(?=pattern)`     |                                       |
| negative lookahead   | `(?!pattern)`     |                                       |
| positive lookbehind  | `(?<=pattern)`    | only expected in ES2018               |
| negative lookbehind  | `(?<!pattern)`    | only expected in ES2018               |
| word boundary        | `\b`              | used both as lookahead and lookbehind |
| start of string/line | `^`               | used as a lookbehind                  |
| end of string/line   | `$`               | used as a lookbehind                  |

- Lookaheads and lookbehinds are non-capturing
- As opposed to Perl 5, lookbehinds can be of variable length, because matching is implemented backwards. This means there are no restrictions on lookbehind length
- As a consequence of backwards matching, capture groups inside lookbehinds are evaluated according to the rules of backwards matching

Examples:

```
> /a(?=b)/.exec( 'ab' )
["a", index: 0, input: "ab"]

> /a(?!\d)/.exec( 'ab' )
["a", index: 0, input: "ab"]

> /a(?!\d)/.exec( 'a0' )
null

> /(?<=a)b/.exec( 'ab' )
["b", index: 1, input: "ab"]

// executed in latest Google Chrome
/(?<!a)b/.exec( 'Ab' )
["b", index: 1, input: "Ab"]

// executed in latest Google Chrome
/\bregex\b/.exec( 'This regex expression tests word boundaries.' )
["regex", index: 5, input: "This regex expression tests word boundaries."]

/^regex$/.exec( 'This\nregex\nexpression\ntests\nanchors.' )
null

/^regex$/m.exec( 'This\nregex\nexpression\ntests\nanchors.' )
["regex", index: 5, input: "This↵regex↵expression↵tests↵anchors."]
```

## Possessive repeat modifiers

Attempts a maximal (greedy) match first. The loop does not backtrack though: it either accepts the maximal match, or fails.

Possessive repeat modifiers don't exist in JavaScript. However, there are workarounds. Assuming we don't have any other capture groups in front of the expression, use

- `(?=(a+))\1` instead of the generic PCRE pattern `a++`
- `(?=(a*))\1` instead of the generic PCRE pattern `a*+`
- etc.
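To make the workaround concrete, here is a small Node.js sketch contrasting an ordinary greedy quantifier with the lookahead-plus-backreference emulation of a possessive one. The trick works because JavaScript never backtracks into a lookahead that has already succeeded, so the captured run behaves like an atomic group:

```javascript
// Greedy a+ backtracks: after grabbing 'aaa', it gives back one 'a'
// so that the literal 'ab' can still match.
console.log(/^a+ab$/.test('aaab')); // true

// "Possessive" emulation: the lookahead captures the maximal run of a's,
// and the backreference \1 consumes it. Since the engine cannot backtrack
// into the lookahead, no 'a' is left over for the literal 'ab'.
console.log(/^(?=(a+))\1ab$/.test('aaab')); // false, just like PCRE a++ here
```

Note that the capture group inside the lookahead shifts the numbering of any later capture groups by one, which is why the workaround assumes no other capture groups in front of the expression.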
---

*Source: `docs/api/injection.md` from the `marhali/vue-i18n-next` repository (MIT).*
# Component Injections

## ComponentCustomOptions

Component Custom Options for Vue I18n

**Signature:**
```typescript
export interface ComponentCustomOptions;
```

### i18n

Vue I18n options for Component

**Signature:**
```typescript
i18n?: VueI18nOptions;
```

**See Also**
- [VueI18nOptions](legacy#vuei18noptions)

## ComponentCustomProperties

Component Custom Properties for Vue I18n

**Signature:**
```typescript
export interface ComponentCustomProperties;
```

**Details**

These properties are injected into every child component

### $i18n

Exported Global Composer instance, or global VueI18n instance.

**Signature:**
```typescript
$i18n: VueI18n | ExportedGlobalComposer;
```

**Details**

You can get the [exported composer instance](general#exportedglobalcomposer), which is exported from the global [composer](composition#composer) instance created with [createI18n](general#createi18n), or the global [VueI18n](legacy#vuei18n) instance.

You get the exported composer instance in [Composition API mode](general#mode), or the VueI18n instance in [Legacy API mode](general#mode); that is the instance this property refers to.

The locales, locale messages, and other resources managed by the instance referenced by this property are valid as global scope.

If the `i18n` component option is not specified, it's the same as the VueI18n instance that can be referenced by the i18n instance's [global](general#global) property.

**See Also**
- [Scope and Locale Changing](../../guide/essentials/scope)
- [Composition API](../../guide/advanced/composition)

### $t(key)

Locale message translation

**Signature:**
```typescript
$t(key: Path): TranslateResult;
```

**Details**

If this is used in a reactive context, it will re-evaluate once the locale changes.

In [Composition API mode](general#mode), the `$t` is injected by `app.config.globalProperties`. The input / output is the same as for Composer, and it works on **global scope**. About that details, see [Composer#t](composition#t-key).
In [Legacy API mode](general#mode), the input / output is the same as for VueI18n instance. About details, see [VueI18n#t](legacy#t-key). **See Also** - [Scope and Locale Changing](../../guide/essentials/scope) - [Composition API](../../guide/advanced/composition) #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | #### Returns Translation message ### $t(key, locale) Locale message translation **Signature:** ```typescript $t(key: Path, locale: Locale): TranslateResult; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | locale | Locale | A locale, override locale that global scope or local scope | #### Returns Translation message ### $t(key, locale, list) Locale message translation **Signature:** ```typescript $t(key: Path, locale: Locale, list: unknown[]): TranslateResult; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | locale | Locale | A locale, override locale that global scope or local scope | | list | unknown[] | A values of list interpolation | #### Returns Translation message ### $t(key, locale, named) Locale message translation **Signature:** ```typescript $t(key: Path, locale: Locale, named: object): TranslateResult; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. 
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | locale | Locale | A locale, override locale that global scope or local scope | | named | object | A values of named interpolation | #### Returns Translation message ### $t(key, list) Locale message translation **Signature:** ```typescript $t(key: Path, list: unknown[]): TranslateResult; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | list | unknown[] | A values of list interpolation | #### Returns Translation message ### $t(key, named) Locale message translation **Signature:** ```typescript $t(key: Path, named: Record<string, unknown>): TranslateResult; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | locale | Locale | A locale, override locale that global scope or local scope | | named | Record&lt;string, unknown&gt; | A values of named interpolation | #### Returns Translation message ### $t(key) Locale message translation **Signature:** ```typescript $t(key: Path): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | #### Returns Translation message ### $t(key, plural) Locale message translation **Signature:** ```typescript $t(key: Path, plural: number): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. 
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | plural | number | A choice number of plural | #### Returns Translation message ### $t(key, plural, options) Locale message translation **Signature:** ```typescript $t(key: Path, plural: number, options: TranslateOptions): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | plural | number | A choice number of plural | | options | TranslateOptions | An options, see the [TranslateOptions](general#translateoptions) | #### Returns Translation message ### $t(key, defaultMsg) Locale message translation **Signature:** ```typescript $t(key: Path, defaultMsg: string): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | defaultMsg | string | A default message to return if no translation was found | #### Returns Translation message ### $t(key, defaultMsg, options) Locale message translation **Signature:** ```typescript $t(key: Path, defaultMsg: string, options: TranslateOptions): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | defaultMsg | string | A default message to return if no translation was found | | options | TranslateOptions | An options, see the [TranslateOptions](general#translateoptions) | #### Returns Translation message ### $t(key, list) Locale message translation **Signature:** ```typescript $t(key: Path, list: unknown[]): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. 
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | list | unknown[] | A values of list interpolation | #### Returns Translation message ### $t(key, list, plural) Locale message translation **Signature:** ```typescript $t(key: Path, list: unknown[], plural: number): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | list | unknown[] | A values of list interpolation | | plural | number | A choice number of plural | #### Returns Translation message ### $t(key, list, defaultMsg) Locale message translation **Signature:** ```typescript $t(key: Path, list: unknown[], defaultMsg: string): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | list | unknown[] | A values of list interpolation | | defaultMsg | string | A default message to return if no translation was found | #### Returns Translation message ### $t(key, list, options) Locale message translation **Signature:** ```typescript $t(key: Path, list: unknown[], options: TranslateOptions): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | list | unknown[] | A values of list interpolation | | options | TranslateOptions | An options, see the [TranslateOptions](general#translateoptions) | #### Returns Translation message ### $t(key, named) Locale message translation **Signature:** ```typescript $t(key: Path, named: NamedValue): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. 
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | named | NamedValue | A values of named interpolation | #### Returns Translation message ### $t(key, named, plural) Locale message translation **Signature:** ```typescript $t(key: Path, named: NamedValue, plural: number): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | named | NamedValue | A values of named interpolation | | plural | number | A choice number of plural | #### Returns Translation message ### $t(key, named, defaultMsg) Locale message translation **Signature:** ```typescript $t(key: Path, named: NamedValue, defaultMsg: string): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | named | NamedValue | A values of named interpolation | | defaultMsg | string | A default message to return if no translation was found | #### Returns Translation message ### $t(key, named, options) Locale message translation **Signature:** ```typescript $t(key: Path, named: NamedValue, options: TranslateOptions): string; ``` **Details** Overloaded `$t`. About details, see the [$t](injection#t-key) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | key | Path | A target locale message key | | named | NamedValue | A values of named interpolation | | options | TranslateOptions | An options, see the [TranslateOptions](general#translateoptions) | #### Returns Translation message ### $tc(key) Locale message pluralization **Signature:** ```typescript $tc(key: Path): TranslateResult; ``` :::tip NOTE Supported for **Legacy API mode only**. 
:::

**Details**

If this is used in a reactive context, it will re-evaluate once the locale changes.

The input / output is the same as for VueI18n instance. About that details, see [VueI18n#tc](legacy#tc-key).

The value of `plural` defaults to `1`.

**See Also**
- [Pluralization](../../guide/essentials/pluralization)

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |

#### Returns

Translation message that is pluralized

### $tc(key, locale)

Locale message pluralization

**Signature:**
```typescript
$tc(key: Path, locale: Locale): TranslateResult;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

Overloaded `$tc`. About details, see the [$tc](injection#tc-key) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| locale | Locale | A locale, override locale that global scope or local scope |

#### Returns

Translation message that is pluralized

### $tc(key, list)

Locale message pluralization

**Signature:**
```typescript
$tc(key: Path, list: unknown[]): TranslateResult;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

Overloaded `$tc`. About details, see the [$tc](injection#tc-key) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| list | unknown[] | A values of list interpolation |

#### Returns

Translation message that is pluralized

### $tc(key, named)

Locale message pluralization

**Signature:**
```typescript
$tc(key: Path, named: Record<string, unknown>): TranslateResult;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

Overloaded `$tc`. About details, see the [$tc](injection#tc-key) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| named | Record&lt;string, unknown&gt; | A values of named interpolation |

#### Returns

Translation message that is pluralized

### $tc(key, choice)

Locale message pluralization

**Signature:**
```typescript
$tc(key: Path, choice: number): TranslateResult;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

Overloaded `$tc`. About details, see the [$tc](injection#tc-key) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| choice | number | Which plural string to get. 1 returns the first one |

#### Returns

Translation message that is pluralized

### $tc(key, choice, locale)

Locale message pluralization

**Signature:**
```typescript
$tc(key: Path, choice: number, locale: Locale): TranslateResult;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

Overloaded `$tc`. About details, see the [$tc](injection#tc-key) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| choice | number | Which plural string to get. 1 returns the first one |
| locale | Locale | A locale, override locale that global scope or local scope |

#### Returns

Translation message that is pluralized

### $tc(key, choice, list)

Locale message pluralization

**Signature:**
```typescript
$tc(key: Path, choice: number, list: unknown[]): TranslateResult;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

Overloaded `$tc`. About details, see the [$tc](injection#tc-key) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| choice | number | Which plural string to get. 1 returns the first one |
| list | unknown[] | A values of list interpolation |

#### Returns

Translation message that is pluralized

### $tc(key, choice, named)

Locale message pluralization

**Signature:**
```typescript
$tc(key: Path, choice: number, named: Record<string, unknown>): TranslateResult;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

Overloaded `$tc`. About details, see the [$tc](injection#tc-key) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| choice | number | Which plural string to get. 1 returns the first one |
| named | Record&lt;string, unknown&gt; | A values of named interpolation |

#### Returns

Translation message that is pluralized

### $te(key, locale)

Translation message existence

**Signature:**
```typescript
$te(key: Path, locale?: Locale): boolean;
```

:::tip NOTE
Supported for **Legacy API mode only**.
:::

**Details**

The input / output is the same as for VueI18n instance. About that details, see [VueI18n#te](legacy#te-key-locale)

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |
| locale | Locale | Optional, A locale, override locale that global scope or local scope |

#### Returns

If the locale message is found, `true`, else `false`.

### $tm(key)

Locale messages getter

**Signature:**
```typescript
$tm(key: Path): LocaleMessageValue<VueMessageType> | {}
```

**Details**

If [i18n component options](injection#i18n) is specified, locale messages are resolved preferentially from the local scope rather than the global scope. If [i18n component options](injection#i18n) isn't specified, global scope locale messages are used.

Based on the current `locale`, locale messages will be returned from Composer instance messages. If you change the `locale`, the locale messages returned will also correspond to the locale.
If there are no locale messages for the given `key` in the composer instance messages, they will be returned with [fallbacking](../../guide/essentials/fallback).

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| key | Path | A target locale message key |

#### Returns

Locale messages

### $d(value)

Datetime formatting

**Signature:**
```typescript
$d(value: number | Date): DateTimeFormatResult;
```

**Details**

If this is used in a reactive context, it will re-evaluate once the locale changes.

In [Legacy API mode](general#i18nmode), the input / output is the same as for VueI18n instance. About details, see [VueI18n#d](legacy#d-value).

In [Composition API mode](general#i18nmode), the `$d` is injected by `app.config.globalProperties`. The input / output is the same as for Composer instance, and it works on **global scope**. About that details, see [Composer#d](composition#d-value).

**See Also**
- [Datetime Formatting](../../guide/essentials/datetime)
- [Scope and Locale Changing](../../guide/essentials/scope)
- [Composition API](../../guide/advanced/composition#datetime-formatting)

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| value | number \| Date | A value, timestamp number or `Date` instance |

#### Returns

Formatted value

### $d(value, key)

Datetime formatting

**Signature:**
```typescript
$d(value: number | Date, key: string): DateTimeFormatResult;
```

**Details**

Overloaded `$d`. About details, see the [$d](injection#d-value) remarks.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| value | number \| Date | A value, timestamp number or `Date` instance |
| key | string | A key of datetime formats |

#### Returns

Formatted value

### $d(value, key, locale)

Datetime formatting

**Signature:**
```typescript
$d(value: number | Date, key: string, locale: Locale): DateTimeFormatResult;
```

**Details**

Overloaded `$d`. About details, see the [$d](injection#d-value) remarks.
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number \| Date | A value, timestamp number or `Date` instance | | key | string | A key of datetime formats | | locale | Locale | A locale, override locale that global scope or local scope | #### Returns Formatted value ### $d(value, args) Datetime formatting **Signature:** ```typescript $d(value: number | Date, args: { [key: string]: string }): DateTimeFormatResult; ``` **Details** Overloaded `$d`. About details, see the [$d](injection#d-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number \| Date | A value, timestamp number or `Date` instance | | args | { [key: string]: string } | An argument values | #### Returns Formatted value ### $d(value) Datetime formatting **Signature:** ```typescript $d(value: number | Date): string; ``` **Details** Overloaded `$d`. About details, see the [$d](injection#d-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number \| Date | A value, timestamp number or `Date` instance | #### Returns Formatted value ### $d(value, key) Datetime formatting **Signature:** ```typescript $d(value: number | Date, key: string): string; ``` **Details** Overloaded `$d`. About details, see the [$d](injection#d-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number \| Date | A value, timestamp number or `Date` instance | | key | string | A key of datetime formats | #### Returns Formatted value ### $d(value, key, locale) Datetime formatting **Signature:** ```typescript $d(value: number | Date, key: string, locale: Locale): string; ``` **Details** Overloaded `$d`. About details, see the [$d](injection#d-value) remarks. 
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number \| Date | A value, timestamp number or `Date` instance | | key | string | A key of datetime formats | | locale | Locale | A locale, override locale that global scope or local scope | #### Returns Formatted value ### $d(value, options) Datetime formatting **Signature:** ```typescript $d(value: number | Date, options: DateTimeOptions): string; ``` **Details** Overloaded `$d`. About details, see the [$d](injection#d-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number \| Date | A value, timestamp number or `Date` instance | | options | DateTimeOptions | An options, see the [DateTimeOptions](general#datetimeoptions) | #### Returns Formatted value ### $n(value) Number formatting **Signature:** ```typescript $n(value: number): NumberFormatResult; ``` **Details** If this is used in a reactive context, it will re-evaluate once the locale changes. In [Legacy API mode](general#i18nmode), the input / output is the same as for VueI18n instance. About details, see [VueI18n#n](legacy#n-value). In [Composition API mode](general#i18nmode), the `$n` is injected by `app.config.globalProperties`. The input / output is the same as for Composer instance, and it works on **global scope**. About that details, see [Composer#n](composition#n-value). **See Also** - [Number Formatting](../../guide/essentials/number) - [Scope and Locale Changing](../../guide/essentials/scope) - [Composition API](../../guide/advanced/composition#number-formatting) #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | #### Returns Formatted value ### $n(value, key) Number formatting **Signature:** ```typescript $n(value: number, key: string): NumberFormatResult; ``` **Details** Overloaded `$n`. About details, see the [$n](injection#n-value) remarks. 
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | | key | string | A key of number formats | #### Returns Formatted value ### $n(value, key, locale) Number formatting **Signature:** ```typescript $n(value: number, key: string, locale: Locale): NumberFormatResult; ``` **Details** Overloaded `$n`. About details, see the [$n](injection#n-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | | key | string | A key of number formats | | locale | Locale | A locale, override locale that global scope or local scope | #### Returns Formatted value ### $n(value, args) Number formatting **Signature:** ```typescript $n(value: number, args: { [key: string]: string }): NumberFormatResult; ``` **Details** Overloaded `$n`. About details, see the [$n](injection#n-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | | args | { [key: string]: string } | An argument values | #### Returns Formatted value ### $n(value) Number formatting **Signature:** ```typescript $n(value: number): string; ``` **Details** Overloaded `$n`. About details, see the [$n](injection#n-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | #### Returns Formatted value ### $n(value, key) Number formatting **Signature:** ```typescript $n(value: number, key: string): string; ``` **Details** Overloaded `$n`. About details, see the [$n](injection#n-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | | key | string | A key of number formats | #### Returns Formatted value ### $n(value, key, locale) Number formatting **Signature:** ```typescript $n(value: number, key: string, locale: Locale): string; ``` **Details** Overloaded `$n`. About details, see the [$n](injection#n-value) remarks. 
#### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | | key | string | A key of number formats | | locale | Locale | A locale, override locale that global scope or local scope | #### Returns Formatted value ### $n(value, options) Number formatting **Signature:** ```typescript $n(value: number, options: NumberOptions): string; ``` **Details** Overloaded `$n`. About details, see the [$n](injection#n-value) remarks. #### Parameters | Parameter | Type | Description | | --- | --- | --- | | value | number | A number value | | options | NumberOptions | An options, see the [NumberOptions](general#numberoptions) | #### Returns Formatted value
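As a quick orientation for how these injected properties fit together, here is a minimal component template sketch. It is illustrative only: the message key `greeting`, plural key `apple`, datetime format `short`, and number format `currency` are hypothetical names assumed to be defined in the `createI18n` options.

```html
<template>
  <!-- $t: locale message translation with named interpolation -->
  <p>{{ $t('greeting', { name: 'Alice' }) }}</p>
  <!-- $tc: pluralization (Legacy API mode only) -->
  <p>{{ $tc('apple', 2) }}</p>
  <!-- $d / $n: datetime and number formatting by format key -->
  <p>{{ $d(new Date(), 'short') }}</p>
  <p>{{ $n(1234.5, 'currency') }}</p>
</template>
```

All four calls resolve against the global scope unless the component specifies its own `i18n` option, and each re-evaluates reactively when the locale changes.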
---

*Source: `README.md` from the `stefankommm/mern-boilerplate` repository (MIT).*
# MERN Boilerplate

**THIS IS A FORK FROM NEMANJAN**

## Features added that are not in Original

- [x] Email reset added
  - [x] nodemailer
  - [x] nodemailer-sendgrid
  - [x] EJS Email templating
- [ ] S3 Profile picture storage
- [ ] Update the website
- [ ] Contact the media

This is a fullstack boilerplate with React, Redux, Express, Mongoose and Passport. Skip the tedious part and get straight to developing your app.

## Demo

Live demo is available here: **[Demo](https://mern-boilerplate-demo.herokuapp.com/)**

## Features

- Server
  - User and Message models with `1:N` relation
  - Full CRUD REST API operations for both Message and User models
  - Passport authentication with local `email/password`, Facebook and Google OAuth strategies and JWT protected APIs
  - `User` and `Admin` roles
  - NodeJS server with Babel for new JS syntax unified with React client
  - `async/await` syntax across whole app
  - Joi server side validation of user's input
  - Single `.env` file configuration
  - Image upload with Multer
  - Database seed
- Client
  - React client with functional components and Hooks
  - Redux state management with Thunk for async actions
  - CSS agnostic, so you don't waste your time replacing my CSS framework with yours
  - Home, Users, Profile, Admin, Notfound, Login and Register pages
  - Protected routes with Higher order components
  - Different views for unauthenticated, authenticated and admin user
  - Edit/Delete forms for Message and User with Formik and Yup validation
  - Admin has privileges to edit and delete other users and their messages
  - Layout component, so you can have pages without Navbar
  - Loading states with Loader component
  - Single config file within `/constants` folder

## Installation

Read on for how to set up this project for development. Clone the repo.
``` $ git clone https://github.com/nemanjam/mern-boilerplate.git $ cd mern-boilerplate ``` ### Server #### .env file Rename `.env.example` to `.env` and fill in database connection strings, Google and Facebook tokens, JWT secret and your client and server production URLs. ``` #db MONGO_URI_DEV=mongodb://localhost:27017/mernboilerplate MONGO_URI_PROD= #google GOOGLE_CLIENT_ID= GOOGLE_CLIENT_SECRET= GOOGLE_CALLBACK_URL=/auth/google/callback #facebook FACEBOOK_APP_ID= FACEBOOK_SECRET= FACEBOOK_CALLBACK_URL=/auth/facebook/callback #jwt JWT_SECRET_DEV=secret JWT_SECRET_PROD= #site urls CLIENT_URL_DEV=https://localhost:3000 CLIENT_URL_PROD=https://mern-boilerplate-demo.herokuapp.com SERVER_URL_DEV=https://localhost:5000 SERVER_URL_PROD=https://mern-boilerplate-demo.herokuapp.com #img folder path IMAGES_FOLDER_PATH=/public/images/ ``` #### Generate certificates Facebook OAuth requires that your server runs on `https` in development as well, so you need to generate certificates. Go to `/server/security` folder and run this. ``` $ cd server/security $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout cert.key -out cert.pem -config req.cnf -sha256 ``` #### Install dependencies ``` $ cd server $ npm install ``` #### Run the server You are good to go, server will be available on `https://localhost:5000` ``` $ npm run server ``` ### Client Just install the dependencies and run the dev server. App will load on `https://localhost:3000`. ``` $ cd client $ npm install $ npm start ``` That's it as far for development setup. For production check the `Deployment on Heroku` section. 
## Screenshots ![Screenshot1](/screenshots/Screenshot_1.png) ![Screenshot2](/screenshots/Screenshot_2.png) ![Screenshot3](/screenshots/Screenshot_3.png) ![Screenshot4](/screenshots/Screenshot_4.png) ![Screenshot5](/screenshots/Screenshot_5.png) ![Screenshot6](/screenshots/Screenshot_6.png) ## Deployment on Heroku #### Push to Heroku This project is already all set up for deployment on Heroku, you just need to create Heroku application add heroku remote to this repo and push it to `heroku` origin. ``` $ heroku login $ heroku create my-own-app-name $ git remote add heroku https://git.heroku.com/my-own-app-name.git $ git push heroku master $ heroku open ``` #### Database setup But before that you need MongoDB database, so go to [MongoDB Atlas](https://www.mongodb.com/cloud/atlas), create cluster, whitelist all IPs and get database URL. Set that URL in `.env` file as `MONGO_URI_PROD`. ``` MONGO_URI_PROD=mongodb+srv://<your-username-here>:<your-password-here>@cluster0-abcd.mongodb.net/test?retryWrites=true&w=majority ``` If you don't insert environment variables in Heroku manually via web interface or console you'll need to remove `.env` file from `server/.gitignore` and push it to Heroku. Never push `.env` file to development repo though. ``` ... #.env #comment out .env file ... ``` In the following section you can read detailed instructions about Heroku deployment process. ### Server setup #### Development Server uses Babel so that we can use the same newer JavaScript syntax like the one used on the Client. In development we are passing `server/src/index.js` file to `babel-node` executable along with `nodemon` daemon. We run that with `npm run server` script. ``` "server": "nodemon --exec babel-node src/index.js", ``` #### Production That is fine for development, we compile the source on every run but for production we want to avoid that and to compile and build code once to JavaScript version which Node.JS can execute. 
So we take all the code from the `/server/src` folder, compile it, and put the output into the `/server/build` destination folder. `-d` is short for destination, and the `-s` flag generates source maps for debugging. We make that into the `build-babel` script.

```
"build-babel": "babel -d ./build ./src -s",
```

We also need to delete and recreate the `build` folder on every deployment, so we do that with this simple script.

```
"clean": "rm -rf build && mkdir build",
```

Now we have everything to build our server code. We do that by calling the last two scripts.

```
"build": "npm run clean && npm run build-babel",
```

Now we just need to call the build script and run the compiled file with node. Make sure Babel is in the production dependencies in `server/package.json`, or you'll get a "babel is not defined" error on Heroku.

```
"start-prod": "npm run build && node ./build/index.js",
```

#### Running server on Heroku

Our server is now all set up; all we need is to call the `start-prod` script. Heroku infers the runtime it needs to run the application from the type of dependencies file in the root folder, so for Node.js we need another `package.json` there. Heroku calls the `start` script after the build phase, so we just pass it our `start-prod` script with `--prefix server`, where `server` is the folder in which the `package.json` with that script is located.

```
"start": "npm run start-prod --prefix server",
```

#### Installing dependencies

Before all this happens, Heroku needs to install the dependencies for both server and client; the `heroku-postbuild` script is meant for that. The `NPM_CONFIG_PRODUCTION=false` variable is there to disable the production environment while dependencies are being installed. Again, the `--prefix` flag specifies the folder of the script being run. In this script we build our React client as well.
``` "heroku-postbuild": "NPM_CONFIG_PRODUCTION=false npm install --prefix server && npm install --prefix client && npm run build --prefix client" ``` ### Client Setup Before you push to production you'll need to set your URLs in `client/constants`. That's it. ```javascript export const FACEBOOK_AUTH_LINK = "https://my-own-app.herokuapp.com/auth/facebook"; export const GOOGLE_AUTH_LINK = "https://my-own-app.herokuapp.com/auth/google"; ``` ## References - Brad Traversy [Dev connector 2.0](https://github.com/bradtraversy/devconnector_2.0) - Brad Traversy [Learn The MERN Stack Youtube playlist](https://www.youtube.com/watch?v=PBTYxXADG_k&list=PLillGF-RfqbbiTGgA77tGO426V3hRF9iE) - Thinkster [react-redux-realworld-example-app](https://github.com/gothinkster/react-redux-realworld-example-app) - Thinkster [ node-express-realworld-example-app ](https://github.com/gothinkster/node-express-realworld-example-app) - Quinston Pimenta [Deploy React with Node (Express, configured for ES6, Babel) to Heroku (without babel-node)](https://www.youtube.com/watch?v=mvI25HLDfR4) - Kim Nguyen [How to Deploy ES6 Node.js & Express back-end to Heroku](https://medium.com/@kimtnguyen/how-to-deploy-es6-node-js-express-back-end-to-heroku-7e6743e8d2ff) ## Licence ### MIT
33.896414
455
0.75047
eng_Latn
0.924848
c97280a83849849e8283db12b9eff05b7fc461f2
4,519
md
Markdown
docs/2014/analysis-services/define-semiadditive-behavior-business-intelligence-wizard.md
dirceuresende/sql-docs.pt-br
023b1c4ae887bc1ed6a45cb3134f33a800e5e01e
[ "CC-BY-4.0", "MIT" ]
2
2021-10-12T00:50:30.000Z
2021-10-12T00:53:51.000Z
docs/2014/analysis-services/define-semiadditive-behavior-business-intelligence-wizard.md
dirceuresende/sql-docs.pt-br
023b1c4ae887bc1ed6a45cb3134f33a800e5e01e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/analysis-services/define-semiadditive-behavior-business-intelligence-wizard.md
dirceuresende/sql-docs.pt-br
023b1c4ae887bc1ed6a45cb3134f33a800e5e01e
[ "CC-BY-4.0", "MIT" ]
1
2020-06-25T13:33:56.000Z
2020-06-25T13:33:56.000Z
--- title: Definir comportamento Semiaditivo (Assistente de Business Intelligence) | Microsoft Docs ms.custom: '' ms.date: 06/13/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.suite: '' ms.technology: - analysis-services ms.tgt_pltfrm: '' ms.topic: conceptual f1_keywords: - sql12.asvs.biwizard.semiadditivememberdetection.f1 ms.assetid: e57080ba-ce96-40f8-bca7-6701d1725b3c caps.latest.revision: 24 author: minewiskan ms.author: owend manager: craigg ms.openlocfilehash: 201a5a24e8bafa2f2f919f6ad0b072ca801a694e ms.sourcegitcommit: c18fadce27f330e1d4f36549414e5c84ba2f46c2 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 07/02/2018 ms.locfileid: "37310806" --- # <a name="define-semiadditive-behavior-business-intelligence-wizard"></a>Definir Comportamento Semiaditivo (Assistente de Business Intelligence) Use a página **Definir Comportamento Semiaditivo** para habilitar ou desabilitar comportamento semiaditivo em medidas. O comportamento semiaditivo determina como medidas que são contidas por um cubo são agregadas sobre uma dimensão de tempo. > [!NOTE] > Com exceção de LastChild que está disponível na edição standard, os comportamentos semiaditivos só estão disponíveis nas edições business intelligence ou enterprise. Além disso, como o comportamento semiaditivo é definido somente em medidas e não em dimensões, você não encontrará esta página no Assistente de Business Intelligence caso ele tenha sido iniciado do Designer de Dimensão ou por meio de um clique com o botão direito do mouse no Gerenciador de Soluções do [!INCLUDE[ssBIDevStudioFull](../includes/ssbidevstudiofull-md.md)]. ## <a name="options"></a>Opções **Desativar comportamento semiaditivo** Desabilita comportamento semiaditivo em todas as medidas contidas pelo cubo. **O assistente detectou a \<nome da dimensão > dimensão de conta, que contém membros semiaditivos. 
O servidor agregará membros dessa dimensão de acordo com o comportamento semiaditivo especificado para cada tipo de conta.** Habilita comportamento semiaditivo para dimensões de conta que contêm membros semiaditivos. Selecionar esta opção define a função de agregação de todas as medidas em grupos de medidas que faz referência à dimensão de conta para `ByAccount`. Para obter mais informações sobre dimensões de conta, consulte [Criar uma Conta de Finanças de dimensão de tipo pai-filho](multidimensional-models/database-dimensions-finance-account-of-parent-child-type.md). **Definir comportamento semiaditivo para membros individuais** Habilita comportamento semiaditivo e especifica a função de agregação semiaditiva para medidas específicas. A função de agregação aplica-se a todas as dimensões que são referenciadas pelo grupo de medidas que contém a medida. **Medidas** Exibe o nome de uma medida contida pelo cubo. **Função semiaditiva** Selecione a função de agregação da medida selecionada. A tabela a seguir lista as funções de agregação disponíveis. 
|Valor|Description|
|-----------|-----------------|
|**AverageOfChildren**|Agregada retornando a média dos filhos da medida.|
|`ByAccount`|Agregada pela função de agregação associada ao tipo de conta especificado de um atributo em uma dimensão de conta.|
|`Count`|Agregada com a função `Count`.|
|`DistinctCount`|Agregada com a função `DistinctCount`.|
|**FirstChild**|Agregado ao retornar o primeiro membro filho da medida em uma dimensão de tempo.|
|**FirstNonEmpty**|Agregado ao retornar o primeiro membro não vazio da medida em uma dimensão de tempo.|
|**LastChild**|Agregado ao retornar o último membro filho da medida em uma dimensão de tempo.|
|**LastNonEmpty**|Agregado ao retornar o último membro não vazio da medida em uma dimensão de tempo.|
|`Max`|Agregada com a função `Max`.|
|`Min`|Agregada com a função `Min`.|
|**Nenhuma**|Nenhuma agregação executada.|
|`Sum`|Agregada com a função `Sum`.|

> [!NOTE]
> Seleções feitas para essa opção serão aplicadas apenas se **Definir comportamento semiaditivo para membros individuais** estiver selecionada.

## <a name="see-also"></a>Consulte também

[Ajuda de F1 do Assistente do Business Intelligence](business-intelligence-wizard-f1-help.md)
[Designer de cubo &#40;Analysis Services - dados multidimensionais&#41;](cube-designer-analysis-services-multidimensional-data.md)
[Designer de dimensão &#40;Analysis Services - dados multidimensionais&#41;](dimension-designer-analysis-services-multidimensional-data.md)
61.067568
541
0.775835
por_Latn
0.999023
c9729c089b0b9f432e5121383203bd97374f92bb
35,344
md
Markdown
repos/express-gateway/remote/1.16.x.md
mouse36872/repo-info
715afa26d57a6c8070592db7791747d725d90cf9
[ "Apache-2.0" ]
null
null
null
repos/express-gateway/remote/1.16.x.md
mouse36872/repo-info
715afa26d57a6c8070592db7791747d725d90cf9
[ "Apache-2.0" ]
null
null
null
repos/express-gateway/remote/1.16.x.md
mouse36872/repo-info
715afa26d57a6c8070592db7791747d725d90cf9
[ "Apache-2.0" ]
1
2019-11-23T07:53:59.000Z
2019-11-23T07:53:59.000Z
## `express-gateway:1.16.x` ```console $ docker pull express-gateway@sha256:5be227b4558c55daa3834282f8dbdf1bc33b1e69cb16d4efd61e6846294217b4 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: - linux; amd64 - linux; arm64 variant v8 - linux; 386 - linux; ppc64le - linux; s390x ### `express-gateway:1.16.x` - linux; amd64 ```console $ docker pull express-gateway@sha256:635766fd719014fb5267ec409f4040374219fc1eb19ca9c65b509c733655e38b ``` - Docker Version: 18.06.1-ce - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **34.4 MB (34407559 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:60ee797bb13d22e64972717ca597ef0086d81e81b8f130a217baedcbeed44d5f` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node","-e","require('express-gateway')().run();"]` ```dockerfile # Mon, 21 Oct 2019 17:21:42 GMT ADD file:fe1f09249227e2da2089afb4d07e16cbf832eeb804120074acd2b8192876cd28 in / # Mon, 21 Oct 2019 17:21:42 GMT CMD ["/bin/sh"] # Tue, 19 Nov 2019 19:24:17 GMT ENV NODE_VERSION=10.17.0 # Tue, 19 Nov 2019 19:24:27 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="f893a03c5b51e0c540e32cd52773221a2f9b6d575e7fe79ffe9e878483c703ff" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc 
gnupg libgcc linux-headers make python && for key in 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 FD3A5288F042B6850C66B31F09FE44734EB7990E 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 DD8F2338BAE7501E3DD5AC78C273792F7D83545D C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 B9AE9905FFD7803F25714661B63B535A4C206CA9 77984A986EBC2AA786BC0F66B01FBB92821C587A 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 4ED778F539E3634C779C87C6D7062848A1AB005C A48C2BEE680E841632CD4E44F07496B3EB3C1762 B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps # Tue, 19 Nov 2019 19:24:27 GMT ENV YARN_VERSION=1.19.1 # Tue, 19 Nov 2019 19:24:31 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn # Tue, 19 Nov 2019 19:24:31 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 19 Nov 2019 19:24:32 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 19:24:32 GMT CMD ["node"] # Tue, 19 Nov 2019 19:55:51 GMT LABEL maintainer=Vincenzo Chianese, [email protected] # Tue, 19 Nov 2019 19:55:51 GMT ARG EG_VERSION=1.16.9 # Tue, 19 Nov 2019 19:56:27 GMT # ARGS: EG_VERSION=1.16.9 RUN yarn global add express-gateway@$EG_VERSION && yarn cache clean # Tue, 19 Nov 2019 19:56:27 GMT ENV NODE_ENV=production # Tue, 19 Nov 2019 19:56:28 GMT ENV NODE_PATH=/usr/local/share/.config/yarn/global/node_modules/ # Tue, 19 Nov 2019 19:56:28 GMT ENV EG_CONFIG_DIR=/var/lib/eg # Tue, 19 Nov 2019 19:56:28 GMT ENV CHOKIDAR_USEPOLLING=true # Tue, 19 
Nov 2019 19:56:29 GMT VOLUME [/var/lib/eg] # Tue, 19 Nov 2019 19:56:29 GMT EXPOSE 8080 9876 # Tue, 19 Nov 2019 19:56:29 GMT COPY file:9481e65ab3ccc3b910b8af90d3df04d9f70030b8f8a0cfcc390840936290aaab in /usr/local/bin/ # Tue, 19 Nov 2019 19:56:29 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 19:56:30 GMT CMD ["node" "-e" "require('express-gateway')().run();"] ``` - Layers: - `sha256:89d9c30c1d48bac627e5c6cb0d1ed1eec28e7dbdfbcc04712e4c79c0f83faf17` Last Modified: Mon, 21 Oct 2019 17:22:48 GMT Size: 2.8 MB (2787134 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:0eaf5bd7a6e1ee041d15932e669c8cf02b2fcab1f1407b82fb4ef7123deb2731` Last Modified: Tue, 19 Nov 2019 19:29:54 GMT Size: 21.3 MB (21252445 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:6fb5c3a20092b24dfbea1d0270947b7d64d64770ba0d5b4952c78c685746aaf6` Last Modified: Tue, 19 Nov 2019 19:29:47 GMT Size: 1.3 MB (1263201 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:004e30fa1cb97b528a0db9e36f44c93f63510c49749b502e4c45326e84e92fc2` Last Modified: Tue, 19 Nov 2019 19:29:47 GMT Size: 282.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:8f35c2b816777803757f025f96d56cfff5488ac130bd17af24dbc1e00b7b5549` Last Modified: Tue, 19 Nov 2019 19:56:48 GMT Size: 9.1 MB (9103999 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:83bd6682b793299ba6f5925be13e2d4a9358c6a5c77a8cc0b16d7c0df62f4e82` Last Modified: Tue, 19 Nov 2019 19:56:43 GMT Size: 498.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `express-gateway:1.16.x` - linux; arm64 variant v8 ```console $ docker pull express-gateway@sha256:2aad0268e476c197e56941732c7a4b7c072324722cd161daa35f88fc01868dd8 ``` - Docker Version: 18.06.1-ce - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **34.7 MB (34745912 bytes)** (compressed transfer size, not on-disk size) - Image ID: 
`sha256:0b70fb11db584859b39820e32bf82df7a6b774920f681cdf3bacc91adf7cf3a8` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node","-e","require('express-gateway')().run();"]` ```dockerfile # Mon, 21 Oct 2019 18:07:03 GMT ADD file:02f4d68afd9e9e303ff893f198d535d0d78c4b2554f299ab2d0955b2bef0e06a in / # Mon, 21 Oct 2019 18:07:09 GMT CMD ["/bin/sh"] # Tue, 19 Nov 2019 19:37:35 GMT ENV NODE_VERSION=10.17.0 # Tue, 19 Nov 2019 19:44:22 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="f893a03c5b51e0c540e32cd52773221a2f9b6d575e7fe79ffe9e878483c703ff" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python && for key in 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 FD3A5288F042B6850C66B31F09FE44734EB7990E 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 DD8F2338BAE7501E3DD5AC78C273792F7D83545D C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 B9AE9905FFD7803F25714661B63B535A4C206CA9 77984A986EBC2AA786BC0F66B01FBB92821C587A 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 4ED778F539E3634C779C87C6D7062848A1AB005C A48C2BEE680E841632CD4E44F07496B3EB3C1762 B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch 
--keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps # Tue, 19 Nov 2019 19:44:25 GMT ENV YARN_VERSION=1.19.1 # Tue, 19 Nov 2019 19:44:37 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn # Tue, 19 Nov 2019 19:44:39 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 19 Nov 2019 19:44:41 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 19:44:43 GMT CMD 
["node"] # Tue, 19 Nov 2019 20:04:46 GMT LABEL maintainer=Vincenzo Chianese, [email protected] # Tue, 19 Nov 2019 20:04:46 GMT ARG EG_VERSION=1.16.9 # Tue, 19 Nov 2019 20:05:13 GMT # ARGS: EG_VERSION=1.16.9 RUN yarn global add express-gateway@$EG_VERSION && yarn cache clean # Tue, 19 Nov 2019 20:05:15 GMT ENV NODE_ENV=production # Tue, 19 Nov 2019 20:05:15 GMT ENV NODE_PATH=/usr/local/share/.config/yarn/global/node_modules/ # Tue, 19 Nov 2019 20:05:16 GMT ENV EG_CONFIG_DIR=/var/lib/eg # Tue, 19 Nov 2019 20:05:17 GMT ENV CHOKIDAR_USEPOLLING=true # Tue, 19 Nov 2019 20:05:17 GMT VOLUME [/var/lib/eg] # Tue, 19 Nov 2019 20:05:18 GMT EXPOSE 8080 9876 # Tue, 19 Nov 2019 20:05:18 GMT COPY file:9481e65ab3ccc3b910b8af90d3df04d9f70030b8f8a0cfcc390840936290aaab in /usr/local/bin/ # Tue, 19 Nov 2019 20:05:19 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 20:05:20 GMT CMD ["node" "-e" "require('express-gateway')().run();"] ``` - Layers: - `sha256:8bfa913040406727f36faa9b69d0b96e071b13792a83ad69c19389031a9f3797` Last Modified: Mon, 21 Oct 2019 18:08:36 GMT Size: 2.7 MB (2717778 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:21cfd525eb13c18c84dea173712acfc494c61f791dd46c14211299c0d4ffc081` Last Modified: Tue, 19 Nov 2019 19:48:55 GMT Size: 21.5 MB (21516124 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:bfe64a197007185ac1d57037f36af92ab6b030ef8bb8df977cc2519e3f9dc7b3` Last Modified: Tue, 19 Nov 2019 19:48:46 GMT Size: 1.4 MB (1407941 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:27a61482b48fa89a9a17d92233f370938343acb736f203c0e18c4f3ddfc5cb87` Last Modified: Tue, 19 Nov 2019 19:48:45 GMT Size: 281.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b7163f0313ff0b769ad1ce7c90af2e3ad13e7fe717469999f65482caf113c1d6` Last Modified: Tue, 19 Nov 2019 20:05:34 GMT Size: 9.1 MB (9103292 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:fa737ddd8ce8ae53a9f0f0bd3a635c338fa20a1724aaa1bc84559a9ca7f5b431` Last Modified: Tue, 19 Nov 2019 20:05:29 GMT Size: 496.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `express-gateway:1.16.x` - linux; 386 ```console $ docker pull express-gateway@sha256:d02cf035c529ee3c3ca33a537d56c6c333bf638df60c05a5a9ee8a070067bed3 ``` - Docker Version: 18.06.1-ce - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **34.7 MB (34741636 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:801de2c9db8ce11e3915e8818ab7929c374bc92cdeea5888db9857b2eee4cfac` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node","-e","require('express-gateway')().run();"]` ```dockerfile # Mon, 21 Oct 2019 16:46:04 GMT ADD file:dd3b3676fd9c1e0983ade68242b9b9ac5c477f3e4bfc97c2e78fd5db93a441c9 in / # Mon, 21 Oct 2019 16:46:04 GMT CMD ["/bin/sh"] # Tue, 19 Nov 2019 21:20:02 GMT ENV NODE_VERSION=10.17.0 # Tue, 19 Nov 2019 21:47:39 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="f893a03c5b51e0c540e32cd52773221a2f9b6d575e7fe79ffe9e878483c703ff" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python && for key in 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 FD3A5288F042B6850C66B31F09FE44734EB7990E 
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 DD8F2338BAE7501E3DD5AC78C273792F7D83545D C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 B9AE9905FFD7803F25714661B63B535A4C206CA9 77984A986EBC2AA786BC0F66B01FBB92821C587A 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 4ED778F539E3634C779C87C6D7062848A1AB005C A48C2BEE680E841632CD4E44F07496B3EB3C1762 B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps # Tue, 19 Nov 2019 21:47:40 GMT ENV YARN_VERSION=1.19.1 # Tue, 19 Nov 2019 21:47:42 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn # Tue, 19 Nov 2019 21:47:42 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 19 Nov 2019 21:47:43 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 21:47:43 GMT CMD ["node"] # Tue, 19 Nov 2019 22:04:57 GMT LABEL maintainer=Vincenzo Chianese, [email protected] # Tue, 19 Nov 2019 22:04:57 GMT ARG EG_VERSION=1.16.9 # Tue, 19 Nov 2019 22:05:18 GMT # ARGS: EG_VERSION=1.16.9 RUN yarn global add express-gateway@$EG_VERSION && yarn cache clean # Tue, 19 Nov 2019 22:05:19 GMT ENV NODE_ENV=production # Tue, 19 Nov 2019 22:05:19 GMT ENV NODE_PATH=/usr/local/share/.config/yarn/global/node_modules/ # Tue, 19 Nov 2019 22:05:19 GMT ENV EG_CONFIG_DIR=/var/lib/eg # Tue, 19 Nov 2019 22:05:19 GMT ENV CHOKIDAR_USEPOLLING=true # Tue, 19 
Nov 2019 22:05:19 GMT VOLUME [/var/lib/eg] # Tue, 19 Nov 2019 22:05:20 GMT EXPOSE 8080 9876 # Tue, 19 Nov 2019 22:05:20 GMT COPY file:9481e65ab3ccc3b910b8af90d3df04d9f70030b8f8a0cfcc390840936290aaab in /usr/local/bin/ # Tue, 19 Nov 2019 22:05:20 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 22:05:20 GMT CMD ["node" "-e" "require('express-gateway')().run();"] ``` - Layers: - `sha256:f913bd05bf684aaa4bc173d73cfbb58abb45587962d74f0aa71df36b6b489def` Last Modified: Mon, 21 Oct 2019 16:46:25 GMT Size: 2.8 MB (2785939 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:e202f7eab49c76d0ac1b96c9ac2ecfb359d486297d8b216c375edfd800d3cfb3` Last Modified: Tue, 19 Nov 2019 21:49:37 GMT Size: 21.5 MB (21479426 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:e2d738a3c4d0fff6c810a9e98262d5d6dbe95eb0ee00eb2fe019b8641261755a` Last Modified: Tue, 19 Nov 2019 21:49:30 GMT Size: 1.4 MB (1407925 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:6952301c7ddab1e197fe5423ba2505c0eaeee55ebd18b44fd83c12aa4a13652e` Last Modified: Tue, 19 Nov 2019 21:49:30 GMT Size: 278.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:bafbf70576cef6380b0999ecb854ea7355680a2931f5f832c0e7645fbe99c1ad` Last Modified: Tue, 19 Nov 2019 22:05:32 GMT Size: 9.1 MB (9067575 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:8735714f6ee375a028adfb8e94022a4857b3dab3d3b113f846d69603d36a6dc9` Last Modified: Tue, 19 Nov 2019 22:05:28 GMT Size: 493.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `express-gateway:1.16.x` - linux; ppc64le ```console $ docker pull express-gateway@sha256:f6eaf104265fc479d64e5854ee97ede50116efb7fdbc46215cb0a7635af37167 ``` - Docker Version: 18.06.1-ce - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **36.6 MB (36571726 bytes)** (compressed transfer size, not on-disk size) - Image ID: 
`sha256:8d35b7601a715af2121d83a3cc9dfcb39d3149d7fce95408e4623dedbe161d12` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node","-e","require('express-gateway')().run();"]` ```dockerfile # Mon, 21 Oct 2019 17:52:55 GMT ADD file:11a2dd0058b1642e9ee52239d03223819a53ca346fd42826eead7729c50e1257 in / # Mon, 21 Oct 2019 17:53:00 GMT CMD ["/bin/sh"] # Tue, 19 Nov 2019 21:01:17 GMT ENV NODE_VERSION=10.17.0 # Tue, 19 Nov 2019 21:17:51 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="f893a03c5b51e0c540e32cd52773221a2f9b6d575e7fe79ffe9e878483c703ff" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python && for key in 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 FD3A5288F042B6850C66B31F09FE44734EB7990E 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 DD8F2338BAE7501E3DD5AC78C273792F7D83545D C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 B9AE9905FFD7803F25714661B63B535A4C206CA9 77984A986EBC2AA786BC0F66B01FBB92821C587A 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 4ED778F539E3634C779C87C6D7062848A1AB005C A48C2BEE680E841632CD4E44F07496B3EB3C1762 B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch 
--keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. && rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps # Tue, 19 Nov 2019 21:17:57 GMT ENV YARN_VERSION=1.19.1 # Tue, 19 Nov 2019 21:18:14 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn # Tue, 19 Nov 2019 21:18:15 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 19 Nov 2019 21:18:18 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 21:18:22 GMT CMD 
["node"] # Tue, 19 Nov 2019 21:41:07 GMT LABEL maintainer=Vincenzo Chianese, [email protected] # Tue, 19 Nov 2019 21:41:10 GMT ARG EG_VERSION=1.16.9 # Tue, 19 Nov 2019 21:41:44 GMT # ARGS: EG_VERSION=1.16.9 RUN yarn global add express-gateway@$EG_VERSION && yarn cache clean # Tue, 19 Nov 2019 21:41:53 GMT ENV NODE_ENV=production # Tue, 19 Nov 2019 21:41:59 GMT ENV NODE_PATH=/usr/local/share/.config/yarn/global/node_modules/ # Tue, 19 Nov 2019 21:42:05 GMT ENV EG_CONFIG_DIR=/var/lib/eg # Tue, 19 Nov 2019 21:42:12 GMT ENV CHOKIDAR_USEPOLLING=true # Tue, 19 Nov 2019 21:42:14 GMT VOLUME [/var/lib/eg] # Tue, 19 Nov 2019 21:42:20 GMT EXPOSE 8080 9876 # Tue, 19 Nov 2019 21:42:24 GMT COPY file:9481e65ab3ccc3b910b8af90d3df04d9f70030b8f8a0cfcc390840936290aaab in /usr/local/bin/ # Tue, 19 Nov 2019 21:42:28 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 21:42:32 GMT CMD ["node" "-e" "require('express-gateway')().run();"] ``` - Layers: - `sha256:cd18d16ea896a0f0eb99be52a9722ffae9a5ac35cf28cb8b96f589352f8e71d6` Last Modified: Mon, 21 Oct 2019 17:53:53 GMT Size: 2.8 MB (2808504 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f026931955402be6a36ee98491db7f8ddae8b8258ea02090e5d0a8da356a626b` Last Modified: Tue, 19 Nov 2019 21:25:18 GMT Size: 23.3 MB (23251809 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:dea257eb7914b0a7fabd3ca8b864aa95d1f010e24b59e846e4a03160454a13d8` Last Modified: Tue, 19 Nov 2019 21:25:02 GMT Size: 1.4 MB (1408037 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:78c8dd70ab6d599ec11e1c18a44383a48be69cfe36783e6a8a15d363b3204980` Last Modified: Tue, 19 Nov 2019 21:25:02 GMT Size: 279.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:1948d22741caac8f5a8899754a356c90e25679b9ef5aa017cc95da0a55f92f7e` Last Modified: Tue, 19 Nov 2019 21:43:00 GMT Size: 9.1 MB (9102600 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:568cb515ac7326829c7dad1c78c2ab7c8d916e91112ee3edf702d7b8114061c3` Last Modified: Tue, 19 Nov 2019 21:42:55 GMT Size: 497.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `express-gateway:1.16.x` - linux; s390x ```console $ docker pull express-gateway@sha256:ddeaeb8c397af28db8a97b3e31d924a727321b5659daf5647a49a15de38a71ad ``` - Docker Version: 18.06.1-ce - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **34.4 MB (34356511 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:08b2004fc9e17294930c5eac155ac59ca92a63c49545d9ff8e6bf6693cf2c3b8` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node","-e","require('express-gateway')().run();"]` ```dockerfile # Mon, 21 Oct 2019 16:47:28 GMT ADD file:49020543846e4f93b34d71c0e4234ade7bd6dde3f45cb73784aa73ce0522c8bc in / # Mon, 21 Oct 2019 16:47:29 GMT CMD ["/bin/sh"] # Tue, 19 Nov 2019 19:52:50 GMT ENV NODE_VERSION=10.17.0 # Tue, 19 Nov 2019 20:03:05 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="f893a03c5b51e0c540e32cd52773221a2f9b6d575e7fe79ffe9e878483c703ff" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python && for key in 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 FD3A5288F042B6850C66B31F09FE44734EB7990E 
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 DD8F2338BAE7501E3DD5AC78C273792F7D83545D C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 B9AE9905FFD7803F25714661B63B535A4C206CA9 77984A986EBC2AA786BC0F66B01FBB92821C587A 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 4ED778F539E3634C779C87C6D7062848A1AB005C A48C2BEE680E841632CD4E44F07496B3EB3C1762 B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps # Tue, 19 Nov 2019 20:03:05 GMT ENV YARN_VERSION=1.19.1 # Tue, 19 Nov 2019 20:03:07 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn # Tue, 19 Nov 2019 20:03:07 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 19 Nov 2019 20:03:07 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 20:03:08 GMT CMD ["node"] # Tue, 19 Nov 2019 20:21:04 GMT LABEL maintainer=Vincenzo Chianese, [email protected] # Tue, 19 Nov 2019 20:21:04 GMT ARG EG_VERSION=1.16.9 # Tue, 19 Nov 2019 20:21:14 GMT # ARGS: EG_VERSION=1.16.9 RUN yarn global add express-gateway@$EG_VERSION && yarn cache clean # Tue, 19 Nov 2019 20:21:15 GMT ENV NODE_ENV=production # Tue, 19 Nov 2019 20:21:15 GMT ENV NODE_PATH=/usr/local/share/.config/yarn/global/node_modules/ # Tue, 19 Nov 2019 20:21:15 GMT ENV EG_CONFIG_DIR=/var/lib/eg # Tue, 19 Nov 2019 20:21:15 GMT ENV CHOKIDAR_USEPOLLING=true # Tue, 19 
Nov 2019 20:21:15 GMT VOLUME [/var/lib/eg] # Tue, 19 Nov 2019 20:21:16 GMT EXPOSE 8080 9876 # Tue, 19 Nov 2019 20:21:16 GMT COPY file:9481e65ab3ccc3b910b8af90d3df04d9f70030b8f8a0cfcc390840936290aaab in /usr/local/bin/ # Tue, 19 Nov 2019 20:21:16 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 19 Nov 2019 20:21:16 GMT CMD ["node" "-e" "require('express-gateway')().run();"] ``` - Layers: - `sha256:fb7172052a60e640810f01efff381654bf9ed44082461455cfcc6306d192d541` Last Modified: Mon, 21 Oct 2019 16:48:40 GMT Size: 2.6 MB (2573587 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f1c71a7b2498dcc113691c866bcda25b620b5bf41738aef301cbaf69f0088d2f` Last Modified: Tue, 19 Nov 2019 20:05:37 GMT Size: 21.3 MB (21275557 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d6f46bffddced5fe9f7ceab6a7d35833bbdddb50a5566ba97ed7966033542d3d` Last Modified: Tue, 19 Nov 2019 20:05:33 GMT Size: 1.4 MB (1407884 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2015aa92718bf8555f6eaf23246b03887425cb93c8b80525b4e317bcc46e95a2` Last Modified: Tue, 19 Nov 2019 20:05:34 GMT Size: 279.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f29b5d46f142a48c759746d9311fba8e7717fcfa248c90c333ba5c87316f7d06` Last Modified: Tue, 19 Nov 2019 20:21:27 GMT Size: 9.1 MB (9098709 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:7a9f99de87a70f03f9935f40ab944bc565243e39c80137e483db2cc4c54b5215` Last Modified: Tue, 19 Nov 2019 20:21:26 GMT Size: 495.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
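Both `RUN` steps in the Dockerfiles above gate the downloaded artifacts on a SHA-256 check before unpacking them. The pattern is easy to reuse on its own; the sketch below recreates it with a stand-in file (the filename is borrowed from the Dockerfile, not a real Node.js download):

```shell
# Recreate the checksum gate used in the Dockerfile: sha256sum -c reads
# "<hash>  <filename>" lines from stdin and fails unless the file matches.
# The tarball here is a stand-in file, not a real release artifact.
printf 'stand-in artifact\n' > node-v10.17.0-linux-x64-musl.tar.xz
CHECKSUM="$(sha256sum node-v10.17.0-linux-x64-musl.tar.xz | cut -d' ' -f1)"
echo "$CHECKSUM  node-v10.17.0-linux-x64-musl.tar.xz" | sha256sum -c -
```

If the file were tampered with after the checksum was recorded, `sha256sum -c` would exit non-zero and the `&&` chain in the Dockerfile would abort the build.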
79.603604
2,637
0.708692
yue_Hant
0.362
c972c4d02b440aa728996cfee3feb967ba34def7
2,933
md
Markdown
extensions/obsidian/README.md
sandypockets/extensions
22d78df61f16ac15cb533b4ae0a8a8a96e286698
[ "MIT" ]
null
null
null
extensions/obsidian/README.md
sandypockets/extensions
22d78df61f16ac15cb533b4ae0a8a8a96e286698
[ "MIT" ]
null
null
null
extensions/obsidian/README.md
sandypockets/extensions
22d78df61f16ac15cb533b4ae0a8a8a96e286698
[ "MIT" ]
null
null
null
# Obsidian for Raycast

This is a Raycast extension with commands for the note-taking and knowledge management app Obsidian. To use it, simply open Raycast Search and type one of the following commands:

## Search Note

This command allows for quick access to all of your notes. It features several actions which you can trigger with these keyboard shortcuts:

- `enter` will open the note in "Quick Look"
- `cmd + enter` will open the note in Obsidian
- `opt + a` will let you append text to the note
- `opt + s` will append selected text to the note
- `opt + c` will copy the note's content to your clipboard
- `opt + v` will paste the note's content into the app you used before Raycast
- `opt + l` will copy a markdown link for the note to your clipboard
- `opt + u` will copy the Obsidian URI for the note to your clipboard (see: [Obsidian URI](https://help.obsidian.md/Advanced+topics/Using+obsidian+URI))

The primary action (`enter`) can be changed in the extension's preferences.

![Search Note Command](https://user-images.githubusercontent.com/67844154/161255751-8a460ca1-c38f-4133-adaa-909f7a450ab1.png)

## Open Vault

This command will show a list of previously specified vaults which you can open by pressing `enter`.

![Open Vault Command](https://user-images.githubusercontent.com/67844154/161255791-66445ad2-0e27-4c5b-b751-a8a404d18c15.png)

## Create Note

This command lets you create new notes on the fly by entering a name, optionally a path to a subfolder in your vault, and some content. You can use the tag picker to add tags to the note's YAML frontmatter.

![Create Note Command](https://user-images.githubusercontent.com/67844154/161255831-b21fd820-68b8-4829-a654-b470646ba67b.png)

## Daily Note

This command will open the daily note from the selected vault. If a daily note doesn't exist, it will create one and open it.
It requires the community plugin [Advanced Obsidian URI](https://obsidian.md/plugins?id=obsidian-advanced-uri) and the core plugin "Daily notes" to be installed and enabled.

## Preferences

### General settings

- set path/paths to your vault/vaults (comma separated)

### Search Note

- exclude folders, files and paths so they don't show up in the search
- hide YAML frontmatter in "Quick Look" and copy/paste
- hide wikilinks in "Quick Look" and copy/paste
- select primary action (for `enter`)

### Create Note

- default path where a new note will be created
- default tag (will be selected by default in the tag picker)
- list of tags to be suggested in the tag picker (comma separated)

## Blog posts

- [First Update Raycast Obsidian Extension](https://www.marc-julian.de/2022/03/Obsidian%20Raycast%20Extension%20Update.html)
- [Obsidian Raycast Extension](https://www.marc-julian.de/2022/01/raycastobsidian.html)

## Contributions and Credits

Thank you [macedotavares](https://forum.obsidian.md/t/big-sur-icon/8121?u=marcjulian) for letting me use your amazing Obsidian (Big Sur) icon.
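The `opt + u` action above copies an Obsidian URI of the form `obsidian://open?vault=...&file=...` (see the linked Obsidian URI help page). A minimal shell sketch of building such a URI by hand — the vault and note names are made-up examples, and the `enc` helper only percent-encodes spaces and slashes:

```shell
# Build an Obsidian "open" URI; spaces and slashes must be percent-encoded.
vault="My Vault"
note="Daily Notes/2022-04-01"
enc() { printf '%s' "$1" | sed 's/ /%20/g; s,/,%2F,g'; }
echo "obsidian://open?vault=$(enc "$vault")&file=$(enc "$note")"
```

A full implementation would encode all reserved URI characters, but this covers the common case of note names with spaces and subfolder paths.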
50.568966
204
0.769519
eng_Latn
0.976031
c972f348f267584292788cb63d58dfe586230538
192
md
Markdown
mysql/README.md
ynahmany/code-snippet
9bddece1a2a8bdf73a368fd1c9d70419b30cc199
[ "MIT" ]
7
2020-01-11T20:42:53.000Z
2020-02-01T16:48:44.000Z
mysql/README.md
ynahmany/code-snippet
9bddece1a2a8bdf73a368fd1c9d70419b30cc199
[ "MIT" ]
59
2020-01-09T17:39:43.000Z
2020-02-22T07:49:30.000Z
mysql/README.md
BrainBackup/code-snippet
9bddece1a2a8bdf73a368fd1c9d70419b30cc199
[ "MIT" ]
2
2020-04-24T15:14:26.000Z
2020-05-06T20:54:57.000Z
# Mysql DB

## Run locally with docker-compose

`$ docker-compose up --build`

## Connect to web ui - Adminer

`$ open localhost:8080`

`$ user: root; password: check docker-compose.yml file`
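Since the root password lives in `docker-compose.yml` (as noted above), scripts can read it from there instead of hardcoding it. A sketch, assuming a standard `MYSQL_ROOT_PASSWORD` environment entry — the compose file written here is a stand-in, not this repo's actual file:

```shell
# Stand-in compose file; the real one ships with the repo.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
# Extract the root password for use in scripts.
grep -o 'MYSQL_ROOT_PASSWORD: .*' docker-compose.yml | cut -d' ' -f2
```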
16
55
0.692708
eng_Latn
0.834853
c973348e96a5e35aaffb9b5d1b9092dfac71645c
959
md
Markdown
docs/ru/behaviours-rtl.md
andruha/yii2-swiper
6fb20bc18037cdaeab6f5ee91e810d479034a349
[ "MIT" ]
null
null
null
docs/ru/behaviours-rtl.md
andruha/yii2-swiper
6fb20bc18037cdaeab6f5ee91e810d479034a349
[ "MIT" ]
null
null
null
docs/ru/behaviours-rtl.md
andruha/yii2-swiper
6fb20bc18037cdaeab6f5ee91e810d479034a349
[ "MIT" ]
2
2020-05-08T19:18:57.000Z
2021-07-09T16:16:43.000Z
# Right-to-left display

To display content right-to-left, you must declare the `rtl` behaviour in the `lantongxue\yii2\swiper\Swiper::$behaviours` field; otherwise the behaviour will not be applied. This behaviour simply adds the option `'dir' => 'rtl'` to the widget's container tag.

> Note: setting `Swiper::$containerOptions['dir'] = 'rtl'` has the same effect.

Example:

```PHP
<?php
use lantongxue\yii2\swiper\Swiper;

echo Swiper::widget( [
    'items'      => [ 'Slide 1', 'Slide 2', 'Slide 3' ],
    'behaviours' => [ 'rtl' ]
] );

// Via the named constant
echo Swiper::widget( [
    'items'      => [ 'Slide 1', 'Slide 2', 'Slide 3' ],
    'behaviours' => [ Swiper::BEHAVIOUR_RTL ]
] );

// Via the container options
echo Swiper::widget( [
    'items'            => [ 'Slide 1', 'Slide 2', 'Slide 3' ],
    'containerOptions' => [ 'dir' => 'rtl' ]
] );
```
18.803922
105
0.584984
rus_Cyrl
0.50441
c973503e7fec6e970079662afd649ffc4a7dea66
14,767
md
Markdown
articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
wastu01/azure-docs.zh-tw
7ee2fba199b6243c617953684afa67b83b2acc82
[ "CC-BY-4.0", "MIT" ]
66
2017-08-24T10:28:13.000Z
2022-03-04T14:01:29.000Z
articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
wastu01/azure-docs.zh-tw
7ee2fba199b6243c617953684afa67b83b2acc82
[ "CC-BY-4.0", "MIT" ]
534
2017-06-30T19:57:07.000Z
2022-03-11T08:12:44.000Z
articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
wastu01/azure-docs.zh-tw
7ee2fba199b6243c617953684afa67b83b2acc82
[ "CC-BY-4.0", "MIT" ]
105
2017-07-04T11:37:54.000Z
2022-03-20T06:10:38.000Z
---
title: 'Tutorial: Configure SAP SuccessFactors writeback in Azure Active Directory | Microsoft Docs'
description: Learn how to configure attribute write-back from Azure AD to SAP SuccessFactors
services: active-directory
author: cmmdesai
manager: CelesteDG
ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.topic: tutorial
ms.workload: identity
ms.date: 10/14/2020
ms.author: chmutali
ms.openlocfilehash: d39e00a80ab167936a749c73867b4343e6ed9d76
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 11/25/2020
ms.locfileid: "96006433"
---
# <a name="tutorial-configure-attribute-write-back-from-azure-ad-to-sap-successfactors"></a>Tutorial: Configure attribute write-back from Azure AD to SAP SuccessFactors

This tutorial describes the steps for writing back attributes from Azure AD to SAP SuccessFactors Employee Central.

## <a name="overview"></a>Overview

You can configure the SAP SuccessFactors Writeback app to write specific attributes from Azure Active Directory to SAP SuccessFactors Employee Central. The SuccessFactors Writeback provisioning app supports assigning values to the following Employee Central attributes:

* Work email
* Username
* Business phone number (including country code, area code, number, and extension)
* Business phone number primary flag
* Cell phone number (including country code, area code, number)
* Cell phone primary flag
* User custom01–custom15 attributes
* loginMethod attribute

> [!NOTE]
> This app has no dependency on the SuccessFactors inbound user provisioning integration apps. You can configure it independently of the [SuccessFactors to on-premises AD](sap-successfactors-inbound-provisioning-tutorial.md) provisioning app or the [SuccessFactors to Azure AD](sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) provisioning app.

### <a name="who-is-this-user-provisioning-solution-best-suited-for"></a>Who is this user provisioning solution best suited for?

This SuccessFactors Writeback user provisioning solution is ideally suited for:

* Organizations using Microsoft 365 that want to write authoritative attributes managed by IT (such as email address, phone, username) back to SuccessFactors Employee Central.

## <a name="configuring-successfactors-for-the-integration"></a>Configuring SuccessFactors for the integration

All SuccessFactors provisioning connectors require credentials of a SuccessFactors account with the right permissions to invoke the Employee Central OData API. This section describes the steps to create a service account in SuccessFactors and grant it the appropriate permissions.

* [Create/identify an API user account in SuccessFactors](#createidentify-api-user-account-in-successfactors)
* [Create an API permissions role](#create-an-api-permissions-role)
* [Create a permission group for the API user](#create-a-permission-group-for-the-api-user)
* [Grant the permission role to the permission group](#grant-permission-role-to-the-permission-group)

### <a name="createidentify-api-user-account-in-successfactors"></a>Create/identify API user account in SuccessFactors

Work with your SuccessFactors admin team or implementation partner to create or identify a user account in SuccessFactors that will be used to invoke the OData API. The username and password credentials of this account are required when configuring the provisioning app in Azure AD.

### <a name="create-an-api-permissions-role"></a>Create an API permissions role

1. Log in to SAP SuccessFactors with a user account that has access to the Admin Center.
1. Search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
   ![Manage Permission Roles](./media/sap-successfactors-inbound-provisioning/manage-permission-roles.png)
1. From the Permission Role List, click **Create New**.
   > [!div class="mx-imgBorder"]
   > ![Create new permission role](./media/sap-successfactors-inbound-provisioning/create-new-permission-role-1.png)
1. Add a **Role Name** and **Description** for the new permission role. The name and description should indicate that the role is for API usage permissions.
   > [!div class="mx-imgBorder"]
   > ![Permission role detail](./media/sap-successfactors-inbound-provisioning/permission-role-detail.png)
1. Under **Permission settings**, click **Permission...**, then scroll down the permission list and click **Manage Integration Tools**. Check the box for **Allow Admin to Access OData API through Basic Authentication**.
   > [!div class="mx-imgBorder"]
   > ![Manage integration tools](./media/sap-successfactors-inbound-provisioning/manage-integration-tools.png)
1. Scroll down in the same box and select **Employee Central API**. Add permissions as shown below to read and edit using the ODATA API. Select the edit option if you plan to use the same account for the writeback to SuccessFactors scenario.
   > [!div class="mx-imgBorder"]
   > ![Read write permissions](./media/sap-successfactors-inbound-provisioning/odata-read-write-perm.png)
1. Click **Done**. Click **Save Changes**.

### <a name="create-a-permission-group-for-the-api-user"></a>Create a permission group for the API user

1. In the SuccessFactors Admin Center, search for *Manage Permission Groups*, then select **Manage Permission Groups** from the search results.
   > [!div class="mx-imgBorder"]
   > ![Manage permission groups](./media/sap-successfactors-inbound-provisioning/manage-permission-groups.png)
1. From the Manage Permission Groups window, click **Create New**.
   > [!div class="mx-imgBorder"]
   > ![Add new group](./media/sap-successfactors-inbound-provisioning/create-new-group.png)
1. Add a Group Name for the new group. The group name should indicate that the group is for API users.
   > [!div class="mx-imgBorder"]
   > ![Permission group name](./media/sap-successfactors-inbound-provisioning/permission-group-name.png)
1. Add members to the group. For example, you could select **Username** from the People Pool drop-down menu and then enter the username of the API account that will be used for the integration.
   > [!div class="mx-imgBorder"]
   > ![Add group members](./media/sap-successfactors-inbound-provisioning/add-group-members.png)
1. Click **Done** to finish creating the permission group.

### <a name="grant-permission-role-to-the-permission-group"></a>Grant permission role to the permission group

1. In the SuccessFactors Admin Center, search for *Manage Permission Roles*, then select **Manage Permission Roles** from the search results.
1. From the **Permission Role List**, select the role that you created for API usage permissions.
1. Under **Grant this role to...**, click the **Add...** button.
1. Select **Permission Group...** from the drop-down menu, then click **Select...** to open the Groups window and search for and select the group created above.
   > [!div class="mx-imgBorder"]
   > ![Add permission group](./media/sap-successfactors-inbound-provisioning/add-permission-group.png)
1. Review the permission role granted to the permission group.
   > [!div class="mx-imgBorder"]
   > ![Permission role and group detail](./media/sap-successfactors-inbound-provisioning/permission-role-group.png)
1. Click **Save Changes**.

## <a name="preparing-for-successfactors-writeback"></a>Preparing for SuccessFactors Writeback

The SuccessFactors Writeback provisioning app uses certain *code* values for setting emails and phone numbers in Employee Central. These *code* values are set as constant values in the attribute-mapping table and are different for each SuccessFactors instance. This section provides steps to retrieve these *code* values.

> [!NOTE]
> Involve your SuccessFactors admin to complete the steps in this section.

### <a name="identify-email-and-phone-number-picklist-names"></a>Identify email and phone number picklist names

In SAP SuccessFactors, a *picklist* is a configurable set of options from which a user can make a selection. The different types of email and phone number (e.g. business, personal, other) are represented using picklists. In this step, we will identify the picklists configured in your SuccessFactors tenant to store email and phone number values.

1. In the SuccessFactors Admin Center, search for *Manage Business Configuration*.
   > [!div class="mx-imgBorder"]
   > ![Manage business configuration](./media/sap-successfactors-inbound-provisioning/manage-business-config.png)
1. Under **HRIS Elements**, select **emailInfo**, then click **Details** for the **email-type** field.
   > [!div class="mx-imgBorder"]
   > ![Get email info](./media/sap-successfactors-inbound-provisioning/get-email-info.png)
1. On the **email-type** details page, note down the name of the picklist associated with this field. By default, it is **ecEmailType**. However, it may be different in your tenant.
   > [!div class="mx-imgBorder"]
   > ![Identify email picklist](./media/sap-successfactors-inbound-provisioning/identify-email-picklist.png)
1. Under **HRIS Elements**, select **phoneInfo**, then click **Details** for the **phone-type** field.
   > [!div class="mx-imgBorder"]
   > ![Get phone info](./media/sap-successfactors-inbound-provisioning/get-phone-info.png)
1. On the **phone-type** details page, note down the name of the picklist associated with this field. By default, it is **ecPhoneType**. However, it may be different in your tenant.
   > [!div class="mx-imgBorder"]
   > ![Identify phone picklist](./media/sap-successfactors-inbound-provisioning/identify-phone-picklist.png)

### <a name="retrieve-constant-value-for-emailtype"></a>Retrieve constant value for emailType

1. In the SuccessFactors Admin Center, search for and open *Picklist Center*.
1. Use the name of the email picklist captured in the previous section (e.g. ecEmailType) to find the email picklist.
   > [!div class="mx-imgBorder"]
   > ![Find email type picklist](./media/sap-successfactors-inbound-provisioning/find-email-type-picklist.png)
1. Open the active email picklist.
   > [!div class="mx-imgBorder"]
   > ![Open active email type picklist](./media/sap-successfactors-inbound-provisioning/open-active-email-type-picklist.png)
1. On the email type picklist page, select the **business** email type.
   > [!div class="mx-imgBorder"]
   > ![Select business email type](./media/sap-successfactors-inbound-provisioning/select-business-email-type.png)
1. Note down the **Option ID** associated with the *business* email. This is the code that will be used with emailType in the attribute-mapping table.
   > [!div class="mx-imgBorder"]
   > ![Get email type code](./media/sap-successfactors-inbound-provisioning/get-email-type-code.png)
   > [!NOTE]
   > Drop the comma character when you copy over the value. For example, if the **Option ID** value is 8,448, then set emailType in Azure AD to the constant 8448 (without the comma character).

### <a name="retrieve-constant-value-for-phonetype"></a>Retrieve constant value for phoneType

1. In the SuccessFactors Admin Center, search for and open *Picklist Center*.
1. Use the name of the phone picklist captured in the previous section to find the phone picklist.
   > [!div class="mx-imgBorder"]
   > ![Find phone type picklist](./media/sap-successfactors-inbound-provisioning/find-phone-type-picklist.png)
1. Open the active phone picklist.
   > [!div class="mx-imgBorder"]
   > ![Open active phone type picklist](./media/sap-successfactors-inbound-provisioning/open-active-phone-type-picklist.png)
1. On the phone type picklist page, review the different phone types listed under **Picklist Values**.
   > [!div class="mx-imgBorder"]
   > ![Review phone types](./media/sap-successfactors-inbound-provisioning/review-phone-types.png)
1. Note down the **Option ID** associated with the *business* phone. This is the code that will be used with businessPhoneType in the attribute-mapping table.
   > [!div class="mx-imgBorder"]
   > ![Get business phone code](./media/sap-successfactors-inbound-provisioning/get-business-phone-code.png)
1. Note down the **Option ID** associated with the *cell* phone. This is the code that will be used with cellPhoneType in the attribute-mapping table.
   > [!div class="mx-imgBorder"]
   > ![Get cell phone code](./media/sap-successfactors-inbound-provisioning/get-cell-phone-code.png)
   > [!NOTE]
   > Drop the comma character when you copy over the value. For example, if the **Option ID** value is 10,606, then set cellPhoneType in Azure AD to the constant 10606 (without the comma character).

## <a name="configuring-successfactors-writeback-app"></a>Configuring SuccessFactors Writeback app

This section provides steps for:

* [Adding the provisioning connector app and configuring connectivity to SuccessFactors](#part-1-add-the-provisioning-connector-app-and-configure-connectivity-to-successfactors)
* [Configuring attribute mappings](#part-2-configure-attribute-mappings)
* [Enabling and launching user provisioning](#enable-and-launch-user-provisioning)

### <a name="part-1-add-the-provisioning-connector-app-and-configure-connectivity-to-successfactors"></a>Part 1: Add the provisioning connector app and configure connectivity to SuccessFactors

**To configure SuccessFactors Writeback:**

1. Go to <https://portal.azure.com>
2. In the left navigation bar, select **Azure Active Directory**
3. Select **Enterprise Applications**, then **All Applications**.
4. Select **Add an application**, and select the **All** category.
5. Search for **SuccessFactors Writeback**, and add that app from the gallery.
6. After the app is added and the app details screen is shown, select **Provisioning**
7. Change the **Provisioning Mode** to **Automatic**
8. Complete the **Admin Credentials** section as follows:
   * **Admin Username** – Enter the username of the SuccessFactors API user account, with the company ID appended. It has the format: **username\@companyID**
   * **Admin Password** – Enter the password of the SuccessFactors API user account.
   * **Tenant URL** – Enter the name of the SuccessFactors OData API services endpoint. Only enter the hostname of the server, without http or https. This value should look like: `api4.successfactors.com`.
   * **Notification Email** – Enter your email address, and check the "Send an email notification when a failure occurs" checkbox.
     > [!NOTE]
     > The Azure AD Provisioning Service sends an email notification if the provisioning job goes into a [quarantine](../app-provisioning/application-provisioning-quarantine-status.md) state.
   * Click the **Test Connection** button. If the connection test succeeds, click the **Save** button at the top. If it fails, double-check that the SuccessFactors credentials and URL are valid.
     >[!div class="mx-imgBorder"]
     >![Azure portal](./media/sap-successfactors-inbound-provisioning/sfwb-provisioning-creds.png)
   * Once the credentials are saved successfully, the **Mappings** section will display the default mapping. Refresh the page if the attribute mappings are not visible.

### <a name="part-2-configure-attribute-mappings"></a>Part 2: Configure attribute mappings

In this section, you will configure how user data flows from Azure AD to SuccessFactors.

1. On the **Provisioning** tab under **Mappings**, click **Provision Azure Active Directory Users**.
1. In the **Source Object Scope** field, you can select which sets of users in Azure AD should be considered for writeback, by defining a set of attribute-based filters. The default scope is "all users in Azure AD".
   > [!TIP]
   > When you are configuring the provisioning app for the first time, you will need to test and verify your attribute mappings and expressions to make sure they are giving you the desired result. Microsoft recommends using the scoping filters under **Source Object Scope** to test your mappings with a few test users from Azure AD. Once you have verified that the mappings work, you can either remove the filter or gradually expand it to include more users.
1. The **Target Object Actions** field only supports the **Update** operation.
1. In the mapping table under the **Attribute mappings** section, you can map the following Azure Active Directory attributes to SuccessFactors. The table below provides guidance on how to map the writeback attributes.

   | \# | Azure AD attribute | SuccessFactors attribute | Remarks |
   |--|--|--|--|
   | 1 | employeeId | personIdExternal | By default, this attribute is the matching identifier. Instead of employeeId, you can use any other Azure AD attribute that contains the value equal to personIdExternal in SuccessFactors. |
   | 2 | mail | email | Map the email attribute source. For testing purposes, you can map userPrincipalName to email. |
   | 3 | 8448 | emailType | This constant value is the SuccessFactors ID value associated with business email. Update this value to match your SuccessFactors environment. See the section [Retrieve constant value for emailType](#retrieve-constant-value-for-emailtype) for steps to set this value. |
   | 4 | true | emailIsPrimary | Use this attribute to set business email as primary in SuccessFactors. If business email is not primary, set this flag to false. |
   | 5 | userPrincipalName | [custom01 – custom15] | Using **Add New Mapping**, you can optionally write userPrincipalName or any Azure AD attribute to a custom attribute available in the SuccessFactors User object. |
   | 6 | on-prem-samAccountName | username | Using **Add New Mapping**, you can optionally map the on-premises samAccountName to the SuccessFactors username attribute. |
   | 7 | SSO | loginMethod | If the SuccessFactors tenant is set up for [partial SSO](https://apps.support.sap.com/sap/support/knowledge/en/2320766), then using **Add New Mapping**, you can optionally set loginMethod to a constant value of "SSO" or "PWD". |
   | 8 | telephoneNumber | businessPhoneNumber | Use this mapping to flow telephoneNumber from Azure AD to the SuccessFactors business phone number. |
   | 9 | 10605 | businessPhoneType | This constant value is the SuccessFactors ID value associated with business phone. Update this value to match your SuccessFactors environment. See the section [Retrieve constant value for phoneType](#retrieve-constant-value-for-phonetype) for steps to set this value. |
   | 10 | true | businessPhoneIsPrimary | Use this attribute to set the primary flag for the business phone number. Valid values are true or false. |
   | 11 | mobile | cellPhoneNumber | Use this mapping to flow the mobile number from Azure AD to the SuccessFactors cell phone number. |
   | 12 | 10606 | cellPhoneType | This constant value is the SuccessFactors ID value associated with cell phone. Update this value to match your SuccessFactors environment. See the section [Retrieve constant value for phoneType](#retrieve-constant-value-for-phonetype) for steps to set this value. |
   | 13 | false | cellPhoneIsPrimary | Use this attribute to set the primary flag for the cell phone number. Valid values are true or false. |

1. Validate and review your attribute mappings.
   >[!div class="mx-imgBorder"]
   >![Writeback attribute mapping](./media/sap-successfactors-inbound-provisioning/writeback-attribute-mapping.png)
1. Click **Save** to save the mappings. Next, we will update the JSONPath API expressions to use the phoneType codes of your SuccessFactors instance.
1. Select **Show advanced options**.
   >[!div class="mx-imgBorder"]
   >![Show advanced options](./media/sap-successfactors-inbound-provisioning/show-advanced-options.png)
1. Click **Edit attribute list for SuccessFactors**.
   > [!NOTE]
   > If the **Edit attribute list for SuccessFactors** option does not show in the Azure portal, use the URL *https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true* to access the page.
1. The **API Expression** column in this view displays the JSONPath expressions used by the connector.
1. Update the JSONPath expressions for business phone and cell phone to use the ID values (businessPhoneType and cellPhoneType) corresponding to your environment.
   >[!div class="mx-imgBorder"]
   >![Phone JSONPath change](./media/sap-successfactors-inbound-provisioning/phone-json-path-change.png)
1. Click **Save** to save the mappings.

## <a name="enable-and-launch-user-provisioning"></a>Enable and launch user provisioning

Once the SuccessFactors provisioning app configurations are complete, you can turn on the provisioning service in the Azure portal.

> [!TIP]
> By default, when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are errors in the mapping or data issues, the provisioning job might fail and go into the quarantine state. To avoid this, as a best practice, we recommend configuring the **Source Object Scope** filter and testing your attribute mappings with a few test users before launching the full sync for all users. Once you have verified that the mappings work and are giving you the desired results, you can either remove the filter or gradually expand it to include more users.

1. In the **Provisioning** tab, set the **Provisioning Status** to **On**.
1. Select **Scope**. You can select one of the following options:
   * **Sync all users and groups**: Select this option if you plan to write back mapped attributes of all users from Azure AD to SuccessFactors, subject to the scoping rules defined under **Mappings** -> **Source Object Scope**.
   * **Sync only assigned users and groups**: Select this option if you plan to write back mapped attributes of only those users that you have assigned to this application in the **Application** -> **Manage** -> **Users and groups** menu option. These users are also subject to the scoping rules defined under **Mappings** -> **Source Object Scope**.
   > [!div class="mx-imgBorder"]
   > ![Select writeback scope](./media/sap-successfactors-inbound-provisioning/select-writeback-scope.png)
   > [!NOTE]
   > The SuccessFactors Writeback provisioning app does not support "group assignment". Only "user assignment" is supported.
1. Click **Save**.
1. This operation will start the initial sync, which can take a variable number of hours depending on how many users are in the Azure AD tenant and the scope defined for the operation. You can check the progress bar to track the progress of the sync cycle.
1. At any time, check the **Provisioning logs** tab in the Azure portal to see what actions the provisioning service has performed. The provisioning logs list all individual sync events performed by the provisioning service.
1. Once the initial sync is completed, it will write an audit summary report in the **Provisioning** tab, as shown below.
   > [!div class="mx-imgBorder"]
   > ![Provisioning progress bar](./media/sap-successfactors-inbound-provisioning/prov-progress-bar-stats.png)

## <a name="supported-scenarios-known-issues-and-limitations"></a>Supported scenarios, known issues and limitations

Refer to the [writeback scenarios section](../app-provisioning/sap-successfactors-integration-reference.md#writeback-scenarios) of the SAP SuccessFactors integration reference guide.

## <a name="next-steps"></a>Next steps

* [Deep dive into the Azure AD and SAP SuccessFactors integration reference](../app-provisioning/sap-successfactors-integration-reference.md)
* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
* [Learn how to configure single sign-on between SuccessFactors and Azure Active Directory](successfactors-tutorial.md)
* [Learn how to integrate other SaaS applications with Azure Active Directory](tutorial-list.md)
* [Learn how to export and import your provisioning configuration](../app-provisioning/export-import-provisioning-configuration.md)
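The Tenant URL configured above is just the OData hostname; the Test Connection step then calls the Employee Central OData v2 API with basic auth. A sketch of what such a request looks like — the username, host, and the choice of the `PerPerson` entity are illustrative placeholders, not values from any real tenant:

```shell
# Compose (but do not send) a probe against the Employee Central OData v2 API.
SF_HOST="api4.successfactors.com"   # Tenant URL: hostname only, no scheme
SF_USER="svc_azuread@companyID"     # username@companyID format described above
url="https://$SF_HOST/odata/v2/PerPerson?\$top=1"
echo "GET $url as $SF_USER"
```

In practice you would pass the composed URL to `curl -u "$SF_USER:$SF_PASSWORD"`; a 200 response confirms the API user's OData permissions are in place.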
41.248603
245
0.740706
yue_Hant
0.821841
c9739f7edf5aecbd7b6be69ae98ef9fd70043102
33,659
md
Markdown
README.md
HungryJoe/skywalking-eyes
1075f4d098cef2292e763ae40e9ad3ce0fb4bc8c
[ "Apache-2.0" ]
null
null
null
README.md
HungryJoe/skywalking-eyes
1075f4d098cef2292e763ae40e9ad3ce0fb4bc8c
[ "Apache-2.0" ]
null
null
null
README.md
HungryJoe/skywalking-eyes
1075f4d098cef2292e763ae40e9ad3ce0fb4bc8c
[ "Apache-2.0" ]
null
null
null
# SkyWalking Eyes <img src="http://skywalking.apache.org/assets/logo.svg" alt="Sky Walking logo" height="90px" align="right" /> A full-featured license tool to check and fix license headers and resolve dependencies' licenses. [![Twitter Follow](https://img.shields.io/twitter/follow/asfskywalking.svg?style=for-the-badge&label=Follow&logo=twitter)](https://twitter.com/AsfSkyWalking) ## Usage You can use License-Eye in GitHub Actions or in your local machine. ### GitHub Actions To use License-Eye in GitHub Actions, add a step in your GitHub workflow. ```yaml - name: Check License Header uses: apache/skywalking-eyes@main # always prefer to use a revision instead of `main`. # with: # log: debug # optional: set the log level. The default value is `info`. # config: .licenserc.yaml # optional: set the config file. The default value is `.licenserc.yaml`. # token: # optional: the token that license eye uses when it needs to comment on the pull request. Set to empty ("") to disable commenting on pull request. The default value is ${{ github.token }} # mode: # optional: Which mode License-Eye should be run in. Choices are `check` or `fix`. The default value is `check`. ``` Add a `.licenserc.yaml` in the root of your project, for Apache Software Foundation projects, the following configuration should be enough. ```yaml header: license: spdx-id: Apache-2.0 copyright-owner: Apache Software Foundation paths-ignore: - 'dist' - 'licenses' - '**/*.md' - 'LICENSE' - 'NOTICE' comment: on-failure ``` **NOTE**: The full configurations can be found in [the configuration section](#configurations). #### Using the Action in Fix Mode By default the action runs License-Eye in check mode, which will raise an error if any of the processed files are missing license headers. If `mode` is set to `fix`, the action will instead apply the license header to any processed file that is missing a license header. The fixed files can then be pushed back to the pull request using another GitHub action. 
For example: ```yaml - name: Check License Header uses: apache/skywalking-eyes@main with: mode: fix - name: Apply Changes uses: EndBug/add-and-commit@v4 with: author_name: License Bot author_email: [email protected] message: 'Automatic application of license header' env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} ``` **Note**: The exit code of fix mode is always 0 and can not be used to block CI status. Consider running the action in check mode if you would like CI to fail when a file is missing a license header. ### Docker Image ```shell docker run -it --rm -v $(pwd):/github/workspace apache/skywalking-eyes header check docker run -it --rm -v $(pwd):/github/workspace apache/skywalking-eyes header fix ``` ### Docker Image from the latest codes For users and developers who want to help to test the latest codes on main branch, we publish Docker image to GitHub Container Registry for every commit in main branch, tagged with the commit sha, if it's the latest commit in main branch, it's also tagged with `latest`. **Note**: these Docker images are not official Apache releases. For official releases, please refer to [the download page](https://skywalking.apache.org/downloads/#SkyWalkingEyes) for executable binary and [the Docker hub](https://hub.docker.com/r/apache/skywalking-eyes) for Docker images. ```shell docker run -it --rm -v $(pwd):/github/workspace ghcr.io/apache/skywalking-eyes/license-eye header check docker run -it --rm -v $(pwd):/github/workspace ghcr.io/apache/skywalking-eyes/license-eye header fix ``` ### Compile from Source ```bash git clone https://github.com/apache/skywalking-eyes cd skywalking-eyes make build ``` If you have Go SDK installed, you can also use `go install` command to install the latest code. 
```bash
go install github.com/apache/skywalking-eyes/cmd/license-eye@latest
```

#### Check License Header

```bash
license-eye -c test/testdata/.licenserc_for_test_check.yaml header check
```

<details>
<summary>Header Check Result</summary>

```
INFO Loading configuration from file: test/testdata/.licenserc_for_test_check.yaml
INFO Totally checked 30 files, valid: 12, invalid: 12, ignored: 6, fixed: 0
ERROR the following files don't have a valid license header:
test/testdata/include_test/without_license/testcase.go
test/testdata/include_test/without_license/testcase.graphql
test/testdata/include_test/without_license/testcase.ini
test/testdata/include_test/without_license/testcase.java
test/testdata/include_test/without_license/testcase.md
test/testdata/include_test/without_license/testcase.php
test/testdata/include_test/without_license/testcase.py
test/testdata/include_test/without_license/testcase.sh
test/testdata/include_test/without_license/testcase.yaml
test/testdata/include_test/without_license/testcase.yml
test/testdata/test-spdx-asf.yaml
test/testdata/test-spdx.yaml
exit status 1
```

</details>

#### Fix License Header

```bash
license-eye -c test/testdata/.licenserc_for_test_fix.yaml header fix
```

<details>
<summary>Header Fix Result</summary>

```
INFO Loading configuration from file: test/testdata/.licenserc_for_test_fix.yaml
INFO Totally checked 20 files, valid: 10, invalid: 10, ignored: 0, fixed: 10
```

</details>

#### Resolve Dependencies' Licenses

This command assists humans in auditing the dependencies' licenses; its exit code is always 0. You can also use the `--output` (or `-o`) flag to save the dependencies' `LICENSE` files to a specified directory so that you can include them in the distribution package if needed.
```bash license-eye -c test/testdata/.licenserc_for_test_check.yaml dep resolve -o ./dependencies/licenses ``` <details> <summary>Dependency Resolve Result</summary> ``` INFO GITHUB_TOKEN is not set, license-eye won't comment on the pull request INFO Loading configuration from file: test/testdata/.licenserc_for_test_check.yaml WARNING Failed to resolve the license of <github.com/gogo/protobuf>: cannot identify license content WARNING Failed to resolve the license of <github.com/kr/logfmt>: cannot find license file WARNING Failed to resolve the license of <github.com/magiconair/properties>: cannot identify license content WARNING Failed to resolve the license of <github.com/miekg/dns>: cannot identify license content WARNING Failed to resolve the license of <github.com/pascaldekloe/goe>: cannot identify license content WARNING Failed to resolve the license of <github.com/russross/blackfriday/v2>: cannot identify license content WARNING Failed to resolve the license of <gopkg.in/check.v1>: cannot identify license content Dependency | License | Version -------------------------------------------------- | -------------- | ------------------------------------ cloud.google.com/go | Apache-2.0 | v0.46.3 cloud.google.com/go/bigquery | Apache-2.0 | v1.0.1 cloud.google.com/go/datastore | Apache-2.0 | v1.0.0 cloud.google.com/go/firestore | Apache-2.0 | v1.1.0 cloud.google.com/go/pubsub | Apache-2.0 | v1.0.1 cloud.google.com/go/storage | Apache-2.0 | v1.0.0 dmitri.shuralyov.com/gpu/mtl | BSD-3-Clause | v0.0.0-20190408044501-666a987793e9 github.com/BurntSushi/toml | MIT | v0.3.1 github.com/BurntSushi/xgb | BSD-3-Clause | v0.0.0-20160522181843-27f122750802 github.com/OneOfOne/xxhash | Apache-2.0 | v1.2.2 github.com/alecthomas/template | BSD-3-Clause | v0.0.0-20160405071501-a0175ee3bccc github.com/alecthomas/units | MIT | v0.0.0-20151022065526-2efee857e7cf github.com/armon/circbuf | MIT | v0.0.0-20150827004946-bbbad097214e github.com/armon/go-metrics | MIT | 
v0.0.0-20180917152333-f0300d1749da github.com/armon/go-radix | MIT | v0.0.0-20180808171621-7fddfc383310 github.com/beorn7/perks | MIT | v1.0.0 github.com/bgentry/speakeasy | MIT | v0.1.0 github.com/bketelsen/crypt | MIT | v0.0.3-0.20200106085610-5cbc8cc4026c github.com/bmatcuk/doublestar/v2 | MIT | v2.0.4 github.com/cespare/xxhash | MIT | v1.1.0 github.com/client9/misspell | MIT | v0.3.4 github.com/coreos/bbolt | MIT | v1.3.2 github.com/coreos/etcd | Apache-2.0 | v3.3.13+incompatible github.com/coreos/go-semver | Apache-2.0 | v0.3.0 github.com/coreos/go-systemd | Apache-2.0 | v0.0.0-20190321100706-95778dfbb74e github.com/coreos/pkg | Apache-2.0 | v0.0.0-20180928190104-399ea9e2e55f github.com/cpuguy83/go-md2man/v2 | MIT | v2.0.0 github.com/davecgh/go-spew | ISC | v1.1.1 github.com/dgrijalva/jwt-go | MIT | v3.2.0+incompatible github.com/dgryski/go-sip13 | MIT | v0.0.0-20181026042036-e10d5fee7954 github.com/fatih/color | MIT | v1.7.0 github.com/fsnotify/fsnotify | BSD-3-Clause | v1.4.7 github.com/ghodss/yaml | MIT | v1.0.0 github.com/go-gl/glfw | BSD-3-Clause | v0.0.0-20190409004039-e6da0acd62b1 github.com/go-kit/kit | MIT | v0.8.0 github.com/go-logfmt/logfmt | MIT | v0.4.0 github.com/go-stack/stack | MIT | v1.8.0 github.com/golang/glog | Apache-2.0 | v0.0.0-20160126235308-23def4e6c14b github.com/golang/groupcache | Apache-2.0 | v0.0.0-20190129154638-5b532d6fd5ef github.com/golang/mock | Apache-2.0 | v1.3.1 github.com/golang/protobuf | BSD-3-Clause | v1.3.2 github.com/google/btree | Apache-2.0 | v1.0.0 github.com/google/go-cmp | BSD-3-Clause | v0.3.0 github.com/google/go-github/v33 | BSD-3-Clause | v33.0.0 github.com/google/go-querystring | BSD-3-Clause | v1.0.0 github.com/google/martian | Apache-2.0 | v2.1.0+incompatible github.com/google/pprof | Apache-2.0 | v0.0.0-20190515194954-54271f7e092f github.com/google/renameio | Apache-2.0 | v0.1.0 github.com/googleapis/gax-go/v2 | BSD-3-Clause | v2.0.5 github.com/gopherjs/gopherjs | BSD-2-Clause | 
v0.0.0-20181017120253-0766667cb4d1 github.com/gorilla/websocket | BSD-2-Clause | v1.4.2 github.com/grpc-ecosystem/go-grpc-middleware | Apache-2.0 | v1.0.0 github.com/grpc-ecosystem/go-grpc-prometheus | Apache-2.0 | v1.2.0 github.com/grpc-ecosystem/grpc-gateway | BSD-3-Clause | v1.9.0 github.com/hashicorp/consul/api | MPL-2.0 | v1.1.0 github.com/hashicorp/consul/sdk | MPL-2.0 | v0.1.1 github.com/hashicorp/errwrap | MPL-2.0 | v1.0.0 github.com/hashicorp/go-cleanhttp | MPL-2.0 | v0.5.1 github.com/hashicorp/go-immutable-radix | MPL-2.0 | v1.0.0 github.com/hashicorp/go-msgpack | BSD-3-Clause | v0.5.3 github.com/hashicorp/go-multierror | MPL-2.0 | v1.0.0 github.com/hashicorp/go-rootcerts | MPL-2.0 | v1.0.0 github.com/hashicorp/go-sockaddr | MPL-2.0 | v1.0.0 github.com/hashicorp/go-syslog | MIT | v1.0.0 github.com/hashicorp/go-uuid | MPL-2.0 | v1.0.1 github.com/hashicorp/go.net | BSD-3-Clause | v0.0.1 github.com/hashicorp/golang-lru | MPL-2.0 | v0.5.1 github.com/hashicorp/hcl | MPL-2.0 | v1.0.0 github.com/hashicorp/logutils | MPL-2.0 | v1.0.0 github.com/hashicorp/mdns | MIT | v1.0.0 github.com/hashicorp/memberlist | MPL-2.0 | v0.1.3 github.com/hashicorp/serf | MPL-2.0 | v0.8.2 github.com/inconshreveable/mousetrap | Apache-2.0 | v1.0.0 github.com/jonboulle/clockwork | Apache-2.0 | v0.1.0 github.com/json-iterator/go | MIT | v1.1.6 github.com/jstemmer/go-junit-report | MIT | v0.0.0-20190106144839-af01ea7f8024 github.com/jtolds/gls | MIT | v4.20.0+incompatible github.com/julienschmidt/httprouter | BSD-3-Clause | v1.2.0 github.com/kisielk/errcheck | MIT | v1.1.0 github.com/kisielk/gotool | MIT | v1.0.0 github.com/konsorten/go-windows-terminal-sequences | MIT | v1.0.1 github.com/kr/pretty | MIT | v0.1.0 github.com/kr/pty | MIT | v1.1.1 github.com/kr/text | MIT | v0.1.0 github.com/mattn/go-colorable | MIT | v0.0.9 github.com/mattn/go-isatty | MIT | v0.0.3 github.com/matttproud/golang_protobuf_extensions | Apache-2.0 | v1.0.1 github.com/mitchellh/cli | MPL-2.0 | v1.0.0 
github.com/mitchellh/go-homedir | MIT | v1.1.0 github.com/mitchellh/go-testing-interface | MIT | v1.0.0 github.com/mitchellh/gox | MPL-2.0 | v0.4.0 github.com/mitchellh/iochan | MIT | v1.0.0 github.com/mitchellh/mapstructure | MIT | v1.1.2 github.com/modern-go/concurrent | Apache-2.0 | v0.0.0-20180306012644-bacd9c7ef1dd github.com/modern-go/reflect2 | Apache-2.0 | v1.0.1 github.com/mwitkow/go-conntrack | Apache-2.0 | v0.0.0-20161129095857-cc309e4a2223 github.com/oklog/ulid | Apache-2.0 | v1.3.1 github.com/pelletier/go-toml | MIT | v1.2.0 github.com/pkg/errors | BSD-2-Clause | v0.8.1 github.com/pmezard/go-difflib | BSD-3-Clause | v1.0.0 github.com/posener/complete | MIT | v1.1.1 github.com/prometheus/client_golang | Apache-2.0 | v0.9.3 github.com/prometheus/client_model | Apache-2.0 | v0.0.0-20190129233127-fd36f4220a90 github.com/prometheus/common | Apache-2.0 | v0.4.0 github.com/prometheus/procfs | Apache-2.0 | v0.0.0-20190507164030-5867b95ac084 github.com/prometheus/tsdb | Apache-2.0 | v0.7.1 github.com/rogpeppe/fastuuid | BSD-3-Clause | v0.0.0-20150106093220-6724a57986af github.com/rogpeppe/go-internal | BSD-3-Clause | v1.3.0 github.com/ryanuber/columnize | MIT | v0.0.0-20160712163229-9b3edd62028f github.com/sean-/seed | MIT | v0.0.0-20170313163322-e2103e2c3529 github.com/shurcooL/sanitized_anchor_name | MIT | v1.0.0 github.com/sirupsen/logrus | MIT | v1.7.0 github.com/smartystreets/assertions | MIT | v0.0.0-20180927180507-b2de0cb4f26d github.com/smartystreets/goconvey | MIT | v1.6.4 github.com/soheilhy/cmux | Apache-2.0 | v0.1.4 github.com/spaolacci/murmur3 | BSD-3-Clause | v0.0.0-20180118202830-f09979ecbc72 github.com/spf13/afero | Apache-2.0 | v1.1.2 github.com/spf13/cast | MIT | v1.3.0 github.com/spf13/cobra | Apache-2.0 | v1.1.1 github.com/spf13/jwalterweatherman | MIT | v1.0.0 github.com/spf13/pflag | BSD-3-Clause | v1.0.5 github.com/spf13/viper | MIT | v1.7.0 github.com/stretchr/objx | MIT | v0.1.1 github.com/stretchr/testify | MIT | v1.3.0 
github.com/subosito/gotenv | MIT | v1.2.0 github.com/tmc/grpc-websocket-proxy | MIT | v0.0.0-20190109142713-0ad062ec5ee5 github.com/xiang90/probing | MIT | v0.0.0-20190116061207-43a291ad63a2 github.com/yuin/goldmark | MIT | v1.3.5 go.etcd.io/bbolt | MIT | v1.3.2 go.opencensus.io | Apache-2.0 | v0.22.0 go.uber.org/atomic | MIT | v1.4.0 go.uber.org/multierr | MIT | v1.1.0 go.uber.org/zap | MIT | v1.10.0 golang.org/x/crypto | BSD-3-Clause | v0.0.0-20191011191535-87dc89f01550 golang.org/x/exp | BSD-3-Clause | v0.0.0-20191030013958-a1ab85dbe136 golang.org/x/image | BSD-3-Clause | v0.0.0-20190802002840-cff245a6509b golang.org/x/lint | BSD-3-Clause | v0.0.0-20190930215403-16217165b5de golang.org/x/mobile | BSD-3-Clause | v0.0.0-20190719004257-d2bd2a29d028 golang.org/x/mod | BSD-3-Clause | v0.4.2 golang.org/x/net | BSD-3-Clause | v0.0.0-20210726213435-c6fcb2dbf985 golang.org/x/oauth2 | BSD-3-Clause | v0.0.0-20190604053449-0f29369cfe45 golang.org/x/sync | BSD-3-Clause | v0.0.0-20210220032951-036812b2e83c golang.org/x/sys | BSD-3-Clause | v0.0.0-20210510120138-977fb7262007 golang.org/x/term | BSD-3-Clause | v0.0.0-20201126162022-7de9c90e9dd1 golang.org/x/text | BSD-3-Clause | v0.3.6 golang.org/x/time | BSD-3-Clause | v0.0.0-20190308202827-9d24e82272b4 golang.org/x/tools | BSD-3-Clause | v0.1.5 golang.org/x/xerrors | BSD-3-Clause | v0.0.0-20200804184101-5ec99f83aff1 google.golang.org/api | BSD-3-Clause | v0.13.0 google.golang.org/appengine | Apache-2.0 | v1.6.1 google.golang.org/genproto | Apache-2.0 | v0.0.0-20191108220845-16a3f7862a1a google.golang.org/grpc | Apache-2.0 | v1.21.1 gopkg.in/alecthomas/kingpin.v2 | MIT | v2.2.6 gopkg.in/errgo.v2 | BSD-3-Clause | v2.1.0 gopkg.in/ini.v1 | Apache-2.0 | v1.51.0 gopkg.in/resty.v1 | MIT | v1.12.0 gopkg.in/yaml.v2 | Apache-2.0 | v2.2.8 gopkg.in/yaml.v3 | MIT and Apache | v3.0.0-20200615113413-eeeca48fe776 honnef.co/go/tools | MIT | v0.0.1-2019.2.3 rsc.io/binaryregexp | BSD-3-Clause | v0.2.0 github.com/gogo/protobuf | Unknown | v1.2.1 
github.com/kr/logfmt | Unknown | v0.0.0-20140226030751-b84e30acd515
github.com/magiconair/properties | Unknown | v1.8.1
github.com/miekg/dns | Unknown | v1.0.14
github.com/pascaldekloe/goe | Unknown | v0.0.0-20180627143212-57f6aae5913c
github.com/russross/blackfriday/v2 | Unknown | v2.0.1
gopkg.in/check.v1 | Unknown | v1.0.0-20180628173108-788fd7840127

ERROR failed to identify the licenses of following packages (7):
github.com/gogo/protobuf
github.com/kr/logfmt
github.com/magiconair/properties
github.com/miekg/dns
github.com/pascaldekloe/goe
github.com/russross/blackfriday/v2
gopkg.in/check.v1
```

</details>

#### Check Dependencies' Licenses

This command performs an automatic license compatibility check; when incompatible licenses are found, it exits with status code 1 and fails.

```bash
license-eye -c test/testdata/.licenserc_for_test_check.yaml dep check
```

<details>
<summary>Dependency Check Result</summary>

```
INFO GITHUB_TOKEN is not set, license-eye won't comment on the pull request
INFO Loading configuration from file: .licenserc.yaml
WARNING Failed to resolve the license of <github.com/gogo/protobuf>: cannot identify license content
WARNING Failed to resolve the license of <github.com/kr/logfmt>: cannot find license file
WARNING Failed to resolve the license of <github.com/magiconair/properties>: cannot identify license content
WARNING Failed to resolve the license of <github.com/miekg/dns>: cannot identify license content
WARNING Failed to resolve the license of <github.com/pascaldekloe/goe>: cannot identify license content
WARNING Failed to resolve the license of <github.com/russross/blackfriday/v2>: cannot identify license content
WARNING Failed to resolve the license of <gopkg.in/check.v1>: cannot identify license content
ERROR the following licenses are incompatible with the main license: Apache-2.0
License: Unknown Dependency: github.com/gogo/protobuf
License: Unknown Dependency: github.com/kr/logfmt
License:
Unknown Dependency: github.com/magiconair/properties License: Unknown Dependency: github.com/miekg/dns License: Unknown Dependency: github.com/pascaldekloe/goe License: Unknown Dependency: github.com/russross/blackfriday/v2 License: Unknown Dependency: gopkg.in/check.v1 exit status 1 ``` </details> ## Configurations ```yaml header: # <1> license: spdx-id: Apache-2.0 # <2> copyright-owner: Apache Software Foundation # <3> content: | # <4> Licensed to Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. Apache Software Foundation (ASF) licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. pattern: | # <5> Licensed to the Apache Software Foundation under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The Apache Software Foundation licenses this file to you under the Apache License, Version 2.0 \(the "License"\); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
  paths: # <6>
    - '**'

  paths-ignore: # <7>
    - 'dist'
    - 'licenses'
    - '**/*.md'
    - '**/testdata/**'
    - '**/go.mod'
    - '**/go.sum'
    - 'LICENSE'
    - 'NOTICE'
    - '**/assets/languages.yaml'
    - '**/assets/assets.gen.go'

  comment: on-failure # <8>

dependency: # <9>
  files: # <10>
    - go.mod
```

1. The `header` section contains the configurations for the source code license header.
2. The [SPDX ID](https://spdx.org/licenses/) of the license. It's convenient when your license is a standard SPDX license: you can simply specify this identifier without copying the whole license `content` or `pattern`. This will be used as the content when the `fix` command needs to insert a license header.
3. The copyright owner to replace the `[owner]` in the `SPDX-ID` license template.
4. If you are not using a standard license text, you can paste your license text here. This will be used as the content when the `fix` command needs to insert a license header; if both `license` and `SPDX-ID` are specified, `license` wins.
5. The `pattern` is an optional regexp. You don't need this if all the file headers are the same as `license` or the license of `SPDX-ID`; otherwise you need to compose a pattern that matches your license texts.
6. The `paths` are the path list that will be checked (and fixed) by license-eye; the default is `['**']`. Patterns like `**/*.md` and `**/bin/**` are supported.
7. The `paths-ignore` are the path list that will be ignored by license-eye. By default, `.git` and the entries in `.gitignore` are appended to the `paths-ignore` list.
8. The condition on which License-Eye comments the check results on the pull request: `on-failure`, `always`, or `never`. Options other than `never` require the environment variable `GITHUB_TOKEN` to be set.
9. The `dependency` section contains the configurations for resolving dependencies' licenses.
10. `files` are the files that declare the dependencies of a project: typically, `go.mod` in a Go project, `pom.xml` in a Maven project, and `package.json` in a NodeJS project.
If it's a relative path, it's relative to the `.licenserc.yaml`.

**NOTE**: When the `SPDX-ID` is Apache-2.0 and the owner is Apache Software Foundation, the content would be [a dedicated license](https://www.apache.org/legal/src-headers.html#headers) specified by the ASF; otherwise, the license would be [the standard one](https://www.apache.org/foundation/license-faq.html#Apply-My-Software).

## Supported File Types

The `header check` command theoretically supports all file types, while the file types supported by the `header fix` command can be found [in this YAML file](assets/languages.yaml). In the YAML file, if a language has a non-empty `comment_style_id` property, and that comment style id is declared in [the comment styles file](assets/styles.yaml), then the language is supported by the `fix` command.

- [assets/languages.yaml](assets/languages.yaml)

  ```yaml
  Java:
    type: programming
    tm_scope: source.java
    ace_mode: java
    codemirror_mode: clike
    codemirror_mime_type: text/x-java
    color: "#b07219"
    extensions:
      - ".java"
    language_id: 181
    comment_style_id: SlashAsterisk
  ```

- [assets/styles.yaml](assets/styles.yaml)

  ```yaml
  - id: SlashAsterisk # (i)
    start: '/*'       # (ii)
    middle: ' *'      # (iii)
    end: ' */'        # (iv)
  ```

1. The `comment_style_id` used in [assets/languages.yaml](assets/languages.yaml).
2. The leading characters of the starting line of a block comment.
3. The leading characters of the middle lines of a block comment.
4. The leading characters of the ending line of a block comment.

## Technical Documentation

- There is an [activity diagram](./docs/header_fix_logic.svg) explaining the implemented license header fixing mechanism in depth. The diagram's source file can be found [here](./docs/header_fix_logic.plantuml).

## Contribution

- If you find any file type that should be supported by the aforementioned configurations but is not listed there, feel free to [open a pull request](https://github.com/apache/skywalking-eyes/pulls) to add the configuration into the two files.
- If you find the license template of an SPDX ID is not supported, feel free to [open a pull request](https://github.com/apache/skywalking-eyes/pulls) to add it into [the template folder](assets/header-templates).

## License

[Apache License 2.0](https://github.com/apache/skywalking-eyes/blob/master/LICENSE)

## Contact Us

* Submit [an issue](https://github.com/apache/skywalking/issues/new) with `[INFRA]` as the title prefix.
* Mailing list: **[email protected]**. Mail to [email protected] and follow the reply to subscribe to the mailing list.
* Join the `skywalking` channel at [Apache Slack](http://s.apache.org/slack-invite). If the link does not work, find the latest one at [Apache INFRA WIKI](https://cwiki.apache.org/confluence/display/INFRA/Slack+Guest+Invites).
* Twitter: [ASFSkyWalking](https://twitter.com/ASFSkyWalking)
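For projects outside the ASF, the same options compose into a complete config. The following `.licenserc.yaml` is a hypothetical sketch for an MIT-licensed Go project; the copyright owner and ignored paths are illustrative, not prescriptive:

```yaml
header:
  license:
    spdx-id: MIT                     # a standard SPDX identifier, so no custom content/pattern is needed
    copyright-owner: Example Authors # replaces [owner] in the SPDX license template
  paths-ignore:
    - 'LICENSE'
    - '**/*.md'
    - '**/testdata/**'
  comment: never                     # never comment on pull requests

dependency:
  files:
    - go.mod                         # resolve licenses of Go module dependencies
```

With such a file at the project root, `license-eye header check` and `license-eye dep resolve` can be run as shown earlier in this document.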
# Core config

| Name | Type | Required | Description |
|--------|------|----------|-------------|
| `name` | `string` | `true` | Service name |
| `maintenanceMode` | `bool` | `false` | Maintenance mode on/off |

## Name

The service name setting uniquely identifies the service.

## Maintenance mode

Toggles the service's maintenance mode. Read more in the [maintenance mode recipe](../recipes/maintenance.md).

## Configuration example

Set it up like this:

```js
const config = {
  name: 'my-service-name',
}
```
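For instance, a config that also toggles maintenance mode might look like this; this is a sketch, and the `maintenanceMode` value shown here is illustrative:

```js
const config = {
  name: 'my-service-name', // required: uniquely identifies the service
  maintenanceMode: true,   // optional: turns maintenance mode on (defaults to off)
}
```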
--- title: "Assertion Failed Dialog Box | Microsoft Docs" ms.date: "11/04/2016" ms.topic: "reference" f1_keywords: - "vs.debug.assertions" dev_langs: - "CSharp" - "VB" - "FSharp" - "C++" helpviewer_keywords: - "debugging assertions" - "assertions, debugging" - "assertions, assertion failures" - "Assertion Failed dialog box" ms.assetid: 64af5bed-e38b-420f-b9ce-d64f35100aae author: "mikejo5000" ms.author: "mikejo" manager: douge ms.workload: - "multiple" --- # Assertion Failed Dialog Box An assertion statement specifies a condition that you expect to hold true at some particular point in your program. If that condition does not hold true, the assertion fails, execution of your program is interrupted, and this dialog box appears. |Click|To| |-----------|--------| |Retry|Debug the assertion or get help on asserts.| |Ignore|Ignore the assertion and continue running the program.| |Abort|Halt execution of the program and end the debugging session.| ## See Also [C/C++ Assertions](../debugger/c-cpp-assertions.md)
--- title: 'Tutorial: Integración de un único bosque con un único inquilino de Azure AD' description: En este tema se describen los requisitos previos y los requisitos de hardware del aprovisionamiento en la nube. services: active-directory author: billmath manager: daveba ms.service: active-directory ms.workload: identity ms.topic: tutorial ms.date: 12/05/2019 ms.subservice: hybrid ms.author: billmath ms.collection: M365-identity-device-management ms.openlocfilehash: ad5147971fc42e65e4621c8a3f0a98e01f2e0339 ms.sourcegitcommit: 30906a33111621bc7b9b245a9a2ab2e33310f33f ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 11/22/2020 ms.locfileid: "95237310" --- # <a name="tutorial-integrate-a-single-forest-with-a-single-azure-ad-tenant"></a>Tutorial: Integración de un único bosque con un único inquilino de Azure AD Este tutorial le guía en la creación de un entorno de identidad híbrida mediante el aprovisionamiento en la nube de Azure Active Directory (Azure AD) Connect. ![Crear](media/tutorial-single-forest/diagram1.png) El entorno que se crea en este tutorial se puede usar para realizar pruebas o para familiarizarse con el aprovisionamiento en la nube. ## <a name="prerequisites"></a>Requisitos previos ### <a name="in-the-azure-active-directory-admin-center"></a>En el Centro de administración de Azure Active Directory 1. Cree una cuenta de administrador global solo en la nube en el inquilino de Azure AD. De esta manera, puede administrar la configuración del inquilino en caso de que los servicios locales fallen o no estén disponibles. Información acerca de la [incorporación de una cuenta de administrador global que está solo en la nube](../fundamentals/add-users-azure-active-directory.md). Realizar este paso es esencial para garantizar que no queda bloqueado fuera de su inquilino. 2. Agregue uno o varios [nombres de dominio personalizados](../fundamentals/add-custom-domain.md) al inquilino de Azure AD. 
Los usuarios pueden iniciar sesión con uno de estos nombres de dominio. ### <a name="in-your-on-premises-environment"></a>En el entorno local 1. Identifique un servidor host unido a un dominio en el que se ejecuta Windows Server 2012 R2 o superior con un mínimo de 4 GB de RAM y un entorno de ejecución .NET 4.7.1 o posterior. 2. Si hay un firewall entre los servidores y Azure AD, configure los elementos siguientes: - Asegúrese de que los agentes pueden realizar solicitudes *de salida* a Azure AD a través de los puertos siguientes: | Número de puerto | Cómo se usa | | --- | --- | | **80** | Descarga las listas de revocación de certificados (CRL) al validar el certificado TLS/SSL. | | **443** | Controla toda la comunicación saliente con el servicio | | **8082** | Necesario con fines de instalación.| | **8080** (opcional) | Si el puerto 443 no está disponible, los agentes notifican su estado cada 10 minutos en el puerto 8080. Este estado se muestra en el portal de Azure AD. | Si el firewall fuerza las reglas según los usuarios que las originan, abra estos puertos para el tráfico de servicios de Windows que se ejecutan como un servicio de red. - Si el firewall o proxy le permite especificar sufijos seguros, agregue conexiones a **\*.msappproxy.net** y **\*.servicebus.windows.net**. En caso contrario, permita el acceso a los [intervalos de direcciones IP del centro de datos de Azure](https://www.microsoft.com/download/details.aspx?id=41653), que se actualizan cada semana. - Los agentes necesitan acceder a **login.windows.net** y **login.microsoftonline.com** para el registro inicial. Abra el firewall también para esas direcciones URL. - Para la validación de certificados, desbloquee las siguientes direcciones URL: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80** y **www\.microsoft.com:80**. 
Como estas direcciones URL se utilizan para la validación de certificados con otros productos de Microsoft, es posible que estas direcciones URL ya estén desbloqueadas. ## <a name="install-the-azure-ad-connect-provisioning-agent"></a>Instalación del agente de aprovisionamiento de Azure AD Connect 1. Inicie sesión en el servidor unido al dominio. Si usa el tutorial [Entorno básico de AD y Azure](tutorial-basic-ad-azure.md), sería DC1. 2. Inicie sesión en Azure Portal con credenciales de administrador global solo en la nube. 3. A la izquierda, seleccione **Azure Active Directory**, haga clic en **Azure AD Connect** y, en el centro, seleccione **Administrar aprovisionamiento (versión preliminar)** . ![Azure portal](media/how-to-install/install-6.png) 4. Haga clic en **Descargar agente**. 5. Ejecute el agente de aprovisionamiento de Azure AD Connect. 6. En la pantalla de presentación, **acepte** los términos de la licencia y haga clic en **Install** (Instalar). ![Captura de pantalla que muestra la pantalla de presentación "Paquete de agente de aprovisionamiento de Microsoft Azure AD Connect".](media/how-to-install/install-1.png) 7. Una vez que finalice esta operación, se iniciará el asistente para configuración. Inicie sesión con su cuenta de administrador global de Azure AD. Tenga en cuenta que si la seguridad de IE mejorada está habilitada, bloqueará el inicio de sesión. En ese caso, cierre la instalación, deshabilite la seguridad mejorada de IE en Administrador del servidor y haga clic en el **AAD Connect Provisioning Agent Wizard** (Asistente para el agente de aprovisionamiento de AAD Connect) para reiniciar la instalación. 8. En la pantalla **Connect Active Directory** (Conectar Active Directory), haga clic en **Add directory** (Agregar directorio) e inicie sesión con su cuenta de administrador de dominio de Active Directory. NOTA: La cuenta de administrador de dominio no debe tener requisitos de cambio de contraseña. 
Si la contraseña expira o cambia, tendrá que volver a configurar el agente con las credenciales nuevas. Esta operación permitirá agregar su directorio local. Haga clic en **Next**. ![Captura de pantalla que muestra la pantalla "Conectar Active Directory".](media/how-to-install/install-3a.png) 9. En la pantalla **Configuración completa**, haga clic en **Confirmar**. Esta operación registrará el agente y lo reiniciará. ![Captura de pantalla que muestra la pantalla "Configuración completada".](media/how-to-install/install-4a.png) 10. Una vez que se completa esta operación debería aparecer un aviso: **La configuración del agente se ha comprobado correctamente.** Puede hacer clic en **Salir**.</br> ![Pantalla principal](media/how-to-install/install-5.png)</br> 11. Si la pantalla de presentación inicial no desaparece, haga clic en **Cerrar**. ## <a name="verify-agent-installation"></a>Comprobación de la instalación del agente La comprobación del agente se produce en Azure Portal y en el servidor local que ejecuta el agente. ### <a name="azure-portal-agent-verification"></a>Comprobación del agente en Azure Portal Para comprobar que Azure ve el agente, siga estos pasos: 1. Inicie sesión en Azure Portal. 2. A la izquierda, seleccione **Azure Active Directory**, haga clic en **Azure AD Connect** y, en el centro, seleccione **Administración del aprovisionamiento (versión preliminar)** .</br> ![Azure Portal](media/how-to-install/install-6.png)</br> 3. En la pantalla **Aprovisionamiento de Azure AD (versión preliminar)** , haga clic en **Revisar todos los agentes**. ![Aprovisionamiento de Azure AD](media/how-to-install/install-7.png)</br> 4. En la pantalla **On-premises provisioning agents** (Agentes de aprovisionamiento locales) verá los agentes que ha instalado. Compruebe que el agente en cuestión está ahí y que se ha marcado como **Activo**. 
![Agentes de aprovisionamiento](media/how-to-install/verify-1.png)</br> ### <a name="on-the-local-server"></a>En el servidor local Para comprobar que el agente se ejecuta, siga estos pasos: 1. Inicie sesión en el servidor con una cuenta de administrador. 2. Abra **Servicios**. Para ello, vaya ahí o a Inicio/Ejecutar/Services.msc. 3. En **Servicios** asegúrese de que tanto el **Actualizador del Agente de Microsoft Azure AD Connect** como el **Agente de aprovisionamiento de Microsoft Azure AD Connect** están presentes y que su estado es **En ejecución**. ![Servicios](media/how-to-install/troubleshoot-1.png) ## <a name="configure-azure-ad-connect-cloud-provisioning"></a>Configuración del aprovisionamiento en la nube de Azure AD Connect Use los pasos siguientes para configurar el aprovisionamiento: 1. Inicie sesión en Azure Portal. 2. Haga clic en **Azure Active Directory**. 3. Haga clic en **Azure AD Connect**. 4. Seleccione **Administración del aprovisionamiento (versión preliminar)** ![Captura de pantalla que muestra el vínculo "Administración del aprovisionamiento (versión preliminar)".](media/how-to-configure/manage1.png) 5. Haga clic en **Nueva configuración** ![Captura de pantalla de "Aprovisionamiento de Azure AD (versión preliminar)" con el vínculo "Nueva configuración" resaltado.](media/tutorial-single-forest/configure1.png) 7. En la pantalla de configuración, escriba un **correo electrónico de notificación**, mueva el selector a **Habilitar** y haga clic en **Guardar**. ![Captura de la pantalla de configuración con un correo electrónico de notificación rellenado y Habilitar seleccionado.](media/how-to-configure/configure2.png) 1. El estado de configuración ahora debería ser **Correcto**. 
![Screenshot of "Azure AD Provisioning (preview)" showing a healthy status.](media/how-to-configure/manage4.png)

## <a name="verify-users-are-created-and-synchronization-is-occurring"></a>Verify users are created and synchronization is occurring

We will now verify that the users you had in the on-premises directory have been synchronized and now exist in the Azure AD tenant. Note that this may take a few hours to complete. To verify that the users are synchronized, do the following:

1. Go to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
2. On the left, select **Azure Active Directory**.
3. Under **Manage**, select **Users**.
4. Verify that you see the new users in your tenant.</br>

## <a name="test-signing-in-with-one-of-our-users"></a>Test signing in with one of our users

1. Go to [https://myapps.microsoft.com](https://myapps.microsoft.com).
2. Sign in with a user account that was created in your new tenant. You will need to sign in using the following format: ([email protected]). Use the same password that the user uses to sign in on-premises.</br>
   ![Verify](media/tutorial-single-forest/verify-1.png)</br>

You have now successfully configured a hybrid identity environment that you can use to test and to familiarize yourself with what Azure has to offer.

## <a name="next-steps"></a>Next steps

- [What is provisioning?](what-is-provisioning.md)
- [What is Azure AD Connect cloud provisioning?](what-is-cloud-provisioning.md)
79.198582
511
0.771559
spa_Latn
0.973821
c9746a8d21d0882f0d971d4e9887776d6d828b1f
330
md
Markdown
README.md
GrimStride/nim-lzma
077ca016a981d4b8b1edfc70149aca6c4804b137
[ "MIT" ]
null
null
null
README.md
GrimStride/nim-lzma
077ca016a981d4b8b1edfc70149aca6c4804b137
[ "MIT" ]
null
null
null
README.md
GrimStride/nim-lzma
077ca016a981d4b8b1edfc70149aca6c4804b137
[ "MIT" ]
null
null
null
# Installation ``` nimble install https://github.com/GrimStride/nim-lzma ``` # Usage You will need the `liblzma` library installed on your system. If you're on Windows, download the file `liblzma.dll` from [here](https://tukaani.org/xz/). ```nim import lzma var s = "uncompressed data" doAssert s.compress.decompress == s ```
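As a cross-check of the round-trip property asserted in the Nim example above, the same idea can be sketched with Python's standard `lzma` module (this is not the Nim library itself, just an analogous illustration of the compress/decompress round trip):

```python
import lzma

def roundtrip(data: bytes) -> bytes:
    """Compress and then decompress, which should return the original bytes."""
    return lzma.decompress(lzma.compress(data))

# Mirrors the Nim example: doAssert s.compress.decompress == s
assert roundtrip(b"uncompressed data") == b"uncompressed data"
```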
20.625
153
0.718182
kor_Hang
0.361328
c974b0b97e85cb2ed843e2f11b326f5a575f1364
65
md
Markdown
README.md
runs-dev/apiresp
cee462a44bcdfdaf87289e271e085867ab0f60bc
[ "MIT" ]
null
null
null
README.md
runs-dev/apiresp
cee462a44bcdfdaf87289e271e085867ab0f60bc
[ "MIT" ]
null
null
null
README.md
runs-dev/apiresp
cee462a44bcdfdaf87289e271e085867ab0f60bc
[ "MIT" ]
null
null
null
# apiresp API Response Code and Message for RUNSystem's Services
21.666667
54
0.815385
eng_Latn
0.62468
c974b27c2c08bd66b6093fbd5ea3d03a75078b8e
239
md
Markdown
README.md
threenine/swcapi
d5426d11b5ff3171cf8b8d9fe6a8965d25241802
[ "Apache-2.0" ]
47
2017-07-14T07:24:08.000Z
2021-04-02T21:06:37.000Z
README.md
threenine/swcapi
d5426d11b5ff3171cf8b8d9fe6a8965d25241802
[ "Apache-2.0" ]
1
2018-05-31T14:20:07.000Z
2018-06-18T13:51:23.000Z
README.md
threenine/swcapi
d5426d11b5ff3171cf8b8d9fe6a8965d25241802
[ "Apache-2.0" ]
22
2017-10-09T19:32:59.000Z
2021-02-02T01:22:51.000Z
# Stop Web Crawlers Web API

A web API that serves referrer spam list updates to the Stop Web Crawlers WordPress plugin.

[![threenine logo](https://threenine.co.uk/wp-content/uploads/2016/12/threenine_footer.png)](https://threenine.co.uk/)
21.727273
118
0.757322
kor_Hang
0.4627
c974b6e7a93957a339034d6402404edb75e01722
10,337
md
Markdown
_posts/2018-04-24-Paper Writing.md
drwxyh/drwxyh.github.io
6c0aca754090b1358f6376693de96f726dea399c
[ "MIT" ]
4
2018-04-24T15:28:18.000Z
2020-03-26T12:42:47.000Z
_posts/2018-04-24-Paper Writing.md
drwxyh/drwxyh.github.io
6c0aca754090b1358f6376693de96f726dea399c
[ "MIT" ]
16
2018-04-23T15:27:35.000Z
2019-10-14T02:27:31.000Z
_posts/2018-04-24-Paper Writing.md
drwxyh/drwxyh.github.io
6c0aca754090b1358f6376693de96f726dea399c
[ "MIT" ]
2
2018-08-21T07:29:07.000Z
2021-04-25T12:15:12.000Z
---
layout: post                          # layout used (no need to change)
title: "Thoughts: How to Write a Paper?"   # title
subtitle: Compiled from answers on Zhihu   # subtitle
date: 2018-04-24                      # date
author: Charles Xu                    # author
header-img: img/post-bg-paper-1.jpg   # header background image for this post
catalog: true                         # whether to show a catalog
tags:                                 # tags
    - Papers
    - Writing skills
---

# How to Write Technical Articles

Papers can be divided into two categories: original research papers and survey papers.

#### Research papers

- What defines a good paper:
    - a clear statement of *the problem the paper solves, the proposed solution, and the results obtained*;
    - a description of *existing work on the problem and the paper's novel contributions*;
- The purpose of a paper is to describe the results of a novel technique, of which there are four main types:
    - a new algorithm
    - a system architecture, such as a hardware design, a software system, or a protocol
    - a performance evaluation, obtained through analysis, simulation, or measurement
    - a new theory, consisting of a set of theorems
- What a paper should focus on:
    - providing enough detail about the results to validate them
    - identifying the paper's novelty: what new knowledge it contributes, and what was previously done inadequately
    - identifying the significance of the results: what improvements are proposed and what effect they have

#### The traditional structure of a paper

It is advisable to write the approach and result sections first, then the problem section, then the conclusions, and then the introduction. The introduction is written late because it needs to include material from the conclusions. Finally, write the abstract, giving the whole paper a brief, clear summary.

- *Abstract*: generally *100–150* words; not more.
- *Introduction*: introduce the problem and outline the solution. The problem statement should include an account of why the problem matters.
- *Outline The Rest of the Paper*: vary your sentence patterns, for example:
    - The remainder of this paper is organized as follows.
    - In Section 2, we introduce……
    - Section 3 discusses……
    - Section 4 describes……
    - Finally, we describe future work in Section 5.
- *Body of Paper*: include at least one example scenario (preferably two), illustrative figures, a clear statement of the problem model, a focus on the "new" functionality, and a general evaluation of the algorithm or architecture, for example a proof that your algorithm is 'O(logN)'.
    - problem model
    - approach or architecture: the architecture proposed on the basis of the problem model should be more general than the system you actually implemented, and usually needs to include at least one figure;
    - results: Implementation: include real implementation details, but do not narrate them end to end; briefly mention the implementation language, platform, location, dependency packages, and resource usage. Evaluation: how does it behave in actual use? Provide real or simulated performance metrics and end-user studies.
- *Related Work*: for a conference paper, try to cite the work of the program chairs and committee members; for a journal or magazine, try to cite relevant work from the last 2–3 years.
- *Summary and Future Work*: restate the main research results.
- *Acknowledgements*: those who funded the research.
- *Bibliography*
- *Appendix*:
    - detailed protocol descriptions
    - detailed proofs
    - other low-level but important details

#### Title

- Avoid abbreviations, though common, easily understood abbreviations are acceptable;
- Avoid overused phrases such as "novel", "performance evaluation", or "architecture": these words are so common that a search returns thousands of results, and nobody uses them as search keywords; instead use adjectives that characterize your work, such as reliable, scalable, high-performance, robust, low-complexity, or low-cost.
- If you need inspiration for a title, consult sites such as the [Academic Essay Title Generator](https://www.besttitlegenerator.com/index.php) or [Research title generator!](https://cjquines.com/titlegen#).

#### Authors

Under the IEEE standard, individuals listed as authors must satisfy the following conditions:

- they made a significant intellectual contribution to the theoretical development, system or experimental design, prototype development, or the analysis and interpretation of the data in the paper;
- they contributed to drafting, reviewing, or revising the content of the article;
- they took part in approving the final version of the article, including the references;

#### Abstract

- The abstract should contain no references; abstracts are often used independently of the paper body;
- Avoid the phrase *In this paper*, since in the abstract you will not be discussing any other paper;
- Do not argue for common knowledge, such as the importance of the Internet or what QoS is;
- Highlight not only the problem but also the results, because many people decide whether to read the full paper based on the abstract;
- Since abstracts are indexed by search engines, include the terms that identify your work, in particular the names of any protocols or systems you developed;
- Do not include mathematical formulas;

#### Introduction

- Avoid clichés such as "recent advance of X" or any remark implying the growth of the Internet;
- Let readers know what your paper is about; do not merely say how important your research area is, because readers will not persevere for three pages to work out what you are doing;
- The introduction must establish the need for your work by stating the problem you are solving, summarize your approach or contributions, and give a general description of the paper's results;
- Do not repeat the abstract;
- A negative example: <br><br>
  Here at the institute for computer research, me and my colleagues have created the SUPERGP system and have applied it to several toy problems. We had previously fumbled with earlier versions of SUPERGPSYSTEM for a while. This system allows the programmer to easily try lots of parameters, and problems, but incorporates a special constraint system for parameter settings and LISP S-expression parenthesis counting. The search space of GP is large and many things we are thinking about putting into the supergpsystem will make this space much more colorful.<br><br>
- A positive example:<br><br>
  Many new domains for genetic programming require evolved programs to be executed for longer amounts of time. For example, it is beneficial to give evolved programs direct access to low-level data arrays, as in some approaches to signal processing \cite{teller5}, and protein segment classification \cite{handley,koza6}. This type of system automatically performs more problem-specific engineering than a system that accesses highly preprocessed data. However, evolved programs may require more time to execute, since they are solving a harder task.<br><br>
  <b>Previous or obvious approach:</b><br/>
  (Note that you can also have a related work section that gives more details about previous work.) One way to control the execution time of evolved programs is to impose an absolute time limit. However, this is too constraining if some test cases require more processing time than others. To use computation time efficiently, evolved programs must take extra time when it is necessary to perform well, but also spend less time whenever possible.<br><br>
  <b>Approach/solution/contribution:</b><br>
  The first sentence of a paragraph like this should say what the contribution is. Also gloss the results. In this chapter, we introduce a method that gives evolved programs the incentive to strategically allocate computation time among fitness cases. Specifically, with an aggregate computation time ceiling imposed over a series of fitness cases, evolved programs dynamically choose when to stop processing each fitness case. We present experiments that show that programs evolved using this form of fitness take less time per test case on average, with minimal damage to domain performance. We also discuss the implications of such a time constraint, as well as its differences from other approaches to {\it multiobjective problems}. The dynamic use of resources other than computation time, e.g., memory or fuel, may also result from placing an aggregate limit over a series of fitness cases.<br><br>
  <b>Overview:</b><br>
  The following section surveys related work in both optimizing the execution time of evolved programs and evolution over Turing-complete representations. Next we introduce the game Tetris as a test problem. This is followed by a description of the aggregate computation time ceiling, and its application to Tetris in particular. We then present experimental results, discuss other current efforts with Tetris, and end with conclusions and future work.
- The fairy-tale method of writing an introduction:
    1. A dragon has carried off the princess (why is this problem worth studying?)
    2. The dragon is terribly hard to defeat (emphasize the importance of your research)
    3. The prince arrives bearing a gleaming golden sword, not a battered axe or a broken spear (what is good about your method, and where others fall short)
    4. How the prince defeats the dragon (a brief account of your method)
    5. The prince and the princess live happily ever after (the problem is solved)

#### Body of Paper

Before putting pen to paper, be certain about what you want to write. What deserves attention is not the individual words: if you find yourself agonizing over wording, be prepared to discard what you have written, even if it is an entire article. Some principles to observe:

- Avoid the passive voice; for example, "In each reservation request message, a refresh interval used by the sender is included." is better written as "Each ... message includes ...".
- Replace piles of nouns with strong verbs, and prefer simple terms to fancy ones, for example:<br>

    Advocate | Avoid
    ------------- | -------------
    assume | make assumption
    depends on | is a function of
    illustrates, shows | is an illustration
    requires, need to | is a requirement
    uses | utilizes
    differed | had difference

- Do not use "In other words"; if you need it, what you said before was not clear enough, so go back and rewrite it;
- Do not misuse nouns, especially words that do not exist in your native language; do not stack nouns in front of established terms and names.
- Sentences should be logically related; pay attention to logical connectives:
    - to describe an exception: but, however
    - to describe a causality: thus, therefore, because of this
    - to indicate two facets of an argument: on the one hand, on the other hand
    - to enumerate sub-cases: firstly, secondly
    - to indicate a temporal relationship: then, afterwards
- Protocol name abbreviations are usually not treated as nouns, but organization name abbreviations are treated as definite nouns;
- Use a consistent present tense, except for results obtained in earlier papers;
- Hyphen usage:
    - with prefixes, such as pre-, co-, de-, pro-
    - all-around
    - self-conscious
- Do not overuse dashes as separators;
- Do not scatter quotation marks around, as they lead readers to think a word is meant ironically;
- Spell out numbers below ten instead of using digits;
- Use $$Eq. 7$$, not $$Eq (7)$$;
- Avoid in-line enumerations such as "Packets can be (a) lost, (b) stolen, (c) get wet." unless you will refer to the items later;
- In most cases, avoid itemized (bulleted) lists: they take up extra space and look more like slides;
- Use "Smith [1] showed" or "Smith and Jones [1] showed" rather than "Reference [1] shows" or "[1] shows"; with more than two authors, use "Smith et al. [1] showed"; correct citations read "Smith and Wesson claim that ... [42]" or "In [42], Smith and Wesson claim that ...";
- Use normal sentence case in captions: "This is a caption," not "This is a Caption";
- Capitalize headings the same way at every level, keeping the style consistent;
- Put a space before a parenthesis or bracket, as in "The experiment (Fig. 7) shows";
- Avoid overusing parentheses, which hinder reading; prefer footnotes where possible, though in general no more than two footnotes per page;
- Declare an abbreviation before using it;
- Do not begin a sentence with "and" or "That's because ……";
- Do not use a colon in the middle of a sentence; the text before a colon should be a complete sentence. A bad example: "This is possible because: somebody said so";
- In formal writing, generally avoid contractions such as "don't, doesn't, won't";
- When comparing, "Flying is faster than driving" reads better than "Flying has the advantage of being faster";
- Do not use "/"; use "or" or "and" instead; "+", "%", and "->" should be written out as words;
- Technical terms should be lowercase, except that initial letters may be capitalized when expanding an abbreviation;
- Every paragraph should have a topic sentence summarizing the whole paragraph, unless the paragraph is very short. The paper should still convey its complete meaning when only the first sentence of each paragraph is read. For example:<br><br>
  There are two service models, integrated and differentiated service. Integrated service follows the German approach that anything that isn't explicitly allowed is verboten. It strictly regulates traffic, but also makes the trains run on time. Differentiated service follows the Animal Farm approach, where some traffic is more equal than others. It seems simpler, until one has to worry about proletariat traffic dressing up as the aristocracy.
- Use $$i$$th, not $$i-th$$;
- For readability, separate thousands with a comma, as in 1,000;
- Set units in roman type, not in italics or math mode;
- Use "kb/s" or "Mb/s", not "kbps" or "Mbps", since the latter are not scientific units; note the difference between "kb" (1000 bits) and "KB" (1024 bytes), and note "kHz" and "Wi-Fi";
- Operating system names, such as Android, OS X, and Linux, should be capitalized;
- Do not omit the "0" before a decimal point;
- "for example", "such as", and "among others" are preferable to "etc."; outside of citations, try to list items completely; do not follow "for example" or "like" with "etc.";
- "i.e." and "e.g." are always followed by a comma;
- Write "In Figure 1" rather than "the following Figure", since a figure's position may change during typesetting;
- Left-align text in tables; align numbers on the decimal point or right-align them;
- Capitalize Section, Figure, and Table;
- Use line charts to show relationships between variables; lines must not be distinguished by color alone, since papers are usually printed in black and white; use bar charts or scatter plots to show different experiments;
- Keep the citation format consistent: conference paper citations should include the conference location, date, and the conference's full name (not its abbreviation); journal citations need the volume, issue, and page numbers; the year should appear only once in an entry, not repeatedly;

#### Bibliography

- Avoid "et al." in author lists unless there are more than five authors;
- Internet drafts must be marked "work in progress";
- Book citations should include the year of publication, but the ISBN (International Standard Book Number) is not needed;
- URLs should not appear in citations;
- Leave spaces between the initials of foreign names, as in "J. P. Doe";

#### Acknowledgements

- Thank those who funded the work; some funders require a specific format, for example: "This work was supported by the National Science Foundation under grant EIA NN-NNNNN."
- Anonymous reviewers are usually not thanked, unless they provided exceptionally helpful reviews; use the pattern "X helped with Y."

#### Reporting Numerical Results and Simulations

- Disclose enough experimental detail that readers can repeat the experiments; provide all parameters used and the number of samples in the analysis;
- Simulation results should state a confidence level and, where possible, confidence intervals; when outliers occur, explain why they appear;
- Plot selectively, choosing which parameters to plot over which intervals; a plot consisting of nothing but straight lines suggests the parameters or ranges were poorly chosen;
- Descriptions of figures should not stay at the surface: describe the relationship between the variables concretely. Is it linear, or something else?

#### Using LaTeX

- There is no need to wrap plain numbers in "$$";
- Use "\cite{1,2,3}" rather than "\cite{1} \cite{2} \cite{3}";

#### The conference paper review process

1. Conference papers are submitted to the committee in PDF format;
2. The technical program chairs send each paper to one or more technical program committee members for review; the reviewers' identities are kept confidential;
3. Each reviewer returns a review report; usually there are three per paper;
4. The committee collates the reports, ranks the papers by score, and meets to decide; typically the top third are accepted outright and the bottom third rejected outright, while the middle third require further discussion;
5. A site that tracks conference rankings: [Conference Ranks](http://www.conferenceranks.com/);

#### The paper-writing workflow

1. Collect material and look up related papers;
    - a literature review report, having read at least 40 primary references;
    - a topic proposal covering:
        - why this problem is studied (background, goals, significance)
        - the state of research at home and abroad
        - the research content and the key problems to be solved
        - how these problems will be studied and solved
        - the expected results and novel contributions, and the research schedule
2. Study the relevant theory, as well as the analysis and computation software;
3. Settle on a preliminary overall structure for the paper;
4. Complete the main body of the thesis work;
5. Write the paper section by section;
6. Mind the formatting of figures, tables, equations, and references;

#### References

- [What experience and insights did you gain from your master's thesis?](https://www.zhihu.com/question/20141321/answer/14112630)
- [Writing Technical Articles](http://www.cs.columbia.edu/~hgs/etc/writing-style.html)
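When figures or tables are generated programmatically, the thousands-separator rule above is easy to automate. As a small illustration (not from the original post), Python's format mini-language inserts exactly the commas recommended for numbers like 1,000:

```python
def fmt_thousands(n: int) -> str:
    """Render an integer with comma thousands separators, e.g. 1000 -> '1,000'."""
    return f"{n:,}"

print(fmt_thousands(1000))        # 1,000
print(fmt_thousands(2147483647))  # 2,147,483,647
```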
50.179612
797
0.75283
eng_Latn
0.810407
c9758af66d1c58fddec60bd75f21ad1446a3d6e9
423
md
Markdown
README.md
jknlsn/miscellaneous
cf26397383e02f2ef9a298cc2f61116b9208ce8f
[ "MIT" ]
null
null
null
README.md
jknlsn/miscellaneous
cf26397383e02f2ef9a298cc2f61116b9208ce8f
[ "MIT" ]
null
null
null
README.md
jknlsn/miscellaneous
cf26397383e02f2ef9a298cc2f61116b9208ce8f
[ "MIT" ]
null
null
null
# miscellaneous

Various code snippets and configuration files.

## conky.conf

Configuration for the process monitor Conky for Linux. Working on Debian with KDE on a MacBook Pro 11,1.

You can keep the file in sync with this repo, and push changes back to the repo, by using a symbolic link. Change the first parameter to the path where you have the file downloaded.

```ln -s ~/Github/miscellaneous/conky.conf ~/.config/conky/conky.conf```
42.3
102
0.777778
eng_Latn
0.992452
c975978e9def13600506cc12f234947d67cbfa0a
6,632
md
Markdown
content/refguide/report-grid.md
dlhartveld/docs
5ad7b69729c852790a4c1170ff15881288e00fa4
[ "CC-BY-4.0" ]
102
2016-09-26T11:28:17.000Z
2022-03-11T14:11:54.000Z
content/refguide/report-grid.md
dlhartveld/docs
5ad7b69729c852790a4c1170ff15881288e00fa4
[ "CC-BY-4.0" ]
1,636
2016-10-06T09:02:52.000Z
2022-03-31T13:38:17.000Z
content/refguide/report-grid.md
dlhartveld/docs
5ad7b69729c852790a4c1170ff15881288e00fa4
[ "CC-BY-4.0" ]
829
2016-09-29T06:58:11.000Z
2022-03-31T08:41:52.000Z
---
title: "Report Grid"
parent: "report-widgets"
menu_order: 10
tags: ["studio pro"]
#If moving or renaming this doc file, implement a temporary redirect and let the respective team know they should update the URL in the product. See Mapping to Products for more details.
---

{{% alert type="info" %}}
This widget has been deprecated in version 9.0 and will be marked for removal in a future version.
{{% /alert %}}

{{% alert type="warning" %}}
The report grid widget is not supported on native mobile pages.
{{% /alert %}}

## 1 Introduction

A **Report grid** shows data retrieved from the database using a [Data set](data-sets) in a grid format. Each row in the grid displays a single result from the data set. Each time a report is created, the data is retrieved from the database.

The difference between a data grid and a report grid is that you can use a data grid to edit the data shown. A report grid will only display data. However, in a report grid, you can create additional information by merging and processing attributes when you define the data set which retrieves the data.

The report grid is displayed in structure mode with the data set source shown between square brackets and colored blue. The data fields returned by the data set are shown in the report grid columns, under the column captions. See [Report Grid Column Data Source](#column-data-source) for information on how to assign a data field to a column.
![Report grid in structure mode](attachments/report-widgets/report-grid.png) ## 2 Report Grid Properties An example of report grid properties is represented in the image below: {{% image_container width="300" %}}![Report grid in structure mode](attachments/report-widgets/report-grid-properties.png) {{% /image_container %}} Report grid properties consist of the following sections: * [Common](#common) * [Data source](#data-source) * [Design Properties](#design-properties) * [General](#general) Each column in a report grid also has properties: see [Report Grid Column Properties](#column-properties), below. ### 2.1 Common Section{#common} {{% snippet file="refguide/common-section-link.md" %}} ### 2.2 Data Source Section{#data-source} #### 2.2.1 Data Set **Data set** specifies the [Data set](data-sets) which defines the data that will be shown in the report grid. Any of the selected attributes or aggregations (for example totals or averages) from the data set can be dragged into a data grid column from the **Connector** pane: see [Report Grid Column Data Source](#column-data-source), below. ### 2.3 Design Properties Section{#design-properties} {{% snippet file="refguide/design-section-link.md" %}} ### 2.4 General Section{#general} #### 2.4.1 Use Paging Set **Use paging** to **Yes** if you expect more data than you can display on one page. This splits the results into several pages. #### 2.4.2 Page Size **Page size** specifies the number of results which are displayed on one page if **Use paging** is yes. #### 2.4.3 Zoom {#zoom} **Zoom** specifies a page which will be displayed when the end-user double-clicks a result in the report. If the selected page contains a report, the columns of the current report can be mapped to the parameters of the data set which is the basis of the report in the other page. 
![Zoom configuration showing columns being passed as parameters](attachments/report-widgets/report-zoom.png)

#### 2.4.4 Column Widths

The widths of the columns are expressed as a percentage of the total width of the basic report. You can edit this property in two ways:

* by dragging the border between the columns:

![Drag column widths](attachments/report-widgets/drag-column-width.png)

* by entering the percentages explicitly

![Enter column widths](attachments/report-widgets/enter-column-widths.png)

Each column in a report grid also has properties: see [Report Grid Column Properties](#column-properties), below. The data source for each column can be dragged into the column from the **Connector** pane: see [Report Grid Column Data Source](#column-data-source), below.

#### 2.4.5 Show Export Button

Set **Show export button** to **Yes** to display the **Export to Excel** button to the end-user on the report grid.

![Add the Export to Excel button](attachments/report-widgets/export-to-excel.png)

When the end-user clicks this button, the report is exported as a `Microsoft Excel 97-2003 Worksheet` which the end-user can download or view, depending on their browser's settings.

#### 2.4.6 Generate on Page Load

If **Generate on page load** is set to **No**, the report grid will not show any data until the end-user presses the [Generate report button](report-button). This is especially useful if the report uses parameters that should first be specified by the end-user.

## 3 Report Grid Column Properties{#column-properties}

An example of report grid column properties is represented in the image below:

{{% image_container width="250" %}}![Report grid column properties](attachments/report-widgets/report-grid-column-properties.png)
{{% /image_container %}}

Report grid column properties consist of a single section, [General](#column-general).

### 3.1 General Section{#column-general}

#### 3.1.1 Caption

**Caption** contains the text which appears at the top of each column of the report grid.
#### 3.1.2 Alignment **Alignment** sets the alignment of the caption and data displayed in this column. The values are: * Left * Center * Right #### 3.1.3 Format **Format** allows you to convert a numeric value in the column to a month or day name. If the value in the column is not a number, **Default** format will be used. The possible values are: * Default – the data will be displayed in the default string format * Month name – a numeric value will be interpreted as a month name (for example **8** is displayed as **August**) * Weekday name – a numeric value will be interpreted as a day of the week (for example **4** is displayed as **Wednesday**) #### 3.1.4 Visible If **Visible** is set to **No** then this column will not be displayed in the report. This can be used to add a value to the report which can be passed to a report on a page specified with [Zoom](#zoom) without displaying it as part of the report. ## 4 Report Grid Column Data Source{#column-data-source} To add data to a column, select the column, open the **Connector** pane, and drag one of the results into the column. You will need to select the report grid, or part of it, to see the results of the data set in the connector pane. ![Drag value from Connector pane into a column](attachments/report-widgets/drag-column-value.png)
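The **Format** setting's numeric-to-name conversion described above can be mimicked outside the widget, for example when preparing test data. This small Python sketch (not part of Mendix) uses the standard `calendar` module; note that weekday numbering conventions differ between systems, and the mapping below is Python's, where Monday is 0, which is not necessarily the numbering the report grid applies:

```python
import calendar

def month_name(value: int) -> str:
    """Map 1-12 to a month name, as in the 'Month name' format (8 -> 'August')."""
    return calendar.month_name[value]

def weekday_name(value: int) -> str:
    """Map 0-6 to a day name using Python's convention (Monday is 0)."""
    return calendar.day_name[value]

print(month_name(8))    # August
print(weekday_name(2))  # Wednesday
```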
45.737931
342
0.748492
eng_Latn
0.997605
c9767b83e8f0a6b4e2afeeb88c36b2949f218c2d
2,812
md
Markdown
README.md
fluffynukeit/adaspark
c0fc58b0f8472cf928c005740d7a0ec846a85932
[ "MIT" ]
12
2021-02-01T20:56:27.000Z
2022-02-19T09:11:54.000Z
README.md
repos-nixos/adaspark
c0fc58b0f8472cf928c005740d7a0ec846a85932
[ "MIT" ]
5
2021-02-15T17:30:56.000Z
2021-05-17T20:22:51.000Z
README.md
Xmgplays/adaspark
c0fc58b0f8472cf928c005740d7a0ec846a85932
[ "MIT" ]
5
2021-01-31T12:13:54.000Z
2021-09-22T06:30:59.000Z
# adaspark nix flake This repo is a nix flake for providing an Ada and SPARK development environment. There are several (primary) components to this flake: 1. `gnat`, GNAT FSF for compiling Ada code. Because it is the FSF version, there are no runtime library license encumbrances. Includes commands `gnatmake`, `gnatbind`, `gnatlink`, `gcc`, `g++`, etc. 2. `gpr`, GNAT Project Manager, a common multi-project build tool for Ada or mixed Ada/C/C++ projects. Includes commands `gprbuild`, `gprls`, etc. 3. `spark`, the SPARK code verification tools. Includes `gnatprove` and `gnat2why`. 4. `asis`, the ASIS tools. Includes `gnattest`, `gnatcheck`, `gnat2xml`, `gnat2xsd`, `gnatelim`, `gnatmetric`, `gnatpp`, `gnatstub`. 5. `alr`, the Alire package manager. 6. `adaspark` target that includes 1-5. This is the default flake output and gives you everything you need to build an Ada project without using the nix build system to package it. 7. `adaenv`, a nix build environment like nix's stdenv, but modified so that derivations that are installed with `gprbuild` as `buildInputs` can be located in the nix store. It does this by setting `GPR_PROJECT_PATH` to certain nix store locations. `gpr` is automatically included in the environment and does not need to be specified in the `buildInputs`. Other components are provided but typically aren't needed to be referenced directly in practice. Consult the flake source for more. ## Enable flakes in nix At the time of this writing, flakes are an experimental feature of nix. Please consult the [flakes wiki entry](https://nixos.wiki/wiki/Flakes) for information on how to enable them. ## Quickstart shell *You do not need to clone this repo to use it.* On the command line, run `nix develop github:fluffynukeit/adaspark` to build and/or activate a command line environment that includes gnat tools, asis tools, gpr, and SPARK. You can then use these tools with your own build or development scripts that are executed from the shell environment. 
Any other programs already installed on your system will still be accessible. If you don't want all the components of the built-in `adaspark` environment (for instance, you don't care about SPARK and don't want to install it), you can specify individual flake components of your shell environment: `nix develop github:fluffynukeit/adaspark#{gnat,gpr}` If you want a "pure" shell with nothing from your own system, add the `-i` flag to the nix command, which will include only those packages (and their dependencies) you specify. No installed programs from your own system will be accessible, not even `ls`! This is useful for verifying that the dependencies of your projects are all identified. ## Quickstart flake Example of how to use `adaenv` in a nix flake for a new project (forthcoming).
49.333333
98
0.768492
eng_Latn
0.998851
c976c60f8da668445b23d355bef4586a6569d678
339
md
Markdown
README.md
Joe0/Pub-Sub-RSC
24703671fe91586293c263a734251acb360e168f
[ "MIT" ]
1
2015-05-20T01:56:28.000Z
2015-05-20T01:56:28.000Z
README.md
Joe0/Pub-Sub-RSC
24703671fe91586293c263a734251acb360e168f
[ "MIT" ]
null
null
null
README.md
Joe0/Pub-Sub-RSC
24703671fe91586293c263a734251acb360e168f
[ "MIT" ]
null
null
null
# Pub-Sub-RSC

RSC server implementation that uses pub-sub. The reasoning for using pub-sub is to decouple the core from the game logic, so that a scripting system could easily be implemented.

<h2 name="license">License</h2>

This project is distributed under the terms of the MIT License. See file "LICENSE" for further information.
48.428571
182
0.787611
eng_Latn
0.998589
c976edab85e504dfbb684ff43857a3260438a267
6,863
md
Markdown
docs/5_EventFeedback/6_Pattern_SetFeedbackElement.md
orstavik/JoiEvents
1ccc1412b5bf34b5e72b82fd2c18c789e4eecab9
[ "MIT" ]
3
2019-02-06T11:13:06.000Z
2021-01-31T02:42:56.000Z
docs/5_EventFeedback/6_Pattern_SetFeedbackElement.md
orstavik/JoiEvents
1ccc1412b5bf34b5e72b82fd2c18c789e4eecab9
[ "MIT" ]
null
null
null
docs/5_EventFeedback/6_Pattern_SetFeedbackElement.md
orstavik/JoiEvents
1ccc1412b5bf34b5e72b82fd2c18c789e4eecab9
[ "MIT" ]
2
2019-02-06T15:47:09.000Z
2019-04-18T09:12:07.000Z
# Pattern: SetFeedbackElement The SetFeedbackElement pattern adds/replaces a feedback image for an EventSequence or composed event. The pattern: 1. adds a `.setImage(feedbackElement, timeToLive)` on all the composed event objects dispatched from the EventSequence function. The arguments are: 1. `feedbackElement`: an `HTMLElement` node that will be displayed. Often, this is a custom element/web component. 2. `ttl` (time to live): The `Number` of milliseconds that the visual `feedbackElement` will be kept alive in the DOM after the current EventSequence cycle has completed. * If the `feedbackElement` argument `isConnected` to the DOM when it is to be first added, the EventSequence will deep clone the `feedbackElement` object. 2. The `.setImage(..)` function overrides the `feedbackElement` for one display cycle. If the EventSequence needs to override the default display of the `feedbackElement`, then a static `setDefaultImage(..)` function needs to be set up on the ExtendsEvent class. 3. The EventSequence will attach the `feedbackElement` to a BlindManDom layer element and automatically control relevant CSS variables on the BlindManDom layer such as position, size and rotation. 4. The EventSequence will attach CSS pseudo-pseudo-classes to the feedback element, as it would its default FeedbackElement. ## Demo: Show how `long-press` persists In this demo, we set up a `long-press` EventSequence and pass it two custom feedback elements using the `.setImage(...)` method on the `long-press-start` event. The first `setImage(...)` is passed a plain old `<div>` element. The `<div>` is dumb as in restricted from doing any animations or adaptations to CSS pseudo-pseudo-classes. The second `setImage(...)` is passed a modern, elegant web component called `<beating-heart>`. The web component is smart as it can include styles that both animate and respond to changing pseudo-pseudo-classes. 
To reduce the example size, we do not implement a default `long-press` FeedbackElement such as a bulls-eye. The feedback element is a circle border that grows from where the initial `mousedown` is registered, which then gets doubled as the time requirement for the press is met. ```html <script> (function () { class BlindManDomLayer extends HTMLElement { constructor() { super(); this.attachShadow({mode: "open"}); this.shadowRoot.innerHTML = ` <style> @keyframes starting-press { 0% { transform: scale(0); opacity: 0.2; } 100% { transform: scale(1); opacity: 1.0; } } :host { margin: 0; padding: 0; position: fixed; z-index: 2147483647; pointer-events: none; /*overflow: visible;*/ animation: starting-press 600ms forwards; } </style> <slot></slot>`; } } customElements.define("blind-man-dom-layer", BlindManDomLayer); var blindMan = document.createElement("blind-man-dom-layer"); var feedbackElement; var timeToLive; var primaryEvent; class LongPressEvent extends Event { constructor(type, props = {bubbles: true, composed: true}) { super(type, props); } setImage(el, ttl) { feedbackElement = el; timeToLive = ttl; } } function dispatchPriorEvent(target, composedEvent, trigger) { composedEvent.preventDefault = function () { trigger.preventDefault(); trigger.stopImmediatePropagation ? 
trigger.stopImmediatePropagation() : trigger.stopPropagation(); }; composedEvent.trigger = trigger; return target.dispatchEvent(composedEvent); } function addVisualFeedback(x, y) { //using left and top instead of transform: translate(x, y) so as to simplify scale animation if (blindMan.isConnected){ blindMan = blindMan.cloneNode(); feedbackElement = feedbackElement.cloneNode(); } blindMan.style.left = x + "px"; blindMan.style.top = y + "px"; blindMan.appendChild(feedbackElement); document.body.appendChild(blindMan); } function removeVisualFeedback(endState, ttl) { blindMan.classList.add(endState); feedbackElement.classList.add(endState); var a = blindMan; var b = feedbackElement; setTimeout(function () { b.classList.remove(endState); b.remove(); a.classList.remove(endState); a.remove(); }, ttl); } function onMousedown(e) { if (e.button !== 0) return; primaryEvent = e; window.addEventListener("mouseup", onMouseup, true); const longPress = new LongPressEvent("long-press-start"); dispatchPriorEvent(e.target, longPress, e); //the event must be dispatched *before* the addVisualFeedback is run. addVisualFeedback(e.clientX, e.clientY); } function onMouseup(e) { var duration = e.timeStamp - primaryEvent.timeStamp; const endState = duration > 600 ? "long-press" : "long-press-cancel"; const longPress = new LongPressEvent(endState); dispatchPriorEvent(e.target, longPress, e); const delay = timeToLive !== undefined ? 
timeToLive : 250; removeVisualFeedback("long-press", delay); primaryEvent = undefined; window.removeEventListener("mouseup", onMouseup, true); } window.addEventListener("mousedown", onMousedown, true); })(); </script> <h1>Election 2020: Long-press to select!</h1> <h3 id="theDonald">Trump</h3> <h3 id="Pocahuntas">Warren</h3> <script> const blackHole = document.createElement("div"); blackHole.style.width = "20px"; blackHole.style.height = "20px"; blackHole.style.background = "black"; blackHole.style.borderRadius = "50%"; blackHole.style.margin = "-10px 0 0 -10px"; class BeatingHeart extends HTMLElement { constructor(){ super(); this.attachShadow({mode: "open"}); this.shadowRoot.innerHTML = ` <style> @keyframes throb { 0% { -webkit-text-stroke: 1px red; } 100% { -webkit-text-stroke: 3px darkred; } } :host { margin: -10px 0 0 -10px; box-sizing: border-box; width: 20px; height: 20px; -webkit-text-stroke: 3px red; } :host(*.long-press) { animation: throb 400ms infinite alternate forwards; color: red; /*box-sizing: border-box;*/ } </style>&#9829`; } } customElements.define("beating-heart", BeatingHeart); const persistance = new BeatingHeart(); document.querySelector("#theDonald").addEventListener("long-press-start", function(e){ e.setImage(blackHole); }); document.querySelector("#Pocahuntas").addEventListener("long-press-start", function(e){ e.setImage(persistance, 2147483647); }); window.addEventListener("long-press-start", e => console.log(e.type)); window.addEventListener("long-press", e => console.log(e.type)); </script> ``` ## References * []()
35.744792
545
0.683666
eng_Latn
0.753392
c97711ad141f5027dacf0d1897d22300930f0c16
2,237
md
Markdown
README.md
thozza/osbuild
0cbd7898c7822d1e6c667f2376bd06fea901a71e
[ "Apache-2.0" ]
81
2019-08-05T15:43:27.000Z
2022-03-27T11:10:58.000Z
README.md
thozza/osbuild
0cbd7898c7822d1e6c667f2376bd06fea901a71e
[ "Apache-2.0" ]
604
2019-07-29T09:34:19.000Z
2022-03-30T13:13:36.000Z
README.md
thozza/osbuild
0cbd7898c7822d1e6c667f2376bd06fea901a71e
[ "Apache-2.0" ]
50
2019-07-28T19:35:45.000Z
2022-03-09T08:57:25.000Z
OSBuild ======= Build-Pipelines for Operating System Artifacts OSBuild is a pipeline-based build system for operating system artifacts. It defines a universal pipeline description and a build system to execute them, producing artifacts like operating system images, working towards an image build pipeline that is more comprehensible, reproducible, and extendable. See the `osbuild(1)` man-page for details on how to run osbuild, the definition of the pipeline description, and more. ### Project * **Website**: <https://www.osbuild.org> * **Bug Tracker**: <https://github.com/osbuild/osbuild/issues> * **IRC**: #osbuild on [Libera.Chat](https://libera.chat/) * **Changelog**: <https://github.com/osbuild/osbuild/releases> #### Contributing Please refer to the [developer guide](https://www.osbuild.org/guides/developer-guide/developer-guide.html) to learn about our workflow, code style and more. ### Requirements The requirements for this project are: * `bubblewrap >= 0.4.0` * `python >= 3.7` Additionally, the built-in stages require: * `bash >= 5.0` * `coreutils >= 8.31` * `curl >= 7.68` * `qemu-img >= 4.2.0` * `rpm >= 4.15` * `tar >= 1.32` * `util-linux >= 235` At build-time, the following software is required: * `python-docutils >= 0.13` * `pkg-config >= 0.29` Testing requires additional software: * `pytest` ### Install Installing `osbuild` requires installing not only the `osbuild` module, but also additional artifacts such as tools (e.g. `osbuild-mpp`), sources, stages, schemas, and SELinux policies. For this reason, installing from source is not trivial, and the easiest way to install it is to create the set of RPMs that contain all these components.
This can be done with the `rpm` make target, i.e.: ```sh make rpm ``` A set of RPMs will be created in the `./rpmbuild/RPMS/noarch/` directory and can be installed in the system using the distribution package manager, e.g.: ```sh sudo dnf install ./rpmbuild/RPMS/noarch/*.rpm ``` ### Repository: - **web**: <https://github.com/osbuild/osbuild> - **https**: `https://github.com/osbuild/osbuild.git` - **ssh**: `git@github.com:osbuild/osbuild.git` ### License: - **Apache-2.0** - See LICENSE file for details.
26.951807
156
0.708985
eng_Latn
0.958409
c977425778a120775a7ad0914f35a56efcbf52bd
5,505
md
Markdown
docs/guides/EinsteinAi.md
mitra-varuna/nlp-gauntlet
71f275e9edad1a3e3d384922fc5fea63a1a21cec
[ "BSD-3-Clause" ]
10
2019-10-22T14:28:47.000Z
2022-02-21T16:43:45.000Z
docs/guides/EinsteinAi.md
mitra-varuna/nlp-gauntlet
71f275e9edad1a3e3d384922fc5fea63a1a21cec
[ "BSD-3-Clause" ]
5
2019-10-20T04:10:46.000Z
2020-04-05T19:25:31.000Z
docs/guides/EinsteinAi.md
mitra-varuna/nlp-gauntlet
71f275e9edad1a3e3d384922fc5fea63a1a21cec
[ "BSD-3-Clause" ]
7
2019-10-09T13:17:29.000Z
2022-03-31T14:33:46.000Z
1- Create an Einstein.ai account (https://api.einstein.ai/signup) - Proceed to step 4 if you already have an Einstein.ai account provisioned for your org - Take note of the private key provided as part of the sign-up process; it will be used later ![Einstein.ai Setup](/docs/guides/images/einsteinAi/eai1.png?raw=true) 2- Create a Remote Site Setting for Einstein.Ai - From Setup, enter Remote Site in the Quick Find box, then select Remote Site Settings - Click New Remote Site - Enter a name for the remote site - In the Remote Site URL field, enter https://api.einstein.ai - Click Save 3- Upload a new data set and train a new model - [This trail](https://trailhead.salesforce.com/en/content/learn/modules/einstein_intent_basics/einstein_intent_basics_prep) is a good place to start with Einstein.ai. 4- If you already have a certificate in your organization for Einstein Platform services, proceed to step 5; otherwise perform the following steps: 4a- Obtain your private key - Right after signing up for an Einstein.ai account, you should have been presented with a private key. - Store the contents of this private key in a temporary location, as we will need it later: we will rotate this key with a new key that is stored in a new certificate in Salesforce.
4b- Create a new certificate in your organization - Go to Setup > Security > Certificate and Key Management and click "Create Self-Signed Certificate" - Set a label and a unique name for your cert - Key Size : 2048 - Exportable Private Key : Checked - Save 4c- Extract the public key from your certificate - Back in the Certificate and Key Management list, click on the Certificate name you created in step 4b - Click the "Download Certificate" button and store the .crt file in a known location - Open a terminal window and type the following command `openssl x509 -pubkey -noout -in YOUR_CERT_NAME.crt` - You should see the public key for your cert between `-----BEGIN PUBLIC KEY-----` and `-----END PUBLIC KEY-----` - Copy the contents of the public key between these two tags and store it in a temporary location; this will be the new public key we will upload to Einstein.ai in the next step ![Einstein.ai Setup](/docs/guides/images/einsteinAi/eai2.png?raw=true) 4d- Rotate your key - Go to `/apex/EinsteinAiKeyMgmt` - In the `Key Information` section, enter the following: - Einstein.ai Email : < the email associated with your Einstein.ai account > - Private Key : The contents of your Einstein Account private key without the `-----BEGIN` or `-----END` tags (obtained in step 4a) - Certificate Name : < leave empty for now > - Use Staging endpoint : < leave un-checked unless you are testing beta features > - Einstein.AI Public Key : < leave empty for now > - Click the `Get Keys` button and you should see the `Einstein.AI Public Key` drop-down populate with a Default (Active) key. - In the `Add a new key` section, enter the following: - Public Key Name : <a name for your new einstein key (e.g.
EinsteinAI)> - Public Key : The contents of your Salesforce Certificate public key without the `-----BEGIN` or `-----END` tags (obtained in step 4c) - Set as active : Checked - Click the `Add Key` button 4e - Validate rotation - Go to `/apex/EinsteinAiKeyMgmt` - In the `Key Information` section, enter the following: - Einstein.ai Email : < the email associated with your Einstein.ai account > - Private Key : < leave empty > - Certificate Name : The developer name of the certificate created in step 4b - Use Staging endpoint : < leave un-checked unless you are testing beta features > - Einstein.AI Public Key : < leave empty for now > - Click the `Get Keys` button and you should see the `Einstein.AI Public Key` drop-down populate with 2 options now, and your new public key should be the Active one. 5 - Create a Named Credential for the Einstein Ai API that uses the certificate that you just uploaded. Go to Setup > Administer > Security Controls > Named Credentials ![Einstein.ai Setup](/docs/guides/images/einsteinAi/einstein1.png?raw=true) - URL : https://api.einstein.ai - Identity Type : Named Principal - Authentication Protocol : JWT Token Exchange - Token Endpoint Url : https://api.einstein.ai/v2/oauth2/token - Scope : offline - Issuer : developer.force.com - Named Principal Subject : < your einstein ai account email > - Audiences: https://api.einstein.ai/v2/oauth2/token - JWT Signing Certificate : < select the Einstein Platform Services certificate > - Generate Authorization Header : Checked - Allow Merge Fields in HTTP Header : Un-Checked - Allow Merge Fields in HTTP Body : Un-Checked 6 - Next, we can try out the service integration by launching the Nlp Gauntlet Workbench in `/apex/ExternalNlpWorkbench` Set the following parameters in the workbench and click the `Test` button : - Type : `Einstein.ai` - Additional Parameters : < empty > - Intent Confidence Threshold : Set to a value between 0 and 1, if empty it defaults to 0.7 - NER Confidence Threshold : Set to a
value between 0 and 1, if empty it defaults to 0.7 - Sentiment Confidence Threshold : Set to a value between 0 and 1, if empty it defaults to 0.7 - Named Credential : The Name of the Named Credential you created for this integration - Model Id : The model id for the model you trained in Einstein.ai (use `CommunitySentiment` as Model Id for Sentiment Analysis) - Language : Your desired language - Time Zone : Your desired timezone - Input Text : The text you would like to get predictions on.
45.875
187
0.755313
eng_Latn
0.986357
c977474dc84875144257f52d97b02caf6b6d6ad1
6,720
md
Markdown
articles/fin-ops-core/fin-ops/get-started/use-lookups-to-find-information.md
frankarg/dynamics-365-unified-operations-public
ef45aaac9799bd6e3600bee12b3a85642a52c53e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/fin-ops-core/fin-ops/get-started/use-lookups-to-find-information.md
frankarg/dynamics-365-unified-operations-public
ef45aaac9799bd6e3600bee12b3a85642a52c53e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/fin-ops-core/fin-ops/get-started/use-lookups-to-find-information.md
frankarg/dynamics-365-unified-operations-public
ef45aaac9799bd6e3600bee12b3a85642a52c53e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- # required metadata title: Find information by using lookups description: Many fields have lookups that can help you easily find the correct or desired value. Several enhancements have been added to lookups that make these controls more usable and make users more productive. In this topic, you will learn about these new lookup features and will receive some helpful tips to get the most out of lookups in the system. author: jasongre manager: AnnBe ms.date: 06/20/2017 ms.topic: article ms.prod: ms.service: dynamics-ax-applications ms.technology: # optional metadata # ms.search.form: # ROBOTS: audience: Application User # ms.devlang: ms.reviewer: sericks #ms.search.scope: Core, Operations # ms.tgt_pltfrm: ms.custom: 269934 ms.assetid: f20cbd2c-14e0-47e7-b351-8e60d3537f96 ms.search.region: Global # ms.search.industry: ms.author: jasongre ms.search.validFrom: 2016-02-28 ms.dyn365.ops.version: AX 7.0.0 --- # Find information by using lookups [!include [banner](../includes/banner.md)] Many fields have lookups that can help you easily find the correct or desired value. Several enhancements have been added to lookups that make these controls more usable and make users more productive. In this topic, you will learn about these new lookup features and will receive some helpful tips to get the most out of lookups in the system. ## Responsive lookups In previous versions, when interacting with a lookup control, a user would have to take an explicit action to open the drop-down menu. This could be done by typing an asterisk (\*) in the control to filter the lookup based on the current value of the control, clicking the drop-down button, or by using the **Alt**+**Down arrow** keyboard shortcut. Lookup controls have been modified in the following ways to better align with current web practices: - Lookup drop-down menus will now open automatically after a slight pause in typing, with the drop-down menu contents filtered based on the lookup control's value.
Note that the old behavior of automatic opening of the dropdown after typing an asterisk (\*) has been deprecated. - After the lookup drop-down menu has opened, the following will occur: - The cursor will stay in the lookup control (instead of focus moving into the drop-down menu) so you can continue to make modifications to the control's value. However, the user can still use the **Up arrow** and **Down arrow** to change rows in the drop-down menu, and enter to select the current row in the drop-down menu. - The contents of the drop-down menu will adjust after any modifications are made to the lookup control's value. For example, consider a lookup field called **City**. When focus is in the **City** field, you can start looking for the city that you want by typing a few letters, like "col." After you stop typing, the lookup will open automatically, filtered to those cities that begin with "col". [![typeaheadLookupExample](./media/typeaheadlookupexample.png)](./media/typeaheadlookupexample.png) At this point, the cursor is still in the lookup field. If you continue typing so the value is "colum," the lookup contents adjust automatically to reflect the latest value in the control. ![updateFilterLookupExample](./media/updatefilterlookupexample.png) Even though focus is still in the lookup control, you can also use the **Up arrow** or **Down arrow** keys to highlight the row that you want to select. If you press **Enter** the highlighted row will be selected from the lookup and the control's value will be updated. ![changingSelectionLookup](./media/changingselectionlookup.png) ## Typing in more than IDs When entering data, it's natural for users to attempt to identify an entity, such as a customer or vendor, in terms of the name rather than an identifier representing the entity. Many (but not all) lookups now allow contextual data entry. This powerful feature allows the user to type the ID or the corresponding name into the lookup control. 
For example, consider the **Customer account** field when creating a sales order. This field shows the **Account ID** for the customer, but a user would typically prefer to enter an **Account name** instead of an **Account ID** for this field when creating a sales order, such as "Forest Wholesales" instead of "US-003." If the user started to enter an **Account ID** into the lookup control, the drop-down menu would automatically open as described in the previous section and the user would see the lookup as shown below. [![Contextual lookup when a customer account ID is entered](./media/howtocontextuallookups-1.png)](./media/howtocontextuallookups-1.png) However, the user can also now enter the beginning of an **Account name** as well. If this is detected, then the user will see the following lookup. Notice how the **Name** column is moved to be the first column in the lookup, and how the lookup is sorted and filtered based on the **Name** column. [![Contextual lookup when a customer name is entered](./media/howtocontextuallookups-2.png)](./media/howtocontextuallookups-2.png) ## Using grid column headers for more advanced filtering and sorting The lookup enhancements discussed in the previous two sections greatly improve a user's ability to navigate the rows in a lookup based on a "begins with" search of the **ID** or **Name** field in the lookup. However, there are situations in which more advanced filtering (or sorting) is needed to find the correct row. In these situations, the user needs to use the filtering and sorting options in the grid column headers inside the lookup. For example, consider an employee entering a sales order line who needs to locate the right "cable" as the product. Typing "cable" into the **Item number** control isn't helpful, as there are no product names that begin with "cable." 
![emptyitemlookup](./media/emptyitemlookup.png) Instead, the user needs to clear the value of the lookup control, open the lookup drop-down menu, and filter the drop-down menu using the grid column header, as shown below. A mouse (or touch) user can simply click (or touch) any column header to access the filtering and sorting options for that column. A keyboard user simply needs to press **Alt**+**Down arrow** a second time to move focus into the drop-down menu, after which the user can tab to the correct column, and then press **Ctrl**+**G** to open the grid column header drop-down menu. [![gridfilteritemlookup](./media/gridfilteritemlookup.png)](./media/gridfilteritemlookup.png) After the filter has been applied (see the image below), the user can find and select the row as usual. ![filtereditemlookup](./media/filtereditemlookup.png)
72.258065
675
0.775893
eng_Latn
0.999291
c9774d881d004c7fd7962829742b613e819ca169
22
md
Markdown
README.md
LeoCheung0221/Choreographer
fc5b884f3cf6ae0864d4db3cf227b8088c188dcb
[ "MIT" ]
null
null
null
README.md
LeoCheung0221/Choreographer
fc5b884f3cf6ae0864d4db3cf227b8088c188dcb
[ "MIT" ]
null
null
null
README.md
LeoCheung0221/Choreographer
fc5b884f3cf6ae0864d4db3cf227b8088c188dcb
[ "MIT" ]
null
null
null
# Choreographer A choreographer framework
7.333333
15
0.818182
eng_Latn
0.980336
c97797eac94e78bafb83d6c50111401eeb9c37a3
13,011
md
Markdown
chapter-4.md
Stromberg90/ziglearn
73f539945ff4926bca1b7b12eab1e9a16bb2211e
[ "MIT" ]
null
null
null
chapter-4.md
Stromberg90/ziglearn
73f539945ff4926bca1b7b12eab1e9a16bb2211e
[ "MIT" ]
null
null
null
chapter-4.md
Stromberg90/ziglearn
73f539945ff4926bca1b7b12eab1e9a16bb2211e
[ "MIT" ]
null
null
null
--- title: "Chapter 4 - Working with C" weight: 5 date: 2021-02-24 17:17:00 description: "Chapter 4 - Learn about how the Zig programming language makes use of C code. This tutorial covers C data types, FFI, building with C, translate-c and more!" --- Zig has been designed from the ground up with C interop as a first class feature. In this section we will go over how this works. # ABI An ABI *(application binary interface)* is a standard, pertaining to: - The in-memory layout of types (i.e. a type's size, alignment, offsets, and the layouts of its fields) - The in-linker naming of symbols (e.g. name mangling) - The calling conventions of functions (i.e. how a function call works at a binary level) By defining these rules and not breaking them, an ABI is said to be stable and this can be used to, for example, reliably link together multiple libraries, executables, or objects which were compiled separately (potentially on different machines, or using different compilers). This allows for FFI *(foreign function interface)* to take place, where we can share code between programming languages. Zig natively supports C ABIs for `extern` things; which C ABI is used is dependent on the target which you are compiling for (e.g. CPU architecture, operating system). This allows for near-seamless interoperation with code that was not written in Zig; the usage of C ABIs is standard amongst programming languages. Zig internally does not make use of an ABI, meaning code should explicitly conform to a C ABI where reproducible and defined binary-level behaviour is needed. # C Primitive Types Zig provides special `c_` prefixed types for conforming to the C ABI. These do not have fixed sizes, but rather change in size depending on the ABI being used.
| Type | C Equivalent | Minimum Size (bits) | |--------------|--------------------|---------------------| | c_short | short | 16 | | c_ushort | unsigned short | 16 | | c_int | int | 16 | | c_uint | unsigned int | 16 | | c_long | long | 32 | | c_ulong | unsigned long | 32 | | c_longlong | long long | 64 | | c_ulonglong | unsigned long long | 64 | | c_longdouble | long double | N/A | | c_void | void | N/A | Note: C's void (and Zig's `c_void`) has an unknown non-zero size. Zig's `void` is a true zero-sized type. # Calling conventions Calling conventions describe how functions are called. This includes how arguments are supplied to the function (i.e. where they go - in registers or on the stack, and how), and how the return value is received. In Zig, the attribute `callconv` may be given to a function. The calling conventions available may be found in [std.builtin.CallingConvention](https://ziglang.org/documentation/master/std/#std;builtin.CallingConvention). Here we make use of the cdecl calling convention. ```zig fn add(a: u32, b: u32) callconv (.C) u32 { return a + b; } ``` Using a calling convention is crucial when you're calling C code from Zig, or vice versa. # Extern Structs Normal structs in Zig do not have a defined layout; `extern` structs are required when you want the layout of your struct to match the layout of your C ABI. Let's create an extern struct. This test should be run with `x86_64` with a `gnu` ABI, which can be done with `-target x86_64-native-gnu`.
```zig const expect = @import("std").testing.expect; const Data = extern struct { a: i32, b: u8, c: f32, d: bool, e: bool }; test "hmm" { const x = Data{ .a = 10005, .b = 42, .c = -10.5, .d = false, .e = true, }; const z = @ptrCast([*]const u8, &x); expect(@ptrCast(*const i32, z).* == 10005); expect(@ptrCast(*const u8, z + 4).* == 42); expect(@ptrCast(*const f32, z + 8).* == -10.5); expect(@ptrCast(*const bool, z + 12).* == false); expect(@ptrCast(*const bool, z + 13).* == true); } ``` This is what the memory inside our `x` value looks like. | Field | a | a | a | a | b | | | | c | c | c | c | d | e | | | |-------|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----| | Bytes | 15 | 27 | 00 | 00 | 2A | 00 | 00 | 00 | 00 | 00 | 28 | C1 | 00 | 01 | 00 | 00 | Note how there are gaps in the middle and at the end - this is called "padding". The data in this padding is undefined memory, and won't always be zero. As our `x` value is that of an extern struct, we could safely pass it into a C function expecting a `Data`, providing the C function was also compiled with the same `gnu` ABI and CPU arch. # Alignment For circuitry reasons, CPUs access primitive values at certain multiples in memory. This could mean for example that the address of an `f32` value must be a multiple of 4, meaning `f32` has an alignment of 4. This so-called "natural alignment" of primitive data types is dependent on CPU architecture. All alignments are powers of 2. Data of a larger alignment also has the alignment of every smaller alignment; for example, a value which has an alignment of 16 also has an alignment of 8, 4, 2 and 1. We can make specially aligned data by using the `align(x)` property. Here we are making data with a greater alignment. ```zig const a1: u8 align(8) = 100; const a2 align(8) = @as(u8, 100); ``` And making data with a lesser alignment. Note: Creating data of a lesser alignment isn't particularly useful. 
```zig const b1: u64 align(1) = 100; const b2 align(1) = @as(u64, 100); ``` Like `const`, `align` is also a property of pointers. ```zig test "aligned pointers" { const a: u32 align(8) = 5; expect(@TypeOf(&a) == *align(8) const u32); } ``` Let's make use of a function expecting an aligned pointer. ```zig fn total(a: *align(64) const [64]u8) u32 { var sum: u32 = 0; for (a) |elem| sum += elem; return sum; } test "passing aligned data" { const x align(64) = [_]u8{10} ** 64; expect(total(&x) == 640); } ``` # Packed Structs By default all struct fields in Zig are naturally aligned to that of [`@alignOf(FieldType)`](https://ziglang.org/documentation/master/#alignOf) (the ABI size), but without a defined layout. Sometimes you may want to have struct fields with a defined layout that do not conform to your C ABI. `packed` structs allow you to have extremely precise control of your struct fields, allowing you to place your fields on a bit-by-bit basis. Inside packed structs, Zig's integers take their bit-width in space (i.e. a `u12` has an [`@bitSizeOf`](https://ziglang.org/documentation/master/#bitSizeOf) of 12, meaning it will take up 12 bits in the packed struct). Bools also take up 1 bit, meaning you can implement bit flags easily. ```zig const MovementState = packed struct { running: bool, crouching: bool, jumping: bool, in_air: bool, }; test "packed struct size" { expect(@sizeOf(MovementState) == 1); expect(@bitSizeOf(MovementState) == 4); const state = MovementState{ .running = true, .crouching = true, .jumping = true, .in_air = true, }; } ``` Zig's packed structs currently have some longstanding compiler bugs and do not work for many use cases. # Bit Aligned Pointers Similar to aligned pointers, bit aligned pointers have extra information in their type which informs how to access the data. These are necessary when the data is not byte-aligned. Bit alignment information is often needed to address fields inside of packed structs.
```zig test "bit aligned pointers" { var x = MovementState{ .running = false, .crouching = false, .jumping = false, .in_air = false, }; const running = &x.running; running.* = true; const crouching = &x.crouching; crouching.* = true; expect(@TypeOf(running) == *align(1:0:1) bool); expect(@TypeOf(crouching) == *align(1:1:1) bool); expect(@import("std").meta.eql(x, .{ .running = true, .crouching = true, .jumping = false, .in_air = false, })); } ``` # C Pointers Up until now we have used the following kinds of pointers: - single item pointers - `*T` - many item pointers - `[*]T` - slices - `[]T` Unlike the aforementioned pointers, C pointers cannot deal with specially aligned data, and may point to the address `0`. C pointers coerce back and forth between integers, and also coerce to single and multi item pointers. When a C pointer of value `0` is coerced to a non-optional pointer, this is detectable illegal behaviour. Outside of automatically translated C code, the usage of `[*c]` is almost always a bad idea, and should almost never be used. # Translate-C Zig provides the command `zig translate-c` for automatic translation from C source code. Create the file `main.c` with the following contents. ```c #include <stddef.h> void int_sort(int* array, size_t count) { for (int i = 0; i < count - 1; i++) { for (int j = 0; j < count - i - 1; j++) { if (array[j] > array[j+1]) { int temp = array[j]; array[j] = array[j+1]; array[j+1] = temp; } } } } ``` Run the command `zig translate-c main.c` to get the equivalent Zig code output to your console (stdout). You may wish to pipe this into a file with `zig translate-c main.c > int_sort.zig` (warning for windows users: piping in powershell will produce a file with the incorrect encoding - use your editor to correct this). In another file you could use `@import("int_sort.zig")` to make use of this function. Currently the code produced may be unnecessarily verbose, though translate-c successfully translates most C code into Zig. 
You may wish to use translate-c to produce Zig code before editing it into more idiomatic code; a gradual transfer from C to Zig within a codebase is a supported use case. # cImport Zig's [`@cImport`](https://ziglang.org/documentation/master/#cImport) builtin is unique in that it takes in an expression, which can only take in [`@cInclude`](https://ziglang.org/documentation/master/#cInclude), [`@cDefine`](https://ziglang.org/documentation/master/#cDefine), and [`@cUndef`](https://ziglang.org/documentation/master/#cUndef). This works similarly to translate-c, translating C code to Zig under the hood. [`@cInclude`](https://ziglang.org/documentation/master/#cInclude) takes in a path string and adds the path to the includes list. [`@cDefine`](https://ziglang.org/documentation/master/#cDefine) and [`@cUndef`](https://ziglang.org/documentation/master/#cUndef) define and undefine things for the import. These three functions work exactly as you'd expect them to work within C code. Similar to [`@import`](https://ziglang.org/documentation/master/#import) this returns a struct type with declarations. It is typically recommended to only use one instance of [`@cImport`](https://ziglang.org/documentation/master/#cImport) in an application to avoid symbol collisions; the types generated within one cImport will not be equivalent to those generated in another. cImport is only available when linking libc. # Linking libc Linking libc can be done via the command line via `-lc`, or via `build.zig` using `exe.linkLibC();`. The libc used is that of the compilation's target; Zig provides libc for many targets. # Zig cc, Zig c++ The Zig executable comes with Clang embedded inside it alongside libraries and headers required to cross compile for other operating systems and architectures.
This means that not only can `zig cc` and `zig c++` compile C and C++ code (with Clang-compatible arguments), but they can also do so while respecting Zig's target triple argument; the single Zig binary that you have installed has the power to compile for several different targets without the need to install multiple versions of the compiler or any addons. Using `zig cc` and `zig c++` also makes use of Zig's caching system to speed up your workflow. Using Zig, one can easily construct a cross-compiling toolchain for languages which make use of a C and/or C++ compiler. Some examples in the wild: - [Using zig cc to cross compile LuaJIT to aarch64-linux from x86_64-linux](https://andrewkelley.me/post/zig-cc-powerful-drop-in-replacement-gcc-clang.html) - [Using zig cc and zig c++ in combination with cgo to cross compile hugo from aarch64-macos to x86_64-linux, with full static linking](https://twitter.com/croloris/status/1349861344330330114) - [Using zig build, zig cc, and zigmod to build snapraid](https://github.com/nektro/snapraid) # End of Chapter 4 This chapter is incomplete. In the future it will contain things such as: - Calling C code from Zig and vice versa - Using `zig build` with a mixture of C and Zig code Feedback and PRs are welcome.
47.312727
451
0.684882
eng_Latn
0.996818
c977ebc4b4e7c73d9d43aad25863725e630ae804
384
md
Markdown
README.md
gamliela/starter-react-mobx-styled-components
837538a7facadabaf9ff10922a00d0522c66865a
[ "Apache-2.0" ]
null
null
null
README.md
gamliela/starter-react-mobx-styled-components
837538a7facadabaf9ff10922a00d0522c66865a
[ "Apache-2.0" ]
null
null
null
README.md
gamliela/starter-react-mobx-styled-components
837538a7facadabaf9ff10922a00d0522c66865a
[ "Apache-2.0" ]
null
null
null
# starter-react-mobx-styled-components My starter kit for projects based on React, MobX, Styled Components and TypeScript. This kit consists of the latest versions of: * React * Webpack * MobX * Styled Components * TypeScript #### Build Instructions `npm install` `npm run build` The output goes to `/build/bundle.js` & `/build/index.html` #### Development Server `npm run dev-server`
17.454545
83
0.739583
eng_Latn
0.899067
c9782b39161f725e43b4d7a4d2278bf5a253ca7e
175
md
Markdown
_posts/0000-01-02-b-canas.md
b-canas/github-slideshow
d39a3837983c85d98292cb92e8b17688b7c00766
[ "MIT" ]
null
null
null
_posts/0000-01-02-b-canas.md
b-canas/github-slideshow
d39a3837983c85d98292cb92e8b17688b7c00766
[ "MIT" ]
3
2021-02-01T00:23:34.000Z
2021-02-01T02:20:58.000Z
_posts/0000-01-02-b-canas.md
b-canas/github-slideshow
d39a3837983c85d98292cb92e8b17688b7c00766
[ "MIT" ]
null
null
null
--- layout: slide title: "Welcome to our second slide!" --- ![Git Meme: Force Solution](https://miro.medium.com/max/1200/0*tmfbLDU_hIeg0B3B.jpg) Use the left arrow to go back!
29.166667
84
0.725714
eng_Latn
0.813844
c97866ba08fdd6fbc77de9b70a2c6a3491d0d31c
1,604
md
Markdown
docs/_api_object/co-obj-rd-value.md
nebulorum/canopen-stack
09090ec748db8ab24a05547911256e04d9460f14
[ "Apache-2.0" ]
2
2022-03-01T21:41:12.000Z
2022-03-01T21:41:15.000Z
docs/_api_object/co-obj-rd-value.md
nebulorum/canopen-stack
09090ec748db8ab24a05547911256e04d9460f14
[ "Apache-2.0" ]
null
null
null
docs/_api_object/co-obj-rd-value.md
nebulorum/canopen-stack
09090ec748db8ab24a05547911256e04d9460f14
[ "Apache-2.0" ]
1
2021-06-02T15:34:54.000Z
2021-06-02T15:34:54.000Z
--- layout: article title: COObjRdValue() sidebar: nav: docs --- This function reads the value of the given object entry. <!--more--> ## Description Reading an object entry with this function casts the object entry value to the requested value width. ### Prototype ```c int16_t COObjRdValue(CO_OBJ *obj , void *value, uint8_t width, uint8_t nodeId); ``` #### Arguments | Parameter | Description | | --- | --- | | obj | pointer to object entry | | value | pointer to destination memory | | width | width of the read value (must be 1, 2 or 4, reflecting the width of the referenced variable given by parameter value) | | nodeId | device node ID (only relevant in case of a node-ID-dependent value) | #### Returned Value - `==CO_ERR_NONE` : successful operation - `!=CO_ERR_NONE` : an error is detected (see [CONodeGetErr()](/api_node/co-node-get-err)) ## Example The following example gets the value of the hypothetical application-specific object entry "[1234:56]" within the object dictionary of the CANopen node AppNode. ```c uint32_t value; CO_OBJ *entry; : entry = CODirFind (&(AppNode.Dir), CO_DEV(0x1234,0x56)); err = COObjRdValue (entry, &value, 4, 0); : ``` Note: The example assumes that the addressed object entry is independent of the node ID. To avoid relying on this knowledge, the API function `CONmtGetNodeId()` may be used to get the current node ID.
30.264151
236
0.65399
eng_Latn
0.982071
c9786fc71a6c29e558338faa2ec180d909189631
22,286
md
Markdown
chapter-1-A-Tutorial-Introduction/README.md
moki/The-C-Programming-Language-walkthrough
a5a9900e0d06651da54d823d20c139560b8559cc
[ "MIT" ]
null
null
null
chapter-1-A-Tutorial-Introduction/README.md
moki/The-C-Programming-Language-walkthrough
a5a9900e0d06651da54d823d20c139560b8559cc
[ "MIT" ]
null
null
null
chapter-1-A-Tutorial-Introduction/README.md
moki/The-C-Programming-Language-walkthrough
a5a9900e0d06651da54d823d20c139560b8559cc
[ "MIT" ]
null
null
null
# Chapter 1 - A Tutorial Introduction

Diving into the code asap.

## 1.1 Getting started

Hello world program, refer to hello-world.c

### Key points:

1. **main** is a special function with which execution of your program starts.

```
int main()
{
    /* code of the program */
}
```

2. **#include** is the way to tell the compiler to include information about a library.

```
/* includes standard input/output library */
#include <stdio.h>
```

3. To **call a function** just state its name followed by (), passing arguments in.

```
printf("hello, world\n");
```

4. There are escape sequences like \n for a new line, \t for a tab, \b for a backspace, \" for a double quote, \\ for a backslash itself, and many more.

5. C is an imperative language.

### Run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o hello-world hello-world.c
$ ./hello-world
```

If not replace gcc with whatever compiler you use (refer to its documentation in case of errors)

---

## 1.2 Variables and Arithmetic Expressions

Program to list Fahrenheit temperatures from 0 to 300 in steps of 20 and the equivalent of each in Celsius, refer to fahr-to-cels.c

### Key points:

1. **Types** - There are **four basic** types in C; some have subtypes.

#### Integer

Integers are used to store whole numbers.

| Type              | Size(bytes) | Range                           |
| ----------------- |:-----------:|:------------------------------- |
| int               | 2           | -32,768 to 32,767               |
| unsigned int      | 2           | 0 to 65,535                     |
| short int         | 1           | -128 to 127                     |
| long int          | 4           | -2,147,483,648 to 2,147,483,647 |
| unsigned long int | 4           | 0 to 4,294,967,295              |

#### Floating

Floating types are used to store real numbers.

| Type        | Size(bytes) | Range                  |
| ----------- |:-----------:|:---------------------- |
| float       | 4           | 3.4E-38 to 3.4E+38     |
| double      | 8           | 1.7E-308 to 1.7E+308   |
| long double | 10          | 3.4E-4932 to 1.1E+4932 |

#### Character

Character types are used to store character values.
| Type                | Size(bytes) | Range       |
| ------------------- |:-----------:|:----------- |
| char or signed char | 1           | -128 to 127 |
| unsigned char       | 1           | 0 to 255    |

#### void

Means no value; often used to specify that a function returns no value.

#### Note that

Size and range of all types in the tables are given for a 16-bit machine. The size of these objects depends on the machine you run the code on; to figure out the size of a type on any machine use the sizeof operator like:

```
// sizeof(type_name);
printf("Size for int: %zu\n", sizeof(int));
```

#### More info on types:

[https://en.wikipedia.org/wiki/C_data_types](https://en.wikipedia.org/wiki/C_data_types)

2. **Comments** - the way to comment the code.

```
// This is a comment

/* This is a comment */

/* This is a comment */
```

3. **Variables** - In C variables must be declared before they are used; declare a variable by stating its type and name.

```
int   int_val_1, int_val_2;
float float_val_1, float_val_2;
```

4. **While** - the while() {} construct repeats the code between {} as long as the condition (the expression in parentheses) evaluates to true.

One may omit the curly braces if the body of the while loop consists of a single statement; write the body on a new line, indented by one tab.

Example usage:

```
int i = 1;        // initialize initial loop value
while (i <= 10)   // test condition to run or not the loop
{
    printf("loop evaluation #: %d\n", i);
    i = i + 1;    // increase loop value
}
```

Single statement example:

```
int i = 1;        // initialize initial loop value
while (i <= 10)   // test condition to run or not the loop
    i = i + 1;    // increase loop value
```

5. **printf % conversion specification**

%spec is used to specify how to format a value passed to the printf function inside the string.
| Conversion spec | formatting                                                   |
| --------------- |:------------------------------------------------------------ |
| %d              | as decimal integer                                           |
| %6d             | as decimal integer, at least 6 chars wide                    |
| %f              | as floating point                                            |
| %6f             | as floating point, at least 6 chars wide                     |
| %.2f            | as floating point, 2 characters after decimal point          |
| %6.2f           | as floating point, at least 6 wide and 2 after decimal point |

Also there is **%o** for **octal**, **%x** for **hexadecimal**, **%c** for a **character**, **%s** for a **character string**, and **%%** for **% itself**.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o fahr-to-cels fahr-to-cels.c
$ ./fahr-to-cels
```

If not replace gcc with whatever compiler you use (refer to its documentation in case of errors)

### Exercises:

#### Exercise 1-3

Refer to **line 17** in **fahr-to-cels.c** source code file

##### Compile and run the code:

```
$ gcc -o fahr-to-cels fahr-to-cels.c
$ ./fahr-to-cels
```

#### Exercise 1-4

Refer to **cels-to-fahr.c** source code file

##### Compile and run the code:

```
$ gcc -o cels-to-fahr cels-to-fahr.c
$ ./cels-to-fahr
```

---

## 1.3 The For Statement

There are a lot of ways to write this program; let's try doing it using the for loop.

### Key points:

1. **For loop** - the for () {} construct.

```
for (init; condition; increment)
{
    // statements to repeat
}
```

2. **Variable <-> expression** - in any context where it is permissible to use the value of a variable of some type, you can use a more complicated expression of that type.
replace:

```
printf("%3d %7.1f\n", fahr, cels);
```

with:

```
printf("%3d %7.1f\n", fahr, (5.0 / 9.0) * (fahr - 32));
```

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o for-fahr-to-cels for-fahr-to-cels.c
$ ./for-fahr-to-cels
```

### Exercises:

#### Exercise 1-5

Refer to **reverse-fahr-to-cels.c** source code file

##### Compile and run the code:

```
$ gcc -o reverse-fahr-to-cels reverse-fahr-to-cels.c
$ ./reverse-fahr-to-cels
```

---

## 1.4 Symbolic Constants

Magic numbers in your program and how to deal with them.

### Key points:

1. **#define** - defines a **symbolic name** or **symbolic constant**.

**name** has the same form as a variable name: a sequence of letters and digits that begins with a letter.

```
#define name replacement-text
```

Thereafter, any occurrence of **name** will be replaced by the corresponding **replacement-text**.

**name** is conventionally written in UPPER_CASE.

**replacement-text** can be any sequence of characters.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o const-fahr-to-cels const-fahr-to-cels.c
$ ./const-fahr-to-cels
```

---

## 1.5 Character Input and Output

Text input and output are processed as streams of characters using the stdio library.

### Key points:

1. **stdio** - **text** input/output is dealt with as **streams of characters**.

A **text stream** is a sequence of characters divided into lines; **each line** consists of **zero or more characters** followed by a **newline character**.

#### API:

##### getchar()

* reads the **next input character** from a **text stream** and **returns it** as a value

```
c = getchar();
```

The chars normally come from the keyboard.

##### putchar()

* **prints a character** every time it is called

```
putchar(c);
```

**Prints** the contents of the **int variable c as a character**, usually on the screen.

Calls to **putchar** and **printf** may be interleaved; the **output** will **appear** in the **order in which the calls are made**.
---

## 1.5.1 File Copying

Given getchar and putchar, you can write a surprising amount of useful code without knowing anything more about input and output.

### Key points:

1. The relational operator **!=** means **not equal to**.

2. What appears to be a **character** on the **keyboard or screen** is of course, like everything else, **stored internally just as a bit pattern**. The type char is specifically meant for storing such character data, but any integer type can be used.

3. **getchar()** returns **EOF** when there is no input; it stands for **End Of File**.

**Variable c** in the example below is of type int because **c** has to be big enough to hold **EOF** in addition to any possible **char**.

4. **EOF** is an integer defined in **stdio**.

5. In **C any assignment** such as

> c = getchar();

is an **expression** and **has a value**, which is the **value** of the **left** hand side **after assignment**. This means an assignment can appear as part of a larger expression.

```
while ((c = getchar()) != EOF)
```

The **parentheses** around the assignment are **necessary**, because the **precedence** of **!=** is **higher** than that of **=**. Omitting them would lead to undesired results, such as the value 0 or 1 being assigned to **c**.

The simplest example would be a program that copies its input to output one character at a time.

### Pseudocode:

> read a character
>
> while (character is not end-of-file indicator)
>
> &nbsp;&nbsp; output the character just read
>
> &nbsp;&nbsp; read a character

### Example in C:

```
#include <stdio.h>

int main()
{
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}
```

### Improved Example in C:

```
#include <stdio.h>

int main()
{
    int c;

    while ((c = getchar()) != EOF) {
        putchar(c);
    }
}
```

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o file-copy file-copy.c
$ ./file-copy
```

### Exercises:

#### Exercise 1-6

Verify that the expression getchar() != EOF is 0 or 1.
Refer to **verify-EOF.c** source code file

##### Compile and run the code:

```
$ gcc -o verify-EOF verify-EOF.c
$ ./verify-EOF
```

#### Exercise 1-7

Write a program to print the value of EOF.

Refer to **print-EOF.c** source code file

##### Compile and run the code:

```
$ gcc -o print-EOF print-EOF.c
$ ./print-EOF
```

---

## 1.5.2 Character Counting

Counting input characters.

### Key points:

1. The **++variable_name** statement presents a new operator, **++**, which means **increment** the variable's value by one.

Example usage:

```
++variable;
```

You could instead write:

```
variable = variable + 1;
```

But the former is more concise and often more efficient.

2. The **--variable_name** statement - the operator to **decrement** a variable's value by one.

Example usage:

```
--variable;
```

You could instead write:

```
variable = variable - 1;
```

But the former is more concise and often more efficient.

3. The above operators can be either **prefix** operators:

```
++variable;
```

or **postfix**:

```
variable++;
```

These **two forms** have **different values** in expressions, but **both** increment or decrement the **variable**.

4. The way to **cope** with **big numbers** is to use the **long** integer type or, for **even bigger** numbers, the **double** floating type.

5. **While** and **For** loops test their condition at the top.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o count-char-v1 count-char-v1.c
$ ./count-char-v1
```

```
$ gcc -o count-char-v2 count-char-v2.c
$ ./count-char-v2
```

---

## 1.5.3 Line Counting

Counting lines in input.

### Key Points:

1. The **Standard Library** ensures that an input text stream appears as a sequence of lines, each terminated by a newline. Counting lines is just counting newlines ('\n').

2. A character written between **single quotes** represents an **integer** value equal to the **numerical** value of the character in the machine's character set.
This is called a **character constant**, although it is just another way to write a **small integer**.

Example:

> 'A' is a **character constant**
>
> in **ASCII** its value is **65**; escape characters are also legal in character constants, so **'\n'** stands for the **value** of the **newline** character, which is **10** in **ASCII**.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o count-lines count-lines.c
$ ./count-lines
```

### Exercises:

#### Exercise 1-8

Write a program to count the amount of space in the input, i.e. count the number of blanks, tabs, and newlines. Bonus: vertical tab, formfeed, carriage return.

Refer to **count-space.c** source code file

##### Compile and run the code:

```
$ gcc -o count-space count-space.c
$ ./count-space
```

#### Exercise 1-9

Write a program to escape multiple blanks when copying from input to output.

Refer to **escape-multiple-blanks.c** source code file

##### Compile and run the code:

```
$ gcc -o escape-multiple-blanks escape-multiple-blanks.c
$ ./escape-multiple-blanks
```

#### Exercise 1-10

Write a program to explicitly state escape sequences: **tab** as **\t**, **backspace** as **\b**, and each **backslash** as **\\\\**.

Refer to **explicit-escape-sequence.c** source code file

##### Compile and run the code:

```
$ gcc -o explicit-escape-sequence explicit-escape-sequence.c
$ ./explicit-escape-sequence
```

---

## 1.5.4 Word Counting

Lines, words, and characters.

### Key points:

1. **Logical** operators.

> **&&** Operator for **AND**
>
> **||** Operator for **OR**

The **AND** operator's precedence is just higher than **OR**'s.

**Chained** logical **expressions** are **evaluated left to right**, and their evaluation is **guaranteed** to **stop as soon** as the **truth** or **falsehood** is **known**.

2. The **else** construct.

Specifies an alternative action if the condition part of an if statement is false.
### Example:

```
if (expression) {
    // statement_1
} else {
    // statement_2
}
```

Only one of the two statements associated with an if-else is performed.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o wc wc.c
$ ./wc
```

### Exercises:

#### Exercise 1-12

Write a program that prints its input one word per line.

Refer to **split-words-with-new-line.c** source code file

##### Compile and run the code:

```
$ gcc -o split-words-with-new-line split-words-with-new-line.c
$ ./split-words-with-new-line
```

---

## 1.6 Arrays

Writing a program to count the number of occurrences of each digit, of white space characters, and of all other characters.

### Key points:

1. **Declaration**.

```
int digits[10]
```

where

**int** is the **type** of the **array**

**digits** is the **name** of the **array**

**10** is the number of elements; a **subscript** can be **any integer expression**, which includes **integer variables** like **i** and integer **constants**

2. **Chars are small integers** - by definition **chars** are just **small integers**, so char **variables** and **constants** are **identical** to **ints** in arithmetic expressions.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o count-digits-white-space-chars count-digits-white-space-chars.c
$ ./count-digits-white-space-chars
```

### Exercises:

#### Exercise 1-13

Write a program to print a histogram of the lengths of words in its input.

Refer to **words-length-hist.c** source code file

##### Compile and run the code:

```
$ gcc -o words-length-hist words-length-hist.c
$ ./words-length-hist
```

#### Exercise 1-14

Write a program to print a histogram of the frequencies of different characters in its input.

Refer to **chars-freq-hist.c** source code file

##### Compile and run the code:

```
$ gcc -o chars-freq-hist chars-freq-hist.c
$ ./chars-freq-hist
```

---

## 1.7 Functions

A convenient way to encapsulate some computation.

### Key points:

1.
**Definition**

> return-**type** function-**name**(parameter declarations, if any, in the form: **type** **name**) {
>
> &nbsp;&nbsp;&nbsp;&nbsp;**function code**
>
> }

2. **Parameter** - a variable in the function declaration

3. **Argument** - a variable passed to the function call

4. **return** is the way to return a value from a function.

return **without a value** returns **control** to the **caller function**.

5. **main() return**

Returning a **value** of **0** from the main function **implies normal termination**; **non-zero** values indicate **unusual or erroneous termination conditions**.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o exponent exponent.c
$ ./exponent
```

### Exercises:

#### Exercise 1-15

Rewrite the temperature conversion program of Section 1.2 to use a function for conversion.

Refer to **fn-fahr-to-cels.c** source code file

##### Compile and run the code:

```
$ gcc -o fn-fahr-to-cels fn-fahr-to-cels.c
$ ./fn-fahr-to-cels
```

---

## 1.8 Arguments - Call by value

### Key points:

1. All function arguments are **passed by value**.

Meaning that the called **function** is **given** the **values** of its arguments in **temporary variables** rather than the originals, i.e. a local copy.

2. **Except** for **arrays**.

When the name of an **array** is **used** as an **argument**, the **value passed** to the function is the **location** or **address** of the **beginning** of the **array** - **there is no copying of array elements**.

3. **Pointers**

Using **pointers** it is **possible** to arrange for a function to **modify** a **variable in the calling routine**.

---

## 1.9 Character Arrays

The most common type of array in C is the array of characters.

### Key points:

1. **End** of the string - **\0**

The end of a string is marked with the character **'\0'** in the array of characters.
### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o longest-line longest-line.c
$ ./longest-line
```

### Exercises:

#### Exercise 1-16

Revise the main routine of the longest-line program so it will correctly print the length of arbitrarily long input lines, and as much as possible of the text.

Refer to **longest-line-rev.c** source code file

##### Compile and run the code:

```
$ gcc -o longest-line-rev longest-line-rev.c
$ ./longest-line-rev
```

#### Exercise 1-17

Write a program to print all input lines that are longer than 80 characters.

Refer to **print-more-than-80.c** source code file

##### Compile and run the code:

```
$ gcc -o print-more-than-80 print-more-than-80.c
$ ./print-more-than-80
```

#### Exercise 1-18

Write a program to remove trailing blanks and tabs from each line of input, and to delete entirely blank lines.

Refer to **strip-line.c** source code file

##### Compile and run the code:

```
$ gcc -o strip-line strip-line.c
$ ./strip-line
```

#### Exercise 1-19

Write a function to reverse a character string. Use it to write a program that reverses its input a line at a time.

Refer to **reverse-string.c** source code file

##### Compile and run the code:

```
$ gcc -o reverse-string reverse-string.c
$ ./reverse-string
```

---

## 1.10 External Variables and Scope

Variables that are external to all functions, and the scope rules that govern them.

### Key points:

1. **Local** variables - automatic.

Each local variable in a function comes into existence only when the function is called, and disappears when the function exits. This is why such variables are known as **automatic**.

2. **External** variables - external to all functions.

Variables that can be accessed by name by any function; they are globally accessible and thus can be used instead of function arguments, and they retain their values even after the functions that set them have returned.

An external variable must be **defined** exactly **once** outside of any function; this sets aside **storage** for it.
It must also be **declared** in each **function** that wants to **access** it; this **states** the **type** of the variable.

3. The **extern** declaration can be **omitted** if the variable was **defined** earlier in the **same file**.

4. **Usual practice** is to collect **extern** declarations of variables and functions in a separate file, historically called a **header**, that is included by **#include** at the front of each file; the **suffix .h** is **conventional** for header **names**.

### Compile and run the code:

Assuming one uses gcc as a compiler run in shell:

```
$ gcc -o extern-longest-line extern-longest-line.c
$ ./extern-longest-line
```

### Exercises:

#### Exercise 1-20

Write a program **detab** that **replaces tabs** in the input with the proper number of blanks to space to the next tab stop. Assume a fixed set of tab stops, say every **n** columns. Should n be a variable or a symbolic parameter?

**n** should be a variable, to allow different tab size options.

Refer to **detab.c** source code file

##### Compile and run the code:

```
$ gcc -o detab detab.c
$ ./detab
```

#### Exercise 1-21

Write a program **entab** that **replaces strings of blanks** in the input with the minimum number of tabs and spaces to achieve the same thing.

Refer to **entab.c** source code file

##### Compile and run the code:

```
$ gcc -o entab entab.c
$ ./entab
```

#### Exercise 1-22

Write a program **fold** that folds long input lines into two or more shorter lines after the last non-blank character that occurs before the n-th column of input. Make sure your program does something intelligent with very long lines, and if there are no blanks or tabs before the specified column.

Refer to **fold.c** source code file

##### Compile and run the code:

```
$ gcc -o fold fold.c
$ ./fold
```
24.145179
262
0.623306
eng_Latn
0.994712
c97ab3811a13fbccf98abfa6b54b89881f6111f8
816
md
Markdown
docs/actions.md
bootstrap-styled/redux
029d6949d5950a8583e63204314b72bc4d9adb69
[ "MIT" ]
9
2018-11-15T22:44:19.000Z
2020-05-02T15:25:25.000Z
docs/actions.md
bootstrap-styled/redux
029d6949d5950a8583e63204314b72bc4d9adb69
[ "MIT" ]
59
2018-11-17T07:17:21.000Z
2020-04-25T11:18:21.000Z
docs/actions.md
bootstrap-styled/redux
029d6949d5950a8583e63204314b72bc4d9adb69
[ "MIT" ]
null
null
null
You can dispatch the following actions against the store:

```js
/**
 * Used to change the theme
 * @param theme
 * themes are identified with a key '_name'
 * OR it can be the name of a theme stored in themes
 */
export const changeTheme = (theme) => ({
  type: CHANGE_THEME,
  theme,
});

/**
 * Add a new theme to the redux themes
 * themes are identified with a key '_name'
 * @param theme
 */
export const storeTheme = (theme) => ({
  type: STORE_THEME,
  theme,
});

/**
 * Remove an existing theme from the redux themes
 * themes are identified with a key '_name'
 * @param theme
 */
export const deleteTheme = (theme) => ({
  type: DELETE_THEME,
  theme,
});

/**
 * Remove all stored themes
 */
export const deleteThemes = () => ({
  type: DELETE_THEMES,
});
```
18.976744
57
0.656863
eng_Latn
0.988799
c97b5e5066bf90bf6728805e882d991581cd1618
4,858
md
Markdown
API-Design/Best-Practices-in-API-Design.md
edwardlee03/tech-study
a52ef87b088da47b1afc58be7fd47129ac4b0b71
[ "Apache-2.0" ]
18
2018-12-02T14:49:20.000Z
2021-08-05T08:00:33.000Z
API-Design/Best-Practices-in-API-Design.md
edwardlee03/tech-study
a52ef87b088da47b1afc58be7fd47129ac4b0b71
[ "Apache-2.0" ]
null
null
null
API-Design/Best-Practices-in-API-Design.md
edwardlee03/tech-study
a52ef87b088da47b1afc58be7fd47129ac4b0b71
[ "Apache-2.0" ]
11
2019-05-14T06:27:14.000Z
2021-11-26T12:51:27.000Z
Best Practices in API Design
API设计的最佳实践
============================

> Keshav Vasudevan, 2016.10.10

Good API design is a topic that comes up a lot for teams that are trying to perfect their API strategy.

**The benefits of a well-designed API** include: **improved developer experience**, **faster documentation**, and **higher adoption for your API**. But **what exactly goes into good API design?** In this blog post, I will detail **a few best practices for designing RESTful APIs**.

精心设计的API的好处包括:改善的开发者体验,更快的文档记录,更高的API采用率。

### Characteristics of a well-designed API 精心设计的API的特征

In general, an effective API design will have the following characteristics:

* **Easy to read and work with 易于阅读和使用**: A well designed API will be easy to work with, and its resources and associated operations can quickly be memorized by developers who work with it constantly.
* **Hard to misuse 难以滥用**: Implementing and integrating with an API with good design will be a straightforward process, and writing incorrect code will be a less likely outcome. It has informative feedback, and does not enforce strict guidelines on the API's end consumer.
* **Complete and concise 完整而简洁**: Finally, a complete API will make it possible for developers to make full-fledged applications against the data you expose. Completeness usually happens over time, and most API designers and developers incrementally build on top of existing APIs. It is an ideal which every engineer or company with an API must strive towards.

### Collections, Resources, and their URLs

#### Understanding resources and collections 理解资源和集合

**Resources are fundamental to the concept of REST.** **A resource is an object** that's important enough to be referenced in itself. A resource has **data, relationships to other resources, and methods** that operate against it to allow for accessing and manipulating the associated information.
**A group of resources is called a collection.** The contents of collections and resources depend on your organizational and consumer requirements.

**A Uniform Resource Locator (URL) identifies the online location of a resource.** This URL points to the location where your API's resources exist. The base URL is the consistent part of this location.

#### Nouns describe URLs better 名词描述URL更好

The base URL should be **neat, elegant and simple** so that developers using your product can easily use it in their web applications. A long and difficult-to-read base URL is not just bad to look at, but can also be prone to mistakes when someone tries to retype it.

Nouns should always be trusted. There's no rule on keeping the resource nouns singular or plural, though it is advisable to keep collections plural. **Having the same plurality across all resources and collections respectively for consistency** is **good practice**.

### Describe resource functionality with HTTP methods 使用HTTP方法描述资源的功能

All resources have a set of methods that can be operated against them to work with the data being exposed by the API. RESTful APIs consist mainly of HTTP methods which have well-defined and unique actions against any resource.

### Responses

#### Give feedback to help developers succeed 提供反馈以帮助开发人员成功

Providing good feedback to developers on how well they are using your product goes a long way in improving adoption and retention. Every client request and server-side response is a message and, in an ideal RESTful ecosystem, these messages must be self-descriptive.

Good feedback involves positive validation on correct implementation, and an informative error on incorrect implementation that can help users debug and correct the way they use the product. For an API, **errors are a great way to provide context to using an API.**

**Align your errors around the standard HTTP codes.** Incorrect client-side calls should have 400-type errors associated with them.
If there are any server-side errors, then a suitable 500 response must be associated with them. A successful method used against your resource should return a 200-type response. For additional information, [check out this REST API tutorial](http://www.restapitutorial.com/httpstatuscodes.html).

In general, there are three possible outcomes when using your API:

1. The client application behaved erroneously (client error - 4xx response code)
2. The API behaved erroneously (server error - 5xx response code)
3. The client and API worked (success - 2xx response code)

### Requests

#### Handle complexity elegantly 优雅地处理复杂问题

The data you're trying to expose can be characterized by a lot of properties which could be beneficial for the end consumer working with your API. These properties describe the base resource and isolate specific assets of information that can be manipulated with the appropriate method.

[原文](https://swagger.io/blog/api-design/api-design-best-practices/)
63.090909
162
0.788802
eng_Latn
0.998635
c97b83ae0950fae2171be32232ffcbb7d53bf6ee
2,893
md
Markdown
CONTRIBUTING.md
qjuanp/grelay
ecc8277e3b59ebd13b3e106339feac798ac4fb27
[ "MIT" ]
11
2021-10-13T15:24:19.000Z
2022-03-29T06:46:45.000Z
CONTRIBUTING.md
qjuanp/grelay
ecc8277e3b59ebd13b3e106339feac798ac4fb27
[ "MIT" ]
3
2021-10-30T12:58:17.000Z
2022-03-29T21:40:52.000Z
CONTRIBUTING.md
qjuanp/grelay
ecc8277e3b59ebd13b3e106339feac798ac4fb27
[ "MIT" ]
3
2021-10-15T20:11:35.000Z
2022-03-29T19:49:46.000Z
# **Contributing to gRelay**

First of all, thanks for the interest in contributing, we can do great work together :earth_americas: :leaves:

The following is a set of guidelines for contributing to the gRelay project. It is important to be clear: these guidelines are just guidelines, not rules. Use your best judgment, and feel free to propose changes to this document.

#### **Table Of Contents**

[:earth_africa: How can I contribute?](#earth_africa-how-can-i-contribute)
* [:bug: Reporting bugs](#bug-reporting-bugs)
* [:bulb: Suggesting an enhancement](#bulb-suggesting-an-enhancement)
* [:octocat: Code contribution](#octocat-code-contribution)

[:art: Styleguides](#art-styleguides)

## **:earth_africa: How can I contribute?**

### **:bug: Reporting bugs**

If you find a bug, follow these steps:

* Ensure the bug was not already reported by searching on GitHub under [Issues](https://github.com/grelay/grelay/issues).
* If you don't find any open issue, open an [issue](https://github.com/grelay/grelay/issues) that follows the [bug_report.md](https://github.com/grelay/grelay/blob/main/.github/ISSUE_TEMPLATE/bug_report.md) template.

### **:bulb: Suggesting an enhancement**

If you want to suggest a new feature for gRelay, follow these steps:

* Check that the enhancement was not already implemented;
* If the enhancement was not already implemented, open an [issue](https://github.com/grelay/grelay/issues) that follows the [feature_request.md](https://github.com/grelay/grelay/blob/main/.github/ISSUE_TEMPLATE/feature_request.md) template.

### :octocat: Code contribution

* If you want to contribute to the project and this is your first time here, we recommend searching for an [issue](https://github.com/grelay/grelay/issues) with the `good first issue` label
* If you are looking to contribute to the project and this is not your first time here, we recommend searching for an [issue](https://github.com/grelay/grelay/issues) which has a `bug` label or an `enhancement` label
* After finding your issue, you can fork the gRelay project and start to develop the solution. Don't forget to [configure your forked project](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork)
* Before opening a Pull Request, run the tests and check that the code follows the [styleguide](#art-styleguides)
* Finally, open a Pull Request following the [pull_request_template.md](https://github.com/grelay/grelay/blob/main/.github/pull_request_template.md) template.

## **:art: Styleguides**

* We indent using two spaces (soft tabs)
* Clear variable names (`var p periodType = thirdPeriod`)
* Clear comments above the public methods
* This is an open source project. Consider the people who will read your code, and make it look nice for them.

Thanks! :hearts:

gRelay team
47.42623
256
0.752506
eng_Latn
0.987646
c97b99eb503405ccbe40231020f0f68ade19eaf1
134
md
Markdown
playbooks/kerberos/README.md
dbist/ansible-roles
81ce44d259fe1badc39326b11fabecb3448705f5
[ "Apache-2.0" ]
null
null
null
playbooks/kerberos/README.md
dbist/ansible-roles
81ce44d259fe1badc39326b11fabecb3448705f5
[ "Apache-2.0" ]
10
2020-05-01T12:10:33.000Z
2022-03-02T16:05:47.000Z
playbooks/kerberos/README.md
dbist/ansible-roles
81ce44d259fe1badc39326b11fabecb3448705f5
[ "Apache-2.0" ]
null
null
null
# Kerberos server playbook ## ssh to the vm ```bash vagrant ssh ``` ## connect to kadmin as root ```bash sudo su kadmin.local ```
8.933333
28
0.656716
eng_Latn
0.925384
c97bf2281bcf0fe811b866e724bddb2d020a8f73
2,247
md
Markdown
windows-driver-docs-pr/audio/making-portdmus-the-default-directmusic-port-driver.md
pravb/windows-driver-docs
c952c72209d87f1ae0ebaf732bd3c0875be84e0b
[ "CC-BY-4.0", "MIT" ]
4
2017-05-30T18:13:16.000Z
2021-09-26T19:45:08.000Z
windows-driver-docs-pr/audio/making-portdmus-the-default-directmusic-port-driver.md
pravb/windows-driver-docs
c952c72209d87f1ae0ebaf732bd3c0875be84e0b
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/audio/making-portdmus-the-default-directmusic-port-driver.md
pravb/windows-driver-docs
c952c72209d87f1ae0ebaf732bd3c0875be84e0b
[ "CC-BY-4.0", "MIT" ]
1
2018-01-11T11:23:06.000Z
2018-01-11T11:23:06.000Z
--- title: Making PortDMus the Default DirectMusic Port Driver description: Making PortDMus the Default DirectMusic Port Driver ms.assetid: 1e498eb1-8a48-4240-8557-2fd2bba02abb keywords: - port drivers WDK audio , default DMus port driver - default DMus port driver ms.author: windowsdriverdev ms.date: 04/20/2017 ms.topic: article ms.prod: windows-hardware ms.technology: windows-devices --- # Making PortDMus the Default DirectMusic Port Driver ## <span id="making_portdmus_the_default_directmusic_port_driver"></span><span id="MAKING_PORTDMUS_THE_DEFAULT_DIRECTMUSIC_PORT_DRIVER"></span> To make the DMus port driver the default for all DirectMusic applications, generate a GUID (using uuidgen.exe or guidgen.exe, which are included in the Microsoft Windows SDK) to uniquely identify your synth. Your [**KSPROPERTY\_SYNTH\_CAPS**](https://msdn.microsoft.com/library/windows/hardware/ff537389) property handler should copy this GUID into the **Guid** member of the [**SYNTHCAPS**](https://msdn.microsoft.com/library/windows/hardware/ff538424) structure. 
Also, modify your driver's INF file to set up the following registry entry: ``` Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\DirectMusic\Defaults String Value: DefaultOutputPort ```
59.131579
965
0.803738
eng_Latn
0.441315
c97c4b5d444f0d4e1f652b698a43d01302915bbd
2,858
md
Markdown
_posts/2019-08-30-190830.md
hojinYang/hojinYang.github.io
228d736239dc0c79795ed5d57b119edb5a2aa27b
[ "MIT" ]
1
2020-04-25T05:20:45.000Z
2020-04-25T05:20:45.000Z
_posts/2019-08-30-190830.md
hojinYang/hojinYang.github.io
228d736239dc0c79795ed5d57b119edb5a2aa27b
[ "MIT" ]
1
2020-04-28T01:24:53.000Z
2020-04-28T15:23:14.000Z
_posts/2019-08-30-190830.md
hojinYang/hojinYang.github.io
228d736239dc0c79795ed5d57b119edb5a2aa27b
[ "MIT" ]
null
null
null
--- title: "캐나다 대학원/영주권 이런 저런 정보" categories: - blogging last_modified_at: 2019-08-30T13:00:00+09:00 author_profile: no comments: true --- 유학 준비생에게 도움을 줄 수 있을까 생각해서 블로깅을 시작했지만, 정작 가장 중요한 유학 준비 과정에 대해서는 아는 바가 별로 없다. 유학에 필요한 영어점수와 서류 준비를 끝낸 뒤 여러 대학에 지원하는 것이 일반적인 준비 과정이지만, 나는 운이 좋게도 연구 주제가 맞는 지금의 지도교수님과 사전에 컨택이 되어 딱 필요한 점수만 맞춰 토론토대학에만 지원하였기 때문이다. 전반적인 유학 준비 과정은 스탠포드 재학중이신 [이민아님 블로그](https://minalee.info/2016/12/19/%eb%b0%95%ec%82%ac%ea%b3%bc%ec%a0%95-%ec%a7%80%ec%9b%90%ec%9d%84-%ec%a4%80%eb%b9%84%ed%95%98%eb%a9%b0/)에 잘 정리가 되어있다. 그래서 나의 준비과정 블로그는 끝! 이 아니라… 그래도 나름 알고 있는 캐나다 대학원(특히 공대)가 가진 특징 몇 가지를 정리해본다. <figure> <img src="{{ 'assets/images/posts/info1.jpg' | relative_url }}" alt="bundle install in Terminal window"> </figure> - 석사과정이 Research Track과 Course Track으로 나뉜다. 미국과 캐나다의 대학원 과정을 조사하던 중 두 나라의 석사과정이 사뭇 다르다는 것을 알게되었다. 미국에서의 석사는 학사의 연장선 느낌으로, 수업 및 프로젝트 위주로 진행된다. 연구실 생활에 관심이 있는 경우 입학 후 개인적으로 교수님과 컨택하는 것이 일반적. 반면 캐나다에선 한국과 비슷하게 연구실을 배정받고 랩돌이의 지위를 획득할 수 있는 Research track이 존재한다. 이 과정의 장점은 아무래도 교수님의 advising을 받을 수 있다는 점(^^)에 더해 경제적으로 안정적이라는 것. 일반적으로 등록금 면제에 RA/TA를 통해 생활비까지 벌 수 있다. 풍족하다고까지 말할 수는 없지만, 혼자 자급자족할 수준의 벌이는 된다. - 졸업 후 Work permit이 나온다. 유학을 고민할 때 세 손가락 안에 드는 고민이 졸업 후에 먹고 살 걱정이다. 만약 학위를 마치고 현지에서 경력을 쌓아보고 싶다면 캐나다를 고려해보는 것도 좋을 듯하다. 유학생들은 학위 중에는 study permit으로 캐나다에서 지내게 되며, 학위를 마치면 Post-graduation Work Permit(PGWP)이 나온다. 말 그대로 졸업 후 일을 할 수 있는 work permit인데, 개인의 학위 기간만큼, 최대 2년동안 합법적으로 캐나다에서 일을 할 수 있다. 예를 들어 학위기간이2년이면 PGWP역시 2년, 4년이어도 2년동안 일을 할 수 있는 권리를 얻게 된다. - 영주권 신청 Study permit이나 Work permit은 모두 임시로 캐나다에 머무를 수 있는 권리이므로, 캐나다에서 오래 거주할 계획인 분들은 영주권(Permanent residence)을 신청한다. 캐나다에서 학위를 마쳤다면 다양한 전형으로 영주권을 신청할 수 있다. 그 중에서도 Express Entry Program를 통해 지원하는 것이 일반적인데, 경력, 언어점수, 학위, 나이 등등 점수를 산정하여 매년마다 공지되는 점수 이상이면 영주권을 받는 시스템이다. 대략적으로 학위를 마친 뒤 1년 정도 일을 한 경력이 있다면 통과 점수를 만들 수 있다고는 하지만, 개개인마다 점수가 다르니 확인해보아야 한다. 온타리오 주에서 석/박사를 하였다면 OINP Masters/phD 엔트리로 영주권을 신청할 수 있다. 그러나 비정기적으로 오픈되고, 신청 후 영주권을 받기까지 평균 1.5년이 걸린다고 알려져 있어 Express Entry와 비교했을 때 큰 장점이 있는 전형은 아닌 듯싶다. 
<figure> <img src="{{ 'assets/images/posts/info2.jpg' | relative_url }}" alt="bundle install in Terminal window"> </figure> 다음은 토론토 생활 알쓸신잡. - 토론토 대중교통(TTC)은 크게 버스, 지하철, 스트릿카가 있으며 기본 요금은 모두 $3. 두시간동안 모든 교통수단 무료환승 가능하다. - 한인타운은 다운타운에 하나(Bloor), 그리고 토론토 북쪽 Finch에 하나. - 굳이 한인타운이 아니더라도 토론토에는 한인마트(H-mart, PAT, 갤러리아)가 많이 있다. 집 나가면 먹는게 제일 고민인데, 한인마트 근처에 살면 삶의 질이 상승한다. - 다운타운 기준 룸 렌트 비용은 대략 $700~$1,200. 가을학기 시작 직전 7~8월에 가격이 많이 올라간다. - 식사는 저렴한 한끼 기준 $8~$11. - 대학교 안에 입점해있는 식당이라고 특별히 더 저렴하진 않지만, 학기중엔 푸드트럭이 교내로 찾아오니 한 끼 저렴히 해결하기 괜찮다. - 다양한 통신사가 있으며 한달 요금은 대략 $40~60 내외로 넉넉하게 쓸 수 있다. - Research-stream 학생이라면 등록금 및 매달 지급되는 RA 급여를 지급받는 데 필요한 은행 계좌가 필요하다. 계좌를 개설하기 전 Social Insurance Number(SIN)을 발급받아야 하는데, 지도 앱에 Service Canada를 검색해 가까운 지점에 찾아가면 된다. - 다양한 은행이 있으며, 한인 유학생은 주로 TD나 BMO계좌를 계설하는 것 같다. 한인타운에 가면 한인 행원이 있는 경우가 많으니 초반에 많이 복잡하다면 도움을 받는 것을 추천!
66.465116
492
0.710637
kor_Hang
1.00001
c97ca6abe8a60cc45d4983c243be611d6108d0e3
8,593
md
Markdown
help/c-activities/edit-activity.md
vgiurgiu/target.en
0056026e1a35e572ae41c834e66123c3ba1a4054
[ "MIT" ]
null
null
null
help/c-activities/edit-activity.md
vgiurgiu/target.en
0056026e1a35e572ae41c834e66123c3ba1a4054
[ "MIT" ]
null
null
null
help/c-activities/edit-activity.md
vgiurgiu/target.en
0056026e1a35e572ae41c834e66123c3ba1a4054
[ "MIT" ]
null
null
null
--- description: Information about the different ways you can edit an existing activity, including saving an activity in draft form. keywords: activities;activity;activity types;edit activity;edit;draft seo-description: Information about the different ways you can edit an existing activity, including saving an activity in draft form. seo-title: Edit an activity or save as draft solution: Target title: Edit an activity or save as draft topic: Standard uuid: bfc7a045-ebdb-40b3-badc-668fbbe2fcf3 --- # Edit an activity or save as draft{#edit-an-activity-or-save-as-draft} Information about the different ways you can edit an existing activity, including saving an activity in draft form. Target provides various places in the UI where you can edit existing activities. The process varies depending on the method you choose. ## Edit an activity by using the hover button on the Activities page {#section_29EE2ECA6B88473A8F9AC5600FFBB174} 1. From the **[!UICONTROL Activities]** page, hover over the activity you want to edit, then click the **[!UICONTROL Edit]** icon. ![Edit icon](/help/c-activities/assets/hover_edit.png) Target opens the activity in the Visual Experience Composer (VEC) and you see the [!UICONTROL Experiences] page (the first step in the three-step guided workflow). 1. Edit the activity, as desired using the [VEC options](/help/c-experiences/c-visual-experience-composer/viztarget-options.md). 1. Click the split button to advance to the next step or to save the activity. ![Split button](/help/c-activities/assets/edit_split_button_2.png) * **Next:** To edit another page in the three-step workflow, click **[!UICONTROL Next]** to advance to the desired step. For example, in the illustration above, clicking [!UICONTROL Next] displays the [!UICONTROL Targeting] step. 
* **Save & Close:** Make the desired changes on the current step, click the drop-down on the split button, then select **[!UICONTROL Save and Close]** to save your changes and display the activity's [!UICONTROL Overview] page. * **Save:** Make the desired changes on a step, click the drop-down on the split button, then select **[!UICONTROL Save]** to save your changes and remain on that step where you can continue to make changes. Wait for the save to complete before making additional changes. The VEC reloads with the refreshed changes after the save is complete. ## Edit an activity by opening the activity by clicking its name on the Activities page {#section_176180DAD17E40CEA441903F39E0AA1C} 1. To avoid having to step through the workflow, click the desired activity from the Activities page to open it, then select an option from the **[!UICONTROL Edit Activity]** drop-down list. ![Edit Activity drop-down](/help/c-activities/assets/edit_activity.png) 1. Select the desired option: * **Edit Experiences:** Takes you directly to the [!UICONTROL Experiences] page (the first step in the guided workflow). Make your desired changes, then use the split button (explained above) to save the activity. * Click **[!UICONTROL Save & Close]** to save your changes and display the activity's Overview page. * Click **[!UICONTROL Save]** to save your changes and remain on that step where you can continue to make changes. Wait for the save to complete before making additional changes. The VEC reloads with the refreshed changes after the save is complete. * **Edit Targeting:** Takes you directly to the [!UICONTROL Targeting] page (the second step in the guided workflow). Make your desired changes, then use the split button (explained above) to save the activity. * Click **[!UICONTROL Save & Close]** to save your changes and display the activity's Overview page. * Click **[!UICONTROL Save]** to save your changes and remain on that step where you can continue to make changes.
Wait for the save to complete before making additional changes. The VEC reloads with the refreshed changes after the save is complete. * **Edit Goals & Settings:** Takes you directly to the [!UICONTROL Goals & Settings] page (the final step in the guided workflow). Make your desired changes, then use the split button (explained above) to save the activity. * Click **[!UICONTROL Save & Close]** to save your changes and display the activity's Overview page. * Click **[!UICONTROL Save]** to save your changes and remain on that step where you can continue to make changes. Wait for the save to complete before making additional changes. The VEC reloads with the refreshed changes after the save is complete. ## Work with legacy activities created in Recommendations Classic {#classic} The [!UICONTROL Activities] list display activities created in various sources, including [!DNL Recommendations Classic]. The following actions are available when working with legacy activities created in [!DNL Recommendations Classic]: * [!UICONTROL Activate] * [!UICONTROL Deactivate] * [!UICONTROL Archive] * [!UICONTROL Copy] * [!UICONTROL Delete] You cannot edit a [!DNL Recommendations] activity directly. If you want to edit the activity, you should create a copy of the activity using [!DNL Target Premium] and then save the newly created activity. This newly created activity can then be edited as necessary. ## Save an activity in draft form {#section_968CD7A63027432EBD8FAE3A0F7404C3} When you are creating a new activity that has not yet been saved, or you are editing an activity that was previously saved in draft form, the Save Draft options display in the split button. You can save an activity in draft mode if the activity setup has been started but it is not ready to run. 1. Create new activity or edit an existing activity that is in draft form. 1. 
Select the desired option from the split button: ![Save Draft](/help/c-activities/assets/save_draft.png) * **Next:** To edit another page in the three-step workflow, click **[!UICONTROL Next]** to advance to the desired step. * **Save Draft & Close:** Make the desired changes on the current step, click the drop-down on the split button, then select **[!UICONTROL Save Draft and Close]** to save your changes and display the activity's [!UICONTROL Overview] page. * **Save Draft:** Make the desired changes on a step, click the drop-down on the split button, then select **[!UICONTROL Save Draft]** to save your changes and remain on that step. ## Copying/Editing an Activity When Using Workspaces {#section_45A92E1DD3934523B07E71EF90C4F8B6} A workspace lets an organization assign a specific set of users to a specific set of properties. In many ways, a workspace is similar to a report suite in [!DNL Adobe Analytics]. >[!NOTE] > >Workspaces are part of the Properties and Permissions functionality available as part of the [!DNL Target Premium] solution. They are not available in [!DNL Target Standard] without a [!DNL Target Premium] license. If you are part of a multi-national organization, you might have a workspace for your European web pages, properties, or sites and another workspace for your American web pages, properties, or sites. If you are part of a multi-brand organization, you might have a separate workspace for each of your brands. For more information about workspaces and the Enterprise User Permissions functionality, see [Enterprise User Permissions](../administrating-target/c-user-management/property-channel/property-channel.md#concept_E396B16FA2024ADBA27BC056138F9838). If you have Enterprise User Permissions enabled in your environment, you can copy activities to the same workspace or to another workspace. You cannot currently move an activity from one workspace to another. 
To copy an activity to another workspace, from the [!UICONTROL Activities] page, hover over the activity you want to copy, click the [!UICONTROL Copy] icon, then select the desired workspace from the drop-down list. Consider the following information when using the copy/edit functionality with workspaces: * When you copy an activity within same workspace, the first step of the creation flow of the newly copied activity opens in edit mode. * When you copy an activity to a different workspace, the activity is copied to the other workspace without opening it in the activity creation flow. After the activity is copied successfully, a message displays indicating that the activity was copied successfully and includes a link to open the new activity. If your environment does not have the Enterprise User Permissions functionality enabled, all activities open in edit mode before copying.
81.066038
424
0.773769
eng_Latn
0.995022
c97e3156644b2f89366665e7c3eab21cef59d0be
13
md
Markdown
README.md
jamesmontemagno/lukelov.es
c850039ff39f1dc8a88f19647567d95926905370
[ "MIT" ]
1
2017-12-28T19:16:30.000Z
2017-12-28T19:16:30.000Z
README.md
jamesmontemagno/lukelov.es
c850039ff39f1dc8a88f19647567d95926905370
[ "MIT" ]
11
2019-12-10T00:33:09.000Z
2021-05-14T00:39:10.000Z
README.md
lukekarrys/lukecod.es
7e0ef757357d23d235d44572b03aa8b8f112b232
[ "MIT" ]
null
null
null
# lukecod.es
6.5
12
0.692308
vie_Latn
0.194127
c97ea611b4cb193d0a14bde2fdc53b291113e47c
723
md
Markdown
edmm-core/src/main/java/io/github/edmm/plugins/ansible/README.md
codacy-badger/transformation-framework
f32d936c78b58b1cb58bf4c7e6f7befcfe7d7460
[ "Apache-2.0" ]
4
2020-06-22T07:40:08.000Z
2021-10-05T08:34:20.000Z
edmm-core/src/main/java/io/github/edmm/plugins/ansible/README.md
codacy-badger/transformation-framework
f32d936c78b58b1cb58bf4c7e6f7befcfe7d7460
[ "Apache-2.0" ]
6
2020-05-06T11:47:36.000Z
2021-09-30T09:57:37.000Z
edmm-core/src/main/java/io/github/edmm/plugins/ansible/README.md
codacy-badger/transformation-framework
f32d936c78b58b1cb58bf4c7e6f7befcfe7d7460
[ "Apache-2.0" ]
3
2020-11-30T06:08:21.000Z
2021-05-05T12:04:21.000Z
# Ansible Plugin The Ansible plugin creates one playbook for the entire application topology. The topology graph is first reversed and sorted topologically to determine the deployment order. Afterwards, a play is created for each component. As play tasks represent operations, together they form a logical EDMM component. Further, all properties are included as play variables in order to be accessed by operations, while all available operations are executed using the respective Ansible task. The Ansible plugin assumes that there is already a running virtual compute resource available. However, we could use Ansible's EC2 provisioner to spin up EC2 instances, but this is something that will be improved in future work.
65.727273
169
0.824343
eng_Latn
0.999743
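The deployment-ordering step described in that Ansible plugin README — reverse the topology graph's dependency edges, then sort topologically — can be sketched as follows. The component names are hypothetical; the actual plugin is implemented in Java inside edmm-core, so this is only an illustration of the technique:

```python
# Sketch of the ordering step: each component maps to the set of
# components it depends on; a topological sort over those edges
# yields an order in which every component is deployed after its
# dependencies.
from graphlib import TopologicalSorter

def deployment_order(depends_on):
    """depends_on maps component -> set of components it requires."""
    # static_order() emits nodes only after all their predecessors,
    # which is exactly the deployment order when predecessors are
    # the required components.
    return list(TopologicalSorter(depends_on).static_order())

# Hypothetical topology: an app hosted on Tomcat and a database,
# both running on one virtual machine.
topology = {
    "petclinic": {"tomcat", "db"},
    "tomcat": {"vm"},
    "db": {"vm"},
    "vm": set(),
}
order = deployment_order(topology)
```

With this topology, `vm` comes first and `petclinic` last, matching the plugin's described behavior of deploying a component only once everything it sits on is in place.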
c97f350f4d8a72f148d02ea826a5cf741b60ef5d
9,891
md
Markdown
partner-center/new-commerce-promotions.md
MicrosoftDocs/partner-center-pr.de-de
058ce670a9a4a7f181c562ae3775a6191c48a2ff
[ "CC-BY-4.0", "MIT" ]
4
2020-05-19T19:37:44.000Z
2021-09-01T13:42:06.000Z
partner-center/new-commerce-promotions.md
MicrosoftDocs/partner-center-pr.de-de
058ce670a9a4a7f181c562ae3775a6191c48a2ff
[ "CC-BY-4.0", "MIT" ]
null
null
null
partner-center/new-commerce-promotions.md
MicrosoftDocs/partner-center-pr.de-de
058ce670a9a4a7f181c562ae3775a6191c48a2ff
[ "CC-BY-4.0", "MIT" ]
4
2019-10-09T19:51:31.000Z
2021-10-09T10:23:41.000Z
--- title: Promotions im neuen E-Commerce-Verfahren ms.topic: article ms.date: 09/29/2021 ms.service: partner-dashboard ms.subservice: partnercenter-pricing description: Erfahren Sie mehr über neue Commerce-Erfahrungen zum Entdecken und Kaufen von Werbeaktionen. author: BrentSerbus ms.author: brserbus ms.localizationpriority: medium ms.custom: SEOMAY.20 ms.openlocfilehash: ae227414e995a26ebabd4530a1e5e01012449dd3 ms.sourcegitcommit: 947c91c2c6b08fe5ce2156ac0f205d829c19a454 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 10/12/2021 ms.locfileid: "129828363" --- # <a name="introduction-new-commerce-promotions"></a>Einführung: Neue Commerce-Werbeaktionen **Geeignete Rollen** - Administrator-Agent - Vertriebsbeauftragter - Globaler Administrator > [!Note] > Die neuen Änderungen an der E-Commerce-Erfahrung sind derzeit nur für Partner verfügbar, die Teil der technical preview Microsoft 365/Dynamics 365 new commerce experience sind. Microsoft unterstützt Werbeaktionen im neuen Handel. Diese Aktionen haben unterschiedliche Rabattbeträge und -daueren. ## <a name="discovering-promotions"></a>Entdecken von Werbeaktionen ## Partner können Werbeaktionen entdecken, indem sie das Backlog für Werbeaktionen besuchen oder die getPromotions-API aufrufen. Das Backlog für Werbeaktionen ist eine Microsoft-Liste der verfügbaren Werbeaktionen, die Partner kennen müssen. Die Liste wird monatlich redell verwaltet und aktualisiert. ## <a name="operationalize-promotions"></a>Operationalisieren von Werbeaktionen ## Partner können die Werbeaktionen operationalisieren, indem sie die [getPromotions-API implementieren.](/partner-center/develop/get-promotions) Diese API gibt alle Aktionen zurück, die für einen bestimmten Markt (das Land des Kunden) und segmentiert sind. Die API gibt die Liste der Werbeaktionen und wichtige Informationen zurück, damit der Partner nachvollziehen kann, welche Werbeaktionen für Kunden in verschiedenen Ländern verfügbar sind. 
Die getPromotions-API enthält die folgenden Daten für eine bestimmte Promotion: - Dauer der Promotion - Prozentualer Rabatt für die Promotion - Die Produkte und SKUs, für die die Promotion verfügbar ist Werbeaktionen werden vom Partner Center angewendet, wenn der Partner die Produkt-SKU erwirbt, für die die Promotion verfügbar ist. Partneraktionen sind auf der Benutzeroberfläche des Partner Center-Katalogs in den Produkt-SKU-Details verfügbar. Sie können auf die Schaltfläche "Anzeigen von Promotiondetails" klicken, um weitere Informationen zur Promotion zu erhalten. Auf die Möglichkeit, die Promotiondetails anzuzeigen, können Sie über die SKU-Details der Katalogseitenansicht, die Überprüfungsseite vor der Übermittlung des Kaufs, die Bestätigung nach übermittlung der Bestellung und die Seite mit dem Bestellverlauf zugreifen. ## <a name="verify-eligibility"></a>Überprüfen der Berechtigung ## Partner können anzeigen, ob ein Kundenkauf für eine Promotion berechtigt ist, indem sie die Informationen auf der Überprüfungsseite im Partner Center sehen, bevor sie das Produkt kaufen. Partner können auch die [verifyPromotionEligibility-API aufrufen](/partner-center/develop/verify-promotion-eligibility)und dabei die Kunden-Mandanten-ID und die Promotion-ID übergeben. Der Anruf gibt TRUE zurück, wenn der Kunde berechtigt ist. Wenn der Kunde nicht berechtigt ist, gibt die API die Bedingungen zurück, die nicht erfüllt wurden, damit die Promotion anwendbar ist. Partner können die Überprüfungsberechtigung anrufen und Ergebnisse erhalten. Berechtigungsfehler können auf der Anzahl von Bzw. inkompatiblen Bedingungen oder den Grenzwerten basieren, mit der eine Promotion auf die Produkt-SKU eines Kunden angewendet werden kann. 
Wichtige Themen für neue Commerce-Werbe-APIs: - [GetPromotions-API](/partner-center/develop/get-promotions) - [GetPromotionsById-API](/partner-center/develop/get-promotion-by-id) - [VerifyPromtionEligibilities](/partner-center/develop/verify-promotion-eligibility) - [Ressourcen zu Werbeaktionen](/partner-center/develop/promotion-resources) >[!IMPORTANT] > Partner sollten Werbeaktionen überprüfen, bevor sie eine Transaktion übermitteln. Wenn auf Partner Center Seite nicht angezeigt wird, wird diese nicht auf die Transaktion angewendet. Der Partner bekommt den Nicht-Promotion-Preis. Partner können auch die Warenkorbzeilen-API anzeigen, um zu sehen, ob die Aktion vorhanden ist, bevor sie eine Transaktion übermitteln. Partner können die API zur Überprüfung von Werbeaktionen aufrufen, bevor sie Transaktionen übermitteln, um zu überprüfen, ob ihre Kundenprodukt-SKU für die Promotion berechtigt ist, und ander wenn dies nicht der Grund für die Unzulässigkeit ist. Es gibt drei Gründe, warum ein Kunde möglicherweise nicht für eine Promotion berechtigt ist. Diese nicht zulässigen Typen werden in den Fällen, in denen der Kunde nicht berechtigt ist, in der API für die Validierungserstufung zurückgegeben. ### <a name="seat-count"></a>Anzahl von Zählern ### Für viele Werbeaktionen kann ein Platz von maximal 2.400 Plätzen angegeben werden. In diesen Fällen wird eine Transaktion mit mehr als 2400 zu den Preisen übermittelt, die keine Höherstufungen sind. Diese Anzahl von Plätzen wird auch erzwungen, wenn einem Promotionabonnement mit diesen Grenzwerten Neue-Zähler hinzugefügt werden. Partner erhalten eine Fehlermeldung, wenn sie versuchen, ein Abonnement mit aktivierter Promotion über die Grenzwerte hinaus zu erhöhen. Die Platzgrenzen von Werbeaktionen werden partnerübergreifend erzwungen. 
Wenn also ein Partner eine Promotion von 2.300 Plätzen mit einer Begrenzung der Anzahl von Promotionplätzen erwirbt, würde ein zweiter Partner, der 200 Plätzen erwirbt, den Abonnementpreis zum Preis erhalten, der nicht zur Promotion gehört. Die Werbung wird auf der Produkt-SKU-Ebene erzwungen, die der Partner abwickelt, sodass ein Partner Werbepreise für 2.400 Microsoft 365 E3-Plätzen und auch für eine andere Produkt-SKU-Microsoft 365 E5. Partner können die [abonnierteSKUs-API aufrufen,](/partner-center/develop/get-a-list-of-available-licenses) um zu sehen, wie viele Lizenzen ein Kunde für eine bestimmte bereitgestellte SKU besitzt. Wenn ein Partner mehr Lizenzen als die 2400 Lizenzen und die Höherstufung möchte, kann er einfach ein Abonnement bis zum Limit von 2.400 zum Höherstufungspreis und ein zweites Abonnement zum Nicht-Höherstufungspreis erwerben. ### <a name="term"></a>Begriff ### Begriffseinschränkungen definieren, welche Produkt-SKU-Begriffe an einer bestimmten Promotion ausgerichtet sind. Für viele Aktionen werden basierend auf dem Begriff unterschiedliche Rabatte definiert. Wenn ein Partner eine Transaktion übermittelt und der Begriff nicht mit der Promotion in Einklang steht, geht er davon aus, dass die Transaktion zu dem von ihnen erwarteten Preis liegt, der nicht der Promotion entspricht. Beispiele für Begriffe sind *jährlich oder* *monatlich.* ### <a name="first-purchase"></a>Erstkauf ### Einige Werbeaktionen erzwingen, dass sie nur einmal erworben werden. Einem Partner wird die Berechtigung false angezeigt, indem *er* die Überprüfungsberechtigungs-API mit dem Fehlertyp *FirstPurchase verwendet.* Ein Partner kann weiterhin die bestimmte Produkt-SKU erwerben, aber das Abonnement hat den Preis, der nicht für die Promotion verwendet wird. Diese Einschränkung gilt pro Kunde und nicht pro Partner. Sobald ein Kunde über eine Promotion mit dieser Regel verfügt, kann er keine zweite Instanz der Aktion erhalten, die von einem zweiten Partner angewendet wird. 
## <a name="promotions-and-renewals"></a>Werbeaktionen und Verlängerungen ## Werberabatte, wenn sie angewendet werden, gelten für die Laufzeit des Kaufs. Abonnements mit angewendeten Werbeaktionen behalten den Aktionspreis bei, wenn das Verlängerungsdatum im Datumsbereich der Promotionsdauer liegt. Verlängerungen außerhalb des Datumsbereichs der Promotiondauer werden auf den Nicht-Promotion-Preis verlängert (aus der Preisliste). Partner können den Verlängerungsstatus auf den Preispunkten auf der Seite mit den Abonnementdetails und in den Anweisungen zur Datenerneuerung getSubscription nachverfolgen. ## <a name="promotions-and-upgrades"></a>Werbeaktionen und Upgrades ## Partner, die ein Upgrade von einem Abonnement auf eine andere SKU durchführen, lassen den Aktionspreis zurück. Diese Aktion tritt auf, weil die Promotion für die SKU konfiguriert wurde, die sie beim Upgrade auf eine andere SKU verlassen. Partner, die ein Upgrade auf eine SKU durchführen, für die möglicherweise eine Promotion ausgeführt wird, erhalten den Aktionspreis nicht automatisch. Wenn sie den Aktionspreis für die SKU benötigen oder möchten, zu der sie wechseln möchten, müssen sie die neue SKU manuell als neues Abonnement erwerben. Derzeit werden Werbeaktionen nur für neue Abonnementkäufe und -verlängerungen angewendet. ## <a name="promotions-and-migrations"></a>Werbeaktionen und Migrationen ## Partner können die Abonnements ihrer Kunden von herkömmlichen Microsoft 365/Dynamics 365 zu neuen Commerceversionen ihrer Abonnements migrieren. Die Migrationen sind sowohl über die Partner Center benutzeroberfläche als auch über den Aufruf der Migrations-APIs verfügbar. Partner, die von einem herkömmlichen Abonnement zu einem neuen Commerce migrieren, erhalten die Promotion, wenn sie migrieren, solange die Produkt-SKU, zu der sie migrieren, der Definition der Promotion entspricht. 
Partner sollten die Überprüfungsberechtigungs-API aufrufen, um sicherzustellen, dass die Zielprodukt-SKU den Aktionspreis vor der Migration anwenden wird. ## <a name="cross-channel-considerations"></a>Kanalübergreifende Überlegungen ## Cloud Solution Provider (CSP) gelten nicht für Kanäle wie Enterprise Agreement (EA). Ein CSP-Partner kann eine Promotion mit einer Einschränkung von 2.400 Promotionen auch dann erwerben, wenn der Kunde über 3.000 Ea-Plätzen verfügen kann.
105.223404
1,181
0.824386
deu_Latn
0.998862
c98050fc95a60b8633c273ddd6cd5331b64504a1
5,139
md
Markdown
packages/vantui/README.md
LeekJay/vantui
910a7ae82e9e8a3c86d8b163b7dd89af935ec4b7
[ "MIT" ]
null
null
null
packages/vantui/README.md
LeekJay/vantui
910a7ae82e9e8a3c86d8b163b7dd89af935ec4b7
[ "MIT" ]
null
null
null
packages/vantui/README.md
LeekJay/vantui
910a7ae82e9e8a3c86d8b163b7dd89af935ec4b7
[ "MIT" ]
null
null
null
<div class="card"> <div class="intro" style="text-align: center; padding: 20px;"> <img class="intro__logo" style="width: 120px; height: 120px; box-shadow: none;" src="https://img.yzcdn.cn/public_files/2017/12/18/fd78cf6bb5d12e2a119d0576bedfd230.png"> <h2 style="margin: 0; font-size: 32px; line-height: 60px;">@antmjs/vantui</h2> <p>一套基于 vant-weapp 开发的在 Taro-React / React 框架中使用的多端 UI 组件库</p> </div> </div> ### 组件文档 [点击查看](https://antmjs.github.io/vantui/#/home) ### 关联 - [Vant Weapp](https://github.com/youzan/vant-weapp):由有赞团队打造的轻量、可靠的微信小程序 UI 组件库 - [Taro](https://github.com/NervJS/taro):由京东团队打造的开放式跨端跨框架解决方案,支持使用 React/Vue/Nerv 等框架来开发微信/京东/百度/支付宝/字节跳动/ QQ 小程序/H5/React Native 等应用 - [React](https://reactjs.org/):Facebook 内部开源出来的用于构建用户界面的 JavaScript 库 ### 起源 - 为什么要做这个组件库?我们认为有赞团队的组件库经过了多年的实践积累,以及经过我们的实际体验之后认为确实是一款优秀的组件库,但可惜的是他们只提供了Vue版本和微信小程序版本,而我们的技术架构选用的是Facebook的React库以及京东的Taro库,所以就开始思考如何能把有赞微信小程序的版本迁移到Taro上面来,最终我们实现了[@antmjs/vantui](https://github.com/antmjs/vantui)。 - 为什么是99%?迁移的步骤其实不难,第一步100%同步样式,第二步通过Taro convert转译之后再重构js部分,但因为有赞微信小程序的版本完全基于微信小程序实现的,所以在改造兼容支付宝小程序、H5的时候还是存在不能100%兼容的情况,具体的个别差异点可以参考[快速上手](https://antmjs.github.io/vantui/#/quickstart)。 - 为什么能支持React应用?创建初期是为了在Taro上面使用才建立的,但当我们开始在H5端测试的时候发现,既然这个库能在Taro版的H5应用中使用,为什么不能在React中使用呢?于是乎我们开始调研Taro的底层架构随即理清思路,在不重构任何组件的前提下使之能在React中使用 ### 预览 正在全力建设中... 
### 优势 <div> <div style="display:inline-block;width:48px;"><img height="20px" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAALoAAAC6CAMAAAAu0KfDAAAAYFBMVEUxeMb///8zesfz9vvD1Ou7zugCasElc8T5+/1umdIsdsUZb8MAaMDm7fZPh8x7otbb5PObt9+FqNg8fsjP2+6kveIAZL9Yjc7g6PSzyOaLrdqSsdyoweMAYb7T4PBjk9Hlav+OAAAF10lEQVR4nO2di5LaIBRAEbIQwIBJyEtT/f+/LNFVd1fyBiUznHZ2OlPjnNLLMxcAuycxUikVEHgJFDRVKP6hCx5/QlFGSk7Ypx37YISXJIvQq7qSnDNPS/wOZJxL9Uc9LnLubXn/hPG8iH+qI7kN8Q7GJXqqowp/WmgOuEJ39Vhiz4P8NxDL+Fu94J+WmQsvbuoq30yc32G56tR1Ff20yXy6qgp20XYalyeMRzsQZxssdF3sWQyQv13/EIwgoMpNNYx3YKlAusl40RGTAko+LbEMQoHYZKjrYBdgk5HesVnxQCAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUAgEAgEAoFAIBAIBAIB5zAN4ZxjjPd73ME5YQxY2gTD1tL7xYSwXAh6LL6ic4dSUVMcaSVy2P3denW4FrM350KemvNzq/qP4wHqKM0uOV+5w5FBFa2jNWxsZRgem0Ns0H6QqFauK3kGh75/CvXL9i2GWVsPet9A9arN3+vVD3/UdRinpjAxsfdKnXCaTH7WK3WSFzOe9UmdV2r8CS/ViahnPeuPOoPTw9wvdYZnRYtP6jid+6wv6uQytTn3TZ2Rr9nPeqLOs96+P0YIJUmif6Lfn/FDnYGeQk/OjRR8X5blHsPLsVGHZ1ytVY+NmArPyPlbvTJGOmopwbwb1HejYz3vwDiXxf3YplXqAEhqwlDllPGD9AKv6iQzmZ/p6+lXeiifX06JBXVipHxV/zJ/8lvNWEkVNB/FoEeXeYZWq5vBBvXBB4ihI63z/kMkGN8XsZPDmuaqM2Io9HTwNCa2p5alb8xWzw3qIzP/nmhay1x1cjGo//vIMRKz1el21aVBHX7kvBQrpT5cTV1hJdaHGkd3WGlhdC/2gZCZ3yWZ1HcNe3+5z1bn5gm1Em8/O21+qfeNeTP45tOw5qubWscrEcXknS3NbPWe8XoHOot/b6yv89VZ26euqSl7m/xsdcDp4IJAfRTkPTV2vjojzZC6rrAt5fwNQT9fHRAxtg6TKFq6r7EL1AHvbWQexEqUrqNmiTosT6PuupOqiNsedom6LvdJC2Ct2x52mTqDI1X1Rn1yObRZpq7dh1r3B7FyOBxeqA4YN805XkG5s5HNUnUd7/l5kryzs76XqwNC0ilvlFDmaGiwQr17j9dOeEuApB/rML8h5DKhmUSVE/d16t1qtIhG3WM/lktf7cu8GXsX2bhYL12vDiDBIlWDGRro4iBkLKiD6/UHMhqS/3JQ7HbUO3l4GQj6urLfQNpS72osHuikjvY7VXvqGrKXfZ1U05/2thSr6rqTEj1DyrP907MtqwPCjkZ1dPFeXRe8eQpl/7h1++qMG1Or7A9k7KsDIg4G9WwL6oCbZlAbUZeGobAH6nA8PZqYMiA/H+usGs/QJRcv1QGp07GZMs8MAeNB40iSnaL7wYLnhh7VwUxpUWYGajjulyeVIV58GAjckkrQifCeARUBpizIL/tvtBfnw8StyF93MzDSkzPr4MaYNak8UUrz636S626Ka8IXpIVxspQ4uDFmXRYSUk0qq/yaZ8dhJVvVsy6jHCwjrU6gilGS1IdakyT960kubhiykvs1SmJ/jvQu9dSLdZgl6slAL+C5OnWyxv4O9dSTld756qmjdwPO1VHqJNDfoF5LV+YL1GftdmiEu/yeBfkw07fHuH1hvSAzo4TtQJd/J0ZRXg6Jr05
/wK8DvbEVAcgwpq2qB/TRITrl+6HkDLj+Cj/efOlfj9/dj+OExwgmQqZNdEB//+nauk2lwMMtIhMWLk5c+v/W7QNnuagoldkpLdq2LdKjpJdKz0DGtyQTauG6yunbv80Pv+6LmPIFPN3yJaEbvpp1wxfibvga4i1f/rzhK7e3fNH5lq+X36HKyT4xV+BbKja4DRucpVfZh93f1oDvAWaRb0Oe8fy+Jgkeg3qphzyehzxknMvHPOWhvkNRRsrX7areoMeZJcmi5xj/qd5NS1RKhaclDwVN1a+x/X+lj1ZgW/v7wQAAAABJRU5ErkJggg==" /> </div> TS类型安全 </div> <div> <div style="display:inline-block;"> <img height="20px" src="https://img20.360buyimg.com/ling/jfs/t1/20876/36/12835/3043/5c9c2929Ed18cfb11/15b1c03ec830ab8e.png" /> </div> 目前支持微信小程序、支付宝小程序、H5。其他端逐渐更新中... </div> <div> <div style="display:inline-block;width:48px;"> <img height="20px" src="https://cdn.jsdelivr.net/gh/reactnativecn/react-native-website@gh-pages/img/header_logo.svg" /> </div> React应用中使用 </div> > 小程序、Taro-React-H5、React-H5多端完全统一 ### 贡献代码 使用过程中发现任何问题都可以提 [Issue](https://github.com/antmjs/vantui/issues) 给我们,当然,我们也非常欢迎你给我们发 [PR](https://github.com/antmjs/vantui/pulls),同时,到目前为止我们已经对vant-weapp的[commit](https://github.com/youzan/vant-weapp/commits/dev)记录同步到了1.9.2的版本,我们也会持续同步 ### 开源协议 本项目基于 [MIT](https://zh.wikipedia.org/wiki/MIT%E8%A8%B1%E5%8F%AF%E8%AD%89) 协议,请自由地享受和参与开源 ### 参与贡献的小伙伴 [![hisanshao](https://avatars.githubusercontent.com/u/26359618?s=100&v=4)](https://github.com/hisanshao/) | [![Chitanda60](https://avatars.githubusercontent.com/u/16026533?s=100&v=4)](https://github.com/Chitanda60/) | [![zuolung](https://avatars.githubusercontent.com/u/19684540?s=100&v=4)](https://github.com/Banlangenn/) | [![hisanshao](https://avatars.githubusercontent.com/u/28145148?s=100&v=4)](https://github.com/zuolung/) :---:|:---:|:---:|:---: [hisanshao](https://github.com/hisanshao/) | [Chitanda60](https://github.com/Chitanda60/) | [Banlangenn](https://github.com/Banlangenn/) | [zuolung](https://github.com/zuolung/)
84.245902
2,312
0.838295
yue_Hant
0.506076
c9805f96cd5c06ca839ef53a168a016a828d4b09
802
md
Markdown
guides/content/personas/health-catalyst-developer.md
Techgeekster/Fabric.Cashmere
e4de1a2a932692f45577108f110d9b06639658f2
[ "Apache-2.0" ]
70
2017-12-07T15:50:27.000Z
2021-12-30T19:54:20.000Z
guides/content/personas/health-catalyst-developer.md
Techgeekster/Fabric.Cashmere
e4de1a2a932692f45577108f110d9b06639658f2
[ "Apache-2.0" ]
1,740
2017-12-13T22:31:55.000Z
2022-03-31T22:44:08.000Z
guides/content/personas/health-catalyst-developer.md
Techgeekster/Fabric.Cashmere
e4de1a2a932692f45577108f110d9b06639658f2
[ "Apache-2.0" ]
65
2017-12-13T21:42:22.000Z
2022-03-03T19:06:36.000Z
# Health Catalyst Developer ###### Last updated Mar 12, 2021 ::: <div class="persona-header"> ![Avatar Image](./assets/avatars/avatar18.svg) <div> # Devon ### Developer </div> </div> --- ## Role - Frequency of Use - Technical Level – High --- ## Responsibilities - Performs setup of accounts (users, defaults, etc.) - Defines workflow within the product - Performs setup of SFTP and auto file submission --- ## Pain Points - Optimal performance of SFTP - Not getting the results as promised from the API and having to adapt via workaround measures, which takes time with no guaranteed solutions. - If I am unable to get adequate implementation and documentation, then major errors can occur in regard to product setup thus causing major delays in implementation. :::
17.822222
168
0.71197
eng_Latn
0.983454
c980a76ffcaa496dc87e6a1e934a535e89eb20ca
3,025
md
Markdown
docs/extensibility/preparing-extensions-for-windows-installer-deployment.md
alkaberov/visualstudio-docs.ru-ru
4a55b254cc9f6b626b92cd16633235c0f9044c0f
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/preparing-extensions-for-windows-installer-deployment.md
alkaberov/visualstudio-docs.ru-ru
4a55b254cc9f6b626b92cd16633235c0f9044c0f
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/preparing-extensions-for-windows-installer-deployment.md
alkaberov/visualstudio-docs.ru-ru
4a55b254cc9f6b626b92cd16633235c0f9044c0f
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Prepare extensions for Windows Installer deployment | Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
helpviewer_keywords:
- vsix msi
ms.assetid: 5ee2d1ba-478a-4cb7-898f-c3b4b2ee834e
author: madskristensen
ms.author: madsk
manager: jillfra
ms.workload:
- vssdk
ms.openlocfilehash: c958c75088a6e31d9386f1acd423360b8dbe0a6c
ms.sourcegitcommit: 40d612240dc5bea418cd27fdacdf85ea177e2df3
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 05/29/2019
ms.locfileid: "66336180"
---
# <a name="prepare-extensions-for-windows-installer-deployment"></a>Prepare extensions for Windows Installer deployment

A Windows Installer (MSI) package can't be used to deploy a VSIX package. However, you can extract the contents of a VSIX package for MSI deployment. This document shows how to prepare a project whose default output is a VSIX package so that it can be included in a setup project.

## <a name="prepare-an-extension-project-for-windows-installer-deployment"></a>Prepare an extension project for Windows Installer deployment

Before you add them to a setup project, perform the following steps on new extension projects.

### <a name="to-prepare-an-extension-project-for-windows-installer-deployment"></a>To prepare an extension project for Windows Installer deployment

1. Create a VSPackage, MEF component, editor adornment, or other extension project type that includes a VSIX manifest.

2. Open the VSIX manifest in the code editor.

3. Set the `InstalledByMsi` element of the VSIX manifest to `true`. For more information about the VSIX manifest, see [VSIX Extension Schema 2.0 Reference](../extensibility/vsix-extension-schema-2-0-reference.md). This prevents the VSIX installer from trying to install the component.

4. Right-click the project in **Solution Explorer** and click **Properties**.

5. Select the **VSIX** tab.

6. Check **Copy VSIX contents to the following location** and enter the path to which the setup project will copy the files.

## <a name="extract-files-from-an-existing-vsix-package"></a>Extract files from an existing VSIX package

Follow these steps to add the contents of an existing VSIX package to a setup project when you don't have the source files.

### <a name="to-extract-files-from-an-existing-vsix-package"></a>To extract files from an existing VSIX package

1. Rename the *.vsix* file that contains the extension from *filename.vsix* to *filename.zip*.

2. Copy the contents of the *.zip* file to a directory.

3. Delete the *[Content_types].xml* file from the directory.

4. Modify the VSIX manifest as described in the previous procedure.

5. Add the remaining files to the setup project.

## <a name="see-also"></a>See also

- [Visual Studio Installer deployment](https://msdn.microsoft.com/library/121be21b-b916-43e2-8f10-8b080516d2a0)
- [Walkthrough: Creating a custom action](/previous-versions/visualstudio/visual-studio-2010/d9k65z2d(v=vs.100))
51.271186
295
0.803636
rus_Cyrl
0.870284
c9812c00576df22b39419d5206a9be70b94d2da6
998
md
Markdown
_posts/2016-08-03-virtualbox-move-vms-to-another-disk.md
brontosaurusrex/brontosaurusrex.github.io
7a3b6a21f36327c176849ac75c7c55f603646c0e
[ "MIT" ]
1
2018-12-30T05:17:13.000Z
2018-12-30T05:17:13.000Z
_posts/2016-08-03-virtualbox-move-vms-to-another-disk.md
brontosaurusrex/brontosaurusrex.github.io
7a3b6a21f36327c176849ac75c7c55f603646c0e
[ "MIT" ]
3
2017-01-26T21:04:51.000Z
2020-02-22T11:57:10.000Z
_posts/2016-08-03-virtualbox-move-vms-to-another-disk.md
brontosaurusrex/brontosaurusrex.github.io
7a3b6a21f36327c176849ac75c7c55f603646c0e
[ "MIT" ]
4
2018-02-28T16:08:19.000Z
2019-06-19T21:05:05.000Z
---
published: true
layout: post
date: '2016-08-03 16:35 +0200'
title: 'Virtualbox, move vms to another disk'
---
> 1. Shut down VirtualBox, back up your <userdoc>\.VirtualBox\VirtualBox.xml file.
> 2. Find your existing "Virtualbox VMs" folder, and copy (not move) the whole folder with contents to
> your new drive E:
> 3. Run VirtualBox, then for each VM in turn :-
> 3.1 Right click the VM name and select "Remove" from the popup menu. Answer no to the "physically
> delete files?" question.
> 3.2 Select the Machine|Add.. menu item, navigate to the VM's new location on drive E:, and select the .vbox file.
> 3.3 Repeat for any remaining VMs.
> 4. In File/Preferences, set the default machine path to "E:\VirtualBox VMs"
> 5. Test each of the VMs. Only after you are sure they all work, delete the old folder containing the VMs, i.e. delete "C:\VirtualBox VMs".

from [https://forums.virtualbox.org/viewtopic.php?f=1&t=48258#p219275](https://forums.virtualbox.org/viewtopic.php?f=1&t=48258#p219275)
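The copy part of step 2 above can be sketched in a few lines. This is an illustrative helper, not part of the original forum post; the function name and paths are made up:

```python
import shutil
from pathlib import Path

def copy_vm_folder(src: str, dst: str) -> list[str]:
    """Copy (never move) the whole 'VirtualBox VMs' folder, then return
    the .vbox files you would re-add via Machine | Add... in the GUI."""
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return sorted(str(p) for p in Path(dst).rglob("*.vbox"))
```

Only after every re-added VM boots correctly should the old folder be deleted, as the quoted post stresses.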
58.705882
135
0.733467
eng_Latn
0.972162
c9815f062f7c3419e3b12c0723bfc3715bfc3810
2,539
md
Markdown
x/incentives/spec/02_state.md
CrackLord/osmosis
2e3966f5bfc799d9fa70f4e7faf0cb9963eda92e
[ "Apache-2.0" ]
426
2021-02-19T09:49:00.000Z
2022-03-31T01:53:18.000Z
x/incentives/spec/02_state.md
CrackLord/osmosis
2e3966f5bfc799d9fa70f4e7faf0cb9963eda92e
[ "Apache-2.0" ]
928
2021-02-20T00:22:27.000Z
2022-03-31T22:52:28.000Z
x/incentives/spec/02_state.md
CrackLord/osmosis
2e3966f5bfc799d9fa70f4e7faf0cb9963eda92e
[ "Apache-2.0" ]
160
2021-02-21T00:34:45.000Z
2022-03-28T01:40:15.000Z
<!-- order: 2 -->

# State

## Incentives management

All the incentives that are going to be provided are locked into `IncentivePool` until released to the appropriate recipients after a specific period of time.

### Gauge

Rewards to be distributed are organized by `Gauge`. The `Gauge` describes how users can get rewards, stores the amount of coins in the gauge, the cadence at which rewards are distributed, and the number of epochs to distribute the reward over.

```protobuf
enum LockQueryType {
  option (gogoproto.goproto_enum_prefix) = false;

  ByDuration = 0; // locks which has more than specific duration
  ByTime = 1; // locks which are started before specific time
}

message QueryCondition {
  LockQueryType lock_query_type = 1; // type of lock, ByLockDuration | ByLockTime
  string denom = 2; // lock denom
  google.protobuf.Duration duration = 3; // condition for lock duration, only valid if positive
  google.protobuf.Timestamp timestamp = 4; // condition for lock start time, not valid if unset value
}

message Gauge {
  uint64 id = 1; // unique ID of a Gauge
  QueryCondition distribute_to = 2; // distribute condition of a lock which meet one of these conditions
  repeated cosmos.base.v1beta1.Coin coins = 3; // can distribute multiple coins
  google.protobuf.Timestamp start_time = 4; // condition for lock start time, not valid if unset value
  uint64 num_epochs_paid_over = 5; // number of epochs distribution will be done
}
```

### Gauge queues

#### Upcoming queue

To start releasing `Gauges` at a specific time, we schedule the distribution start time in a time-keyed queue.

#### Active queue

The active queue holds all the `Gauges` that are currently distributing; once its distribution period finishes, a gauge is removed from the queue.

#### Active by Denom queue

To speed up the distribution process, the module also indexes active `Gauges` by denom.

#### Finished queue

The finished queue keeps track of the `Gauges` that have finished distribution.
## Module state The state of the module is expressed by `params`, `lockable_durations` and `gauges`. ```protobuf // GenesisState defines the incentives module's genesis state. message GenesisState { // params defines all the parameters of the module Params params = 1 [ (gogoproto.nullable) = false ]; repeated Gauge gauges = 2 [ (gogoproto.nullable) = false ]; repeated google.protobuf.Duration lockable_durations = 3 [ (gogoproto.nullable) = false, (gogoproto.stdduration) = true, (gogoproto.moretags) = "yaml:\"lockable_durations\"" ]; } ```
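The queue transitions described above (upcoming → active → finished) can be modeled as a small toy sketch. The actual module is implemented in Go as part of the chain; the class and method names below are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Gauge:
    id: int
    start_time: int           # epoch index at which distribution begins
    num_epochs_paid_over: int
    epochs_paid: int = 0

class GaugeQueues:
    """Toy model of the upcoming/active/finished gauge queues."""

    def __init__(self):
        self.upcoming, self.active, self.finished = [], [], []

    def add(self, gauge: Gauge):
        self.upcoming.append(gauge)

    def on_epoch(self, epoch: int):
        # Promote gauges whose scheduled start time has arrived.
        started = [g for g in self.upcoming if g.start_time <= epoch]
        self.upcoming = [g for g in self.upcoming if g.start_time > epoch]
        self.active.extend(started)
        # Pay one epoch's worth of rewards; retire exhausted gauges.
        for g in list(self.active):
            g.epochs_paid += 1
            if g.epochs_paid >= g.num_epochs_paid_over:
                self.active.remove(g)
                self.finished.append(g)
```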
34.780822
248
0.743994
eng_Latn
0.990534
c98199646522ed4d84ea1adf32a4ac19499efdb7
5,391
md
Markdown
docs/core/tutorials/publishing-with-visual-studio.md
sbriknie/docs
876af041db9d51e2bb43e38e780c854648ca715e
[ "CC-BY-4.0", "MIT" ]
3
2021-10-06T18:19:40.000Z
2022-01-26T20:00:53.000Z
docs/core/tutorials/publishing-with-visual-studio.md
sbriknie/docs
876af041db9d51e2bb43e38e780c854648ca715e
[ "CC-BY-4.0", "MIT" ]
154
2021-11-04T02:22:26.000Z
2022-03-21T02:19:33.000Z
docs/core/tutorials/publishing-with-visual-studio.md
sbriknie/docs
876af041db9d51e2bb43e38e780c854648ca715e
[ "CC-BY-4.0", "MIT" ]
1
2020-12-13T11:15:37.000Z
2020-12-13T11:15:37.000Z
--- title: Publish a .NET Core console application using Visual Studio description: Publishing creates the set of files that are needed to run a .NET Core application. ms.date: 06/08/2020 dev_langs: - "csharp" - "vb" ms.custom: "vs-dotnet" --- # Tutorial: Publish a .NET Core console application using Visual Studio This tutorial shows how to publish a console app so that other users can run it. Publishing creates the set of files that are needed to run your application. To deploy the files, copy them to the target machine. ## Prerequisites - This tutorial works with the console app that you create in [Create a .NET Core console application in Visual Studio 2019](with-visual-studio.md). ## Publish the app 1. Start Visual Studio. 1. Open the *HelloWorld* project that you created in [Create a .NET Core console application in Visual Studio](with-visual-studio.md). 1. Make sure that Visual Studio is using the Release build configuration. If necessary, change the build configuration setting on the toolbar from **Debug** to **Release**. ![Visual Studio toolbar with release build selected](media/publishing-with-visual-studio/visual-studio-toolbar-release.png) 1. Right-click on the **HelloWorld** project (not the HelloWorld solution) and select **Publish** from the menu. ![Visual Studio Publish context menu](media/publishing-with-visual-studio/publish-context-menu.png) 1. On the **Target** tab of the **Publish** page, select **Folder**, and then select **Next**. ![Pick a publish target in Visual Studio](media/publishing-with-visual-studio/pick-publish-target.png) 1. On the **Location** tab of the **Publish** page, select **Finish**. ![Visual Studio Publish page Location tab](media/publishing-with-visual-studio/publish-page-loc-tab.png) 1. On the **Publish** tab of the **Publish** window, select **Publish**. 
![Visual Studio Publish window](media/publishing-with-visual-studio/publish-page.png) ## Inspect the files By default, the publishing process creates a framework-dependent deployment, which is a type of deployment where the published application runs on a machine that has the .NET Core runtime installed. Users can run the published app by double-clicking the executable or issuing the `dotnet HelloWorld.dll` command from a command prompt. In the following steps, you'll look at the files created by the publish process. 1. In **Solution Explorer**, select **Show all files**. 1. In the project folder, expand *bin/Release/netcoreapp3.1/publish*. :::image type="content" source="media/publishing-with-visual-studio/published-files-output.png" alt-text="Solution Explorer showing published files"::: As the image shows, the published output includes the following files: * *HelloWorld.deps.json* This is the application's runtime dependencies file. It defines the .NET Core components and the libraries (including the dynamic link library that contains your application) needed to run the app. For more information, see [Runtime configuration files](https://github.com/dotnet/cli/blob/85ca206d84633d658d7363894c4ea9d59e515c1a/Documentation/specs/runtime-configuration-file.md). * *HelloWorld.dll* This is the [framework-dependent deployment](../deploying/deploy-with-cli.md#framework-dependent-deployment) version of the application. To execute this dynamic link library, enter `dotnet HelloWorld.dll` at a command prompt. This method of running the app works on any platform that has the .NET Core runtime installed. * *HelloWorld.exe* This is the [framework-dependent executable](../deploying/deploy-with-cli.md#framework-dependent-executable) version of the application. To run it, enter `HelloWorld.exe` at a command prompt. The file is operating-system-specific. * *HelloWorld.pdb* (optional for deployment) This is the debug symbols file.
You aren't required to deploy this file along with your application, although you should save it in the event that you need to debug the published version of your application. * *HelloWorld.runtimeconfig.json* This is the application's run-time configuration file. It identifies the version of .NET Core that your application was built to run on. You can also add configuration options to it. For more information, see [.NET Core run-time configuration settings](../run-time-config/index.md#runtimeconfigjson). ## Run the published app 1. In **Solution Explorer**, right-click the *publish* folder, and select **Copy Full Path**. 1. Open a command prompt and navigate to the *publish* folder. To do that, enter `cd` and then paste the full path. For example: ``` cd C:\Projects\HelloWorld\bin\Release\netcoreapp3.1\publish\ ``` 1. Run the app by using the executable: 1. Enter `HelloWorld.exe` and press <kbd>Enter</kbd>. 1. Enter a name in response to the prompt, and press any key to exit. 1. Run the app by using the `dotnet` command: 1. Enter `dotnet HelloWorld.dll` and press <kbd>Enter</kbd>. 1. Enter a name in response to the prompt, and press any key to exit. ## Additional resources - [.NET Core application deployment](../deploying/index.md) ## Next steps In this tutorial, you published a console app. In the next tutorial, you create a class library. > [!div class="nextstepaction"] > [Create a .NET Standard library in Visual Studio](library-with-visual-studio.md)
49.009091
387
0.756446
eng_Latn
0.98875
c981a6ef5fdecaf031b2efad50431e879cab8807
7,707
md
Markdown
articles/security/fundamentals/subdomain-takeover.md
zoeleng/azure-docs.zh-cn
55d0a2ed4017c7b4da8c14b0b454f90ce2345b44
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/security/fundamentals/subdomain-takeover.md
zoeleng/azure-docs.zh-cn
55d0a2ed4017c7b4da8c14b0b454f90ce2345b44
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/security/fundamentals/subdomain-takeover.md
zoeleng/azure-docs.zh-cn
55d0a2ed4017c7b4da8c14b0b454f90ce2345b44
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Prevent subdomain takeovers with Azure DNS alias records and Azure App Service's custom domain verification
description: Learn how to avoid the common, high-severity threat of subdomain takeover
services: security
author: memildin
manager: rkarlin
ms.assetid: ''
ms.service: security
ms.subservice: security-fundamentals
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 09/29/2020
ms.author: memildin
ms.openlocfilehash: 7c09a7f6c6a313852fc6212c6190a584ba5f67bd
ms.sourcegitcommit: 17b36b13857f573639d19d2afb6f2aca74ae56c1
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 11/10/2020
ms.locfileid: "94409886"
---
# <a name="prevent-dangling-dns-entries-and-avoid-subdomain-takeover"></a>Prevent dangling DNS entries and avoid subdomain takeover

This article describes the common security threat of subdomain takeover and the steps you can take to mitigate it.

## <a name="what-is-subdomain-takeover"></a>What is subdomain takeover?

Subdomain takeover is a common, high-severity threat for organizations that regularly create and delete many resources. A subdomain takeover can occur when you have a [DNS record](../../dns/dns-zones-records.md#dns-records) that points to a deprovisioned Azure resource. Such DNS records are also known as "dangling DNS" entries. CNAME records are especially vulnerable to this threat. Subdomain takeover enables a malicious actor to redirect traffic intended for an organization's domain to a site performing malicious activity.

A common scenario for subdomain takeover:

1. **Provisioning:**
   1. You provision an Azure resource with the fully qualified domain name (FQDN) `app-contogreat-dev-001.azurewebsites.net`.
   1. You assign a CNAME record in your DNS zone with the subdomain `greatapp.contoso.com` that routes traffic to your Azure resource.
1. **Deprovisioning:**
   1. The Azure resource is deprovisioned or deleted when it's no longer needed. At this point, the CNAME record `greatapp.contoso.com` should be removed from your DNS zone. If the CNAME record isn't removed, it's advertised as an active domain but doesn't route traffic to an active Azure resource. This is the definition of a "dangling" DNS record.
   1. The dangling subdomain `greatapp.contoso.com` is now vulnerable and can be taken over by being assigned to another Azure subscription's resource.
1. **Takeover:**
   1. A threat actor discovers the dangling subdomain using commonly available methods and tools.
   1. The threat actor provisions an Azure resource with the same FQDN of the resource you previously controlled, in this example `app-contogreat-dev-001.azurewebsites.net`.
   1. Traffic sent to the subdomain `greatapp.contoso.com` is now routed to a resource whose content is controlled by the malicious actor.

![Subdomain takeover from a deprovisioned website](./media/subdomain-takeover/subdomain-takeover.png)

## <a name="the-risks-of-subdomain-takeover"></a>The risks of subdomain takeover

When a DNS record points to a resource that isn't available, the record itself should have been removed from your DNS zone. If it hasn't been deleted, it's a "dangling DNS" record and creates the possibility for subdomain takeover. Dangling DNS entries make it possible for threat actors to take control of the associated DNS name to host a malicious website or service. Malicious pages and services on an organization's subdomain might result in:

- **Loss of control over the content of the subdomain** - negative press about your organization's inability to secure its content, as well as brand damage and loss of trust.
- **Cookie harvesting from unsuspecting visitors** - web apps commonly expose session cookies to subdomains (*.contoso.com), and as a result any subdomain can access them. Threat actors can use subdomain takeover to build an authentic-looking page, trick unsuspecting users into visiting it, and harvest their cookies (even secure cookies). A common misconception is that using SSL certificates protects your site, and your users' cookies, from a takeover. However, a threat actor can use the hijacked subdomain to apply for and receive a valid SSL certificate. Valid SSL certificates grant them access to secure cookies and can further legitimize the malicious website.
- **Phishing campaigns** - authentic-looking subdomains might be used in phishing campaigns. This is true for malicious websites and for MX records, which threat actors exploit to receive email addressed to a legitimate subdomain of a known safe brand.
- **Further risks** - malicious sites might be used to escalate into other classic attacks such as XSS, CSRF, and CORS bypass.

## <a name="identify-dangling-dns-entries"></a>Identify dangling DNS entries

To identify DNS entries within your organization that might be dangling, use Microsoft's GitHub-hosted PowerShell tool ["Get-DanglingDnsRecords"](https://aka.ms/DanglingDNSDomains). This tool helps Azure customers list all domains with a CNAME associated with an existing Azure resource that was created on their subscriptions or tenants. If your CNAMEs are in other DNS services and point to Azure resources, provide the CNAMEs in an input file to the tool. The tool supports the Azure resources listed in the following table. The tool extracts, or takes as input, all the tenant's CNAMEs.

| Service | Type | FQDNproperty | Example |
|---|---|---|---|
| Azure Front Door | microsoft.network/frontdoors | properties.cName | `abc.azurefd.net` |
| Azure Blob Storage | microsoft.storage/storageaccounts | properties.primaryEndpoints.blob | `abc.blob.core.windows.net` |
| Azure CDN | microsoft.cdn/profiles/endpoints | properties.hostName | `abc.azureedge.net` |
| Public IP addresses | microsoft.network/publicipaddresses | properties.dnsSettings.fqdn | `abc.EastUs.cloudapp.azure.com` |
| Azure Traffic Manager | microsoft.network/trafficmanagerprofiles | properties.dnsConfig.fqdn | `abc.trafficmanager.net` |
| Azure Container Instances | microsoft.containerinstance/containergroups | properties.ipAddress.fqdn | `abc.EastUs.azurecontainer.io` |
| Azure API Management | microsoft.apimanagement/service | properties.hostnameConfigurations.hostName | `abc.azure-api.net` |
| Azure App Service | microsoft.web/sites | properties.defaultHostName | `abc.azurewebsites.net` |
| Azure App Service - Slots | microsoft.web/sites/slots | properties.defaultHostName | `abc-def.azurewebsites.net` |

### <a name="prerequisites"></a>Prerequisites

Run the query as a user who has:

- at least reader-level access to the Azure subscription
- read access to Azure Resource Graph

If you're a global administrator of your organization's tenant, elevate your account to have access to all of your organization's subscriptions using the guidance in [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md).

> [!TIP]
> Azure Resource Graph has throttling and paging limits that you should consider if you have a large Azure environment.
>
> [Learn more about working with large Azure resource data sets](../../governance/resource-graph/concepts/work-with-data.md).
>
> The tool uses subscription batching to avoid these limits.

### <a name="run-the-script"></a>Run the script

Learn more about the PowerShell script, Get-DanglingDnsRecords.ps1, and download it from GitHub: https://aka.ms/DanglingDNSDomains.

## <a name="remediate-dangling-dns-entries"></a>Remediate dangling DNS entries

Review your DNS zones and identify CNAME records that are dangling or have been taken over. If subdomains are found to be dangling or taken over, remove the vulnerable subdomains and mitigate the risks with the following steps:

1. From your DNS zone, remove all CNAME records that point to FQDNs of resources that are no longer provisioned.
1. To enable traffic to be routed to resources in your control, provision additional resources with the FQDNs specified in the CNAME records of the dangling subdomains.
1. Review your application code for references to specific subdomains and update any incorrect or outdated subdomain references.
1. Investigate whether any compromise has occurred and take action per your organization's incident response procedures. Tips and best practices for investigating this issue: if your application logic sends secrets, such as OAuth credentials, or privacy-sensitive information to the dangling subdomain, that data might have been exposed to third parties.
1. Understand why the CNAME record wasn't removed from your DNS zone when the resource was deprovisioned, and take steps to ensure that DNS records are updated appropriately when Azure resources are deprovisioned in the future.

## <a name="prevent-dangling-dns-entries"></a>Prevent dangling DNS entries

Ensuring that your organization has implemented processes to prevent dangling DNS entries, and the resulting subdomain takeovers, is a crucial part of your security program. Some Azure services offer features that help create preventative measures, as detailed below. Other methods to prevent this issue must be established through your organization's best practices or standard operating procedures.

### <a name="use-azure-dns-alias-records"></a>Use Azure DNS alias records

Azure DNS's [alias records](../../dns/dns-alias.md#scenarios) can prevent dangling references by coupling the lifecycle of a DNS record with an Azure resource. For example, consider a DNS record that's qualified as an alias record to point to a public IP address or a Traffic Manager profile. If you delete those underlying resources, the DNS alias record becomes an empty record set. It no longer references the deleted resource. It's important to note that there are limits to what alias records can protect. Today, the list is limited to:

- Azure Front Door
- Traffic Manager profiles
- Azure Content Delivery Network (CDN) endpoints
- Public IPs

Despite the currently limited service offerings, we recommend using alias records to defend against subdomain takeover whenever possible. [Learn more about the capabilities of Azure DNS's alias records](../../dns/dns-alias.md#capabilities).

### <a name="use-azure-app-services-custom-domain-verification"></a>Use Azure App Service's custom domain verification

When creating DNS entries for Azure App Service, create an asuid.{subdomain} TXT record with the domain verification ID. When such a TXT record exists, no other Azure subscription can validate the custom domain, that is, take it over. These records don't prevent someone from creating an Azure App Service with the name used in your CNAME entry. Without the ability to prove ownership of the domain name, however, threat actors can't receive traffic or control the content. [Learn more about mapping an existing custom DNS name to Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md).

### <a name="build-and-automate-processes-to-mitigate-the-threat"></a>Build and automate processes to mitigate the threat

It's often up to developers and operations teams to run cleanup processes to avoid dangling DNS threats. The following practices help ensure that your organization avoids suffering from this threat.

- **Create procedures for prevention:**
  - Educate your application developers to reroute addresses whenever they delete resources.
  - Put "Remove DNS entries" on the list of required checks when decommissioning a service.
  - Put [delete locks](../../azure-resource-manager/management/lock-resources.md) on any resources that have a custom DNS entry. A delete lock serves as an indicator that the mapping must be removed before the resource is deprovisioned. Measures like this can only work when combined with internal education programs.
- **Create procedures for discovery:**
  - Periodically review your DNS records to ensure that your subdomains are all mapped to Azure resources that:
    - Exist - query your DNS zones for resources pointing to Azure subdomains such as *.azurewebsites.net or *.cloudapp.azure.com (see the [reference list of Azure domains](azure-domains.md)).
    - You own - confirm that you own all resources that your DNS subdomains are targeting.
  - Maintain a service catalog of your Azure fully qualified domain name (FQDN) endpoints and the application owners. To build your service catalog, run the following Azure Resource Graph query script. This script projects the FQDN endpoint information of the resources you have access to and outputs it in a CSV file. If you have access to all the subscriptions of your tenant, the script considers all those subscriptions, as shown in the following sample script. To limit the results to a specific set of subscriptions, edit the script as shown.
- **Create procedures for remediation:**
  - When dangling DNS entries are found, your team needs to investigate whether any compromise has occurred.
  - Investigate why the address wasn't rerouted when the resource was decommissioned.
  - Delete the DNS record if it's no longer in use, or point it to the correct Azure resource (FQDN) owned by your organization.

## <a name="next-steps"></a>Next steps

To learn more about related services and Azure features you can use to defend against subdomain takeover, see the following pages.

- [Prevent dangling DNS records with Azure DNS](../../dns/dns-alias.md#prevent-dangling-dns-records)
- [Use a domain verification ID when adding custom domains in Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md#get-a-domain-verification-id)
- [Quickstart: Run your first Resource Graph query using Azure PowerShell](../../governance/resource-graph/first-query-powershell.md)
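The "Exist" check in the discovery procedure above can be sketched as a small pure function. The suffix list is an illustrative subset of the service table, and this sketch is not a substitute for the Get-DanglingDnsRecords tool:

```python
# Known Azure subdomain suffixes (illustrative subset of the table above).
AZURE_SUFFIXES = (
    ".azurewebsites.net", ".azurefd.net", ".azureedge.net",
    ".blob.core.windows.net", ".trafficmanager.net",
    ".cloudapp.azure.com", ".azurecontainer.io", ".azure-api.net",
)

def azure_targeted(cnames: dict[str, str]) -> dict[str, str]:
    """Return the subset of CNAME records whose target is an Azure FQDN.
    Each of these must then be checked against resources you still own."""
    return {
        name: target for name, target in cnames.items()
        if target.rstrip(".").endswith(AZURE_SUFFIXES)
    }
```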
36.7
276
0.701051
yue_Hant
0.681604
c981d88ede46fce8a3555850e771bfdae854bb25
701
md
Markdown
hardware/README.md
spestana/ribbit-network-frog-sensor
c926642d140e84a1cbb54920da490cba1a22f70a
[ "MIT" ]
null
null
null
hardware/README.md
spestana/ribbit-network-frog-sensor
c926642d140e84a1cbb54920da490cba1a22f70a
[ "MIT" ]
null
null
null
hardware/README.md
spestana/ribbit-network-frog-sensor
c926642d140e84a1cbb54920da490cba1a22f70a
[ "MIT" ]
null
null
null
## Overview The hardware [Bill of Materials is located here](ribbit_network_frog_sensor_bom.csv). Currently most components are off the shelf, though this will likely become less true as the Frog sensor matures. ## Assembly Instructions Assembly instructions can all be found [here](https://github.com/Ribbit-Network/ribbit-network-frog-sensor/blob/main/assembly-instructions/0-start-here.md). ## Custom Components There are a few custom components described [here](https://github.com/Ribbit-Network/ribbit-network-frog-sensor/blob/main/assembly-instructions/2-3d-printing.md). ## Main Readme [Head back to the main readme for more info!](https://github.com/Ribbit-Network/ribbit-network-sensor)
43.8125
161
0.794579
eng_Latn
0.951793
c982d5930db505d54609d99012c8bfa860e0dc58
366
md
Markdown
README.md
FanaticalTest/data-driven-test-framework
c8ff6bb55fdb6a44407bc54f222984323a74a52b
[ "Apache-2.0" ]
null
null
null
README.md
FanaticalTest/data-driven-test-framework
c8ff6bb55fdb6a44407bc54f222984323a74a52b
[ "Apache-2.0" ]
null
null
null
README.md
FanaticalTest/data-driven-test-framework
c8ff6bb55fdb6a44407bc54f222984323a74a52b
[ "Apache-2.0" ]
null
null
null
# data-driven-test-framework

This data-driven test framework initially uses Selenium; other technologies may be added later.

## Run

* Run the service locally

```
gradle bootRun
```

## Docker

### PhpMyAdmin

Use `docker.for.mac.localhost` for the host

## Python

* Run the generated Python file

```
python ./src/test/resources/outputFiles/scenarioPythonFile.py
```
20.333333
97
0.751366
eng_Latn
0.898891
c982e3b2ec50c22c464ee0c6129683e50bf1a3d1
2,015
md
Markdown
docs/dotnet/how-to-use-regular-expressions-to-validate-data-formatting-cpp-cli.md
svick/cpp-docs
76fd30ff3e0352e2206460503b61f45897e60e4f
[ "CC-BY-4.0", "MIT" ]
1
2021-04-18T12:54:41.000Z
2021-04-18T12:54:41.000Z
docs/dotnet/how-to-use-regular-expressions-to-validate-data-formatting-cpp-cli.md
Mikejo5000/cpp-docs
4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/dotnet/how-to-use-regular-expressions-to-validate-data-formatting-cpp-cli.md
Mikejo5000/cpp-docs
4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f
[ "CC-BY-4.0", "MIT" ]
1
2020-07-11T13:20:45.000Z
2020-07-11T13:20:45.000Z
--- title: "Use Regular Expressions to Validate Formatting (C++/CLI) | Microsoft Docs" ms.custom: "" ms.date: "11/04/2016" ms.technology: ["cpp-cli"] ms.topic: "conceptual" dev_langs: ["C++"] helpviewer_keywords: ["strings [C++], formatting", "data [C++], formatting", "regular expressions [C++], validating data formatting"] ms.assetid: 225775c3-3efc-4734-bde2-1fdf73e3d397 author: "mikeblome" ms.author: "mblome" ms.workload: ["cplusplus", "dotnet"] --- # How to: Use Regular Expressions to Validate Data Formatting (C++/CLI) The following code example demonstrates the use of regular expressions to verify the formatting of a string. In the following code example, the string should contain a valid phone number. The following code example uses the string "\d{3}-\d{3}-\d{4}" to indicate that each field represents a valid phone number. The "d" in the string indicates a digit, and the argument after each "d" indicates the number of digits that must be present. In this case, the number is required to be separated by dashes. ## Example ``` // regex_validate.cpp // compile with: /clr #using <System.dll> using namespace System; using namespace Text::RegularExpressions; int main() { array<String^>^ number = { "123-456-7890", "444-234-22450", "690-203-6578", "146-893-232", "146-839-2322", "4007-295-1111", "407-295-1111", "407-2-5555", }; String^ regStr = "^\\d{3}-\\d{3}-\\d{4}$"; for ( int i = 0; i < number->Length; i++ ) { Console::Write( "{0,14}", number[i] ); if ( Regex::IsMatch( number[i], regStr ) ) Console::WriteLine(" - valid"); else Console::WriteLine(" - invalid"); } return 0; } ``` ## See Also [.NET Framework Regular Expressions](/dotnet/standard/base-types/regular-expressions) [.NET Programming with C++/CLI (Visual C++)](../dotnet/dotnet-programming-with-cpp-cli-visual-cpp.md)
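For comparison, the same `^\d{3}-\d{3}-\d{4}$` format check can be expressed outside C++/CLI. This is a hedged Python sketch over the same sample data, not part of the original article:

```python
import re

# Same pattern as the C++/CLI sample: exactly 3-3-4 digits separated by dashes.
PHONE_RE = re.compile(r"^\d{3}-\d{3}-\d{4}$")

numbers = [
    "123-456-7890", "444-234-22450", "690-203-6578", "146-893-232",
    "146-839-2322", "4007-295-1111", "407-295-1111", "407-2-5555",
]
results = {n: bool(PHONE_RE.fullmatch(n)) for n in numbers}
```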
34.741379
503
0.632258
eng_Latn
0.879428
c982f906be08fe16a09a29c40a025428b7721a22
199
md
Markdown
README.md
luka-devnull/Compass-like-Gate-Pixel-Icons
f240fdb7b4b01cdd94e8bc525d4faa41ef20966f
[ "MIT" ]
1
2020-05-08T07:30:46.000Z
2020-05-08T07:30:46.000Z
README.md
luka-devnull/Compass-like-Gate-Pixel-Icons
f240fdb7b4b01cdd94e8bc525d4faa41ef20966f
[ "MIT" ]
null
null
null
README.md
luka-devnull/Compass-like-Gate-Pixel-Icons
f240fdb7b4b01cdd94e8bc525d4faa41ef20966f
[ "MIT" ]
3
2020-03-09T02:43:46.000Z
2020-11-16T22:54:49.000Z
# Compass-like Gate Pixel Icons Compass-like gate icons. Aimed to help you by saving time - instead of targeting gates one by one, you can just check their icons to see the direction they are going.
66.333333
166
0.78392
eng_Latn
0.999812
c983216fda8806dbc0ce53b0f95d9adf2a827efa
9,014
md
Markdown
doc/operations/incident_management/alert_integrations.md
FutureSenzhong/gitlabhq
7db8721ff9eeb914afa30da7fb8df07ff6c10796
[ "MIT" ]
1
2021-01-12T08:51:20.000Z
2021-01-12T08:51:20.000Z
doc/operations/incident_management/alert_integrations.md
FutureSenzhong/gitlabhq
7db8721ff9eeb914afa30da7fb8df07ff6c10796
[ "MIT" ]
null
null
null
doc/operations/incident_management/alert_integrations.md
FutureSenzhong/gitlabhq
7db8721ff9eeb914afa30da7fb8df07ff6c10796
[ "MIT" ]
null
null
null
--- stage: Monitor group: Health info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers --- # Alert integrations > - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/13203) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 12.4. > - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/42640) to [GitLab Core](https://about.gitlab.com/pricing/) in 12.8. GitLab can accept alerts from any source via a webhook receiver. This can be configured generically or, in GitLab versions 13.1 and greater, you can configure [External Prometheus instances](../metrics/alerts.md#external-prometheus-instances) to use this endpoint. ## Integrations list > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/245331) in [GitLab Core](https://about.gitlab.com/pricing/) 13.5. With Maintainer or higher [permissions](../../user/permissions.md), you can view the list of configured alerts integrations by navigating to **Settings > Operations** in your project's sidebar menu, and expanding **Alerts** section. The list displays the integration name, type, and status (enabled or disabled): ![Current Integrations](img/integrations_list_v13_5.png) ## Configuration GitLab can receive alerts via a [HTTP endpoint](#generic-http-endpoint) that you configure, or the [Prometheus integration](#external-prometheus-integration). ### Generic HTTP Endpoint **CORE** Enabling the Generic HTTP Endpoint activates a unique HTTP endpoint that can receive alert payloads in JSON format. You can always [customize the payload](#customize-the-alert-payload-outside-of-gitlab) to your liking. 1. Sign in to GitLab as a user with maintainer [permissions](../../user/permissions.md) for a project. 1. Navigate to **Settings > Operations** in your project. 1. Expand the **Alerts** section, and in the **Integration** dropdown menu, select **Generic**. 1. 
Toggle the **Active** alert setting to display the **URL** and **Authorization Key** for the webhook configuration. ### HTTP Endpoints **PREMIUM** > [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/4442) in [GitLab Premium](https://about.gitlab.com/pricing/) 13.6. In [GitLab Premium](https://about.gitlab.com/pricing/), you can create multiple unique HTTP endpoints to receive alerts from any external source in JSON format, and you can [customize the payload](#customize-the-alert-payload-outside-of-gitlab). 1. Sign in to GitLab as a user with maintainer [permissions](../../user/permissions.md) for a project. 1. Navigate to **Settings > Operations** in your project. 1. Expand the **Alerts** section. 1. For each endpoint you want to create: 1. In the **Integration** dropdown menu, select **HTTP Endpoint**. 1. Name the integration. 1. Toggle the **Active** alert setting to display the **URL** and **Authorization Key** for the webhook configuration. You must also input the URL and Authorization Key in your external service. 1. _(Optional)_ To generate a test alert to test the new integration, enter a sample payload, then click **Save and test alert payload**. Valid JSON is required. 1. Click **Save Integration**. The new HTTP Endpoint displays in the [integrations list](#integrations-list). You can edit the integration by selecting the **{pencil}** pencil icon on the right side of the integrations list. ### External Prometheus integration For GitLab versions 13.1 and greater, please read [External Prometheus Instances](../metrics/alerts.md#external-prometheus-instances) to configure alerts for this integration. ## Customize the alert payload outside of GitLab For all integration types, you can customize the payload by sending the following parameters. All fields other than `title` are optional: | Property | Type | Description | | ------------------------- | --------------- | ----------- | | `title` | String | The title of the incident. Required. 
| | `description` | String | A high-level summary of the problem. | | `start_time` | DateTime | The time of the incident. If none is provided, a timestamp of the issue is used. | | `end_time` | DateTime | For existing alerts only. When provided, the alert is resolved and the associated incident is closed. | | `service` | String | The affected service. | | `monitoring_tool` | String | The name of the associated monitoring tool. | | `hosts` | String or Array | One or more hosts, as to where this incident occurred. | | `severity` | String | The severity of the alert. Must be one of `critical`, `high`, `medium`, `low`, `info`, `unknown`. Default is `critical`. | | `fingerprint` | String or Array | The unique identifier of the alert. This can be used to group occurrences of the same alert. | | `gitlab_environment_name` | String | The name of the associated GitLab [environment](../../ci/environments/index.md). This can be used to associate your alert to your environment. | You can also add custom fields to the alert's payload. The values of extra parameters aren't limited to primitive types (such as strings or numbers), but can be a nested JSON object. For example: ```json { "foo": { "bar": { "baz": 42 } } } ``` TIP: **Tip:** Ensure your requests are smaller than the [payload application limits](../../administration/instance_limits.md#generic-alert-json-payloads). Example request: ```shell curl --request POST \ --data '{"title": "Incident title"}' \ --header "Authorization: Bearer <authorization_key>" \ --header "Content-Type: application/json" \ <url> ``` The `<authorization_key>` and `<url>` values can be found when configuring an alert integration. 
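The same request can also be scripted. Below is a minimal Python sketch that assembles the payload and headers for the webhook call; the endpoint URL and authorization key shown are placeholders — substitute the values displayed on your integration's configuration page.

```python
import json


def build_alert_request(url, authorization_key, title, **extra_fields):
    """Build the URL, headers, and JSON body for a GitLab alert webhook call.

    `url` and `authorization_key` come from the integration's configuration
    page; `title` is the only required payload field. Any extra keyword
    arguments become additional payload fields.
    """
    payload = {"title": title, **extra_fields}
    headers = {
        "Authorization": f"Bearer {authorization_key}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(payload)


# Placeholder URL and key -- use the values from your integration:
url, headers, body = build_alert_request(
    "https://gitlab.example.com/alerts/notify.json",
    "<authorization_key>",
    "Incident title",
    severity="high",
)
```

The returned pieces can then be passed to any HTTP client (for example, `requests.post(url, headers=headers, data=body)`).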
Example payload: ```json { "title": "Incident title", "description": "Short description of the incident", "start_time": "2019-09-12T06:00:55Z", "service": "service affected", "monitoring_tool": "value", "hosts": "value", "severity": "high", "fingerprint": "d19381d4e8ebca87b55cda6e8eee7385", "foo": { "bar": { "baz": 42 } } } ``` ## Triggering test alerts > [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/3066) in GitLab Core in 13.2. After a [project maintainer or owner](../../user/permissions.md) configures an integration, you can trigger a test alert to confirm your integration works properly. 1. Sign in as a user with Developer or greater [permissions](../../user/permissions.md). 1. Navigate to **Settings > Operations** in your project. 1. Click **Alerts endpoint** to expand the section. 1. Enter a sample payload in **Alert test payload** (valid JSON is required). 1. Click **Test alert payload**. GitLab displays an error or success message, depending on the outcome of your test. ## Automatic grouping of identical alerts **(PREMIUM)** > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/214557) in [GitLab Premium](https://about.gitlab.com/pricing/) 13.2. In GitLab versions 13.2 and greater, GitLab groups alerts based on their payload. When an incoming alert contains the same payload as another alert (excluding the `start_time` and `hosts` attributes), GitLab groups these alerts together and displays a counter on the [Alert Management List](incidents.md) and details pages. If the existing alert is already `resolved`, GitLab creates a new alert instead. ![Alert Management List](img/alert_list_v13_1.png) ## Link to your Opsgenie Alerts DANGER: **Deprecated:** We are building deeper integration with Opsgenie and other alerting tools through [HTTP endpoint integrations](#generic-http-endpoint) so you can see alerts within the GitLab interface. 
As a result, the previous direct link to Opsgenie Alerts from the GitLab alerts list is scheduled for deprecation following the 13.7 release on December 22, 2020. > [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/3066) in [GitLab Premium](https://about.gitlab.com/pricing/) 13.2. You can monitor alerts using a GitLab integration with [Opsgenie](https://www.atlassian.com/software/opsgenie). If you enable the Opsgenie integration, you can't have other GitLab alert services, such as [Generic Alerts](generic_alerts.md) or Prometheus alerts, active at the same time. To enable Opsgenie integration: 1. Sign in as a user with Maintainer or Owner [permissions](../../user/permissions.md). 1. Navigate to **Operations > Alerts**. 1. In the **Integrations** select box, select **Opsgenie**. 1. Select the **Active** toggle. 1. In the **API URL** field, enter the base URL for your Opsgenie integration, such as `https://app.opsgenie.com/alert/list`. 1. Select **Save changes**. After you enable the integration, navigate to the Alerts list page at **Operations > Alerts**, and then select **View alerts in Opsgenie**.
45.296482
195
0.71744
eng_Latn
0.944171
c983fbaddf719c81763c6e5e9696fd966864147d
4,889
md
Markdown
articles/active-directory/user-help/user-help-join-device-on-network.md
mikan/azure-docs.ja-jp
2a9d9afbefa8ace8f9c16caa5e49f50c32279e44
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/user-help/user-help-join-device-on-network.md
mikan/azure-docs.ja-jp
2a9d9afbefa8ace8f9c16caa5e49f50c32279e44
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/user-help/user-help-join-device-on-network.md
mikan/azure-docs.ja-jp
2a9d9afbefa8ace8f9c16caa5e49f50c32279e44
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Join your work device to your organization's network - AD
description: Learn how to join your work device to your organization's network.
services: active-directory
author: eross-msft
manager: daveba
ms.assetid: 54e1b01b-03ee-4c46-bcf0-e01affc0419d
ms.service: active-directory
ms.subservice: user-help
ms.workload: identity
ms.topic: conceptual
ms.date: 08/03/2018
ms.author: lizross
ms.reviewer: jairoc
ms.openlocfilehash: 149c0298ee7883aa130756bfc4d0cbbb9e002065
ms.sourcegitcommit: af6847f555841e838f245ff92c38ae512261426a
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 01/23/2020
ms.locfileid: "76704648"
---
# <a name="join-your-work-device-to-your-organizations-network"></a>Join your work device to your organization's network

Join your work-owned Windows 10 device to your organization's network so you can access resources that might otherwise be restricted.

## <a name="what-happens-when-you-join-your-device"></a>What happens when you join your device

While you're joining your Windows 10 device to your organization's network, the following actions happen:

- Windows registers your device with your organization's network so you can access your resources using your personal account. After your device is registered, Windows then joins your device to the network, so you can sign in with your organization's username and password and access restricted resources.
- Optionally, based on your organization's choices, you might be asked to set up two-step verification through either [Multi-Factor Authentication](multi-factor-authentication-end-user-first-time.md) or [security info](user-help-security-info-overview.md).
- Optionally, based on your organization's choices, you might be automatically enrolled in mobile device management, such as Microsoft Intune. For more info about enrolling in Microsoft Intune, see [Enroll your device in Intune](https://docs.microsoft.com/intune-user-help/enroll-your-device-in-intune-all).
- You'll go through the sign-in process using automatic sign-in with your organization's account.

## <a name="to-join-a-brand-new-windows-10-device"></a>To join a brand-new Windows 10 device

If your device is brand-new and hasn't been set up yet, you can go through the Windows Out of Box Experience (OOBE) process to join the device to your network.

1. Start up your new device and begin the OOBE process.

2. On the **Sign in with Microsoft** screen, type your work or school email address.

    ![Sign-in screen with email address](./media/user-help-join-device-on-network/join-device-oobe-signin.png)

3. On the **Enter your password** screen, type your password.

    ![Enter your password screen](./media/user-help-join-device-on-network/join-device-oobe-password.png)

4.
On your mobile device, approve your device so it can access your account.

    ![Mobile notification screen](./media/user-help-join-device-on-network/join-device-oobe-mobile.png)

5. Complete the rest of the OOBE process, including setting your privacy settings and Windows Hello (if necessary).

Your device is now joined to your organization's network.

## <a name="to-make-sure-youre-joined"></a>To make sure you're joined

You can make sure that you're joined by looking at your settings.

1. Open **Settings**, and then select **Accounts**.

    ![Accounts on the Settings screen](./media/user-help-join-device-on-network/join-device-settings-accounts.png)

2. Select **Access work or school**, and make sure you see text that looks like **Connected to *\<your_organization>* Azure AD**.

    ![Access work or school screen with connected contoso account](./media/user-help-join-device-on-network/join-device-oobe-verify.png)

## <a name="to-join-an-already-configured-windows-10-device"></a>To join an already configured Windows 10 device

If you've had your device for a while and it's already been set up, you can follow these steps to join the device to your network.

1. Open **Settings**, and then select **Accounts**.

2. Select **Access work or school**, and then select **Connect**.

    ![Access work or school and Connect links](./media/user-help-join-device-on-network/join-device-access-work-school-connect.png)

3. On the **Set up a work or school account** screen, select **Join this device to Azure Active Directory**.

    ![Set up a work or school account screen](./media/user-help-join-device-on-network/join-device-setup-join-aad.png)

4. On the **Let's get you signed in** screen, type your email address (for example, [email protected]), and then select **Next**.

    ![Let's get you signed in screen](./media/user-help-join-device-on-network/join-device-setup-get-signed-in.png)

5. On the **Enter password** screen, type your password, and then select **Sign in**.

    ![Enter your password](./media/user-help-join-device-on-network/join-device-setup-password.png)

6. On your mobile device, approve your device so it can access your account.

    ![Mobile notification screen](./media/user-help-join-device-on-network/join-device-setup-mobile.png)

7. On the **Make sure this is your organization** screen, review the information to make sure it's right, and then select **Join**.

    ![Make sure this is your organization confirmation screen](./media/user-help-join-device-on-network/join-device-setup-confirm.png)

8.
On the **You're all set** screen, click **Done**.

    ![You're all set screen](./media/user-help-join-device-on-network/join-device-setup-finish.png)

## <a name="to-make-sure-youre-joined"></a>To make sure you're joined

You can make sure that you're joined by looking at your settings.

1. Open **Settings**, and then select **Accounts**.

    ![Accounts on the Settings screen](./media/user-help-join-device-on-network/join-device-settings-accounts.png)

2. Select **Access work or school**, and make sure you see text that looks like **Connected to *\<your_organization>* Azure AD**.

    ![Access work or school screen with connected contoso account](./media/user-help-join-device-on-network/join-device-setup-verify.png)

## <a name="next-steps"></a>Next steps

After you join your device to your organization's network, you can access all of your resources using your work or school account information.

- If your organization wants you to register your personal device, such as your phone, see [Register your personal device on your organization's network](user-help-register-device-on-network.md).
- If your organization is managed using Microsoft Intune and you have questions about enrollment, sign-in, or any other Intune-related issue, see the [Intune user help content](https://docs.microsoft.com/intune-user-help/use-managed-devices-to-get-work-done).
41.432203
213
0.767437
yue_Hant
0.480296
c98407581a3fa8e446f71cce521259e2bc859b6d
586
md
Markdown
tables/RoomLandingParam-value.md
alexislours/crcpedia
b27ed40efc9930bc5deb3f36daab94bf52baeeb0
[ "CC-BY-4.0" ]
6
2020-09-02T08:04:13.000Z
2021-10-21T00:59:38.000Z
tables/RoomLandingParam-value.md
alexislours/crcpedia
b27ed40efc9930bc5deb3f36daab94bf52baeeb0
[ "CC-BY-4.0" ]
null
null
null
tables/RoomLandingParam-value.md
alexislours/crcpedia
b27ed40efc9930bc5deb3f36daab94bf52baeeb0
[ "CC-BY-4.0" ]
1
2020-09-17T03:05:04.000Z
2020-09-17T03:05:04.000Z
Ninji's exports: [HTML](https://wuffs.org/acnh/bcsv_160/html/RoomLandingParam.html), [CSV](https://wuffs.org/acnh/bcsv_160/csv/RoomLandingParam.csv), [JSON](https://wuffs.org/acnh/bcsv_160/json/RoomLandingParam.json) Spazzy's exports: [CSV](https://github.com/McSpazzy/acnh-csv/blob/master/RoomLandingParam.csv), [JSON](https://github.com/McSpazzy/acnh-json/blob/master/RoomLandingParam.json) | Floorboards | UISortID | UniqueID | LandingName | ResourceName | |:--:|:--:|:--:|:--:|:--:| | 6 | 0 | 0 | '木/茶色' | 'RoomTexStairsWood00' | | 6 | 1 | 1 | '木/こげ茶' | 'RoomTexStairsWood01' |
65.111111
216
0.697952
yue_Hant
0.965394
c9852bbd4ebad49f90faa3322abc1d382b00cf15
1,023
md
Markdown
content/blog/algorithm/programmers-42885.md
camiyoung/camiyoung.github.com
e54a0b2c48d563b3f72aeb30dddb77568fd56c73
[ "MIT" ]
null
null
null
content/blog/algorithm/programmers-42885.md
camiyoung/camiyoung.github.com
e54a0b2c48d563b3f72aeb30dddb77568fd56c73
[ "MIT" ]
1
2021-12-27T18:10:07.000Z
2021-12-27T18:10:08.000Z
content/blog/algorithm/programmers-42885.md
camiyoung/camiyoung.github.com
e54a0b2c48d563b3f72aeb30dddb77568fd56c73
[ "MIT" ]
null
null
null
---
title: '[Programmers] Lifeboat (구명보트) in C++'
date: 2021-12-15 18:49:22
category: algorithm
thumbnail: { thumbnailSrc }
draft: false
emoji: 🚤
---

# Lifeboat

Find the minimum number of lifeboats needed, given that each boat carries at most 2 people and has a weight limit.

## Solution and code

### Solution

Sort the given array, then solve it with the two-pointer technique.

If the heaviest weight plus the lightest weight exceeds the boat's limit, the heaviest person must take one boat alone. Since each boat can carry at most 2 people, there is no need to consider groups of 3 or more.

![](https://images.velog.io/images/anji00/post/d5cdcbbc-d0a4-454e-af02-83aa7fb855f1/image.png)

### Code

```cpp
#include <string>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;

int solution(vector<int> people, int limit) {
    int answer = 0;
    sort(people.rbegin(), people.rend());
    int start = 0, end = people.size() - 1;
    while (start < end) {
        if (people[start] + people[end] <= limit) {
            end--;
        }
        answer++;
        start++;
    }
    if (start == end)  // one person remains
        answer++;
    return answer;
}
```

[Problem source: Programmers - Lifeboat](https://programmers.co.kr/learn/courses/30/lessons/42885)
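For comparison, the same two-pointer greedy can be sketched in Python — this is my own port of the logic in the C++ solution above, not the original submission:

```python
def solution(people, limit):
    # Sort descending so the heaviest remaining person is at `start`.
    people = sorted(people, reverse=True)
    start, end = 0, len(people) - 1
    boats = 0
    while start < end:
        # Pair the heaviest with the lightest if they fit together;
        # otherwise the heaviest rides alone.
        if people[start] + people[end] <= limit:
            end -= 1
        boats += 1
        start += 1
    if start == end:  # one person left over
        boats += 1
    return boats


print(solution([70, 50, 80, 50], 100))  # → 3
```

The pairing is safe because each boat holds at most two people: if the heaviest person cannot share with even the lightest, they cannot share with anyone.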
18.6
94
0.622678
kor_Hang
0.999571
c9855b57ac920dfcdf70d423ae4d2b68eb127e6f
961
md
Markdown
docs/error-messages/compiler-warnings/compiler-warning-level-1-c4175.md
anmrdz/cpp-docs.es-es
f3eff4dbb06be3444820c2e57b8ba31616b5ff60
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-warnings/compiler-warning-level-1-c4175.md
anmrdz/cpp-docs.es-es
f3eff4dbb06be3444820c2e57b8ba31616b5ff60
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-warnings/compiler-warning-level-1-c4175.md
anmrdz/cpp-docs.es-es
f3eff4dbb06be3444820c2e57b8ba31616b5ff60
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Compiler warning (level 1) C4175 | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-diagnostics
ms.topic: error-reference
f1_keywords:
- C4175
dev_langs:
- C++
helpviewer_keywords:
- C4175
ms.assetid: 11407a07-127c-4d0d-b262-61f9f2b035ba
author: corob-msft
ms.author: corob
ms.workload:
- cplusplus
ms.openlocfilehash: 61cf0471bd4308feb7be3a789f03e4355d6f7366
ms.sourcegitcommit: 913c3bf23937b64b90ac05181fdff3df947d9f1c
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 09/18/2018
ms.locfileid: "46080290"
---
# <a name="compiler-warning-level-1-c4175"></a>Compiler Warning (level 1) C4175

\#pragma component(browser, on): browser info must be specified initially on the command line

To use the [component](../../preprocessor/component.md) pragma, you must generate browse information during compilation ([/FR](../../build/reference/fr-fr-create-dot-sbr-file.md)).
32.033333
191
0.782518
spa_Latn
0.304946
c9859baf26b98a68e15a404e916473bc0aa27566
2,723
md
Markdown
CONTRIBUTING.md
assignar/join-monster
6d2fd9332675b4b3da33dd54f122d87bac06e331
[ "MIT" ]
null
null
null
CONTRIBUTING.md
assignar/join-monster
6d2fd9332675b4b3da33dd54f122d87bac06e331
[ "MIT" ]
7
2020-12-11T23:47:28.000Z
2022-03-08T22:52:19.000Z
CONTRIBUTING.md
assignar/join-monster
6d2fd9332675b4b3da33dd54f122d87bac06e331
[ "MIT" ]
null
null
null
Contributing to Join Monster ======================== All development of Join Monster happens on GitHub. We actively welcome and appreciate your [pull requests](https://help.github.com/articles/creating-a-pull-request)! ## Issues Bugs and feature requests are tracked via GitHub issues. Make sure bug descriptions are clear and detailed, with versions and stack traces. Provide your code snippets if any. ## Pull Requests Begin by forking our repository and cloning your fork. Once inside the directory, you'll need to provide a PostgreSQL and MySQL URI in your environment in the `PG_URL` and `MYSQL_URL` variables (omit the database name from the URI, but keep the trailing slash, e.g. `postgres://user:pass@localhost/` and `mysql://user:pass@localhost/`). You will also need to create the test databases in Postgres (`test1`, `test2`, and `demo`) and mysql (`test1` and `test2`), and install `sqlite3` to complete the tests. Setting this up might look something like this on MacOS (in a perfect world). ```sh # install SQLite3 brew install sqlite3 # install PostgreSQL and get DBs created, skipping creating a user brew install postgres brew services start postgres createdb test1 createdb test2 createdb demo # install MySQL, create a user, and a database brew install mysql brew services start mysql mysql -u root -e "CREATE USER 'andy'@'localhost' IDENTIFIED BY 'password';" mysql -u root -e "ALTER USER 'andy'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';" mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO 'andy'@'localhost' WITH GRANT OPTION;" mysql -u andy -p -e "CREATE DATABASE test1;" mysql -u andy -p -e "CREATE DATABASE test2;" export MYSQL_URL=mysql://andy:password@localhost/ export PG_URL=postgres://localhost/ npm install npm run db-build npm test ``` Run `npm install` and `npm run db-build` to prepare the fixture data. Check the `scripts` in the `package.json` for easily running the example data and the demo server with GraphiQL. Now you can begin coding. 
Before committing your changes, **run the lint, tests, and coverage to make sure everything is green.** After making your commits, push them up to your fork and make a pull request to our master branch. We will review it ASAP.

## Release on NPM

We will make timely releases with any new changes, so you do not need to worry about publishing. Tagged commits to the master branch will trigger a new build on NPM. Travis-CI will publish the package. Do not publish manually from the command line. New versions MUST be compliant with [semver](http://semver.org/).

## License

By contributing to Join Monster, you agree that your contributions will be licensed under the LICENSE file in the project root directory.
46.152542
336
0.762394
eng_Latn
0.972534
c985a7fff1581877734704172e0b8ecf9cdbdc56
4,896
md
Markdown
develop/testing.md
hu279381344/kubernetes-
ac7d1583e27a7e2e9acf805a000cbc582920f7ed
[ "Apache-2.0" ]
4
2017-10-10T15:08:57.000Z
2022-01-26T09:30:05.000Z
develop/testing.md
hu279381344/kubernetes-
ac7d1583e27a7e2e9acf805a000cbc582920f7ed
[ "Apache-2.0" ]
null
null
null
develop/testing.md
hu279381344/kubernetes-
ac7d1583e27a7e2e9acf805a000cbc582920f7ed
[ "Apache-2.0" ]
4
2017-10-17T05:43:14.000Z
2018-08-06T01:41:32.000Z
# Kubernetes Testing

## Unit tests

Unit tests depend only on the source code and are the simplest way to check that the code logic behaves as expected.

**Run all unit tests**

```
make test
```

**Test only the specified packages**

```sh
# a single package
make test WHAT=./pkg/api
# multiple packages
make test WHAT=./pkg/{api,kubelet}
```

Alternatively, you can run `go test` directly:

```sh
go test -v k8s.io/kubernetes/pkg/kubelet
```

**Run only a specific test case of a package**

```
# Runs TestValidatePod in pkg/api/validation with the verbose flag set
make test WHAT=./pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$'

# Runs tests that match the regex ValidatePod|ValidateConfigMap in pkg/api/validation
make test WHAT=./pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$"
```

Or directly with `go test`:

```
go test -v k8s.io/kubernetes/pkg/api/validation -run ^TestValidatePod$
```

**Run tests in parallel**

Running tests in parallel is an effective way to root out flakes:

```sh
# Have 2 workers run all tests 5 times each (10 total iterations).
make test PARALLEL=2 ITERATION=5
```

**Generate a coverage report**

```
make test KUBE_COVER=y
```

## Benchmark tests

```
go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch
```

## Integration tests

Kubernetes integration tests require etcd to be installed (it only needs to be installed, not started), for example:

```
hack/install-etcd.sh  # Installs in ./third_party/etcd
echo export PATH="\$PATH:$(pwd)/third_party/etcd" >> ~/.profile  # Add to PATH
```

Integration tests automatically start etcd and the Kubernetes services as needed, and run the tests in [test/integration](https://github.com/kubernetes/kubernetes/tree/master/test/integration).

**Run all integration tests**

```sh
make test-integration  # Run all integration tests.
```

**Run specific integration test cases**

```sh
# Run integration test TestPodUpdateActiveDeadlineSeconds with the verbose flag set.
make test-integration KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ^TestPodUpdateActiveDeadlineSeconds$"
```

## End-to-end (e2e) tests

End-to-end (e2e) tests simulate user operations against Kubernetes, and are used to guarantee that a Kubernetes service or cluster behaves exactly as designed.

Before running e2e tests, first build the test binaries and set KUBERNETES_PROVIDER (defaults to gce):

```
make WHAT='test/e2e/e2e.test'
make ginkgo

export KUBERNETES_PROVIDER=local
```

**Bring up a cluster, run the tests, and finally tear the cluster down**

```sh
# build Kubernetes, up a cluster, run tests, and tear everything down
go run hack/e2e.go -- -v --build --up --test --down
```

**Run only the specified test case**

```sh
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sKubectl\srolling\-update\sshould\ssupport\srolling\-update\sto\ssame\simage\s\[Conformance\]$'
```

**Skip test cases**

```sh
go run hack/e2e.go -- -v --test --test_args="--ginkgo.skip=Pods.*env
```

**Run tests in parallel**

```sh
# Run tests in parallel, skip any that must be run serially
GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]"

# Run tests in parallel, skip any that must be run serially and keep the test namespace if test failed
GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-failure=false"
```

**Clean up after testing**

```sh
go run hack/e2e.go -- -v --down
```

**The useful `-ctl`**

```sh
# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for
# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing
# kubectl output.
go run hack/e2e.go -- -v -ctl='get events'
go run hack/e2e.go -- -v -ctl='delete pod foobar'
```

## Federation e2e tests

```sh
export FEDERATION=true
export E2E_ZONES="us-central1-a us-central1-b us-central1-f"

# or export FEDERATION_PUSH_REPO_BASE="quay.io/colin_hom"
export FEDERATION_PUSH_REPO_BASE="gcr.io/${GCE_PROJECT_NAME}"

# build container images
KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true go run hack/e2e.go -- -v -build

# push the federation container images
build/push-federation-images.sh

# Deploy federation control plane
go run hack/e2e.go -- -v --up

# Finally, run the tests
go run hack/e2e.go -- -v --test --test_args="--ginkgo.focus=\[Feature:Federation\]"

# Don't forget to teardown everything down
go run hack/e2e.go -- -v --down
```

You can use `cluster/log-dump.sh <directory>` to conveniently download the relevant logs, which helps troubleshoot problems encountered during testing.

## Node e2e tests

Node e2e tests exercise only Kubelet-related functionality, and can be run locally or in a cluster:

```sh
export KUBERNETES_PROVIDER=local
make test-e2e-node FOCUS="InitContainer"
make test_e2e_node TEST_ARGS="--experimental-cgroups-per-qos=true"
```

## Additional notes

With kubectl templates you can conveniently extract exactly the data you want; for example, to query the image of a specific container:

```sh
kubectl get pods nginx-4263166205-ggst4 -o template '--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "nginx"}}{{.image}}{{end}}{{end}}{{end}}'
```

## References

* [Kubernetes testing](https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md)
* [End-to-End Testing](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md)
* [Node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-node-tests.md)
* [How to write e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/writing-good-e2e-tests.md)
* [Coding Conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/coding-conventions.md#testing-conventions)
24.852792
200
0.734069
yue_Hant
0.409525
c985f27e4d98f4ab11e49a0d71e1fd56376a82a4
1,932
md
Markdown
README.md
EllissaPeterson/hackillinois-2020
ceff0aff655d10b9f1edd8703049af84534ce039
[ "Apache-2.0" ]
null
null
null
README.md
EllissaPeterson/hackillinois-2020
ceff0aff655d10b9f1edd8703049af84534ce039
[ "Apache-2.0" ]
null
null
null
README.md
EllissaPeterson/hackillinois-2020
ceff0aff655d10b9f1edd8703049af84534ce039
[ "Apache-2.0" ]
1
2020-03-16T18:11:42.000Z
2020-03-16T18:11:42.000Z
# Risk Assessment and Data Visualization of 2019 Novel Coronavirus (SARS-CoV-2)

Currently classified as an epidemic by the <a href='https://www.who.int/emergencies/diseases/novel-coronavirus-2019'>World Health Organization (WHO)</a>, coronavirus is on its way to becoming a pandemic as it continues to spread geographically to more regions of the world. This program aims to assist in global aid allocation and raise individuals' awareness by showing the comparative severity of imminent danger to persons in different countries.<br>
It is important to note that a lack of accurate reporting on affected persons, due to fears of inducing public panic and negative economic effects, has had repercussions on modeling accuracy.<br>
The project mainly consists of a visualization that actively draws data from <a href="https://github.com/CSSEGISandData/COVID-19">this GitHub repository</a> put together by the Johns Hopkins University Center for Systems Science and Engineering. In addition, a neural network machine learning model taking into account life expectancy, age group distribution, and sanitary standards in each country predicts future data for countries, using China as a training set.<br>
Potential improvements include building up the graphical interface, increasing model accuracy, and fine-tuning system-wide interoperability.

<figure>
  <img src="https://github.com/EllissaPeterson/hackillinois-2020/blob/master/deaths.png?raw=true" style="width:33%">
  <figcaption>deaths due to CoV as of 2/29</figcaption>
</figure>
<figure>
  <img src="https://github.com/EllissaPeterson/hackillinois-2020/blob/master/life.png?raw=true" style="width:33%">
  <figcaption>average life expectancy at birth</figcaption>
</figure>
<figure>
  <img src="https://github.com/EllissaPeterson/hackillinois-2020/blob/master/sanitation.png?raw=true" style="width:33%">
  <figcaption>% of population with access to basic sanitation</figcaption>
</figure>
138
464
0.804865
eng_Latn
0.979657
c9864338d58d971e3ce1cf9c015d30081ea5c56a
28
md
Markdown
README.md
OpenFRC/FRCjs
b99c8ab587d62e6516ecec1d16497916e0bbfbca
[ "MIT" ]
null
null
null
README.md
OpenFRC/FRCjs
b99c8ab587d62e6516ecec1d16497916e0bbfbca
[ "MIT" ]
null
null
null
README.md
OpenFRC/FRCjs
b99c8ab587d62e6516ecec1d16497916e0bbfbca
[ "MIT" ]
null
null
null
# FRCjs FRC node.js binding
9.333333
19
0.75
lvs_Latn
0.650518
c98680e28c1836753dbd8b85d0e1258b1d5ca55e
18,637
md
Markdown
docs/standard/assembly/unloadability.md
TomekLesniak/docs.pl-pl
3373130e51ecb862641a40c5c38ef91af847fe04
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/assembly/unloadability.md
TomekLesniak/docs.pl-pl
3373130e51ecb862641a40c5c38ef91af847fe04
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/assembly/unloadability.md
TomekLesniak/docs.pl-pl
3373130e51ecb862641a40c5c38ef91af847fe04
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: How to use and debug assembly unloadability in .NET Core
description: Learn how to use collectible AssemblyLoadContexts to load and unload managed assemblies, and how to debug issues that prevent a successful unload.
author: janvorli
ms.author: janvorli
ms.date: 02/05/2019
ms.openlocfilehash: 9d1f604816dcbd7a84a3692b3cfd24481532789a
ms.sourcegitcommit: 3d84eac0818099c9949035feb96bbe0346358504
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 07/21/2020
ms.locfileid: "86865349"
---
# <a name="how-to-use-and-debug-assembly-unloadability-in-net-core"></a>How to use and debug assembly unloadability in .NET Core

Starting with .NET Core 3.0, the ability to load and later unload a set of assemblies is supported. In .NET Framework, custom app domains were used for this purpose, but .NET Core only supports a single default app domain.

.NET Core 3.0 and later versions support unloadability through <xref:System.Runtime.Loader.AssemblyLoadContext>. You can load a set of assemblies into an `AssemblyLoadContext`, execute methods in them or just inspect them using reflection, and finally unload the `AssemblyLoadContext`. That unloads the assemblies loaded into the `AssemblyLoadContext`.

There's one noteworthy difference between unloading using `AssemblyLoadContext` and using app domains. With app domains, the unloading is forced. At unload time, all threads running in the target AppDomain are aborted, managed COM objects created in the target AppDomain are destroyed, and so on. With `AssemblyLoadContext`, the unload is "cooperative". Calling the <xref:System.Runtime.Loader.AssemblyLoadContext.Unload%2A?displayProperty=nameWithType> method just initiates the unloading.
The unloading finishes after:

- No threads have methods from the assemblies loaded into the `AssemblyLoadContext` on their call stacks.
- None of the types from the assemblies loaded into the `AssemblyLoadContext`, instances of those types, or the assemblies themselves are referenced by:
  - References outside of the `AssemblyLoadContext`, except for weak references (<xref:System.WeakReference> or <xref:System.WeakReference%601>).
  - Strong garbage collector (GC) handles ([GCHandleType.Normal](xref:System.Runtime.InteropServices.GCHandleType.Normal) or [GCHandleType.Pinned](xref:System.Runtime.InteropServices.GCHandleType.Pinned)) from both inside and outside of the `AssemblyLoadContext`.

## <a name="use-collectible-assemblyloadcontext"></a>Use collectible AssemblyLoadContext

This section contains a detailed step-by-step tutorial that shows a simple way to load a .NET Core application into a collectible `AssemblyLoadContext`, execute its entry point, and then unload it. You can find a complete sample at [https://github.com/dotnet/samples/tree/master/core/tutorials/Unloading](https://github.com/dotnet/samples/tree/master/core/tutorials/Unloading).

### <a name="create-a-collectible-assemblyloadcontext"></a>Create a collectible AssemblyLoadContext

You need to derive your class from <xref:System.Runtime.Loader.AssemblyLoadContext> and override its <xref:System.Runtime.Loader.AssemblyLoadContext.Load%2A?displayProperty=nameWithType> method. That method resolves references to all assemblies that are dependencies of assemblies loaded into that `AssemblyLoadContext`.

The following code is an example of the simplest custom `AssemblyLoadContext`:

[!code-csharp[Simple custom AssemblyLoadContext](~/samples/snippets/standard/assembly/unloading/simple_example.cs#1)]

As you can see, the `Load` method returns `null`.
That means that all the dependency assemblies are loaded into the default context, and the new context contains only the assemblies explicitly loaded into it.

If you want to load some or all of the dependencies into the `AssemblyLoadContext` too, you can use `AssemblyDependencyResolver` in the `Load` method. The `AssemblyDependencyResolver` resolves assembly names to absolute assembly file paths. The resolver uses the *.deps.json* file and the assembly files in the directory of the main assembly loaded into the context.

[!code-csharp[Advanced custom AssemblyLoadContext](~/samples/snippets/standard/assembly/unloading/complex_assemblyloadcontext.cs)]

### <a name="use-a-custom-collectible-assemblyloadcontext"></a>Use a custom collectible AssemblyLoadContext

This section assumes the simpler version of `TestAssemblyLoadContext` is being used. You can create an instance of the custom `AssemblyLoadContext` and load an assembly into it as follows:

[!code-csharp[Part 1](~/samples/snippets/standard/assembly/unloading/simple_example.cs#3)]

For each of the assemblies referenced by the loaded assembly, the `TestAssemblyLoadContext.Load` method is called so that `TestAssemblyLoadContext` can decide where to get the assembly from. In our case, it returns `null` to indicate that it should be loaded into the default context from the locations the runtime uses to load assemblies by default.

Now that an assembly was loaded, you can execute a method from it. Run the `Main` method:

[!code-csharp[Part 2](~/samples/snippets/standard/assembly/unloading/simple_example.cs#4)]

After the `Main` method returns, you can initiate the unloading by either calling the `Unload` method on the custom `AssemblyLoadContext` or getting rid of the reference you have to the `AssemblyLoadContext`:

[!code-csharp[Part 3](~/samples/snippets/standard/assembly/unloading/simple_example.cs#5)]

This is sufficient to unload the test assembly.
In fact, let's move all of this into a separate non-inlineable method to make sure that `TestAssemblyLoadContext`, `Assembly`, and `MethodInfo` (the `Assembly.EntryPoint`) can't be kept alive by stack slot references (real locals or ones introduced by the JIT). That could keep the `TestAssemblyLoadContext` alive and prevent the unload. Also, return a weak reference to the `AssemblyLoadContext` so that you can use it later to detect unload completion.

[!code-csharp[Part 4](~/samples/snippets/standard/assembly/unloading/simple_example.cs#2)]

Now you can run this function to load, execute, and unload the assembly.

[!code-csharp[Part 5](~/samples/snippets/standard/assembly/unloading/simple_example.cs#6)]

However, the unload doesn't complete immediately. As previously mentioned, it relies on the garbage collector to collect all the objects from the test assembly. In many cases, it isn't necessary to wait for the unload to complete. However, there are cases where it's useful to know that the unload has finished. For example, you might want to delete the assembly file that was loaded into the custom `AssemblyLoadContext` from disk. In such a case, the following code snippet can be used. It triggers garbage collection and waits for pending finalizers in a loop until the weak reference to the `AssemblyLoadContext` is set to `null`, indicating that the target object was collected. In most cases, just one pass through the loop is required. However, for more complex cases where objects created by the code running in the `AssemblyLoadContext` have finalizers, more passes may be needed.
[!code-csharp[Part 6](~/samples/snippets/standard/assembly/unloading/simple_example.cs#7)]

### <a name="the-unloading-event"></a>The Unloading event

In some cases, it may be necessary for the code loaded into a custom `AssemblyLoadContext` to perform some cleanup when the unloading is initiated. For example, it may need to stop threads or clean up strong GC handles. The `Unloading` event can be used in such cases. A handler that performs the necessary cleanup can be hooked to this event.

### <a name="troubleshoot-unloadability-issues"></a>Troubleshoot unloadability issues

Due to the cooperative nature of the unloading, it's easy to forget about references that may keep the things in a collectible `AssemblyLoadContext` alive and prevent the unload. Here is a summary of entities (some of them non-obvious) that can hold such references:

- Regular references held from outside of the collectible `AssemblyLoadContext` that are stored in a stack slot or a processor register (method locals, either explicitly created by the user code or implicitly by the just-in-time (JIT) compiler), a static variable, or a strong (pinning) GC handle, and transitively pointing to:
  - An assembly loaded into the collectible `AssemblyLoadContext`.
  - A type from such an assembly.
  - An instance of a type from such an assembly.
- Threads running code from an assembly loaded into the collectible `AssemblyLoadContext`.
- Instances of custom, non-collectible `AssemblyLoadContext` types created inside the collectible `AssemblyLoadContext`.
- Pending <xref:System.Threading.RegisteredWaitHandle> instances with callbacks set to methods in the custom `AssemblyLoadContext`.
> [!TIP]
> Object references that are stored in stack slots or processor registers and that could prevent the unload of an `AssemblyLoadContext` can occur in the following situations:
>
> - When function call results are passed directly to another function, even though there is no user-created local variable.
> - When the JIT compiler keeps a reference to an object that was available at some point in a method.

## <a name="debug-unloading-issues"></a>Debug unloading issues

Debugging issues with unloading can be tedious. You can get into situations where you don't know what could be holding an `AssemblyLoadContext` alive, but the unload fails. The best weapon to help with that is WinDbg (LLDB on Unix) with the SOS plugin. You need to find what's keeping the `LoaderAllocator` belonging to the specific `AssemblyLoadContext` alive. The SOS plugin allows you to look at GC heap objects, their hierarchies, and their roots.

To load the plugin into the debugger, enter the following command in the debugger command line.

In WinDbg (WinDbg seems to do this automatically when breaking into a .NET Core application):

```console
.loadby sos coreclr
```

In LLDB:

```console
plugin load /path/to/libsosplugin.so
```

Let's debug an example program that has problems with unloading. Its source code is included below. When you run it under WinDbg, the program breaks into the debugger right after attempting to check for the unload success. You can then start looking for the culprits.

> [!TIP]
> If you debug using LLDB on Unix, the SOS commands in the following examples don't have the `!` in front of them.

```console
!dumpheap -type LoaderAllocator
```

This command dumps all objects with a type name containing `LoaderAllocator` that are in the GC heap.
Here is an example:

```console
         Address               MT     Size
000002b78000ce40 00007ffadc93a288       48
000002b78000ceb0 00007ffadc93a218       24

Statistics:
              MT    Count    TotalSize Class Name
00007ffadc93a218        1           24 System.Reflection.LoaderAllocatorScout
00007ffadc93a288        1           48 System.Reflection.LoaderAllocator
Total 2 objects
```

In the "Statistics:" part that follows, check the `MT` (`MethodTable`) belonging to `System.Reflection.LoaderAllocator`, which is the object we care about. Then, in the list at the beginning, find the entry with the matching `MT` and get the address of the object itself. In our case, it is "000002b78000ce40".

Now that we know the address of the `LoaderAllocator` object, we can use another command to find its GC roots:

```console
!gcroot -all 0x000002b78000ce40
```

This command dumps the chains of object references that lead to the `LoaderAllocator` instance. The list starts with the root, which is the entity that keeps our `LoaderAllocator` alive and is therefore the core of the problem. The root can be a stack slot, a processor register, a GC handle, or a static variable.
Here is an example of the output of the `gcroot` command:

```console
Thread 4ac:
    000000cf9499dd20 00007ffa7d0236bc example.Program.Main(System.String[]) [E:\unloadability\example\Program.cs @ 70]
        rbp-20: 000000cf9499dd90
            ->  000002b78000d328 System.Reflection.RuntimeMethodInfo
            ->  000002b78000d1f8 System.RuntimeType+RuntimeTypeCache
            ->  000002b78000d1d0 System.RuntimeType
            ->  000002b78000ce40 System.Reflection.LoaderAllocator

HandleTable:
    000002b7f8a81198 (strong handle)
    -> 000002b78000d948 test.Test
    -> 000002b78000ce40 System.Reflection.LoaderAllocator

    000002b7f8a815f8 (pinned handle)
    -> 000002b790001038 System.Object[]
    -> 000002b78000d390 example.TestInfo
    -> 000002b78000d328 System.Reflection.RuntimeMethodInfo
    -> 000002b78000d1f8 System.RuntimeType+RuntimeTypeCache
    -> 000002b78000d1d0 System.RuntimeType
    -> 000002b78000ce40 System.Reflection.LoaderAllocator

Found 3 roots.
```

The next step is to figure out where the root is located so you can fix it. The easiest case is when the root is a stack slot or a processor register. In that case, `gcroot` shows you the name of the function whose frame contains the root and the thread executing that function. The difficult case is when the root is a static variable or a GC handle.

In the previous example, the first root is a local of type `System.Reflection.RuntimeMethodInfo` stored in the frame of the function `example.Program.Main(System.String[])` at address `rbp-20` (`rbp` is the processor register `rbp`, and -20 is a hexadecimal offset from that register). The second root is a normal (strong) `GCHandle` that holds a reference to an instance of the `test.Test` class. The third root is a pinned `GCHandle`. This one is actually a static variable, but unfortunately, there is no way to tell. Statics for reference types are stored in a managed object array in internal runtime structures.
Another case that can prevent unloading of an `AssemblyLoadContext` is when a thread has a frame of a method from an assembly loaded into the `AssemblyLoadContext` on its stack. You can check for that by dumping the managed call stacks of all threads:

```console
~*e !clrstack
```

The command means "apply the `!clrstack` command to all threads". The following is the output of that command for the example. Unfortunately, LLDB on Unix doesn't have any way to apply a command to all threads, so you need to manually switch threads and repeat the `clrstack` command. Ignore all threads where the debugger says "Unable to walk the managed stack."

```console
OS Thread Id: 0x6ba8 (0)
        Child SP               IP Call Site
0000001fc697d5c8 00007ffb50d9de12 [HelperMethodFrame: 0000001fc697d5c8] System.Diagnostics.Debugger.BreakInternal()
0000001fc697d6d0 00007ffa864765fa System.Diagnostics.Debugger.Break()
0000001fc697d700 00007ffa864736bc example.Program.Main(System.String[]) [E:\unloadability\example\Program.cs @ 70]
0000001fc697d998 00007ffae5fdc1e3 [GCFrame: 0000001fc697d998]
0000001fc697df28 00007ffae5fdc1e3 [GCFrame: 0000001fc697df28]
OS Thread Id: 0x2ae4 (1)
Unable to walk the managed stack. The current thread is likely not a managed thread.
You can run !threads to get a list of managed threads in the process
Failed to start stack walk: 80070057
OS Thread Id: 0x61a4 (2)
Unable to walk the managed stack. The current thread is likely not a managed thread.
You can run !threads to get a list of managed threads in the process
Failed to start stack walk: 80070057
OS Thread Id: 0x7fdc (3)
Unable to walk the managed stack. The current thread is likely not a managed thread.
You can run !threads to get a list of managed threads in the process
Failed to start stack walk: 80070057
OS Thread Id: 0x5390 (4)
Unable to walk the managed stack. The current thread is likely not a managed thread.
You can run !threads to get a list of managed threads in the process
Failed to start stack walk: 80070057
OS Thread Id: 0x5ec8 (5)
        Child SP               IP Call Site
0000001fc70ff6e0 00007ffb5437f6e4 [DebuggerU2MCatchHandlerFrame: 0000001fc70ff6e0]
OS Thread Id: 0x4624 (6)
        Child SP               IP Call Site
GetFrameContext failed: 1
0000000000000000 0000000000000000
OS Thread Id: 0x60bc (7)
        Child SP               IP Call Site
0000001fc727f158 00007ffb5437fce4 [HelperMethodFrame: 0000001fc727f158] System.Threading.Thread.SleepInternal(Int32)
0000001fc727f260 00007ffb37ea7c2b System.Threading.Thread.Sleep(Int32)
0000001fc727f290 00007ffa865005b3 test.Program.ThreadProc() [E:\unloadability\test\Program.cs @ 17]
0000001fc727f2c0 00007ffb37ea6a5b System.Threading.Thread.ThreadMain_ThreadStart()
0000001fc727f2f0 00007ffadbc4cbe3 System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
0000001fc727f568 00007ffae5fdc1e3 [GCFrame: 0000001fc727f568]
0000001fc727f7f0 00007ffae5fdc1e3 [DebuggerU2MCatchHandlerFrame: 0000001fc727f7f0]
```

As you can see, the last thread has `test.Program.ThreadProc()` on its stack. This is a function from the assembly loaded into the `AssemblyLoadContext`, and so it keeps the `AssemblyLoadContext` alive.

## <a name="example-source-with-unloadability-issues"></a>Example source with unloadability issues

The following code is used in the previous debugging example.

### <a name="main-testing-program"></a>Main testing program

[!code-csharp[Main testing program](~/samples/snippets/standard/assembly/unloading/unloadability_issues_example_main.cs)]

## <a name="program-loaded-into-the-testassemblyloadcontext"></a>Program loaded into the TestAssemblyLoadContext

The following code represents the *test.dll* passed to the `ExecuteAndUnload` method in the main testing program.
[!code-csharp[Program loaded into the TestAssemblyLoadContext](~/samples/snippets/standard/assembly/unloading/unloadability_issues_example_test.cs)]
73.086275
1,042
0.811933
pol_Latn
0.999482
c987028a22ee6e26a1c9b638164a24d17c568cde
5,220
md
Markdown
docset/windows/networkswitchmanager/get-networkswitchvlan.md
MaratMussabekov/windows-powershell-docs
207707fed441cc2522227d7c0da575d4bb2e65d8
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
1
2021-01-28T23:16:13.000Z
2021-01-28T23:16:13.000Z
docset/windows/networkswitchmanager/get-networkswitchvlan.md
MaratMussabekov/windows-powershell-docs
207707fed441cc2522227d7c0da575d4bb2e65d8
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
docset/windows/networkswitchmanager/get-networkswitchvlan.md
MaratMussabekov/windows-powershell-docs
207707fed441cc2522227d7c0da575d4bb2e65d8
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
1
2021-05-02T15:01:27.000Z
2021-05-02T15:01:27.000Z
---
ms.mktglfcycl: manage
ms.sitesec: library
ms.author: coreyp
author: coreyp-at-msft
description: Use this topic to help manage Windows and Windows Server technologies with Windows PowerShell.
external help file: NetworkSwitchVlan-help.xml
keywords: powershell, cmdlet
manager: jasgro
ms.date: 12/20/2016
ms.prod: w10
ms.technology: powershell-windows
ms.topic: reference
online version:
schema: 2.0.0
title: Get-NetworkSwitchVlan
ms.assetid: 7FEE3E95-8EFA-4300-B101-110EFD94694E
---

# Get-NetworkSwitchVlan

## SYNOPSIS
Gets VLANs for a network switch.

## SYNTAX

### NameSet (Default)
```
Get-NetworkSwitchVlan -CimSession <CimSession> [-Name <String>] [<CommonParameters>]
```

### VlanIdSet
```
Get-NetworkSwitchVlan -CimSession <CimSession> -VlanId <Int32> [<CommonParameters>]
```

### InstanceIdSet
```
Get-NetworkSwitchVlan -CimSession <CimSession> -InstanceId <String> [<CommonParameters>]
```

### CaptionSet
```
Get-NetworkSwitchVlan -CimSession <CimSession> -Caption <String> [<CommonParameters>]
```

### DescriptionSet
```
Get-NetworkSwitchVlan -CimSession <CimSession> -Description <String> [<CommonParameters>]
```

## DESCRIPTION
The **Get-NetworkSwitchVlan** cmdlet gets available virtual local area networks (VLANs) for a network switch.

## EXAMPLES

### Example 1: Get all VLANs for a network switch
```
PS C:\>$Session = New-CimSession -ComputerName "NetworkSwitch08"
PS C:\> Get-NetworkSwitchVlan -CimSession $Session
Caption          Description  Name      InstanceID           VlanID PSComputerName
-------          -----------  --------  ----------           ------ --------------
Vlan_description              default   Contoso:NetworkVL... 1      10.19.246.18
Vlan_description              VLAN0002  Contoso:NetworkVL... 2      10.19.246.18
```

The first command creates a **CimSession** for a network switch, and then stores it in the **$Session** variable. For more information about **CimSession** objects, type `Get-Help New-CimSession`.

The second command gets all VLANs for the switch NetworkSwitch08 by using the **$Session** object.
### Example 2: Get a VLAN by using a name
```
PS C:\>Get-NetworkSwitchVlan -CimSession $Session -Name "VLAN22"
Caption          Description  Name      InstanceID           VlanID PSComputerName
-------          -----------  --------  ----------           ------ --------------
Vlan_description              VLAN22    Contoso:NetworkVL... 1      10.19.236.49
```

This command gets the VLAN named VLAN22. The command includes a **CimSession**, similar to the first example.

## PARAMETERS

### -Caption
Specifies the caption of a VLAN to get.

```yaml
Type: String
Parameter Sets: CaptionSet
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -CimSession
Specifies the **CimSession** that this cmdlet uses to connect to the network switch. For more information about **CimSession** objects, type `Get-Help New-CimSession`.

```yaml
Type: CimSession
Parameter Sets: (All)
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Description
Specifies the description of a VLAN to get.

```yaml
Type: String
Parameter Sets: DescriptionSet
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -InstanceId
Specifies the instance ID of a VLAN to get.

```yaml
Type: String
Parameter Sets: InstanceIdSet
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Name
Specifies the name of a VLAN to get. This **ElementName** is a friendly name. It is not necessarily unique.

```yaml
Type: String
Parameter Sets: NameSet
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -VlanId
Specifies the VLAN ID of the VLAN to get.
```yaml
Type: Int32
Parameter Sets: VlanIdSet
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### None

## OUTPUTS

### CimInstance[]
This cmdlet returns an array of **CimInstance** objects for network switch VLANs.

## NOTES

## RELATED LINKS

[Disable-NetworkSwitchVlan](./Disable-NetworkSwitchVlan.md)

[Enable-NetworkSwitchVlan](./Enable-NetworkSwitchVlan.md)

[New-NetworkSwitchVlan](./New-NetworkSwitchVlan.md)

[Remove-NetworkSwitchVlan](./Remove-NetworkSwitchVlan.md)

[Set-NetworkSwitchVlanProperty](./Set-NetworkSwitchVlanProperty.md)
25.588235
315
0.687548
eng_Latn
0.405701
c987082485d9d3b7cdb8487628f48fd2275c84a6
3,595
md
Markdown
docs/prepare/requrements.md
dustdfg/book
48e03969819469d77c0c9d93d0561afc2381c03c
[ "MIT" ]
76
2021-04-26T15:45:49.000Z
2021-07-08T02:05:58.000Z
docs/prepare/requrements.md
dustdfg/book
48e03969819469d77c0c9d93d0561afc2381c03c
[ "MIT" ]
307
2021-04-21T19:14:00.000Z
2021-07-08T20:45:50.000Z
docs/prepare/requrements.md
dustdfg/book
48e03969819469d77c0c9d93d0561afc2381c03c
[ "MIT" ]
13
2021-04-20T14:16:00.000Z
2021-07-03T16:19:01.000Z
---
# Workstation and operating system requirements

## Hardware requirements

- A hard disk partition; the recommended size is more than 20 GB, since building the packages requires a lot of free space.
- If the PC has little RAM (3 GB or less), it is recommended to create a swap partition/file. As a last resort, use [zram](additional/zram).

## Software requirements

Earlier versions of the listed software packages may work, but their correct operation has not been verified.

- Bash-3.2 (`/bin/sh` is a hard or symbolic link to `bash`)
- Binutils-2.25
- Bison-2.7 (`/usr/bin/yacc` is a symbolic link to `bison` or to a script that runs it)
- Bzip2-1.0.4
- Coreutils-6.9
- Diffutils-2.8.1
- Findutils-4.2.31
- Gawk-4.0.1 (`/usr/bin/awk` is a symbolic link to `gawk`)
- GCC-6.2 including the C++ compiler, g++
- Glibc-2.11
- Grep-2.5.1a
- Gzip-1.3.12
- Linux Kernel-3.2
- M4-1.4.10
- Make-4.0
- Patch-2.5.4
- Perl-5.8.8
- Python-3.4
- Sed-4.1.5
- Tar-1.22
- Texinfo-4.7
- Xz-5.0.0

Depending on your Linux OS family, run the following commands to ensure compatibility and install the required packages:

### For Debian, Ubuntu:

```bash
apt-get install build-essential bison gawk texinfo
ln -sf bash /bin/sh
```

### For ArchLinux

```bash
pacman -S base-devel
```

### For Fedora, Redhat

```bash
dnf install bison gawk texinfo make gcc-c++
```

## Verifying the software requirements

To make sure that your host system fully meets all the requirements for the work ahead, run the following set of commands:

```bash
{{ include('../scripts/version-check.sh') }}
```

???+ warning "Warning"
    Examine the output carefully. It must not contain lines with `ERROR`, `command not found`, or `failed`.
**Failing result**

```
bash, version 5.0.17(1)-release
/bin/sh -> /usr/bin/dash
ERROR: /bin/sh does not point to bash
Binutils: version-check.sh: line 12: ld: command not found
version-check.sh: line 13: bison: command not found
yacc not found
bzip2, Version 1.0.8, 13-Jul-2019.
Coreutils: 8.32
diff (GNU diffutils) 3.7
find (GNU findutils) 4.7.0
version-check.sh: line 27: gawk: command not found
/usr/bin/awk -> /usr/bin/mawk
version-check.sh: line 37: gcc: command not found
version-check.sh: line 38: g++: command not found
(Ubuntu GLIBC 2.32-0ubuntu3) 2.32
grep (GNU grep) 3.4
gzip 1.10
Linux version 5.8.0-25-generic (buildd@lcy01-amd64-022) (gcc (Ubuntu 10.2.0-13ubuntu1) 10.2.0, GNU ld (GNU Binutils for Ubuntu) 2.35.1) #26-Ubuntu SMP Thu Oct 15 10:30:38 UTC 2020
version-check.sh: line 43: m4: command not found
version-check.sh: line 44: make: command not found
GNU patch 2.7.6
Perl version='5.30.3';
Python 3.8.6
sed (GNU sed) 4.7
tar (GNU tar) 1.30
version-check.sh: line 50: makeinfo: command not found
xz (XZ Utils) 5.2.4
version-check.sh: line 53: g++: command not found
g++ compilation failed
```

**Successful result**

```
bash, version 5.0.0(1)-release
/bin/sh -> /bin/bash
Binutils: (GNU Binutils) 2.32
bison (GNU Bison) 3.4.1
yacc is bison (GNU Bison) 3.4.1
bzip2, Version 1.0.8, 13-Jul-2019.
Coreutils: 8.31
diff (GNU diffutils) 3.7
find (GNU findutils) 4.6.0
GNU Awk 5.0.1, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.1.2)
/usr/bin/awk -> /usr/bin/gawk
gcc (GCC) 9.2.0
g++ (GCC) 9.2.0
(GNU libc) 2.30
grep (GNU grep) 3.3
gzip 1.10
m4 (GNU M4) 1.4.18
GNU Make 4.2.1
GNU patch 2.7.6
Perl version='5.30.0';
Python 3.7.4
sed (GNU sed) 4.7
tar (GNU tar) 1.32
texi2any (GNU texinfo) 6.6
xz (XZ Utils) 5.2.4
g++ compilation OK
```
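At its core, the version check above just probes each required tool and reports a result per line. A minimal sketch of that idea (an illustrative subset of the tools, not the full version-check.sh script referenced above):

```shell
# Report "<tool>: OK" if the tool is on PATH, or an ERROR line otherwise,
# mirroring the pass/fail lines shown in the sample outputs above.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: OK"
  else
    echo "ERROR: $1 not found"
  fi
}

# Check an illustrative subset of the required tools.
for tool in bash grep sed tar; do
  check_tool "$tool"
done
```

The full script additionally extracts and compares version numbers; this sketch only checks presence, which is enough to catch the `command not found` failures shown above.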
27.442748
179
0.714325
rus_Cyrl
0.200999
c9879f5f02e49fd85b7388290b4cf7dc8f227a73
35
md
Markdown
src/content/pages/works.md
sezardino/Gatsby-edu
d7e90d5653d9d0d7797c19132feeafcb4df3137a
[ "RSA-MD" ]
null
null
null
src/content/pages/works.md
sezardino/Gatsby-edu
d7e90d5653d9d0d7797c19132feeafcb4df3137a
[ "RSA-MD" ]
null
null
null
src/content/pages/works.md
sezardino/Gatsby-edu
d7e90d5653d9d0d7797c19132feeafcb4df3137a
[ "RSA-MD" ]
null
null
null
---
title: "Works"
navOrder: 4
---
7
14
0.542857
eng_Latn
0.110975
c987c1e416a696d14aef1b69da396c3cc0798924
6,560
md
Markdown
README.md
stdlib-js/assert-is-uri
9648ce1543d1f74cb618e9fe59200784f3f32415
[ "BSL-1.0" ]
1
2021-06-16T21:25:31.000Z
2021-06-16T21:25:31.000Z
README.md
stdlib-js/assert-is-uri
9648ce1543d1f74cb618e9fe59200784f3f32415
[ "BSL-1.0" ]
null
null
null
README.md
stdlib-js/assert-is-uri
9648ce1543d1f74cb618e9fe59200784f3f32415
[ "BSL-1.0" ]
null
null
null
<!--

@license Apache-2.0

Copyright (c) 2018 The Stdlib Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

-->

# isURI

[![NPM version][npm-image]][npm-url] [![Build Status][test-image]][test-url] [![Coverage Status][coverage-image]][coverage-url]

<!-- [![dependencies][dependencies-image]][dependencies-url] -->

> Test if a value is a [URI][uri].

<section class="installation">

## Installation

```bash
npm install @stdlib/assert-is-uri
```

</section>

<section class="usage">

## Usage

```javascript
var isURI = require( '@stdlib/assert-is-uri' );
```

#### isURI( value )

Tests if a `value` is a [URI][uri].

```javascript
var bool = isURI( 'https://google.com' );
// returns true

bool = isURI( 'ftp://ftp.is.co.za/rfc/rfc1808.txt' );
// returns true

bool = isURI( '' );
// returns false

bool = isURI( 'foo' );
// returns false

bool = isURI( null );
// returns false

bool = isURI( NaN );
// returns false

bool = isURI( true );
// returns false
```

</section>

<!-- /.usage -->

<section class="notes">

## Notes

- For more information regarding the URI scheme, see [RFC 3986][rfc-3986] and [Wikipedia][uri].
- On the distinction between URI, URL, and URN, see [The Difference Between URLs and URIs][difference-url-uri].
</section>

<!-- /.notes -->

<section class="examples">

## Examples

<!-- eslint no-undef: "error" -->

```javascript
var isURI = require( '@stdlib/assert-is-uri' );

/* Valid */

var bool = isURI( 'http://google.com' );
// returns true

bool = isURI( 'http://localhost/' );
// returns true

bool = isURI( 'http://example.w3.org/path%20with%20spaces.html' );
// returns true

bool = isURI( 'http://example.w3.org/%20' );
// returns true

bool = isURI( 'ftp://ftp.is.co.za/rfc/rfc1808.txt' );
// returns true

bool = isURI( 'ftp://ftp.is.co.za/../../../rfc/rfc1808.txt' );
// returns true

bool = isURI( 'http://www.ietf.org/rfc/rfc2396.txt' );
// returns true

bool = isURI( 'ldap://[2001:db8::7]/c=GB?objectClass?one' );
// returns true

bool = isURI( 'mailto:[email protected]' );
// returns true

bool = isURI( 'news:comp.infosystems.www.servers.unix' );
// returns true

bool = isURI( 'tel:+1-816-555-1212' );
// returns true

bool = isURI( 'telnet://192.0.2.16:80/' );
// returns true

bool = isURI( 'urn:oasis:names:specification:docbook:dtd:xml:4.1.2' );
// returns true

/* Invalid */

// No scheme:
bool = isURI( '' );
// returns false

// No scheme:
bool = isURI( 'foo' );
// returns false

// No scheme:
bool = isURI( 'foo@bar' );
// returns false

// No scheme:
bool = isURI( '://foo/' );
// returns false

// Illegal characters:
bool = isURI( 'http://<foo>' );
// returns false

// Invalid path:
bool = isURI( 'http:////foo.html' );
// returns false

// Incomplete hex escapes...
bool = isURI( 'http://example.w3.org/%a' );
// returns false

bool = isURI( 'http://example.w3.org/%a/foo' );
// returns false

bool = isURI( 'http://example.w3.org/%at' );
// returns false
```

</section>

<!-- /.examples -->

* * *

<section class="cli">

## CLI

<section class="installation">

## Installation

To use the module as a general utility, install the module globally

```bash
npm install -g @stdlib/assert-is-uri
```

</section>

<section class="usage">

### Usage

```text
Usage: is-uri [options] [<uri>]

Options:

  -h,    --help                Print this message.
  -V,    --version             Print the package version.
```

</section>

<!-- /.usage -->

<section class="examples">

### Examples

```bash
$ is-uri http://google.com
true
```

To use as a [standard stream][standard-streams],

```bash
$ echo -n 'http://google.com' | is-uri
true
```

</section>

<!-- /.examples -->

</section>

<!-- /.cli -->

<!-- Section for related `stdlib` packages. Do not manually edit this section, as it is automatically populated. -->

<section class="related">

</section>

<!-- /.related -->

<!-- Section for all links. Make sure to keep an empty line after the `section` element and another before the `/section` close. -->

<section class="main-repo" >

* * *

## Notice

This package is part of [stdlib][stdlib], a standard library for JavaScript and Node.js, with an emphasis on numerical and scientific computing. The library provides a collection of robust, high performance libraries for mathematics, statistics, streams, utilities, and more.

For more information on the project, filing bug reports and feature requests, and guidance on how to develop [stdlib][stdlib], see the main project [repository][stdlib].

#### Community

[![Chat][chat-image]][chat-url]

---

## License

See [LICENSE][stdlib-license].

## Copyright

Copyright &copy; 2016-2021. The Stdlib [Authors][stdlib-authors].

</section>

<!-- /.stdlib -->

<!-- Section for all links. Make sure to keep an empty line after the `section` element and another before the `/section` close.
-->

<section class="links">

[npm-image]: http://img.shields.io/npm/v/@stdlib/assert-is-uri.svg
[npm-url]: https://npmjs.org/package/@stdlib/assert-is-uri

[test-image]: https://github.com/stdlib-js/assert-is-uri/actions/workflows/test.yml/badge.svg
[test-url]: https://github.com/stdlib-js/assert-is-uri/actions/workflows/test.yml

[coverage-image]: https://img.shields.io/codecov/c/github/stdlib-js/assert-is-uri/main.svg
[coverage-url]: https://codecov.io/github/stdlib-js/assert-is-uri?branch=main

<!--

[dependencies-image]: https://img.shields.io/david/stdlib-js/assert-is-uri.svg
[dependencies-url]: https://david-dm.org/stdlib-js/assert-is-uri/main

-->

[chat-image]: https://img.shields.io/gitter/room/stdlib-js/stdlib.svg
[chat-url]: https://gitter.im/stdlib-js/stdlib/

[stdlib]: https://github.com/stdlib-js/stdlib

[stdlib-authors]: https://github.com/stdlib-js/stdlib/graphs/contributors

[stdlib-license]: https://raw.githubusercontent.com/stdlib-js/assert-is-uri/main/LICENSE

[uri]: http://en.wikipedia.org/wiki/URI_scheme

[rfc-3986]: https://tools.ietf.org/html/rfc3986

[difference-url-uri]: https://danielmiessler.com/study/url-uri/

[standard-streams]: https://en.wikipedia.org/wiki/Standard_streams

</section>

<!-- /.links -->
20.694006
275
0.672561
eng_Latn
0.473858
c987ce33e7b272b1cc6693e527503e4e60219123
512
md
Markdown
docs/formBuilder/actions/save.md
adamDilger/formBuilder
c8543435d94dcfc2f50521ac85841b1f3efe6b9e
[ "MIT" ]
2,441
2015-01-13T22:13:27.000Z
2022-03-31T03:44:02.000Z
docs/formBuilder/actions/save.md
adamDilger/formBuilder
c8543435d94dcfc2f50521ac85841b1f3efe6b9e
[ "MIT" ]
1,066
2015-08-19T10:21:47.000Z
2022-03-29T00:28:33.000Z
docs/formBuilder/actions/save.md
adamDilger/formBuilder
c8543435d94dcfc2f50521ac85841b1f3efe6b9e
[ "MIT" ]
1,308
2015-01-25T15:01:40.000Z
2022-03-31T11:51:52.000Z
# Actions -> save

Programmatically save the data for formBuilder.

## Usage

```javascript
jQuery($ => {
  const fbEditor = document.getElementById("build-wrap");
  const formBuilder = $(fbEditor).formBuilder();
  document
    .getElementById("saveData")
    .addEventListener("click", () => formBuilder.actions.save());
});
```

## See it in Action

<p data-height="525" data-theme-id="22927" data-embed-version="2" data-slug-hash="xyOdzp" data-default-tab="result" data-user="kevinchappell" class="codepen"></p>
30.117647
162
0.697266
eng_Latn
0.173831
c9886c622d157d1f9fccf3dec887a1f37d5f8c33
787
md
Markdown
source/nodejs/metrics/index.html.md
SquadcastHub/appsignal-docs
be106ceff9a20d7a10f35a9bfadbad4f8a5aeeb2
[ "MIT" ]
null
null
null
source/nodejs/metrics/index.html.md
SquadcastHub/appsignal-docs
be106ceff9a20d7a10f35a9bfadbad4f8a5aeeb2
[ "MIT" ]
null
null
null
source/nodejs/metrics/index.html.md
SquadcastHub/appsignal-docs
be106ceff9a20d7a10f35a9bfadbad4f8a5aeeb2
[ "MIT" ]
null
null
null
---
title: "Metrics"
---

To track application-wide metrics, you can send custom metrics to AppSignal. These metrics enable you to track anything in your application, from newly registered users to database disk usage. They are not a replacement for custom instrumentation, but provide an additional way to make certain data in your code more accessible and measurable over time.

With different types of _metrics_ (gauges, counters, and measurements) you can track any kind of data from your apps and _tag_ it with metadata to easily spot the differences between contexts.

More information on how custom metrics can be used can be found [here](https://docs.appsignal.com/metrics/custom.html).

![Custom metrics demo dashboard](/assets/images/screenshots/custom_metrics_dashboard.png)
65.583333
353
0.805591
eng_Latn
0.993125
c988ea80c9ec9c99490839d9f00372e32074a9a9
4,223
md
Markdown
tiup/tiup-component-dm.md
cncc/docs
e943feba3ab43cc55435d6c413e14078c32ce19e
[ "Apache-2.0" ]
null
null
null
tiup/tiup-component-dm.md
cncc/docs
e943feba3ab43cc55435d6c413e14078c32ce19e
[ "Apache-2.0" ]
null
null
null
tiup/tiup-component-dm.md
cncc/docs
e943feba3ab43cc55435d6c413e14078c32ce19e
[ "Apache-2.0" ]
1
2019-05-16T08:21:08.000Z
2019-05-16T08:21:08.000Z
---
title: TiUP DM
---

# TiUP DM

Similar to [TiUP Cluster](/tiup/tiup-component-cluster.md), which is used to manage TiDB clusters, TiUP DM is used to manage DM clusters. You can use the TiUP DM component to perform daily operations and maintenance tasks of DM clusters, including deploying, starting, stopping, destroying, scaling, and upgrading DM clusters, and managing the configuration parameters of DM clusters.

## Syntax

```shell
tiup dm [command] [flags]
```

`[command]` is used to pass the name of the command. See the [command list](#list-of-commands) for supported commands.

## Options

### --ssh

- Specifies the SSH client used to connect to the remote end (the machine where the TiDB service is deployed) for the command execution.
- Data type: `STRING`
- Supported values:
    - `builtin`: Uses the built-in easyssh client of tiup-cluster as the SSH client.
    - `system`: Uses the default SSH client of the current operating system.
    - `none`: No SSH client is used. The deployment is only for the current machine.
- If this option is not specified in the command, `builtin` is used as the default value.

### --ssh-timeout

- Specifies the SSH connection timeout in seconds.
- Data type: `UINT`
- If this option is not specified in the command, the default timeout is `5` seconds.

### --wait-timeout

- Specifies the maximum waiting time (in seconds) for each step in the operation process. The operation process consists of many steps, such as using systemctl to start or stop services, and waiting for ports to come online or go offline. Each step may take several seconds. If the execution time of a step exceeds the specified timeout, the step exits with an error.
- Data type: `UINT`
- If this option is not specified in the command, the maximum waiting time for each step is `120` seconds.

### -y, --yes

- Skips the secondary confirmation of all risky operations. It is not recommended to use this option unless you are calling TiUP from a script.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

### -v, --version

- Prints the current version of TiUP DM.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

### -h, --help

- Prints help information about the specified command.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

## List of commands

- [import](/tiup/tiup-component-dm-import.md): Imports a DM v1.0 cluster deployed by DM-Ansible.
- [deploy](/tiup/tiup-component-dm-deploy.md): Deploys a cluster based on a specified topology.
- [list](/tiup/tiup-component-dm-list.md): Queries the list of deployed clusters.
- [display](/tiup/tiup-component-dm-display.md): Displays the status of a specified cluster.
- [start](/tiup/tiup-component-dm-start.md): Starts a specified cluster.
- [stop](/tiup/tiup-component-dm-stop.md): Stops a specified cluster.
- [restart](/tiup/tiup-component-dm-restart.md): Restarts a specified cluster.
- [scale-in](/tiup/tiup-component-dm-scale-in.md): Scales in a specified cluster.
- [scale-out](/tiup/tiup-component-dm-scale-out.md): Scales out a specified cluster.
- [upgrade](/tiup/tiup-component-dm-upgrade.md): Upgrades a specified cluster.
- [prune](/tiup/tiup-component-dm-prune.md): Cleans up instances in the Tombstone status for a specified cluster.
- [edit-config](/tiup/tiup-component-dm-edit-config.md): Modifies the configuration of a specified cluster.
- [reload](/tiup/tiup-component-dm-reload.md): Reloads the configuration of a specified cluster.
- [patch](/tiup/tiup-component-dm-patch.md): Replaces a specified service in a deployed cluster.
- [destroy](/tiup/tiup-component-dm-destroy.md): Destroys a specified cluster.
- [audit](/tiup/tiup-component-dm-audit.md): Queries the operation audit log of a specified cluster.
- [help](/tiup/tiup-component-dm-help.md): Prints help information.
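As a quick illustration of how the global options combine with these commands, here is a sketch. The cluster name `dm-test`, the DM version, and the topology file name are placeholders chosen for this example, not values taken from this document:

```shell
# Deploy a DM cluster from a topology file, using the operating system's
# SSH client and skipping confirmation prompts (useful when scripting TiUP).
tiup dm deploy dm-test v2.0.7 topology.yaml --ssh system --yes

# Check the cluster status, allowing a longer SSH connection timeout.
tiup dm display dm-test --ssh-timeout 10
```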
# Quick reference

- Maintained by: [Royal Software GmbH](https://royal-software.de/email-ocr/)
- Where to get help: [the Docker Community Forums](https://forums.docker.com/), [create a ticket on GitHub](https://github.com/pgruener/public-ocr-api)

# What is **Public OCR API** (POA)?

Public OCR API is an HTTP web service (based on Node.js) that implements the **Public OCR API definition** (POAD). With this **POA** implementation you can easily enable OCR detection for your application and have your free OCR service up and running in seconds.

**POA** implements the **POAD** interface and uses [OCRmyPDF](https://github.com/jbarlow83/OCRmyPDF) under the hood to perform the *real work* of OCR recognition. OCRmyPDF itself makes use of HP/Google's tesseract library, a basic AI for character recognition. That's a huge advantage, as it is a well-maintained project that is used in many real-world use cases and provides [hundreds of trained language data files](https://github.com/tesseract-ocr/tessdata).

Refer to the up-to-date API: https://www.postman.com/rs-public

# What is the **Public OCR API definition** (POAD)?

There are many software vendors and solutions that are used to recognize text data in images or documents. And there are increasingly many use cases that make OCR necessary to proceed with digitalization.

The bad thing is that all of those services have different interfaces. So if a company decides to improve its processes using OCR, it is forced to go through some of the following steps:

- Check if your enterprise software provides integration possibilities for one or more OCR service vendors.
  - If yes: you will be forced to buy the solution from the one vendor that is connectable out-of-the-box to your system.
  - If not: the integration into your system(s) needs to be built (or ordered) by you. You'll have to find several solutions and test which one best meets your needs.
From now on, you're locked in, and each switch to another software solution will require the same effort again.

That's why we decided to define a standard for a RESTful HTTP API that several OCR vendors may choose to implement. Enterprise solutions also become much more flexible if they build their interface against this standard, as every software solution that already provides the interface then becomes pluggable out-of-the-box. As most vendors already implement some kind of RESTful interface, it should not be a huge effort for them, especially as our main goal is to keep the interface as simple as possible.

All software solutions that want to integrate OCR software should choose to implement the **Public OCR API definition** (POAD). They can then use this image to integrate an open-source solution for their customers at no additional cost. If their customers need another solution, they can simply choose between all **POA**-supporting vendors (OCR solution vendors that support the **POAD** interface). They can then easily license other services and plug them in without any additional effort.

# How to use this image

## Start the POA service

Starting your first POA instance is pretty easy. Just log into your docker environment and run the following command:

    $ docker run -d -p 5000:5000 rspub/ocr-api:0.0.5

... where you might adjust the port and tag to your needs.

That's it! You can now browse to `http://your-servers-fqdn:5000` and recognize the characters of your first images or PDF documents.

**or on the command line**

`curl -F 'file=@/filepath.pdf' http://your-servers-fqdn:5000 -H "Accept: application/json" --output /filepath_out.pdf`

## ... via docker stack deploy or docker-compose

Example `poa-stack.yml` file for POA:

    version: '3.7'

    services:
      ocr_sandbox:
        image: rspub/ocr-api:0.0.5
        environment:
          - OCR_SANDBOX_PROCESSING=true
          - OMP_THREAD_LIMIT=1
          - MAX_PAGES=3

Run `docker stack deploy -c poa-stack.yml poa` (or `docker-compose -f poa-stack.yml up`), wait for it to initialize completely, and visit `http://swarm-ip:5000`, `http://localhost:5000`, or `http://your-servers-fqdn:5000` (as appropriate).

**or on the command line**

    $ curl -F 'file=@/filepath.pdf' http://your-servers-fqdn:5000 -H "Accept: application/json" --output /filepath_out.pdf

## Container shell access

The `docker exec` command allows you to run commands inside a Docker container. The following command line gives you a bash shell inside your POA container:

    $ docker exec -it `docker container ls -f "ancestor=rspub/ocr-api:0.0.5" -q` /bin/bash

Or if you made some changes in your own fork and are having trouble starting the container, you might want to dive into the container and debug things live:

    $ docker run -it --entrypoint /bin/bash `docker images rspub/ocr-api:0.0.5 -q` -s

## Viewing POA logs

The log is available through Docker's container log:

    $ docker container logs -f `docker container ls -f "ancestor=rspub/ocr-api:0.0.5" -q`

# Container configuration with environment variables

When you start the POA image, you can adjust the configuration of the instance's behaviour by passing one or more environment variables on the `docker run` command line.

## OCR_SANDBOX_PROCESSING

Indicates whether the container should run in sandbox mode. Usually you won't use this, as its only purpose is to display some extra information on the website informing the user that this instance is a sandbox and might be restricted and limited in its functionality.

## OCR_SYSTEM_LANG

Defines the default language of the web interface. Currently only **en** and **de** are supported. The language is automatically influenced by the visitor's browser.
If the visitor's preferred language is English, they'll get the page in English, no matter which setting was chosen here. It's just the fallback value for visitors who don't speak a supported language or prevented their browser from sending a preferred language.

## DEFAULT_OCR_LANG

Defines the default language for OCR recognition if no language was defined in the request. Setting it only makes sense if you run this container for documents of one single language. If you don't know which languages will be used in your images/documents, you should keep it empty, as well as the language parameter in the REST API. In this case, the detection will use all of the languages installed in the image. This is more flexible but will be slower.

## MAX_PAGES

If you want to limit the number of pages processed for each document, you can use this variable. If you set it to 3 (for example), then only 3 pages of the submitted document are processed. The API allows the parameter **fromPage** to set the starting point.

## OMP_THREAD_LIMIT

Documents are processed in several parallel processes across all available CPUs, and each process distributes its work over several threads. There are situations and CPU architectures where single-threaded recognition is faster than multithreaded recognition. With this parameter you can influence the number of threads in each parallel process, to test which setting fits best for you.

# Install more languages

More than 100 languages are available for tesseract. In this image the languages `de,en,fr,es,pt*` are included.
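The environment variables described above can be combined on a single `docker run` line. A sketch with illustrative values (the language code `deu` assumes tesseract-style three-letter codes and an installed German language pack; adjust to your setup):

```shell
# Start POA with a German default OCR language, a 3-page limit per document,
# and single-threaded recognition inside each worker process.
docker run -d -p 5000:5000 \
  -e DEFAULT_OCR_LANG=deu \
  -e MAX_PAGES=3 \
  -e OMP_THREAD_LIMIT=1 \
  rspub/ocr-api:0.0.5
```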
If you need more languages in one specific container, you can log into the container (described in [Container shell access](#container-shell-access)) and list all available languages with:

    $ apt-cache search tesseract-ocr

And install the desired ones using:

    $ apt-get install tesseract-ocr-`LANGUAGE(-BUNDLE) NAME`

If you need the languages in the image definition itself, you should fork this repository and add another instruction to the Dockerfile. If you implement a reliable solution (like specification via environment variables), we'd like to get your pull request for it.

# Extending / changing / customization

We highly appreciate it if you want to contribute to this project or just want to change something. Feel free to fork the repository and afterwards send us a pull request.

In the following sections there is a small list of docker commands which you might use to test and troubleshoot your first changes.

## building

    docker build -t rspub/ocr-api:0.0.5 .
    docker push rspub/ocr-api:0.0.5

... where the tag number should be adjusted.

## local building/testing

First, delete the current container:

    docker rm -f `docker container ls -f "ancestor=rspub/ocr-api:0.0.5" -q`

Build the new one from the local directory:

    docker build -t rspub/ocr-api:0.0.5 .

And run it:

    docker run -d -p 5000:5000 rspub/ocr-api:0.0.5

All commands quickly concatenated:

    docker rm -f `docker container ls -f "ancestor=rspub/ocr-api:0.0.5" -q` && docker build -t rspub/ocr-api:0.0.5 . && docker run -d -p 5000:5000 rspub/ocr-api:0.0.5

# Copyright and license

Code and documentation copyright 2020 [Royal Software GmbH](https://royal-software.de). Code is released under the MIT License. This means you may use all parts of this repository, in commercial or private projects, however it fits your needs.

The base of this image is [OCRmyPDF](https://github.com/jbarlow83/OCRmyPDF), which is released under a different but still very permissive license: the Mozilla Public License 2.0 (MPL-2.0).
There are several other components with different licenses under the hood, which you should check out.

# Disclaimer

The software is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.