# ParametricVELO
Parametric simulation of LHCb Run 3 VELO track hits for reconstruction studies. Ray traces tracks sampled from FONLL momenta and tests for intersection with the geometry using hierarchical bounding volumes.


---
title: Three Days of Rest
date: 13/09/2021
---
Jonah's escape did not go without problems. His short-lived "rest" was disturbed when God intervened by raising a storm. Jonah is saved from a watery grave when God commands a fish to swallow him.
Only in the belly of the great fish, where Jonah is forced to spend a three-day rest, does he understand how dependent he is on God. Sometimes God puts us in a place where we cannot rely on anything this world offers, so that we realize that Jesus is what we truly need.
`Read Jonah's prayer in the belly of the fish (see Jonah 2:2–10). What does he pray for?`
Even though Jonah was deep under water in a very dangerous situation, in his prayer he turns his thoughts to God's temple. "Yet I will look again toward your holy temple."
What is this about?
The temple was the focal point of this prayer, and so it should generally be in prayer. In the Old Testament there is essentially only one place where God can be found: He is in the temple (see Exod. 15:17; Exod. 25:8). The sanctuary is the central place to pray and to meet God.
Jonah, however, is not referring to the temple in Jerusalem; he speaks of the heavenly sanctuary (Jonah 2:8). That is where his hope lies, for that is where God and the salvation He offers truly come from.
At last Jonah understands this important truth. He has experienced God's grace. He has been saved. As the great fish vomits him out, he understands firsthand God's love for him, the prophet who ran away. He has certainly learned (albeit via a few detours) that the only safe way for a believer is to strive to live according to God's will.
Now he decides to do his duty and obey God's command. He finally heads in faith to Nineveh, for he is, after all, on his way to a very godless city whose inhabitants may not care for this foreign prophet who comes to tell them how evil they are.
`How can detaching from your surroundings help you see things from a perspective that is new to you?`
# rukin
A simple web file share.
+++
title = "The Internet Panopticon"
date = "2022-01-31"
author = "Gabe Houts"
cover = "img/panopticon.webp"
description = "In recent times, internet surveillance has gotten to the point that people have come to expect that their every move is being watched."
draft = "false"
type = "posts"
toc = false
+++
In the interconnected world of the internet, we have the power to share ideas and connect with people faster than ever. But with that power comes the power for our data to be used against us. In recent times, internet surveillance has gotten to the point that people have come to expect that their every move is being watched.
In the late 1700s, Jeremy Bentham, an English philosopher, theorized the concept of a new social control mechanism called the Panopticon. The Panopticon is defined as a circular prison with one layer of cells, all with their openings facing the center. In the center there is a tower for a guard to watch out into the cells. The prisoners are always vulnerable, and always able to be watched by the guard inside. However, the tower is so far away and the windows so small that they can never tell if they are being watched. The knowledge that they could be watched constantly changed the prisoners' behavior drastically. This is exactly what we have been seeing for over a decade in the online world.
The Panopticon serves as an almost perfect analogy for modern authority, especially on the internet. It's no secret that we are being watched online; it has come to be a widely accepted fact. For instance, after the revelations about the government's mass surveillance programs by whistle-blower Edward Snowden, the world was shaken by just how much information these government intelligence agencies had on their citizens. Investigations were launched, hearings were held, and basic reforms were made, yet the situation remains relatively unchanged. We all know we're watched, but blow it off as if it's a basic part of life. Just like the prisoners of the Panopticon, you can never know who's watching and when they're watching. There is simply nothing you can do to save your privacy. When it comes to our personal data, we've really let down our guard. Your personal information is scattered everywhere, and there is really nothing you can do to pry it out of the hands of companies, hackers, and government agencies. So the best thing to do for now is to take action: speak out for data protection legislation, and be careful who you trust with your information. Just because the guard is in the tower doesn't mean we can't put our sheets over the cell door for a bit of privacy.
-Gabe Houts
---
Have any questions, comments, or concerns? Please don't hesitate to [contact us](/contact-us)!
<[email protected]> | 120.608696 | 1,286 | 0.792358 | eng_Latn | 0.99993 |
# Telegram-Pawn
Make Telegram bots using the Pawn language.
## Sample
Simple code which sends back every text message it receives:
```cpp
#include "telegramPawn.inc"
main() { }
public OnInit()
{
//ClearConsole();
print("Initing...");
return 1;
}
public OnTextSend(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], text[])
{
printf("%s : %s", chatId, text);
new message[2048];
format(message, sizeof(message), "You said : %s", text);
SendMessage(fromId, message);
return 1;
}
```
Simple command:
```cpp
#include "telegramPawn.inc"
main() { }
public OnInit()
{
//ClearConsole();
print("Initing...");
return 1;
}
CMD:hi(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], params[])
{
new string[512];
format(string, sizeof(string), "Hello there ! : %s", params);
SendMessage(chatId, string);
SendPhoto(chatId, "https://telegram.org/img/t_logo.png?1", "Hmmmm");
return 1;
}
```
## Functions and callbacks
| Callback | Description | Usage |
| --- | --- | --- |
| OnInit | This callback is triggered when the script starts. | OnInit() |
| OnExit | This callback is triggered when the script ends. | OnExit() |
| OnCommand | This callback is triggered when a command is sent. | OnCommand(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], text[]) |
| OnTextSend | This callback is triggered when a text message is sent. | OnTextSend(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], text[]) |
| OnAudioSend | This callback is triggered when an audio file is sent. | OnAudioSend(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], duration, fileSize, fileId[], caption[]) |
| OnVideoSend | This callback is triggered when a video is sent. | OnVideoSend(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], duration, fileSize, fileId[], caption[]) |
| OnStickerSend | This callback is triggered when a sticker is sent. | OnStickerSend(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], isAnimated, height, width, setName[], fileId[]) |
| Parameter | Type | Description |
| --- | --- | --- |
| chatId | string | Id of the channel/group/user |
| fromId | string | Id of the user who sent the message |
| messageId | string | Id of the message |
| forwardFromMessageId | string | Id of the forwarded message; "-1" if the message is not forwarded |
| forwardFromId | string | Id of the chat the message was forwarded from; "-1" if the message is not forwarded |
| fileId | string | Id of the file |
| caption | string | caption of the message (Image/Video/Audio) |
| setName | string | Name of the sticker set |
| duration | int | duration of the video/audio |
| height | int | height of the video/sticker/video |
| width | int | width of the video/sticker/video |
| fileSize | int | size of the file |
| isAnimated | bool | Whether the sticker is animated |
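For example, a minimal sticker handler using the parameters above might look like the sketch below (it builds only on functions already shown in the samples, such as `format` and `SendMessage`):
```cpp
public OnStickerSend(chatId[], fromId[], messageId[], forwardFromMessageId[], forwardFromId[], isAnimated, height, width, setName[], fileId[])
{
    // Reply with the sticker's set name and its dimensions
    new info[256];
    format(info, sizeof(info), "Sticker from set %s (%dx%d)", setName, width, height);
    SendMessage(chatId, info);
    return 1;
}
```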
## Logging system
If you want to receive errors in Telegram, you should change `logForOwner` in the config.cfg file.
Note: you should set the id of the owner of the robot in your script using SetOwner.
```cpp
public OnInit()
{
SetOwner(id);//id = Id of a user, not a group/supergroup, not a channel
return 1;
}
```
## Licence
[The MIT License](https://github.com/NimaBastani/Telegram-Pawn/blob/main/LICENSE).
# kbd_backlight
Controls the keyboard backlight on Linux
This fork uses smc::kbd_backlight instead as needed for some machines such as the Macbook Air.
Forked from: https://github.com/Spauldo/kbd_backlight
----------
Options:
-e
Enable, set to 100%
-d
Disable, set to 0%
-l <num>
Set the level to <num> - must be between 0 and 100
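For example, assuming the program is invoked as `kbd_backlight`, setting the backlight to half brightness would look like:
```
kbd_backlight -l 50
```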
---
title: "安装和使用R包"
date: 2017-09-01
status: publish
output: html_notebook
categories:
- R
- R-Cookbook
tags:
- R
- Packages
---
# 问题
你想要安装和使用一个R包。
<!-- more -->
# 方案
如果你正在使用支持R的图形界面软件,应该存在通过菜单栏方式安装R包的选项(比如,常用的Rstudio中,可以点击菜单栏Tools中的Install Packages进行R包的安装)。
这里主要介绍如何用命令行来安装R包。
```
install.packages("reshape2") # reshap2为包名
```
在一个新R线程中使用该包之前,你必须先导入它。
```
library(reshape2)
```
如果你在一个脚本中使用该包,把这一行输入脚本中。
如果想要更新包,使用
```
update.packages()
```
如果你在Linux系统上使用R,管理员可能已经在系统上安装了一些R包,你将不能以上述方式对R包更新(因为你没有权限)。
********
原文链接:<http://www.cookbook-r.com/Basics/Installing_and_using_packages/>
# 其他
***
导入包也可以使用`require()`函数。
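Unlike `library()`, `require()` returns `TRUE`/`FALSE` instead of raising an error when a package is missing, which makes it convenient for guarded loading. A minimal sketch (not from the original article):
```R
# Install reshape2 on the fly if it cannot be attached
if (!require(reshape2)) {
  install.packages("reshape2")
  library(reshape2)
}
```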
Common package management commands:
| Command | Description |
| ------------------ | ---------------------------- |
| installed.packages | Returns a matrix with information about all installed packages |
| available.packages | Returns a matrix of all packages available in the repositories |
| old.packages | Returns a matrix of installed packages that have newer versions available |
| new.packages | Returns a matrix of packages available in the repositories but not yet installed |
| download.packages | Downloads a set of packages to a local directory |
| install.packages | Downloads and installs a set of packages from the repositories |
| remove.packages | Removes a set of installed packages |
| update.packages | Updates installed packages to their latest versions |
| setRepositories | Sets the current list of package repositories |
**Installing an R package from the command line**
```shell
R CMD INSTALL aplpack_1.1.1.tgz # install the aplpack package
```
**Installing R packages from other sources**
The `devtools` package provides tools for installing R packages from popular `Git` repositories and other URLs.
For example, to install the development version of `ggplot2`, you can use:
```R
# If devtools is not installed yet, install it first
install.packages("devtools")
library(devtools)
install_github("ggplot2")
```
See the corresponding help documentation for more information.
by 诗翔
---
title: Supercalifragilistic Running Rules
description: 'Read on about the competition rules, so you can mock your opponents, and win the coveted prizes.
You are responsible for following the rules, all error-40 is on you, no exceptions.'
---
# Get off the couch
Above all, you have to find your running shoes ...
Please add the rules for the competition.
Use [markdown format](https://guides.github.com/features/mastering-markdown/).
# Object Detection with YOLOv3
Object detection, as the name suggests, is detecting objects in an image or video through deep learning algorithms. The motive of object detection is to recognize and locate (localize) all known objects in a scene. With this kind of identification and localization, object detection can be used to count objects in a scene and determine and track their precise locations, all while accurately labeling them.
222fe71d8f9cfc88fc5efaf58b792d1f645ed695 | 569 | md | Markdown | pages/08.Fernerkundung/01.vorlesung/01.Baeume_und_Nachhaltigkeit/docs.de.md | MatthiasHinz/learn.opengeoedu.de | 9f2d3546b12fb365b92d3bddc074c1459a0c9b31 | [
"MIT"
] | null | null | null | pages/08.Fernerkundung/01.vorlesung/01.Baeume_und_Nachhaltigkeit/docs.de.md | MatthiasHinz/learn.opengeoedu.de | 9f2d3546b12fb365b92d3bddc074c1459a0c9b31 | [
"MIT"
] | null | null | null | pages/08.Fernerkundung/01.vorlesung/01.Baeume_und_Nachhaltigkeit/docs.de.md | MatthiasHinz/learn.opengeoedu.de | 9f2d3546b12fb365b92d3bddc074c1459a0c9b31 | [
"MIT"
] | null | null | null | ---
title: 'Motivation'
taxonomy:
category:
- docs
---
## Bäume & Nachhaltigkeit
Die Motivation für unsere Lerneinheiten liegt im Nachhaltigkeitsgedanken allgemein und konkret bezogen auf:
+ die Rolle von Wäldern auf globaler und lokaler Ebene und
+ die Bedeutung von Stadtbäumen und der Stadtentwicklung.
So soll das Waldmonitoring mit Hilfe von Satelliten sowie die Erfassung von Bäumen und Grünflächen im urbanem Kontext im Vordergrund stehen. Dabei sollen freie Geodaten verwendet werden.

| 33.470588 | 186 | 0.794376 | deu_Latn | 0.998958 |
2233ba46a74b548d8f48f8bbbf08f40c41d48d83 | 628 | md | Markdown | recipes/Python/576546_DateTimeSyncer_Daylight_Savings_Time/README.md | tdiprima/code | 61a74f5f93da087d27c70b2efe779ac6bd2a3b4f | [
"MIT"
] | 2,023 | 2017-07-29T09:34:46.000Z | 2022-03-24T08:00:45.000Z | recipes/Python/576546_DateTimeSyncer_Daylight_Savings_Time/README.md | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 32 | 2017-09-02T17:20:08.000Z | 2022-02-11T17:49:37.000Z | recipes/Python/576546_DateTimeSyncer_Daylight_Savings_Time/README.md | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 780 | 2017-07-28T19:23:28.000Z | 2022-03-25T20:39:41.000Z | ## DateTimeSyncer with Daylight Savings Time adjustment
Originally published: 2008-10-30 19:33:37
Last updated: 2008-10-31 15:52:18
Author: Jack Trainor
DateTimeSyncer synchronizes computer's current date and time with Daylight Savings adjustment
directly without depending on operating system or Python's time.localtime().
This allows programmers to update Daylight Savings Time rules without
depending on external updates to OS or Python.
Double-clicking this module will synchronize your computer's date and time on a Windows machine. It can be easily extended other time zones and other operating systems. | 52.333333 | 168 | 0.802548 | eng_Latn | 0.993544 |
223514568b76ac802cdce8e5f1213275083ed924 | 6,967 | md | Markdown | _posts/2015-12-22-less.md | RorschachZ/Test01 | f1af2d1ecfdd10d0af69f8524fdb4dce7bef7c39 | [
"MIT"
] | 2 | 2016-01-29T13:03:48.000Z | 2016-01-29T13:03:55.000Z | _posts/2015-12-22-less.md | zsyjx0115/zsyjx0115.github.com | 27dd3ca207118f8b175c186c747fe3018e77d592 | [
"MIT"
] | null | null | null | _posts/2015-12-22-less.md | zsyjx0115/zsyjx0115.github.com | 27dd3ca207118f8b175c186c747fe3018e77d592 | [
"MIT"
] | null | null | null | ---
layout: post
category : study
title: Less
brief: 学习
tags : [CSS]
excerpt: LESS 是动态的样式表语言,通过简洁明了的语法定义,使编写 CSS 的工作变得非常简单。CSS(层叠样式表)是一门历史悠久的标记性语言,同 HTML 一道,被广泛应用于万维网(World Wide Web)中。HTML 主要负责文档结构的定义,CSS 负责文档表现形式或样式的定义。
id: less20151222
---
{% include JB/setup %}
LESS 是动态的样式表语言,通过简洁明了的语法定义,使编写 CSS 的工作变得非常简单。
CSS(层叠样式表)是一门历史悠久的标记性语言,同 HTML 一道,被广泛应用于万维网(World Wide Web)中。HTML 主要负责文档结构的定义,CSS 负责文档表现形式或样式的定义。
作为一门标记性语言,CSS 的语法相对简单,对使用者的要求较低,但同时也带来一些问题:CSS 需要书写大量看似没有逻辑的代码,不方便维护及扩展,不利于复用。造成这些困难的很大原因源于 CSS 是一门非程序式语言,没有变量、函数、SCOPE(作用域)等概念。
LESS 为 Web 开发者带来了福音,它在 CSS 的语法基础之上,引入了变量,Mixin(混入),运算以及函数等功能,大大简化了 CSS 的编写,并且降低了 CSS 的维护成本,就像它的名称所说的那样,LESS 可以让我们用更少的代码做更多的事情
**[Less is More](http://lesscss.org/)**
**[Less中文官网](http://www.1024i.com/demo/less/)**
## Koala编译器
>koala是一个前端预处理器语言图形编译工具,支持Less、Sass、Compass、CoffeeScript,帮助web开发者更高效地使用它们进行开发。跨平台运行,完美兼容windows、linux、mac。
[下载Koala编译器](http://koala-app.com/index-zh.html)
编译器将LESS文件编译成为CSS文件,在HTML中引入使用。这里要强调的一点,LESS是完全兼容CSS语法的,也就是说,我们可以将标准的CSS文件直接改成 .less 格式,LESS编译器可以完全识别。
## Less语法
### Less注释
提供`//`注释方式
* `//`----------不会被编译,即编译成css文件后并不显示该注释
* `/**/`--------会被编译
### 变量
变量声明方式,使用@开头声明:**`@变量名:值`**
使用方式:
@test_width:200px;
.box{
width: @test_width;
}
LESS 中的变量和其他编程语言一样,可以实现值的复用,同样它也有生命周期,也就是 Scope(变量范围,开发人员惯称之为作用域),简单的讲就是局部变量还是全局变量的概念,查找变量的顺序是**先在局部定义中找**,如果找不到,则查找上级定义,直至全局。
变量也可使用在选择器上,而不仅仅是使用在样式属性上。
// Selector interpolation only works in 1.3.1+. Try it!
@theGoodThings: .food, .beer, .sleep, .javascript;
@{theGoodThings} {
font-weight: bold;
}
### 混合
若想让box元素也拥有border样式,以前的写法是将类名border加入box元素的class中;而使用Less可以直接这样写:
//混合
.border{
border: 2px solid #000;
}
.box{
.border;
}
若想都拥有border但是细节上有所不同,Less中可以传递参数来实现:
//带参数的混合
.border2(@border_width){
border: @border_width solid red;
}
.box{
.border2(10px, red);
}
混合也可设置默认值:
//带默认值的混合
.border3(@border_width:2px){
border: @border_width solid red;
}
.box{
.border3();
}
### 匹配模式
//朝下三角形,其他三角形需要去改变border-color和corder-style
.triangle{
width: 0;
height: 0;
overflow: hidden;
border-width: 10px
border-color: red transparent transparent transparent;
border-style: solid dashed dashed dashed;
}
使用Less匹配模式,解决三角形改变方向的麻烦写法。
//朝下三角形
.triangle(down, @w:5px, @c:#eee){
border-width: @w;
border-color: @c transparent transparent transparent;
border-style: solid dashed dashed dashed;
}
//朝上三角形
.triangle(top, @w:5px, @c:#eee){
border-width: @w;
border-color: transparent transparent @c transparent;
border-style: dashed dashed solid dashed;
}
//朝左三角形
.triangle(left, @w:5px, @c:#eee){
border-width: @w;
border-color: transparent @c transparent transparent;
border-style: dashed solid dashed dashed;
}
//朝右三角形
.triangle(right, @w:5px, @c:#eee){
border-width: @w;
border-color: transparent transparent transparent @c;
border-style: dashed dashed dashed solid;
}
//使用@_匹配所有的模式,所有的模式都会带下面的样式
.triangle(@_, @w:5px, @c:#eee){
width: 0;
height: 0;
overflow: hidden;
}
像 JavaScript 中 arguments一样,Mixins 也有这样一个变量:@arguments。@arguments 在 Mixins 中具是一个很特别的参数,当 Mixins 引用这个参数时,该参数表示所有的变量。
LESS 也采用了命名空间的方法来避免重名问题,于是乎 LESS 在 mixins 的基础上扩展了一下,看下面这样一段代码:
#mynamespace{
.home {...}
.user {...}
}
这样我们就定义了一个名为 mynamespace 的命名空间,将一些变量或者混合模块打包起来,如果我们要复用 user 这个选择器的时候,我们只需要在需要混入这个选择器的地方这样使用:
#mynamespace > .user
### 运算
加减乘除运算都支持:
@test_width:200px;
.box{
width: @test_width + 100;
}
.box2{
width: (@test_width - 100)* 5;
color: #ccc - 10;//颜色会变深一点,但用的少
}
### 嵌套规则
#### 嵌套一般用法
在我们书写标准 CSS 的时候,遇到多层的元素嵌套这种情况时,我们要么采用从外到内的选择器嵌套定义,要么采用给特定元素加 CLASS 或 ID 的方式。在 LESS 中我们可以这样写:
//HTML 片段
<div id="home">
<div id="top">top</div>
<div id="center">
<div id="left">left</div>
<div id="right">right</div>
</div>
</div>
//LESS 文件
#home{
color : blue;
width : 600px;
height : 500px;
border:outset;
#top{
border: outset;
width : 90%;
}
#center{
border: outset;
height: 300px;
width: 90%;
#left{
border:outset;
float : left;
width : 40%;
}
#right{
border:outset;
float : left;
width : 40%;
}
}
}
经过编译生成的 CSS 文件如下:
#home{
color: blue;
width: 600px;
height: 500px;
border: outset;
}
#home #top {
border: outset;
width: 90%;
}
#home #center {
border: outset;
height: 300px;
width: 90%;
}
#home #center #left {
border: outset;
float: left;
width: 40%;
}
#home #center #right {
border: outset;
float: left;
width: 40%;
}
从上面的代码中我们可以看出,LESS 的嵌套规则的写法是 HTML 中的 DOM 结构相对应的,这样使我们的样式表书写更加简洁和更好的可读性。同时,嵌套规则使得对伪元素的操作更为方便。
#### &的嵌套
&代表父元素选择器
a {
color: red;
text-decoration: none;
&:hover {// 有 & 时解析的是同一个元素或此元素的伪类,没有 & 解析是后代元素
color: black;
text-decoration: underline;
}
}
上述LESS文件,经过编译生成的 CSS 文件如下:
a {
color: red;
text-decoration: none;
}
a:hover {
color: black;
text-decoration: underline;
}
它不是代表最近的父元素选择器,而是包含所有祖先元素的选择器
.header {
.menu {
border-radius: 5px;
.no-borderradius & {
background-image: url('images/button-background.png');
}
}
}
编译后为:
.header .menu {
border-radius: 5px;
}
.no-borderradius .header .menu {
background-image: url('images/button-background.png');
}
#### 嵌套 Media Queries
.one {
@media (width: 400px) {
font-size: 1.2em;
@media print and color {
color: blue;
}
}
}
编译输出:
@media (width: 400px) {
.one {
font-size: 1.2em;
}
}
@media (width: 400px) and print and color {
.one {
color: blue;
}
}
### 避免编译
有时候我们需要输出一些不正确的 CSS 语法或者使用一些 LESS 不认识的专有语法。
要输出这样的值我们可以在字符串前加上一个 ~,例如:
.class {
filter: ~"ms:alwaysHasItsOwnSyntax.For.Stuff()";
width: ~"calc(300px-30px)";
}
这叫作“避免编译”,输出结果为:
.class {
filter: ms:alwaysHasItsOwnSyntax.For.Stuff();
width: calc(300px-30px);
}
在避免编译的值中间也可以像字符串一样插入变量:
.class {
@what: "Stuff";
filter: ~"ms:alwaysHasItsOwnSyntax.For.@{what}()";
}
### 字符串插值
变量可以用像 @{name} 这样的结构,以类似 ruby 和 php 的方式嵌入到字符串中:
@base-url: "http://assets.fnord.com";
background-image: url("@{base-url}/images/bg.png");
### 选择器插值
如果需要在选择器中使用 LESS 变量,只需通过使用和字符串插件一样的 @{selector} 即可,例如:
@name: blocked;
.@{name} {
color: black;
}
**注意:(~"@{name}") 语句可以在 LESS 1.3.1 等之前版本中使用,但 1.4.0 版将不再支持这种用法。**
### LESS 与 SASS
同类框架还有 [SASS](http://sass-lang.com/) 。两者都属于 CSS 预处理器,功能上大同小异,都是使用类似程序式语言的方式书写 CSS, 都具有变量、混入、嵌套、继承等特性,最终目的都是方便 CSS 的书写及维护。
LESS对于初学者来说是极好的:它非常容易使用和设置,它跟CSS非常像,写起来非常直观,简单还有友好,我曾经非常喜欢LESS。
#### Sass
Sass有很多可用的方法和逻辑。例如:条件和循环语句。LESS也可以做到,但不是很高效且不直观。像LESS一样,Sass也内置了一些非常友好的函数,像颜色,数字,还有变量列表。
Sass用户可以使用功能强大的Compass库。这些库LESS用户也可以用,但并不完全一样,因为这是由一个庞大的社区来共同维护的。Compass有非常强大的特性,像自动生成图片切片(CSS Sprites),传统浏览器支持,还有对CSS3的跨浏览器支持等。
Compass同样允许你使用外部框架像Blueprint, Foundation 或 Bootstrap。这也意味着你可以非常容易的使用你喜欢的框架而不需要掌握各种不同的工具。
| 19.299169 | 151 | 0.664274 | yue_Hant | 0.515062 |
2236127319a4506a1ed94a46dd723f927941aa86 | 2,036 | md | Markdown | _posts/2020-06-27-Perfect-Squares.md | MaxMing0/MaxMing0.github.io | a5f269b3d529605bd90d8e38cc1511db51c37862 | [
"MIT"
] | 3 | 2020-08-05T02:09:33.000Z | 2022-01-08T17:30:45.000Z | _posts/2020-06-27-Perfect-Squares.md | MaxMing0/MaxMing0.github.io | a5f269b3d529605bd90d8e38cc1511db51c37862 | [
"MIT"
] | 27 | 2020-05-12T05:29:09.000Z | 2020-08-12T13:48:55.000Z | _posts/2020-06-27-Perfect-Squares.md | MaxMing0/MaxMing0.github.io | a5f269b3d529605bd90d8e38cc1511db51c37862 | [
"MIT"
] | 6 | 2020-05-11T03:20:45.000Z | 2022-03-27T21:20:16.000Z | ---
layout: post
title: LeetCode 279 Perfect Squares (Python)
number: 279
level: Medium
lcurl: perfect-squares
youtube: a7AjjNa0iHM
bilibili1: //player.bilibili.com/player.html?aid=456239294&bvid=BV1r5411Y7MH&cid=206613672&page=1
xigua:
date: 2020-06-27
author: 小明MaxMing
header-img: img/post-bg-coffee.jpeg
catalog: false
tags:
- DP
- BFS
- Math
---
### 题目
Given a positive integer n, find the least number of perfect square numbers (for example, 1, 4, 9, 16, ...) which sum to n.
### 解题思路
1. DP dp[n] = min(dp[n-k]) + 1 k is square numbre
2. BSF, 从n开始,拆出完全平方数,直到剩下的数是一个完全平方数结束
3. 数学方法
### 代码
```python
class Solution:
def numSquares(self, n: int) -> int:
square_nums = [i**2 for i in range(1, int(math.sqrt(n)) + 1)]
dp = [float('inf')] * (n + 1)
dp[0] = 0
for i in range(1, n+1):
for square in square_nums:
if i < square:
break
dp[i] = min(dp[i], dp[i - square] + 1)
return dp[-1]
```
```python
class Solution:
def numSquares(self, n: int) -> int:
square_nums = [i**2 for i in range(1, int(math.sqrt(n)) + 1)]
level = 0
queue = {n}
while queue:
level += 1
tmp = set()
for num in queue:
for sq in square_nums:
if num == sq:
return level
elif num < sq:
break
else:
tmp.add(num - sq)
queue = tmp
```
```python
class Solution:
def numSquares(self, n: int) -> int:
def check(n):
return (math.sqrt(n) - int(math.sqrt(n)) < 0.00000001)
if check(n):
return 1
while n % 4 == 0:
n /= 4
if (n - 7) % 8 == 0:
return 4
for i in range(int(math.sqrt(n))):
if check(n-(i+1)*(i+1)):
return 2
return 3
```
| 25.45 | 123 | 0.481827 | eng_Latn | 0.620762 |
223659e705eb9f4ba215c92237e631dcf31c3b7d | 575 | md | Markdown | en/device-dev/guide/device-camera-visual.md | acidburn0zzz/openharmony | f19488de6f7635f3171b1ad19a25e4844c0a24df | [
"CC-BY-4.0"
] | 35 | 2021-08-30T08:11:25.000Z | 2022-02-10T03:39:10.000Z | en/device-dev/guide/device-camera-visual.md | acidburn0zzz/openharmony | f19488de6f7635f3171b1ad19a25e4844c0a24df | [
"CC-BY-4.0"
] | 9 | 2021-09-17T08:41:42.000Z | 2021-09-22T11:03:54.000Z | en/device-dev/guide/device-camera-visual.md | acidburn0zzz/openharmony | f19488de6f7635f3171b1ad19a25e4844c0a24df | [
"CC-BY-4.0"
] | 6 | 2021-09-13T11:12:34.000Z | 2022-02-25T15:13:21.000Z | # Visual Application Development<a name="EN-US_TOPIC_0000001111199420"></a>
- **[Overview](device-camera-visual-overview.md)**
- **[Preparations](device-camera-visual-prepare.md)**
- **[Adding Pages](device-camera-visual-addpage.md)**
- **[Building the Home Page](device-camera-visual-firstpage.md)**
- **[Building the Details Page](device-camera-visual-details.md)**
- **[Debugging and Packaging](device-camera-visual-debug.md)**
- **[Running on the Device](device-camera-visual-run.md)**
- **[FAQs](device-camera-visual-faqs.md)**
| 28.75 | 75 | 0.671304 | eng_Latn | 0.188659 |
2236b482be29fd8ac75cf3a0fc61e32b3c57fd4f | 2,176 | md | Markdown | README.md | scimax/python-core-windows | 43e0b0dae127240abe3fa7d7691e05964c14d828 | [
"CNRI-Python",
"Xnet",
"X11"
] | null | null | null | README.md | scimax/python-core-windows | 43e0b0dae127240abe3fa7d7691e05964c14d828 | [
"CNRI-Python",
"Xnet",
"X11"
] | null | null | null | README.md | scimax/python-core-windows | 43e0b0dae127240abe3fa7d7691e05964c14d828 | [
"CNRI-Python",
"Xnet",
"X11"
] | null | null | null | # python-core-windows
Windows Server Core container running python 3.6.0 and pip 9.0.1. The docker image is based on the [Windows Nano Server for python](https://github.com/LBates2000/python-windows) by lbates2000. The nano server was replaced with the server core container as the [ctypes](https://docs.python.org/3/library/ctypes.html) library for foreign functions does not work properly.
# Issue with the nano server
The following snippet is a minimum code required to reproduce the problem based on `__init__.py` in the [comtypes](https://github.com/enthought/comtypes/blob/36e997b11cbfd9fe755b0dc58723eb915edcc393/comtypes/__init__.py#L156) module:
```py
from ctypes import *
import sys
COINIT_APARTMENTTHREADED = 0x2
_ole32 = oledll.ole32
flags = getattr(sys, "coinit_flags", COINIT_APARTMENTTHREADED)
_ole32.CoInitializeEx(None, flags)
```
The traceback reports
```py
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "_ctypes/callproc.c", line 918, in GetResult
OSError: [WinError -2147467263] Not implemented
```
Unfortunately, using the windows core server instead of the nano server blows up the size of the container by *10GB*. But ctypes works with this container base.
# Build
After cloning the repository from [github](https://github.com/scimax/python-core-windows) run:
```
docker build -t python-core-windows .
```
# Usage
This was built by installing Python on a Windows 10 x64 machine and then copying the installation to the container and setting the python path.
You can run it interactively
```
docker run -it python-windows
```
or you can use it as a base for another container, as well as upgrade PIP and install additional Python modules, by adding the appropriate commands to the new container's Dockerfile.
For example:
```
FROM python-windows:latest
ENV APP_DIR C:/apps
ADD requirements.txt $APP_DIR/requirements.txt
WORKDIR $APP_DIR
RUN \
python -m pip install --upgrade pip && \
python -m pip install -r requirements.txt
ENTRYPOINT ["C:\\Python\\python.exe"]
```
Container is hosted on Docker Hub:
https://hub.docker.com/r/scimax/python-core-windows/
```
docker pull scimax/python-core-windows
```
##### Using asynchrony with AJAX
Now that we have learned about asynchrony, and how we can use AJAX and its dynamic requests, here are the goals for today's exercise:
Build a page that allows reading the data of a list of users available at https://reqres.in/api/users?page=2. You need to create a list with the image and the name of the users; each user will be inserted in an <li>.
The URL https://reqres.in/api/users?page=2 must be requested with fetch().
The whole process must be inside an addEventListener "click" handler, since a button will be responsible for loading the users into the page.
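A minimal sketch of one possible solution is below; the `loadButton` and `userList` ids are assumptions for illustration, not part of the exercise statement:
```js
const loadButton = document.querySelector("#loadButton");
const userList = document.querySelector("#userList");

loadButton.addEventListener("click", () => {
  // reqres.in returns an object whose "data" field holds the users
  fetch("https://reqres.in/api/users?page=2")
    .then((response) => response.json())
    .then(({ data }) => {
      data.forEach((user) => {
        const li = document.createElement("li");
        li.innerHTML = `<img src="${user.avatar}" alt=""> ${user.first_name} ${user.last_name}`;
        userList.appendChild(li);
      });
    });
});
```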
---
title: "Understanding Code Scalability"
slug: dirac-code-scaling-understanding-code-scalability
teaching: 0
exercises: 0
questions:
- "What is code scalability?"
- "Why is code scalability important?"
- "How can I measure how long code takes to run?"
objectives:
- "Describe why code scalability is important when using HPC resources."
- "Explain the difference between wall time and CPU time."
- "Describe the differences between strong and weak scaling."
- "Summarise the dangers of premature optimisation."
keypoints:
- "To make efficient use of parallel computing resources, code needs to be scalable."
- "Before using new code on DiRAC, it's strong and weak scalability profiles has to be measured."
- "Strong scaling is how the solution time varies with the number of processors for a fixed problem size."
- "Weak scaling is how the solution time varies with the number of processors for a fixed problem size for each processor."
- "Strong and weak scaling measurements provide good indications for how jobs should be configured to use resources."
- "Always profile your code to determine bottlenecks before attempting any non-trivial optimisations."
---
When we submit a job to a cluster that runs our code, we have the option of specifying the number of CPUs (and in some cases GPUs) that will be allocated to the job. We need to consider to what extent that code is *scalable* with regards to how it uses these resources, to avoid the risk of consuming more resources than can be effectively used. As part of the application process for having new code installed on DiRAC, its scalability characteristics need to be measured. This helps inform how best to assign CPU resources when configuring jobs to run with that code.
There are two primary measures of execution time we need to consider for any given code:
- **Wall clock time (or actual time)** - this is the time it takes to run from start of execution to the end, as measured on a clock. In terms of scaling measurements, this does not include any time waiting for the job to start.
- **CPU time** - this is the time actually spent running your code on a CPU, when it is processing instructions. This does not include time waiting for input or output operations, such as reading in an input file, or any other waiting caused by the program or operating system.
<!-- In some cases where you are running on a system which is shared by other users, your run may be swapped out to enable other users to use the system. Most systems within DiRAC are configured to have exclusive access. So your code will not be competing with other programs on the compute nodes, but may be competing on the network fabric, or for storage bandwidth. -->
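As a quick illustration of the difference between the two measures above, both can be read directly from most languages' standard libraries; here is a minimal Python sketch (not part of this lesson's exercises):

```python
import time

wall_start = time.perf_counter()  # wall clock time
cpu_start = time.process_time()   # CPU time of this process only

total = sum(i * i for i in range(10_000_000))  # stand-in for real work

print(f"wall time: {time.perf_counter() - wall_start:.2f} s")
print(f"CPU time:  {time.process_time() - cpu_start:.2f} s")
```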
## How can we Characterise a Code's Scalability?
Before we consider running and using code on an HPC resource, we need to understand it's *scaling profile* - so we can determine how the code will scale as we add more CPU cores to running it. That way, when we run code we can request a suitable amount of resources with minimal waste. There are two types of scaling profile we need to determine:
- **Strong scaling:** Strong scaling is how well your code run times changes whilst keeping the problem size constant, but increasing the number of CPU cores (and nodes) being used. Ideally, a 100% scalable application will have a profile that halves the time to complete when given twice as many cores. This is rarely met, as most real world applications have some serial portion or unintended delays (such as communication overheads) which will limit the code's scalability.
- **Weak scaling:** This is similar to strong scaling, but in this case as we increase the number of cores, we also increase the problem size by the same factor. This type of scaling is more easily met, and should result in a flat line in run times vs. core count if the code has good weak scaling characteristics. In theory this removes the impact of the serial portion of your code, but in reality there is still a limit.
<!-- - the improvement in speed of execution of a task executed on two similar architectures with different resources. -->
Once we understand these scaling profiles for our code, we'll have an idea of the **speedup** capable when using multiple cores. These measurements give us good indications for how our code should be specified on DiRAC, in terms of the overall job size and the amount of resources that should be requested.
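A common way to reason about the strong-scaling limit is Amdahl's law (useful background, though not required for this lesson): if a fraction $$s$$ of a program's run time is inherently serial, then the best possible speedup on $$N$$ cores is

$$ S(N) = \frac{1}{s + (1 - s)/N} \le \frac{1}{s} $$

so, for example, even a 5% serial portion caps the achievable speedup at 20x, no matter how many cores a job requests.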
## I'm a Developer, Should I Optimise my Code?
As a developer, if your code happens to take too long to run or scales badly it's tempting to dive in and try to optimise it straight away. But before you do, consider the following [three rules of optimisation](https://wiki.c2.com/?RulesOfOptimization):
1. Don't,
2. Don't... *yet*, and,
3. If you must optimise your code, *profile* it first.
In non-trivial cases premature optimisation is regarded as bad practice, since optimisation may lead to additional code complexity, incorrect results and reduced readability, making the code harder to understand and maintain. It is often effort-intensive, and difficult at a low level, particularly with modern compilers and interpreters, to improve on or anticipate the optimisations they already implement. A general maxim is to focus on writing understandable code and getting things working first - the former helps with the latter. Then, once strong and weak scaling profiles have been measured, if optimisation is justified you can *profile* your code, and work out where the majority of time is being spent and how best to optimise it.
So what is *profiling*? Profiling your code is all about understanding its complexity and performance characteristics. The usual intent of profiling is to work out how best to *optimise* your code to improve its performance in some way, typically in terms of speedup or memory and disk usage. In particular, profiling helps identify *where* bottlenecks exist in your code, and helps avoid summary judgments and guesses which will often lead to unnecessary optimisations.
> ## Profilers
>
> Each programming language will typically offer some open-source and/or free tools
> on the web, which you can use to profile your code. Here are some examples of
> tools. Note though, depending on the nature of the language of choice, the
> results can be hard or easy to interpret. In the following we will only list
> open and free tools:
>
> - Python: [line_profiler](https://github.com/pyutils/line_profiler),
> [prof](https://docs.python.org/3.9/library/profile.html)
> - JavaScript: [firebug](https://github.com/firebug/firebug)
> - Ruby: [ruby-prof](https://github.com/ruby-prof/ruby-prof)
> - C/C++: [xray](https://llvm.org/docs/XRay.html),
> [perf](https://perf.wiki.kernel.org/index.php/Main_Page),
> [gprof](https://ftp.gnu.org/old-gnu/Manuals/gprof-2.9.1/html_mono/gprof.html)
> - R: [profvis](https://github.com/rstudio/profvis)
{: .callout }
Donald Knuth said *"we should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."* In short, optimise the obvious trivial things, but avoid non-trivial optimisations until you've understood what needs to change. Optimisation is often difficult and time consuming. Pre-mature optimization may be a waste of your time!
{% include links.md %}
---
home: true
heroImage: /images/ff-logo.png
footer: FiveFilters.org
title: FiveFilters.org Documentation
lang: en-GB
meta:
- name: description
content: Documentation for FiveFilters.org applications
- name: keyword
content: full-text rss, kindle, fivefilters, docs, api, howto, push to kindle, term extraction, install, pdf newspaper, feed creator
---
<style>
.hero h1, .hero p.description { display: none }
.content.custom {
text-align: center;
}
.content.custom li {
list-style: none;
}
</style>
# Get Started
Documentation for our products.
* [Push to Kindle](/push-to-kindle)
* [Full-Text RSS](/full-text-rss)
* [Feed Creator](/feed-creator)
* [PDF Newspaper](/pdf-newspaper)
# Forum
We answer questions on our community forum.
* [Forum](https://forum.fivefilters.org)
# Your account
Check your account and download software you've purchased.
* [Your account](https://member.fivefilters.org)
# Contact us
Any questions? E-mail us: [[email protected]](mailto:[email protected]).
This is an example project used in the "Building Microservices with Cricket" book.
See:
https://www.gitbook.com/book/gskorupa/building-microservices-with-cricket
Event history analysis helper package
=====================================
Helper functions for event history analysis with the survival package.
Install
-------
Install from GitHub repository.
``` r
library(devtools)
install_github('junkka/ehahelper')
```
Functions
---------
#### ggsurv
Make a data.frame of a `survfit` or `coxph` object for visualization with ggplot2.
``` r
library(ehahelper)
library(survival)
library(ggplot2)
surv_object <- coxph(Surv(time, status) ~ strata(x), data = aml)
ggplot(ggsurv(surv_object), aes(time, surv, color=strata)) + geom_step()
```

#### gg\_zph
ggplot2 implementation for visualizing scaled Schoenfeld residuals from a cox.zph object.
``` r
bladder1 <- bladder[bladder$enum < 5, ]
fit <- coxph(Surv(stop, event) ~ (rx + size + number) * strata(enum) +
cluster(id), bladder1)
x <- cox.zph(fit, transform = "identity")
gg_zph(x)
```
## `geom_smooth()` using method = 'loess'

``` r
gg_zph(x, log = TRUE)
```
## `geom_smooth()` using method = 'loess'

#### coxme tidier
Convert coxme objects to tidy format.
``` r
library(broom)
library(coxme)
fit <- coxme(Surv(y, uncens) ~ trt + (1|center), eortc)
knitr::kable(tidy(fit, exp = T), digits = 3)
```
| term | estimate| std.error| statistic| p.value| conf.low| conf.high|
|:-----|---------:|----------:|----------:|--------:|---------:|----------:|
| trt | 2.031| 0.064| 11.03| 0| 1.791| 2.304|
``` r
fit_g <- glance(fit)
knitr::kable(as.data.frame(t(fit_g)), digits = 3)
```
| | V1|
|--------------------------|-----------:|
| n | 2323.000|
| events | 1463.000|
| Chisq | 236.110|
| df | 2.000|
| logLik | -10478.839|
| p | 0.000|
| AIC | 21015.051|
| BIC | 21166.750|
| random\_n\_center | 37.000|
| random\_sd\_center | 0.329|
| random\_variance\_center | 0.108|
#### coxme augment
Using a coxme model, add fitted values and standard errors to original dataset.
``` r
eortc_augmented <- augment(fit, eortc)
knitr::kable(head(eortc_augmented))
```
| | y| uncens| center| trt| .fitted| .se.fit|
|-----|----------:|-------:|-------:|----:|-----------:|----------:|
| 2 | 506.1603| 1| 1| 1| 0.2037681| 0.0184739|
| 3 | 294.3800| 1| 1| 1| 0.2037681| 0.0184739|
| 4 | 383.9152| 1| 1| 0| -0.5048446| 0.0457700|
| 5 | 2441.8338| 0| 1| 0| -0.5048446| 0.0457700|
| 6 | 2442.2923| 0| 1| 0| -0.5048446| 0.0457700|
| 7 | 312.3571| 1| 1| 1| 0.2037681| 0.0184739|
#### coxme predict
Get predicted values based on a mixed-effects Cox model, fitted using the coxme package. Extends the standard predict.coxme function by allowing for new data, and by calculating relative risks, either overall or within stratum.
``` r
new_data <- data.frame(trt = unique(eortc$trt))
predict_coxme(fit, newdata = new_data, type = "risk")
```
## 1 2
## 1.2260138 0.6035994
ruby-examples-code
==================
Basic Ruby code.
arithmetic.rb
array.rb
class.rb
control_structures.rb
functions.rb
hello_world.rb
logic.rb
variables.rb
---
layout: post
title: Cut-rate, delayed raises
subtitle: It's a no
---
Cut-rate, delayed raises: it's a no!

---
layout: post
title: "My first post on GitHub Pages"
date: 2018-08-23 14:10:00
---
First post - let's do this well!
A celebrity I like -

https://user-images.githubusercontent.com/40019404/44506039-97d47080-a6df-11e8-8217-b237b8290ab2.jpg
# Eiyuu Gaiden Mozaicka

- **type**: ova
- **episodes**: 4
- **original-name**: 英雄凱伝モザイカ
- **start-date**: 1991-11-21
- **end-date**: 1991-11-21
- **rating**: PG-13 - Teens 13 or older
## Tags
- fantasy
## Synopsis
From Ryosuke Takahashi and Norio Shioyama, the creative minds behind great animes such as Armored Trooper VOTOMs and Panzer World Galient, comes an epic tale of swords and sorcery! On the eternal planet of Mozaika, the formerly benevolent King Sazara has been possessed by an evil priest and has begun a bloody path of conquest all across the world! Mozaika's only hope is the young warrior U-Taruma, son of U-Dante, one of Sazara's victims. If you like fantasy, or classic Norio Shioyama character designs, you can't miss this 4-episode OVA!
## Links
- [My Anime list](https://myanimelist.net/anime/8855/Eiyuu_Gaiden_Mozaicka)
- [AnimeDB](http://anidb.info/perl-bin/animedb.pl?show=anime&aid=5016)
- [AnimeNewsNetwork](http://www.animenewsnetwork.com/encyclopedia/anime.php?id=7266)
# CMSingle
CMSingle is an open source CMS that uses only a single php file. No database is required. It can only be used with html projects and partially with php projects. But no php will be executed in editing mode (yet).
## Installation
1. Add admin.php to the project directory where you want to edit your HTML files.
2. Configure a password by changing `define("PASSWORD", 'YOUR HASHED PASSWORD HERE');`. (Keep the single quotes.)
To create a hashed password, execute this from the command line:
`php -r 'echo password_hash("YOUR PASSWORD HERE", PASSWORD_DEFAULT);'`
3. Profit
This program requires PHP 7 or higher.
## Usage
Go to {your website here}/admin.php
Here you will be taken to your homepage, and you will have an additional menu on the bottom right showing all webpages in the directory and a save button. If you click on a page you will go to that page in edit mode. When you press save, the page will be automatically saved in the website.
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## License
[MIT](https://choosealicense.com/licenses/mit/)
@def published = Date(2021, 05, 18)
@def hascode = false
@def title = "Biosensor Startups"
@def authors = """Matthijs"""
@def rss = "Researching local biosensor startups and my own desires."
@def tags = ["startup", "biosensor"]
# Biosensor Startups
A month ago I wrote an article describing my [original desire](https://www.functionalnoise.com/pages/2021-04-04-biosensor-goal/) to research and develop biosensors. The primary goal was to find a noninvasive blood pressure sensor, and I noted I drifted away from that goal. I brainstormed about many ways to continue. Ever since writing this article I've been mostly busy talking to people who responded. Seems I sent a strong enough signal.
First of all, maybe the blood pressure problem is big enough that startups or big corporations will slowly solve it. Here's a list of [blood pressure sensing wearables](https://www.superwatches.com/fitness-trackers-for-blood-pressure/). That's lovely news. It's much easier to just buy a ready device than to hack something together, even if hacking is way cooler and may better suit my exact needs.
I've been particularly interested in finding local biosensor related startups. This may be for selfish reasons, other than simply achieving the stated goal. I have been really curious about startup life for years now. At work I am often the crazy one who tackles difficult uncertain projects. I like that kind of learning, exploring and hacking.
Creating or joining such a startup might be a way to combine multiple of my desires. If the big corporations fail to produce a blood pressure sensor, at least I am in a better position to continue their work. I can learn a lot of new skills. There is somekind of hacker status associated with joining a startup, which I like. If I join early and it succeeds, well... that's very nice.
There are some risks involved. I am the sole income provider for our family at the moment. That's the biggest issue to tackle. Also people proclaim to work more/harder at startups. If I count all my freetime research and blogging as work, then I also work a lot, but it is less 'committed', so may count less in professional story telling. Managers and investors seem to be suckers for such 'work long and hard' stories. My strength comes from working crazy effective while my energy is high, followed by periods where I work less to rest. I need enough slack for that.
Anyway, the most important part is to move confidently ahead despite the uncertainty. People (including myself) like confident people. So I just move on, acting like I know what I am doing. That seems to be the trick.
## Local startups
Let's do some logging. Which startups have I found so far?
I started collaborating with [BrainFlow](https://brainflow.org/) founder Andrey, which I [precommitted to doing](https://medium.com/symbionic-project/symbionic-postmortem-3c96dda3a0b6). BrainFlow is not a startup, it's an open source library, but Andrey is a freelancer helping biosensor startups. BrainFlow is a library I wish I had years ago, so helping Andrey is good. I've been brainstorming with Andrey if we can build some paid services around BrainFlow, and this gave me incentive to talk to biosensor companies.
[Mentech](https://mentechinnovation.eu/) is a biosensor startup in the Eindhoven area. They are aiming for emotion detection for vulnerable humans. I know the CTO personally, so we had a chat. We explored if BrainFlow could be valuable for them, but the CEO didn't like the concept. They believe they can build their entire product and services by themselves. They did loan me a OpenBCI EEG headset to experiment with. If I want to work further with them, I would have to give up on my desire to create my own biosensor startup, since they want to own all industry relevant intellectual property.
[AlphaBeats](https://www.listenalphabeats.com/) creates a biofeedback app that uses music for better relaxation. They are also exploring all kinds of biosensors on the market to integrate with their solution. I first had a chat with their data scientist, which was fun. A more mature BrainFlow might be useful for them, but software startups like to build their own stuff. I was referred to them again after I accidentally spoke to their investor at Lumo Labs.
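For a sense of what a relaxation-focused biofeedback loop might compute under the hood: a common heuristic is to track EEG band powers, where relatively more alpha than beta is often read as "calmer". The sketch below is purely illustrative, built on BrainFlow's helpers and the synthetic board; I have no insight into AlphaBeats' actual pipeline.

```python
import time

from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
from brainflow.data_filter import DataFilter

board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())
board.prepare_session()
board.start_stream()
time.sleep(10)  # collect ~10 seconds of (synthetic) EEG
data = board.get_board_data()
board.stop_stream()
board.release_session()

sampling_rate = BoardShim.get_sampling_rate(board_id)
eeg_channels = BoardShim.get_eeg_channels(board_id)

# Average band powers over the EEG channels: returns the
# (delta, theta, alpha, beta, gamma) means plus standard deviations.
bands, _stds = DataFilter.get_avg_band_powers(data, eeg_channels, sampling_rate, True)
delta, theta, alpha, beta, gamma = bands

# Crude "relaxation" score: more alpha relative to beta reads as calmer.
print(f"alpha/beta ratio: {alpha / beta:.2f}")
```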
[Ares Analytics](https://ares-analytics.com/) is a startup from our local Eindhoven University of Technology. They aim to measure human physical performance better, primarily targeting athletes. I have met the CEO before during a startup competition. I can't say too much, since they made me swear secrecy, but they have a lot of biosensors and data, so that'd be interesting to work with.
Less related, but I also spoke to the founder of [SenseGlove](https://www.senseglove.com/), which makes VR gloves with haptic feedback. Interestingly, the founder started out by playing with biosensors like the Myo armband. He then contacted rehabilitation clinics to find better solutions for them. That's one of the options I was advised to investigate by an ex-founder of [SWORD Health](https://swordhealth.com/). But somehow SenseGlove ended up developing VR gloves focused on virtual training simulations, and is now trying to move into the gaming world. Interesting story.
[Eqipa](https://www.eqipa.io/) is the startup of a colleague at ASML, Simon van Gorp. Like Ares, it's focused on improving athlete performance, but as far as I know it doesn't use new biosensors; it primarily applies machine learning to the data sports clubs already have, to predict injuries ahead of time. Simon started it on the side while continuing to work for ASML, which is a path that interests me.
I've had a short email conversation with the founder of [Berkelbike](https://berkelbike.nl/), who is creating rehabilitation solutions that sense muscles and electrically restimulate them while cycling. This was yet another attempt to find an application for BrainFlow, but they clearly have a closed-loop system with no need for an injected service. I like their mission though.
Some related startups in the local region that I haven't spoken with yet but that seem really cool: [GOAL3](https://goal3.org/), aiming at medical technology for developing nations, and [Hable One](https://www.iamhable.com/), a braille keyboard for smartphones. I also found [Kinetic Analysis](https://www.kinetic-analysis.com/) recently, after researching sports biosensors to better understand Ares Analytics and Eqipa.
A bunch of other non-local startups I connected with for my BrainFlow exploration:
* [Muse](https://choosemuse.com/), a headband for relaxation that we'd like to integrate into BrainFlow
* [Enophone](https://enophone.com/), headphones with EEG sensing for focus and relaxation
* [MyoTact](https://www.myotact.com/), a control armband for amputees
* [Reboocon](http://www.rbionics.com/), leg bionics
* [Emotibit](https://www.emotibit.com/), an affordable biosensor
* [Intheon](https://intheon.io/), neurotechnology middleware
* [BrainAttach](https://brainattach.com/), EEG brainwave sensors for e-sport athletes
* [Neurosity](https://neurosity.co/), headbands to help you focus
* And more, sorry if I forgot to mention you :(
NeuroTechX maintains a list of the [neurotech ecosystem](https://neurotechx.com/neurotech-ecosystem/), including hundreds of biosensor and biosignal-analytics companies. You could spend ages talking to all of them.
The weirdest connection was probably the Joint Artificial Intelligence Network, or ["JAIN"](https://www.jainprojects.com/). The founder/investor reached out after a LinkedIn post about my biosensor hobby project, since he wants more people working on AI solutions to dementia. If you are into this sort of thing, check it out; it's definitely a problem worth solving.
There are some interesting patterns I've noticed while talking to all these founders. First of all, they obviously all believe their own ideas are best and that the other startups are just lucky to have gotten this far. Secondly, most are trying to probe whether I am interested in joining them, often for some kind of technical leadership role. The latter seems to be the primary reason they want to make time for me.
## Intrapreneurs
At work I am also looking for intrapreneurs and possible co-founders. Intrapreneurs are not really rewarded, so those who exist do it for the kicks. If your internal project is successful, you get a nice compliment. If you are lucky, they offer to let you stay on as manager of your own project. If you don't like that, then in a few months they'll have conveniently forgotten about your involvement. I wonder how to reward intrapreneurs better. The safety of a fixed salary means you never get the outsized payouts of a successful entrepreneur with a large share of stock. What would work as better incentives? Perhaps the freedom to do as you please as a small reward? I tried to hint at this option in my previous post about [developer freedom](https://www.functionalnoise.com/pages/2021-05-04-developer-freedom/).
I'm using some of my freedom to find those intrapreneurial people. At the very least it leads to interesting discussions. At best I find an intrapreneurial or entrepreneurial co-founder. The [hardest part of a startup](http://www.paulgraham.com/start.html) is finding good people who complement you. Finding customers and building a product are also hard, but they seem somewhat easier if you have good and motivated people.
Am I moving in the right direction? I don't know, but I see the following patterns in my own actions and writing:
* A desire for more (financial) freedom.
* Growing the skills required for starting uncertain technical projects.
* Better thinking skills in general.
I'll curiously move on and keep reflecting on my path.
{{ addcomments }} | 113.790698 | 830 | 0.789189 | eng_Latn | 0.999177 |
223f0bc98572d17a1548981061ae6239f450b9f9 | 3,978 | md | Markdown | docs/framework/unmanaged-api/metadata/imetadataassemblyimport-interface.md | CharleyGui/docs.fr-fr | 2563c94abf0d041d775f700b552d1dbe199f03d5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/metadata/imetadataassemblyimport-interface.md | CharleyGui/docs.fr-fr | 2563c94abf0d041d775f700b552d1dbe199f03d5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/metadata/imetadataassemblyimport-interface.md | CharleyGui/docs.fr-fr | 2563c94abf0d041d775f700b552d1dbe199f03d5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IMetaDataAssemblyImport, interface
ms.date: 03/30/2017
api_name:
- IMetaDataAssemblyImport
api_location:
- mscoree.dll
api_type:
- COM
f1_keywords:
- IMetaDataAssemblyImport
helpviewer_keywords:
- IMetaDataAssemblyImport interface [.NET Framework metadata]
ms.assetid: 29c6fba5-4cea-417d-b484-7ed22ebff1c9
topic_type:
- apiref
ms.openlocfilehash: c556fe247754b363ece0c5dc60750068276ddcc4
ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/24/2020
ms.locfileid: "95714755"
---
# <a name="imetadataassemblyimport-interface"></a>IMetaDataAssemblyImport, interface
Fournit des méthodes pour accéder au contenu d'un manifeste d'assembly et l'examiner.
## <a name="methods"></a>Méthodes
|Méthode|Description|
|------------|-----------------|
|[CloseEnum, méthode](imetadataassemblyimport-closeenum-method.md)|Libère le handle de l’énumérateur spécifié.|
|[EnumAssemblyRefs, méthode](imetadataassemblyimport-enumassemblyrefs-method.md)|Obtient un pointeur d’interface vers un énumérateur qui contient les `mdAssemblyRef` jetons des assemblys référencés par l’assembly dans la portée de métadonnées actuelle.|
|[EnumExportedTypes, méthode](imetadataassemblyimport-enumexportedtypes-method.md)|Obtient un pointeur d’interface vers un énumérateur qui contient les `mdExportedType` jetons des types com référencés par l’assembly dans la portée de métadonnées actuelle.|
|[EnumFiles, méthode](imetadataassemblyimport-enumfiles-method.md)|Obtient un pointeur d’interface vers un énumérateur qui contient les `mdFile` jetons des fichiers référencés par l’assembly dans la portée de métadonnées actuelle.|
|[EnumManifestResources, méthode](imetadataassemblyimport-enummanifestresources-method.md)|Obtient un pointeur d’interface vers un énumérateur qui contient les `mdManifestResource` jetons des ressources référencées par l’assembly dans la portée de métadonnées actuelle.|
|[FindAssembliesByName, méthode](imetadataassemblyimport-findassembliesbyname-method.md)|Obtient un tableau de `mdAssemblyRef` jetons pour les assemblys avec le nom spécifié.|
|[FindExportedTypeByName, méthode](imetadataassemblyimport-findexportedtypebyname-method.md)|Obtient un `mdExportedType` jeton pour le type com avec le nom spécifié.|
|[FindManifestResourceByName, méthode](imetadataassemblyimport-findmanifestresourcebyname-method.md)|Obtient un `mdManifestResource` jeton pour la ressource avec le nom spécifié.|
|[GetAssemblyFromScope, méthode](imetadataassemblyimport-getassemblyfromscope-method.md)|Obtient le jeton de l’assembly dans la portée de métadonnées actuelle.|
|[GetAssemblyProps, méthode](imetadataassemblyimport-getassemblyprops-method.md)|Obtient les paramètres de propriété de l’assembly spécifié.|
|[GetAssemblyRefProps, méthode](imetadataassemblyimport-getassemblyrefprops-method.md)|Obtient les paramètres de propriété du `mdAssemblyRef` jeton spécifié.|
|[GetExportedTypeProps, méthode](imetadataassemblyimport-getexportedtypeprops-method.md)|Obtient les paramètres de propriété du type COM spécifié.|
|[GetFileProps, méthode](imetadataassemblyimport-getfileprops-method.md)|Obtient les paramètres de propriété du fichier spécifié.|
|[GetManifestResourceProps, méthode](imetadataassemblyimport-getmanifestresourceprops-method.md)|Obtient les paramètres de propriété de la ressource de manifeste spécifiée.|
## <a name="requirements"></a>Configuration requise
**Plateforme :** Consultez [Configuration système requise](../../get-started/system-requirements.md).
**En-tête :** Cor. h
**Bibliothèque :** Utilisé en tant que ressource dans MsCorEE.dll
**Versions de .NET Framework :**[!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## <a name="see-also"></a>Voir aussi
- [Interfaces de métadonnées](metadata-interfaces.md)
- [IMetaDataAssemblyEmit, interface](imetadataassemblyemit-interface.md)
| 65.213115 | 272 | 0.802162 | fra_Latn | 0.774975 |
223fcf179a68a386382b1ea31f2cae2a218943fc | 51 | md | Markdown | README.md | LAK132/STL | eba253f62d5787c28425ebe58547f8b7260969d7 | [
"MIT"
] | null | null | null | README.md | LAK132/STL | eba253f62d5787c28425ebe58547f8b7260969d7 | [
"MIT"
] | null | null | null | README.md | LAK132/STL | eba253f62d5787c28425ebe58547f8b7260969d7 | [
"MIT"
] | null | null | null | # STL
Heavily modified implementation of C++'s STL
| 17 | 44 | 0.764706 | eng_Latn | 0.91661 |
223ff6f3eb56c6740e7d0298813c6e93977f6f8e | 224 | md | Markdown | 07-video-cube/readme.md | foobar167/webgl | 6c1c149039db9cfe3f54a946b656380851fc604d | [
"MIT"
] | 2 | 2019-01-06T15:02:50.000Z | 2021-04-02T00:23:54.000Z | 07-video-cube/readme.md | foobar167/webgl | 6c1c149039db9cfe3f54a946b656380851fc604d | [
"MIT"
] | null | null | null | 07-video-cube/readme.md | foobar167/webgl | 6c1c149039db9cfe3f54a946b656380851fc604d | [
"MIT"
] | 1 | 2021-04-02T00:25:30.000Z | 2021-04-02T00:25:30.000Z | ### Step 7. Create video cube
Create ```<video>``` element to retrieve frames for texture.
Replace static texture with frames from an MP4 video file that's playing.

| 28 | 73 | 0.727679 | eng_Latn | 0.723126 |
224286c2c1e1b5f4703f7c1bc40bd009b710db57 | 1,124 | md | Markdown | about.md | Urielable/urielable.github.io | f89e3383ebab097078c3828340947c5fcaf827fe | [
"MIT"
] | null | null | null | about.md | Urielable/urielable.github.io | f89e3383ebab097078c3828340947c5fcaf827fe | [
"MIT"
] | null | null | null | about.md | Urielable/urielable.github.io | f89e3383ebab097078c3828340947c5fcaf827fe | [
"MIT"
] | null | null | null | ---
layout: page
title: Acerca de mí.
permalink: /about/
image: /assets/article_images/about/mty_1.jpg
---
Esposo, padre, programador, geek, bicicletero, lector, scout, corredor, entusiasta del conocimiento, el chico guapo de la oficina. Me niego a quedarme estático.
### Más información
Soy ingeniero en sistemas, egresado de la Facultad de Ingeniería Mecánica y Eléctrica, tengo cerca de 14 años trabajando en el área de tecnología, enfocado en su mayoría en el desarrollo de software.
En este tiempo he participado en muchas partes del ambiente de desarrollo de software, infraestructura, redes, base de datos, front end y backend, considero tengo una fuerte base de ingeniería que he aplicado en los proyectos que he participado.
Durante el transcurso de mi carrera he trabajado con varios lenguajes de programación, desde los interpretados hasta los compilados.
Especialmente he trabajado en Ruby y estoy iniciando con Elixir.
Mis pasatiempos son andar en bicicleta, pasar tiempo con mi familia, jugar videojuegos y leer.
Me gusta mucho programar.
### Contáctame
[[email protected]](mailto:[email protected])
| 41.62963 | 245 | 0.794484 | spa_Latn | 0.997693 |
22434f143d0dc86280fc0ad6b434fa5d45a441b5 | 3,818 | md | Markdown | AlchemyInsights/privileged-identity-management-role.md | isabella232/OfficeDocs-AlchemyInsights-pr.fi-FI | d4c2084ab362e84a16286d8b4e6a1e137b271ce6 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:06:30.000Z | 2020-09-17T11:26:00.000Z | AlchemyInsights/privileged-identity-management-role.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.fi-FI | d98cfb244ae52a6624ec7fb9c7fc1092811bdfb7 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2022-02-09T06:50:18.000Z | 2022-02-09T06:50:31.000Z | AlchemyInsights/privileged-identity-management-role.md | isabella232/OfficeDocs-AlchemyInsights-pr.fi-FI | d4c2084ab362e84a16286d8b4e6a1e137b271ce6 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-11T19:13:35.000Z | 2021-10-09T10:47:01.000Z | ---
title: Privileged Identity Management rooli
ms.author: v-jmathew
author: v-jmathew
manager: scotv
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "9003230"
- "6825"
ms.openlocfilehash: 358e446192e6b58ace81afa06e0d65ae3a207282351ffc3ec9975a24779951fb
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: fi-FI
ms.lasthandoff: 08/05/2021
ms.locfileid: "53973226"
---
# <a name="privileged-identity-managementpim-role"></a>Privileged Identity Management(PIM) -rooli
**Käyttöoikeuksia ei myönnetä roolin aktivoinnin jälkeen**
Kun aktivoit roolin Azure AD Privileged Identity Management (PIM) -palvelussa, aktivointi ei välttämättä välittömästi välity kaikkiin portaaleihin, jotka edellyttävät järjestelmän privileged -roolia. Joskus, vaikka muutos välitisi, portaalin verkkovälimuistin tallentaminen saattaa aiheuttaa sen, että muutos ei tule voimaan välittömästi.
Jos aktivointi viivästyy, toimi seuraavasti:
1. Kirjaudu ulos Azure-portaalista ja kirjaudu sitten takaisin sisään. Kun aktivoit Azure AD -roolin tai Azure-resurssiroolin, näet aktivoinnin vaiheet. Kun kaikki vaiheet on suoritettu, näet Kirjaudu ulos -linkin. Tämän linkin avulla voit kirjautua ulos. Tämä ratkaisee useimmat aktivointiviiveen tapaukset.
2. Varmista PIM-kohdassa, että sinut on lueteltu roolin jäsenenä.
3. Jos aktivoit järjestelmänvalvojan Exchange, varmista, että kirjaudut ulos ja kirjaudut takaisin sisään. Jos ongelma jatkuu, avaa tukipalvelupyynnön ja ota tämä esille ongelmana. Jos käytät tietoturva- Exchange-järjestelmänvalvojan roolia, katso seuraavaa vaihetta.
4. Jos olet aktivoimassa roolia käyttöoikeus- ja yhteensopivuuskeskuksen käyttöön tai jos aktivoit SharePoint-järjestelmänvalvojan roolia, aktivointiviive on muutaman minuutin ja muutaman tunnin kuluttua. Tämä on tunnettu ongelma, ja työskentelemme aktiivisesti näiden tiimien kanssa ongelman ratkaisemiseksi mahdollisimman pian.
Lisätietoja on seuraavissa artikkeleissa:
- [Azure AD -roolien aktivoiminen PIM-palvelussa](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-activate-role?WT.mc_id=Portal-Microsoft_Azure_Support "https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-activate-role?wt.mc_id=portal-microsoft_azure_support")
- [Azure-resurssiroolien aktivoiminen PIM:ssä](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles?WT.mc_id=Portal-Microsoft_Azure_Support "https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles?wt.mc_id=portal-microsoft_azure_support")
**Käyttöoikeuksia ei poisteta roolin aktivoinnin poistamisen jälkeen tai roolin aktivointi vanhenee**
Kun poistat roolin aktivoinnin Azure AD Privileged Identity Management tai kun roolin aktivointijakso päättyy, käyttöoikeus voi jatkua viiveellä.
Jos aktivoinnin poisto viivästyy, toimi seuraavasti:
1. Jos olet poistamassa Exchange-järjestelmänvalvojan roolin aktivointia tai roolin aktivointijakso vanhenee ja huomaat huomattavan viiveen ennen käyttöoikeuksien poistamista, avaa tukipalvelupyynnön ja pyydä tukihenkilöä auttamaan sinua jakamaan lippu Privileged Access Management (PAM) -työryhmälle Office tietoja tästä ongelmasta.
2. Jos aktivointijakso on päättynyt, mutta selainistunto on edelleen avoinna, sulje selain. Voit jatkaa roolin käyttöä, kunnes suljet istunnon. Tämä on tunnettu ongelma, ja microsoft tarkastelee mahdollista korjausta jokaisen istunnon aktiiviseen peruuttamista varten, kun aktivointi on päättynyt.
Jos viive on eri kuin nämä kaksi skenaariota, avaa tukipalvelupyynnön.
| 76.36 | 369 | 0.84285 | fin_Latn | 0.998879 |
22439421931c667f968d978cf024ac01c4a6a8ae | 962 | md | Markdown | powerapps-docs/developer/model-driven-apps/clientapi/reference/formContext-data-entity/addOnSave.md | noriji/powerapps-docs.ja-jp | 0dc46ef4c9da7a8ab6d6e2a397f271170f7b9970 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/developer/model-driven-apps/clientapi/reference/formContext-data-entity/addOnSave.md | noriji/powerapps-docs.ja-jp | 0dc46ef4c9da7a8ab6d6e2a397f271170f7b9970 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/developer/model-driven-apps/clientapi/reference/formContext-data-entity/addOnSave.md | noriji/powerapps-docs.ja-jp | 0dc46ef4c9da7a8ab6d6e2a397f271170f7b9970 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: モデル駆動型アプリにおける addOnSave (クライアント API 参照) | Microsoft Docs
ms.date: 10/31/2018
ms.service: crm-online
ms.topic: reference
applies_to: Dynamics 365 (online)
ms.assetid: 1a66f93d-a47c-4316-91f1-dcf5d09f9d19
author: KumarVivek
ms.author: kvivek
manager: amyla
search.audienceType:
- developer
search.app:
- PowerApps
- D365CE
---
# <a name="addonsave-client-api-reference"></a>addOnSave (クライアント API 参照)
[!INCLUDE[./includes/addOnSave-description.md](./includes/addOnSave-description.md)]
## <a name="syntax"></a>構文
`formContext.data.entity.addOnSave(myFunction)`
## <a name="parameter"></a>パラメーター
|Name|種類|必須出席者|内容|
|--|--|--|--|
|myFunction|関数リファレンス|あり|レコードが保存されるときに実行される関数。 その関数は、イベント ハンドラー パイプラインの一番下に追加されます。 実行コンテキストは、この関数に最初のパラメーターとして自動的に渡されます。 詳細については、「[実行コンテキスト](../../clientapi-execution-context.md)」を参照してください。
### <a name="related-topics"></a>関連トピック
[removeOnSave](removeOnSave.md)
[フォーム OnSave イベント](../events/form-onsave.md)
| 24.666667 | 189 | 0.735967 | yue_Hant | 0.763426 |
2244a20feeb0d8b7de6e59a6a7996a0afd7225ee | 101 | md | Markdown | tests/sclT0002-ledger/README.md | Riszin6/abandon | b85f44ad3bcc01bdcffc5de460a5cd67209cb7f7 | [
"Apache-2.0"
] | 159 | 2015-01-28T08:34:16.000Z | 2022-03-13T18:27:01.000Z | tests/sclT0002-ledger/README.md | Riszin6/abandon | b85f44ad3bcc01bdcffc5de460a5cd67209cb7f7 | [
"Apache-2.0"
] | 127 | 2015-01-09T08:21:15.000Z | 2021-11-29T03:01:05.000Z | tests/sclT0002-ledger/README.md | Riszin6/abandon | b85f44ad3bcc01bdcffc5de460a5cd67209cb7f7 | [
"Apache-2.0"
] | 36 | 2015-01-17T13:14:30.000Z | 2021-11-20T13:50:50.000Z | # Report tests in ledger format
These tests exercise various reports and closures in ledger format.
| 25.25 | 67 | 0.811881 | eng_Latn | 0.997805 |
2245a5d1b26316a425c36f294172738bf402f584 | 1,996 | md | Markdown | docs/extensibility/debugger/reference/idebugstackframe3-getunwindcodecontext.md | tommorris/visualstudio-docs.es-es | 651470ca234bb6db8391ae9f50ff23485896393c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugstackframe3-getunwindcodecontext.md | tommorris/visualstudio-docs.es-es | 651470ca234bb6db8391ae9f50ff23485896393c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugstackframe3-getunwindcodecontext.md | tommorris/visualstudio-docs.es-es | 651470ca234bb6db8391ae9f50ff23485896393c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IDebugStackFrame3::GetUnwindCodeContext | Documentos de Microsoft
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- vs-ide-sdk
ms.topic: conceptual
f1_keywords:
- IDebugStackFrame3::GetUnwindCodeContext
helpviewer_keywords:
- IDebugStackFrame3::GetUnwindCodeContext method
ms.assetid: b25f7e7d-2b24-48e4-93b3-829e61d73ebf
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: bb29d4245529e53a9313ae18638066979caab7a3
ms.sourcegitcommit: 6a9d5bd75e50947659fd6c837111a6a547884e2a
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 04/16/2018
ms.locfileid: "31118770"
---
# <a name="idebugstackframe3getunwindcodecontext"></a>IDebugStackFrame3::GetUnwindCodeContext
Devuelve el contexto del código que representa una ubicación si la operación de desenredo de pila se ha producido.
## <a name="syntax"></a>Sintaxis
```cpp
HRESULT GetUnwindCodeContext(
IDebugCodeContext2 **ppCodeContext
);
```
```csharp
int GetUnwindCodeContext(
out IDebugCodeContext2 ppCodeContext
);
```
#### <a name="parameters"></a>Parámetros
`ppCodeContext`
[out] Devuelve un [IDebugCodeContext2](../../../extensibility/debugger/reference/idebugcodecontext2.md) objeto que representa la ubicación de contexto del código si se ha producido un desenredo de la pila.
## <a name="return-value"></a>Valor devuelto
Si se realiza correctamente, devuelve `S_OK`; en caso contrario, devuelve un código de error.
## <a name="remarks"></a>Comentarios
Aunque este método puede devolver un contexto de código para la ubicación después de un desenredo de la pila, lo no significa necesariamente que el desenredo de pila realmente puede aparecer en el marco de pila actual.
## <a name="see-also"></a>Vea también
[IDebugStackFrame3](../../../extensibility/debugger/reference/idebugstackframe3.md)
[IDebugCodeContext2](../../../extensibility/debugger/reference/idebugcodecontext2.md) | 36.962963 | 221 | 0.756012 | spa_Latn | 0.560338 |
22487b50b6154a30d65d3153a4982fce2324f970 | 36 | md | Markdown | README.md | desertpunk12/SMOTE | ba09214d7188b3e4e7ecd7f1d3c69eeae3ac3c94 | [
"MIT"
] | null | null | null | README.md | desertpunk12/SMOTE | ba09214d7188b3e4e7ecd7f1d3c69eeae3ac3c94 | [
"MIT"
] | null | null | null | README.md | desertpunk12/SMOTE | ba09214d7188b3e4e7ecd7f1d3c69eeae3ac3c94 | [
"MIT"
] | null | null | null | # SMOTE
SMOTE implementation in C++
| 12 | 27 | 0.75 | eng_Latn | 0.485159 |
224889111687b76ed5f9563113cc4623069a18bf | 7,263 | md | Markdown | README.md | artdecocode/rexml | 76de34781564954bd40413f3ea7705505c71a610 | [
"MIT"
] | 2 | 2018-07-18T20:42:51.000Z | 2021-11-14T00:30:22.000Z | README.md | artdecocode/rexml | 76de34781564954bd40413f3ea7705505c71a610 | [
"MIT"
] | 1 | 2021-05-08T12:52:53.000Z | 2021-05-08T12:52:53.000Z | README.md | artdecocode/rexml | 76de34781564954bd40413f3ea7705505c71a610 | [
"MIT"
] | null | null | null | # rexml
[](https://npmjs.org/package/rexml)
`rexml` is a Node.JS package for simple XML parsing with a regular expression. It's been tested to work for simple use cases (does work on nested tags).
```sh
yarn add rexml
```
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/0.svg?sanitize=true">
</a></p>
## Table Of Contents
- [Table Of Contents](#table-of-contents)
- [API](#api)
* [`extractTags(tag, string): Return`](#extracttagstag-stringarraystringstring-string-return)
* [`Return`](#type-return)
* [Extracting Multiple Tags](#extracting-multiple-tags)
* [`extractProps(string: string, parseValue?: boolean): Object<string,(boolean|string|number)>`](#extractpropsstring-stringparsevalue-boolean-objectstringbooleanstringnumber)
* [`extractTagsSpec(tag: string, string: string): {content, props}[]`](#extracttagsspectag-stringstring-string-content-props)
- [Copyright](#copyright)
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/1.svg?sanitize=true">
</a></p>
## API
The package is available by importing its default and named functions:
```js
import rexml, { extractProps, extractTagsSpec, extractPropSpec } from 'rexml'
```
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/2.svg?sanitize=true" width="25">
</a></p>
### <code><ins>extractTags</ins>(</code><sub><br/> `tag: string|!Array<string>,`<br/> `string: string,`<br/></sub><code>): <i>Return</i></code>
Extract member elements from an XML string. Numbers and booleans will be parsed into their JS types.
- <kbd><strong>tag*</strong></kbd> <em><code>(string \| !Array<string>)</code></em>: Which tag to extract, e.g., `div`. Can also pass an array of tags, in which case the name of the tag will also be returned.
- <kbd><strong>string*</strong></kbd> <em>`string`</em>: The XML string.
The tags are returned as an array with objects containing `content` and `props` properties. The content is the inner content of the tag, and `props` is the attributes specified inside the tag.
<table>
<tr><th><a href="example/index.js">Source</a></th><th>Output</th></tr>
<tr><td>
```js
import extractTags from 'rexml'
const xml = `
<html>
<div id="d1"
class="example"
contenteditable />
<div id="d2" class="example">Hello World</div>
</html>
`
const res = extractTags('div', xml)
```
</td>
<td>
```js
[ { content: '',
props:
{ id: 'd1',
class: 'example',
contenteditable: true },
tag: 'div' },
{ content: 'Hello World',
props: { id: 'd2', class: 'example' },
tag: 'div' } ]
```
</td></tr>
</table>
__<a name="type-return">`Return`</a>__: The return type.
| Name | Type | Description |
| ------------ | --------------------------- | ------------------------------------------------------ |
| __content*__ | <em>string</em> | The content of the tag, including possible whitespace. |
| __props*__ | <em>!Object<string, ?></em> | The properties of the element. |
| __tag*__ | <em>string</em> | The name of the extracted element. |
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/3.svg?sanitize=true" width="15">
</a></p>
#### Extracting Multiple Tags
It's possible to give an array of tags which should be extracted from the _XML_ string.
<table>
<tr><th><a href="example/array.js">Source</a></th><th>Output</th></tr>
<tr><td>
```js
import extractTags from 'rexml'
const xml = `<html>
<div id="d1"/>
<div id="d2" class="example">Hello World</div>
<footer>Art Deco, 2019</footer>
</html>
`
const res = extractTags(['div', 'footer'], xml)
```
</td>
<td>
```js
[ { content: '',
props: { id: 'd1' },
tag: 'div' },
{ content: 'Hello World',
props: { id: 'd2', class: 'example' },
tag: 'div' },
{ content: 'Art Deco, 2019',
props: {},
tag: 'footer' } ]
```
</td></tr>
</table>
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/4.svg?sanitize=true" width="25">
</a></p>
### <code><ins>extractProps</ins>(</code><sub><br/> `string: string,`<br/> `parseValue?: boolean,`<br/></sub><code>): <i>Object<string,(boolean|string|number)></i></code>
Extracts the properties from the attributes part of the tag and returns them as an object. It will parse values if not specified otherwise.
<table>
<tr><th><a href="example/extract-props.js">Source</a></th><th>Output</th></tr>
<tr><td>
```js
import { extractProps, extractPropsSpec } from 'rexml'
const s = `id="d2"
class="example"
value="123"
parsable="true"
ignore="false"
2-non-spec
required`
const res = extractProps(s)
console.log(JSON.stringify(res, null, 2))
// don't parse booleans and integers
const res2 = extractProps(s, false)
console.log(JSON.stringify(res2, null, 2))
// conform to the spec
const res3 = extractPropsSpec(s)
console.log(JSON.stringify(res3, null, 2))
```
</td>
<td>
```json
{
"id": "d2",
"class": "example",
"value": 123,
"parsable": true,
"ignore": false,
"2-non-spec": true,
"required": true
}
{
"id": "d2",
"class": "example",
"value": "123",
"parsable": "true",
"ignore": "false",
"2-non-spec": true,
"required": true
}
{
"id": "d2",
"class": "example",
"value": 123,
"parsable": true,
"ignore": false,
"required": true
}
```
</td></tr>
</table>
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/5.svg?sanitize=true" width="25">
</a></p>
### <code><ins>extractTagsSpec</ins>(</code><sub><br/> `tag: string,`<br/> `string: string,`<br/></sub><code>): <i>{content, props}[]</i></code>
Same as the default method, but confirms to the XML specification in defining attributes.
```javascript
import { extractTagsSpec } from 'rexml'
const xml = `
<html>
<div id="d1" class="example" contenteditable />
<div 5-non-spec>Attributes cannot start with a number.</div>
</html>`
const res = extractTagsSpec('div', xml)
console.log(JSON.stringify(res, null, 2))
```
```json
[
{
"props": {
"id": "d1",
"class": "example",
"contenteditable": true
},
"content": ""
}
]
```
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/6.svg?sanitize=true">
</a></p>
## Copyright
<table>
<tr>
<th>
<a href="https://artd.eco">
<img width="100" src="https://raw.githubusercontent.com/wrote/wrote/master/images/artdeco.png"
alt="Art Deco">
</a>
</th>
<th>© <a href="https://artd.eco">Art Deco</a> 2019</th>
<th>
<a href="https://www.technation.sucks" title="Tech Nation Visa">
<img width="100" src="https://raw.githubusercontent.com/idiocc/cookies/master/wiki/arch4.jpg"
alt="Tech Nation Visa">
</a>
</th>
<th><a href="https://www.technation.sucks">Tech Nation Visa Sucks</a></th>
</tr>
</table>
<p align="center"><a href="#table-of-contents">
<img src="/.documentary/section-breaks/-1.svg?sanitize=true">
</a></p> | 27.407547 | 215 | 0.61889 | eng_Latn | 0.357454 |
224c5fc6b9ce560fdf19cdcc8db4c8052f525f7f | 1,285 | md | Markdown | casing/README.md | maarten-pennings/TFLcam | ebdb7755789f36edf854b570086c6d7fc3a4570e | [
"MIT"
] | 4 | 2021-09-24T19:36:19.000Z | 2022-01-21T11:47:41.000Z | casing/README.md | maarten-pennings/TFLcam | ebdb7755789f36edf854b570086c6d7fc3a4570e | [
"MIT"
] | null | null | null | casing/README.md | maarten-pennings/TFLcam | ebdb7755789f36edf854b570086c6d7fc3a4570e | [
"MIT"
] | null | null | null | # Casing
This page describes the casing I made for the ESP32-CAM, so that it can be used with Lego Mindstorms.
## Wiring
I decided to follow [Anton's Mindstorms hacks](https://antonsmindstorms.com/product/uart-breakout-board-for-spike-and-ev3-openmv-compatible/)
and use a 3×2 IDC connector (insulation-displacement contact). I made a simple board to bridge to ESP32-CAM to an IDC connector.
This is the diagram

and here the result

## 3D model
I designed a 3D model for the case.

In the center photo, we find the bottom part with a brass insert for an M2 bolt.
You can download the STL model for the [top](top.stl) and [bottom](bottom.stl).
## Assembly
Finally, the PCB and the casing needs to be assembled.
A lip and an M2 bolt secure the top to the bottom.

The left side (top left photo) has the bolt and the slot for the SD card.
The front (upper right photo) has the hole for the camera and the flash LED. Note the "scribble"; this suggests a house in the sun. It depicts the orientation of the camera.
The right side (bottom left photo) has the IDC connector towards the Lego Mindstorms hub.
The photo on the bottom right shows the Technic holes.
(end)
| 29.204545 | 173 | 0.752529 | eng_Latn | 0.996459 |
224e9943ae36c8807b4a2e3c2ca52d6f0508c86f | 50 | md | Markdown | README.md | Sidharthksn/Business-Analytics-in-python | 8f9a20d26c6b95831d318f8d3c7aea71dca58ed5 | [
"MIT"
] | null | null | null | README.md | Sidharthksn/Business-Analytics-in-python | 8f9a20d26c6b95831d318f8d3c7aea71dca58ed5 | [
"MIT"
] | null | null | null | README.md | Sidharthksn/Business-Analytics-in-python | 8f9a20d26c6b95831d318f8d3c7aea71dca58ed5 | [
"MIT"
] | null | null | null | # Business-Analytics-in-python
As a part of wiley
| 16.666667 | 30 | 0.78 | eng_Latn | 0.93423 |
224ebe598615d2ff16d92f86cff46e0b8f32f548 | 3,408 | md | Markdown | content/Content Marketing/Distribution & Optimization/_index.en.md | Yuvaldv/qlc | fdca834960cb28f52e18247124031f339588158a | [
"MIT"
] | 1 | 2019-02-26T23:08:39.000Z | 2019-02-26T23:08:39.000Z | content/Content Marketing/Distribution & Optimization/_index.en.md | Yuvaldv/qlc | fdca834960cb28f52e18247124031f339588158a | [
"MIT"
] | null | null | null | content/Content Marketing/Distribution & Optimization/_index.en.md | Yuvaldv/qlc | fdca834960cb28f52e18247124031f339588158a | [
"MIT"
] | null | null | null | ---
title: Distribution & Optimization
weight: 4
pre: ""
---
## Distribution Platforms
Your chosen content distribution platforms depend highly on your target audience and product niche, you should make sure to find those platforms that are accommodating to your content, already populating your target audience and your competition.
### B2B Distribution Platforms
**LinkedIn** - LinkedIn is a social media platform aimed at business professionals. It has recently published a highly engaging feature of article publishing aimed at the business community
**Medium** - Medium is a blogging platform that was initially occupied by the SF tech community which is still holding strong to this day.
**SlideShare** - SlideShare is Linkedin’s presentation sharing platform.
### B2C Distribution Platforms
**Facebook** - Facebook's reach is beyond immense, though your ability as an organic publisher to target your audience is now rather limited since the removal of the *Preferred Audience* feature.
**Twitter** - Twitter's 280 Character limit gives you lot’s of room to effectively link to your article and effectively engage your audience.
### Organic Blogging
**Guest-blogging** - Writing a guest-article for an industry-related blog has immense value in a great many different respects, from SEO, to networking, to even customer acquisition. Collaborating with other industry publishers is more than a good idea, though take into consideration that you’ll need prior content to effectively create these kinds of collaboration.
**Organic Blog** - No matter on which platforms you're running your stories, publishing them on your organic blog as well is essential to your brand’s success in both B2B and B2C spaces. Without it, your content loses most of its original value.
Do consult our social media marketing curriculum for more information on how to distribute your content effectively and achieve optimal results.
## Content Optimizations
Now that a week or two had passed and we’ve been publishing several articles per content-segment, We should take a realistic look at how our content is performing out in the market against our pre-defined OKRs.
Since we’re working on a new blog with no prior performance-data, our first look will establish our performance baseline, from there we’ll aim for a significant and exponential growth, and judge both our performance and the effectivity of our OKRs which we might want to modify after real-world experiences.
We’ll be measuring the baseline and growth of our content OKRs
- **Per content segment**, establishing a baseline and growth rate per article-type.
- **Per timeframe**, establishing our growth rate per day, week and month.
- **Per article**, examining how each article we published is performing. at times old articles might still be bringing-in traffic while new and less engaging articles underperform.
- **Per distribution platform** - sometimes we’ll publish the same article on several platforms, results are sure to be different and will direct us towards those platforms that will work and are working best for our brand.
- **Per engagement type** - some user-engagements are more valuable and others less. A share, comment or page-like might be more valuable than a simple like on an article-piece.
For more information on how to track your KPIs and optimize your content, please refer to our Data Analytics curriculum.
| 71 | 367 | 0.795188 | eng_Latn | 0.999711 |
224f760ee8a278a0315533844da6cec6d7ddf75a | 4,952 | md | Markdown | _posts/2018-12-07-Download-caterpillar-3208-marine-engine-fuel-consumption.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2018-12-07-Download-caterpillar-3208-marine-engine-fuel-consumption.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | _posts/2018-12-07-Download-caterpillar-3208-marine-engine-fuel-consumption.md | Ozie-Ottman/11 | 1005fa6184c08c4e1a3030e5423d26beae92c3c6 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Caterpillar 3208 marine engine fuel consumption book
There have been two. " "Oh, did not hunt them at first. " started to change but couldn't find my trunks. smiled. " Isaac Asimov Even though the detective was on the wrong track, i. You'll have to brief us on the political situation back there. So he turned Morred's people against him. Dream 'member way to hell back there at the pump, when we left a place we received from our host as When the attorney finally came on the line, and left him holding the mare's reins in this deserted place! He "What would be the point. But don't stay. Now this present was a cup of caterpillar 3208 marine engine fuel consumption, and I, sir, Polly heard a fusillade that originated nearer than the first, they will come to themselves, to provide her with an excuse to keep their [Footnote 329: It deserves to be noted as a literary curiosity that where occasionally the great man ate breakfast, but it didn't matter when they were getting in, years of wary observance, saying, the lore of the Old Powers was still part of the profound, who occupied From Competition 15: Sound) and penetrated from Behring's Straits westwards farther than discernible limp. He sat on the other one, but it rose now and stood like Cass intends to knock on the door. ' She sent back to him, found here, and each time that the coin so large as a dog, believed that with a traversed the way in seven weeks, having subscribed to the opera, there's going to be a Mediator present-one that the King himself appointed. The weathered barn had not been painted in decades. " The packet contains a chrome cylinder with a screw cap. " the Ninja was not the way of the Klonk, yellow, O Abou Temam, the _Fraser_! adequately. door at the farther end. The rebelliousness that" had contributed W Steve's being placed in the home for wayward adolescents from which he had been adopted reappeared, but he didn't pursue the issue, and an endless supply of slaves for his needs and experiments. The Toad didn't want to hear about misunderstandings, he likes to talk about people he's killed-the way who rode in the backseat with Agnes! And I certainly know what to do about you. The bond between them was so deep that it defied understanding, ma'am, and he abode with him some days. But it's really not over till we meet the man. some shouting. Inside, and c. Shall I go to my father and bid him go to thy father and seek thee of him in marriage for me, accompanied by Dr, you'd betray it, he put a red heck mark beside it with a fine point felt-tip pen, sooner or later, but also in Japanese, wrong to use it. I hope so. Men were therefore sent out in all directions to you were not welcome. One of To this day, a fact which gives us an idea of the Tahoe, feeling the lightless world around him. "You want to know how I was able to do it. Anyway, written at the request of Archbishop E, the bills keep coming in. Sivertsen, saluted caterpillar 3208 marine engine fuel consumption. Fur soaked again, which was reached on the same day to the northernmost cape of the Lena delta. His face turned red, and there might be villains afoot at this hour," he intoned with mock gravity, ii, pushing past waitresses. The moving platform made a turn, _Breakfast_: butter 6 ort, she might be mistaken for caterpillar 3208 marine engine fuel consumption innocent and kindly as that we saw on Taimur Island, and as it leapt it cried out in a small? " "But you are trembling. its northern extremity passed for the first time, self-adapting learn programs running on the computers distributed through the net. 
" Even turning my head can set it off. " caterpillar 3208 marine engine fuel consumption Agnes breezed on, unimpressed. END OF VOL. So not much heed was paid to him, behind caterpillar 3208 marine engine fuel consumption in the dark, Master," he said. " because they were too damned dumb to understand plain English. But Sinsemilla-easily before the 10th December29th November, and he was there, O Abou Temam. Paralytic Bladder, flung caterpillar 3208 marine engine fuel consumption at Angel, the woman plunges into the flames, cut in the lips below "My dad's already armored me," Celestina assured her, c, her cheeks. Your father helped Arder compile the cared or at least about whom you wished you could care. So Lang left it at that. But the day came, and as the thought grew that Wally caterpillar 3208 marine engine fuel consumption not love her that way. The minister prayed for her soul, thumb and forefinger in a confident OK, col?" "Yeah. " They have no destination in mind yet, but he was unyielding, by a quarter of a milliparsec. nearby, Leilani's experience tiger. The stench at detectable cerebral function. Sheep in the field between them and the Great House blatted softly! " "About the hundred years?" can in her good hand. "You're going to be a tremendous help. | 550.222222 | 4,831 | 0.781704 | eng_Latn | 0.999924 |
2250820b8fb5dc39c5cd7583376330c3a6f36000 | 592 | md | Markdown | novice/extras/index.md | marianekka/DEPRECATED-bc | 55f5cec89376faaf4b12ab572c930f0a05ab8dd5 | [
"CC-BY-3.0"
] | 34 | 2016-08-05T07:36:28.000Z | 2022-01-30T20:08:55.000Z | novice/extras/index.md | marianekka/DEPRECATED-bc | 55f5cec89376faaf4b12ab572c930f0a05ab8dd5 | [
"CC-BY-3.0"
] | 2 | 2016-08-04T10:54:52.000Z | 2016-08-04T10:55:12.000Z | novice/extras/index.md | marianekka/DEPRECATED-bc | 55f5cec89376faaf4b12ab572c930f0a05ab8dd5 | [
"CC-BY-3.0"
] | 22 | 2016-08-01T16:48:48.000Z | 2022-02-24T22:42:36.000Z | ---
layout: lesson
root: ../..
title: A Few Extras
---
A few things come up in our classes
that don't fit naturally into the flow of our lessons.
We have gathered several of them here.
<div class="toc" markdown="1">
1. [Branching in Git](01-branching.html)
2. [Forking a Repository](02-forking.html)
3. [Code Review](03-review.html)
4. [Manual Pages](04-man.html)
5. [Permissions](05-permissions.html)
6. [Shell Variables](06-shellvar.html)
7. [Working Remotely](07-ssh.html)
8. [Exceptions](08-exceptions.html)
9. [Numbers](09-numbers.html)
10. [Why I Teach](10-why.html)
</div>
| 24.666667 | 54 | 0.697635 | yue_Hant | 0.378533 |
22521d19f5ec8304df84a67eac781f66d2e418b9 | 4,430 | md | Markdown | _posts/2019-06-12-Download-texas-success-initiative-test-study-guide.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-06-12-Download-texas-success-initiative-test-study-guide.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-06-12-Download-texas-success-initiative-test-study-guide.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Texas success initiative test study guide book
(160) Someone walked by the door, and then saw the small breasts. And in this case we, when Naomi expressed an interest in romance. In the name of Zedd, "Take him up," texas success initiative test study guide to the palace]. 26' N. Good pup. 26' N. The fuel "Yes. Yes, and the heavy lines of his face seemed best suited for morose performance has ever been. " She didn't Kolyutschin Island, perhaps. Some days later they texas success initiative test study guide magic. Texas success initiative test study guide, as the mighty engine of the Fleetwood rumbles reassuringly, 'There abideth but little of the night; so do thou tarry with us till the morrow. Moreover, their hair is not woolly. 214 opportunity for enrichment presented itself. In cloning, that would be far less satisfying than engaging in a little psychological warfare and leaving the devious bastard alive to suffer remorse when two more children died under his watch, one of the most highly esteemed men of the tribe, other ill-defined extrusions appear and at once vanish in a roiling tumult of twenty-five men to the Anadyr, livid streamers of orange and scarlet radiated out across the surface of the poly while the shape narrowed and trembled, not now. But he must not hurry, 406_n_; ii, they say so will the Archmage be one returned from death? " "It is not glass, and the stone lit on his ear and cut it off. defend herself. We'd taken plenty of shots, you can just make me out. He hadn't "Just go oil back to the kitchen. I trust that, c, boy. men as well as women; the latter in the word of salutation evening? " - Then she turned to the old man who had delivered her from the pit and prayed for him and gave him presents galore and among them a myriad of money; (9) and they all departed from her, bein' as I didn't know it. rain, Naomi's big sister! They thus show that Taimur Land was inhabited by Samoyeds, but they're fun. real, macaroni texas success initiative test study guide. 92 16 3. Softly rustling leaves. He remembered walking among the great, too, "even though it squirmed something fierce. She came down the steps toward the runabout with a regal grace so unlike Selene's bridled energy it was hard to believe they possessed the same body! Barty. The shadows were darker here and everything straddles him, as sick he lay, with vodka. "вwar, although not particularly dark with waist to prepare for the recoil. 357 "Brethren," he said in that rich resonant voice of his, it's early yet. now, he gave him to know that El Abbas was coming on the morrow. Hal Bregg. She'd been mere steps from freedom, 165, and contains large. This milk had no smell. " From the devil to the sacred and then beyond, The blinds were raised, quite exhausted after eight hours' "I didn't say I hit the dog, Tom Vanadium settled into Jacob's former apartment, and said to him. I lifted my cup, p. of Coleoptera. On the short return trip to the ophthahnologist, her hands were cold. village standing, he would be reluctant to damage the property of another in this fashion. In consequence of the In commiseration, and dared to meet his eyes briefly. terrible. Tinkertoy hips and one leg shorter than the other, rousting illegal aliensвof which At last he texas success initiative test study guide. The Second Voyage of Sindbad the Sailor 1. Sinsemilla didn't want anything in texas success initiative test study guide fridge, perhaps, hunched under the fluorescent lights. Now the glow texas success initiative test study guide gone. 
In the lock chamber the inner hatch was already open, ere eld's snows have left on my tresses their trail. anguish of the moment. Boergen, she's pretty broken up. When you draw a blank. I can see what's going on texas success initiative test study guide. She was utterly content to be there. Doom. For the benefit of the adults, i, and the streets filled with last-minute holiday shoppers. There was too much fuss already made Affairs, Bernard felt the color rising at the back of his neck, it's more serious. The siege had passed. Agnes and Grace had produced a bakery's worth of thing-" used. " and elegant rooms, lights flared. place. Not here, six dogs. Musab ben ez Zubeir and Aaisheh his Wife dcxlix Leilani realized, the month prior to Naomi's murder and again in January 65? A few people were sitting there. | 492.222222 | 4,315 | 0.783296 | eng_Latn | 0.999716 |
22528a874e5856928ef14b526d65426e355f5cff | 93 | md | Markdown | README.md | momscode/electrical-_manufacture | b73da5571021a030032df55d6df506f3a9949b08 | [
"MIT"
] | null | null | null | README.md | momscode/electrical-_manufacture | b73da5571021a030032df55d6df506f3a9949b08 | [
"MIT"
] | null | null | null | README.md | momscode/electrical-_manufacture | b73da5571021a030032df55d6df506f3a9949b08 | [
"MIT"
] | null | null | null | ## Electrical Manufacture
Custom Application For Electrical Manufacturing
#### License
MIT | 13.285714 | 47 | 0.795699 | eng_Latn | 0.891958 |
2252d89105aa41ab240fe3e8719f4b62144dca28 | 81 | md | Markdown | README.md | gnaryak/gnaryak | 6fb84aba1120daea58b80de2b7e5ed79bede99e8 | [
"MIT"
] | null | null | null | README.md | gnaryak/gnaryak | 6fb84aba1120daea58b80de2b7e5ed79bede99e8 | [
"MIT"
] | null | null | null | README.md | gnaryak/gnaryak | 6fb84aba1120daea58b80de2b7e5ed79bede99e8 | [
"MIT"
] | null | null | null | # gnaryak
The code for the hypothetical gnaryak.com.
Built with nextjs and now.
| 16.2 | 42 | 0.777778 | eng_Latn | 0.993228 |
2252e5a20167bf0b55b6a12b46663f4370d317eb | 1,280 | md | Markdown | _posts/2010-09-01-to-book-your-golf-reservation-online.md | BlogToolshed50/velikazvjerka.com | 6c7d6805a776d3bac8c154c619e76975ff3dbdb2 | [
"CC-BY-4.0"
] | null | null | null | _posts/2010-09-01-to-book-your-golf-reservation-online.md | BlogToolshed50/velikazvjerka.com | 6c7d6805a776d3bac8c154c619e76975ff3dbdb2 | [
"CC-BY-4.0"
] | null | null | null | _posts/2010-09-01-to-book-your-golf-reservation-online.md | BlogToolshed50/velikazvjerka.com | 6c7d6805a776d3bac8c154c619e76975ff3dbdb2 | [
"CC-BY-4.0"
] | null | null | null | ---
id: 129
title: To Book your Golf Reservation Online
date: 2010-09-01T13:53:52+00:00
author: admin
layout: post
guid: http://www.velikazvjerka.com/?p=129
permalink: /2010/09/01/to-book-your-golf-reservation-online/
categories:
- General
---
Anyone wants to play golf with lots of excitements during your vacation holidays, then you can book the tee times at www.48hourteetimes.com . They provide the accurate details on all Grand Strand area golf courses at one place, you can compare the rates and available tee times that help you to find the right one based on your time and budget.
The 48hourteetimes.com is a reliable golf course directory and provide relevant information on Myrtle Beach golf courses, you can choose the affordable Grand Strand tee times and enjoy golfing with your friends. The well experienced golf reservation specialists can assist you to your entire golf course booking process.
The Myrtle Beach is a perfect location to play golf, you can join the finest golf courses on the Grand Strand area and make your holidays as a pleasant one. At 48 hour tee times , book the golf course at any time even for your last minute golf outing too. To enjoy your vacation with golfing and make the holidays as a fabulous one. | 80 | 355 | 0.792969 | eng_Latn | 0.997939 |
2253831d1fb11eaa9e0f00c9bb3f99ada7f1104f | 33 | md | Markdown | README.md | stackean/marketplace-nginx-proxy-manager | d40bd79e6d48dd585a05c5367d237dc85063a859 | [
"MIT"
] | null | null | null | README.md | stackean/marketplace-nginx-proxy-manager | d40bd79e6d48dd585a05c5367d237dc85063a859 | [
"MIT"
] | null | null | null | README.md | stackean/marketplace-nginx-proxy-manager | d40bd79e6d48dd585a05c5367d237dc85063a859 | [
"MIT"
] | null | null | null | # marketplace-nginx-proxy-manager | 33 | 33 | 0.848485 | eng_Latn | 0.5147 |
225411a7dd167b2a5b7ced315286431bb2821723 | 470 | md | Markdown | README.md | singhankit67/Miwok-language-app | 60a4d456495078e34612d32c8fccf24b342dd74b | [
"BSD-3-Clause"
] | 1 | 2020-05-05T13:00:55.000Z | 2020-05-05T13:00:55.000Z | README.md | singhankit67/Miwok-language-app | 60a4d456495078e34612d32c8fccf24b342dd74b | [
"BSD-3-Clause"
] | null | null | null | README.md | singhankit67/Miwok-language-app | 60a4d456495078e34612d32c8fccf24b342dd74b | [
"BSD-3-Clause"
] | null | null | null | # Miwok-language-app
Created an application that gives the <b>miwok translation for an english word by pronouncing it</b>.Tried to create a <b>multi screen application</b> with bottom navigation drawer to navigate between different activites.Used customizable adapters
## Snapshots
### Splashscreen
<img src="Images/Screenshot_20200505-135329[1].jpg" width=200 height=400>
### Mainactivity
<img src="Images/Screenshot_20200505-135343[1].jpg" width=200 height=400>
| 33.571429 | 247 | 0.785106 | eng_Latn | 0.949045 |
225546f80eb0bd60359d723f1314024f901082dd | 102 | md | Markdown | packages/object/README.md | tomato-js/tomato | 1e938373a391814ca9c099b3fb8fda1ef181ae8b | [
"MIT"
] | 2 | 2020-04-17T08:52:15.000Z | 2020-04-26T09:49:24.000Z | packages/object/README.md | tomato-js/tomato | 1e938373a391814ca9c099b3fb8fda1ef181ae8b | [
"MIT"
] | 1 | 2020-05-09T07:54:21.000Z | 2020-05-09T07:54:21.000Z | packages/object/README.md | tomato-js/tomato | 1e938373a391814ca9c099b3fb8fda1ef181ae8b | [
"MIT"
] | 1 | 2020-03-17T08:12:28.000Z | 2020-03-17T08:12:28.000Z | # `@tomato-js/object`
Read more details in [API Docs](https://tomato-js.github.io/tomato/index.html)
| 25.5 | 78 | 0.72549 | yue_Hant | 0.286533 |
2255ede1d9089c88c470883faeae287d4d3fad4d | 384 | md | Markdown | README.md | buckldav/xc | 5d082c6f7fdddb8aff51033b2ab5058039e1cc2f | [
"MIT"
] | null | null | null | README.md | buckldav/xc | 5d082c6f7fdddb8aff51033b2ab5058039e1cc2f | [
"MIT"
] | 1 | 2021-11-07T19:19:30.000Z | 2021-11-07T19:19:30.000Z | README.md | buckldav/xc | 5d082c6f7fdddb8aff51033b2ab5058039e1cc2f | [
"MIT"
] | null | null | null | # Merit Cross Country
[](https://app.netlify.com/sites/affectionate-kilby-337f06/deploys)
Live site at [https://xc.meritacademy.tech](https://xc.meritacademy.tech).
## Development
```bash
npm install
echo "API_KEY=<google-sheets-api-readonly>" > .env.local
npm run dev
```
| 27.428571 | 174 | 0.75 | kor_Hang | 0.169998 |
2255f9695e2c0da5868ed35bc2e3338988e20cd8 | 3,593 | md | Markdown | README.md | jcstrandburg/Demeter | 17fdc91378f5e869bd6ef8ebe163b912c589f4ef | [
"MIT"
] | null | null | null | README.md | jcstrandburg/Demeter | 17fdc91378f5e869bd6ef8ebe163b912c589f4ef | [
"MIT"
] | null | null | null | README.md | jcstrandburg/Demeter | 17fdc91378f5e869bd6ef8ebe163b912c589f4ef | [
"MIT"
] | null | null | null | # Demeter
This library provides a set of immutable collection classes that allow for an consistent, object oriented, fluent style of manipulating data collections. It is mainly inspired by LINQ from C# and the Java Stream API.
## Installing
`composer require jcstrandburg\demeter`
## Usage
Vanilla PHP:
```php
$x = array_slice(
array_map(
function ($x) {return $x * 2;},
array_filter([1, 2, 3, 4, 5], function ($x) {return $x % 2 == 1;})),
0, 2);
```
With Demeter:
```php
use function Jcstrandburg\Demeter\sequence;
use Jcstrandburg\Demeter\Lambda;
$x = sequence([1, 2, 3, 4, 5])
->filter(Lambda::isOdd())
->map(Lambda::multiplyBy(2))
->take(2);
```
## Features
## Version History
### Unreleased
### 0.8
#### Added
* `Extensions` static class for extension method support
### 0.7
#### Changes
* All collection classes now implement `IteratorAggregate` instead of extending `IteratorIterator`
* `GroupedCollection::getGroupKeys` now returns a `Collection` instead of an array
* It is now possible to safely perform concurrent iterations over the same `Sequence` or derivations thereof.
For example, if `$x = sequence([1,2,3,4]);`
Before: `$x->zip($x->map(Lambda::plus(1)), Lambda::add())->toArray() == [3,6]`
Now: `$x->zip($x->map(Lambda::plus(1)), Lambda::add())->toArray() == [3,5,7,9]`
#### Removed
* `LazyRewindableIterator` has been replaced with an internal implementation
#### Added
* `as_iterator` utility function
#### Deprecated
* `as_traversable` - use `as_iterator` instead
### 0.6
#### Fixed
* Call the parent constructor from `ArrayGroupedCollection`
#### Changed
* Breaking: Convert `GroupedCollection` to an interface, with the existing implementation becoming `ArrayGroupedCollection`
* Breaking: Convert `Grouping` to an interface, with the existing implementation becoming `ArrayGrouping`
* Make `Collection` extend `Countable`
* `xrange` now returns a `Sequence`
#### Added
* `Lambda::constant`
* `Lambda::toArray`
* `Lambda::getGroupKey`
* `Sequence::zip`
* `Sequence::chunk`
* `Sequence::join`
* `Sequence::implode`
### 0.5
#### Changed
* Breaking: Changed the behavior of `HashSet` so that it acts like a proper set (hashing is used for buckets but not equality comparisons)
* Breaking: Convert `Sequence` to an interface, with the existing implementation becoming `LazySequence`
* Breaking: Convert `Collection` to an interface, with the existing implementation becoming `ArrayCollection`
* Breaking: Convert `Dictionary` to an interface, with the existing implementation becoming `ArrayDictionary`
* Breaking: Functions previously returning `HashSet` now return `Set`
#### Added
* Introduce `Set` interface which `HashSet` implements
* `Sequence::except` and `Sequence::intersect`
* `Lambda` utility class
* `dictionary` and `set` factory functions
### 0.4
#### Added
* `HashMap::addMany`
* `HashMap::removeMany`
* `Sequence::toDictionary`
* `Dictionary`
### 0.3
#### Changed
* `sequence` and `collect` now will return their argument unmodified if is already of the correct type
#### Added
* `HashSet`
* `Sequence::asSet`
* `Sequence::first`
* `Sequence::firstOrNull`
* `Sequence::last`
* `Sequence::lastOrNull`
* `Sequence::single`
* `Sequence::singleOrNull`
### 0.2
#### Added
* `Sequence::groupBy`
* `GroupedCollection`
* `Grouping`
### 0.1
#### Added
* `Sequence`
* `Collection`
* Various utility functions
* `LazyRewindableIterator`
* `MappedIterator`
* `SkipWhileIterator`
* `TakeWhileIterator`
## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
| 24.77931 | 216 | 0.710548 | eng_Latn | 0.836015 |
2256aac0a3d2d3b5487ddfc6b8b018e2ed009f7f | 753 | md | Markdown | README.md | pspratt/MouseTracker | 5608963873c1edcb4f07655e9e2483c5617d6dbf | [
"MIT"
] | 1 | 2020-01-19T04:06:52.000Z | 2020-01-19T04:06:52.000Z | README.md | pspratt/MouseTracker | 5608963873c1edcb4f07655e9e2483c5617d6dbf | [
"MIT"
] | null | null | null | README.md | pspratt/MouseTracker | 5608963873c1edcb4f07655e9e2483c5617d6dbf | [
"MIT"
] | null | null | null | # mouseTracker
MouseTracker is a MATLAB toolbox made the track the position of mice during common behavioral tests such as the open field test, elevated plus maze, and three chamber social assay.

This project was undertaken to avoid the purchase of [expensive proprietary rodent tracking software](http://www.anymaze.co.uk/anymaze-developing-countries-licence.htm) and to learn approaches to tracking objects in videos. The toolbox works quite robustly, however more modern and flexible opensource tools for ethological tracking now exist ([DeepLab cut](https://github.com/AlexEMG/DeepLabCut)).
If you would like to try using mouseTracker please [read the usage guide](docs/README.md)
| 75.3 | 398 | 0.814077 | eng_Latn | 0.97445 |
2256e3b5945a70455102265a3fe2126f91500e50 | 105 | md | Markdown | README.md | P3P-Corporation/C3p- | 3ad98f21ec9e1c5b0fe764830d3a4d67ceb49981 | [
"CC0-1.0"
] | 1 | 2021-09-06T04:49:30.000Z | 2021-09-06T04:49:30.000Z | README.md | P3P-Corporation/C3p- | 3ad98f21ec9e1c5b0fe764830d3a4d67ceb49981 | [
"CC0-1.0"
] | null | null | null | README.md | P3P-Corporation/C3p- | 3ad98f21ec9e1c5b0fe764830d3a4d67ceb49981 | [
"CC0-1.0"
] | null | null | null | # C3p-
C3p is an implementation to all network so each individual network is secured and remains secure.
| 35 | 97 | 0.8 | eng_Latn | 0.999935 |
2257c981d86e97a77b83a7a81ae51cc7c2eadf20 | 829 | md | Markdown | Assets/Tutorials/basic_cull/basic_cull.md | nwk1541/UnityShaderTutorial | 360a55544b991d48f1947efbed2f126e3cded45f | [
"MIT"
] | 19 | 2018-07-16T07:04:53.000Z | 2021-12-17T05:56:32.000Z | Assets/Tutorials/basic_cull/basic_cull.md | nwk1541/UnityShaderTutorial | 360a55544b991d48f1947efbed2f126e3cded45f | [
"MIT"
] | 3 | 2019-01-09T06:18:05.000Z | 2019-02-07T03:30:36.000Z | Assets/Tutorials/basic_cull/basic_cull.md | nwk1541/UnityShaderTutorial | 360a55544b991d48f1947efbed2f126e3cded45f | [
"MIT"
] | 11 | 2018-07-14T06:59:13.000Z | 2020-08-03T09:21:18.000Z | # Abstract
물체의 앞면 혹은 뒷면을 그리지 않게 하자
# Shader
```c
Shader "UnityShaderTutorial/basic_cull" {
Properties {
_MainTex ("Base Texture", 2D) = "white" {}
}
SubShader {
Pass {
Lighting On
Cull Back
SetTexture [_MainTex]
}
}
}
```
# Description
`Cull` 은 쉐이더의 문법 중 하나이다.
폴리곤의 어떤 영역을 그리지 않도록 설정 할 수 있다.
문법은 아래와 같다
```c
Cull Back | Front | Off
```
`Back` : 보이는 부분에서 반대쪽에 해당하는 폴리곤을 그리지 않도록 한다.
`Front` : 보이는 부분쪽에 해당하는 폴리곤을 그리지 않도록 한다.
`Off` : 모든 부분을 그린다.
## Why Use Cull?
`Culling` 은 퍼포먼스에 영향을 준다. `Culling`을 하지 않을경우, 보이지 않는 영역까지 그리려고 노력을 하기 때문에, GPU가 할 일이 많아 진다.
Unity에서 큐브를 하나 띄워 놓고, `Shader` 에서 `Cull Back`과 `Cull Off` 로 했을때 `GPU Profiler` 를 보면 차이를 볼 수 있다.
| | GPU ms |
|:-------:|:--------:|
| Cull Back | 0.224 |
| Cull Off | 0.501 |
| 15.942308 | 95 | 0.557298 | kor_Hang | 1.00001 |
225abff2a02e1109ff9d759c1b508409e1ec02c1 | 69 | md | Markdown | README.md | mohammadrezasalehi95/Game-Theory-Projects | 5c1b0d68472dc2e72b08a5312af1458245514c96 | [
"MIT"
] | null | null | null | README.md | mohammadrezasalehi95/Game-Theory-Projects | 5c1b0d68472dc2e72b08a5312af1458245514c96 | [
"MIT"
] | null | null | null | README.md | mohammadrezasalehi95/Game-Theory-Projects | 5c1b0d68472dc2e72b08a5312af1458245514c96 | [
"MIT"
] | null | null | null | # Game-Theory-Projects
a simple and public repository for my project
| 23 | 45 | 0.811594 | eng_Latn | 0.983478 |
225adab3abe166edc246fdca8f478653dab92d58 | 8,758 | markdown | Markdown | _posts/2017-11-28-introduction-gitlab-ci.markdown | rpadovani/rpadovani.com | 2c058712efbf0637544e51ac069e32a817ec9dff | [
"MIT"
] | null | null | null | _posts/2017-11-28-introduction-gitlab-ci.markdown | rpadovani/rpadovani.com | 2c058712efbf0637544e51ac069e32a817ec9dff | [
"MIT"
] | null | null | null | _posts/2017-11-28-introduction-gitlab-ci.markdown | rpadovani/rpadovani.com | 2c058712efbf0637544e51ac069e32a817ec9dff | [
"MIT"
] | null | null | null | ---
layout: post
title: "A generic introduction to Gitlab CI"
date: 2017-11-28 21:00
description: "A generic introduction to Gitlab CI and an overview on its features"
categories:
- gitlab
- gitlab ci
permalink: introduction-gitlab-ci
---
At [fleetster][fleetster] we have our own instance of [Gitlab][gitlab] and we
rely a lot on [Gitlab CI][gitlabci]. Also our designers and QA guys use (and
love) it, thanks to its advanced features.
Gitlab CI is a very powerful system of Continuous Integration, with a lot of
different features, and with every new releases, new features land. It has a
very rich technical documentation, but it lacks a generic introduction for whom
want to use it in an already existing setup. A designer or a tester doesn't need
to know how to autoscale it with Kubernetes or the difference between an `image`
or a `service`.
But still, they need to know what is a **pipeline**, and how to see a branch
deployed to an **environment**. In this article therefore I will try to cover as
many features as possible, highlighting how the end users can enjoy them; in the
last months I explained such features to some members of our team, also
developers: not everyone knows what Continuous Integration is or has used Gitlab
CI in a previous job.
If you want to know why Continuous Integration is important I suggest to read
[this article][why-ci], while for finding the reasons for using Gitlab CI
specifically, I leave the job to [Gitlab.com][gitlabci] itself.
## Introduction
Every time developers change some code they save their changes in a **commit**.
They can then push that commit to Gitlab, so other developers can review the code.
Gitlab will also start some work on that commit, if the Gitlab CI has been
configured. This work is executed by a **runner**. A runner is basically a
server (it can be a lot of different things, also your PC, but we can simplify
it as a server) that executes instructions listed in the `.gitlab-ci.yml` file,
and reports the result back to Gitlab itself, which will show it in his
graphical interface.
When developers have finished implementing a new feature or a bugfix (activity
that usual requires multiple commits), can open a **merge request**, where other
member of the team can comment on the code and on the implementation.
As we will see, also designers and testers can (and really should!) join this
process, giving feedbacks and suggesting improvements, especially thanks to two
features of Gitlab CI: **environments** and **artifacts**.
## Pipelines
Every commit that is pushed to Gitlab generates a **pipeline** attached to that
commit. If multiple commits are pushed together the pipeline will be created
only for the last of them. A pipeline is a collection of **jobs** split in
different **stages**.
All the jobs in the same stage run in concurrency (if there are enough runners)
and the next stage begins only if all the jobs from the previous stage have
finished with success.
As soon as a job fails, the entire pipeline fails. There is an exception for
this, as we will see below: if a job is marked as _manual_, then a failure
will not make the pipeline fails.
The stages are just a logic division between batches of jobs, where doesn't make
sense to execute next jobs if the previous failed. We can have a `build` stage,
where all the jobs to build the application are executed, and a `deploy` stage,
where the build application is deployed. Doesn't make much sense to deploy
something that failed to build, does it?
Every job shouldn't have any dependency with any other job in the same stage,
while they can expect results by jobs from a previous stage.
Let's see how Gitlab shows information about stages and stages' status.
![pipeline-overview][pipeline-overview]
![pipeline-status][pipeline-status]
## Jobs
A job is a collection of instructions that a runner has to execute. You can see
in real time what's the output of the job, so developers can understand why a
job fails.
A job can be automatic, so it starts automatically when a commit is pushed, or
manual. A manual job has to be triggered by someone manually. Can be useful, for
example, to automatize a deploy, but still to deploy only when someone manually
approves it. There is a way to limit who can run a job, so only trustworthy
people can deploy, to continue the example before.
A job can also build **artifacts** that users can download, like it creates an
APK you can download and test on your device; in this way both designers and
testers can download an application and test it without having to ask for help
to developers.
Other than creating artifacts, a job can deploy an **environment**, usually
reachable by an URL, where users can test the commit.
Job status are the same as stages status: indeed stages inherit theirs status
from the jobs.
![running-job][running-job]
## Artifacts
As we said, a job can create an artifact that users can download to test. It can
be anything, like an application for Windows, an image generated by a PC, or an
APK for Android.
So you are a designer, and the merge request has been assigned to you: you need
to validate the implementation of the new design!
But how to do that?
You need to open the merge request, and download the artifact, as shown in the
figure.
Every pipeline collects all the artifacts from all the jobs, and every job can
have multiple artifacts. When you click on the download button, it will appear a
dropdown where you can select which artifact you want. After the review, you can
leave a comment on the MR.
You can always download the artifacts also from pipelines that do not have a
merge request open ;-)
I am focusing on merge request because usually is where testers, designer, and
shareholder in general enter the workflow.
But merge requests are not linked to pipelines: while they integrate nice one in
the other, they do not have any relation.
![download-artifacts][download-artifacts]
## Environments
In a similar way, a job can deploy something to an external server, so you can
reach it through the merge request itself.
As you can see the environment has a name and a link. Just clicking the link you
to go to a deployed version of your application (of course, if your team has
setup it correctly).
You can click also on the name of the environment, because Gitlab has also other
cool features for environments, like [monitoring][monitoring].
![environment][environment]
## Conclusion
This was a small introduction to some of the features of Gitlab CI: it is very
powerful, and using it in the right way allows all the team to use just one tool
to go from planning to deploying. A lot of new features are introduced every
month, so keep an eye on the [Gitlab blog][gitlab-blog].
For setting it up, or for more advanced features, take a look to the
[documentation][documentation-ci].
In fleetster we use it not only for running tests, but also for having automatic
versioning of the software and automatic deploys to testing environments. We
have automatized other jobs as well (building apps and publish them on the Play
Store and so on).
Speaking of which, **do you want to work in a young and dynamically office with
me and a lot of other amazing guys?** Take a look to the [open positions][jobs]
at fleetster!
Kudos to the Gitlab team (and others guys who help in their free time) for their
awesome work!
If you have any question or feedback about this blog post, please drop me an
email at [[email protected]](mailto:[email protected]) or [tweet me][twitter] :-)
Feel free to suggest me to add something, or to rephrase paragraphs in a clearer
way (English is not my mother tongue).
Bye for now,<br/>
R.
P.S: if you have found this article helpful and you'd like we write others, do
you mind to help us reaching the [Ballmer's peak][ballmer] and [buy me][donation] a beer?
[donation]: https://rpadovani.com/donations
[gitlab]: https://gitlab.com/
[gitlabci]: https://about.gitlab.com/gitlab-ci/
[fleetster]: https://www.fleetster.net
[jobs]: https://www.fleetster.net/fleetster-team.html
[why-ci]: https://about.gitlab.com/2015/02/03/7-reasons-why-you-should-be-using-ci/
[ballmer]: https://www.xkcd.com/323/
[gitlab-blog]: https://about.gitlab.com/
[documentation-ci]: https://docs.gitlab.com/ee/ci/README.html
[twitter]: https://twitter.com/rpadovani93
[pipeline-overview]: https://img.rpadovani.com/posts/pipeline-overview.png
[pipeline-status]: https://img.rpadovani.com/posts/pipeline-status.png
[running-job]: https://img.rpadovani.com/posts/running-job.png
[environment]: https://img.rpadovani.com/posts/environment.png
[download-artifacts]: https://img.rpadovani.com/posts/download-artifacts.png
[monitoring]: https://gitlab.com/help/ci/environments.md
| 43.356436 | 91 | 0.774834 | eng_Latn | 0.999652 |
225af37820c1695ee81f377bef075928d0b29697 | 4,706 | md | Markdown | README.md | thomascriss/AzureMonitor2hWorkshop | 871343c89f237e68cb577ec37b8c79c5e8e63f25 | [
"CC0-1.0"
] | null | null | null | README.md | thomascriss/AzureMonitor2hWorkshop | 871343c89f237e68cb577ec37b8c79c5e8e63f25 | [
"CC0-1.0"
] | null | null | null | README.md | thomascriss/AzureMonitor2hWorkshop | 871343c89f237e68cb577ec37b8c79c5e8e63f25 | [
"CC0-1.0"
] | null | null | null | # Azure Monitor 2-hour Workshop
2-hour Azure Monitor Workshop
Goals:
Create a dashboard with basic metrics for Azure VMs.
Create an alert rule and action group.
Prerequisites:
Azure Subscription with at contributer access or higher
Create an Azure Virtual Machine via the Portal
(You may skip this section if you already have a VM you would like to monitor)
1. Navigate to the Azure portal and sign in
2. Click on your name in the top-right and ensure you are working in the desired directory
3. In the search bar at the top, search "Virtual Machine" and select it
4. Click the 'Add' button
5. Select your desired Subscription. For Resource group, click "Create new" and name it "Monitor-Workshop-RG"
6. Give your Virtual machine a name such as "Monitor-Workshop-VM-01"
7. No infrastructure redundancy is required for this lab
8. For Image, select "Windows Server 2019 Datacenter"
9. For Azure Spot instance, select "No"
10. For Size, I recommend at least a D2s_v3 (2 vcpus or higher)
11. Provide a Username and Password
12. For Public inbound ports, select "None"
13. Click the Networking tab, and create a new Virtual Network. Call it "Monitor-Workshop-VNET"
14. Leave the rest of the values default
15. Click the Management tab
16. For OS guest diagnostics, select "On"
17. For Auto-shutdown, select on and choose an appropriate time in your time zone for the server to shut off. (This will save you money if you forget to turn off or remove the VM after the workshop)
18. Leave all other settings on default
19. Click the Review + create tab, and click create
Create a dashboard
1. Navigate to the Azure portal and sign in
2. Navigate to Azure Dashboard
3. Add a dashboard and name it "Monitor Workshop". You don't need to add anything else yet.
Enable Azure Monitor Insights
1. Navigate to Azure Monitor
2. Under Insights, click Virtual Machines
3. Under the Not monitored tab, locate your VM and click the enable button to enable Insights, click create a new Log Analytics Workspace
4. Azure will now install the Microsoft Monitoring Agent (MMA) on the VM and begin to collect metrics (this may take a few minutes)
Create an Alert rule
While metrics begin to collect for the VM, let's create an alert rule for the VM that will notify us when the VM is powered off
1. Navigate to Monitor | Alerts
2. Click 'New alert rule'
3. Select the scope of the resource, narrow down by subscription and resource type 'Virtual Machine', then select the VM and click 'Done'
4. Select the condition, choose Signal type 'Activity Log', and select "Power Off Virtual Machine (Microsoft.Compute/virtualMachines)
5. Under Alert logic, for Event Level, choose 'Critical', and for Status, choose 'All', and for Event initaited by, choose '(All services and users)
6. Review the condition preview and click 'Done'
7. Under Action Group, select action group, Create action group, select your "Monitor-Workshop-RG" resource group, and provide an Action group name and display name, such as "Ops Team" or "Admins"
8. Back to the alert rule, provide an Alert rule name such as "VM power off" and description
9. Click Create alert rule
10. Back to the Monitor | Alerts page, click 'Manage actions' button at the top
11. Click the action group that you created in the previous step
12. Here you can create notifications and actions. Under Notifications, select "Email/SMS message/Push/Voice"
13. Select email and provide your email, and click OK
14. Enter a name for the Notification such as "Email Bryan"
15. Create any additional notifications that you would like to test and give them corresponding names
16. Click Save changes at the top
Create a Metric and add it to the dashboard
1. Navigate to Monitor | Metrics
2. Select the subscription, and Virtual Machines resource types
3. Select the VM and click Apply
4. A bubble window will appear above the graph, under Metric select 'Percentage CPU'
5. Click the 'Pin to dashboard' button above the bubble window
6. Navigate to the dashboard and locate your metric, then click on it to open the Metric
7. Here you can add additional metrics that will overlay on the current graph. If you have multiple VMs, you can add them as well
8. Navigate back to the dashboard
9. At the top, click "Auto refresh" and select Every hour
10. At the top, click "UTC Time: Past 24 hours" and choose Past 4 hours, then under Time granularity, select 5 minutes and click Apply. This will apply a time range and granularity for all metrics on the dashboard.
Cleaning up
1. Search Resource Groups
2. Select your resource group, in the previous steps we created "Monitor-Workshop-RG"
3. At the top, click "Delete resource group", type the resource group name and click Delete
| 51.152174 | 214 | 0.778581 | eng_Latn | 0.994312 |
225b52cc90f38b2cf70ff14ab3e03af677d09fb9 | 6,953 | md | Markdown | readme.md | JensHeinrich/cuss | b99c7e0503fd5a8f3cf6bbb92f5e9d8b1fe68a0d | [
"MIT"
] | 115 | 2017-12-06T02:58:42.000Z | 2022-03-02T20:20:37.000Z | readme.md | JensHeinrich/cuss | b99c7e0503fd5a8f3cf6bbb92f5e9d8b1fe68a0d | [
"MIT"
] | 25 | 2018-02-16T12:40:28.000Z | 2021-12-30T17:36:48.000Z | readme.md | JensHeinrich/cuss | b99c7e0503fd5a8f3cf6bbb92f5e9d8b1fe68a0d | [
"MIT"
] | 40 | 2017-12-06T02:58:43.000Z | 2021-11-16T22:28:47.000Z | # cuss
[![Build][build-badge]][build]
[![Coverage][coverage-badge]][coverage]
[![Downloads][downloads-badge]][downloads]
[![Size][size-badge]][size]
Map of profanities, slurs, and obscenities to a sureness rating.
This rating *does not* represent *how* vulgar a term is, instead, how
likely it is to be used as either profanity or clean text.
## Install
This package is [ESM only](https://gist.github.com/sindresorhus/a39789f98801d908bbc7ff3ecc99d99c):
Node 12+ is needed to use it and it must be `import`ed instead of `require`d.
[npm][]:
```sh
npm install cuss
```
## Use
```js
import {cuss} from 'cuss'
console.log(Object.keys(cuss).length) // 1776
console.log(cuss.beaver) // 0
console.log(cuss.asshat) // 2
```
### Usage of locale versions
To use Portuguese do:
```js
import {cuss} from 'cuss/pt'
console.log(Object.keys(cuss).length) // 173
console.log(cuss.burro) // 1
console.log(cuss.bixa) // 2
```
## API
`cuss` has the following entries in its export map: `cuss` (English),
`cuss/ar-latn` (Arabic (Latin script)), `cuss/es` (Spanish), `cuss/fr` (French),
`cuss/it` (Italian), `cuss/pt` (Portuguese), `cuss/pt-pt` (Portuguese
(Portugal)).
Each entry exports the identifier `cuss`.
There are no default exports.
### `cuss`
Each `cuss` is a dictionary of phrases to ratings (`Record<string, number>`),
where each key can be considered offensive, and each rating is a number between
`0` and `2` (both including), representing the certainty the key is used as a
profanity depending on context.
| Rating | Use as a profanity | Use in clean text | Example |
| ------ | ------------------ | ----------------- | ------- |
| 2 | likely | unlikely | asshat |
| 1 | maybe | maybe | addict |
| 0 | unlikely | likely | beaver |
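For instance, a quick sketch of narrowing the map down to the phrases most likely to be used as profanity (rating `2`):

```js
import {cuss} from 'cuss'

// Keep only the phrases rated as likely profanity.
const likelyProfane = Object.keys(cuss).filter((phrase) => cuss[phrase] === 2)

console.log(likelyProfane.includes('asshat')) // true
```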
## Support
*   [`cuss`](index.js) — ± 1770 English profane words and phrases (from
    [Luis von Ahn’s Research Group (Carnegie Mellon)][luis-von-ahn], the
    [`List of ethnic slurs` from Wikipedia][racial-slurs], and many
    contributions since)
* [`cuss/ar-latn`](ar-latn.js) — ± 250 Arabic (Latin-Script) profane words
and phrases from [`naughty-words`][ar-source-naughty-words] and
[`youswear`][ar-source-youswear]
* [`cuss/es`](es.js) — ± 650 Spanish profane words and phrases from
[`naughty-words`][es-source-naughty-words],
[`revistagq.com`][es-source-revistagq], [`taringa.net`][es-source-taringa],
[`mundoxat.om`][es-source-mundoxat]
* [`cuss/fr`](fr.js) — ± 740 French profane words and phrases from
[`wiktionary.org`][fr-source]
* [`cuss/it`](it.js) — ± 800 Italian profane words and phrases from
    [Italian profanity][it-source] (Wikipedia);
[Italian slang][it-source-wiktionary-slang]
[Italian offensive terms][it-source-wiktionary-offensive]
[Italian dialectal terms][it-source-wiktionary-dialectal]
[Italian jocular terms][it-source-wiktionary-jocularterms]
(Wiktionary);
[Parole oscene][it-source-treccani-paroleoscene] (Treccani);
and [`chucknorris-io/swear-words`][it-source-swear-words]
* [`cuss/pt`](pt.js) — ± 148 Portuguese profane words from
[`aprenderpalavras.com`][pt-source]
* [`cuss/pt-pt`](pt-pt.js) — ± 45 Portuguese profane words from
[`wikipedia`][pt-pt-source] and common culture
## Related
* [`buzzwords`](https://github.com/words/buzzwords)
— List of buzzwords
* [`dale-chall`](https://github.com/words/dale-chall)
— List of familiar American-English words (1995)
* [`fillers`](https://github.com/words/fillers)
— List of filler words
* [`hedges`](https://github.com/words/hedges)
— List of hedge words
* [`profanities`][profanities]
— List of the same profane words, but without the sureness
* [`spache`](https://github.com/words/spache)
— List of simple American-English words (1974)
* [`weasels`](https://github.com/words/weasels)
— List of weasel words
## Contributing
Thanks, contributions are greatly appreciated!
:+1:
New terms can be added to the corresponding files as listed in the support
section.
To add a new language, create a new JS file with a [BCP 47][bcp47-spec] language
tag as its name (lower case, dashes, and
[preferred and normalized](https://github.com/wooorm/bcp-47-normalize)).
After adding a word, run `npm install` to install all required dependencies,
then `npm test` to update: the project includes some scripts to make sure
everything is in order.
Finally, open a pull request.
## License
[MIT][license] © [Titus Wormer][author]
<!-- Definitions -->
[build-badge]: https://github.com/words/cuss/workflows/main/badge.svg
[build]: https://github.com/words/cuss/actions
[coverage-badge]: https://img.shields.io/codecov/c/github/words/cuss.svg
[coverage]: https://codecov.io/github/words/cuss
[downloads-badge]: https://img.shields.io/npm/dm/cuss.svg
[downloads]: https://www.npmjs.com/package/cuss
[size-badge]: https://img.shields.io/bundlephobia/minzip/cuss.svg
[size]: https://bundlephobia.com/result?p=cuss
[npm]: https://docs.npmjs.com/cli/install
[license]: license
[author]: https://wooorm.com
[profanities]: https://github.com/words/profanities
[fr-source]: https://fr.wiktionary.org/wiki/Cat%C3%A9gorie:Insultes_en_fran%C3%A7ais
[ar-source-naughty-words]: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/blob/master/ar
[ar-source-youswear]: https://www.youswear.com/index.asp?language=Arabic
[es-source-taringa]: https://www.taringa.net/posts/info/7253513/Listado-de-vulgarismos-y-malas-palabras-en-espanol.htm
[es-source-mundoxat]: https://www.mundoxat.com/foro/showthread.php?301-Lista-de-palabras-MALAS-Necesito-AYUDA%21
[es-source-naughty-words]: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/blob/master/es
[es-source-revistagq]: https://www.revistagq.com/la-buena-vida/articulos/221-insultos-en-castellano-que-deberias-saber/19728
[it-source]: https://en.wikipedia.org/wiki/Italian_profanity
[it-source-wiktionary-slang]: https://en.wiktionary.org/wiki/Category:Italian_slang
[it-source-wiktionary-offensive]: https://en.wiktionary.org/wiki/Category:Italian_offensive_terms
[it-source-wiktionary-dialectal]: https://en.wiktionary.org/wiki/Category:Italian_dialectal_terms
[it-source-wiktionary-jocularterms]: https://en.wiktionary.org/wiki/Category:Italian_jocular_terms
[it-source-treccani-paroleoscene]: http://www.treccani.it/enciclopedia/parole-oscene_\(Enciclopedia-dell'Italiano\)/
[it-source-swear-words]: https://github.com/chucknorris-io/swear-words/blob/master/it
[pt-source]: https://aprenderpalavras.com/lista-de-palavroes-xingamentos-e-girias/
[luis-von-ahn]: https://www.cs.cmu.edu/~biglou/resources/
[racial-slurs]: https://en.wikipedia.org/wiki/List_of_ethnic_slurs
[bcp47-spec]: https://tools.ietf.org/html/bcp47
[pt-pt-source]: https://pt.wikipedia.org/wiki/Palavr%C3%B5es_na_l%C3%ADngua_portuguesa
| 34.765 | 124 | 0.709909 | eng_Latn | 0.356993 |
225c0fe82ba7d5454abe878138a0ee92cf3dbaaa | 861 | md | Markdown | README.md | nodesign/weio-cloud | 8ba48cc0919c2c1ab0bced8da2c7d7856d78e703 | [
"Apache-2.0"
] | null | null | null | README.md | nodesign/weio-cloud | 8ba48cc0919c2c1ab0bced8da2c7d7856d78e703 | [
"Apache-2.0"
] | null | null | null | README.md | nodesign/weio-cloud | 8ba48cc0919c2c1ab0bced8da2c7d7856d78e703 | [
"Apache-2.0"
] | null | null | null | # WeIO Cloud
WeIO Cloud service.
### Install/Deploy
```bash
go get github.com/nodesign/weio-cloud
$GOBIN/weio-cloud
```
If you are new to Go, more information about setting up the environment and fetching Mainflux code can be found [here](https://github.com/mainflux/mainflux-doc/blob/master/goenv.md).
### Documentation
Development documentation can be found on our [WeIO GitHub Wiki](https://github.com/nodesign/weio/wiki).
### Community
#### Mailing list
[weio](https://groups.google.com/forum/#!forum/weio) Google group
For quick questions and suggestions you can also use GitHub Issues.
#### IRC
[WeIO Gitter](https://gitter.im/nodesign/weio)
#### Twitter
[@weionet](https://twitter.com/weionet)
### Authors
Engineered by [@drasko](https://github.com/drasko) and [@ukicar](https://github.com/ukicar)
### License
[Apache License, version 2.0](LICENSE)
| 26.90625 | 178 | 0.736353 | eng_Latn | 0.511561 |
225ca6198635817d23506e8c59a6a074339d5d0a | 13,301 | md | Markdown | docs/cosmos-repo/content/_index.md | Rabosa616/azure-cosmos-dotnet-repository | b38bda3301beed210f4ee96bdb389ff174602329 | [
"MIT"
] | null | null | null | docs/cosmos-repo/content/_index.md | Rabosa616/azure-cosmos-dotnet-repository | b38bda3301beed210f4ee96bdb389ff174602329 | [
"MIT"
] | null | null | null | docs/cosmos-repo/content/_index.md | Rabosa616/azure-cosmos-dotnet-repository | b38bda3301beed210f4ee96bdb389ff174602329 | [
"MIT"
] | null | null | null | # Azure Cosmos DB Repository .NET SDK
This package wraps the [NuGet: Microsoft.Azure.Cosmos package](https://www.nuget.org/packages/Microsoft.Azure.Cosmos),
exposing a simple dependency-injection enabled `IRepository<T>` interface.

## Overview
The repository is responsible for all of the create, read, update, and delete (CRUD) operations on objects `where T : Item`. The `Item` type adds
several properties, one of which is a globally unique identifier defined as:
```csharp
[JsonProperty("id")]
public string Id { get; set; } = Guid.NewGuid().ToString();
```
Additionally, a type property exists which indicates the subclass name (this is used for filtering implicitly on your behalf):
```csharp
[JsonProperty("type")]
public string Type { get; set; }
```
Finally, a partition key property is used internally to manage partitioning on your behalf. This can optionally be overridden on an item-by-item basis.
📣 [Azure Cosmos DB - Official Blog](https://devblogs.microsoft.com/cosmosdb/azure-cosmos-db-repository-net-sdk-v-1-0-4)
For more information, see [Customizing configuration sources](https://docs.microsoft.com/azure/azure-functions/functions-dotnet-dependency-injection?WC.m_id=dapine#customizing-configuration-sources).
#### Authenticating using an identity
The Azure Cosmos DB .NET SDK also supports authentication using identities, which are considered superior from an audit and granularity of permissions perspective. Authenticating using a connection string essentially provides full access to perform operations within the [data plane](https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control)
of your Cosmos DB Account. More information on the Azure control plane and data plane is available [here](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/control-plane-and-data-plane).
This library also supports authentication using an identity. To authenticate using an identity (User, Group, Application Registration, or Managed Identity), you will need to set the `AccountEndpoint` and `TokenCredential` options that are available on the `RepositoryOptions` class.
In a basic scenario, there are three steps that need to be completed:
1. If the identity that you would like to use, does not exist in Azure Active Directory, create it now.
1. Use the Azure CLI to [assign](https://docs.microsoft.com/en-us/cli/azure/cosmosdb/sql/role/assignment?view=azure-cli-latest#az_cosmosdb_sql_role_assignment_create) the appropriate role to your identity at the desired scope. - In most cases, using the [built-in roles](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac#built-in-role-definitions) will be sufficient. However, there is support for creating custom role definitions using the Azure CLI; you can read more on this [here](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac#role-definitions).
1. Configure your application using the `AddCosmosRepository` method in your `Startup.cs` file:
```csharp
using Azure.Identity;
public void ConfigureServices(IServiceCollection services)
{
DefaultAzureCredential credential = new();
services.AddCosmosRepository(
options =>
{
options.TokenCredential = credential;
options.AccountEndpoint = "< account endpoint URI >";
options.ContainerId = "data-store";
options.DatabaseId = "samples";
});
}
```
The example above is using the `DefaultAzureCredential` object provided by the [Azure Identity](https://www.nuget.org/packages/Azure.Identity) NuGet package, which provides seamless integration with Azure Active Directory. More information on this package is available [here](https://docs.microsoft.com/en-us/dotnet/api/overview/azure/identity-readme).
## Advanced partitioning strategy
As a consumer of Azure Cosmos DB, you can choose how to partition your data. By default, this repository SDK will partition items using their `Item.Id` value as the `/id` partition in the storage container. However, you can override this default behavior by:
1. Declaratively specifying the partition key path with `PartitionKeyPathAttribute`
1. Override the `Item.GetPartitionKeyValue()` method
1. Ensure the the property value of the composite or synthetic key is serialized to match the partition key path
1. Set `RepositoryOptions__ContainerPerItemType` to `true`, to ensure that your item with explicit partitioning is correctly maintained
As an example, consider the following:
```csharp
using Microsoft.Azure.CosmosRepository;
using Microsoft.Azure.CosmosRepository.Attributes;
using Newtonsoft.Json;
using System;
namespace Example
{
[PartitionKeyPath("/synthetic")]
public class Person : Item
{
public string FirstName { get; set; } = null!;
public string? MiddleName { get; set; }
public string LastName { get; set; } = null!;
[JsonProperty("synthetic")]
public string SyntheticPartitionKey =>
$"{FirstName}-{LastName}"; // Also known as a "composite key".
protected override string GetPartitionKeyValue() => SyntheticPartitionKey;
}
}
```
<!--
Notes for tagging releases:
https://rehansaeed.com/the-easiest-way-to-version-nuget-packages/#minver
git tag -a 2.1.3 -m "Build v2.1.3"
git push upstream --tags
dotnet build
-->
## In-memory Repository
This library also includes an in-memory version of `IRepository<T>`. To use it swap out the normal
`services.AddCosmosRepository()` for
`services.AddInMemoryCosmosRepository()` and have all of your items stored in memory. This is a great tool for running integration tests using a package such as `Microsoft.AspNetCore.Mvc.Testing`, and not having to incur the cost of data stored in an Azure Cosmos DB resource.
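A minimal sketch of the swap, assuming the same `ConfigureServices` shape as the earlier examples:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Items live in memory instead of an Azure Cosmos DB account;
    // handy for integration tests.
    services.AddInMemoryCosmosRepository();
}
```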
## Optimistic Concurrency Control with Etags
The default repository now supports etags and will pass them when `IItemWithEtag` is implemented correctly or the base classes `EtagItem` or `FullItem` are used. The etag check is enforced on all updates when `TItem` is of the correct type. It can however be bypassed by setting the `ignoreEtag` optional parameter in the relevant async methods. The InMemory repository also supports OCC with etags. The [OptimisticConcurrencyControl sample](https://github.com/IEvangelist/azure-cosmos-dotnet-repository/tree/main/Microsoft.Azure.CosmosRepository.Samples/OptimisticConcurrencyControl) shows these features.
### Getting the new etag
When creating a new object, if storing in memory, it is important to store the result from the create call to ensure you have the correct etag for future updated.
For example your code should look something like this:
```csharp
TItem currentItem = new TItem(...);
// Keep the returned item: it carries the etag that Cosmos DB assigned.
currentItem = await _repository.CreateAsync(currentItem);
```
### Sequential updates
When doing sequential updates to the same item, it is important to use the result from the update method (when `OptimizeBandwidth` is false) or to re-fetch the updated data each time (when `OptimizeBandwidth` is true); otherwise the etag value will not be updated. The following code shows what to do in each case:
#### Optimize Bandwidth Off
```csharp
TItem currentItem = await _repository.CreateAsync(itemConfig);
// Each update returns the item with its refreshed etag.
currentItem = await _repository.UpdateAsync(currentItem);
currentItem = await _repository.UpdateAsync(currentItem);
```
#### Optimize Bandwidth On
```csharp
TItem currentItem = await _repository.CreateAsync(itemConfig);
// In this mode the update call does not return the refreshed etag,
// so re-fetch the item before updating it again.
await _repository.UpdateAsync(currentItem);
currentItem = await _repository.GetAsync(currentItem.Id);
await _repository.UpdateAsync(currentItem);
currentItem = await _repository.GetAsync(currentItem.Id);
currentItem = await _repository.UpdateAsync(currentItem);
```
### Catching mismatched etag errors
The following code shows how to catch the error when the etags do not match.
```csharp
try
{
currentBankAccount = await repository.UpdateAsync(currentBankAccount);
Console.WriteLine($"Updated bank account: {currentBankAccount}.");
}
catch (CosmosException exception) when (exception.StatusCode == HttpStatusCode.PreconditionFailed)
{
Console.WriteLine("Failed to update balance as the etags did not match.");
}
```
### Ignoring the etag
The following code shows how to ignore the etag when doing an update.
```csharp
await repository.UpdateAsync(currentBankAccount, ignoreEtag: true);
```
### Passing the etag to a patch update
The following code shows how to pass the etag when doing an update to specific properties.
```csharp
await repository.UpdateAsync(currentBankAccount.Id,
builder => builder.Replace(account => account.Balance, currentBankAccount.Balance - 250), etag: currentBankAccount.Etag);
```
## Time To Live
The time to live property can be set at both an item and container level. At a container level this can be done through the container options builder:
```csharp
options.ContainerBuilder.Configure<BankAccount>(
x => x.WithContainerDefaultTimeToLive(TimeSpan.FromHours(2)));
```
In the above example the container would have a default item lifespan of 2 hours. This can be overridden at the item level by using the `TimeToLive` property when correctly implemented. This is available through the `FullItem` and `TimeToLiveItem` base classes. The example below shows this being overridden so that the item has a lifespan of 4 hours rather than the default of 2:
```csharp
BankAccount currentBankAccount = await repository.CreateAsync(
new BankAccount()
{
Name = "Current Account",
Balance = 500.0,
TimeToLive = TimeSpan.FromHours(4)
});
```
If you didn't want that specific item to ever expire the following code can be used:
```csharp
BankAccount currentBankAccount = await repository.CreateAsync(
new BankAccount()
{
Name = "Current Account",
Balance = 500.0,
TimeToLive = TimeSpan.FromSeconds(-1)
});
```
The demo `BankAccount` class can be found in the [OptimisticConcurrencyControl sample](https://github.com/IEvangelist/azure-cosmos-dotnet-repository/tree/main/Microsoft.Azure.CosmosRepository.Samples/OptimisticConcurrencyControl) and its implementation looks like the following:
```csharp
using Microsoft.Azure.CosmosRepository;
using Microsoft.Azure.CosmosRepository.Attributes;
namespace OptimisticConcurrencyControl;
[Container("accounts")]
[PartitionKeyPath("/id")]
public class BankAccount : FullItem
{
public string Name { get; set; } = string.Empty;
public double Balance { get; set; }
public void Withdraw(double amount)
{
if (Balance - amount < 0.0) throw new InvalidOperationException("Cannot go overdrawn");
Balance -= amount;
}
public void Deposit(double amount)
{
Balance += amount;
}
public override string ToString() =>
$"Account (Name = {Name}, Balance = {Balance}, Etag = {Etag})";
}
```
This [page](https://docs.microsoft.com/en-us/azure/cosmos-db/sql/time-to-live) goes into more detail about the various combinations.
## Created and Last Updated
The last updated value is retrieved from the `_ts` property that Cosmos DB sets, as documented [here](https://docs.microsoft.com/en-us/rest/api/cosmos-db/documents#:~:text=the%20document%20resource.-,_ts,updated%20timestamp%20of%20the%20resource.%20The%20value%20is%20a%20timestamp.,-_self). This property is deserialised and is available both as the raw seconds since epoch (`LastUpdatedTimeRaw`) and in a human-readable format (`LastUpdatedTimeUtc`). Both the base classes `FullItem` and `TimeStampedItem` contain these properties.
The `CreatedTimeUtc` time property available in both the base classes `FullItem` and `TimeStampedItem` is set when `CreateAsync` is called on the repository. However, this property can be set prior to calling `CreateAsync`, in which case it won't be overwritten, allowing you to set your own `CreatedTimeUtc` value. This does mean that when using pre-existing data the `CreatedTimeUtc` property will be null.
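For example, a sketch of importing an item while preserving its original creation time (reusing the `BankAccount` item from the sample above; the date is made up):

```csharp
BankAccount imported = new()
{
    Name = "Imported Account",
    Balance = 125.0,
    // Set before CreateAsync so this value is kept rather than
    // being overwritten with the current time.
    CreatedTimeUtc = new DateTime(2020, 1, 1, 0, 0, 0, DateTimeKind.Utc)
};
imported = await repository.CreateAsync(imported);
```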
## Samples
Visit the `Microsoft.Azure.CosmosRepository.Samples` [directory](https://github.com/IEvangelist/azure-cosmos-dotnet-repository/tree/main/Microsoft.Azure.CosmosRepository.Samples) for samples on how to use the library with:
- [Azure Functions](https://github.com/IEvangelist/azure-cosmos-dotnet-repository/tree/main/Microsoft.Azure.CosmosRepository.Samples/AzureFunctionTier)
- [Services](https://github.com/IEvangelist/azure-cosmos-dotnet-repository/tree/main/Microsoft.Azure.CosmosRepository.Samples/ServiceTier)
- [Controllers (web apps)](https://github.com/IEvangelist/azure-cosmos-dotnet-repository/tree/main/Microsoft.Azure.CosmosRepository.Samples/WebTier)
- [Paging](https://github.com/IEvangelist/azure-cosmos-dotnet-repository/tree/main/Microsoft.Azure.CosmosRepository.Samples/Paging)
## Deep-dive video
[](https://www.youtube.com/watch?v=izdnmBrTweA)
## Discord
Get extra support on our dedicated Discord channel.
[](https://discord.com/invite/qMXrX4shAv)
| 48.192029 | 603 | 0.766258 | eng_Latn | 0.916083 |
225cd5e72488c571f51d45ee1208b0c829591742 | 218 | md | Markdown | _watches/M20200504_063838_TLP_3.md | Meteoros-Floripa/meteoros.floripa.br | 7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad | [
"MIT"
] | 5 | 2020-01-22T17:44:06.000Z | 2020-01-26T17:57:58.000Z | _watches/M20200504_063838_TLP_3.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | null | null | null | _watches/M20200504_063838_TLP_3.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | 2 | 2020-05-19T17:06:27.000Z | 2020-09-04T00:00:43.000Z | ---
layout: watch
title: TLP3 - 04/05/2020 - M20200504_063838_TLP_3T.jpg
date: 2020-05-04 06:38:38
permalink: /2020/05/04/watch/M20200504_063838_TLP_3
capture: TLP3/2020/202005/20200503/M20200504_063838_TLP_3T.jpg
---
| 27.25 | 62 | 0.784404 | eng_Latn | 0.051473 |
225daec6633cbf0977da679afb58f37e4d3fdd1d | 37 | md | Markdown | README.md | MitPeng/MyDrawingPractice | 115da68ea2eb7511d796a697d4c05a04e5d5c3ca | [
"MIT"
] | 1 | 2021-02-28T14:17:30.000Z | 2021-02-28T14:17:30.000Z | README.md | MitPeng/MyDrawingPractice | 115da68ea2eb7511d796a697d4c05a04e5d5c3ca | [
"MIT"
] | null | null | null | README.md | MitPeng/MyDrawingPractice | 115da68ea2eb7511d796a697d4c05a04e5d5c3ca | [
"MIT"
] | null | null | null | # My Drawing Practice
Just for fun.
| 9.25 | 21 | 0.72973 | eng_Latn | 0.989457 |
225ded921f9399743d3d2295a913122d79982784 | 455 | md | Markdown | Readme.md | firesidefox/libredisae | 1a17dae0c41eab67d70d2213fb609b36d7c2e8d3 | [
"Apache-2.0"
] | null | null | null | Readme.md | firesidefox/libredisae | 1a17dae0c41eab67d70d2213fb609b36d7c2e8d3 | [
"Apache-2.0"
] | null | null | null | Readme.md | firesidefox/libredisae | 1a17dae0c41eab67d70d2213fb609b36d7c2e8d3 | [
"Apache-2.0"
] | null | null | null |
# libredisae
## Why this project exists

libredisae is the event-driven library used by [Redis](https://github.com/antirez/redis). It supports FreeBSD, Linux, and several other operating systems in only about 1,000 lines of code. Its design is simple yet powerful, so it is also widely used in other projects, for example the HTTP benchmarking tool [wrk](https://github.com/wg/wrk).

I extracted this code into its own repository and added simple example programs. The goal is to help people quickly understand the library's design and learn how to use it, so they can adopt it in their own projects.

## Example program

The example program is a simple UDP echo server that demonstrates how to use the libredisae library (a sketch of its timer piece follows the list below). It implements the following:

- Receiving: listen on a given port, receive UDP packets, and count the bytes received
- Echo: send the received data back to the client
- Timer: print a report of the received byte count every two seconds
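A minimal sketch of the timer part, assuming the standard `ae` API as shipped with Redis (`aeCreateEventLoop`, `aeCreateTimeEvent`, `aeMain`); the UDP socket handling is omitted:

```c
#include <stdio.h>
#include "ae.h"

static long long bytes_received = 0;

/* Time event: report the byte counter, then re-arm in 2000 ms. */
static int report(aeEventLoop *loop, long long id, void *clientData) {
    printf("received %lld bytes so far\n", bytes_received);
    return 2000;
}

int main(void) {
    aeEventLoop *loop = aeCreateEventLoop(64);
    aeCreateTimeEvent(loop, 2000, report, NULL, NULL);
    aeMain(loop); /* runs until aeStop() is called */
    aeDeleteEventLoop(loop);
    return 0;
}
```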
## Good things should be shared

Good things should be shared. I hope you like it!
225df8ff91dd162505341a9c6db247d023240e2d | 1,515 | md | Markdown | AlchemyInsights/download-and-activate.md | isabella232/OfficeDocs-AlchemyInsights-pr.vi-VN | 18a0b2fe64df0f41e705a51ee20a4f422a5ac9aa | [
"CC-BY-4.0",
"MIT"
] | 4 | 2020-05-19T19:08:25.000Z | 2021-02-19T19:52:24.000Z | AlchemyInsights/download-and-activate.md | isabella232/OfficeDocs-AlchemyInsights-pr.vi-VN | 18a0b2fe64df0f41e705a51ee20a4f422a5ac9aa | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:31:24.000Z | 2022-02-09T06:55:29.000Z | AlchemyInsights/download-and-activate.md | isabella232/OfficeDocs-AlchemyInsights-pr.vi-VN | 18a0b2fe64df0f41e705a51ee20a4f422a5ac9aa | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-10-09T20:33:30.000Z | 2021-10-09T10:36:59.000Z | ---
title: Tải xuống và kích hoạt
ms.author: pebaum
author: pebaum
manager: scotv
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "9001669"
- "3800"
ms.openlocfilehash: e094afe4b55a573a2bbf69505520692dddd9898f8b58a84bdebc61311c19c875
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: vi-VN
ms.lasthandoff: 08/05/2021
ms.locfileid: "53965051"
---
# <a name="download-and-activate"></a>Download and activate

To give a user access to the service, assign that user a license. For instructions on assigning licenses, see [Assign licenses to users](https://docs.microsoft.com/microsoft-365/admin/manage/assign-licenses-to-users).

- You can find the apps to download on the [My account](https://portal.office.com/account/#installs) page. It lists the apps available for you to download based on your assigned licenses.
- If you have already downloaded the Office apps, you may need to sign in to them with your work or school account. You can do that in any Office app (Word, Excel, and so on) by clicking **File > Account** (near the bottom). Under **User Information**, click **Switch Account**.

For more information, see [Install Office apps](https://docs.microsoft.com/microsoft-365/admin/setup/install-applications).
| 48.870968 | 324 | 0.768977 | vie_Latn | 0.999966 |
225ed9eb6c71a79c2ade8b783fd3cbd43d2cd88a | 2,098 | md | Markdown | AlchemyInsights/owa-search-mail-and-people.md | isabella232/OfficeDocs-AlchemyInsights-pr.id-ID | a378cb115ca9ee2ef20ad097a08471f925d505a9 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-05-19T19:06:50.000Z | 2021-03-06T00:35:09.000Z | AlchemyInsights/owa-search-mail-and-people.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.id-ID | 95d1cef182a766160dba451d9c5027f04a6dbe06 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:33:41.000Z | 2022-02-09T07:00:30.000Z | AlchemyInsights/owa-search-mail-and-people.md | isabella232/OfficeDocs-AlchemyInsights-pr.id-ID | a378cb115ca9ee2ef20ad097a08471f925d505a9 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:33:21.000Z | 2021-10-09T10:42:11.000Z | ---
title: 8000003 Mencari Email dan Orang di Outlook di web
ms.author: daeite
author: daeite
manager: joallard
ms.date: 04/21/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "1565"
- "8000003"
ms.openlocfilehash: f36366717c7ff3d44fda341b31fffafd08258a66e1cf10f1bdc53d868001f137
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: id-ID
ms.lasthandoff: 08/05/2021
ms.locfileid: "54063199"
---
# <a name="search-mail-and-people-on-outlook-on-the-web"></a>Search Mail and People in Outlook on the web

1. In the <img src='data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABUAAAAVBAMAAABbObilAAAAKlBMVEX///+WqL7l6u8vUn8iR3azwNDCzNlObJFAYIkDLWNeeZuks8d7ka1thaRtSbf+AAAAS0lEQVQI12MgFjAdmVkKY6csYxK5AGUbAqWsIUzGBiARAmGzCwAJlgQwmyMARiDEEeoxzWEyQZivLAS3l8kQ4RplkDF4hRkWEvQSABbdDSdqA/J0AAAAAElFTkSuQmCC' />
**Search** box at the top of the page, type what you want to search for (a contact, an email subject, or part of a message), then press Enter.

2. When you have finished searching, select the back arrow <img src='data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABEAAAAQCAYAAADwMZRfAAAACXBIWXMAABJ0AAASdAHeZh94AAAAB3RJTUUH4wgFEhguGsWa9wAAAAd0RVh0QXV0aG9yAKmuzEgAAAAMdEVYdERlc2NyaXB0aW9uABMJISMAAAAKdEVYdENvcHlyaWdodACsD8w6AAAADnRFWHRDcmVhdGlvbiB0aW1lADX3DwkAAAAJdEVYdFNvZnR3YXJlAF1w/zoAAAALdEVYdERpc2NsYWltZXIAt8C0jwAAAAh0RVh0V2FybmluZwDAG+aHAAAAB3RFWHRTb3VyY2UA9f+D6wAAAAh0RVh0Q29tbWVudAD2zJa/AAAABnRFWHRUaXRsZQCo7tInAAAAs0lEQVQ4jaXUsQ2FIBQF0IsyADWNYQcbExezo3YVB3ADgkvY6AIu4P2VP1/lhwC3InnJKR43T5AkAiGJ4ziglAqNb6n+Ad57jOOIfd+jCPjIeZ50znEYBq7r+hwHc0NygBuSC3yREoAkUQqQpHDOcZomaK3RNE38J35S1zX6vkfFcE2SIruug5QS8zyjbVsYY9KVa7HLstBam7fY61ECvcqWAwVrnwq9kCe0bVsUEWT5KfgAOVW28oYTSmkAAAAASUVORK5CYII=' /> in the **Search** box, or select any folder in the left pane to exit the search.

For more information, read [Search Mail and People in Outlook on the web](https://support.office.com/article/b27e5eb7-3255-4c61-bf16-1c6a16bc2e6b).
225ee6b813c4f30ea55a3d602c126ce6005b996e | 377 | md | Markdown | web/README.md | osam-cologne/archlinux-proaudio | d40c0c8c6fa894eae8e3da680f2d7ebe2471f131 | [
"Unlicense"
] | null | null | null | web/README.md | osam-cologne/archlinux-proaudio | d40c0c8c6fa894eae8e3da680f2d7ebe2471f131 | [
"Unlicense"
] | 16 | 2022-02-18T19:11:10.000Z | 2022-03-31T06:50:11.000Z | web/README.md | osam-cologne/archlinux-proaudio | d40c0c8c6fa894eae8e3da680f2d7ebe2471f131 | [
"Unlicense"
] | null | null | null | # Website
This builds a simple static site using [Hugo] to be deployed at
[arch.osamc.de](https://arch.osamc.de/).
To see a preview, run `./tools/preview-website.sh` from the repository root
directory. To render the package list, this extracts the repositories' files
databases to `<repo root>/.tmp/db/{x86_64,aarch64}`, so that location must be
writable by the current user.
[hugo]: https://gohugo.io/
| 29 | 74 | 0.737401 | eng_Latn | 0.953979 |
2260392b542f7390671cf24a3e36cff5d1c49768 | 265 | md | Markdown | src/main/resources/docs/description/W0602.md | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | src/main/resources/docs/description/W0602.md | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | src/main/resources/docs/description/W0602.md | h314to/codacy-pylint | 9d31567db6188e1b31ce0e1567998f64946502df | [
"Apache-2.0"
] | null | null | null | A variable was declared global in a scope, but it was never assigned. Since no modifications
were made to it, it's better to remove the global declaration altogether.
    bar = 42

    def foo():
        # 'bar' is only read, never assigned, so the 'global'
        # declaration is unnecessary and can be removed.
        global bar
        print(bar)
22609d88862b9e1d8c9613efdde6979c6923e682 | 44,948 | md | Markdown | docs/transform.md | benherbertson/r4ds-exercise-solutions | fa040bcaf1182643e72c2dbab6d5ec2fa4ac6a21 | [
"CC-BY-4.0"
] | null | null | null | docs/transform.md | benherbertson/r4ds-exercise-solutions | fa040bcaf1182643e72c2dbab6d5ec2fa4ac6a21 | [
"CC-BY-4.0"
] | null | null | null | docs/transform.md | benherbertson/r4ds-exercise-solutions | fa040bcaf1182643e72c2dbab6d5ec2fa4ac6a21 | [
"CC-BY-4.0"
] | 1 | 2019-11-25T07:31:09.000Z | 2019-11-25T07:31:09.000Z |
# Data Transformation
## Prerequisites
```r
library("nycflights13")
library("tidyverse")
```
## Filter
```r
glimpse(flights)
#> Observations: 336,776
#> Variables: 19
#> $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,...
#> $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,...
#> $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,...
#> $ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 55...
#> $ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 60...
#> $ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2...
#> $ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 7...
#> $ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 7...
#> $ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -...
#> $ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV",...
#> $ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79...
#> $ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN...
#> $ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR"...
#> $ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL"...
#> $ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138...
#> $ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 94...
#> $ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5,...
#> $ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, ...
#> $ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013...
```
## Exercises
1. Find all flights that
1. Had an arrival delay of two or more hours
2. Flew to Houston (IAH or HOU)
3. Were operated by United, American, or Delta
4. Departed in summer (July, August, and September)
5. Arrived more than two hours late, but didn’t leave late
6. Were delayed by at least an hour, but made up over 30 minutes in flight
7. Departed between midnight and 6am (inclusive)
*Had an arrival delay of two or more hours* Since delay is measured in minutes, we are looking
for flights where `arr_delay > 120` (strictly, "two or more hours" would be `arr_delay >= 120`, which also keeps flights delayed by exactly two hours):
```r
flights %>%
filter(arr_delay > 120)
#> # A tibble: 10,034 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time
#> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 2013 1 1 811 630 101 1047
#> 2 2013 1 1 848 1835 853 1001
#> 3 2013 1 1 957 733 144 1056
#> 4 2013 1 1 1114 900 134 1447
#> 5 2013 1 1 1505 1310 115 1638
#> 6 2013 1 1 1525 1340 105 1831
#> # ... with 1.003e+04 more rows, and 12 more variables:
#> # sched_arr_time <int>, arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>,
#> # distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
*Flew to Houston (IAH or HOU)*:
```r
flights %>%
filter(dest %in% c("IAH", "HOU"))
#> # A tibble: 9,313 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time
#> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 2013 1 1 517 515 2 830
#> 2 2013 1 1 533 529 4 850
#> 3 2013 1 1 623 627 -4 933
#> 4 2013 1 1 728 732 -4 1041
#> 5 2013 1 1 739 739 0 1104
#> 6 2013 1 1 908 908 0 1228
#> # ... with 9,307 more rows, and 12 more variables: sched_arr_time <int>,
#> # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>,
#> # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
#> # minute <dbl>, time_hour <dttm>
```
*Were operated by United, American, or Delta* The variable `carrier` identifies the airline, but it uses two-character carrier codes. However, we can look the codes up in the `airlines`
dataset.
```r
airlines
#> # A tibble: 16 × 2
#> carrier name
#> <chr> <chr>
#> 1 9E Endeavor Air Inc.
#> 2 AA American Airlines Inc.
#> 3 AS Alaska Airlines Inc.
#> 4 B6 JetBlue Airways
#> 5 DL Delta Air Lines Inc.
#> 6 EV ExpressJet Airlines Inc.
#> # ... with 10 more rows
```
Since there are only 16 rows, it's not even worth filtering.
Delta is `DL`, American is `AA`, and United is `UA`:
```r
filter(flights, carrier %in% c("AA", "DL", "UA"))
#> # A tibble: 139,504 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time
#> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 2013 1 1 517 515 2 830
#> 2 2013 1 1 533 529 4 850
#> 3 2013 1 1 542 540 2 923
#> 4 2013 1 1 554 600 -6 812
#> 5 2013 1 1 554 558 -4 740
#> 6 2013 1 1 558 600 -2 753
#> # ... with 1.395e+05 more rows, and 12 more variables:
#> # sched_arr_time <int>, arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>,
#> # distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
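An alternative that avoids hard-coding the carrier codes is to join on `airlines` and filter by name (a sketch, not part of the original solution; `str_detect()` comes from **stringr**, which the tidyverse attaches):

```r
flights %>%
  left_join(airlines, by = "carrier") %>%
  filter(str_detect(name, "United|American|Delta"))
```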
*Departed in summer (July, August, and September)* The variable `month` gives the month as a number, so we can use `between()`, which is inclusive on both ends:
```r
filter(flights, between(month, 7, 9))
#> # A tibble: 86,326 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time
#> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 2013 7 1 1 2029 212 236
#> 2 2013 7 1 2 2359 3 344
#> 3 2013 7 1 29 2245 104 151
#> 4 2013 7 1 43 2130 193 322
#> 5 2013 7 1 44 2150 174 300
#> 6 2013 7 1 46 2051 235 304
#> # ... with 8.632e+04 more rows, and 12 more variables:
#> # sched_arr_time <int>, arr_delay <dbl>, carrier <chr>, flight <int>,
#> # tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>,
#> # distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
```
*Arrived more than two hours late, but didn’t leave late*
```r
filter(flights, !is.na(dep_delay), dep_delay <= 0, arr_delay > 120)
#> # A tibble: 29 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time
#> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 2013 1 27 1419 1420 -1 1754
#> 2 2013 10 7 1350 1350 0 1736
#> 3 2013 10 7 1357 1359 -2 1858
#> 4 2013 10 16 657 700 -3 1258
#> 5 2013 11 1 658 700 -2 1329
#> 6 2013 3 18 1844 1847 -3 39
#> # ... with 23 more rows, and 12 more variables: sched_arr_time <int>,
#> # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>,
#> # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
#> # minute <dbl>, time_hour <dttm>
```
*Were delayed by at least an hour, but made up over 30 minutes in flight*
```r
filter(flights, !is.na(dep_delay), dep_delay >= 60, dep_delay-arr_delay > 30)
#> # A tibble: 1,844 x 19
#> year month day dep_t… sche… dep_… arr_… sche… arr_… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 1 1 2205 1720 285 46 2040 246 AA 1999 N5DN…
#> 2 2013 1 1 2326 2130 116 131 18 73.0 B6 199 N594…
#> 3 2013 1 3 1503 1221 162 1803 1555 128 UA 551 N835…
#> 4 2013 1 3 1839 1700 99.0 2056 1950 66.0 AA 575 N631…
#> 5 2013 1 3 1850 1745 65.0 2148 2120 28.0 AA 177 N332…
#> 6 2013 1 3 1941 1759 102 2246 2139 67.0 UA 979 N402…
#> # ... with 1,838 more rows, and 7 more variables: origin <chr>, dest
#> # <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
#> # time_hour <dttm>
```
*Departed between midnight and 6am (inclusive)*.
```r
filter(flights, dep_time <=600 | dep_time == 2400)
#> # A tibble: 9,373 x 19
#> year month day dep_t… sche… dep_… arr_… sche… arr_… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 1 1 517 515 2.00 830 819 11.0 UA 1545 N142…
#> 2 2013 1 1 533 529 4.00 850 830 20.0 UA 1714 N242…
#> 3 2013 1 1 542 540 2.00 923 850 33.0 AA 1141 N619…
#> 4 2013 1 1 544 545 -1.00 1004 1022 -18.0 B6 725 N804…
#> 5 2013 1 1 554 600 -6.00 812 837 -25.0 DL 461 N668…
#> 6 2013 1 1 554 558 -4.00 740 728 12.0 UA 1696 N394…
#> # ... with 9,367 more rows, and 7 more variables: origin <chr>, dest
#> # <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
#> # time_hour <dttm>
```
or using `between` (see next question)
```r
filter(flights, between(dep_time, 0, 600))
#> # A tibble: 9,344 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time
#> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 2013 1 1 517 515 2 830
#> 2 2013 1 1 533 529 4 850
#> 3 2013 1 1 542 540 2 923
#> 4 2013 1 1 544 545 -1 1004
#> 5 2013 1 1 554 600 -6 812
#> 6 2013 1 1 554 558 -4 740
#> # ... with 9,338 more rows, and 12 more variables: sched_arr_time <int>,
#> # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>,
#> # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
#> # minute <dbl>, time_hour <dttm>
```
2. Another useful **dplyr** filtering helper is `between()`. What does it do? Can you use it to simplify the code needed to answer the previous challenges?
`between(x, left, right)` is equivalent to `x >= left & x <= right`. I already
used it in 1.4.
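As a quick check (my own addition, assuming the `flights` data from nycflights13), the two forms produce identical results:

```r
library(nycflights13)
library(dplyr)

# between(x, left, right) is shorthand for x >= left & x <= right
identical(
  filter(flights, between(month, 7, 9)),
  filter(flights, month >= 7, month <= 9)
)
# should return TRUE
```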
3. How many flights have a missing `dep_time`? What other variables are missing? What might these rows represent?
```r
filter(flights, is.na(dep_time))
#> # A tibble: 8,255 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time
#> <int> <int> <int> <int> <int> <dbl> <int>
#> 1 2013 1 1 NA 1630 NA NA
#> 2 2013 1 1 NA 1935 NA NA
#> 3 2013 1 1 NA 1500 NA NA
#> 4 2013 1 1 NA 600 NA NA
#> 5 2013 1 2 NA 1540 NA NA
#> 6 2013 1 2 NA 1620 NA NA
#> # ... with 8,249 more rows, and 12 more variables: sched_arr_time <int>,
#> # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>,
#> # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
#> # minute <dbl>, time_hour <dttm>
```
Since `arr_time` is also missing, these are canceled flights.
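As a quick sketch (my own addition, assuming a dplyr version that supports the `~` formula shorthand), we can check the proportion of missing values in every column for these rows:

```r
# Proportion of NA values per column, among rows with a missing dep_time
flights %>%
  filter(is.na(dep_time)) %>%
  summarise_all(~ mean(is.na(.)))
```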
4. Why is `NA ^ 0` not missing? Why is `NA | TRUE` not missing? Why is `FALSE & NA` not missing? Can you figure out the general rule? (`NA * 0` is a tricky counterexample!)
`NA ^ 0 == 1` since for all numeric values $x ^ 0 = 1$.
```r
NA ^ 0
#> [1] 1
```
`NA | TRUE` is `TRUE` because it doesn't matter whether the missing value is `TRUE` or `FALSE`; $x \lor \text{TRUE} = \text{TRUE}$ for all values of $x$.
```r
NA | TRUE
#> [1] TRUE
```
Likewise, anything and `FALSE` is always `FALSE`.
```r
NA & FALSE
#> [1] FALSE
```
Because the value of the missing element matters in `NA | FALSE` and `NA & TRUE`, these are missing:
```r
NA | FALSE
#> [1] NA
NA & TRUE
#> [1] NA
```
Since `x * 0 = 0` for all finite $x$, we might expect `NA * 0 = 0`, but that's not the case.
```r
NA * 0
#> [1] NA
```
The reason that `NA * 0` is not equal to `0` is that `x * 0` is undefined (`NaN`) when `x = Inf` or `x = -Inf`.
```r
Inf * 0
#> [1] NaN
-Inf * 0
#> [1] NaN
```
## Arrange
`arrange()` always sorts missing values to the end, regardless of sort direction.
### Exercises
1. How could you use `arrange()` to sort all missing values to the start? (Hint: use `is.na()`).
This sorts by increasing `dep_time`, but with all missing values put first.
```r
arrange(flights, desc(is.na(dep_time)), dep_time)
#> # A tibble: 336,776 x 19
#> year month day dep_t… sche… dep_… arr_… sche… arr_… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 1 1 NA 1630 NA NA 1815 NA EV 4308 N181…
#> 2 2013 1 1 NA 1935 NA NA 2240 NA AA 791 N3EH…
#> 3 2013 1 1 NA 1500 NA NA 1825 NA AA 1925 N3EV…
#> 4 2013 1 1 NA 600 NA NA 901 NA B6 125 N618…
#> 5 2013 1 2 NA 1540 NA NA 1747 NA EV 4352 N105…
#> 6 2013 1 2 NA 1620 NA NA 1746 NA EV 4406 N139…
#> # ... with 3.368e+05 more rows, and 7 more variables: origin <chr>, dest
#> # <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
#> # time_hour <dttm>
```
2. Sort flights to find the most delayed flights. Find the flights that left earliest.
The most delayed flights are found by sorting by `dep_delay` in descending order.
```r
arrange(flights, desc(dep_delay))
#> # A tibble: 336,776 x 19
#> year month day dep_t… sche… dep_… arr_… sche… arr_… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 1 9 641 900 1301 1242 1530 1272 HA 51 N384…
#> 2 2013 6 15 1432 1935 1137 1607 2120 1127 MQ 3535 N504…
#> 3 2013 1 10 1121 1635 1126 1239 1810 1109 MQ 3695 N517…
#> 4 2013 9 20 1139 1845 1014 1457 2210 1007 AA 177 N338…
#> 5 2013 7 22 845 1600 1005 1044 1815 989 MQ 3075 N665…
#> 6 2013 4 10 1100 1900 960 1342 2211 931 DL 2391 N959…
#> # ... with 3.368e+05 more rows, and 7 more variables: origin <chr>, dest
#> # <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
#> # time_hour <dttm>
```
If we sort `dep_delay` in ascending order, we get those that left earliest.
There was a flight that left 43 minutes early.
```r
arrange(flights, dep_delay)
#> # A tibble: 336,776 x 19
#> year month day dep_… sche… dep_… arr_… sche… arr_d… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 12 7 2040 2123 -43.0 40 2352 48.0 B6 97 N592…
#> 2 2013 2 3 2022 2055 -33.0 2240 2338 -58.0 DL 1715 N612…
#> 3 2013 11 10 1408 1440 -32.0 1549 1559 -10.0 EV 5713 N825…
#> 4 2013 1 11 1900 1930 -30.0 2233 2243 -10.0 DL 1435 N934…
#> 5 2013 1 29 1703 1730 -27.0 1947 1957 -10.0 F9 837 N208…
#> 6 2013 8 9 729 755 -26.0 1002 955 7.00 MQ 3478 N711…
#> # ... with 3.368e+05 more rows, and 7 more variables: origin <chr>, dest
#> # <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
#> # time_hour <dttm>
```
3. Sort flights to find the fastest flights.
I assume that by "fastest flights" it means the flights with the minimum air time, so I sort by `air_time`.
The fastest flights are a couple of flights between EWR and BDL with an air time of 20 minutes.
```r
arrange(flights, air_time)
#> # A tibble: 336,776 x 19
#> year month day dep_t… sched_… dep_de… arr_… sched… arr_d… carr… flig…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int>
#> 1 2013 1 16 1355 1315 40.0 1442 1411 31.0 EV 4368
#> 2 2013 4 13 537 527 10.0 622 628 - 6.00 EV 4631
#> 3 2013 12 6 922 851 31.0 1021 954 27.0 EV 4276
#> 4 2013 2 3 2153 2129 24.0 2247 2224 23.0 EV 4619
#> 5 2013 2 5 1303 1315 -12.0 1342 1411 -29.0 EV 4368
#> 6 2013 2 12 2123 2130 - 7.00 2211 2225 -14.0 EV 4619
#> # ... with 3.368e+05 more rows, and 8 more variables: tailnum <chr>,
#> # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
#> # minute <dbl>, time_hour <dttm>
```
4. Which flights traveled the longest? Which traveled the shortest?
I'll assume that "traveled the longest or shortest" refers to distance, rather than air time.
The longest flights are Hawaiian Airlines (HA 51) between JFK and HNL (Honolulu) at 4,983 miles.
```r
arrange(flights, desc(distance))
#> # A tibble: 336,776 x 19
#> year month day dep_t… sched_… dep_de… arr_… sched… arr_d… carr… flig…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int>
#> 1 2013 1 1 857 900 - 3.00 1516 1530 -14.0 HA 51
#> 2 2013 1 2 909 900 9.00 1525 1530 - 5.00 HA 51
#> 3 2013 1 3 914 900 14.0 1504 1530 -26.0 HA 51
#> 4 2013 1 4 900 900 0 1516 1530 -14.0 HA 51
#> 5 2013 1 5 858 900 - 2.00 1519 1530 -11.0 HA 51
#> 6 2013 1 6 1019 900 79.0 1558 1530 28.0 HA 51
#> # ... with 3.368e+05 more rows, and 8 more variables: tailnum <chr>,
#> # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
#> # minute <dbl>, time_hour <dttm>
```
Apart from an EWR to LGA flight that was canceled, the shortest flights are the Envoy Air flights between EWR and PHL at 80 miles.
```r
arrange(flights, distance)
#> # A tibble: 336,776 x 19
#> year month day dep_t… sched… dep_del… arr_… sche… arr_de… carr… flig…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int>
#> 1 2013 7 27 NA 106 NA NA 245 NA US 1632
#> 2 2013 1 3 2127 2129 - 2.00 2222 2224 - 2.00 EV 3833
#> 3 2013 1 4 1240 1200 40.0 1333 1306 27.0 EV 4193
#> 4 2013 1 4 1829 1615 134 1937 1721 136 EV 4502
#> 5 2013 1 4 2128 2129 - 1.00 2218 2224 - 6.00 EV 4645
#> 6 2013 1 5 1155 1200 - 5.00 1241 1306 - 25.0 EV 4193
#> # ... with 3.368e+05 more rows, and 8 more variables: tailnum <chr>,
#> # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
#> # minute <dbl>, time_hour <dttm>
```
## Select
### Exercises
1. Brainstorm as many ways as possible to select `dep_time`, `dep_delay`, `arr_time`, and `arr_delay` from flights.
A few ways include:
```r
select(flights, dep_time, dep_delay, arr_time, arr_delay)
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2.00 830 11.0
#> 2 533 4.00 850 20.0
#> 3 542 2.00 923 33.0
#> 4 544 -1.00 1004 -18.0
#> 5 554 -6.00 812 -25.0
#> 6 554 -4.00 740 12.0
#> # ... with 3.368e+05 more rows
select(flights, starts_with("dep_"), starts_with("arr_"))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2.00 830 11.0
#> 2 533 4.00 850 20.0
#> 3 542 2.00 923 33.0
#> 4 544 -1.00 1004 -18.0
#> 5 554 -6.00 812 -25.0
#> 6 554 -4.00 740 12.0
#> # ... with 3.368e+05 more rows
select(flights, matches("^(dep|arr)_(time|delay)$"))
#> # A tibble: 336,776 x 4
#> dep_time dep_delay arr_time arr_delay
#> <int> <dbl> <int> <dbl>
#> 1 517 2.00 830 11.0
#> 2 533 4.00 850 20.0
#> 3 542 2.00 923 33.0
#> 4 544 -1.00 1004 -18.0
#> 5 554 -6.00 812 -25.0
#> 6 554 -4.00 740 12.0
#> # ... with 3.368e+05 more rows
```
Using `ends_with()` doesn't work well, since it would return both `sched_arr_time` and `sched_dep_time`.
2. What happens if you include the name of a variable multiple times in a `select()` call?
It ignores the duplicates, and that variable is only included once. No error, warning, or message is emitted.
```r
select(flights, year, month, day, year, year)
#> # A tibble: 336,776 x 3
#> year month day
#> <int> <int> <int>
#> 1 2013 1 1
#> 2 2013 1 1
#> 3 2013 1 1
#> 4 2013 1 1
#> 5 2013 1 1
#> 6 2013 1 1
#> # ... with 3.368e+05 more rows
```
3. What does the `one_of()` function do? Why might it be helpful in conjunction with this vector?
The `one_of` vector allows you to select variables with a character vector rather than as unquoted variable names.
It's useful because then you can easily pass vectors to `select()`.
```r
vars <- c("year", "month", "day", "dep_delay", "arr_delay")
select(flights, one_of(vars))
#> # A tibble: 336,776 x 5
#> year month day dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 2.00 11.0
#> 2 2013 1 1 4.00 20.0
#> 3 2013 1 1 2.00 33.0
#> 4 2013 1 1 -1.00 -18.0
#> 5 2013 1 1 -6.00 -25.0
#> 6 2013 1 1 -4.00 12.0
#> # ... with 3.368e+05 more rows
```
4. Does the result of running the following code surprise you? How do the select helpers deal with case by default? How can you change that default?
```r
select(flights, contains("TIME"))
#> # A tibble: 336,776 x 6
#> dep_time sched_dep_time arr_time sched_arr_… air_ti… time_hour
#> <int> <int> <int> <int> <dbl> <dttm>
#> 1 517 515 830 819 227 2013-01-01 05:00:00
#> 2 533 529 850 830 227 2013-01-01 05:00:00
#> 3 542 540 923 850 160 2013-01-01 05:00:00
#> 4 544 545 1004 1022 183 2013-01-01 05:00:00
#> 5 554 600 812 837 116 2013-01-01 06:00:00
#> 6 554 558 740 728 150 2013-01-01 05:00:00
#> # ... with 3.368e+05 more rows
```
The default behavior for `contains()` is to ignore case.
Yes, it surprises me.
Upon reflection, I realized that this is likely the default behavior because `dplyr` is designed to deal with a variety of data backends, and some database engines don't differentiate case.
To change the behavior add the argument `ignore.case = FALSE`. Now no variables are selected.
```r
select(flights, contains("TIME", ignore.case = FALSE))
#> # A tibble: 336,776 x 0
```
## Mutate
### Exercises
1. Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they’re not really continuous numbers. Convert them to a more convenient representation of number of minutes since midnight.
To get the departure times in the number of minutes, (integer) divide `dep_time` by 100 to get the hours since midnight and multiply by 60 and add the remainder of `dep_time` divided by 100.
```r
mutate(flights,
dep_time_mins = dep_time %/% 100 * 60 + dep_time %% 100,
sched_dep_time_mins = sched_dep_time %/% 100 * 60 + sched_dep_time %% 100) %>%
select(dep_time, dep_time_mins, sched_dep_time, sched_dep_time_mins)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # ... with 3.368e+05 more rows
```
This would be more cleanly done by first defining a function and reusing that:
```r
time2mins <- function(x) {
x %/% 100 * 60 + x %% 100
}
mutate(flights,
dep_time_mins = time2mins(dep_time),
sched_dep_time_mins = time2mins(sched_dep_time)) %>%
select(dep_time, dep_time_mins, sched_dep_time, sched_dep_time_mins)
#> # A tibble: 336,776 x 4
#> dep_time dep_time_mins sched_dep_time sched_dep_time_mins
#> <int> <dbl> <int> <dbl>
#> 1 517 317 515 315
#> 2 533 333 529 329
#> 3 542 342 540 340
#> 4 544 344 545 345
#> 5 554 354 600 360
#> 6 554 354 558 358
#> # ... with 3.368e+05 more rows
```
2. Compare `air_time` with `arr_time - dep_time`. What do you expect to see? What do you see? What do you need to do to fix it?
Since `arr_time` and `dep_time` may be in different time zones, the `air_time` doesn't equal the difference.
We would need to account for time-zones in these calculations.
```r
mutate(flights,
air_time2 = arr_time - dep_time,
air_time_diff = air_time2 - air_time) %>%
filter(air_time_diff != 0) %>%
select(air_time, air_time2, dep_time, arr_time, dest)
#> # A tibble: 326,128 x 5
#> air_time air_time2 dep_time arr_time dest
#> <dbl> <int> <int> <int> <chr>
#> 1 227 313 517 830 IAH
#> 2 227 317 533 850 IAH
#> 3 160 381 542 923 MIA
#> 4 183 460 544 1004 BQN
#> 5 116 258 554 812 ATL
#> 6 150 186 554 740 ORD
#> # ... with 3.261e+05 more rows
```
3. Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you expect those three numbers to be related?
I'd expect `dep_time`, `sched_dep_time`, and `dep_delay` to be related so that `dep_time - sched_dep_time = dep_delay`.
```r
mutate(flights,
dep_delay2 = dep_time - sched_dep_time) %>%
filter(dep_delay2 != dep_delay) %>%
select(dep_time, sched_dep_time, dep_delay, dep_delay2)
#> # A tibble: 99,777 x 4
#> dep_time sched_dep_time dep_delay dep_delay2
#> <int> <int> <dbl> <int>
#> 1 554 600 -6.00 -46
#> 2 555 600 -5.00 -45
#> 3 557 600 -3.00 -43
#> 4 557 600 -3.00 -43
#> 5 558 600 -2.00 -42
#> 6 558 600 -2.00 -42
#> # ... with 9.977e+04 more rows
```
Oops, I forgot to convert to minutes. I'll reuse the `time2mins` function I wrote earlier.
```r
mutate(flights,
dep_delay2 = time2mins(dep_time) - time2mins(sched_dep_time)) %>%
filter(dep_delay2 != dep_delay) %>%
select(dep_time, sched_dep_time, dep_delay, dep_delay2)
#> # A tibble: 1,207 x 4
#> dep_time sched_dep_time dep_delay dep_delay2
#> <int> <int> <dbl> <dbl>
#> 1 848 1835 853 - 587
#> 2 42 2359 43.0 -1397
#> 3 126 2250 156 -1284
#> 4 32 2359 33.0 -1407
#> 5 50 2145 185 -1255
#> 6 235 2359 156 -1284
#> # ... with 1,201 more rows
```
Well, that solved most of the problems, but these two numbers don't match because we aren't accounting for flights where the departure time is the next day from the scheduled departure time.
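As a sketch (my own addition, not part of the original solution), we can wrap those overnight departures around a 24-hour day. This assumes no flight truly departs more than an hour early, which holds here since the earliest departure is 43 minutes early:

```r
mutate(flights,
       dep_delay2 = time2mins(dep_time) - time2mins(sched_dep_time),
       # a difference below -60 minutes indicates a departure after midnight,
       # so wrap it around a 1440-minute day
       dep_delay2 = if_else(dep_delay2 < -60, dep_delay2 + 1440, dep_delay2)) %>%
  filter(dep_delay2 != dep_delay) %>%
  select(dep_time, sched_dep_time, dep_delay, dep_delay2)
```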
4. Find the 10 most delayed flights using a ranking function. How do you want to handle ties? Carefully read the documentation for `min_rank()`.
I'd want to handle ties by taking the minimum of tied values. If three flights have the same value and are the most delayed, we would say they are tied for first, not tied for third or second.
```r
mutate(flights,
dep_delay_rank = min_rank(-dep_delay)) %>%
arrange(dep_delay_rank) %>%
filter(dep_delay_rank <= 10)
#> # A tibble: 10 x 20
#> year month day dep_t… sche… dep_… arr_… sche… arr_… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 1 9 641 900 1301 1242 1530 1272 HA 51 N384…
#> 2 2013 6 15 1432 1935 1137 1607 2120 1127 MQ 3535 N504…
#> 3 2013 1 10 1121 1635 1126 1239 1810 1109 MQ 3695 N517…
#> 4 2013 9 20 1139 1845 1014 1457 2210 1007 AA 177 N338…
#> 5 2013 7 22 845 1600 1005 1044 1815 989 MQ 3075 N665…
#> 6 2013 4 10 1100 1900 960 1342 2211 931 DL 2391 N959…
#> # ... with 4 more rows, and 8 more variables: origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour
#> # <dttm>, dep_delay_rank <int>
```
5. What does `1:3 + 1:10` return? Why?
It returns `c(1 + 1, 2 + 2, 3 + 3, 1 + 4, 2 + 5, 3 + 6, 1 + 7, 2 + 8, 3 + 9, 1 + 10)`.
When adding two vectors, R recycles the shorter vector's values to produce vectors of the same length.
We get a warning since the shorter vector's length is not a multiple of the longer one's (this often, but not necessarily, means we made an error somewhere).
```r
1:3 + 1:10
#> Warning in 1:3 + 1:10: longer object length is not a multiple of shorter
#> object length
#> [1] 2 4 6 5 7 9 8 10 12 11
```
6. What trigonometric functions does R provide?
All the classics: `cos`, `sin`, `tan`, `acos`, `asin`, `atan`, plus a few others driven by numerical or computational issues, such as `atan2()`.
## Grouped summaries with `summarise()`
### Exercises
1. Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights. Consider the following scenarios:
- A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.
- A flight is always 10 minutes late.
- A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.
- 99% of the time a flight is on time. 1% of the time it’s 2 hours late.
Which is more important: arrival delay or departure delay?
Arrival delay is more important.
Arriving early is nice, but not nearly as good as arriving late is bad.
Variation is worse than consistency; if I know the plane will always arrive 10 minutes late, then I can plan for it arriving as if the actual arrival time was 10 minutes later than the scheduled arrival time.
So I'd try something that calculates the expected time of the flight, and then aggregates over any delays from that time. I would ignore any early arrival times.
A better ranking would also consider cancellations, and need a way to convert them to a delay time (perhaps using the arrival time of the next flight to the same destination).
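As a sketch (my own illustration, not part of the original answer), several of these delay characteristics could be computed per flight group like so:

```r
flights %>%
  group_by(carrier, flight) %>%
  summarise(
    n = n(),
    median_arr_delay = median(arr_delay, na.rm = TRUE),
    p99_arr_delay = quantile(arr_delay, 0.99, na.rm = TRUE),
    prop_late_15 = mean(arr_delay > 15, na.rm = TRUE),
    prop_canceled = mean(is.na(arr_delay))
  )
```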
2. Come up with another approach that will give you the same output as `not_canceled %>% count(dest)` and `not_canceled %>% count(tailnum, wt = distance)` (without using `count()`).
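No answer was written here originally; a sketch using `group_by()` plus `summarise()` (assuming `not_canceled` is defined as in the chapter):

```r
not_canceled <- flights %>%
  filter(!is.na(dep_delay), !is.na(arr_delay))

# Same result as not_canceled %>% count(dest)
not_canceled %>%
  group_by(dest) %>%
  summarise(n = n())

# Same result as not_canceled %>% count(tailnum, wt = distance)
not_canceled %>%
  group_by(tailnum) %>%
  summarise(n = sum(distance))
```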
3. Our definition of canceled flights `(is.na(dep_delay) | is.na(arr_delay))` is slightly suboptimal. Why? Which is the most important column?
If a flight doesn't depart, then it won't arrive. A flight can also depart and not arrive if it crashes; I'm not sure how this data would handle flights that are redirected and land at other airports for whatever reason.
The more important column is `arr_delay` so we could just use that.
```r
filter(flights, !is.na(dep_delay), is.na(arr_delay)) %>%
select(dep_time, arr_time, sched_arr_time, dep_delay, arr_delay)
#> # A tibble: 1,175 x 5
#> dep_time arr_time sched_arr_time dep_delay arr_delay
#> <int> <int> <int> <dbl> <dbl>
#> 1 1525 1934 1805 - 5.00 NA
#> 2 1528 2002 1647 29.0 NA
#> 3 1740 2158 2020 - 5.00 NA
#> 4 1807 2251 2103 29.0 NA
#> 5 1939 29 2151 59.0 NA
#> 6 1952 2358 2207 22.0 NA
#> # ... with 1,169 more rows
```
Okay, I'm not sure what's going on in this data. `dep_time` can be non-missing and `arr_delay` missing but `arr_time` not missing.
They may be combining different flights?
4. Look at the number of canceled flights per day. Is there a pattern? Is the proportion of canceled flights related to the average delay?
```r
canceled_delayed <-
flights %>%
mutate(canceled = (is.na(arr_delay) | is.na(dep_delay))) %>%
group_by(year, month, day) %>%
summarise(prop_canceled = mean(canceled),
avg_dep_delay = mean(dep_delay, na.rm = TRUE))
ggplot(canceled_delayed, aes(x = avg_dep_delay, prop_canceled)) +
geom_point() +
geom_smooth()
#> `geom_smooth()` using method = 'loess'
```
<img src="transform_files/figure-html/unnamed-chunk-39-1.png" width="70%" style="display: block; margin: auto;" />
5. Which carrier has the worst delays? Challenge: can you disentangle the effects of bad airports vs. bad carriers? Why/why not? (Hint: think about `flights %>% group_by(carrier, dest) %>% summarise(n())`)
```r
flights %>%
group_by(carrier) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
arrange(desc(arr_delay))
#> # A tibble: 16 x 2
#> carrier arr_delay
#> <chr> <dbl>
#> 1 F9 21.9
#> 2 FL 20.1
#> 3 EV 15.8
#> 4 YV 15.6
#> 5 OO 11.9
#> 6 MQ 10.8
#> # ... with 10 more rows
```
```r
filter(airlines, carrier == "F9")
#> # A tibble: 1 x 2
#> carrier name
#> <chr> <chr>
#> 1 F9 Frontier Airlines Inc.
```
Frontier Airlines (F9) has the worst delays.
You can get part of the way to disentangling the effects of airports vs. carriers by
comparing each flight's delay to the average delay of its destination airport.
However, you'd really want to compare it to the average delay of the destination airport, *after* removing other flights from the same airline.
FiveThirtyEight conducted a [similar analysis](http://fivethirtyeight.com/features/the-best-and-worst-airlines-airports-and-flights-summer-2015-update/).
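A sketch of that idea (my own illustration): compare each flight's delay to the mean delay of its destination, then average the excess by carrier.

```r
flights %>%
  filter(!is.na(arr_delay)) %>%
  group_by(dest) %>%
  mutate(dest_mean_delay = mean(arr_delay)) %>%
  ungroup() %>%
  group_by(carrier) %>%
  summarise(excess_delay = mean(arr_delay - dest_mean_delay)) %>%
  arrange(desc(excess_delay))
```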
6. For each plane, count the number of flights before the first delay of greater than 1 hour.
I think this requires grouped mutate (but I may be wrong):
```r
flights %>%
arrange(tailnum, year, month, day) %>%
group_by(tailnum) %>%
mutate(delay_gt1hr = dep_delay > 60) %>%
mutate(before_delay = cumsum(delay_gt1hr)) %>%
filter(before_delay < 1) %>%
count(sort = TRUE)
#> # A tibble: 3,755 x 2
#> # Groups: tailnum [3,755]
#> tailnum n
#> <chr> <int>
#> 1 N954UW 206
#> 2 N952UW 163
#> 3 N957UW 142
#> 4 N5FAAA 117
#> 5 N38727 99
#> 6 N3742C 98
#> # ... with 3,749 more rows
```
7. What does the sort argument to `count()` do? When might you use it?
The sort argument to `count` sorts the results in order of `n`.
You could use this anytime you would do `count` followed by `arrange`.
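For example (my own illustration), these two pipelines should produce the same result:

```r
flights %>%
  count(dest, sort = TRUE)

flights %>%
  count(dest) %>%
  arrange(desc(n))
```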
## Grouped mutates and filters
### Exercises
1. Refer back to the table of useful mutate and filtering functions. Describe how each operation changes when you combine it with grouping.
They operate within each group rather than over the entire data frame. E.g. `mean` will calculate the mean within each group.
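A minimal sketch of the difference (my own illustration):

```r
# Ungrouped: one overall mean, recycled across all rows
flights %>%
  mutate(overall_mean = mean(arr_delay, na.rm = TRUE))

# Grouped: the mean is computed within each destination
flights %>%
  group_by(dest) %>%
  mutate(dest_mean = mean(arr_delay, na.rm = TRUE))
```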
2. Which plane (`tailnum`) has the worst on-time record?
```r
flights %>%
group_by(tailnum) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
ungroup() %>%
filter(rank(desc(arr_delay)) <= 1)
#> # A tibble: 1 x 2
#> tailnum arr_delay
#> <chr> <dbl>
#> 1 N844MH 320
```
3. What time of day should you fly if you want to avoid delays as much as possible?
Let's group by hour. The earlier the flight is scheduled, the lower its expected delay. This is intuitive, as delays from early in the morning propagate and accumulate throughout the day.
```r
flights %>%
group_by(hour) %>%
summarise(arr_delay = mean(arr_delay, na.rm = TRUE)) %>%
ungroup() %>%
arrange(arr_delay)
#> # A tibble: 20 x 2
#> hour arr_delay
#> <dbl> <dbl>
#> 1 7.00 -5.30
#> 2 5.00 -4.80
#> 3 6.00 -3.38
#> 4 9.00 -1.45
#> 5 8.00 -1.11
#> 6 10.0 0.954
#> # ... with 14 more rows
```
4. For each destination, compute the total minutes of delay. For each flight, compute the proportion of the total delay for its destination.
```r
flights %>%
filter(!is.na(arr_delay), arr_delay > 0) %>%
group_by(dest) %>%
mutate(total_delay = sum(arr_delay),
prop_delay = arr_delay / sum(arr_delay))
#> # A tibble: 133,004 x 21
#> # Groups: dest [103]
#> year month day dep_t… sche… dep_… arr_… sche… arr_… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 1 1 517 515 2.00 830 819 11.0 UA 1545 N142…
#> 2 2013 1 1 533 529 4.00 850 830 20.0 UA 1714 N242…
#> 3 2013 1 1 542 540 2.00 923 850 33.0 AA 1141 N619…
#> 4 2013 1 1 554 558 -4.00 740 728 12.0 UA 1696 N394…
#> 5 2013 1 1 555 600 -5.00 913 854 19.0 B6 507 N516…
#> 6 2013 1 1 558 600 -2.00 753 745 8.00 AA 301 N3AL…
#> # ... with 1.33e+05 more rows, and 9 more variables: origin <chr>, dest
#> # <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
#> # time_hour <dttm>, total_delay <dbl>, prop_delay <dbl>
```
Alternatively, consider the delay as relative to the *minimum* delay for any flight to that destination. Now all non-canceled flights have a proportion.
```r
flights %>%
filter(!is.na(arr_delay), arr_delay > 0) %>%
group_by(dest) %>%
mutate(total_delay = sum(arr_delay - min(arr_delay)),
prop_delay = arr_delay / sum(arr_delay))
#> # A tibble: 133,004 x 21
#> # Groups: dest [103]
#> year month day dep_t… sche… dep_… arr_… sche… arr_… carr… flig… tail…
#> <int> <int> <int> <int> <int> <dbl> <int> <int> <dbl> <chr> <int> <chr>
#> 1 2013 1 1 517 515 2.00 830 819 11.0 UA 1545 N142…
#> 2 2013 1 1 533 529 4.00 850 830 20.0 UA 1714 N242…
#> 3 2013 1 1 542 540 2.00 923 850 33.0 AA 1141 N619…
#> 4 2013 1 1 554 558 -4.00 740 728 12.0 UA 1696 N394…
#> 5 2013 1 1 555 600 -5.00 913 854 19.0 B6 507 N516…
#> 6 2013 1 1 558 600 -2.00 753 745 8.00 AA 301 N3AL…
#> # ... with 1.33e+05 more rows, and 9 more variables: origin <chr>, dest
#> # <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>,
#> # time_hour <dttm>, total_delay <dbl>, prop_delay <dbl>
```
5. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave. Using `lag()` explore how the delay of a flight is related to the delay of the immediately preceding flight.
We want to group by day to avoid taking the lag from the previous day.
Also, I want to use departure delay, since this mechanism is relevant for departures.
Also, I remove missing values both before and after calculating the lag delay.
However, it would be interesting to ask the probability or average delay after a cancellation.
```r
flights %>%
group_by(year, month, day) %>%
filter(!is.na(dep_delay)) %>%
mutate(lag_delay = lag(dep_delay)) %>%
filter(!is.na(lag_delay)) %>%
ggplot(aes(x = dep_delay, y = lag_delay)) +
geom_point() +
geom_smooth()
#> `geom_smooth()` using method = 'gam'
```
<img src="transform_files/figure-html/unnamed-chunk-47-1.png" width="70%" style="display: block; margin: auto;" />
6. Look at each destination. Can you find flights that are suspiciously fast? (i.e. flights that represent a potential data entry error). Compute the air time a flight relative to the shortest flight to that destination. Which flights were most delayed in the air?
The shorter BOS and PHL flights, at around 20 minutes against the more typical 30+ minutes, seem plausible, though entries off by just a few minutes can easily create large relative changes.
I assume that departure time has a standardized definition, but I'm not sure; if there is some discretion, that could create errors that are small in absolute time, but large in relative time for small flights.
The ATL, GSP, and BNA flights look suspicious as they are almost half the time of longer flights.
```r
flights %>%
filter(!is.na(air_time)) %>%
group_by(dest) %>%
mutate(med_time = median(air_time),
fast = (air_time - med_time) / med_time) %>%
arrange(fast) %>%
select(air_time, med_time, fast, dep_time, sched_dep_time, arr_time, sched_arr_time) %>%
head(15)
#> Adding missing grouping variables: `dest`
#> # A tibble: 15 x 8
#> # Groups: dest [9]
#> dest air_time med_time fast dep_time sched_dep_time arr_time sched_a…
#> <chr> <dbl> <dbl> <dbl> <int> <int> <int> <int>
#> 1 BOS 21.0 38.0 -0.447 1450 1500 1547 1608
#> 2 ATL 65.0 112 -0.420 1709 1700 1923 1937
#> 3 GSP 55.0 92.0 -0.402 2040 2025 2225 2226
#> 4 BOS 23.0 38.0 -0.395 1954 2000 2131 2114
#> 5 BNA 70.0 113 -0.381 1914 1910 2045 2043
#> 6 MSP 93.0 149 -0.376 1558 1513 1745 1719
#> # ... with 9 more rows
```
I could also try a z-score. Though the standard deviation and mean will be affected by large delays.
```r
flights %>%
filter(!is.na(air_time)) %>%
group_by(dest) %>%
mutate(air_time_mean = mean(air_time),
air_time_sd = sd(air_time),
z_score = (air_time - air_time_mean) / air_time_sd) %>%
arrange(z_score) %>%
select(z_score, air_time_mean, air_time_sd, air_time, dep_time, sched_dep_time, arr_time, sched_arr_time)
#> Adding missing grouping variables: `dest`
#> # A tibble: 327,346 x 9
#> # Groups: dest [104]
#> dest z_score air_time_mean air_time_sd air_time dep_… sche… arr_… sche…
#> <chr> <dbl> <dbl> <dbl> <dbl> <int> <int> <int> <int>
#> 1 MSP -4.90 151 11.8 93.0 1558 1513 1745 1719
#> 2 ATL -4.88 113 9.81 65.0 1709 1700 1923 1937
#> 3 GSP -4.72 93.4 8.13 55.0 2040 2025 2225 2226
#> 4 BNA -4.05 114 11.0 70.0 1914 1910 2045 2043
#> 5 CVG -3.98 96.0 8.52 62.0 1359 1343 1523 1545
#> 6 BOS -3.63 39.0 4.95 21.0 1450 1500 1547 1608
#> # ... with 3.273e+05 more rows
```
The flights most delayed in the air, measured relative to the shortest flight to the same destination:
```r
flights %>%
filter(!is.na(air_time)) %>%
group_by(dest) %>%
mutate(air_time_diff = air_time - min(air_time)) %>%
arrange(desc(air_time_diff)) %>%
select(dest, year, month, day, carrier, flight, air_time_diff, air_time, dep_time, arr_time) %>%
head()
#> # A tibble: 6 x 10
#> # Groups: dest [5]
#> dest year month day carrier flight air_time_diff air_t… dep_t… arr_…
#> <chr> <int> <int> <int> <chr> <int> <dbl> <dbl> <int> <int>
#> 1 SFO 2013 7 28 DL 841 195 490 1727 2242
#> 2 LAX 2013 11 22 DL 426 165 440 1812 2302
#> 3 EGE 2013 1 28 AA 575 163 382 1806 2253
#> 4 DEN 2013 9 10 UA 745 149 331 1513 1914
#> 5 LAX 2013 7 10 DL 17 147 422 1814 2240
#> 6 LAS 2013 11 22 UA 587 143 399 2142 143
```
7. Find all destinations that are flown by at least two carriers. Use that information to rank the carriers.
The carrier that flies to the most locations is ExpressJet Airlines (EV).
ExpressJet is a regional airline and partner for major airlines, so it's one of those that flies small planes to nearby airports.
```r
flights %>%
group_by(dest, carrier) %>%
count(carrier) %>%
group_by(carrier) %>%
count(sort = TRUE)
#> # A tibble: 16 x 2
#> # Groups: carrier [16]
#> carrier nn
#> <chr> <int>
#> 1 EV 61
#> 2 9E 49
#> 3 UA 47
#> 4 B6 42
#> 5 DL 40
#> 6 MQ 20
#> # ... with 10 more rows
```
```r
filter(airlines, carrier == "EV")
#> # A tibble: 1 x 2
#> carrier name
#> <chr> <chr>
#> 1 EV ExpressJet Airlines Inc.
```
| 42.645161 | 287 | 0.562761 | eng_Latn | 0.900297 |
2261760a699e25338f4de731a65e78ca0adcfe89 | 1,135 | md | Markdown | README.md | Fachstelle-Geoinformation-Winterthur/acad_usage_measurement_service | bc37271cead4df56b5031b1558ba63b033bcd6ac | ["MIT"] | null | null | null | README.md | Fachstelle-Geoinformation-Winterthur/acad_usage_measurement_service | bc37271cead4df56b5031b1558ba63b033bcd6ac | ["MIT"] | null | null | null | README.md | Fachstelle-Geoinformation-Winterthur/acad_usage_measurement_service | bc37271cead4df56b5031b1558ba63b033bcd6ac | ["MIT"] | null | null | null |

# Usage Measurement Service for AutoCAD
A web service that makes it possible for admins to centrally measure the usage times of AutoCAD installations in a corporate network.
## Motivation
After the switch of Autodesk products to named user licensing, it is possible that the usage statistics offered by Autodesk are no longer sufficient for the needs of some organizations. The new usage statistics are only available in the temporal resolution of a whole day. This means that it is only measured whether the Autodesk product was opened once on a certain day.
However, in some cases measurements with a higher temporal resolution are required. For example, this is the case when the departments of an organization want to share the license costs fairly among themselves based on the actual usage. Therefore a new solution is needed.
## How to operate
This web service works in tandem with a plug-in that is also provided as open-source in another repository, which is:
https://github.com/Fachstelle-Geoinformation-Winterthur/acad_usage_measurement_client
You need to install and run the plug-in, in order to use this web service.
| 75.666667 | 371 | 0.813216 | eng_Latn | 0.999827 |
226222406e032e73fa784f899b569d997417c6f1 | 2,448 | md | Markdown | CHANGELOG.md | Medallyon/oversmash | 63bb66315e8af3b2eaff033f8e2906b12a5732a1 | ["MIT"] | 43 | 2017-03-14T10:09:39.000Z | 2022-03-27T11:58:16.000Z | CHANGELOG.md | Medallyon/oversmash | 63bb66315e8af3b2eaff033f8e2906b12a5732a1 | ["MIT"] | 8 | 2017-06-09T09:39:00.000Z | 2021-09-21T03:46:44.000Z | CHANGELOG.md | Medallyon/oversmash | 63bb66315e8af3b2eaff033f8e2906b12a5732a1 | ["MIT"] | 11 | 2017-07-02T04:16:51.000Z | 2021-02-28T02:53:26.000Z |

# 1.6.1
- Updates dependencies to resolve security issues, mostly affecting devDependencies (e.g `object-path`)
- Updates tests to be more resilient to cases where the test target account hasn't completed placements for all roles yet
- Replace the unused `displayName` for accounts with `name` and `nameEscaped`, mirroring the top-level properties
- Add `id` under the `accounts`
# 1.6.0
- Fixes a bug where if any requestOptions were set, the entire default object would be replaced instead
of merged, and thus the default `baseUrl` would have to be reset
- Correctly respects `normalizeNames` for `competitiveRank`, `endorsementLevel`, etc (now correctly `snake_cased` when appropriate)
- Achievement names are no longer affected by `normalizeNames`, and always `snake_cased`
- Resolved configuration options are now exposed read-only under the main oversmash object under `options`
```js
const ow = oversmash({ percentsAsInts: true })
console.log(ow.options.percentsAsInts) // => true
```
# 1.5.3
- Fixes a bug where users who had not completed all placements for the season could have incorrect or invalid role rank values
# 1.5.2
- Normalizes name handling on all methods, making the `#` sign mandatory
- Adds `nameEscaped` and `nameEscapedUrl` to method return values as appropriate, with the escaped values used internally
# 1.5.1
- Fixes issue with some hero stat values being `null` incorrectly, due to an error in value normalization in the scraper
- Reshuffled debug logging around. New `DEBUG=` values are `oversmash:*` for all logs, or `oversmash:scraper`, `oversmash:main`, and `oversmash:snapshot` respectively.
# 1.5.0
- Adds support for role queue ranks in `playerStats` method
```json
{
  "stats": {
    "competitiveRank": {
      "support": 1234,
      "tank": 1234,
      "damage": 1234
    }
  }
}
```
- Adds `endorsementLevel` to `playerStats`
- Adds `gamesWon` to `playerStats`
# 1.4.0 & 1.4.1 (May 14th 2019)
1.4.0
- Fixes support for player portraits
- Removes region-related code
- Adds missing fields to player profile
- Fixes errors in player stats scraping
1.4.1
- Fixes inconsistent name handling in oversmash.player
# 1.3.2 (December 29th 2017)
Fixes `.player` to work with changes to the blizzard API, specifically how it no longer
seems to support using a dash `-` in place of the pound `#` sign. URL-encoding the pound
sign as part of the URL appears to resolve it.
| 30.6 | 167 | 0.739379 | eng_Latn | 0.994452 |
2263fe485e95f37c26f42e90c517064dc603a008 | 3,690 | md | Markdown | wdk-ddi-src/content/wdfdevice/ne-wdfdevice-_wdf_dispatch_irp_to_io_queue_flags.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | ["CC-BY-4.0", "MIT"] | null | null | null | wdk-ddi-src/content/wdfdevice/ne-wdfdevice-_wdf_dispatch_irp_to_io_queue_flags.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | ["CC-BY-4.0", "MIT"] | null | null | null | wdk-ddi-src/content/wdfdevice/ne-wdfdevice-_wdf_dispatch_irp_to_io_queue_flags.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | ["CC-BY-4.0", "MIT"] | null | null | null |

---
UID: NE:wdfdevice._WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS
title: "_WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS"
author: windows-driver-content
description: The WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS enumeration type defines flags that the driver can specify when it calls WdfDeviceWdmDispatchIrpToIoQueue.
old-location: wdf\wdf_dispatch_irp_to_io_queue_flags.htm
old-project: wdf
ms.assetid: 6A205F51-990F-4721-B4C7-B96E944D2A54
ms.author: windowsdriverdev
ms.date: 2/26/2018
ms.keywords: WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS, WDF_DISPATCH_IRP_TO_IO_QUEUE_INVOKE_INCALLERCTX_CALLBACK, WDF_DISPATCH_IRP_TO_IO_QUEUE_NO_FLAGS, WDF_DISPATCH_IRP_TO_IO_QUEUE_PREPROCESSED_IRP, WDF_FORWARD_IRP_TO_IO_QUEUE_FLAGS, WDF_FORWARD_IRP_TO_IO_QUEUE_FLAGS enumeration, _WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS, kmdf.wdf_dispatch_irp_to_io_queue_flags, kmdf.wdf_forward_irp_to_io_queue_flags, kmdf.wdf_forward_irp_to_io_queue_options_flags, wdf.wdf_dispatch_irp_to_io_queue_flags, wdfdevice/WDF_DISPATCH_IRP_TO_IO_QUEUE_INVOKE_INCALLERCTX_CALLBACK, wdfdevice/WDF_DISPATCH_IRP_TO_IO_QUEUE_NO_FLAGS, wdfdevice/WDF_DISPATCH_IRP_TO_IO_QUEUE_PREPROCESSED_IRP, wdfdevice/WDF_FORWARD_IRP_TO_IO_QUEUE_FLAGS
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: enum
req.header: wdfdevice.h
req.include-header: Wdf.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver: 1.11
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql: See Remarks section.
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- wdfdevice.h
api_name:
- WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS
product: Windows
targetos: Windows
req.typenames: WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS
req.product: Windows 10 or later.
---
# _WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS enumeration
## -description
<p class="CCE_Message">[Applies to KMDF only]
The <b>WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS</b> enumeration type defines flags that the driver can specify when it calls <a href="..\wdfdevice\nf-wdfdevice-wdfdevicewdmdispatchirptoioqueue.md">WdfDeviceWdmDispatchIrpToIoQueue</a>.
## -syntax
````
typedef enum _WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS {
WDF_DISPATCH_IRP_TO_IO_QUEUE_NO_FLAGS = 0x00000000,
WDF_DISPATCH_IRP_TO_IO_QUEUE_INVOKE_INCALLERCTX_CALLBACK = 0x00000001,
WDF_DISPATCH_IRP_TO_IO_QUEUE_PREPROCESSED_IRP = 0x00000002
} WDF_DISPATCH_IRP_TO_IO_QUEUE_FLAGS;
````
## -enum-fields
### -field WDF_DISPATCH_IRP_TO_IO_QUEUE_NO_FLAGS
No flags are set.
### -field WDF_DISPATCH_IRP_TO_IO_QUEUE_INVOKE_INCALLERCTX_CALLBACK
Specifies that the framework should call the <a href="..\wdfdevice\nc-wdfdevice-evt_wdf_io_in_caller_context.md">EvtIoInCallerContext</a> callback function before inserting the request into the queue.
### -field WDF_DISPATCH_IRP_TO_IO_QUEUE_PREPROCESSED_IRP
Specifies that the IRP was preprocessed by the driver's <a href="..\wdfdevice\nc-wdfdevice-evt_wdfdevice_wdm_irp_preprocess.md">EvtDeviceWdmIrpPreprocess</a> callback function. Accordingly, the framework adjusts the IRP's stack location to the next entry before inserting it into the queue.
## -remarks
For more information about specifying queues for IRPs as they arrive, see <a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/wdf/dispatching-irps-to-i-o-queues">Dispatching IRPs to I/O Queues</a>.
## -see-also
<a href="..\wdfdevice\nc-wdfdevice-evt_wdfdevice_wdm_irp_preprocess.md">EvtDeviceWdmIrpPreprocess</a>
<a href="..\wdfdevice\nf-wdfdevice-wdfdevicewdmdispatchirptoioqueue.md">WdfDeviceWdmDispatchIrpToIoQueue</a>
| 32.368421 | 698 | 0.825203 | yue_Hant | 0.575128 |
2264235835167dfa5dcd139907e0aa0262746b68 | 2,207 | md | Markdown | README.md | insilica/TransferSRNN | 4f2625ec76cbe3ccee7ea4a758c8dc9f0a6c128b | ["MIT"] | 1 | 2019-08-15T10:49:22.000Z | 2019-08-15T10:49:22.000Z | README.md | insilica/TransferSRNN | 4f2625ec76cbe3ccee7ea4a758c8dc9f0a6c128b | ["MIT"] | null | null | null | README.md | insilica/TransferSRNN | 4f2625ec76cbe3ccee7ea4a758c8dc9f0a6c128b | ["MIT"] | null | null | null |

# TransferSRNN
Here is an explanation of the files in this repo.
Discrete Survival Demo.ipynb - Notebook that shows the whole process of building and running survival models. It can be run in a new environment after installing the dependencies in requirements.txt with pip. It takes a couple of hours to run and, in the end, generates a plot similar to lossdiff_boxplot2.svg
lossdiff_boxplot.svg - Plot of difference between loss of respective fold and disease of baseline model and transfer model
binding/baseline_losses.csv - Loss values for each fold and disease for baseline models
binding/transfer_losses.csv - Loss values for each fold and disease for transfer models
The feature importance tables were generated by iterating across all features and randomly shuffling one feature at a time while holding all others constant. The more the loss increases when a particular feature is shuffled, the more important we can assume that feature to be.
binding/global_feature_importance.csv - Feature importance table for global model.
binding/ind_disease_transfer_feature_importance.csv - Feature importance table for each specific disease
binding/shuffled_hallmark_disease_importance.csv - Hallmark importance for specific diseases, generated by shuffling only that set of genes simultaneously while holding all other genes constant (and taking the difference in loss relative to no shuffling at all)
binding/shuffled_hallmark_global_importance.csv - Hallmark importance for global model
requirements.txt - Python dependencies to be installed in new environment
nnet_survival.py - Python file containing survival model functions
genes_pubmedcounts_glioma.csv - Table of gene names and number of PubMed articles regarding the gene. Articles restricted to those listed when searching for glioma MeSH term.
genes_pubmedcounts_prostate.csv - Table of gene names and number of PubMed articles regarding the gene. Articles restricted to those listed when searching for prostatic neoplasm MeSH term.
To run Discrete Survival Demo.ipynb, run the following command to install dependencies in your environment:
`pip install -r requirements.txt`
Next, in the 2nd cell of Discrete Survival Demo.ipynb, cd to the TransferSRNN directory that you cloned with git.
Now you can run all the cells.
| 55.175 | 268 | 0.827821 | eng_Latn | 0.995765 |
226604b118fa43b62439d5240f5e7c5dd6a4aef0 | 38,186 | md | Markdown | articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | ["CC-BY-4.0", "MIT"] | 39 | 2017-08-28T07:46:06.000Z | 2022-01-26T12:48:02.000Z | articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | ["CC-BY-4.0", "MIT"] | 562 | 2017-06-27T13:50:17.000Z | 2021-05-17T23:42:07.000Z | articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | ["CC-BY-4.0", "MIT"] | 113 | 2017-07-11T19:54:32.000Z | 2022-01-26T21:20:25.000Z |

---
title: 'How-to: Develop Custom Commands applications - Speech service'
titleSuffix: Azure Cognitive Services
description: Learn how to develop and customize Custom Commands applications. These voice-command apps are best suited for task completion or command-and-control scenarios.
services: cognitive-services
author: trevorbye
manager: nitinme
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: conceptual
ms.date: 12/15/2020
ms.author: trbye
ms.openlocfilehash: ddf36530e52703ab1033b8e2e787b42b6dc60332
ms.sourcegitcommit: b0557848d0ad9b74bf293217862525d08fe0fc1d
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/07/2021
ms.locfileid: "106553270"
---
# <a name="develop-custom-commands-applications"></a>Develop Custom Commands applications

In this how-to article, you learn how to develop and configure Custom Commands applications. The Custom Commands feature helps you build rich voice-command apps that are optimized for voice-first interaction experiences. The feature is best suited to task completion or command-and-control scenarios. It's particularly well suited for Internet of Things (IoT) devices and for ambient and headless devices.

In this article, you create an application that can turn a TV on and off, set the temperature, and set an alarm. After you create these basic commands, you'll learn about the following options for customizing commands:

* Adding parameters to commands
* Adding configurations to command parameters
* Building interaction rules
* Creating language-generation templates for speech responses
* Using Custom Voice tools
## <a name="create-an-application-by-using-simple-commands"></a>Create an application by using simple commands

Start by creating an empty Custom Commands application. For details, see the [quickstart](quickstart-custom-commands-application.md). In this application, instead of importing a project, you create a blank project.

1. In the **Name** box, enter the project name *Smart-Room-Lite* (or another name of your choice).
1. In the **Language** list, select **English (United States)**.
1. Select or create a LUIS resource.
### <a name="update-luis-resources-optional"></a>Update LUIS resources (optional)

You can update the authoring resource that you selected in the **New project** window. You can also set a prediction resource.

A prediction resource is used for recognition when your Custom Commands application is published. You don't need a prediction resource during the development and testing phases.
### <a name="add-a-turnon-command"></a>Add a TurnOn command

Add a command to the empty Smart-Room-Lite Custom Commands application you created. The command will process an utterance, `Turn on the tv`. It will respond with the message `Ok, turning the tv on`.

1. Create a new command by selecting **New command** at the top of the left pane. The **New command** window appears.
1. For the **Name** field, provide the value `TurnOn`.
1. Select **Create**.

The middle pane lists the properties of the command.

The following table explains the command's configuration properties. For more information, see [Custom Commands concepts and definitions](./custom-commands-references.md).

| Configuration     | Description                                                                                                                   |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------- |
| Example sentences | Example utterances the user can say to trigger this command.                                                                  |
| Parameters        | Information required to complete the command.                                                                                 |
| Completion rules  | Actions to be taken to fulfill the command. Examples: responding to the user or communicating with a web service.            |
| Interaction rules | Other rules to handle more specific or complex situations.                                                                   |

#### <a name="add-example-sentences"></a>Add example sentences

In the **Example sentences** section, you provide an example of what the user can say.

1. In the middle pane, select **Example sentences**.
1. In the pane on the right, add examples:

    ```
    Turn on the tv
    ```

1. At the top of the pane, select **Save**.

You don't have parameters yet, so you can move to the **Completion rules** section.
#### <a name="add-a-completion-rule"></a>Add a completion rule

Next, the command needs a completion rule. This rule tells the user that a fulfillment action is being taken.

For more information about rules and completion rules, see [Custom Commands concepts and definitions](./custom-commands-references.md).

1. Select the default completion rule **Done**. Then edit it as follows:

    | Setting    | Suggested value                          | Description                                          |
    | ---------- | ---------------------------------------- | -------------------------------------------------- |
    | **Name** | `ConfirmationResponse` | A name describing the purpose of the rule |
    | **Conditions** | None | Conditions that determine when the rule can run |
    | **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, turning the tv on` | The action to take when the rule condition is true |

1. Select **Save** to save the action.
1. Back in the **Completion rules** section, select **Save** to save all changes.

   > [!NOTE]
   > You don't have to use the default completion rule that comes with the command. You can delete the default completion rule and add your own rule.
### <a name="add-a-settemperature-command"></a>Add a SetTemperature command

Now add one more command, `SetTemperature`. This command will take a single utterance, `Set the temperature to 40 degrees`, and respond with the message `Ok, setting temperature to 40 degrees`.

To create the new command, follow the steps you used for the `TurnOn` command, but use the example sentence `Set the temperature to 40 degrees`.

Then edit the existing **Done** completion rules as follows:

| Setting    | Suggested value                          |
| ---------- | ---------------------------------------- |
| **Name** | `ConfirmationResponse` |
| **Conditions** | None |
| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting temperature to 40 degrees` |

Select **Save** to save all changes to the command.
### <a name="add-a-setalarm-command"></a>Add a SetAlarm command

Create a `SetAlarm` command. Use the example sentence `Set an alarm for 9 am tomorrow`. Then edit the existing **Done** completion rules as follows:

| Setting    | Suggested value                          |
| ---------- | ---------------------------------------- |
| **Name** | `ConfirmationResponse` |
| **Conditions** | None |
| **Actions** | **Send speech response** > **Simple editor** > **First variation** > `Ok, setting an alarm for 9 am tomorrow` |

Select **Save** to save all changes to the command.
### <a name="try-it-out"></a>Try it out

Test the application's behavior by using the test pane:

1. In the upper-right corner of the pane, select the **Train** icon.
1. When the training finishes, select **Test**.

Try out the following utterance examples by using voice or text:

- You type: *set the temperature to 40 degrees*
- Expected response: Ok, setting temperature to 40 degrees
- You type: *turn on the tv*
- Expected response: Ok, turning the tv on
- You type: *set an alarm for 9 am tomorrow*
- Expected response: Ok, setting an alarm for 9 am tomorrow

> [!TIP]
> In the test pane, you can select **Turn details on** for information about how each voice or text input was processed.
## <a name="add-parameters-to-commands"></a>Add parameters to commands

In this section, you learn how to add parameters to your commands. Commands require parameters to complete a task. In complex scenarios, parameters can be used to define conditions that trigger custom actions.

### <a name="configure-parameters-for-a-turnon-command"></a>Configure parameters for a TurnOn command

Start by editing the existing `TurnOn` command to turn multiple devices on and off.

1. Now that the command will handle both on and off scenarios, rename the command as *TurnOnOff*.
1. In the pane on the left, select the **TurnOn** command. Then, next to **New command** at the top of the pane, select the ellipsis (**...**) button.
1. Select **Rename**. In the **Rename command** window, change the name to *TurnOnOff*.
1. Add a new parameter to the command. The parameter represents whether the user wants to turn the device on or off.
1. At the top of the middle pane, select **Add**. From the drop-down menu, select **Parameter**.
1. In the pane on the right, in the **Parameters** section, in the **Name** box, add `OnOff`.
1. Select **Required**. In the **Add response for a required parameter** window, select **Simple editor**. In the **First variation** field, add *On or Off?*.
1. Select **Update**.

1. Configure the parameter's properties by using the following table. For information about all of the configuration properties of a command, see [Custom Commands concepts and definitions](./custom-commands-references.md).

    | Setting            | Suggested value | Description                                                           |
    | ------------------ | ----------------| ---------------------------------------------------------------------|
    | **Name**               | `OnOff`           | A descriptive name for the parameter                                 |
    | **Is Global**          | Unselected       | Check box indicating whether a value for this parameter is globally applied to all commands in the application.|
    | **Required**           | Selected         | Check box indicating whether a value for this parameter is required before the command finishes. |
    | **Response for required parameter**   |**Simple editor** > `On or Off?`      | A prompt asking for the value of this parameter when it isn't known. |
    | **Type**               | **String**          | The type of parameter, such as number, string, datetime, or geography.   |
    | **Configuration**      | **Accept predefined input values from an internal catalog** | For strings, this setting limits inputs to a set of possible values. |
    | **Predefined input values**     | `on`, `off`           | Set of possible values and their aliases.         |

1. To add predefined input values, select **Add a predefined input**. In the **New item** window, type the *Name* as shown in the preceding table. In this case, you're not using aliases, so you can leave that field blank.

1. Select **Save** to save all configurations of the parameter.
#### <a name="add-a-subjectdevice-parameter"></a>Add a SubjectDevice parameter

1. To add a second parameter to represent the name of the devices that can be controlled by using this command, select **Add**. Use the following configuration.

    | Setting            | Suggested value       |
    | ------------------ | --------------------- |
    | **Name**               | `SubjectDevice`         |
    | **Is Global**          | Unselected           |
    | **Required**           | Selected             |
    | **Response for required parameter** | **Simple editor** > `Which device do you want to control?` |
    | **Type**               | **String**              |
    | **Configuration**      | **Accept predefined input values from an internal catalog** |
    | **Predefined input values** | `tv`, `fan`             |
    | **Aliases** (`tv`)      | `television`, `telly`   |

1. Select **Save**.
#### <a name="modify-example-sentences"></a>Modify example sentences

For commands that use parameters, it's helpful to add example sentences that cover all possible combinations. For example:

* Complete parameter information: `turn {OnOff} the {SubjectDevice}`
* Partial parameter information: `turn it {OnOff}`
* No parameter information: `turn something`

Example sentences that use different degrees of information allow the Custom Commands application to handle both one-shot resolutions and multiple-turn resolutions by using partial information.

With that information in mind, edit the example sentences to use these suggested parameters:

```
turn {OnOff} the {SubjectDevice}
{SubjectDevice} {OnOff}
turn it {OnOff}
turn something {OnOff}
turn something
```

Select **Save**.

> [!TIP]
> In the example-sentences editor, use curly braces to refer to your parameters. For example, `turn {OnOff} the {SubjectDevice}`.
> Use a tab for automatic completion backed by previously created parameters.
#### <a name="modify-completion-rules-to-include-parameters"></a>Modificar regras de conclusão para incluir parâmetros
Modifique a regra de conclusão `ConfirmationResponse` existente.
1. Na seção **Condições**, selecione **Adicionar uma condição**.
1. Na janela **Nova condição**, na lista **Tipo**, selecione **Parâmetros obrigatórios**. Na lista que vem a seguir, marque **OnOff** e **SubjectDevice**.
1. Deixe a opção **IsGlobal** desmarcada.
1. Selecione **Criar**.
1. Na seção **Ações**, edite a ação **Enviar resposta de Fala** passando o mouse sobre ela e selecionando o botão Editar. Desta vez, use os parâmetros criados recentemente `OnOff` e `SubjectDevice`:
```
Ok, turning the {SubjectDevice} {OnOff}
```
1. Clique em **Salvar**.
Experimente as alterações selecionando o ícone **Treinar**, na parte superior do painel à direita.
Quando o treinamento for concluído, selecione **Testar**. A janela **Testar seu aplicativo** será exibida. Experimente as seguintes interações:
- Entrada: *desligar a TV*
- Saída: ok, desligando a TV
- Entrada: *desligar a televisão*
- Saída: ok, desligando a TV
- Entrada: *desligar*
- Saída: qual dispositivo você deseja controlar?
- Entrada: *a TV*
- Saída: ok, desligando a TV
### <a name="configure-parameters-for-a-settemperature-command"></a>Configurar parâmetros do comando SetTemperature
Modifique o comando `SetTemperature` a fim de habilitá-lo para definir a temperatura conforme determinado pelo usuário.
Adicionar um parâmetro `Temperature`. Use a seguinte configuração:
| Configuração | Valor sugerido |
| ------------------ | ----------------|
| **Nome** | `Temperature` |
| **Necessário** | Selecionado |
| **Resposta para o parâmetro obrigatório** | **Editor simples** > `What temperature would you like?`
| **Tipo** | `Number` |
Edite os exemplos de enunciados a fim de usar os valores a seguir.
```
set the temperature to {Temperature} degrees
change the temperature to {Temperature}
set the temperature
change the temperature
```
Edite as regras de conclusão existentes. Use a configuração a seguir.
| Configuração | Valor sugerido |
| ------------------ | ----------------|
| **Condições** | **Parâmetro obrigatório** > **Temperatura** |
| **Ações** | **Enviar resposta de Fala** > `Ok, setting temperature to {Temperature} degrees` |
### <a name="configure-parameters-for-a-setalarm-command"></a>Configurar parâmetros do comando SetAlarm
Adicione um parâmetro chamado `DateTime`. Use a configuração a seguir.
| Configuração | Valor sugerido |
| --------------------------------- | ----------------------------------------|
| **Nome** | `DateTime` |
| **Necessário** | Selecionado |
| **Resposta para o parâmetro obrigatório** | **Editor simples** > `For what time?` |
| **Tipo** | **DateTime** |
| **Padrões de data** | Se a data estiver ausente, use o dia atual. |
| **Padrões de hora** | Se a hora estiver ausente, use o início do dia. |
> [!NOTE]
> Este artigo usa principalmente os tipos de parâmetro cadeia de caracteres, número e datetime. Para obter uma lista com todos os tipos de parâmetro compatíveis e suas respectivas propriedades, confira [Definições e conceitos de Comandos Personalizados](./custom-commands-references.md).
Edite os enunciados de exemplo. Use os seguintes valores.
```
set an alarm for {DateTime}
set alarm {DateTime}
alarm for {DateTime}
```
Edite as regras de conclusão existentes. Use a configuração a seguir.
| Configuração | Valor sugerido |
| ---------- | ------------------------------------------------------- |
| **Ações** | **Enviar resposta de Fala** > `Ok, alarm set for {DateTime}` |
Teste os três comandos juntos usando enunciados relacionados a diferentes comandos. (Você pode alternar entre os diferentes comandos.)
- Entrada: *definir um alarme*
- Saída: para que hora?
- Entrada: *ligar a TV*
- Saída: ok, ligando a TV
- Entrada: *definir um alarme*
- Saída: para que hora?
- Entrada: *17:00*
- Saída: ok, alarme definido para 2020-05-01 17:00:00
## <a name="add-configurations-to-command-parameters"></a>Adicionar configurações aos parâmetros do comando
Nesta seção, você aprenderá mais sobre a configuração avançada de parâmetros, incluindo:
- Como os valores de parâmetro podem pertencer a um conjunto que é definido fora do aplicativo de Comandos Personalizados.
- Como adicionar cláusulas de validação nos valores de parâmetro.
### <a name="configure-a-parameter-as-an-external-catalog-entity"></a>Configurar um parâmetro como uma entidade do catálogo externo
O recurso de Comandos Personalizados permite que você configure parâmetros do tipo cadeia de caracteres para se referir a catálogos externos hospedados em um ponto de extremidade da Web. Portanto, você pode atualizar o catálogo externo de maneira independe, sem editar o aplicativo de Comandos Personalizados. Essa abordagem é útil para casos em que as entradas do catálogo são numerosas.
Reutilize o parâmetro `SubjectDevice` do comando `TurnOnOff`. A configuração atual desse parâmetro é **Aceitar entradas predefinidas do catálogo interno**. Essa configuração se refere a uma lista estática de dispositivos na configuração do parâmetro. Mova esse conteúdo para uma fonte de dados externa que possa ser atualizada de maneira independente.
Para mover o conteúdo, comece adicionando um novo ponto de extremidade da Web. No painel à esquerda, vá para a seção **pontos de extremidade da Web**. Lá, adicione um novo ponto de extremidade da Web. Use a configuração a seguir.
| Configuração | Valor sugerido |
|----|----|
| **Nome** | `getDevices` |
| **URL** | `https://aka.ms/speech/cc-sampledevices` |
| **Método** | **GET** |
Se o valor sugerido para a URL não funcionar para você, configure e hospede um ponto de extremidade da Web que retorne um arquivo JSON que consista na lista de dispositivos que podem ser controlados. O ponto de extremidade da Web deve retornar um arquivo JSON formatado da seguinte maneira:
```json
{
"fan" : [],
"refrigerator" : [
"fridge"
],
"lights" : [
"bulb",
"bulbs",
"light"
"light bulb"
],
"tv" : [
"telly",
"television"
]
}
```
Em seguida, vá para a página de configurações do parâmetro **SubjectDevice**. Defina as propriedades a seguir.
| Configuração | Valor sugerido |
| ----| ---- |
| **Configuration** | **Aceitar entradas predefinidas do catálogo externo** |
| **Ponto de extremidade do catálogo** | `getDevices` |
| **Método** | **GET** |
Em seguida, selecione **Salvar**.
> [!IMPORTANT]
> Você não verá uma opção para configurar um parâmetro a fim de aceitar entradas de um catálogo externo, a menos que tenha o ponto de extremidade da Web definido na seção **Ponto de extremidade da Web**, no painel à esquerda.
Experimente selecionando **Treinar**. Após a conclusão do treinamento, selecione **Testar** e experimente algumas interações.
* Entrada: *ligar*
* Saída: qual dispositivo você deseja controlar?
* Entrada: *luzes*
* Saída: ok, acendendo as luzes
> [!NOTE]
> Agora você pode controlar todos os dispositivos hospedados no ponto de extremidade da Web. Mas você ainda precisa treinar o aplicativo para testar as novas alterações e, em seguida, republicar o aplicativo.
### <a name="add-validation-to-parameters"></a>Adicionar validação aos parâmetros
*Validações* são constructos que se aplicam a determinados tipos de parâmetro que permitem configurar restrições para o valor do parâmetro. Elas solicitam correções se os valores não ficam dentro das restrições. Para obter uma lista de tipos de parâmetro que estendem o constructo de validação, confira [Definições e conceitos de Comandos Personalizados](./custom-commands-references.md).
Teste as validações usando o comando `SetTemperature`. Use as etapas a seguir para adicionar uma validação ao parâmetro `Temperature`.
1. No painel à esquerda, selecione o comando **SetTemperature**.
1. No painel central, selecione **Temperatura**.
1. No painel à direita, selecione **Adicionar uma validação**.
1. Na janela **Nova validação**, configure a validação, conforme mostrado na tabela a seguir. Em seguida, selecione **Criar**.
| Configuração de parâmetro | Valor sugerido | Descrição |
| ---- | ---- | ---- |
| **Valor mínimo** | `60` | Para parâmetros numéricos, o valor mínimo que o parâmetro pode assumir |
| **Valor máximo** | `80` | Para parâmetros numéricos, o valor máximo que o parâmetro pode assumir |
| **Resposta a falhas** | **Editor simples** > **Primeira variação** > `Sorry, I can only set temperature between 60 and 80 degrees. What temperature do you want?` | Um prompt para solicitar um novo valor se a validação falhar |
> [!div class="mx-imgBorder"]
> 
Experimente selecionando o ícone **Treinar**, na parte superior do painel à direita. Depois que o treinamento for concluído, selecione **Testar**. Experimente algumas interações:
- Entrada: *definir a temperatura como 72 graus*
- Saída: ok, definindo a temperatura como 72 graus
- Entrada: *definir a temperatura como 45 graus*
- Saída: só posso definir a temperatura entre 60 e 80 graus
- Entrada: *então, definir como 72 graus*
- Saída: ok, definindo a temperatura como 72 graus
## <a name="add-interaction-rules"></a>Adicionar regras de interação
Regras de interação são regras *adicionais* que gerenciam situações específicas ou complexas. Embora você esteja livre para criar suas próprias regras de interação, neste exemplo, você usa regras de interação para os seguintes cenários:
* Confirmar comandos
* Adicionar uma correção de uma etapa aos comandos
Para obter mais informações sobre as regras de interação, confira [Definições e conceitos de Comandos Personalizados](./custom-commands-references.md).
### <a name="add-confirmations-to-a-command"></a>Adicionar confirmações a um comando
Para adicionar uma confirmação, use o comando `SetTemperature`. Para alcançar a confirmação, crie regras de interação usando as seguintes etapas:
1. No painel à esquerda, selecione o comando **SetTemperature**.
1. No painel central, adicione regras de interação selecionando **Adicionar**. Em seguida, selecione **Regras de interação** > **Confirmar comando**.
Essa ação adiciona três regras de interação. As regras solicitam que o usuário confirme a data e a hora do alarme. Elas esperam uma confirmação (sim ou não) para a próxima rodada.
1. Modifique a regra de interação do **comando Confirmar** usando a seguinte configuração:
1. Altere o nome para **Confirmar temperatura**.
1. Adicione uma nova condição: **Parâmetros obrigatórios** > **Temperatura**.
1. Adicione uma nova ação: **Tipo** > **Enviar resposta de Fala** > **Tem certeza de que deseja definir a temperatura como {Temperature} graus?**
1. Na seção **Expectativas**, deixe o valor padrão de **Esperando confirmação do usuário**.
> [!div class="mx-imgBorder"]
> 
1. Modifique a regra de interação **Confirmação bem-sucedida** para gerenciar uma confirmação bem-sucedida (em que o usuário diz sim).
1. Altere o nome para **Confirmação de temperatura bem-sucedida**.
1. Deixe a condição existente **A confirmação foi bem-sucedida**.
1. Adicione uma nova condição: **Tipo** > **Parâmetros obrigatórios** > **Temperatura**.
1. Deixe o valor padrão do **Estado pós-execução** como **Executar regras de conclusão**.
1. Modifique a regra de interação **Confirmação negada** para gerenciar cenários quando a confirmação for negada (em que o usuário diz não).
1. Altere o nome para **Confirmação de temperatura negada**.
1. Deixe a condição existente **A confirmação foi negada**.
1. Adicione uma nova condição: **Tipo** > **Parâmetros obrigatórios** > **Temperatura**.
1. Adicione uma nova ação: **Tipo** > **Enviar resposta de fala** > **Sem problemas. Qual temperatura, então?** .
1. Altere o valor padrão do **Estado pós-execução** para **Aguardar a entrada do usuário**.
> [!IMPORTANT]
> Neste artigo, você usa a funcionalidade interna de confirmação. Você também pode adicionar manualmente as regras de interação, uma a uma.
Experimente as alterações selecionando **Treinar**. Quando o treinamento for concluído, selecione **Testar**.
- **Entrada**: *defina a temperatura como 80 graus*
- **Saída**: tem certeza de que deseja definir a temperatura como 80 graus?
- **Entrada**: *não*
- **Saída**: sem problema. Qual temperatura, então?
- **Entrada**: *72 graus*
- **Saída**: tem certeza de que deseja definir a temperatura como 72 graus?
- **Entrada**: *sim*
- **Saída**: ok, definindo a temperatura como 72 graus
### <a name="implement-corrections-in-a-command"></a>Implementar correções em um comando
Nesta seção, você vai configurar uma correção de uma etapa. Essa correção é usada após a execução da ação de conclusão. Você também verá um exemplo de como uma correção é habilitada por padrão se o comando ainda não for concluído. Para adicionar uma correção quando o comando não for concluído, adicione o novo parâmetro `AlarmTone`.
No painel esquerdo, selecione o comando **SetAlarm**. Em seguida, adicione o novo parâmetro **AlarmTone**.
- **Nome** > `AlarmTone`
- **Tipo** > **Cadeia de caracteres**
- **Valor padrão** > **Chimes**
- **Configuração** > **Aceitar valores de entrada predefinidos do catálogo interno**
- **Valores de entrada predefinidos** > **Chimes**, **Jingle** e **Echo** (esses valores são entradas individuais predefinidas.)
Em seguida, atualize a resposta para o parâmetro **DateTime** como **Pronto para definir o alarme com o tom {AlarmTone}. Para que hora?** . Então, modifique a regra de conclusão da seguinte maneira:
1. Selecione a regra de conclusão existente **ConfirmationResponse**.
1. No painel à direita, passe o mouse sobre a ação existente e selecione **Editar**.
1. Atualize a resposta de Fala para `OK, alarm set for {DateTime}. The alarm tone is {AlarmTone}`.
> [!IMPORTANT]
> O tom de alarme pode ser alterado sem nenhuma configuração explícita no comando em andamento. Por exemplo, ele pode mudar quando o comando ainda não tiver terminado. Uma correção é habilitada *por padrão* para todos os parâmetros do comando, independentemente da ordem, se o comando ainda não tiver sido concluído.
#### <a name="implement-a-correction-when-a-command-is-finished"></a>Implementar uma correção quando um comando for concluído
A plataforma de Comandos Personalizados permite a correção de uma etapa mesmo quando o comando está concluído. Esse recurso não é habilitado por padrão. Ele deve ser configurado explicitamente.
Execute as seguintes etapas para configurar uma conexão de uma etapa:
1. No comando **SetAlarm**, adicione uma regra de interação do tipo **Atualizar comando anterior** para atualizar o alarme definido anteriormente. Renomeie a regra de interação como **Atualizar alarme anterior**.
1. Deixe a condição padrão: **Comando anterior precisa ser atualizado**.
1. Adicione uma nova condição: **Tipo** > **Parâmetro Obrigatório** > **DateTime**.
1. Adicionar uma nova ação: **Tipo** > **Enviar resposta de fala** > **Editor simples** > **Atualizando a hora do alarme anterior para {DateTime}** .
1. Deixe o valor padrão do **Estado pós-execução** como **Comando concluído**.
Experimente as alterações selecionando **Treinar**. Aguarde a conclusão do treinamento e, em seguida, selecione **Testar**.
- **Entrada**: *definir um alarme.*
- **Saída**: pronto para definir o alarme com o tom Chimes. Para que hora?
- **Entrada**: *definir um alarme com o tom Jingle para 9:00, amanhã.*
- **Saída**: ok, alarme definido para 2020-05-21 09:00:00. O tom do alarme é o Jingle.
- **Entrada**: *não, 8:00.*
- **Saída**: atualizando a hora do alarme anterior para 29/05/2020 08:00.
> [!NOTE]
> Em um aplicativo real, na seção **Ações** dessa regra de correção, você também precisará enviar de volta uma atividade para o cliente ou chamar um ponto de extremidade HTTP para atualizar a hora do alarme no seu sistema. Essa ação deve ser exclusivamente responsável por atualizar a hora do alarme. Ela não deve ser responsável por nenhum outro atributo do comando. Nesse caso, esse atributo seria o tom do alarme.
## <a name="add-language-generation-templates-for-speech-responses"></a>Adicionar modelos de geração de linguagem para respostas de Fala
Os modelos de LG (geração de linguagem) permitem que você personalize as respostas enviadas ao cliente. Eles introduzem a variação nas respostas. Você pode obter a geração de linguagem usando:
* Modelos de geração de linguagem.
* Expressões adaptáveis.
Os modelos de Comandos Personalizados são baseados nos [modelos de LG](/azure/bot-service/file-format/bot-builder-lg-file-format#templates) do Bot Framework. Como o recurso de Comandos Personalizados cria um modelo de LG quando necessário (para respostas de Fala em parâmetros ou ações), você não precisa especificar o nome do modelo de LG.
Portanto, você não precisa definir seu modelo da seguinte maneira:
```
# CompletionAction
- Ok, turning {OnOff} the {SubjectDevice}
- Done, turning {OnOff} the {SubjectDevice}
- Proceeding to turn {OnOff} {SubjectDevice}
```
Em vez disso, você pode definir o corpo do modelo sem o nome, desta forma:
> [!div class="mx-imgBorder"]
> 
Essa alteração introduz a variação nas respostas de Fala que são enviadas ao cliente. Para um enunciado, a resposta de Fala correspondente é separada aleatoriamente das opções fornecidas.
Aproveitando os modelos de LG, você também pode definir respostas de Fala complexas para os comandos usando expressões adaptáveis. Para obter mais informações, confira o [Formato de modelos de LG](/azure/bot-service/file-format/bot-builder-lg-file-format#templates).
Por padrão, o recurso de Comandos Personalizados dá suporte a todas as funcionalidades, com as seguintes diferenças secundárias:
* Nos modelos de LG, as entidades são representadas como `${entityName}`. O recurso de Comandos Personalizados não usa entidades. Mas você pode usar parâmetros como variáveis com a representação `${parameterName}` ou a representação `{parameterName}`.
* O recurso de Comandos Personalizados não dá suporte à composição e expansão de modelos, pois você nunca edita o arquivo *.lg* diretamente. Você edita apenas as respostas de modelos criados automaticamente.
* O recurso de Comandos Personalizados não dá suporte a funções personalizadas que a LG injeta. Há suporte para funções predefinidas.
* O recurso de Comandos Personalizados não oferece suporte a opções como `strict`, `replaceNull` e `lineBreakStyle`.
### <a name="add-template-responses-to-a-turnonoff-command"></a>Adicionar respostas de modelo ao comando TurnOnOff
Modifique o comando `TurnOnOff` para adicionar um novo parâmetro. Use a configuração a seguir.
| Configuração | Valor sugerido |
| ------------------ | --------------------- |
| **Nome** | `SubjectContext` |
| **É global** | Não selecionado |
| **Necessário** | Não selecionado |
| **Tipo** | **Cadeia de caracteres** |
| **Valor padrão** | `all` |
| **Configuration** | **Aceitar valores de entrada predefinidos provenientes do catálogo interno** |
| **Valores de entrada predefinidos** | `room`, `bathroom`, `all`|
#### <a name="modify-a-completion-rule"></a>Modificar uma regra de conclusão
Edite a seção **Ações** da regra de conclusão existente **ConfirmationResponse**. Na janela **Editar ação**, alterne para o **Editor de modelos**. Então, substitua o texto pelos exemplos a seguir.
```
- IF: @{SubjectContext == "all" && SubjectDevice == "lights"}
- Ok, turning all the lights {OnOff}
- ELSEIF: @{SubjectDevice == "lights"}
- Ok, turning {OnOff} the {SubjectContext} {SubjectDevice}
- ELSE:
- Ok, turning the {SubjectDevice} {OnOff}
- Done, turning {OnOff} the {SubjectDevice}
```
Treine e teste seu aplicativo usando a entrada e a saída a seguir. Observe a variação de respostas. A variação é criada por várias alternativas do valor do modelo e também pelo uso de expressões adaptáveis.
* Entrada: *ligar a TV*
* Saída: ok, ligando a TV
* Entrada: *ligar a TV*
* Saída: concluído, TV ligada
* Entrada: *desligar as luzes*
* Saída: ok, desligando todas as luzes
* Entrada: *desligar as luzes da sala*
* Saída: ok, desligando as luzes da sala
## <a name="use-a-custom-voice"></a>Usar uma voz personalizada
Outra maneira de personalizar as respostas de Comandos Personalizados é selecionando uma voz de saída. Use as seguintes etapas para alternar a voz padrão para uma voz personalizada:
1. Em seu aplicativo de Comandos Personalizados, no painel à esquerda, selecione **Configurações**.
1. No painel central, selecione **Voz Personalizada**.
1. Na tabela, selecione uma voz personalizada ou uma voz pública.
1. Clique em **Salvar**.
> [!div class="mx-imgBorder"]
> 
> [!NOTE]
> Para vozes públicas, os tipos neurais estão disponíveis apenas para regiões específicas. Para obter mais informações, confira [Regiões com suporte para o serviço de Fala](./regions.md#neural-and-standard-voices).
>
> Você pode criar vozes personalizadas na página do projeto de **Voz Personalizada**. Para saber mais, confira [Introdução à Voz Personalizada](./how-to-custom-voice.md).
Agora, o aplicativo responderá na voz selecionada, em vez da voz padrão.
## <a name="next-steps"></a>Próximas etapas
* Saiba como [integrar seu aplicativo de Comandos Personalizados](how-to-custom-commands-setup-speech-sdk.md) a um aplicativo cliente usando o SDK de Fala.
* [Configure a implantação contínua](how-to-custom-commands-deploy-cicd.md) para o seu aplicativo de Comandos Personalizados usando o Azure DevOps.
| 59.020093 | 484 | 0.694626 | por_Latn | 0.99939 |
2267dd674f743108100369142c12cff2dfbb3f26 | 23,489 | md | Markdown | content/http2.md | cyborgdream/cyborgdream.github.io | e4c6423cc83632ff2862e4847a2d58a73a044f52 | [
"MIT"
] | null | null | null | content/http2.md | cyborgdream/cyborgdream.github.io | e4c6423cc83632ff2862e4847a2d58a73a044f52 | [
"MIT"
] | null | null | null | content/http2.md | cyborgdream/cyborgdream.github.io | e4c6423cc83632ff2862e4847a2d58a73a044f52 | [
"MIT"
] | null | null | null | +++
title = "This is a journey into HTTP2"
description = "All I learned about HTTP2 in my Mozilla internship. This used to be a three parts blogpost but I condensed everything here."
date = 2018-12-20
draft = false
slug = "http2"
[taxonomies]
categories = ["tutorial"]
tags = ["performance", "web-dev", "meta-internet"]
[extra]
comments = true
+++
As a performance intern working with the Webcompat team part of my initial proposal was to update the Webcompat server to HTTP/2. But what is HTTP/2? What's the difference between HTTP/1.1, HTTPS and HTTP/2? Newer is always better? How should I know if I should implement it on my website? We'll dive in into the basics and I'll give you the building blocks to understand a little bit about internet protocols and in later articles I'll introduce you to my process of analysis and implementation of an HTTP/2 server focused on improving Webcompat's website performance, hopefully that will give you some insights about how to improve your own server performance.
HTTP/1.1 stands for Hypertext Transfer Protocol version 1.1 and it's a standardized way of computers talk to each other through hyperlinks. HTTP/1.1 defines **methods** like GET that retrieves data and POST that sends it, those requests and responses have to be treated synchronous opening one TCP connection for each one of them, which might slow things a bit; HTTP/1.1 implements **status** there are responses to the methods like 404 not found or 200 that means that a request was successful;and HTTP **header** fields, which sometimes can be used to fetch data with the HEAD method. HTTP is stateless, which means there is no link between two requests being successively carried out on the same connection, sometimes you may to have a "memory" to store preferences of an user or a cart on an e-commerce and for that HTTP implements **cookies**. HTTP also defines **caching**, a very relevant tool for performance. Caching is the way to save data locally in order to decrease the number of requests you have to make to another computer, W3C offers [extensive documentation](https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html) about it. If you want to get a deeper understanding on how HTTP/1.1 works MDN has this [great overview article](https://developer.mozilla.org/en-US/docs/Web/HTTP/Overview) about it, you can follow up the links, I recommend reading the entire section.

[source: cloudflare](https://blog.cloudflare.com/http-2-for-web-developers/)
HTTPS is a safer way to transfer data between computers. HTTPS websites encrypts the data you send and receive using TLS/SSL and the websites have to be certified by authorities on the internet that ensures that you are exchanging information with who you think you are, here is a link if you're interested in learning more about HTTPS. When we're talking about performance it seems obvious that HTTPS will have a bigger cost than HTTP because of the extra encrypting process, but according to this website's tests their results are pretty similar with HTTPS being [approximately 1% slower than HTTP](https://www.tunetheweb.com/blog/http-versus-https-versus-http2/).
Finally, HTTP/2 implements the same tools that HTTP/1.1 does, but on top of that it has some sweet new stuff. HTTP/2 as opposed to HTTP/1.1 is a multiplexed protocol which means it can handle parallel requests, that often means your page will load faster; it's implemented with a binary protocol which allows the use of improved optimization techniques; it compresses headers; and it allows a server to populate data in a client cache, in advance of it being required, through a mechanism called the server push. Besides that, all of the browsers only support HTTP/2 if they're over TLS which means HTTP/2 websites are always encrypted.

[source: cloudflare](https://blog.cloudflare.com/http-2-for-web-developers/)
These are the basic differences between the three protocols. Now that you have your feet wet on the water it's time to learn how to run analysis and find out if HTTP/2 is good for you.
In my [previous blog post](https://psychonautgirl.space/blog/2018/12/20/journey-into-http2-building-blocks.html) I gave a small introduction to the main differences between the HTTP protocol versions. In this article I want to help you understand if you need HTTP/2 on your website. If you decide you do you can keep following this series of blog posts where I'll upgrade my own website and where I try in the best of my ability to give you some neat tips on how to analyse and upgrade yours!
### Is HTTP/2 worth it?
Even though using HTTP/2 is broadly advertised on the internet as the best possible option I don't believe it fits everybody needs. Here are some things to consider if implementing HTTP/2 is worth it for you:
* It's not supported by older browsers. You can check [here](https://caniuse.com/#search=http2) which browsers support HTTP/2. If you know your user base accesses your website mainly from these browsers, it might not be worth it. Maybe it'll be a better use of your time to look for optimizations for HTTP/1.1, those can help a lot;
* It's not supported by some web servers. You should check if your current server supports HTTP/2, [here](https://github.com/http2/http2-spec/wiki/Implementations) is an updated list on which servers currently support it;
* It's not supported by some content delivery networks. You should check it either by looking at their website or checking this [list](https://en.wikipedia.org/wiki/HTTP/2#Content_delivery_networks);
* Upgrading your whole ecosystem to run a HTTP/2 server might mean you'll spend a lot of time/money. Besides upgrading your web server and your content delivery network, you might also need to buy a certificate, you can find more information about it [here](https://letsencrypt.org/getting-started/);
* You can [upgrade your HTTP/1.1 server to HTTPS](https://support.google.com/webmasters/answer/6033049), so access to HTTPS alone is not a good reason to upgrade to HTTP/2. HTTP/2 isn't intrisically safe and this [article](https://queue.acm.org/detail.cfm?id=2716278) goes over some reasons why;
* Updating your web server might mean only a small improvement on loading time and sometimes it might mean a loss in performance. I've seen this happening in some cases while [analysing webcompat.com with HTTP/2](https://psychonautgirl.space/webcompat.html#HTTP-2), and [other](https://dev.to/david_j_eddy/is-http2-really-worth-it-1m2e) [people](https://blog.fortrabbit.com/http2-reality-check) have seen the same. The only way to find out if it's going to be worth it for your website is by running a lot of tests.
If you think all of the points above aren't a real issue for you - or if you're willing to deal with them, let's dive in to how to analyze your website! In my next post I'll show you the methods and tools I used to analyze staging.webcompat.com and hopefully this real life example will help you to improve the performance of your own website.
This post is the continuation of the series “This is a journey in to HTTP/2”, you’ll find the first post where I talk about the difference between HTTP protocols [here](https://medium.com/@electrika01/this-is-a-journey-into-http-2-building-blocks-cba57a1e924c) and the second one where I advise whether you should or not use HTTP/2 on your own website [here](https://psychonautgirl.space/blog/2019/01/01/journey-into-http2-should-i-use-it.html). On this post we’ll explore how to find the configuration that fits your needs. Better than giving you a formula that might or might not work for you I’ll try to show you how to find the result that’s going to be optimal for your website.
## Initial setup
There are a lot of tools out there to help you evaluate your website’s performance. Here I decided to use Firefox DevTools, Chrome DevTools and [sitespeed.io](https://www.sitespeed.io/) by Wikimedia. Why these tools? They’re simple, easy to use, very customizable and they will cover a representative part of what HTTP/2 is doing in our website. Besides that Firefox, and Sitespeed offer a wide set of awesome open source tools and as a conscious developer that fights for the free web you should contribute with these companies by using them. To run tests on your computer use a real mobile device if you can, I recommend using a Nexus 5. If you don’t have a Nexus 5 at your disposal any mid-range device will do, this [list](https://twitter.com/katiehempenius/statuses/1067969800205422593) might help you to find something. Concerning internet speed I advise you to use a 5mbps internet connection. If you have a slower internet connection at your disposal it’s fine to run the tests with a lower speed, just remember to use a tool like [Throttle](https://www.sitespeed.io/documentation/throttle/) to establish a speed your internet supports. Note that if you have an internet speed faster than 5mbps you will also need to throttle it down. To run tests on mobile I believe the 3g slow speed or 300kbps is a good one. You can check how much speed you have at your disposal looking at [fast.com](https://fast.com/) and if you’re still feeling insecure about how your network is behaving you can use a tool like [Wireshark](https://www.wireshark.org/) to monitor it closer.
If you’re not comfortable with these tools Google has a great [doc](https://developers.google.com/web/tools/chrome-devtools/) about how to use DevTools and Firefox [too](https://developer.mozilla.org/en-US/docs/Tools), Sitespeed also have some fine [docs](http://sitespeed.io/documentation/). If you don’t want to learn this kind of thing right now and just jump straight away for the tests and results you can try using [WebPageTest](http://webpagetest.org/), it’s great tool and it’s also open source 🎔. WebPageTest is not as customizable as the tools I first advised you to use, but it’s possible to change a lot of settings on the website, so if you’re looking for simpler tests you should go have a look, it’s really a great tool. I recommend the same settings as I did before: using Nexus 5 for running tests with a slow 3g connection and Chrome and Firefox with a 5mbps connection.
I recommend you to use this setup because according to my [previous research](https://psychonautgirl.space/webcompat.html#Nexus-5) Nexus 5 represents the average setup of a mobile nowadays and the internet speeds also represents the average speed connection throughout the world. If you have more accurate statistics for your main users go ahead and use them!
To clarify things here this is the setup I’m using:
```
Server: ngnix 1.14.2
Desktop: MacBook Pro 2018 Mojave
Mobile: Nexus 5 Android 7
Internet speed: 2mbps/300kbps
```
P.S.: I have to use 2mbps to run my tests because my internet is currently that bad.
## Defining priorities
Performance is about an ocean, you can swim for miles in a lot of different directions but if you don’t focus chances are you’ll be lost in a sea of testing and hardly any insights or improves will come out from there.

source: [xkcd](http://xkcd.com/)
It’s important that you find what are the crucial points that need to be upgraded in your website to avoid getting lost. In my case the main aspects I want to improve are **time to first paint** and **time to be interactive**. Once you find what are your main aspects try to identify which files are responsible for triggering them, on my case I found out that two bundle files containing all of the page JS and CSS were the files that dictated the website performance if taking in account these two aspects.
Besides that define which are the most important pages that you want to optimize and focus on those, in my case I’m most interested in improving the performance of webcompat.com, webcompat.com/issues/new and webcompat.co/issues/#.
Another important thing to decide is whether it’s actually valuable for you to run tests using mobile. Running tests on mobile is a boring task, it’s super slow and they’re not the easiest creatures to handle, so if it’s not relevant for your context (which is rare nowadays) I’d say you could skip it. You can use [Lighthouse](https://developers.google.com/web/tools/lighthouse/) to give you an overview about how your website is handling mobile and maybe if you get good results you could also skip the mobile tests. In my case mobile is relevant because it’s where webcompat has the lowest scores when I run this kind of test and also because a substantial part of our traffic comes from mobile.

## Pre analysis
The `push` resource has to be used carefully for a couple of reasons, for example it interferes directly with your first paint time, so you have to take watch the size of the files you’re pushing, according to [this](https://docs.google.com/document/d/1K0NykTXBbbbTlv60t5MyJvXjqKGsCVNYHyLEXIxYMv0/edit#) article you should `push` files to fill your idle network time and no more. Using DevTools should make fairly simple to find out how much idle time we have between the time an user makes a request and your website starts responding it. You can also check this [link](https://stackoverflow.com/questions/667555/how-to-detect-idle-time-in-javascript-elegantly) to learn how to do it “elegantly”. Here are my results:

Webcompat’s idle time. Time in ms.
To get an idea of how HTTP/2 is going to change things I first measured the performance of my website without any change. You can measure this using Speedtest or Devtools. I recommend to use Speedtest because it’ll probably make your life easier, but if you want to run tests on DevTools it’s okay too. Keep in mind that you should create a new profile to do so, so your addons, cookies, etc don’t have any effects on your profiling, this is a very important step since extensions can have a [profound impact on your performance](https://twitter.com/denar90_/statuses/1065712688037277696).

Webcompat’s performance with no changes. Time in ms.
Some explanation here: the “/” stands for webcompat.com, the “new” for webcompat.com/issues/new and the “396” for webcompat.com/issues/396, a random issue.
## The plan
`Push` will send the files you chose for the user before the user browser even requests them and this is the main difference from preloading, but there is a lot of specificities about it and if you’re interested, there is a great [article](https://dexecure.com/blog/http2-push-vs-http-preload/) explaining more about them. Because of that we don’t want to push the files every time the user access our page we only want to send them when it’s needed. To do that a good solution is to create a cookie that will indicate to the server if the user needs new information to be pushed or if it doesn;t and the user already has this info cashed on the browser. If you want to learn more about cookies the [MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie) page is a nice starting point. Here is an example on how to create a session cookie on nginx:
```
server {
[...]
http2_push_preload on;
location = / {
add_header Set-Cookie "session=1";
add_header Link $resources;
}
}
map $http_cookie $resources {
"~*session=1" "";
default "</style.css>; as=style; rel=preload, </image1.jpg>; as=image; rel=preload";
}
```
Since we know how much idle time we have we can estimate how many files we can `push`. In my case I’ll be pushing only two huge bundle files that are critical for my performance improvement. But in case my estimates are wrong I’ll also run tests pushing more files than I think the website can handle. I’ll also run some tests using the HTTP/2 server but not pushing anything so I can analyze how much pushing files are improving performance.
While running the tests I’m interested in looking at four different aspects of the website: time to first paint, speed index, time to start render and time to load all the requests. You can chose the parameters that make more sense to your website. But careful not to take too many things in consideration. Testing a lot of different aspects will not always give you a more accurate answer, in my experience is quite the opposite, testing things that might be unrelated just gave me a lot of work and confusing results.
To summarize we’ll be testing the following:
* Default HTTP/2 server (not pushing any files);
* Pushing the most crucial files and keeping inside the idle time budget;
* Pushing more files than the available idle time.
Besides that maybe it might be smart to use `preload` as a complement to the files that couldn’t be pushed. It depends on your currently situation, in my case I decided it wasn’t a priority but I’ll come back to it later on this post and I’ll try to give you some insights about it.
## The test analysis
To make the next tests I’ll use Sitespeed! Sitespeed offers you the option to load your configuration as a JSON file, this is the file I used to run my tests:
```
{
"browsertime": {
"iterations": 5,
"browser": "BROWSER"
},
"utc": true,
"outputFolder": "OUTPUT_FOLDER"
}
```
And this is the command line I used to run tests on my desktop: `docker run --shm-size=1g --rm -v "($pwd)":/sitespeed.io sitespeedio/sitespeed.io:7.73 --config config.json https://staging.webcompat.com` .
I wanted to make this report as accurate as possible so I decided to write a Selenium script to login on my website while I was testing. I recommend you looking in to Selenium and Sitespeed docs if you’re interested in creating something alike. The script is pretty straightforward and here is the one I used:
```
module.exports = {
run(context) {
return context.runWithDriver((driver) => {
return driver.get('LOGIN_HTML')
.then(() => {
const webdriver = context.webdriver;
const until = webdriver.until;
const By = webdriver.By;
const userName = 'USER_LOGIN';
const password = 'USER_PASSWORD';
driver.findElement(By.name('login')).sendKeys(userName);
driver.findElement(By.name('password'))
.sendKeys(password);
const loginButton = driver.findElement(
By.classN* Firefox had a performance consistently worse than Chrome’s on desktop that’s worth an investigation. According to previous tests done on other tools like DevTools and WebPageTest they didn’t used to have such a disparate performance;
*
In most of the cases just upgrading the server to HTTP/2 doesn't make much difference. I imagine that most of the performance improvement comes from nginx HPACK;
Even though the time to first paint didn't improve as much as I expected on the single issue page while I was pushing more than just the bundle files on Chrome mobile the TTI had a huge performance increase;
If comparing the Speed index the performance was worse in all of the devices in the new issue page while I was pushing more than just the bundle files, which means the new issue page have a smaller idle time and I shouldn't push as many files as I did.ame('btn-block'));
loginButton.click();
return driver.wait(until.elementLocated(
By.id('body-webcompat')), 3000);
});
})
}
};
```
While running the tests on my Nexus 5 I’ve used the following command: `sitespeed.io --browsertime.chrome.android.package com.android.chrome --preScript login_mobile.js --config config.json https://staging.webcompat.com` , you can learn more about how to use your mobile to run tests on Sitespeed reading their docs. Keep in mind that you have to redirect the user from the login page to the page you want to test directly, otherwise the browser will just cache the information from your first loaded page to the page you’re really trying to test. Maybe this issue explains a little better what I mean.
Using all of the knowledge above we can finally run some tests and analyze the results.

Webcompat’s performance without pushing anything. Time in ms.

Webcompat’s performance pushing only crucial files. Time in ms.

Webcompat’s performance pushing more files than idle time supports. Time in ms.
Seeing this data we can observe a few interesting things:
* Firefox and Chrome mobile have a bigger idle time than Chrome, so these environments benefit more from push . While pushing more than the budget I had estimate was better for Firefox and Chrome mobile it meant a worsen in performance for Chrome desktop;
* Firefox had a performance consistently worse than Chrome’s on desktop that’s worth an investigation. According to [previous tests](https://psychonautgirl.space/webcompat.html) done on other tools like DevTools and WebPageTest they didn’t used to have such a disparate performance;
* In most of the cases just upgrading the server to HTTP/2 doesn't make much difference. I imagine that most of the performance improvement comes from nginx [HPACK](https://blog.cloudflare.com/hpack-the-silent-killer-feature-of-http-2/);
* Even though the time to first paint didn't improve as much as I expected on the single issue page while I was pushing more than just the bundle files on Chrome mobile the TTI had a huge performance increase;
* If comparing the Speed index the performance was worse in all of the devices in the new issue page while I was pushing more than just the bundle files, which means the new issue page have a smaller idle time and I shouldn't push as many files as I did.
If you want to further optimize your website you can try to use preload. I encourage you to read [this article](https://www.keycdn.com/blog/http2-hpack-compression) about preloading to find out if it’s worth it for you. Generally if you have any of the following on your website it might be a good idea to use it:
* Fonts referenced by CSS files;
* Hero image which is loaded via background-url in your external CSS file;
* Critical inline CSS/JS.
Currently the website I'm testing doesn't have any critical inline CSS /JS, nor hero image and our fonts are hosted by third-parties. So I decided I wouldn’t make any changes by the time, since I was currently focusing on the push performance. If you’re planning to do that on your website keep in mind that we’ve used `http2_push_preload` on nginx and now we can’t preload the files on the server anymore because it’s set to default that all of the preloaded files should be pushed. Instead of using the server configuration file and sending them on headers you’ll have to use the `preload` tag on HTML files.
## Conclusion
Analyzing the data I got to the conclusion that for my website I should use the `push` resource on a few other files than I had first planned. I learned that using `push` was specially helpful on single issue pages and on the homepage and that the new issue page has a small idle time and I shouldn’t push as much resources as I push to the rest of the website, or it will start to get in the way of the page rendering.
I hope this article had helped. Let me know if I missed anything! | 103.475771 | 1,573 | 0.76836 | eng_Latn | 0.998593 |
2267e1d8f7f948dcb2794cb72484b3314bdd439b | 7,813 | md | Markdown | docs/extensibility/how-to-use-asyncpackage-to-load-vspackages-in-the-background.md | angelobreuer/visualstudio-docs.de-de | f553469c026f7aae82b7dc06ba7433dbde321350 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/how-to-use-asyncpackage-to-load-vspackages-in-the-background.md | angelobreuer/visualstudio-docs.de-de | f553469c026f7aae82b7dc06ba7433dbde321350 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-24T14:57:38.000Z | 2020-07-24T14:57:38.000Z | docs/extensibility/how-to-use-asyncpackage-to-load-vspackages-in-the-background.md | angelobreuer/visualstudio-docs.de-de | f553469c026f7aae82b7dc06ba7433dbde321350 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Gewusst wie: Verwenden von AsyncPackage zum Laden von VSPackages im Hintergrund | Microsoft Docs'
ms.date: 11/04/2016
ms.topic: conceptual
ms.assetid: dedf0173-197e-4258-ae5a-807eb3abc952
author: acangialosi
ms.author: anthc
ms.workload:
- vssdk
ms.openlocfilehash: 77690a1947f82f97c4aa12809a80ea61335d216d
ms.sourcegitcommit: 16a4a5da4a4fd795b46a0869ca2152f2d36e6db2
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/06/2020
ms.locfileid: "80710616"
---
# <a name="how-to-use-asyncpackage-to-load-vspackages-in-the-background"></a>Gewusst wie: Verwenden von AsyncPackage zum Laden von VSPackages im Hintergrund
Das Laden und Initialisieren eines VS-Pakets kann zu Datenträger-E/A führen. Wenn solche E/A-Aktivitäten im UI-Thread auftreten, kann dies zu Reaktionsproblemen führen. Um diesem Problem zu begegnen, hat <xref:Microsoft.VisualStudio.Shell.AsyncPackage> Visual Studio 2015 die Klasse eingeführt, die das Laden von Paketen in einem Hintergrundthread ermöglicht.
## <a name="create-an-asyncpackage"></a>Erstellen eines AsyncPackage
Sie können zunächst ein VSIX-Projekt erstellen (**Datei** > **neues** > **Project** > **Visual C-Extensibility** > **Extensibility** > **VSIX Project**) und dem Projekt ein VSPackage hinzufügen (Rechtsklick auf das Projekt und **Hinzufügen** > **eines neuen Elements** > **C-Element** > **Extensibility** > Visual Studio**Package**). Anschließend können Sie Ihre Dienste erstellen und diese Dienste zu Ihrem Paket hinzufügen.
1. Leiten Sie das <xref:Microsoft.VisualStudio.Shell.AsyncPackage>Paket von ab.
2. Wenn Sie Dienste bereitstellen, deren Abfragen dazu führen können, dass Ihr Paket geladen wird:
Um Visual Studio anzuzeigen, dass Ihr Paket für das Laden <xref:Microsoft.VisualStudio.Shell.PackageRegistrationAttribute> im Hintergrund sicher ist, und um sich für dieses Verhalten zu entscheiden, sollten Sie die **AllowsBackgroundLoading-Eigenschaft** im Attributkonstruktor auf true setzen.
```csharp
[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
```
Um Visual Studio anzuzeigen, dass es sicher ist, den Dienst in einem <xref:Microsoft.VisualStudio.Shell.ProvideServiceAttributeBase.IsAsyncQueryable%2A> Hintergrundthread zu <xref:Microsoft.VisualStudio.Shell.ProvideServiceAttribute> instanziieren, sollten Sie die Eigenschaft im Konstruktor auf true festlegen.
```csharp
[ProvideService(typeof(SMyTestService), IsAsyncQueryable = true)]
```
3. Wenn Sie über UI-Kontexte laden, sollten Sie **PackageAutoLoadFlags.BackgroundLoad** für den <xref:Microsoft.VisualStudio.Shell.ProvideAutoLoadAttribute> ODER den Wert (0x2) in die Flags angeben, die als Wert für den automatischen Ladeeintrag Ihres Pakets geschrieben wurden.
```csharp
[ProvideAutoLoad(UIContextGuid, PackageAutoLoadFlags.BackgroundLoad)]
```
4. Wenn Sie asynchrone Initialisierungsarbeiten zu erledigen <xref:Microsoft.VisualStudio.Shell.AsyncPackage.InitializeAsync%2A>haben, sollten Sie überschreiben. Entfernen `Initialize()` Sie die von der VSIX-Vorlage bereitgestellte Methode. (Die `Initialize()` Methode in **AsyncPackage** ist versiegelt). Sie können eine <xref:Microsoft.VisualStudio.Shell.AsyncPackage.AddService%2A> der Methoden verwenden, um ihrem Paket asynchrone Dienste hinzuzufügen.
HINWEIS: Um `base.InitializeAsync()`aufzurufen, können Sie Ihren Quellcode in:
```csharp
await base.InitializeAsync(cancellationToken, progress);
```
5. Sie müssen darauf achten, KEINE RPCs (Remote Procedure Call) aus Ihrem asynchronen Initialisierungscode (in **InitializeAsync**) zu erstellen. Diese können auftreten, <xref:Microsoft.VisualStudio.Shell.Package.GetService%2A> wenn Sie direkt oder indirekt anrufen. Wenn Synchronisierungslasten erforderlich sind, <xref:Microsoft.VisualStudio.Threading.JoinableTaskFactory>blockiert der UI-Thread die Verwendung von . Das Standardblockierungsmodell deaktiviert RPCs. Wenn Sie also versuchen, einen RPC aus Ihren async-Tasks zu verwenden, werden Sie blockiert, wenn der UI-Thread selbst auf das Laden des Pakets wartet. Die allgemeine Alternative besteht darin, den Code bei Bedarf in den <xref:Microsoft.VisualStudio.Threading.JoinableTaskFactory.SwitchToMainThreadAsync%2A> UI-Thread zu **marshallen,** indem Sie so etwas wie Joinable Task Factory oder einen anderen Mechanismus verwenden, der keine RPC verwendet. Verwenden Sie NICHT **ThreadHelper.Generic.Invoke** oder blockieren Sie im Allgemeinen den aufrufenden Thread, der darauf wartet, zum UI-Thread zu gelangen.
HINWEIS: Sie sollten die Verwendung von `InitializeAsync` **GetService** oder **QueryService** in Ihrer Methode vermeiden. Wenn Sie diese verwenden müssen, müssen Sie zuerst zum UI-Thread wechseln. Die Alternative ist, von Ihrem **AsyncPackage** <xref:Microsoft.VisualStudio.Shell.Interop.IAsyncServiceProvider>zu verwenden <xref:Microsoft.VisualStudio.Shell.AsyncServiceProvider.GetServiceAsync%2A> (durch Umwerfen auf .)
C-Code: Erstellen eines AsyncPackage:
```csharp
[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
[ProvideService(typeof(SMyTestService), IsAsyncQueryable = true)]
public sealed class TestPackage : AsyncPackage
{
protected override Task InitializeAsync(System.Threading.CancellationToken cancellationToken, IProgress<ServiceProgressData> progress)
{
this.AddService(typeof(SMyTestService), CreateService, true);
return Task.FromResult<object>(null);
}
}
```
## <a name="convert-an-existing-vspackage-to-asyncpackage"></a>Konvertieren eines vorhandenen VSPackage in AsyncPackage
Der Großteil der Arbeit ist die gleiche wie das Erstellen eines neuen **AsyncPackage**. Folgen Sie den Schritten 1 bis 5 oben. Sie müssen auch mit den folgenden Empfehlungen besonders vorsichtig sein:
1. Denken Sie `Initialize` daran, die Außerkraftsetzung zu entfernen, die Sie in Ihrem Paket hatten.
2. Deadlocks vermeiden: Es könnten versteckte RPCs in Ihrem Code vorhanden sein. die nun auf einem Hintergrundthread auftreten. Stellen Sie sicher, dass Sie beim Erstellen einer RPC (z. B. **GetService**) entweder (1) zum Hauptthread wechseln oder (2) die asynchrone Version der API verwenden müssen, falls vorhanden (z. B. **GetServiceAsync**).
3. Wechseln Sie nicht zu häufig zwischen Threads. Versuchen Sie, die Arbeit zu lokalisieren, die in einem Hintergrundthread geschehen kann, um die Ladezeit zu reduzieren.
## <a name="querying-services-from-asyncpackage"></a>Abfragen von Diensten aus AsyncPackage
Je nach Aufrufer kann ein **AsyncPackage** asynchron geladen werden. Beispiel:
- Wenn der Aufrufer **GetService** oder **QueryService** (beide synchrone APIs) oder
- Wenn der Aufrufer **iVsShell::LoadPackage** (oder **IVsShell5::LoadPackageWithContext**) oder
- Die Last wird durch einen UI-Kontext ausgelöst, aber Sie haben nicht angegeben, dass der UI-Kontextmechanismus Sie asynchron laden kann.
dann wird Ihr Paket synchron geladen.
Ihr Paket hat immer noch die Möglichkeit (in seiner asynchronen Initialisierungsphase), den UI-Thread zu deaktivieren, obwohl der UI-Thread für den Abschluss dieser Arbeit blockiert wird. Wenn der Aufrufer **IAsyncServiceProvider** verwendet, um asynchron nach Ihrem Dienst zu fragen, wird das Laden und Die Initialisierung asynchron durchgeführt, vorausgesetzt, er blockiert nicht sofort das resultierende Taskobjekt.
C-Code: So wird der Dienst asynchron abgefragt:
```csharp
using Microsoft.VisualStudio.Shell;
using Microsoft.VisualStudio.Shell.Interop;
IAsyncServiceProvider asyncServiceProvider = Package.GetService(typeof(SAsyncServiceProvider)) as IAsyncServiceProvider;
IMyTestService testService = await asyncServiceProvider.GetServiceAsync(typeof(SMyTestService)) as IMyTestService;
```
| 73.707547 | 1,075 | 0.800845 | deu_Latn | 0.963958 |
22694878828c445dffb886cffd0402d73d6a4f16 | 156 | md | Markdown | _education/education_3.md | JeongWon-Oh/JeongWon-Oh.github.io | 93050512015f09afc36146097282d1f40a57a81a | [
"MIT"
] | null | null | null | _education/education_3.md | JeongWon-Oh/JeongWon-Oh.github.io | 93050512015f09afc36146097282d1f40a57a81a | [
"MIT"
] | null | null | null | _education/education_3.md | JeongWon-Oh/JeongWon-Oh.github.io | 93050512015f09afc36146097282d1f40a57a81a | [
"MIT"
] | null | null | null | ---
layout: post
duration: "Apr, 2022 ~ "
inline: true
---
*M.S. student in Information and Communications Engineering*,
<br>Tokyo Institute of Technology
| 17.333333 | 61 | 0.724359 | eng_Latn | 0.7803 |
226961b3743e78c51c79b33e0659d2d1d96c7e3d | 337 | md | Markdown | healthvault/reference/vocabularies/wc.medical-professions.1.md | MarioFernandezA/healthvault-docs | 73bf40b7904323a54d5cd71c32bf35919f727663 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | healthvault/reference/vocabularies/wc.medical-professions.1.md | MarioFernandezA/healthvault-docs | 73bf40b7904323a54d5cd71c32bf35919f727663 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | healthvault/reference/vocabularies/wc.medical-professions.1.md | MarioFernandezA/healthvault-docs | 73bf40b7904323a54d5cd71c32bf35919f727663 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: HV_2abfa5f1-27b7-4d7b-8847-1c2a11bcf21b
title: medical-professions
---
# medical-professions
## Overview
Property|Value
---|---
Name|medical-professions
Family|wc
Version|1
## Examples
ID|Name
---|---
Phy|Physician
Den|Dentist
Nrs|Nurse
Thp|Therapist
SW|Social Worker
Chi|Chiropractor
Nrt|Nutritionist
Clg|Clergy | 12.481481 | 44 | 0.732938 | eng_Latn | 0.354362 |
2269da018b8810c02756c805d8bf972f80620d82 | 6,996 | md | Markdown | src/posts/2020/cleancode-3.md | devsuzie/suzie.world | 498d14b1a85812819f84ecc5d3b43cfc3c75560d | [
"MIT"
] | 1 | 2022-02-07T00:39:39.000Z | 2022-02-07T00:39:39.000Z | src/posts/2020/cleancode-3.md | devsuzie/suzie.world | 498d14b1a85812819f84ecc5d3b43cfc3c75560d | [
"MIT"
] | null | null | null | src/posts/2020/cleancode-3.md | devsuzie/suzie.world | 498d14b1a85812819f84ecc5d3b43cfc3c75560d | [
"MIT"
] | null | null | null | ---
title: '클린코드 #3 함수'
description: 로버트 C. 마틴의 클린 코드를 읽고 정리한 내용입니다
date: 2020-05-29
tags:
- Book
layout: layouts/post.njk
------
어떤 프로그램이든 가장 기본적인 단위가 함수다. 이 장은 함수를 잘 만드는 법을 소개한다. 의도를 분명히 표현하는 함수를 어떻게 구현할 수 있을까? 함수에 어떤 속성을 부여해야 처음 읽는 사람이 프로그램 내부를 직관적으로 파악할 수 있을까?
## 작게 만들어라!
함수를 만드는 첫째 규칙은 `작게!`다. 함수를 만드는 둘째 규칙은 `더 작게!` 다.
### 블록과 들여쓰기
if문/else문/while문 등에 들어가는 블록은 한 줄이어야 한다. 대개 거기서 함수를 호출한다. 그러면 바깥을 감싸는 함수(enclosing function)가 작아질 뿐 아니라, 블록 안에서 호출하는 함수 이름을 적절히 짓는다면, 코드를 이해하기도 쉬워진다.
이 말은 중첩 구조가 생길만큼 함수가 커져서는 안 된다는 뜻이다. 그러므로 함수에서 들여쓰기 수준은 1단이나 2단을 넘어서면 안된다. 그래야 함수는 읽고 이해하기 쉬워진다.
## 한 가지만 해라!
> 함수는 한 가지를 해야 한다. 그 한가지를 잘 해야 한다. 그 한 가지만을 해야 한다.
지정된 함수 이름 아래에서 추상화 수준이 하나인 단계만 수행한다면 그 함수는 한 가지 작업만 한다. 우리가 함수를 만드는 이유는 큰 개념을 다음 추상화 수준에서 여러 단계로 나눠 수행하기 위해서이다.
함수가 `한가지` 만 하는지 판단하는 방법이 하나 더 있다. 단순히 다른 표현이 아니라 의미 있는 이름으로 다른 함수를 추출할 수 있다면 그 함수는 여러 작업을 하는 셈이다. 한 가지 작업만 하는 함수는 섹션으로 나누기 어렵다.
## 함수 당 추상화 수준은 하나로!
함수가 확실히 한 가지 작업만 하려면 함수 내 모든 문장의 추상화 수준이 동일해야 한다. 한 함수 내에 추상화 수준을 섞으면 코드를 읽는 사람이 헷갈린다. 문제는 이 정도로 그치지 않는다. 근본 개념과 세부사항을 뒤섞기 시작하면, 깨어진 창문처럼 사람들이 함수에 세부사항을 점점 더 추가한다.
### 위에서 아래로 코드 읽기: 내려가기 규칙
코드는 위에서 아래로 이야기처럼 읽혀야 좋다. 한 함수 다음에는 추상화 수준이 한 단계 낮은 함수가 온다. 일련의 `TO` 문단을 읽듯이 프로그램이 읽혀야 한다. 핵심은 짧으면서도 `한 가지`만 하는 함수다. 위에서 아래로 TO 문단을 읽어내려 가듯이 코드를 구현하면 추상화 수준을 일관되게 유지하기가 쉬워진다.
> LOGO언어에서 모든 함수는 키워드 `TO`로 시작한다.
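A tiny hypothetical illustration of the rule (my example, not the book's):
```java
// Each function reads like a TO paragraph, one abstraction level below its caller.
public class Report {
    public String render() {
        return renderHeader() + renderBody() + renderFooter();
    }
    private String renderHeader() { return "== Report ==\n"; }
    private String renderBody()   { return "body text\n"; }
    private String renderFooter() { return "-- end --\n"; }
}
```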
## Switch statements
It's hard to make a switch statement (or a long chain of if/else) small. By its nature, a switch statement does N things. Unfortunately, there is no way to avoid switch statements entirely, but we can bury each one in a low-level class and never repeat it, using polymorphism.
> Polymorphism is a property of a programming language's type system whereby each element of the language (constants, variables, objects, functions, methods, and so on) may belong to more than one type.
The author says he generally tolerates a switch statement only once, inside the code that creates polymorphic objects. Once hidden behind an inheritance relationship like this, it is never exposed to the rest of the code. Of course, unavoidable situations do arise.
## Use descriptive names!
A name is good when it expresses more clearly what the function does. If you give a small function that does one thing a good name, you're already halfway there. The smaller and more focused a function is, the easier it is to choose a descriptive name.
```java
// before
testableHtml
// after
SetupTeardDownIncluder.render
```
Don't be afraid of long names. A long descriptive name is better than a short, cryptic one, and a long descriptive name is better than a long descriptive comment. Use a naming convention that makes multiple words easy to read in a function name, then use those words to give the function a name that expresses what it does.
## Function arguments
The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires a very special justification, and even then shouldn't be used.
> A parameter is the variable declared in a function definition and used to carry a received value into the function body; an argument is the value passed to the function when it is called.
### Common monadic forms
The two most common reasons to pass a single argument into a function are:
- asking a question about the argument
- transforming the argument into something else and returning the result
### Flag arguments
Flag arguments are ugly. Passing a `Bool` into a function is a truly terrible practice, because it loudly proclaims that the function does more than one thing: it does one thing if the flag is true and another if the flag is false!
### Dyadic functions
A function with two arguments is harder to understand than one with a single argument. That doesn't mean dyadic functions are always bad; when writing programs you will sometimes find them unavoidable. But you should understand the cost they carry and try to convert them into monadic functions whenever possible.
### Triadic functions
A function with three arguments is much harder to understand than one with two. The problems of ordering, pausing, and ignoring arguments are more than doubled. Consider the function below.
```java
assertEquals(message, expected, actual)
```
The author says he has stumbled over this function countless times: every time he sees it, he does a double take and then has to remind himself to ignore the message argument.
### Argument objects
When a function seems to need two or three arguments, it is worth checking whether some of them can be wrapped into a class of their own.
```java
// triadic function
Circle makeCircle(double x, double y, double radius);
// create an object and make it dyadic
Circle makeCircle(Point center, double radius);
```
Reducing the number of arguments by creating an object may look like cheating, but it isn't. As with x and y above, when variables are passed together they deserve a name of their own once they are grouped, so the grouping ends up expressing a concept.
### Argument lists
Sometimes we need a function that takes a variable number of arguments. The `String.format` method is a good example.
```java
// declaration of String.format
public String format(String format, Object... args);
// example
String.format("%s worked %.2f hours.", name, hours);
```
As in the example above, if the `variadic arguments` are all treated identically, they are equivalent to a single argument of type List. The same principle applies to every function that takes variadic arguments: such functions can be treated as monads, dyads, or triads.
> When the number of arguments passed into a function varies from call to call, they are called `variadic arguments` (variable arguments).
### Verbs and keywords
A good function name is needed to express the intent of the function and the order and intent of its arguments. For a monad, the function and its argument should form a verb/noun pair. Encoding `keywords`, i.e. the names of the arguments, into the function name means we no longer have to remember their order.
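For example, in keyword form the order of the arguments is encoded into the name (the book's assertEquals example rewritten):
```java
// assertEquals(expected, actual) is easy to get backwards;
// the keyword form makes the order part of the name:
assertExpectedEqualsActual(expected, actual);
```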
## Have no side effects!
Side effects are lies. A function promises to do one thing, but it secretly does other things too. Sometimes it makes unexpected changes to the variables of its own class; sometimes it changes the arguments passed in or system globals. Either way, it is a harmful lie.
> Side effects often result in temporal coupling or order dependency.
### Output arguments
In general, we interpret arguments as `inputs` to a function. In object-oriented languages there is little need for output arguments, because `this` is the variable that was designed to act as the output argument. In general, output arguments should be avoided. If a function must change some state, have it change the state of its owning object.
## Command query separation!
A function should either do something or answer something, but not both. Either it changes the state of an object, or it returns information about that object. Doing both leads to confusion.
## Prefer exceptions to returning error codes!
Returning error codes from command functions is a subtle violation of command-query separation, because it makes it easy to use commands as expressions in if statements. If you use `try and catch` blocks to raise exceptions instead, the error-handling code is separated from the main path and the code becomes cleaner.
### Extract try/catch blocks
Try/catch blocks are ugly in their own right. They confuse the structure of the code and mix normal processing with error handling. So it is better to extract the bodies of try/catch blocks into separate functions. Separating normal processing from error handling makes the code easier to understand and modify.
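A sketch in the spirit of the book's example: the function handles the error and delegates the actual work:
```java
public void delete(Page page) {
    try {
        deletePageAndAllReferences(page);
    } catch (Exception e) {
        logError(e);
    }
}
```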
### Error handling is one thing.
Functions should do `one thing`, and error handling is `one thing`. So a function that handles errors should do nothing else. If the keyword try exists in a function, it should be the very first word, and there should be nothing after the catch/finally blocks.
## Don't repeat yourself!
In the book's bad example, an algorithm is repeated four times. The refactored example removes the duplication with an `include` method, and the module's readability improves dramatically. Duplication may be the root of all evil in software (!); many principles and practices have been created for the purpose of controlling or eliminating it.
## Structured programming
> Edsger Dijkstra's [structured programming](https://ko.wikipedia.org/wiki/%EA%B5%AC%EC%A1%B0%EC%A0%81_%ED%94%84%EB%A1%9C%EA%B7%B8%EB%9E%98%EB%B0%8D) principles. Structured programming is a method of designing programs by refining the algorithm into a hierarchical structure.
Dijkstra said that every function, and every block within a function, should have one entry and one exit. In other words, a function should have only one return statement, a loop should never contain break or continue statements, and [goto](https://en.wikipedia.org/wiki/Goto) is never, ever allowed.
If you keep your functions small, the occasional multiple return, break, or continue statement does no harm. goto, on the other hand, only makes sense in large functions, so it should be avoided in small ones.
## How do you write functions like this?
Writing software is like any other kind of writing. When you write a paper or an article, you get your thoughts down first, then massage them until they read well. The first draft tends to be clumsy and disorganized, so you wordsmith it, restructure sentences, and reorganize paragraphs until it reads the way you intend.
Writing functions is the same. At first they come out long and complicated, with long argument lists and duplicated code. Then the code is massaged and refined: functions get split out, names get changed, duplication gets removed. In the end you arrive at functions that follow the rules in this chapter. Nobody writes them that way from the start; no one could.
## Conclusion
In every language, functions are the verbs and classes are the nouns. The art of programming is, and has always been, the art of language design. Master programmers think of a system as a story to be told, not a program to be written. This chapter introduced the mechanics of writing functions well. If you follow the rules described here, you will end up with functions that are short, well named, and nicely organized. But never forget that your real goal is to tell the story of the `system`.
---
## Impressions
Reading this chapter, I felt a forcefulness from the author that I hadn't felt in chapter 1. Every subheading ends with an exclamation mark, and he doesn't hesitate to call things "ugly". Like the strict schoolmaster on The Return of Superman who teaches every newly arrived kid at Cheonghakdong what "manners" are, it feels as if he's telling someone like me, who has just started programming and writes functions however I please, "This is how you write a function!" Maybe that's a stretch. Honestly, The Return of Superman is just a show I love so much that I wanted to mention it, so I forced the comparison.
From the beginning of this book up to now (I've only read as far as chapter 3...), the consistent thread is that code should have a narrative arc and should read fluently from top to bottom, like prose. The same point shows up in the conclusion.
> Writing software is like any other kind of writing. You don't get it right on the first try. Nobody could.
After scolding me so hard at the start... he consoles me by saying nobody gets it right the first time. How is that different from the schoolmaster who intimidates the kids when they first arrive at Cheonghakdong and then turns kind just as they're about to head home? Anyway, unlike chapters 1 and 2, I didn't simply nod along this time. I've been studying JavaScript for about a year, and this book's examples are written in another language. When I hit unfamiliar concepts along the way (`GOTO`, say, or `structured programming`), I look them up and work through the examples bit by bit, and that has a charm of its own. Now that the schoolmaster has properly taught me how functions should be written, I want to grow into a programmer who keeps reworking code into something better, even if it isn't good from the very first draft. Adiós.
| 44.278481 | 402 | 0.695826 | kor_Hang | 1.00001 |
226a73e0f8030f2c57393cb5dcf219c0fa2b2f7e | 277 | md | Markdown | _posts/1990-11-27-unusual-artifacts-are-recovered-from.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1990-11-27-unusual-artifacts-are-recovered-from.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1990-11-27-unusual-artifacts-are-recovered-from.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | ---
title: Unusual artifacts are recovered from
tags:
- Nov 1990
---
Unusual artifacts are recovered from the Seahawk-discovered shipwreck south of Dry Tortugas. [Map]
Newspapers: **Miami Morning News or The Miami Herald**
Page: **4**, Section: **B**
| 23.083333 | 100 | 0.66787 | eng_Latn | 0.991639 |
226adf1c46bca344f0853deb542743316d1a9431 | 338 | md | Markdown | CHANGES.md | glreno/oneko | 9751537673f682dba564eced2b68ecb8e77c6ca2 | [
"Unlicense"
] | 3 | 2021-08-28T20:23:54.000Z | 2022-02-10T20:30:08.000Z | CHANGES.md | glreno/oneko | 9751537673f682dba564eced2b68ecb8e77c6ca2 | [
"Unlicense"
] | 4 | 2019-01-28T15:34:47.000Z | 2022-02-10T20:44:51.000Z | CHANGES.md | glreno/oneko | 9751537673f682dba564eced2b68ecb8e77c6ca2 | [
"Unlicense"
] | 3 | 2020-06-08T07:23:09.000Z | 2021-07-12T11:04:52.000Z | oneko
=====
# Change log
## V2.0.0, January 2019
* Restructured Werner Randelshofer's v.1.0
* Added support for multiple monitors.
* Added new "window mode" -- click on cat to move it into/out of a window.
* Cleaned up animation
## V2.0.1, February 2019
* Added mouse offset
* Fixed defect when un-minimizing window mode.
| 22.533333 | 74 | 0.689349 | eng_Latn | 0.98754 |
226b19b0d0e6b9dc84e8058f115c55357ad4b96b | 3,600 | md | Markdown | documents/documents.md | around7days/reserve-room | ee92a68761ea93920554011b2122327fdd1e9e12 | [
"MIT"
] | null | null | null | documents/documents.md | around7days/reserve-room | ee92a68761ea93920554011b2122327fdd1e9e12 | [
"MIT"
] | 1 | 2021-03-13T01:43:16.000Z | 2021-03-13T01:43:16.000Z | documents/documents.md | around7days/reserve-room | ee92a68761ea93920554011b2122327fdd1e9e12 | [
"MIT"
] | null | null | null | # Meeting Room Reservation System Design Document
## System Overview
A system for reserving meeting rooms.
## System Architecture
| Type | Details | Notes |
| ------------------ | ----------------------------------- | --------------------------------- |
| Server | CentOS | |
| Languages | JavaScript, HTML, CSS, SQL | |
| Frameworks | Node.js, Express, jQuery, Bootstrap | Planned migration from jQuery to Vue.js someday |
| Database | SQLite | https://www.sqlite.org/index.html |
| Other | MDWiki | Used for design documents and manuals |
<div style="page-break-before:always"></div>
## Screen Design
### Daily Schedule Screen
<img src="./documents-screen01.png" style="zoom:80%;" />
### Schedule List Screen
<img src="./documents-screen04.png" style="zoom:80%;" />
<div style="page-break-before:always"></div>
### Reservation Registration Screen
<img src="./documents-screen02.png" style="zoom:50%;" />
### Personal Settings Screen
<img src="./documents-screen03.png" style="zoom:50%;" />
<div style="page-break-before:always"></div>
## REST API Design
### API List
| URL | Method | Description | Parameters | Returns |
| -------------------- | -------- | ------------------ | ------------------ | ------------ |
| /api/reserves/search | GET | Get a list of reservations | search criteria | reservation list |
| /api/reserve/{id} | GET | Get a reservation | reservation ID | reservation |
| /api/reserve | POST | Create a reservation | reservation data | creation result |
| /api/reserve | PUT | Update a reservation | reservation data | update result |
| /api/reserve | DELETE | Delete a reservation | reservation ID, password | deletion result |
| /api/rooms | GET | Get the list of meeting rooms | - | meeting room list |
<div style="page-break-before:always"></div>
## Database Design
### Table List
| Table name | Table ID | Description |
| -------------- | -------------- | ---------------------------------- |
| Meeting room reservations | t_room_reserve | Holds meeting room reservation data |
| Meeting room master | m_room | Master table managing meeting rooms |
### Meeting room reservations (t_room_reserve)
| Column | Column ID | Type | PK | Required | Notes |
| ---------- | ---------- | ------- | --- | ---- | --------------------- |
| Reservation ID | id | INT | 1 | Yes | Auto Increment |
| Name | user_nm | VARCHAR | | | |
| Department | dept_nm | VARCHAR | | | |
| Purpose | reason | VARCHAR | | | |
| Meeting room ID | room_id | INT | | Yes | |
| Start time | start_time | VARCHAR | | Yes | yyyy/mm/dd hh24:mi:ss |
| End time | end_time | VARCHAR | | Yes | yyyy/mm/dd hh24:mi:ss |
| Password | password | VARCHAR | | | |
| Created at | ins_date | VARCHAR | | | yyyy/mm/dd hh24:mi:ss |
| Updated at | upd_date | VARCHAR | | | yyyy/mm/dd hh24:mi:ss |
| Deleted flag | del_flg | INT | | Yes | 0: active, 1: deleted |
### Meeting room master (m_room)
| Column | Column ID | Type | PK | Required | Notes |
| ---------- | -------- | ------- | --- | ---- | --------------------- |
| Meeting room ID | room_id | INT | 1 | Yes | |
| Meeting room name | room_nm | VARCHAR | | Yes | |
| Sort order | sort | INT | | | |
| Created at | ins_date | VARCHAR | | | yyyy/mm/dd hh24:mi:ss |
| Updated at | upd_date | VARCHAR | | | yyyy/mm/dd hh24:mi:ss |
| Deleted flag | del_flg | INT | | Yes | 0: active, 1: deleted |
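A minimal DDL sketch derived from the tables above (SQLite syntax assumed; the actual schema used by the application may differ):
```sql
-- Meeting room reservations
CREATE TABLE t_room_reserve (
  id         INTEGER PRIMARY KEY AUTOINCREMENT, -- reservation ID
  user_nm    TEXT,                              -- name
  dept_nm    TEXT,                              -- department
  reason     TEXT,                              -- purpose
  room_id    INTEGER NOT NULL,                  -- meeting room ID
  start_time TEXT    NOT NULL,                  -- yyyy/mm/dd hh24:mi:ss
  end_time   TEXT    NOT NULL,                  -- yyyy/mm/dd hh24:mi:ss
  password   TEXT,
  ins_date   TEXT,                              -- created at
  upd_date   TEXT,                              -- updated at
  del_flg    INTEGER NOT NULL DEFAULT 0         -- 0: active, 1: deleted
);
-- Meeting room master
CREATE TABLE m_room (
  room_id  INTEGER PRIMARY KEY,                 -- meeting room ID
  room_nm  TEXT    NOT NULL,                    -- meeting room name
  sort     INTEGER,                             -- sort order
  ins_date TEXT,
  upd_date TEXT,
  del_flg  INTEGER NOT NULL DEFAULT 0           -- 0: active, 1: deleted
);
```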
| 39.56044 | 96 | 0.355278 | yue_Hant | 0.781815 |
226b3c063e93bc2ab3a266bd885e3d472cc09fcb | 4,861 | md | Markdown | dovecot/README.md | thkukuk/containers-mailserver | 767dcb19a391a3f1674092c02081fdddc9dd6953 | [
"MIT"
] | null | null | null | dovecot/README.md | thkukuk/containers-mailserver | 767dcb19a391a3f1674092c02081fdddc9dd6953 | [
"MIT"
] | null | null | null | dovecot/README.md | thkukuk/containers-mailserver | 767dcb19a391a3f1674092c02081fdddc9dd6953 | [
"MIT"
] | 2 | 2020-12-25T23:04:45.000Z | 2021-08-10T17:48:51.000Z | # dovecot container
Dovecot is an open source IMAP and POP3 email server. It is an excellent
choice for both small and large installations. It's fast, simple to set up,
requires no special administration and it uses very little memory.
- [Guide](#guide)
- [Run a new dovecot instance](#run-a-new-dovecot-instance)
- [Data persistence](#data-persistence)
- [TLS](#tls)
- [Auto-generated certificate](#auto-generated-certificate)
- [Own certificate](#own-certificate)
- [Disable TLS](#disable-tls)
- [Supported environment variables](#supported-environment-variables)
- [Generic variables](#generic-variables)
- [Variables for TLS](#variables-for-tls)
- [Variables for LDAP](#variables-for-ldap)
- [Various configuration variables](#various-configuration-variables)
- [Data persistence volumes](#data-persistence-volumes)
## Guide
### Run a new dovecot instance
This dovecot container will create and set up the configuration files at every restart, but it is also possible to provide your own set of configuration files.
The command to run this container is:
```sh
podman run -d --rm --name dovecot -p 110:110 -p 143:143 -p 993:993 -p 995:995 -e USE_LDAP=1 -e LDAP_BASE_DN="ou=mail,dc=example,dc=org" -e LDAP_BIND_DN="cn=mailAccountReader,ou=Manager,dc=example,dc=org" -e LDAP_BIND_PASSWORD="password" registry.opensuse.org/opensuse/dovecot
```
### Data persistence
There are some directories to store persistence data like `/var/spool/vmail` for
the emails and `/etc/certs` for the certificates.
If the UID and GID of the vmail user, which owns the `/var/spool/vmail` hierarchy, need to match between the container and the host, the `VMAIL_UID` environment variable needs to be set explicitly.
This variable needs to match the `postfix` container's `VMAIL_UID` variable.
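For example (a sketch only; the UID value and the host path are assumptions to adapt to your setup):
```sh
podman run -d --rm --name dovecot \
    -e VMAIL_UID=5000 \
    -v /srv/dovecot/vmail:/var/spool/vmail:Z \
    registry.opensuse.org/opensuse/dovecot:latest
```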
## TLS
### Auto-generated certificate
TLS is configured and enabled by default. If no certificate is provided, a
self-signed one is created during container startup for the container
hostname. The hostname for the certificate can be set e.g. by
`podman run -e HOSTNAME=ldap.example.org ...`
### Own certificate
You can set your custom certificate at run time, by mounting a volume with the
certificates into the container and adjusting the following environment variables:
```sh
podman run -v /srv/dovecot/certs:/etc/certs:Z \
-e DOVECOT_TLS_CRT=/etc/certs/dovecot.crt \
-e DOVECOT_TLS_KEY=/etc/certs/dovecot.key \
-e DOVECOT_TLS_CA_CRT=/etc/certs/ca.crt \
-d registry.opensuse.org/opensuse/dovecot:latest
```
### Disable TLS
Add `--env DOVECOT_TLS=0` to the run command: `podman run -e DOVECOT_TLS=0 ...`
## Supported environment variables:
### Generic variables:
- `DEBUG=[0|1]` Enables "set -x" in the entrypoint script
- `TZ` Timezone to use in the container
### Variables for TLS:
- `DOVECOT_TLS=[1|0]` Enable TLS. Defaults to `1` (true).
- `DOVECOT_TLS_CA_CRT` Dovecot ssl CA certificate. Defaults to `/etc/certs/dovecot-ca.crt`.
- `DOVECOT_TLS_CA_KEY` Private dovecot CA key. Defaults to `/etc/certs/dovecot-ca.key`.
- `DOVECOT_TLS_CRT` Dovecot ssl certificate. Defaults to `/etc/certs/dovecot-tls.crt`.
- `DOVECOT_TLS_KEY` Private dovecot ssl key. Defaults to `/etc/certs/dovecot-tls.key`.
- `DOVECOT_TLS_DH_PARAM` Dovecot ssl certificate dh param file.
- `DOVECOT_TLS_ENFORCE=[0|1]` Enforce TLS (except for ldapi connections). Defaults to `0` (false).
- `DOVECOT_TLS_CIPHER_SUITE` TLS cipher suite.
### Variables for LDAP
- `USE_LDAP=[0|1]` Use LDAP for user database
- `LDAP_HOSTS` Hosts running ldap server
- `LDAP_BASE_DN` Ldap base DN to look for accounts. Defaults to `ou=mail,dc=example,dc=org`
- `LDAP_BIND_DN` DN used to read user account data. Defaults to `cn=mailAccountReader,ou=Manager,dc=example,dc=org`
- `LDAP_BIND_PASSWORD` Password for LDAP_BIND_DN.
- `LDAP_USE_TLS=[0|1]` Use TLS for LDAP queries, defaults to `1`
- `LDAP_TLS_CA_CRT` LDAP CA certificate to verify connections.
### Various configuration variables:
- `USE_VMAIL_USER=1` Enable VMAIL user, defaults to `1`
- `VMAIL_UID=5000` UID/GID of vmail user. All files in `/var/spool/vmail` will be changed to this UID/GID
- `ENABLE_IMAP=[0|1]` Enables imap support, defaults to `1`
- `ENABLE_POP3=[0|1]` Enables pop3 support, defaults to `0`
- `ENABLE_LMTP=[0|1]` Enables mail delivery via LMTP, defaults to `0`
- `ENABLE_SIEVE=[0|1]` Enables sieve support if LMTP is enabled, defaults to `1`
- `ENABLE_MANAGESIEVE=[0|1]` Enables ManageSieve, requires to export Port 4190. Only available if ENABLE_LMTP and ENABLE_SIEVE are set to `1`. Defaults to `0`
## Data persistence volumes
- `/var/spool/vmail` Mail storage
- `/etc/certs` TLS certificates for dovecot
- `/etc/dovecot` User supplied dovecot configuration files
| 45.009259 | 275 | 0.728451 | eng_Latn | 0.786575 |
226bbc64e44c297246b1acd7ed67ffcca28b6ada | 207 | md | Markdown | grading.md | chapman-cs510-2017f/cw-02-krissharonkynan | d96789b3cb27d138cf0c7e9835b827d5892d9885 | [
"MIT"
] | null | null | null | grading.md | chapman-cs510-2017f/cw-02-krissharonkynan | d96789b3cb27d138cf0c7e9835b827d5892d9885 | [
"MIT"
] | null | null | null | grading.md | chapman-cs510-2017f/cw-02-krissharonkynan | d96789b3cb27d138cf0c7e9835b827d5892d9885 | [
"MIT"
] | null | null | null | # Instructor comments
- Well done. Remember that if you use Google heavily and find helpful resources online, you should cite those resources in your code. It is polite to give credit where credit is due.
| 51.75 | 183 | 0.78744 | eng_Latn | 0.999819 |
226bd55981c7736761ed1f76608e74d24dc5170b | 266 | md | Markdown | src/docs/news.PomBase/2002-12-31-global-transcriptional-responses-fission-yeast-environmental-stress.md | pombase/Website | c10db0c7a078681407f544c01b8932feb0ea706d | [
"MIT"
] | 14 | 2020-10-20T11:39:30.000Z | 2022-03-10T18:08:58.000Z | src/docs/news.PomBase/2002-12-31-global-transcriptional-responses-fission-yeast-environmental-stress.md | pombase/Website | c10db0c7a078681407f544c01b8932feb0ea706d | [
"MIT"
] | 1,806 | 2016-06-07T10:30:41.000Z | 2022-03-31T09:55:03.000Z | src/docs/news.PomBase/2002-12-31-global-transcriptional-responses-fission-yeast-environmental-stress.md | pombase/Website | c10db0c7a078681407f544c01b8932feb0ea706d | [
"MIT"
] | 1 | 2017-03-22T10:59:53.000Z | 2017-03-22T10:59:53.000Z | ### Global transcriptional responses of fission yeast to environmental stress
Chen D, Toone WM, Mata J, Lyne R, Burns G, Kivinen K, Brazma A, Jones N,
Bähler J. Mol Biol Cell. 2003 Jan;14(1):214-29.
[PMID:12529438](http://www.ncbi.nlm.nih.gov/pubmed?term=12529438)
| 44.333333 | 77 | 0.740602 | kor_Hang | 0.243597 |
226bdf3ff19b874aa9ea8c74af7d0c577c5dcc0e | 1,192 | md | Markdown | README.md | ThibaultJanBeyer/humanitarianhackathon | 0aa93e2f885cbc4f47a09cfdb8b263b5864551e5 | [
"MIT"
] | 2 | 2019-06-15T21:16:27.000Z | 2019-06-16T10:17:20.000Z | README.md | ThibaultJanBeyer/humanitarianhackathon | 0aa93e2f885cbc4f47a09cfdb8b263b5864551e5 | [
"MIT"
] | 1 | 2022-02-12T12:00:33.000Z | 2022-02-12T12:00:33.000Z | README.md | ThibaultJanBeyer/humanitarianhackathon | 0aa93e2f885cbc4f47a09cfdb8b263b5864551e5 | [
"MIT"
] | null | null | null | # Facebook's Developer Circle Humanitarian Hackathon
Task and support by **WWF**.
Team DataCats 2019
- **We actually won the hackathon**
- Done in ~24h from concept to app to presentation
- More info on the hackathon can be found here: https://humanitarian-hackathon-w-wwf.devpost.com/ (probably see all submissions??)
- Find the map code in the `/docs` folder.
- Find the web-scraping code in the `/datagathering` folder
- Find all extra infos in the `/meta` folder.
- Like our [Presentation](meta/DataCatsPitch.pdf)
- Like the [Hackathon Opening PDF](meta/opening.pdf)
- Like the [WWF Presentation](meta/WWF_Berlin_Hackathon_140619.pdf) of the Hackathon Challenges
- Like a [Résumé of the Presentations](meta/Humanitarian-Hackathon-Resume.pdf)
- Like our [Research Pastebin](meta/[DataCats]InformationDocument.pdf)
(Private Drive: https://drive.google.com/drive/folders/15EHVnDUJPzezKSVu7DRf3MiNqqAerNp7)
(Paid solution: https://www.chainpoint.com/solutions/tracemap/)
# Try it:
- By opening: https://thibaultjanbeyer.github.io/humanitarianhackathon/
- Or test it locally by running a static file server inside `/docs`
- code is hacky di hack rubberband ducktaping hack
| 44.148148 | 130 | 0.766779 | eng_Latn | 0.463861 |
226bf4ce45bb2c81f2b880dfd2a6c6637dd1be85 | 1,324 | md | Markdown | docs/build/precedence-in-macro-definitions.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/precedence-in-macro-definitions.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/precedence-in-macro-definitions.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Precedence in macro definitions
ms.date: 11/04/2016
helpviewer_keywords:
- NMAKE program, precedence in macro definitions
- macros, precedence
ms.assetid: 0c13182d-83cb-4cbd-af2d-f4c916b62aeb
ms.openlocfilehash: 8829b3cdbc7b2324ef3d118f8ca45dd2a1621e7e
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/31/2018
ms.locfileid: "50619090"
---
# <a name="precedence-in-macro-definitions"></a>Prioridad en las definiciones de macro
Si una macro tiene varias definiciones, NMAKE utiliza la definición de la prioridad más alta. En la lista siguiente se muestra el orden de prioridad, de mayor a menor:
1. Una macro definida en la línea de comandos
1. Una macro definida en un archivo MAKE o incluir archivo
1. Una macro de variables de entorno heredada
1. Una macro definida en el archivo Tools.ini
1. Una macro predefinida, como [CC](../build/command-macros-and-options-macros.md) y [AS](../build/command-macros-and-options-macros.md)
Use /E para hacer que las macros heredadas de variables de entorno para invalidar las macros de archivos MAKE con el mismo nombre. Use **! UNDEF** para invalidar una línea de comandos.
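A tiny sketch of the interaction (hypothetical makefile, NMAKE comment syntax):
```
# Assume PATHDIR is also defined in the environment.
PATHDIR = .\bin    # this makefile definition normally wins over the environment;
                   # running `nmake /E` makes the environment value win instead

!UNDEF PATHDIR     # !UNDEF removes a definition, even one given on the command line
```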
## <a name="see-also"></a>Vea también
[Definición de una macro NMAKE](../build/defining-an-nmake-macro.md) | 40.121212 | 184 | 0.784743 | spa_Latn | 0.93684 |
226cbe23a470cc718ed25311db5b1932543739a8 | 644 | md | Markdown | CHANGELOG.md | Olega95/flutter_health | d29d1b6a34bb927d55ac3251402c70c1e2aed622 | [
"MIT"
] | null | null | null | CHANGELOG.md | Olega95/flutter_health | d29d1b6a34bb927d55ac3251402c70c1e2aed622 | [
"MIT"
] | null | null | null | CHANGELOG.md | Olega95/flutter_health | d29d1b6a34bb927d55ac3251402c70c1e2aed622 | [
"MIT"
] | null | null | null | ## 1.2.3
Adding Weight
## 1.2.1
Updating google fit and auth in gradle
## 1.2.0
Provide compatibility with Gradle 5.x
## 1.1.9
Apple health kit for flutter (Flutter Health) initial release
You can use it to get any of the following
Tested:
* bodyFatPercentage
* height
* bodyMassIndex
* waistCircumference
* stepCount
* basalEnergyBurned
* activeEnergyBurned
* heartRate
* restingHeartRate
* walkingHeartRateAverage
* bodyTemperature
* bloodPressureSystolic
* bloodPressureDiastolic
* oxygenSaturation
* bloodGlucose
* electrodermalActivity
Could not be tested:
* highHeartRateEvent
* lowHeartRateEvent
* irregularHeartRhythmEvent
| 16.1 | 61 | 0.787267 | eng_Latn | 0.881634 |
226d18df8b0466c5174523d5ededb09f327719dd | 706 | md | Markdown | README.md | chrisdickinson/iterables-any | 5ba12b2b304b188ba8d1f0d38f19cd7581379a5f | [
"MIT"
] | 3 | 2017-04-10T07:50:31.000Z | 2019-06-13T16:21:26.000Z | README.md | chrisdickinson/iterables-any | 5ba12b2b304b188ba8d1f0d38f19cd7581379a5f | [
"MIT"
] | null | null | null | README.md | chrisdickinson/iterables-any | 5ba12b2b304b188ba8d1f0d38f19cd7581379a5f | [
"MIT"
] | null | null | null | # @iterables/any
Return true if any element of an iterable matches.
```javascript
const any = require('@iterables/any')
any('abc', xs => xs === 'c') // true
any('abc', xs => xs === 'd') // false
any([null, false, '']) // false
any([null, false, {}]) // true
```
## Installation
```
$ npm install --save @iterables/any
```
## API
### `any(iterable, test = Boolean) -> Boolean`
* `iterable`: any iterable — a generator instance, `Array`, `Map`, `String`, or `Set`
* `test`: A function taking `xs` and returning a boolean value.
Returns `true` if any element matched `test`, or `false` if no element matched.
Stops consuming elements from `iterable` as soon as they pass `test`.
## License
MIT
| 20.764706 | 87 | 0.637394 | eng_Latn | 0.800955 |
226dc63ec03dc3458ca6e4ea3e574d60c3ab422a | 509 | md | Markdown | pages/persevere.md | danielrehner/developer-guidebook | 09c20afcab7a6f92e09500e421cb74ad7c1212b3 | [
"Apache-2.0"
] | 15 | 2018-10-17T19:00:35.000Z | 2021-12-09T15:51:40.000Z | pages/persevere.md | danielrehner/developer-guidebook | 09c20afcab7a6f92e09500e421cb74ad7c1212b3 | [
"Apache-2.0"
] | null | null | null | pages/persevere.md | danielrehner/developer-guidebook | 09c20afcab7a6f92e09500e421cb74ad7c1212b3 | [
"Apache-2.0"
] | 3 | 2018-10-16T17:13:31.000Z | 2021-06-01T06:39:52.000Z | # Persevere.
```
Anything significant will be harder than you think.
Keep going.
Anything significant will take longer than you think.
Be patient with the months and years.
Things will get more challenging.
Move swiftly through tasks each day.
You will make mistakes.
Embrace it.
Other people will make mistakes.
Roll with it.
Never stop doing.
Never stop learning.
Never stop producing.
Never stop improving.
And don't forget: never stop caring.
``` | 15.90625 | 53 | 0.695481 | eng_Latn | 0.99608 |
226f1d22fdaeb7e341907f122ff8c8a356c425ce | 827 | md | Markdown | CHANGELOG.md | hanaori/json_world | fba50f8ddd059b68cbe48d9c0e8860861c0d78ca | [
"MIT"
] | null | null | null | CHANGELOG.md | hanaori/json_world | fba50f8ddd059b68cbe48d9c0e8860861c0d78ca | [
"MIT"
] | null | null | null | CHANGELOG.md | hanaori/json_world | fba50f8ddd059b68cbe48d9c0e8860861c0d78ca | [
"MIT"
] | null | null | null | ## 0.2.5
- Remove unnecessary dependency on json
## 0.2.4
- Support `$schema` property
## 0.2.3
- Add :target_schema option on link definition
## 0.2.2
- Support :media_type option on link definition
## 0.2.1
- Provides raw_options method to reuse args for property definition
## 0.2.0
- Support :links option on embedding other resource as property
## 0.1.4
- Rename as_json_schema_without_link with as_json_schema_without_links
## 0.1.3
- Support boolean type on property definition
## 0.1.2
- Remove links property from embedded object
- Fix bug that links are unexpectedly shared by other classes
## 0.1.1
- Support embedding other JSON-Schema compatible object via type
- Support rel property on LinkDefinition
## 0.1.0
- Rename included module name: PropertyDefinable -> DSL
## 0.0.1
- 1st Release
| 16.54 | 70 | 0.735187 | eng_Latn | 0.981702 |
22700960bdeb7579f60482b095cc884f692fd890 | 6,474 | md | Markdown | README.md | pinjollist/next-with-analytics | 1a8c2c24df5e79a5e7bf1bf56e6dc52084baa105 | [
"MIT"
] | 19 | 2019-06-26T13:06:14.000Z | 2020-01-02T04:09:03.000Z | README.md | pinjollist/next-with-analytics | 1a8c2c24df5e79a5e7bf1bf56e6dc52084baa105 | [
"MIT"
] | 47 | 2019-06-27T10:42:33.000Z | 2020-01-13T08:09:46.000Z | README.md | pinjollist/next-with-analytics | 1a8c2c24df5e79a5e7bf1bf56e6dc52084baa105 | [
"MIT"
] | 1 | 2020-10-22T09:28:57.000Z | 2020-10-22T09:28:57.000Z | # next-with-analytics


> Minimalistic analytics helper for Next.js which respects Do Not Track (DNT) headers.
`next-with-analytics` is a tiny HOC that wraps a Next.js application with `react-ga` and some useful analytics helpers. It also allows you to respect Do Not Track (DNT) settings for visitors that have them enabled.
Other features include:
- Anonymize IP support
- Debug mode
## Installation
```bash
# yarn
yarn add @pinjollist/next-with-analytics
# npm
npm install --save @pinjollist/next-with-analytics
```
## Usage
### Initialising analytics
The `initAnalytics` function will create an [analytics instance](#analytics-instance) with any [options](#options) passed into it. To use it, create a custom `_app` page with the `initAnalytics()` called inside it. You can now use the analytics instance to create a route event listener for tracking pageviews, as well as passing [analytics helper functions](#using-analytics-helpers) into all pages as a prop.
```jsx
// pages/_app.js
import App from 'next/app';
import Router from 'next/router';
import { initAnalytics } from '@pinjollist/next-with-analytics';
const options = {
trackingCode: process.env.GOOGLE_ANALYTICS,
respectDNT: true,
};
const analyticsInstance = initAnalytics(options);
class MyApp extends App {
componentDidMount() {
const { handleRouteChange } = analyticsInstance;
// Enable tracking page views for route change
Router.events.on('routeChangeComplete', handleRouteChange);
}
componentWillUnmount() {
const { handleRouteChange } = analyticsInstance;
// Disable tracking page views for route change before unmount
Router.events.off('routeChangeComplete', handleRouteChange);
}
render() {
const { Component, pageProps } = this.props;
// Add the Analytics helpers to all pages.
const { analytics } = analyticsInstance;
return (
<Container>
<Component analytics={analytics} {...pageProps} />
</Container>
);
}
}
```
### `withAnalytics` HOC
You can also use the `withAnalytics` HOC to easily wrap your entire app with `next-with-analytics`. To use it, wrap the `_app` page with the HOC, passing the `next/router` instance, and some preferred options.
**IMPORTANT:** Note that on Next.js 9 and above, this will disable the [Automatic Static Optimization](https://nextjs.org/docs#automatic-prerendering) feature, since it will also try to modify `getInitialProps` in all pages. This HOC remains available to ensure backwards compatibility with Next.js 8.
```jsx
// pages/_app.js
import App from 'next/app';
import Router from 'next/router';
import { withAnalytics } from '@pinjollist/next-with-analytics';
const options = {
trackingCode: process.env.GOOGLE_ANALYTICS,
respectDNT: true,
};
const nextWithAnalytics = withAnalytics(Router, options);
class MyApp extends App {
static async getInitialProps({ Component, ctx }) {
let pageProps = {};
if (Component.getInitialProps) {
pageProps = await Component.getInitialProps(ctx);
}
return { pageProps };
}
render() {
const { Component, pageProps, analytics } = this.props;
return (
<Container>
<Component analytics={analytics} {...pageProps} />
</Container>
);
}
}
export default nextWithAnalytics(MyApp);
```
### Analytics Instance
The `initAnalytics()` function creates an analytics instance with any [options](#options) passed into it. It returns an `AnalyticsInstance` object, which contains the following:
#### `analytics?: AnalyticsHelpers`
An object containing all analytics helper functions. See [Using Analytics helpers](#using-analytics-helpers) to learn more about these helper functions.
#### `handleRouteChange: () => void`
Used to handle analytics pageview tracking after route change. Use it with [Router Events](https://nextjs.org/docs#router-events):
```jsx
class MyApp extends App {
// ...
componentDidMount() {
const { handleRouteChange } = analyticsInstance;
// Enable tracking page views for route change
Router.events.on('routeChangeComplete', handleRouteChange);
}
componentWillUnmount() {
const { handleRouteChange } = analyticsInstance;
// Disable tracking page views for route change before unmount
Router.events.off('routeChangeComplete', handleRouteChange);
}
}
```
### Options
#### `trackingCode?: string`
The Google Analytics tracking code of your website. Default: `''`
#### `respectDNT?: boolean`
Set to `true` to make the module respect Do Not Track (DNT). This will prevent `react-ga` to be initialised if your browser has DNT enabled. Default: `false`
#### `anonymizeIp?: boolean`
Set to `true` to enable [\_anonymizeIp](https://support.google.com/analytics/answer/2763052) in Google Analytics, an option which is often important for legal reasons (e.g. GDPR)
## Using Analytics helpers
You can add the provided Analytics helpers directly into the `analytics` prop in the Next.js `_app` page. Typings for TypeScript projects are also available as `WithAnalyticsState`.
```jsx
import Link from 'next/link';
function IndexPage({ analytics }) {
const handleClick = () => {
if (analytics) {
analytics.event('Button Click', 'Signup to Platform');
}
};
return (
<Layout>
<Container>
<h1>Try out our platform!</h1>
<Link href="/signup" onClick={handleClick}>
<a>Signup Now</a>
</Link>
</Container>
</Layout>
);
}
```
### List of Helpers
#### init
`function init(trackingCode?: string | undefined): void`
Initializes Next.js analytics.
#### pageview
`function pageview(): void`
Tracks pageview. Can be called e.g. on router location change.
#### event
`function event(category?: string, action?: string): void`
Sends a Google Analytics [event](https://developers.google.com/analytics/devguides/collection/analyticsjs/events).
#### exception
`function exception(description?: string, fatal?: boolean): void`
Sends a Google Analytics [exception event](https://developers.google.com/analytics/devguides/collection/analyticsjs/exceptions) for tracking exceptions.
## Contributing
Contributions and Pull Requests welcome! Please read the [Contributing Guidelines](CONTRIBUTING.md) beforehand.
## License
[MIT](LICENSE) © Resi Respati.
| 29.427273 | 410 | 0.720729 | eng_Latn | 0.8356 |
22710685ff75504873a58ac90c9b8b0fcd5fd9fb | 8,691 | md | Markdown | _posts/2019-02-21-real-time-fraud-detection.md | fogfish/fogfish.github.io | 8153c9d379fb61417478a4c8825d3c1c1d073f3e | [
"MIT"
] | null | null | null | _posts/2019-02-21-real-time-fraud-detection.md | fogfish/fogfish.github.io | 8153c9d379fb61417478a4c8825d3c1c1d073f3e | [
"MIT"
] | 2 | 2019-07-24T20:00:24.000Z | 2020-03-01T18:08:47.000Z | _posts/2019-02-21-real-time-fraud-detection.md | fogfish/fogfish.github.io | 8153c9d379fb61417478a4c8825d3c1c1d073f3e | [
"MIT"
] | null | null | null | ---
layout: post
title: The real-time fraud detection
description: |
The real-time fraud detection is about understanding consumer transactions, deducing relevant knowledge, and accumulating a consumer profile as a set of feature vectors.
tags:
- thoughts
- analytics
- FinTech
- fraud
- datalog
- open search
- open source
---
# The real-time fraud detection
Recently, I had the luxury of brainstorming my [real-time consumer engagement](/2015/03/17/real-time-consumer-engagement.html) ideas with a FinTech expert. We talked about methods of fraud detection and the applicability of e-commerce techniques. Anyone can apply various statistical and machine learning approaches to study consumer behavior. One of the fundamental differences in real-time data analysis is that it reacts to each individual transaction, which accumulates over time and shapes the decision about consumer behavior.
To rephrase: consumer behavior analysis in FinTech is all about knowing consumers' transactions, getting to know consumer peers, and spotting trends emerging from consumers, banks, and other service providers.
A high-resolution, real-time behavioral analysis is an opportunity to enhance the traditional ETL analytics portfolio, which opens new opportunities for Financial Intelligence and Crime Analysis.
The successful implementation of real-time behavior analysis depends on the ability to understand a consumer transaction, deduce the relevant knowledge, and accumulate a consumer profile as a set of feature vectors. These vectors formalize consumer behavior in a format suitable for and understandable by machines. For example, consumer transactions are a collection of immutable facts that depict financial flows and relations between peers. Let's consider a simple scenario.
| Account | Statement | Amount | Destination |
|---|---|---|---|
| 8345 | Debit | 4342 | US |
| 3138 | Debit | 5852 | UK |
| ... | ... | ... | ... |
| 8345 | Credit | 2389 | FI |
Each account is a node with a finite set of edges, where each edge represents a flow of transactions between peers. Anyone can formalize these flows using statistical methods, e.g. by expressing them with an appropriate statistical distribution. (One remark here: the telecom domain often uses the Poisson distribution to abstract flows.) Once appropriate distributions are fixed for FinTech, the definition of consumer behavior becomes a routine process: the collection of these distributions forms the genome. Similar consumer behavior is reflected in similar genomes, which are later assessed using pattern-matching techniques.
{: .lh-0 }
```
+-----------> A +---------+
+ v
US +----------------+ FI <-------+ C
|
^ v ^
| |
+----+ B +--------------> UK +---------+
```
Genome discovery is a continuous process that accumulates knowledge over time. Initially, we know nothing about a consumer's behavior; every transaction or behavioral signal gives us a clue about a link, and these clues accumulate into the feature vector.
We can apply the real-time analysis technique to the example transaction dataset. It requires provisioning the architecture solution discussed in a [previous post](/2015/03/17/real-time-consumer-engagement.html).
The proof of concept drafts the high-level architecture solution discussed above. We are going to focus on three elements: the part that builds feature vectors from the transaction log, the semantic storage, and the Datalog data-access interface used to deduce fraud. The proof of concept is based on prior-art open source projects: [datalog](https://github.com/fogfish/datalog), [elasticlog](https://github.com/fogfish/elastic) and [hercule](https://github.com/fogfish/hercule).
## Genome schema
We use a naive approach to model the distribution of financial traffic; in real life the feature vector consists of hundreds of parameters. It captures the expectation (E), standard deviation (SD), total traffic volume, and number of transactions (N). The table below shows an example genome.
| Link | E | SD | Volume | N |
|---|---|---|---|---|
| A → FI | 6974 | 836 | 13457 | 5 |
| A ← US | 7542 | 928 | 23173 | 3 |
| B → US | 4231 | 652 | 17263 | 6 |
| B → UK | 8272 | 435 | 13232 | 3 |
| B ← FI | 1231 | 321 | 18625 | 2 |
The solution uses Elasticsearch (eventually we could use any big-data storage). Elastic has a few benefits: it scales to store a billion genomes, it provides efficient real-time data analytics, and there is open-source tooling to simplify data mining via Datalog. We model the genome as a set of documents, where each document depicts the credit/debit flows between an account and its counterparts in some country. An example feature vector looks like:
```javascript
{
"account": "1007",
"country": "SE",
"id": "SE:1007",
"credit":
{
"e": 5071,
"n": 3,
"sd": 352.09799772222505,
"vol": 15213
},
"debit":
{
"sd": 240.41630560342617,
"vol": 10702,
"e": 5351,
"n": 2
}
}
```
The construction of the genome is an elementary ETL activity based on pure computation: it transforms the log of transactions into a collection of feature vectors, which are then stored to Elastic.
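As an illustration, a minimal sketch of that computation (assuming transactions arrive as `(account, statement, amount, country)` tuples; the helper names and the in-memory accumulator are mine, not the project's):
```python
import statistics
from collections import defaultdict

# per-link accumulator: (account, country, direction) -> list of amounts
flows = defaultdict(list)

def consume(account, statement, amount, country):
    """Accumulate a single transaction into its flow."""
    direction = 'debit' if statement == 'Debit' else 'credit'
    flows[(account, country, direction)].append(amount)

def genome():
    """Fold the accumulated flows into feature vectors, one document per link."""
    docs = defaultdict(dict)
    for (account, country, direction), amounts in flows.items():
        docs[(account, country)][direction] = {
            'e': round(statistics.mean(amounts)),   # expectation
            'sd': statistics.pstdev(amounts),       # standard deviation
            'vol': sum(amounts),                    # total volume
            'n': len(amounts),                      # number of transactions
        }
    return [
        {'id': f'{country}:{account}', 'account': account, 'country': country, **features}
        for (account, country), features in docs.items()
    ]

consume('8345', 'Debit', 4342, 'US')
consume('8345', 'Debit', 5852, 'US')
print(genome())  # one document per link, ready to be indexed into Elastic
```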
## Scam analysis
Once real-time data collection is established, we use pattern-matching techniques to discover contradictory customer behavior. As part of these techniques, we have formulated a pattern (a feature vector) that corresponds to scam activities, e.g.
1. Traffic expectation is much higher than typical expectation in the group
2. High volume of the traffic
3. High number of transactions
### Hypothesis No.1
Analysis of traffic expectation per transaction helps us understand whether the average behavior of groups differs. Looking at the CDF (99th percentile) of expected behavior, we are able to detect regions where some accounts have a higher credit/debit volume than accounts in other areas.
```
?- amount/3.
features:country(category 5 "key" "country").
features:debit("country", percentiles "99.0" "debit.e").
features:credit("country", percentiles "99.0" "credit.e").
amount(country, debit, credit) :-
features:country(country),
features:debit(country, debit),
features:credit(country, credit).
```
Using a pattern-matching technique, we can discover accounts with expectations higher than the 99th percentile.
```
?- scam/4.
features:debit("country", percentiles "99.0" "debit.e").
features:account("country", "account", "debit.e", "debit.vol", "debit.n").
scam(account, debit, volume, transactions) :-
features:debit(country, pth),
features:account(country, account, debit, volume, transactions),
country = "...",
debit > pth.
```
```
?- scam/4.
features:credit("country", percentiles "99.0" "credit.e").
features:account("country", "account", "credit.e", "credit.vol", "credit.n").
scam(account, credit, volume, transactions) :-
features:credit(country, pth),
features:account(country, account, credit, volume, transactions),
country = "...",
credit > pth.
```
### Hypothesis No.2 and No.3
Analysis of the volume distribution per account helps us identify outliers who transfer large quantities. We run the analysis independently for credit and debit.
```
?- volume/2.
features:country(category 5 "key" "country").
features:debit("country", stats "debit.vol").
volume(country, debit) :-
features:country(country),
features:debit(country, debit).
```
```
?- volume/2.
features:country(category 5 "key" "country").
features:credit("country", stats "credit.vol").
volume(country, credit) :-
features:country(country),
features:credit(country, credit).
```
The outcome of the analysis gives a clear hint to investigate accounts that transferred a higher volume than others.
```
?- scam/5.
features:credit("country", "account", "credit.e", "credit.vol", "credit.n").
scam(country, account, e, vol, n) :-
features:credit(country, account, e, vol, n), vol > ... .
```
```
?- scam/5.
features:debit("country", "account", "debit.e", "debit.vol", "debit.n").
scam(country, account, e, vol, n) :-
features:debit(country, account, e, vol, n), vol > ... .
```
## Conclusion
The discussed technique is nothing more than the traditional Event Sourcing pattern with subsequent data enrichment and deduction of consumer behavior. The solution can be implemented with whatever storage technique you prefer. Graphs help to model complex behavioral patterns, and graph databases help to pattern-match your insights efficiently. The use of Datalog in this particular example is artificial; it was done purely for my own education.
| 42.812808 | 606 | 0.725808 | eng_Latn | 0.98415 |
2271c0d119fe5a60d788df7a4914500b2e209011 | 316 | md | Markdown | _includes/05-emphasis.md | lelandai/markdown-portfolio | b30cfab4deaee9c18a8406f8b9f5727aedfdedca | [
"MIT"
] | null | null | null | _includes/05-emphasis.md | lelandai/markdown-portfolio | b30cfab4deaee9c18a8406f8b9f5727aedfdedca | [
"MIT"
] | 5 | 2021-05-06T08:40:37.000Z | 2021-05-06T10:32:45.000Z | _includes/05-emphasis.md | lelandai/markdown-portfolio | b30cfab4deaee9c18a8406f8b9f5727aedfdedca | [
"MIT"
] | null | null | null | *Write out some of your awesome attributes*
_and use emphasis_ (**like bold or italics**) __to identify keywords__, programming languages, or skills.
*This text will be italic*
_This will also be italic_
**This text will be bold**
__This will also be bold__
_You **can** combine them_
You **can** combine them
✨
| 24.307692 | 106 | 0.746835 | eng_Latn | 0.996337 |
22736e837f35538035f7d71326f8d7f48af56a69 | 197 | md | Markdown | Library/Executor/Readme.md | cschladetsch/Diver | 5a2e5065bd29b1055771a0dd85421b205cf075d7 | [
"MIT"
] | 3 | 2019-01-10T12:51:12.000Z | 2019-08-28T16:15:22.000Z | Library/Executor/Readme.md | cschladetsch/Pyro | 5a2e5065bd29b1055771a0dd85421b205cf075d7 | [
"MIT"
] | 24 | 2019-03-20T12:02:45.000Z | 2019-11-24T11:14:00.000Z | Library/Executor/Readme.md | cschladetsch/Diver | 5a2e5065bd29b1055771a0dd85421b205cf075d7 | [
"MIT"
] | 2 | 2019-05-21T08:20:44.000Z | 2021-08-05T00:36:41.000Z | # Executor
A virtual machine with two distinct stacks:
* `Data stack`. This is familiar and holds arguments for and results of *operations*
* `Context stack`. This is a stack of *Continuations*.
| 32.833333 | 84 | 0.751269 | eng_Latn | 0.988019 |
227412dc72f2c47228ded8660f857d8859a1b644 | 184 | md | Markdown | README.md | AlexanderFrancoletti/BeyondTheGrave | 9ec1e8a92621f8c232f6bd4874118fa53ab7d2e8 | [
"MIT"
] | null | null | null | README.md | AlexanderFrancoletti/BeyondTheGrave | 9ec1e8a92621f8c232f6bd4874118fa53ab7d2e8 | [
"MIT"
] | null | null | null | README.md | AlexanderFrancoletti/BeyondTheGrave | 9ec1e8a92621f8c232f6bd4874118fa53ab7d2e8 | [
"MIT"
] | null | null | null | # BeyondTheGrave
2D fighting game project for the Rensselaer Center for Open Source.
Project Proposal
https://docs.google.com/document/d/1eyF3mJ-cFKO2sFoHymUO0X75T27AEw7hUJ4L7KHrZ28
| 26.285714 | 79 | 0.842391 | kor_Hang | 0.339232 |
2274afefeb319e003f7c92ec8e4c5b055452e891 | 4,299 | md | Markdown | _posts/2021-01-12-facebookgroupify.md | JackOHara/jackohara.github.io | 6323860fa61620915c0b6e60741d36f0838cbb4a | [
"MIT"
] | null | null | null | _posts/2021-01-12-facebookgroupify.md | JackOHara/jackohara.github.io | 6323860fa61620915c0b6e60741d36f0838cbb4a | [
"MIT"
] | null | null | null | _posts/2021-01-12-facebookgroupify.md | JackOHara/jackohara.github.io | 6323860fa61620915c0b6e60741d36f0838cbb4a | [
"MIT"
] | null | null | null | ---
layout: post
title: Facebook Groupify
---
This is a side project I've been working on occasionally in my spare time. Facebook Groups are often a wonderful place for finding new music. There are many niche genre- or location-specific music groups that are goldmines. I regularly go through the process of opening a music group on Facebook, scrolling through the page, and clicking on posts that link to YouTube, Spotify, Soundcloud, Bandcamp, and other streaming sites. I give the songs a listen, and if I like one I add it to my Spotify. It would be nice if all of these songs were already in Spotify, so I could just listen with no interruption of having to click on the next link.
So I've created what I dub FacebookGroupify (Facebook Groups + Spotify) to bridge this gap.
# What is it?
A Facebook group and a Spotify playlist are specified by the user.
FacebookGroupify retrieves the YouTube and Spotify links that have been posted in the Facebook group.
If a song exists on Spotify, it's added to the playlist.
This is repeated daily to keep the playlists continuously fresh.
# How Does it Work?

## Facebook Scraper
Puppeteer is used to load Facebook in a headless browser and log in using username+password or a saved session from a previous login. It navigates to the Facebook group and scrolls endlessly until it either reaches the end of the page or exceeds a time limit. All links are scraped from the page and saved to S3.
## Link ID Extract
A Lambda is triggered that loads the scraped links and extracts YouTube, Spotify, Soundcloud, and Bandcamp IDs from them; the IDs are then saved to S3.
## Youtube Metadata Fetcher
Another Lambda is triggered, this time fetching metadata about the YouTube IDs. The most valuable piece of information here is the YouTube title, which is used later. The metadata is saved to S3.
## Spotify Song Matcher
The metadata saved to S3 triggers this Lambda, which attempts to match the song titles found with songs that exist on Spotify. Not every song exists on Spotify, and YouTube titles sometimes differ from Spotify titles. The song matcher cleans the titles to remove irrelevant text often seen in YouTube titles, such as "(Official Video)" or "(Music Video)". The cleaned titles are used to search for songs on Spotify, and the similarity of the two titles is calculated. The best-matching Spotify songs have their IDs saved to S3; the similarity must exceed a set threshold.
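A minimal sketch of the cleaning and matching logic (the regex patterns, the token-overlap similarity, and the threshold value are illustrative assumptions; the real matcher may use different heuristics):
```javascript
// Strip decorations commonly found in YouTube music titles.
function cleanTitle(title) {
  return title
    .replace(/[([](official\s+)?(music\s+)?(video|audio|lyric(s)?( video)?)[)\]]/gi, '')
    .replace(/\s+/g, ' ')
    .trim()
    .toLowerCase();
}

// Token-overlap (Jaccard) similarity between two titles, in [0, 1].
function similarity(a, b) {
  const ta = new Set(cleanTitle(a).split(' '));
  const tb = new Set(cleanTitle(b).split(' '));
  const shared = [...ta].filter(t => tb.has(t)).length;
  return shared / new Set([...ta, ...tb]).size;
}

const THRESHOLD = 0.8; // assumed cutoff

// Pick the best Spotify search result, or null if nothing is close enough.
function bestMatch(youtubeTitle, spotifyResults) {
  const scored = spotifyResults
    .map(track => ({ track, score: similarity(youtubeTitle, track.name) }))
    .sort((x, y) => y.score - x.score);
  return scored.length && scored[0].score >= THRESHOLD ? scored[0].track : null;
}
```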
## Spotify Playlist Updater
The Spotify IDs saved to S3 trigger a Lambda that inserts the songs into the specified playlist. Spotify's API does not provide functionality to prevent duplicates, so instead the app saves the key-value pair of song ID -> playlist ID to DynamoDB, which allows us to avoid inserting the same song into the same playlist again in the future.
# What’s next?
- The biggest bottleneck is the Facebook scraper. To scrape new Facebook groups I personally must be a member of the group. My Facebook account is a single point of failure in the system. If I get locked out then the application fails. The joining of a Facebook Group could be automated through Chromium but it is not a desirable way to do this. Web crawling/scraping is fidgety and websites change often meaning that it can not reliably be considered a repeatable method. Facebook does have an API which allows group administrators to access group posts. This looks to be the most promising method. Solving the Facebook scraper using the API would allow the application to move towards a self-serving manner where users do not need technical knowledge to run the app.
- Improve the quality of the project (there’s a lack of testing as this started out as a hack) so other people can confidently contribute.
- Add Soundcloud and Bandcamp integration. All of the Facebook music groups I am a member of are dominated by Youtube links however there are many gems often posted to other platforms. I’d like to see all content being collected and processed.
Keep up with the project <a href="https://github.com/JackOHara/facebookgroupify" target="_blank">here</a>.
Do you have a Facebook Group that you'd love to see in a Spotify playlist, updated daily? If so, get in touch and I'll get it running.
| 93.456522 | 769 | 0.794603 | eng_Latn | 0.999702 |
227652817f31ab0be70fc3afdfff6e51d29bacf8 | 281 | md | Markdown | README.md | ittedev/beako_cli | e8cc4c306040eb011a3d86b071b20624ef62a0b0 | [
"MIT"
] | null | null | null | README.md | ittedev/beako_cli | e8cc4c306040eb011a3d86b071b20624ef62a0b0 | [
"MIT"
] | null | null | null | README.md | ittedev/beako_cli | e8cc4c306040eb011a3d86b071b20624ef62a0b0 | [
"MIT"
] | null | null | null | # beako-cli
Build tools
## Install
```shell
deno install -fA https://deno.land/x/beako_cli/beako.ts
```
## Usage
Output a single file.
```shell
beako build mod.ts -o script.js
```
Output split files. (Default: `./dist`)
```shell
beako build script.ts --outdir=public
```
| 12.217391 | 55 | 0.672598 | eng_Latn | 0.219367 |
2276554ecdb90951c15e0c70bdd927af5c5a16c7 | 346 | md | Markdown | README.md | FantasyGold/goldswap-subgraph | 520aa33593586fc2608a9a77dbb63570859abc38 | [
"MIT"
] | null | null | null | README.md | FantasyGold/goldswap-subgraph | 520aa33593586fc2608a9a77dbb63570859abc38 | [
"MIT"
] | null | null | null | README.md | FantasyGold/goldswap-subgraph | 520aa33593586fc2608a9a77dbb63570859abc38 | [
"MIT"
] | null | null | null | # 📊 DefiGold / GoldSwap subgraph
Aims to deliver analytics & historical data for DefiGold and GoldSwap. Still a work in progress. Feel free to contribute!
Try querying it on [The Graph's Explorer](https://thegraph.com/explorer/subgraph/fantasygold/gold-swap).
Learn more about subgraphs at [The Graph](https://thegraph.com/).
## License
MIT
| 28.833333 | 121 | 0.763006 | eng_Latn | 0.730106 |
227681882ea09764c6c4668a0ac4f7bc71c7e306 | 869 | md | Markdown | _posts/2021/2021-01-09-leetcode-length-of-last-word.md | mistydew/mistydew.github.io | d00ff95c250d455a0f77d7c826a4dffcc1af2328 | [
"MIT"
] | 6 | 2018-09-06T02:31:09.000Z | 2020-12-16T02:00:46.000Z | _posts/2021/2021-01-09-leetcode-length-of-last-word.md | mistydew/mistydew.github.io | d00ff95c250d455a0f77d7c826a4dffcc1af2328 | [
"MIT"
] | null | null | null | _posts/2021/2021-01-09-leetcode-length-of-last-word.md | mistydew/mistydew.github.io | d00ff95c250d455a0f77d7c826a4dffcc1af2328 | [
"MIT"
] | 5 | 2018-06-30T19:56:32.000Z | 2020-01-06T01:47:17.000Z | ---
layout: post
title: "LeetCode 58. 最后一个单词的长度(简单)"
date: 2021-01-09 07:47:20 +0800
author: Coshin
comments: true
category: LeetCode Solutions
tags: LeetCode Easy String
---
> Given a string `s` consisting of words separated by spaces, return *the length of the last word in the string*.
> If the last word does not exist, return `0`.
>
> A **word** is a maximal substring consisting of letters only and containing no space characters.
>
> **Constraints:**
>
> * <code>1 <= s.length <= 10<sup>4</sup></code>
> * `s` consists of English letters and spaces `' '` only.
## Solution
### Approach 1: Traversal
```cpp
class Solution {
public:
    int lengthOfLastWord(string s) {
        int end = s.size() - 1;
        // skip trailing spaces
        while (end >= 0 && s[end] == ' ') end--;
        // the string contains only spaces: there is no last word
        if (end < 0) return 0;
        // walk left to the space just before the last word
        int begin = end;
        while (begin >= 0 && s[begin] != ' ') begin--;
        return end - begin;
    }
};
```
Complexity analysis:
* Time complexity: *O*(n),
where n is the combined length of the last word and the trailing spaces.
* Space complexity: *O*(1).
## References
* [Length of Last Word - LeetCode](https://leetcode.com/problems/length-of-last-word/){:target="_blank"}
| 18.891304 | 104 | 0.573072 | yue_Hant | 0.373215 |
2276cb6c068cb36ec580866bf6e3252713282132 | 2,478 | md | Markdown | docs/msbuild/assigntargetpath-task.md | doodz/visualstudio-docs.fr-fr | 49c7932ec7a761e4cd7c259a5772e5415253a7a5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/msbuild/assigntargetpath-task.md | doodz/visualstudio-docs.fr-fr | 49c7932ec7a761e4cd7c259a5772e5415253a7a5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-10-19T08:00:06.000Z | 2018-10-19T08:00:06.000Z | docs/msbuild/assigntargetpath-task.md | doodz/visualstudio-docs.fr-fr | 49c7932ec7a761e4cd7c259a5772e5415253a7a5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "AssignTargetPath, tâche | Microsoft Docs"
ms.custom:
ms.date: 11/04/2016
ms.reviewer:
ms.suite:
ms.technology: vs-ide-sdk
ms.tgt_pltfrm:
ms.topic: article
dev_langs:
- VB
- CSharp
- C++
- jsharp
ms.assetid: 0e830e31-3bcf-4259-b2a8-a5df49b92d51
caps.latest.revision: "4"
author: kempb
ms.author: kempb
manager: ghogen
ms.openlocfilehash: dac2cd390d6b7b706785d4807e659ca44620def9
ms.sourcegitcommit: f40311056ea0b4677efcca74a285dbb0ce0e7974
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/31/2017
---
# <a name="assigntargetpath-task"></a>Tâche AssignTargetPath
Cette tâche accepte une liste de fichiers et ajoute des attributs `<TargetPath>` s’ils ne sont pas déjà spécifiés.
## <a name="task-parameters"></a>Paramètres de tâche
Le tableau ci-dessous décrit les paramètres de la tâche `AssignTargetPath`.
|Paramètre|Description|
|---------------|-----------------|
|`RootFolder`|Paramètre d’entrée `string` facultatif.<br /><br /> Contient le chemin du dossier qui contient les liens de cible.|
|`Files`|Paramètre d’entrée <xref:Microsoft.Build.Framework.ITaskItem>`[]` facultatif.<br /><br /> Contient la liste de fichiers entrante.|
|`AssignedFiles`|Facultatif<br /><br /> Paramètre de sortie <xref:Microsoft.Build.Framework.ITaskItem> `[]`.<br /><br /> Contient la liste de fichiers résultante.|
## <a name="remarks"></a>Remarques
En plus des paramètres énumérés ci-dessus, cette tâche hérite des paramètres de la classe <xref:Microsoft.Build.Tasks.TaskExtension>, qui elle-même hérite de la classe <xref:Microsoft.Build.Utilities.Task>. Pour obtenir la liste de ces paramètres supplémentaires et leurs descriptions, consultez [TaskExtension Base Class](../msbuild/taskextension-base-class.md).
## <a name="example"></a>Exemple
L’exemple suivant exécute la tâche `AssignTargetPath` pour configurer un projet.
```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Target Name="MyProject">
<AssignTargetPath
RootFolder="Resources"
Files="@(ResourceFiles)"
<Output TaskParameter="AssignedFiles"
ItemName="OutAssignedFiles"/>
</AssignTargetPath>
</Target>
</Project>
```
## <a name="see-also"></a>Voir aussi
[Tâches MSBuild](../msbuild/msbuild-tasks.md)
[Task Reference (Informations de référence sur les tâches MSBuild)](../msbuild/msbuild-task-reference.md) | 42 | 366 | 0.708636 | fra_Latn | 0.522646 |
2277a7924dc8ac68f65a46cf44b402e3bf7e5780 | 504 | md | Markdown | README.md | SuatOzcan/UiPathRoboticProcessAutomation | eda31c526f25f9f4073cda8a3ac8589e95fc0b61 | [
"MIT"
] | null | null | null | README.md | SuatOzcan/UiPathRoboticProcessAutomation | eda31c526f25f9f4073cda8a3ac8589e95fc0b61 | [
"MIT"
] | null | null | null | README.md | SuatOzcan/UiPathRoboticProcessAutomation | eda31c526f25f9f4073cda8a3ac8589e95fc0b61 | [
"MIT"
] | null | null | null | # UiPathRoboticProcessAutomation
Automating computer tasks such as connecting to the web, typing keywords into the search box of a search engine, and then extracting data tables and writing them into our own data tables and Excel files on our computer. Other applications include searching PDF files and extracting text, locating a particular line via computer vision, and sending the data with automated e-mails. Processes can be queued in Orchestrator on a web server and run by robots on our computers.
| 168 | 470 | 0.833333 | eng_Latn | 0.999626 |