hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
52092bb9758f0b6f4298406dac17658e1c924b2f | 1,940 | md | Markdown | _pages/about.md | srappel/srappel.github.io | ba86f0ba93f780cf3a098b7388baefc0604acd3f | [
"MIT"
] | null | null | null | _pages/about.md | srappel/srappel.github.io | ba86f0ba93f780cf3a098b7388baefc0604acd3f | [
"MIT"
] | 1 | 2021-03-18T15:39:10.000Z | 2021-03-18T15:39:10.000Z | _pages/about.md | srappel/srappel.github.io | ba86f0ba93f780cf3a098b7388baefc0604acd3f | [
"MIT"
] | null | null | null | ---
permalink: /
title: "About Stephen"
excerpt: "Stephen Appel lives in Milwaukee, Wisconsin and is a GIS Librarian, hiker, coder, gamer, and gardener."
author_profile: true
redirect_from:
- /about/
- /about.html
---
Stephen Appel lives and works in Milwaukee, Wisconsin. He is a librarian, GIS expert, geographer, and coder, and he believes in not being a jerk. In his free time he's hanging out with his cat Sputnik, playing video games, hiking, biking, camping, cooking, or, most likely, getting distracted.
## Wisconsin Old Fashioned
* Muddle an orange slice, some bar cherries with syrup, and bitters in the bottom of a large glass
* Fill the glass with ice
* Add a large serving or 2 of Brandy (nothing fancy)
* Finish with a splash of 7-Up or Squirt
<br>
My Job
======
Stephen is the Geospatial Information Specialist at the [American Geographical Society Library](https://www.uwm.edu/libraries/agsl), a map and geography special collection at the [University of Wisconsin-Milwaukee Libraries](https://www.uwm.edu/libraries).
Somewhere at the intersection of Geographic Information Science, Information Literacy, and Digital Humanities, you will find Stephen using GIS, Geography, and spatial data to answer questions, help people, and understand our world.
You can check out his [official UWM profile page](https://uwm.edu/libraries/people/appel-stephen/).
Affiliations
======
* [NACIS](https://www.nacis.org) member since 2017
* Active in the [GeoBlacklight](https://geoblacklight.org/) and geo4lib community
* Certified [Library Carpentry](https://carpentries.org/) instructor
* Project member with the [Spanish Travelers Project at Marquette University](https://spanishtravelers.com)
* Advisor for [GIS Club at UWM](https://www.facebook.com/groups/31093391757/) and the Alpha Mu chapter of [Gamma Theta Upsilon](https://gammathetaupsilon.org/)
* Moderator of [/r/AcademicLibrarians](https://www.reddit.com/r/AcademicLibrarians/) | 40.416667 | 287 | 0.765464 | eng_Latn | 0.916952 |
5209482d145109cc9ae1330917607a95c5e9925e | 999 | md | Markdown | README.md | AALEKH/Redis-Mysql | 6ed1c2c315c7d2994716b1ebfa019a55b346a344 | [
"MIT"
] | null | null | null | README.md | AALEKH/Redis-Mysql | 6ed1c2c315c7d2994716b1ebfa019a55b346a344 | [
"MIT"
] | null | null | null | README.md | AALEKH/Redis-Mysql | 6ed1c2c315c7d2994716b1ebfa019a55b346a344 | [
"MIT"
] | null | null | null | # Redis-Mysql
Test Code for Redis interaction with C/C++.
## Start Redis Server
### Linux: sudo service redis_6379 start
### Mac: sudo redis-server /usr/local/etc/redis.conf
## To run example.c:
* gcc example.c -lhiredis -levent -std=c99
## To run allCluster.cpp:
* g++ allCluster.cpp -std=gnu++11 -lhiredis -levent -o atg
* ./atg
## To run redis.c
* gcc redis.c -lhiredis -levent -std=c99
## Dependencies
* [Hiredis](https://github.com/redis/hiredis)
This library provides basic support for using Redis from C/C++. **example.c** and **redis.c** are the two files that demonstrate working with Redis from C.
* [cpp-hiredis-cluster](https://github.com/AALEKH/cpp-hiredis-cluster.git)
This library (coupled with hiredis) is required for Redis Cluster support in C/C++. The **allCluster.cpp** file demonstrates a simple Redis Cluster client.
* [Redis](http://redis.io)
## There is no Installation of any libraries required, just get redis server started and run these sample codes.
| 35.678571 | 167 | 0.724725 | eng_Latn | 0.710023 |
5209780043239dfa3a196f52cce45cbf630bf7cc | 1,961 | md | Markdown | docs/reconnaissance/discovery-probing.md | hoppymalt/JAM_Session | 0f6f8e4fe2e755e5b10c89d5bd047dff1808bade | [
"MIT"
] | null | null | null | docs/reconnaissance/discovery-probing.md | hoppymalt/JAM_Session | 0f6f8e4fe2e755e5b10c89d5bd047dff1808bade | [
"MIT"
] | null | null | null | docs/reconnaissance/discovery-probing.md | hoppymalt/JAM_Session | 0f6f8e4fe2e755e5b10c89d5bd047dff1808bade | [
"MIT"
] | null | null | null | ---
layout: default
title: Discovering and Probing
parent: Reconnaissance
nav_order: 2
---
# Discovering and Probing
{: .no_toc }
## Table of contents
{: .no_toc .text-delta }
1. TOC
{:toc}
---
## OSINT
* [https://osintframework.com/](https://osintframework.com/)
* [https://psbdmp.ws/](https://psbdmp.ws/) (Pastebin search)
## Discovering hosts
### ICMP
Send a single ICMP echo request to a host
```
ping -c 1 <IP>
```
Send echo requests to ranges
```
fping -g 10.10.10.0/24
```
Send echo, timestamp requests and subnet mask requests
```
nmap -PEPM -sP -n 10.10.10.0/24
```
### Port Discovery
TCP
```
masscan -p20,21-23,25,53,80,110,111,135,139,143,443,445,993,995,1723,3306,3389,5900,8080 10.10.10.0/24
```
```
nmap -p20,21-23,25,53,80,110,111,135,139,143,443,445,993,995,1723,3306,3389,5900,8080 10.10.10.0/24
```
UDP
```
nmap -sU -sV --version-intensity 0 -F -n 10.10.10.0/24
```
## Discovering hosts (Internal)
### Passive
```
netdiscover -p
```
Bettercap
```
net.recon on
set net.show.meta true
net.show
```
### Active
#### ARP discovery
```
arp-scan 10.10.10.0/24 | tee arp_scan.txt
```
```
udp-proto-scanner.pl 10.10.10.0/24 | tee udp_proto.txt
```
```
nmap -sn 10.10.10.0/24
```
```
netdiscover -r 10.10.10.0/24
```
#### NBT discovery
Search in Domain
```
nbtscan -r 192.168.0.1/24
```
Bettercap
```
net.probe on
net.show
```
IPv6: Send a pingv6 to multicast:
```
ping6 -c4 -I eth0 ff02::1 | tee ipv6.txt
```
```
cat ipv6.txt | cut -d" " -f4 | sort -u | grep fe | sed s'/:$//' | tee ipv6_list.txt
```
## Scanning
Nmap
```
nmap -sV -sC -O -T4 -n -Pn -p- -oA fullfastscan <IP>
```
```
nmap -sU -sV -sC -n -F -T4 <IP>
```
Bettercap
```
syn.scan 10.10.10.0/24 1 10000
```
## Enumerating and Probing Users
Domain
```
nmblookup -A <DC_IP>
```
Extract names and generate candidate username permutations:
```
python namemesh.py names.txt | tee ~/usernames.txt
```
```
nmap -p88 -nvv --script=krb5-enum-users --script-args krb5-enum-users.realm='<DOMAIN>',userdb=/root/usernames.txt <DC_IP>
``` | 15.943089 | 121 | 0.63284 | yue_Hant | 0.319625 |
520c04bc04d70fd6f34adf3943853d5b1da66e26 | 7,038 | md | Markdown | content/post/2018-05-03.md | yihui/twitter-blogdown | 36f62056c288ebb73f743c984e843349e014e846 | [
"MIT"
] | 62 | 2017-04-12T21:02:20.000Z | 2021-12-27T07:06:34.000Z | content/post/2018-05-03.md | sbalci/twitter-blogdown | 0f5494281f493eeb878393f88e8f6d14191e4850 | [
"MIT"
] | 1 | 2020-11-18T18:52:45.000Z | 2021-02-02T05:10:48.000Z | content/post/2018-05-03.md | sbalci/twitter-blogdown | 0f5494281f493eeb878393f88e8f6d14191e4850 | [
"MIT"
] | 32 | 2017-04-16T18:48:43.000Z | 2022-03-17T20:05:57.000Z | {
"title": "Watch @minebocek talk about just some of the ways you can use #RMarkdown to share data science. list(notebook, html, xaringan, blogdown, gh-doc, pdf, shiny, bookdown) https://t.co/VoecqMGxzh https://t.co/r4kI4FJRmx #rstudioconf https://t.co/lpAhOdkdSg",
"date": "2018-05-03"
}
# blogdown
> **RStudio** (@rstudio; 140/51): Watch @minebocek talk about just some of the ways you can use #RMarkdown to share data science. list(notebook, html, xaringan, blogdown, gh-doc, pdf, shiny, bookdown)
https://t.co/VoecqMGxzh
https://t.co/r4kI4FJRmx #rstudioconf https://t.co/lpAhOdkdSg [↪](https://twitter.com/xieyihui/status/991668907336454145)
<!-- -->
> **R-bloggers** (@Rbloggers; 10/11): Moving to blogdown https://t.co/FzVmWmHXi1 #rstats #DataScience [↪](https://twitter.com/xieyihui/status/991525244270768129)
<!-- -->
> **Dusty Turner** (@DTDusty; 3/0): Finally figured out how to host my @rstatsnyc presentation using #blogdown. Here's the link: https://t.co/7Dwu80ERnh Thanks for everyone's support and as always, #beatnavy #rstats #rstatsnyc https://t.co/cYjXXwS90O [↪](https://twitter.com/xieyihui/status/991718514053525505)
<!-- -->
> **Lisa DeBruine 🏳️🌈** (@lisadebruine; 2/1): @Protohedgehog @open_con Blogdown is really nice. Or you can make a simple site using the step-by-step instructions here: https://t.co/skx3R0yxEr [↪](https://twitter.com/xieyihui/status/991645084771811328)
<!-- -->
> **Mitchell O'Hara-Wild** (@mitchoharawild; 1/0): @nj_tierney @andrewheiss @earowang @TimothyHyndman @rOpenSci Currently the icon package is using 4.7, but that can easily be updated. It should work in RMarkdown//blogdown, but there is an issue with headers. https://t.co/OkhlvziTir [↪](https://twitter.com/xieyihui/status/991856288383483904)
<!-- -->
> **Thomas Hütter** (@DerFredo; 0/1): Posted by Abhijit, now on R-bloggers: Moving to blogdown #rstats https://t.co/REbjpDnYOb [↪](https://twitter.com/xieyihui/status/991526482047619072)
<!-- -->
> **Lewis kirvan** (@LewisKirvan; 0/0): @EvanKaeding @rstatsbot1234 Just started playing with blogdown, it plays nice with Hugo. As usual great tutorials from the rstudio people involved. If you ready know rmarkdown well it's kind of an easy pick IMO. [↪](https://twitter.com/xieyihui/status/991858016952766464)
<!-- -->
> **Annie Pettit** (@MRXblogs; 0/0): Moving to blogdown https://t.co/SnadF8Dv7C #statsblog #MRX [↪](https://twitter.com/xieyihui/status/991706698363621379)
<!-- -->
> **Data Geek** (@datascigeek; 0/0): Moving to blogdown https://t.co/IF5NIScxAH #r #statistics #data science [↪](https://twitter.com/xieyihui/status/991657557025525760)
<!-- -->
> **Nitish Shekhar** (@nitzrulzx412; 0/0): Moving to blogdown https://t.co/YaGArCtZdn [↪](https://twitter.com/xieyihui/status/991573459263205376)
<!-- -->
> **Metamathan** (@metamathan; 0/0): Moving to blogdown https://t.co/3lCu8jPWN2 #statistics [↪](https://twitter.com/xieyihui/status/991540067641544704)
<!-- -->
> **StatsBlogs** (@StatsBlogs; 0/0): Moving to blogdown https://t.co/0TRJuoPCue [↪](https://twitter.com/xieyihui/status/991538800483557377)
<!-- -->
> **Chandan Kumar** (@Chandanrtcs; 0/0): Moving to blogdown https://t.co/xF8xkX9A4o [↪](https://twitter.com/xieyihui/status/991532797792841728)
<!-- -->
> **Deepak Taneja** (@DeepakTaneja86; 0/0): Moving to blogdown https://t.co/MbhyL8PnTk [↪](https://twitter.com/xieyihui/status/991530035478712320)
<!-- -->
> **Pierre DeBois - Zimana Digital Analytics Services** (@ZimanaAnalytics; 0/0): From R-Bloggers - Moving to blogdown https://t.co/BkTMZrdy0M [↪](https://twitter.com/xieyihui/status/991529552391409664)
<!-- -->
> **Abhijit Dasgupta** (@webbedfeet; 0/0): Moving to blogdown https://t.co/fY3yr0NKUV [↪](https://twitter.com/xieyihui/status/991524513178423296)
<!-- -->
# bookdown
> **RStudio** (@rstudio; 140/51): Watch @minebocek talk about just some of the ways you can use #RMarkdown to share data science. list(notebook, html, xaringan, blogdown, gh-doc, pdf, shiny, bookdown)
https://t.co/VoecqMGxzh
https://t.co/r4kI4FJRmx #rstudioconf https://t.co/lpAhOdkdSg [↪](https://twitter.com/xieyihui/status/991668907336454145)
<!-- -->
> **Peter Hickey** (@PeteHaitch; 38/12): Wow, this #rstats/@bioconductor scRNA-seq tutorial using #bookdown and featuring video looks incredible! https://t.co/V9zsawm1zu
Thank you @wikiselev, Tallulah Andrews, @Jenni_Westoby, @davisjmcc, @marenbuettner, and @m_hemberg [↪](https://twitter.com/xieyihui/status/991741524043059200)
<!-- -->
> **Jillian Deines** (@JillDeines; 1/0): @jrosenberg6432 Did you use Bookdown and the latex template to write/format your MSU dissertation? What were your thoughts about that workflow? I'm just getting starting setting up a document for my defense later this summer! [↪](https://twitter.com/xieyihui/status/991725606311116800)
<!-- -->
> **Superhero Residential** (@superheroreside; 0/0): YaRrr! The Pirate’s Guide to R https://t.co/OZywu6nQXS [↪](https://twitter.com/xieyihui/status/991740397985681408)
<!-- -->
> **Gordon Shotwell** (@gshotwell; 0/0): Anyone know if there's a way to display hover-over term definitions in bookdown websites? #rstats [↪](https://twitter.com/xieyihui/status/991672615277297669)
<!-- -->
# knitr
> **Daneel Olivaw** (@d_olivaw; 6/0): When you hit Ctrl + K on a long RMarkdown document... #rstats #knitr https://t.co/SHugdFphhe [↪](https://twitter.com/xieyihui/status/991732436265553920)
<!-- -->
> **RPubs hot entry** (@RPubsHotEntry; 0/0): Knitr Tutorial Example 1 (1 user) https://t.co/hxnYvXm463 [↪](https://twitter.com/xieyihui/status/991713175501451264)
<!-- -->
> **もむ** (@momentumyy; 0/0): @kohske @kokiikedaJP ああすいません。これまでさすがにknitrではやってないので、まじでいきます [↪](https://twitter.com/xieyihui/status/991705011401838592)
<!-- -->
# xaringan
> **RStudio** (@rstudio; 140/51): Watch @minebocek talk about just some of the ways you can use #RMarkdown to share data science. list(notebook, html, xaringan, blogdown, gh-doc, pdf, shiny, bookdown)
https://t.co/VoecqMGxzh
https://t.co/r4kI4FJRmx #rstudioconf https://t.co/lpAhOdkdSg [↪](https://twitter.com/xieyihui/status/991668907336454145)
<!-- -->
> **Samantha Toet** (@Samantha_Toet; 4/0): Taking the plunge and R-ifying my slides using @xieyihui's #xaringan package for #rstats. Of course I worried about it not having enough #Rladies purple, but of course @apreshill already made an AWESOME Rladies ninja theme. 🤙 [↪](https://twitter.com/xieyihui/status/991787276282859520)
<!-- -->
> **Alison Hill** (@apreshill; 2/0): @Samantha_Toet @xieyihui Hooray! If you want, you can upload a link to your deck as an example in the "how_to_use.Rmd" in the @RLadiesGlobal github resources section https://t.co/683MhTfmoo https://t.co/sORiBYE0G1 [↪](https://twitter.com/xieyihui/status/991824061926813696)
<!-- -->
| 45.115385 | 350 | 0.710855 | yue_Hant | 0.49058 |
520d6161c4e175e9d2e7a73bb970ab1eb1c26a18 | 186 | md | Markdown | README.md | nounoursheureux/notes-api | f195ae9b55973ca5d2c02f2a570def2b28a2349b | [
"WTFPL"
] | null | null | null | README.md | nounoursheureux/notes-api | f195ae9b55973ca5d2c02f2a570def2b28a2349b | [
"WTFPL"
] | null | null | null | README.md | nounoursheureux/notes-api | f195ae9b55973ca5d2c02f2a570def2b28a2349b | [
"WTFPL"
] | null | null | null | # Notes [](https://travis-ci.org/nounoursheureux/notes-api)
A REST API for creating and updating notes
| 46.5 | 141 | 0.774194 | yue_Hant | 0.515876 |
520f2f124fd0c77bf8a5e7aad8548b2fab507c2c | 699 | md | Markdown | notes/archive/vpn.md | aizatto/gitbook-public | a587e1c1d248f1b9f1f815eea1cd19b251e62b3e | [
"CC-BY-4.0"
] | 20 | 2019-02-03T18:31:22.000Z | 2022-02-22T14:11:25.000Z | notes/archive/vpn.md | aizatto/gitbook-public | a587e1c1d248f1b9f1f815eea1cd19b251e62b3e | [
"CC-BY-4.0"
] | null | null | null | notes/archive/vpn.md | aizatto/gitbook-public | a587e1c1d248f1b9f1f815eea1cd19b251e62b3e | [
"CC-BY-4.0"
] | 5 | 2019-03-24T06:28:39.000Z | 2020-11-21T18:37:45.000Z | ---
description: Virtual Private Networks
---
# VPN
Also see:
* [DNS](dns.md)
* [https://github.com/StreisandEffect/streisand](https://github.com/StreisandEffect/streisand)
* [https://getoutline.org/en/home](https://getoutline.org/en/home)
## Services
### From Wirecutter
[https://thewirecutter.com/reviews/best-vpn-service/](https://thewirecutter.com/reviews/best-vpn-service/)
* Our Pick: IVPN $70/year
* [https://wclink.co/link/26127/137912/4/74300?merchant=IVPN](https://wclink.co/link/26127/137912/4/74300?merchant=IVPN)
* Budget pick: TorGuard $60/year
* [https://wclink.co/link/26128/137913/4/74301?merchant=TorGuard](https://wclink.co/link/26128/137913/4/74301?merchant=TorGuard)
| 25.888889 | 128 | 0.735336 | yue_Hant | 0.823476 |
520ffcca358bced825f71d9ae677b5ad6d5caeee | 4,803 | md | Markdown | content/posts/pengertian-seo.md | araproject/ara-pro | ce5197db8650ec5f341be6bc2503306971448ecf | [
"MIT"
] | null | null | null | content/posts/pengertian-seo.md | araproject/ara-pro | ce5197db8650ec5f341be6bc2503306971448ecf | [
"MIT"
] | 11 | 2021-03-01T20:49:13.000Z | 2022-02-26T17:43:57.000Z | content/posts/pengertian-seo.md | araproject/ara-pro | ce5197db8650ec5f341be6bc2503306971448ecf | [
"MIT"
] | null | null | null | ---
date: 2019-10-07
title: 'Getting to Know SEO in 5 Minutes'
template: post
thumbnail: '../thumbnails/mengenal-seo.png'
slug: mengenal-seo
categories:
- SEO
tags:
- optimasi
---
## What is SEO?
SEO stands for Search Engine Optimization. Simply put, SEO is a set of techniques for getting a website to rank on the first page of Google's search results.
Getting a website into a good position involves many steps and a combination of factors, covering both the content and the technical side.
So, to capture its full meaning, SEO is the practice of developing a website's content and its structure (site structure) for search engines (Google, Yahoo, Bing).

## How Do You Do SEO?
The most successful SEO method is "natural SEO", which results in a genuinely high-quality website. Some of the main factors are as follows.
### 1. On-Page SEO
- Keyword research
- Meta description
- Alt text
- Title tag
- SSL / HTTPS
- URL structure
- Internal links
- Page performance
### 2. Off-Page SEO
- Quality backlinks
- Domain authority
- Social media shares
See also [the difference between on-page and off-page SEO](https://www.aradechoco.com/seo-on-page-dan-seo-off-page/).
## What Are the Benefits of SEO?
The most important benefit of SEO is that if we can get our website onto the first page, or into the top positions, of the search results, it will increase the number of people visiting the website (traffic) without having to pay to [hire an SEO service](https://www.aradechoco.com/menyewa-jasa-seo-berkualitas/) or run paid advertising.
## Learning SEO ✔️
**If you want to read more about SEO, check out these related articles:**
- [SEO for Beginners](https://www.aradechoco.com/SEO-untuk-pemula/) - the first step toward ranking on Google
- [SEO Basics: 17 Optimization Tips Every Beginner Should Know](https://www.aradechoco.com/seo-dasar-untuk-pemula/)
- [Q&A on SEO Optimization](https://www.aradechoco.com/seo-link-building/) - link building
- [What is a backlink?](https://www.aradechoco.com/apa-itu-backlink/) Why does it matter for SEO?
- [How to Do Keyword Research](https://www.aradechoco.com/cara-riset-keyword-untuk-pemula/): long-tail and short-tail keywords for beginners
- [Building Backlinks through Wikipedia](https://www.aradechoco.com/backlink-melalui-wikipedia/)
- [How to Check Your Website's Ranking](https://www.aradechoco.com/cara-mengetahui-peringkat-situs-web/)
- [How to Remove Spam Backlinks](https://www.aradechoco.com/menghapus-backlink-spam/)
- [SEO Optimization with Schema Markup](https://www.aradechoco.com/optimasi-schema-markup/)
- [SEO Techniques](https://www.aradechoco.com/teknik-seo/) - important aspects you should not skip
- [Black Hat SEO Techniques to Avoid](https://www.aradechoco.com/teknik-black-hat-seo/)
- [White Hat SEO Techniques](https://www.aradechoco.com/teknik-white-hat-seo/)
- [How to Place SEO Keywords in a Blog Post](https://www.aradechoco.com/menempatkan-keyword-seo/)
- [Meta Tag Optimization Every Blogger Should Know](https://www.aradechoco.com/optimasi-meta-tag/)
- [Guest Blogging and Its Effect on SEO](https://aradechoco.com/guest-blog-seo/)
- [The Difference between SEM and SEO and Their Benefits for Digital Marketing](https://www.aradechoco.com/perbedaan-sem-dan-seo/)
- [25 Popular Free SEO Tools You Should Use](https://www.aradechoco.com/tools-seo-gratis/)
- [The Difference between On-Page and Off-Page SEO](https://www.aradechoco.com/seo-on-page-dan-seo-off-page/)
- [How to Write SEO Articles That Reach Google's First Page](https://www.aradechoco.com/menulis-artikel-seo/)
- [What to Know before Hiring a Quality SEO Service](https://www.aradechoco.com/menyewa-jasa-seo-berkualitas/)
- [Why Keyword Research Matters for SEO](https://www.aradechoco.com/riset-kata-kunci/)
- [Done with SEO Optimization but Still Not on Page One?](https://www.aradechoco.com/optimasi-seo-page-one/)
- [How to Get Google Sitelinks](https://www.aradechoco.com/google-sitelink/)
- [5 Types of SEO Keywords You Should Know](https://www.aradechoco.com/jenis-kata-kunci/)
- [How to Write SEO-Friendly Article Titles](https://www.aradechoco.com/judul-artikel-seo-friendly/)
- [What Are Dofollow and Nofollow? Do They Matter for SEO?](https://www.aradechoco.com/do-follow-dan-no-follow/)
- [How Important Are Internal and External Links for SEO?](https://www.aradechoco.com/link-internal-dan-eksternal/)
- [7 Key Factors That Determine SEO Success](https://www.aradechoco.com/faktor-kesuksesan-seo/)
| 58.573171 | 352 | 0.755778 | ind_Latn | 0.917188 |
521186ce27a405b8b016bc2b09e54d1318eb58fb | 2,756 | md | Markdown | includes/virtual-networks-create-vnet-arm-pportal-include.md | rbirksteiner/azure-content-nlnl | dabb5b398adf6235c23e417fd0767a3cd5a8f8f8 | [
"CC-BY-3.0"
] | 1 | 2019-05-02T03:32:46.000Z | 2019-05-02T03:32:46.000Z | includes/virtual-networks-create-vnet-arm-pportal-include.md | rbirksteiner/azure-content-nlnl | dabb5b398adf6235c23e417fd0767a3cd5a8f8f8 | [
"CC-BY-3.0"
] | null | null | null | includes/virtual-networks-create-vnet-arm-pportal-include.md | rbirksteiner/azure-content-nlnl | dabb5b398adf6235c23e417fd0767a3cd5a8f8f8 | [
"CC-BY-3.0"
] | 1 | 2020-06-14T17:02:09.000Z | 2020-06-14T17:02:09.000Z | ## Create a VNet in the Azure portal
Follow the steps below to create a VNet with the Azure preview portal, based on the scenario above.
1. From a browser, navigate to http://portal.azure.com and, if necessary, sign in with your Azure account.
2. Click **New** > **Networking** > **Virtual network**, then click **Resource Manager** in the **Select a deployment model** list, and then click **Create**, as shown in the figure below.

3. On the **Create virtual network** blade, configure the VNet settings as shown in the figure below.

4. Click **Resource group** and select a resource group to add the VNet to, or click **Create new** to add the VNet to a new resource group. The figure below shows the settings for a resource group named **TestRG**. For more information about resource groups, see [Azure Resource Manager overview](../articles/resource-group-overview.md/#resource-groups).

5. If necessary, change the **Subscription** and **Location** settings for your VNet.
6. If you don't want the VNet to show as a tile on the **Startboard**, clear **Pin to Startboard**.
7. Click **Create** and notice the tile named **Creating virtual network**, shown in the figure below.

8. Wait until the VNet has been created, and then on the **Virtual network** blade click **All settings** > **Subnets** > **Add**, as shown below.

9. Enter the subnet settings for the *back-end* subnet as shown below, and then click **OK**.

10. Notice the list of subnets, as shown in the figure below.

<!--HONumber=Jun16_HO2-->
| 65.619048 | 426 | 0.769231 | nld_Latn | 0.995631 |
521245f4d11dc77effa484928a35f2013208f30c | 4,700 | md | Markdown | source/_posts/worker.md | fnlearner/blo | 0669b934236271c944fab0701d12ee6f954adf6e | [
"MIT"
] | null | null | null | source/_posts/worker.md | fnlearner/blo | 0669b934236271c944fab0701d12ee6f954adf6e | [
"MIT"
] | null | null | null | source/_posts/worker.md | fnlearner/blo | 0669b934236271c944fab0701d12ee6f954adf6e | [
"MIT"
] | null | null | null | ---
title: worker
date: 2021-05-18 19:14:20
tags:
  - worker
  - javascript
---
A Survey of Web Workers
As the browser's multithreading technology, Web Workers have become a viable option for easing page jank and improving application performance now that pages keep growing richer in content and more complex in functionality.
Yet the technology hides behind tentative introductory articles and compatibility tables of unknown depth; front-end engineers who can recite the single-threaded-JS interview answers by heart still find multithreaded development naturally unfamiliar.
Business background
One of the components I maintain is a tree component with a search feature. At first, the node-matching code for a search ran on the main thread, so whenever the computation got heavy it kept hogging the main thread's resources and the page could no longer respond to the user in time. So I moved that computation into the background (that is, into a worker). Since worker code runs in parallel with the main thread's code, it no longer blocks the page's rendering work.
#### Introduction to workers
The official description of the [web worker](https://link.zhihu.com/?target=https%3A//developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API) API reads:
```
Web Workers makes it possible to run a script operation
in a background thread separate from the main execution thread
of a web application.
```

Two things follow from this: parallelism can improve execution efficiency, and splitting tasks apart can reduce page jank.
<!-- more -->
#### Specification
The [worker specification](https://www.w3.org/TR/workers/) is not a recent proposal; a draft already existed back in 2009.

#### Types of workers
MDN describes three types of workers: dedicated workers, shared workers, and service workers.
A dedicated worker can only communicate with the render process of the page that created it and cannot be shared across tabs; it is the most common and most widely used kind of worker.
A SharedWorker, by contrast, can be reached from multiple browser tabs as a single worker instance, enabling cross-tab data sharing, a shared WebSocket connection, and so on. That sounds great, but Safari dropped SharedWorker support for technical reasons inside the WebKit engine. As the figure below shows, it was only briefly supported, in Safari 5~6.

By comparison, DedicatedWorker has far broader compatibility and much more production experience behind it; in the rest of this article, "Worker" refers specifically to DedicatedWorker.
Creating a worker actually spawns an operating-system-level thread:
```bash
The Worker interface spawns real OS-level threads. -- MDN
```
Multithreaded JS means a JS runtime that is independent of the main thread. As shown in the figure: a worker thread has its own memory space, message queue, event loop, call stack, and so on; threads communicate via postMessage, and multiple threads can execute concurrently.
#### Async tasks vs. workers
The "concurrency" of single-threaded JS is, strictly speaking, concurrency rather than parallelism. As shown in the figure, the runtime has only one function call stack, and the event loop performs the context switches between tasks. These tasks can call into other threads through BOM APIs to do work for the main thread, but the callback logic still runs serially in JS.

#### Use cases
+ It can reduce main-thread jank.
+ It may bring performance gains.
The multithreading that workers provide lets you split a synchronous JS task in one stroke: the whole synchronous task is made asynchronous at the macro level. There is no more painful hunting for atomic units of logic, and the asynchronous design becomes simpler and more maintainable.
This opens up more room for imagination. As shown in the figure, within the main thread's render cycle, JS tasks that might block rendering (jank jobs) can be migrated to a worker thread, lightening the main thread's load, shortening the render interval, and reducing page jank.
#### Performance gains
Worker multithreading does not directly improve computational performance; whether it helps depends on the device's CPU core count and on the threading strategy.
#### Threading strategy
On a given device, the same task takes the same time whichever thread runs it. As shown in the figure: if we hand a main-thread JS task to a newly created worker thread, it will not run any faster there, while the cost of spawning the thread and communicating may make the render interval even longer.
On a single-core machine, compute is zero-sum: a new worker thread cannot win the page any additional computing resources. On a multi-core machine, the new worker thread and the main thread can both do computation, so the page's total compute grows; but a single task still takes the same time no matter which thread runs it.
What actually delivers a performance gain is `multi-core, multi-threaded parallelism`.
For example, several synchronous tasks with no dependencies between them can only run serially on a single thread, but can run in parallel under multi-core multithreading.
#### Give the main thread back to the UI
At heart, the use case for workers is to peel logic off the main thread so that it can focus on UI rendering.
#### Worker communication APIs
[Worker-related APIs](https://developer.mozilla.org/en-US/docs/Web/API/DedicatedWorkerGlobalScope)

#### How workers compare with the main thread
##### Similarities
+ A complete JS runtime, supporting the language syntax and built-in objects defined by the ECMAScript spec.
+ XMLHttpRequest support, so a worker can send network requests and talk to the backend on its own.
+ A read-only Location pointing at the script URL the worker thread is executing; parameters can be passed into the worker environment through that URL.
+ A read-only Navigator for browser information, e.g. identifying the browser via Navigator.userAgent.
+ setTimeout / setInterval timers, usable for implementing asynchronous logic.
+ WebSocket for network I/O; IndexedDB for file I/O.
##### Differences
+ A worker thread has no DOM API: it cannot create or manipulate the DOM, nor can it reach the main thread's DOM elements.
+ Worker and main-thread memory are independent; a worker thread cannot access the page's globals (window, document, etc.) or its JS functions.
+ A worker thread cannot call UI-related BOM APIs such as alert() or confirm().
+ Worker threads are controlled by the main thread, which can create and destroy workers.
+ A worker thread can destroy itself via self.close.
Judging from the differences, a worker thread cannot touch the UI and is controlled by the main thread: it is well suited to quietly getting work done.
#### Communication speed
Although worker multithreading lets JS tasks run in parallel, it also adds communication overhead. As shown in the figure, there is a delay between thread A calling postMessage and thread B's onmessage receiving the data; that delay is the communication cost.

Performance gained = performance won through parallelism - performance lost to communication. With each thread's computing power fixed, squeezing more performance out of multithreading means minimizing the communication cost.
#### Data transfer mechanisms
There are three ways to communicate: structured clone, transferred memory, and SharedArrayBuffer.
##### Structured Clone
Structured clone is postMessage's default mechanism. As shown in the figure, a copy of thread A's JS object is handed to thread B, and thread B can read and operate on the newly copied memory.
By copying memory, structured clone isolates the two threads' memory simply and effectively and avoids conflicts, and the transferred object can have a flexible structure. But during the copy, thread A must synchronously serialize the object and thread B must synchronously deserialize it; if the object is large, that eats a lot of thread time.
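The deep-copy behavior of the structured clone algorithm can be seen directly with the `structuredClone()` global. This is a minimal sketch; it assumes a runtime that exposes that global (modern browsers, or Node.js 17+):

```javascript
// structuredClone uses the same algorithm postMessage applies by default:
// it produces a deep, independent copy of the object graph.
const original = { user: { name: 'A' }, scores: [1, 2, 3] };

const copy = structuredClone(original);
copy.user.name = 'B';   // mutate the clone...
copy.scores.push(4);

console.log(original.user.name);     // 'A'  (original untouched)
console.log(copy.user.name);         // 'B'
console.log(original.scores.length); // 3
```

Because the copy is fully independent, the receiving thread can mutate what it got without affecting the sender's object; the price is the serialize/deserialize work on both sides.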
##### Transfer Memory
Transferring memory needs no serialization/deserialization, which greatly reduces the thread time the transfer occupies. As shown in the figure, thread A hands the ownership of, and the right to operate on, a designated block of memory over to thread B; after the transfer, thread A can no longer access that memory.
Transfer trades loss of control for efficient transport, locking the concurrency by giving each block a single owner. But only regularly sized raw binary data such as ArrayBuffer can be transferred; it suits matrix-like data (for example the RGB pixels of an image, say when inverting its colors). In practice you also have to account for the computational cost of producing binary data from a JS object.
##### Shared Array Buffers
A SharedArrayBuffer is shared memory: threads A and B can access and operate on the same block of memory at the same time. Once the data is shared, there is nothing left to transfer.
But multiple parallel threads sharing memory creates race conditions. Unlike the two mechanisms above, which are locked by default, SharedArrayBuffer hands the hard part to the developer, who can use Atomics to police access to the shared memory. As a newer transfer mechanism its browser support is predictably thin; at the time of writing only Chrome 68+ supports it.
#### Compatibility
Compatibility always deserves attention when evaluating a front-end solution, and doubly so for Web Workers: either your use case has no need for worker multithreading at all, or, once adopted, it becomes a foundational capability you depend on heavily.
Worker compatibility is actually quite good; most mainstream browsers support it.
| 30.921053 | 189 | 0.783404 | yue_Hant | 0.72005 |
521257911c93f7e3dc209445ffb8051bedc433fd | 482 | md | Markdown | e/eupatanerkprot/index.md | nnworkspace/gesetze | 1d9a25fdfdd9468952f739736066c1ef76069051 | [
"Unlicense"
] | 1 | 2020-06-20T11:34:20.000Z | 2020-06-20T11:34:20.000Z | e/eupatanerkprot/index.md | nagy/gesetze | 77abca2ceea3b7b89ea70afb13b5dd55415eb124 | [
"Unlicense"
] | null | null | null | e/eupatanerkprot/index.md | nagy/gesetze | 77abca2ceea3b7b89ea70afb13b5dd55415eb124 | [
"Unlicense"
] | null | null | null | ---
Title: Protokoll über die gerichtliche Zuständigkeit und die Anerkennung von Entscheidungen
über den Anspruch auf Erteilung eines europäischen Patents
jurabk: EuPatAnerkProt
layout: default
origslug: eupatanerkprot
slug: eupatanerkprot
---
# Protokoll über die gerichtliche Zuständigkeit und die Anerkennung von Entscheidungen über den Anspruch auf Erteilung eines europäischen Patents (EuPatAnerkProt)
Ausfertigungsdatum
: 1973-10-05
Fundstelle
: BGBl II: 1976, 982
| 25.368421 | 162 | 0.815353 | deu_Latn | 0.988526 |
521443d6fcaa83605f07bd157d3dd363e88865fd | 812 | md | Markdown | _posts/2021-02-24-react-webview-kakao.md | mnmsoft/mnmsoft.github.io | 4170479aefb4b00cf0a93d3ec87da773b9fe74a5 | [
"MIT"
] | null | null | null | _posts/2021-02-24-react-webview-kakao.md | mnmsoft/mnmsoft.github.io | 4170479aefb4b00cf0a93d3ec87da773b9fe74a5 | [
"MIT"
] | null | null | null | _posts/2021-02-24-react-webview-kakao.md | mnmsoft/mnmsoft.github.io | 4170479aefb4b00cf0a93d3ec87da773b9fe74a5 | [
"MIT"
] | null | null | null | ---
title: "Kakao undefined in React"
excerpt: "When the Kakao script doesn't work in React."
toc: true
toc_sticky: true
categories:
- react
tags:
- react
- kakao
- webview
- android
- hybrid
last_modified_at: 2021-02-24T18:00:00+09:00
---
# When `Kakao` comes back undefined while using the Kakao library in React
``` js
<script src="//developers.kakao.com/sdk/js/kakao.js">
```
I was using the Kakao SDK by putting the tag above inside public/index.html.
In my React component, `Kakao` was undefined; after some searching, the advice was to use ```window.Kakao```.
And it works: it runs fine in Chrome.
But when I loaded it into a WebView to test, it broke again. :(
Searching suggested other people had hit the same thing... search... search... after wasting close to three hours,
I finally got my answer:
``` js
<script src="https://developers.kakao.com/sdk/js/kakao.js">
```
I had been writing the URL as ``` //~ ``` all along, since a protocol-relative URL is supposed to be resolved against the page's own scheme automatically,
but apparently that doesn't work in an Android WebView, where the page may be loaded from a scheme like file://, so the request goes nowhere useful.
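One way to guard against this class of bug is to pin protocol-relative URLs to an explicit scheme before using them. The helper below is hypothetical (my own name, not part of the Kakao SDK), just to illustrate the fix:

```javascript
// Hypothetical helper: force '//host/path' URLs onto an explicit scheme,
// so they still resolve when the page's own base scheme is file:// (WebView).
function absolutize(url, scheme = 'https:') {
  return url.startsWith('//') ? scheme + url : url;
}

console.log(absolutize('//developers.kakao.com/sdk/js/kakao.js'));
// → 'https://developers.kakao.com/sdk/js/kakao.js'
console.log(absolutize('https://example.com/a.js')); // already absolute: unchanged
```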
Three hours of struggling to track this down... and somehow I have the feeling I've been burned by this before.
Let's not waste time on this one again!!!
The end.
| 15.615385 | 68 | 0.669951 | kor_Hang | 1.000009 |
52145ba3fe12f9b0bdf3cf26cf149780de8b933e | 78 | md | Markdown | README.md | ramphor/seo-adapter | f2a9a732ebe5252d731512b8e2861de35a81daa9 | [
"MIT"
] | null | null | null | README.md | ramphor/seo-adapter | f2a9a732ebe5252d731512b8e2861de35a81daa9 | [
"MIT"
] | null | null | null | README.md | ramphor/seo-adapter | f2a9a732ebe5252d731512b8e2861de35a81daa9 | [
"MIT"
] | null | null | null | # seo-adapter
The SEO adapter to support custom data for popular SEO plugins.
| 26 | 63 | 0.794872 | eng_Latn | 0.918192 |
521580665b9d6f8a24a67549bcd79b879f8411b3 | 624 | md | Markdown | Using/README_pl.md | grzegorzbalcerek/scala-exercises | e3c8320343fce44cc4f0530f808c9f1a4e102628 | [
"BSD-2-Clause"
] | 2 | 2016-05-18T14:11:58.000Z | 2017-07-05T08:10:59.000Z | Using/README_pl.md | grzegorzbalcerek/scala-exercises | e3c8320343fce44cc4f0530f808c9f1a4e102628 | [
"BSD-2-Clause"
] | null | null | null | Using/README_pl.md | grzegorzbalcerek/scala-exercises | e3c8320343fce44cc4f0530f808c9f1a4e102628 | [
"BSD-2-Clause"
] | null | null | null | Using
=====
Utworzyć metodę `using` która ma dwie listy parametrów z jednym
parametrem w każdej z nich. Pierwszy parametr reprezentuje referencję
do obiektu implementującego metodę `close`. Drugi reprezentuje funkcję
do wykonania. Metoda `using` wykona funkcję przekazując jej instancję
przekazaną w pierwszym parametrze. Rezultatem funkcji oraz metody
`using` jest `Unit`. Po wykonaniu funkcji a także w przypadku gdy ta
funkcja wygeneruje wyjątek metoda `using` wykonuje metodę `close`
instancji przekazanej w pierwszym parametrze i kończy działanie. Do
sprawdzenia działania metody `using` wykorzystać klasę `Resource`.
| 48 | 70 | 0.81891 | pol_Latn | 1.000007 |
5215b3a060829b8a6f335f3a2e2f284d0a4578b1 | 32 | md | Markdown | 3/README.md | nagendramca2011/OCA-JavaSE7-Programmer-Certification-Guide | 5fcd55bd97d237572ef0e5d73cad21f383d47624 | [
"MIT"
] | null | null | null | 3/README.md | nagendramca2011/OCA-JavaSE7-Programmer-Certification-Guide | 5fcd55bd97d237572ef0e5d73cad21f383d47624 | [
"MIT"
] | null | null | null | 3/README.md | nagendramca2011/OCA-JavaSE7-Programmer-Certification-Guide | 5fcd55bd97d237572ef0e5d73cad21f383d47624 | [
"MIT"
] | null | null | null | # 3. Methods and encapsulation
| 16 | 31 | 0.75 | eng_Latn | 0.994598 |
52164c8126bbf20ce1bf9dac9862f97345c803bb | 3,424 | md | Markdown | _posts/2018-04-02-retrieve-an-oauth-token-from-azure-ad-using-runscope.md | mivano/mindbyte.nl | b9e6a355c0089ca35ae68ec585a9c2621f74c929 | [
"MIT"
] | null | null | null | _posts/2018-04-02-retrieve-an-oauth-token-from-azure-ad-using-runscope.md | mivano/mindbyte.nl | b9e6a355c0089ca35ae68ec585a9c2621f74c929 | [
"MIT"
] | null | null | null | _posts/2018-04-02-retrieve-an-oauth-token-from-azure-ad-using-runscope.md | mivano/mindbyte.nl | b9e6a355c0089ca35ae68ec585a9c2621f74c929 | [
"MIT"
] | null | null | null | ---
published: true
title: Retrieve an OAuth client token from Azure AD using Runscope
tags:
- azure
- security
header:
image: /images/runscopeteststep.png
excerpt: >-
Runscope is a great online tool to validate and test API endpoints. For a
recent project, we are using it to mimic traffic from an external system that
is supposed to submit XML files to our application. The application, however,
requires an authorization token with a valid JWT to authenticate and authorize
the caller. As there is a lifetime on those tokens, we need to retrieve a new
one each time we start a test. Luckily that is not too difficult in Runscope.
---
[Runscope](http://www.runscope.com) is a great online tool to validate and test API endpoints. For a recent project, we are using it to mimic traffic from an external system that is supposed to submit XML files to our application.
The application, however, requires an authorization token with a valid JWT to authenticate and authorize the caller. As there is a lifetime on those tokens, we need to retrieve a new one each time we start a test. Luckily that is not too difficult in Runscope.
For this, I created a new test called **Token** with a single test step. In this step, I do a POST to `https://login.microsoftonline.com/yourtenantname/oauth2/token`, which is the Azure Active Directory endpoint to fetch tokens. Put in your own tenant name or id of your Azure AD.
As we do a form post, we need to add a _content-type_ of `application/x-www-form-urlencoded`.
We are going to post four parameters, the _client_id_ and _client_secret_, which are used in the client OAuth flow to identify and authorize the client, and a _resource_id_ and _grant_type_. The grant type is `client_credentials` as that is the way we authenticate.
The _resource_id_ is the id of the application you want to access. You can find those id's in the Azure Portal under the Application registrations.
The client id and secret are coming from the application you have created in the same Azure portal. The client id is visible in the main overview and the secret can be generated from the settings.
You end up with a test like below

When running the test, the call will be made and, if all parameters are correct, there will be a response in JSON. One of the parameters is the actual access code which we need in subsequent tests. Using the variables option, you can retrieve this property.

The JSON body will be inspected for a property called `access_token` which is then stored in the variable called `token`.
We can now use this token in the other calls to our application by including it in an `Authorization: Bearer {{token}}` header.
However, it is not very efficient to repeat the above steps in each test. Runscope supports subtests to handle this. So create a new test, add a **subtest** step and refer to the test case containing the logic to fetch the token.

The execution of this subtest produces a response with variables, which then contains the token we extracted before. So as shown above, we use variables again to extract from the JSON body the token from the `variables.token`.
Now in your further test steps, just include the `Authorization: Bearer {{token}}` to reuse the token in the calls.
| 69.877551 | 280 | 0.783002 | eng_Latn | 0.999472 |
52173871bcf88ff634beb4c47ea917af816066e2 | 3,524 | md | Markdown | README.md | dmitryserenko/easyContactForm | c458698077056730d8b8024b8d08f4e290824ef8 | [
"MIT"
] | null | null | null | README.md | dmitryserenko/easyContactForm | c458698077056730d8b8024b8d08f4e290824ef8 | [
"MIT"
] | null | null | null | README.md | dmitryserenko/easyContactForm | c458698077056730d8b8024b8d08f4e290824ef8 | [
"MIT"
] | null | null | null | # easyContactForm
Simple contact form snippet for MODX Revolution 2.x.x
@author Dmitry Serenko
@copyright Copyright 2021, Dmitry Serenko
### Options
Additional options for customizing snippet
to - Primary email address for sending mail [[email protected]]
```shell
&to=`[email protected]`
```
cc - Add carbon copy to header of the mail [default=]
```shell
&cc=`[email protected], [email protected]`
```
bcc - Add blind carbon copy to header of the mail [default=]
```shell
&bcc=`[email protected], [email protected]`
```
subject - Subject of the mail [default=Feedback from the site yourdomain.com]
```shell
&subject=``
```
headline - Headline of the message [default=You have received a new message from the site]
```shell
&headline=``
```
success - [default=Your message has been successfully sent]
```shell
&success=``
```
input - Input list of the form [default={"name":"Contact person","email":"Email","phone":"Phone"}]
```shell
&input=``
```
textarea - Textarea list of the form [default=]
```shell
&textarea=`{"text":"Your message"}`
```
button - Text of the submit button [default=Submit]
```shell
&button=`Submit form`
```
placeholder - Display title as placeholder [default=false]
```shell
&placeholder=`true`
```
### Использование
Примеры быстрого использования сниппета
Форма обратной с вязи с полями: Контактное лицо, Email, Телефон, Сообщение
```shell
[[!easyContactForm?
&subject=`Сообщение с сайта [[++site_name]]`
&to=`[email protected]`
&headline=`Поступило новое сообщение с сайта [[++site_name]]`
&success=`Ваше сообщение успешно отправлено`
&input=`{"name":"Контактное лицо","email":"Email","phone":"Телефон"}`
&textarea=`{"text":"Сообщение"}`
&button=`Отправить`
]]
```
Форма заказа обратного звонка с полями: Контактное лицо, Телефон
```shell
[[!easyContactForm?
&subject=`Обратный звонок с сайта [[++site_name]]`
&to=`[email protected]`
&headline=`Поступил заказ обратного звонка с сайта [[++site_name]]`
&success=`Заявка успешно отправлена, наш менеджер перезвонит вам в ближайшее время`
&input=`{"name":"Контактное лицо","phone":"Телефон"}`
&button=`Заказать звонок`
]]
```
### Опции
Дополнительные опции для настройки сниппета с примерами использования
to - Основной Email для оправки формы [[email protected]]
```shell
&to=`[email protected]`
```
cc - Добавить получателей в копию (список адресов через запятую) [default=]
```shell
&cc=`[email protected], [email protected]`
```
bcc - Добавить получателей в скрытую копию (список адресов через запятую) [default=]
```shell
&bcc=`[email protected], [email protected]`
```
subject - Тема отправляемого сообщения [default=Feedback from the site yourdomain.com]
```shell
&subject=`Тема сообщения`
```
headline - Заголовок сообщения [default=You have received a new message from the site]
```shell
&headline=`Сообщение с сайта`
```
success - Текст после успешной отправки [default=Your message has been successfully sent]
```shell
&success=`Ваше сообщение успешно отправлено`
```
input - Список полей типа input (список полей в виде массива) [default={"name":"Contact person","email":"Email","phone":"Phone"}]
```shell
&input=`{"name":"Контактное лицо","email":"Email","phone":"Телефон"}`
```
textarea - Список полей типа textarea (список полей в виде массива) [default=]
```shell
&textarea=`{"text":"Сообщение"}`
```
button - Текст кнопки отправить [default=Submit]
```shell
&button=`Отправить`
```
placeholder - Отображать title как placeholder [default=false]
```shell
&placeholder=`true`
```
| 27.317829 | 129 | 0.71311 | kor_Hang | 0.209706 |
521744367f9c2f3dd2031491df9c7a256aabd19a | 792 | md | Markdown | _posts/2015-09-13-SWEET-16-Sherri-Hill-11039-Cap-Sleeve-Homecoming-Dress.md | lastgown/lastgown.github.io | f4a71e2a12910bc569ac7e77f819c2be1598d8e6 | [
"MIT"
] | null | null | null | _posts/2015-09-13-SWEET-16-Sherri-Hill-11039-Cap-Sleeve-Homecoming-Dress.md | lastgown/lastgown.github.io | f4a71e2a12910bc569ac7e77f819c2be1598d8e6 | [
"MIT"
] | null | null | null | _posts/2015-09-13-SWEET-16-Sherri-Hill-11039-Cap-Sleeve-Homecoming-Dress.md | lastgown/lastgown.github.io | f4a71e2a12910bc569ac7e77f819c2be1598d8e6 | [
"MIT"
] | null | null | null | ---
layout: post
date: '2015-09-13'
title: "SWEET 16 Sherri Hill 11039 Cap Sleeve Homecoming Dress"
category: SWEET 16
tags: [SWEET 16]
---
### SWEET 16 Sherri Hill 11039 Cap Sleeve Homecoming Dress
Just **$468.99**
###
Sherri Hill 11039 Cap Sleeve Homecoming Dress.
Available in sizes 0-18.
<a href="https://www.eudances.com/en/sweet-16/1753-sherri-hill-11039-cap-sleeve-homecoming-dress.html"><img src="//www.eudances.com/5209-thickbox_default/sherri-hill-11039-cap-sleeve-homecoming-dress.jpg" alt="Sherri Hill 11039 Cap Sleeve Homecoming Dress" style="width:100%;" /></a>
<!-- break -->
Buy it: [https://www.eudances.com/en/sweet-16/1753-sherri-hill-11039-cap-sleeve-homecoming-dress.html](https://www.eudances.com/en/sweet-16/1753-sherri-hill-11039-cap-sleeve-homecoming-dress.html)
| 41.684211 | 283 | 0.744949 | kor_Hang | 0.207608 |
5217fba894958a4e2bde9cbe1ba6fe285fb0ac02 | 233 | md | Markdown | _location_janbrueghel/cambridge-ma.md | brueghelfamily/brueghelfamily.github.io | a73351ac39b60cd763e483c1f8520f87d8c2a443 | [
"MIT"
] | null | null | null | _location_janbrueghel/cambridge-ma.md | brueghelfamily/brueghelfamily.github.io | a73351ac39b60cd763e483c1f8520f87d8c2a443 | [
"MIT"
] | null | null | null | _location_janbrueghel/cambridge-ma.md | brueghelfamily/brueghelfamily.github.io | a73351ac39b60cd763e483c1f8520f87d8c2a443 | [
"MIT"
] | null | null | null | ---
pid: cambridge-ma
title: 'Location: Cambridge, MA'
category: United States
label: Cambridge, MA
collection: location_janbrueghel
layout: locationpage_janbrueghel
order: '091'
permalink: "/janbrueghel/locations/cambridge-ma/"
---
| 21.181818 | 49 | 0.781116 | kor_Hang | 0.157799 |
521863102048cf955991f98987d1a1317bd2d31a | 66 | md | Markdown | README.md | Nethermaker/10-seconds | 86767f4724f038e8818125df4ed6eb1716fdf29c | [
"MIT"
] | null | null | null | README.md | Nethermaker/10-seconds | 86767f4724f038e8818125df4ed6eb1716fdf29c | [
"MIT"
] | null | null | null | README.md | Nethermaker/10-seconds | 86767f4724f038e8818125df4ed6eb1716fdf29c | [
"MIT"
] | null | null | null | # 10-seconds
UNL Game Dev Club repository for the 10-seconds team
| 22 | 52 | 0.787879 | eng_Latn | 0.990481 |
5218a88285fd15651d2cccb2a204310b3db303db | 96 | md | Markdown | .github/ISSUE_TEMPLATE.md | Hennamann/Live-Chat-Question-Flagger | 72349c938772b1fe9795747af1a997c663441a2a | [
"MIT"
] | 18 | 2017-09-05T11:37:36.000Z | 2021-08-19T17:42:29.000Z | .github/ISSUE_TEMPLATE.md | Hennamann/Live-Chat-Question-Flagger | 72349c938772b1fe9795747af1a997c663441a2a | [
"MIT"
] | 11 | 2017-08-17T17:06:57.000Z | 2020-12-09T02:46:15.000Z | .github/ISSUE_TEMPLATE.md | Hennamann/Live-Chat-Question-Flagger | 72349c938772b1fe9795747af1a997c663441a2a | [
"MIT"
] | 3 | 2017-08-23T13:57:34.000Z | 2020-05-07T18:25:45.000Z | Platform (OS):
Live Streaming Service(s):
Version:
Steps to reproduce:
Expected Behaviour:
| 9.6 | 26 | 0.739583 | eng_Latn | 0.353992 |
521969b2958b525e9e70d7a13c38fc5725e93790 | 318 | md | Markdown | _posts/2020-01-11-outlaws_feudal_leases_primogeniture_but.md | fernrees/anti_rent_revival | 660762309945417e79e9f9e6bdadc1c4b300858e | [
"MIT"
] | null | null | null | _posts/2020-01-11-outlaws_feudal_leases_primogeniture_but.md | fernrees/anti_rent_revival | 660762309945417e79e9f9e6bdadc1c4b300858e | [
"MIT"
] | null | null | null | _posts/2020-01-11-outlaws_feudal_leases_primogeniture_but.md | fernrees/anti_rent_revival | 660762309945417e79e9f9e6bdadc1c4b300858e | [
"MIT"
] | null | null | null | ---
layout: post
title: Outlaws feudal leases ('primogeniture') but...
date: 2020-01-11
categories:
- Juice
description: Outlaws feudal leases ('primogeniture') but with critical loop hole for 'sales'.
image:
image-sm:
---
Outlaws feudal leases ('primogeniture') but with critical loop hole for 'sales'. | 28.909091 | 94 | 0.710692 | eng_Latn | 0.89178 |
5219c35d1c7cc9698553358092c7dd229652e471 | 80 | md | Markdown | README.md | cnamejj/PollEngine | f1d6c736556ed1ec9ced9aec086d71da1ca8b628 | [
"Apache-2.0"
] | null | null | null | README.md | cnamejj/PollEngine | f1d6c736556ed1ec9ced9aec086d71da1ca8b628 | [
"Apache-2.0"
] | null | null | null | README.md | cnamejj/PollEngine | f1d6c736556ed1ec9ced9aec086d71da1ca8b628 | [
"Apache-2.0"
] | null | null | null | PollEngine
==========
Async network I/O framework for writing simple servers
| 11.428571 | 54 | 0.7 | eng_Latn | 0.953241 |
521a1823d9754f7dd980568e633b60a0e6c3e3fc | 29,307 | md | Markdown | 01_All_in_One_singleNIC.md | konono/tungsten-fabric-procedures | 7d3a1f06cf49008c70bece138623d3b21f34a855 | [
"MIT"
] | 3 | 2019-01-17T07:35:10.000Z | 2021-03-20T09:42:52.000Z | 01_All_in_One_singleNIC.md | konono/tungsten-fabric-procedures | 7d3a1f06cf49008c70bece138623d3b21f34a855 | [
"MIT"
] | null | null | null | 01_All_in_One_singleNIC.md | konono/tungsten-fabric-procedures | 7d3a1f06cf49008c70bece138623d3b21f34a855 | [
"MIT"
] | 3 | 2018-06-04T08:28:29.000Z | 2020-12-16T01:52:41.000Z | # Contrail 5.0 + OpenStack kolla ALL In One Install
## 0. Rqeuirement
Vagrant Box: centos/7
CentOS: CentOS Linux release 7.4.1708 (Core)
Kernel: 3.10.0-862.9.1.el7.x86_64
Network Interface: 1NIC
KVM Host: Enable Nested
contrail-ansible-deployer: commit a49186a1e454d45a3f8ac7499fc04885e42a037c
contrail-kolla-ansible: commit bea4145fb1044be0c637c4b8eef34dbc11aad25f
CONTAINER_REGISTRY: opencontrailnightly
CONTRAIL_VERSION: ocata-master-206
## 1. Create a VM using Vagrant
### 1.1. Create Vagrantfile & Directory
Target: **Host**
```
$ cd ~
$ mkdir c01
$ cd c01
$ cat <<EOF > Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.define :k1 do |k1|
end
config.vm.provider :libvirt do |lv|
lv.uri = 'qemu+unix:///system'
lv.cpus = 4
lv.memory = 65536
lv.boot 'hd'
    lv.management_network_name = "vag-mgmt01"
    lv.management_network_address = "192.168.120.0/24"
    lv.management_network_autostart = true
end
end
EOF
```
### 1.2. Deploy VM & Login
```
$ vagrant up
$ vagrant ssh
```
## 2. Prepare deployment
Target: **VM**
```
$ sudo su -
```
### 2.1. Enable Password authentication
```
$ sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
$ sudo systemctl restart sshd
$ sudo passwd root
```
### 2.2. Install required packages
```
$ sudo yum install -y vim ntp epel-release git ansible net-tools python-devel
```
### 2.3. Start and enable ntpd
```
$ sudo systemctl restart ntpd
$ sudo systemctl enable ntpd
```
### 2.4. Increase disk size
*The CentOS disk size defaults to 40 GB, but the AIO requirements call for 300 GB, so we need to increase the disk size.*
Target: **VM**
```
$ sudo shutdown -h now
```
Target: **Host**
```
cd /var/lib/libvirt/images/
sudo qemu-img resize c01_k1.img +250G
cd ~/c01
vagrant up
vagrant ssh
```
Target: **VM**
```
sudo fdisk /dev/vda
======================================================================
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): d
Partition number (1-3, default 3):
Partition 3 is deleted
Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p):
Using default response p
Partition number (3,4, default 3):
First sector (2101248-610271231, default 2101248):
Using default value 2101248
Last sector, +sectors or +size{K,M,G} (2101248-610271231, default 610271231):
Using default value 610271231
Partition 3 of type Linux and of size 290 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
======================================================================
sudo reboot
```
Target: **Host**
```
vagrant ssh
```
Target: **VM**
```
$ sudo su -
$ sudo pvresize /dev/vda3
$ sudo lvextend -L +250G /dev/mapper/VolGroup00-LogVol00
$ sudo xfs_growfs /dev/mapper/VolGroup00-LogVol00
```
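After growing the filesystem, it's worth verifying that the extra space is actually visible before moving on. A quick, non-destructive check:

```shell
# Confirm the root filesystem picked up the extra space after
# pvresize / lvextend / xfs_growfs (it should now report the enlarged size)
df -h /
```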
### 2.5. Configure /etc/hosts
**NW information in my environment**
```
======================================================================
[root@localhost ~]# ip -o a
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
2: eth0 inet 192.168.120.226/24 brd 192.168.120.255 scope global eth0\ valid_lft forever preferred_lft forever
2: eth0 inet6 fe80::5054:ff:fef1:5155/64 scope link \ valid_lft forever preferred_lft forever
======================================================================
```
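The notes above stop at the interface listing and never show the actual `/etc/hosts` entry. A minimal entry maps the node's IP to its hostname, as in the fragment below; the hostname `contrail-aio` is a placeholder, so substitute whatever `hostname -f` reports on your node:

```
192.168.120.226   contrail-aio.localdomain contrail-aio
```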
### 2.6. Update Kernel
The required kernel version as of May 30 is:
Linux localhost.localdomain 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```
$ yum -y update kernel
$ sudo reboot
```
### 2.7. Clone repository
```
$ cd ~/
$ git clone http://github.com/Juniper/contrail-ansible-deployer
$ cd contrail-ansible-deployer/
```
### 2.8. Apply patch
```
sed -i 's/roles.item /roles[item] /g' playbooks/roles/create_openstack_config/tasks/host_params.yml
```
***[Patch information](https://github.com/Juniper/contrail-ansible-deployer/pull/14)***
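To see what the patch above actually changes, the same substitution can be run on a sample line; the sample Jinja2 expression is illustrative, not copied from the playbook:

```shell
# 'roles.item ' (note the unescaped regex dot) is rewritten to 'roles[item] ',
# turning dotted attribute access into dict-style lookup in the Jinja2 expression.
echo 'with_items: "{{ roles.item }}"' | sed 's/roles.item /roles[item] /g'
# → with_items: "{{ roles[item] }}"
```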
### 2.9. Configure instances.yaml
**NW information in my environment (before deploy)**
```
======================================================================
[root@localhost ~]# ip -o a
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
2: eth0 inet 192.168.120.226/24 brd 192.168.120.255 scope global eth0\ valid_lft forever preferred_lft forever
2: eth0 inet6 fe80::5054:ff:fef1:5155/64 scope link \ valid_lft forever preferred_lft forever
======================================================================
```
**NW information in my environment (after deploy)**
```
======================================================================
[root@localhost ~]# ip -o a
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
2: eth0 inet6 fe80::5054:ff:fef1:5155/64 scope link \ valid_lft forever preferred_lft forever
8: vhost0 inet 192.168.120.226/24 brd 10.1.0.255 scope global vhost0\ valid_lft forever preferred_lft forever
8: vhost0 inet6 fe80::5054:ff:fedb:bf29/64 scope link \ valid_lft forever preferred_lft forever
9: docker0 inet 172.17.0.1/16 scope global docker0\ valid_lft forever preferred_lft forever
======================================================================
```
```
$ vim contrail-ansible-deployer/config/instances.yaml
provider_config:
bms:
ssh_pwd: lab
ssh_user: root
ntpserver: 210.173.160.27
domainsuffix: localdomain
# ssh_public_key: /home/centos/.ssh/id_rsa.pub # Optional. Not needed if ssh password is used.
# ssh_private_key: /home/centos/.ssh/id_rsa # Optional. Not needed if ssh password is used.
instances:
bms1:
provider: bms
ip: 192.168.120.226
roles:
config_database:
config:
control:
analytics_database:
analytics:
webui:
openstack_control:
openstack_network:
network_interface: eth0
openstack_storage:
openstack_monitoring:
vrouter:
VROUTER_GATEWAY: 192.168.120.1
PHYSICAL_INTERFACE: eth0
openstack_compute:
kolla_config:
customize:
nova.conf: |
[libvirt]
virt_type=kvm
cpu_mode=host-passthrough
kolla_globals:
openstack_release: ocata
enable_haproxy: no
enable_swift: no
enable_barbican: no
enable_heat: no
enable_ironic: no
kolla_internal_vip_interface: eth0
kolla_internal_vip_address: 192.168.120.226
# heat/templates/heat.conf.j2:api_server = {{ contrail_api_interface_address }}
contrail_api_interface_address: 192.168.120.226
# neutron/templates/ContrailPlugin.ini.j2:api_server_ip = {{ opencontrail_api_server_ip }}
opencontrail_api_server_ip: 192.168.120.226
kolla_passwords:
keystone_admin_password: lab
global_configuration:
CONTAINER_REGISTRY: opencontrailnightly
# REGISTRY_PRIVATE_INSECURE: True
# CONTAINER_REGISTRY_USERNAME: YourRegistryUser
# CONTAINER_REGISTRY_PASSWORD: YourRegistryPassword
contrail_configuration:
CONTRAIL_VERSION: ocata-master-117
UPGRADE_KERNEL: true
CLOUD_ORCHESTRATOR: openstack
CONTROLLER_NODES: 192.168.120.226
CONTROL_NODES: 192.168.120.226
ANALYTICSDB_NODES: 192.168.120.226
WEBUI_NODES: 192.168.120.226
ANALYTICS_NODES: 192.168.120.226
CONFIGDB_NODES: 192.168.120.226
CONFIG_NODES: 192.168.120.226
RABBITMQ_NODE_PORT: 5673
AUTH_MODE: keystone
KEYSTONE_AUTH_URL_VERSION: /v3
KEYSTONE_AUTH_HOST: 192.168.120.226
```
### 2.10. Configure instances
```
$ cd contrail-ansible-deployer
$ ansible-playbook -i inventory/ playbooks/configure_instances.yml
```
### 2.11. Apply patch
```
$ sed -i 's/use_neutron = True//g' ~/contrail-kolla-ansible/ansible/roles/nova/templates/nova.conf.j2
```
***[Patch information](https://bugs.launchpad.net/kolla-ansible/+bug/1651665)***
### 2.11.1. Deploy OpenStack
```
$ ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_openstack.yml
```
### 2.11.2. Deploy Contrail
```
$ ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml
```
### 2.12. Check contrail-status
```
[root@localhost ~]# contrail-status
Pod Service Original Name State Status
analytics alarm-gen contrail-analytics-alarm-gen running Up 18 minutes
analytics api contrail-analytics-api running Up 20 minutes
analytics collector contrail-analytics-collector running Up 20 minutes
analytics nodemgr contrail-nodemgr running Up 20 minutes
analytics query-engine contrail-analytics-query-engine running Up 20 minutes
config api contrail-controller-config-api running Up 20 minutes
config cassandra contrail-external-cassandra running Up 20 minutes
config device-manager contrail-controller-config-devicemgr running Up 20 minutes
config nodemgr contrail-nodemgr running Up 20 minutes
config rabbitmq contrail-external-rabbitmq running Up 20 minutes
config schema contrail-controller-config-schema running Up 20 minutes
config svc-monitor contrail-controller-config-svcmonitor running Up 20 minutes
config zookeeper contrail-external-zookeeper running Up 20 minutes
control control contrail-controller-control-control running Up 20 minutes
control dns contrail-controller-control-dns running Up 20 minutes
control named contrail-controller-control-named running Up 20 minutes
control nodemgr contrail-nodemgr running Up 20 minutes
database cassandra contrail-external-cassandra running Up 20 minutes
database kafka contrail-external-kafka running Up 20 minutes
database nodemgr contrail-nodemgr running Up 20 minutes
database zookeeper contrail-external-zookeeper running Up 20 minutes
vrouter agent contrail-vrouter-agent running Up 20 minutes
vrouter nodemgr contrail-nodemgr running Up 20 minutes
webui job contrail-controller-webui-job running Up 20 minutes
webui web contrail-controller-webui-web running Up 20 minutes
vrouter kernel module is PRESENT
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active
== Contrail database ==
kafka: active
nodemgr: active
zookeeper: active
cassandra: active
== Contrail analytics ==
nodemgr: active
api: active
collector: active
query-engine: active
alarm-gen: active
== Contrail webui ==
web: active
job: active
== Contrail vrouter ==
nodemgr: active
agent: active
== Contrail config ==
api: active
zookeeper: active
svc-monitor: active
nodemgr: active
device-manager: active
cassandra: active
rabbitmq: active
schema: active
```
:)
## Tips
### How to start using OpenStack
```
[root@localhost contrail-kolla-ansible]# pip install python-openstackclient
-- snip --
Successfully installed PrettyTable-0.7.2 appdirs-1.4.3 asn1crypto-0.24.0 cffi-1.11.5 cliff-2.12.0 cmd2-0.8.7 contextlib2-0.5.5 cryptography-2.2.2 deprecation-2.0.2 dogpile.cache-0.6.5 futures-3.2.0 jsonpatch-1.23 jsonpointer-2.0 keystoneauth1-3.7.0 msgpack-0.5.6 munch-2.3.2 openstacksdk-0.13.0 os-client-config-1.31.1 os-service-types-1.2.0 osc-lib-1.10.0 oslo.serialization-2.25.0 packaging-17.1 pyOpenSSL-18.0.0 pyperclip-1.6.1 python-cinderclient-3.5.0 python-glanceclient-2.11.0 python-keystoneclient-3.16.0 python-novaclient-10.2.0 python-openstackclient-3.15.0 requestsexceptions-1.4.0 subprocess32-3.5.1 unicodecsv-0.14.1 warlock-1.3.0 wcwidth-0.1.7
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@localhost contrail-kolla-ansible]# source /etc/kolla/admin-openrc.sh
[root@localhost contrail-kolla-ansible]# sudo yum -y install wget
-- snip --
Installing : wget-1.14-15.el7_4.1.x86_64 1/1
Verifying : wget-1.14-15.el7_4.1.x86_64 1/1
Installed:
wget.x86_64 0:1.14-15.el7_4.1
Complete!
[root@localhost contrail-kolla-ansible]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
--2018-05-30 18:12:45-- http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 64.90.42.85, 2607:f298:6:a036::bd6:a72a
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12716032 (12M) [text/plain]
Saving to: ‘cirros-0.4.0-x86_64-disk.img’
100%[=====================================================================================================================================================================================================================>] 12,716,032 2.65MB/s in 12s
2018-05-30 18:12:57 (1.02 MB/s) - ‘cirros-0.4.0-x86_64-disk.img’ saved [12716032/12716032]
[root@localhost contrail-kolla-ansible]# openstack image create cirros2 --disk-format qcow2 --public --container-format bare --file cirros-0.4.0-x86_64-disk.img
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2018-05-30T18:13:02Z |
| disk_format | qcow2 |
| file | /v2/images/480d6322-6089-4e2f-aaa2-5810f5e3b0db/file |
| id | 480d6322-6089-4e2f-aaa2-5810f5e3b0db |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros2 |
| owner | 948b76c981d248beb070aa399ec4e40f |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2018-05-30T18:13:03Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
[root@localhost contrail-kolla-ansible]# openstack network create testvn
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | None |
| availability_zones | None |
| created_at | None |
| description | |
| dns_domain | None |
| id | 2299f8d8-4320-4598-b3ff-05c9c5216d45 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| mtu | None |
| name | testvn |
| port_security_enabled | True |
| project_id | 948b76c981d248beb070aa399ec4e40f |
| provider:network_type | None |
| provider:physical_network | None |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | None |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | None |
+---------------------------+--------------------------------------+
[root@localhost contrail-kolla-ansible]# openstack subnet create --subnet-range 192.168.100.0/24 --network testvn subnet1
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.100.2-192.168.100.254 |
| cidr | 192.168.100.0/24 |
| created_at | 2018-05-30T18:13:12.704717 |
| description | None |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.100.1 |
| host_routes | |
| id | 4e5c3056-5fc1-42c4-9887-38acd96af7df |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | subnet1 |
| network_id | 2299f8d8-4320-4598-b3ff-05c9c5216d45 |
| project_id | 948b76c981d248beb070aa399ec4e40f |
| revision_number | None |
| segment_id | None |
| service_types | None |
| subnetpool_id | None |
| tags | |
| updated_at | 2018-05-30T18:13:12.704717 |
+-------------------+--------------------------------------+
[root@localhost contrail-kolla-ansible]# openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 7030eb41-1cd1-499d-b23f-c1e5bce6db3b |
| name | m1.tiny |
| os-flavor-access:is_public | True |
| properties | |
| ram | 512 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+
[root@localhost contrail-kolla-ansible]# NET_ID=`openstack network list | grep testvn | awk -F '|' '{print $2}' | tr -d ' '`
[root@localhost contrail-kolla-ansible]# openstack server create --flavor m1.tiny --image cirros2 --nic net-id=${NET_ID} test_vm1
+-------------------------------------+------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 4c62J5smK3nJ |
| config_drive | |
| created | 2018-05-30T18:13:33Z |
| flavor | m1.tiny (7030eb41-1cd1-499d-b23f-c1e5bce6db3b) |
| hostId | |
| id | b01ac0d8-64ce-4e0b-8325-61acc115fa3b |
| image | cirros2 (480d6322-6089-4e2f-aaa2-5810f5e3b0db) |
| key_name | None |
| name | test_vm1 |
| progress | 0 |
| project_id | 948b76c981d248beb070aa399ec4e40f |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2018-05-30T18:13:33Z |
| user_id | fc353d7393644069a0abaad10e508a00 |
| volumes_attached | |
+-------------------------------------+------------------------------------------------+
[root@localhost contrail-kolla-ansible]# openstack server create --flavor m1.tiny --image cirros2 --nic net-id=${NET_ID} test_vm2
+-------------------------------------+------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | UG7izbYVjgNo |
| config_drive | |
| created | 2018-05-30T18:13:38Z |
| flavor | m1.tiny (7030eb41-1cd1-499d-b23f-c1e5bce6db3b) |
| hostId | |
| id | f9dceb00-2a5f-4430-a3f8-c4d187243597 |
| image | cirros2 (480d6322-6089-4e2f-aaa2-5810f5e3b0db) |
| key_name | None |
| name | test_vm2 |
| progress | 0 |
| project_id | 948b76c981d248beb070aa399ec4e40f |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2018-05-30T18:13:38Z |
| user_id | fc353d7393644069a0abaad10e508a00 |
| volumes_attached | |
+-------------------------------------+------------------------------------------------+
[root@localhost contrail-kolla-ansible]# ip route
default via 192.168.120.1 dev eth1
192.168.120.0/24 dev eth0 proto kernel scope link src 192.168.120.226
[root@localhost contrail-kolla-ansible]# ssh [email protected]
The authenticity of host '169.254.0.3 (169.254.0.3)' can't be established.
ECDSA key fingerprint is SHA256:CysQU7lFgg8m3WDwebbvKrQwyJ85VzLcoAAel9ItQwA.
ECDSA key fingerprint is MD5:3e:cf:d2:b8:7f:f0:1c:8d:b0:c6:82:be:0f:4f:e9:3f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '169.254.0.3' (ECDSA) to the list of known hosts.
[email protected]'s password:
$
$ ip -o a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000\ link/ether 02:b7:60:c0:c5:bf brd ff:ff:ff:ff:ff:ff
2: eth0 inet 192.168.100.3/24 brd 192.168.100.255 scope global eth0\ valid_lft forever preferred_lft forever
2: eth0 inet6 fe80::b7:60ff:fec0:c5bf/64 scope link \ valid_lft forever preferred_lft forever
$ ping 192.168.100.4
PING 192.168.100.4 (192.168.100.4): 56 data bytes
64 bytes from 192.168.100.4: seq=0 ttl=64 time=2.695 ms
64 bytes from 192.168.100.4: seq=1 ttl=64 time=2.316 ms
^C
--- 192.168.100.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 2.316/2.505/2.695 ms
$ Connection to 169.254.0.3 closed.
```
| 45.935737 | 657 | 0.477531 | eng_Latn | 0.306988 |
521a4f080586f3fe3b1587d4949c35d860e3c238 | 160 | md | Markdown | README.md | AlexCCo/MoodCalendar | 8578d8b193200537f1e57f6e81122c54c35c686e | [
"MIT"
] | 1 | 2021-03-05T07:23:17.000Z | 2021-03-05T07:23:17.000Z | README.md | AlexCCo/MoodCalendar | 8578d8b193200537f1e57f6e81122c54c35c686e | [
"MIT"
] | null | null | null | README.md | AlexCCo/MoodCalendar | 8578d8b193200537f1e57f6e81122c54c35c686e | [
"MIT"
] | null | null | null | # MoodCalendar
MoodCalendar is a mobile app design to work as a calendar but with the added of rate you feelings about a day with colors and keep track of them
| 53.333333 | 144 | 0.8 | eng_Latn | 0.999891 |
521b3f496c20b07393646c96b56d3f0962647f73 | 962 | md | Markdown | data/publication/historic-england/protected-wreck-sites.md | psd/digital-land-collector | e33abfeab9966a6918d2c23f1892caec0c2accf2 | [
"MIT"
] | 2 | 2019-11-08T18:56:02.000Z | 2021-04-09T23:10:59.000Z | data/publication/historic-england/protected-wreck-sites.md | digital-land/digital-land-collector | e33abfeab9966a6918d2c23f1892caec0c2accf2 | [
"MIT"
] | 3 | 2018-06-20T10:53:57.000Z | 2018-06-29T08:59:09.000Z | data/publication/historic-england/protected-wreck-sites.md | digital-land/digital-land-collector | e33abfeab9966a6918d2c23f1892caec0c2accf2 | [
"MIT"
] | 2 | 2019-04-24T14:08:26.000Z | 2019-05-22T10:47:52.000Z | ---
publication: protected-wreck-sites
name: Protected Wreck Sites
organisation: government-organisation:PB1164
copyright: historic-england
licence: historic-england
data-gov-uk: 5bdf0248-1097-49f7-b010-d3c378888ddf
documentation-url: https://www.historicengland.org.uk/listing/the-list/data-downloads
data-url: https://s3.eu-west-2.amazonaws.com/digital-land/english-heritage/2018-06-15/Protected+Wreck+Sites+(England).zip
task: shape-zip
prefix: protected-wreck-site
shape-zip-path: ProtectedWrecks_15June2018.shp
key: ListEntry
---
The Government has the power to safeguard the site of any shipwreck in English territorial waters. Historic England manages the licensing scheme that enables access to the wreck sites. It is a criminal offence to interfere with a protected wreck without a licence. Wreck sites are identified as being likely to contain the remains of a vessel, or its contents, which are of historical, artistic or archaeological importance.
| 56.588235 | 425 | 0.817048 | eng_Latn | 0.968655 |
521c9eec082e4f15c63d8503c57bfdda0f2ea245 | 1,620 | md | Markdown | _yearly/2021_Yearly_40_01.md | seven-teams/seven-teams.github.io | 8144c65d236846c85f7d4f80eee616ab4302d93c | [
"MIT"
] | null | null | null | _yearly/2021_Yearly_40_01.md | seven-teams/seven-teams.github.io | 8144c65d236846c85f7d4f80eee616ab4302d93c | [
"MIT"
] | null | null | null | _yearly/2021_Yearly_40_01.md | seven-teams/seven-teams.github.io | 8144c65d236846c85f7d4f80eee616ab4302d93c | [
"MIT"
] | 1 | 2021-06-22T15:09:51.000Z | 2021-06-22T15:09:51.000Z | ---
title: 'Celebrating 40 Years of Sahaja Yoga in Australia, Canada, Italy and U.S.A. and its Culture, Post 1 on the Epiphany Day'
date: 2021-01-06
permalink: /yearly/2021/0106
tags:
- 40 Years of Sahaja Yoga in Australia, Canada, Italy and U.S.A. and its Culture
---
<div style="text-align: left"><img src="/images/Celebrating40YearsSahajaYoga.png" width="250" /></div><br>
<div style="text-align: center"><img src="/images/image610.png" /></div>
<br>
<p style="color:DeepPink; text-align:center">
<font size="+2"><b>The Three People Who Went To See Jesus When He Was Born Were Brahmā, Viṣhṇu, Maheśha</b><br></font>
</p>
<p>
"The three people who went to see Jesus when He was born were Brahmā, Viṣhṇu, Maheśha. Is described, were the three great people who went to see Jesus [when] He was born was Śhrī Kṛiṣhṇa as Viṣhṇu, and Maheśha is Śhiva, and Brahmā as Brahmadeva. All these people went to worship Her. Nobody knew from where they came, who they were: they were called as the three wise men."<br>
<font color="blue"><b>1983-0307 Public Program, Day 2, Meeting Room, Town Hall, 128 King William St and Pirie Street, Adelaide, SA, Australia</b></font><br>
</p>
Links to suggested talk: <a href="https://vimeo.com/104918602"> vimeo</a>, <a href="https://www.youtube.com/watch?v=RHjHc8BaT_k"> youtube</a><br>
<iframe width="560" height="315" src="https://www.youtube.com/embed/RHjHc8BaT_k" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p style="color:red;">Jay Śhrī Mātājī!<br></p>
Yearly Topics Team
| 52.258065 | 384 | 0.723457 | eng_Latn | 0.921907 |
521e139e924ac2b6d5626d16fd410b0a006b40a0 | 2,871 | md | Markdown | _posts/2016-06-30-knowledge-engine-expert-system.md | ning-yang/ning-yang.github.io | 92a95920b63e07bc1912284a0ba35b62fe48bbd3 | [
"CC-BY-4.0"
] | null | null | null | _posts/2016-06-30-knowledge-engine-expert-system.md | ning-yang/ning-yang.github.io | 92a95920b63e07bc1912284a0ba35b62fe48bbd3 | [
"CC-BY-4.0"
] | null | null | null | _posts/2016-06-30-knowledge-engine-expert-system.md | ning-yang/ning-yang.github.io | 92a95920b63e07bc1912284a0ba35b62fe48bbd3 | [
"CC-BY-4.0"
] | null | null | null | ---
layout: post
title: "Knowledge Engine (expert system)"
date: 2016-06-30 14:35:19
comments: true
description: "Knowledge Engine (expert system)"
keywords: "knowledge engine, pyke, expert system, python"
categories:
- development
tags:
- python
---
While developing a backend service, I came across the concept of a knowledge engine, specifically [PYKE](http://pyke.sourceforge.net/). After spending a few days with it, I would like to write down what I've learned so far.
# what is knowledge engine
Basically, you tell the engine a bunch of rules and facts. Based on those rules and facts, you can ask the engine different questions and let the engine solve the puzzle of how to reach your goal.
Rules come in two basic types:
- forward chaining: if ... then ...
- backward chaining: Then ... if...
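To make the forward ("if ... then ...") style concrete, here is a tiny self-contained forward chainer in Python. This is not real Pyke syntax (Pyke rules live in `.krb` files with their own notation); the rule and fact names below are invented for illustration:

```python
# Toy forward chainer: each rule reads "if all premises hold, then assert
# the conclusion". Rules fire repeatedly until no new fact can be derived.

def forward_chain(facts, rules):
    """Return the closure of `facts` under the given if-then rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented rules: an uncharged object needs charging; if the charge
# service is up, we can send it a request.
rules = [
    ({"object_uncharged"}, "needs_charge"),
    ({"needs_charge", "charge_service_up"}, "send_charge_request"),
]

derived = forward_chain({"object_uncharged", "charge_service_up"}, rules)
print(sorted(derived))
```

A backward-chaining rule is read the other way round: to establish the goal `send_charge_request`, first establish `needs_charge`, and so on.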
# how do we use it
Each object has a current state, which gives us some basic facts. When the service receives a request, it sets the target state accordingly.
Now we run the forward-chaining rules to derive more facts from the initial ones, then use the backward-chaining rules to find the steps we need to take to reach the target state.
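A rough sketch of that goal-directed flow in plain Python (state names and the transition table are invented; real Pyke backward chaining is more general, with pattern variables and multiple preconditions per rule):

```python
# Backward-chaining flavour: each step reads "Then <new state> if <state>".
# We search, goal-directed, for a chain of steps that drives the object's
# current state to the requested target state.

def plan(state, goal, steps, seen=None):
    """Return a list of step names driving `state` to `goal`, or None."""
    if state == goal:
        return []
    seen = seen or {state}
    for name, (pre, post) in steps.items():
        if pre == state and post not in seen:
            rest = plan(post, goal, steps, seen | {post})
            if rest is not None:
                return [name] + rest
    return None

# Invented transition table: step name -> (required state, resulting state).
steps = {
    "charge": ("uncharged", "charged"),
    "deploy": ("charged", "deployed"),
    "cancel": ("charged", "cancelled"),
}

print(plan("uncharged", "deployed", steps))  # ['charge', 'deploy']
```

Retry and cancel fall out naturally from this shape: resubmitting an object whose state is unchanged re-selects the same step, and cancelling is just planning towards a different goal state.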
# Pros vs. Cons
## Pros
### 1. easy to program complicated operations:
If an operation involves making calls to several different services, waiting for the responses, and then taking different actions based on those responses, the engine handles it cleanly. You only need to care about the small steps and let the engine put them together.
### 2. increase code reuse:
Each rule (step) doesn't need to care about what happened before. It only cares about the current state of the objects: if an object meets the rule, the rule just performs the corresponding operation. So it is common for several operations to share the same rules. For example, if we need to charge objects, we just check whether the state of the object is 'uncharged', send the request to our 'charge' service to do the work, and then push the state of the object to 'charged'.
### 3. easy for handling retry and cancel
If some operation fails and we need to retry, we can just throw the object back to the engine without modifying its state, so that the engine will pick the same rule again.
For cancellation, we only need to change the target state of the object, and the engine will nicely push the object's state to 'cancelled'.
## Cons
### 1. hard to debug
The rules are not written as ordinary code, so they are not easy to step through with a debugger.
### 2. unclear change scope
If you modify a rule, the scope of the change is unclear, because the rule may be shared by many operations that are hard to enumerate. Some operation may accidentally match the new rule and cause unexpected behavior.
### 3. need to be designed very carefully with everything in mind
Since the rule base is not easy to change after the fact, if you don't design it correctly up front, later changes such as renaming a state or adding/removing states will not be an easy task.
| 46.306452 | 449 | 0.761059 | eng_Latn | 0.99984 |
521f19480fb1c73837545edc38bb3477aa774c40 | 7,164 | md | Markdown | _pages/0_Logging_on.md | enormandeau/eukaryotic-genome-assembly.github.io | 5d1a4c2fe6edfa1507e9c823939092bf5e934e46 | [
"MIT"
] | null | null | null | _pages/0_Logging_on.md | enormandeau/eukaryotic-genome-assembly.github.io | 5d1a4c2fe6edfa1507e9c823939092bf5e934e46 | [
"MIT"
] | null | null | null | _pages/0_Logging_on.md | enormandeau/eukaryotic-genome-assembly.github.io | 5d1a4c2fe6edfa1507e9c823939092bf5e934e46 | [
"MIT"
] | null | null | null | ---
title: "Logging on to the cluster"
layout: archive
permalink: /logging_on/
---
Typically when working in bioinformatics we would log into a computing cluster such as the one we have prepared for this course. Such clusters provide computational resources far more powerful than the average personal computer, allowing us to run the highly complex computational tasks required by some bioinformatics analyses. When you are back at your home institution, you can likely use a similar method to log into the computing nodes there. How do we log in for this course?
### Mac OS X and Linux users
#### Logging on
If you are using a Mac or Linux machine, you will need to open a `terminal` window and then type `ssh`.
`ssh` stands for [secure shell](https://en.wikipedia.org/wiki/Secure_Shell) and is a way of interacting with remote servers. You will need to log in to the cluster using both `ssh` and a keyfile that has been generated for you.
Firstly, download the keyfile and open a terminal window. Then copy it into your home directory like so:
```shell
cp mark.pem ~
```
Give your user permission to read and write to the keyfile:
```shell
chmod 600 mark.pem
```
Then you should be able to log in with `ssh` whatever your working directory is. You need to provide `ssh` with the path to your key, which you can do with the `-i` flag. This basically points to your identity file or keyfile. For example:
```shell
ssh -i "~/mark.pem" [email protected]
```
Of course you will need to replace the login credentials shown here (i.e. the username and keyfile name) with your own. **Also be aware that the cluster IP address will change every day**. We will update you on this each day.
You might be prompted to accept an RSA key - if so just type yes and you will log in to the cluster!
#### Downloading and uploading files
Occasionally, we will need to move files between the cluster and our local machines. To do this, we can use a command-line utility called `scp` or [secure copy](https://en.wikipedia.org/wiki/Secure_copy). It works in a similar way to `ssh`. Let's try making a dummy file in our local home directory and then uploading it to our home directory on the cluster.
```shell
# make a file
touch test_file
# upload to cluster
scp -i "~/mark.pem" test_file [email protected]:~/
```
Just to break this down a little: we are simply copying a file, `test_file` in this case, to the cluster. After the `:` symbol, we specify where on the cluster we are placing the file; here we use `~/` to specify the home directory.
Copying files back on to our local machine is just as straightforward. You can do that like so:
```shell
# download to local
scp -i "~/mark.pem" [email protected]:~/test_file ./
```
Here all we did was use `scp` with the cluster address first and the destination (our working directory) second, i.e. `./`
#### Making life a bit easier
If you are logging in and copying from a cluster regularly, it is sometimes good to use an `ssh` alias. Because the cluster IP address changes every day, we will not be using these during the course. However, if you would like some information on how to set them up, see [here](https://markravinet.github.io/CEES_tips_&_tricks.html)
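For reference, an alias is just a stanza in `~/.ssh/config`. The host nickname below is made up, and the address is the example IP used above (remember it changes daily), so treat this as a template rather than working settings:

```
Host course-cluster
    HostName 35.178.151.113
    User mark
    IdentityFile ~/mark.pem
    ServerAliveInterval 180
```

With a stanza like this in place, `ssh course-cluster` and `scp test_file course-cluster:~/` would work without repeating the `-i` flag each time.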
### Windows users
#### Logging on
If you are using a Windows machine, you will need to log on using [PuTTY](https://www.putty.org/) since there is no native `ssh` client. PuTTY does not natively support the private key format (.pem) needed to log in to our Amazon cloud instance. You first need to convert the private key that we gave you to a key that PuTTY can read. PuTTY has a tool named PuTTYgen, which can convert keys to the required PuTTY format (.ppk). When you installed PuTTY, it will also have installed PuTTYgen.
First, start PuTTYgen (for example, from the Start menu, choose All Programs > PuTTY > PuTTYgen). Then select RSA and click on Load:

In the new window that pops up, change "PuTTY Private Key Files" to "All Files" to allow you to find your .pem file.

Then save your key and click on YES to dismiss the Warning as shown below.

Great, now your key file is ready and we can start Putty. In Putty, enter your user name and the IP address in the format \<user_name\>@\<IP adress\>. Make sure that 22 is given as Port and that SSH is selected.

Next, on the menu on the left, expand the "SSH" panel by clicking on the + and select "Auth". Then, select your new putty format key file (.ppk) with Browse. Do NOT click on Open yet.

To make sure that you will not get logged out if you are inactive for a while, you can configure PuTTY to automatically send 'keepalive' signals at regular intervals to keep the session active. Click on Connection, then insert 180 to send keepalive signals every 3 minutes. Do NOT click on Open yet.

To avoid having to change the settings each time you log in, you can save the PuTTY session. Click on Session to get back to the basic options window. Once you are happy with all settings, give the session a name and click Save. Now you are ready to start the session with "Open". The first time PuTTY will display a security alert dialog box that asks whether you trust the host you are connecting to. Click yes.

The next time you log in, you can just click on the saved session and click Load. If the IP address changed in the meantime (e.g. because we stopped the Amazon instance overnight), you will need to replace the IP address with the new one. I would then recommend saving the changed settings. Then simply click Open to start the session.

If the IP address did not change and you just want to login again, you can also right-click on the putty symbol in the taskbar (provided that you have pinned it to the taskbar) and select the session.

#### Downloading and uploading files with Filezilla
FileZilla is a handy tool for moving files to and from a remote server such as the Amazon cloud or your university's cluster.
Open Filezilla and choose Edit -> Settings.

Next, choose SFTP and Add the .pem key file as indicated below and click OK.

Finally, enter the IP address and the user name and when you hit enter, it should connect you. Next, time, you can use the Quickconnect dropdown menu, provided the IP address has not changed in the meantime.

Now you will see the file directory system (folders) on your local computer on the left and your folders on the amazon cloud on the right. You can now just drag and drop files from one side to the other.
#### Acknowledgements
Although we have made some small adjustments, this *log in* tutorial was mainly prepared by Dr. Mark Ravinet and Dr. Joana I. Meier for their Physalia course on Speciation Genomics (https://www.physalia-courses.org/courses-workshops/course37/). We would like to thank both of them for allowing us to use it in this course.
| 56.857143 | 493 | 0.755165 | eng_Latn | 0.998909 |
521f536bef5046be976a7e73a20451a6687bda67 | 2,446 | md | Markdown | User Guide.md | mylibrar/stave | 43145015253d0577dfc757419ad8b4fa06a04042 | [
"Apache-2.0"
] | 35 | 2020-01-29T04:21:10.000Z | 2021-12-13T01:44:28.000Z | User Guide.md | mylibrar/stave | 43145015253d0577dfc757419ad8b4fa06a04042 | [
"Apache-2.0"
] | 86 | 2020-04-17T16:36:13.000Z | 2022-03-25T22:51:34.000Z | User Guide.md | mylibrar/stave | 43145015253d0577dfc757419ad8b4fa06a04042 | [
"Apache-2.0"
] | 18 | 2020-02-04T17:40:02.000Z | 2021-06-17T07:11:42.000Z | # User Guide
An example database is provided for users to get familiar with Stave. [Go to example](#example-database)
## Permission System
This chapter is to familiarize users with the permission system in Stave.
With permission system, the user will be able to assign permission **per project**, **per user**. For example, user Joe could be allowed to edit project A but not project B.
### Preparation (will be moved into README when merged)
```
pip install django-guardian
```
### Permission Design
#### 6 Per Object Permissions:
- read_project -- view **a certain project** and its documents
- edit_annotation -- add, edit and delete annotations in documents of **a certain project**
- edit_text -- edit text pack and other metadata of documents, of **a certain project**
- edit_project -- edit metadata of **a certain project**
- remove_project -- remove documents in **a certain project**
- new_project -- create new documents in **a certain project**
#### 2 Default Django Permissions:
- add_project -- create new project
- view_project -- access all projects
Besides, a staff user and the owner (default the creator) of the project has all access on this project (with documents inside).
### How to assign permissions
Firstly, log in as a staff member through Django admin site, which is http://localhost:8000/admin/
#### To assign permission per object:
1. click the project

2. click "Object Permissions" in the upper-right corner

3. Enter exact user name to find or edit users already modified before

4. Assign permissions.
Reminder: Only assign permissions mentioned in Permission Design, others are not yet configurated.

## Example Database
**Users:**
- *normal1*
- password: *example1*
- *normal2*
- password: *example2*
**Projects:**
- *example-project*-1
- owner is user *normal1*
- *example-project-2*
- owner is *normal2*
| 30.962025 | 175 | 0.725675 | eng_Latn | 0.915092 |
521fe404382912d7081a5b6b2650d46a5bdd5deb | 25 | md | Markdown | README.md | srahim12/Scheme | f8640f6956331c9b4d97ce7aa2c6ea5caebc7f64 | [
"MIT"
] | null | null | null | README.md | srahim12/Scheme | f8640f6956331c9b4d97ce7aa2c6ea5caebc7f64 | [
"MIT"
] | null | null | null | README.md | srahim12/Scheme | f8640f6956331c9b4d97ce7aa2c6ea5caebc7f64 | [
"MIT"
] | null | null | null | # Scheme
Scheme projects
| 8.333333 | 15 | 0.8 | eng_Latn | 0.975971 |
52207d97d6d5556ce5d343d34b7b5770f8de24c8 | 983 | md | Markdown | career/management/the-essential-questions-that-have-powered-this-top-silicon-valley-managers-career.md | eginwong/course-notes | 27dc69570537e70afcc8d7bb760c6b967f058496 | [
"MIT"
] | 1 | 2018-10-18T14:44:35.000Z | 2018-10-18T14:44:35.000Z | career/management/the-essential-questions-that-have-powered-this-top-silicon-valley-managers-career.md | eginwong/course-notes | 27dc69570537e70afcc8d7bb760c6b967f058496 | [
"MIT"
] | null | null | null | career/management/the-essential-questions-that-have-powered-this-top-silicon-valley-managers-career.md | eginwong/course-notes | 27dc69570537e70afcc8d7bb760c6b967f058496 | [
"MIT"
] | null | null | null | # The Essential Questions That Have Powered This Top Silicon Valley Manager’s Career
[ref](https://firstround.com/review/the-essential-questions-that-have-powered-this-top-silicon-valley-managers-career/)
- questions to ask to build trust
- do my reports regularly bring their biggest challenges to my attention?
- would my reports work for me again?
  - do all my 1:1s feel a little awkward? They should: the most important conversations are uncomfortable because they touch deep fears and secret hopes
  - am I giving feedback often enough?
- does this feedback resonate with you? Why or why not?
- to make sure we're on the same page, what are your takeaways and next steps?
- managers empower others to save themselves
- listen and probe
- what's top of mind for you?
- what are your priorities for this week?
- what does your ideal outcome look like?
- what do you really care about?
- what's the worst-case scenario you're worried about?
- how can I make you more successful? | 54.611111 | 142 | 0.766022 | eng_Latn | 0.999586 |
5220f0aa8c9524037c12b3e07e4e80a8ccba90ce | 7,530 | md | Markdown | azurerm/r/azurerm_api_management_diagnostic.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 78 | 2021-01-15T14:10:30.000Z | 2022-02-14T09:17:40.000Z | azurerm/r/azurerm_api_management_diagnostic.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 5 | 2021-04-09T15:21:28.000Z | 2022-01-28T19:02:05.000Z | azurerm/r/azurerm_api_management_diagnostic.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 30 | 2021-01-17T13:16:57.000Z | 2022-03-21T12:52:08.000Z | # azurerm_api_management_diagnostic
[back](../azurerm.md)
### Index
- [Example Usage](#example-usage)
- [Variables](#variables)
- [Resource](#resource)
- [Outputs](#outputs)
### Terraform
```terraform
terraform {
required_providers {
azurerm = ">= 2.54.0"
}
}
```
[top](#index)
### Example Usage
```terraform
module "azurerm_api_management_diagnostic" {
source = "./modules/azurerm/r/azurerm_api_management_diagnostic"
# always_log_errors - (optional) is a type of bool
always_log_errors = null
# api_management_logger_id - (required) is a type of string
api_management_logger_id = null
# api_management_name - (required) is a type of string
api_management_name = null
# enabled - (optional) is a type of bool
enabled = null
# http_correlation_protocol - (optional) is a type of string
http_correlation_protocol = null
# identifier - (required) is a type of string
identifier = null
# log_client_ip - (optional) is a type of bool
log_client_ip = null
# resource_group_name - (required) is a type of string
resource_group_name = null
# sampling_percentage - (optional) is a type of number
sampling_percentage = null
# verbosity - (optional) is a type of string
verbosity = null
backend_request = [{
body_bytes = null
headers_to_log = []
}]
backend_response = [{
body_bytes = null
headers_to_log = []
}]
frontend_request = [{
body_bytes = null
headers_to_log = []
}]
frontend_response = [{
body_bytes = null
headers_to_log = []
}]
timeouts = [{
create = null
delete = null
read = null
update = null
}]
}
```
[top](#index)
### Variables
```terraform
variable "always_log_errors" {
description = "(optional)"
type = bool
default = null
}
variable "api_management_logger_id" {
description = "(required)"
type = string
}
variable "api_management_name" {
description = "(required)"
type = string
}
variable "enabled" {
description = "(optional)"
type = bool
default = null
}
variable "http_correlation_protocol" {
description = "(optional)"
type = string
default = null
}
variable "identifier" {
description = "(required)"
type = string
}
variable "log_client_ip" {
description = "(optional)"
type = bool
default = null
}
variable "resource_group_name" {
description = "(required)"
type = string
}
variable "sampling_percentage" {
description = "(optional)"
type = number
default = null
}
variable "verbosity" {
description = "(optional)"
type = string
default = null
}
variable "backend_request" {
description = "nested block: NestingList, min items: 0, max items: 1"
type = set(object(
{
body_bytes = number
headers_to_log = set(string)
}
))
default = []
}
variable "backend_response" {
description = "nested block: NestingList, min items: 0, max items: 1"
type = set(object(
{
body_bytes = number
headers_to_log = set(string)
}
))
default = []
}
variable "frontend_request" {
description = "nested block: NestingList, min items: 0, max items: 1"
type = set(object(
{
body_bytes = number
headers_to_log = set(string)
}
))
default = []
}
variable "frontend_response" {
description = "nested block: NestingList, min items: 0, max items: 1"
type = set(object(
{
body_bytes = number
headers_to_log = set(string)
}
))
default = []
}
variable "timeouts" {
description = "nested block: NestingSingle, min items: 0, max items: 0"
type = set(object(
{
create = string
delete = string
read = string
update = string
}
))
default = []
}
```
[top](#index)
### Resource
```terraform
resource "azurerm_api_management_diagnostic" "this" {
# always_log_errors - (optional) is a type of bool
always_log_errors = var.always_log_errors
# api_management_logger_id - (required) is a type of string
api_management_logger_id = var.api_management_logger_id
# api_management_name - (required) is a type of string
api_management_name = var.api_management_name
# enabled - (optional) is a type of bool
enabled = var.enabled
# http_correlation_protocol - (optional) is a type of string
http_correlation_protocol = var.http_correlation_protocol
# identifier - (required) is a type of string
identifier = var.identifier
# log_client_ip - (optional) is a type of bool
log_client_ip = var.log_client_ip
# resource_group_name - (required) is a type of string
resource_group_name = var.resource_group_name
# sampling_percentage - (optional) is a type of number
sampling_percentage = var.sampling_percentage
# verbosity - (optional) is a type of string
verbosity = var.verbosity
dynamic "backend_request" {
for_each = var.backend_request
content {
# body_bytes - (optional) is a type of number
body_bytes = backend_request.value["body_bytes"]
# headers_to_log - (optional) is a type of set of string
headers_to_log = backend_request.value["headers_to_log"]
}
}
dynamic "backend_response" {
for_each = var.backend_response
content {
# body_bytes - (optional) is a type of number
body_bytes = backend_response.value["body_bytes"]
# headers_to_log - (optional) is a type of set of string
headers_to_log = backend_response.value["headers_to_log"]
}
}
dynamic "frontend_request" {
for_each = var.frontend_request
content {
# body_bytes - (optional) is a type of number
body_bytes = frontend_request.value["body_bytes"]
# headers_to_log - (optional) is a type of set of string
headers_to_log = frontend_request.value["headers_to_log"]
}
}
dynamic "frontend_response" {
for_each = var.frontend_response
content {
# body_bytes - (optional) is a type of number
body_bytes = frontend_response.value["body_bytes"]
# headers_to_log - (optional) is a type of set of string
headers_to_log = frontend_response.value["headers_to_log"]
}
}
dynamic "timeouts" {
for_each = var.timeouts
content {
# create - (optional) is a type of string
create = timeouts.value["create"]
# delete - (optional) is a type of string
delete = timeouts.value["delete"]
# read - (optional) is a type of string
read = timeouts.value["read"]
# update - (optional) is a type of string
update = timeouts.value["update"]
}
}
}
```
[top](#index)
### Outputs
```terraform
output "always_log_errors" {
description = "returns a bool"
value = azurerm_api_management_diagnostic.this.always_log_errors
}
output "http_correlation_protocol" {
description = "returns a string"
value = azurerm_api_management_diagnostic.this.http_correlation_protocol
}
output "id" {
description = "returns a string"
value = azurerm_api_management_diagnostic.this.id
}
output "log_client_ip" {
description = "returns a bool"
value = azurerm_api_management_diagnostic.this.log_client_ip
}
output "sampling_percentage" {
description = "returns a number"
value = azurerm_api_management_diagnostic.this.sampling_percentage
}
output "verbosity" {
description = "returns a string"
value = azurerm_api_management_diagnostic.this.verbosity
}
output "this" {
value = azurerm_api_management_diagnostic.this
}
```
[top](#index) | 23.312693 | 80 | 0.668924 | eng_Latn | 0.747644 |
---
layout: project_single
title: "47+ Epic Video Game Room Decoration Ideas for 2018"
slug: "47-epic-video-game-room-decoration-ideas-for-2018"
parent: "diy-living-room-spring-decor"
---
Really like this set up for a gaming room, would work perfect with two TVs and minimalist computer desk.
### 神州揽胜
诗中五、七言绝句多仿古体、平仄声韵不拘,以言简意赅为主。
#### 西安[1]观兵马俑[2]
> 秦皇建业在长安,一统江山六国残。
> 身后犹留兵马俑,至今仍是叹奇观。
[1] 秦皇建都在咸阳
[2] 此诗乃 90 年探亲所作,特补收在本卷。
#### 由台乘机返蓉 94.9.9
> 铁鸟高飞掠碧空,瞬时已过海之东。
> 香江暂憩加餐后,振翼长驱蜀汉宫。
#### 故里探亲扫墓后<small>(三首)</small> 94.9.14
> 三度还乡祭祖坟,此生不肖复何云。
> 徒伤风木终无补,遗训惟遵尚克勤。
> 羁旅台湾数十春,还乡难自认归根。
> 只容两地为常客,但愿平安度此身。
> 年少无知未染瑕,误随蒋氏走天涯。
> 而今唤作台湾客,原本吾家不是家。
#### 夜发成都晨抵秦岭 94.10.11
> 夜幕深垂别锦城,昏昏一觉到天明。
> 推窗四望群山拥,秦岭云开丽日呈。
#### 经宝鸡
> 今日宝鸡冲要地,交通四达控秦川。
> 匆匆路过多遗憾,未及窥探所以然。
#### 天水至渭源
> 铁路长沿渭水行,车声辘辘伴溪鸣。
> 群山乍合时开展,偶见青天白鹭迎。
#### 旅经武威 94.10.12
> 异域扬威汉武功,犹遗铜马显威风。
> 河西峡谷开丝道,欧亚方能有路通。
#### 经张掖
> 河西走道绿洲中,遥望祁连雪色融。
> 张掖尽头戈壁现,茫茫砂磧返鸿朦。
#### 经酒泉
> 路近边城到酒泉,层峰雪压是祁连。
> 黄沙地底藏珍宝,凿井探油值万钱。
#### 登嘉峪关 94.10.13
> 万里长城到尽头,雄关嘉峪起危楼。
> 当年御敌功殊伟,今日登临阔远眸。
#### 出玉门
> 长车载我向西奔,气笛声声出玉门。
> 遥想当年征战苦,而今无处觅遗痕。
#### 经哈密
> 东疆哈密地当冲,名产甜瓜远近崇。
> 现代工商兴起后,煤田处处蕴藏丰。
#### 经鄯善
> 古都鄯善在疆东,曾有匈奴据地雄。
> 幸赖从戎班定远,降番卅六建奇功。
#### 吐鲁番
> 吐鲁番前热气薰,只因洼地忒销魂。
> 艾丁湖影明于镜,映日红霞似火焚。
#### 初抵乌鲁木齐<small>(曾名迪化)</small>
> 华灯初上抵边城,大雪纷纷饰地迎。
> 借宿崇楼心自暖,人生到处总多情。
#### 乌鲁木齐揽胜<small>(二首)</small> 94.10.14
> 乌鲁木齐景象新,高楼林立遏行云。
> 红山顶上回头望,盛况何输内陆城。
> 两塔遥遥对峙中,红山顶上辨西东。
> 新开胜境边关外,族类融和晋大同。
#### 游天山瑶池<small>(六首)</small> 94.10.15
> 风和日丽访瑶池,雪拥天山接水湄。
> 伫立凝神观物化,波摇影动景迷离。
> 天山银树映瑶池,荡漾清波逐景移。
> 王母翩翩来入浴,仙姬随侍好相嬉。
> 云开雪霁访瑶池,皎皎天山丽日披。
> 莫谓天高寒浸骨,有缘可使晤仙姬。
> 映日瑶池泛碧波,晶光灿烂影婆娑。
> 追欢王母潭中戏,阵阵香风拂面过。
> 踏雪环池信步游,苍松林下境偏幽。
> 流连不觉寒风冷,还欲登高竞白头。
> 天山骑马好风流,踏雪攀岩上古丘。
> 俯瞰瑶池波潋滟,欲邀王母与随游。
#### 由乌鲁木齐返兰州<small>(七首)</small> 94.10.17
> 兰新铁路贯西东,日夜奔驰沙碛中。
> 商旅往来皆便捷,江山一统乐融融。
> 艰难丝路变通衢,各族融和共远图。
> 衰草白杨呈异趣,雪山时有彩云扶。
> 黄沙滚滚浪涛中,无尽资源蕴积丰。
> 开发供需成大用,裕民富国有神功。
> 万里黄沙黯淡天,车声轧轧好催眠。
> 醒来不觉身何在,但见闲云绕雪山。
> 金昌名号不虚传,镍铁银铜遍地填。
> 开发无穷充国用,振兴工业利民先。
> 回车乌鞘横穿岭,大雪纷纷送客人。
> 山舞银蛇非是幻,原驰蜡象更传神。
> 白雪飘飘遍地冰,乘龙回绕任奔腾。
> 霎时越过乌鞘岭,日丽风和到永登。
#### 回到兰州。姑表弟杜志斌相迎<small>(三首)</small>
> 兰州重到再探亲,车站来迎杜志斌。
> 多少故人凋谢尽,唯余我俩尚偷生。
> 重到皋兰访故人,志斌抱病出迎宾。
> 畅谈数日言难尽,临别依依嘱咐频。
> 西北行来略受寒,身遭小恙意犹欢。
> 志斌厚爱施针药,一片真情感肺肝。
#### 重游兰州白塔山 94.10.19
> 白塔山前绕白云,黄河流水总多情。
> 虽然浊浪难澄彻,大地仍沾润泽恩。
#### 兰州古铁桥
> 铁桥横跨大河津,自古流传遐迩闻。
> 昔日凭它通客运,而今南北得相亲。
#### 离兰州赴北京 94.10.21
> 惜别皋兰赴北京,志斌阁府送吾行。
> 站前合影留深意,但愿来年再笑迎。
#### 青铜峡至银川
> 青铜峡口大河开,塞外风光入眼来。
> 千里江南非所拟,白杨青草胜寒梅。
#### 遥望腾格里沙漠
> 沙丘起伏似波涛,浊浪连天气势豪。
> 弱水三千非是水,既无舟楫也无牦。
#### 呼和浩特至大同 94.10.22
> 飞车尽日度边关,万里长城断续间。
> 羌笛无闻驼马绝,煤田处处冒青烟。
#### 北京香山碧云寺谒中山灵
> 碧云寺里谒孙文,叩问当年革命因。
> 若说为民谋晏乐,如何世道日纷纷。
#### 登北京景山 94.10.23
> 登高四望北京城,旧殿新楼错杂呈。
> 世代兴亡皆有自,几人悟得鉴崇祯。
#### 登北京正阳门城楼
> 正阳门上望天都,招展红旗景象殊。
> 金碧辉煌宫殿拥,纵横辽阔广场铺。
> 熙来攘往人如鲫,国富民殷德不孤。
> 两岸早应为一体,和衷共济逞鸿图。
#### 游北京北海公园
> 北海园中景物荣,千年胜迹久垂名。
> 琼华岛上神仙会,阅古楼中雅士盟。
> 近水红廊亲绿漾,凌峰白塔入青冥。
> 流连竟日思潮涌,忧乐何人为众氓。
#### 重游北京颐和园
> 颐和园里风光异,千丈长廊彩绘奇。
> 画栋雕梁宫殿丽,繁花翠柳本枝宜。
> 昆明湖水含烟碧,万寿山林带露滋。
> 故国名园惟此胜,纵情饱览总心怡。
#### 再登长城 94.10.24
> 秋高气爽上长城,塞外丛山塞内平。
> 胡汉相争千百载,而今统一共滋荣。
#### 北京天坛祈年殿
> 自古帝王名好善,为求丰穰每祈年。
> 深藏宫里无知识,不问苍生却问天。
#### 北京大观园
> 步入园门百卉丛,亭台楼阁巧玲珑。
> 多情公子多情女,一列徒留梦幻中。
#### 洛阳白马寺 94.10.28
> 白马寺前看白马,雄姿仍不减当年。
> 为宏教义难辞苦,万里驮经佛法宣。
#### 洛阳狄仁杰祠墓
> 狄公祠里礼先贤,亟谏忠言以儆奸。
> 为振朝纲防弊政,良臣生佛一人肩。
#### 湖南张家界金鞭溪 94.10.31
> 金鞭溪底水涓涓,夹岸群峰势刺天。
> 赤壁刀刊纹理现,绿荫靛染本枝连。
> 当空日影难侵谷,掠地雷声但震巅。
> 猿鸟时鸣泉响应,欲寻踪迹杳如烟。
#### 湖南张家界
> 武陵源里张家界,百怪千奇景物开。
> 似笋如簪峰笔立,仰观俯察畅情怀。
#### 寄宿天子山 94.11.1
> 天子山头草木丛,孟冬犹自剩残红。
> 寒鸦绕树寻归处,游客当思保厥躬。
#### 袁家寨后花园
> 晓雾初开景象融,后花园里展天功。
> 千峰耸翠如春笋,万壑藏幽似密宫。
> 神木攀岩红叶附,仙桥跨谷碧溪容。
> 翱翔鹰隼时高下,傲视林间不老翁。
#### 杨家界
<p class="poem-subtitle center">以杨家将故事为题来状奇峰怪石</p>
> 杨家界里常传说,奇峰怪石象形多。
> 贤郎探母登龙岭,节妇寻夫立虎坡。
> 万虎奔腾驱贼虏,一门忠义执干戈。
> 令公手捧兵书读,为振天威渡大河。
#### 贺龙纪念公园 94.11.3
> 草泽民间出贺龙,曾经崛起此山中。
> 红军百战升元帅,革命功高众所崇。
#### 黄狮寨
> 黄狮赛上览奇峰,大小高低各显容。
> 形象万千皆异趣,一离比境再难逢。
#### 十里画廊
> 十里画廊迷眼赊,峰连嶂迭影横斜。
> 深崖万丈龙蛇窖,巨木千寻鸟雀家。
> 赤柱擎天迎晓日,清泉凿谷映流霞。
> 猿猴上下传消息,振动藤萝散落花。
#### 黄龙洞
> 洞号黄龙别有天,蜿蜒上下任流连。
> 阴河荡漾航游艇,玉笋高标竖彩椽。
> 栈道迂回攀绿垒,虹桥隐卧跨琼渊。
> 深藏地底皆瑰宝,尽教来人把梦圆。
#### 过枝江大桥 94.11.4
> 枝城市外大江奔,跨岸长桥往石门。
> 湘鄂沟通为一体,洞庭浩淼拥楚原。
#### 投宿宜昌桃花岭饭店
> 寻梦桃花岭,宜昌趁夜游。
> 来朝三峡去,尽兴展吟眸。
#### 宜昌葛洲坝 94.11.5
> 葛洲坝上现奇观,万丈长堤把水拦。
> 巨舰往来无所阻,闸门开阖任周旋。
#### 三游洞江边张飞擂鼓雕象
> 张飞擂鼓大江边,惊动鱼龙出九渊。
> 怒目横刀豪气在,荆州已失恨难填。
#### 三游洞访诗贤遗迹
> 三游洞里忆诗贤,石窟清幽近碧川。
> 唐宋六家[3]曾至此,而今壁上有遗篇。
[3] 唐白乐天兄弟与元微之及宋欧阳修、苏轼兄弟各三人先后游此,故曰〝六家〞。
#### 船上遥望三峡大坝施工 94.11.6
> 万头攒动辟荒山,筑坝拦洪发电先。
> 经济振兴惟是赖,工程艰巨也非难。
#### 三峡水坝<small>(二首)</small>
> 江心中宝岛,三斗岸边坪。
> 万人兴水坝,盖世大工程。
> 三峡今犹在,人工造大湖。
> 逆流千百里,创利古来无。
#### 秭归怀古
> 名士佳人出秭归,钟灵毓秀带寒辉。
> 昭君抱怨辞明主,屈子怀忧感国非。
> 青冡长留关塞外,骚魂永驻汨江矶。
> 悠悠遗憾无终极,万古人闻泪湿衣。
#### 巴东
> 巴山尽处是巴东,川鄂分疆据地雄。
> 峻岭重峦临断岸,大江涌浪逐奔洪。
#### 三峡舟中望神女峰<small>(二首)</small>
> 巫山神女立危峰,独对斜阳影半红。
> 云雨可曾来入梦,大江东去荡胸中。
> 神女凌霄立,不知长盼谁?
> 沧桑多变故,矢志未稍移。
#### 游巫山大宁河小三峡<small>(五首)</small> 94.11.7
> 临渊跨岭架长桥,正是龙门锁噩蛟。
> 三峡于兹分大小,风光有别景难描。
> 大宁河上不安宁,水激滩高阻逆舲。
> 忍看纤夫匍匐进,声声号子未曾停。
> 大宁河上不安宁,壁立巉崖夹岸形。
> 偶见悬棺垂顶上,便疑古怪显幽灵。
> 大宁河上不安宁,倒挂藤萝似翠屏。
> 但见猿猴时跳跃,追欢无获发哀鸣。
> 大宁河上且安宁,过尽险滩清水渟。
> 岸上山村林木荫,置身世外总忘形。
#### 大宁河支流小小三峡汇流处
> 峡谷中分汇细流,吊桥连索两山头。
> 悬崖栈道通幽境,步步惊魂也乐游。
#### 小小三峡内
> 山雄水险雾深沉,洞底清幽何处寻。
> 身在其中如隔世,浑忘境外有凡人。
#### 小小三峡溜艇
> 橡艇轻随激水溜,浅滩急处石当头。
> 堆花作浪迴旋转,一阵惊魂一阵休。
#### 大宁河小三峡游
> 大宁河谷小三峡,曲水寻幽险象中。
> 猿鸟悲鸣悬壁应,藤萝倒挂绿阴笼。
> 纤夫弓背催舟进,旅客昂头盼路通。
> 侷处不知红日落,暮云紧逼动秋风。
#### 出瞿塘峡 94.11.8
> 晓烟淡淡罩群山,船过瞿塘峡口间。
> 我本蜀人归故里,得开颜处且开颜。
#### 奉节怀古
> 遥望江边白帝城,永安宫外雾萦萦。
> 千年憾事今犹忆,蜀主英灵永不平。
#### 夜泊万县城下
> 停轮城下大江中,万盏明灯万县红。
> 淡月疏星随水动,声声汽笛醒潜龙。
#### 石保塞<small>(五古)</small> 94.11.9
> 遥看石堡寨,孤立大江外。
> 四面皆悬崖,兵家守土在。
#### 重庆朝天门外江上<small>(三首)</small>
> 遥望渝州浮水上,崇楼倒影映江中。
> 往来巨舰登楼顶,不用云梯自贯通。
> 缆车横跨碧波空,不断穿梭载客通。
> 扬子嘉陵无所阻,江南江北乐融融。
#### 抵重庆堂弟国政、国庆兄弟迎聚
> 久别宗亲异地逢,多情政庆礼为恭。
> 虽然往事难言尽,彼此平安可慰衷。
#### 登白鹅岭观重庆夜景
> 月色朦胧鹅岭上,繁灯耀眼映江中。
> 水天莫辨无涯际,直觉身轻入九重。
#### 由渝乘车赴内江 94.11.11
> 五十年前流浪地,一山一水印胸中。
> 曾经跋涉多辛苦,今日乘车乐不穷。
#### 歇内江官君福光寓<small>(二首)</small>
> 福光贤弟太多情,为我删诗且发行。
> 字字不遗详校订,用心良苦见真诚。
> 巴山夜雨润甜城,并榻闲眠论辱荣。
> 始信官郎多见识,诗文世事总持衡。
#### 回富顺访政协文史会梁官恒主任 94.11.13
> 故乡文物实堪珍,幸赖梁公费苦辛。
> 博采周谘无不至,方能荟萃以传薪。
#### 赠内江台办刘卓鲜先生
> 内江台办重乡情,更有刘君抱至诚。
> 卓立超然存正气,鲜明皎洁见廉贞。
> 助人为乐平常见,与我删诗下定评。
> 《兰畹清芬》能问世,因君奖誉获殊荣。
#### 赠内江市政协于萍女史
> 安徽才女出于萍,《兰畹》诗刊赖玉成。
> 蕙质梅资呈彩笔,内江政协永驰名。
#### 第三次探亲畅游大陆后
> 名山胜水方游罢,故国风光尽可观。
> 到处随心皆自得,胸罗万象比天宽。
#### 景美仙迹岩 96.6
> 景美寻幽境,攀登仙迹岩。
> 茅棚围绿树,道观倚苍𡺎。
> 环顾高楼立,时闻小鸟喃。
> 市中能取静,暂且避尘凡。
#### 荒郊小屋
> 花木扶疏鸟雀喧,清溪流水绕孤村。
> 幽居只合幽人住,远隔尘嚣市井烦。
#### 游香港后<small>(二首)</small> 96.12.17
> 十里洋场百仞楼,香江盛况孰堪俦。
> 空中铁鸟凌虚降,海上轮舟逐浪游。
> 万国商家齐汇聚,五洲物产广交流。
> 回归有日尤兴旺,雪尽炎黄一世羞。
> 天道循环信不违,整衰起敝握先机。
> 英人顶钻行将摘,华族怀珠自必归。
> 恨史当随盟约废,荣篇犹待彩毫挥。
> 香江尔后愈昌盛,上国毋忘展德威。
#### 香港海洋公园内集古村游后
> 集古村中陈古迹,难忘祖国旧家风。
> 诸方文物应欣尝,历代宗规要敬崇。
> 民族精神斯有托,人生智慧始无穷。
> 殖民屈辱将伸雪,完璧归来晋大同。
#### 九龙至深圳<small>(二首)</small> 96.12.19
> 九龙一脉连深圳,两地今犹各自封。
> 人物往来须证照,回归以后应通融。
> 九龙深圳本相连,一纸分离一百年。
> 国耻行将昭雪净,欣看破镜早团圆。
#### 参观深圳〝世界之窗〞后
> 世界之窗观世界,文明建设各争奇。
> 五洲景物随时异,万国民风与地宜。
> 巧制图型留胜迹,精摩实象启遐思。
> 游踪一刹逾千里,放眼寰区不我遗。
#### 参观深圳〝锦绣中华〞后
> 锦绣中华似梦乡,山川胜境荟全场。
> 名楼显塔呈奇色,佛阁仙庵散异香。
> 帝苑皇宫非禁地,江村野馆是康庄。
> 长城一览无遗处,不用舟车跋涉忙。
#### 深圳游后
> 昔日渔村今闹市,十年建设展鸿图。
> 蜚声国际临香港,示范寰区畅五湖。
> 经济振兴从此始,小平远见在兹乎。
> 纵观使我长称羡,事赖人为德不孤。
#### 登广州白云山 96.12.20
> 五羊城外白云山,片片祥云去复还。
> 索道直追山顶上,白云依旧自悠闲。
#### 黄花冈七十二烈士墓抒怀
> 黄花冈上怀忠烈,浩气长存铄古今。
> 矢志成仁开国运,存心灭虏救苍生。
> 埋躯地底推先辈,遗范人间启后昆。
> 八十余年犹未靖,不知泉下感何深。
#### 越秀公园揽胜
> 越秀园中觅五羊,依山傍水绕全场。
> 茂林修竹繁花艳,晨雾朝阳瑞气香。
> 镇海楼头观胜景,逸仙塔上揽晴光。
> 珠江浩荡连沧海,穗市尘嚣竟日忙。
#### 访中山市孙中山故居
> 昔日荒郊翠亨村,钟灵毓秀产仁人。
> 倡言革命驱胡虏,开国功成不自尊。
> 〝天下为公〞名句在,〝三民主义〞善思存。
> 故居屋老留陈迹,博大精神永不泯。
#### 珠海(香洲)一瞥 96.12.21
> 水绕江村山拥城,香洲景物最分明。
> 渔舟点点随波去,游客群群傍岸行。
> 栉比高楼迎凤翥,纵横大道启鹏程。
> 但看建设鸿图展,自是欣欣日向荣。
#### 珠海至澳门
> 澳门珠海本相连,自隔华夷四百年。
> 民俗虽同行政异,仍将不久庆团圆。
#### 澳门往返
> 珠海澳门水一襟,移山填海筑机坪。
> 万邦银翼从天降,四世遗珠起陆沉。
> 赌国欣看筹码盛,醉乡乐见酒杯深。
> 回归有日蒙羞雪,吐气扬眉感不禁。
#### 海边观泳
> 白沙连绿水,碧海接蓝天。
> 逐浪随波者,多为青少年。
### 欧洲采风
#### 自台湾乘飞机赴罗马 97.9.6
> 秋高气爽好邀游,银翼凌空奋不休。
> 一别台湾三万里,星沉月落抵欧洲。
#### 罗马一日 97.9.7
> 一日古城罗马游,教堂巡礼暂勾留。
> 深玩斗兽场中味,鬼哭狼嚎不禁愁。
#### 梵蒂冈<small>(二首)</small>
> 梵蒂冈中大教堂,雄奇壮丽蔚辉煌。
> 全能上帝常如在,救世济人遍四方。
> 天主时临梵蒂冈,耶稣受难不投降。
> 只因信仰常茹苦,宁为救人甘受殃。
#### 罗马古城斗兽场
> 罗马城中斗兽场,犹留断壁与颓墙。
> 从前暴政何堪说,残酷非刑孰敢尝。
#### 罗马与庞卑(见)故城 97.9.8
> 罗马庞卑有故城,西洋文物早繁荣。
> 要知政教兴衰史,断壁残垣可证明。
#### 庞贝故城观风<small>(二首)</small>
> 火山灰土久埋尘,千万年来此现身。
> 断柱颓墙留胜迹,西洋远史可寻因。
> 庞贝故城考古游,残垣断柱望中收。
> 男欢女爱图犹丽,具见先民总好逑。
#### 夜宿意大利度假小镇苏连多<small>(七古)</small>
> 欧游次日即奔波,异域风情任揣摩。
> 卡布里岛寻幽去,名区借宿苏连多。
#### 往卡布里岛舟中 97.9.9
> 海上访仙山,大船还小船。
> 摇摇将欲覆,漂渺绿波间。
#### 卡布里岛下蓝洞<small>(二首)</small>
> 悬崖峭壁面深湍,洞穴藏幽刺骨寒。
> 海水晶莹如玉液,或为翠绿或为蓝。
> 扁舟入蓝洞,洞内颇清幽。
> 碧绿如琼液,浮光沏底柔。
#### 乘车登卡布里岛
> 攀崖缘栈道,车速故行迟。
> 府首凭窗望,茫茫大海湄。
#### 岛上双松<small>(五古)</small>
> 双松临绝壁,苍翠与云齐。
> 俯瞰拿波里,轻烟障眼迷。
#### 岛上午餐
> 岛上有琼楼,迎宾永不休。
> 西餐难入口,景物尚堪酬。
#### 拿波里港望维苏威火山<small>(五古)</small>
> 濒海拿波里,遥望维苏威。
> 山头曾喷火,浩劫毁庞卑(贝)。
#### 文艺复兴摇篮佛罗伦萨<small>(七古)</small>
> 佛罗伦萨[4]翡冷翠,文化复兴此奠基。
> 建筑精良雕绘美,西风于是八方吹。
[4] 徐志摩把佛罗伦萨译为翡冷翠。
#### 五 古
> 名城翡冷翠,文艺复兴基。
> 风气传欧陆,昌明是所师。
#### 意大利比萨斜塔
> 巨塔倾斜数百年,雷轰地震不曾颠。
> 前来比萨勘奇迹,奋力支撑欲正偏。
#### 威尼斯即景<small>(五首)</small>
> 海上浮丘不染尘,琼楼玉宇水为邻。
> 万千弱木[5]充基础,历尽波涛未使沦。
[5] 该市建设皆以木樁为基础。
> 海上浮洲成闹市,高楼栉比水为衢。
> 游人来往凭舟楫,曲折萦回欲着迷。
#### 七 古
> 水乡泽国威尼斯,〝督纪〞王朝壮大之。
> 曾是东西交易地,而今处处有遗规。
> 马可波罗[6]过我乡,我今回访望空堂。
> 中华文物西人羨,应是凭他广播扬。
[6] 马可波罗生于威尼斯
#### 五 古
> 海市威尼斯,琼楼显异姿。
> 教堂诚壮丽,官署益雄奇。
> 商旅如云集,安和与日熙。
> 既无车马驶,唯有彩舟驰。
#### 往南斯拉夫探溶洞<small>(七古)</small> 97.9.12
> 南斯拉夫探溶洞,来去通关意奧邻。
> 国号虽殊风物似,时为仇敌时相亲。
#### 五 古
> 溶洞非奇特,欧人少见之。
> 浮夸胜实际,亦可惑无知。
#### 奥地利国都维也纳<small>(十一首)</small> 97.9.13
> 名城维也纳,德奧奠丕基。
> 族类常征战,邦畿未转移。
> 称雄于国际,耀武及天陲。
> 局势今虽变,仍持中立姿。
> 双重帝国两头鹰,奥地称雄作象征。
> 泰易女皇[7]当政日,厉精图治勃然兴。
[7] 咏:玛丽亚.泰易莎女皇纪念碑
> 一河多瑙绿无涯,维也纳城景色佳。
> 音乐名家常出此,乍闻交响不胜嗟。
> 音乐名都艺术浓,嵌雕绘画极雍容。
> 管弦不绝萦梁栋,耳目之娱到处逢。
> 美泉宫里最豪华,极侈穷奢霸主家。
> 金玉满堂皆夺取,不知收敛却矜夸。
> 人面狮身[8]袒两胸,才能智慧蕴其中。
> 任谁抚弄均添智,浅笑从来不改容。
[8] 上贝尔维弟宫侧有两座狮身女面刻象,传说代表人类智慧,游人抚摸其胸可益智。
> 携眷畅游维也纳,园林胜境最堪夸。
> 红花绿叶常如锦,未觉深秋换物华。
> 多瑙河滨纵目游,蓝天翠岭水中浮。
> 长桥横卧秋波上,连接双方百尺楼。
> 石膏采尽余空洞,深邃幽缈似密宫。
> 曾被沦为军火窟[9],助魔肆虐祸无穷。
[9] 此地曾为希魔地下兵工厂
> 音乐名都维也纳,山明水秀境清幽。
> 教堂宫院皆奇伟,文物精华不尽收。
#### 五 古
> 王室美泉宫,林木郁葱葱。
> 百卉长争艳,陶醉在其中。
#### 莎姿堡<small>(二首)</small>
> 花园锦簇莎姿堡,水秀山明景物饶。
> 电影名之真善美[10],游人至此乐逍遥。
[10] 莎姿堡乃音乐家莫扎特出生地,电影《真善美》拍摄处,风景绝佳。
> 胜地莎姿堡,风光颇富饶。
> 如诗惊似画,真善美难描。
#### 茵斯布鲁克<small>(七古变体)</small>
> 胜地茵斯布鲁克,冬时奥运[11]迎嘉客。
> 雪中竞技各争强,观者如云皆自得。
[11] 是处乃两次奥运滑雪赛场,山区风景如画。
#### 五 古
> 茵斯布鲁克,风景诚殊绝。
> 翠谷绕青山,白云时应接。
#### 五 古
> 茵斯布鲁克,奥运此滑雪。
> 雪上多飞人,金牌争夺烈。
#### 欧洲极小国列支敦士坦<small>(五古)</small> 97.9.15
> 列支敦士坦,国小民安然。
> 繁华欧陆中,世外有桃源。
#### 古 风
> 地方不过百余里,人口行将两万奇。
> 国号列支敦士坦,建都名曰瓦度茲。
> 王宫古堡高山上,民宅商场大路歧。
> 中立无偏守正道,强邻接壤未相欺。
#### 中秋前夕泛舟瑞士卢森湖<small>(三首)</small>
> 初生皓月出高丘,影入湖心迎客舟。
> 彼此相随难割舍,只缘今夕近中秋。
> 远游异地遇中秋,月色何如故国优。
> 虽有时差[12]分早晚,依然同是不含羞。
[12] 此地与台湾时差约六小时
> 卢森湖上遇中秋,结伴乘舟挟月游。
> 兴尽归来宾馆里,传真有信是儿修[13]。
[13] 接得长子忠良传真贺节
#### 卢森湖拂晓前景<small>(二首)</small> 97.9.16
> 戸森湖上夜,月落星沈灭。
> 雾霭茫茫中,舟人犹未歇。
> 蒙蒙晨雾绕青山,淡淡湖光隐现间。
> 恰似美人初睡醒,娇羞犹未整容颜。
#### 瑞士铁力士山顶赏雪<small>(二首)</small>
> 阿尔卑斯力士峰,终年积雪不曾溶。
> 游人络绎登其顶,纵览奇峰傲世雄。
> 力士峰头雪,山与云俱白。
> 游人戏其顶,谁解争高洁。
#### 瑞士风光
> 窗外红花掩映,门前绿树垂阴。
> 山上莺啼鸟语,湖中帆影云岑。
#### 瑞士往来<small>(三首)</small>
> 欧游为访胜,瑞士寄闲情。
> 来往无人问,湖山处处迎。
> 瑞士湖光信是优,青山隐隐水中浮。
> 粉墙红瓦闲村舍,片片飞帆戏白鸥。
> 瑞士湖山信是优,游人络绎色肤稠。
> 语言互异难交接,兴趣相投乐自由。
#### 德国弗莱堡黑森林<small>(三首)</small>
> 黑森林近弗莱堡,特产咕咕钟有名。
> 时值中秋投宿此,欣看月蚀又重明。
> 丛林深锁迎宾馆,三座木楼别有天。
> 时值中秋逢月蚀,凭窗伫盼月重圆。
> 欧游作客逢佳节,别有心情也枉然。
> 自是洋人无此俗,因知月上莫婵娟。
#### 中秋逢月蚀
> 中秋逢月蚀,一夕有亏盘。
> 天道何多变,人生更罔评。
#### 由奥国至德国途经瑞士<small>(二首)</small> 97.9.17
> 携眷邀遊德奧间,流连瑞士好湖山。
> 逢人不解拉丁语,一笑能开彼此颜。
> 欧人住处好鲜妍,城市乡村莫不然。
> 即使深秋仍未减,凤仙花发满窗前。
#### 德国海德堡<small>(七古二首)</small>
> 大学名城海德堡,莱茵河畔开通早。
> 德人借此逞英豪,史迹斑斑犹可考。
> 海德堡中览故城,残垣耸立也峥嵘。
> 全球最大啤酒桶,俯视仰观更欲酲。
#### 莱茵河泛舟<small>(二首)</small>
> 莱茵河上漫行舟,夹岸风光放眼收。
> 水际崖前余古堡,葡萄密密遍山丘。
> 阿尔卑斯谷里游,莱茵河上展吟眸。
> 青山翠岭无穷处,绿水悠悠不尽流。
#### 大运河上古战船荷兰阿姆斯特丹<small>(十首)</small> 97.9.18
> 怀古观今访荷兰,曾于海上逞强权。
> 台湾尚有红毛港,故土河中仅剩船。
> 荷兰标志是风车,泽畔溪边映日斜。
> 灌溉排洪成大用,使能浮海作人家。
> 荷兰特产郁金花,镂木为鞋技艺嘉。
> 举世闻名谁不爱,簪头托足令人夸。
> 凭窗作态美娇娘,浅笑轻颦引野郎。
> 高挂红灯张艳帜,夜来峰蝶采花忙。
> 丹城四运河,碧水漾清波。
> 精舍缘边起,迷离影像多。
> 登上玻璃艇,巡游各运河。
> 置身楼影里,疑是梦南柯。
> 碧水蓝桥上,高天鸟影过。
> 遥迎红瓦屋,倒映益嵯峨。
> 绕市尽长河,吊桥横碧波。
> 随时能启合,航舰易穿梭。
> 风车功用大,泽国最相宜。
> 科学兴前日,唯其动力资。
> 荷人多善贾,贸易遍寰区。
> 因此能招富,国强民亦绥。
#### 比利时<small>(五首)</small> 97.9.19
> 国小风淳比利时,民殷国富自修为。
> 强邻逼境谁能侮,与世无争永不移。
#### 七 古
> 布鲁塞尔[14]景物奇,万邦博览盛一时。
> 犹留原子塔为志,九个圆球谊不移。
[14] 布鲁塞尔乃比国首府
> 神童[15]救难传佳话,刻象感恩众竞夸。
> 便溺长流终不息,新衣日换使无瑕。
[15] 市内有便溺小童雕塑象
> 国小名高比利时,长持中立不游移。
> 强邻竞霸无关己,偶作中调可解危。
> 原子塔前思核子,只缘科学太新奇。
> 以为武器堪摧敌,充作能源可集赀。
> 但怕情仇来利用,且随好恶任敷施。
> 强权借此威天下,不信人间有是非。
#### 车行荷法平原
> 平畴万顷草如茵,牛马成群不畏人。
> 卧立奔驰皆自在,祥和一片见天真。
#### 法国巴黎<small>(十二首)</small>
> 华灯初上抵巴黎,百里洋场五色迷。
> 万国游人多会此,志同那复计东西。
#### 协和广场黄昏幻景
> 协和场地本刑场,多少名人此地亡。
> 厉鬼未能兴祸患,神奇美化转为祥。
#### 塞纳河中洲自由女神像 97.9.20
> 塞纳河中逐水流,自由神像立芳洲。
> 高擎火炬狂呼唤,多少盲从碰破头。
#### 圣母院<small>(二首)</small>
> 圣母堂夸气象雄,深藏神迹在其中。
> 精灵鬼怪环墙立,严警人们妄启戎。
#### 五 古
> 巴黎圣母院,举世诚难见。
> 构造极神奇,游人无不恋。
#### 凯旋门<small>(二首五古)</small>
> 环顾凯旋门,难寻拿破仑。
> 抬头天上望,但见是浮云。
> 瞻仰凯旋门,毋忘拿破仑。
> 称雄于世界,借此显精神。
#### 苏菲尔铁塔<small>(变体五古)</small>
> 苏菲尔铁塔,直向云天插。
> 登顶览巴黎,市微唯我大。
#### 旺多姆广场<small>(变体五古)</small>
> 旺多姆广场,气象最轩昂。
> 雕柱巍然立,敢夸国力强。
#### 塞纳河
> 悠悠塞纳河,绿树影婆娑。
> 大小船如鲫,游人分外多。
#### 防卫区
> 纳伊桥外缘,广厦可摩天。
> 万国通商会[16],财源涌百川。
[16] 这里又是实业区及国际贸易中心。
> 巴黎一日游,胜景实难收。
> 世界真奇妙,人生似白鸥。
#### 乘子弹夜快车赴伦敦
> 法境赴英伦,天涯若比邻。
> 飞车穿海底,顷刻便相亲。
#### 英国伦敦<small>(十首)</small>
##### 白金汉宫与维多利亚女王纪念碑<small>(二首)</small>
> 异域琼宫气势雄,女王展翅欲凌空。
> 曾经称霸全人类,犹对斜阳送晚风。
> 维多利亚女王威,飒爽英姿奋欲飞。
> 帝国强权曾盖世,金身犹自恋余晖。
##### 伦敦塔<small>(二首)</small>
> 伦敦塔里气萧森,四面重围院落深。
> 曾作王宫兼地狱,因人好恶任升沉。
> 伦敦塔里有神鸦[17],飞去飞来守旧家。
> 护卫英人如护子,一朝遗弃丧无差。
[17] 塔中养有乌鸦六只,传说为守护神,若其飞去,英人即亡。
##### 伦敦塔桥
> 伦敦塔外塔桥横,跨水凌空构造精。
> 上下车船交织锦,风光难以副修名。
##### 大笨钟
> 遥闻大笨钟,声响彻晴空。
> 庸人知所警,奋起建奇功。
##### 泰晤士河 97.9.21
> 泰晤士河边,伦敦展壮观。
> 宫殿连云起,风光别有天。
##### 西敏寺
> 伦敦西敏寺,历史留陈迹。
> 建筑极辉煌,英人深爱惜。
##### 海德公园<small>(二首)</small>
> 海德公园里,能人辩不穷。
> 有如齐稷下,中外亦同风。
> 海德公园里,清秋枫叶红。
> 绿波明于镜,倒映影朦胧。
#### 中国城
> 伦敦中国城,市肆颇繁荣。
> 异地开新境,华侨慘澹营。
#### 旅游车中
> 日行千里车如矢,过眼风光速转移。
> 捕影留形诚不易,感怀唯有寸心知。
#### 摄影与赋诗
> 旅游处处皆留影,对影兴怀又赋诗。
> 日后随时供鉴赏,生平踪迹可留遗。
#### 观英王妃黛安娜于巴黎车祸处有感<small>(古风)</small>
> 黛妃爱风流,驱车欲出游。
> 飞奔避媒体,车毁命同休。
> 哀者如蚁聚,鲜花堆成丘。
> 荣华不足恃,富贵亦何求。
> 白云易消散,红颜难久留。
> 树大防风折,水深戒覆舟。
> 伤彼遭横祸,应知重自修。
#### 欧游综述
> 慕名求实访欧洲,罗马伦敦绕一周。
> 瑞士湖中观景色,巴黎市內察潮流。
> 莱茵岸畔斟红酒,多瑙江边戏白鸥。
> 文物精华虽可取,山川总不及神州。
#### 由伦敦乘飞机回台湾 97.9.22
> 朝发伦敦晓雾间,法兰克福暂留连。
> 凌云直越欧和亚,万里台湾一夜还。
#### 游士林官邸<small>(古风)</small>
> 士林有官邸[18],往昔是深宫。
> 满苑花争艳,环山树竞葱。
> 警卫森罗列,游人路不通。
> 运移权势易,禁地也充公。
[18] 士林官邸:前蒋先生官邸,现已开放为公园。
#### 游摩耶精舍
> 一代画师[19]流寓地,摩耶精舍绕双溪。
> 天然美景兼雕饰,妙笔生花自出奇。
[19] 此舍乃国画大师张大千流寓所。
#### 参观台北故宫博物院<small>(变体五古)</small>
> 故宫博物院,珍宝诚难见。
> 文化数千年,绵延无所间。
#### 游至善园<small>(三首五古)</small>
> 小园名至善,不仅供消遣。
> 游憩思其义,自当知所勉。
> 回廊环绿水,锦鲤戏清波。
> 悠游多自在,无畏白天鹅。
> 溪水响潺潺,汇流浅诸间。
> 既供鱼鳖戏,并压人声喧。
#### 游士林芝山岩<small>(二首五古)</small>
> 郁郁芝山上,开漳圣祖台。
> 士林传佳话,日寇有遗骸[20]。
[20] 日占初,在此山兴校实行奴化教育,有教师七人被抗日义士所杀。又,山上早有纪念开发漳州之陈将军祠,为漳州籍人所景仰。
> 锦衣戴雨农[21],护主有奇功。
> 芝山神圣地,改号以为封。
[21] 此山建有雨农纪念馆,附近地区也以雨农为名,因戴为蒋之特务头子。
#### 游乌来<small>(三首)</small>
##### 观 瀑<small>(变体五古)</small>
> 乌来风景妙,瀑布崖间哮。
> 乘坐缆车去,始能窺秘奥。
##### 遥 望<small>(绝句)</small>
> 薄雾绕青山,虚无缥缈间。
> 故乡何处是,久客在台湾。
##### 临 溪<small>(绝句)</small>
> 悠悠不断流,流向海西头。
> 欲情捎封信,乡亲可解愁。
#### 游四兽山<small>(二首)</small>
> 虎豹象狮灵兽山,濒临台北市东关。
> 登高俯瞰全区景,三水潆洄大厦间。
> 寺庙神坛布满山,僧尼道士隐其间。
> 或为修炼祈多福,更有招摇欲敛钱。
#### 游五指山军人公墓
> 军人公墓据高山,阶级分明各有班。
> 遗憾未能归故土,羁魂隔海望乡关。
#### 游草岭古道
> 草岭山中藏古道[22],兰阳开发早经行。
> 而今另辟新公路,虎字碑前忆旧情。
[22] 古道昔日乃台北往宜兰通道,途中有虎字碑为镇山之宝。
#### 金宝山墓园
> 金宝地形如蝙蝠,居然风水也迷人。
> 埋尸纳骨争相去,小小精灵岂足珍。
#### 火烧松山寺灵骨堂
> 松山寺里藏尸骨,猛火焚烧尽化尘。
> 混杂余灰难辨主,可怜遗属泪纵横。
#### 北投今昔
> 丹凤岩前是北投,温泉水滑引人游。
> 曾多艳窟藏污垢,返璞归真景象幽。
#### 游关渡观潮汐
> 淡水滔滔向海流,流经关渡扼咽喉。
> 每逢潮汐回波转,吐纳随之秽物浮。
#### 登观音山俯瞰台北市
> 观音枕卧稻江[23]滨,侧视七星拱北辰。
> 回顾台湾今首府,应知自重护斯民。
[23] 淡水河亦名稻江,观音山与七星山隔江相望。
# <a href="http://www.shangluo.gov.cn/zmhd/ldxxxx.jsp?urltype=leadermail.LeaderMailContentUrl&wbtreeid=1112&leadermailid=3047">The problem of arbitrary property-management fees</a>
|Title|Content|
|:---:|---|
|Letter date|2015-04-07|
|Letter content|At the end of March, the property management of the east section of the Jiangnan residential compound posted a notice on its own authority raising electricity to 1 yuan/kWh and tap water to 4.5 yuan/ton, and adding a garbage collection fee of 15 yuan per month. The State Grid Shangluo branch sets residential electricity at 0.4983 yuan/kWh, Shangluo's residential tap water price is 2.45 yuan/ton, and the prescribed garbage fee is 7.9 yuan per month. The company raised its prices without any approval from the price or housing authorities. Worst of all, it pressures and coerces owners by refusing to sell them electricity. How are owners supposed to live under such overbearing terms?|
|Reply date|2015-04-21|
|Reply content|Dear netizen: It is true that the east-section property management of the Jiangnan compound (Shangluo Yulin Property Services Co., Ltd.) posted a notice at the end of March setting electricity at 1 yuan/kWh, tap water at 4.5 yuan/ton, and a garbage collection fee of 15 yuan/month. Investigation confirmed that these charges apply to commercial premises; the water, electricity and garbage charges for residential users were not adjusted. Under provincial price bureau document Shaan Jia Jia Fa [2011] No. 169, general commercial electricity is 0.8731 yuan/kWh; under municipal document Shang Zheng Jia Fa [2012] No. 76, non-residential water is 4.5 yuan/ton; under Shangluo municipal government document Shang Zheng Fa [2012] No. 38, the garbage-disposal fee for commercial premises in the urban area is 0.8 yuan per square metre per month. Because the power company has not installed individual meters in the east section, the property company charges commercial users 1 yuan/kWh, adding shared-use and line-loss costs to the 0.8731 yuan/kWh base; its 4.5 yuan/ton water charge does not exceed the standard; its garbage fee of 15 yuan per household per month, however, was not levied by the prescribed unit of measure and standard. We have required the property company to rectify as follows: first, install separate meters so that common-area water and electricity are measured separately, and regularly publish each household's share of common use and losses; second, charge commercial users the garbage-disposal fee at 0.8 yuan per square metre per month; third, before any future adjustment of charging standards, fully consult the owners' committee, obtain the owners' consent, and publicize the items and standards one month before implementation. Shangluo Municipal Price Bureau|
|Replying agency|<a href="../../categories/agencies/商洛市物价局.md">Shangluo Municipal Price Bureau</a>|
t2
=====
An OTP library
Test
-----
    $ rebar3 dialyzer
What to expect
--------------
When `{user_properties, []}` is included in the `mnesia:create_table/2` options, `dialyzer` will warn:

    src/t2.erl
    13: Function test/0 has no local return
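The module that produces this warning is not shown in the README; a minimal sketch of what `src/t2.erl` might contain is below (the table name is an assumption, and line numbers in the real file will differ):

```erlang
-module(t2).

-export([test/0]).

%% Calling mnesia:create_table/2 with {user_properties, []} in the
%% options list is what makes dialyzer report "no local return".
test() ->
    mnesia:create_table(t, [{user_properties, []}]).
```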
## Tweetr-Angular
This is the Single-Page-Application interface for the [Tweetr-Application](https://github.com/Venuor/Tweetr).
Users can sign up or log in to Tweetr and immediately start writing tweets limited to
140 characters.
The app features a global feed listing all Tweets of all users chronologically. Users can also
follow each other to view their Tweets on a personal feed. This feed only displays the Tweets of
users you are subscribed to.
There is also an admin user who can delete any user and any Tweet.
This app was built for educational purposes and is hosted on [a private server](https://rspiess.ddns.net/tweetr).
## Building Container-based Testing Environment with Kind
[Kind](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes clusters on your
local computer using Docker container “nodes”. The tool therefore requires that you
have the Docker Engine installed and configured.
### Prerequisites
This Kind installation is tested with the following environment:
- Ubuntu Linux 18.04.04 LTS
- Docker Engine 5:20.10.0~3-0~ubuntu-bionic
- Kind 0.9.0
- Kubernetes 1.19.1
- kubectl 1.19.1
### Installation and Configuration
#### Install Docker Engine from Repository
Please find below the combined steps to get started with Docker Engine on Ubuntu.
They consist of removing old versions of Docker, installing prerequisite packages,
configuring the repository, installing Docker Engine, and adding your user to the docker group.
```console
sudo apt-get update
sudo apt-get -y autoremove docker docker-engine docker.io containerd runc
sudo apt-get -y install apt-transport-https ca-certificates \
curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
sudo usermod -aG docker $USER
```
The original installation procedure can be found
[here](https://docs.docker.com/engine/install/ubuntu/).
*Note*: You may need to log out and log in again before going to the next step!
#### Install Kind Binary from the Repository
Stable binaries are also available on the releases page. Stable releases are
generally recommended for CI usage in particular. To install, download the
binary for your platform from “Assets” and place it in your $PATH.
```console
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.9.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin/kind
```
The detailed installation steps can be found
[here](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
#### Create cluster
Creating a Kubernetes cluster in Kind is as simple as the single command below:
```console
kind create cluster
```
The output will be something like this:
```console
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.19.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
```
Details on how to create a cluster can be found
[here](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster)
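By default `kind create cluster` creates a single-node cluster. If you want to experiment with more nodes, a cluster config file can be passed in. A minimal sketch (the file name and node count here are illustrative, not part of the original guide):

```console
# Write a config for 1 control-plane node and 2 workers.
cat <<EOF > kind-multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Show the generated config.
cat kind-multi-node.yaml

# Then create the cluster from it (requires kind to be installed):
# kind create cluster --config kind-multi-node.yaml
```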
#### Install _kubectl_ Binary from the Official Release
The Kubernetes command-line tool, _kubectl_, allows you to run commands against
Kubernetes clusters. You can use _kubectl_ to deploy applications, inspect and
manage cluster resources, and view logs.
Please find below how to download and install _kubectl_ and set it up for
accessing your cluster. We select the _kubectl_ `v1.19.1` release to match the
Kubernetes version mentioned in the Kind installation.
```console
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.1/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv kubectl /usr/bin/
kubectl cluster-info
```
### Verification
Check Docker installation
```console
docker run hello-world
```
Check cluster installation
```console
kind get clusters
```
Check the Kubectl tools
```console
kubectl version --client
```
Check the deployment result
```console
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
kubectl get pods
```
If everything is good, congratulations! You can continue to use our OctoBot.
Please start with the *Octo-Play* guide
[here](https://github.com/nus-ncl/OctoBot/tree/master/Octo-Play) or *Octo-App*
[here](https://github.com/nus-ncl/OctoBot/tree/master/Octo-App). | 30.8 | 98 | 0.754995 | eng_Latn | 0.882213 |
52258c893e81faa429295a2952c0970fe76f8153 | 555 | md | Markdown | schemas/enum-types.md | iVictor-Git/graphql-notes | 3fcfd1fef029087ea35a82b7879c30a81fcfb3ab | [
"MIT"
] | null | null | null | schemas/enum-types.md | iVictor-Git/graphql-notes | 3fcfd1fef029087ea35a82b7879c30a81fcfb3ab | [
"MIT"
] | null | null | null | schemas/enum-types.md | iVictor-Git/graphql-notes | 3fcfd1fef029087ea35a82b7879c30a81fcfb3ab | [
"MIT"
] | null | null | null | ##### [previous][previous]
## Enumeration types
So, these are a special kind of scalar type that's restricted to a set of allowed values. The benefit to this is:
1. Validate any arguments of this type to be one of the allowed values
2. Communicate through type system that a field will always be one of these finite set of values.
Here's how it would look in the GQL schema language:
```js
enum Episode {
NEWHOPE
EMPIRE
JEDI
}
```
That concludes this section, [next][next].
[previous]: ./scalar-types.md
[next]: ./lists-and-non-null.md
| 23.125 | 113 | 0.718919 | eng_Latn | 0.997678 |
<h2>News Now</h2><table><tr><th>Title</th><th>Content</th><th>URL</th><th>Author</th></tr>
<tr><td><h3>Never heard of the 'boogaloo' movement? Here's how COVID-19 is helping members gain power</h3></td><td><p>The pandemic has been fertile ground for far-right messaging. American extremists are playing on people's health fears to normalise their views, writes Blyth Crawford....</p></td><td><a href=http://www.abc.net.au/news/2020-09-17/coronavirus-far-right-conspiracies-political-power/12670278>http://www.abc.net.au/news/2020-09-17/coronavirus-far-right-conspiracies-political-power/12670278</a></td><td><p>ABC News</p></td></tr><tr><td><h3>When coronavirus took hold in Melbourne, Anna fled north</h3></td><td><p>Australians are crossing a gulf in pandemic freedoms, swapping life in locked-down cities for the coronavirus-free NT. Demographers are watching closely to see if it could solve the Territory's population woes....</p></td><td><a href=http://www.abc.net.au/news/2020-09-17/nt-covid-19-refugees-solve-northern-territory-population-woes/12651268>http://www.abc.net.au/news/2020-09-17/nt-covid-19-refugees-solve-northern-territory-population-woes/12651268</a></td><td><p>ABC News</p></td></tr><tr><td><h3>Clive Palmer's company donates $2 million to United Australia Party</h3></td><td><p>The Electoral Commission Queensland's public database shows Mineralogy Pty Ltd made the donation yesterday, ahead of next month's state poll....</p></td><td><a href=http://www.abc.net.au/news/2020-09-17/clive-palmer-donates-2-million-dollars-to-united-australia-party/12673174>http://www.abc.net.au/news/2020-09-17/clive-palmer-donates-2-million-dollars-to-united-australia-party/12673174</a></td><td><p>ABC News</p></td></tr><tr><td><h3>Biden said Trump couldn't be trusted on a vaccine. 
Hours later, the President clashed with the person in charge of delivering it</h3></td><td><p>US president Donald Trump says his own director of the Centers for Disease Control and Prevention was "confused" when he said it would take six to nine months for a coronavirus vaccine to be effective....</p></td><td><a href=http://www.abc.net.au/news/2020-09-17/trump-contradicts-cdc-director-on-coroanvirus-vaccine/12672238>http://www.abc.net.au/news/2020-09-17/trump-contradicts-cdc-director-on-coroanvirus-vaccine/12672238</a></td><td><p>ABC News</p></td></tr><tr><td><h3>Hong Kong formally objects to US demand for 'Made in China' export label</h3></td><td><p>HONG KONG - Hong Kong has filed a formal objection with the United States over its demand for "Made in China" labels on goods exported from the Chinese semi-autonomous city, the commerce secretary said on Wednesday.
Washington's move last month followed China's imposition of a national security law ...</p></td><td><a href=http://www.asiaone.com/china/hong-kong-formally-objects-us-demand-made-china-export-label>http://www.asiaone.com/china/hong-kong-formally-objects-us-demand-made-china-export-label</a></td><td><p></p></td></tr><tr><td><h3>Malaysian advertising campaign goes viral for all the wrong reasons</h3></td><td><p>Streaming service iQIYI revealed their campaign poster for Malaysia Day on September 9 to promote local content on their portal....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741611/Malaysian-advertising-campaign-goes-viral-wrong-reasons-spot-glaring-error.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741611/Malaysian-advertising-campaign-goes-viral-wrong-reasons-spot-glaring-error.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>There's some fin in the air tonight! Danny the Dolphin leaps from sea</h3></td><td><p>Jet-skiers got a thrill when a playful dolphin leapt out of the water to greet them - even shooting them a smile....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741817/Theres-fin-air-tonight-Danny-Dolphin-leaps-sea.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741817/Theres-fin-air-tonight-Danny-Dolphin-leaps-sea.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Why Jamie Gao was murdered by cops Roger Rogerson and Glen McNamara</h3></td><td><p>New bombshell claims have come to light that Jamie Gao was lured to his death before shot dead by Roger Rogerson and Glen McNamara in Padstow storage unit in Sydney's south-west....</p></td><td><a 
href=https://www.dailymail.co.uk/news/article-8741315/Bombshell-new-claims-Jamie-Gao-murdered-cops-Roger-Rogerson-Glen-McNamara.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741315/Bombshell-new-claims-Jamie-Gao-murdered-cops-Roger-Rogerson-Glen-McNamara.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Karl Stefanovic slams Queensland border ban</h3></td><td><p>Queensland's harsh coronavirus restrictions will prevent Australian Prime Minister Scott Morrison from attending the historic AFL Grand Final in Brisbane on October 24....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741713/Karl-Stefanovic-slams-Queensland-border-ban-means-Scott-Morrison-miss-AFL-Grand-Final.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741713/Karl-Stefanovic-slams-Queensland-border-ban-means-Scott-Morrison-miss-AFL-Grand-Final.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>New South Wales records 5 new coronavirus cases</h3></td><td><p>Two infections have been linked to known clusters, while two are from hotel quarantine and one is under investigation....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741815/New-South-Wales-records-5-new-coronavirus-cases.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741815/New-South-Wales-records-5-new-coronavirus-cases.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Australians in China warned to be 'careful': Expats urged to come home</h3></td><td><p>Former senior Defence official and diplomat Allan Behm (pictured) issued the stern warning on Thursday after documents revealed the Australian Federal Police were investigating China's Sydney consulate....</p></td><td><a 
href=https://www.dailymail.co.uk/news/article-8741401/Defence-chief-issues-warning-Australians-living-China-reveals-expats-return-home.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741401/Defence-chief-issues-warning-Australians-living-China-reveals-expats-return-home.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Car flips after merging into another vehicle on Sydney highway</h3></td><td><p>Dash cam footage shows several vehicles driving along the Great Western Highway in Mt Druitt, in Sydney's west about 3pm on Wednesday....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741721/Car-flips-upside-Great-Western-Highway-Mt-Druitt-Sydneys-west.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741721/Car-flips-upside-Great-Western-Highway-Mt-Druitt-Sydneys-west.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Incredible moment three ducks escort a deadly tiger snake to shore</h3></td><td><p>Tim Kemp was in Whiteman Park around 20km north of Perth this week when he spotted the snake making its way to a patch of vegetation floating in the swamp where the ducks had been sitting....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741673/Three-ducks-escort-deadly-tiger-snake-shore-form-lake-Whiteman-Park-WA.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741673/Three-ducks-escort-deadly-tiger-snake-shore-form-lake-Whiteman-Park-WA.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Man is charged after allegedly sending death threats to QLD</h3></td><td><p>Police raided the 43-year-old's home in Narang on the Gold Coast on Wednesday night and arrested him. 
He was charged with one count of using a carriage service to make threats to kill....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741987/Man-charged-allegedly-sending-death-threats-QLD.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741987/Man-charged-allegedly-sending-death-threats-QLD.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Netflix trailer shows killer Chris Watt's wife in video before murders</h3></td><td><p>Netflix doc 'American Murder: The Family Next Door', airing September 30, will look into the grisly murders including the apparent picture perfect family life that led up to the crimes....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741679/Netflix-trailer-shows-killer-Chris-Watts-smiling-wife-home-video-murdered-family.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741679/Netflix-trailer-shows-killer-Chris-Watts-smiling-wife-home-video-murdered-family.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Unemployment rate is down to 6.8% after 100,000 jobs created in August</h3></td><td><p>The unemployment rate fell from a 22-year high of 7.5 per cent to 6.8 per cent in August as111,000 jobs were created....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741855/Road-recovery-Unemployment-rate-6-8-100-000-jobs-created-August.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741855/Road-recovery-Unemployment-rate-6-8-100-000-jobs-created-August.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>How fraudster socialite was brought undone by her foolish emails</h3></td><td><p>A Sydney socialite jailed for fraud in tearful scenes was brought undone by the most simple of checks after repeatedly emailing police excuses why she couldn't see them....</p></td><td><a 
href=https://www.dailymail.co.uk/news/article-8738319/Annabel-Walker-socialite-brought-undone-simple-police-checks.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8738319/Annabel-Walker-socialite-brought-undone-simple-police-checks.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Texas woman's tweet gets thousands of customers for dad's taco truck</h3></td><td><p>Giselle Aviles tweeted for support after on Saturday her dad only made $6 at Taquiera Al Torito in Atascocita, Texas and the when Elias Aviles returned people were lining up before it opened Monday....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741901/Tweet-draws-thousands-customers-Taquiera-Al-Torito-taco-truck-Atascocita-Texas.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741901/Tweet-draws-thousands-customers-Taquiera-Al-Torito-taco-truck-Atascocita-Texas.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Jacinda Ardern's lockdown caused an even WORSE recession than feared</h3></td><td><p>New Zealand's strict COVID-19 lockdown has seen the nation plunge into recession for the first time in a decade....</p></td><td><a href=https://www.dailymail.co.uk/news/article-8741763/How-Jacinda-Arderns-lockdowns-caused-WORSE-recession-feared.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741763/How-Jacinda-Arderns-lockdowns-caused-WORSE-recession-feared.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Prankster who dived into zoo aquarium charged with fighting emus</h3></td><td><p>Mohammed Ali Khawaja, 30, was filmed by laughing friends swimming with fish in his shorts at Sydney Zoo, but that was not all the alleged mischief he got up to on his visit to the new zoo....</p></td><td><a 
href=https://www.dailymail.co.uk/news/article-8741863/Prankster-dived-Sydney-Zoo-aquarium-charged-fighting-emus.html?ns_mchannel=rss>https://www.dailymail.co.uk/news/article-8741863/Prankster-dived-Sydney-Zoo-aquarium-charged-fighting-emus.html?ns_mchannel=rss</a></td><td><p>Editor</p></td></tr><tr><td><h3>Facebook Buys Washington Corporate Campus Originally Intended for REI</h3></td><td><p>Despite making a significant commitment to remote work going forward, Facebook continues to gobble up large parcels of commercial real estate, and the complex that was originally slated to be home to outdoor clothing and gear retailer REI in Bellevue, Wash., became the latest addition to the list th...</p></td><td><a href=https://www.adweek.com/digital/facebook-buys-washington-corporate-campus-originally-intended-for-rei/>https://www.adweek.com/digital/facebook-buys-washington-corporate-campus-originally-intended-for-rei/</a></td><td><p>@adweek</p></td></tr><tr><td><h3>India x Cleantech -- September 2020</h3></td><td><p>Welcome to another issue of our new India x Cleantech series! On a monthly basis, we are pulling news from across clean technology sectors in India into a single, concise summary article about the country....</p></td><td><a href=https://cleantechnica.com/2020/09/16/india-x-cleantech-september-2020/>https://cleantechnica.com/2020/09/16/india-x-cleantech-september-2020/</a></td><td><p>@cleantechnica</p></td></tr><tr><td><h3>Tesla Owners Are Fighting Human Trafficking & Attempting A Guinness World Record</h3></td><td><p>Tesla owners will be descending upon Atlanta to help set a Guinness World Record while fighting human trafficking. 
"Join Us, Elon Musk!"...</p></td><td><a href=https://cleantechnica.com/2020/09/16/tesla-owners-are-fighting-human-trafficking-attempting-a-guinness-world-record/>https://cleantechnica.com/2020/09/16/tesla-owners-are-fighting-human-trafficking-attempting-a-guinness-world-record/</a></td><td><p>@https://twitter.com/JohnnaCrider0</p></td></tr><tr><td><h3>Volocopter Opens World's 1st Electric Air Taxi Flight Reservations</h3></td><td><p>Volocopter has opened up reservations for the first commercial electric air taxi rides in an electric vertical takeoff and landing (eVTOL) aircraft. A what? If you haven't been following along as this industry has been heating up, an eVTOL aircraft is sort of a new-age helicopter...</p></td><td><a href=https://cleantechnica.com/2020/09/16/volocopter-opens-worlds-1st-electric-air-taxi-flight-reservations/>https://cleantechnica.com/2020/09/16/volocopter-opens-worlds-1st-electric-air-taxi-flight-reservations/</a></td><td><p>Zachary Shahan</p></td></tr><tr><td><h3>Exelixis, Inc. (EXEL) CEO Michael Morrissey Presents Cantor 2020 Global Virtual Healthcare Conference (Transcript)</h3></td><td><p>Exelixis, Inc. 
(NASDAQ:EXEL) Cantor 2020 Global Virtual Healthcare Conference September 16, 2020, 04:40 PM ET Company Participants Michael Morrissey - CEO, President Conference Call Participants Alethia Young - Cantor Presentation Operator Alethia Young Hey, everybody....</p></td><td><a href=https://seekingalpha.com/article/4374812-exelixis-inc-exel-ceo-michael-morrissey-presents-cantor-2020-global-virtual-healthcare?source=feed_sector_healthcare>https://seekingalpha.com/article/4374812-exelixis-inc-exel-ceo-michael-morrissey-presents-cantor-2020-global-virtual-healthcare?source=feed_sector_healthcare</a></td><td><p>@</p></td></tr><tr><td><h3>India’s ‘Command And Control State’ Is Bizarre On Interest Rates</h3></td><td><p>The interest rate is not a political tool, but the most critical price in a competitive setting, writes Raghav Bahl....</p></td><td><a href=https://www.bloombergquint.com/opinion/indias-command-and-control-state-is-bizarre-on-interest-rates>https://www.bloombergquint.com/opinion/indias-command-and-control-state-is-bizarre-on-interest-rates</a></td><td><p>Raghav Bahl</p></td></tr><tr><td><h3>Why This Bank’s Customers Are Swiping Their Credit Cards For Cash</h3></td><td><p>Unlike other lenders, RBL Bank isn't frowning on cash withdrawals via credit cards. 
Is it being innovative or imprudent?...</p></td><td><a href=https://www.bloombergquint.com/business/why-this-banks-customers-are-swiping-their-credit-cards-for-cash>https://www.bloombergquint.com/business/why-this-banks-customers-are-swiping-their-credit-cards-for-cash</a></td><td><p>Vishwanath Nair</p></td></tr><tr><td><h3>Paytm Money Aims To Be India’s Top Wealth Manager</h3></td><td><p>Paytm Money aims to capitalise on its existing user base....</p></td><td><a href=https://www.bloombergquint.com/mutual-funds/paytm-money-aims-to-be-indias-top-wealth-manager>https://www.bloombergquint.com/mutual-funds/paytm-money-aims-to-be-indias-top-wealth-manager</a></td><td><p>Arushi Rajput</p></td></tr><tr><td><h3>Live: SGX Nifty Indicates Losses; Happiest Minds, IEX, SAIL, HSIL In Focus</h3></td><td><p>Catch all live updates on share prices, index moves, corporate announcements and more from the Sensex and Nifty, today....</p></td><td><a href=https://www.bloombergquint.com/markets/live-sgx-nifty-indicates-losses-happiest-minds-iex-sail-hsil-in-focus>https://www.bloombergquint.com/markets/live-sgx-nifty-indicates-losses-happiest-minds-iex-sail-hsil-in-focus</a></td><td><p>Hormaz Fatakia</p></td></tr><tr><td><h3>Harry Potter role-playing video game unveiled by Warner Bros</h3></td><td><p>The book and movie franchise from J.K. Rowling, now has spawned a role playing video game, "Hogwarts Legacy," that will let players experience life as a student at the Hogwarts School of Witchcraft and Wizardry in the 1800s....</p></td><td><a href=https://www.foxbusiness.com/media/harry-potter-role-playing-video-game-unveiled-by-warner-bros>https://www.foxbusiness.com/media/harry-potter-role-playing-video-game-unveiled-by-warner-bros</a></td><td><p>Fox Business</p></td></tr></table>
| 4,383.5 | 14,806 | 0.763944 | eng_Latn | 0.555034 |
5226a0f1ec6976f8fdec4c51816778873b52fdc4 | 1,030 | md | Markdown | README.md | devicebuilder/Blue_Iris_Server_for_ST | cab994734114aa81bc099f60997c3577c3fd01a2 | [
"Apache-2.0"
] | null | null | null | README.md | devicebuilder/Blue_Iris_Server_for_ST | cab994734114aa81bc099f60997c3577c3fd01a2 | [
"Apache-2.0"
] | null | null | null | README.md | devicebuilder/Blue_Iris_Server_for_ST | cab994734114aa81bc099f60997c3577c3fd01a2 | [
"Apache-2.0"
] | 1 | 2018-11-04T12:09:38.000Z | 2018-11-04T12:09:38.000Z | # Blue Iris Server for ST
Smartapp to manage Blue Iris Server from Smartthings app.
Using the Smartapp to check on the status of the Blue Irsi (BI) server and to set various configuration. This is just an initial project to see if I can write a Smartapp successfully after creating a successful custom device for each Blue Iris camera.
This is a learning project made possible by generous code contributors.
Update 2/18/15
- got the SmartApp working to get status and list of cameras and the camera's settings.
- status of server can be seen as text.
- status of camera can be seen as text along with link to a snapshot.
- camera's video feed is not working well in the Smartthings app internal browser.
- Looking for ideas on what would a Blue Iris Server integration do on Smartthings versus just have indidual cameras as devices.
- May stop this project since I know what I need to know to connect to the BI server and focus on the cameras as devices instead.
- Stay tuned for the device code when I have time to post them.
| 68.666667 | 251 | 0.78835 | eng_Latn | 0.999557 |
52274a618292f449e597e4719aa175108b0f4683 | 63 | md | Markdown | README.md | udzura/infrastudy-4th | 89b1a9f4ad2abc47efd9ea6f181b7d17ae51cbc0 | [
"MIT"
] | 2 | 2020-07-29T12:16:46.000Z | 2020-08-13T23:31:51.000Z | README.md | udzura/infrastudy-4th | 89b1a9f4ad2abc47efd9ea6f181b7d17ae51cbc0 | [
"MIT"
] | null | null | null | README.md | udzura/infrastudy-4th | 89b1a9f4ad2abc47efd9ea6f181b7d17ae51cbc0 | [
"MIT"
] | null | null | null | # infrastudy-4th
Code used at Infra Study Meetup #4, "Interesting Infrastructure Technologies and What Comes Next" (インフラの面白い技術とこれから).
| 21 | 45 | 0.825397 | azb_Arab | 0.141445 |
5227f9d67d9b8a9d40b492445071bd409577589a | 6,180 | md | Markdown | README.md | Civil-Service-Human-Resources/csl-alpha | 92b8102eaf0a17852452e0c6f917d3987443540d | [
"MIT"
] | null | null | null | README.md | Civil-Service-Human-Resources/csl-alpha | 92b8102eaf0a17852452e0c6f917d3987443540d | [
"MIT"
] | null | null | null | README.md | Civil-Service-Human-Resources/csl-alpha | 92b8102eaf0a17852452e0c6f917d3987443540d | [
"MIT"
] | null | null | null | CSL - My Learning Plan (MLP)
============================
This is a version of the prototype which uses [OpenLRS](http://apereo-learning-analytics-initiative.github.io/OpenLRS) as learning record store.
Service dependencies
--------------------
In order to fully run the prototype you have to have 2 other services configured and running, those are:
- Learning Registry - [docker container](https://github.com/crossgovernmentservices/csl-learningregistry-containers)
- Learning Records Store ([OpenLRS](https://learninglocker.net/)) - [docker container](https://github.com/crossgovernmentservices/csl-openlrs-container)
Requirements
------------
#### Running docker
- [Docker](https://www.docker.com)
- [VirtualBox](https://www.virtualbox.org) - *if running on Mac*
#### Running Python virtual environment
- Python 3
- MongoDb
- SASS (for flask assets)
- virtualenv and virtualenvwrapper (not a hard requirement but steps below assume you are using them)
Quickstart
----------
### Docker
Just run:
```
docker-compose up
```
or
```
docker-compose up -d
```
and there should be 3 containers:
- webusers - *recreating prototype users*
- web - *the actual running prototype*
- mongo
You can access the prototype by navigating to your Docker machine IP on port `8002`. You can find your Docker machine IP by running `docker-machine ip [machine name]`, where the default machine name is `default`.
### Python virtual environment
Then run the following commands to bootstrap your environment.
```
mkvirtualenv --python=/path/to/python3 [appname]
```
Change to the directory you checked out and install the Python requirements.
```
pip install -r requirements.txt
```
The base environment variables for running the application locally are in `environment.sh`.

Once this is all done you can:
Start mongo:
```
mongod
```
Then run the app:
```
./run.sh
```
You can access the prototype by navigating to localhost on port `8000`.
Comparison to Learning Locker
-----------------------------
A simple comparison of [Learning Locker](https://learninglocker.net) and [OpenLRS](http://apereo-learning-analytics-initiative.github.io/OpenLRS) answering the question: *how difficult would it be to switch to OpenLRS without compromising on the functionality?*
### Querying
| Learning Locker | OpenLRS |
| ---------------- | ------- |
| **MongoDB’s Aggregation Framework**. HTTP GET with JSON passed in url params - this forces some query values to be escaped, which may then not be recognised by the Mongo Aggregate API. Luckily, escaped urls are mapped to their unescaped equivalents. | **Elastic Search** (mappings, dynamic mapping) - allows HTTP GET with the query json in params, and POST, which is quite easy and has no string escaping complications. Expected async issues when querying straight after API use. |
| If it was in a distributed model **async issues would surface over here as well** | **Very simple API** - there is a simple API (looked at source code) but we haven’t explored this route |
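
Since OpenLRS exposes its statements through Elasticsearch, a query can travel as JSON in a POST body instead of URL parameters, which sidesteps the escaping problems noted above for the Mongo Aggregation GET API. A minimal sketch in Python (the field paths `actor.mbox` and `verb.id` mirror the xAPI statement shape and are assumptions, not OpenLRS's documented index mapping):

```python
import json

def build_statement_query(actor_mbox, verb_id, size=50):
    """Build an Elasticsearch bool query for xAPI statements.

    The field paths ("actor.mbox", "verb.id") follow the xAPI
    statement JSON; the actual OpenLRS index mapping may differ.
    """
    return {
        "size": size,
        "sort": [{"timestamp": {"order": "desc"}}],
        "query": {
            "bool": {
                "must": [
                    {"term": {"actor.mbox": actor_mbox}},
                    {"term": {"verb.id": verb_id}},
                ]
            }
        },
    }

query = build_statement_query(
    "mailto:[email protected]",
    "http://adlnet.gov/expapi/verbs/completed",
)
# Serialised once and POSTed as the request body - no URL escaping involved.
body = json.dumps(query)
```

Because the query travels in the request body, nothing has to be URL-escaped, which is the main complication with the GET-with-params route.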
### xApi standard
| Learning Locker | OpenLRS |
| ---------------- | ------- |
| Keeps up with the standard. The API is easy to use and user friendly, with more mature behaviour - for example, it adds a timestamp to a new statement if one is not present | No timestamp on OpenLRS statements (if a timestamp is not present on a statement, the LRS SHOULD fill it in using the Stored value, according to the specification) |
| | Given the async issues, one has to utilise Timestamp here - using Stored is not the best option |
| | Very close to the bone when it comes to the specification (an MVP approach). Whenever the specification says SHOULD, one can assume the functionality is not there. |
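
Because OpenLRS leaves a missing timestamp unset, statements have to be stamped client-side before they are posted. A sketch of such a guard (plain Python, not part of either LRS's API):

```python
from datetime import datetime, timezone

def ensure_timestamp(statement):
    """Return a copy of an xAPI statement with "timestamp" set.

    The spec says the LRS SHOULD fall back to the Stored time when
    the timestamp is absent; OpenLRS does not, so the client sets an
    ISO 8601 timestamp itself before POSTing.
    """
    stamped = dict(statement)
    stamped.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
    return stamped

statement = {
    "actor": {"mbox": "mailto:[email protected]"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "http://example.com/courses/fire-safety"},
}
stamped = ensure_timestamp(statement)
```

Queries can then sort and filter on `timestamp` reliably, rather than on the Stored value.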
### Voiding statements
| Learning Locker | OpenLRS |
| ---------------- | ------- |
| Using the API with a Mongo aggregation framework query to filter the statements which are supposed to be voided | POST a new voiding reference statement which voids another statement (complies with the spec) |
| | Possible unnecessary void statements if going with the 2-statement approach to planning - voiding both the Planned reference and the actual action statement |
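
The OpenLRS route maps directly onto the spec: voiding is just one more statement whose verb is the ADL "voided" verb and whose object is a `StatementRef` to the statement being voided. A sketch (the actor and statement id are placeholders):

```python
VOIDED_VERB = "http://adlnet.gov/expapi/verbs/voided"

def build_voiding_statement(actor, target_statement_id):
    """Build the xAPI statement that voids another statement."""
    return {
        "actor": actor,
        "verb": {"id": VOIDED_VERB, "display": {"en-US": "voided"}},
        "object": {
            "objectType": "StatementRef",  # required by the xAPI spec
            "id": target_statement_id,
        },
    }

void_stmt = build_voiding_statement(
    {"mbox": "mailto:[email protected]"},
    "9e13cefd-53d3-4eac-b5ed-2cf6693903bb",  # id of the statement to void
)
```

With the 2-statement planning approach this has to be done twice: once for the Planned reference and once for the action statement.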
### JSON integration
| Learning Locker | OpenLRS |
| ---------------- | ------- |
| HTTP code: `{ code: 200 }` | HTTP code: `{ status: 200 }` |
| | Latencies (due to the distributed model) between an API POST and the Elasticsearch query results on: <ul><li>**Course complete page** (new learning record - “completed”) - solved by storing the info in a cookie</li><li>**Plan page** after diagnostics (new plan) - can be solved in a similar fashion but is not for now</li></ul> |
### Planning functionality
| Learning Locker | OpenLRS |
| ---------------- | ------- |
| Done using the SubStatement Context node | Missing Context node in the Object class for SubStatements. One solution is to have 2 statements: statement 1 ("Planned") referencing statement 2, which describes the action to complete. This creates a lot of work when it comes to statement management and filtering (especially if the planner and learner are not the same person). Apereo commented that this was not intentional and is very willing to collaborate to enhance the model. |
| | For simplicity planner and learner are the same person |
| | Marking a planned item as ‘done’ has not been implemented, as it is too much overhead for this piece of functionality to be added without a Context node in the SubStatement object (the 2-statement workaround) |
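
The 2-statement workaround can be sketched as a pair of plain xAPI statements; the "planned" verb IRI below is a placeholder, since the prototype's actual vocabulary is not shown here:

```python
import uuid

# Placeholder verb IRIs - the prototype's real vocabulary may differ.
PLANNED_VERB = "http://example.com/verbs/planned"
COMPLETE_VERB = "http://adlnet.gov/expapi/verbs/completed"

def build_planned_pair(actor, activity_id):
    """Build the 2-statement workaround for the missing SubStatement Context.

    Statement 2 (the "action") describes what is to be completed;
    statement 1 marks it as planned by pointing a StatementRef at it.
    Cancelling the plan later means voiding both statements.
    """
    action = {
        "id": str(uuid.uuid4()),
        "actor": actor,
        "verb": {"id": COMPLETE_VERB},
        "object": {"id": activity_id},
    }
    planned = {
        "id": str(uuid.uuid4()),
        "actor": actor,
        "verb": {"id": PLANNED_VERB},
        "object": {"objectType": "StatementRef", "id": action["id"]},
    }
    return planned, action

planned_stmt, action_stmt = build_planned_pair(
    {"mbox": "mailto:[email protected]"},
    "http://example.com/courses/gdpr-basics",
)
```

This makes the management cost visible: every plan occupies two statement ids, and filtering has to resolve the StatementRef to find the action.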
### Admin and setup
| Learning Locker | OpenLRS |
| ---------------- | ------- |
| Long-winded PHP install routine with many PHP-based dependencies | Need to build the package using Maven. Good if you are familiar with this routine. Generates a “fat jar” which should be set up elsewhere |
| Comes as a complete packaged app with an Admin Interface and Basic Reporting. Quite useful in general when starting to get to grips with an LRS | Once the jar or war file has been generated, the only dependency is the JVM itself, so deployment is quite simple |
| Multiple LRS and Client support within the same instance | Only supports one user per LRS instance. Each instance is a new process |
| | No GUI, but the Apereo project has Dashboards and Other Applications that can plug-in in to create a more complete solution |
| 48.28125 | 472 | 0.727994 | eng_Latn | 0.993292 |
522c784c2bac5de5c35a9990e4fc0eb4fc43428f | 4,846 | md | Markdown | docs/framework/wcf/ws-atomictransaction-configuration-utility-wsatconfig-exe.md | cihanyakar/docs.tr-tr | 03b6c8998a997585f61b8be289df105261125239 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/ws-atomictransaction-configuration-utility-wsatconfig-exe.md | cihanyakar/docs.tr-tr | 03b6c8998a997585f61b8be289df105261125239 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/ws-atomictransaction-configuration-utility-wsatconfig-exe.md | cihanyakar/docs.tr-tr | 03b6c8998a997585f61b8be289df105261125239 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: WS-AtomicTransaction Configuration Utility (wsatConfig.exe)
ms.date: 03/30/2017
ms.assetid: 1c56cf98-3963-46d5-a4e1-482deae58c58
ms.openlocfilehash: 31b2b3cf16857bf08a4f8d09f47f80d9b34a53b8
ms.sourcegitcommit: 64f4baed249341e5bf64d1385bf48e3f2e1a0211
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 09/07/2018
ms.locfileid: "44085912"
---
# <a name="ws-atomictransaction-configuration-utility-wsatconfigexe"></a>WS-AtomicTransaction Configuration Utility (wsatConfig.exe)
The WS-AtomicTransaction Configuration Utility is used to configure basic WS-AtomicTransaction support settings.
## <a name="syntax"></a>Syntax
```
wsatConfig [Options]
```
## <a name="remarks"></a>Remarks
This command-line tool can be used only to configure basic WS-AT settings on the local machine. If you want to configure settings on both local and remote machines, you should use the MMC snap-in instead, as described in [Configuring WS-Atomic Transaction Support](../../../docs/framework/wcf/feature-details/configuring-ws-atomic-transaction-support.md).
The command-line tool can be found in the Windows SDK installation location, specifically:

%SystemRoot%\Microsoft.Net\Framework\v3.0\Windows Communication Foundation\wsatConfig.exe
If you are running [!INCLUDE[wxp](../../../includes/wxp-md.md)] or [!INCLUDE[ws2003](../../../includes/ws2003-md.md)], you must download an update before running WsatConfig.exe. For more information about this update, see [Update for Commerce Server 2007 (KB912817)](https://go.microsoft.com/fwlink/?LinkId=95340) and [Availability of Windows XP COM+ Hotfix Rollup Package 13](https://go.microsoft.com/fwlink/?LinkId=95341).
The following table shows the options available with the WS-AtomicTransaction Configuration Utility (wsatConfig.exe).
> [!NOTE]
> When you set an SSL certificate for a selected port, you overwrite the original SSL certificate associated with that port, if one exists.
|Options|Description|
|-------------|-----------------|
|-accounts:\<account,>|Specifies a comma-separated list of accounts that can participate in WS-AtomicTransaction. The validity of these accounts is not checked.|
|-accountsCerts:\<thumb \| "Issuer\SubjectName">|Specifies a comma-separated list of certificates that can participate in WS-AtomicTransaction. Certificates are identified by a thumbprint or an Issuer\SubjectName pair. Use {EMPTY} for the subject name if it is empty.|
|-endpointCert:\<machine \| \<thumb> \| "Issuer\SubjectName">|Uses the machine certificate, or another local endpoint certificate identified by a thumbprint or an Issuer\SubjectName pair. Use {EMPTY} for the subject name if it is empty.|
|-maxTimeout:\<sec>|Specifies the maximum timeout in seconds. Valid values are between 0 and 3600.|
|-network:\<enable \| disable>|Enables or disables WS-AtomicTransaction network support.|
|-port:\<portNum>|Sets the HTTPS port for WS-AtomicTransaction.<br /><br /> If you enabled the firewall before running this tool, the port is automatically registered in the exception list. If the firewall was disabled before running this tool, nothing firewall-related is configured.<br /><br /> If you enable the firewall after configuring WS-AT, you must run this tool again and enter the port number using this parameter. If you disable the firewall after configuration, WS-AT continues to work without additional input.|
|-timeout:\<sec>|Specifies the default timeout in seconds. Valid values are between 1 and 3600.|
|-traceActivity:\<enable \| disable>|Enables or disables the tracing of activity events.|
|-traceLevel:\<off \| error \| critical \| warning \| info \| verbose \| all>|Specifies the trace level.|
|-tracePII:\<enable \| disable>|Enables or disables the tracing of personally identifiable information.|
|-traceProp:\<enable \| disable>|Enables or disables the tracing of propagation events.|
|-restart|Restarts MSDTC so the changes take effect immediately. If this option is not specified, the changes take effect the next time MSDTC is restarted.|
|-show|Displays the current WS-AtomicTransaction protocol settings.|
|-virtualServer:\<virtualServer>|Specifies the name of the clustered DTC resource.|
## <a name="see-also"></a>See also
[Using WS-AtomicTransaction](../../../docs/framework/wcf/feature-details/using-ws-atomictransaction.md)
[Configuring WS-Atomic Transaction Support](../../../docs/framework/wcf/feature-details/configuring-ws-atomic-transaction-support.md)
| 88.109091 | 632 | 0.786835 | tur_Latn | 0.99934 |
522ca2f0119eb037d3ac6fef0351a1d1142cbd03 | 1,549 | md | Markdown | vendor/src/github.com/denizeren/dynamostore/README.md | stevenbooru/stevenbooru | c3d6d8a6130a4054424ecfcc0fc8605f38dd4f89 | [
"CC0-1.0"
] | 3 | 2015-08-01T02:24:52.000Z | 2015-08-10T16:37:06.000Z | README.md | denizeren/dynamostore | 69258d14eb58e5a5b894138d7f2f2609a5effc1f | [
"MIT"
] | 13 | 2015-08-01T02:23:22.000Z | 2015-08-02T23:18:15.000Z | vendor/src/github.com/denizeren/dynamostore/README.md | stevenbooru/stevenbooru | c3d6d8a6130a4054424ecfcc0fc8605f38dd4f89 | [
"CC0-1.0"
] | 1 | 2022-02-04T15:41:01.000Z | 2022-02-04T15:41:01.000Z | # DynamoStore [](http://godoc.org/github.com/denizeren/dynamostore) [](https://travis-ci.org/denizeren/dynamostore)
A session store backend for [gorilla/sessions](http://www.gorillatoolkit.org/pkg/sessions) - [src](https://github.com/gorilla/sessions).
## Requirements
Depends on the [Goamz/aws](https://github.com/crowdmob/goamz/aws) Go Amazon Library
Depends on the [Goamz/dynamodb](https://github.com/crowdmob/goamz/dynamodb) Go Amazon Dynamodb Library
## Installation
go get github.com/denizeren/dynamostore
## Documentation
Available on [godoc.org](http://godoc.org/github.com/denizeren/dynamostore).
See http://www.gorillatoolkit.org/pkg/sessions for full documentation on underlying interface.
### Example
// Fetch new store.
store, err := NewDynamoStore("AWS_ACCESS_KEY", "AWS_SECRET_KEY", "DYNAMODB_TABLE_NAME", "AWS_REGION_NAME", "SECRET-KEY")
if err != nil {
panic(err)
}
// Get a session.
session, err = store.Get(req, "session-key")
if err != nil {
log.Error(err.Error())
}
// Add a value.
session.Values["foo"] = "bar"
// Save.
if err = sessions.Save(req, rsp); err != nil {
t.Fatalf("Error saving session: %v", err)
}
// Delete session.
session.Options.MaxAge = -1
if err = sessions.Save(req, rsp); err != nil {
t.Fatalf("Error saving session: %v", err)
}
| 32.270833 | 274 | 0.679793 | yue_Hant | 0.510906 |
0d4f357f9d0513e310a1f50a21218bf50bf87803 | 79 | md | Markdown | INDIVIDU.md | hexatester/sdgs-dashboard | 2e646070698580bd903d56786235c9d26d947d4e | [
"MIT"
] | null | null | null | INDIVIDU.md | hexatester/sdgs-dashboard | 2e646070698580bd903d56786235c9d26d947d4e | [
"MIT"
] | 1 | 2021-08-30T08:32:42.000Z | 2021-08-30T08:32:42.000Z | INDIVIDU.md | hexatester/sdgs-dashboard | 2e646070698580bd903d56786235c9d26d947d4e | [
"MIT"
] | null | null | null | # Survey Individu
<https://dashboard-sdgs.kemendesa.go.id/#/survey/individu>
| 15.8 | 58 | 0.746835 | yue_Hant | 0.283527 |
0d4ffea44c7e011ba80559f1b98dc0276f423393 | 44 | md | Markdown | README.md | Gab2110/Fundamentos-de-programaci-n-Gabriel-Quispe | c1606479a16f7ab4f9199763b048c0b4d5c82982 | [
"Apache-2.0"
] | null | null | null | README.md | Gab2110/Fundamentos-de-programaci-n-Gabriel-Quispe | c1606479a16f7ab4f9199763b048c0b4d5c82982 | [
"Apache-2.0"
] | null | null | null | README.md | Gab2110/Fundamentos-de-programaci-n-Gabriel-Quispe | c1606479a16f7ab4f9199763b048c0b4d5c82982 | [
"Apache-2.0"
] | null | null | null | # Fundamentos-de-programaci-n-Gabriel-Quispe | 44 | 44 | 0.840909 | por_Latn | 0.237769 |
0d503733ca8842c1098d8fb37c3d1b417b826962 | 12,008 | md | Markdown | _pages/privacy.md | forestobservatory/forestobservatory.github.io | a5ce811f1c49f2227094a7f87cba642d9ae2aaaf | [
"Apache-2.0"
] | null | null | null | _pages/privacy.md | forestobservatory/forestobservatory.github.io | a5ce811f1c49f2227094a7f87cba642d9ae2aaaf | [
"Apache-2.0"
] | null | null | null | _pages/privacy.md | forestobservatory/forestobservatory.github.io | a5ce811f1c49f2227094a7f87cba642d9ae2aaaf | [
"Apache-2.0"
] | null | null | null | ---
title: Privacy Policy
hero-image: hero-strix.jpg
permalink: privacy
---
## Privacy Policy
Effective date: September 8th, 2020
The California Forest Observatory knows you care about how your personal information is used and shared,
and we take your privacy seriously. Please read the following to learn more about our Privacy Policy.
**By using or accessing the California Forest Observatory or any Services in any manner, you acknowledge
that you accept the practices and policies outlined in this Privacy Policy, and you hereby consent that
we may collect, use, and share your information in the following ways**.
Remember that your use of the California Forest Observatory and the Services is at all times subject to
the [Terms of Use]({{ site.baseurl }}{% link _pages/terms-of-use.md %}), which incorporates this Privacy Policy. Any terms we use in this Policy
without defining them have the definitions given to them in the [Terms of Use]({{ site.baseurl }}{% link _pages/terms-of-use.md %}).
### What does this Privacy Policy cover?
This Privacy Policy covers our treatment of personally identifiable information ("Personal Information")
that we gather when you are accessing or using our Services, but not to the practices of companies we don’t
own or control, or people that we don’t manage. We gather various types of Personal Information from our
users, as explained in more detail below, and we use this Personal Information internally in connection with
our Services, including to personalize, provide, and improve our services, to allow you to set up a user account
and profile, to contact you and allow other users to contact you, to fulfill your requests for certain products
and services, and to analyze how you use the Services. In certain cases, we may also share some Personal
Information with third parties, but only as described below.
As noted in the [Terms of Use]({{ site.baseurl }}{% link _pages/terms-of-use.md %}), we do not knowingly collect or solicit personal information from anyone
under the age of 13. If you are under 13, please do not attempt to register for the Services or send any personal information
about yourself to us. If we learn that we have collected personal information from a child under age 13, we will
delete that information as quickly as possible. If you believe that a child under 13 may have provided us personal
information, please contact us at [[email protected]](mailto:[email protected]).
### Will we ever change this Privacy Policy?
We’re constantly trying to improve our Services, so we may need to change this Privacy Policy from time to time as well,
but we will alert you to changes by placing a notice on the Services, by sending you an email, and/or by some other means.
Please note that if you’ve opted not to receive legal notice emails from us (or you haven’t provided us with your email
address), those legal notices will still govern your use of the Services, and you are still responsible for reading and
understanding them. If you use the Services after any changes to the Privacy Policy have been posted, that means you agree
to all of the changes. Use of information we collect now is subject to the Privacy Policy in effect at the time such
information is collected.
### What information do we collect?
<u>Information You Provide to Us</u>
We receive and store any information you knowingly provide to us. For example, through the registration process and/or
through your account settings, we may collect Personal Information such as your name, email address, phone number,
and/or place of employment. Certain information may be required to register with us or to take advantage of some of our features.
We may communicate with you if you’ve provided us the means to do so. For example, if you’ve given us your email address,
we may email you about your use of the Services. Also, we may receive a confirmation when you open an email from us.
This confirmation helps us make our communications with you more interesting and improve our services. If you do not
want to receive communications from us, please indicate your preference by updating your account preferences or emailing
us directly at [[email protected]](mailto:[email protected]).
<u>Information Collected Automatically</u>
Whenever you interact with our Services, we automatically receive and record information on our server logs from your
browser or device, which may include your IP address, geolocation data, device identification, “cookie” information,
the type of browser and/or device you’re using to access our Services, and the page or feature you requested. “Cookies”
are identifiers we transfer to your browser or device that allow us to recognize your browser or device and tell us how
and when pages and features in our Services are visited and by how many people. You may be able to change the preferences
on your browser or device to prevent or limit your device’s acceptance of cookies, but this may prevent you from taking
advantage of some of our features. If you click on a link to a third party website or service, such third party may also
transmit cookies to you. Again, this Privacy Policy does not cover the use of cookies by any third parties, and we aren’t
responsible for their privacy policies and practices. Please be aware that cookies placed by third parties may continue to
track your activities online even after you have left our Services, and those third parties may not honor “Do Not Track”
requests you have set using your browser or device.
We may use this data to improve the Services - for example, this data can tell us how often users use a particular feature
of the Services, and we can use that knowledge to make the Services interesting to as many users as possible. We may also
use it to customize content or communications for you that we think you might like, based on your usage patterns.
<u>Information Collected From Other Websites and Do Not Track Policy</u>
Through cookies we or our service providers (such as Google Analytics) place on your browser or device, we may collect
information about your online activity after you leave our Services. Just like any other usage information we collect,
this information allows us to improve the Services and customize your online experience, and otherwise as described in
this Privacy Policy. Your browser may offer you a “Do Not Track” option, which allows you to signal to operators of
websites and web applications and services (including behavioral advertising services) that you do not wish such operators
to track certain of your online activities over time and across different websites. Our Services do not support Do Not
Track requests at this time, which means that we may collect information about your online activity both while you are using
the Services and after you leave our Services.
### Will we share any of the Personal Information we collect?
We may share your Personal Information with third parties as described below:
**Information that’s been de-identified**: We may de-identify your Personal Information so that you are not identified
as an individual, and provide that information to our partners and service providers. However, we never disclose aggregate
usage or de-identified information to a partner (or allow a partner to collect such information) in a manner that would
identify you as an individual.
**Affiliated Businesses**: In certain situations, businesses or third party websites we’re affiliated with may provide
products or services to you through or in connection with the Services (either alone or jointly with us). You can recognize
when an affiliated business is associated with such a transaction or service, and we will share your Personal Information
with that affiliated business only to the extent that it is related to such transaction or service. We have no control over
the policies and practices of third party websites or businesses as to privacy or anything else, so if you choose to take part
in any transaction or service relating to an affiliated website or business, please review all such business’ or websites’ policies.
**Our Agents**: We employ other companies and people to perform tasks on our behalf and need to share your information with
them to provide products or services to you. Unless we tell you differently, our agents do not have any right to use the
Personal Information we share with them beyond what is necessary to assist us.
**User Profiles and Submissions**: If you use the user forum functionality (currently offered through Google Groups, referred
to herein as the “Forum”) the following applies to your use of that Forum (other terms and conditions from Google may also apply).
Certain user profile information, including your name, location, and any video or image content that such user has uploaded to the
Forum, may be displayed to other users to facilitate user interaction within the Forum or address your request for our services.
Please remember that any content you upload to your public user profile, along with any Personal Information or content that you
voluntarily disclose online in a manner other users can view (on discussion boards, in messages and chat areas, etc.) becomes publicly
available, and can be collected and used by anyone. Your user name may also be displayed to other users if and when you send messages
or comments or upload images or videos through the Forum and other users can contact you through messages and comments.
**Protection of Salo and Others**: We reserve the right to access, read, preserve, and disclose any information that we reasonably
believe is necessary to comply with law or court order; enforce or apply our Terms and other agreements; or protect the rights,
property, or safety of Salo, our employees, our users, or others.
### Is Personal Information about me secure?
Your account is protected by a password for your privacy and security. You must prevent unauthorized access to your account and
Personal Information by selecting and protecting your password appropriately and limiting access to your computer or device and
browser by signing off after you have finished accessing your account.
We endeavor to protect the privacy of your account and other Personal Information we hold in our records, but unfortunately,
we cannot guarantee complete security. Unauthorized entry or use, hardware or software failure, and other factors, may compromise
the security of user information at any time.
### What Personal Information can I access?
Through the functionality of the Services, you may access, and, in some cases, edit or delete the following information you’ve
provided to us:
- e-mail address
- password
The information you can view, update, and delete may change as the Services change. If you have any questions about viewing or
updating information we have on file about you, please contact us at [[email protected]](mailto:[email protected]).
### What choices do I have?
You can always opt not to disclose information to us, but keep in mind some information may be needed to register with us or
to take advantage of some of our features.
You may be able to add, update, or delete information as explained above. When you update information, however, we may maintain
a copy of the unrevised information in our records. Some information may remain in our records after your deletion of such
information from your account. We may use any aggregated data derived from or incorporating your Personal Information after
you update or delete it, but not in a manner that would identify you personally.
### What if I have questions about this policy?
If you have any questions or concerns regarding our privacy policies, please send us a detailed message to
[[email protected]](mailto:[email protected]), and we will try to resolve your concerns.
**Thank you for reading**.
[](https://github.com/DevOps2021-gb/devops2021/actions/workflows/maven.yml)
[](https://github.com/DevOps2021-gb/devops2021/actions/workflows/sonarcloud.yml)
# devops2021
Group B
# Running locally
### Running only website
Simply run the Java project like any other project.
### Running website
Either run `./run_local.sh` or the commands:

    sudo chmod +x setup_elk.sh
    source setup_elk.sh
    docker-compose -f docker-compose-local.yml up --build
### Closing docker containers
Run the commands:

    docker-compose down -v --rmi 'all' --remove-orphans
    docker_clean.sh
### Running simulator
Either run `run_sim.sh` or `python3 minitwit_simulator.py http://localhost:4567`.
# General
## Invoke-Speak
* Author: Marc Dekeyser
* DESCR: Makes powershell speak
* PARAM: Message
* PARAM: Voice
## Get-ElapsedTime
* Author: Marc Dekeyser
* DESCR: Calculates a time interval between two DateTime Objects
* PARAM: Start
* PARAM: End
## Invoke-Countdown
* Author: Marc Dekeyser
* DESCR: Counts down to point in time
* PARAM: Timer (Optional, switch statement)
* PARAM: Days (Optional, specify days to count down to)
* PARAM: Hours (Optional, specify hours to count down to)
* PARAM: Minutes (Optional, specify minutes to count down to)
* PARAM: Seconds (Optional, specify seconds to count down to)
* PARAM: Format (Optional, switch statement)
* PARAM: ProgressBackGround (Optional, specify color or transparent for the background of the progress bar)
* PARAM: ProgressForeGround (Optional, specify color for the text of the progress bar)
* PARAM: RandomFormat (Optional, random format of the progress bar)
* PARAM: RandomMessage (Optional, random message displayed in the progress bar)
# EPG-X
An Extended Phase Graph (EPG) approach for modelling of MRI sequences for systems with Magnetization Transfer or Exchange
The EPG algorithm is extended to coupled exchanging systems that are:
1. governed by the Bloch-McConnell equations (BM) or
2. described by the 'binary spin bath' model for pulsed magnetization transfer (MT).
The theory is described in [**this paper**](http://onlinelibrary.wiley.com/doi/10.1002/mrm.27040/full) (follow the link, it's open access). Essentially a two compartment system is modelled by describing each compartment with a separate EPG calculation. The operators governing evolution periods between RF pulses are updated to include exchange between compartments. For the MT case the second compartment consists only of longitudinal states. Although only two compartment systems are handled by this code, the method is in principle straightforward to extend.
<img src="bin/diag.png" alt="diagram" width="70%">
This code is distributed under the MIT license. If you find it useful please cite the publication [Malik et al, MRM 2017. doi:10.1002/mrm.27040](http://onlinelibrary.wiley.com/doi/10.1002/mrm.27040/full).
You can also cite the code itself [](https://zenodo.org/badge/latestdoi/99567997).
Shaihan Malik, King's College London, July 2017. (updated October 2017)
[@shaihanmalik](https://twitter.com/shaihanmalik)
## Example calculations
The EPG-X source code in this repository is completely general and can be adapted for modelling of a wide range of MR sequences. Functions have explicitly been written to simulate two commonly modeled sequence types: rapid gradient echo (including SPGR and bSSFP) and turbo spin echo (see descriptions of functions, below)
Four separate example scripts are given in the top directory; these may be used for reproducing the four experiments presented in the paper.
* **Test 1** ( `test1_steady_state_GRE.m`)
Compares the steady-state found by EPG-X calculation with direct steady-state calculations for which solutions exist. Examples are given for SPGR and bSSFP sequences for water exchange and MT models.
- The transient phase of SPGR is considered and compared with isochromat simulations (code included)
- SPGR signal vs RF spoiling phase increment also included
- bSSFP including model with frequency offset for second compartment
* **Test 2** (`test2_multicomponent_CPMG.m`)
Simulates multiecho CPMG sequence for two compartment system coupled by intracompartmental exchange (follows Bloch-McConnell equations). This type of measurement is used for multicomponent T2 estimation - the simulation explores how exchange can influence the estimated parameters and also considers influence of frequency offset for second compartment
* **Test 3** (`test3a_transient_GRE.m` and `test3b_experimental_data.m`)
3a: Simulates balanced SSFP and SPGR sequences with variable flip angles following an inversion pulse, for a system with MT effects. This type of sequence has been proposed for use in Magnetic Resonance Fingerprinting (MRF) - the experiment explores the possible influence of MT on this method.
3b: Experimental (phantom) data using SPGR style sequence are fitted with the EPG-X model to determine MT parameters. Data are included in /bin
* **Test 4** ( `test4_multislice_TSE.m`)
Compares single slice and multi-slice TSE for a system with MT effects. In the multi-slice case the excitation of other slices creates off-resonant saturation of the bound pool magnetization in the local slice, leading to signal attenuation when compared with the single slice case.
Predictions are matched to an in-vivo experiment: experimental data are included in /bin
* **Additional example** ( `Additional_bSSFPX_CEST.m`)
*Not included in the paper.* [Zhang et al, 2017](https://www.ncbi.nlm.nih.gov/pubmed/28012297/) proposed using the bSSFP off-resonance profile to detect CEST effects, using a method called bSSFPX. This script reproduces Figure 4 from Zhang et al using values taken from the paper. This shows that the EPG-X method could also be used for further modeling of CEST contrast arising over sequences
## Detailed description of implementations
The EPG-X code is contained in subfolder `EPGX-src`. The code efficiently implements EPG and EPG-X using Matlab sparse matrices; this has been found to be very efficient. The current state is stored in a vector arranged as follows:
* EPG: `[F0 F0* Z0 F1 F-1* Z1 F2 F-2* Z2 ... ]^T`
* EPG-X(MT) *pulsed MT version*
`[F0A F0A* Z0A Z0B F1A F-1A* Z1A Z1B F2A F-2A* Z2A Z2B ... ]^T`
i.e. There are Z states for both compartments but F states only for compartment A (4 states per 'order')
* EPG-X(BM) *Full Bloch-McConnell version*
`[F0A F0A* Z0A F0B F0B* Z0B F1A F-1A* Z1A F1B F-1B* Z1B F2A F-2A* Z2A F2B F-2B* Z2B ... ]^T`
there are six states per EPG order
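For illustration, the ordering above can be expressed as a small indexing helper. This is a Python sketch for exposition only — the toolbox itself is MATLAB, and this helper (and its state-name labels) are not part of it:

```python
# EPG-X(BM) stores six states per EPG order k, in this order:
NAMES = ["F+A", "F-A*", "ZA", "F+B", "F-B*", "ZB"]
STATES_PER_ORDER = len(NAMES)  # 6

def state_index(k, name):
    """Row index of a given configuration state for EPG order k."""
    return STATES_PER_ORDER * k + NAMES.index(name)

# Order 0 starts at row 0, order 1 at row 6, order 2 at row 12, ...
print(state_index(0, "F+A"))  # 0
print(state_index(1, "ZB"))   # 11
```

The same scheme with four states per order applies to the EPG-X(MT) vector.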
For two compartment simulations compartment A is taken to be the larger one (for MT this is the free pool). There are six major simulation functions:
* `EPG_GRE.m`
Classic EPG simulation code for gradient echo sequences. Arguments:
- `theta` - series of flip angles (rad)
- `phi` - series of RF pulse phases (rad)
- `TR` - repetition time (ms)
- `T1` - T1 (ms)
- `T2` - T2 (ms)
Different flavours of sequence can be defined by setting `phi`: alternating 0,180 gives bSSFP; quadratic progression for SPGR. Function `RF_phase_cycle.m` can be used to set this.
* `EPGX_GRE_MT.m`
EPG-X(MT) GRE simulation. Same syntax as `EPG_GRE.m` with additional arguments:
- `B1SqrdTau`: the integrated square amplitude of each RF pulse, units uT^2 ms
- `f`: *smaller* pool fraction (for MT this is the bound pool)
- `ka`: Exchange rate from compartment A to B. Units s^-1
- `G`: Absorption line value at the frequency of interest; this does not necessarily have to be zero, it depends on the pulse being simulated. Units us (microseconds)
In addition the T1 argument now takes two values (compt. A and B)
* `EPGX_GRE_BM.m`
EPG-X(BM) GRE simulation. As above, but both T1 and T2 have two components, and the RF power and absorption line value are not needed. An optional `delta` argument can be used to specify a frequency offset for the second compartment. This could be used for simulation of myelin water (explored in the paper) or even CEST with larger offsets. Note that the effect of off-resonance on the RF flip angle is not considered (yet).
Signal returned is the sum of both compartments
* `EPG_TSE.m`
Classic EPG simulation code for TSE sequences. Arguments:
- `theta` - series of flip angles (rad); can be complex if phase modulation also needed
- `ESP` - echo spacing time (ms)
- `T1` - T1 (ms)
- `T2` - T2 (ms)
* `EPGX_TSE_MT.m`
EPG-X(MT) TSE simulation. Same syntax as `EPG_TSE.m` with additional `B1SqrdTau`, `f`, `ka` & `G` as defined above
* `EPGX_TSE_BM.m`
EPG-X(BM) TSE simulation. As above but T1 and T2 both have two compartments and RF saturation parameters are not needed.
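All six functions evolve coupled longitudinal states between pulses. As a rough illustration of the relaxation-exchange operator involved, here is a NumPy sketch of the standard Bloch-McConnell longitudinal matrix — not the toolbox's MATLAB code, and the detailed-balance relation used for the reverse rate is our assumption about the parameterization:

```python
import numpy as np

def longitudinal_matrix(R1a, R1b, ka, f):
    """Bloch-McConnell longitudinal evolution matrix for Z = [ZA, ZB].

    ka is the A->B exchange rate (s^-1); the reverse rate kb is assumed to
    follow detailed balance, ka * M0A = kb * M0B, with M0A = 1 - f, M0B = f.
    """
    kb = ka * (1 - f) / f
    return np.array([[-R1a - ka, kb],
                     [ka, -R1b - kb]])

L = longitudinal_matrix(R1a=1.0, R1b=4.0, ka=2.0, f=0.2)
print(L.shape)  # (2, 2)
```

Exponentiating this matrix over the evolution interval (together with the transverse terms) gives the inter-pulse operator applied to each EPG order.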
#### Implementation of shift operators
Shifts have been implemented using matrices that are defined by separate functions `EPG_shift_matrices.m`, `EPGX_MT_shift_matrices.m` and `EPGX_BM_shift_matrices.m`. They increase the index (k-value) of each F state and map Z states to themselves. They are efficiently defined using matlab sparse matrices.
#### 'kmax' variable
All functions have an optional variable `kmax`. This sets the maximum order of EPG that is included in the calculation. Reducing will lead to smaller matrices and hence faster operation. The maximum k-value is also varied throughout the sequence in order to exclude states that will not contribute to the signal. See the appendix to [**this paper**](http://dx.doi.org/10.1002/mrm.24153) and supporting information to [**this one**](http://dx.doi.org/10.1002/mrm.25192) for more detail.
#### Diffusion
Diffusion effects are easily integrated into the EPG framework - see [this paper by Weigel et al](http://dx.doi.org/10.1016/j.jmr.2010.05.011) for detailed information. These are efficiently implemented in the code by combining the diffusion, relaxation (& exchange) and shift operators. The functions `Xi_diff_BM` and `Xi_diff_MT` are coded efficiently to do this. The code is hard to read but effectively creates band diagonal matrices directly using matlab `spdiags` function. For the 'classic' EPG code, the function `E_diff` combines diffusion and relaxation in the same way - this is simpler as the matrix is diagonal.
The code requires a structure 'd' that contains the diffusion coefficient, gradient strengths and durations:
    d = struct;
    d.D = 2.3e-9; %<- diffusion coeff in m^2/s
    d.G = [-5.9 4.5 12]; % Gradient values, mT/m
    d.tau = [1.4 6.6 3]; % Duration of each segment, ms
The gradient and duration values summarise the gradient activity during each TR period. This can be a single value/duration pair, or could in principle represent shaped gradients by containing a list of hundreds of temporal samples. We have used three values for the gradient pre-winder, readout, and spoiler.
#### Prep pulses
The gradient echo simulation functions `EPG_GRE`, `EPGX_GRE_BM` and `EPGX_GRE_MT` all have the optional variable 'prep'. This can be used to add an inversion (or other) pre-pulse prior to the GRE segment. Note that the pulse is assumed to be well spoiled - only the effect on longitudinal magnetization is considered. The argument will be a structure 'prep' with the following fields:
    prep = struct;
    prep.flip=pi; % flip angle, radians
    prep.t_delay=0; % delay until start of GRE, ms
    prep.B1SqrdTau=433; % Only needed for MT version - RF energy in prep pulse, units uT^2 ms
#### 'zinit'
Functions `EPG_TSE` and `EPGX_TSE_MT` have additional optional variable `zinit`. This can be used to specify the starting longitudinal magnetization Z0, which would otherwise be M0, or [M0a M0b]. This is used for test 4 - multi-slice TSE - which chains multiple simulations together. Only the Z0 states are assumed to be carried over between runs.
<h1 align="center">
<img src="https://img.shields.io/static/v1?label=pyPaintWall%20POR&message=MAYCON%20BATESTIN&color=7159c1&style=flat-square&logo=ghost"/>
</h1>
<h3> <p align="center">pyPaintWall</p> </h3>
<h3> <p align="center"> ================= </p> </h3>
>> <h3> Resume </h3>
<p> This library takes the dimensions of a wall (width and height) and immediately returns the liters of paint needed to paint it, assuming that each liter of paint covers an area of 2 square meters. Very useful at home!</p>
>> <h3> How it Works </h3>
<p> install </p>
```
pip install pyPaintWall
```
<p> on script </p>
```
from pyPaintWall import *
pyPaintWall(width, height)
```
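The arithmetic behind the call is simple enough to sketch independently. This is illustrative Python, not the library's actual source, and the helper name is made up:

```python
def liters_needed(width_m, height_m):
    """Paint needed for a wall, assuming 1 liter covers 2 square meters."""
    area = width_m * height_m  # wall area in square meters
    return area / 2.0          # liters, at 2 m^2 per liter

print(liters_needed(4.0, 3.0))  # a 12 m^2 wall needs 6.0 liters
```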
# Langevin.jl
<!--[](https://travis-ci.com/vavrines/Langevin.jl)-->

[](https://github.com/vavrines/Langevin.jl)
This package serves as a lightweight extension of [Kinetic.jl](https://github.com/vavrines/Kinetic.jl) for the study of uncertainty quantification.
AfricasTalkingGateway-Python
============================
A Python module for communicating with the AfricasTalking API
[Click here to read the full
documentation.](https://www.africastalking.com/tutorials/sendsms/python "Africastalking
Python library documentation")
## Installation
Install from PyPi using [pip](http://www.pip-installer.org/en/latest/), a
package manager for Python.
    pip install africastalking
Or, you can [download the source code
(ZIP)](https://github.com/twilio/twilio-python/zipball/master "AfricasTalkingGateway-python
source code") for `AfricasTalkingGateway-python`, and then run:
    python setup.py install
You may need to run the above commands with `sudo`.
## Getting Started
Getting started with the AfricasTalkingGateway API couldn't be easier. Create a
`AfricasTalkingGateway` and you're ready to go.
### API Credentials
The `AfricasTalkingGateway` needs your credentials. You can either pass these
directly to the constructor (see the code below) or via environment variables.
```python
from AfricasTalkingGateway import AfricasTalkingGateway, AfricasTalkingGatewayException
username = "MyAfricasTalking_Username";
apikey = "MyAfricasTalking_APIKey";
gateway = AfricasTalkingGateway(username, apikey)
```
We suggest storing your credentials as environment variables. Why? You'll never
have to worry about committing your credentials and accidentally posting them
somewhere public.
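For example, one way to do that with the standard library — the environment-variable names here are our own invention, not names the gateway requires:

```python
import os

# Read credentials from the environment instead of hard-coding them.
# AT_USERNAME / AT_APIKEY are illustrative names; pick any you like.
username = os.environ.get("AT_USERNAME", "MyAfricasTalking_Username")
apikey = os.environ.get("AT_APIKEY", "MyAfricasTalking_APIKey")

# Then construct the gateway exactly as above:
# gateway = AfricasTalkingGateway(username, apikey)
```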
### Send an SMS
```python
# Import the helper gateway class
from AfricasTalkingGateway import AfricasTalkingGateway, AfricasTalkingGatewayException
# Specify your login credentials
username = "MyAfricasTalking_Username";
apikey = "MyAfricasTalking_APIKey";
# Specify the numbers that you want to send to in a comma-separated list
# Please ensure you include the country code (+254 for Kenya in this case)
to = "+254711XXXYYYZZZ,+254733XXXYYYZZZ";
# And of course we want our recipients to know what we really do
message = "I'm a lumberjack and it's ok, I sleep all night and I work all day"
# Create a new instance of our awesome gateway class
gateway = AfricasTalkingGateway(username, apikey)
# Any gateway errors will be captured by our custom Exception class below,
# so wrap the call in a try-catch block
try:
    # That's it, hit send and we'll take care of the rest.
recipients = gateway.sendMessage(to, message)
for recipient in recipients:
# Note that only the Status "Success" means the message was sent
print 'number=%s;status=%s;messageId=%s;cost=%s' % (recipient['number'],
recipient['status'],
recipient['messageId'],
recipient['cost'])
except AfricasTalkingGatewayException, e:
print 'Encountered an error while sending: %s' % str(e)
```
### Digging Deeper
The full power of the AfricasTalkingGateway API is at your fingertips. The [full
documentation](https://www.africastalking.com/tutorials "AfricasTalking documentation") explains all the awesome features available to
use.
# hapttics Nearby Wireless Listener
hapttics Nearby Wireless Listener is a Node application that uses a Raspberry Pi + WiFi antenna in monitor mode to collect WiFi probes, with the aim of detecting the presence of wireless devices. Presence is detected with no active interaction from the surrounding devices, capturing details like MAC address, signal strength, date, and time, among others.
Although there are other similar projects on GitHub, this one is more comprehensive: it includes device management and near-real-time telemetry transmission to a cloud service.
An AngularJS version of OARS data
====
A pretty interface for OARS data.
Dependencies
====
- Foundation
- Angular
- angular-bindonce
Build
====
- Run npm install
- Run bower install
0.1.6 / 2016-05-20
==================
NPM 3 tries to install dependencies in a
[flat structure](https://docs.npmjs.com/how-npm-works/npm3) and therefore
references to those dependencies now need to use a dynamic lookup (e.g.
require.resolve()) instead of assuming a particular relative file path. - #37
0.1.5 / 2015-09-16
==================
Fixes a bug where the tool hangs when the user has a newer version of phantomjs in their path than the tool is compatible with. The tool now bundles its own version of phantomjs as a dependency instead of relying on the version of phantomjs installed in the developer's path. - #36
0.1.4 / 2014-10-16
==================
* Release 0.1.4
* Fix spawn on Windows - #33
* add mocha-phantomjs hook to write coverage stats to a file
* update mocha-phantomjs for custom reporter support
0.1.3 / 2014-01-24
==================
* give an error if no build/ dir present
* Updated misleading wording in travis/saucelabs docs (@jb55)
* component-test2 => component-test thanks to @ask11
* pass arguments through to mocha-phantomjs
0.1.2 / 2013-12-19
==================
* pin mocha cloud
* have errors pass through mochaResults
0.1.1 / 2013-12-19
==================
* remove extra deps
0.1.0 / 2013-12-18
==================
* add tunnel option to browser
* add debug statements
* listen for JS errors
* prevent IE from going into compatibility mode
* browser: choose which browser to open
* phantom: cleanup stream handling
* use the name property of component.json as the document’s title
0.0.1 / 2010-01-03
==================
* Initial release
# Contributing
Want to contribute to Slate? That would be awesome!
### Running Tests
To run the tests, you need to have the Slate repository cloned to your computer. After that, you need to `cd` into the directory where you cloned it, and install the dependencies from `npm`.
```
make install
```
And then build the source and run the tests:
```
make dist
make test
```
And to run the linter:
```
make lint
```
If you need to debug something, you can add a `debugger` line to the source, and then run `make test` with the `DEBUG=true` flag enabled.
To keep the source rebuilding on every file change, you need to run an additional watching command:
```
make watch-dist
```
### Running Examples
To run the examples, you need to have the Slate repository cloned to your computer. After that, you need to `cd` into the directory where you cloned it, and install the dependencies from `npm`.
```
make install
```
And then build the source and run the examples server:
```
make dist
make start-examples
```
If you want to edit the source while running the examples and have those changes immediately reflected, you need to run two additional watching commands in your terminal:
```
make watch-dist
```
```
make watch-examples
```
### Pull Requests
All pull requests are super welcomed and greatly appreciated! Easy issues are marked with an [`easy-one`](https://github.com/ianstormtaylor/slate/issues?q=is%3Aopen+is%3Aissue+label%3Aeasy-one) label if you're looking for a simple place to get familiar with the code base.
Please include tests and docs with every pull request!
### Browser Support
Slate aims to target all of the modern browsers, and eventually the modern mobile platforms. Right now browser support is limited to the latest versions of [Chrome](https://www.google.com/chrome/browser/desktop/), [Firefox](https://www.mozilla.org/en-US/firefox/new/), and [Safari](http://www.apple.com/safari/), but if you are interested in adding support for another modern platform, that is welcomed!
tsplitter [](http://godoc.org/github.com/plimble/tsplitter) [](http://gocover.io/github.com/plimble/tsplitter) [](https://travis-ci.org/plimble/tsplitter) [](http://goreportcard.com/report/plimble/tsplitter)
=========
Thai word breaker (word segmentation) written in Go
### Installation
`go get -u github.com/plimble/tsplitter`
### Example
##### Get all words
```go
package main

import (
	"fmt"
	"github.com/plimble/tsplitter"
)

func main() {
	dict := tsplitter.NewFileDict("dictionary.txt")
	txt := "ตัดคำไทย"
	words := tsplitter.Split(dict, txt)
	fmt.Println(words.All()) // ตัด, คำ, ไทย
}
```
##### Get deduplicated words
```go
package main

import (
	"fmt"
	"github.com/plimble/tsplitter"
)

func main() {
	dict := tsplitter.NewFileDict("dictionary.txt")
	txt := "ตัดคำไทย"
	words := tsplitter.Split(dict, txt)
	fmt.Println(words.Known())   // words found in the dictionary
	fmt.Println(words.Unknown()) // words not found in the dictionary
}
```
### Documentation
- [GoDoc](http://godoc.org/github.com/plimble/tsplitter)
### Contributing
If you'd like to help out with the project, you can put up a Pull Request.
# platform-base-app
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
```
## [template]
[template]: https://github.com/velkuns/codingame-template/compare/1.1.0...master
### Changed
### Added
### Removed
```
## [2.0.0] - 2022-04-15
[2.0.0]: https://github.com/velkuns/codingame-template/compare/1.0.1...2.0.0
### Changed
* Now compatible with PHP 7.3 to 8.1
* Fix phpdoc and some return types according to phpstan analysis
* Change the Default namespace
* Some minor refactoring in main Game.php
### Added
* phpstan for static analysis
* Makefile
* CI config
* GitHub workflow for CI
* Add new dependencies (core game, core utils & core compiler)
### Removed
* Old dependency (core)
## [1.0.1] - 2018-08-26
[1.0.1]: https://github.com/eureka-framework/component-orm/compare/1.0.0...1.0.1
### Changed
* Update links in README
## [1.0.0]
### Added
* Initial app template
---
layout: post
title:      "Git Explained"
subtitle:   "Version control, an essential tool for team collaboration"
date: 2016-04-24 12:00:00
author: "Wanglizhi"
header-img: "img/home-bg.jpg"
catalog: true
tags:
- Git
---
> Staying active on GitHub over the long run and contributing to open-source projects is the best state a programmer can be in when learning. It is hard and simple at the same time, and it is the goal I am striving for!
**One gotcha worth noting: the date specified in a Jekyll post is 8 hours behind the actual time**
**Overview**
Git is an open-source distributed version control system designed to handle version management efficiently and quickly for everything from very small to very large projects. Git is open-source version control software that Linus Torvalds developed to help manage Linux kernel development.
Before my junior year I always used Subversion, but by now the industry has largely settled on Git, and mastering Git has become an essential skill for developers.
**Common Git commands for reference (updated over time)**
```
git fetch
git merge origin/master
git mergetool // tool for resolving conflicts
git reset --hard HEAD~1 // roll back to the previous version
git reset --hard <commit> // roll back to a specific version
git remote set-url origin URL // change the remote URL locally
```
## Git Remote Operations in Detail
Git has many advantages, and one of them is how simple its remote operations are. The following five Git commands are covered in detail below:
* git clone
* git remote
* git fetch
* git pull
* git push

#### git clone
git clone accepts repository addresses over a variety of protocols:
```
$ git clone <repository-url>
$ git clone http[s]://example.com/path/to/repo.git/
$ git clone ssh://example.com/path/to/repo.git/
$ git clone git://example.com/path/to/repo.git/
$ git clone /opt/git/project.git
$ git clone file:///opt/git/project.git
$ git clone ftp[s]://example.com/path/to/repo.git/
$ git clone rsync://example.com/path/to/repo.git/
```
Generally speaking, the Git protocol gives the fastest downloads, while the SSH protocol is used when user authentication is required.
#### git remote
Git requires every remote host to be given a host name (remote name). The `git remote` command is used to manage these names.
```
$ git remote
origin
$ git remote -v
origin [email protected]:jquery/jquery.git (fetch)
origin [email protected]:jquery/jquery.git (push)
```
When you clone a repository, Git automatically names the remote host `origin`. If you want a different name, specify it with the `-o` option of the `git clone` command.
```
$ git clone -o jQuery https://github.com/jquery/jquery.git
$ git remote
jQuery
```
Commands for deleting, adding, and renaming remote hosts:
```
$ git remote show <remote>
$ git remote add <remote> <url>
$ git remote rm <remote>
$ git remote rename <old-name> <new-name>
```
#### git fetch
Fetches updates from a remote repository into the local one:
```
$ git fetch <remote>
// By default, updates to all branches are fetched; a specific branch can also be given
$ git fetch <remote> <branch>
```
On the local machine, a remote branch is referred to in the form "remote/branch". For example, the `master` branch of the `origin` remote is read as `origin/master`.
The `-r` option of the `git branch` command lists remote-tracking branches, and the `-a` option lists all branches.
```
$ git branch -r
origin/master
$ git branch -a
* master
remotes/origin/master
```
After fetching updates from the remote host, you can create a new branch based on them with the `git checkout` command:
```
$ git checkout -b newBranch origin/master
```
You can also merge a remote branch into the current local branch with the `git merge` or `git rebase` command:
```
$ git merge origin/master
# or
$ git rebase origin/master
```
#### git pull
The `git pull` command fetches updates from a branch of a remote repository and merges them into a specified local branch. Its full form is slightly complicated:
```
$ git pull <remote> <remote-branch>:<local-branch>
```
To merge in rebase mode instead, use the `--rebase` option:
```
$ git pull --rebase <remote> <remote-branch>:<local-branch>
```
If a branch has been deleted on the remote host, by default `git pull` will not delete the corresponding local branch when it pulls. This prevents `git pull` from silently deleting local branches because of someone else's operations on the remote.
You can change this behavior, however: with the `-p` flag, branches that have been deleted on the remote are deleted locally as well.
```
$ git pull -p
# equivalent to the commands below
$ git fetch --prune origin
$ git fetch -p
```
#### git push
The `git push` command pushes updates from a local branch to the remote host.
```
$ git push <remote> <local-branch>:<remote-branch>
$ git push origin master
// Pushes the local master branch to the master branch of the origin remote. If the latter does not exist, it is created.
```
If the local branch name is omitted, the specified remote branch is deleted, because this is equivalent to pushing an empty local branch to it:
```
$ git push origin :master
# equivalent to
$ git push origin --delete master
```
If the version on the remote host is newer than the local one, Git reports an error on push and asks you to run `git pull` locally first to merge the differences before pushing again. If you really must push anyway, you can use the `--force` option:
```
$ git push --force origin
```
The command above, with the `--force` option, causes the newer version on the remote host to be overwritten. Unless you are certain you want this, you should avoid using the `--force` option.
**Reference:** [Git remote operations in detail](http://www.ruanyifeng.com/blog/2014/06/git_remote.html)
## A Standard Git Workflow
In team development, it is very important to follow a sensible and clear Git workflow.
Below is the Git usage workflow from [ThoughtBot](https://github.com/thoughtbot/guides/tree/master/protocol/git).

#### Step 1: Create a branch
```
# Get the latest code on the main branch
$ git checkout master
$ git pull
# Create a development branch named myfeature
$ git checkout -b myfeature
```
#### Step 2: Commit on the branch
```
$ git add --all  // same as git add .
$ git status
$ git commit --verbose
```
The `verbose` parameter of the `git commit` command lists the result of the [diff](http://www.ruanyifeng.com/blog/2012/08/how_to_read_diff.html).
#### Step 3: Write the commit message
When committing, you must provide a complete yet concise commit message.
The first line is a summary of no more than 50 characters, followed by a blank line and then a list of the reasons for the change, the main changes, and any issues to be aware of. Finally, include the relevant URL (such as a bug ticket).
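A commit message in that shape might look like the following (an illustrative example only, not a real commit):
```
Fix timezone offset in post dates

Jekyll was parsing post dates as UTC, so dates rendered on the site
were 8 hours behind local time. Set the timezone option explicitly.

Ticket: https://example.com/issues/123
```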
#### Step 4: Sync with the main branch
```
$ git fetch origin
$ git rebase origin/master
```
git rebase is used to merge the changes from one branch onto the current branch; the old commits are discarded. If the garbage collection command is run (pruning garbage collection), those discarded commits are deleted.

#### Step 5: Squash the commits
Once development on the branch is finished, there will likely be a pile of commits, but when merging into the main branch you usually want only one (or at most two or three). This is not only cleaner but also easier to manage.
```
$ git rebase -i origin/master
```
The `-i` parameter of the `git rebase` command stands for interactive; Git opens an interactive interface in which to carry out the next step.
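When the interactive interface opens, Git shows a todo list along these lines (the hashes below are made up). Changing `pick` to `squash` melds a commit into the one before it:
```
pick 1fc6c95 Add user settings page
squash 6b2481b Fix typo on settings page
squash dd1475d Address review comments
```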
#### Step 6: Push to the remote repository
```
$ git push --force origin myfeature
```
The `--force` parameter is needed on `git push` because the branch history has changed after the rebase and is not necessarily compatible with the remote branch, so a forced push may be required (**use force with caution in real-world development**).
#### Step 7: Open a Pull Request
After pushing to the remote repository, you can open a Pull Request against the master branch and ask others to review the code and confirm that it can be merged into master.
#### git merge vs. git rebase (history)

With git merge, the commit order you see (newest to oldest) is: C7, C6, C4, C5, C3, C2, C1
With git rebase, the commit order you see (newest to oldest) is: C7, C6', C5', C4, C3, C2, C1
**References:** [A standard Git workflow](http://www.ruanyifeng.com/blog/2015/08/git-use-process.html)
[Introduction to git rebase (basics)](http://blog.csdn.net/hudashi/article/details/7664631)
# Lab 08 Report - Christine Koulopoulos
### Example 0

### Example 1

### Example 2

### Example 3

### Example 4



| 15.833333 | 40 | 0.663158 | kor_Hang | 0.697503 |
# Project_2: Springular Books
# Project Description
This project is an e-commerce IT bookstore that uses the IT Bookstore API found at https://api.itbook.store/1.0/.
# Technologies Used
<h3>Front-end</h3>
<ul>
<li> Angular </li>
<li> Typescript </li>
<li> HTML</li>
<li> CSS </li>
<li> Javascript </li>
<li> Bootstrap </li>
</ul>
<h3>Back-end</h3>
<ul>
<li> Java 16</li>
<li> Spring Boot</li>
<li> Spring Data JPA</li>
<li> Lombok </li>
<li> Tomcat </li>
<li> Slf4j </li>
<li> REST Api </li>
</ul>
<h3> Database </h3>
<ul>
<li> MySQL </li>
</ul>
<h3> Design Pattern </h3>
<ul>
<li>Spring MVC</li>
</ul>
<h3> Version Control </h3>
<ul>
<li> Git </li>
<li> Github </li>
</ul>
# Features
<ul>
<li> User Login- a user can log into the website, their information is then stored in local storage and is accessible throughout the application</li>
<li> Title, Author, ISBN Search - a user can search through books by title, author, isbn, or partial isbn</li>
<li> Category Sort- a user can select a category and see given results based on the category selected</li>
<li> Multiple Pages of Results- a user can view multiple pages of results and will be navigated to the next page when hitting the forward or back arrows at the bottom of the page</li>
<li> Detailed Page- a user can click more details and will be navigated to a page specific to a book with more details about the individual book such as publisher, price, description, etc.</li>
<li> Add to Cart- a user can add multiple books to their cart and the number of items in the cart will update as more items are added</li>
<li> View Cart- a user can navigate to the cart where they will see the current items in their cart along with the total price of the items plus tax and the amount of tax</li>
<li>Remove Items from Cart- a user is able to remove an individual item from their cart by pressing the "remove" button next to the individual item or a user can clear their entire cart by clicking the "clear all" button below the checkout</li>
<li>Checkout- a user can click "checkout" on the cart page and if logged in, the order information will be stored in a database and the user will be redirected to a page saying that their order is confirmed</li>
<li>New Page- here a user can view a page of new books</li>
<li>Contact, About Us, Social Media- static pages which provide more information to the user</li>
</ul>
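The login feature above (user information kept in local storage and read back anywhere in the app) can be sketched roughly as follows. This is hypothetical illustration code, not the project's actual source; a `Map` stands in for `window.localStorage` so the sketch runs outside a browser:

```typescript
interface User {
    name: string;
    email: string;
}

// Stand-in for window.localStorage in this sketch
const storage = new Map<string, string>();

// Persist the logged-in user so any page/component can read it back
function saveUser(user: User): void {
    storage.set("user", JSON.stringify(user));
}

// Read the current user back, or null if nobody is logged in
function loadUser(): User | null {
    const raw = storage.get("user");
    return raw ? (JSON.parse(raw) as User) : null;
}

saveUser({ name: "Ada", email: "[email protected]" });
console.log(loadUser()?.name); // "Ada"
```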
# Getting Started
# Usage
<ul>
<li> Create a preview of an IT Book Store </li>
</ul>
# Contributors
[Syeda Sultana](https://github.com/sulsyeda03/)
[Mohamad Elshalati](https://github.com/mohamadelshalati)
[Simon Irvin-Vitela](https://github.com/simonirvinvitela730)
[Christy Catlin](https://github.com/CCatlin28)
Francis Crick, who discovered the structure of the DNA molecule, earned a bachelor's degree in physics and did not take up biology until he was 30.
Francis was very fond of the Children's Encyclopedia his parents bought for him.
F was afraid that by the time he grew up, everything would already have been discovered.
It seemed to Francis that once he had grasped a concept, he understood it completely and deeply.
The whole family adored tennis.
Around the age of 12 he gradually became an atheist.
F did not like mathematics for its own sake. He could appreciate the elegance of small proofs.
The physics F was taught at school was somewhat outdated, as was the mathematics.
After finishing his bachelor's degree, he began working on the very dull problem of measuring the viscosity of water at various temperatures.
F worked in various laboratories. The apparatus he had built for measuring the viscosity of water was destroyed by a bomb.
# 2. The Gossip Test
During WWII he worked on developing non-contact (influence) mines. After the war he did not immediately know what to do. For many scientists it is very hard to change fields after 30, because so much effort has already been invested. F knew a little physics and mathematics, and that was an advantage.
However, because of his age, F could not spend a couple of years somewhere and then change his choice. His choice had to be final.
What you gossip about is what interests you the most.
F chose between molecular biology and neurobiology. He very much wanted to contribute either to the study of the difference between the living and the non-living or to the understanding of consciousness.
# 3 The Baffling Problem
Organisms are built in a very complex yet harmonious way. People used to think this was evidence in favor of intelligent creation.
Features do not appear all at once, but as the result of a sequence of improvements.
Why people find it hard to accept the idea of natural selection:
1. The process is extremely slow
2. How can the randomness of mutations produce highly non-random forms
Enzymes are catalysts of internal chemical reactions. Discovered in 1897. The experiment: crush a heap of yeast under a press and see whether the reactions that ran in the cells still run in the extract.
Each enzyme is highly specific and catalyzes one particular reaction.
It turned out that an enzyme is a macromolecule belonging to the group of proteins. That enzymes are proteins was proved by a man with one arm.
There is a study which showed that a mutation leads to a change in a single enzyme. Hence, "one gene - one enzyme".
Proteins are heteropolymers. A protein can be compared to a text written in a 20-letter alphabet.
If a protein is heated, it denatures and stops catalyzing reactions. So a protein's function depends on its three-dimensional configuration.
In the forties it was discovered that DNA molecules are not short.
Among scientists in general it was unclear how traits were inherited and what genes were. Before the answer appeared, it seemed very far from obvious.
# Rocking the Boat
At the Cavendish Laboratory, J. J. Thomson discovered the electron. Rutherford worked there too. James Chadwick discovered the neutron there. Lawrence Bragg also worked there; he received the Nobel Prize at 25, together with his father.
Max Perutz.
Bragg worked on X-ray crystallography.
Hemoglobin forms a structure of four subunits.
By directing X-rays one can obtain the distribution of the electron clouds. One can obtain the Fourier intensities, but not the phases.
When you comment on someone else's scientific work, it is better to praise first and only then voice your criticisms.
In F's opinion, all the crystallographic methods that existed at the time were unsound.
F devised a method of substituting heavier atoms for some of the light atoms in a molecule while preserving its three-dimensional structure.
The head of the laboratory was unfriendly toward F because F criticized other people's work a lot.
# The alpha helix
Lawrence Bragg was a great scientist, but once a week he worked as a gardener, because he enjoyed it. He also never lost his enthusiasm in the course of research.
Glycine's side chain is a hydrogen atom; alanine's is CH_3. Some amino acids carry a positive charge, some a negative one, and some none at all.
Linus Pauling was working on similar things at the time. He was the first to use quantum physics in chemistry.
The alpha helix has 3.6 residues per turn.
Pauling was of enormous importance in chemistry.
Molecular biology is the heart of biology as a whole.
# How to live with a golden helix
Jim Watson had very similar ambitions. He was, moreover, an expert on phages. When they met, they talked a lot and often, and were given a separate room for it. Jim already had a PhD, while Francis was still a graduate student.
*chance favors the prepared mind*
what is typical of scientific discoveries
- poor-quality data
- wrong hypotheses
- difficulties in the relationships between people
much the same happened during the discovery of the triple-helix structure of collagen
Rosalind Franklin worked there as well.
It took 25 years to go from cautious conjectures to their becoming generally accepted.
F and J were young and, according to the critics, did not possess a sufficient mass of knowledge to make the discovery.
F and J chose the right problem and gripped it hard enough. Almost no one else was working on the chemical structure of genes at the same time. F and J thought on a deeper theoretical level than their scientific colleagues.
F took pleasure in the whole process of the work.
There was a scientist named Gamow, who contributed to cosmology and also proposed a theory of how DNA functions.
At first, researchers assumed that the triplets overlap.
Another cosmologist, Fred Hoyle, proposed an idea related to coding.
Let's work out how DNA leads to the formation of a protein. An experiment can be done on lysozyme. Lysozyme can be found in egg white and in human tears. F wept in order to collect tears. Lysozyme has an antiseptic action. Moreover, F tried the tears of half a dozen people. The expected mutation did not turn up.
There was a study of sickle-cell anemia.
# 10 Theory in Molecular Biology
What is special about biology
- it studies systems that are the results of a process that lasted billions of years
- there are many similar specimens
- the laws of physics hold everywhere, while in biology the laws are statistical generalizations
A busy life is a wasted life
At 60 he moved into a new field: studying the brain.
There are parasitic DNAs that do nothing.
Behaviorists do not consider it necessary to model the internal states of animals. They believe that behavior is what must be studied.
Francis weighed which animals to use for experiments and settled on monkeys.
Neuroscience lies at the intersection of physiological brain research, AI development, and mathematics
two possible approaches
- divide the system into parts and study the parts
- look at how the parts interact
F believes that molecular psychology will soon appear, just as molecular biology once did.
# SHR Server Nonce
As an enhancement of Access Token Proof-of-Possession, MSAL Browser provides a way to insert a server-generated signed timestamp (a.k.a. **server nonce**) into a `Signed HTTP Request`, also known as a `PoP Token`. This server-generated nonce can be added to any token request that uses the `POP` authentication scheme.
Given that [MSAL does not cache](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/access-token-proof-of-possession.md#bound-access-token) `Signed HTTP Requests`, server nonces will **not** be cached either. This means that the server nonce must be passed into every `acquireTokenSilent` call in order for it to be added to the resulting `SignedHttpRequest` object.
Once the `Signed HTTP Request` is sent to the resource server as a `PoP Token`, the resource server is responsible for validating the signed payload as well as extracting and validating the `shrNonce`.
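For illustration, one check a resource server might perform on the extracted nonce is to decode its signed-timestamp (`ts`) claim and reject stale values. This is a hypothetical Node-style sketch (it uses `Buffer`); a real server must also verify the nonce's signature, which is omitted here:

```typescript
// Decode the Base64URL-encoded payload segment of a JWT-shaped server nonce.
function decodeNoncePayload(nonce: string): { ts: string } {
    const payloadB64 = nonce.split(".")[1];
    // Convert Base64URL to standard Base64, then decode to a JSON string
    const b64 = payloadB64.replace(/-/g, "+").replace(/_/g, "/");
    const json = Buffer.from(b64, "base64").toString("utf8");
    return JSON.parse(json);
}

// A nonce is considered fresh when its timestamp is within maxAgeSeconds of "now".
function isNonceFresh(nonce: string, nowSeconds: number, maxAgeSeconds: number): boolean {
    const ts = Number(decodeNoncePayload(nonce).ts);
    return nowSeconds - ts <= maxAgeSeconds;
}

// The sample nonce from this doc; its payload decodes to {"ts":"1625672529"}
const sample = "eyJhbGciOiJIUzI1NiIsImtpZCI6IktJRCIsInR5cCI6IkpXVCJ9.eyJ0cyI6IjE2MjU2NzI1MjkifQ.rA5ho63Lbdwo8eqZ_gUtQxY3HaseL0InIVwdgf7L_fc";
console.log(decodeNoncePayload(sample).ts); // "1625672529"
console.log(isNonceFresh(sample, 1625672529 + 60, 300)); // true
```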
## Usage
Once [acquired](#acquiring-a-server-nonce), server-generated nonces for SHRs can be passed into any token request object as `shrNonce`. It is important to note that the SHR server nonce is an extension of the Access Token Proof-of-Possession scheme, so it will only be used when the token request's `authenticationScheme` is set to `POP`.
### Acquire Token Request Example
```javascript
const popTokenRequest = {
scopes: ["User.Read"],
authenticationScheme: msal.AuthenticationScheme.POP,
resourceRequestMethod: "POST",
resourceRequestUri: "YOUR_RESOURCE_ENDPOINT",
shrNonce: "eyJhbGciOiJIUzI1NiIsImtpZCI6IktJRCIsInR5cCI6IkpXVCJ9.eyJ0cyI6IjE2MjU2NzI1MjkifQ.rA5ho63Lbdwo8eqZ_gUtQxY3HaseL0InIVwdgf7L_fc" // Sample Base64URL encoded server nonce value
}
```
Once the request has been configured and `POP` is set as the `authenticationScheme`, it can be passed to any loginXXX or acquireTokenXXX API:
```javascript
const response = await myMSALObj.acquireTokenSilent(popTokenRequest);
```
The response will contain a property called `accessToken`, which will contain the `Signed HTTP Request`. When verified using the public key, the SHR's JWT payload will look something like the following:
```javascript
{
at: ...,
ts: ...,
m: "POST",
u: "YOUR_RESOURCE_ENDPOINT",
nonce: "eyJhbGciOiJIUzI1NiIsImtpZCI6IktJRCIsInR5cCI6IkpXVCJ9.eyJ0cyI6IjE2MjU2NzI1MjkifQ.rA5ho63Lbdwo8eqZ_gUtQxY3HaseL0InIVwdgf7L_fc",
p: ...,
q: ...,
client_claims: "{\"nonce\": \"AQAA123456\",\"local_nonce\": \"AQAA7890\"}"
}
```
## SignedHttpRequest Nonce Attribute
The `shrNonce` value that can be configured inside the PoP token request will be assigned to the `nonce` attribute in the `SignedHttpRequest` returned in the `AuthenticationResult`. However, the `nonce` attribute in the SHR will still be populated even when it isn't manually configured through the `shrNonce` value. The following list describes the logic (in order of precedence) with which the `nonce` value is set on the `SignedHttpRequest`:
1. If `shrNonce` in auth request is not `null` or `undefined`, MSAL assigns that value to the `nonce` property in the SHR
2. If `shrNonce` is `null` or `undefined`, MSAL generates a random GUID string and assigns it to the `nonce` property in the SHR
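The precedence rules above can be sketched as follows. This is an illustration, not MSAL's actual source; `createGuid` is a stand-in helper for whatever GUID generator the library uses:

```typescript
// Stand-in GUID generator (illustrative only)
function createGuid(): string {
    return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, c => {
        const r = (Math.random() * 16) | 0;
        const v = c === "x" ? r : (r & 0x3) | 0x8;
        return v.toString(16);
    });
}

function resolveShrNonce(shrNonce?: string | null): string {
    // 1. A nonce supplied in the auth request wins
    if (shrNonce !== null && shrNonce !== undefined) {
        return shrNonce;
    }
    // 2. Otherwise fall back to a freshly generated random GUID
    return createGuid();
}

console.log(resolveShrNonce("server-nonce")); // "server-nonce"
console.log(resolveShrNonce().length);        // 36
```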
## Acquiring a server nonce
The method through which a client application initially acquires and then renews a server-generated nonce may be different depending on which resource server the client application is interacting with. The following is an overview of the server nonce acquisition and renewal flow for which MSAL Browser is optimized. Keep in mind that even if the nonce acquisition flow is different, the server nonce is still always added manually into the token request by the client application as described in the [Usage](#usage) section below.
### Acquiring initial nonce
1. The first step to acquire a server-generated nonce is to make an authorized request to the resource. In this authorized request, a `PoP Token` will be added to the `Authorization` header, but said `PoP Token` will not include a valid nonce.
```javascript
let shrNonce = null; // Globally scoped variable
// 1. Configure PoP Token Request without a valid SHR Nonce
const popTokenRequest = {
scopes: ["User.Read"],
authenticationScheme: msal.AuthenticationScheme.POP,
resourceRequestMethod: "POST",
    resourceRequestUri: "YOUR_RESOURCE_ENDPOINT",
    shrNonce: shrNonce // SHR Nonce is invalid as null string at this point
};
// Get PoP token to make authenticated request
const authResult = await publicClientApplication.acquireTokenSilent(popTokenRequest);
const shr = authResult.accessToken; // the Signed HTTP Request
// Set up PoP Resource request
const reqHeaders = new Headers();
const authorizationHeader = `PoP ${shr}`;
reqHeaders.append("Authorization", authorizationHeader);
const options = {
    method: popTokenRequest.resourceRequestMethod,
    headers: reqHeaders
};
```
2. Once the request has been set up and the `SHR` has been added to the Authorization header, the request is `POST`ed to the resource. At this point, without a valid server nonce in the `PoP Token`/`SHR`, the resource should respond with a `401 Unauthorized` HTTP error, which will have a `WWW-Authenticate` header containing the first valid server nonce as one of its challenges.
```typescript
// Make call to resource with SHR
return fetch(resourceEndpointData.endpoint, options)
    .then(response => {
        // At this point, the response will be a 401 Error, so ignore the success case for now.
        // Note: fetch does not reject on HTTP error statuses, so the 401 must be handled
        // here rather than in a .catch() block. The response will be a `401 Unauthorized`
        // error, containing a WWW-Authenticate header with an error message such as
        // "nonce_missing" or "nonce_malformed".
        // The correct way to handle this scenario is shown in the following step.
    });
```
3. In order to extract the server nonce from said `WWW-Authenticate` header, MSAL exposes the `AuthenticationHeaderParser` class, which includes the `getShrNonce` API that will parse the server nonce out of the authentication headers it comes in:
```typescript
import { PublicClientApplication, AuthenticationHeaderParser } from "@azure/msal-browser";
...
// Make call to resource with SHR
return fetch(resourceEndpointData.endpoint, options)
    .then(response => {
        if (response.status === 200 && response.headers.get("Authentication-Info")) {
            // At this point, the response will be a 401 Error, so ignore the success case for now
        }
        // Check if error is 401 unauthorized and WWW-Authenticate header is included
        else if (response.status === 401 && response.headers.get("WWW-Authenticate")) {
            lastResponseHeaders = response.headers;
            const authHeaderParser = new AuthenticationHeaderParser(response.headers);
            shrNonce = authHeaderParser.getShrNonce(); // Null is replaced with valid nonce from WWW-Authenticate header
        } else {
            // Deal with other errors as necessary
        }
    });
```
### Using and renewing a valid nonce
4. Now that the `shrNonce` has been acquired for the first time, the `PoP Token` can be requested again, including a valid nonce, and the authorized resource request can be completed successfully. The `200 OK` successful response will now include an `Authentication-Info` header that will have a `nextnonce` challenge, which can be parsed by MSAL in the same way as the `WWW-Authenticate` nonce:
```typescript
import { PublicClientApplication, AuthenticationHeaderParser } from "@azure/msal-browser";
// 1. Configure PoP Token Request without a valid SHR Nonce
const popTokenRequest = {
scopes: ["User.Read"],
authenticationScheme: msal.AuthenticationScheme.POP,
resourceRequestMethod: "POST",
    resourceRequestUri: "YOUR_RESOURCE_ENDPOINT",
    shrNonce: shrNonce // SHR Nonce is now a valid server-generated nonce
};
// Get PoP token to make authenticated request
const authResult = await publicClientApplication.acquireTokenSilent(popTokenRequest);
const shr = authResult.accessToken; // the Signed HTTP Request
// Set up PoP Resource request
const reqHeaders = new Headers();
const authorizationHeader = `PoP ${shr}`;
reqHeaders.append("Authorization", authorizationHeader);
const options = {
    method: popTokenRequest.resourceRequestMethod,
    headers: reqHeaders
};
// Make call to resource with SHR
return fetch(resourceEndpointData.endpoint, options)
    .then(response => {
        if (response.status === 200 && response.headers.get("Authentication-Info")) {
            /** NEW **/
            // 200 OK if nonce was valid
            lastResponseHeaders = response.headers;
            const authHeaderParser = new AuthenticationHeaderParser(response.headers);
            shrNonce = authHeaderParser.getShrNonce(); // Previous nonce (possibly expired) is replaced with the nextnonce generated by the server
        }
        // Check if error is 401 unauthorized and WWW-Authenticate header is included
        else if (response.status === 401 && response.headers.get("WWW-Authenticate")) {
            /** SAME AS BEFORE **/
            lastResponseHeaders = response.headers;
            const authHeaderParser = new AuthenticationHeaderParser(response.headers);
            shrNonce = authHeaderParser.getShrNonce(); // Null is replaced with valid nonce from WWW-Authenticate header
        } else {
            // Deal with other errors as necessary
        }
    });
```
### Integrated Nonce Acquisition Cycle
The following script proposes the recommended way to handle `PoP Token` Requests that require a server nonce to be acquired and renewed continuously:
```typescript
/**
 * Application script
 */
import { PublicClientApplication, AuthenticationHeaderParser } from "@azure/msal-browser";
const publicClientApplication = new PublicClientApplication(msalConfig);
// Initialize header map to keep track of the "last" response's headers.
let lastResponseHeaders: Headers | null = null;
// Call the PoP API endpoint
const responseBody = await callPopResource(publicClientApplication, resourceEndpointData);
/**
 * End Application script
 */
/**
 * Source Code:
 * This method is responsible for getting data from a PoP-protected API. It is called at the bottom of the
 * demo code in the application script. It reads and updates the module-level lastResponseHeaders variable.
 */
async function callPopResource(
    publicClientApplication: PublicClientApplication,
    resourceEndpointData: ResourceEndpointData): Promise<ResponseBody> {
// Get headers from last response's headers
const headerParser = new AuthenticationHeaderParser(lastResponseHeaders);
let shrNonce: string | null;
try {
    shrNonce = headerParser.getShrNonce(); // Will return the nonce carried in the last response's headers
} catch (e) {
    // If the lastResponse headers are null, .getShrNonce will throw (by design)
    shrNonce = null;
}
// Build PoP request as usual, adding the server nonce
const popTokenRequest = {
    account: CURRENT_ACCOUNT,
    scopes: resourceEndpointData.POP_RESOURCE_SCOPES,
    authenticationScheme: AuthenticationScheme.POP,
    resourceRequestUri: resourceEndpointData.RESOURCE_URI,
    resourceRequestMethod: resourceEndpointData.METHOD,
    shrClaims: resourceEndpointData.CUSTOM_CLAIMS_STRING,
    shrNonce: shrNonce || undefined // Will be undefined on the first call, shrNonce should be valid on subsequent calls
};
// Get pop token to make authenticated request
const authResult = await publicClientApplication.acquireTokenSilent(popTokenRequest);
const shr = authResult.accessToken; // the Signed HTTP Request
// PoP Resource request
const reqHeaders = new Headers();
const authorizationHeader = `PoP ${shr}`; // Create Authorization header
reqHeaders.append("Authorization", authorizationHeader); // Add Authorization header to request headers
const options = {
    method: resourceEndpointData.METHOD,
    headers: reqHeaders
};
// Make call to resource with SHR
return fetch(resourceEndpointData.endpoint, options)
    .then(response => {
        if (response.status === 200 && response.headers.get("Authentication-Info")) {
            lastResponseHeaders = response.headers;
            const authHeaderParser = new AuthenticationHeaderParser(response.headers);
            shrNonce = authHeaderParser.getShrNonce(); // Previous nonce (possibly expired) is replaced with the nextnonce generated by the server
            return response.json();
        }
        // Check if error is 401 unauthorized and WWW-Authenticate header is included
        else if (response.status === 401 && response.headers.get("WWW-Authenticate")) {
            /** SAME AS BEFORE **/
            lastResponseHeaders = response.headers;
            const authHeaderParser = new AuthenticationHeaderParser(response.headers);
            shrNonce = authHeaderParser.getShrNonce(); // Null is replaced with valid nonce from WWW-Authenticate header
        } else {
            // Deal with other errors as necessary
        }
    });
}
```
| 47.410909 | 531 | 0.726952 | eng_Latn | 0.961578 |
0d5cd11ef2482f944abfbe4e4b3259a2110587ea | 273 | md | Markdown | README.md | jdpx/jdpx-co-uk | 4738ace78fdc54281a4eb8136a1e46b48c8e2f23 | [
"MIT"
] | null | null | null | README.md | jdpx/jdpx-co-uk | 4738ace78fdc54281a4eb8136a1e46b48c8e2f23 | [
"MIT"
] | 6 | 2020-11-10T09:52:40.000Z | 2022-02-26T13:35:32.000Z | README.md | jdpx/jdpx-co-uk | 4738ace78fdc54281a4eb8136a1e46b48c8e2f23 | [
"MIT"
] | null | null | null | # jdpx.co.uk
This is the repo for the personal website [jdpx.co.uk](https://www.jdpx.co.uk/)
## Setup
- Install dependencies
- `yarn install`
- Run the site locally
- `yarn start`
## Deployment
- Build site
- `yarn build`
- Deploy the contents of the `/build` folder | 17.0625 | 75 | 0.677656 | eng_Latn | 0.958437 |
0d5dc20ed988f76aa33369fda363450fa5efaaa1 | 3,506 | md | Markdown | src/pages/post062.md | nicholaspretorius/npgh | dc3f84caaa22a2a09ba30299087be5fcbf3e8963 | [
"MIT"
] | null | null | null | src/pages/post062.md | nicholaspretorius/npgh | dc3f84caaa22a2a09ba30299087be5fcbf3e8963 | [
"MIT"
] | 6 | 2021-03-08T23:48:42.000Z | 2022-02-26T08:41:00.000Z | src/pages/post062.md | nicholaspretorius/npgh | dc3f84caaa22a2a09ba30299087be5fcbf3e8963 | [
"MIT"
] | null | null | null | ---
title: "Setup a React app using Create React App with TypeScript"
id: "POST 062"
date: "2020-04-22"
---
To get started with Create React App (CRA) and TypeScript is easy, run:
* `npx create-react-app my-app --template typescript`
From there, run: `npm start` and visit [http://localhost:3000](http://localhost:3000)
You will notice that CRA defaults the app to running on port 3000, which, coincidentally, is the port we use to run `sls offline start`. It is up to you which port to change.
#### Change Port on Create React App
In your "package.json" file, update the "start" script in the "scripts" section to:
`"start": "PORT=3001 react-scripts start",`
This will instruct CRA to start your app on PORT 3001. Check the app out in your browser to confirm all is working.
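Alternatively, if you would rather not hard-code the port in the npm script, CRA also reads a `.env` file in the project root; a minimal sketch (the filename and `PORT` key are CRA conventions, the value is our choice):

```
# .env in the project root; Create React App reads this on startup
PORT=3001
```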
#### Change Port on Serverless Offline
To change the port on the Serverless offline side, you can run the following:
* `sls offline start --httpPort 3001`
Whichever you choose, choose one and stick to it!
#### Setting up Cypress
* `npm i -D cypress` to install Cypress
* `npx @bahmutov/cly init` to scaffold Cypress with a single test
* `npm i -D @bahmutov/add-typescript-to-cypress webpack`
Once the above is done, open the file "cypress/integration/spec.ts" and add the following code:
```
describe("CRA", () => {
it("shows learn link", () => {
cy.visit("http://localhost:3001");
cy.get(".App-header").should("be.visible");
cy.get(".App-header p").should("be.visible").and("have.text", "Hello world!");
});
});
export { };
```
Note: The arbitrary "export" at the bottom of the file is there to stop the TypeScript compiler from complaining about the --isolatedModules rule in .tsconfig.
While we have turned that rule off within the child .tsconfig file in Cypress, VS Code does not seem to pick up on it.
Then, in "package.json" add the following to "scripts":
`"test:e2e": "./node_modules/.bin/cypress open",`
This will open and run Cypress tests.
### Heroku
We will be using Heroku to host our frontend. In order to do so, you need to ensure you have both the Heroku CLI and the Travis CLI installed.
Verify via:
* `heroku -v`
Then run: `heroku login`
#### Travis CLI
In order to install the Travis CLI, you will need to have Ruby installed, at least at version 2. To check if you have Ruby, run:
* `ruby -v`
From there, you can install the CLI via: `gem install travis`
If you have problems with Ruby or openssl, you will need to rectify those issues in order to install the Travis CLI. Since everyone's local development environment will be different, I will outline the broad steps to take:
* `ruby -v` - Will tell you which version of Ruby you are running. The Travis CLI needs at least v2.4.0; as of writing, the current version is 2.7.0, which is what I used.
* `which ruby` - Will tell you where Ruby is running from - this is important since if it is "/usr/local/bin/ruby" it is likely the default location of Ruby for Mac OS. This is probably very old. As such, you *likely* need to install a newer version of Ruby.
* `which openssl` - Will tell you which version of OpenSSL you have installed. Take note, it will *likely* be at "/usr/local/bin/openssl".
In order to manage multiple versions of Ruby, you can install RVM (Ruby Version Manager). This will enable you to install a new version of Ruby, then update your PATH to point to that specific Ruby location. From there, the Travis CLI is a Ruby gem and can be installed via: `gem install travis`
0d6143e156bf65f7df93c083673fcdad380834e9 | 1,233 | md | Markdown | backup_scripts/ServerPilot/readme.md | insspb/Bash | 8e4ad295950267fa1fc6960411389460f8a8a2eb | [
"MIT"
] | null | null | null | backup_scripts/ServerPilot/readme.md | insspb/Bash | 8e4ad295950267fa1fc6960411389460f8a8a2eb | [
"MIT"
] | 4 | 2016-03-18T09:18:32.000Z | 2016-03-21T00:47:23.000Z | backup_scripts/ServerPilot/readme.md | insspb/bash | 8e4ad295950267fa1fc6960411389460f8a8a2eb | [
"MIT"
] | 1 | 2020-11-24T11:20:46.000Z | 2020-11-24T11:20:46.000Z | # ServerPilot.io backup script
## Description
I am using the free version of the ServerPilot.io management engine. This engine lacks backup capabilities. I tried to find something that solves this problem, but with no luck, so I wrote it myself. Feel free to use it or contribute
here.
## Action sequence
This script will do the following actions:
1. It will use the Debian MySQL config to access MySQL with root rights. If you do not have such a file, you need to create it manually and change the path in the ***MYSQLCFG*** variable.
2. It will collect all MySQL databases directly from MySQL.
3. It will get all user folders directly from ***/srv/users/***.
4. It will check that the current user has root rights.
5. It will check and create logs structure if needed.
6. It will create separate backup for each user folder.
7. It will create separate backup for each database.
8. It will force change access rights to backups just to **root** account.
9. It will clean backups older than **24 * KEEPDAYS**
## Installation
- Make a copy of the script on your server
- Make it executable
- Add it to the cron table with root privileges.
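For example, a root crontab entry that runs the backup nightly at 02:30 (the script path, schedule, and log file are assumptions):

```
30 2 * * * /root/serverpilot_backup.sh >> /var/log/serverpilot_backup.log 2>&1
```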
## Versions
### 0.1
- Initial public release.
| 35.228571 | 226 | 0.743715 | eng_Latn | 0.998986 |
0d629e81f0b8831738accc34dc129e00e46ca0cc | 1,315 | md | Markdown | common/components/notes/README.md | AleksaDursun/stolarija | 99d7299f276b5da83a1a5b2e551b3204615e19c2 | [
"BSD-3-Clause"
] | null | null | null | common/components/notes/README.md | AleksaDursun/stolarija | 99d7299f276b5da83a1a5b2e551b3204615e19c2 | [
"BSD-3-Clause"
] | 1 | 2022-03-02T11:34:26.000Z | 2022-03-02T11:34:26.000Z | common/components/notes/README.md | AleksaDursun/stolarija | 99d7299f276b5da83a1a5b2e551b3204615e19c2 | [
"BSD-3-Clause"
] | null | null | null | Note module
* Add module namespace in composer
"autoload" : {
"psr-4": {
"notes\\": "app/modules/notes/"
}
},
* Add sumernote to composer
"marqu3s/yii2-summernote": "1.0.0"
* Add module in config
'modules' => [
'notes' => [
'class' => 'app\modules\notes\Notes',
'controllerNamespace' => 'app\modules\notes\controllers'
],
...
]
* Add NoteTrait to models that use notes
use NoteTrait;
* A notes action in the controller is needed
'notes' => [
'class' => ListAction::className(),
'modelClass' => $this->modelClass,
'createRedirectRoute' => 'project/notes',
'subMenuView' => '//project/_project_nav',
'panelOptions' => [
'title' => function ($model) {
return 'Project: ' . $model->name;
},
]
],
* Grid Action
[
'class' => ActionColumn::class,
'template' => "{notes} {update} {delete}",
'buttons' => [
'notes' => NoteHelper::initGridNoteButton('project-grid-id')
]
], | 25.784314 | 77 | 0.429658 | eng_Latn | 0.224147 |
0d62a2caa0dc13d4c3e0ddca3ac6a24cb99d219a | 204 | md | Markdown | README.md | phpmanual/glossary-for-translators | 440192bab11526e75af1baf031a9775dd1f75572 | [
"MIT"
] | 2 | 2016-05-01T19:47:54.000Z | 2016-12-29T16:39:39.000Z | README.md | phpmanual/glossary-for-translators | 440192bab11526e75af1baf031a9775dd1f75572 | [
"MIT"
] | null | null | null | README.md | phpmanual/glossary-for-translators | 440192bab11526e75af1baf031a9775dd1f75572 | [
"MIT"
] | null | null | null | # glossary-for-translators
:books: Glossary for translators of PHP manual
The glossary will be maintained via this project's wiki.
Check it out at <https://github.com/phpmanual/glossary-for-translators/wiki>
| 29.142857 | 72 | 0.803922 | eng_Latn | 0.71922 |
0d63e46b99f23b9ffa785c84d7dae56f937955c3 | 2,404 | md | Markdown | README.md | phenan/eidos | 3fc4ee8dd3979a2affae3de11fe4d36c9d9aac5e | [
"Apache-2.0"
] | 2 | 2017-12-19T01:21:53.000Z | 2017-12-20T21:44:43.000Z | README.md | phenan/eidos | 3fc4ee8dd3979a2affae3de11fe4d36c9d9aac5e | [
"Apache-2.0"
] | null | null | null | README.md | phenan/eidos | 3fc4ee8dd3979a2affae3de11fe4d36c9d9aac5e | [
"Apache-2.0"
] | null | null | null | # eidos
Invariants functors and arrows with generics in shapeless
## Converter
`Converter` is a type class that represents both of parsers and printers.
We can construct parsers and printers by a single definition.
The following definition is a sample program using `Converter`.
```scala
import com.phenan.eidos._
import com.phenan.eidos.common.converter._
import com.phenan.eidos.syntax.converter._
import scala.language.higherKinds
sealed trait Hoge
case object Foo extends Hoge
case class Bar (n: Int, s: String) extends Hoge
object Test {
def hoge [F[_]] (implicit F: Converter[F, Char]): F[Hoge] = union[Hoge] (
keyword("foo") *> struct[Foo.type](),
keyword("bar: ") *> struct[Bar](integer, keyword(", ") *> javaIdentifier)
)
}
```
`union` and `struct` are combinators of `Converter` that use generics in shapeless.
They express compositions of `Converter`s that follow algebraic data types.
`union` is a parallel composition of `Converter`s and it returns a `Converter` of the super type that is defined as `sealed class` or `sealed trait`.
`union` takes a type that is the super type and it takes `Converter`s for all the sub types as arguments.
It returns the `Converter` of the given super type.
Surprisingly, we do not need to write a conversion between the super type (`Hoge` in this example) and the sub types (`Foo` and `Bar`).
Such conversion is automatically inferred by generics.
`struct` is a sequential composition of `Converter`s and it returns a `Converter` of the composed data type that is defined as `case class`.
`struct` takes a type that is the composed data type and it takes `Converter`s for all argument types of the primary constructor of the data.
For example, `Bar` takes two arguments, `Int` and `String`,
so `struct[Bar]` takes two `Converter`s of `Int` and `String`.
Converter is implemented in a tagless-final style,
so we should declare which interpreter we want to use.
This library provides `PEGParser` and `StreamPrinter` as an interpreter of `Converter`.
The following program is a sample program that interprets `Test.hoge`.
```scala
import com.phenan.eidos.data._
val x = Test.hoge[PEGParser[Char, ?]].evalParser("bar: 123, hoge".toStream)
// x : Some(Bar(123, "hoge"))
val y = Test.hoge[StreamPrinter[Char, ?]].runPrinter(Bar(-456, "piyo"))
// y : "bar: -456, piyo".toStream
```
## Author
[@phenan](https://twitter.com/phenan) | 40.745763 | 149 | 0.739601 | eng_Latn | 0.988036 |
0d6531ca57497cd1668ada8c0a42c2f8b055d205 | 1,464 | md | Markdown | gianpaolo/2020-05-01.md | simonetripodi/quarantine-workouts | 12f1977aae383ffb436321ab9de5a385cf6aec6d | [
"CC0-1.0"
] | null | null | null | gianpaolo/2020-05-01.md | simonetripodi/quarantine-workouts | 12f1977aae383ffb436321ab9de5a385cf6aec6d | [
"CC0-1.0"
] | null | null | null | gianpaolo/2020-05-01.md | simonetripodi/quarantine-workouts | 12f1977aae383ffb436321ab9de5a385cf6aec6d | [
"CC0-1.0"
] | null | null | null | # Warm-up
* arm circles right/left/both, forward/backward
* alternating arm circles
* elbow circles
* wrist circles
* extra elbow rotations on the floor
* cat/arch on all fours
# Activation
* single-joint loop-band routine: forward, lateral, lateral supine, hammer behind, crossed
* 3 x 10 presses with the red loop band
* 5 x 15" plank: bring the shoulders forward of the line of the hands, remember to squeeze the glutes hard
* 5 x 5 presses on the Swiss ball OR 3 x 3 with the pouf without wheels
# 3pod
* 3 x 10" tucked holds
* 3 x 10" alternating leg extensions
* 10 tucks + extension and return
* 10 hip openings with straddled legs (legs stay suspended)
* 10 hip openings with straddled legs + extension
*Note*
Aim to work unbroken _or_ in accumulated sets
# Handstand
* 5 x 15" plank at 45°
* 10 kick-ups to the wall, symmetrical up and back: legs straight/tight on arrival, back in hollow (don't arch on arrival), control the return
* 1 kick-up + 10 shoulder shrugs
* 5 + 5 climbs facing / back to the wall: lift one leg by activating the glute and lat, take the position and bring the other one in; hold!!!
* Accumulate 5' of free HS (use the ceiling): come down every time correct form is lost; stop the timer at each descent, the time counts only while in position.
# Note
Recover well, 2'/2'30" of rest between sets (or by feel based on recovery)
| 36.6 | 159 | 0.757514 | ita_Latn | 0.998938 |
0d659fd46a4e39fc3dea6fe38fa5f6233bc9fffd | 110 | md | Markdown | README.md | HomikSoni/First-Python | 2e6f0361bf21b3b00d9cf34ad314e2d315a4d22d | [
"MIT"
] | null | null | null | README.md | HomikSoni/First-Python | 2e6f0361bf21b3b00d9cf34ad314e2d315a4d22d | [
"MIT"
] | null | null | null | README.md | HomikSoni/First-Python | 2e6f0361bf21b3b00d9cf34ad314e2d315a4d22d | [
"MIT"
] | null | null | null | # First-Python
This repository contains my first Python code. I am still learning!!!
I will edit this later
| 22 | 70 | 0.772727 | eng_Latn | 0.995557 |
0d6647fe891f7cde1e20492fe00dfb758d092728 | 212 | md | Markdown | README.md | StatisticalProgramming/statisticalconditioning.github.io | b63aa16efaa6c3f38d8d9619dbf975f706997754 | [
"MIT"
] | null | null | null | README.md | StatisticalProgramming/statisticalconditioning.github.io | b63aa16efaa6c3f38d8d9619dbf975f706997754 | [
"MIT"
] | null | null | null | README.md | StatisticalProgramming/statisticalconditioning.github.io | b63aa16efaa6c3f38d8d9619dbf975f706997754 | [
"MIT"
] | null | null | null | # Statistical Conditioning
This is the repository for the **Statistical Conditioning** website.
The site can be found at [https://statisticalconditioning.github.io/](https://statisticalconditioning.github.io/). | 42.4 | 114 | 0.79717 | eng_Latn | 0.820932 |
0d67b58d66022354396181133263d0663ce57c19 | 723 | md | Markdown | docs/experiments-adhoc.md | vishalbelsare/OpenMatch | 84b25502bf52c58b9e71bd0754b2fc192d9b448f | [
"MIT"
] | 403 | 2020-01-17T06:54:46.000Z | 2022-03-30T05:47:42.000Z | docs/experiments-adhoc.md | vishalbelsare/OpenMatch | 84b25502bf52c58b9e71bd0754b2fc192d9b448f | [
"MIT"
] | 30 | 2020-06-07T12:28:07.000Z | 2022-03-20T05:26:03.000Z | docs/experiments-adhoc.md | vishalbelsare/OpenMatch | 84b25502bf52c58b9e71bd0754b2fc192d9b448f | [
"MIT"
] | 48 | 2020-07-15T09:45:46.000Z | 2022-03-01T07:27:59.000Z | # Ad-hoc Search
All results are measured with ndcg@20 and 5-fold cross-validation. More details are available at [ClueWeb09](http://lemurproject.org/clueweb09/), [ClueWeb12](http://www.lemurproject.org/clueweb12.php/).
## Datasets
Data can be downloaded from [Datasets](https://cloud.tsinghua.edu.cn/d/77741ef1c1704866814a/).
|Datasets|Queries/Anchors|Query/Anchor-Doc Pairs|Released Files|
|:-------|:-------------:|:--------------------:|:-------------|
|**ClueWeb09-B**|200|47.1K|Queries, Q-D Relations, SDM scores|
|**Robust04**|249|311K|Queries, Q-D Relations, SDM scores|
|**ClueWeb12-B13**|100|28.9K|Queries, Q-D Relations, SDM scores|
As we cannot release the document contents, the document IDs are used instead.
| 51.642857 | 200 | 0.692946 | yue_Hant | 0.399312 |
0d67da1098d29abdc740b90036f6304a34ccd99a | 6,230 | md | Markdown | examples/gloo-edge/README.md | keshavprasadms/opa-envoy-plugin | 8b232874cbc5c60c6194aebda836fce49b05de4d | [
"Apache-2.0"
] | 127 | 2018-04-17T20:49:35.000Z | 2020-07-16T18:43:31.000Z | examples/gloo-edge/README.md | keshavprasadms/opa-envoy-plugin | 8b232874cbc5c60c6194aebda836fce49b05de4d | [
"Apache-2.0"
] | 70 | 2018-04-02T18:23:15.000Z | 2020-08-10T16:58:14.000Z | examples/gloo-edge/README.md | keshavprasadms/opa-envoy-plugin | 8b232874cbc5c60c6194aebda836fce49b05de4d | [
"Apache-2.0"
] | 58 | 2020-08-31T09:30:07.000Z | 2022-03-30T08:56:35.000Z | ## Overview
[Gloo Edge](https://docs.solo.io/gloo-edge/latest/) is an Envoy based API Gateway that provides a Kubernetes CRD
to manage Envoy configuration for performing traffic management and routing.
`Gloo Edge` allows creation of a [Custom External Auth Service](https://docs.solo.io/gloo-edge/master/guides/security/auth/custom_auth/)
that implements the Envoy spec for an [External Authorization Server](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/ext_authz_filter.html).
The purpose of this tutorial is to show how OPA could be used with Gloo Edge to apply security policies for upstream services.
## Prerequisites
This tutorial requires Kubernetes 1.14 or later. To run the tutorial locally, we recommend using [minikube](https://minikube.sigs.k8s.io/docs/start/) in
version v1.0+ with Kubernetes 1.14+.
The tutorial also requires [Helm](https://helm.sh/docs/intro/install/) to install Gloo Edge on a Kubernetes cluster.
## TL;DR
Execute `./setup.sh`, the script will set up everything and run sample tests to prove that the setup worked.
## Steps
### Start Minikube
```bash
minikube start
```
### Setup and configure Gloo Edge
```bash
helm repo add gloo https://storage.googleapis.com/solo-public-helm
helm upgrade --install --namespace gloo-system --create-namespace gloo gloo/gloo
kubectl config set-context $(kubectl config current-context) --namespace=gloo-system
```
Ensure all pods are running using `kubectl get pod` command.
### Create Virtual Service and Upstream
[Virtual Services](https://docs.solo.io/gloo-edge/latest/introduction/architecture/concepts/#virtual-services) define a
set of route rules, security configuration, rate limiting, transformations, and other core routing capabilities
supported by Gloo Edge.
[Upstreams](https://docs.solo.io/gloo-edge/latest/introduction/architecture/concepts/#upstreams) define destinations for routes.
Save the configuration as **vs.yaml**.
```yaml
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
name: httpbin
spec:
static:
hosts:
- addr: httpbin.org
port: 80
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
name: httpbin
spec:
virtualHost:
domains:
- '*'
routes:
- matchers:
- prefix: /
routeAction:
single:
upstream:
name: httpbin
namespace: gloo-system
options:
autoHostRewrite: true
```
```bash
kubectl apply -f vs.yaml
```
### Test Gloo
For simplification port-forwarding will be used. Open another terminal and execute.
```bash
kubectl port-forward deployment/gateway-proxy 8080:8080
```
The `VirtualService` created in the previous step forwards requests to [http://httpbin.org](http://httpbin.org).
Let's test that Gloo works properly by running the below commands in the first terminal.
```bash
curl -XGET -Is localhost:8080/get | head -n 1
HTTP/1.1 200 OK
curl -XPOST -Is localhost:8080/post | head -n1
HTTP/1.1 200 OK
```
### Define an OPA policy
The following OPA policy only allows `GET` requests.
**policy.rego**
```rego
package envoy.authz
import input.attributes.request.http as http_request
default allow = false
allow {
action_allowed
}
action_allowed {
http_request.method == "GET"
}
```
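For reference, this policy reads `input.attributes.request.http`; an abridged example of the kind of `input` document Envoy's ext_authz filter sends (shape assumed from the CheckRequest mapping, values illustrative):

```json
{
  "attributes": {
    "request": {
      "http": {
        "method": "GET",
        "path": "/get",
        "host": "localhost:8080"
      }
    }
  }
}
```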
Store the policy in Kubernetes as a Secret.
```bash
kubectl create secret generic opa-policy --from-file policy.rego
```
### Setup OPA-Envoy
Create a deployment as shown below and save it in **deployment.yaml**:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opa
labels:
app: opa
spec:
replicas: 1
selector:
matchLabels:
app: opa
template:
metadata:
labels:
app: opa
spec:
containers:
- name: opa
image: openpolicyagent/opa:0.26.0-envoy
volumeMounts:
- readOnly: true
mountPath: /policy
name: opa-policy
args:
- "run"
- "--server"
- "--addr=0.0.0.0:8181"
- "--set=plugins.envoy_ext_authz_grpc.addr=0.0.0.0:9191"
- "--set=plugins.envoy_ext_authz_grpc.query=data.envoy.authz.allow"
- "--set=decision_logs.console=true"
- "--ignore=.*"
- "/policy/policy.rego"
volumes:
- name: opa-policy
secret:
secretName: opa-policy
```
```bash
kubectl apply -f deployment.yaml
```
Ensure all pods are running using `kubectl get pod` command.
Next, define a Kubernetes `Service` for OPA-Envoy. This is required to create a DNS record and thereby create
a Gloo `Upstream` object.
**service.yaml**
```yaml
apiVersion: v1
kind: Service
metadata:
name: opa
spec:
selector:
app: opa
ports:
- name: grpc
protocol: TCP
port: 9191
targetPort: 9191
```
> Note: Since the name of the service port is `grpc`, `Gloo` will understand that traffic should be routed using HTTP2 protocol.
```bash
kubectl apply -f service.yaml
```
### Configure Gloo Edge to use OPA
To use OPA as a custom auth server, we need to add the `extauth` attribute as described below:
**gloo.yaml**
```yaml
global:
extensions:
extAuth:
extauthzServerRef:
name: gloo-system-opa-9191
namespace: gloo-system
```
To apply it, run the following command:
```bash
helm upgrade --install --namespace gloo-system --create-namespace -f gloo.yaml gloo gloo/gloo
```
Then, configure Gloo Edge routes to perform authorization via the configured `extauth` server before regular processing.
**vs-patch.yaml**
```yaml
spec:
virtualHost:
options:
extauth:
customAuth: {}
```
Then apply the patch to our `VirtualService` as shown below:
```bash
kubectl patch vs httpbin --type=merge --patch "$(cat vs-patch.yaml)"
```
### Exercise the OPA policy
After the patch is applied, let's verify that OPA only allows `GET` requests.
```bash
curl -XGET -Is localhost:8080/get | head -n 1
HTTP/1.1 200 OK
curl -XPOST -Is localhost:8080/post | head -n1
HTTP/1.1 403 Forbidden
```
Check OPA's decision logs to view the inputs received by OPA from Gloo Edge and the results generated by OPA.
```bash
$ kubectl logs deployment/opa
```
| 23.598485 | 166 | 0.690209 | eng_Latn | 0.859885 |
0d689289909766284cd5e8c9374c8c5668b0719f | 461 | md | Markdown | content/featured/COVID-Tracker/index.md | karenefereyan/v2 | f609641801b3e3f3c973ccbbbc46bacc29b56f01 | [
"MIT"
] | null | null | null | content/featured/COVID-Tracker/index.md | karenefereyan/v2 | f609641801b3e3f3c973ccbbbc46bacc29b56f01 | [
"MIT"
] | null | null | null | content/featured/COVID-Tracker/index.md | karenefereyan/v2 | f609641801b3e3f3c973ccbbbc46bacc29b56f01 | [
"MIT"
] | null | null | null | ---
date: '2'
title: 'Covid19 Tracker App'
cover: './covid.png'
github: 'https://github.com/KarenEfereyan/covik-19-tracker'
external: 'https://covik-19-tracker.vercel.app/'
tech:
- Reactjs
- Css
- Material UI
- Chartjs
- Netlify
showInProjects: true
---
A web application for monitoring the number of coronavirus cases worldwide. It allows
one to toggle between the infected, recovered, and death counts due to coronavirus for different countries of the world.
0d6926497f2edde3d420907f3d2ac110eec3a8a2 | 1,517 | md | Markdown | _portfolio/2018-09-18-iron-man.md | gmonteir/gmonteir.github.io | 51a950a51c27b74490de0dedc0a30dca5053fba7 | [
"MIT"
] | null | null | null | _portfolio/2018-09-18-iron-man.md | gmonteir/gmonteir.github.io | 51a950a51c27b74490de0dedc0a30dca5053fba7 | [
"MIT"
] | 4 | 2020-02-25T21:38:16.000Z | 2022-02-26T04:54:08.000Z | _portfolio/2018-09-18-iron-man.md | gmonteir/gmonteir.github.io | 51a950a51c27b74490de0dedc0a30dca5053fba7 | [
"MIT"
] | null | null | null | ---
layout: post-wide
hero-bg-color: "#000000"
uid: iron-man
title: "Iron Man Initiative"
worktype: "3D Modeling"
date: 2018-09-18 15:35:01
categories: project
progress: 40
---
<p class="meta">
<a href="https://github.com/gmonteir/iron-man-initiative">github/iron-man</a> | Date: <strong>{{ page.date | date: "%b %Y" }}</strong>
</p>
<p>
I organized a team of 10 students over three quarters to fabricate a wearable suit of Iron Man armor. The final suit will have electronic and moving parts, including a motorized helmet, LED blasters, and more.
We are currently in the process of 3D printing the parts for the suit. Check back here for more progress!
</p>
<div class="skills">
<p>Skills and contributions:</p>
<ul>
<li>Leadership and management of a large scale, year-long project</li>
<li>3D modeling using <b>Meshmixer</b> and <b>Photoshop</b></li>
<li>Lighting and controls using an <b>Arduino</b> microcontroller</li>
<li>Sanding, painting, and finishing 3D printed parts</li>
<li>Poster design using <b>InDesign</b></li>
</ul>
</div>
<div class="showcase">
<img style="width:50%" src="/images/portfolio/iron-man/1.jpg" alt="">
<img style="width:50%" src="/images/portfolio/iron-man/2.png" alt="">
<img style="width:50%" src="/images/portfolio/iron-man/3.png" alt="">
<img style="width:50%" src="/images/portfolio/iron-man/4.jpg" alt="">
<h2>Our team</h2>
<img style="width:50%" src="/images/portfolio/iron-man/5.jpg" alt="">
</div>
| 37 | 211 | 0.671721 | eng_Latn | 0.777999 |
0d6934a6a7bc4aa13f0697d6b3f71ce752b053ec | 10,249 | md | Markdown | README.md | Gebre2020/JS-Visitor-Portfolio-Project-Blog | cac02eca395f8ec305c2177b7b2a420ffa8fe493 | [
"MIT"
] | null | null | null | README.md | Gebre2020/JS-Visitor-Portfolio-Project-Blog | cac02eca395f8ec305c2177b7b2a420ffa8fe493 | [
"MIT"
] | null | null | null | README.md | Gebre2020/JS-Visitor-Portfolio-Project-Blog | cac02eca395f8ec305c2177b7b2a420ffa8fe493 | [
"MIT"
] | null | null | null |
** JS Visitor Portfolio Project Blog **
As a Flatiron School student, I built a JavaScript app with a Ruby on Rails backend,
called JS-Visitor-Project-Backend, and a frontend written in JavaScript. I had to bring
Ruby on Rails and JavaScript together (backend and frontend) into one single app.
My app lets a visitor find or create a location by name and input
a trip's name, address, and budget. It also manages rendering all trips and locations,
and creating, editing, updating, and deleting them.
First, to start the project I used two separate folders: a reusable backend that other
frontend applications can consume, like an API to use over and over again, and a frontend.
So I created two new ones and built the backend first. In the terminal I ran
- cd Desktop
- `rails new JS-Visitor-Project-Backend --api --database=postgresql`
- cd JS-Visitor-Project-Backend
- `code .`
Next I created a GitHub repository and uploaded the application to GitHub. Then in the
terminal I ran
- git add .
- git commit -m "start project"
- git push
- & copied the 3 lines from GitHub (git remote ..., git branch ..., git push ...),
pasted them into the terminal, and ran them.
Then, to check GitHub, refresh the page.
* Backend (Ruby on Rails) Setup
I created two models with a relationship. The first is Location, with a name attribute,
which has many trips.
The second is Trip, with name, address, budget, and location_id attributes, which belongs to a location.
` class Location < ApplicationRecord
has_many :trips
end `
` class Trip < ApplicationRecord
belongs_to :location
end `
In order to create the models and the tables, I used the resource generator and
ran the parent resource first
- `rails g resource location name`
- `rails g resource trip name address budget:float location:belongs_to`
- `rails db:create`
- `rails db:migrate`
I built some data in the seed file, then ran
- `rails db:seed`
Then, to check the data and the table relationships, I ran `rails console`.
Next, go to the 'config' folder, open the 'initializers' folder, and open the 'cors.rb' file.
Uncomment from the 'config.middleware' block down to the end.
Then change 'origins "example.com"' to 'origins "*"' so the API can be reached from anywhere;
once development is finished, change "*" back to the frontend's URL to control which websites are allowed to access the API.
Then open the Gemfile, uncomment gem 'rack-cors', and run `bundle install`.
- Next, in the controllers, I created CRUD actions that render JSON.
- Run `rails server` to start the server (this opens it by default on
http://localhost:3000) to check & see the connection with the data objects.
- I added serializers to the app and ran `rails g serializer location` and
`rails g serializer trip`
- In the Gemfile I added `gem 'active_model_serializers', '~> 0.10.2'` and
`gem 'fast_jsonapi'`, then ran bundle install in the terminal.
* Front End Setup
For the frontend I ran `mkdir JS-Visitor-Project-Frontend` in the terminal, then
- `cd JS-Visitor-Project-Frontend`
- `code .` in the terminal to open the Visual Studio Code editor.
In the editor I created the folders & files 'image', 'src', 'index.html', 'index.js', 'README.md'
and 'style.css'. Also, in the 'src' folder I created 'location_service.js', 'location.js',
'trip_service.js' and 'trips.js'.
- Then I wrote the HTML in the 'index.html' file and added script src
tags in the body to connect the JavaScript files.
- Next, I checked the 'index.js' file's connection to my application page by writing
`console.log('Hello')` in the 'index.js' file and running `open index.html` in the terminal
to open the app in the browser, then inspecting the page, opening the console, & checking that 'Hello' is printed.
Next I had to create a GitHub repository. In the terminal run
- git init
- git add .
- git commit -m "frontend started"
- go to GitHub, create a GitHub repository for the frontend application, & run
` git remote ..., git branch, git push... ` in the terminal & refresh GitHub.
I checked the connection between the backend and frontend with a fetch request. I started
from the 'trip_service.js' file, creating a class with a constructor.
Inside the 'index.js' file I created global variables
const port = `http://localhost:3000`;
const tripCall = new TripService(port);
const locationCall = new LocationService(port);
and checked the connection in the browser console by calling instance methods on tripCall & locationCall.
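As a runnable sketch (outside the browser), the two service classes at this stage can be as small as this; the class and variable names follow the post, everything else is an assumption:

```javascript
// Minimal service classes that just hold the API base URL, matching the globals above.
class TripService {
  constructor(port) {
    this.port = port; // e.g. "http://localhost:3000"
  }
}

class LocationService {
  constructor(port) {
    this.port = port;
  }
}

const port = "http://localhost:3000";
const tripCall = new TripService(port);
const locationCall = new LocationService(port);
console.log(tripCall.port, locationCall.port);
```

Typing `tripCall` in the browser console should then show a TripService instance.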
- In the 'trip_service.js' file I wrote a getTrips() function with a fetch request
getTrips() {
  fetch(this.port + `/trips`)
    .then(response => response.json())
    .then(json => console.log(json))
}
In order to print/see the objects in the page's inspect console, I added the calls
tripCall.getTrips()
locationCall.getLocations()
in the 'index.js' file.
- Then, in the 'trip_service.js' file, to build instances instead of just logging, I changed the code
`.then(json => console.log(json))` to
`.then(json => {
//debugger
for(const trip of json.data) {
let t = new Trip(trip)
t.attachToDom()
}
})`
- Next, in the 'trip.js' file, I built a Trip class whose constructor takes an argument.
In it I create an li element and assign it an id, and I added "static all = []" and "Trip.all.push(this)"
to the class Trip & the constructor method.
Then I created a rendering function called 'render() {...}'
Also I build a function called 'attachToDom()'
`attachToDom() {
Trip.tripsContainer.appendChild(this.render())
}`
and at the top of the class Trip I assigned a 'static tripsContainer' variable.
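Putting those pieces together, here is a hedged, runnable sketch of the Trip class; the DOM call is replaced by returning the markup string so it can run outside the browser, and the JSON shape (`id` plus `attributes`) is assumed from the serializer output:

```javascript
class Trip {
  static all = [];

  constructor(trip) {
    this.id = trip.id;
    this.name = trip.attributes.name;
    this.address = trip.attributes.address;
    this.budget = trip.attributes.budget;
    Trip.all.push(this); // keep track of every instance
  }

  // In the browser this would create an <li> element and set its id;
  // here we return the markup as a string instead.
  render() {
    return `<li id="trip-${this.id}">${this.name} - ${this.address} ($${this.budget})</li>`;
  }
}

const trip = new Trip({ id: "1", attributes: { name: "Museum day", address: "700 Art Museum Dr", budget: 40 } });
console.log(trip.render());
```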
* Create an Object
- Next I built the create function.
. 1st, I built a form with the id trip-form inside the 'index.html' file, and
I also added a location-dropdown select:
` <form id="trip-form">
<label for="trip-name">Name: </label>
<input type="text" name="name" id="trip-name"/><br/>
<label for="trip-address">Address: </label>
<input type="text" name="address" id="trip-address"/><br/>
<label for="trip-budget">Budget: </label>
<input type="number" name="budget" id="trip-budget" min="0" step="0.01"/><br>
<label for="location-dropdown">Choose a Location Name:</label>
<select name="location_id" id="location-dropdown"></select><br/><br/>
<label for="location-Name">Location Name: </label>
<input type="text" name="name" id="location-name"/><br/>
<input class="button" type="submit" value="Create Trip"/>
</form>`
. 2nd I create an event listener function in the 'index.js' file
`form.addEventListener('submit', handleSubmit)`
`function handleSubmit(e) {...}`
I also assign variables for the trip-form and location-dropdown in the 'index.js' file
`const form = document.getElementById("trip-form")`
`const dropdown = document.getElementById("location-dropdown")`
- Next, in the 'trip-service.js' file I build a createTrips function
`createTrips() {
const tripInfo = {
trip: {...}
}
const configObject = {
method: 'POST',
headers: {...},
body: JSON.stringify(tripInfo)
}
fetch(...)
.then(...)
.then(json => {...})
}`
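As a hedged illustration of the configObject sketched above, the POST configuration could be assembled like this (`buildTripConfig` and the exact attribute names are assumptions, not the post's actual code):

```javascript
// Hypothetical helper assembling the fetch POST configuration sketched above.
// The attribute names mirror the form fields; they are assumptions.
function buildTripConfig(name, address, budget, locationId) {
  const tripInfo = {
    trip: { name, address, budget, location_id: locationId }
  };
  return {
    method: "POST",
    headers: { "Content-Type": "application/json", Accept: "application/json" },
    body: JSON.stringify(tripInfo)
  };
}

const config = buildTripConfig("Beach trip", "1 Ocean Ave", 300, 2);
console.log(config.method); // "POST"
```

A `fetch(url, config)` call with this object then performs the create request.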
- Then, in the 'index.js' file, I assign a variable for each attribute value
`const nameValue = document.getElementById("trip-name")`
`const addressValue = document.getElementById("trip-address")`
`const budgetValue = document.getElementById("trip-budget")`
`const locNameValue = document.getElementById("location-name")`
- Next, in the 'location.js' file, I build a class with a constructor that takes
an argument attribute, and I assign a static location container, also adding
`static all = [];`
`static locationContainer = document.getElementById('loc-container');`
and `Location.all.push(this)` inside the constructor.
- Then, inside the 'location-service.js' file, I build the LocationService class
with a constructor and create an instance method called getLocations
with a fetch request.
After I create the fetch request, I open the 'location.js' file and create a render()
function to display the information, plus a function called addToDrop that adds each
new object to the location dropdown.
* Edit, Update, and Delete
- Next I am going to build the edit, update & delete functions.
. 1st I need to add Edit Trip and Delete buttons in the 'trip.js' file,
inside the render function.
. Then I add an event listener in the constructor
`this.element.addEventListener('click', this.handleClick)`
. Then I create the handleClick function and updateTripInfo()
` handleClick = e => {...}`
` updateTripInfo() {...}`
In the updateTripInfo() function I add code to call tripCall.updateTrip(this).
- Next, in the 'trip-service.js' file, I create an updateTrip(trip) function for the PATCH
request. I also build a deleteTrip(e) {...} function.
* Backend
- In the backend, in the 'trip-controller.rb' file, I add a location_name attribute
to the private trip_params method; then, in the 'trip.rb' model file, I build a
location_name method to find or create the location. Then, in the 'trip_serializer.rb'
file, I add :location_name to the attributes so that the location_name is displayed
on the front/client page when it is created.
- In order to display the new or created location name, I declare a
findLoc constant in the fetch request of the createTrips() function in the
'trip-service.js' file.
`const findLoc = Location.all.find(l => parseInt(l.id) === t.locationId)`
# Code & Demo
You can check out this project's backend and frontend on GitHub at
`https://github.com/Gebre2020/JS-Visitor-Project-Backend` and
`https://github.com/Gebre2020/JS-Visitor-Project-Frontend`,
and see the demo at `https://www.youtube.com/watch?v=RcMxTCQd3j8`.
## License & copyright
Licensed under the [MIT License](LICENCE).
# Getting Started with Create React App
run dev:
```
npm run start
```
build:
```
npm run build
```
serve build:
```
serve -s build
```
# libraries
- antd: ui library
- utils
- - axios: http
- - rxjs: observable
- - classNames:
# Environment variables
Only used during development and build.
Files on the left have more priority than files on the right:
```
npm start: .env.development.local, .env.local, .env.development, .env
npm run build: .env.production.local, .env.local, .env.production, .env
npm test: .env.test.local, .env.test, .env (note .env.local is missing)
```
Whichever file is used, the others are ignored.
## Dynamic configuration after the build
Use public/config.js
<properties
	pageTitle="Automated backup for SQL Server in Azure Virtual Machines | Microsoft Azure"
	description="Describes the automated backup feature for SQL Server running in Azure virtual machines using the Resource Manager deployment model."
services="virtual-machines"
documentationCenter="na"
authors="rothja"
manager="jeffreyg"
editor="monicar"
tags="azure-resource-manager" />
<tags
ms.service="virtual-machines"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="vm-windows-sql-server"
ms.workload="infrastructure-services"
ms.date="02/03/2016"
ms.author="jroth" />
# Automated backup for SQL Server in Azure Virtual Machines
Automated Backup automatically configures [Managed Backup to Microsoft Azure](https://msdn.microsoft.com/library/dn449496.aspx) for all existing and new databases on an Azure VM running SQL Server 2014 Standard or Enterprise. This enables you to configure regular database backups that use durable Azure blob storage.
[AZURE.INCLUDE [learn-about-deployment-models](../../includes/learn-about-deployment-models-classic-include.md)] Resource Manager model.
## Settings for Automated Backup
The following table describes the options that can be configured for Automated Backup. The actual configuration steps vary depending on whether you use the Azure portal or Azure Windows PowerShell commands.
|Setting|Range (Default)|Description|
|---|---|---|
|**Automated Backup**|Enable/Disable (Disabled)|Enables or disables automated backup for an Azure VM running SQL Server 2014 Standard or Enterprise.|
|**Retention Period**|1-30 days (30 days)|The number of days to retain backups.|
|**Storage Account**|Azure storage account (the storage account created for the specified VM)|The Azure storage account used to store automated backup files in blob storage. A container is created at this location to store all backup files. The backup file naming convention includes the date, time, and machine name.|
|**Encryption**|Enable/Disable (Disabled)|Enables or disables encryption. When encryption is enabled, the certificates used to restore the backup are placed in the specified storage account, in the same automaticbackup container, using the same naming convention. If the password changes, a new certificate is generated with that password, but the old certificate remains in order to restore prior backups.|
|**Password**|Password text (None)|A password for the encryption key. It is only required when encryption is enabled. To restore an encrypted backup, you must have the correct password and the related certificate that were used when the backup was created.|
## Configuration in the Azure portal
You can configure Automated Backup through the Azure portal when creating a new SQL Server 2014 virtual machine.
>[AZURE.NOTE] Automated Backup relies on the SQL Server IaaS Agent. To install and configure the agent, the Azure VM Agent must be running on the target virtual machine. This option is enabled by default on newer virtual machine gallery images, but the Azure VM Agent might be missing on existing VMs. If you are using your own VM image, you also need to install the SQL Server IaaS Agent. For more information, see [VM Agent and Extensions](https://azure.microsoft.com/blog/2014/04/15/vm-agent-and-extensions-part-2/).
The following Azure portal screenshot shows the options under **Optional Configuration** | **SQL Automated Backup**.

For an existing SQL Server 2014 virtual machine, select the **Automated backup** setting in the **Configuration** section of the virtual machine properties. In the **Automated backup** window, you can enable the feature, set the retention period, select the storage account, and configure encryption, as shown in the following screenshot.

>[AZURE.NOTE] The first time you enable Automated Backup, Azure configures the SQL Server IaaS Agent in the background. During this time, the Azure portal does not show that Automated Backup is configured. Wait several minutes for the agent to be installed and configured; after that, the Azure portal reflects the new settings.
## Configuration with PowerShell
In the following PowerShell example, Automated Backup is configured for an existing SQL Server 2014 VM. The **New-AzureVMSqlServerAutoBackupConfig** command configures the automated backup settings to store backups in the Azure storage account specified by the $storageaccount variable. These backups are retained for 10 days. The **Set-AzureVMSqlServerExtension** command updates the specified Azure VM with these settings.
$storageaccount = "<storageaccountname>"
$storageaccountkey = (Get-AzureStorageKey -StorageAccountName $storageaccount).Primary
$storagecontext = New-AzureStorageContext -StorageAccountName $storageaccount -StorageAccountKey $storageaccountkey
$autobackupconfig = New-AzureVMSqlServerAutoBackupConfig -StorageContext $storagecontext -Enable -RetentionPeriod 10
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -AutoBackupSettings $autobackupconfig | Update-AzureVM
It can take several minutes to install and configure the SQL Server IaaS Agent.
To enable encryption, modify the previous script to pass the EnableEncryption parameter along with a password (secure string) for the CertificatePassword parameter. The following script enables the automated backup settings of the previous example and adds encryption.
$storageaccount = "<storageaccountname>"
$storageaccountkey = (Get-AzureStorageKey -StorageAccountName $storageaccount).Primary
$storagecontext = New-AzureStorageContext -StorageAccountName $storageaccount -StorageAccountKey $storageaccountkey
$password = "P@ssw0rd"
$encryptionpassword = $password | ConvertTo-SecureString -AsPlainText -Force
$autobackupconfig = New-AzureVMSqlServerAutoBackupConfig -StorageContext $storagecontext -Enable -RetentionPeriod 10 -EnableEncryption -CertificatePassword $encryptionpassword
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -AutoBackupSettings $autobackupconfig | Update-AzureVM
To disable automated backup, run the same script without the **-Enable** parameter for **New-AzureVMSqlServerAutoBackupConfig**. As with installation, it can take several minutes to disable Automated Backup.
## Disabling and uninstalling the SQL Server IaaS Agent
If you want to disable the SQL Server IaaS Agent used for automated backup and patching, use the following command:
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -Disable | Update-AzureVM
To uninstall the SQL Server IaaS Agent, use the following syntax:
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -Uninstall | Update-AzureVM
You can also uninstall the extension by using the **Remove-AzureVMSqlServerExtension** command:
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Remove-AzureVMSqlServerExtension | Update-AzureVM
>[AZURE.NOTE] Disabling and uninstalling the SQL Server IaaS Agent does not remove the previously configured Managed Backup settings. You should disable Automated Backup before disabling or uninstalling the SQL Server IaaS Agent.
## Compatibility
The following products are compatible with the SQL Server IaaS Agent automated backup feature:
- Windows Server 2012
- Windows Server 2012 R2
- SQL Server 2014 Standard
- SQL Server 2014 Enterprise
## Next steps
Automated Backup configures Managed Backup on Azure VMs, so it is important to [review the Managed Backup documentation](https://msdn.microsoft.com/library/dn449496.aspx) to understand its behavior and implications.
You can find additional backup and restore guidance for SQL Server on Azure VMs in the following topic: [Backup and Restore for SQL Server in Azure Virtual Machines](virtual-machines-sql-server-backup-and-restore.md).
A related feature for SQL Server VMs in Azure is [Automated Patching for SQL Server in Azure Virtual Machines](virtual-machines-sql-server-automated-patching.md).
Review other [resources for running SQL Server in Azure Virtual Machines](virtual-machines-sql-server-infrastructure-services.md).
<!---HONumber=AcomDC_0204_2016-->
# Tynker
## Watch This!
[Platformer Timelapse](http://www.youtube.com/watch?v=Tdt0aha-1tY)
## Overview
We are going to try out Tynker today. It's like Scratch, except it has Physics, so stuff like jumping is easier.
## Lesson
Let's do the Tynker intro lesson
## Review
## Advanced / Homework
Can you make your character bounce?
Use variables to make him jump at different speeds.
Use inventory to give him a super boost.
---
title: >-
Material footprint, material footprint per capita, and material footprint per
GDP
permalink: /8-4-1/
sdg_goal: 8
layout: indicator
indicator: 8.4.1
indicator_variable: null
graph: null
graph_type_description: EPA does not have these data
graph_status_notes: unk
variable_description: null
variable_notes: null
un_designated_tier: '3'
un_custodial_agency: 'UNEP (Partnering Agencies: OECD)'
target_id: '8.4'
has_metadata: true
rationale_interpretation: >-
Material footprint of consumption reports the amount of primary materials
required to serve final demand of a country and can be interpreted as an
indicator for the material standard of living/level of capitalization of an
  economy. Per-capita MF describes the average material use for final demand.
DMC and MF need to be looked at in combination as they cover the two aspects
of the economy, production and consumption. The DMC reports the actual amount
of material in an economy, MF the virtual amount required across the whole
  supply chain to service final demand. A country can, for instance, have a very
high DMC because it has a large primary production sector for export or a very
low DMC because it has outsourced most of the material intensive industrial
processes to other countries. The material footprint corrects for both
phenomena.
goal_meta_link: 'http://unstats.un.org/sdgs/files/metadata-compilation/Metadata-Goal-8.pdf'
goal_meta_link_page: 7
indicator_name: >-
Material footprint, material footprint per capita, and material footprint per
GDP
target: >-
Improve progressively, through 2030, global resource efficiency in consumption and production and endeavour to decouple economic growth from environmental degradation, in accordance with the 10-Year Framework of Programmes on Sustainable Consumption and Production, with developed countries taking the lead.
indicator_definition: >-
Material footprint (MF) is the attribution of global material extraction to
domestic final demand of a country. It is calculated as raw material
equivalent of imports (RMEIM) plus domestic extraction (DE) minus raw material
equivalents of exports (RMEEX). For the attribution of the primary material
  needs of final demand a global, multi-regional input-output (MRIO) framework
  is employed. The attribution method based on I-O analytical tools is described
in detail in Wiedmann et al. 2015. It is based on the EORA MRIO framework
developed by the University of Sydney, Australia (Lenzen et al. 2013) which is
  an internationally well-established and the most detailed and reliable MRIO
framework available to date.
source_title: null
source_notes: null
published: true
---
# custom-analytics
Custom events for google analytics
## Installing this script
At the bottom of a page that includes google analytics (currently only supported for `analytics.js`), embed the javascript with
```html
<script src = "path/to/custom-analytics.min.js"></script>
```
**below** the `analytics.js` embed script.
# Tracking non-scroll events
To track custom events, you will be calling either the `scroll_track` method (for scrolling events) or the `track` method (for all other events) of the `custom_analytics` function inside a `<script>` tag, underneath the link to `urban-analytics.js`. The function takes a [JavaScript object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Working_with_Objects) that is a list of settings to send to Google. Here's an example of what the `track` method call might look like. Below, there are descriptions of each value in the passed object.
```html
<script type="text/javascript">
$(".download-button")
.custom_analytics('track',{
category: "button",
action: "click",
label: "object_ID",
value: 1,
timing: true,
interaction: true
});
</script>
```
Let's break that up line by line.
## elements
In this example, `$(CSS_SELECTOR).custom_analytics('track',{options})` is the general form of function calls for this plugin. Since this code plugs in to jQuery, jQuery is required (it's standard on all Urban pages), and selection should be made as above, using jQuery selectors. This structure means that:
1. Any selector will be valid, whether it selects a single element (usually by id) or multiple elements (usually by class). Selections made on multiple elements will make separate calls to google analytics for each element, using the same options for all calls.
2. Functions can be chained together. This means the following structure would work great:
```javascript
$(".download-button")
.custom_analytics('track',{OPTIONS FOR CLICK EVENTS})
.custom_analytics('track',{OPTIONS FOR MOUSEOVER EVENTS})
.custom_analytics('scroll_track',{OPTIONS FOR SCROLL EVENTS (SEE BELOW)})
```
## category
*This is a required field.*
The `category` key is used by Google Analytics to group similar events together. It can be any String value you like. As a best practice, I suggest using element types as categories (button, video, paragraph, window, text box, etc.).
## action
*This is a required field*
The next tier of classification Google uses is `action`s. The hierarchical structure means, for example:
All `button -> click`, `button -> focus`, or `button -> hover` events would be aggregated under `button` and could be further drilled down to examine the individual actions.
However, `button -> click`, `paragraph -> click`, and `image -> click` cannot be grouped together under the `click` action, they are all aggregated under their corresponding categories.
*IMPORTANT NOTE* The `action` field accepts a String, but it must be the name of a [standard web event](https://developer.mozilla.org/en-US/docs/Web/Events). Examples include `click`, `dblclick`, `mouseenter`, etc. See the [MDN reference](https://developer.mozilla.org/en-US/docs/Web/Events) for a list of all event names.
## label
*This is NOT a required field*
The optional `label` is a third tier of classification that Google uses. So, for example, each type of `button -> click` could have a separate label ("Download button", "Animate button", "Toggle button", etc.) If you do not specify a label, none will be sent.
#### Special labels:
There are two special values for the `label` parameter.
- `label: "object_ID"` will use the object's css id as the label to send to google analytics.
- `label: "object_HTML"` will send the entire html of the object as a label, e.g. `<button id = "cool_button" class = "buttons">Button</button>`
#### label function
The label attribute may also be a function of the form...
```javascript
function (el) {
return ...
}
```
which takes in the selected DOM node and returns a string.
## value
*This is NOT a required field*
The optional `value` can be used to attach an *integer* value to an event. Google tracks averages of these values, so they could be used for functions such as scroll depth tracking (see below), or any other numeric value. If you do not specify a value, none will be sent.
## timing
*This is NOT a required field*
Google analytics can also send "timing" events, which are separate in the analytics dashboard from all other events. If you enable timing by setting it to `true`, then the given event will be paired with a separate timing event, storing how long after the page loaded the event happened (in milliseconds). *If you do not specifically set this value to `true`, it defaults to `false`.*
## interaction
*This is NOT a required field*
By default, all events are set as non-interaction events, meaning they will not affect the [bounce rate](https://support.google.com/analytics/answer/1009409?hl=en). It may make sense to set certain events as interaction events (e.g. button clicks far down on the page), but I'd recomend leaving the default value of `false` for the interaction field during initial implementation. Setting many events as interaction events will make cross-project analytics difficult, as well as comparison of projects over time. *If you do not specifically set this value to `true`, it defaults to `false`.*
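As an illustration (not taken from this plugin's source), an event call like the ones above ultimately corresponds to an `analytics.js` event hit. The sketch below builds the field object such a hit would carry; `buildEventFields` is a hypothetical helper name, while the field names themselves (`hitType`, `eventCategory`, `nonInteraction`, ...) are the documented `analytics.js` names:

```javascript
// Hypothetical sketch: mapping this plugin's options onto analytics.js
// event fields. The helper itself is an assumption, not plugin source.
function buildEventFields(opts) {
  return {
    hitType: "event",
    eventCategory: opts.category,     // e.g. "button"
    eventAction: opts.action,         // e.g. "click"
    eventLabel: opts.label,           // optional
    eventValue: opts.value,           // optional integer
    nonInteraction: !opts.interaction // defaults to a non-interaction hit
  };
}

// ga('send', buildEventFields({...})) would then report the event.
const fields = buildEventFields({
  category: "button",
  action: "click",
  label: "object_ID",
  value: 1,
  interaction: false
});
console.log(fields.nonInteraction); // true
```

This is why leaving `interaction` unset keeps the event out of bounce-rate calculations: the hit is sent with `nonInteraction: true`.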
# Tracking scroll events
There are two types of scroll events:
- **breakpoint** events fire when the user has scrolled 25%, 50%, 75%, and 100% of the way down the page
- **element bound** events fire when a user has scrolled past a certain element.
Here are examples of each event type:
## Breakpoint event
Breakpoint events are always called on the `window` object.
```javascript
$(window)
.custom_analytics('scrollTrack', {
timing: true,
interaction: true
});
```
**NOTE: For breakpoint events you should not set the `category`, `action`, `label`, or `value` fields**
For breakpoint events, the `category` is always set to `scroll breakpoint`, the `action` is set to `scroll`, the `label` is set to the percentage down the page, and the `value` is set to the number of pixels down the page (which may differ across devices). As above, `timing` and `interaction` default to false, but may be set to true.
## Element bound events
```javascript
$("#download-button")
.custom_analytics('scrollTrack', {
label: "object_HTML",
value: 1,
timing: true,
interaction: true
});
```
For element bound events, the `category` is always set to "scroll past element" and the `action` is always set to "scroll". You may specify:
#### label
The same as for non-scrolling events, including the special "object_ID" and "object_HTML" labels.
#### value
The same as for non-scrolling events. **Note that if no value is specified, a default value will be sent to google analytics, equal to the distance in pixels from the top of the page to the top of the element**
#### timing
The same as for non-scrolling events.
#### interaction
The same as for non-scrolling events.
## Both breakpoint and element bound events
If you want to call scroll events on certain objects, but also track the 25%, 50%, 75%, 100% breakpoints, you may do so with a call like:
```javascript
$("#download-button")
.custom_analytics('scrollTrack', {
breakpoints: true
});
```
---
layout: post
title: "Pixelated Semantic Colorization"
date: 2019-01-27 20:28:48
categories: arXiv_CV
tags: arXiv_CV Segmentation Embedding CNN Semantic_Segmentation
author: Jiaojiao Zhao, Jungong Han, Ling Shao, Cees G. M. Snoek
mathjax: true
---
* content
{:toc}
##### Abstract
While many image colorization algorithms have recently shown the capability of producing plausible color versions from gray-scale photographs, they still suffer from limited semantic understanding. To address this shortcoming, we propose to exploit pixelated object semantics to guide image colorization. The rationale is that human beings perceive and distinguish colors based on the semantic categories of objects. Starting from an autoregressive model, we generate image color distributions, from which diverse colored results are sampled. We propose two ways to incorporate object semantics into the colorization model: through a pixelated semantic embedding and a pixelated semantic generator. Specifically, the proposed convolutional neural network includes two branches. One branch learns what the object is, while the other branch learns the object colors. The network jointly optimizes a color embedding loss, a semantic segmentation loss and a color generation loss, in an end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that our network, when trained with semantic segmentation labels, produces more realistic and finer results compared to the colorization state-of-the-art.
##### URL
[http://arxiv.org/abs/1901.10889](http://arxiv.org/abs/1901.10889)
##### PDF
[http://arxiv.org/pdf/1901.10889](http://arxiv.org/pdf/1901.10889)
# Policy Client
Policy Client allows Cloud Foundry system components to query the policy server
for policies. It is currently used by the VXLAN policy agent (in
[silk-release](https://github.com/cloudfoundry/silk-release))
and [copilot](https://github.com/cloudfoundry/copilot).
## Getting Help
For help or questions about this component, you can reach the maintainers on Slack at [cloudfoundry.slack.com](https://cloudfoundry.slack.com) in the `#networking` channel.
# ShellPhish-master-MOD
Hack websites using this simple tool written in bash shell script
# Domains
Below is the list of currently implemented domain types from
[Meshes.jl](https://github.com/JuliaGeometry/Meshes.jl)
## PointSet
```@docs
PointSet
```
```@example domains
using GeoStats # hide
using Plots # hide
gr(size=(600,600)) # hide
plot(PointSet(rand(3,100)), camera=(30,60))
```
## CartesianGrid
```@docs
CartesianGrid
```
```@example domains
plot(CartesianGrid(10,10,10), camera=(30,60))
```
# bakabot-component
Provides the base classes and interfaces for Bakabot components.
## Installation
`composer require bakabot/component`
# useinterval
> React hook for setting up an interval
[](https://www.npmjs.com/package/useinterval) [](https://standardjs.com)
## Install
```bash
npm install --save @slynch13/useinterval
```
## Usage
```jsx
import React, { useState } from 'react'
import { useInterval } from 'useinterval'
const App = () => {
let [example, setExample] = useState(0)
useInterval(() => {
    setExample(x => x + 1)
}, 1000)
return (
<div>
{example}
</div>
)
}
export default App
```
## License
MIT © [slynch13](https://github.com/slynch13)
---
This hook is created using [create-react-hook](https://github.com/hermanya/create-react-hook).
# How to Join a Test Network
TODO
1. Introduce the test networks and the differences between them.
2. How to get test token from test network.
3. Run node and join test network.
## Join the Halley network
**Halley** is Starcoin's first test network. Its on-chain data is cleaned up periodically.
You can join the Halley network with the following command:
```shell
starcoin -n halley
```
The name "Halley" is inspired by [Comet Halley](https://en.wikipedia.org/wiki/Halley%27s_Comet), officially designated 1P/Halley, a short-period comet visible from Earth every 75-76 years.
## Join the Proxima network
**Proxima** is Starcoin's long-running test network, released in the third quarter of 2020.
You can join the Proxima network with the following command:
```shell
starcoin -n proxima
```
The name "Proxima" is inspired by [Proxima Centauri](https://en.wikipedia.org/wiki/Proxima_Centauri), a small, low-mass star 4.244 light-years (1.301 pc) from the Sun in the southern constellation of Centaurus.
## Join the Barnard network
**Barnard** is Starcoin's permanent test network and the successor to Proxima.
You can join the Barnard network with the following command:
```shell
starcoin -n barnard
```
The name "Barnard" is inspired by [Barnard's Star](https://en.wikipedia.org/wiki/Barnard%27s_Star), a red dwarf star in the constellation Ophiuchus, about 6 light-years from Earth.
# reward-weighted-regression
This repository contains the source code for the experiments presented in *Reward-Weighted Regression Converges to a Global Optimum* by Miroslav Štrupl, Francesco Faccio, Dylan R. Ashley, Rupesh Kumar Srivastava, and Jürgen Schmidhuber.
To produce the plots shown in the paper, first ensure you have python 3.8.0 installed and execute the following in a bash shell:
```bash
pip install -r requirements.txt
./build.sh
./tasks_0.sh
./plot.py results.pdf
```
After execution has completed, a `results.pdf` file should have been generated.
| 43.615385 | 236 | 0.790123 | eng_Latn | 0.996041 |
0d7cb2c96f57dc089b21ef395421647c5ccc5d29 | 5,899 | md | Markdown | docs/projeto/evm.md | fga-eps-mds/EPS-2020-2-G2 | 6d5e9199a0d47cf2a935affdb4b1e2fd4446771d | [
"MIT"
] | 5 | 2021-02-06T14:29:06.000Z | 2021-02-08T14:46:37.000Z | docs/projeto/evm.md | fga-eps-mds/2020.2-Eccoar | 6d5e9199a0d47cf2a935affdb4b1e2fd4446771d | [
"MIT"
] | 162 | 2021-03-04T20:01:48.000Z | 2021-05-26T12:48:08.000Z | docs/projeto/evm.md | fga-eps-mds/EPS-2020-2-G2 | 6d5e9199a0d47cf2a935affdb4b1e2fd4446771d | [
"MIT"
] | 5 | 2021-02-06T17:30:54.000Z | 2021-02-25T22:39:23.000Z | # Earned Value Management
## Revision History
| Author | Changes | Date | Version |
| -------------------------------------------------- | ------------------------------------------ | ---------- | ------ |
| [Matheus Blanco](https://github.com/MatheusBlanco) | Document creation | 13/03/2021 | 1.0 |
| [Matheus Blanco](https://github.com/MatheusBlanco) | Added the table and explained the formulas | 13/03/2021 | 2.0 |
| [Matheus Blanco](https://github.com/MatheusBlanco) | Adjusted the cost values | 21/03/2021 | 3.0 |
## 1. Introduction
The _Earned Value Management_ (EVM) measurement technique is widely used by project managers because it integrates a project's resource, scope, and schedule definitions so as to measure the project's progress at regular intervals, presenting that measurement in monetary values to make it easier to understand.
EVM is calculated in a series of steps that combine the costs, the amount of progress achieved in a given period of time, and the planned value defined by the team and the scope. The values are:
- Planned Value (PV);
- Actual Cost (AC);
- Earned Value (EV);
The team's EVM spreadsheet is available [here](https://docs.google.com/spreadsheets/d/1dIPz4ku9tcfeAwUH-QSNfKg3fyFHK_nozA9CRUzxwNs/edit?usp=sharing).
<iframe src="https://docs.google.com/spreadsheets/d/1dIPz4ku9tcfeAwUH-QSNfKg3fyFHK_nozA9CRUzxwNs/edit?usp=sharing" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
## Planned Value (PV)
The planned value of a project refers to its total cost over a given period of time, taking into account the BAC, the total cost budgeted for the project. This project's BAC was calculated from the costs of tools, the salaries of the members involved, and the equipment they needed in order to work. These figures are detailed further in the [Project Charter](./tap.md) and add up to **R$ 133.520,96**.
With the BAC calculated, the PV can be obtained by dividing the BAC across the project's checkpoints (_sprints_) and assigning a portion of the BAC to each one. Usually the BAC is divided by the number of _sprints_ and that amount is assigned to each of them.
Using the formula **PV = % complete (planned) \* BAC**, the PV reached at any given point of the project can be obtained.
Example: if the project lasts 4 months, the BAC is **R$ 133.520,96**, and two months have passed with 75% planned completion, the PV would be 75% \* **R$ 133.520,96**.
## Actual Cost (AC)
The AC represents the actual costs incurred to complete a given percentage of the project. Its final result shows the amount the team spent to complete some number of _sprints_ or user stories, or to reach a given project checkpoint. It has no fixed formula, since it is represented directly by the project's actual expenditures.
## Earned Value (EV)
The EV represents the value created up to a given point in the project. If the project is stopped after two months, for example, the EV shows the client the value generated by the project's progress.
Its formula is: **EV = % complete (actual) \* BAC**.
Example: if the project lasts 4 months, the BAC is **R$ 133.520,96**, two months have passed with 75% completion, and **R$ 50.000,00** has been spent, the EV would be 75% \* **R$ 133.520,96** = **R$ 100.140,72**. The project is therefore worth more than what has been spent so far.
## Other Calculated Indices
Besides the three main values, there are derived measures that can be calculated from their results and should be tracked per _sprint_. These are:
**Schedule Variance (SV)** indicates the difference between the amount of work performed and the amount of work planned. Its formula involves the planned value and the earned value (**SV = EV - PV**), and its result should be read by sign: positive when the project is ahead of schedule, zero when the project is on schedule, and negative when the project is behind schedule.
**Cost Variance (CV)** indicates the difference between the planned cost and the actual cost of the project (**CV = EV - AC**), and its result should be read by sign: positive when the project is under budget, zero when the project is exactly on budget, and negative when the project has gone over budget.
**Schedule Performance Index (SPI)** shows the project's progress relative to its schedule. It is given by the formula **SPI = EV/PV**, and its result should be read as follows:
- Greater than 1: the project is ahead of schedule;
- Less than 1: the project is behind schedule;
- Equal to 1: the project is on schedule;
**Cost Performance Index (CPI)**, like the SPI, shows the project's cost efficiency by measuring the earned value against the project's actual costs. It is calculated as **CPI = EV/AC**, where the result is:
- Greater than 1: the project is under budget and is being profitable;
- Less than 1: the project is over budget and is not worth the costs incurred;
- Equal to 1: the earned value equals the project's costs;
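Taken together, the formulas above can be checked with a short Python sketch. The figures are the ones from this document's own examples; the function and variable names are ours, for illustration only:

```python
def evm_metrics(bac, planned_pct, actual_pct, actual_cost):
    """Compute the basic Earned Value Management indicators."""
    pv = planned_pct * bac   # Planned Value
    ev = actual_pct * bac    # Earned Value
    return {
        "PV": pv,
        "EV": ev,
        "SV": ev - pv,            # Schedule Variance (> 0: ahead of schedule)
        "CV": ev - actual_cost,   # Cost Variance (> 0: under budget)
        "SPI": ev / pv,           # Schedule Performance Index (> 1: ahead)
        "CPI": ev / actual_cost,  # Cost Performance Index (> 1: profitable)
    }

# Figures from the document's examples: BAC of R$ 133.520,96,
# 75% completion, and R$ 50.000,00 actually spent.
m = evm_metrics(bac=133_520.96, planned_pct=0.75, actual_pct=0.75, actual_cost=50_000.00)
print(round(m["EV"], 2))  # 100140.72 -> the project is worth more than was spent
```

With equal planned and actual completion, SV is zero and SPI is exactly 1, matching the sign rules described above.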
## References:
- **A BEGINNER'S Guide to Earned Value Management**: Earned value is a technique used in project management to estimate where a project is versus the planned budget and schedule. We'll consider its benefits and how to calculate it. [S. l.], 9 Jan. 2021. Available at: https://www.fool.com/the-blueprint/earned-value-management/. Accessed: 13 Mar. 2021.
| 84.271429 | 479 | 0.731141 | por_Latn | 0.999441 |
0d7cf56aa46f0d4c3759f8352af55835f92c1a93 | 18 | md | Markdown | README.md | JanneMattila/210-spa-func-b2c | c34ccb9f5d6f1e703801ca3b7b097587a3f1f659 | [
"MIT"
] | null | null | null | README.md | JanneMattila/210-spa-func-b2c | c34ccb9f5d6f1e703801ca3b7b097587a3f1f659 | [
"MIT"
] | null | null | null | README.md | JanneMattila/210-spa-func-b2c | c34ccb9f5d6f1e703801ca3b7b097587a3f1f659 | [
"MIT"
] | null | null | null | # 210-spa-func-b2c | 18 | 18 | 0.722222 | vie_Latn | 0.350566 |
0d7d5fa5eb942818e106d8be29480bdec37cd873 | 1,983 | md | Markdown | _pages/publications.md | NabinGiri/nabingiri.github.io | e1a95120e45ca0fcd62c4bed3cc8f13685019d18 | [
"MIT"
] | null | null | null | _pages/publications.md | NabinGiri/nabingiri.github.io | e1a95120e45ca0fcd62c4bed3cc8f13685019d18 | [
"MIT"
] | null | null | null | _pages/publications.md | NabinGiri/nabingiri.github.io | e1a95120e45ca0fcd62c4bed3cc8f13685019d18 | [
"MIT"
] | null | null | null | ---
layout: archive
title: "Publications"
permalink: /publications/
author_profile: true
---
{% if author.googlescholar %}
You can also find my articles on <u><a href="{{author.googlescholar}}">my Google Scholar profile</a>.</u>
{% endif %}
{% include base_path %}
**Nabin Giri** , Jianlin Cheng. <a href="https://www.journals.elsevier.com/current-opinion-in-structural-biology" target="_blank">Artificial Intelligence (AI) Methodologies in Structural Biology section (2023) of Current Opinion in Structural Biology</a> - *Invited - Manuscript in Preparation*
Elham Soltanikazemi, Raj S. Roy, Farhan Quadir, **Nabin Giri**, Alex Morehead, Jianlin Cheng. <a href="https://arxiv.org/abs/2205.13594" target="_blank">DRLComplex: Reconstruction of protein quaternary structures using deep reinforcement learning</a> - *Preprint*
**Nabin Giri** , Jianlin Cheng. <a href="https://www.biorxiv.org/content/10.1101/2022.05.27.493799v1" target="_blank">A Deep Learning Bioinformatics Approach to Modeling Protein-Ligand Interaction with cryo-EM Data in 2021 Ligand Model Challenge</a> - *Preprint*
Gao, Mu, Peik Lund-Andersen, Alex Morehead, Sajid Mahmud, Chen Chen, Xiao Chen, **Nabin Giri** et al. <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9652872" target="_blank">High-Performance Deep Learning Toolbox for Genome-Scale Prediction of Protein Structure and Function</a> - *2021 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC)*
---------------------------------------
My master's thesis titled **Recommendation System Using Factorization Model and MapReduce Framework** is available <a href="https://zenodo.org/record/6591586#.YpPLquzMJBY" target="_blank">here</a>, the codes are available in GitHub <a href="https://github.com/nabingiri/recommendation-system" target="_blank">repository</a>, and to watch my three-minute thesis presentation please go <a href="https://youtu.be/KVL9eQ35YSY" target="_blank">here</a>.
| 79.32 | 449 | 0.747352 | eng_Latn | 0.325786 |
0d7da45ceac4fcf950d6b3d3751d3b0e0dac3aa2 | 6,726 | md | Markdown | articles/web-application-firewall/afds/waf-front-door-create-portal.md | Microsoft/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 7 | 2017-08-28T08:02:11.000Z | 2021-05-05T07:47:55.000Z | articles/web-application-firewall/afds/waf-front-door-create-portal.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 476 | 2017-10-15T08:20:18.000Z | 2021-04-16T05:20:11.000Z | articles/web-application-firewall/afds/waf-front-door-create-portal.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 39 | 2017-08-03T09:46:48.000Z | 2021-11-05T11:41:27.000Z | ---
title: 'Tutorial: Create a WAF policy for Azure Front Door - Azure portal'
description: In this tutorial, you learn how to create a Web Application Firewall (WAF) policy by using the Azure portal.
author: vhorne
ms.service: web-application-firewall
services: web-application-firewall
ms.topic: tutorial
ms.date: 03/31/2021
ms.author: victorh
ms.openlocfilehash: e7b4544530dc9c0c894ae7a0f2b1d2830f895928
ms.sourcegitcommit: 9f4510cb67e566d8dad9a7908fd8b58ade9da3b7
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/01/2021
ms.locfileid: "106122264"
---
# <a name="tutorial-create-a-web-application-firewall-policy-on-azure-front-door-using-the-azure-portal"></a>Självstudie: skapa en brand Väggs princip för webb program på Azure-frontend med hjälp av Azure Portal
I den här självstudien får du lära dig hur du skapar en grundläggande WAF-princip (Azure Web Application Firewall) och använder den på en klient dels värd i Azures front dörr.
I den här guiden får du lära dig att:
> [!div class="checklist"]
> * Skapa en WAF-princip
> * Koppla den till en klient dels värd
> * Konfigurera WAF-regler
## <a name="prerequisites"></a>Förutsättningar
Skapa en [frontend-dörr](../../frontdoor/quickstart-create-front-door.md) eller en [Standard-/Premium-profil för front dörren](../../frontdoor/standard-premium/create-front-door-portal.md) .
## <a name="create-a-web-application-firewall-policy"></a>Skapa en brand Väggs princip för webb program
Börja med att skapa en grundläggande WAF-princip med hanterad standard regel uppsättning (DRS) med hjälp av portalen.
1. On the top left-hand side of the screen, select **Create a resource** > search for **WAF** > select **Web Application Firewall (WAF)** > select **Create**.
1. In the **Basics** tab of the **Create a WAF policy** page, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
| Setting | Value |
| --- | --- |
| Policy for | Select **Global WAF (Front Door)**. |
| Front Door SKU | Select between Basic, Standard, and Premium SKUs. |
| Subscription | Select your Front Door subscription name.|
| Resource group | Select your Front Door resource group name.|
| Policy name | Enter a unique name for your WAF policy.|
| Policy state | Set as **Enabled**. |
:::image type="content" source="../media/waf-front-door-create-portal/basic.png" alt-text="Screenshot of the Create a WAF policy page, with a Review + create button and list boxes for the subscription, resource group, and policy name.":::
1. In the **Association** tab of the **Create a WAF policy** page, select **+ Associate a Front Door profile**, enter the following settings, and then select **Add**:
| Setting | Value |
| --- | --- |
| Front Door profile | Select your Front Door profile name. |
| Domains | Select the domains you want to associate the WAF policy to, and then select **Add**. |
:::image type="content" source="../media/waf-front-door-create-portal/associate-profile.png" alt-text="Screenshot of the Associate a Front Door profile page.":::
> [!NOTE]
> If the domain is associated with a WAF policy, it is shown as grayed out. You must first remove the domain from the associated policy, and then associate the domain to a new WAF policy.
1. Select **Review + create**, and then select **Create**.
## <a name="configure-web-application-firewall-rules-optional"></a>Konfigurera brand Väggs regler för webb program (valfritt)
### <a name="change-mode"></a>Ändra läge
När du skapar en WAF-princip är standard principen för WAF i **identifierings** läge. I **identifierings** läge blockerar WAF inte några begär Anden, i stället loggas begär Anden som matchar WAF-reglerna på WAF-loggar.
Om du vill se WAF i praktiken kan du ändra läges inställningarna från **identifiering** till **förebyggande**. I **skydds** läge blockeras och loggas begär Anden som matchar regler som definieras i standard regel UPPSÄTTNINGEN (DRS) och loggas på WAF-loggar.
:::image type="content" source="../media/waf-front-door-create-portal/policy.png" alt-text="Skärm bild av avsnittet princip inställningar. Växla läge är inställt på förebyggande.":::
### <a name="custom-rules"></a>Anpassade regler
Du kan skapa en anpassad regel genom att välja **Lägg till anpassad regel** under avsnittet **anpassade regler** . Då startas sidan anpassad regel konfiguration.
:::image type="content" source="../media/waf-front-door-create-portal/custom-rules.png" alt-text="Skärm bild av sidan anpassade regler.":::
Nedan visas ett exempel på hur du konfigurerar en anpassad regel för att blockera en begäran om frågesträngen innehåller **blockme**.
:::image type="content" source="../media/waf-front-door-create-portal/customquerystring2.png" alt-text="Skärm bild av sidan anpassad regel konfiguration som visar inställningarna för en regel som kontrollerar om variabeln QueryString innehåller värdet blockme.":::
### <a name="default-rule-set-drs"></a>Standard regel uppsättning (DRS)
Azure-hanterad standard regel uppsättning är aktiverat som standard. Den aktuella standard versionen är DefaultRuleSet_1.0. Från WAF **Managed Rules**, **Assign**, nyligen tillgängliga ruleset Microsoft_DefaultRuleSet_1 1 finns i list rutan.
Om du vill inaktivera en enskild regel markerar du **kryss rutan** framför regel numret och väljer **inaktivera** överst på sidan. Om du vill ändra åtgärds typer för enskilda regler i regel uppsättningen markerar du kryss rutan framför regel numret och väljer sedan **åtgärden ändra** överst på sidan.
:::image type="content" source="../media/waf-front-door-create-portal/managed-rules.png" alt-text="Skärm bild av sidan hanterade regler som visar en regel uppsättning, regel grupper, regler och knapparna Aktivera, inaktivera och ändra åtgärd." lightbox="../media/waf-front-door-create-portal/managed-rules-expanded.png":::
## <a name="clean-up-resources"></a>Rensa resurser
Ta bort resurs gruppen och alla relaterade resurser när de inte längre behövs.
## <a name="next-steps"></a>Nästa steg
> [!div class="nextstepaction"]
> [Läs mer om Azures front dörr](../../frontdoor/front-door-overview.md)
> [Läs mer om Azures främre dörr standard/Premium](../../frontdoor/standard-premium/overview.md)
| 66.594059 | 322 | 0.713946 | swe_Latn | 0.99721 |
0d7dc1e636b862d513458fc316d1bb2851d59f36 | 1,932 | md | Markdown | README.md | danfromisrael/secret-agent | 41a6c144ca73d86460d4fa4e76bced5b80b2a981 | [
"MIT"
] | 2 | 2018-01-23T05:39:55.000Z | 2018-04-10T17:29:15.000Z | README.md | danfromisrael/secret-agent | 41a6c144ca73d86460d4fa4e76bced5b80b2a981 | [
"MIT"
] | 4 | 2017-11-23T21:48:34.000Z | 2017-11-25T18:11:45.000Z | README.md | buzzdan/secret-agent | 41a6c144ca73d86460d4fa4e76bced5b80b2a981 | [
"MIT"
] | null | null | null | # secret-agent
Secret agent is in the isolated country nearby, where exactly?
Good morning, agent. MI6 is trying to gather statistics about its missions.
Your mission, should you decide to accept it, has two parts:
## Endpoint: GET /countries-by-isolation
An isolated agent is defined as an agent that participated in a single mission. This endpoint finds the most isolated country (the country with the highest degree of isolation).
For the sample input (see the [db-setup script](https://github.com/danfromisrael/secret-agent/blob/master/scripts/db-setup.js)):
- Brazil has 1 isolated agent (008) and 2 non-isolated agents (007, 005)
- Poland has 2 isolated agents (011, 013) and one non-isolated agent (005)
- Morocco has 3 isolated agents (002, 009, 003) and one non-isolated agent (007)
So the result is Morocco with an isolation degree of 3.
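The counting logic behind this endpoint can be sketched in a few lines of Python. This is a standalone illustration, not the service's actual code (the real service queries its database); the agent/country pairs below mirror the sample data above:

```python
from collections import defaultdict

def most_isolated_country(missions):
    """missions: iterable of (agent_id, country) pairs.

    An isolated agent participated in exactly one mission; a country's
    isolation degree is its number of isolated agents.
    """
    mission_count = defaultdict(int)    # missions per agent
    agent_countries = defaultdict(set)  # countries each agent visited
    for agent, country in missions:
        mission_count[agent] += 1
        agent_countries[agent].add(country)

    isolation = defaultdict(int)
    for agent, count in mission_count.items():
        if count == 1:
            # An isolated agent visited exactly one country.
            (country,) = agent_countries[agent]
            isolation[country] += 1
    return max(isolation.items(), key=lambda kv: kv[1])

# Pairs mirroring the sample input above:
missions = [
    ("008", "Brazil"), ("007", "Brazil"), ("005", "Brazil"),
    ("011", "Poland"), ("013", "Poland"), ("005", "Poland"),
    ("002", "Morocco"), ("009", "Morocco"), ("003", "Morocco"), ("007", "Morocco"),
]
print(most_isolated_country(missions))  # ('Morocco', 3)
```

Agents 005 and 007 appear in two missions each, so they are excluded, leaving Morocco with three isolated agents, as above.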
## Endpoint: POST /find-closest
Body: { "target-location": "[address or geo coordinates]" }
Finds the closest and farthest missions from a specific address (it uses the Google Maps Geocoding API for this).
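Once the Geocoding API has turned the target address into coordinates, ranking missions reduces to a great-circle distance computation. A minimal, self-contained sketch follows; the mission list and field names here are hypothetical, not the service's actual schema:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

def closest_and_farthest(target, missions):
    """target: (lat, lon); missions: list of dicts with 'name', 'lat', 'lon'."""
    ranked = sorted(
        missions,
        key=lambda m: haversine_km(target[0], target[1], m["lat"], m["lon"]),
    )
    return ranked[0], ranked[-1]

# Hypothetical mission coordinates:
missions = [
    {"name": "Rio",    "lat": -22.91, "lon": -43.17},
    {"name": "Warsaw", "lat": 52.23,  "lon": 21.01},
    {"name": "Rabat",  "lat": 34.02,  "lon": -6.84},
]
closest, farthest = closest_and_farthest((48.86, 2.35), missions)  # target: Paris
print(closest["name"], farthest["name"])  # Warsaw Rio
```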
# How to run the service
### Environment Variables Setup
Create the following environment variables to make the project work:
GEOCODE_API_KEY - Create an API key on [Google Developer Console](https://console.developers.google.com) for [Google Maps Geocoding API service](https://developers.google.com/maps/documentation/geocoding/intro)
(you can create a .env file with the env variable)
## Docker
For Production:
`docker-compose up`
For Development:
`docker-compose -f docker-compose.dev.yml up`
* Will activate shared volume of host directory inside the container
* Watches the files as development takes place
* Server restarts for each change
Wanna just compose it and bash into the container without starting the service?
You got it!
`docker-compose -f docker-compose.dev.yml run web bash`
just go in there and do whatever you feel like -> you got full control
| 36.45283 | 210 | 0.771739 | eng_Latn | 0.993091 |
0d7e4f545d85d53308f7b6b5e33e8dcdd34d95a8 | 1,865 | md | Markdown | README.md | NishTech/Sitecore-Commerce-Integration-Framework | 2fadbb501f03278733a7504c0925f6cff099330c | [
"MIT"
] | 1 | 2018-07-23T07:59:15.000Z | 2018-07-23T07:59:15.000Z | README.md | NishTech/Sitecore-Commerce-Integration-Framework | 2fadbb501f03278733a7504c0925f6cff099330c | [
"MIT"
] | null | null | null | README.md | NishTech/Sitecore-Commerce-Integration-Framework | 2fadbb501f03278733a7504c0925f6cff099330c | [
"MIT"
] | null | null | null | # Sitecore Commerce Integration Framework
In most online ecommerce implementations, integration with backend systems like ERP, PIM etc. plays an important role. Most companies spend years building these systems and want to keep using them. A modern ecommerce platform like Sitecore Experience Commerce helps customers to enter in the much needed digital commerce space, but it needs a communication link to the backend system to complete ecommerce transactions. This open source integration framework for Sitecore Commerce will give a jump start for your Sitecore Commerce project that needs to integrate with backend system.
## How It Works
There are two plugins in this solution
- Plugin.NishTech.IntegrationFramework
This is the core framework project.
- Plugin.NishTech.Sample.IntegrationProcessor
This is a sample project that shows, how to create an integration processor.
Add these two plugins in your Sitecore Commerce solution. In the Postman folder, you will find the postman collection, that you can use to create job scheduler entities and also review them.
In **src\Plugin.NishTech.Sample.IntegrationProcessor\SQL Scripts**, you will find some SQL scripts that can be used to load sample customer data in FakeERP database.
Add the following in the **PlugIn.AdventureWorks.CommerceMinions-1.0.0.json** and **PlugIn.Habitat.CommerceMinions-1.0.0.json** for creating the minion.
```json
{
"$type": "Sitecore.Commerce.Core.MinionPolicy, Sitecore.Commerce.Core",
"WakeupInterval": "00:05:00",
"ListToWatch": "QueuedJobs",
"FullyQualifiedName": "Plugin.NishTech.IntegrationFramework.JobSchedulerMinion, Plugin.NishTech.IntegrationFramework",
"ItemsPerBatch": 10,
"SleepBetweenBatches": 500
}
```
Bootstrap the application before using the framework.
Tested with Sitecore Commerce 9 Update 1.
| 66.607143 | 584 | 0.783914 | eng_Latn | 0.953272 |
0d7e71a9487f8f11581a127df526c482293bfda2 | 1,339 | md | Markdown | README.md | cb88/tme | 8737ce4fc8d0834676a473c7f3b033b1ff764a60 | [
"BSD-4-Clause"
] | 1 | 2017-12-14T09:03:41.000Z | 2017-12-14T09:03:41.000Z | README.md | cb88/tme | 8737ce4fc8d0834676a473c7f3b033b1ff764a60 | [
"BSD-4-Clause"
] | null | null | null | README.md | cb88/tme | 8737ce4fc8d0834676a473c7f3b033b1ff764a60 | [
"BSD-4-Clause"
] | null | null | null | The purpose of this project is to mirror code at:
http://people.csail.mit.edu/fredette/tme/tme-0.8.tar.gz
Build Instructions:
tar xzf glib-1.2.10.tar.gz
cd glib-1.2.10
./configure
make
make install (as root)
tar xzf gtk+-1.2.10.tar.gz
cd gtk+-1.2.10
./configure
make
make install (as root)
tar xzf tme-0.4.tar.gz
cd tme-0.4
./configure --disable-shared
make
make install (as root)
Setup:
http://people.csail.mit.edu/fredette/tme/sun4-75-nbsd.html
Running:
Add the following lines or similar at appropriate places in your .profile or .bashrc,
otherwise it isn't going to work:
PATH=$PATH:/usr/local/bin; export PATH
LTDL_LIBRARY_PATH=/usr/local/lib; export LTDL_LIBRARY_PATH
I haven't contacted the original developer but will get around to it once I have something to show.
1. My to-do is to fix Linux and up-to-date GCC support.
2. Perhaps remove libltdl, convert it to libdl or something to improve the situation.
3. Change GUI to SDL like QEMU since gtk is a sinking bloatship.
4. Improve non-BSD networking.
Currently I'm only going to work on 1.; if someone else wants to work on the rest, feel free!
Of course I'll mark 1. as done when it's done and move on.
Also I intend to mirror documentation from the sun3/4 zoo and original project pages
just in case something happens to them.
-cb88
| 24.345455 | 99 | 0.746079 | eng_Latn | 0.977106 |