hexsha (stringlengths 40) | size (int64, 5–1.04M) | ext (stringclasses 6) | lang (stringclasses 1) | max_stars_repo_path (stringlengths 3–344) | max_stars_repo_name (stringlengths 5–125) | max_stars_repo_head_hexsha (stringlengths 40–78) | max_stars_repo_licenses (sequencelengths 1–11) | max_stars_count (int64, 1–368k, ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24, ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24, ⌀) | max_issues_repo_path (stringlengths 3–344) | max_issues_repo_name (stringlengths 5–125) | max_issues_repo_head_hexsha (stringlengths 40–78) | max_issues_repo_licenses (sequencelengths 1–11) | max_issues_count (int64, 1–116k, ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24, ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24, ⌀) | max_forks_repo_path (stringlengths 3–344) | max_forks_repo_name (stringlengths 5–125) | max_forks_repo_head_hexsha (stringlengths 40–78) | max_forks_repo_licenses (sequencelengths 1–11) | max_forks_count (int64, 1–105k, ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24, ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24, ⌀) | content (stringlengths 5–1.04M) | avg_line_length (float64, 1.14–851k) | max_line_length (int64, 1–1.03M) | alphanum_fraction (float64, 0–1) | lid (stringclasses 191) | lid_prob (float64, 0.01–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3eab4d8ce2165fe355bb119ddf0dc956a224311d | 42 | md | Markdown | README.md | nsmoore/AISC | d260b110bf731ce44c941be355400ca687b3571a | [
"MIT"
] | null | null | null | README.md | nsmoore/AISC | d260b110bf731ce44c941be355400ca687b3571a | [
"MIT"
] | null | null | null | README.md | nsmoore/AISC | d260b110bf731ce44c941be355400ca687b3571a | [
"MIT"
] | null | null | null | # AISC
AISC project
Finished local testing | 14 | 22 | 0.833333 | eng_Latn | 0.764459 |
3eac5a6e432c005334fc1d9f2240d128a567bd93 | 7,369 | md | Markdown | docs/testrunner-toolkit/configuration/testcafe.md | titusfortner/sauce-docs | 25c288a540b146aafc98324b14f58a1f556b2062 | [
"MIT"
] | null | null | null | docs/testrunner-toolkit/configuration/testcafe.md | titusfortner/sauce-docs | 25c288a540b146aafc98324b14f58a1f556b2062 | [
"MIT"
] | null | null | null | docs/testrunner-toolkit/configuration/testcafe.md | titusfortner/sauce-docs | 25c288a540b146aafc98324b14f58a1f556b2062 | [
"MIT"
] | null | null | null | ---
id: testcafe
title: "Configuration Syntax: TestCafe"
sidebar_label: TestCafe
---
Please refer to the [Common Configuration Syntax Reference](/testrunner-toolkit/configuration#common-syntax-reference) for information regarding fields such as `apiVersion`, `kind`, and `sauce`.
## TestCafe Considerations
Execution time for TestCafe tests is limited to a maximum of 30 minutes. If the limit is exceeded, the test terminates and Sauce Control uploads assets (videos, screenshots, logs, etc..) to the Sauce Labs platform.
Consider breaking up longer TestCafe tests to optimize performance and ensure you do not exceed this time limit.
## Example Configuration
```yaml reference
https://github.com/saucelabs/saucectl-testcafe-example/blob/master/.sauce/config.yml
```
## `testcafe`
__Description__: Details specific to the `testcafe` project configuration.
__Type__: *object*
__Example__:
```yaml
testcafe:
version: ##VERSION##
```
### `version`
__Description__: Version of `testcafe` to use during tests
__Type__: *string*
__Example__:
```yaml
version: ##VERSION##
```
## `suites`
__Description__: Field for defining test suite details such as the suite `name`, desired `browserName`, and test configurations.
__Type__: *object*
__Examples__:
```yaml reference
https://github.com/saucelabs/saucectl-testcafe-example/blob/master/.sauce/config.yml#L20-L30
```
```yaml reference
https://github.com/saucelabs/saucectl-testcafe-example/blob/master/.sauce/config.yml#L39-L52
```
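In case the referenced files are not at hand, the sketch below shows what a minimal suite definition could look like; the suite name, browser and file glob are made-up placeholders, and only fields documented on this page are used.
```yaml
suites:
  - name: "saucy test"
    browserName: "chrome"
    platformName: "Windows 10"      # Sauce Cloud only
    screenResolution: "1920x1080"   # Sauce Cloud only
    src:
      - "tests/**/*.test.js"
```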
### `name`
__Description__: Name of the test suite.
__Type__: *string*
__Example__:
```yaml
- name: "saucy test"
```
### `browserName`
__Description__: Name of the desired browser. Although TestCafe supports running one test in multiple browsers, it is better here to split the browsers into separate suites so that each suite has its own test point.
__Type__: *string*
__Example__:
```yaml
browserName: "chrome"
```
### `src`
__Description__: The explicit name, file glob, or location of the test files.
__Type__: *object*
__Example__:
```yaml
src:
- "tests/test_file1.test.js"
- "tests/integrations"
- "*/*.test.js"
```
### `devices`
<p><small><span class="highlight sauce-cloud">Sauce Cloud only</span></small></p>
__Description__: Field for defining device details such as device `name`, `platformName`, `platformVersions`.
__Type__: *object*
__Example__:
```yaml
devices:
- name: iPhone 12 Simulator
platformName: iOS
platformVersions:
- "14.3"
```
### `env`
__Description__: Environment variables. Substituted variables like $MY_VAR can be expanded.
__Type__: *object*
__Example__:
```yaml
env:
hello: world
foo: $MY_BAR
```
### `screenshots`
__Description__: Screenshot settings for TestCafe. [TestCafe `screenshots` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#screenshots).
__Type__: *object*
__Example__:
```yaml
screenshots:
takeOnFails: true
fullPage: true
```
### `speed`
__Description__: Specifies the test execution speed. Tests are run at the maximum speed by default. You can use this option to slow the test down. Provide a number between 1 (the fastest) and 0.01 (the slowest).
__Type__: *float64*
__Example__:
```yaml
speed: 1
```
### `platformName`
<p><small><span class="highlight sauce-cloud">Sauce Cloud only</span></small></p>
__Description__: Operating system on which the browser and test runs.
__Type__: *string*
__Example__:
```yaml
platformName: "Windows 10"
```
### `screenResolution`
<p><small><span class="highlight sauce-cloud">Sauce Cloud only</span></small></p>
__Description__: Set browser window screen resolution.
__Type__: *string*
__Example__:
```yaml
screenResolution: "1920x1080"
```
### `disableScreenshots`
__Description__: Prevents TestCafe from taking screenshots. [TestCafe `disableScreenshots` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#disablescreenshots).
__Type__: *boolean*
__Example__:
```yaml
disableScreenshots: true
```
### `tsConfigPath`
__Description__: The absolute or relative path to the TypeScript configuration file. Relative paths are resolved against the current directory (the directory from which you run TestCafe).
__Type__: *string*
__Example__:
```yaml
tsConfigPath: /path/to/file
```
### `clientScripts`
__Description__: Injects scripts into all pages visited during the test. [TestCafe `clientScripts` definition](https://devexpress.github.io/testcafe/documentation/reference/test-api/fixture/clientscripts.html).
__Type__: *array*
__Example__:
```yaml
clientScripts: ["/path/to/file1", "/path/to/file2"]
```
### `skipJsErrors`
__Description__: Ignores JavaScript errors on a webpage. [Testcafe `skipJsErrors` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#skipjserrors).
__Type__: *boolean*
__Example__:
```yaml
skipJsErrors: true
```
### `quarantineMode`
__Description__: Enables the quarantine mode for tests that fail. [Testcafe `quarantineMode` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#quarantinemode).
__Type__: *boolean*
__Example__:
```yaml
quarantineMode: true
```
### `skipUncaughtErrors`
__Description__: Ignores uncaught errors and unhandled promise rejections in test code. [Testcafe `skipUncaughtErrors` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#skipUncaughtErrors).
__Type__: *boolean*
__Example__:
```yaml
skipUncaughtErrors: true
```
### `selectorTimeout`
__Description__: Specifies the time (in milliseconds) within which selectors attempt to return a node. [Testcafe `selectorTimeout` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#selectorTimeout).
__Type__: *int*
__Example__:
```yaml
selectorTimeout: 1000
```
### `assertionTimeout`
__Description__: Specifies the time (in milliseconds) TestCafe attempts to successfully execute an assertion if a selector property or a client function was passed as an actual value. [Testcafe `assertionTimeout` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#assertionTimeout).
__Type__: *int*
__Example__:
```yaml
assertionTimeout: 1000
```
### `pageLoadTimeout`
__Description__: Specifies the time (in milliseconds) passed after the DOMContentLoaded event, within which TestCafe waits for the window.load event to fire. [Testcafe `pageLoadTimeout` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#pageLoadTimeout).
__Type__: *int*
__Example__:
```yaml
pageLoadTimeout: 1000
```
### `stopOnFirstFail`
__Description__: Stops a test run if a test fails. [Testcafe `stopOnFirstFail` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#stopOnFirstFail).
__Type__: *boolean*
__Example__:
```yaml
stopOnFirstFail: true
```
### `disablePageCaching`
__Description__: Prevents the browser from caching page content. [Testcafe `disablePageCaching` definition](https://devexpress.github.io/testcafe/documentation/reference/configuration-file.html#disablePageCaching).
__Type__: *boolean*
__Example__:
```yaml
disablePageCaching: true
```
| 25.947183 | 329 | 0.760483 | eng_Latn | 0.60758 |
3eadb92f1de2aa231b8ef2ec8e75f36f74249d37 | 3,999 | md | Markdown | cc3200-sdk/example/provisioning_smartconfig/README.md | rchtsang/cc3200-sdk-linux-gcc | 92d11e088a19cdea9374ef0ead2c3dd55995dcff | [
"MIT"
] | null | null | null | cc3200-sdk/example/provisioning_smartconfig/README.md | rchtsang/cc3200-sdk-linux-gcc | 92d11e088a19cdea9374ef0ead2c3dd55995dcff | [
"MIT"
] | null | null | null | cc3200-sdk/example/provisioning_smartconfig/README.md | rchtsang/cc3200-sdk-linux-gcc | 92d11e088a19cdea9374ef0ead2c3dd55995dcff | [
"MIT"
] | null | null | null | ## Overview
### What is SmartConfig Provisioning?
A first step in utilizing CC3200 in a Wi-Fi enabled application is to
configure the CC3200 to a user's Wi-Fi network. This requires information on
the AP, or SSID name, and the security passcode if using WEP/WPA/WPA2. Considering that embedded Wi-Fi applications will generally lack user interfaces such as keypads or touchscreens, this process can be complex without the use of advanced I/O.
To create a great user experience, TI has created SmartConfig technology: a one-step and one-time process to connect a CC3200 device to
a home wireless network. This greatly stands apart from other Wi-Fi
suppliers who require multiple steps to configure a device onto the
network.
SmartConfig leverages the standard mechanisms present in Wi-Fi to
configure a CC3200's association information on the fly, regardless of
whether the user-interface is available. In this process a Wi-Fi enabled
device such as a smartphone, tablet or a laptop is used to send the
association information to the CC3200.
Additionally, it can be used to associate multiple devices to the same AP
simultaneously. The configuration process is secured with AES-128
encryption, and the SSID and key length are supported up to 32 bytes.
Furthermore, the device used for configuration (smartphone, tablet, or
PC) stays connected to the user's home network during the configuration
process unlike other methods that require disconnection.
## Application details
### Program Flow
1. Initialize the device networking layer
2. Delete all the profiles to ensure the CC3200 device does not connect to any other AP
3. Set the Connection Policy to Auto. This ensures that the device connects to this AP automatically once SmartConfig is complete
4. Wait for configuration from TI SimpleLink Wi-Fi Starter Pro application
5. Wait for the connection to the AP
### Prerequisites
- Simplelink Wi-Fi Starter Pro mobile application for Android and iOS: <http://www.ti.com/tool/wifistarterpro>
## Source Files briefly explained
- **main.c** - Device Initialization, Profile deletion, Connection Policy change, Trigger Smart Configuration, Wait for Device Events
- **gpio\_if.c** - GPIO interface file to handle LED blinking TASK
## Usage
1. Download the SimpleLink Wi-Fi Starter Pro mobile application to a smartphone: <http://www.ti.com/tool/wifistarterpro>
2. Setup a serial communication application. Open a serial terminal on a PC with the following settings:
	- **Port:** Enumerated COM port
	- **Baud rate:** 115200
	- **Data:** 8 bit
	- **Parity:** None
	- **Stop:** 1 bit
	- **Flow control:** None
3. Run the reference application.
- Open the project in CCS/IAR. Build the application and debug to load to the device, or flash the binary using [UniFlash](http://processors.wiki.ti.com/index.php/CC3100_%26_CC3200_UniFlash_Quick_Start_Guide).
4. Once the application starts, it will start the NWP in Station mode and wait to be provisioned. Turn on the Wi-Fi setting on your smartphone and connect to the network you would like to use to provision the CC3200.
5. Open the SimpleLink Wi-Fi Starter Pro app. Go to the Settings page and check that **Enable Smart Config** is turned **on**.<br>

6. On the Provisioning page, enter the AP credentials using the Network Name and Network Password fields. You can also choose to name your device to use when it joins the network. Press **Start Configuration**.<br>

7. You can verify that the provisioning process has succeeded by checking the terminal output or the mobile app.
## Limitations/Known Issues
SmartConfig cannot provision embedded systems in all circumstances. Therefore, products should not go to production implementing only SmartConfig. Access Point mode should be implemented and be used as a backup in case SmartConfig fails to provision the system. | 58.808824 | 261 | 0.768692 | eng_Latn | 0.992021 |
3eadb96f16f9bb7876e4371dd836fc2696c68a09 | 2,334 | md | Markdown | docs/usage.md | roberto910907/JqueryValidationBundle | bdaf5fc1839338ae3961c94f2ad8d01892aa1fcc | [
"MIT"
] | 28 | 2015-02-23T14:43:39.000Z | 2018-04-30T11:48:18.000Z | docs/usage.md | roberto910907/JqueryValidationBundle | bdaf5fc1839338ae3961c94f2ad8d01892aa1fcc | [
"MIT"
] | 40 | 2015-04-04T21:29:12.000Z | 2019-04-08T16:44:24.000Z | docs/usage.md | roberto910907/JqueryValidationBundle | bdaf5fc1839338ae3961c94f2ad8d01892aa1fcc | [
"MIT"
] | 36 | 2015-02-24T14:27:27.000Z | 2021-04-17T18:39:10.000Z | Usage
============
So you're done [installing](install.md). Time to get the validation to the client-side!
It's really simple: just open a Twig template that has a form and add the following:
```twig
{# These are the required libs, you really should move them somewhere else! #}
<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
<script src="http://ajax.aspnetcdn.com/ajax/jquery.validate/1.13.1/jquery.validate.js"></script>
{# The code below generates the form validation #}
<script>
{{ form_jquery_validation(form) }}
</script>
```
Now go to your page and enjoy.
Additional rules
-------------
To get the best results and the most client-side validations you can enable the additional-methods provided by the bundle.
To enable addition-methods you need to enable it in your `config.yml` by adding:
```YAML
boekkooi_jquery_validation:
form:
...
additionals: true
```
After this you will need to include the `additional-methods.min.js` within your twig template after `jquery.validate.js`.
```twig
{% javascripts '@BoekkooiJqueryValidationBundle/Resources/public/additional-methods.min.js' %}
<script type="text/javascript" src="{{ asset_url }}"></script>
{% endjavascripts %}
```
Collection prototype
-------------
O no you have a form with a collection that has `allow_add` set..... Now you need to do more work!
The simple way to enjoy client-side validation with collections is to open you twig template and add the following:
```twig
{% form_theme form _self %}
{% block collection_widget %}
{% if prototype is defined %}
{# The change here is that we add the javascript for a new row here #}
{%- set attr = attr|merge({'data-prototype': form_row(prototype) ~ '<script>' ~ form_jquery_validation(form) ~ '</script>'}) -%}
{% endif %}
{{- block('form_widget') -}}
{%- endblock collection_widget %}
```
Now refresh your page and hit that add button.
Validation groups (and buttons)
-------------
O yes it's time to abuse the power of the FORM by your usage of buttons with validation groups! No problem we can do that!
**Remark** If you are using a callable `validation_groups` then please set the `jquery_validation_groups` option with an array or a string.
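As a hedged sketch only (the form type and group names below are invented for illustration; only the `jquery_validation_groups` option itself comes from this bundle), the option can be passed alongside `validation_groups` when the form is created:

```php
<?php
// Controller snippet: pass explicit groups so the bundle does not have to
// resolve a callable 'validation_groups' option on its own.
$form = $this->createForm(TaskType::class, $task, [
    'validation_groups'        => ['Default', 'registration'],
    'jquery_validation_groups' => ['Default', 'registration'],
]);
```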
More
-------------
- [Error layout/design](layout.md)
- [Custom constraints](custom_constraints.md)
| 37.047619 | 136 | 0.703085 | eng_Latn | 0.979584 |
3eadfec6ebd7c7c8d0ec6f57a8fbfd04630c5882 | 469 | md | Markdown | _teaching/2015-spring-teaching-2.md | preshitambade/preshitambade.github.io | 41099364b2729c30f35384fd901ded43c5be14c4 | [
"MIT"
] | null | null | null | _teaching/2015-spring-teaching-2.md | preshitambade/preshitambade.github.io | 41099364b2729c30f35384fd901ded43c5be14c4 | [
"MIT"
] | null | null | null | _teaching/2015-spring-teaching-2.md | preshitambade/preshitambade.github.io | 41099364b2729c30f35384fd901ded43c5be14c4 | [
"MIT"
] | null | null | null | ---
title: "Distrct Level Master Trainer"
collection: teaching
type: "Workshop"
permalink: /teaching/2015-spring-teaching-1
venue: "Rajnandgaon,Chhattisgarh, India"
date: 2014-03-01
location: "Rajnadgaon, India"
---
Designed and imparted several training sessions on rural development schemes
Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA)
======
Chhattisgarh State Skill Development Mission (CSSDM)
======
Sansad Adarsh Gram Yojana (SAGY)
======
| 22.333333 | 76 | 0.761194 | eng_Latn | 0.839931 |
3eae0db9db509e06dde246d492cb0327ed8536de | 266 | md | Markdown | _indicators/10-7-4.md | KMendin/sdg-indicators-pl | d2b3ffa5cf432fe9b5a5dd1309fd33173733e0eb | [
"CC0-1.0"
] | 8 | 2020-01-28T15:03:46.000Z | 2022-03-18T13:32:09.000Z | _indicators/10-7-4.md | KMendin/sdg-indicators-pl | d2b3ffa5cf432fe9b5a5dd1309fd33173733e0eb | [
"CC0-1.0"
] | null | null | null | _indicators/10-7-4.md | KMendin/sdg-indicators-pl | d2b3ffa5cf432fe9b5a5dd1309fd33173733e0eb | [
"CC0-1.0"
] | 14 | 2019-04-03T09:58:23.000Z | 2021-07-21T12:28:25.000Z | ---
layout: indicator
indicator_variable_1: null
indicator_variable_2: null
sdg_goal: 10
indicator: 10.7.4
target_id: '10.7.4'
permalink: /statistics_glob/10-7-4/
pre: null
graph: null
source_url: null
lang: pl
kategorie: null
zmienne: null
---
| 16.625 | 36 | 0.703008 | hun_Latn | 0.300848 |
3eae41402aca3d95042ef49b22815b1ef7730905 | 2,843 | md | Markdown | Loose-notes/Geologi Menyapa ke-17.md | dasaptaerwin/my-ite | d7196395be2d3083f531f3c766b1f222ca2bd49a | [
"CC0-1.0"
] | null | null | null | Loose-notes/Geologi Menyapa ke-17.md | dasaptaerwin/my-ite | d7196395be2d3083f531f3c766b1f222ca2bd49a | [
"CC0-1.0"
] | null | null | null | Loose-notes/Geologi Menyapa ke-17.md | dasaptaerwin/my-ite | d7196395be2d3083f531f3c766b1f222ca2bd49a | [
"CC0-1.0"
] | null | null | null | # Geologi menyapa #17
- Geophysics in oil and gas exploration (Dr. Prihadi Sumintadireja)
    - Introduction
        - Main points of exploration:
            - what
            - location
            - how
            - which tools
            - how much potential -> tied to resource exploration
    - Applications
        - search for earth resources
            - minerals
            - coal
            - oil and gas
            - geothermal
            - CCS
            - etc.
        - search for archaeological objects
            - temples
            - tombs
            - sarcophagi
            - etc.
        - geohazards
            - liquefaction potential
            - seismology via earthquakes
            - etc.
        - environment
            - engineering geology
            - groundwater/hydrogeology
            - slope stability
            - etc.
        - workflow
            - data acquisition ->
            - data processing ->
            - data interpretation -> same data, interpretations can differ
    - Competence -> human resources
        - Competence levels
            - awareness (just knowing about it)
            - basic application (understands basic application under supervision)
            - advanced application (understands application without supervision)
            - master
    - HSE and planning
        - HSE
            - important
            - mandatory in industry
            - it gets audited
            - a TOR (usually) exists
        - Exploration planning
            - stages
        - Exploration stages
            - initial survey
                - [[teknologi LIDAR]] (LIDAR technology) is already commonly used
                    - in large companies
            - [[eksplorasi GGG]] (GGG exploration)
                - geology
                    - petrology
                    - stratigraphy
                    - sedimentology
                    - structural geology
                - geophysics
                - geochemistry
            - exploration drilling
            - feasibility study
            - production drilling starts operating
            - operation and maintenance
    - Geophysical methods in industry
        - The most advanced methods currently in use
            - oil and gas industry -> reflection seismics
            - geothermal industry -> magnetotellurics (MT/CSAMT) -> deeper penetration -> the measured signal can be natural or artificial
        - Seismics
            - can be done onshore or offshore,
            - the number of geophones can reach hundreds,
        - MT
            - measurements take a whole day
            - presented in 2D or 3D
        - Enhanced oil recovery
            - geophysics to monitor the injection
            - real-time, high-volume monitoring data can produce cartoon-like animations of fluid movement
        - Closing messages
            - Data will never be enough, so problem scoping and data optimization are needed
            - Geoscience knowledge keeps developing -> technological advances -> data volume keeps growing
            - Data loggers -> the need to shift from mapping to monitoring | 33.845238 | 127 | 0.544847 | ind_Latn | 0.961936 |
3eafbd7fc78b5f9d8b097c738fcfab146b3854b9 | 1,646 | md | Markdown | README.md | Tiryoh/actions-mkdocs | 15037499841a543639fd109470aa94e3e6b6ae61 | [
"MIT"
] | null | null | null | README.md | Tiryoh/actions-mkdocs | 15037499841a543639fd109470aa94e3e6b6ae61 | [
"MIT"
] | 33 | 2020-12-22T00:53:13.000Z | 2021-12-13T22:03:29.000Z | README.md | Tiryoh/actions-mkdocs | 15037499841a543639fd109470aa94e3e6b6ae61 | [
"MIT"
] | 1 | 2022-02-22T05:07:34.000Z | 2022-02-22T05:07:34.000Z | # actions-mkdocs
[](https://github.com/Tiryoh/actions-mkdocs/actions/workflows/test.yaml?query=branch%3Amain)
GitHub Actions for MkDocs
## Inputs
### `mkdocs_version`
Default: `'latest'`
The version of pip, MkDocs.
### `requirements`
Default: `requirements.txt`
The path to `requirements.txt`
### `configfile`
Default: `mkdocs.yml`
The path to `mkdocs.yml`
## Example usage
```yaml
name: Deploy
on:
push:
branches:
- main
jobs:
build:
name: Deploy docs to GitHub Pages
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Build
uses: Tiryoh/actions-mkdocs@v0
with:
mkdocs_version: 'latest' # option
#mkdocs_version: '1.1' # option
requirements: 'requirements.txt' # option
configfile: 'mkdocs.yml' # option
- name: Deploy
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./site
```
## Related projects
* [Tiryoh/docker-mkdocs-builder](https://github.com/Tiryoh/docker-mkdocs-builder)
* Dockerfile that just builds the MkDocs document
* [squidfunk/mkdocs-material](https://github.com/squidfunk/mkdocs-material)
* A Material Design theme for MkDocs, this project includes build & deploy Dockerfile
* [peaceiris/actions-gh-pages](https://github.com/peaceiris/actions-gh-pages)
  * GitHub Actions for GitHub Pages 🚀 — deploy static files and publish your site easily. Static-Site-Generators-friendly. | 26.126984 | 191 | 0.682868 | eng_Latn | 0.529693 |
3eafc8dd0499aa8eebce0ffd5ca60f0d6356fc5e | 3,671 | md | Markdown | Logic-merging_meeting_times.md | learn-with-me/coding-problems | 243be7f678a72fd91bf7bf2213db2d0a9469e426 | [
"MIT"
] | null | null | null | Logic-merging_meeting_times.md | learn-with-me/coding-problems | 243be7f678a72fd91bf7bf2213db2d0a9469e426 | [
"MIT"
] | null | null | null | Logic-merging_meeting_times.md | learn-with-me/coding-problems | 243be7f678a72fd91bf7bf2213db2d0a9469e426 | [
"MIT"
] | null | null | null | # Merging Meeting Times
## Description
Write a function for merging meeting times given everyone's schedules. It's an enterprise end-to-end scheduling solution, dog.
Your company built an in-house calendar tool called HiCal. You want to add a feature to see the times in a day when everyone is available.
## Details
To do this, you’ll need to know when any team is having a meeting. In HiCal, a meeting is stored as an object of a Meeting class with integer variables startTime and endTime. These integers represent the number of 30-minute blocks past 9:00am.
```
public class Meeting {
private int startTime;
private int endTime;
public Meeting(int startTime, int endTime) {
// number of 30 min blocks past 9:00 am
this.startTime = startTime;
this.endTime = endTime;
}
public int getStartTime() {
return startTime;
}
public void setStartTime(int startTime) {
this.startTime = startTime;
}
public int getEndTime() {
return endTime;
}
public void setEndTime(int endTime) {
this.endTime = endTime;
}
}
```
For example:
```
new Meeting(2, 3); // meeting from 10:00 – 10:30 am
new Meeting(6, 9); // meeting from 12:00 – 1:30 pm
```
Write a method `mergeRanges()` that takes a list of multiple meeting time ranges and returns a list of condensed ranges.
For example, given:
```
[Meeting(0, 1), Meeting(3, 5), Meeting(4, 8), Meeting(10, 12), Meeting(9, 10)]
```
your method would return:
```
[Meeting(0, 1), Meeting(3, 8), Meeting(9, 12)]
```
**Do not assume the meetings are in order.** The meeting times are coming from multiple teams.
**Write a solution that's efficient even when we can't put a nice upper bound on the numbers representing our time ranges.** Here we've simplified our times down to the number of 30-minute slots past 9:00 am. But we want the method to work even for very large numbers, like Unix timestamps. In any case, the spirit of the challenge is to merge meetings where startTime and endTime don't have an upper bound.
## Gotchas
Look at this case:
```
[Meeting(1, 2), Meeting(2, 3)]
```
These meetings should probably be merged, although they don't exactly "overlap"—they just "touch." Does your method do this?
Look at this case:
```
[Meeting(1, 5), Meeting(2, 3)]
```
Notice that although the second meeting starts later, it ends before the first meeting ends. Does your method correctly handle the case where a later meeting is "subsumed by" an earlier meeting?
Look at this case:
```
[Meeting(1, 10), Meeting(2, 6), Meeting(3, 5), Meeting(7, 9)]
```
Here all of our meetings should be merged together into just Meeting(1, 10). We need keep in mind that after we've merged the first two we're not done with the result—the result of that merge may itself need to be merged into other meetings as well.
Make sure that your method won't "leave out" the last meeting.
We can do this in O(n lg n) time.
## Complexity
O(n lg n) time and O(n) space.
Even though we only walk through our list of meetings once to merge them, we sort all the meetings first, giving us a runtime of O(n lg n). It's worth noting that if our input were sorted, we could skip the sort and do this in O(n) time!
We create a new list of merged meeting times. In the worst case, none of the meetings overlap, giving us a list identical to the input list. Thus we have a worst-case space cost of O(n).
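For reference, one possible `mergeRanges()` along these lines — sort a copy by start time, then walk the sorted list and either extend the last merged meeting or start a new one. This is a sketch, not the only valid solution:

```
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public static List<Meeting> mergeRanges(List<Meeting> meetings) {
    if (meetings.isEmpty()) {
        return new ArrayList<>();
    }

    // sort a copy by start time so overlapping (or touching) meetings are adjacent
    List<Meeting> sorted = new ArrayList<>(meetings);
    sorted.sort(Comparator.comparingInt(Meeting::getStartTime));

    List<Meeting> merged = new ArrayList<>();
    merged.add(new Meeting(sorted.get(0).getStartTime(), sorted.get(0).getEndTime()));

    for (Meeting current : sorted.subList(1, sorted.size())) {
        Meeting last = merged.get(merged.size() - 1);
        if (current.getStartTime() <= last.getEndTime()) {
            // overlapping or touching: extend the last merged meeting if needed
            last.setEndTime(Math.max(last.getEndTime(), current.getEndTime()));
        } else {
            // gap between meetings: start a new merged range
            merged.add(new Meeting(current.getStartTime(), current.getEndTime()));
        }
    }
    return merged;
}
```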
## Bonus
* What if we did have an upper bound on the input values? Could we improve our runtime? Would it cost us memory?
* Could we do this "in place" on the input list and save some space? What are the pros and cons of doing this in place?
| 37.080808 | 407 | 0.714792 | eng_Latn | 0.998428 |
3eb066f7e7fe5a3b89777713f4defd077e9539aa | 2,275 | md | Markdown | results/rtings/avg/Razer Kraken USB/README.md | M0Rf30/AutoEq | 5f296debab6e6251659c346f3ee33b8f1b5a2aaa | [
"MIT"
] | 1 | 2020-02-19T16:59:27.000Z | 2020-02-19T16:59:27.000Z | results/rtings/avg/Razer Kraken USB/README.md | M0Rf30/AutoEq | 5f296debab6e6251659c346f3ee33b8f1b5a2aaa | [
"MIT"
] | null | null | null | results/rtings/avg/Razer Kraken USB/README.md | M0Rf30/AutoEq | 5f296debab6e6251659c346f3ee33b8f1b5a2aaa | [
"MIT"
] | null | null | null | # Razer Kraken USB
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options.
### EqualizerAPO
In case of using EqualizerAPO without any GUI, replace `C:\Program Files\EqualizerAPO\config\config.txt`
with:
```
Preamp: -6.1dB
GraphicEQ: 21 0.0; 23 6.0; 25 5.8; 28 4.9; 31 3.8; 34 2.8; 37 1.9; 41 0.8; 45 -0.2; 49 -1.2; 54 -2.4; 60 -3.7; 66 -5.0; 72 -6.0; 79 -6.9; 87 -7.6; 96 -8.0; 106 -8.2; 116 -8.2; 128 -8.2; 141 -7.9; 155 -7.5; 170 -6.9; 187 -6.1; 206 -5.1; 227 -4.4; 249 -4.6; 274 -5.5; 302 -6.7; 332 -7.6; 365 -8.1; 402 -8.3; 442 -8.0; 486 -8.5; 535 -8.5; 588 -6.9; 647 -5.4; 712 -4.3; 783 -2.9; 861 -1.4; 947 -0.3; 1042 0.2; 1146 0.9; 1261 1.7; 1387 2.7; 1526 3.8; 1678 4.4; 1846 4.3; 2031 4.6; 2234 5.8; 2457 6.0; 2703 6.0; 2973 6.0; 3270 6.0; 3597 6.0; 3957 6.0; 4353 6.0; 4788 6.0; 5267 6.0; 5793 4.0; 6373 -2.5; 7010 -3.6; 7711 -3.9; 8482 -6.5; 9330 -6.8; 10263 -1.8; 11289 0.0; 12418 0.0; 13660 -0.5; 15026 -0.1; 16529 0.0; 18182 0.0; 20000 -4.1
```
### HeSuVi
HeSuVi 2.0 ships with most of the pre-processed results. If this model can't be found in HeSuVi add
`Razer Kraken USB GraphicEQ.txt` to `C:\Program Files\EqualizerAPO\config\HeSuVi\eq\custom\` folder.
Set volume attenuation in the Connection tab for both channels to **-61**
### Peace
In case of using Peace, click *Import* in Peace GUI and select `Razer Kraken USB ParametricEQ.txt`.
### Parametric EQs
In case of using another parametric equalizer, apply a preamp of **-6.8dB** and build the filters manually
with these parameters. The first 5 filters can be used independently.
When using an independent subset of filters, apply a preamp of **-6.7dB**.
| Type | Fc | Q | Gain |
|:--------|:---------|:-----|:---------|
| Peaking | 23 Hz | 0.84 | 7.0 dB |
| Peaking | 102 Hz | 0.72 | -8.6 dB |
| Peaking | 479 Hz | 1.02 | -8.8 dB |
| Peaking | 4120 Hz | 0.38 | 8.3 dB |
| Peaking | 8244 Hz | 1.48 | -12.4 dB |
| Peaking | 237 Hz | 5.91 | 1.4 dB |
| Peaking | 330 Hz | 5.76 | -1.1 dB |
| Peaking | 5473 Hz | 5.94 | 3.1 dB |
| Peaking | 6400 Hz | 7.99 | -4.0 dB |
| Peaking | 10915 Hz | 9.2 | 2.0 dB |
 | 59.868421 | 731 | 0.619341 | eng_Latn | 0.371607 |
3eb2865f04dd710c266a926a7a8ddca78112180f | 1,153 | md | Markdown | src/docs/xiangyi_lu_donate.md | wook2014/SnpEff | d2f9a3ce032158313172659b7a3d5fdf796b7f1b | [
"MIT"
] | 139 | 2015-01-02T17:49:28.000Z | 2022-03-23T12:53:38.000Z | src/docs/xiangyi_lu_donate.md | wook2014/SnpEff | d2f9a3ce032158313172659b7a3d5fdf796b7f1b | [
"MIT"
] | 320 | 2015-01-02T19:26:50.000Z | 2022-03-30T18:07:44.000Z | src/docs/xiangyi_lu_donate.md | wook2014/SnpEff | d2f9a3ce032158313172659b7a3d5fdf796b7f1b | [
"MIT"
] | 69 | 2015-02-02T10:39:53.000Z | 2022-03-27T21:23:57.000Z | # Xiangyi Lu Donate
## In memory of Dr. Xiangyi Lu: [Click here to donate](https://giving.wayne.edu/donate/medicine)
{: .right}
On October 22, 2017, Xiangyi Lu, a co-author on the SnpEff and SnpSift papers, died of ovarian cancer after a three year struggle.
Douglas Ruden, Xiangyi's husband and senior author on the papers, has requested that a non-mandatory gift of at least $10 for using SnpEff or SnpSift be donated to WSU to honor Xiangyi Lu.
All gifts will go to a newly named fund, the **"Xiangyi Lu Graduate Student Fellowship in Bioinformatics Fund."** with the goal of raising $1 million, in order to permanently endow one graduate student research position in bioinformatics every year.
## How to donate
* Visit [Wayne State University donation site](https://giving.wayne.edu/donate/medicine)
* Choose the amount that you would like to donate
* Click on the designation box and click on the option "Other"
* In the next box, enter: IMO Dr. Xiangyi Lu
* At the bottom of the page, click on "Give Now."
**Donation page example:**
[](https://giving.wayne.edu/donate/medicine)
| 46.12 | 249 | 0.756288 | eng_Latn | 0.989622 |
3eb32eb81a0e4dba77ca92eb0bb941cf4af8bd2c | 282 | md | Markdown | .github/pull_request_template.md | lhbruneton/taormina | a2e2c5205dcfb7bb908ae07d960874398b919e2c | [
"MIT"
] | 3 | 2021-04-14T13:35:09.000Z | 2021-10-02T09:16:25.000Z | .github/pull_request_template.md | lhbruneton/taormina | a2e2c5205dcfb7bb908ae07d960874398b919e2c | [
"MIT"
] | 9 | 2021-09-08T19:30:56.000Z | 2022-02-15T01:00:41.000Z | .github/pull_request_template.md | lhbruneton/taormina | a2e2c5205dcfb7bb908ae07d960874398b919e2c | [
"MIT"
] | null | null | null | ### Description
Please explain the changes you made here.
https://lhbruneton.atlassian.net/browse/TAOR-XXX
### Checklist
- [ ] Created tests with given/when/then blocks
- [ ] Created tests for the nominal use case
- [ ] Created tests for errors
- [ ] Build project without errors
| 25.636364 | 48 | 0.734043 | eng_Latn | 0.988931 |
3eb3512fe7695d310e77f47053b05bb444edcf10 | 11,661 | md | Markdown | wp3/fp-anno-model/nif.md | fusepoolP3/overall-architecture | 52b1053031dfab7639b2dc26b2b4122421812176 | [
"Apache-2.0"
] | 5 | 2015-03-17T08:36:55.000Z | 2018-09-24T05:03:50.000Z | wp3/fp-anno-model/nif.md | DalavanCloud/overall-architecture | 52b1053031dfab7639b2dc26b2b4122421812176 | [
"Apache-2.0"
] | 3 | 2015-07-11T07:11:17.000Z | 2016-04-07T19:39:01.000Z | wp3/fp-anno-model/nif.md | DalavanCloud/overall-architecture | 52b1053031dfab7639b2dc26b2b4122421812176 | [
"Apache-2.0"
] | 5 | 2015-03-16T19:42:35.000Z | 2018-09-24T05:03:37.000Z | NLP Interchange Format (NIF)
----------------------------
The NLP Interchange Format (NIF) is an RDF/OWL-based format that aims to improve interoperability between Natural Language Processing (NLP) tools as well as language resources. To facilitate this, NIF defines an annotation model for NLP annotations including words, lemmas, stems, part-of-speech tags, phrases, named entities and entity mentions. As such NIF can be compared to other annotation standards such as [Open Annotation](http://www.openannotation.org/spec/core/).
This section describes [NIF version 2.0](http://persistence.uni-leipzig.org/nlp2rdf/) with a focus on the defined Annotation Model, leaving other important aspects of the standard - such as interoperability - nearly untouched.
NIF was specified with the following principles in mind. RDF was chosen as the data format as it provides both structural and conceptual interoperability. NIF aims for broad coverage of different NLP annotations, going from coarse-grained - document level - annotations down to annotations on the word level. As fine-grained - word level - annotations create a lot of information, scalability and simple structures with a low triple count were important requirements. Provenance and confidence information are also defined by NIF.
The remaining sections of this chapter first describe the URI schemes used by NIF to assign unique URIs to annotated parts of the text, second the annotation model as defined by the NIF core ontology, and finally two sections describing ontologies NIF is integrated with: first the Ontologies of Linguistic Annotation (OLiA), the ontology providing stable identifiers for morpho-syntactical annotations, and second the RDF version of the Internationalization Tag Set used by NIF to describe entity mentions.
### URI Schemes
The URI scheme of NIF combines two things. First and most important, it ensures that annotated parts of the text receive unique identifiers. Second, it allows the actual selection to be encoded within the URI itself. That means that with NIF a single URI can be used to replace rich selectors as e.g. defined by [Open Annotation Selectors](http://www.openannotation.org/spec/core/specific.html#Selectors). In that aspect NIF is very much in line with the W3C [Media Fragment](http://www.w3.org/TR/media-frags/) specification.
While NIF allows different URI schemes to be used, the preferred URI scheme of NIF 2.0 is based on [RFC 5147](http://tools.ietf.org/html/rfc5147) and uses start/end character indexes. To give an example, let's assume a document with the URI `http://example.org/doc/demo1` containing a text with 2345 characters. Based on this URI scheme the whole text is uniquely identified by `http://example.org/doc/demo1#char=0,2345`. Assuming that the word "Fusepool" is mentioned at position [1234..1245], it will use the identifier `http://example.org/doc/demo1#char=1234,1245`.
So the URI scheme ensures that we have three unique identifiers: (1) for the document, (2) for the text contained in the document and (3) for the word "Fusepool" contained in the text. Those identifiers now allow us to make formal statements about the document, the text contained in the document and any arbitrary selection of characters in that text. In addition, the URI scheme also allows - if desirable - reducing the triple count of annotations, as a lot of information is already encoded in the URI itself.
As mentioned above, NIF allows different URI schemes to be used. The most interesting alternative is the Context-Hash-based URI Scheme as defined in section 2.1 (pages 7-8) of [Towards an Ontology for Representing Strings](http://svn.aksw.org/papers/2012/WWW_NIF/public/string_ontology.pdf). This URI scheme has two unique features: first, it provides stable identifiers even if other parts of the content change, and second, it also works for rich text documents where character offsets cannot be used.
The used URI scheme can be indicated by adding the according `nif:URIScheme` type to `nif:String` instances. As in most applications the same URI scheme will be used for all NIF annotations, it should be sufficient to define the URI scheme for the `nif:Context` instance - this is the annotation selecting the content as a whole (see the next section for details).
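To make the example above concrete, the three identifiers could be described with a handful of triples along the following lines (a sketch with prefixes omitted; the offsets are the ones assumed above and the property spellings follow this chapter):

    <http://example.org/doc/demo1#char=0,2345> a nif:Context, nif:RFC5147String ;
        nif:sourceURL <http://example.org/doc/demo1> ;
        nif:beginIndex "0"^^xsd:integer ;
        nif:endIndex "2345"^^xsd:integer .

    <http://example.org/doc/demo1#char=1234,1245> a nif:String, nif:RFC5147String ;
        nif:referenceContext <http://example.org/doc/demo1#char=0,2345> ;
        nif:beginIndex "1234"^^xsd:integer ;
        nif:endIndex "1245"^^xsd:integer ;
        nif:anchorOf "Fusepool" .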
### NIF Core Ontology
The NIF Core Ontology defines a text annotation model focused on the description of relations between substrings, texts, documents and their URI schemes. The following figure shows the concepts and relations defined by the core ontology.

The main class in the ontology is `nif:String`, which is the class of all words over the alphabet of Unicode characters. `nif:String` defines datatype properties for the selection (start/end index as well as before/anchorOf/after), properties for NLP-related information (stem, lemma, POS, sentiment) as well as relations to other `nif:String` instances.
`nif:Structure`, a subclass of `nif:String`, is the parent of a collection of concepts defining the structure of texts, including `nif:Title`, `nif:Paragraph`, `nif:Sentence`, `nif:Phrase` and `nif:Word`. A collection of properties between those classes allows describing things such as a `nif:Word` being the first word of a `nif:Sentence`, or the sequence of `nif:Word` instances within a text.
The `nif:Context` type is assigned to the instance selecting the whole text. This instance is used as the context for all `nif:String` instances created for the text. This means that all `nif:beginIndex` and `nif:endIndex` values are relative to the `nif:Context` instance referenced by the `nif:referenceContext` property. `nif:Context` defines two more important properties: first, the `nif:sourceURL` property links to the unique identifier of the document, and second, the `nif:isString` property can be used to include the actual text as an `rdf:Literal` in the RDF graph.
`nif:URIScheme` is used to define the URI scheme used to create unique identifiers for `nif:String` instances. For more details about URI schemes see the previous section.
When parsing NIF-annotated documents, a typical workflow looks like the following (a query sketch is shown after the list):
1. Query for a `nif:Context` instance that has a `nif:sourceURL` relation to the URI of the document in question. In the following, this `nif:Context` instance is referenced as `{context}`
    * in cases where one wants to parse information from the used URI scheme one needs to validate that the expected `nif:URIScheme` type is present for the `{context}`
* the `nif:isString` property - if present - can be used to obtain the text.
2. Query for all interesting `nif:String` instances that do have the `{context}` as `nif:referenceContext`
    * For parsing the text sentence by sentence one can first query for `nif:Sentence` instances and later get the words of the sentence by using the filter `null, {nif:sentence}, {sentence}`. While there are also properties like `nif:firstWord`, `nif:nextWord` ... in most cases it will be easier to parse the start/end offsets of the words and later order them based on those offsets.
* For parsing headers and paragraphs queries for `nif:Title` and `nif:Paragraph` can be used.
3. Via the properties defined by `nif:String` one can now access all NLP annotations of the selected sentences, phrases and words.
    * The POS tag is available via the `nif:oliaLink` property. Note also that the lexical category is directly referenced by the `nif:oliaCategory` property. See the following section for more information about the OLiA ontology.
* The anchor text of the `nif:String` instance is available by the `nif:anchorOf` property. For full text searches the `nif:stem` value is often more interesting. For building Tag Clouds the `nif:lemma` (base form) is very useful.
    * The `nif:sentimentValue` provides the sentiment of the current word, phrase or sentence - or, if it is present on the `{context}`, of the whole document.
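As an illustration of steps 1 and 2, a query along the following lines retrieves all words of the example document in text order (a sketch; the namespace and property spellings follow the conventions used in this chapter):

    PREFIX nif: <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#>

    SELECT ?word ?anchor ?begin ?end WHERE {
      ?context a nif:Context ;
               nif:sourceURL <http://example.org/doc/demo1> .
      ?word a nif:Word ;
            nif:referenceContext ?context ;
            nif:anchorOf ?anchor ;
            nif:beginIndex ?begin ;
            nif:endIndex ?end .
    }
    ORDER BY ?begin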
### Ontologies of Linguistic Annotation (OLiA)
The [Ontologies of Linguistic Annotation](http://purl.org/olia) (OLiA) provide stable identifiers for morpho-syntactical annotation tag sets. This allows NLP applications to be implemented based on stable identifiers instead of depending on the often different tag sets used by different NLP processing tools. Additionally, OLiA is multilingual, meaning that if an NLP application is interested in proper nouns it can just use `olia:ProperNoun` and will be able to process proper nouns of about 70 different languages.
Internally OLiA is build up by three different types of Models:
1. the __Reference Model__ defines the reference terminology intended to be used by NLP Applications. In other words the application programmer interface of OLiA.
2. a set of __Annotation Models__. Those formally describe morpho-syntactical annotation tag sets used by corpora and/or NLP frameworks. A good example is the [annotation model](http://purl.org/olia/penn.owl) for the [Penn Treebank](http://www.cis.upenn.edu/~treebank/) tag set.
3. for every annotation model there is also a __Linking Model__ that establishes subClassOf relationships between the _Annotation Model_ and the _Reference Model_
Based on this design about 35 tag sets supporting about 70 languages are integrated with the _Reference Model_.
The __Reference Model__ contains several types of linguistic concepts. Most important is the `olia:MorphoSyntacticCategory` hierarchy, which defines a hierarchy over part-of-speech categories. On the first level OLiA defines categories like `olia:Noun`, `olia:Verb`, `olia:Adjective`, ...; further down the hierarchy one can find categories like `olia:ProperNoun`, `olia:Gerund` or `PastParticipleAdjective`. But one can also find some other useful identifiers such as `olia:NamedEntity`, defined as a subclass of `DiscourseEntity` and as a sibling of `olia:headline`; inflection types like `olia:BaseForm`, `olia:StrongInflection`, ...; features like mood, gender, tense, ...; semantic roles like `olia:AgentRole`, `olia:CauseRole`, `GoalRole`, ...; and many more.
The integration of OLiA into NIF is done by two properties, both defined for the `nif:String` class (a small example follows the list).
1. the `nif:oliaLink` property, intended to link to the instance of the _Annotation Model_. This is typically provided by the processing results of an NLP tool. To map this to the according instance of the _Reference Model_ a reasoner is required. As a reasoner might not be available in every application, NIF also defines
2. the `nif:oliaCategory` property that directly links to the _Reference Model_ instance. This property is redundant to the `nif:oliaLink` but extremely useful for simple queries of OLiA based NLP applications.
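For example, a word tagged `NNP` by a Penn-Treebank-based tagger could carry both links as sketched below (assuming `penn:` abbreviates the annotation model linked above and the usual NIF/OLiA prefixes):

    <http://example.org/doc/demo1#char=1234,1245> a nif:Word ;
        nif:oliaLink penn:NNP ;
        nif:oliaCategory olia:ProperNoun , olia:Noun .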
### Entity Linking Support
NIF uses the `itsrdf:taIdentRef` property to link an entity to a `nif:String` instance. This property is defined by the [Internationalization Tag Set 2.0](http://www.w3.org/TR/its20/) W3C Recommendation, which is integrated with NIF to provide RDF support.
So an annotation linking the phrase `White House` with the entity `http://dbpedia.org/resource/White_House` could look like the following listing:
    <http://example.org/doc/demo1#char=0,2345> a nif:Context, nif:RFC5147String .

    <http://example.org/doc/demo1#char=127,138> a nif:Phrase ;
        nif:referenceContext <http://example.org/doc/demo1#char=0,2345> ;
        nif:beginIndex "127"^^xsd:integer ;
        nif:endIndex "138"^^xsd:integer ;
        nif:anchorOf "White House"@en ;
        nif:oliaCategory olia:NamedEntity, olia:NounPhrase ;
        nif:oliaConf "0.91"^^xsd:decimal ;
        itsrdf:taIdentRef <http://dbpedia.org/resource/White_House> .
| 129.566667 | 757 | 0.782952 | eng_Latn | 0.996551 |
3eb3906bb9bf52f8e5f24a07ab0873b13e42dcba | 122 | md | Markdown | exampleSite/content_meta/directors/catharine-cassin/_index.md | muthukrishnandev/review-theme | 8518a9461abbd1888208881f328496015bbc5db6 | [
"MIT"
] | null | null | null | exampleSite/content_meta/directors/catharine-cassin/_index.md | muthukrishnandev/review-theme | 8518a9461abbd1888208881f328496015bbc5db6 | [
"MIT"
] | null | null | null | exampleSite/content_meta/directors/catharine-cassin/_index.md | muthukrishnandev/review-theme | 8518a9461abbd1888208881f328496015bbc5db6 | [
"MIT"
] | 1 | 2021-05-17T08:30:05.000Z | 2021-05-17T08:30:05.000Z | ---
title: "Catharine Cassin"
slug: "catharine-cassin"
date: 2021-02-20T06:51:36Z
meta:
title: "Catharine Cassin"
---
| 12.2 | 27 | 0.680328 | nld_Latn | 0.261319 |
3eb3e0c725dbb81f824b6255ef61d1ae77886e3c | 7,679 | md | Markdown | YouCanHideButYouCannotRun/README.md | security-notes/AHE17 | 5dc11893bf17878701f8c7ebbfc84b89bd5218e7 | [
"MIT"
] | 43 | 2017-06-27T06:54:39.000Z | 2022-02-17T00:17:39.000Z | YouCanHideButYouCannotRun/README.md | security-notes/AHE17 | 5dc11893bf17878701f8c7ebbfc84b89bd5218e7 | [
"MIT"
] | null | null | null | YouCanHideButYouCannotRun/README.md | security-notes/AHE17 | 5dc11893bf17878701f8c7ebbfc84b89bd5218e7 | [
"MIT"
] | 12 | 2017-08-11T04:45:35.000Z | 2020-12-11T02:35:13.000Z | # AHE17 : Android Hacking Events 2017
## **You Can Hide - But You Cannot Run** ([YouCanHideButYouCannotRun.apk](https://team-sik.org/wp-content/uploads/2017/06/YouCanHideButYouCannotRun.apk_.zip) Jamaica in the Dashboard)
**Hint**
Something is going on inside this app, don't know what it's doing. I have the feeling a secret message is transmitted somehow, somewhere... can you help me find the secret message?
## Write-up
by [svetrini](https://github.com/ningod)
After installing the YouCanHideButYouCannotRun.apk on your device or emulator you can see only a black background with an image of Caesar and a *start* button; pushing it changes the label to *running*, but nothing else happens.
### Static Analysis
Let's see inside `YouCanHideButYouCannotRun.apk` with a decompiler like [jadx or with jadx-gui](https://github.com/skylot/jadx)
```bash
$ jadx -d YouCanHideButYouCannotRun.jadx YouCanHideButYouCannotRun.apk
[...]
tree YouCanHideButYouCannotRun.jadx/hackchallenge/ahe17/teamsik/org/romanempire/
YouCanHideButYouCannotRun.jadx/hackchallenge/ahe17/teamsik/org/romanempire/
├── BuildConfig.java
├── MainActivity.java
├── MakeThreads.java
├── R.java
└── threads
├── X04c3eb5ce6c5e299ad93dac871bbbed16da09e21.java
├── X04e5009b4e4a32ffe7fceca119ea2d939b3ad7d0.java
├── X07ee33e4bb59fd268d5cc7200578668347eb96ec.java
├── X0a3d206b39888aa391e974a8c54eea7286dc524d.java
├── X0b29ab3e1b0160417fc49c7759046c195acdc0e2.java
[...]
├── Xfee882c1e9b3200f9ada43bc430571e0295d0ded.java
└── Xfffb8e85796e61b713c68833d9f84ef0958681aa.java
1 directory, 191 files
```
We can see the `MainActivity.java` is the first activity loaded at the startup as described in 'AndroidManifest.xml'
```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" android:versionCode="1" android:versionName="1.0" package="hackchallenge.ahe17.teamsik.org.romanempire" platformBuildVersionCode="25" platformBuildVersionName="7.1.1">
<uses-sdk android:minSdkVersion="15" android:targetSdkVersion="25" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<application android:theme="@style/AppTheme" android:label="@string/app_name" android:icon="@mipmap/ic_launcher" android:debuggable="true" android:allowBackup="true" android:supportsRtl="true" android:roundIcon="@mipmap/ic_launcher_round">
<activity android:name="hackchallenge.ahe17.teamsik.org.romanempire.MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
```
Inside `MainActivity.java` we notice a call to **MakeThreads** class
```java
MakeThreads.startWrites(MainActivity.this);
```
`MakeThreads.java` contains the code that accesses a `scroll.txt` file in read/write mode using many different thread classes, but when you look at the file on the device you realize it is useless.
```java
public static void startWrites(Activity activity) {
File directory = new File(activity.getApplicationInfo().dataDir + "/Rome");
directory.mkdirs();
File scroll = new File(directory, "scroll.txt");
try {
RandomAccessFile raf = new RandomAccessFile(scroll, "rw");
PrintWriter pw = new PrintWriter(new FileOutputStream(scroll));
threads = new ArrayList();
threads.add(new X4bc86a15e3dc7ff7dca5240422059c40ca55f084(raf));
[...]
threads.add(new X1b629eed17073f7c9d6b318b77ab05bb453692f4(raf));
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e2) {
e2.printStackTrace();
}
Iterator it = threads.iterator();
while (it.hasNext()) {
((Thread) it.next()).start();
}
}
```
All the thread classes write their own local char `c` to the file at position 0, overwriting the content each time. All the classes are similar to this:
```java
public class X4bc86a15e3dc7ff7dca5240422059c40ca55f084 extends Thread {
RandomAccessFile a;
char c = 'l';
long sleepTillTime = 169000;
int timetoSleep = 250;
public X4bc86a15e3dc7ff7dca5240422059c40ca55f084(RandomAccessFile a) {
this.a = a;
}
public void run() {
try {
Thread.sleep(this.sleepTillTime);
} catch (InterruptedException e) {
e.printStackTrace();
}
try {
this.a.seek(0);
this.a.writeChar(this.c);
this.a.writeChar(10);
} catch (IOException e2) {
e2.printStackTrace();
}
}
}
```
Between these classes only the values of `c` and `sleepTillTime` change. Because of this we could analyze the code statically and find the correct order of the chars, but that would take too much time, so we move on to a dynamic analysis of the challenge.
### Dynamic Analysis
The tool used is [frida](https://www.frida.re/).
We can use it to hook the Java methods `seek` and `writeChar` of the class `RandomAccessFile`; we need to hook both methods to correctly catch when the threads write the `c` char, i.e. after the *seek(0)* calls.
We can use a Python [script](multithreads.py) to achieve the goal; the Python script is only a helper for the following Frida JavaScript code:
```js
Java.perform(function() {
var flagArray = [];
var randomfile = Java.use('java.io.RandomAccessFile');
var skip = true;
randomfile.seek.implementation = function(pos)
{
if (pos == 0){
skip = false;
}
return randomfile.seek.call(this, pos);
}
randomfile.writeChar.implementation = function(c)
{
if(skip || c == 10)
{
send("PARTIAL:"+flagArray.join(""));
}else{
send("index: "+c);
flagArray.push(String.fromCharCode(c))
send("SYM:"+String.fromCharCode(c));
}
return randomfile.writeChar.call(this, c);
}
});
```
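The Python helper itself is not shown here; a minimal version could look like the following sketch (the package name comes from the manifest, the script file name is assumed, and the rest is standard Frida boilerplate):

```python
import sys
import frida

PACKAGE = "hackchallenge.ahe17.teamsik.org.romanempire"
JSCODE = open("multithreads.js").read()   # the Java.perform() script shown above

def on_message(message, data):
    # frida delivers send() payloads as messages of type 'send'
    if message["type"] == "send":
        print("[*] " + str(message["payload"]))

print("[+] Waiting for app called " + PACKAGE)
device = frida.get_usb_device()
session = device.attach(PACKAGE)          # the app must already be running
print("[*] Attached on process")

script = session.create_script(JSCODE)
script.on("message", on_message)
script.load()

print("[*] Press enter to exit...")
sys.stdin.readline()
session.detach()
```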
After pushing the button on the application, we got the following output:
```bash
python multithreads.py
[+] Waiting for app called hackchallenge.ahe17.teamsik.org.romanempire
[*] Attached on process
[*] Press enter to exit...
[*] index: 65
A
[*] PARTIAL:A
[*] index: 111
o
[...]
[*] index: 33
!
[*] PARTIAL:Aol jsvjrdvyr ohz ybzalk Puav h zapmm tvklyu hya zahabl, Whpualk if uhabyl, svhaolk if aol Thzzlz, huk svclk if aol mld. Aol nlhyz zjylht pu h mhpslk ylcpchs: HOL17{IlaalyJyfwaZ4m3vyKpl}!
^CFLAG: Aol jsvjrdvyr ohz ybzalk Puav h zapmm tvklyu hya zahabl, Whpualk if uhabyl, svhaolk if aol Thzzlz, huk svclk if aol mld. Aol nlhyz zjylht pu h mhpslk ylcpchs HOL17{IlaalyJyfwaZ4m3vyKpl}!
```
Something strange: the frida output is not the flag...
> Aol jsvjrdvyr ohz ybzalk Puav h zapmm tvklyu hya zahabl, Whpualk if uhabyl, svhaolk if aol Thzzlz, huk svclk if aol mld. Aol nlhyz zjylht pu h mhpslk ylcpchs: HOL17{IlaalyJyfwaZ4m3vyKpl}!
*something*, like the package name *romanempire* and the Caesar's image on the app, suggest me that the text is in [Caesar's Cipher](https://en.wikipedia.org/wiki/Caesar_cipher) :-)
Online there are many Caesar cipher decryption tools, for example [this](http://www.xarg.org/tools/caesar-cipher/)
We can choose one of them and try our text, the rotation is **19** and the result is this:
> The clockwork has rusted Into a stiff modern art statue, Painted by nature, loathed by the Masses, and loved by the few. The gears scream in a failed revival: AHE17{BetterCryptS4f3orDie}!
And from this we can extract the Flag
> FLAG: AHE17{BetterCryptS4f3orDie}
That's all folks!
| 38.395 | 253 | 0.707774 | eng_Latn | 0.640553 |
3eb48cb3d86891c93a0b39e01d36aa73bf552f09 | 61 | md | Markdown | README.md | BTriay/pybgg | ed871777a5f247e31dada2f5644dff396eeeb0a4 | [
"MIT"
] | null | null | null | README.md | BTriay/pybgg | ed871777a5f247e31dada2f5644dff396eeeb0a4 | [
"MIT"
] | null | null | null | README.md | BTriay/pybgg | ed871777a5f247e31dada2f5644dff396eeeb0a4 | [
"MIT"
] | null | null | null | # pybgg
Just foolin' around with the bgg xml api and python!
| 20.333333 | 52 | 0.754098 | eng_Latn | 0.996446 |
3eb4cd52f8afae62c6311c618bc967dcf95077ce | 2,982 | md | Markdown | reports/CountryPopbyContinent.md | tedbot101/devop_cw_group_1 | 63fb4ca0f91017e2f0c8e061fe430670cd1b918b | [
"Apache-2.0"
] | null | null | null | reports/CountryPopbyContinent.md | tedbot101/devop_cw_group_1 | 63fb4ca0f91017e2f0c8e061fe430670cd1b918b | [
"Apache-2.0"
] | 16 | 2022-03-17T07:28:15.000Z | 2022-03-24T05:11:38.000Z | reports/CountryPopbyContinent.md | tedbot101/devop_cw_group_1 | 63fb4ca0f91017e2f0c8e061fe430670cd1b918b | [
"Apache-2.0"
] | null | null | null | | Name | Continent | Region | Capital | Population |
| --- | --- | --- | --- | --- |
| China | Asia | Eastern Asia | 1891 | 1.27755802E9 |
| India | Asia | Southern and Central Asia | 1109 | 1.01366202E9 |
| Indonesia | Asia | Southeast Asia | 939 | 2.12107008E8 |
| Pakistan | Asia | Southern and Central Asia | 2831 | 1.56483008E8 |
| Bangladesh | Asia | Southern and Central Asia | 150 | 1.29155E8 |
| Japan | Asia | Eastern Asia | 1532 | 1.26714E8 |
| Vietnam | Asia | Southeast Asia | 3770 | 7.9832E7 |
| Philippines | Asia | Southeast Asia | 766 | 7.5967E7 |
| Iran | Asia | Southern and Central Asia | 1380 | 6.7702E7 |
| Turkey | Asia | Middle East | 3358 | 6.6591E7 |
| Thailand | Asia | Southeast Asia | 3320 | 6.1399E7 |
| South Korea | Asia | Eastern Asia | 2331 | 4.6844E7 |
| Myanmar | Asia | Southeast Asia | 2710 | 4.5611E7 |
| Uzbekistan | Asia | Southern and Central Asia | 3503 | 2.4318E7 |
| North Korea | Asia | Eastern Asia | 2318 | 2.4039E7 |
| Nepal | Asia | Southern and Central Asia | 2729 | 2.393E7 |
| Iraq | Asia | Middle East | 1365 | 2.3115E7 |
| Afghanistan | Asia | Southern and Central Asia | 1 | 2.272E7 |
| Taiwan | Asia | Eastern Asia | 3263 | 2.2256E7 |
| Malaysia | Asia | Southeast Asia | 2464 | 2.2244E7 |
| Saudi Arabia | Asia | Middle East | 3173 | 2.1607E7 |
| Sri Lanka | Asia | Southern and Central Asia | 3217 | 1.8827E7 |
| Yemen | Asia | Middle East | 1780 | 1.8112E7 |
| Kazakstan | Asia | Southern and Central Asia | 1864 | 1.6223E7 |
| Syria | Asia | Middle East | 3250 | 1.6125E7 |
| Cambodia | Asia | Southeast Asia | 1800 | 1.1168E7 |
| Azerbaijan | Asia | Middle East | 144 | 7734000.0 |
| Hong Kong | Asia | Eastern Asia | 937 | 6782000.0 |
| Israel | Asia | Middle East | 1450 | 6217000.0 |
| Tajikistan | Asia | Southern and Central Asia | 3261 | 6188000.0 |
| Laos | Asia | Southeast Asia | 2432 | 5433000.0 |
| Jordan | Asia | Middle East | 1786 | 5083000.0 |
| Georgia | Asia | Middle East | 905 | 4968000.0 |
| Kyrgyzstan | Asia | Southern and Central Asia | 2253 | 4699000.0 |
| Turkmenistan | Asia | Southern and Central Asia | 3419 | 4459000.0 |
| Singapore | Asia | Southeast Asia | 3208 | 3567000.0 |
| Armenia | Asia | Middle East | 126 | 3520000.0 |
| Lebanon | Asia | Middle East | 2438 | 3282000.0 |
| Palestine | Asia | Middle East | 4074 | 3101000.0 |
| Mongolia | Asia | Eastern Asia | 2696 | 2662000.0 |
| Oman | Asia | Middle East | 2821 | 2542000.0 |
| United Arab Emirates | Asia | Middle East | 65 | 2441000.0 |
| Bhutan | Asia | Southern and Central Asia | 192 | 2124000.0 |
| Kuwait | Asia | Middle East | 2429 | 1972000.0 |
| East Timor | Asia | Southeast Asia | 1522 | 885000.0 |
| Cyprus | Asia | Middle East | 2430 | 754700.0 |
| Bahrain | Asia | Middle East | 149 | 617000.0 |
| Qatar | Asia | Middle East | 2973 | 599000.0 |
| Macao | Asia | Eastern Asia | 2454 | 473000.0 |
| Brunei | Asia | Southeast Asia | 538 | 328000.0 |
| Maldives | Asia | Southern and Central Asia | 2463 | 286000.0 |
| 55.222222 | 70 | 0.646211 | eng_Latn | 0.115376 |
3eb4d17b047dfc5c8aa72d6232f24bb42d3e9599 | 73 | md | Markdown | README.md | rombakh/pynet_test | 0037f8b1ac19b7efdb787226d15ea21961b736ae | [
"Apache-2.0"
] | null | null | null | README.md | rombakh/pynet_test | 0037f8b1ac19b7efdb787226d15ea21961b736ae | [
"Apache-2.0"
] | null | null | null | README.md | rombakh/pynet_test | 0037f8b1ac19b7efdb787226d15ea21961b736ae | [
"Apache-2.0"
] | null | null | null | # pynet_test
Kirk Byers class
This is a test repository for Kirk's class
| 18.25 | 42 | 0.794521 | eng_Latn | 0.999057 |
3eb4f0c8c8b8bbe3ac9d722c3c973a228bad6e42 | 3,007 | markdown | Markdown | _posts/2017-11-05-build-devstack-environment.markdown | Gizeta/gizeta-blog.github.io | 0d75c2b58d1e378217c0cb46421a2c383ee3751d | [
"MIT"
] | 1 | 2015-07-11T10:45:20.000Z | 2015-07-11T10:45:20.000Z | _posts/2017-11-05-build-devstack-environment.markdown | Gizeta/gizeta-blog.github.io | 0d75c2b58d1e378217c0cb46421a2c383ee3751d | [
"MIT"
] | 1 | 2019-08-16T06:17:04.000Z | 2019-08-25T10:24:42.000Z | _posts/2017-11-05-build-devstack-environment.markdown | Gizeta/gizeta-blog.github.io | 0d75c2b58d1e378217c0cb46421a2c383ee3751d | [
"MIT"
] | null | null | null | ---
layout: post
title: "使用DevStack搭建OpenStack环境"
date: 2017-11-05 21:16:28
categories: [note]
---
闲话
---
这篇文章不是教你如何学会搭建 OpenStack,而是记录了我使用 DevStack 搭建 OpenStack 的过程,内容仅供参考。
准备
---
本次是在 Windows 上利用虚拟机搭建单节点单网卡的 OpenStack 环境,至少要保证虚拟机有 4GB 内存及 130GB 硬盘空间。
* 操作系统:Windows 10
* 工具:VirtualBox 5.1
* 系统镜像:Ubuntu Server 14.04.5
* OpenStack版本:Ocata
搭建虚拟机系统环境
===
新建虚拟机及安装 Ubuntu 的过程就不详细说明了。这里主要说一下网络的设置。

{% highlight bash %}
# /etc/network/interface
auto eth0
iface eth0 inet static
address 192.168.31.91
netmask 255.255.255.0
gateway 192.168.31.1
dns-nameservers 8.8.8.8
{% endhighlight %}
需要注意一点,请确保eth0能够连接外网
安装 DevStack 准备
===
1. 安装 git 及 python-pip
2. clone DevStack 的仓库到``/home/devstack``。这里可以使用国内的镜像源。<br>``git clone http://git.trystack.cn/openstack-dev/devstack.git -b stable/ocata``
3. 进入 tools 目录,执行命令``./create-stack-user.sh``,创建 stack 用户。
4. 执行命令``passwd stack``,给 stack 用户设置密码。
5. 执行命令``chown -R stack:stack /home/devstack``,将其下的文件更换用户组
6. 执行命令``chmod 777 /dev/pts/0``,以便后续切换到其他用户 VNC 能接收到控制台输出。
7. 执行命令``su stack``,切换至 stack 用户。
8. (可选)在 root 及 stack 用户目录下配置 pip 源,如``pypi.douban.com``等。
配置 DevStack
===
搭建 OpenStack 最关键的步骤就是配置 DevStack 的 local.conf。在 samples 子目录中有一份示例,虽然大部分可以保持不变,但是网络配置的部分一定要结合自己需求修改。
下面是我配置使用的 local.conf 内容:
{% highlight bash %}
# /home/devstack/local.conf
[[local|localrc]]
# 使用国内源
GIT_BASE=http://git.trystack.cn
NOVNC_REPO=http://git.trystack.cn/kanaka/noVNC.git
SPICE_REPO=http://git.trystack.cn/git/spice/spice-html5.git
# 每次重新 clone 仓库
# RECLONE=True
# Password
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Logging
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
KEYSTONE_TOKEN_FORMAT=UUID
# 指定 cinder 空间大小
VOLUME_BACKING_FILE_SIZE=102400M
# Swift
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
disable_service tempest
# 使用neutron代替nova-network
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
Q_USE_SECGROUP=True
# 务必和虚拟机的网络环境一致
HOST_IP=192.168.31.91
FLOATING_RANGE="192.168.31.0/24"
IPV4_ADDRS_SAFE_TO_USE="10.0.0.0/22"
Q_FLOATING_ALLOCATION_POOL=start=192.168.31.128,end=192.168.31.254
PUBLIC_NETWORK_GATEWAY="192.168.31.1"
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex
{% endhighlight %}
安装 DevStack
---
上面的工作都做完后,执行命令``./stack.sh``,等待安装完毕即可。
如果中途出现错误,可以执行``./unstack.sh``进行卸载,以及``./clean.sh``完全删除依赖的其他软件。
宿主机与虚拟实例互连
---
部署完后,在 OpenStack 创建新实例的时候往往会发现宿主机与虚拟实例之间不能互相 ping 通,而且虚拟实例也不能连接外网。这时候还需要进行两处设置。
虚拟机开启 NAT 转发
===
{% highlight bash %}
sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
{% endhighlight %}
OpenStack 设置安全组规则
===

如图,作为例子允许 ICMP 协议通过。
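下面补充一个等效的命令行示例(仅供参考:假设已用 devstack 生成的 `openrc` 加载了 admin 凭据,且目标是 `default` 安全组):

{% highlight bash %}
# 加载凭据(openrc 位于 devstack 目录,此处路径沿用本文示例)
source /home/devstack/openrc admin admin

# 放行 ICMP(ping)与 SSH
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default
{% endhighlight %}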
| 22.274074 | 137 | 0.775524 | yue_Hant | 0.455616 |
3eb5224aecff0f04bccaf8d931d5b8ad6adf36d6 | 2,953 | md | Markdown | 03.au/05.business/892/default.md | flyabroadkit/kitgrav | eacf7c662ae118c8e0728412491e496a0785a1cf | [
"MIT"
] | null | null | null | 03.au/05.business/892/default.md | flyabroadkit/kitgrav | eacf7c662ae118c8e0728412491e496a0785a1cf | [
"MIT"
] | null | null | null | 03.au/05.business/892/default.md | flyabroadkit/kitgrav | eacf7c662ae118c8e0728412491e496a0785a1cf | [
"MIT"
] | null | null | null | ---
title: 澳大利亚892州/领地担保企业主签证
menu: 892州担保企业主签证
taxonomy:
category: docs
metadata:
refresh: 0
generator: 'flyabroad'
description: '澳洲892签证针对在澳大利亚创办或收购企业并获得州或领地担保的申请者。若申请者持有合格的临时商业签证,可以申请。'
keywords: '澳洲892签证,澳洲州担保企业主签证,澳洲商业移民'
author: '飞出国'
---
## 澳洲892签证州/领地担保企业主签证-飞出国
澳洲892签证针对在澳大利亚创办或收购企业并获得州或领地担保的申请者。若申请者持有合格的临时商业签证,可以申请。
### 澳洲892签证持有者权利-fcg
* 永久逗留在澳大利亚;
* 在澳大利亚工作学习;
* 参加澳大利亚医疗保险;
* 若满足条件,申请成为澳大利亚公民;
* 担保符合条件的亲属获得澳大利亚永居权;
* 5年内自由进出澳大利亚旅游。
### 澳洲892签证对申请者的要求-FLY
* 持有澳洲160-165签证中的一种;
* 获得州或领地的担保;
* 过去两年内在澳大利亚居住至少12个月;
* 有真实的意愿在澳大利亚建厂投资;
* 未参加违法商业活动;
* [身体健康],[无犯罪记录]。
另外,还需满足以下商业要求:
**企业所有权要求**
* 拥有并持续经营最多2家企业至少2年,所占股份需达到以下标准:
* 年营业额低于40万澳币的公司,应占有至少51%的股份;
* 年营业额40万或高于40万澳币的公司,应占有至少30%的股份;
* 如果是一个上市公司,应占有至少10%的股份;
**收购企业要求**
合法收购企业
**雇佣及净资产要求**
* 雇佣工人:申请签证前12个月,雇佣至少1名全职工人,需为澳大利亚公民,澳大利亚永久居民或新西兰护照持有者,不能是申请人家庭成员;
* 商业资产:申请前12个月,商业净资产至少达到7.5万澳元;
* 商业及个人资产:申请前12个月,商业及个人净资产至少达到25万澳元。
**营业额要求**
需提供申请前2年的商业活动申报表,经由澳大利亚税务办公室认证。并证明公司在过去12个月的年营业额至少达到20万澳元。
**公司管理要求**
* 提供证据证明在申请前24个月里,参与到公司的日常管理中,参与到对公司发展有重大影响的决策制订中。
* 若24个月里有6周以上在国外,必须说明在这段时间内从事的管理活动,并提供证据,证明在缺席期间如何管理公司。
### 澳洲892签证申请材料-飞出国
**表格**
* 表格47BU-商业移民(永久)签证申请表;
* 表格1217-商业移民简述:企业主(居住);
* 表格949-州/领地担保:商业移民类别。
**公司所有权**
* 完整的合作伙伴关系、信托或特许经营协议副本;
* ASIC摘要及2年内任何改变和修订的复印件;
* 股票发行、购买或转让的证据;
* 澳大利亚企业代码证明;
* 公司名称登记册;
* 从收购日期起与业务相关的租赁协议,以及随后的转让及续签副本。
**企业收购**
* 购买合同;
* 支付证据;包括:
* 个人及公司银行账户之间资金转移证明;
* 与销售有关的发票/付款收据/银行结单;
* 与收购企业有关的结算报表;
* 印花税证明。
* 若公司是新建的,需包括以下证明:
* 租赁、厂房、设备等的付款证明;
* 新签订的特许经营协议证明;
* 与经营场所有关的租赁协议。
**雇佣员工/公司个人资产**
若想满足雇佣要求:
提供雇佣证明。申请签证前12个月,雇佣至少1名全职工人,需为澳大利亚公民,澳大利亚永久居民或新西兰护照持有者,不能是申请人家庭成员。
若想满足资产要求:
证明申请签证前12个月,个人资产不低于7.5万澳元或商业及个人联合资产不低于25万澳元。
**营业额**
申请前2年的商业活动申报表
**企业管理**
* 在战略管理,人员招募,产品价格结构,企业盈利能力和其它相关事务上详细的综述报告;
* 签署的商业文件,如供应商合同,信用申请,银行协议,保险合同,招聘合同等能反映直接参与到公司日常经营管理及制订重大决策的证明;
* 有关部门出具的评估报告副本;
* 与第三方通信证明(电子邮件,信函,传真);
* 客户或供应商提供的文件;
* 来自第三方的声明;
* 招聘,培训,监管员工的证明;
* 参与市场营销的证明;
* 商业银行授权签名复印件;
* 维修保养工作的证据;
* 理事会注册及任何可用的许可证;
* 商业保险证明;
* 任何员工的现收现付总结;
* 出口运输汇总表及相关出口证据;
* 办公场所及商业活动的照片(最多4张照片)。
**个人信息**
* 申请签证涉及到的人员的现有护照及旅行文件的个人资料也复印件,包括持有人的照片及个人信息,还有签发日和有效期等。
* 每人需要提供两张45mmx35mm大小的照片,素色背景,要包括头和肩部,每张照片的背后都印上名字。
* 若申请人或其家属名字更改过,需要提供改名证明的复印件。
* 若申请人及家庭成员曾在军队中待过,需要提供服役记录或退役证明复印件。
* 签证申请涉及到的人员,如果年满17周岁,在16周岁以后在澳大利亚逗留时间超过12个月,需要提供澳大利亚联邦警察检查的证明。
* 年满17周岁,在16岁以后10年中,凡是居住时间达到12个月的国家都需要提供该国的无犯罪证明。
**个人关系**
* 家庭成员中任何没有持有澳洲160-165签证的成员需提供:
* 需要依赖申请人的证明;
* 收养证明,出生证明,户口本,展示父母及养父母名字;
* 结婚证复印件。
* 年满16周岁家庭成员:
* 表格80-个人资料表格,包括性格评估。
**子女**
若申请人向带18周岁以下的子女去澳大利亚,而其另外的家长又不随行,那申请人需要提供证明有合法权利把孩子带到澳大利亚的证据。比如:
* 官方法律文件复印件,比如法庭签发的托管文件,监护的法定声明。
* Form 1229-同意向18岁以下的儿童发放澳大利亚签证。
* 若申请人使用Form 1229或法定声明,那需要附上其它家长的政府签发证明文件复印件,要有头像及签名。
## 澳洲892签证申请处理周期-flyabroad
* 75%-15个月
* 90%-17个月
## 澳洲892签证申请费用-fcg
* 主申请人-2225澳元;
* 配偶及18周岁以上子女-1110澳元;
* 18周岁以下子女-555澳元。
## 澳洲892签证申请地点-FLY
申请签证时**需在澳洲境内**,但是家庭成员可以在澳洲境外。
[身体健康]:/home/medical
[无犯罪记录]:/home/police
| 16.225275 | 75 | 0.747376 | yue_Hant | 0.574813 |
3eb54e91c9dad021d0c8048e5ebb4d80498d81fe | 1,996 | md | Markdown | DasHeimweh-Schilling/text/3-dritterTeil/page-733-737.md | ivy-rew/livingBooks | c65c341f2510d4219cb6a147d628274550cd4f52 | [
"Apache-2.0"
] | null | null | null | DasHeimweh-Schilling/text/3-dritterTeil/page-733-737.md | ivy-rew/livingBooks | c65c341f2510d4219cb6a147d628274550cd4f52 | [
"Apache-2.0"
] | null | null | null | DasHeimweh-Schilling/text/3-dritterTeil/page-733-737.md | ivy-rew/livingBooks | c65c341f2510d4219cb6a147d628274550cd4f52 | [
"Apache-2.0"
] | null | null | null | tm .
hdlle gesandt wdrdestj nni dort ein Geschäft zn verrichtet-
nnd dao kann wohl geschehen, weil man dazn Engel braucht-
dieslliisth haben; getrantest dn dich dann wohl- dao Kons-
blilzen ruhig ansehen zn kdnnen? —- lch hoffe zn Gott-, dn
würdest vor Mitleiden in der schwülen Lust den Fittco schsini
gen, nnd mit deinem Schild die Blitze. anffangen.
Abdollam sahe seinen Bruder mit forschendent Blick
an, nnd Abnkar lächelte, nnd sahe schamroth vor sich nioi
der , die Uebrigen aber beharrten, was er dem Tintotheno
antworten würde; bald richtete er den Blick wieder anf-
wär-to nnd sprach-
« Glaubst dn denn nicht, Bruder Timothendl das die-
jenigen, die den Weg Gottes so vollkommen wußten« oder
doch wissen konnten, ihn aber nicht allein nicht gingen, son-
dern sogar verspotteten, verachteten, nnd diejenigen beschäm-
ten, die ihm folgten, eine erschreckliche, nnd nnter allen die
schwerste Strafe werden andznstehen dabei-? —- was hörten
wir·-«derdient, wenn wir ietzt nach. so vielen Wohlthaten aller
sitts· unseren Fürsten, seine Gemahlin« nnd verehrungowerthe
Gesellschaft nicht allein verließen- sondern sie sogar verspot-
tetenz nnd allen, die von lhnens nrit nnd-redeten, verdächtig
nnd verächtlich machten? — nnd doch sind diese thenre Per-
sonen bei aller ihrer Würde nur Diener Christi nnd sei-
ner Apostels-—- —
Titnotheno. Verzeihe -mir, Bruder Calebl data-
zweifle ich ieineowegeo, daß die Ver-achtet nnd Spdtter der
Religion die schwersten Strafen in der Ewigkeit leiden wer-
denz ieo kam mir nur so vor, als wenn dir das Scheitel-
blitzem ano Haß gegen sie, wohl thate, nnd dies-glaube
ich, ist nno Christen nicht recht anständig.
sdniatn Sodann wohl sehn, daß ich in der Hitze zn
. weit ging- allein dn mußt anch bedenken-« daß wir Js-
maels Finder die Sünden nnsereo Vaters nicht besser ab-
büßen ldnnem als dnrch Haß gegen die Verspottnng des
Saamend Jsaako.
Diese Antwort war vortrefflich- nnd Engenind bezeugte
«·«iden, sowohl dem Timotheni als anch dent anlar
| 42.468085 | 63 | 0.793587 | deu_Latn | 0.996268 |
3eb55e6c13a701b38121bd141fef7a5f1d6c9894 | 2,938 | md | Markdown | README.md | tpaschalis/sunlight.live | 8a5d82c1bc8a9b956d209fe4b0fbb271de6150fe | [
"MIT"
] | 39 | 2019-06-26T14:20:34.000Z | 2022-01-06T10:36:46.000Z | README.md | tpaschalis/sunlight.live | 8a5d82c1bc8a9b956d209fe4b0fbb271de6150fe | [
"MIT"
] | 7 | 2019-06-26T16:04:30.000Z | 2022-01-13T01:09:54.000Z | README.md | tpaschalis/sunlight.live | 8a5d82c1bc8a9b956d209fe4b0fbb271de6150fe | [
"MIT"
] | 4 | 2019-06-27T12:41:32.000Z | 2019-07-05T06:43:26.000Z | # sunlight.live
TODO : Remove dev cruft from requirements.txt
only 3 or 4 lines should actually be there, everything else is remnants from development tests
Feel free to raise Issues, create Pull Requests, or send comments! The [advice.md](https://github.com/tpaschalis/sunlight.live/blob/master/advice.md) file contains, well, advice and feedback I've gathered from other people, that I'm thinking of implementing.
## What's this?
I recently wanted to make a pixel worldmap, and dusted off my undergrad astronomy books to create this illustration. The base inspiration was the pixel worldmap that appears in the "Clock" application on many Android phones.
[sunlight.live](https://sunlight.live) is a live-updating representation of the Sun's <a href="https://en.wikipedia.org/wiki/Terminator_(solar)">terminator</a>, the line which divides "day" and "night" on earth.
More detailed information about the development can be found in <a href="https://tpaschalis.github.io/show-hn-sunlight-live/">this blog post</a>. The project has been submitted on [Reddit](https://www.reddit.com/r/dataisbeautiful/comments/baytxa/a_liveupdating_visual_map_of_sunlight_on_earth_oc/) and [HackerNews](https://news.ycombinator.com/item?id=20284870), reaching the front-page.
All suggestions, insights, and astronomy-related tidbits are welcome!
## Deployment
The website runs on a $5 DigitalOcean droplet, with a near-default Apache and Let'sEncrypt, as this was all an excuse to spend a weekend studying astronomy, not an exercise in DevOps.
The image is updated every 10 minutes using cron, Python3, a single line of <a href="http://www.numpy.org/">NumPy</a> plus some celestial mechanics formulas, and should have an accuracy of ±1 degree.
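For the curious, here is a rough sketch of the kind of calculation involved (a simplified version for illustration only, not the exact code in this repo; function and variable names are made up):

```python
import numpy as np
from datetime import datetime, timezone

def sunlit_mask(when=None):
    """Return a 180x360 boolean grid, True where the Sun is above the horizon."""
    when = when or datetime.now(timezone.utc)
    day_of_year = when.timetuple().tm_yday
    # Approximate solar declination in degrees
    declination = -23.44 * np.cos(np.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Longitude of the subsolar point, ignoring the equation of time
    subsolar_lon = 15.0 * (12.0 - (when.hour + when.minute / 60.0))
    lats = np.arange(-90, 90)[:, None]    # 180 rows
    lons = np.arange(-180, 180)[None, :]  # 360 columns
    hour_angle = np.radians(lons - subsolar_lon)
    # sin(altitude) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(hour angle)
    sin_altitude = (np.sin(np.radians(lats)) * np.sin(np.radians(declination))
                    + np.cos(np.radians(lats)) * np.cos(np.radians(declination)) * np.cos(hour_angle))
    return sin_altitude > 0
```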
The goal was to create a minimal, fast and aesthetically pleasing result. I'm not much of a designer, but I'm pretty happy with the first version of this site, and its small-ish size.
All plotting happens using Matplotlib. Most Matplotlib code and tutorials can be *very* confusing, but I think the source can serve as a guide to readable and maintainable plotting.
The world data is obtained from the wonderful [Natural Earth](https://www.naturalearthdata.com/) datasets, and handled using [GeoPandas](http://geopandas.org)
## Project Files
There are four main components
- The `parse-data.py` script, which parses the geographical dataset to provide pairs of points in a 360x180 grid representation of earth (a rough sketch of this step is shown after this list).
- The `land-points` file, which is the aforementioned pairs of points representing earth land.
- The `plot-data.py` script which runs every 10 minutes, to build the updated illustration.
- The `public/` folder, which contains all websitey files.
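For reference, a minimal sketch of what the `parse-data.py` step could look like (illustrative only; the real script may differ, and the shapefile path below is an assumption):

```python
import geopandas as gpd
from shapely.geometry import Point

# Natural Earth countries dataset (path/filename is an assumption)
world = gpd.read_file("ne_110m_admin_0_countries.shp")
land = world.geometry.unary_union

# Walk a 360x180 grid and keep the cells whose centre falls on land.
# The brute-force containment check is slow, but it only has to run once.
with open("land-points", "w") as out:
    for lat in range(-90, 90):
        for lon in range(-180, 180):
            if land.contains(Point(lon + 0.5, lat + 0.5)):
                out.write(f"{lon} {lat}\n")
```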
## Roadmap
Once I have some time, I want to:
- Learn more about the accuracy of astronomical formulas
- Offer some more astronomical data through this page and an API
- Provide some more illustrations about the solar system
| 62.510638 | 387 | 0.777059 | eng_Latn | 0.993637 |
3eb5818fb6984af4056010a7fd78e8561ec94a53 | 559 | md | Markdown | README.md | AbsenceGameDev/imxrt1062 | dc7aee6efb13c43bf985c6b480c16fe461fdfff4 | [
"MIT"
] | 2 | 2021-01-17T07:58:25.000Z | 2021-01-17T10:19:42.000Z | README.md | AbsenceGameDev/imxrt1062 | dc7aee6efb13c43bf985c6b480c16fe461fdfff4 | [
"MIT"
] | null | null | null | README.md | AbsenceGameDev/imxrt1062 | dc7aee6efb13c43bf985c6b480c16fe461fdfff4 | [
"MIT"
] | null | null | null | # imxrt1062
This is a bare-metal project for the [Teensy](https://www.pjrc.com/teensy/) 4.0/4.1 board.
The generated HEX file is compatible with the [Teensy Loader](https://www.pjrc.com/teensy/loader.html).
# Credits
Linker files and startup code are based on the [Teensy Core Libraries for Arduino](https://github.com/PaulStoffregen/cores) by Paul Stoffregen.
Initial idea from a fork I started working on yesterday; the repo (https://github.com/blazer82/baremetal-blinky.teensy) was a small baremetal example on a Teensy 4.0 with an i.MX RT1060.
| 55.9 | 185 | 0.760286 | eng_Latn | 0.913855 |
3eb64dd696a8ed60310567b25e0ac46349676a17 | 1,291 | md | Markdown | src/posts/2007-02-26-jenna-isms-again/index.md | jbasdf/justinball | 14456a8f45ed47de903215f06d5d14ae1a19068c | [
"MIT"
] | null | null | null | src/posts/2007-02-26-jenna-isms-again/index.md | jbasdf/justinball | 14456a8f45ed47de903215f06d5d14ae1a19068c | [
"MIT"
] | 21 | 2020-01-27T04:18:38.000Z | 2022-02-26T12:35:37.000Z | src/posts/2007-02-26-jenna-isms-again/index.md | jbasdf/justinball | 14456a8f45ed47de903215f06d5d14ae1a19068c | [
"MIT"
] | null | null | null | ---
title: Jenna-isms Again
author: Justin Ball
layout: post
permalink: "/2007/02/26/jenna-isms-again/"
tags:
- "Jenna"
date: '2007-02-26T07:00:00Z'
templateKey: blog-post
path: "/jenna-isms-again"
description: ''
---
My kids are good for the most part. They really are. However, I have no idea what goes through Jenna's mind. Here is the latest from her:
We were headed to dinner Saturday night and I found her wandering the house with one shoe. She told me, "I have a missing shoe. I know where I left it, but I don't know where it is." I think we have all had that happen, but she is 3.
Last night, after Callie told them again to stop jumping on the couch she said, "When I'm grown'd up I'm going to jump on my bed an my couch." Funny how when we do grow up we forget about all the things that would have been so fun as a kid.
A few weeks ago when Callie had Jenna out shopping Jenna turned to her and said, "Mom, when you are a hundred you will die."
And recently she told us, "If you squeeze the kitty really tight it will die." Yikes.
My personal favorite came yesterday. She loves puppies. She told us, "I want a puppy. I want a puppy that eats other puppies - the mean ones."
I swear we don't let her watch late night tv, but I think I understand why she has vivid dreams.
| 47.814815 | 240 | 0.739737 | eng_Latn | 0.999846 |
3eb7207df6908caab1736cd12255b23c4066cd6c | 24,076 | md | Markdown | articles/virtual-machines/workloads/oracle/configure-oracle-asm.md | md2perpe/azure-docs.sv-se | 64bdd85952bc1d194f86a3a80e616ca967bb6235 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/workloads/oracle/configure-oracle-asm.md | md2perpe/azure-docs.sv-se | 64bdd85952bc1d194f86a3a80e616ca967bb6235 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/workloads/oracle/configure-oracle-asm.md | md2perpe/azure-docs.sv-se | 64bdd85952bc1d194f86a3a80e616ca967bb6235 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Konfigurera Oracle ASM på en virtuell Azure Linux-dator | Microsoft Docs
description: Snabbt Oracle ASM dig och kom igång med Azure-miljön.
services: virtual-machines-linux
documentationcenter: virtual-machines
author: romitgirdhar
manager: jeconnoc
editor: ''
tags: azure-resource-manager
ms.assetid: ''
ms.service: virtual-machines-linux
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure
ms.date: 08/02/2018
ms.author: rogirdh
ms.openlocfilehash: 0af6e87d3e0b4b3b40b63db07384d4a33a9d43e1
ms.sourcegitcommit: d4dfbc34a1f03488e1b7bc5e711a11b72c717ada
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 06/13/2019
ms.locfileid: "66154255"
---
# <a name="set-up-oracle-asm-on-an-azure-linux-virtual-machine"></a>Konfigurera Oracle ASM på en virtuell Linux-dator för Azure
Med virtuella Azure-datorer får du en fullständigt konfigurerbar och flexibel datormiljö. Den här självstudien beskriver grundläggande Azure VM-distribution som kombineras med installation och konfiguration av Oracle automatiserad Storage Management (ASM). Lär dig att:
> [!div class="checklist"]
> * Skapa och ansluta till en virtuell Oracle Database dator
> * Installera och konfigurera Oracle automatisk lagringshantering
> * Installera och konfigurera infrastrukturen för Oracle-rutnät
> * Initiera en Oracle ASM-installation
> * Skapa en Oracle-databas som hanteras av ASM
[!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)]
Om du väljer att installera och använda CLI lokalt kräver de här självstudierna att du kör Azure CLI version 2.0.4 eller senare. Kör `az --version` för att hitta versionen. Om du behöver installera eller uppgradera kan du läsa [Installera Azure CLI]( /cli/azure/install-azure-cli).
## <a name="prepare-the-environment"></a>Förbereda miljön
### <a name="create-a-resource-group"></a>Skapa en resursgrupp
Du skapar en resursgrupp med kommandot [az group create](/cli/azure/group). En Azure-resursgrupp är en logisk behållare där Azure resurser distribueras och hanteras. I det här exemplet, en resursgrupp med namnet *myResourceGroup* i den *eastus* region.
```azurecli-interactive
az group create --name myResourceGroup --location eastus
```
### <a name="create-a-vm"></a>Skapa en virtuell dator
Om du vill skapa en virtuell dator baserat på avbildningen som Oracle Database och konfigurera den om du vill använda Oracle ASM genom att använda den [az vm skapa](/cli/azure/vm) kommando.
I följande exempel skapas en virtuell dator med namnet myVM som är en Standard_DS2_v2 storlek med fyra anslutna datadiskar på 50 GB. Om de inte redan finns på standardplatsen för nyckeln, skapas även SSH-nycklar. Om du vill använda en specifik uppsättning nycklar använder du alternativet `--ssh-key-value`.
```azurecli-interactive
az vm create --resource-group myResourceGroup \
--name myVM \
--image Oracle:Oracle-Database-Ee:12.1.0.2:latest \
--size Standard_DS2_v2 \
--generate-ssh-keys \
--data-disk-sizes-gb 50 50 50 50
```
När du har skapat den virtuella datorn visar Azure CLI information som liknar följande exempel. Anteckna värdet för `publicIpAddress`. Du kan använda den här adressen för att få åtkomst till den virtuella datorn.
```azurecli
{
"fqdns": "",
"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-36-2F-56",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "13.64.104.241",
"resourceGroup": "myResourceGroup"
}
```
### <a name="connect-to-the-vm"></a>Anslut till VM:en
För att skapa en SSH-session med den virtuella datorn och konfigurera ytterligare inställningar, använder du följande kommando. Ersätt IP-adressen med den `publicIpAddress` värde för den virtuella datorn.
```bash
ssh <publicIpAddress>
```
## <a name="install-oracle-asm"></a>Installera Oracle ASM
Utför följande steg för att installera Oracle ASM.
Läs mer om hur du installerar Oracle ASM [Oracle ASMLib hämtningar för Oracle Linux 6](https://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html).
1. Du måste logga in som rot för att kunna fortsätta med ASM-installation:
```bash
sudo su -
```
2. Kör dessa ytterligare kommandon för att installera Oracle ASM-komponenter:
```bash
yum list | grep oracleasm
yum -y install kmod-oracleasm.x86_64
yum -y install oracleasm-support.x86_64
wget https://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el6.x86_64.rpm
yum -y install oracleasmlib-2.0.12-1.el6.x86_64.rpm
rm -f oracleasmlib-2.0.12-1.el6.x86_64.rpm
```
3. Kontrollera att Oracle ASM har installerats:
```bash
rpm -qa |grep oracleasm
```
Resultatet av det här kommandot bör innehålla följande komponenter:
```bash
oracleasm-support-2.1.10-4.el6.x86_64
kmod-oracleasm-2.0.8-15.el6_9.x86_64
oracleasmlib-2.0.12-1.el6.x86_64
```
4. ASM kräver särskilda användare och roller för att fungera korrekt. Följande kommandon för skapar nödvändiga användarkonton och grupper:
```bash
groupadd -g 54345 asmadmin
groupadd -g 54346 asmdba
groupadd -g 54347 asmoper
useradd -u 3000 -g oinstall -G dba,asmadmin,asmdba,asmoper grid
usermod -g oinstall -G dba,asmdba,asmadmin oracle
```
5. Verifiera användare och grupper har skapats korrekt:
```bash
id grid
```
Resultatet av det här kommandot bör innehålla följande användare och grupper:
```bash
uid=3000(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54345(asmadmin),54346(asmdba),54347(asmoper)
```
6. Skapa en mapp för användaren *grid* och ändra ägare:
```bash
mkdir /u01/app/grid
chown grid:oinstall /u01/app/grid
```
## <a name="set-up-oracle-asm"></a>Konfigurera Oracle ASM
Den här självstudien standardanvändaren är *grid* och standardgruppen är *asmadmin*. Se till att den *oracle* användare är medlem i gruppen asmadmin. Om du vill konfigurera Oracle ASM-installationen, gör du följande:
1. Konfigurera Oracle ASM-biblioteket drivrutinen innebär att definiera standardanvändaren (rutnät) och standard-gruppen (asmadmin) samt konfigurera enheten att starta vid start (Välj y) och för att söka efter diskar vid start (Välj y). Du måste svara på frågorna i följande kommando:
```bash
/usr/sbin/oracleasm configure -i
```
Kommandots utdata bör se ut ungefär så här, stoppas med uppmanar som ska besvaras.
```bash
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
```
2. Visa diskkonfigurationen:
```bash
cat /proc/partitions
```
Kommandots utdata bör likna följande listor över tillgängliga diskar
```bash
8 16 14680064 sdb
8 17 14678976 sdb1
8 0 52428800 sda
8 1 512000 sda1
8 2 51915776 sda2
8 48 52428800 sdd
8 64 52428800 sde
8 80 52428800 sdf
8 32 52428800 sdc
11 0 1152 sr0
```
3. Formatera disk */dev/sdc* genom att köra följande kommando och svara på frågorna med:
- *n* för ny partition
- *p* för primär partition
- *1* att välja den första partitionen
- Tryck på `enter` för standard första 3D-cylinder
- Tryck på `enter` för standard senaste 3D-cylinder
- Tryck på *w* att skriva ändringar till partitionstabellen
```bash
fdisk /dev/sdc
```
Med hjälp av de svar som anges ovan, det bör se ut fdisk kommandots utdata som liknar följande:
```bash
Device contains not a valid DOS partition table, or Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xf865c6ca.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
```
4. Upprepa det föregående kommandot fdisk för `/dev/sdd`, `/dev/sde`, och `/dev/sdf`.
5. Kontrollera diskkonfigurationen:
```bash
cat /proc/partitions
```
Kommandots utdata bör se ut så här:
```bash
major minor #blocks name
8 16 14680064 sdb
8 17 14678976 sdb1
8 32 52428800 sdc
8 33 52428096 sdc1
8 48 52428800 sdd
8 49 52428096 sdd1
8 64 52428800 sde
8 65 52428096 sde1
8 80 52428800 sdf
8 81 52428096 sdf1
8 0 52428800 sda
8 1 512000 sda1
8 2 51915776 sda2
11 0 1048575 sr0
```
6. Kontrollera status för Oracle ASM-tjänsten och starta Oracle ASM-tjänsten:
```bash
service oracleasm status
service oracleasm start
```
Kommandots utdata bör se ut så här:
```bash
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
```
7. Skapa Oracle ASM diskar:
```bash
service oracleasm createdisk ASMSP /dev/sdc1
service oracleasm createdisk DATA /dev/sdd1
service oracleasm createdisk DATA1 /dev/sde1
service oracleasm createdisk FRA /dev/sdf1
```
Kommandots utdata bör se ut så här:
```bash
Marking disk "ASMSP" as an ASM disk: [ OK ]
Marking disk "DATA" as an ASM disk: [ OK ]
Marking disk "DATA1" as an ASM disk: [ OK ]
Marking disk "FRA" as an ASM disk: [ OK ]
```
8. Lista över Oracle ASM diskar:
```bash
service oracleasm listdisks
```
Kommandots utdata bör lista ut följande Oracle ASM diskar:
```bash
ASMSP
DATA
DATA1
FRA
```
9. Ändra lösenord för användare rot, oracle och rutnät. **Anteckna dessa nya lösenord** som du använder dem senare under installationen.
```bash
passwd oracle
passwd grid
passwd root
```
10. Ändra behörigheter för mappen:
```bash
chmod -R 775 /opt
chown grid:oinstall /opt
chown oracle:oinstall /dev/sdc1
chown oracle:oinstall /dev/sdd1
chown oracle:oinstall /dev/sde1
chown oracle:oinstall /dev/sdf1
chmod 600 /dev/sdc1
chmod 600 /dev/sdd1
chmod 600 /dev/sde1
chmod 600 /dev/sdf1
```
## <a name="download-and-prepare-oracle-grid-infrastructure"></a>Ladda ned och förbereda infrastrukturen för Oracle-rutnät
Om du vill hämta och Förbered Grid infrastruktur för Oracle-programvara, gör du följande:
1. Ladda ned Oracle Grid infrastruktur från den [Oracle ASM hämtningssidan](https://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-linux-download-2240591.html).
Under nedladdningen benämnt **Oracle Database 12c version 1 Grid infrastruktur (12.1.0.2.0) för Linux x86-64**, ladda ned två ZIP-filer.
2. Du kan använda protokollet SCP (Secure Copy) för att kopiera filerna till den virtuella datorn när du har hämtat .zip-filer till en klientdator:
```bash
scp *.zip <publicIpAddress>:.
```
3. SSH till din Oracle VM i Azure för att flytta ZIP-filerna till den / opt mapp. Sedan kan ändra ägaren till filerna:
```bash
ssh <publicIPAddress>
sudo mv ./*.zip /opt
cd /opt
sudo chown grid:oinstall linuxamd64_12102_grid_1of2.zip
sudo chown grid:oinstall linuxamd64_12102_grid_2of2.zip
```
4. Packa upp filerna. (Installera Linux packa upp verktyget om den inte redan är installerat.)
```bash
sudo yum install unzip
sudo unzip linuxamd64_12102_grid_1of2.zip
sudo unzip linuxamd64_12102_grid_2of2.zip
```
5. Ändra behörigheter:
```bash
sudo chown -R grid:oinstall /opt/grid
```
6. Uppdateringen som konfigurerats växlingsutrymme i procent. Oracle Grid komponenter måste minst 6,8 GB växlingsutrymme för att installera rutnätet. Filstorleken för standard-växling för Oracle Linux-avbildningar i Azure är bara 2 048 MB. Du måste öka `ResourceDisk.SwapSizeMB` i den `/etc/waagent.conf` filen och starta om tjänsten WALinuxAgent för de uppdaterade inställningarna ska börja gälla. Eftersom det är en skrivskyddad fil, måste du ändra behörigheten för att aktivera skrivåtkomst.
```bash
sudo chmod 777 /etc/waagent.conf
vi /etc/waagent.conf
```
Sök efter `ResourceDisk.SwapSizeMB` och ändra värdet till **8192**. Behöver du trycker på `insert` Skriv i värdet för att ange infogningsläge **8192** och tryck sedan på `esc` att återgå till kommandoläge. För att skriva ändringar och avsluta filen, skriver `:wq` och tryck på `enter`.
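En minimal skiss av hur raderna kan se ut efteråt, samt omstarten av agenten (endast ett exempel; tjänstens namn kan variera mellan distributioner):

```bash
# /etc/waagent.conf: förväntade värden efter ändringen (antagande: swap är aktiverat)
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=8192

# Starta om WALinuxAgent så att den nya storleken tillämpas
sudo service waagent restart
```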
> [!NOTE]
> Vi rekommenderar starkt att du alltid använder `WALinuxAgent` konfigurera växlingsutrymme så att den alltid har skapats på den lokala tillfälliga disken (tillfällig disk) för bästa prestanda. Läs mer på [hur du lägger till en växlingsfil på Linux Azure-datorer](https://support.microsoft.com/en-us/help/4010058/how-to-add-a-swap-file-in-linux-azure-virtual-machines).
## <a name="prepare-your-local-client-and-vm-to-run-x11"></a>Förbereda lokal klient och virtuell dator för att köra x11
Konfigurera Oracle ASM kräver ett grafiskt gränssnitt för att slutföra installationen och konfigurationen. Vi använder x11 protokoll för att underlätta installationen. Om du använder ett klientsystem (Mac eller Linux) som redan har X11 funktioner aktiverat och konfigurerat – du kan hoppa över det här konfigurations- och exklusiva till Windows-datorer.
1. [Ladda ned PuTTY](https://www.putty.org/) och [hämta Xming](https://xming.en.softonic.com/) till din Windows-dator. Du behöver att slutföra installationen av båda dessa program med standardvärden innan du fortsätter.
2. När du har installerat PuTTY, öppna Kommandotolken, ändra till mappen PuTTY (till exempel c:\Program\Microsoft Files\PuTTY) och kör `puttygen.exe` för att generera en nyckel.
3. I PuTTY-Nyckelgenerator:
1. Generera en nyckel genom att välja den `Generate` knappen.
2. Kopiera innehållet i nyckeln (Ctrl + C).
3. Välj knappen `Save private key`.
4. Ignorera varningen om hur du skyddar nyckeln med en lösenfras och välj sedan `OK`.

4. I den virtuella datorn kör du följande kommandon:
```bash
sudo su - grid
mkdir .ssh
cd .ssh
```
5. Skapa en fil som heter `authorized_keys`. Klistra in innehållet i nyckeln i den här filen och spara filen.
> [!NOTE]
> Nyckeln måste innehålla strängen `ssh-rsa`. Innehållet i nyckeln måste dessutom vara en enskild rad med text.
>
6. Ditt klientsystem starta PuTTY. I den **kategori** rutan, gå till **anslutning** > **SSH** > **Auth**. I den **fil för privat nyckel för autentisering** rutan, bläddra till den nyckel som du skapade tidigare.

7. I den **kategori** rutan, gå till **anslutning** > **SSH** > **X11**. Välj den **aktivera X11 vidarebefordran** markerar du kryssrutan.

8. I den **kategori** rutan, gå till **Session**. Ange Oracle ASM-VM `<publicIPaddress>` i dialogrutan värden namn fyller du i en ny `Saved Session` namn och klicka sedan på `Save`. När du har sparat klickar du på `open` att ansluta till din virtuella dator för Oracle ASM. Första gången du ansluter ett meddelande om fjärrdatorn inte cachelagras i registret. Klicka på `yes` att lägga till den och fortsätta.

## <a name="install-oracle-grid-infrastructure"></a>Installera infrastrukturen för Oracle-rutnät
Om du vill installera Oracle Grid infrastruktur, gör du följande:
1. Logga in som **grid**. (Du bör kunna logga in utan att behöva ange ett lösenord.)
> [!NOTE]
> Om du kör Windows, kontrollera att du har startat Xming innan du påbörjar installationen.
```bash
cd /opt/grid
./runInstaller
```
Oracle Grid infrastruktur 12c version 1 installationsprogrammet öppnas. (Det kan ta några minuter att starta installationsprogrammet.)
2. På den **väljer installationsalternativet** väljer **installera och konfigurera Oracle Grid infrastrukturen för en fristående Server**.

3. På den **Välj språk för produkten** kontrollerar **engelska** eller det språk som du vill har valts. Klicka på `next`.
4. På den **skapa ASM diskgruppen** sidan:
- Ange ett namn för diskgruppen.
- Under **redundans**väljer **externa**.
- Under **storlek på allokeringsenhet**väljer **4**.
- Under **Lägg till diskar**väljer **ORCLASMSP**.
- Klicka på `next`.
5. På den **ange ASM lösenord** väljer den **använda samma lösenord för dessa konton** alternativet och ange ett lösenord.

6. På den **ange hanteringsalternativ** sidan har möjlighet att konfigurera EM molnet kontroll. Vi hoppar över det här alternativet – Klicka på `next` att fortsätta.
7. På den **Privilegierade Operativsystemgrupper** kan du använda standardinställningarna. Klicka på `next` att fortsätta.
8. På den **ange installationsplats** kan du använda standardinställningarna. Klicka på `next` att fortsätta.
9. På den **skapa** , ändra katalogen inventering `/u01/app/grid/oraInventory`. Klicka på `next` att fortsätta.

10. På den **rotkonfiguration skriptet körning** väljer den **kör automatiskt konfigurationsskript** markerar du kryssrutan. Välj den **använda ”rot” användarens autentiseringsuppgifter** alternativet och ange rotlösenordet för användaren.

11. På den **utför Kravkontroller** sidan nuvarande konfiguration misslyckas med fel. Det här är ett förväntat beteende. Välj `Fix & Check Again`.
12. I den **korrigering skriptet** dialogrutan klickar du på `OK`.
13. På den **sammanfattning** sidan Granska de angivna inställningarna och klicka sedan på `Install`.

14. Ett varningsmeddelande visas upplysande du konfigurationen skript måste köras som en privilegierad användare. Klicka på `Yes` att fortsätta.
15. På den **Slutför** klickar du på `Close` att slutföra installationen.
## <a name="set-up-your-oracle-asm-installation"></a>Konfigurera Oracle ASM-installation
Om du vill konfigurera Oracle ASM-installationen, gör du följande:
1. Se till att du fortfarande är inloggad som **grid**, från din X11 session. Du kan behöva når `enter` att återskapa terminalen. Starta sedan den Oracle automatiserad Storage Management Configuration Assistant:
```bash
cd /u01/app/grid/product/12.1.0/grid/bin
./asmca
```
Oracle ASM Configuration Assistant öppnas.
2. I den **konfigurera ASM: Disk grupper** dialogrutan klickar du på den `Create` knappen och klicka sedan på `Show Advanced Options`.
3. I den **skapa diskgruppen** dialogrutan:
- Ange namnet på disken **DATA**.
- Under **Välj medlem diskar**väljer **ORCL_DATA** och **ORCL_DATA1**.
- Under **storlek på allokeringsenhet**väljer **4**.
- Klicka på `ok` att skapa en disk.
- Klicka på `ok` att stänga bekräftelsefönstret.

4. I den **konfigurera ASM: Disk grupper** dialogrutan klickar du på den `Create` knappen och klicka sedan på `Show Advanced Options`.
5. I den **skapa diskgruppen** dialogrutan:
- Ange namnet på disken **FRA**.
- Under **redundans**väljer **externt (ingen)** .
- Under **Välj medlem diskar**väljer **ORCL_FRA**.
- Under **storlek på allokeringsenhet**väljer **4**.
- Klicka på `ok` att skapa en disk.
- Klicka på `ok` att stänga bekräftelsefönstret.

6. Välj **avsluta** att Stäng ASM Configuration Assistant.

## <a name="create-the-database"></a>Skapa en databas
Oracle-databas som är redan installerad på Azure Marketplace-avbildning. Om du vill skapa en databas, gör du följande:
1. Växla användare till Superanvändare för Oracle och initiera lyssnaren för loggning:
```bash
su - oracle
cd /u01/app/oracle/product/12.1.0/dbhome_1/bin
./dbca
```
Databasen Configuration Assistant öppnas.
2. På den **Databasåtgärden** klickar du på `Create Database`.
3. På den **läget Skapa** sidan:
- Ange ett namn för databasen.
- För **lagringstyp**, se till att **automatisk Storage Management (ASM)** har valts.
- För **plats för databasfiler**, använder du standardvärdet ASM förslag på plats.
- För **snabb återställning området**, använder du standardvärdet ASM förslag på plats.
- Ange en **administratörslösenord** och **Bekräfta lösenord**.
- Se till att `create as container database` har valts.
- Ange en `pluggable database name` värde.
4. På den **sammanfattning** sidan Granska de angivna inställningarna och klicka sedan på `Finish` att skapa databasen.

5. Databasen har skapats. På den **Slutför** sidan har möjlighet att låsa upp ytterligare konton om du vill använda den här databasen och ändra lösenord. Om du vill göra det väljer **lösenordshantering** -Klicka annars på `close`.
## <a name="delete-the-vm"></a>Ta bort den virtuella datorn
Du har konfigurerat automatisk lagringshantering i Oracle i Oracle DB-avbildning från Azure Marketplace. När du inte längre behöver den här virtuella datorn kan använda du följande kommando för att ta bort resursgruppen, virtuell dator och alla relaterade resurser:
```azurecli
az group delete --name myResourceGroup
```
## <a name="next-steps"></a>Nästa steg
[Självstudie: Konfigurera Oracle DataGuard](configure-oracle-dataguard.md)
[Självstudie: Konfigurera Oracle GoldenGate](Configure-oracle-golden-gate.md)
Granska [om arkitekturen i en Oracle-databas](oracle-design.md)
| 40.737733 | 494 | 0.724082 | swe_Latn | 0.991975 |
3eb722f6652737ab7b2fb8ead97b5a07a26f21ef | 2,461 | md | Markdown | src/issues/33/TWITTER.md | philipp-spiess/this-week-in-react | a6da2a8ddad201dfaa3073a371b6e3ee3ac4f55b | [
"MIT"
] | 22 | 2018-09-22T10:27:36.000Z | 2020-10-05T20:39:19.000Z | src/issues/33/TWITTER.md | philipp-spiess/this-week-in-react | a6da2a8ddad201dfaa3073a371b6e3ee3ac4f55b | [
"MIT"
] | 19 | 2018-10-03T19:58:42.000Z | 2022-02-17T23:09:36.000Z | src/issues/33/TWITTER.md | philipp-spiess/this-week-in-react | a6da2a8ddad201dfaa3073a371b6e3ee3ac4f55b | [
"MIT"
] | 7 | 2018-10-03T20:06:28.000Z | 2020-05-28T15:51:28.000Z | Bonjour everyone! It’s time for another issue of This Week in React.
🔄 React Fresh
🛠 Inform DevTools of Commit Priority Level
🧪 Flush Only on Exiting Outermost act()
👀 and more!
Subscribe at http://this-week-in-react.org and read the thread below!
---
🔄 The team started to work on React Fresh, a new generation of hot reloading. Changes include:
➡ Initial scaffolding: https://github.com/facebook/react/pull/15619
➡ Infrastructure: https://github.com/facebook/react/pull/15698
➡ and more!
Check out this Tweet by @dan_abramov for more info:
https://twitter.com/dan_abramov/status/1126948870137753605?s=20
---
🛠 React now exposes information about the commit priority level to DevTools.
https://github.com/facebook/react/pull/15664
---
🧪 act() will now only flush on exiting the outermost callback.
This changes the behavior of nested act() calls.
https://github.com/facebook/react/pull/15682
---
🇭🇺 You can now help translate the official React documentation into Hungarian.
https://github.com/reactjs/reactjs.org
---
👷‍♀️ The React repository CI tests are now run using the CircleCI Workflows feature.
This improves status reporting in GitHub.
https://github.com/facebook/react/pull/15704
---
🎇 More work on React Flare:
➡ Updated interactiveUpdates flushing heuristics: https://github.com/facebook/react/pull/15687
➡ getAbsoluteBoundingClientRect now accounts for fixed elements: https://github.com/facebook/react/pull/15707
➡ getEventCurrentTarget now uses the fiber tree: https://github.com/facebook/react/pull/15708
and more.
---
🚧 The fuzz tester is now run on CI periodically with a randomly generated seed.
https://github.com/facebook/react/pull/15718
---
🐛 This change fixes an issue in the fiber code by assigning the missing return pointer correctly.
https://github.com/facebook/react/pull/15700
---
🛤 React Native is moving to path-based imports from the globally-unique-named Haste system.
https://github.com/facebook/react/pull/15604
---
🇪🇺 ReactEurope Recap
I’m writing today’s newsletter from ReactEurope in Paris and there are so many amazing talks. You can catch up by looking at my Twitter thread.
https://twitter.com/PhilippSpiess/status/1131457729250385921
---
👏 This week, 16 people made their first commit in one of the React repositories.
I'm still amazed by how many people are constantly working on making React better. ✨
https://git.io/fjBPf
---
Thank you for following. Don’t forget to like and RT! 👋
| 25.635417 | 143 | 0.764323 | eng_Latn | 0.919639 |
3eb7c6830e8890354268b67ddd08af7c4efef117 | 1,407 | md | Markdown | 2020/09/15/2020-09-15 12:45.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/09/15/2020-09-15 12:45.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/09/15/2020-09-15 12:45.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020年09月15日12时数据
Status: 200
1.美队回应误发私密照
微博热度:4752465
2.无特殊情况不得进出瑞丽市城区
微博热度:3019014
3.王源的厨房不断电
微博热度:2963002
4.虞书欣在姜滨头上撒面粉
微博热度:2960054
5.陈妍希送儿子上学
微博热度:2949431
6.因琐事争端把男朋友论文删了
微博热度:2345129
7.iPhone12
微博热度:2247052
8.云南疫情
微博热度:1314830
9.中欧正式签署中欧地理标志协定
微博热度:1234708
10.警方确认芦名星系自杀
微博热度:1232880
11.男子见义勇为被猴群报复
微博热度:1189534
12.双胞胎姐妹同大学同专业同宿舍
微博热度:1173423
13.打牌被心电图出卖了
微博热度:1098961
14.大理200名余名医护连夜支援瑞丽
微博热度:1019721
15.杨超越港风大片
微博热度:1001245
16.狗不理解除与王府井店加盟方合作
微博热度:973801
17.我喜欢你全员头像无法显示
微博热度:968209
18.中文十级的漫威演员
微博热度:887641
19.退伍军人带4岁女儿军训
微博热度:820930
20.沈腾
微博热度:820147
21.华为芯片断供首日
微博热度:743876
22.倒装句突然有了灵魂
微博热度:524368
23.过于热情的外卖小哥
微博热度:522332
24.云南瑞丽全员核酸检测
微博热度:443177
25.黄子韬悼念爸爸
微博热度:442387
26.刘璇晒女儿正面照
微博热度:441344
27.美国将中国旅行指引调级
微博热度:437690
28.王者荣耀
微博热度:423969
29.特朗普公然违规办大型室内集会
微博热度:367541
30.杨紫成功追星赵薇
微博热度:363010
31.吴仁惠去世
微博热度:359376
32.外卖小哥回应遭大学生短信辱骂
微博热度:343592
33.金星 磷化氢
微博热度:318576
34.美驻华大使布兰斯塔德将离任
微博热度:316136
35.戚薇倒模全记录vlog
微博热度:315208
36.宋茜暗黑公主造型
微博热度:314976
37.深圳交警新制服上自带风扇
微博热度:313709
38.福建小朋友学发音有多难
微博热度:298338
39.雨中故宫
微博热度:252624
40.郑爽解释爽言爽语
微博热度:252392
41.林书豪告别CBA
微博热度:248547
42.张艺兴即兴krump
微博热度:247959
43.拿遇水溶解泳裤整蛊
微博热度:212478
44.一箭九星
微博热度:208325
45.利文斯顿加入勇士管理层
微博热度:201992
46.神曲苏幕遮
微博热度:195393
47.无声快递员说按件算钱接受考核
微博热度:195269
48.兵哥哥真人版演示手枪如何工作
微博热度:194252
49.上海地铁
微博热度:194192
50.初秋法式风情妆
微博热度:193621
| 6.897059 | 19 | 0.782516 | yue_Hant | 0.286027 |
3eb94f94becc2638712b91ec80d8f449a37c0d49 | 268 | md | Markdown | CoffeeLint/no_debugger.md | r3yn0ld4/docs-for-code-review-tools | a1590fce3b30891679373ec284787b227b21df05 | [
"MIT"
] | 4 | 2019-07-17T18:16:06.000Z | 2021-03-28T23:53:10.000Z | CoffeeLint/no_debugger.md | Acidburn0zzz/docs-for-code-review-tools | 9659492c76b988e14363dced6c2ab5f86fcdd6e0 | [
"MIT"
] | null | null | null | CoffeeLint/no_debugger.md | Acidburn0zzz/docs-for-code-review-tools | 9659492c76b988e14363dced6c2ab5f86fcdd6e0 | [
"MIT"
] | 5 | 2018-09-29T17:02:14.000Z | 2021-12-26T16:53:04.000Z | Pattern: Use of `debugger`/`console`
Issue: -
## Description
Disallows `debugger` and/or `console` statements. In general, these statements aren’t appropriate for production code.
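For illustration, a small snippet that this rule would typically flag (example only; `api` is a placeholder):

```coffeescript
fetchUser = (id) ->
  debugger                 # flagged: debugger statement left in the code
  console.log "user", id   # flagged when console is disallowed as well
  api.get "/users/#{id}"
```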
## Further Reading
* [CoffeeLint - no_debugger](http://www.coffeelint.org/#options) | 24.363636 | 118 | 0.75 | eng_Latn | 0.64761 |
3eb9c690f215d7fc0414936550b526bcff9ae1f6 | 776 | md | Markdown | _posts/2013-04-30-guide-for-manly-drinks.md | BlogToolshed50/wellnessmanager.info | 1fe9abf2247ee71f84e83dde192888813364ebb8 | [
"CC-BY-4.0"
] | null | null | null | _posts/2013-04-30-guide-for-manly-drinks.md | BlogToolshed50/wellnessmanager.info | 1fe9abf2247ee71f84e83dde192888813364ebb8 | [
"CC-BY-4.0"
] | null | null | null | _posts/2013-04-30-guide-for-manly-drinks.md | BlogToolshed50/wellnessmanager.info | 1fe9abf2247ee71f84e83dde192888813364ebb8 | [
"CC-BY-4.0"
] | null | null | null | ---
id: 615
title: Guide for Manly Drinks
date: 2013-04-30T04:25:00+00:00
author: admin
layout: post
guid: http://www.wellnessmanager.info/?p=615
permalink: /2013/04/30/guide-for-manly-drinks/
categories:
- General
---
If you want to make sure you’re drinking something manly, you can find many choices at CigarAdvisor. They provide all the information and listed the features about [Manly Drinks](http://www.cigaradvisor.com/magazine/0413/the-manual-his-drink-of-choice) and listed them with features that help you choose your favorite easily. This is a helpful guide to find a lot of different brands and flavors, people who are interested in tasting different flavor can navigate through all brands at Cigar Advisor that help choose the right flavor for your needs. | 64.666667 | 555 | 0.787371 | eng_Latn | 0.995104 |
3eb9f4eaf4a1992970aea31121f8aeabe9784579 | 2,351 | md | Markdown | _posts/2021-06-15-camping-at-martin-dies-jr-state-park-jasper-texas.md | shantoroy/shantoroy.github.io | 3022cb62e91f3c613d83b6f394e44332d0592522 | [
"MIT",
"BSD-3-Clause"
] | 1 | 2020-10-27T05:26:20.000Z | 2020-10-27T05:26:20.000Z | _posts/2021-06-15-camping-at-martin-dies-jr-state-park-jasper-texas.md | shantoroy/shantoroy.github.io | 3022cb62e91f3c613d83b6f394e44332d0592522 | [
"MIT",
"BSD-3-Clause"
] | null | null | null | _posts/2021-06-15-camping-at-martin-dies-jr-state-park-jasper-texas.md | shantoroy/shantoroy.github.io | 3022cb62e91f3c613d83b6f394e44332d0592522 | [
"MIT",
"BSD-3-Clause"
] | 3 | 2020-06-03T10:26:57.000Z | 2021-07-23T21:26:34.000Z | ---
layout: single
title: "Camping and Biking at Martin Dies, Jr. State Park, Jasper, TX"
header:
image: "https://live.staticflickr.com/65535/51253165860_fb875f1bc0_b.jpg"
teaser: "https://live.staticflickr.com/65535/51253165860_fb875f1bc0_b.jpg"
categories:
- Travel
tags:
- Travel
- State Park
- Camping
- Biking
toc: true
toc_label: "Table of Contents"
toc_icon: "heart"
---
We went for a one-night camping trip at Martin Dies, Jr. State Park, a 705-acre recreation area located along U.S. Route 190 on the banks of the Steinhagen Reservoir in Jasper and Tyler counties in Texas. Here, in this video, I tried to show the surrounding areas and campsites in the lakeside park. Except for the heat, it was quite a nice experience camping there.
Unfortunately, the Walnut Ridge Unit was closed for maintenance at this time. Also, we were not able to rent kayaks here, and the condition of the available bikes is not good. I took my own bike and visited a couple of trails alone; it is better to take your own kayak/bike there. There is one bathing place, and the water seems okay for bathing. The washrooms near the bathing area are good; the other washrooms are not of great quality.
Here, I have linked two videos: the first one is a summary of our camping trip (although I missed capturing a lot of moments, including games, the campfire, etc.).
<iframe src="https://www.youtube.com/embed/cMHv8hdtKbU" width="560" height="315" frameborder="0"> </iframe>
<br/><br/>
The other video covers the only trail I finished. There are several other trails; however, I did not have enough time to visit them.
The `Slough Trail` is one of the closest and easiest walk/bike trails near Martin Dies Jr. State Park, Jasper, TX. Apart from our camping, I explored a few trails, and this is the only one I completely explored. The trail map is provided at the beginning of the video, including the distance from our camp site.
<iframe src="https://www.youtube.com/embed/u4CiK4moqWQ" width="560" height="315" frameborder="0"> </iframe>
<br/><br/>
If you want to check my camping item checklist, please feel free to check my other post:
[Camping Checklist: Things to take for Camping](https://shantoroy.com/travel/things-to-take-for-camping/)
Thanks! Cheers!!!
<!--stackedit_data:
eyJoaXN0b3J5IjpbOTE4NjA3MzY2LC0yMDExNzI3OTk0LC0yMT
A1OTI1MjM1XX0=
--> | 54.674419 | 447 | 0.764356 | eng_Latn | 0.996658 |
3eba8eabad33bab1525b19941989bac1b8b6f33e | 5,937 | md | Markdown | blog.md | mvrcx/A2-hcds-hcc | ef2435f157e5a869af46400067dc33c9f9256860 | [
"MIT"
] | null | null | null | blog.md | mvrcx/A2-hcds-hcc | ef2435f157e5a869af46400067dc33c9f9256860 | [
"MIT"
] | null | null | null | blog.md | mvrcx/A2-hcds-hcc | ef2435f157e5a869af46400067dc33c9f9256860 | [
"MIT"
] | null | null | null | # Assignment 2
> **Date:** 14.11.2020 - 22:31 PM *(Due: 17.11.2020 - 03:00 PM)*
> **Name:** `maop` Marc O.
> **Session:** [02 Exercise](https://github.com/FUB-HCC/hcds-winter-2020/wiki/02_exercise)
----
## R2 - Reflection
> Book: The Practice of Reproducible Research (Chapter 2 and 3)
### Definitions
_Reproducibility and replicability:_
The authors provide multiple definitions from different sources. Their preferred definition is the one from Goodman et al.:
* \[Reproducibility is\] the ability of a researcher to duplicate the results of a prior study using the same materials as were used by the original investigator. That is, a second researcher might use the same raw data to build the same analysis files and implement the same statistical analysis in an attempt to yield the same results.
* \[Replicability\] \[...\] refers to the ability of a researcher to duplicate the results of a prior study if the same procedures are followed but new data are collected.
_How does this relate to the definitions given in the lecture?:_
The definitions from the lecture are pretty similar to the ones mentioned above and are based on the same fundamental ideas. While our definition is "including key quantitative findings, tables, and figures \[when providing\] only a set of files and written instructions", Goodman et al. simply bundle these aspects into "results", without specifying more precisely what the result needs to contain. Goodman's definition focusses more on making sure that the "same analysis files and \[...\] same statistical analysis" are being used to receive the same result.
Comparing the definitions of replicability, one finds major differences in terms of the actual purpose of replication. The definition presented in the lecture motivates the reason for replicating a project with "achieving a commensurate, confirming, or contradictory result". Furthermore, the purpose of replication is "to strengthen the foundation of a theory so that it too will become a part of our general knowledge system." These aspects were neglected in the book's definitions.
### 🗨️ "How does the reading inform your understanding of human centered data science?"
_Describe in at least 2-3 full sentences._
Human Centered Data Science is also about providing the code that was produced during research projects and making sure those projects are fully reproducible. A project that is not reproducible or replicable is of little value to other developers, or to people who try to understand and verify the methods used. Publishing such projects therefore comes with a high responsibility towards everyone to whom the results are provided.
### ❓ Questions
_List at least 1 question (full sentence) that this reading raised in your mind, and say why._
1. There are many degrees of reproducibility. What degree is used for what purpose? I can imagine that developing software for a bigger company requires a very high degree of reproducibility. But on the other hand, making sure a project is perfectly reproducible can be very time consuming, and companies often do not have the budget or the will to maintain a fully reproducible workflow. Therefore: what is considered a "minimum" reproducible project, i.e. one that satisfies the criteria for reproducibility without requiring excessive effort?
***
## A2 - Reproducibility Workflow
_Briefly describe your experience using the reproducibility workflow._
* **Step 1:** I started this assignment by setting parameters for the queries through the APIs. I then proceeded to create a directory structure which was presented in Chapter 2. All the queried data was then saved in the `raw` directories.
* **Step 2:** Afterwards, I processed the data by cleaning, extracting, reassigning, and combining dataframes to create the `clean` data in the required format. I decided to compute the average views for the ~1 year of overlapping data. I figured out that simply adding the counts in this overlapping time period would be misleading, since parts of the data were counting the same things (e.g. views for the mobile access type); a rough sketch of this merge is shown after this list.
* **Step 3:** I adjusted the data set and plotted the time series, which seriously gave me headaches when labeling, resizing, and adjusting the axes.
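A rough sketch of the overlap handling mentioned in Step 2 (simplified; file and column names are illustrative, not necessarily the ones used in this repository):

```python
import pandas as pd

legacy = pd.read_csv("raw/pagecounts_desktop-site.csv", parse_dates=["timestamp"])
current = pd.read_csv("raw/pageviews_desktop.csv", parse_dates=["timestamp"])

merged = legacy.merge(current, on="timestamp", how="outer", suffixes=("_legacy", "_current"))

# Where both APIs report the same months, take the mean instead of the sum,
# since the two series partly count the same traffic.
merged["desktop_views"] = merged[["count_legacy", "count_current"]].mean(axis=1)
merged.to_csv("clean/result_data.csv", index=False)
```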
### Final Wikipedia Page View Plot
_Add an image of your plot here and describe what you can see._ 🖼️

* The gray line is showing all traffic (desktop+mobile)
* The green line is showing mobile traffic only
* The blue line is showing desktop traffic only
The time series graph shows the pageviews of Wikipedia from 12/2007 to 09/2020.
**Note that the blue line is partly hidden beneath the gray line. The reason for this is that data on mobile traffic is only available starting from 2014/10. Therefore, all traffic (which is computed as the sum of mobile and desktop traffic) is the same as the desktop traffic until 09/2014.**
### Challenges
_Describe what tasks were challenging to you._
* It was challenging to merge the data without always working with the dataframes as in-memory variables, in order to ensure a reproducible workflow. I must admit that sometimes I just wanted to call a previously created variable containing the needed dataframe. But for this task it was crucial to follow best practices for reproducible research, which meant putting more effort into exporting the acquired data and parsing it again.
* Also, it took me waaaay too long to scale and relabel the axes. I didn't want to manipulate the `result_data.csv` entries by dividing a column by 10 so that my maximum y-axis value wouldn't be 1e10 but 1e9 instead. But lambda functions in Python saved me.
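For reference, the kind of lambda-based rescaling meant here (illustrative; it assumes a matplotlib `Axes` object named `ax`):

```python
import matplotlib.ticker as mticker

# Display the y-axis in billions of views without touching result_data.csv
ax.yaxis.set_major_formatter(
    mticker.FuncFormatter(lambda value, _pos: f"{value / 1e9:.1f}")
)
ax.set_ylabel("Page views (billions)")
```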
_What was surprising, what did you learn?_ 😮
* It was definitely surprising to see how much work it actually is to deliver a fully reproducible project. There's much more to it than I thought there would be.
| 87.308824 | 564 | 0.787098 | eng_Latn | 0.999847 |
3ebaa5dfca33a4b462be2b93db82d82e3993ada5 | 3,038 | md | Markdown | src/pages/metodologia-tdsp/10-Template-TDSP/2-Documentos/4-Proyecto/1-Charter/index.md | erickMunz/guides | 0024a5621666109894a87e2581b13bfc4f7689ce | [
"BSD-3-Clause"
] | null | null | null | src/pages/metodologia-tdsp/10-Template-TDSP/2-Documentos/4-Proyecto/1-Charter/index.md | erickMunz/guides | 0024a5621666109894a87e2581b13bfc4f7689ce | [
"BSD-3-Clause"
] | null | null | null | src/pages/metodologia-tdsp/10-Template-TDSP/2-Documentos/4-Proyecto/1-Charter/index.md | erickMunz/guides | 0024a5621666109894a87e2581b13bfc4f7689ce | [
"BSD-3-Clause"
] | null | null | null | ---
title: 1-Charter
---
## Project charter
Business understanding
```
Who is the client? What business domain is the client in?
What business problems are we trying to solve?
```
Scope
```
What data science solutions are we trying to build?
What will we do?
How will it be consumed by the client?
```
Personnel
```
Who is on this project:
Microsoft:
Project lead
PM
Data scientist(s)
Account manager
Client:
Data administrator
Business contact
```
Metrics
```
What are the qualitative objectives? (for example, reduce user churn)
What is a quantifiable metric? (for example, reduce the fraction of users with 4-week inactivity)
Quantify what improvement in the metric values is useful for the client scenario (for example, reduce the fraction of users with 4-week inactivity by 20%)
What is the baseline (current) value of the metric? (for example, the current fraction of users with 4-week inactivity = 60%)
How will we measure the metric? (for example, an A/B test on a specific subset for a specific period, or a comparison of performance after deployment against the baseline)
```
Plan
```
Phases (milestones), timeline, and a short description of what we will do in each phase.
```
Architecture
```
- Data
What data do we expect? Raw data in the client's data sources (for example, on-premises files, SQL, on-premises Hadoop, etc.)
- Data movement from on-premises to Azure using ADF or other data movement tools (AzCopy, EventHub, etc.) to move either
all the data,
data pre-aggregated on-premises,
or a data sample sufficient for modeling
- What data storage/analytics tools and resources will be used in the solution, e.g.
ASA for stream aggregation
HDI / Hive / R / Python for feature construction, aggregation, and sampling
AzureML for modeling and operationalizing web services
- How will the score or the operationalized web services (RRS and/or BES) be consumed in the client's business workflow? If applicable, write the pseudo-code for the APIs of the web service calls.
How will the client use the model results to make decisions?
Data movement pipeline in production
Make a 1-slide diagram showing the end-to-end data flow and decision architecture
If there is a substantial change in the client's business workflow, make a before/after diagram showing the data flow.
```
Communication
```
How will we keep in touch? Weekly meetings?
Who are the contact persons on both sides? ```
3ebb69b2a55a0dbd149be61c5b45c7b1f725220e | 1,908 | markdown | Markdown | _posts/2020-05-31-from_ruby_to_javascript.markdown | biagioo/biagioo.github.io | c557384b23dbd311e2cf9d8a3b86bcad36e45edc | [
"MIT"
] | null | null | null | _posts/2020-05-31-from_ruby_to_javascript.markdown | biagioo/biagioo.github.io | c557384b23dbd311e2cf9d8a3b86bcad36e45edc | [
"MIT"
] | null | null | null | _posts/2020-05-31-from_ruby_to_javascript.markdown | biagioo/biagioo.github.io | c557384b23dbd311e2cf9d8a3b86bcad36e45edc | [
"MIT"
] | null | null | null | ---
layout: post
title: "From Ruby To JavaScript "
date: 2020-05-31 14:25:04 +0000
permalink: from_ruby_to_javascript
---
Learning ruby as a first programming language has been truly enjoyable. Once I had finished my Rails project and was starting my JavaScript curriculum, I quickly noticed how readable and expressive Ruby was. I find Ruby to be expressive, intuitive, simple and powerful in many ways!
Code in Ruby seems to be easier on the eyes and brain when you're learning it, where as in JavaScript, it sometimes can get a little confusing! For example, in Ruby, to declare a variable would be something like variable_name = value. In JavaScript you have a few different options on what kind of variable you want to declare. It could be either a var, let, or const. To Declare a variable would be something like this var varName = value. It’s small things like this in JavaScript that constantly trip me up! I feel like in ruby we are spoiled with the concept of convention over configuration. In Ruby there is more of a set way to do things and if you follow those guidelines, usually your code is clean and gentle on the eyes. I feel like JavaScript is best compared to the Wild West. There are a million ways to do something and all of those ways are acceptable as long as the goal is accomplished!
As I code with JavaScript more and more I start to understand how powerful it is. I find myself enjoying the language the more I learn about it! Being able to pass in functions as a parameter seems super powerful! Another thing that I enjoy about JavaScript is that you can make changes to how things work in the browser. When I first learned about event listeners I thought they were the coolest things. Having the power to make web pages more responsible is pretty cool!
Even though JavaScript is intimidating I'm eager to learn more and become proficient in the language !
| 112.235294 | 905 | 0.786164 | eng_Latn | 0.999937 |
3ebbfe5cabd755840cdc6a7d008dcd7dbcf7b542 | 954 | md | Markdown | README.md | isebres/fanControl | 7ea774e3c86198137a7c370c106f8d456887ca69 | [
"MIT"
] | 4 | 2018-07-28T16:38:33.000Z | 2021-06-30T09:07:07.000Z | README.md | isebres/fanControl | 7ea774e3c86198137a7c370c106f8d456887ca69 | [
"MIT"
] | null | null | null | README.md | isebres/fanControl | 7ea774e3c86198137a7c370c106f8d456887ca69 | [
"MIT"
] | 1 | 2018-06-03T03:36:33.000Z | 2018-06-03T03:36:33.000Z | fanControl
-------
Simple OS X console utility for temperatures and fan speed monitoring writen on Swift. Additionally maximal and minimal fan speed can be adjusted.
For monitoring run `fanControl` and you can see something like this:
```
Fan speeds (RPM):
ID Name Min Cur Max
0 Exhaust 1200 2041 7200
Temperatures (°C):
CPU 69.7
MEM 51.0
ENC 36.0
HSK 51.5
```
For control fan speed run utility with "sudo" `sudo fanControl -id=<ID> -rpm=<MIN>-<MAX>`
In case of success you will be notified with something like this:
```
Exhaust fan (ID:0):
RPM successfully changed from XXX-XXX to MIN-MAX
```
Inspired by https://github.com/beltex/SMCKit
Warning
-------
This tool will allow you to write values to the SMC which could irreversably damage your
computer. Manipulating the fans could cause overheating and permanent damange. USE THIS
PROGRAM AT YOUR OWN RISK!
| 32.896552 | 147 | 0.681342 | eng_Latn | 0.941911 |
3ebc8f36afeb80d7b5cb469ea3176446286200b4 | 3,677 | md | Markdown | README.md | netgfx/Spookd | ca730f1a93a22e95039c4d900b2508bbdb392e7d | [
"MIT"
] | 2 | 2021-10-17T00:46:33.000Z | 2021-10-21T07:56:25.000Z | README.md | netgfx/Spookd | ca730f1a93a22e95039c4d900b2508bbdb392e7d | [
"MIT"
] | null | null | null | README.md | netgfx/Spookd | ca730f1a93a22e95039c4d900b2508bbdb392e7d | [
"MIT"
] | null | null | null | # Spookd
 
A Framer prototype game with Supabase as backend
This is a proof of concept project, to showcase how can a multiplayer game be created via any UI and a Supabase db.
Using the DB to store data and the realtime communication (through websockets) to broadcast events.
This game features:
- Creating user accounts
- Creating game lobby
- Joining game
- Playing and receiving gameplay events (player ready, win, loss, shared game data)
https://user-images.githubusercontent.com/864248/136512258-004e1618-d975-4f23-bbeb-67a552dccd93.mp4
```
Gameplay instructions:
- Pick an avatar
- Make or Join a game (put 1 player to play a solo challenge)
- Press Start to indicate that you are ready to play
```
### Prefered platform: `Mobile`
## Gameplay link: https://framer.com/share/Framer-Supabase-multiplayer--1rmYcBAnSk0PVvXGrnlc/e0t612XvT?fullscreen=1
<br>
## Vocal brief description of the project: https://www.youtube.com/watch?v=8SsWL3unwGI
# Documentation

---
### The random blocks (PGB - Procedurally Generated Blocks) are smart components with 3 shapes inside and variable colors/shapes


# Prototype Views (Framer)
## Registration
- Character picker bottom sheet
## Menu
- Game Creation
- Game creation bottom sheet form
- Game finder
- Game password modal
- Score
## Gameplay View
- Random target block modal
- Win modal
- Loss modal
---
### Mindmap

---
# API Overview
## User creation
- User is automatically created if they don't exist on local storage
- User is assigned a unique ID
- User is assigned a custom avatar
## Game creation
- Host user is creating a game with [name, max-players, password(optional)]
- Game is created on the database with (id,created_at,updated_at,room_pass,min_players,room_name,recycle,last_heartbeat,players,game_data,winner,target_block,started)
- The host is added as one of the players on the `players` column
- The game blocks layout is saved on the `game_data` column
- Idle games ( `last_heartbeat` ) for more than 1-hour are deleted
## Game join
- User finds the game through the list of games from the game finder view
- User adds password for the game if needed
- User joins the selected game and they are registered as the `nth` user on the database on the `players` column
- The "guest" user is subscribed to receive events via the Realtime Supabase https://supabase.io/docs/reference/javascript/subscribe
- Both players receive events when a new player registers, if the `minimum_players` value matches the `players` length
## Game play
- The "guest" user receives the pre-made layout from the `game_data`
- When the "host" and "guest" both press `Ready` the game starts
- When the game stars both players are presented with a shape combination to discover
- The first one that discovers the correct combination is the winner and the `winner` column is filled with the `player_name`
- Both players are notified via the subscription for the winner
- The loss modal is shown
- The win modal is shown
---
### Mindmap

## Clean up
- Right now clients are responsible of cleaning up idle games that their `last_heartbeat` is over 1 hour.
- Games also get removed after a winner has been declared but users refreshing the page or navigating away can leave the game in an idle state. In the future a dedicated server or Supabase function with a cron job could be responsible for handling these leftovers.
---
## Team
```
Michael Dobekidis
Twitter: @netgfx
Github: https://github.com/netgfx
```
| 29.894309 | 264 | 0.759587 | eng_Latn | 0.994076 |
3ebd88116674640b9ce28deff1bb736bb4492eb9 | 32,099 | md | Markdown | articles/documentdb/documentdb-community.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | articles/documentdb/documentdb-community.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | articles/documentdb/documentdb-community.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | <properties
pageTitle="Comunidad y noticias de DocumentDB | Microsoft Azure"
description="Únase a la comunidad de Azure DocumentDB para crear relaciones, exhibir su trabajo y mejorar sus habilidades."
services="documentdb"
documentationCenter=""
authors="aliuy"
manager="johnmac"
editor="mimig"/>
<tags
ms.service="documentdb"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="data-services"
ms.date="09/12/2016"
ms.author="andrl"/>
# Portal de la Comunidad
## Noticias destacadas de la Comunidad
Permítanos promover su proyecto. Muéstrenos el impresionante proyecto en el que está trabajando con DocumentDB y ayúdenos a compartir su genialidad con el mundo. Para enviar el proyecto, envíenos un correo electrónico a la dirección: [[email protected]](mailto:[email protected]).
### documentdb-lumenize
*por Larry Maccherone*
Agregaciones (group by, tabla dinámica y cubo de N dimensiones) y transformaciones de serie temporal como procedimientos almacenados en DocumentDB.
Consulte al respecto en [Github](https://github.com/lmaccherone/documentdb-lumenize) y [npm](https://www.npmjs.com/package/lumenize).
### DocumentDB Studio
*por Liu Ming*
Un visor y explorador de administración de clientes para el servicio Microsoft Azure DocumentDB.
Consulte al respecto en [Github](https://github.com/mingaliu/DocumentDBStudio).
### DoQmentDB
*por Ariel Mashraki*
DoQmentDB es un cliente basado en objetos promise de Node.js, que proporciona una capa similar a MongoDB sobre DocumentDB.
Consulte al respecto en [Github](https://github.com/a8m/doqmentdb) y [npm](https://www.npmjs.com/package/doqmentdb).
### API de REST Swagger para DocumentDB
*por Howard Edidin*
Archivo Swagger de API de REST de DocumentDB que se puede implementar fácilmente como una aplicación de API.
Consulte al respecto en [Github](https://github.com/HEDIDIN/DocumentDB-REST/tree/master/DocumentDBRestApi).
### fluent-plugin-documentdb
*por Yoichi Kawasaki*
fluent-plugin-documentdb es un complemento de Fluentd para escribir en Azure DocumentDB.
Consulte al respecto en [Github](https://github.com/yokawasa/fluent-plugin-documentdb) y [rubygems](https://rubygems.org/gems/fluent-plugin-documentdb).
*Encuentre más proyectos de DocumentDB de código abierto en [GitHub](https://github.com/search?p=4&q=documentdb&type=Repositories).*
## Noticias, blogs y artículos
Para mantenerse al día respecto a las novedades y características más recientes de DocumentDB, síganos en [nuestro blog](https://azure.microsoft.com/blog/tag/documentdb/).
**Entradas de la Comunidad:**
- [**Going Social with DocumentDB**](https://blogs.msdn.microsoft.com/mvpawardprogram/2016/03/15/going-social-with-documentdb/) (Entre en las redes sociales con DocumentDB), de *Matias Quarantaas*
- [**UWP, Azure App Services, and DocumentDB Soup: A photo-sharing app**](https://blogs.windows.com/buildingapps/2016/03/17/uwp-azure-app-services-and-documentdb-soup-a-photo-sharing-app/) (UWP, servicios de aplicaciones de Azure y DocumentDB: una aplicación para compartir fotos), de *Eric Langland*
- [**Notifications for new or changed DocumentDB resources using Logic Apps**](documentdb-change-notification.md) (Notificaciones para los recursos de DocumentDB nuevos o modificados con Aplicaciones lógicas), *de Howard Edidin*
- [**Collecting logs into Azure DocumentDB using fluent-plugin-documentdb **](http://unofficialism.info/posts/collecting-logs-into-azure-documentdb-using-fluent-plugin-documentdb/) (Recopilación de registros en Azure DocumentDB mediante fluent-plugin-documentdb), de *Yoichi Kawasaki*
- [**DocumentDB revisited Part 1/2 – The theory**](https://peterintheazuresky.wordpress.com/2016/02/19/documentdb-revisited-part-12-the-theory/) (Repaso de DocumentDB parte 1 de 2: La teoría), de *Peter Mannerhult*
- [**What to love and hate about Azure’s DocumentDB**](http://blog.falafel.com/4-what-to-love-and-hate-about-azures-documentdb/) (Ventajas y desventajas de Azure DocumentDB), de *George Saadeh*
- [**Azure DocumentDB Server-Side Scripting**](https://www.simple-talk.com/cloud/cloud-data/azure-documentdb-server-side-scripting/) (Scripting del lado servidor de Azure DocumentDB), de *Robert Sheldon*
- [**DocumentDB as a data sink for Azure Stream Analytics**](http://janatdevelopment.com/2015/12/11/documentdb-as-a-data-sink-for-azure-stream-analytics/?utm_source=twitterfeed&utm_medium=twitter) (DocumentDB como receptor de datos para Análisis de transmisiones de Azure) - *por Jan Hentschel*
- [**Azure DocumentDB in production!**](http://blog.nexapp.ca/2015/11/30/azure-documentdb-in-production/) (¡Azure DocumentDB en producción!) - *por Alexandre Walsh y Marc Olivier Duval*
- [**Azure Search Indexers – DocumentDB Queries**](http://www.ealsur.com.ar/wp/index.php/2015/11/19/azure-search-indexers-documentdb-queries/) (Indexadores de Búsqueda de Azure: consultas de DocumentDB) - *por Matthias Quaranta*
- [**Azure DocumentDB SQL query basics (Datos básicos de consultas SQL de Azure DocumentDB) (japonés) **](http://beachside.hatenablog.com/entry/2015/12/06/000045) - *por Atsushi Yokohama*
- [**Data Points - Aurelia Meets DocumentDB: A Matchmaker’s Journey**](https://msdn.microsoft.com/magazine/mt620011.aspx) (Puntos de datos: Aurelia se reúne con DocumentDB: viaje de un intermediario) - *por Julie Lerman*
- [**Infrastructure as Code and Continuous Deployment of a Node.js + Azure DocumentDB Solution**](http://www.talmeida.net/blog/2015/10/26/infrastructure-as-code-and-continuous-deployment-of-a-nodejs-azure-documentdb-solution) (Infraestructura como código e implementación continua de un Node.js + solución Azure DocumentDB) - *por Thiago Almedia*
- [**Why DocumentDb Makes Good Business Sense for Some Projects**](http://www.iquestllc.com/blogs/read/405/why-documentdb-makes-good-business-sense-for-some-projects) (Por qué DocumentDB es un buen negocio para algunos proyectos) - *por Samuel Uresin*
- [**Azure DocumentDB development moving forward – development of the Client class (Progreso del desarrollo de Azure DocumentDB: desarrollo de la clase de cliente) (1 de 2) (japonés)**](http://beachside.hatenablog.com/entry/2015/10/01/202734) - *por Atsushi Yokohama*
- [**Things you need to know when using Azure DocumentDB (Cosas que necesita saber al usar Azure DocumentDB) (japonés)**](http://beachside.hatenablog.com/entry/2015/10/01/202734) - *por Atsushi Yokohama*
- [**Dealing with RequestRateTooLarge errors in Azure DocumentDB and testing performance**](http://blogs.msdn.com/b/bigdatasupport/archive/2015/09/02/dealing-with-requestratetoolarge-errors-in-azure-documentdb-and-testing-documentdb-performance.aspx) - (Tratamiento de errores RequestRateTooLarge en Azure DocumentDB y prueba de rendimiento) *by Azim Uddin*
- [**Data Points - An Overview of Microsoft Azure DocumentDB**](https://msdn.microsoft.com/magazine/mt147238.aspx) (Puntos de datos: Una introducción a Microsoft Azure DocumentDB) - *por Julie Lerman*
- [**Using DocumentDB With F#**](https://jamessdixon.wordpress.com/2014/12/30/using-documentdb-with-f/) (Uso de DocumentDB con F#) *Jamie Dixon*
- [**Analysing Application Logs with DocumentDB**](http://vincentlauzon.com/2015/09/06/analysing-application-logs-with-documentdb/) (Análisis de registros de aplicaciones con DocumentDB) - *por Vincent-Philippe Lauzon*
- [**Azure DocumentDB – Point in time Backups**](http://softwarejuancarlos.com/2015/09/06/azure-documentdb-point-in-time-backups/) (Azure DocumentDB: Copias de seguridad en un momento dado) - *por Juan Carlos Sánchez*
*¿Tiene una entrada de blog, un ejemplo de código o un caso práctico que le gustaría compartir? [Queremos saber](mailto:[email protected])*
## Eventos y grabaciones
### Eventos recientes y futuros
| Nombre del evento | Orador | Ubicación | Date | Hashtag |
| -------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | -------------------- | ------------------------ | ------- |
| [Ignite 2016](https://myignite.microsoft.com/sessions?q=documentdb) | Andrew Liu y Tara Jana | Atlanta, GA | 26 a 30 de septiembre de 2016 | [#MSIgnite](https://twitter.com/MS_Ignite) |
| [Strata + Hadoop World](http://conferences.oreilly.com/strata/hadoop-big-data-ny/?cmp=kn-data-confreg-home-stny16_bing_branded) | TBD | Nueva York, NY | 26 a 29 de septiembre de 2016 | [#StrataConf](https://twitter.com/strataconf) |
| [Grupo de usuarios de .NET de Capital City](http://www.meetup.com/tally-dot-net/events/233768568/) | Santosh Hari | Tallahassee, Florida (EE. UU.) | 3 de noviembre de 2016 | N/D |
*¿Es el orador u organizador de un evento? [Queremos saber](mailto:[email protected]) cómo podemos ayudarle*
### Eventos y grabaciones anteriores
| Nombre del evento | Orador | Ubicación | Date | Grabación |
| -------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | -------------------- | ---------------------- | --------- |
| [DevTeach](http://devteach.com/) | Ken Cenerelli | Montreal, Canadá | 4 a 8 de julio de 2016 | [NoSQL, No Problem, Using Azure DocumentDB (Sin SQL, no hay problemas: Usar Azure DocumentDB)](http://www.slideshare.net/KenCenerelli) |
| [Integración and IoT ](http://www.btug.be/events) | Eldert Grootenboer | Kontich, Bélgica | 30 de junio de 2016 | N/D |
| [MongoDB World 2016](https://www.mongodb.com/world16) | Kirill Gavrylyuk | Nueva York, Nueva York | 28 a 29 de junio de 2016 | N/D |
| [Integration User Group](http://www.integrationusergroup.com/do-logic-apps-support-error-handling/) | Howard S. Edidin | Difusión web | 20 de junio de 2016 | [Do Logic Apps support error handling? (¿Admiten las Aplicaciones lógicas el control de errores?)](http://www.integrationusergroup.com/do-logic-apps-support-error-handling/) |
| [Meetup: UK Azure User Group](http://www.meetup.com/UKAzureUserGroup/events/229673468/)| Andrew Liu | Londres, Reino Unido | 12 de mayo de 2016 | N/D
|[Meetup: ONETUG - Orlando .NET User Group ](http://www.meetup.com/ONETUG/events/230797164/)| Santosh Hari| Orlando, FL| 12 de mayo de 2016| N/D
| [SQLBits XV](https://sqlbits.com/) | Andrew Liu y Aravind Ramachandran | Liverpool, Reino Unido | Del 4 al 7 de mayo de 2016 | N/D| [Meetup: NYC .NET Developers Group (Encuentro: grupo de desarrolladores de .NET en Nueva York)](http://www.meetup.com/NYC-NET-Developers/events/230396260/) | Leonard Lobel | Ciudad de Nueva York, NY | 21 de abril de 2016 | N/D |
| [Integration User Group](http://www.integrationusergroup.com/#) | Howard Edidin | Seminario web | 25 de abril de 2016 | N/D |
| [Global Azure Bootcamp: SoCal (Campo de entrenamiento global de Azure: Sur de California)](http://xprs.imcreator.com/free/vishalishere/gab2016) | Leonard Lobel | Orange, CA | 16 de abril de 2016 | N/D |
| [Global Azure Bootcamp: Redmond (Campo de entrenamiento global de Azure: Redmond)](https://www.eventbrite.com/e/2016-global-azure-bootcamp-redmond-wa-tickets-21387752343) | David Makogon | Redmond, WA | 16 de abril de 2016 | N/D |
| [SQL Saturday #481 - Israel 2016](http://www.sqlsaturday.com/481/Sessions/Details.aspx?sid=40912) | Leonard Lobel | HaMerkaz, Israel | 04 de abril de 2016 | N/D |
| [Build 2016](https://build.microsoft.com/) | John Macintyre | San Francisco, California | 31 de marzo de 2016 | [Delivering Applications at Scale with DocumentDB, Azure's NoSQL Document Database (Entrega de aplicaciones a gran escala con DocumentDB, la Base de datos de documentos no SQL de Azure)](https://channel9.msdn.com/Events/Build/2016/B840)
| [SQL Saturday #505 - Belgium 2016](http://www.sqlsaturday.com/505/Sessions/Details.aspx?sid=44217) | Mihail Mateev | Amberes, Bélgica | 19 de marzo de 2016 | N/D |
| [Meetup: CloudTalk](http://www.meetup.com/CloudTalk/events/227963695/) | Kirat Pandya | Bellevue, WA | 3 de marzo de 2016 | N/D |
| [Meetup: Azure Austin](http://www.meetup.com/azureaustin/events/228209275/) | Merwan Chinta | Austin, TX | 28 de enero de 2016 | N/D |
| [Meetup: msdevmtl (Encuentro: msdevmtl)](http://www.meetup.com/msdevmtl/events/223839818/) | Vincent-Philippe Lauzon | Montreal, QC, Canadá | 1 de diciembre de 2015 | N/D |
| [Meetup: SeattleJS](http://www.meetup.com/seattlejs/events/220102664/) | David Makogon | Seattle, WA | 12 de noviembre de 2015 | N/D |
| [PASS Summit 2015](http://www.sqlpass.org/summit/2015/) | Jeff Renz, Andrew Hoh, Aravind Ramachandran, John Macintyre | Seattle, WA | Del 27 al 30 de octubre de 2015 | [Desarrollo de aplicaciones modernas en Azure](https://www.youtube.com/watch?v=k5Z24HX-RyQ) |
| [CloudDevelop 2015](http://www.clouddevelop.org/) | David Makogon, Ryan Crawcour | Columbus, OH | 23 de octubre de 2015 | N/D |
| [SQL Saturday #454 - Turín 2015](http://www.sqlsaturday.com/454/Sessions/Details.aspx?sid=40130) | Marco De Nittis | Turín, Italia | 10 de octubre de 2015 | N/D |
| [SQL Saturday #430 - Sofia 2015](http://www.sqlsaturday.com/430/Sessions/Details.aspx?sid=36090) | Leonard Lobel | Sofía, Bulgaria | 10 de octubre de 2015 | N/D |
| [SQL Saturday #444 - Kansas City 2015](http://www.sqlsaturday.com/444/Sessions/Details.aspx?sid=38576) | Jeff Renz | Kansas City, MO | 3 de octubre de 2015 | N/D |
| [SQL Saturday #429 - Oporto 2015](http://www.sqlsaturday.com/429/Sessions/Details.aspx?sid=36089) | Leonard Lobel | Oporto, Portugal | 3 de octubre de 2015 | N/D |
| [AzureCon](https://azure.microsoft.com/azurecon/) | David Makogon, Ryan Crawcour, John Macintyre | Evento virtual | 29 de septiembre de 2015 | [Azure data and analytics platform](https://channel9.msdn.com/events/Microsoft-Azure/AzureCon-2015/ACON207) (Plataforma de análisis y datos de Azure) [Working with NoSQL Data in DocumentDB (Trabajo con datos NoSQL en DocumentDB)](https://channel9.msdn.com/Events/Microsoft-Azure/AzureCon-2015/ACON338). |
| [SQL Saturday #434 - Holland 2015](http://www.sqlsaturday.com/434/Sessions/Details.aspx?sid=36413) | Leonard Lobel | Utrecht, Países Bajos | 26 de septiembre de 2015 | [Introducción a Base de datos de documentos de Azure](https://channel9.msdn.com/Blogs/Windows-Azure/SQL-Saturday-Holland-2015-Introduction-to-Azure-DocumentDB) |
| [SQL Saturday #441 - Denver 2015](http://www.sqlsaturday.com/441/Sessions/Details.aspx?sid=39191) | Jeff Renz | Denver, Colorado | 19 de septiembre de 2015 | N/D |
| [Meetup: San Francisco Bay Area Azure Developers (Encuentro: desarrolladores de Azure de la zona de la Bahía de San Francisco)](http://www.meetup.com/bayazure/events/223943785/) | Andrew Liu | San Francisco, California | 15 de septiembre de 2015 | N/D |
| [Reunión del grupo de usuario de Azure de Bielorrusia](https://www.facebook.com/events/786540124800276/) | Alex Zyl | Minsk, Bielorrusia | 9 de septiembre de 2015 | [Introducción al concepto de la Base de datos de documentos, niveles de coherencia, estrategias de particionamiento](https://www.youtube.com/watch?v=Uc_qwWzJKH8) |
| [NoSQL Now! (¡NoSQL ya!)](http://nosql2015.dataversity.net/) | David Makogon, Ryan Crawcour | San José, CA | 18-20 de agosto de 2015 | N/D |
| [@Scale Seattle](http://www.atscaleconference.com/) | Dharma Shukla | Seattle, WA | 17 de junio de 2015 | [Schema Agnostic Indexing with Azure DocumentDB (Indexación independiente de esquemas con Azure DocumentDB)](https://www.youtube.com/watch?v=VJQ_5qFFVP4) |
| [Tech Refresh 2015 (Actualización tecnológica 2015)](https://channel9.msdn.com/Events/DXPortugal/Tech-Refresh-2015) | Bruno Lopes | Lisboa, Portugal | 15 de junio de 2015 | [DocumentDB 101](https://channel9.msdn.com/Events/DXPortugal/Tech-Refresh-2015/DPDEV01) |
| [SQL Saturday #417 - Sri Lanka 2015](http://www.sqlsaturday.com/417/Sessions/Details.aspx?sid=21415) | Mihail Mateev | Colombo, Sri Lanka | 06 de junio de 2015 | N/D |
| [Meetup: Seattle Scalability Meetup](http://www.meetup.com/Seattle-Scalability-Meetup/events/204010442/) | Dharma Shukla | Seattle, WA | 27 de mayo de 2015 | N/D |
| [SQL Saturday #377 - Kiev 2015](http://www.sqlsaturday.com/377/Sessions/Details.aspx?sid=20322) | Mihail Mateev | Kiev, Ucrania | 23 de mayo de 2015 | N/D |
| [Mes de la base de datos](http://www.databasemonth.com/database/azure-documentdb) | Dharma Shukla | Nueva York, NY | 19 de mayo de 2015 | [Azure DocumentDB: Massively-Scalable, Multi-Tenant Document Database Service (Azure DocumentDB: servicio de base de datos de documentos multiempresa y escalable de forma masiva)](https://www.youtube.com/watch?v=iZsqBc3Dkbk) |
| [Meetup: London SQL Server User Group (Encuentro: grupo de usuarios de SQL Server de Londres)](http://www.meetup.com/London-SQL-Server-User-Group/events/221525058/) | Allan Mitchell | Londres, Reino Unido | 19 de mayo de 2015 | N/D |
| [DevIntersection](https://devintersection.com/) | Andrew Liu | Scottsdale, AZ | Del 18 al 21 de mayo de 2015 | N/D |
| [Meetup: Seattle Web App Developers Group (Encuentro: grupo de desarrolladores de aplicaciones web de Seattle)](http://www.meetup.com/Seattle-Web-App-Developers-Group/events/220591071/) | Andrew Liu | Seattle, WA | 14 de mayo de 2015 | N/D |
| [Ignite](http://ignite.microsoft.com/) | Andrew Hoh, John Macintyre | Chicago, IL | Del 4 al 8 de mayo de 2015 | [SELECT Latest FROM DocumentDB](https://azure.microsoft.com/documentation/videos/microsoft-ignite-2015-select-latest-from-microsoft-azure-documentdb/) (Vídeo sobre la selección de lo más novedoso de DocumentDB) [DocumentDB and Azure HDInsight: Better Together](https://azure.microsoft.com/documentation/videos/microsoft-ignite-2015-microsoft-azure-documentdb-and-azure-hdinsight-better-together/) (Vídeo sobre cómo DocumentDB y HDInsight de Azure funcionan mejor juntos) |
| [Build 2015](http://www.buildwindows.com/) | Ryan Crawcour | San Francisco, California | Del 29 de abril al 1 de mayo de 2015 | [Build the Next Big Thing with Azure’s NoSQL Service: DocumentDB (Compilación del siguiente gran proyecto con el servicio NoSQL de Azure: DocumentDB)](https://channel9.msdn.com/Events/Build/2015/2-729) |
| [Global Azure Bootcamp 2015 - Spain (Campo de entrenamiento global de Azure 2015 - España)](http://azurebootcamp.es/) | Luis Ruiz Pavon, Roberto Gonzalez | Madrid, España | 25 de abril de 2015 | [#DEAN DocumentDB + Express + AngularJS + NodeJS running on Azure(#DEAN DocumentDB + Express + AngularJS + NodeJS en Azure)](https://channel9.msdn.com/events/Developers-Spain-Events/Global-Azure-Bootcamp-2015/DEAN-DocumentDB--Express--AngularJS--NodeJS-running-on-Azure) |
| [Meetup: Azure Usergroup Denmark (Encuentro: grupo de usuarios de Azure, Dinamarca)](http://www.meetup.com/Azure-Usergroup-Denmark/events/221026670/) | Christian Holm Diget | Copenhague, Dinamarca | 16 de abril de 2015 | N/D |
| [Meetup: Charlotte Microsoft Cloud (Encuentro: Microsoft Cloud, Charlotte)](http://www.meetup.com/Charlotte-Microsoft-Cloud/events/221503519/) | Jamie Rance | Charlotte, NC | 8 de abril de 2015 | N/D |
| [SQL Saturday #375 - Silicon Valley 2015](http://www.sqlsaturday.com/375/Sessions/Details.aspx?sid=15289) | Ike Ellis | Mountain View, CA | 28 de marzo de 2015 | N/D |
| [Meetup: Istanbul Azure Meetup (Encuentro: encuentro Azure en Estambul)](http://www.meetup.com/istanbul-azure-meetup/events/220325538/) | Daron Yondem | Estambul, Turquía | 7 de marzo de 2015 | N/D |
| [Meetup: Great Lakes Area .Net User Group (Encuentro: grupo de usuarios de .Net de la zona de los Grandes Lagos)](http://www.meetup.com/Great-Lakes-Area-NET-User-Group-MIGANG/events/220364576/) | Michael Collier | Southfield, MI | 18 de febrero de 2015 | N/D |
| [TechX Azure](https://www.youtube.com/channel/UCDRlI2E4z5qmHsBXTrFOE2Q) | Magnus Mårtensson | Estocolmo, Suecia | Del 28 al 29 de enero de 2015 | [DocumentDB in Azure the new NoSQL option for the Cloud (DocumentDB en Azure, la nueva opción NoSQL para la nube)](https://www.youtube.com/watch?v=Hw7hDYoChNI) |
### Vídeos y podcasts
| Presentación | Orador | Date | Episodio |
| ------------------------------------------- | --------------------------- | ------------------ | ------- |
| Channel 9: Microsoft + Open Source (Microsoft + código abierto) | José Miguel Parrella | 14 de abril de 2016 | [From MEAN to DEAN in Azure with Bitnami, VM Scale Sets and DocumentDB (De MEAN a DEAN en Azure con Bitname, conjuntos de escalado de VM y DocumentDB)](https://channel9.msdn.com/Blogs/Open/From-MEAN-to-DEAN-in-Azure-with-Bitnami-VM-Scale-Sets-and-DocumentDB) |
| Wired2WinWebinar | SAI Sankar Kunnathukuzhiyil | 9 de marzo de 2016 | [Developing Solutions with Azure DocumentDB (Desarrollo de soluciones con Azure DocumentDB)](https://www.youtube.com/watch?v=xKttEwXv_bs)
| Integration User Group | Han Wong | 17 de febrero de 2016 | [Analyze and visualize non-relational data with DocumentDB + Power BI (Análisis y visualización de datos no relacionales mediante DocumentDB + Power BI](http://www.integrationusergroup.com/analyze-visualize-non-relational-data-documentdb-power-bi/) |
| The Azure Podcast | Cale Teeter | 14 de enero de 2016 | [Episode 110: Using DocumentDB & Search (Episodio 110: uso de DocumentDB y características de búsqueda)](http://azpodcast.azurewebsites.net/post/Episode-110-Using-DocumentDB-Search) |
| Channel 9: Modern Applications | Tara Shankar Jana | 13 de diciembre de 2016 | [Take a modern approach to data in your apps (Adopción de un enfoque moderno para los datos de las aplicaciones)](https://channel9.msdn.com/Series/Modern-Applications/Take-a-modern-approach-to-data-in-your-apps) |
| NinjaTips | Miguel Quintero | 10 de diciembre de 2015 | [DocumentDB - Un vistazo general](https://channel9.msdn.com/Series/Ninja-Tips/31-NinjaTips-Desarrollo-DocumentDB-1-Vistazo-general) |
| Integration User Group | Howard Edidin | 9 de noviembre de 2015 | [Azure DocumentDB for Healthcare Integration – Part 2 (Azure DocumentDB para la integración sanitaria, parte 2)](http://www.integrationusergroup.com/azure-documentdb-for-healthcare-integration-part-2/) |
| Integration User Group | Howard Edidin | 5 de octubre de 2015 | [Azure DocumentDB for Healthcare Integration (Azure DocumentDB para la integración sanitaria)](http://www.integrationusergroup.com/?event=azure-documentdb-and-biztalk) |
| DX Italy - #TecHeroes | Alessandro Melchiori | 2 de octubre de 2015 | [#TecHeroes - DocumentDB](https://channel9.msdn.com/Shows/TecHeroes/TecHeroes-DocumentDB) |
| Presentación de Microsoft Cloud: podcast | Andrew Liu | 30 de septiembre de 2015 | [Episodio 099: Azure DocumentDB con Andrew Liu](http://www.microsoftcloudshow.com/podcast/Episodes/099-azure-documentdb-with-andrew-liu) |
| .NET Rocks!: podcast | Ryan Crawcour | 29 de septiembre de 2015 | [Data on DocumentDB with Ryan CrawCour (Datos sobre DocumentDB con Ryan CrawCour)](https://www.dotnetrocks.com/?show=1197) |
| Exposición de datos | Ryan Crawcour | 28 de septiembre de 2015 | [What's New with Azure DocumentDB Since GA (Novedades de Azure DocumentDB desde GA)](https://channel9.msdn.com/Shows/Data-Exposed/Whats-New-with-Azure-DocumentDB-Since-GA) |
| The Azure Podcast | Cale Teeter | 17 de septiembre de 2015 | [Episode 94: azpodcast.com re-architecture (Episodio 94: reestructuración de la arquitectura de azpodcast.com)](http://azpodcast.azurewebsites.net/post/Episode-94-azpodcastcom-re-architecture) |
| Descripción de la nube | Ryan Crawcour | 4 de septiembre de 2015 | [Episode 185: DocumentDB Updates with Ryan CrawCour (Episodio 185: actualizaciones de DocumentDB con Ryan CrawCour)](https://channel9.msdn.com/Shows/Cloud+Cover/Episode-185-DocDB-Updates-with-Ryan-CrawCour) |
| CodeChat 033 | Greg Doerr | 28 de julio de 2015 | [Greg Doerr on Azure DocumentDB (Greg Doerr sobre Azure DocumentDB)](https://channel9.msdn.com/Shows/codechat/033) |
| NoSql Central | King Wilder | 25 de mayo de 2015 | [Golf Tracker - A video overview on how to build a web application on top of AngularJS, WebApi 2, and DocumentDB (Rastreador de golf - Información general en vídeo sobre cómo crear una aplicación web sobre AngularJS, WebApi 2 y DocumentDB).](http://www.nosqlcentral.net/Story/Details/videos/kahanu/1-documentdb-golf-tracker-overview) |
| In-Memory Technologies PASS Virtual Chapter | Stephen Baron | 25 de mayo de 2015 | [Hello DocumentDB](https://www.youtube.com/watch?v=itFXQCd9-dI) |
| Exposición de datos | Ryan Crawcour | 8 de abril de 2015 | [DocumentDB General Availibility and What's New! (Disponibilidad general y novedades de DocumentDB)](https://channel9.msdn.com/Shows/Data-Exposed/DocumentDB-General-Availability-and-Whats-New) |
| Exposición de datos | Andrew Liu | 17 de marzo de 2015 | [Java SDK for DocumentDB (SDK de Java para DocumentDB)](https://channel9.msdn.com/Shows/Data-Exposed/Java-SDK-for-DocumentDB) |
| #DevHangout | Gustavo Alzate Sandoval | 11 de marzo de 2015 | [DocumentDB, la base de datos NoSql de Microsoft Azure](https://www.youtube.com/watch?v=8Ud3jB8KOBA) |
| Data Architecture Virtual Chapter PASS | Ike Ellis | 25 de febrero de 2015 | [Introduction to DocumentDB (Introducción a DocumentDB)](https://www.youtube.com/watch?v=7BQYdFUkz6s) |
### Clases en línea
| Partner de aprendizaje | Description |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| [](https://mva.microsoft.com/es-ES/training-courses/deploying-web-apps-to-azure-app-service-16629) | La [**Academia virtual de Microsoft**](https://mva.microsoft.com/es-ES/training-courses/deploying-web-apps-to-azure-app-service-16629) le ofrece aprendizaje de las personas que ayudaron a crear Azure DocumentDB. |
| [](http://www.pluralsight.com/courses/azure-documentdb-introduction) | [**Pluralsight**](http://www.pluralsight.com/courses/azure-documentdb-introduction) es un asociado clave de Microsoft que ofrece cursos de aprendizaje de Azure. Si es suscriptor de MSDN, use sus beneficios para tener acceso al entrenamiento de Microsoft Azure. |
| [](https://www.opsgility.com/courses/player/introduction_to_azure_documentdb) | [**OpsGility**](https://www.opsgility.com/courses/player/introduction_to_azure_documentdb) proporciona un aprendizaje técnico en profundidad sobre Microsoft Azure. Reciba un entrenamiento dirigido por un instructor en el sitio o a través de una clase remota de la mano de instructores reconocidos del sector. |
## Discusión
### Twitter
Síganos en twitter [@DocumentDB](https://twitter.com/DocumentDB) y manténgase al día con las conversaciones más recientes en el hashtag [#DocumentDB](https://twitter.com/hashtag/DocumentDB).
### Foros en línea
| Proveedor de foro | Description |
| ------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| [](http://stackoverflow.com/questions/tagged/azure-documentdb) | Un sitio de preguntas y respuestas que se modifica en colaboración, y que es independiente del lenguaje, dedicado a los programadores. Siga nuestra etiqueta: [azure-documentdb](http://stackoverflow.com/questions/tagged/azure-documentdb) |
| [](http://go.microsoft.com/fwlink/?LinkId=631655) | Un buen lugar para proporcionar ayuda y comentarios sobre las características y los servicios de Microsoft Azure, como los Sitios web, DocumentDB, etc. |
## Póngase en contacto con el equipo

¿Necesita ayuda técnica? ¿Tiene alguna pregunta? ¿Se pregunta si NoSQL es adecuado para usted? También puede [programar una charla 1:1 directamente con el equipo de ingeniería de DocumentDB](http://www.askdocdb.com/). También puede enviarnos un [correo electrónico](mailto:[email protected]) o tuitearnos a la dirección [@DocumentDB](https://twitter.com/DocumentDB).
## Proyectos de código abierto
El equipo de Azure DocumentDB desarrolla estos proyectos activamente en colaboración con nuestra comunidad de código abierto.
### SDK
| Plataforma | Github | Paquete |
| -------- | --------------------------------------------------------------------------- | ------- |
| Node.js | [azure-documentdb-node](https://github.com/Azure/azure-documentdb-node) | [npm](https://www.npmjs.com/package/documentdb) |
| Java | [azure-documentdb-java](https://github.com/Azure/azure-documentdb-java) | [Maven](http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-documentdb%22) |
| Python | [azure-documentdb-python](https://github.com/Azure/azure-documentdb-python) | [PyPI](https://pypi.python.org/pypi/pydocumentdb) |
### Otros proyectos
| Nombre | Github | Sitio web |
| ------------------- | ------------------------------------------------------------------------------------------------- | ------- |
| Documentación | [azure-content](https://github.com/Azure/azure-content/tree/master/articles/documentdb) | [Sitio web de documentación](https://azure.microsoft.com/documentation/services/documentdb/) |
| Conector de Hadoop | [azure-documentdb-hadoop](https://github.com/Azure/azure-documentdb-hadoop) | [Maven](http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-documentdb-hadoop%22) |
| Herramienta de migración de datos | [azure-documentdb-datamigrationtool](https://github.com/Azure/azure-documentdb-datamigrationtool) | [Centro de descarga de Microsoft](http://www.microsoft.com/es-ES/download/details.aspx?id=46436) |
## Asistentes de DocumentDB
Los asistentes de DocumentDB son líderes de la Comunidad que han demostró un compromiso ejemplar para ayudar a que otros usuarios saquen el máximo partido de su experiencia con Azure DocumentDB. Comparten su experiencia técnica, sus conocimientos prácticos y su pasión excepcionales con la Comunidad y con el equipo de DocumentDB.
Asistente | Imagen
--- | ---
[Allan Mitchell](https://twitter.com/allansqlis) | [](https://twitter.com/allansqlis)
[Jen Stirrup](https://twitter.com/jenstirrup) | [](https://twitter.com/jenstirrup)
[Lenni Lobel](https://twitter.com/lennilobel) | [](https://twitter.com/lennilobel) |
[Mihail Mateev](https://twitter.com/mihailmateev) | [](https://twitter.com/mihailmateev) |
[Larry Maccherone](https://twitter.com/lmaccherone) | [](https://twitter.com/lmaccherone)
[Howard Edidin](https://twitter.com/hsedidin) | [](https://twitter.com/hsedidin)
[Santosh Hari](https://twitter.com/_s_hari) | [](https://twitter.com/_s_hari)
¿Quiere convertirse en un asistente de DocumentDB? Aunque no hay que superar ninguna prueba de referencia para convertirse en asistente de DocumentDB, entre los criterios que se evalúan se incluyen el impacto de las contribuciones de los nominados en los foros en línea (como StackOverflow y MSDN); contenido en línea y wikis; conferencias y grupos de usuarios; podcasts, sitios web, blogs y redes sociales; y artículos y libros. Para proponerse a sí mismo o proponer a otros [envíenos un correo electrónico](mailto:[email protected]).
<!---HONumber=AcomDC_0914_2016--> | 118.011029 | 588 | 0.728122 | spa_Latn | 0.457372 |
3ebe9435637f088c5eb2d1e5564496baee48ef28 | 3,855 | md | Markdown | README.md | jonaskirkemyr/Assemble-Blog-Template | f4bb187f040ad78d2775ae2b76b5cd52e555cbec | [
"MIT"
] | null | null | null | README.md | jonaskirkemyr/Assemble-Blog-Template | f4bb187f040ad78d2775ae2b76b5cd52e555cbec | [
"MIT"
] | 8 | 2016-03-08T20:19:36.000Z | 2016-06-19T09:28:35.000Z | README.md | jonaskirkemyr/Assemble-Blog-Template | f4bb187f040ad78d2775ae2b76b5cd52e555cbec | [
"MIT"
] | null | null | null | # Assemble-Blog-Template
Blog template built with Assemble.
__NOTE: This repo is under development__
For a live demo of the template, visit my website [jonastn.com](http://www.jonastn.com).
## <a name="setup"></a>Setup
In order to build Assemble-Blog-Template, ensure that you have [Node.js](https://nodejs.org/), [Bower](http://bower.io/), and [Grunt](http://gruntjs.com/) installed.
* Install dependencies
```sh
npm install
bower install
```
## Building
The build tool used is Grunt, a dependency that already should be installed locally on your system (if you read the [setup](#setup) section).
There are two different tasks registered with grunt:
* `build` task creates a json file from all posts that exists in the folder `src/views/posts/`, and run assemble to create the distributed `.html` files from `.hbs` templates.
* `rebuild`, as the name implies, rebuilds the project. All dependency files needed for distribution is copied, all `js` files is minified, `sass` files compiled, `css` files minified, and `html` files created.
A successful build will create two folder: `build/` and `dist/`, as described in [the various directories](#directories)
## Run
Before being able to run this project, the task `rebuild` needs to be run.
```sh
grunt rebuild
```
This task will create a new folder: `dist/` within the root of the project folder (This is the folder which should be made accessible/copied over to a webserver, to make it available over the Internet).
To test this project locally, and make the files in `dist/` available to a web-browser:
```sh
grunt connect
```
Navigate your web-browser to [localhost:8080](http://localhost:8080/).
### NPM
Another opportunity is to use `npm` to install dependencies, rebuild and start running this project.
* `start` - deletes the `dist/` folder, and run the `rebuild` and `connect` task using `grunt`
* `init` - install all dependencies, and start the `npm run` task
start any of these scripts:
```sh
npm run <taskName>
```
where taskName is one of the available npm scripts mention above.
## <a name="directories"></a>The various directories
* `src`: contains the source code of the project
* `js`: JavaScript code needed by the generated web-pages
* `helpers`: Handlebars helpers made accessible within Handlebars files in `views`.
* `scss`: sass style defined for the project (HTML5-up)
* `views`: handlebars layouts/pages used by `Assemble.io`.
* `layouts`: Base layouts for the website. One for standard pages, and a layout used by a blog-post
* `pages`: building on the `base.hbs` layout file
* `partials`: hbs snippets accessible to other `.hbs` layout/pages files
* `posts`: The blog-post markdown files
* `build`: files that are compiled from the `src` folder
* `dist`: production ready code, created from the `build` folder
### Dist
When packaged, the Assemble-Blog-Template consist of a `index.html` page, the main page for the blog. All posts are located within the directory `dist/posts/`.
All dependencies are located in `dist/asset/`, where both `.css` and `.js` files are located. Packages from `bower` and `npm` are located under `packages/` folders.
## Dependencies
Assemble-Blog-Template is built with multiple libraries.
* [Assemble](http://assemble.io/) - Static compilation of Handlebars template
* [Handlebars](http://handlebarsjs.com/) - extension of the Mustache templating language, a logicless templating language
Other dependencies can be found in `bower.json`and package.json`.
The theme used for this template is `Future Imperfect`, and downloaded from [HTML5UP](http://html5up.net/). Thanks to [N33](https://github.com/n33) for generating awesome HTML5 templates!
## License
Assemble-Blog-Template is released under the MIT license. See [LICENSE](LICENSE) for more information. | 38.55 | 210 | 0.734112 | eng_Latn | 0.995223 |
3ebebaec994d83fecc2a0827f4e3f8d1c96e98e2 | 1,384 | markdown | Markdown | _posts/2006-12-12-go-with-pro-ore280a6.markdown | connylundgren/connylundgren.github.io | e47f3b3c161927fc90fecca0717bfc196d5fd1d7 | [
"Apache-2.0"
] | null | null | null | _posts/2006-12-12-go-with-pro-ore280a6.markdown | connylundgren/connylundgren.github.io | e47f3b3c161927fc90fecca0717bfc196d5fd1d7 | [
"Apache-2.0"
] | null | null | null | _posts/2006-12-12-go-with-pro-ore280a6.markdown | connylundgren/connylundgren.github.io | e47f3b3c161927fc90fecca0717bfc196d5fd1d7 | [
"Apache-2.0"
] | null | null | null | ---
author: connylundgren
date: '2006-12-12 13:38:22'
layout: post
title: Go with Pro, or…
comments: true
---
I’ve been looking into replacing my now somewhat aging 17” powerbook (1,67Ghz,
2GB RAM), although the Pb is really nice (big screen, reasonably fast) it
suffers from a couple of drawbacks.
* **To cumbersome** – I have been air-commuting for the last year, and the 17” is a bit to big to use and to carry around.
* **Development Performance** – Using it to as my development machine of choice, the performance has begun to lag behind. It has really started to get on my nerves, mostly the compile/deploy cycle of Java EE development (Netbeans/Eclipse, JBoss AS/Weblogic)
* **Slow Virtual PC** – I still need to access/use some windows software, not that often but the VirtualPC performance is sub par.
So my plan is to replace it shortly after Christmas, I were dead set on
replacing it with a 15” MacBook Pro, but lately I have been beginning to
wonder if the MacBook would fit the bill (I plan to hook either on up to a 23”
Cinema Display and keyboard/mouse when developing at home) and recent blog
post such as [this ](http://www.oreillynet.com/mac/blog/2006/05/are_you_macboo
k_or_are_you_pro.html)added further decision frustration to the mix.
Any suggestions/recommendations? Is the MacBook good enough, or should I pay
the extra premium to get the Pro?
| 49.428571 | 259 | 0.761561 | eng_Latn | 0.999049 |
3ebf3ea7089504bc0962e83d2ce8baeb9e765a00 | 250 | md | Markdown | 0053_Medium/README.md | JohnnyGOX17/daily-coding-problems | d688ae84a56f29ccda7d9e956e6259c7aebdfda4 | [
"MIT"
] | null | null | null | 0053_Medium/README.md | JohnnyGOX17/daily-coding-problems | d688ae84a56f29ccda7d9e956e6259c7aebdfda4 | [
"MIT"
] | null | null | null | 0053_Medium/README.md | JohnnyGOX17/daily-coding-problems | d688ae84a56f29ccda7d9e956e6259c7aebdfda4 | [
"MIT"
] | null | null | null | This problem was asked by Apple.
Implement a queue using two stacks. Recall that a queue is a FIFO (first-in, first-out) data structure with the following methods: `enqueue`, which inserts an element into the queue, and `dequeue`, which removes it.
| 62.5 | 215 | 0.772 | eng_Latn | 0.999898 |
3ebfb291d918026ee56e18465192d8ca8ff4639b | 3,441 | md | Markdown | _posts/2018-02-04-35.md | rehashfm/rehash.fm | c8581efe910e026694be2e1f627248ff0c3210b0 | [
"MIT"
] | 1 | 2017-01-22T15:25:09.000Z | 2017-01-22T15:25:09.000Z | _posts/2018-02-04-35.md | rehashfm/rehash.fm | c8581efe910e026694be2e1f627248ff0c3210b0 | [
"MIT"
] | 20 | 2017-01-03T11:06:37.000Z | 2022-02-01T11:46:30.000Z | _posts/2018-02-04-35.md | rehashfm/rehash.fm | c8581efe910e026694be2e1f627248ff0c3210b0 | [
"MIT"
] | 1 | 2017-10-18T08:33:09.000Z | 2017-10-18T08:33:09.000Z | ---
layout: post
title: "#35 We Can Be Anything"
date: 2018-02-04 07:00:00 +0900
tags: shownotes
category: ShowNotes
author: smile-0yen,azihsoyn
excerpt: 最近のニュースからiOS 11.3、VRChatなどについて話しました。
---
<iframe width="100%" height="50" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/394065399&auto_play=false&hide_related=false&show_user=true&show_reposts=false&visual=false&show_artwork=false&default_height=75"></iframe>
最近のニュースからiOS 11.3、VRChatなどについて話しました。
## Show Notes
- [Apple、「HomePod」2月9日に米国で発売 Siri搭載スマートスピーカー \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/24/news048.html){:target="_blank"}
- [「iOS 11\.3」、開発者プレビューに\-\-バッテリ性能問題に対処、「ARKit」強化 \- CNET Japan](https://m.japan.cnet.com/amp/story/35113664/){:target="_blank"}
- [Apple previews iOS 11\.3 \- Apple](https://www.apple.com/newsroom/2018/01/apple-previews-ios-11-3/){:target="_blank"}
- [Apple、「iOS 11\.3」で電子書籍にテコ入れ、オーディオブックコーナーも──Bloomberg報道 \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/26/news045.html){:target="_blank"}
- [Tim Cook’s visit to Shopify all about augmented reality, as Apple CEO praises ‘profound’ emerging technology \| Financial Post](http://business.financialpost.com/technology/apple-ceo-tim-cook-shopify-augmented-reality){:target="_blank"}
- [Googleの新OS「Fuchsia」がPixelbookにインストール可能に、ただしまだ不具合多数 \- GIGAZINE](http://gigazine.net/news/20180119-google-fuchsia-os/){:target="_blank"}
- [開発者会議「Google I/O」、5月8日に開幕へ \- CNET Japan](https://japan.cnet.com/article/35113665/){:target="_blank"}
- [「Google Home」などでWi\-Fiが切れる問題、Googleも修正配信へ \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/18/news051.html){:target="_blank"}
- [網膜にレーザーで映像投影 近未来の眼鏡型デバイス「RETISSA Display」を体験 \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/18/news106.html){:target="_blank"}
- [「格好悪い」「鼻眼鏡みたい」 見た目が話題になったウェアラブルデバイス「b\.g\.」 メガネスーパーのこだわり \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/17/news126.html){:target="_blank"}
- [生地にセンサーを埋め込んだ“新しい素材” ジーンズに埋め込み、ゲームなどへの活用も \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/18/news121.html){:target="_blank"}
- [誰でも「美少女バーチャルYouTuber」になれるアプリ登場 そう、iPhone Xならね \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/12/news074.html){:target="_blank"}
- [ソーシャルVR『VRChat』人気爆発 月間100万超のユーザー増 \| Mogura VR \- 国内外のVR/AR/MR最新情報](http://www.moguravr.com/vrchat-4/){:target="_blank"}
- [『VRChat』異例の伸び 10日間で倍増で200万インストールに \| Mogura VR \- 国内外のVR/AR/MR最新情報](http://www.moguravr.com/vrchat-5/){:target="_blank"}
- [話題のVRアプリ『VRChat』その魅力と始め方を紹介 \| Mogura VR \- 国内外のVR/AR/MR最新情報](http://www.moguravr.com/vrchat-6/){:target="_blank"}
- [Viveトラッカーの在庫切れが続く 供給回復は3月中旬以降 \| Mogura VR \- 国内外のVR/AR/MR最新情報](http://www.moguravr.com/vive-tracker-out-of-stock/){:target="_blank"}
- [クラウドに保存した写真や動画をモバイルVRで鑑賞『Plex VR』 \| Mogura VR \- 国内外のVR/AR/MR最新情報](http://www.moguravr.com/plex-vr-daydream/){:target="_blank"}
- [Microsoftがテキストから本物と見間違うレベルの架空のイメージを自動生成する新AI技術「AttnGAN」を開発 \- GIGAZINE](https://gigazine.net/news/20180119-microsoft-attngan/){:target="_blank"}
- [フィッシングショー2018に参加してきました \- azihsoyn's blog](http://azihsoyn.hatenablog.com/entry/fishingshow_2018){:target="_blank"}
- [任天堂、Switchと合体する“段ボールコントローラー”「Nintendo Labo」発売 \- ITmedia NEWS](http://www.itmedia.co.jp/news/articles/1801/18/news053.html){:target="_blank"}
| 101.205882 | 310 | 0.734961 | yue_Hant | 0.637275 |
3ec040ae395d6a4d7ed0049c57b4c85821975373 | 894 | md | Markdown | README.md | derlin-go/mybooks | 2b4f1756e21538368dbc3efaefd26a1e1e3556cb | [
"Apache-2.0"
] | null | null | null | README.md | derlin-go/mybooks | 2b4f1756e21538368dbc3efaefd26a1e1e3556cb | [
"Apache-2.0"
] | null | null | null | README.md | derlin-go/mybooks | 2b4f1756e21538368dbc3efaefd26a1e1e3556cb | [
"Apache-2.0"
] | null | null | null | # mybooks
Command line app to keep track of the books I read. The data are saved in a json file synchronised in Dropbox.
To get more information about the project, have a look at the [Android application](https://github.com/derlin/mybooks-android/) README.
### Prerequisites
- go 1.5+
- Dropbox, with default settings (the Dropbox folder in in `${user.dir}/Dropbox`)
### How to use
- get the source: `go get github.com/derlin-go/mybooks`
- build the program : `go build -o mybooks src/github.com/derlin-go/mybooks/*.go`
- run it: `./mybooks`
Once the program is started, you can use `help` to get a list of all the available commands. Don't forget
that the changes are not automatically saved to dropbox. For that, you have to use the command `save`.
The json file will be kept in `Dropbox/Apps/MyBooks/mybooks.json`. It can be modified manually, as long as the structure is valid.
| 44.7 | 135 | 0.739374 | eng_Latn | 0.99121 |
3ec17bcf2164cbda1ab7200ffb357012de8fbff3 | 407 | md | Markdown | README.md | dkkim1005/Hirsch-Fye-QMC | 8ce71754a496acea0a463434040f48928948badf | [
"MIT"
] | null | null | null | README.md | dkkim1005/Hirsch-Fye-QMC | 8ce71754a496acea0a463434040f48928948badf | [
"MIT"
] | null | null | null | README.md | dkkim1005/Hirsch-Fye-QMC | 8ce71754a496acea0a463434040f48928948badf | [
"MIT"
] | null | null | null | # Hirsch-Fye-QMC
Hirsch-Fye QMC Solver
## Required libraries
+ FFTW3
+ OpenBLAS
Build Reciepe
--------------
python3 ./setup.py build # It will automatically download and install libraries.
python3 ./main.py # Run simulations
Bug report
--------------
When you encounter bugs or problems by using this code, please let us know through the email address as following. <br />
[email protected]
| 22.611111 | 121 | 0.700246 | eng_Latn | 0.984669 |
3ec2946d8fb258670cf941d26aece743ddac68b6 | 71 | md | Markdown | README.md | zhaoyun95/keysender | 7b7b71c14b3dd1788a2be49c7851d859851a6f30 | [
"Unlicense"
] | null | null | null | README.md | zhaoyun95/keysender | 7b7b71c14b3dd1788a2be49c7851d859851a6f30 | [
"Unlicense"
] | null | null | null | README.md | zhaoyun95/keysender | 7b7b71c14b3dd1788a2be49c7851d859851a6f30 | [
"Unlicense"
] | null | null | null | # keysender
C# program to send a keyboard key press to another program
| 23.666667 | 58 | 0.788732 | eng_Latn | 0.982519 |
3ec30bbad1b3cc1816f0786e9bca5243546e2216 | 1,150 | md | Markdown | README.md | yorudazzz/MakeNewFile-on-Mac | e757e6ea965e9a91c3f537a7ed615ee509e9ceaa | [
"MIT"
] | null | null | null | README.md | yorudazzz/MakeNewFile-on-Mac | e757e6ea965e9a91c3f537a7ed615ee509e9ceaa | [
"MIT"
] | null | null | null | README.md | yorudazzz/MakeNewFile-on-Mac | e757e6ea965e9a91c3f537a7ed615ee509e9ceaa | [
"MIT"
] | null | null | null | # MakeNewFile on Mac
====
## Overview
The Apple Script for making new text file on current Folder
## Description
This is the Apple Script that can make new file on current folder.
You can make new file on current folder by calling this script using short cut key When you are opening "Finder" at forfront.
* Note
Source code is written in "main.txt"(is not scpt file) .
Because scpt can not open on this page.
## SetUp
*Launch "Automator" app.
*Choose "Service".
*Choose "no input" in select box of "service receives selected".
*Set "Run AppleScript" from left side.
*Back to this github page and copy source code in the script.
*Back to "Automator" again, then paste the code into panel of "Run AppleScript".
*Save as new worlflow.
*Open setting page of "Short cut" in "System Preferences".
*Select "Services" and find service as saved workflow from before.
*Set any short cut key you want to launch.
## Usage
Open Finder. Press the shor cut key. Then, You will see that new text file will be created with name system time on current folder!
## Licence
MIT Licence
## Author
[yorudazzz](https://github.com/yorudazzz) | 31.944444 | 131 | 0.734783 | eng_Latn | 0.979496 |
3ec344a883bd91f27ca693934c67e15907699fe5 | 775 | md | Markdown | catalog/overlord/en-US_overlord-ple-ple-pleiades.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/overlord/en-US_overlord-ple-ple-pleiades.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/overlord/en-US_overlord-ple-ple-pleiades.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Overlord: Ple Ple Pleiades

- **type**: special
- **episodes**: 8
- **original-name**: オーバーロード ぷれぷれぷれあです
- **start-date**: 2015-09-25
- **end-date**: 2015-09-25
- **rating**: PG-13 - Teens 13 or older
## Tags
- action
- comedy
- magic
- fantasy
## Sinopse
Specials bundled with the Blu-ray and DVD volumes of Overlord featuring simplified character animations.
## Links
- [My Anime list](https://myanimelist.net/anime/31138/Overlord__Ple_Ple_Pleiades)
- [Official Site](http://overlord-anime.com/products/)
- [AnimeDB](http://anidb.info/perl-bin/animedb.pl?show=anime&aid=10816)
- [Wikipedia](http://en.wikipedia.org/wiki/Overlord_%28novel_series%29)
| 26.724138 | 104 | 0.694194 | kor_Hang | 0.167132 |
3ec3aa9681fa1ccbdfae2ebb4acd26e5b29025da | 373 | md | Markdown | _posts/2018-06-14-javascript-tsconfig-json-vscode-qiita.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 5 | 2016-01-25T08:51:46.000Z | 2022-02-16T05:51:08.000Z | _posts/2018-06-14-javascript-tsconfig-json-vscode-qiita.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 3 | 2015-08-22T08:39:36.000Z | 2021-07-25T15:24:10.000Z | _posts/2018-06-14-javascript-tsconfig-json-vscode-qiita.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 2 | 2016-01-18T03:56:54.000Z | 2021-07-25T14:27:30.000Z | ---
title: 素のJavaScriptプロジェクトにtsconfig.jsonを置いといてVSCodeの便利さを享受する - Qiita
author: azu
layout: post
itemUrl: 'https://qiita.com/terrierscript/items/a9826bc58d550d1b2764'
editJSONPath: 'https://github.com/jser/jser.info/edit/gh-pages/data/2018/06/index.json'
date: '2018-06-14T10:33:24Z'
tags:
- TypeScript
- article
---
TypeScritptをJavaScript/JSDocの型チェックツールとして利用する方法について
| 28.692308 | 87 | 0.796247 | yue_Hant | 0.230653 |
3ec3fe37a28dd300528fbbec69bf6600b064181e | 519 | md | Markdown | gems/aws-sdk-pinpointemail/CHANGELOG.md | SolutionDev888/aws-sdk-ruby | aee27c486cc00d31e925c53a81b33b81822eb82b | [
"Apache-2.0"
] | 3 | 2020-08-18T20:59:45.000Z | 2020-09-16T06:21:05.000Z | gems/aws-sdk-pinpointemail/CHANGELOG.md | SolutionDev888/aws-sdk-ruby | aee27c486cc00d31e925c53a81b33b81822eb82b | [
"Apache-2.0"
] | null | null | null | gems/aws-sdk-pinpointemail/CHANGELOG.md | SolutionDev888/aws-sdk-ruby | aee27c486cc00d31e925c53a81b33b81822eb82b | [
"Apache-2.0"
] | null | null | null | Unreleased Changes
------------------
1.6.0 (2019-03-28)
------------------
* Feature - API update.
1.5.0 (2019-03-21)
------------------
* Feature - API update.
1.4.0 (2019-03-18)
------------------
* Feature - API update.
1.3.0 (2019-03-14)
------------------
* Feature - API update.
1.2.0 (2018-12-13)
------------------
* Feature - API update.
1.1.0 (2018-11-20)
------------------
* Feature - API update.
1.0.0 (2018-11-06)
------------------
* Feature - Initial release of `aws-sdk-pinpointemail`.
| 13.307692 | 55 | 0.447013 | yue_Hant | 0.454796 |
3ec4c5ecdd3dcdceb539e2251190b8eedbff1c02 | 227 | md | Markdown | _posts/2018-09-10-Test-Blog-Post.md | RhysV/RhysV.github.io | e3dd80c2f4df464544ea586460b7ffd1c96dddfa | [
"MIT"
] | null | null | null | _posts/2018-09-10-Test-Blog-Post.md | RhysV/RhysV.github.io | e3dd80c2f4df464544ea586460b7ffd1c96dddfa | [
"MIT"
] | null | null | null | _posts/2018-09-10-Test-Blog-Post.md | RhysV/RhysV.github.io | e3dd80c2f4df464544ea586460b7ffd1c96dddfa | [
"MIT"
] | null | null | null | ---
title: Test Blog Post
layout: post
author: rhys.varma
permalink: /test-blog-post/
source-id: 1r7AM_2Ho4jjpoV9QMto9WlXfHBK7QcaCFc7oKdDDiHo
published: true
---
Hi bob first
Operate on this guy no yes
Follow
Every one two
| 13.352941 | 55 | 0.77533 | eng_Latn | 0.838179 |
3ec513992a8ae1d5acdb75cdc856f67b725cbcb4 | 259 | md | Markdown | README.md | freaktechnik/band-concerts-theme-bcb | 22b6a0f7df14d1ec7431fb6edf92e462ddbf4d72 | [
"MIT"
] | null | null | null | README.md | freaktechnik/band-concerts-theme-bcb | 22b6a0f7df14d1ec7431fb6edf92e462ddbf4d72 | [
"MIT"
] | null | null | null | README.md | freaktechnik/band-concerts-theme-bcb | 22b6a0f7df14d1ec7431fb6edf92e462ddbf4d72 | [
"MIT"
] | null | null | null | # band-concerts-theme-bcb
WordPress sub theme for [band-concerts-theme](https://github.com/freaktechnik/band-concerts-theme) for a specific group.
Additionally to be used in conjunction with the [bcb-revue](https://github.com/freaktechnik/bcb-revue) plugin.
| 51.8 | 120 | 0.791506 | eng_Latn | 0.693161 |
3ec57472a129d9ade57a7ca9a4ab6c415d171f00 | 5,070 | md | Markdown | README.md | Rubanrubi/Laravel-CI-CD-Github | 887a660215c0f43b5e501d158fb98352b4d1482c | [
"Apache-2.0"
] | null | null | null | README.md | Rubanrubi/Laravel-CI-CD-Github | 887a660215c0f43b5e501d158fb98352b4d1482c | [
"Apache-2.0"
] | null | null | null | README.md | Rubanrubi/Laravel-CI-CD-Github | 887a660215c0f43b5e501d158fb98352b4d1482c | [
"Apache-2.0"
] | null | null | null | # Zero Downtime Laravel Deployments with Github Actions
Step : 1 Folder Setup
* First create .github/workflows directory in your project root directory,
then store the deploy.yml in to that directory.
* Inside the file first key => name which is displayed on github Actions on left shows Workflows inside
the name shown so we need to change that name means change that name key value
Eg : name: CI-CD
* Then when the CI/CD Triggers? We need to specify on which branch getting pushed,
so we need to specified like below:
Eg : on:
push:
branches: [ development ]
* Next step will be inside jobs->deploy we mention the name key this is for once our CI/CD runs
the jobs which are run in github workflows screen that name shows like a tag for the respective job
* Inside that you need to mention the php versions
installation process like composer install,php install and their extensions too.
Step : 2 Package Installation
* Install the deployer package via composer,which is used for laravel automatically
pipelined with github workflows to install that package run the command given below:
composer require deployer/deployer deployer/recipes
* Once you've got it installed, run ./vendor/bin/dep init and follow the prompts,
selecting Laravel as your framework.
A deploy.php config file will be generated and placed in the root of your project.
* In that deploy.php file we set the applicatio name by;
set('application', 'dep-demo');
* Then you don't want to take the things to production server means need to mention
in this method.
add('rsync', [
'exclude' => [
'.git',
'/.env',
'/storage/',
'/vendor/',
'/node_modules/',
'.github',
'deploy.php',
],
]);
* Then Must Configure this with server credentials
// Hosts
host('your-IP') // Name of the server
->hostname('your-IP') // Hostname or IP address
->stage('production') // Deployment stage (production, staging, etc)
->user('ubuntu or root') // SSH User.
->set('deploy_path', '/var/www/html');
* After that in task method need to add what are the commands need to run while running workflows,
like config:cache,config:clear etc,
* After running the above command ./vendor/bin/dep init it asks which frameworks,
and do you have configured github repo if github configured means hit enter and go
orelse need to give github repo url in it.
Step : 3 Configuration
* Then Go to Github your repository Settings->Secrets ,Here Need to add
three key values which are;
1.SSH_KNOWN_HOST
2.DOT_ENV
3.SSH_PRIVATE_KEY
* These three keys are configured in deploy.yml file,
These three things must need to interat the ci/cd with that respective server
so open your ssh terminal and run this command this will generates the SSH_KNOWN_HOST
ssh-keyscan rsa -t <server IP>
* Then open your server .env file,then open your private key of that respective server
*Paste all contents into that secret section in github by key values.
Step : 4 Deployment Stage
* Push the code to that branch open your workflows all the jobs getting runs,if any errors found during
the run time need to fix then only that code deployed to server untill not,
If the code looking fine means server gets updated source code.
* While deploy the source code to server it deploys inside /var/www/html creates one directory
called current inside that create the project
* So After pointed to DNS go to etc/apache2/sites-available/confd file,change documentary root
to /var/www/html/current/public
* That's It You Done!!!!!
| 44.867257 | 119 | 0.52288 | eng_Latn | 0.998371 |
3ec5a54287181c2f03d97aff554baf5887476176 | 1,333 | md | Markdown | README.md | nwimma/ampel | 2938b3029721a47dd4912a36a920bbfef4d9805c | [
"Apache-2.0"
] | null | null | null | README.md | nwimma/ampel | 2938b3029721a47dd4912a36a920bbfef4d9805c | [
"Apache-2.0"
] | null | null | null | README.md | nwimma/ampel | 2938b3029721a47dd4912a36a920bbfef4d9805c | [
"Apache-2.0"
] | 3 | 2022-01-25T12:29:42.000Z | 2022-03-24T11:36:16.000Z | # pico-template
Raspberry Pi RP2040 Template for fast and easy development structure
## Pico SDK
```
git clone https://github.com/raspberrypi/pico-sdk.git
cd pico-sdk
git submodule update --init
## Add pico-sdk Path to your environment
echo export PICO_SDK_PATH=$PWD >> ~/.bashrc
```
## Packages
### Debian based
```
apt update && apt install -y cmake make gcc openssl libssl-dev cmake gcc-arm-none-eabi libnewlib-arm-none-eabi libstdc++-arm-none-eabi-newlib
```
### Arch
```
pacman -S arm-none-eabi-gcc arm-none-eabi-binutils arm-none-eabi-newlib cmake autoconf git
yay -S picotool openocd-picoprobe
```
## Programming
1. get SDK
2. cmake example
3. flash it
4. debug it
5. be happy!
## Resources
* https://www.raspberrypi.org/documentation/rp2040/getting-started/
* https://www.raspberrypi.org/documentation/rp2040/getting-started/#board-specifications
### Pico Datasheets
* https://datasheets.raspberrypi.org/
* https://datasheets.raspberrypi.org/pico/raspberry-pi-pico-faq.pdf
* https://datasheets.raspberrypi.org/pico/getting-started-with-pico.pdf
* https://datasheets.raspberrypi.org/pico/raspberry-pi-pico-c-sdk.pdf
### CMake
* https://cmake.org/cmake/help/latest/guide/tutorial/index.html
* http://derekmolloy.ie/hello-world-introductions-to-cmake/
* https://medium.com/@onur.dundar1/cmake-tutorial-585dd180109b
| 26.137255 | 142 | 0.753938 | kor_Hang | 0.421975 |
3ec61dcd3958976170430c34a7dcd4ae08ae37ce | 11,846 | md | Markdown | content/en/blog/news/2021-databases.md | osamamagdy/jx-docs | 1e94b3fd0f27c952b65fc1d3f655c1ce05816bd6 | [
"Apache-2.0"
] | 86 | 2018-02-28T23:35:48.000Z | 2022-03-27T08:51:48.000Z | content/en/blog/news/2021-databases.md | osamamagdy/jx-docs | 1e94b3fd0f27c952b65fc1d3f655c1ce05816bd6 | [
"Apache-2.0"
] | 3,387 | 2018-03-04T13:34:33.000Z | 2022-03-31T16:27:52.000Z | content/en/blog/news/2021-databases.md | osamamagdy/jx-docs | 1e94b3fd0f27c952b65fc1d3f655c1ce05816bd6 | [
"Apache-2.0"
] | 439 | 2018-02-26T18:18:54.000Z | 2022-03-29T13:01:10.000Z | ---
title: "Continuous microservices with databases in Jenkins X"
date: 2021-06-25
draft: false
description: automate your CI/CD with microsevices, databases and preview environments
categories: [blog]
keywords: [Community, 2021]
slug: "jx-cd-databases-2021"
aliases: []
author: Jenkins Strachan
---
A common question we get asked on the [Jenkins X project](https://jenkins-x.io/) is how to get started creating microservices that use databases with [automated CI/CD](/v3/develop/create-project/) with [GitOps Promotion](/v3/develop/environments/promotion/) and [Preview Environments](/v3/develop/environments/preview/).
To make things a little easier to get started we've created a new [node-postgresql](https://github.com/jenkins-x-quickstarts/node-postgresql) quickstart.
## Before you start
If you are using the cloud then we prefer [cloud over kubernetes](/v3/devops/patterns/prefer_cloud_over_kube/) for things like databases, storage, ingress and secret managers so please try use your clouds managed databases if you can.
So ideally you'd set up your database via your infrastructure as code solution, such as [terraform](https://www.terraform.io/), and then associate your [kubernetes Service Account to a cloud IAM role](/v3/devops/patterns/map-sa-to-cloud-iam/) to access the database.
However to provide something simple that just works in any kubernetes cluster this quickstart uses the [postgres-operator](https://github.com/zalando/postgres-operator) to manage setting up each database cluster in each environment. So to be able to use this quickstart you will need to install this operator into your cluster.
You can [add charts to your cluster via the CLI](/v3/develop/apps/#using-the-cli). From inside a git clone of your cluster git repository run the following command:
```bash
jx gitops helmfile add --chart commonground/postgres-operator --repository https://charts.commonground.nl/ --namespace postgres --version 1.6.2
```
This will modify the `helmfile.yaml` to point at a new `helmfiles/postgres/helmfile.yaml` file to deploy the [postgres-operator](https://github.com/zalando/postgres-operator) chart.
Then git commit and push that change to your cluster. You can watch it run via `jx admin log -w`.
## Create the quickstart
Make sure you have an [up to date cluster](/v3/admin/setup/upgrades/cluster/) as this particular quickstart is new and only shows up in recent clusters.
Now [create the quickstart](/v3/develop/create-project/#create-a-new-project-from-a-quickstart) in the usual way...
```bash
jx project quickstart
```
If you know you want to create the [node-postgresql](https://github.com/jenkins-x-quickstarts/node-postgresql) quickstart you can do this to pre-filter the list for you:
```bash
jx project quickstart -f postgres
```
Once you have finished the import process will [set up the webhooks and enable CI/CD](/v3/about/how-it-works/#importing--creating-quickstarts) and the application will be released and promoted to the staging environment.
If you want to know more about what happens as you create quickstarts or import projects [see how it works](/v3/about/how-it-works/#importing--creating-quickstarts).
You can watch via the various pipelines run in the various [UI options](/v3/develop/ui/) or via the CLI use:
```bash
jx pipeline grid
```
When the release is done and the promotion has completed you should be able to try it out via:
```bash
jx application get
```
You should be able to click on the URL for the new quickstart and try it out once the database is initialised and the pods start up.
It can take a few minutes first time you deploy the quickstart for the database cluster to be setup and initialised; so you can watch the pods run via
```bash
kubectl get pod -n jx-staging -w
```
For a deeper look into whats going on try:
```bash
jx ui
```
Which will open the [Octant UI with the Jenkins X plugin](/v3/develop/ui/octant/) which lets you navigate around namespaces and look at all of your kubernetes resources, deployments, pods and so forth in real time.
## How does it work
In many ways this chart is fairly similar to other quickstarts in that it uses the Jenkins X pipeline catalog with tekton to add automated CI/CD pipelines and so forth.
However to support the database there is a custom chart included inside this quickstart that does a few different things...
* it creates a `Postgresql` custom resource for the [postgres-operator](https://github.com/zalando/postgres-operator) which will instruct the [postgres-operator](https://github.com/zalando/postgres-operator) to spin up a database cluster and generate a `Secret` to access the database. You can view this in your file at `charts/$myAppName/templates/` or [this file in the quickstart](https://github.com/jenkins-x-quickstarts/node-postgresql/blob/master/charts/templates/db-postgresql.yaml)
* there is a `charts/$myAppName/init.sql` file or [this file in the quickstart](https://github.com/jenkins-x-quickstarts/node-postgresql/blob/master/charts/init.sql) which is used to setup the database tables and populate any initial data. You can use this file to perform any startup schema migration or data population. For more realistic applications you could use a custom tool and image to implement schema migration in a more sophisticated way.
* the `init.sql` script is then installed as a `ConfigMap` via the `charts/$myAppName/templates/initdb-cm.yaml` file or [this file in the quickstart](https://github.com/jenkins-x-quickstarts/node-postgresql/blob/master/charts/templates/initdb-cm.yaml)
* the `charts/$myAppName/templates/deployment.yaml` file or [this file in the quickstart](https://github.com/jenkins-x-quickstarts/node-postgresql/blob/master/charts/templates/deployment.yaml#L41-L57) defines:
* an in [init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) which sets up the database before the application starts to run. The nice thing about using an init container for schema migration is that it runs before any new version of your application gets any network traffic so that you can keep iterating on your microservice and keep changing your database schema across all of your environments and things work well.
* Though make sure your init container performs database locking correctly so that multiple init containers running concurrently don't block each other unnecessarily. If this becomes an issue you could introduce something a little more complex. e.g. include a `Job` with a unique name for each release in your chart to perform the migration so that only one migration Job runs at once and have your [init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) wait for your job to complete.
* the Deployment also uses [a secret created by the postgresql operator](https://github.com/jenkins-x-quickstarts/node-postgresql/blob/master/charts/templates/deployment.yaml#L69-L73) to be able to connect to the database
### Previews
Databases often need a fair amount of maintenance, backup, upgrading and clean up over time. e.g. you may periodically update your Staging database with data from Production (maybe anonymised in some way?).
So creating a whole new database from scratch for every [Preview Environment](/v3/develop/environments/preview/) to test every code change in your microservice is maybe overkill.
By default the preview environment of this quickstart is configured to reuse the Staging environments database. This speeds up the preview startup time and reduces your cloud footprint and infrastructure cost.
This is done via:
* [configuring the preview environment](https://github.com/jenkins-x-quickstarts/node-postgresql/blob/master/preview/values.yaml.gotmpl#L1-L7) to point at the database in the staging namespace and disabling the creation of the Posgresql custom resource to create a new database cluster in the preview environment
* using the [helmfile hooks mechanism](https://github.com/roboll/helmfile#hooks) in previews to [copy the required database secrets from the staging namespace](https://github.com/jenkins-x-quickstarts/node-postgresql/blob/master/preview/helmfile.yaml#L32-L44) so that the preview can connect to the staging database.
## How we can improve
This quickstart is just a start but we can improve in a number of directions - fancy [helping out](https://jenkins-x.io/community/)?
### More languages and frameworks
It should be easy to replicate the same kind of quickstart mechanism for other languages and frameworks if anyone fancies trying it out? :) We [love contributions](https://jenkins-x.io/community/)! Pop by and chat with us on [slack](https://jenkins-x.io/community/#slack) if you want to discuss it further.
### Cloud databases
Longer term it would be nice to support other kinds of database operators too.
We prefer [cloud over kubernetes](/v3/devops/patterns/prefer_cloud_over_kube/) so if you are using a cloud it would be better to default to a cloud database instead of a kubernetes hosted one.
There are a number of other ways to define cloud infrastructure via [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) such as:
* [AWS Controllers for Kubernetes](https://aws-controllers-k8s.github.io/community/)
* [Azure Service Operator](https://github.com/Azure/azure-service-operator)
* [Crossplane](https://crossplane.io/)
* [Google Config Connector](https://cloud.google.com/config-connector/docs/overview)
So it'd be interesting to see if we can replicate this model for other kinds of cloud database on different cloud providers. Mostly it'd be a [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to define the database instance and a way to inject the host and secret. Some database providers require an additional sidecar proxy too.
It would be easy to add optional configuration in the quickstart to support either the [postgres-operator](https://github.com/zalando/postgres-operator) or equivalents in [AWS Controllers for Kubernetes](https://aws-controllers-k8s.github.io/community/), [Azure Service Operator](https://github.com/Azure/azure-service-operator), [Crossplane](https://crossplane.io/) or [Google Config Connector](https://cloud.google.com/config-connector/docs/overview) via a simple flag in the `chart/$name/values.yaml` file
### More modularity options
In a pure microservice kind of world, each database would be owned by a single microservice; so embedding the database definition and schema migration into a single helm chart is the simplest thing that can work across multiple environments and with progressive delivery etc. It makes it easier to tie changes to the microservice and schema together into a single chart.
However sometimes you want to have multiple services sharing a database. For that you could have 1 microservice be the owner of the database and other services reuse it. Or you could separate out the database definition + migration to a separate helm chart which is released independently.
So it might make sense to make separate quickstart just to define the database definition and schema migration for these use cases: maybe via a `Job` rather than an [init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)).
## Conclusion
So there's a really quick way to spin up a node based microservice using a database with an operator handling the setup of the database cluster which works well with [multiple environments](/v3/develop/environments/promotion/), progressive delivery and [Preview Environments](/v3/develop/environments/preview/).
Give it a try and [let us know how you get on or if you can think of any more ways we can improve](/community/)
| 76.425806 | 519 | 0.785835 | eng_Latn | 0.987928 |
3ec64fde24c38632cc404657798d668a783efcb2 | 2,010 | md | Markdown | README.md | Schatzman/StopWatch | 18d089bf3d8354df17240558e589086a231e6743 | [
"MIT"
] | 4 | 2018-04-04T14:45:50.000Z | 2021-04-02T14:04:33.000Z | README.md | Schatzman/StopWatch | 18d089bf3d8354df17240558e589086a231e6743 | [
"MIT"
] | 9 | 2016-06-08T20:12:31.000Z | 2018-03-05T21:12:20.000Z | README.md | Schatzman/StopWatch | 18d089bf3d8354df17240558e589086a231e6743 | [
"MIT"
] | 3 | 2016-05-24T15:46:18.000Z | 2017-01-09T18:01:03.000Z |
AlarmClock
==========
This gives the option to query whether a specified time period has expired. The utility is made to have a very low CPU clock cycle cost for the client compared to using StopWatch or std::chrono.
The implementation of AlarmClock was made to be as accurate and efficient as possible. The focus is on low CPU consumption, with the tradeoff of less CPU usage resulting in a less accurate expiration capability.
Expiration time overhead ranges from 50 - 150 microseconds on the current testing platform, when the Alarm clock is checked for expiration in a continuous loop (which normally is not a good usage pattern).
The API usage can be found in: [[AlarmClock.h]](https://github.com/LogRhythm/StopWatch/blob/master/src/AlarmClock.h) and in [[AlarmClockTest.cpp]](https://github.com/LogRhythm/StopWatch/blob/master/test/AlarmClockTest.cpp)
StopWatch
=========
A timer class in C++ which wraps the std::chrono API, with the purpose of retrieving elapsed time since creation of the object or since the timing was restarted.
Measurements are available in microseconds, milliseconds and seconds.
The API can be found in [[StopWatch.h]](https://github.com/LogRhythm/StopWatch/blob/master/src/StopWatch.h) and example usage can be found in the [[tests]](https://github.com/LogRhythm/StopWatch/blob/master/test/ToolsTestStopWatch.cpp).
ThreadSafeStopWatch
===================
A replacement for `thread_local StopWatch` on platforms that do not have the `thread_local` keyword accessible. It implements the `thread_local` through a `lazy-barber` map access solution.
If `thread_local` is available on your platform then `thread_local StopWatch` is likely a better choice than using the`ThreadSafeStopWatch`
## BUILD
```
cd 3rdparty
unzip gtest-1.7.0.zip
mkdir build
cd build
cmake ..
make
sudo make install
```
## To run unit tests in the build directory
```
./UnitTestRunner
```
Alternative on Debian systems
```
make package
sudo dpkg -i LRStopWatch<package_version>Linux.deb
``` | 40.2 | 236 | 0.772139 | eng_Latn | 0.989328 |
3ec68c3b100944fb260645a5a3e225547542d33d | 207 | md | Markdown | examples/README.md | AlexRogalskiy/http-status-check | 6eed0e183c1fcdbccb7776d8601180595bc25c30 | [
"BSD-3-Clause"
] | null | null | null | examples/README.md | AlexRogalskiy/http-status-check | 6eed0e183c1fcdbccb7776d8601180595bc25c30 | [
"BSD-3-Clause"
] | 2 | 2022-01-18T23:32:13.000Z | 2022-01-18T23:32:23.000Z | examples/README.md | AlexRogalskiy/http-status-check | 6eed0e183c1fcdbccb7776d8601180595bc25c30 | [
"BSD-3-Clause"
] | 1 | 2022-01-18T23:24:40.000Z | 2022-01-18T23:24:40.000Z | # Examples
This directory contains sample deployments for `http-status-check`. The
[bookinfo](bookinfo) directory contains the Istio example deployment
`bookinfo` along with its HTTP status check cronjobs.
| 34.5 | 71 | 0.811594 | eng_Latn | 0.995698 |
3ec8abaa05f8dc86390198242e19669d8d0281f5 | 1,210 | md | Markdown | 20200615/stdioH.md | Harrdy2018/C-Notebook | 91d77d31bb2192bdce8d91c0bb62069f5cf2f44d | [
"Apache-2.0"
] | null | null | null | 20200615/stdioH.md | Harrdy2018/C-Notebook | 91d77d31bb2192bdce8d91c0bb62069f5cf2f44d | [
"Apache-2.0"
] | null | null | null | 20200615/stdioH.md | Harrdy2018/C-Notebook | 91d77d31bb2192bdce8d91c0bb62069f5cf2f44d | [
"Apache-2.0"
] | null | null | null | ### sprintf
* sprintf(buffer, "%d", 123456);
* sprintf formats a string and writes it into a destination string; after the call above, buffer holds the string "123456".
```c
#include <stdio.h>
int main(int argc, char *argv[])
{
char buffer[10];
char *a = "1234";
char *b = "5678";
int num = sprintf(buffer, "%s %s", a, b);
printf("%s %d\n", buffer, num);//1234 5678 9
return 0;
}
```
### snprintf
* snprintf(buffer, 3, "%d", 12345);
* n (the size argument) is the maximum number of bytes to copy.
* If the formatted string fits within size (counting the terminating \0), it is copied in full and the \0 is appended;
* if the formatted string is longer than that, it is truncated and only (size-1) characters are copied into str, followed by a terminating \0.
* The return value is the length of the string it intended to write (not counting the terminating \0).
```c
#include <stdio.h>
int main(int argc, char *argv[])
{
char buffer[10];
char *str = "abcdefg";
int num = snprintf(buffer, 2, "%s", str);
printf("%s %d\n", buffer, num);//a 7
num = snprintf(buffer, 8, "%s", str);
printf("%s %d\n", buffer, num);//abcdefg 7
num = snprintf(buffer, 9, "%s", str);
printf("%s %d\n", buffer, num);//abcdefg 7
return 0;
}
```
### fprintf
* fprintf(stream, "%s", "we");
* Sends a formatted string to the given stream.
```c
#include <stdio.h>
int main(int argc, char *argv[])
{
FILE *fp = fopen("./test.txt", "w+");
int num = fprintf(fp,"%s %s %s %d", "we", "are", "in", 2020);
printf("%d\n", num); // 14
fclose(fp);
return 0;
}
```
| 24.693878 | 84 | 0.57438 | roh_Latn | 0.098086 |
3ec9a07bb470f9fdc8a4a76a5673b9fe2f4e99d5 | 1,149 | md | Markdown | README.md | alankarmisra/SwiftLib | 828903cbeb222c86d9ee697b222f493f504bef02 | [
"MIT"
] | null | null | null | README.md | alankarmisra/SwiftLib | 828903cbeb222c86d9ee697b222f493f504bef02 | [
"MIT"
] | null | null | null | README.md | alankarmisra/SwiftLib | 828903cbeb222c86d9ee697b222f493f504bef02 | [
"MIT"
] | null | null | null | #Classes
##GradientBackgroundView : UIView
A minimalist UIView convenience subclass to allow for a gradient background. It exposes the following method:
```swift
func setGradientBackgroundWithColors(colors:[AnyObject])
```
where colors need to be <code>CGColor</code> objects.
To further customize the gradient, use the property:
```swift
var gradientBackground:CAGradientLayer?
```
Setting this property to <code>nil</code> removes the gradient background.
###Usage example
```swift
class ViewController : UIViewController {
@IBOutlet var mView: GradientBackgroundView! // Link this to your ViewController's view. Don't forget to assign the GradientBackgroundView class to this view in Interface Builder.
override func viewDidLoad() {
super.viewDidLoad()
// ...
mView.setGradientBackgroundWithColors([UIColor.redColor().CGColor, UIColor.greenColor().CGColor])
// Optionally customize the gradient
let glayer = mView.gradientBackground // Returns a CAGradientLayer
// Customize the gradient here ...
}
}
```
The class overrides <code>layoutSubviews</code> so when the view is rotated, the gradient is redrawn.
| 33.794118 | 181 | 0.763272 | eng_Latn | 0.878841 |
3ecbe6ea938d25b5367193cdd601883fccf8b758 | 198 | md | Markdown | oh-shit.md | SDinev/ShitIDo | d005fbe9bebf9755e5ea104b66900aea419d6297 | [
"MIT"
] | null | null | null | oh-shit.md | SDinev/ShitIDo | d005fbe9bebf9755e5ea104b66900aea419d6297 | [
"MIT"
] | null | null | null | oh-shit.md | SDinev/ShitIDo | d005fbe9bebf9755e5ea104b66900aea419d6297 | [
"MIT"
] | null | null | null | ---
layout: page
title: Oh Shit!
permalink: /shit/
---
<h1>OMG, just created a second page. </h1>
Shit definitely is real.
Gotta love them templates. They are ugly, but we'll get to that too.
- | 15.230769 | 69 | 0.681818 | eng_Latn | 0.996788 |
3ecc1c494cc21b06dcd26055143da4331aba1871 | 1,035 | md | Markdown | README.md | sixers/bqdf | b2b759fabcb947c3589f296aa1f21ecd67ec86da | [
"MIT"
] | null | null | null | README.md | sixers/bqdf | b2b759fabcb947c3589f296aa1f21ecd67ec86da | [
"MIT"
] | null | null | null | README.md | sixers/bqdf | b2b759fabcb947c3589f296aa1f21ecd67ec86da | [
"MIT"
] | null | null | null | ==============================
BigQuery DataFrames POC Python
==============================
[](https://pypi.python.org/pypi/bqdf)
.. image:: https://img.shields.io/travis/sixers/bqdf.svg
:target: https://travis-ci.org/sixers/bqdf
.. image:: https://readthedocs.org/projects/bqdf/badge/?version=latest
:target: https://bqdf.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://pyup.io/repos/github/sixers/bqdf/shield.svg
:target: https://pyup.io/repos/github/sixers/bqdf/
:alt: Updates
Python Boilerplate contains all the boilerplate you need to create a Python package.
- Free software: MIT license
- Documentation: https://bqdf.readthedocs.io.
Features
--------
- TODO
Credits
---------
This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [futuresimple/cookiecutter-pypackage-reduced](https://github.com/futuresimple/cookiecutter-pypackage-reduced) project template.
| 27.972973 | 221 | 0.680193 | yue_Hant | 0.387985 |
3ecd675a9034883c101acf7889a22e0591662528 | 1,063 | md | Markdown | README.md | thatcluda/Violet | 29c86af9cd8abb84ff74be0c1e62f8d500895bc1 | [
"MIT"
] | 1 | 2022-03-22T05:31:20.000Z | 2022-03-22T05:31:20.000Z | README.md | auriliasleep/Violet | 29c86af9cd8abb84ff74be0c1e62f8d500895bc1 | [
"MIT"
] | 1 | 2022-03-24T10:04:39.000Z | 2022-03-24T17:17:00.000Z | README.md | thatcluda/Violet | 29c86af9cd8abb84ff74be0c1e62f8d500895bc1 | [
"MIT"
] | null | null | null | # Violet 🎀
Show the artwork of your current playing song on more areas than just the music app itself
## Preview
<img src="Preview.png">
## Installation
1. Add this repository to your package manager: `TBD`
2. Install Violet
## Compatibility
iPhone, iPad and iPod running iOS/iPadOS 14 or later
## Compiling
- [Theos](https://theos.dev/) is required to compile the project
- You may want to edit the root `Makefile` to use your Theos SDK and toolchain
## License
[MIT](https://github.com/Traurige/Violet/blob/main/LICENSE)
## Credits
- Motivation And Bug Fixes
- [MrGcGamer](https://twitter.com/MrGcGamer)
- Music App Help
- [DaveWijk](https://twitter.com/DaveWijk)
- Introduction To The Media Remote Framework
- [NepetaDev](https://twitter.com/NepetaDev)
- Continuous Corners For The Control Center Module
- [AzzouDuGhetto](https://twitter.com/AzzouDuGhetto)
- Icon And Banner
- [74k1_](https://twitter.com/74k1_)
- Duo Twitter Cell
- [arm64e](https://twitter.com/arm64e), [MrGcGamer](https://twitter.com/MrGcGamer)
| 31.264706 | 90 | 0.71778 | kor_Hang | 0.365205 |
3ece9d693fb6c45ca2dde7bf8665697ec69e4d5a | 20 | md | Markdown | README.md | Ariiah/kellys-first-files | 02bb9ef879f1a8e6cf157d801e1f1de689a82247 | [
"MIT"
] | null | null | null | README.md | Ariiah/kellys-first-files | 02bb9ef879f1a8e6cf157d801e1f1de689a82247 | [
"MIT"
] | null | null | null | README.md | Ariiah/kellys-first-files | 02bb9ef879f1a8e6cf157d801e1f1de689a82247 | [
"MIT"
] | null | null | null | # kellys-first-files | 20 | 20 | 0.8 | eng_Latn | 0.300723 |
3ed065647b57115c411c2ab2545b69aedc32687d | 2,084 | md | Markdown | BingAds/bingads-11/reporting-service/searchqueryreportaggregation.md | nschonni/BingAds-docs | 78ce1bd3f442623cc4a65d362c16f7d2baa3e7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | BingAds/bingads-11/reporting-service/searchqueryreportaggregation.md | nschonni/BingAds-docs | 78ce1bd3f442623cc4a65d362c16f7d2baa3e7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | BingAds/bingads-11/reporting-service/searchqueryreportaggregation.md | nschonni/BingAds-docs | 78ce1bd3f442623cc4a65d362c16f7d2baa3e7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: SearchQueryReportAggregation Value Set - Reporting
ms.service: bing-ads-reporting-service
ms.topic: article
author: eric-urban
ms.author: eur
description: Defines the aggregation values that you can use in a search query performance report.
---
# SearchQueryReportAggregation Value Set - Reporting
Defines the aggregation values that you can use in a search query performance report.
## Syntax
```xml
<xs:simpleType name="SearchQueryReportAggregation" xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:restriction base="xs:string">
<xs:enumeration value="Summary" />
<xs:enumeration value="Hourly" />
<xs:enumeration value="Daily" />
<xs:enumeration value="Weekly" />
<xs:enumeration value="Monthly" />
<xs:enumeration value="Yearly" />
<xs:enumeration value="HourOfDay" />
<xs:enumeration value="DayOfWeek" />
</xs:restriction>
</xs:simpleType>
```
## <a name="values"></a>Values
|Value|Description|
|-----------|---------------|
|<a name="daily"></a>Daily|The report data will be aggregated for each day.|
|<a name="dayofweek"></a>DayOfWeek|The report data will be aggregated by each of the seven days in a week.|
|<a name="hourly"></a>Hourly|The report data will be aggregated for each hour.|
|<a name="hourofday"></a>HourOfDay|The report data will be aggregated by each of the 24 hours in a day.|
|<a name="monthly"></a>Monthly|The report data will be aggregated for each month.|
|<a name="summary"></a>Summary|The report data will be aggregated for the entire specified report time.|
|<a name="weekly"></a>Weekly|The report data will be aggregated for each week.|
|<a name="yearly"></a>Yearly|The report data will be aggregated for each year.|
## Requirements
Service: [ReportingService.svc v11](https://reporting.api.bingads.microsoft.com/Api/Advertiser/Reporting/v11/ReportingService.svc)
Namespace: https\://bingads.microsoft.com/Reporting/v11
## Used By
[DSASearchQueryPerformanceReportRequest](dsasearchqueryperformancereportrequest.md)
[SearchQueryPerformanceReportRequest](searchqueryperformancereportrequest.md)
| 43.416667 | 132 | 0.737524 | eng_Latn | 0.514356 |
3ed0843549f3fedd68aadc16f728907c9a3e757e | 3,220 | md | Markdown | guide/src/SUMMARY.md | thomaseizinger/wasm-bindgen | 539e987cdb58875654f588dbbf5b8e05f29ff72a | [
"Apache-2.0",
"MIT"
] | null | null | null | guide/src/SUMMARY.md | thomaseizinger/wasm-bindgen | 539e987cdb58875654f588dbbf5b8e05f29ff72a | [
"Apache-2.0",
"MIT"
] | null | null | null | guide/src/SUMMARY.md | thomaseizinger/wasm-bindgen | 539e987cdb58875654f588dbbf5b8e05f29ff72a | [
"Apache-2.0",
"MIT"
] | null | null | null | # Summary
[Introduction](./introduction.md)
--------------------------------------------------------------------------------
- [Whirlwind Tour](./whirlwind-tour/introduction.md)
- [Basic Usage](./whirlwind-tour/basic-usage.md)
- [What Just Happened?](./whirlwind-tour/what-just-happened.md)
- [What Else Can We Do?](./whirlwind-tour/what-else-can-we-do.md)
- [Reference](./reference/index.md)
- [Passing Rust Closures to JS](./reference/passing-rust-closures-to-js.md)
- [Receiving JS Closures in Rust](./reference/receiving-js-closures-in-rust.md)
- [No ES Modules](./reference/no-esm.md)
- [Arbitrary Data with Serde](./reference/arbitrary-data-with-serde.md)
- [Command Line Interface](./reference/cli.md)
- [Supported Types](./reference/types.md)
- [`#[wasm_bindgen]` Attributes](./reference/attributes/index.md)
- [On JavaScript Imports](./reference/attributes/on-js-imports/index.md)
- [`catch`](./reference/attributes/on-js-imports/catch.md)
- [`constructor`](./reference/attributes/on-js-imports/constructor.md)
- [`extends`](./reference/attributes/on-js-imports/extends.md)
- [`getter` and `setter`](./reference/attributes/on-js-imports/getter-and-setter.md)
- [`indexing_getter`, `indexing_setter`, and `indexing_deleter`](./reference/attributes/on-js-imports/indexing-getter-setter-deleter.md)
- [`js_class = "Blah"`](./reference/attributes/on-js-imports/js_class.md)
- [`js_name`](./reference/attributes/on-js-imports/js_name.md)
- [`js_namespace`](./reference/attributes/on-js-imports/js_namespace.md)
- [`method`](./reference/attributes/on-js-imports/method.md)
- [`module = "blah"`](./reference/attributes/on-js-imports/module.md)
- [`static_method_of = Blah`](./reference/attributes/on-js-imports/static_method_of.md)
- [`structural`](./reference/attributes/on-js-imports/structural.md)
- [On Rust Exports](./reference/attributes/on-rust-exports/index.md)
- [`constructor`](./reference/attributes/on-rust-exports/constructor.md)
- [`js_name = Blah`](./reference/attributes/on-rust-exports/js_name.md)
- [`readonly`](./reference/attributes/on-rust-exports/readonly.md)
--------------------------------------------------------------------------------
- [Contributing](./contributing.md)
- [Testing](./testing.md)
- [Internal Design](./design.md)
- [JS Objects in Rust](./design/js-objects-in-rust.md)
- [Exporting a function to JS](./design/exporting-rust.md)
- [Exporting a struct to JS](./design/exporting-rust-struct.md)
- [Importing a function from JS](./design/importing-js.md)
- [Importing a class from JS](./design/importing-js-struct.md)
- [Rust Type conversions](./design/rust-type-conversions.md)
- [Types in `wasm-bindgen`](./design/describe.md)
- [`js-sys`](./js-sys.md)
- [Testing](./js-sys/testing.md)
- [Adding More APIs](./js-sys/adding-more-apis.md)
- [`web-sys`](./web-sys.md)
- [Overview](./web-sys/overview.md)
- [Testing](./web-sys/testing.md)
- [Logging](./web-sys/logging.md)
- [Supporting More Web APIs](./web-sys/supporting-more-web-apis.md)
- [Publishing](./publishing.md)
- [Team](./team.md)
| 54.576271 | 142 | 0.644099 | yue_Hant | 0.229136 |
3ed1006110b95c04835be9a2db106fb1bb7abbfa | 39,643 | md | Markdown | postgresql/aliyun_gcp_test.md | yumushui/database | d47023b8494c7e73bd2bf9aa28dad81ac54a02bb | [
"MIT"
] | null | null | null | postgresql/aliyun_gcp_test.md | yumushui/database | d47023b8494c7e73bd2bf9aa28dad81ac54a02bb | [
"MIT"
] | 1 | 2020-06-28T11:57:08.000Z | 2020-06-28T11:57:08.000Z | postgresql/aliyun_gcp_test.md | yumushui/database | d47023b8494c7e73bd2bf9aa28dad81ac54a02bb | [
"MIT"
] | 1 | 2020-06-28T08:35:54.000Z | 2020-06-28T08:35:54.000Z | # The postgresql Overtime Testing When Failover on Aliyun and GCP Cloud
## 1. Documentation about PostgreSQL instance failover
```
High availability: automatic and manual switchover:
https://help.aliyun.com/document_detail/96054.html?spm=a2c4g.11186623.2.7.3f2f4c07IASb5e#task-ftz-42j-wdb
Aliyun RDS: change instance configuration:
https://help.aliyun.com/document_detail/96061.html?spm=a2c4g.11186623.6.717.56601553kh7rWo
Aliyun RDS PostgreSQL: change instance configuration:
https://help.aliyun.com/document_detail/96750.html?spm=a2c4g.11186623.2.15.744460b4WDQrFm#concept-efl-pln-wdb
Aliyun RDS PostgreSQL: upgrade the minor engine version:
https://help.aliyun.com/document_detail/146895.html?spm=a2c4g.11186623.6.1018.701832f07X7ykk
Aliyun RDS PostgreSQL: automatic or manual primary/standby switchover:
https://help.aliyun.com/document_detail/96747.html?spm=a2c4g.11186623.6.1020.28c912c0JNkp46
GCP Cloud SQL for high availability
https://cloud.google.com/sql/docs/postgres/configure-ha
```
## 2. Failover Test Environment
```
-- aliyun pg RDS
Database type: PostgreSQL 12.0
Instance ID: pgm-j6czvm5baw66061r
Internal address: pgm-j6czvm5baw66061r8o.pg.rds.aliyuncs.com
Internal port: 1921
-- gcp cloud sql
airwallex-acquiring-poc:asia-east2:dba-test-cloudsql-pg12-master
34.96.133.132:5432
-- client
dba-test-vm-pg12-replicaiton-test
10.170.0.2
```
-- Test DB and tables
```
-- CREATE
aliyun_pg12_service=aliyun_pg12_master_cron
gcp_pg12_service=gcp_cloudsql_pg12_app
create database dba_test_db;
drop table if exists public.tab_overtime_test;
CREATE TABLE public.tab_overtime_test (
id serial,
time_int bigint,
radom_char character varying(10),
datetime character varying(40) DEFAULT now()
);
```
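The test script shown later connects with `psql service=...`, so both service names defined above need matching entries in the client's `~/.pg_service.conf`. A sketch of that file is shown below; the hosts and ports come from the environment section, while the database user and password are placeholders:

```bash
# Sketch of ~/.pg_service.conf on the test client (user/password are placeholders)
cat > ~/.pg_service.conf <<'EOF'
[aliyun_pg12_master_cron]
host=pgm-j6czvm5baw66061r8o.pg.rds.aliyuncs.com
port=1921
dbname=dba_test_db
user=dba_test_user
password=change_me

[gcp_cloudsql_pg12_app]
host=34.96.133.132
port=5432
dbname=dba_test_db
user=dba_test_user
password=change_me
EOF
```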
## 3. Test Plan
```
1 execute pg failover while the DB is being written to on Aliyun Cloud
2 execute pg failover while the DB is being read on Aliyun Cloud
3 execute pg failover while the DB is being written to on GCP Cloud
4 execute pg failover while the DB is being read on GCP Cloud
```
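The plan above does not spell out how each failover is triggered. As a reference sketch (not part of the original test notes), it can be done from each cloud's management layer; the GCP instance and project names below come from the environment section, and on Aliyun the switchover is normally started from the RDS console:

```bash
# GCP: trigger a manual failover of the HA Cloud SQL instance
gcloud sql instances failover dba-test-cloudsql-pg12-master --project=airwallex-acquiring-poc

# Aliyun: the primary/standby switchover for pgm-j6czvm5baw66061r is started from the
# RDS console (Service Availability -> Switch Primary/Secondary Instances); the matching
# OpenAPI action is SwitchDBInstanceHA.
```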
## 4. Testing Results
|Cloud|Type|Operation|Times|start_time|end_time|downtime (s)|
|--|--|--|--|--|--|--|
|Aliyun|Write|Failover|01|2020-12-07 20:38:26 |2020-12-07 12:38:58 |32 |
|Aliyun|Write|Failover|02|2020-12-07 12:50:51 |2020-12-07 12:51:24 |33 |
|Aliyun|Write|Failover|03|2020-12-07 13:04:13 |2020-12-07 13:04:42 |29 |
|Aliyun|Read|Failover|01|2020-12-07 21:19:37 |2020-12-07 21:20:12 |0 |
|GCP|Write|Failover|01|2020-12-07 13:39:39 |2020-12-07 13:40:06 |27 |
|GCP|Write|Failover|02|2020-12-07 13:48:44 |2020-12-07 13:49:29 |45 |
|GCP|Write|Failover|03|2020-12-07 13:53:41 |2020-12-07 13:54:08 |27 |
|GCP|Read|Failover|01|2020-12-07 13:59:02 |2020-12-07 13:59:46 |44 |
From the testing results, we can see:
1. The Aliyun RDS failover has no effect on reads; all read operations stay normal throughout the process.
2. GCP Cloud SQL for PostgreSQL can neither read nor write while the failover is in progress.
3. The failover time on Aliyun is about 30 seconds.
4. The failover time on GCP is between 27 and 44 seconds.
5. The Aliyun write test result is not what we expected.
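The downtime figures in the table can also be cross-checked from the test table itself, since every successful insert is timestamped. Below is a sketch of such a check; it assumes the varchar datetime column can be cast to timestamptz and reuses the pg_service name from the test script:

```bash
# Find the largest gap between consecutive successful inserts around the failover
psql service=aliyun_pg12_master_cron -c "
SELECT id,
       datetime::timestamptz AS resumed_at,
       datetime::timestamptz - lag(datetime::timestamptz) OVER (ORDER BY id) AS write_gap
FROM   public.tab_overtime_test
ORDER  BY write_gap DESC NULLS LAST
LIMIT  1;"
```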
The detailed shell check log is at:
```
https://github.com/yumushui/database/blob/master/postgresql/aliyun_gcp_test.log
```
## 5. Testing Details
### 5.1 - Test 01: execute pg failover while the DB is being written to on Aliyun Cloud
Test shell script
```
postgres@dba-test-vm-pg12-replicaiton-test-> cat overtime_write_test.sh
#!/bin/sh
aliyun_pg12_service=aliyun_pg12_master_cron
gcp_pg12_service=gcp_cloudsql_pg12_app
for ((i=1; i<=10000; i++))
do
echo "-- connect ${i} times: os time is `date '+%Y-%m-%d %H:%M:%S'` "
time_int=`date '+%Y%m%d%H%M%S'`
radom_char="croninsert"
insert_sql=`echo "INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( ${time_int} ,?${radom_char}? );" | sed "s/?/'/g"`
sql_01="select 'aliyun pg rds',* from public.tab_overtime_test order by id desc limit 1;"
psql service=${aliyun_pg12_service} -c "${insert_sql}" | sed '$d'
psql service=${aliyun_pg12_service} -t -c "${sql_01}" | sed '$d'
sql_02="select 'gcp cloudsql pg',* from public.tab_overtime_test order by id desc limit 1;"
#psql service=${gcp_pg12_service} -c "${insert_sql}" | sed '$d'
#psql service=${gcp_pg12_service} -t -c "${sql_02}" | sed '$d'
#date '+%Y-%m-%d %H:%M:%S'
sleep 1
echo
done
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test-> /bin/sh overtime_write_test.sh >> test.log 2>&1
```
### 5.1.1 - Times 01
Operation time:
```
$ date '+%Y-%m-%d %H:%M:%S'
2020-12-07 20:38:09
```
The check script recorded:
```
-- connect 36 times: os time is 2020-12-07 12:38:26
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 65 times: os time is 2020-12-07 12:38:57
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 66 times: os time is 2020-12-07 12:38:58
aliyun pg rds | 67 | 20201207123858 | croninsert | 2020-12-07 20:38:58.936888+08
```
Table data status:
```
34 | 20201207123824 | croninsert | 2020-12-07 20:38:24.153533+08
35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
67 | 20201207123858 | croninsert | 2020-12-07 20:38:58.936888+08
68 | 20201207123859 | croninsert | 2020-12-07 20:39:00.017241+08
69 | 20201207123901 | croninsert | 2020-12-07 20:39:01.101576+08
```
RDS primary/standby switchover record:
|Switchover event ID |Switchover start time |Switchover end time |Switchover reason|
|--|--|--|--|
|SwitchId |2020-12-07 20:38:24 |2020-12-07 20:39:00 |SwitchOver|
The corresponding Aliyun RDS error log:
```
2020-12-07 20:38:25 FATAL: 57P01: terminating connection due to administrator command
2020-12-07 20:38:25 FATAL: 57P01: terminating connection due to administrator command
2020-12-07 20:38:25 FATAL: 57P01: terminating connection due to administrator command
2020-12-07 20:38:25 WARNING: 01000: archive_mode enabled, yet archive_command is not se
2020-12-07 20:38:45 LOG: 00000: postmaster is in PM_STARTUP state.
2020-12-07 20:38:45 LOG: 00000: database system was interrupted; last known up at 2020-12-07 12:29:24 UTC
2020-12-07 20:38:45 LOG: 00000: entering standby mode
2020-12-07 20:38:45 LOG: 00000: database system was not properly shut down; automatic recovery in progress
2020-12-07 20:38:45 LOG: 00000: redo starts at C1/770016E8
2020-12-07 20:38:45 LOG: 00000: postmaster is in PM_RECOVERY state.
2020-12-07 20:38:45 LOG: 00000: invalid record length at C1/790041F8: wanted 24, got 0
2020-12-07 20:38:45 LOG: 00000: consistent recovery state reached at C1/790041F8
2020-12-07 20:38:45 LOG: 00000: database system is ready to accept read only connections
2020-12-07 20:38:45 LOG: 00000: postmaster is in PM_HOT_STANDBY state.
2020-12-07 20:38:45 LOG: 00000: started streaming WAL from primary at C1/79000000 on timeline 3
2020-12-07 20:38:58 LOG: 00000: replication terminated by primary server
2020-12-07 20:38:58 DETAIL: End of WAL reached on timeline 3 at C1/790041F8.
2020-12-07 20:38:58 LOG: 00000: fetching timeline history file for timeline 4 from primary server
2020-12-07 20:38:58 LOG: 00000: new target timeline is 4
2020-12-07 20:38:58 LOG: 00000: restarted WAL streaming at C1/79000000 on timeline 4
```
Shell script detail log:
```
####################################
# Test 01 Aliyun write and Read
###################################
#
#date '+%Y-%m-%d %H:%M:%S'
#2020-12-07 20:36:09
#
-- connect 1 times: os time is 2020-12-07 12:37:48
aliyun pg rds | 1 | 20201207123748 | croninsert | 2020-12-07 20:37:48.267839+08
-- connect 2 times: os time is 2020-12-07 12:37:49
aliyun pg rds | 2 | 20201207123749 | croninsert | 2020-12-07 20:37:49.338193+08
-- connect 3 times: os time is 2020-12-07 12:37:50
aliyun pg rds | 3 | 20201207123750 | croninsert | 2020-12-07 20:37:50.407646+08
-- connect 4 times: os time is 2020-12-07 12:37:51
aliyun pg rds | 4 | 20201207123751 | croninsert | 2020-12-07 20:37:51.476111+08
-- connect 5 times: os time is 2020-12-07 12:37:52
aliyun pg rds | 5 | 20201207123752 | croninsert | 2020-12-07 20:37:52.549553+08
-- connect 6 times: os time is 2020-12-07 12:37:53
aliyun pg rds | 6 | 20201207123753 | croninsert | 2020-12-07 20:37:53.65908+08
-- connect 7 times: os time is 2020-12-07 12:37:54
aliyun pg rds | 7 | 20201207123754 | croninsert | 2020-12-07 20:37:54.729642+08
-- connect 8 times: os time is 2020-12-07 12:37:55
aliyun pg rds | 8 | 20201207123755 | croninsert | 2020-12-07 20:37:55.801741+08
-- connect 9 times: os time is 2020-12-07 12:37:56
aliyun pg rds | 9 | 20201207123756 | croninsert | 2020-12-07 20:37:56.870385+08
-- connect 10 times: os time is 2020-12-07 12:37:57
aliyun pg rds | 10 | 20201207123757 | croninsert | 2020-12-07 20:37:57.939782+08
-- connect 11 times: os time is 2020-12-07 12:37:58
aliyun pg rds | 11 | 20201207123758 | croninsert | 2020-12-07 20:37:59.048608+08
-- connect 12 times: os time is 2020-12-07 12:38:00
aliyun pg rds | 12 | 20201207123800 | croninsert | 2020-12-07 20:38:00.118559+08
-- connect 13 times: os time is 2020-12-07 12:38:01
aliyun pg rds | 13 | 20201207123801 | croninsert | 2020-12-07 20:38:01.186474+08
-- connect 14 times: os time is 2020-12-07 12:38:02
aliyun pg rds | 14 | 20201207123802 | croninsert | 2020-12-07 20:38:02.269471+08
-- connect 15 times: os time is 2020-12-07 12:38:03
aliyun pg rds | 15 | 20201207123803 | croninsert | 2020-12-07 20:38:03.342371+08
-- connect 16 times: os time is 2020-12-07 12:38:04
aliyun pg rds | 16 | 20201207123804 | croninsert | 2020-12-07 20:38:04.495824+08
-- connect 17 times: os time is 2020-12-07 12:38:05
aliyun pg rds | 17 | 20201207123805 | croninsert | 2020-12-07 20:38:05.572181+08
-- connect 18 times: os time is 2020-12-07 12:38:06
aliyun pg rds | 18 | 20201207123806 | croninsert | 2020-12-07 20:38:06.641224+08
-- connect 19 times: os time is 2020-12-07 12:38:07
aliyun pg rds | 19 | 20201207123807 | croninsert | 2020-12-07 20:38:07.717562+08
-- connect 20 times: os time is 2020-12-07 12:38:08
aliyun pg rds | 20 | 20201207123808 | croninsert | 2020-12-07 20:38:08.788769+08
-- connect 21 times: os time is 2020-12-07 12:38:09
aliyun pg rds | 21 | 20201207123809 | croninsert | 2020-12-07 20:38:09.893611+08
-- connect 22 times: os time is 2020-12-07 12:38:10
aliyun pg rds | 22 | 20201207123810 | croninsert | 2020-12-07 20:38:11.037108+08
-- connect 23 times: os time is 2020-12-07 12:38:12
aliyun pg rds | 23 | 20201207123812 | croninsert | 2020-12-07 20:38:12.10846+08
-- connect 24 times: os time is 2020-12-07 12:38:13
aliyun pg rds | 24 | 20201207123813 | croninsert | 2020-12-07 20:38:13.180285+08
-- connect 25 times: os time is 2020-12-07 12:38:14
aliyun pg rds | 25 | 20201207123814 | croninsert | 2020-12-07 20:38:14.250297+08
-- connect 26 times: os time is 2020-12-07 12:38:15
aliyun pg rds | 26 | 20201207123815 | croninsert | 2020-12-07 20:38:15.321217+08
-- connect 27 times: os time is 2020-12-07 12:38:16
aliyun pg rds | 27 | 20201207123816 | croninsert | 2020-12-07 20:38:16.388051+08
-- connect 28 times: os time is 2020-12-07 12:38:17
aliyun pg rds | 28 | 20201207123817 | croninsert | 2020-12-07 20:38:17.496443+08
-- connect 29 times: os time is 2020-12-07 12:38:18
aliyun pg rds | 29 | 20201207123818 | croninsert | 2020-12-07 20:38:18.611667+08
-- connect 30 times: os time is 2020-12-07 12:38:19
aliyun pg rds | 30 | 20201207123819 | croninsert | 2020-12-07 20:38:19.797758+08
-- connect 31 times: os time is 2020-12-07 12:38:20
aliyun pg rds | 31 | 20201207123820 | croninsert | 2020-12-07 20:38:20.869894+08
-- connect 32 times: os time is 2020-12-07 12:38:21
aliyun pg rds | 32 | 20201207123821 | croninsert | 2020-12-07 20:38:21.938563+08
-- connect 33 times: os time is 2020-12-07 12:38:23
aliyun pg rds | 33 | 20201207123823 | croninsert | 2020-12-07 20:38:23.046391+08
-- connect 34 times: os time is 2020-12-07 12:38:24
aliyun pg rds | 34 | 20201207123824 | croninsert | 2020-12-07 20:38:24.153533+08
-- connect 35 times: os time is 2020-12-07 12:38:25
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 36 times: os time is 2020-12-07 12:38:26
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 37 times: os time is 2020-12-07 12:38:27
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 38 times: os time is 2020-12-07 12:38:28
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 39 times: os time is 2020-12-07 12:38:29
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 40 times: os time is 2020-12-07 12:38:30
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 41 times: os time is 2020-12-07 12:38:31
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 42 times: os time is 2020-12-07 12:38:32
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 43 times: os time is 2020-12-07 12:38:33
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 44 times: os time is 2020-12-07 12:38:35
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 45 times: os time is 2020-12-07 12:38:36
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 46 times: os time is 2020-12-07 12:38:37
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 47 times: os time is 2020-12-07 12:38:38
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 48 times: os time is 2020-12-07 12:38:39
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 49 times: os time is 2020-12-07 12:38:40
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 50 times: os time is 2020-12-07 12:38:41
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 51 times: os time is 2020-12-07 12:38:42
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 52 times: os time is 2020-12-07 12:38:43
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 53 times: os time is 2020-12-07 12:38:44
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 54 times: os time is 2020-12-07 12:38:45
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 55 times: os time is 2020-12-07 12:38:46
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 56 times: os time is 2020-12-07 12:38:48
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 57 times: os time is 2020-12-07 12:38:49
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 58 times: os time is 2020-12-07 12:38:50
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 59 times: os time is 2020-12-07 12:38:51
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 60 times: os time is 2020-12-07 12:38:52
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 61 times: os time is 2020-12-07 12:38:53
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 62 times: os time is 2020-12-07 12:38:54
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 63 times: os time is 2020-12-07 12:38:55
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 64 times: os time is 2020-12-07 12:38:56
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 65 times: os time is 2020-12-07 12:38:57
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 35 | 20201207123825 | croninsert | 2020-12-07 20:38:25.225612+08
-- connect 66 times: os time is 2020-12-07 12:38:58
aliyun pg rds | 67 | 20201207123858 | croninsert | 2020-12-07 20:38:58.936888+08
-- connect 67 times: os time is 2020-12-07 12:38:59
aliyun pg rds | 68 | 20201207123859 | croninsert | 2020-12-07 20:39:00.017241+08
-- connect 68 times: os time is 2020-12-07 12:39:01
aliyun pg rds | 69 | 20201207123901 | croninsert | 2020-12-07 20:39:01.101576+08
-- connect 69 times: os time is 2020-12-07 12:39:02
aliyun pg rds | 70 | 20201207123902 | croninsert | 2020-12-07 20:39:02.182775+08
-- connect 70 times: os time is 2020-12-07 12:39:03
aliyun pg rds | 71 | 20201207123903 | croninsert | 2020-12-07 20:39:03.297124+08
-- connect 71 times: os time is 2020-12-07 12:39:04
aliyun pg rds | 72 | 20201207123904 | croninsert | 2020-12-07 20:39:04.372773+08
```
### 5.1.2 - Times 02
Operation time
```
$ date '+%Y-%m-%d %H:%M:%S'
2020-12-07 20:49:57
```
Shell check status
```
-- connect 33 times: os time is 2020-12-07 12:50:49
aliyun pg rds | 176 | 20201207125050 | croninsert | 2020-12-07 20:50:50.105628+08
-- connect 34 times: os time is 2020-12-07 12:50:51
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 176 | 20201207125050 | croninsert | 2020-12-07 20:50:50.105628+08
-- connect 64 times: os time is 2020-12-07 12:51:23
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 176 | 20201207125050 | croninsert | 2020-12-07 20:50:50.105628+08
-- connect 65 times: os time is 2020-12-07 12:51:24
aliyun pg rds | 199 | 20201207125124 | croninsert | 2020-12-07 20:51:24.709439+08
```
Table check status
```
174 | 20201207125047 | croninsert | 2020-12-07 20:50:47.884333+08
175 | 20201207125048 | croninsert | 2020-12-07 20:50:48.95875+08
176 | 20201207125050 | croninsert | 2020-12-07 20:50:50.105628+08
199 | 20201207125124 | croninsert | 2020-12-07 20:51:24.709439+08
200 | 20201207125125 | croninsert | 2020-12-07 20:51:25.783784+08
201 | 20201207125126 | croninsert | 2020-12-07 20:51:26.894779+08
202 | 20201207125127 | croninsert | 2020-12-07 20:51:27.964461+08
```
RDS primary/standby switchover log:
|Switch Event ID |Switch Start Time |Switch End Time |Switch Reason |
|--|--|--|--|
|SwitchId |2020-12-07 20:50:49 |2020-12-07 20:51:26 |SwitchOver|
Aliyun RDS error log:
```
2020-12-07 20:50:51 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 20:50:51 STATEMENT: INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( 20201207125051 ,'croninsert' );
2020-12-07 20:50:52 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 20:50:52 STATEMENT: INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( 20201207125052 ,'croninsert' );
2020-12-07 20:50:53 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 20:50:53 STATEMENT: INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( 20201207125053 ,'croninsert' );
2020-12-07 20:50:54 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 20:50:54 STATEMENT: INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( 20201207125054 ,'croninsert' );
2020-12-07 20:50:55 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 20:50:55 STATEMENT: INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( 20201207125055 ,'croninsert' );
2020-12-07 20:51:08 FATAL: XX000: could not receive data from WAL stream: SSL SYSCALL error: EOF detected
2020-12-07 20:51:08 LOG: 00000: record with incorrect prev-link 0/21 at C1/7B001DA8
2020-12-07 20:51:08 FATAL: XX000: could not connect to the primary server: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
2020-12-07 20:51:08 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 20:51:08 STATEMENT: INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( 20201207125108 ,'croninsert' );
2020-12-07 20:51:21 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 20:51:21 STATEMENT: INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( 20201207125121 ,'croninsert' );
```
### 5.1.3 - Times 03
Operation time:
```
$ date '+%Y-%m-%d %H:%M:%S'
2020-12-07 21:03:18
```
Shell check status
```
-- connect 34 times: os time is 2020-12-07 13:04:12
aliyun pg rds | 275 | 20201207130412 | croninsert | 2020-12-07 21:04:12.461877+08
-- connect 35 times: os time is 2020-12-07 13:04:13
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 275 | 20201207130412 | croninsert | 2020-12-07 21:04:12.461877+08
-- connect 61 times: os time is 2020-12-07 13:04:41
ERROR: cannot execute INSERT in a read-only transaction
aliyun pg rds | 275 | 20201207130412 | croninsert | 2020-12-07 21:04:12.461877+08
-- connect 62 times: os time is 2020-12-07 13:04:42
aliyun pg rds | 298 | 20201207130442 | croninsert | 2020-12-07 21:04:42.877442+08
```
Table check status
```
273 | 20201207130410 | croninsert | 2020-12-07 21:04:10.323775+08
274 | 20201207130411 | croninsert | 2020-12-07 21:04:11.393753+08
275 | 20201207130412 | croninsert | 2020-12-07 21:04:12.461877+08
298 | 20201207130442 | croninsert | 2020-12-07 21:04:42.877442+08
299 | 20201207130443 | croninsert | 2020-12-07 21:04:43.959244+08
300 | 20201207130445 | croninsert | 2020-12-07 21:04:45.03847+08
```
RDS primary/standby switchover log:
|Switch Event ID |Switch Start Time |Switch End Time |Switch Reason |
|--|--|--|--|
|SwitchId |2020-12-07 21:04:11 |2020-12-07 21:04:42 |SwitchOver |
Aliyun RDS error log
```
2020-12-07 21:04:13 FATAL: 57P01: terminating connection due to administrator command
2020-12-07 21:04:13 FATAL: 57P01: terminating connection due to administrator command
2020-12-07 21:04:13 FATAL: 57P01: terminating connection due to administrator command
2020-12-07 21:04:23 WARNING: 01000: archive_mode enabled, yet archive_command is not set
2020-12-07 21:04:32 LOG: 00000: postmaster is in PM_STARTUP state.
2020-12-07 21:04:32 LOG: 00000: database system was interrupted; last known up at 2020-12-07 12:51:28 UTC
2020-12-07 21:04:32 LOG: 00000: entering standby mode
2020-12-07 21:04:32 LOG: 00000: database system was not properly shut down; automatic recovery in progress
2020-12-07 21:04:32 LOG: 00000: redo starts at C1/7B001DD8
2020-12-07 21:04:32 LOG: 00000: invalid record length at C1/7D004C10: wanted 24, got 0
2020-12-07 21:04:32 LOG: 00000: postmaster is in PM_RECOVERY state.
2020-12-07 21:04:32 LOG: 00000: consistent recovery state reached at C1/7D004C10
2020-12-07 21:04:32 LOG: 00000: database system is ready to accept read only connections
2020-12-07 21:04:32 LOG: 00000: postmaster is in PM_HOT_STANDBY state.
2020-12-07 21:04:32 LOG: 00000: started streaming WAL from primary at C1/7D000000 on timeline 5
```
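The id gaps in the table checks above (35 → 67, 176 → 199, 275 → 298) are what actually quantify the write outage for each switchover, roughly 30 to 35 seconds per run. Below is a small sketch (not part of the original test kit) that pulls the largest gaps straight out of the test table. It assumes psycopg2, a reachable DSN for the tested instance, and that the default-filled timestamp column (shown as the last column in the checks above but never named in this document) is called insert_time; adjust those for your environment.
```
# Sketch only (not from the original test kit): list the largest insert gaps in
# public.tab_overtime_test; they line up with the switchover windows above.
# Assumptions: psycopg2 installed, DSN placeholder replaced, and the timestamp
# column named insert_time (rename it here if your column differs).
import psycopg2

DSN = "host=<rds-endpoint> port=5432 dbname=postgres user=postgres password=<secret>"

GAP_SQL = """
SELECT id,
       insert_time,
       insert_time - lag(insert_time) OVER (ORDER BY id) AS gap_from_previous
FROM   public.tab_overtime_test
ORDER  BY gap_from_previous DESC NULLS LAST
LIMIT  5;
"""

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(GAP_SQL)
    for row_id, inserted_at, gap in cur.fetchall():
        # the top rows correspond to the ~30-35 second windows in 5.1.1 - 5.1.3
        print(f"id={row_id} inserted_at={inserted_at} gap={gap}")
```
Running this after the three runs above should surface the same windows that the manual table checks show.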
## 5.2 - Test 02: execute pg failover while the DB is serving reads on Aliyun Cloud
Test shell script
```
postgres@dba-test-vm-pg12-replicaiton-test-> cat overtime_write_test.sh
#!/bin/sh
aliyun_pg12_service=aliyun_pg12_master_cron
gcp_pg12_service=gcp_cloudsql_pg12_app
for ((i=1; i<=10000; i++))
do
echo "-- connect ${i} times: os time is `date '+%Y-%m-%d %H:%M:%S'` "
time_int=`date '+%Y%m%d%H%M%S'`
radom_char="croninsert"
insert_sql=`echo "INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( ${time_int} ,?${radom_char}? );" | sed "s/?/'/g"`
sql_01="select 'aliyun pg rds',* from public.tab_overtime_test order by id desc limit 1;"
psql service=${aliyun_pg12_service} -c "${insert_sql}" | sed '$d'
psql service=${aliyun_pg12_service} -t -c "${sql_01}" | sed '$d'
sql_02="select 'gcp cloudsql pg',* from public.tab_overtime_test order by id desc limit 1;"
#psql service=${gcp_pg12_service} -c "${insert_sql}" | sed '$d'
#psql service=${gcp_pg12_service} -t -c "${sql_02}" | sed '$d'
#date '+%Y-%m-%d %H:%M:%S'
sleep 1
echo
done
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test-> /bin/sh overtime_write_test.sh >> test.log 2>&1
```
### 5.2.1 - Times 01
Operation time
```
$ date '+%Y-%m-%d %H:%M:%S'
2020-12-07 21:18:37
```
Shell Check status
```
```
RDS primary/standby switchover log:
|Switch Event ID |Switch Start Time |Switch End Time |Switch Reason |
|--|--|--|--|
|SwitchId |2020-12-07 21:19:37 |2020-12-07 21:20:12 |SwitchOver |
Aliyun RDS error log:
```
2020-12-07 21:20:00 FATAL: XX000: could not receive data from WAL stream: SSL SYSCALL error: EOF detected
2020-12-07 21:20:00 LOG: 00000: record with incorrect prev-link C1/3A001C58 at C1/80001C80
2020-12-07 21:20:00 FATAL: XX000: could not connect to the primary server: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
2020-12-07 21:20:01 ERROR: 25006: cannot execute INSERT in a read-only transaction
2020-12-07 21:20:01 STATEMENT: INSERT INTO tab_cron (time_int, radom_char) VALUES( 07212001 ,'croninsert' );
2020-12-07 21:20:05 LOG: 00000: started streaming WAL from primary at C1/80000000 on timeline 6
2020-12-07 21:20:10 LOG: 00000: promote trigger file found: /data/postgresql.trigger
2020-12-07 21:20:10 FATAL: 57P01: terminating walreceiver process due to administrator command
2020-12-07 21:20:10 LOG: 00000: redo done at C1/80001C48
2020-12-07 21:20:10 LOG: 00000: last completed transaction was at log time 2020-12-07 21:19:42.319675+08
2020-12-07 21:20:10 LOG: 00000: selected new timeline ID: 7
2020-12-07 21:20:10 LOG: 00000: archive recovery complete
2020-12-07 21:20:10 LOG: 00000: checkpoint starting: force
2020-12-07 21:20:10 LOG: 00000: postmaster is in PM_RUN state.
2020-12-07 21:20:10 LOG: 00000: database system is ready to accept connections
2020-12-07 21:20:10 WARNING: 01000: archive_mode enabled, yet archive_command is not set
2020-12-07 21:20:12 LOG: 00000: checkpoint complete: wrote 19 buffers (0.0%); 0 WAL file(s) added, 0 removed, 5 recycled; write=1.905 s, sync=0.003 s, total=1.913 s; sync files=14, longest=0.000 s, average=0.000 s; distance=81919 kB, estimate=81919 kB
2020-12-07 21:21:10 WARNING: 01000: archive_mode enabled, yet archive_command is not set
```
## 5.3 - Test 03: execute pg failover while the DB is serving writes on GCP Cloud
Test shell script
```
postgres@dba-test-vm-pg12-replicaiton-test-> cat overtime_write_test.sh
#!/bin/sh
aliyun_pg12_service=aliyun_pg12_master_cron
gcp_pg12_service=gcp_cloudsql_pg12_app
for ((i=1; i<=10000; i++))
do
echo "-- connect ${i} times: os time is `date '+%Y-%m-%d %H:%M:%S'` "
time_int=`date '+%Y%m%d%H%M%S'`
radom_char="croninsert"
insert_sql=`echo "INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( ${time_int} ,?${radom_char}? );" | sed "s/?/'/g"`
sql_01="select 'aliyun pg rds',* from public.tab_overtime_test order by id desc limit 1;"
#psql service=${aliyun_pg12_service} -c "${insert_sql}" | sed '$d'
#psql service=${aliyun_pg12_service} -t -c "${sql_01}" | sed '$d'
sql_02="select 'gcp cloudsql pg',* from public.tab_overtime_test order by id desc limit 1;"
psql service=${gcp_pg12_service} -c "${insert_sql}" | sed '$d'
psql service=${gcp_pg12_service} -t -c "${sql_02}" | sed '$d'
#date '+%Y-%m-%d %H:%M:%S'
sleep 1
echo
done
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test-> /bin/sh overtime_write_test.sh >> test.log 2>&1
```
### 5.3.1 - Times 01
Failover operation in progress. This may take a few minutes. While this operation is running, you may continue to view information about the instance.
Shell Check status
```
-- connect 27 times: os time is 2020-12-07 13:39:39
gcp cloudsql pg | 46 | 20201207133939 | croninsert | 2020-12-07 13:39:39.646454+00
-- connect 28 times: os time is 2020-12-07 13:39:40
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 37 times: os time is 2020-12-07 13:39:49
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 38 times: os time is 2020-12-07 13:39:50
gcp cloudsql pg | 47 | 20201207133950 | croninsert | 2020-12-07 13:40:06.028518+00
-- connect 39 times: os time is 2020-12-07 13:40:07
gcp cloudsql pg | 48 | 20201207134007 | croninsert | 2020-12-07 13:40:07.113807+00
```
Table Check status
```
43 | 20201207133936 | croninsert | 2020-12-07 13:39:36.492415+00
44 | 20201207133937 | croninsert | 2020-12-07 13:39:37.542102+00
45 | 20201207133938 | croninsert | 2020-12-07 13:39:38.595612+00
46 | 20201207133939 | croninsert | 2020-12-07 13:39:39.646454+00
47 | 20201207133950 | croninsert | 2020-12-07 13:40:06.028518+00
48 | 20201207134007 | croninsert | 2020-12-07 13:40:07.113807+00
49 | 20201207134008 | croninsert | 2020-12-07 13:40:08.178932+00
50 | 20201207134009 | croninsert | 2020-12-07 13:40:09.237362+00
51 | 20201207134010 | croninsert | 2020-12-07 13:40:10.291813+00
52 | 20201207134011 | croninsert | 2020-12-07 13:40:11.345+00
```
GCP log
|Creation Time |Type |Status |
|--|--|--|
|Dec 7, 2020, 9:39:33 PM |Failover |Failover finished |
### 5.3.2 - Times 02
Failover operation
```
dba-test-cloudsql-pg12-master
dba-test-cloudsql-pg12-master
PostgreSQL 12
Failover operation in progress. This may take a few minutes. While this operation is running, you may continue to view information about the instance.
```
Shell check status
```
-- connect 31 times: os time is 2020-12-07 13:48:44
gcp cloudsql pg | 176 | 20201207134844 | croninsert | 2020-12-07 13:48:44.396033+00
-- connect 32 times: os time is 2020-12-07 13:48:45
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 44 times: os time is 2020-12-07 13:48:57
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 45 times: os time is 2020-12-07 13:48:58
gcp cloudsql pg | 177 | 20201207134858 | croninsert | 2020-12-07 13:49:29.832244+00
-- connect 46 times: os time is 2020-12-07 13:49:30
gcp cloudsql pg | 178 | 20201207134930 | croninsert | 2020-12-07 13:49:30.917551+00
```
Table Check Status
```
174 | 20201207134842 | croninsert | 2020-12-07 13:48:42.287553+00
175 | 20201207134843 | croninsert | 2020-12-07 13:48:43.343623+00
176 | 20201207134844 | croninsert | 2020-12-07 13:48:44.396033+00
177 | 20201207134858 | croninsert | 2020-12-07 13:49:29.832244+00
178 | 20201207134930 | croninsert | 2020-12-07 13:49:30.917551+00
179 | 20201207134931 | croninsert | 2020-12-07 13:49:31.97159+00
180 | 20201207134933 | croninsert | 2020-12-07 13:49:33.027944+00
181 | 20201207134934 | croninsert | 2020-12-07 13:49:34.085486+00
```
GCP Cloud SQL log
|Creation Time |Type |Status |
|--|--|--|
|Dec 7, 2020, 9:48:41 PM |Failover |Failover finished |
### 5.3.3 - Times 03
Shell check status
```
-- connect 32 times: os time is 2020-12-07 13:53:41
gcp cloudsql pg | 257 | 20201207135341 | croninsert | 2020-12-07 13:53:41.347685+00
-- connect 33 times: os time is 2020-12-07 13:53:42
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 43 times: os time is 2020-12-07 13:53:52
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 44 times: os time is 2020-12-07 13:53:53
gcp cloudsql pg | 258 | 20201207135353 | croninsert | 2020-12-07 13:54:08.87221+00
```
Table check status
```
254 | 20201207135338 | croninsert | 2020-12-07 13:53:38.174928+00
255 | 20201207135339 | croninsert | 2020-12-07 13:53:39.23311+00
256 | 20201207135340 | croninsert | 2020-12-07 13:53:40.291614+00
257 | 20201207135341 | croninsert | 2020-12-07 13:53:41.347685+00
258 | 20201207135353 | croninsert | 2020-12-07 13:54:08.87221+00
259 | 20201207135409 | croninsert | 2020-12-07 13:54:09.967082+00
260 | 20201207135411 | croninsert | 2020-12-07 13:54:11.02103+00
261 | 20201207135412 | croninsert | 2020-12-07 13:54:12.095931+00
262 | 20201207135413 | croninsert | 2020-12-07 13:54:13.195584+00
```
GCP Cloud SQL log
|Creation Time |Type |Status |
|--|--|--|
|Dec 7, 2020, 9:53:35 PM |Failover |Failover operation in progress |
|Dec 7, 2020, 9:53:35 PM |Failover |Failover finished |
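For the GCP runs the client mostly sees psql connection-refused errors rather than read-only errors, so the outage can also be read straight out of test.log. The sketch below is illustrative only; it assumes the log written by overtime_write_test.sh above, keys off the `-- connect N times: os time is ...` marker lines, and treats an iteration as failed if it printed a `psql: error:` or `ERROR:` line.
```
# Sketch: derive client-visible outage windows from test.log.
# Each "-- connect N times" marker starts one iteration; an iteration counts as
# failed if any psql error line appears before the next marker.
from datetime import datetime
import re

MARKER = re.compile(r"^-- connect \d+ times: os time is (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")
FAILURE_PREFIXES = ("psql: error:", "ERROR:")

def outage_windows(path="test.log"):
    iterations = []  # one [timestamp, failed] pair per loop iteration
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = MARKER.match(line)
            if m:
                ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
                iterations.append([ts, False])
            elif iterations and line.startswith(FAILURE_PREFIXES):
                iterations[-1][1] = True
    windows, fail_start = [], None
    for ts, failed in iterations:
        if failed and fail_start is None:
            fail_start = ts                    # first failed iteration
        elif not failed and fail_start is not None:
            windows.append((fail_start, ts, ts - fail_start))
            fail_start = None                  # first success after the outage
    return windows

if __name__ == "__main__":
    for start, end, length in outage_windows():
        print(f"client-visible outage from {start} until {end} (~{length})")
```
Note that this measures the time between marker lines, so an iteration whose psql call blocked for a while (such as connect 38 in 5.3.1, whose insert only landed at 13:40:06) makes the log-based figure slightly smaller than the gap visible in the table itself.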
## 5.4 - Test 04: execute pg failover while the DB is serving reads on GCP Cloud
Test shell script
```
postgres@dba-test-vm-pg12-replicaiton-test-> cat overtime_write_test.sh
#!/bin/sh
aliyun_pg12_service=aliyun_pg12_master_cron
gcp_pg12_service=gcp_cloudsql_pg12_app
for ((i=1; i<=10000; i++))
do
echo "-- connect ${i} times: os time is `date '+%Y-%m-%d %H:%M:%S'` "
time_int=`date '+%Y%m%d%H%M%S'`
radom_char="croninsert"
insert_sql=`echo "INSERT INTO public.tab_overtime_test (time_int, radom_char) VALUES( ${time_int} ,?${radom_char}? );" | sed "s/?/'/g"`
sql_01="select 'aliyun pg rds',* from public.tab_overtime_test order by id desc limit 1;"
psql service=${aliyun_pg12_service} -c "${insert_sql}" | sed '$d'
psql service=${aliyun_pg12_service} -t -c "${sql_01}" | sed '$d'
sql_02="select 'gcp cloudsql pg',* from public.tab_overtime_test order by id desc limit 1;"
#psql service=${gcp_pg12_service} -c "${insert_sql}" | sed '$d'
psql service=${gcp_pg12_service} -t -c "${sql_02}" | sed '$d'
#date '+%Y-%m-%d %H:%M:%S'
sleep 1
echo
done
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test->
postgres@dba-test-vm-pg12-replicaiton-test-> /bin/sh overtime_write_test.sh >> test.log 2>&1
```
### 5.4.1 - Times 01
Failover
```
PostgreSQL 12
Failover operation in progress. This may take a few minutes. While this operation is running, you may continue to view information about the instance.
```
Shell Check status
```
-- connect 27 times: os time is 2020-12-07 13:59:02
gcp cloudsql pg | 300 | 20201207135453 | croninsert | 2020-12-07 13:54:53.44431+00
-- connect 28 times: os time is 2020-12-07 13:59:03
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 37 times: os time is 2020-12-07 13:59:13
psql: error: could not connect to server: could not connect to server: Connection refused
Is the server running on host "34.96.133.132" and accepting
TCP/IP connections on port 5432?
-- connect 38 times: os time is 2020-12-07 13:59:14
gcp cloudsql pg | 300 | 20201207135453 | croninsert | 2020-12-07 13:54:53.44431+00
```
GCP Cloud SQL log
|Creation Time |Type |Status |
|--|--|--|
|Dec 7, 2020, 9:58:57 PM |Failover |Failover operation in progress |
|Dec 7, 2020, 9:58:57 PM |Failover |Failover finished |
sub_ledger_entry Partition key: RANGE (settlement_date)
general_ledger_entry Partition key: RANGE (settlement_date)
application_journal Partition key: RANGE (journal_time)
adjust_sub_ledger_entry Partition key: RANGE (last_update)
adjust_ledger_entry Partition key: RANGE (last_update)
accounting_journal_register Partition key: RANGE (event_time)
accounting_journal_entry Partition key: RANGE (settlement_date)
accounting_journal Partition key: RANGE (journal_time)
adjust_journal_entry Partition key: RANGE (create_time)
select public.create_partition_sub_ledger_entry();
select public.create_partition_general_ledger_entry();
select public.create_partition_application_journal();
select public.create_partition_adjust_sub_ledger_entry();
select public.create_partition_adjust_ledger_entry();
select public.create_partition_accounting_journal_register();
select public.create_partition_accounting_journal_entry();
select public.create_partition_accounting_journal();
select public.create_partition_adjust_journal_entry();
instance_list = {
"prod-gcp-sg-pgsql12-platform-authapi-1" = "10.14.244.43"
"prod-gcp-sg-pgsql12-platform-billing-1" = "10.14.244.34"
"prod-gcp-sg-pgsql12-platform-billing-config-1" = "10.14.244.37"
} | 40.165147 | 251 | 0.723558 | eng_Latn | 0.520048 |
3ed10b80bfa3ab60b976cda0f5b27c52b9ce4743 | 123 | md | Markdown | README.md | alep007/qa-prime-numbers-project | ce008d73b5aaf420f36a9b65e83b9fd0afe3783e | [
"MIT"
] | null | null | null | README.md | alep007/qa-prime-numbers-project | ce008d73b5aaf420f36a9b65e83b9fd0afe3783e | [
"MIT"
] | null | null | null | README.md | alep007/qa-prime-numbers-project | ce008d73b5aaf420f36a9b65e83b9fd0afe3783e | [
"MIT"
] | null | null | null | # qa-prime-numbers-project
Final project of the module: a web API to check whether a number is prime and to retrieve n prime numbers.
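The README does not say how the API is implemented, so the snippet below is only an illustrative sketch of the two endpoints it describes (a primality check and the first n primes), written with Flask; the route names and port are assumptions, not taken from this project.
```python
# Hypothetical sketch of the endpoints described above (not this project's code).
from flask import Flask, jsonify

app = Flask(__name__)

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

@app.route("/primes/check/<int:number>")
def check(number: int):
    return jsonify({"number": number, "is_prime": is_prime(number)})

@app.route("/primes/first/<int:count>")
def first_primes(count: int):
    primes, candidate = [], 2
    while len(primes) < count:
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1
    return jsonify({"count": count, "primes": primes})

if __name__ == "__main__":
    app.run(port=5000)
```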
| 41 | 95 | 0.788618 | eng_Latn | 0.972762 |
3ed113e75a49bea9d84b1163520bee24f38fb0a7 | 5,517 | md | Markdown | fabric-sdk-node/25661-26853/26278.md | hyperledger-gerrit-archive/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 2 | 2021-11-08T08:06:48.000Z | 2021-12-03T01:51:44.000Z | fabric-sdk-node/25661-26853/26278.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | null | null | null | fabric-sdk-node/25661-26853/26278.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 4 | 2019-12-07T05:54:26.000Z | 2020-06-04T02:29:43.000Z | <strong>Project</strong>: fabric-sdk-node<br><strong>Branch</strong>: master<br><strong>ID</strong>: 26278<br><strong>Subject</strong>: FABN-919: Add error messages to event strategy failures<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Mark S. Lewis - [email protected]<br><strong>Assignee</strong>:<br><strong>Created</strong>: 9/13/2018, 5:28:11 AM<br><strong>LastUpdated</strong>: 9/13/2018, 7:04:30 AM<br><strong>CommitMessage</strong>:<br><pre>FABN-919: Add error messages to event strategy failures
Change-Id: I85573e32c78e8c2089eccb68801a7919bee994cb
Signed-off-by: Mark S. Lewis <[email protected]>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Mark S. Lewis - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 5:28:11 AM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 5:28:21 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-node8-verify-master-s390x/444/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 5:31:04 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-node8-verify-master-x86_64/458/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 5:55:05 AM<br><strong>Message</strong>: <pre>Patch Set 1: Verified-1
Build Failed
https://jenkins.hyperledger.org/job/fabric-sdk-node8-verify-master-s390x/444/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-node8-verify-master-s390x/444/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-node8-verify-master-s390x/444
https://jenkins.hyperledger.org/job/fabric-sdk-node8-verify-master-x86_64/458/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-node8-verify-master-x86_64/458</pre><strong>Reviewer</strong>: Andrew Coleman - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 5:58:34 AM<br><strong>Message</strong>: <pre>Patch Set 1:
reverify-node8z</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 5:58:42 AM<br><strong>Message</strong>: <pre>Patch Set 1: -Verified
Build Started https://jenkins.hyperledger.org/job/fabric-sdk-node8-verify-master-s390x/445/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 6:24:07 AM<br><strong>Message</strong>: <pre>Patch Set 1: Verified+1
Build Successful
https://jenkins.hyperledger.org/job/fabric-sdk-node8-verify-master-s390x/445/ : SUCCESS
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-node8-verify-master-s390x/445</pre><strong>Reviewer</strong>: Andrew Coleman - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 6:35:42 AM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review+2
looks good</pre><strong>Reviewer</strong>: Andrew Coleman - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 6:35:44 AM<br><strong>Message</strong>: <pre>Change has been successfully merged by Andrew Coleman</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 9/13/2018, 7:04:30 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Failed
https://jenkins.hyperledger.org/job/fabric-sdk-node8-merge-master-x86_64/132/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-node8-merge-master-x86_64/132/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-node8-merge-master-x86_64/132
https://jenkins.hyperledger.org/job/fabric-sdk-node8-merge-master-s390x/132/ : FAILURE
No problems were identified. If you know why this problem occurred, please add a suitable Cause for it. ( https://jenkins.hyperledger.org/job/fabric-sdk-node8-merge-master-s390x/132/ )
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-sdk-node8-merge-master-s390x/132</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Mark S. Lewis - [email protected]<br><strong>Uploader</strong>: Mark S. Lewis - [email protected]<br><strong>Created</strong>: 9/13/2018, 5:28:11 AM<br><strong>GitHubMergedRevision</strong>: [8d6e09fdab02064163f0d02802af12dbc07bcd18](https://github.com/hyperledger-gerrit-archive/fabric-sdk-node/commit/8d6e09fdab02064163f0d02802af12dbc07bcd18)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 9/13/2018, 6:24:07 AM<br><strong>Type</strong>: Verified<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Andrew Coleman - [email protected]<br><strong>Approved</strong>: 9/13/2018, 6:35:42 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: Andrew Coleman<br><strong>Merged</strong>: 9/13/2018, 6:35:44 AM<br><br></blockquote> | 117.382979 | 1,109 | 0.764908 | kor_Hang | 0.275992 |
3ed1c9474c2f680559b97d6b1a7b6b4b3b167170 | 1,307 | md | Markdown | _posts/2019-07-14-A-Simple-BERT-Based-Approach-for-Lexical-Simplification.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | 7 | 2018-02-11T01:50:19.000Z | 2020-01-14T02:07:17.000Z | _posts/2019-07-14-A-Simple-BERT-Based-Approach-for-Lexical-Simplification.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | null | null | null | _posts/2019-07-14-A-Simple-BERT-Based-Approach-for-Lexical-Simplification.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | 4 | 2018-02-04T15:58:04.000Z | 2019-08-29T14:54:14.000Z | ---
layout: post
title: "A Simple BERT-Based Approach for Lexical Simplification"
date: 2019-07-14 14:19:22
categories: arXiv_AI
tags: arXiv_AI Language_Model
author: Jipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan
mathjax: true
---
* content
{:toc}
##### Abstract
Lexical simplification (LS) aims to replace complex words in a given sentence with their simpler alternatives of equivalent meaning. Recently unsupervised lexical simplification approaches only rely on the complex word itself regardless of the given sentence to generate candidate substitutions, which will inevitably produce a large number of spurious candidates. We present a simple BERT-based LS approach that makes use of the pre-trained unsupervised deep bidirectional representations BERT. We feed the given sentence masked the complex word into the masking language model of BERT to generate candidate substitutions. By considering the whole sentence, the generated simpler alternatives are easier to hold cohesion and coherence of a sentence. Experimental results show that our approach obtains obvious improvement on standard LS benchmark.
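A minimal sketch of the candidate-generation step described here (mask the complex word and let BERT's masked-LM head propose in-context replacements) is given below. It is illustrative only and is not the authors' code; the paper applies further ranking on top of the raw candidates.
```python
# Illustrative sketch of BERT-based substitution candidates (not the paper's code).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def candidate_substitutions(sentence: str, complex_word: str, top_k: int = 10):
    # replace the complex word with [MASK] and let the masked LM score fillers
    masked = sentence.replace(complex_word, fill_mask.tokenizer.mask_token, 1)
    predictions = fill_mask(masked, top_k=top_k)
    return [p["token_str"] for p in predictions
            if p["token_str"].lower() != complex_word.lower()]

print(candidate_substitutions("The cat perched on the mat.", "perched"))
```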
##### Abstract (translated by Google)
##### URL
[http://arxiv.org/abs/1907.06226](http://arxiv.org/abs/1907.06226)
##### PDF
[http://arxiv.org/pdf/1907.06226](http://arxiv.org/pdf/1907.06226)
| 50.269231 | 848 | 0.792655 | eng_Latn | 0.973964 |
3ed229da87d1f5b8fdb5a386e7b6c3e1220f1f5f | 212 | md | Markdown | _posts/2020/12/2020-12-21-html-headings.md | toyjhlee/toyjhlee.github.com | afe4afde620adb5a9417c5df9e7c132da03333b8 | [
"MIT"
] | 2 | 2021-04-01T00:09:10.000Z | 2021-08-04T07:58:11.000Z | _posts/2020/12/2020-12-21-html-headings.md | toyjhlee/toyjhlee.github.com | afe4afde620adb5a9417c5df9e7c132da03333b8 | [
"MIT"
] | null | null | null | _posts/2020/12/2020-12-21-html-headings.md | toyjhlee/toyjhlee.github.com | afe4afde620adb5a9417c5df9e7c132da03333b8 | [
"MIT"
] | null | null | null | ---
title: 'html headings'
tags: ['headings', 'h tag']
---
[Headings w3 tutorials](https://www.w3.org/WAI/tutorials/page-structure/headings/)
[headings h3 markup](https://www.w3.org/MarkUp/html3/headings.html)
| 23.555556 | 82 | 0.707547 | kor_Hang | 0.299848 |
3ed27e3ffdde43467e5f2b4eaab4b35378e24afa | 144 | md | Markdown | zapLocation/README.md | v-ststee/fuzzy-train | 096b2ebca8ba537f13e37d7d87b72ad2d6fc0dfc | [
"MIT"
] | 2 | 2019-05-30T19:58:53.000Z | 2019-11-27T06:51:36.000Z | zapLocation/README.md | v-ststee/fuzzy-train | 096b2ebca8ba537f13e37d7d87b72ad2d6fc0dfc | [
"MIT"
] | 2 | 2020-06-18T14:57:49.000Z | 2021-05-08T12:23:17.000Z | zapLocation/README.md | v-ststee/fuzzy-train | 096b2ebca8ba537f13e37d7d87b72ad2d6fc0dfc | [
"MIT"
] | 2 | 2019-11-24T01:09:22.000Z | 2019-11-25T22:51:23.000Z | # ZAP Location
For this area, please install the OWASP ZAP project as a JAR file from
here [https://github.com/zaproxy/zaproxy/wiki/Downloads]
| 28.8 | 70 | 0.777778 | eng_Latn | 0.885331 |
3ed2a00a58eb96ee38afc2c8a10747fde573e2bb | 98 | md | Markdown | _posts/0000-01-02-xelizondo.md | xelizondo/github-slideshow | c2afe1b1b36217d850026d0e96200b3188be5f50 | [
"MIT"
] | null | null | null | _posts/0000-01-02-xelizondo.md | xelizondo/github-slideshow | c2afe1b1b36217d850026d0e96200b3188be5f50 | [
"MIT"
] | 4 | 2021-03-18T13:55:42.000Z | 2021-03-18T14:33:59.000Z | _posts/0000-01-02-xelizondo.md | xelizondo/github-slideshow | c2afe1b1b36217d850026d0e96200b3188be5f50 | [
"MIT"
] | null | null | null | ---
layout: slide
title: "Welcome to our second slide!"
---
Xavier
Use the left arrow to go back!
| 14 | 37 | 0.693878 | eng_Latn | 0.996343 |
3ed316402d17f1fac67bcb1f0226ec240167ca70 | 5,811 | md | Markdown | src/pages/blog/2020-04-13-covid-19-how-to-stay-informed-during-the-pandemic.md | larpo1/lavanda-website | e96c673b3a69e7ab77693396e3b4706b8874c2c4 | [
"MIT"
] | 1 | 2020-01-22T12:38:47.000Z | 2020-01-22T12:38:47.000Z | src/pages/blog/2020-04-13-covid-19-how-to-stay-informed-during-the-pandemic.md | larpo1/lavanda-website | e96c673b3a69e7ab77693396e3b4706b8874c2c4 | [
"MIT"
] | 5 | 2020-08-14T09:28:23.000Z | 2020-08-14T14:19:33.000Z | src/pages/blog/2020-04-13-covid-19-how-to-stay-informed-during-the-pandemic.md | larpo1/lavanda-website | e96c673b3a69e7ab77693396e3b4706b8874c2c4 | [
"MIT"
] | 1 | 2020-09-09T09:35:56.000Z | 2020-09-09T09:35:56.000Z | ---
templateKey: blog-post
title: "COVID-19: How to stay informed during the pandemic"
date: "2020-04-13T13:59:48.482Z"
featuredimage: /img/lavanda-2-.jpeg
postContent: >-
<p>There is currently an abundance of information around the impact of
COVID-19 on the short-term rental industry, and it is easy to feel
overwhelmed.</p>
<p>Despite initial predictions of 2020 being a boom year, the industry finds itself struggling to cope in the face of a "black swan" event, prompted by global lockdowns and travel restrictions.</p>
<p>As a short-term rental property manager, it's now more critical than ever that you stay well informed about industry trends, in order to best optimize your business during this difficult time.</p>
<p><strong>What Do I Need to Stay Informed About?</strong></p>
<p><strong>OTA Changes:</strong> What changes are happening to the channels I am operating on? Will I be impacted by cancellation and refund policies, or the ability to list my property?</p>
<p><strong>Survival Options:</strong> Right now, what options do I have to increase my chances of survival? For instance reducing costs, optimising my operations and/or pivoting to diversify my revenue streams temporarily.</p>
<p><strong>Support Available:</strong> What support is available for my business? Can I apply for any financial aid or relief packages from local government?</p>
  <p><strong>Market Projections:</strong> What data is there to indicate when the market might recover, and what that recovery might look like? What can I expect when this happens? How can I prepare for this?</p>
<p><strong>What Sources Should I Look To?</strong></p>
<p><strong>OTAs:</strong> For any news regarding specific OTAs (Airbnb, Booking.com, HomeAway, etc.), we recommend you acquire all information directly from them. This is to ensure you get the most accurate and up to date information about any changes and how this will affect your business. For instance, Airbnb have a help centre where hosts can enquire and seek support directly.</p>
  <p><strong>Industry Data and Projections:</strong> Sources like AirDNA offer reliable and objective data that is directly relevant to short-term rental managers. Although you may have to pay for some of their data/reports, they've released a number of free reports focused on COVID-19. Their most recent one is available <a href="https://www.airdna.co/blog/coronavirus-impact-on-global-short-term-rental-markets" target="_blank" rel="noopener">here</a>.</p>
  <p><strong>Industry Forums:</strong> A key source of news and best practice is in specific forums and communities. We recommend The Professional Host Alliance, who have a highly-vetted community of short-term rental property managers committed to informing and shaping the future of the industry. They hold webinars sharing relevant information and thought leadership, allow members to seek specific operational advice and post topics for discussion. They also post a weekly digest of industry news and its specific impact on short-term rental property managers.</p>
<p><strong>Search Engines:</strong> You may wish to set up Google Alerts for keywords like ‘Airbnb’ or ‘Short-Term Rentals’. This will essentially send you a regular email of the new reports to do with the key-words you’ve set up. Be sure to filter through the results that are sent to you, to find those from the most reliable sources.</p>
<p>It’s easy to get snowed under with the wealth of information available right now, so most importantly of all, only act upon information that is relevant to your business. The world is in survival mode, and businesses will all choose to cope with this differently. Be sure to make the decisions that are right fit for your business and the aspirations that you have within this industry.</p>
<p>We will be releasing more information about your options for survival during this pandemic - from financial optimisation, through to optimising the infrastructure that underpins your business.</p>
<p>As experienced property managers ourselves, we know just how tough it is to run a complex human operation, and how sensitive underlying profitability is to external market changes. That is why we are here to help you during this time, even if it’s just for a chat. We want to help, so please don’t hesitate to <a href="/contact">reach out to us</a>. We're all in this together.</p>
<h2>About Lavanda</h2>
<p>Lavanda is a next generation property management system (PMS) for urban and rural short-term rental operators. Our SaaS platform is designed to unlock scale and profitability, whilst accelerating growth through industry partnerships. We're backed by leading venture capital investors, and have so far invested $10m+ into short-term rental technology and innovation.</p>
<p>Our award-winning technology is different because it has been honed through our first-hand experience of managing a short-term rental portfolio at scale. Operational efficiency is what we strive for, so we set about creating the missing toolkit. We're here to change your game.</p>
tags:
- airbnb
- covid-19
- corona
- coronavirus
- short term rentals
- travel
- travel industry
- property managers
- booking.com
- homeaway
- airdna
- cottages
- sykes
blogCtaTitle: "Book a discover call"
blogCtaText: "Get one step ahead, book a discovery call to see how we can help turbocharge your business."
blogCtaButtonText: "Talk To Us"
blogCtaButtonTarget: "/book-a-demo"
featuredpost: true
metaTitle: "COVID-19: How to stay informed during the pandemic."
description: There is currently an abundance of information around the impact of
COVID-19 on the short-term rental industry, and it is easy to feel
overwhelmed.
---
| 81.84507 | 566 | 0.775082 | eng_Latn | 0.998916 |
3ed4cb1e0d099ba702ed3c08cf758e5fca1d853c | 14,430 | md | Markdown | TED/Titles_starting_P_to_Z/Peter_Singer_The_why_and_how_of_effective_altruism.md | gt-big-data/TEDVis | 328a4c62e3a05c943b2a303817601aebf198c1aa | [
"MIT"
] | 91 | 2018-01-24T12:54:48.000Z | 2022-03-07T21:03:43.000Z | cleaned_ted_data/Titles_starting_P_to_Z/Peter_Singer_The_why_and_how_of_effective_altruism.md | nadaataiyab/TED-Talks-Nutrition-NLP | 4d7e8c2155e12cb34ab8da993dee0700a6775ff9 | [
"MIT"
] | null | null | null | cleaned_ted_data/Titles_starting_P_to_Z/Peter_Singer_The_why_and_how_of_effective_altruism.md | nadaataiyab/TED-Talks-Nutrition-NLP | 4d7e8c2155e12cb34ab8da993dee0700a6775ff9 | [
"MIT"
] | 18 | 2018-01-24T13:18:51.000Z | 2022-01-09T01:06:02.000Z |
Translator: Joseph Geni
Reviewer: Morton Bast
There's something that I'd like you to see.
(Video) Reporter: It's a story that's deeply unsettled
millions in China:
footage of a two-year-old girl
hit by a van and left bleeding in the street by passersby,
footage too graphic to be shown.
The entire accident is caught on camera.
The driver pauses after hitting the child,
his back wheels seen resting on her for over a second.
Within two minutes, three people pass two-year-old Wang Yue by.
The first walks around the badly injured toddler completely.
Others look at her before moving off.
Peter Singer: There were other people
who walked past Wang Yue,
and a second van ran over her legs
before a street cleaner raised the alarm.
She was rushed to hospital, but it was too late. She died.
I wonder how many of you, looking at that,
said to yourselves just now, "I would not have done that.
I would have stopped to help."
Raise your hands if that thought occurred to you.
As I thought, that's most of you.
And I believe you. I'm sure you're right.
But before you give yourself too much credit,
look at this.
UNICEF reports that in 2011,
6.9 million children under five
died from preventable, poverty-related diseases.
UNICEF thinks that that's good news
because the figure has been steadily coming down
from 12 million in 1990. That is good.
But still, 6.9 million
is 19,000 children dying every day.
Does it really matter
that we're not walking past them in the street?
Does it really matter that they're far away?
I don't think it does make a morally relevant difference.
The fact that they're not right in front of us,
the fact, of course, that they're of a different nationality
or race, none of that seems morally relevant to me.
What is really important is,
can we reduce that death toll? Can we save
some of those 19,000 children dying every day?
And the answer is, yes we can.
Each of us spends money
on things that we do not really need.
You can think what your own habit is,
whether it's a new car, a vacation
or just something like buying bottled water
when the water that comes out of the tap
is perfectly safe to drink.
You could take the money you're spending
on those unnecessary things
and give it to this organization,
the Against Malaria Foundation,
which would take the money you had given
and use it to buy nets like this one
to protect children like this one,
and we know reliably that if we provide nets,
they're used, and they reduce the number of children
dying from malaria,
just one of the many preventable diseases
that are responsible for some of those 19,000 children
dying every day.
Fortunately, more and more people
are understanding this idea,
and the result is a growing movement:
effective altruism.
It's important because it combines both the heart and the head.
The heart, of course, you felt.
You felt the empathy for that child.
But it's really important to use the head as well
to make sure that what you do is effective and well-directed,
and not only that, but also I think reason helps us
to understand that other people, wherever they are,
are like us, that they can suffer as we can,
that parents grieve for the deaths of their children,
as we do,
and that just as our lives and our well-being matter to us,
it matters just as much to all of these people.
So I think reason is not just some neutral tool
to help you get whatever you want.
It does help us to put perspective on our situation.
And I think that's why
many of the most significant people in effective altruism
have been people who have had backgrounds
in philosophy or economics or math.
And that might seem surprising,
because a lot of people think,
"Philosophy is remote from the real world;
economics, we're told, just makes us more selfish,
and we know that math is for nerds."
But in fact it does make a difference,
and in fact there's one particular nerd
who has been a particularly effective altruist
because he got this.
This is the website of the Bill & Melinda Gates Foundation,
and if you look at the words on the top right-hand side,
it says, "All lives have equal value."
That's the understanding,
the rational understanding of our situation in the world
that has led to these people
being the most effective altruists in history,
Bill and Melinda Gates and Warren Buffett.
(Applause)
No one, not Andrew Carnegie, not John D. Rockefeller,
has ever given as much to charity
as each one of these three,
and they have used their intelligence
to make sure that it is highly effective.
According to one estimate, the Gates Foundation
has already saved 5.8 million lives
and many millions more, people, getting diseases
that would have made them very sick,
even if eventually they survived.
Over the coming years, undoubtably the Gates Foundation
is going to give a lot more,
is going to save a lot more lives.
Well, you might say, that's fine if you're a billionaire,
you can have that kind of impact.
But if I'm not, what can I do?
So I'm going to look at four questions that people ask
that maybe stand in the way of them giving.
They worry how much of a difference they can make.
But you don't have to be a billionaire.
This is Toby Ord. He's a research fellow in philosophy
at the University of Oxford.
He became an effective altruist when he calculated
that with the money that he was likely to earn
throughout his career, an academic career,
he could give enough to cure 80,000 people of blindness
in developing countries
and still have enough left
for a perfectly adequate standard of living.
So Toby founded an organization
called Giving What We Can to spread this information,
to unite people who want to share some of their income,
and to ask people to pledge to give 10 percent
of what they earn over their lifetime
to fighting global poverty.
Toby himself does better than that.
He's pledged to live on 18,000 pounds a year --
that's less than 30,000 dollars --
and to give the rest to those organizations.
And yes, Toby is married and he does have a mortgage.
This is a couple at a later stage of life,
Charlie Bresler and Diana Schott,
who, when they were young, when they met,
were activists against the Vietnam War,
fought for social justice,
and then moved into careers, as most people do,
didn't really do anything very active about those values,
although they didn't abandon them.
And then, as they got to the age at which many people
start to think of retirement, they returned to them,
and they've decided to cut back on their spending,
to live modestly, and to give both money and time
to helping to fight global poverty.
Now, mentioning time might lead you to think,
"Well, should I abandon my career and put all of my time
into saving some of these 19,000 lives
that are lost every day?"
One person who's thought quite a bit about this issue
of how you can have a career that will have
the biggest impact for good in the world is Will Crouch.
He's a graduate student in philosophy,
and he's set up a website called 80,000 Hours,
the number of hours he estimates
most people spend on their career,
to advise people on how to have the best,
most effective career.
But you might be surprised to know
that one of the careers that he encourages people to consider,
if they have the right abilities and character,
is to go into banking or finance.
Why? Because if you earn a lot of money,
you can give away a lot of money,
and if you're successful in that career,
you could give enough to an aid organization
so that it could employ, let's say, five aid workers
in developing countries, and each one of them
would probably do about as much good
as you would have done.
So you can quintuple the impact
by leading that kind of career.
Here's one young man who's taken this advice.
His name is Matt Weiger.
He was a student at Princeton in philosophy and math,
actually won the prize for the best undergraduate philosophy thesis
last year when he graduated.
But he's gone into finance in New York.
He's already earning enough
so that he's giving a six-figure sum to effective charities
and still leaving himself with enough to live on.
Matt has also helped me to set up an organization
that I'm working with that has the name taken
from the title of a book I wrote,
"The Life You Can Save,"
which is trying to change our culture
so that more people think that
if we're going to live an ethical life,
it's not enough just to follow the thou-shalt-nots
and not cheat, steal, maim, kill,
but that if we have enough, we have to share some of that
with people who have so little.
And the organization draws together people
of different generations,
like Holly Morgan, who's an undergraduate,
who's pledged to give 10 percent
of the little amount that she has,
and on the right, Ada Wan,
who has worked directly for the poor, but has now
gone to Yale to do an MBA to have more to give.
Many people will think, though,
that charities aren't really all that effective.
So let's talk about effectiveness.
Toby Ord is very concerned about this,
and he's calculated that some charities
are hundreds or even thousands of times
more effective than others,
so it's very important to find the effective ones.
Take, for example, providing a guide dog for a blind person.
That's a good thing to do, right?
Well, right, it is a good thing to do,
but you have to think what else you could do with the resources.
It costs about 40,000 dollars to train a guide dog
and train the recipient so that the guide dog
can be an effective help to a blind person.
It costs somewhere between 20 and 50 dollars
to cure a blind person in a developing country
if they have trachoma.
So you do the sums, and you get something like that.
You could provide one guide dog
for one blind American,
or you could cure between 400
and 2,000 people of blindness.
I think it's clear what's the better thing to do.
But if you want to look for effective charities,
this is a good website to go to.
GiveWell exists to really assess the impact of charities,
not just whether they're well-run,
and it's screened hundreds of charities
and currently is recommending only three,
of which the Against Malaria Foundation is number one.
So it's very tough. If you want to look for other recommendations,
thelifeyoucansave.com and Giving What We Can
both have a somewhat broader list,
but you can find effective organizations,
and not just in the area of saving lives from the poor.
I'm pleased to say that there is now also a website
looking at effective animal organizations.
That's another cause that I've been concerned about
all my life, the immense amount of suffering
that humans inflict
on literally tens of billions of animals every year.
So if you want to look for effective organizations
to reduce that suffering,
you can go to Effective Animal Activism.
And some effective altruists think it's very important
to make sure that our species survives at all.
So they're looking at ways to reduce the risk of extinction.
Here's one risk of extinction that we all became aware of
recently, when an asteroid passed close to our planet.
Possibly research could help us not only to predict
the path of asteroids that might collide with us,
but actually to deflect them.
So some people think that would be a good thing to give to.
There's many possibilities.
My final question is,
some people will think it's a burden to give.
I don't really believe it is.
I've enjoyed giving all of my life
since I was a graduate student.
It's been something fulfilling to me.
Charlie Bresler said to me that he's not an altruist.
He thinks that the life he's saving is his own.
And Holly Morgan told me that she used to battle depression
until she got involved with effective altruism,
and now is one of the happiest people she knows.
I think one of the reasons for this
is that being an effective altruist helps to overcome
what I call the Sisyphus problem.
Here's Sisyphus as portrayed by Titian,
condemned by the gods to push a huge boulder
up to the top of the hill.
Just as he gets there, the effort becomes too much,
the boulder escapes, rolls all the way down the hill,
he has to trudge back down to push it up again,
and the same thing happens again and again
for all eternity.
Does that remind you of a consumer lifestyle,
where you work hard to get money,
you spend that money on consumer goods
which you hope you'll enjoy using?
But then the money's gone, you have to work hard
to get more, spend more, and to maintain
the same level of happiness, it's kind of a hedonic treadmill.
You never get off, and you never really feel satisfied.
Becoming an effective altruist gives you
that meaning and fulfillment.
It enables you to have a solid basis for self-esteem
on which you can feel your life was really worth living.
I'm going to conclude by telling you
about an email that I received
while I was writing this talk just a month or so ago.
It's from a man named Chris Croy, who I'd never heard of.
This is a picture of him showing him recovering from surgery.
Why was he recovering from surgery?
The email began, "Last Tuesday,
I anonymously donated my right kidney to a stranger.
That started a kidney chain
which enabled four people to receive kidneys."
There's about 100 people each year in the U.S.
and more in other countries who do that.
I was pleased to read it. Chris went on to say
that he'd been influenced by my writings in what he did.
Well, I have to admit, I'm also somewhat embarrassed by that,
because I still have two kidneys.
But Chris went on to say that he didn't think
that what he'd done was all that amazing,
because he calculated that the number of life-years
that he had added to people, the extension of life,
was about the same that you could achieve
if you gave 5,000 dollars to the Against Malaria Foundation.
And that did make me feel a little bit better,
because I have given more than 5,000 dollars
to the Against Malaria Foundation
and to various other effective charities.
So if you're feeling bad
because you still have two kidneys as well,
there's a way for you to get off the hook.
Thank you.
(Applause)
| 41.585014 | 70 | 0.780042 | eng_Latn | 0.999968 |
3ed4e25e44220233fea0c5aa7081f5719a382f28 | 776 | md | Markdown | src/pages/photography/qingjing-farms/qingjing-farms.md | derrxb/derrxb | e4a6c3608e8334941d1b0684faf4a9cd449bc079 | [
"MIT"
] | 1 | 2019-07-14T03:08:09.000Z | 2019-07-14T03:08:09.000Z | src/pages/photography/qingjing-farms/qingjing-farms.md | derrxb/derrxb | e4a6c3608e8334941d1b0684faf4a9cd449bc079 | [
"MIT"
] | 2 | 2020-07-26T07:57:20.000Z | 2021-11-22T16:25:32.000Z | src/pages/photography/qingjing-farms/qingjing-farms.md | derrxb/derrxb | e4a6c3608e8334941d1b0684faf4a9cd449bc079 | [
"MIT"
] | null | null | null | ---
date: 2020-06-15
location: Qingjing Farm, Nantou County
path: /nantou-county-taiwan
template: photography-session
title: Exploring Qingjing Farm in Nantou County, Taiwan
type: photography-session
emoji: 🦄🐴
previewImage: ./images/jingjing-farms-8.jpg
heroImage: ./images/jingjing-farms-8.jpg
---
Qingjing Farm in Nantou County is one of my favorite places I've visited in Taiwan so far. The highlights of the
farm are definitely the amazing cloud formations, the sheep, and the skywalk. Unfortunately, we got there too late,
so we were unable to do the skywalk. I definitely want to return to the farm to get the full experience.
This story contains images I took of my girlfriend [Cindy](https://www.instagram.com/cindyyuen__/) and the scenery around
Nantou County.
3ed50c7286679d55da979965bc35f484c0409ce0 | 293 | md | Markdown | README.md | kkr0kk/Magnet-Stack-type-cessna-172 | 1369f6c925508d16a7e5ebad2decbc97044d4854 | [
"MIT"
] | 3 | 2021-06-15T09:32:52.000Z | 2022-03-26T15:30:49.000Z | README.md | kkr0kk/Magnet-Stack-type-cessna-172 | 1369f6c925508d16a7e5ebad2decbc97044d4854 | [
"MIT"
] | null | null | null | README.md | kkr0kk/Magnet-Stack-type-cessna-172 | 1369f6c925508d16a7e5ebad2decbc97044d4854 | [
"MIT"
] | null | null | null | # Magnet-Stack-type-cessna-172
Magnet Stacking for Cessna 172
<img src="https://github.com/kkr0kk/Magnet-Stack-type-cessna-172/blob/main/images/magnet%20stack%203D.png?raw=true" />
<img src="https://github.com/kkr0kk/Magnet-Stack-type-cessna-172/blob/main/images/full%20stack.png?raw=true" />
| 58.6 | 118 | 0.771331 | kor_Hang | 0.213778 |
3ed81b3d263d804822c05bcadc7cee6a16e98ed4 | 2,082 | md | Markdown | _posts/2019-08-02-Download-great-goya-etchings-the-proverbs-the-tauromaquia-and-the-bulls-of-bordeaux.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-08-02-Download-great-goya-etchings-the-proverbs-the-tauromaquia-and-the-bulls-of-bordeaux.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | _posts/2019-08-02-Download-great-goya-etchings-the-proverbs-the-tauromaquia-and-the-bulls-of-bordeaux.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Great goya etchings the proverbs the tauromaquia and the bulls of bordeaux book
The mourners streamed across the grassy hills and among the headstones for the longest time, till we were weary and exhausted and he became unable to return an answer. windows had been sealed with strapping tape. " Weinstein time to reply to that Weinstein had been trapped by his own seniority into commanding the slashed-wrist suicide near Western and Wilshire, "How shall we do?" And El Muradi answered, she sent to me. 214 opportunity for enrichment presented itself. No one had the whole truth. to spare me?" closet and not been put back. "Well, as sick he lay, when the dog realized that Mary hadn't thrown the list, it sounded false, he was dripping with perspiration? It is a long, under pretence that she had a wedding toward in her own house, Great goya etchings the proverbs the tauromaquia and the bulls of bordeaux look gross. We'll have to fit into this environment where we can years of bog distillations. His name for Edom was E-bomb. The shadows were darker here and everything straddles him, to the oath that thou swor'st thou wast true, young woman. Cass says, but now with great 13! For HI do lose myself, okay?" [Illustration: OSTYAK TENT. She should have grown drowsy, as if I did not exist, which he brought with him in a spell-sealed box whenever he traveled, using an arm of a chair to help push herself to her feet From where her hand touched. To Tell the Truth at seven-thirty, she listened to the leaves when the wind rustled them or stormed in the crowns of the trees; she watched the shadows play, of course, with our sensitivities at great goya etchings the proverbs the tauromaquia and the bulls of bordeaux might have been composing an official report and closing out the file without the building. He wished he were home watching Willy Marx- or anywhere but Partyland. 67). Nobel, we fear lest he be saved and we fall [into perdition]. ah. Hal Bregg. But it wasn't his handsomeness that attracted me. | 231.333333 | 1,934 | 0.788184 | eng_Latn | 0.999942 |
3ed831443aa9f8943db22261fcc8bb6ed7e6d5cc | 2,305 | md | Markdown | treebanks/tpn_tudet/tpn_tudet-dep-root.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 204 | 2015-01-20T16:36:39.000Z | 2022-03-28T00:49:51.000Z | treebanks/tpn_tudet/tpn_tudet-dep-root.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 654 | 2015-01-02T17:06:29.000Z | 2022-03-31T18:23:34.000Z | treebanks/tpn_tudet/tpn_tudet-dep-root.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 200 | 2015-01-16T22:07:02.000Z | 2022-03-25T11:35:28.000Z | ---
layout: base
title: 'Statistics of root in UD_Tupinamba-TuDeT'
udver: '2'
---
## Treebank Statistics: UD_Tupinamba-TuDeT: Relations: `root`
This relation is universal.
218 nodes (14%) are attached to their parents as `root`.
218 instances of `root` (100%) are left-to-right (parent precedes child).
Average distance between parent and child is 3.1697247706422.
The following 7 pairs of parts of speech are connected with `root`: -<tt><a href="tpn_tudet-pos-VERB.html">VERB</a></tt> (102; 47% instances), -<tt><a href="tpn_tudet-pos-NOUN.html">NOUN</a></tt> (101; 46% instances), -<tt><a href="tpn_tudet-pos-ADV.html">ADV</a></tt> (9; 4% instances), -<tt><a href="tpn_tudet-pos-PRON.html">PRON</a></tt> (2; 1% instances), -<tt><a href="tpn_tudet-pos-PROPN.html">PROPN</a></tt> (2; 1% instances), -<tt><a href="tpn_tudet-pos-ADP.html">ADP</a></tt> (1; 0% instances), -<tt><a href="tpn_tudet-pos-SCONJ.html">SCONJ</a></tt> (1; 0% instances).
~~~ conllu
# visual-style 7 bgColor:blue
# visual-style 7 fgColor:white
# visual-style 0 bgColor:blue
# visual-style 0 fgColor:white
# visual-style 0 7 root color:blue
1 SãoSebastião SãoSebastião PROPN propn _ 2 nmod _ _
2 ʔara ʔar NOUN n Case=Ref 7 obl _ _
3 , , PUNCT punct _ 2 punct _ _
4 seʔõawera eʔõ NOUN n Case=Ref|Rel=NCont|Tense=Past 2 appos _ _
5 , , PUNCT punct _ 4 punct _ _
6 Cristãos Cristão NOUN n _ 7 obl _ _
7 ojmoeté eté VERB v Person=33|Voice=Cau 0 root _ _
8 ojemotupana tupa VERB v Person=3|Reflex=Yes|Voice=Cau 7 advcl _ tupã-ʔar-a
~~~
~~~ conllu
# visual-style 2 bgColor:blue
# visual-style 2 fgColor:white
# visual-style 0 bgColor:blue
# visual-style 0 fgColor:white
# visual-style 0 2 root color:blue
1 Ã ã PART pcl _ 2 discourse _ _
2 tekó ekó NOUN n Rel=Hum 0 root _ _
3 aʔereme aʔereme ADV adv _ 2 advmod _ _
4 moreroka erok NOUN n Case=Ref|Voice=Cau 2 xcomp _ _
5 kwe kwe DET dem _ 7 nmod _ _
6 βɨá βɨá PART _ _ 8 nsubj _ _
7 Iesus Iesus PROPN propn _ 8 obj _ _
8 nongi nong NOUN n OblTop=Yes 2 parataxis _ _
9 seramo er NOUN n Case=Tra|Rel=NCont 8 obl _ _
~~~
~~~ conllu
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 0 bgColor:blue
# visual-style 0 fgColor:white
# visual-style 0 1 root color:blue
1 Marãnamope Marãnamope ADV adv Int=Yes 0 root _ _
2 ? ? PUNCT punct _ 1 punct _ _
~~~
| 33.897059 | 577 | 0.708894 | kor_Hang | 0.332387 |
3ed89995cc5ab6c7cf20a3fa50d2a3a4622ec56f | 91 | md | Markdown | README.md | tim-hong/stylesheets | 4296cdf57e4d887b48859d1238b7440aa4bdad3b | [
"Apache-2.0"
] | null | null | null | README.md | tim-hong/stylesheets | 4296cdf57e4d887b48859d1238b7440aa4bdad3b | [
"Apache-2.0"
] | null | null | null | README.md | tim-hong/stylesheets | 4296cdf57e4d887b48859d1238b7440aa4bdad3b | [
"Apache-2.0"
] | null | null | null | # stylesheets
Custom stylesheets for the web for use with an add-on like Stylish or Stylus
| 30.333333 | 76 | 0.802198 | eng_Latn | 0.998922 |
3ed971e9bfa6554f3068fcae0069a4cb32d3e88f | 219 | md | Markdown | DOC/SageTutorial/Markdown/CH03/03.3_Paste_Ignores_Prompts.md | Creatoria/sagemath-doc-zh | 72899677020d45a67c14b7a649bcbb079ad7426f | [
"MIT"
] | 3 | 2020-06-27T08:26:49.000Z | 2021-07-29T14:01:16.000Z | DOC/SageTutorial/Markdown/CH03/03.3_Paste_Ignores_Prompts.md | Creatoria/sagemath-doc-zh | 72899677020d45a67c14b7a649bcbb079ad7426f | [
"MIT"
] | null | null | null | DOC/SageTutorial/Markdown/CH03/03.3_Paste_Ignores_Prompts.md | Creatoria/sagemath-doc-zh | 72899677020d45a67c14b7a649bcbb079ad7426f | [
"MIT"
] | 1 | 2021-07-06T21:41:07.000Z | 2021-07-06T21:41:07.000Z | # 粘贴忽略提示符
假设你在读一个Sage或Python的会话,并想把它们复制到Sage中。但是提示符`>>>`或`sage:`很讨厌。实际上你可以复制并粘贴一个例子到Sage中,包含提示也没关系。或者说,Sage的分词器在提交给Python之前默认跳过`>>>`或`sage:`提示符。
```py
sage: 2^10
1024
sage: sage: sage: 2^10
1024
sage: >>> 2^10
1024
``` | 19.909091 | 134 | 0.73516 | yue_Hant | 0.33413 |
3eda1aaef5ba3996397acdf1ad25e2e47d568c2f | 614 | md | Markdown | README.md | mike10004/virtual-har-server | 231577070b9880d642d36ccc4fcc24c351f42e8f | [
"Apache-2.0"
] | null | null | null | README.md | mike10004/virtual-har-server | 231577070b9880d642d36ccc4fcc24c351f42e8f | [
"Apache-2.0"
] | null | null | null | README.md | mike10004/virtual-har-server | 231577070b9880d642d36ccc4fcc24c351f42e8f | [
"Apache-2.0"
] | null | null | null | # virtual-har-server
Virtual Har Server *was* an HTTP proxy that served responses from a HAR file.
It was one of multiple engines that could be used by the [har-replay][har-replay]
library to serve responses. Now it is the only engine that can be used by that
library, and as such it is included as a module in that parent project. The
source code has been removed from the master branch here to avoid confusion.
Future development of the `virtual-har-server` code will happen within **har-replay**.
The legacy codebase is available under the `legacy` tag.
[har-replay]: https://github.com/mike10004/har-replay
| 51.166667 | 86 | 0.776873 | eng_Latn | 0.999717 |
6788249f45fb0267ac0118ee3de41b262402e65f | 19 | md | Markdown | README.md | Jfftfghjjvg/itachi- | 8a6231a72730227d21b63903c8ebaf93bfd436ae | [
"Apache-2.0"
] | null | null | null | README.md | Jfftfghjjvg/itachi- | 8a6231a72730227d21b63903c8ebaf93bfd436ae | [
"Apache-2.0"
] | null | null | null | README.md | Jfftfghjjvg/itachi- | 8a6231a72730227d21b63903c8ebaf93bfd436ae | [
"Apache-2.0"
] | null | null | null | # itachi-
Abdellah
| 6.333333 | 9 | 0.736842 | deu_Latn | 0.392846 |
678939a0968f4d015ecdf7123ce4f218c48727ec | 1,643 | md | Markdown | media/photoshop_instructions.md | UnicycleDumpTruck/VetRFID | a679bf231cda1011692c92c476fda7c540a12687 | [
"MIT"
] | null | null | null | media/photoshop_instructions.md | UnicycleDumpTruck/VetRFID | a679bf231cda1011692c92c476fda7c540a12687 | [
"MIT"
] | null | null | null | media/photoshop_instructions.md | UnicycleDumpTruck/VetRFID | a679bf231cda1011692c92c476fda7c540a12687 | [
"MIT"
] | null | null | null | # Image Processing Instructions
For each still image:
1. Export image from DICOM viewer as jpg or png
2. Open exported image in Photoshop
3. Covert layer to Smart Object
4. Copy Smart Object Layer
5. Paste Smart Object Layer into Main Photoshop file
6. Save Photoshop file
7. Turn on the topmost "Label" layer to ensure clearance
8. Select the move tool (4-way arrow icon)
9. Check the box on the top toolbar for "Show Transformation Controls"
10. Scale, rotate, and move the image, click checkmark to accept changes
11. From the "Create a new fill or adjustment layer" tool on the layers
palette toolbar, choose "Curves"
12. Use a preset, manual points, and/or set the whitepoint and blackpoint
13. If touchup is needed, create a new layer above the image, and use
0% hardness brushes 50-300px to paint black over unwanted features.
14. Alt-drag (copy) a new black background layer from the original
black background layer at the bottom.
15. Select all the layers for a single image, then click the folder
icon on the layers palette toolbar to put the layers into a new group
16. Double-click the group and rename it with a unique name and file
extension, such as "lizard_iguana_04.png"
17. Collapse the layer group and hide it
18. Save
In the Main Photoshop file, as long as the menu item "File:Generate:Image Assets"
is checked, Photoshop will automatically export a png/jpg for every layer or layer
group named ending in ".png" or ".jpg". For the "lizard.psd" file, these assets
are put into a neighboring "lizard-assets" folder. These can be copied into the
"media/all" directory of the VetRFID directory.
| 48.323529 | 82 | 0.766281 | eng_Latn | 0.993686 |
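The last copy step above lends itself to a small helper script. The sketch below is illustrative only and is not part of the documented workflow; the folder names simply mirror the "lizard-assets" and "media/all" examples mentioned above and would need to be adjusted for other .psd files.

```python
#!/usr/bin/env python3
"""Illustrative helper: copy Photoshop-generated image assets into media/all."""
import shutil
from pathlib import Path

ASSETS_DIR = Path("lizard-assets")  # folder produced by File:Generate:Image Assets
DEST_DIR = Path("media/all")        # VetRFID media directory

def copy_assets() -> None:
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    for pattern in ("*.png", "*.jpg"):
        for image in ASSETS_DIR.glob(pattern):
            shutil.copy2(image, DEST_DIR / image.name)  # copy2 preserves timestamps
            print(f"Copied {image.name}")

if __name__ == "__main__":
    copy_assets()
```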
678984ccfb50ff7229f4cb4dbaf0059d5ca216c1 | 156 | md | Markdown | README.md | MarkReedZ/mrmarkdown | 7dae1445e919836bf55c585ae8ff21b71e44edcd | [
"MIT"
] | 1 | 2020-07-26T23:23:09.000Z | 2020-07-26T23:23:09.000Z | README.md | MarkReedZ/mrmarkdown | 7dae1445e919836bf55c585ae8ff21b71e44edcd | [
"MIT"
] | null | null | null | README.md | MarkReedZ/mrmarkdown | 7dae1445e919836bf55c585ae8ff21b71e44edcd | [
"MIT"
] | null | null | null | # MrFastMark
MrFastMark is a Python renderer for FastMark, a simpler, faster version of CommonMark. The goal of the project is to optimize rendering time.
| 31.2 | 140 | 0.801282 | eng_Latn | 0.990982 |
678a2dd60d8fc8c79129d8d04a822c4ace7a53b5 | 79 | md | Markdown | README.md | miguelzetina/-book-borrowing-system | c8e356bb634ba3bc35116a4c8e7de6e2dd642e53 | [
"MIT"
] | 5 | 2019-02-02T05:47:19.000Z | 2021-07-06T02:19:16.000Z | README.md | miguelzetina/-book-borrowing-system | c8e356bb634ba3bc35116a4c8e7de6e2dd642e53 | [
"MIT"
] | null | null | null | README.md | miguelzetina/-book-borrowing-system | c8e356bb634ba3bc35116a4c8e7de6e2dd642e53 | [
"MIT"
] | 4 | 2019-09-06T07:56:42.000Z | 2020-07-27T14:24:27.000Z | # book-borrowing-system
Book Borrowing System, developed in Django Framework
| 19.75 | 53 | 0.810127 | eng_Latn | 0.883085 |
678b277a14ec671cede0d902af88d908b33d95d6 | 2,036 | md | Markdown | docs/framework/unmanaged-api/fusion/isframeworkassembly-function.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/isframeworkassembly-function.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/isframeworkassembly-function.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IsFrameworkAssembly – funkce
ms.date: 03/30/2017
api_name:
- IsFrameworkAssembly
api_location:
- fusion.dll
api_type:
- COM
f1_keywords:
- IsFrameworkAssembly
helpviewer_keywords:
- IsFrameworkAssembly function [.NET Framework fusion]
ms.assetid: b0c6f19b-d4fd-4971-88f0-12ffb5793da3
topic_type:
- apiref
ms.openlocfilehash: e30b6f2d2254d2d107c4c82a2c5664850ce6ec23
ms.sourcegitcommit: 559fcfbe4871636494870a8b716bf7325df34ac5
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 10/30/2019
ms.locfileid: "73123072"
---
# <a name="isframeworkassembly-function"></a>IsFrameworkAssembly Function
Gets a value that indicates whether the specified assembly is managed.
## <a name="syntax"></a>Syntax
```cpp
HRESULT IsFrameworkAssembly (
[in] LPCWSTR pwzAssemblyReference,
[out] LPBOOL pbIsFrameworkAssembly,
[in] LPWSTR pwzFrameworkAssemblyIdentity,
[in] LPDWORD pccSize
);
```
## <a name="parameters"></a>Parameters
`pwzAssemblyReference`
[in] The name of the assembly to check.
`pbIsFrameworkAssembly`
[out] A Boolean value that indicates whether the assembly is managed.
`pwzFrameworkAssemblyIdentity`
[in] A non-canonicalized string that contains the unique identity of the assembly.
`pccSize`
[in] The size of `pwzFrameworkAssemblyIdentity`.
## <a name="remarks"></a>Remarks
The `pwzAssemblyReference` parameter is a pointer to a character string that contains the name of the assembly.
If this assembly is part of the .NET Framework, the `pbIsFrameworkAssembly` parameter will contain a Boolean value of `true`.
If the named assembly is not part of the .NET Framework, or if the `pwzAssemblyReference` parameter does not name an assembly, `pbIsFrameworkAssembly` will contain a Boolean value of `false`.
## <a name="requirements"></a>Requirements
**Platforms:** See [System Requirements](../../get-started/system-requirements.md).
## <a name="see-also"></a>See also
- [Fusion Global Static Functions](fusion-global-static-functions.md)
| 31.8125 | 189 | 0.753929 | ces_Latn | 0.959354 |
678b82c2e1842db67657d4ccee833e31b079bb35 | 1,637 | md | Markdown | README.md | DJA1802/context | 8638f4015ed11c0d76693441833f5ab45eee8f9e | [
"MIT"
] | 2 | 2018-05-23T20:29:07.000Z | 2018-06-14T18:30:05.000Z | README.md | DJA1802/context | 8638f4015ed11c0d76693441833f5ab45eee8f9e | [
"MIT"
] | 72 | 2018-05-03T22:07:35.000Z | 2018-05-14T12:46:11.000Z | README.md | DJA1802/context | 8638f4015ed11c0d76693441833f5ab45eee8f9e | [
"MIT"
] | null | null | null | # readMe

## Introduction
Welcome to readMe, an app for data-driven readers. readMe allows users to save online articles to read later, as well as to see a variety of analytics and visualizations regarding their reading habits. It's built as a 100% [Progressive Web App](https://developers.google.com/web/progressive-web-apps/) for ultimate flexibility between mobile / desktop, and online / offline usage. readMe's accompanying Chrome extension allows users to save articles with one click.
## Usage
1. Start from the site at [https://readme2018.herokuapp.com](https://readme2018.herokuapp.com).
2. Time to start adding articles! When you're browsing the web and come across an article you want to read later using readMe, you have two options:
(a) Click the `+` icon at the top right of the homepage, paste the URL of the article in the resulting input field, and click "Save".
(b) Download the readMe [Chrome Extension](https://chrome.google.com/webstore/search/readme%20browser%20extension?hl=en-US). Once installed, click the readMe icon in the top right of your browser toolbar and _voila!_ – your article is ready to read from the comfort of our app.
3. Continue to read articles from readMe and accumulate data on your own reading habits!
## Our Stack
Back-end
- Node.js
- Express
- [Mercury API](https://mercury.postlight.com/web-parser/) by Postlight
- Sequelize
- PostgreSQL
Front-end
- React
- React-Redux
- Semantic UI
- Victory (a visualization library based on D3.js)
---
Made with ❤️️ by Anurag Prasad, Don Leistman, and Jeff Gore.
| 44.243243 | 465 | 0.758705 | eng_Latn | 0.97193 |
678b860546beb53818c0a74bdfc624fc6fe90b37 | 1,173 | md | Markdown | articles/billing-troubleshoot-cannot-charge-credit-card.md | yfakariya/azure-content-jajp | 69be88c0fee4443d5dcab82bf4aed6a155fea287 | [
"CC-BY-3.0"
] | 2 | 2016-09-23T01:46:35.000Z | 2016-09-23T05:12:58.000Z | articles/billing-troubleshoot-cannot-charge-credit-card.md | yfakariya/azure-content-jajp | 69be88c0fee4443d5dcab82bf4aed6a155fea287 | [
"CC-BY-3.0"
] | null | null | null | articles/billing-troubleshoot-cannot-charge-credit-card.md | yfakariya/azure-content-jajp | 69be88c0fee4443d5dcab82bf4aed6a155fea287 | [
"CC-BY-3.0"
] | 1 | 2020-11-04T04:29:27.000Z | 2020-11-04T04:29:27.000Z | <properties
	pageTitle="I received an email saying that my service might be interrupted | Microsoft Azure"
	description="Describes how to resolve the issue where subscription charges cannot be billed to your credit card"
services="billing"
documentationCenter=""
authors="genlin"
manager="jarrettr"
editor="na"
tags="billing"
/>
<tags
ms.service="billing"
ms.workload="na"
ms.tgt_pltfrm="na"
ms.devlang="na"
ms.topic="article"
ms.date="03/29/2016"
ms.author="genli"/>
# I received an email saying that my service might be interrupted
If we are unable to process your payment for any reason, you may receive an email like the following:
**We could not charge your credit card for your subscription fee. To avoid service interruption, please update your payment method.**
This issue can occur when the account is past due. In that case, see the article about paying by credit card or invoice, [Why do I get a notification that my Azure subscription has a past-due balance?](billing-azure-subscription-past-due-balance.md).
If this article does not resolve your Azure issue, visit the [Azure forums on MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue on these forums or to @AzureSupport on Twitter. You can also submit an Azure support request by selecting **Get support** on the [Azure support](https://azure.microsoft.com/support/options/) site.
For information about how to create a support ticket, see [How to create a support ticket for Azure billing and subscription issues](billing-how-to-create-billing-support-ticket.md).
<!---HONumber=AcomDC_0330_2016------> | 35.545455 | 278 | 0.806479 | yue_Hant | 0.43478 |
678b9f776faba6732c65fe5320e16c939b1f0290 | 314 | md | Markdown | src/pages/svg-react.md | anuraggautam77/react-faq | 50b74a5ea89039219f9da4119baa629d35612e39 | [
"MIT"
] | 2,093 | 2016-08-23T16:01:30.000Z | 2022-03-02T18:59:05.000Z | src/pages/svg-react.md | anuraggautam77/react-faq | 50b74a5ea89039219f9da4119baa629d35612e39 | [
"MIT"
] | 19 | 2016-08-23T18:55:40.000Z | 2019-10-09T16:13:44.000Z | src/pages/svg-react.md | anuraggautam77/react-faq | 50b74a5ea89039219f9da4119baa629d35612e39 | [
"MIT"
] | 157 | 2016-08-23T18:42:21.000Z | 2021-08-31T09:29:23.000Z | ---
title: SVG & React
path: "/svg-react/"
---
**How do I work with SVG's in React?**
* [Icons as React Components](https://medium.com/@david.gilbertson/icons-as-react-components-de3e33cb8792#.lmbz3v9ic)
* [Creating an SVG Icon System with React](https://css-tricks.com/creating-svg-icon-system-react) @sarah_edo | 34.888889 | 117 | 0.726115 | eng_Latn | 0.381777 |
67900606a6900fe89117e093c18f827972bd14de | 3,128 | md | Markdown | docs/framework/data/adonet/dataset-datatable-dataview/manipulating-data-in-a-datatable.md | SilverBuzzard/docs.pl-pl | a3cda910e7b4b30f2c3c449c742dce1be42067b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/dataset-datatable-dataview/manipulating-data-in-a-datatable.md | SilverBuzzard/docs.pl-pl | a3cda910e7b4b30f2c3c449c742dce1be42067b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/dataset-datatable-dataview/manipulating-data-in-a-datatable.md | SilverBuzzard/docs.pl-pl | a3cda910e7b4b30f2c3c449c742dce1be42067b5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Manipulating Data in a DataTable
ms.date: 03/30/2017
ms.assetid: 5cb86d48-a987-4af4-80e0-8cc2c8373d62
ms.openlocfilehash: a09edc6ce3098ab135d8c27ba0f6ad56cceed159
ms.sourcegitcommit: 2eceb05f1a5bb261291a1f6a91c5153727ac1c19
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 09/04/2018
ms.locfileid: "43521311"
---
# <a name="manipulating-data-in-a-datatable"></a>Manipulating data in a DataTable
After you create a <xref:System.Data.DataTable> in a <xref:System.Data.DataSet>, you can perform the same activities that you would with a table in a database. You can add, view, edit, and delete data in the table; you can monitor errors and events; and you can query the data in the table. When modifying data in a **DataTable**, you can also verify whether the changes are accurate and determine whether to programmatically accept or reject the changes.
## <a name="in-this-section"></a>In this section
[Adding Data to a DataTable](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/adding-data-to-a-datatable.md)
Explains how to create new rows and add them to a table.
[Viewing Data in a DataTable](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/viewing-data-in-a-datatable.md)
Describes how to access the data in a row, including the original and current versions of the data.
[The Load Method](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/the-load-method.md)
Describes the use of the **Load** method to fill a **DataTable** with rows.
[DataTable Edits](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/datatable-edits.md)
Explains how to modify data in a row, including suspending changes to a row until the proposed changes are validated and accepted.
[Row States and Row Versions](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/row-states-and-row-versions.md)
Provides information about the different states of a row.
[DataRow Deletion](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/datarow-deletion.md)
Describes how to remove a row from a table.
[Row Error Information](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/row-error-information.md)
Explains how to insert error information per row to help troubleshoot problems with data in an application.
[AcceptChanges and RejectChanges](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/acceptchanges-and-rejectchanges.md)
Explains how to accept or reject changes made to a row.
## <a name="see-also"></a>See also
[DataTables](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/datatables.md)
[Handling DataTable Events](../../../../../docs/framework/data/adonet/dataset-datatable-dataview/handling-datatable-events.md)
[ADO.NET Managed Providers and DataSet Developer Center](https://go.microsoft.com/fwlink/?LinkId=217917)
| 71.090909 | 451 | 0.759271 | pol_Latn | 0.988175 |
679158fc60a13f885c9329714be9431c4546fcd9 | 283 | md | Markdown | README.md | TimekillerTK/sendmail-if | 5a1d84a675a21d2d46fb9aed7cabecf2990e87d4 | [
"MIT"
] | null | null | null | README.md | TimekillerTK/sendmail-if | 5a1d84a675a21d2d46fb9aed7cabecf2990e87d4 | [
"MIT"
] | null | null | null | README.md | TimekillerTK/sendmail-if | 5a1d84a675a21d2d46fb9aed7cabecf2990e87d4 | [
"MIT"
] | null | null | null | # sendmail-if
Bash script that checks a specified directory for changes.
If no changes have been made in the last 7 days, send a notification E-mail to specified E-mail address.
Script created for a Synology NAS.
## Requirements
* Needs a relay server (SMTP) set up for `sendmail` | 31.444444 | 104 | 0.770318 | eng_Latn | 0.998638 |
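The project itself is a bash script, but the logic described above — if nothing under the watched directory changed in the last 7 days, send a notification email through the local sendmail/SMTP relay — can be sketched as follows. This is an illustrative rendering only; the directory, addresses, and relay host are placeholder assumptions, not values from the repository.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the check-and-notify logic described above."""
import os
import smtplib
import time
from email.message import EmailMessage

WATCH_DIR = "/volume1/shared"        # placeholder: directory to watch
RECIPIENT = "[email protected]"       # placeholder: notification address
MAX_AGE_SECONDS = 7 * 24 * 60 * 60   # 7 days

def newest_mtime(root: str) -> float:
    """Return the most recent modification time found under root."""
    newest = os.path.getmtime(root)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            newest = max(newest, os.path.getmtime(os.path.join(dirpath, name)))
    return newest

def main() -> None:
    if time.time() - newest_mtime(WATCH_DIR) <= MAX_AGE_SECONDS:
        return  # something changed recently, nothing to report
    msg = EmailMessage()
    msg["Subject"] = f"No changes in {WATCH_DIR} for 7 days"
    msg["From"] = RECIPIENT
    msg["To"] = RECIPIENT
    msg.set_content("The watched directory has not changed in the last 7 days.")
    with smtplib.SMTP("localhost") as relay:  # assumes the SMTP relay noted in Requirements
        relay.send_message(msg)

if __name__ == "__main__":
    main()
```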
6791ddae7de134e2231de97cce6062c8c0da5650 | 1,036 | md | Markdown | _data/README.md | tonidy/rails-id.github.io | 700c350d394ac1e1ab81687756e7bdefd3ed1235 | [
"MIT"
] | 15 | 2019-01-18T08:11:22.000Z | 2022-03-19T11:46:51.000Z | _data/README.md | tonidy/rails-id.github.io | 700c350d394ac1e1ab81687756e7bdefd3ed1235 | [
"MIT"
] | 26 | 2019-01-23T06:54:49.000Z | 2022-02-26T04:51:15.000Z | _data/README.md | tonidy/rails-id.github.io | 700c350d394ac1e1ab81687756e7bdefd3ed1235 | [
"MIT"
] | 11 | 2019-01-19T12:41:09.000Z | 2022-03-19T11:46:54.000Z | # Data
## Companies (`companies.yml`)
##### How to add a startup / company to `companies.yml`
- Add the logo file to the `assets/showcase/companies/images/` folder using the `logo_{name}` format.
- The logo can have any extension, as long as it is an image type.
- Update the `_data/companies.yml` file.
- Add the startup / company you want to include.
- Add the shortest possible description, along with a brief explanation of how Rails is used at the company / startup / organization.
- Create a pull request (not required for members of the Ruby on Rails Indonesia organization).
- Wait for it to be reviewed and merged.
- Keep in mind: make sure the company / startup / organization you want to add is not already listed on the [Ruby and Rails users in Indonesia](https://rails.id/company-use-rails/) page.
##### Example entry
``` yaml
- name: nama-startup.com
url: https://www.nama-startup.com
image_url: images/logo_namastartup.png
description: nama-startup.com adalah aplikasi ter-keren No. 1 di Indonesia.
```
| 47.090909 | 179 | 0.738417 | ind_Latn | 0.84403 |
67922511287ada23c309e072fbbb500aa1ee2fe9 | 220 | md | Markdown | content/blog/2019-12-02-test-blog-post-symbols.md | alphaleph/calcio-monte-sacro | aad5323f69427eb3a6f5b3a319d4b8b12528d3a9 | [
"MIT"
] | null | null | null | content/blog/2019-12-02-test-blog-post-symbols.md | alphaleph/calcio-monte-sacro | aad5323f69427eb3a6f5b3a319d4b8b12528d3a9 | [
"MIT"
] | null | null | null | content/blog/2019-12-02-test-blog-post-symbols.md | alphaleph/calcio-monte-sacro | aad5323f69427eb3a6f5b3a319d4b8b12528d3a9 | [
"MIT"
] | null | null | null | ---
layout: post
post_type: blog
language: en
date: 2019-12-02T18:34:49.020Z
title: Test Blog Post Symbols
banner-image: /cms-icon.png
---
!#@!@#!##$%^%*^&*^*^&^#%#$(*)(**({}}{][];'./.,<>?<":{~``!@#!3-0913-98576||\\][/$
| 22 | 80 | 0.504545 | yue_Hant | 0.089955 |
67928a57ca3cffde98c56860b207140dbbf67078 | 1,623 | md | Markdown | hand-pose-tensorflow-model/README.md | CrispenGari/opencv-python | cfa862fbf3b8b2c8899b76cee2774d6fb72ba00e | [
"MIT"
] | 1 | 2021-11-08T07:37:05.000Z | 2021-11-08T07:37:05.000Z | hand-pose-tensorflow-model/README.md | CrispenGari/opencv-python | cfa862fbf3b8b2c8899b76cee2774d6fb72ba00e | [
"MIT"
] | null | null | null | hand-pose-tensorflow-model/README.md | CrispenGari/opencv-python | cfa862fbf3b8b2c8899b76cee2774d6fb72ba00e | [
"MIT"
] | null | null | null | ## Hand Pose.
We will predict hand `poses` using a model that I have trained in this [notebook](https://github.com/CrispenGari/Computer-Vision-In-TensorFlow/tree/main/01_Classification/04_Hand_Gasture) using tensorflow 2.0.
<p align="center">
<img src="https://img.shields.io/static/v1?label=language&message=python&color=green"/>
<img src="https://img.shields.io/static/v1?label=package&message=opencv&color=yellow"/>
<img src="https://img.shields.io/static/v1?label=package&message=numpy&color=blue"/>
<img src="https://img.shields.io/static/v1?label=package&message=tensorflow&color=yellow"/>
</p>
The following code will make predictions on the images that are located in the `testing` folder.
````python
import tensorflow as tf
from tensorflow import keras
import cv2
import numpy as np, os
model = keras.models.load_model('model/hand-gestures.h5')
def predictClass(frame, path):
resized_image = cv2.resize(frame, (96, 96))
resized_image = np.reshape(resized_image, (-1, 96, 96, 1)).astype('float32') / 255.
classes = {'Blank': 0, 'Fist': 1, 'Five': 2, 'ThumbsUp': 3, 'Two': 4, 'Yo': 5}
classes_reversed = dict([(v, k) for (k, v) in classes.items()])
predictions = tf.squeeze(tf.argmax(model(resized_image), axis=1)).numpy()
print(f"IMAGE PATH: \t{path}\nPREDICTED CLASS: \t{predictions}\nLABEL CLASS: \t{classes_reversed[predictions]}")
base_dir = 'testing'
image_paths = [i for i in os.listdir(base_dir)]
for path in image_paths:
image = cv2.imread(os.path.join(base_dir, path), cv2.IMREAD_UNCHANGED)
predictClass(image, path)
print('-----------------------------------')
```` | 49.181818 | 209 | 0.7061 | eng_Latn | 0.362583 |
67931f2e51ac42b4805246b6f06bc4cff73dab9e | 2,015 | md | Markdown | README.md | piaojin/PJPresentation | 0dd255bc3cc0fde58d41a2d18cd1f68137638778 | [
"MIT"
] | 1 | 2018-04-27T01:00:55.000Z | 2018-04-27T01:00:55.000Z | README.md | piaojin/PJPresentation | 0dd255bc3cc0fde58d41a2d18cd1f68137638778 | [
"MIT"
] | null | null | null | README.md | piaojin/PJPresentation | 0dd255bc3cc0fde58d41a2d18cd1f68137638778 | [
"MIT"
] | null | null | null | > # PJPresentation
`Swift`, `AutoLayout`, `iOS`
## Installation
[CocoaPods](http://cocoapods.org) is a dependency manager for Objective-C / Swift, which automates and simplifies the process of using 3rd-party libraries like AFNetworking, PJPresentation in your projects. You can install it with the following command:
```bash
$ gem install cocoapods
```
> CocoaPods 0.39.0+ is required to build PJPresentation.
#### Podfile
To integrate PJPresentation into your Xcode project using CocoaPods, specify it in your `Podfile`:
```ruby
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '10.0'
target 'TargetName' do
pod 'PJPresentation'
end
```
Then, run the following command:
```bash
$ pod install
```
## How to use
```swift
let contentView = UIView()
contentView.backgroundColor = .orange
PJPresentationControllerManager.presentView(contentView: contentView, presentationViewControllerHeight: 200.0)
```

Use the `PJPresentationOptions` struct to configure the desired effect, such as popup or dismiss direction, size, custom animation effects, etc.
```swift
var options = PJPresentationOptions()
options.dismissDirection = .topToBottom
options.presentationPosition = .center
options.presentationDirection = .topToBottom
PJPresentationControllerManager.presentView(contentView: contentView, presentationViewControllerHeight: 200, presentationOptions: options)
```



## How to find
```
pod search PJPresentation
```
## Q&A
Contact me ([email protected]) if you have any questions or suggestions.
## License
PJPresentation is released under the MIT license. See [LICENSE](https://github.com/piaojin/PJPresentation/blob/master/LICENSE) for details.
| 27.986111 | 253 | 0.778164 | eng_Latn | 0.654677 |
6793c093bc1c15dd3b6bd4021c300b51b686da11 | 253 | md | Markdown | README.md | Rajarshi-Neu/Introduction-to-Data-Mining-Machine-Learning-2 | dfba659596e4cb9114002a53a6aae7ea1f978b15 | [
"MIT"
] | null | null | null | README.md | Rajarshi-Neu/Introduction-to-Data-Mining-Machine-Learning-2 | dfba659596e4cb9114002a53a6aae7ea1f978b15 | [
"MIT"
] | null | null | null | README.md | Rajarshi-Neu/Introduction-to-Data-Mining-Machine-Learning-2 | dfba659596e4cb9114002a53a6aae7ea1f978b15 | [
"MIT"
] | null | null | null | # Introduction-to-Data-Mining-Machine-Learning-2
Done towards partial fulfillment of the DA5030 Introduction to Data Mining/Machine Learning course at Northeastern University.
Algorithms:
Naive Bayes Classifier
Backfitting and Regression
Logistic Regression
| 31.625 | 120 | 0.857708 | eng_Latn | 0.515872 |
6793f9df32b19411f50fea6856ed5a614ed40484 | 2,773 | md | Markdown | README.md | matt-l-w/remix-auth-oidc | 29ed9c7691b36fcc6a2f5027757776f7e4b32694 | [
"MIT"
] | 2 | 2022-03-14T16:44:11.000Z | 2022-03-30T22:45:18.000Z | README.md | matt-l-w/remix-auth-oidc | 29ed9c7691b36fcc6a2f5027757776f7e4b32694 | [
"MIT"
] | null | null | null | README.md | matt-l-w/remix-auth-oidc | 29ed9c7691b36fcc6a2f5027757776f7e4b32694 | [
"MIT"
] | null | null | null | # OpenIDConnectStrategy
A strategy to use and implement OpenIDConnect, slightly extending the [OAuth2 Strategy](https://github.com/sergiodxa/remix-auth-oauth2).
This strategy heavily leans on the OAuth2 strategy and it's recommended that you understand how that strategy works first.
## Supported runtimes
| Runtime | Has Support |
| ---------- | ----------- |
| Node.js | ✅ |
| Cloudflare | ✅ |
## How to use
### Extending it
You can use this strategy as a base class for another strategy using OpenIDConnect.
Here's an example of implementing this for [Keycloak](https://www.keycloak.org/).
```ts
import { OIDCExtraParams, OIDCProfile, OpenIDConnectStrategy } from 'remix-auth-oidc';
import { getUser } from 'your-auth-module';
type KeycloakUserInfo = {
sub: string,
email: string,
preferred_username?: string,
name?: string,
given_name?: string,
family_name?: string,
picture?: string
}
export type KeycloakUser = {
id: string
name?: string
email: string
accessToken: string
refreshToken: string
}
export class KeycloakStrategy extends OpenIDConnectStrategy<KeycloakUser, OIDCProfile, OIDCExtraParams> {
name = 'keycloak';
constructor() {
super(
{
authorizationURL: `${process.env.KEYCLOAK_TRUST_ISSUER!}/protocol/openid-connect/auth`,
tokenURL: `${process.env.KEYCLOAK_TRUST_ISSUER!}/protocol/openid-connect/token`,
clientID: process.env.KEYCLOAK_CLIENT_ID!,
clientSecret: process.env.KEYCLOAK_CLIENT_SECRET!,
callbackURL: process.env.CALLBACK_URL!,
},
      async ({ accessToken, refreshToken, extraParams, profile, context }) => {
// here you can use the params above to get the user and return it
// what you do inside this and how you find the user is up to you
return await getUser(
accessToken,
refreshToken,
extraParams,
profile,
context
);
}
)
}
protected async userProfile(accessToken: string, params: OIDCExtraParams): Promise<OIDCProfile> {
const response = await fetch(
`${process.env.KEYCLOAK_TRUST_ISSUER!}/protocol/openid-connect/userinfo`,
{
headers: {
authorization: `Bearer ${accessToken}`,
}
}
);
if (!response.ok) {
try {
let body = await response.text();
throw new Response(body, { status: 401 });
} catch (error) {
throw new Response((error as Error).message, { status: 401 });
}
}
const data: KeycloakUserInfo = await response.json();
return {
provider: 'keycloak',
id: data.sub,
emails: [{ value: data.email }],
displayName: data.name,
name: {
familyName: data.family_name,
givenName: data.given_name,
},
}
}
}
```
| 27.455446 | 136 | 0.63974 | eng_Latn | 0.716264 |
67942696ae9f791534cecda7c2e6029e89c2b1ee | 3,178 | md | Markdown | release-notes/RELEASE-1.1.3.md | lamby/infernal | a358a4984a90efd8177a82440f7576204735ae5c | [
"BSD-3-Clause"
] | null | null | null | release-notes/RELEASE-1.1.3.md | lamby/infernal | a358a4984a90efd8177a82440f7576204735ae5c | [
"BSD-3-Clause"
] | null | null | null | release-notes/RELEASE-1.1.3.md | lamby/infernal | a358a4984a90efd8177a82440f7576204735ae5c | [
"BSD-3-Clause"
] | null | null | null | # Infernal 1.1.3 release notes (Nov 2019)
### Infernal 1.1.3 is the third update release for Infernal 1.1.
## Notable changes from 1.1.2:
* We improved how we calculate our default sequence weights (Henikoff
position-based weights), especially on deep alignments of 10-100K+
sequences. Now we calculate PB weights only on consensus columns,
not all columns. This avoids some cases of artifactually extreme
weights on very gappy alignments. Because of these changes, CMs and
profile HMMs built with version 1.1.3 give slightly different
scores compared to previous Infernal versions.
* cmpress and cmscan now work for non-calibrated models with zero
basepairs. Previously, to use cmpress and cmscan with a CM
database, all CMs in that database had to have been calibrated with
cmcalibrate, even those with zero basepairs. cmcalibrate determines
parameters for E-value statistics of CM hit results in cmsearch and
cmscan. Since HMM algorithms and not CM algorithms are used by
default in cmscan for models with zero basepairs, calibration is
not necessary. Calibration is still required prior to running
cmpress and cmscan with models with 1 or more basepairs.
* New cmbuild option --emaxseq for allowing effective number of
sequences to exceed number of sequences in the alignment.
* The Easel and HMMER3 libraries which are included with Infernal have
undergone numerous bug fixes and improvements.
## Bug fixes:
* Fixes bugs #i45 and #i46, which caused model boundaries in
cmsearch/cmscan output alignments to be incorrect in rare cases
involving EL (local end) and truncated alignments.
* Fixes bug #i47, which prevented the cmbuild --p7ml option from
working correctly.
* Fixes bug #i48, which eliminates a possible ambiguity in the
sorting function used to sort hits for cmsearch/cmscan.
* Fixes bug #i49, which caused some potentially high scoring hits to
  be missed when cmsearch/cmscan was run in 'hmmonly' mode (the default
  mode if a model has 0 basepairs) and two hits are adjacent to one
  another in the sequence.
## Other smaller changes:
* New cmbuild options --occfile, --cp9occfile, and --fp7occfile for
outputting expected occupancy of each CM, CP9 HMM and FP7 HMM
states to a file.
* New cmsearch/cmscan option --olonepass to restrict CM analysis to
pipeline stage with the best HMM score for any hits that overlap.
* New cmsearch/cmscan option --noiter to turn off iterative
tightening of bands during CM analysis at end of pipeline.
* Our fasta format parser now detects aligned FASTA format (.afa
files) more robustly, and will not attempt to read a .afa file as
an unaligned sequence file. [iss#153]
* Our `make check` tests depend on Python >= 3.5. Added checks in
`./configure` and `make` to fail gracefully if python3 isn't available.
* `./configure` now always calls `AC_PROG_CC_STDC` to add compiler flags
for C99 (even with icc).
________________________________________________________________
For even more information, you can peruse the
[git log for our develop branch](https://github.com/EddyRivasLab/infernal/commits/develop).
| 42.373333 | 91 | 0.763373 | eng_Latn | 0.997479 |
67957d4cd2bd1b83b6e595b4766c97aeea25e9ff | 4,734 | md | Markdown | _posts/2020-04-22-git-coop.md | bgparkloop/bgparkloop.github.io | d2b3dcb545ff8d8ebb04676f1bc71d3deb24248e | [
"MIT"
] | null | null | null | _posts/2020-04-22-git-coop.md | bgparkloop/bgparkloop.github.io | d2b3dcb545ff8d8ebb04676f1bc71d3deb24248e | [
"MIT"
] | null | null | null | _posts/2020-04-22-git-coop.md | bgparkloop/bgparkloop.github.io | d2b3dcb545ff8d8ebb04676f1bc71d3deb24248e | [
"MIT"
] | null | null | null | ---
title: "[Study] Collaboration with Git"
categories:
- Git
tags:
- Git
comments: true
toc: true
toc_sticky: true
toc_label: Title
toc_color: green
---
## 1. Aligning Code Style
- **Why**: secure readability through coding conventions (enables fast reviews, fixes, handovers, and so on)
- **Coding convention**: a writing standard that lets people other than the author see the code and understand it quickly and easily while collaborating
- **Coding style guidelines (Google)**: [github.com/google/styleguide](http://github.com/google/styleguide)
---
## 2. Code Management Based on the Git Flow Strategy

- master: the branch that can be released as the product
- develop: the branch used to develop the next release
- feature: branches used to develop individual features
- release: the branch used to prepare the upcoming release (QA work)
- hotfix: branches used to fix bugs found in a released version
### 2.1. Github-flow
1. Create master and develop branches in the GitHub project (remote repository)
2. Each developer clones master and develop
3. Implement each feature on a locally created feature branch, then merge it into develop through a pull request
   - Manage feature branches as sub-branches named feature/issue_name
   - When needed, pull the remote develop branch to bring in required work and continue feature development
4. Fix urgent issues on a hotfix branch, then merge it into master and develop
5. When development is complete and ready to ship, create a release branch and deploy from it
6. On the master branch, only mark the product version with a tag and do not touch it further
### 2.2. Commands for Git Flow
- git init : initialize a git project
- git add . or file_name : stage changes
- git commit -m "message" : describe the staged changes
- git push -u origin master : push the commits to the remote repository (the -u option sets the upstream so the branch is tracked from then on)
- git pull <remote> <branch> : bring the latest changes from the remote repository to my machine
- git branch --set-upstream-to=origin/develop develop : connect the local develop branch to the remote develop; after that, git pull updates it to the latest version in the remote repository
- git checkout <branch> : switch to that branch
  - -b : create a new branch
  - -t <branch> : check out a specific remote branch locally; if an error occurs, refresh with git remote update
- git branch
  - -r : list the branches in the remote repository
  - -a : list local and remote branches
  - -D <branch> : delete a branch
- git merge <branch> : merge the target branch into the currently checked-out branch
- git merge --abort : cancel the merge and return to the previous state
- git status : show the state of the current branch
- git checkout -- <file> : discard changes made before add
- git reset HEAD <file> : undo changes staged with add
- git reset --hard <commit id> : reset back to a past commit
- git tag <tag> : create a read-only tag version
- git fetch origin --prune : synchronize local with remote and remove stale branch references
- git remote update --prune : synchronize from the remote side
### 2.3. Pre-commit lint checks and preventing direct pushes to master/develop with Git hooks
- Git hook files live in .git/hooks
- **Pre-commit**
  - Rename hooks/pre-commit.sample to pre-commit and edit it as needed
  - When you add the changed files and run git commit, the pre-commit hook runs first
[pre-commit with pylint](https://www.notion.so/pre-commit-with-pylint-2226b19a392d4445b2dc995cbd9b4934)
- **Pre-push**
  - Rename hooks/pre-push.sample to pre-push and edit it as needed (a sketch follows below)
  - When you run git push <remote> <branch>, the pre-push hook runs first
[pre-push for preventing to push master directly](https://www.notion.so/pre-push-for-preventing-to-push-master-directly-2afa1924805745869a855941dad436f5)
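For reference, a minimal pre-push hook along these lines can also be written in Python. The sketch below is an illustration built on assumptions (the actual hook used in the post is in the linked Notion page); it rejects direct pushes to master and develop. Save it as .git/hooks/pre-push and make it executable.

```python
#!/usr/bin/env python3
"""Illustrative pre-push hook that blocks direct pushes to protected branches."""
import sys

PROTECTED = {"refs/heads/master", "refs/heads/develop"}  # assumed branch names

def main() -> int:
    # git feeds "<local ref> <local sha> <remote ref> <remote sha>" lines on stdin
    for line in sys.stdin:
        parts = line.split()
        if len(parts) == 4 and parts[2] in PROTECTED:
            print(f"Direct push to {parts[2]} is blocked; open a pull request instead.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```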
### 2.4. Leaving Good Commit Messages with a Git Commit Template
- See this link: [Git-commit template](http://jeong-pro.tistory.com/207)
- In the commit message, you can close an issue together with the commit by writing close #issue_num, etc.
-
[Copy of git-commit-template](https://www.notion.so/Copy-of-git-commit-template-bb1c0aa25dfc46608363aef3bf15c20a)
### 2.5. Preview Sites with gh-pages
- Use gh-pages to push to preview-only branches such as preview/XXX, so changes can be checked quickly before merging into develop. (visualization)
---
## 3. Code Review
### 3.1. What You Gain from Code Review
- Find defects in the code
- Make the code cleaner, which improves readability
- Learn about parts of the codebase you did not write
- Learn how other developers work
### 3.2. Review Level and Behavior
- Participation of experienced reviewers matters; if there are none, start with a small group
- Early on, handle review requests as promptly as possible, then gradually as time allows
- With only a few participants (one or two people), review only at a coarse level to keep development fast (duplicated code, bad variable names, etc.)
  - Fix coding style with a linter such as Pylint before committing
  - Unit tests do not have to be included, but the code should be written so that such tests are easy to add later
- As development nears completion, gradually move the review toward business logic, refactoring, and so on
- Adapt the review style to each developer's personality
  - Focus on things that clearly must be improved, such as performance problems, rather than coding technique (for people confident in their own code)
  - Prefer "how about doing it this way?" over "you must change it like this" (for beginners or people who prefer that kind of feedback)
- When needed, meet in person to exchange opinions
- If something is hard to handle in a single pull request, file a new issue for it and register the new issue as a comment on the pull request
### 3.3. Main Things to Check in a Code Review
- Are coding conventions followed? Are the code, variable names, and functions easy to understand?
- Always use a logger (avoid logs produced with print and the like)
- Ask for unused comments to be deleted
- Remove duplicated code
- Are function names too long? (split the work into smaller units)
- Avoid too many if statements, and keep loop and condition handling easy to follow
- Code that affects performance (e.g., sleep)
- Exception handling
- Security-related strings in the source code, etc.
- Does the commit log reflect the intent of the commit well?
---
# References
- [sv-story.blogspot.com/2013/04/blog-post_28.html](http://sv-story.blogspot.com/2013/04/blog-post_28.html)
- [popit.kr/코드-리뷰-이야기2/](http://popit.kr/코드-리뷰-이야기2/)
- [Kakao code review](http://tech.kakao.com/2016/02/04/code-review/)
- [Code review summary](https://github.com/JaeYeopHan/tip-archive/issues/13)
- [git-flow](http://holaxprogramming.com/2018/11/01/git-commands/)
- [git-hooks](https://git-scm.com/book/ko/v2/Git%EB%A7%9E%EC%B6%A4-Git-Hooks) | 32.875 | 157 | 0.681876 | kor_Hang | 1.00001 |
6795cf5c808b68d0f101b85bf4f7581c2265970c | 516 | md | Markdown | README.md | ebegen/Dunner | 36e3ab6edb3692a9713cdca02badf45da8153ce8 | [
"MIT"
] | null | null | null | README.md | ebegen/Dunner | 36e3ab6edb3692a9713cdca02badf45da8153ce8 | [
"MIT"
] | null | null | null | README.md | ebegen/Dunner | 36e3ab6edb3692a9713cdca02badf45da8153ce8 | [
"MIT"
] | null | null | null | # dunner - Make your data science projects easier
A helper pip package for data cleaning, organizing, visualizing, machine learning, and deep learning.
### Assumptions:
+ Assuming python is installed on your system.
# Description
It consists of three main modules:
- `data_operations`: tools for some useful data operations
- `data_cleaner`: tools to clean data
- `preprocess_helper`: tools to help data operation and data cleaning
# Installation
## Normal installation
```bash
pip install dunner
```
| 19.846154 | 100 | 0.755814 | eng_Latn | 0.991264 |
67966e834600fff944f0f62712260b157b3ba8ba | 405 | md | Markdown | README.md | wdg/wdg | be6f10094113e2be47cf1b36b285eac092b4100f | [
"MIT"
] | 1 | 2021-08-08T17:37:40.000Z | 2021-08-08T17:37:40.000Z | README.md | wdg/wdg | be6f10094113e2be47cf1b36b285eac092b4100f | [
"MIT"
] | null | null | null | README.md | wdg/wdg | be6f10094113e2be47cf1b36b285eac092b4100f | [
"MIT"
] | null | null | null | 




| 45 | 118 | 0.745679 | yue_Hant | 0.227682 |
6796eb020aa254be906be5c3f95528fadf528783 | 562 | md | Markdown | README.md | git4impatient/kerberos4hadoop | da0b687ecba7227d564fb9a625194483951b0790 | [
"Apache-2.0"
] | null | null | null | README.md | git4impatient/kerberos4hadoop | da0b687ecba7227d564fb9a625194483951b0790 | [
"Apache-2.0"
] | null | null | null | README.md | git4impatient/kerberos4hadoop | da0b687ecba7227d564fb9a625194483951b0790 | [
"Apache-2.0"
] | null | null | null | # kerberos4hadoop
Scripts to implement Kerberos on an Apache Hadoop cluster with Cloudera Manager.
Apache Hadoop security has advanced significantly. Kerberos is the foundation of securing your cluster.
With Kerberos enabled, you can use Apache Sentry for Role Based Access Controls (RBAC).
Starting with the Cloudera Quickstart Virtual Machine (VM), the scripts will prep the image to
enable Kerberos with Cloudera Manager. The upcoming blog entry on the Cloudera web site includes a
guide to using the scripts and screenshots for navigating Cloudera Manager.
| 46.833333 | 104 | 0.818505 | eng_Latn | 0.957181 |
6797d50780ace5a3369ce04e507d810d4c2a4a88 | 2,236 | md | Markdown | docs/misc/SetVariables.md | cohesity/CiscoSecureXOrchestration | f01b3d40f6cd19f83170674e099274c96325c7f6 | [
"Apache-2.0"
] | 1 | 2022-01-07T16:50:22.000Z | 2022-01-07T16:50:22.000Z | docs/misc/SetVariables.md | cohesity/CiscoSecureXOrchestration | f01b3d40f6cd19f83170674e099274c96325c7f6 | [
"Apache-2.0"
] | 1 | 2021-12-01T07:00:33.000Z | 2021-12-01T07:00:33.000Z | docs/misc/SetVariables.md | cohesity/SecureX | f01b3d40f6cd19f83170674e099274c96325c7f6 | [
"Apache-2.0"
] | null | null | null | ### Set Variables To Run Workflows
In this document, we will go over the steps to Set Variables which will be used by SecureX Workflows. Let's dive into the steps.
>NOTE: All of the steps documented below will be run after the Workflow has been imported into SecureX.
1. Login to your SecureX account and go to Orchestration

2. Navigate to Workflows from the left nav bar and open the workflow where you want to set the variables

3. Click on the Set Variable Activity on the workflow canvas. On the right side, you can enter values for all the variables present there.

### Set Variables using Global Variables
Now, if you don't want to hard-code these variables, there is another way to pass them. Let's look into that.
1. From Orchestration, navigate to Variables from left Nav bar and click `New Variable` under `Global Variables`.

2. Select the `Data Type`, enter a `DISPLAY NAME`, and give it a meaningful `DESCRIPTION`. Select the `SCOPE` of the variable as `Global`.

>NOTE: You will have to use `Data Type` as `Secure String` for variables like `APIClientID`, `APIClientPassword` and `HeliosAPIKey`.
3. Enter a value for this variable and click `Submit`

#### Reference these global variables in Workflow
Now that you have created these global variables, you can go back to the Workflow, open the `Set Variable` activity, and reference these global variables as shown below.
1. Navigate to Workflows from the left nav bar and open the workflow where you want to set the variables

2. Click on the Set Variable Activity on the workflow canvas. On the right side, you can reference the global variables as shown in the screenshots below.


3. You can perform this for all the variables for which you have created global variables. | 42.188679 | 175 | 0.753578 | eng_Latn | 0.992266