| Column | Dtype | Range |
| --- | --- | --- |
| hexsha | stringlengths | 40–40 |
| size | int64 | 5–1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3–344 |
| max_stars_repo_name | stringlengths | 5–125 |
| max_stars_repo_head_hexsha | stringlengths | 40–78 |
| max_stars_repo_licenses | sequencelengths | 1–11 |
| max_stars_count | int64 | 1–368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 3–344 |
| max_issues_repo_name | stringlengths | 5–125 |
| max_issues_repo_head_hexsha | stringlengths | 40–78 |
| max_issues_repo_licenses | sequencelengths | 1–11 |
| max_issues_count | int64 | 1–116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 3–344 |
| max_forks_repo_name | stringlengths | 5–125 |
| max_forks_repo_head_hexsha | stringlengths | 40–78 |
| max_forks_repo_licenses | sequencelengths | 1–11 |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| content | stringlengths | 5–1.04M |
| avg_line_length | float64 | 1.14–851k |
| max_line_length | int64 | 1–1.03M |
| alphanum_fraction | float64 | 0–1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01–1 |
3ee11897484b8dbc49715217b2c7a18d2b321635
2,237
md
Markdown
_posts/blog/2019-05-10-music.md
derektopper/derektopper.github.io
bc106798a2722ca0e4e0e8cbea4f75dba484905e
[ "Apache-2.0" ]
null
null
null
_posts/blog/2019-05-10-music.md
derektopper/derektopper.github.io
bc106798a2722ca0e4e0e8cbea4f75dba484905e
[ "Apache-2.0" ]
null
null
null
_posts/blog/2019-05-10-music.md
derektopper/derektopper.github.io
bc106798a2722ca0e4e0e8cbea4f75dba484905e
[ "Apache-2.0" ]
null
null
null
---
layout: blog_post
title: Generative Sonification of Web-Scraped Bike-Sharing Data
category: blog
---

For my final project in my music technology class, I developed a generative algorithmic composition that changes based on web-scraped bike-sharing data.

This past semester, I took a course on music technology. This was not something I had really done before, as I was never particularly musically gifted. I was always the worst person in music class and was always “last chair” when I played the stand-up bass in elementary, middle and high school. However, I’ve always thought music was neat and wanted to see how data science could be applied to it.

In the class, offered by [Berkeley’s CNMAT program](https://cnmat.berkeley.edu), the professor taught students how to use the visual programming language Max to create music and multimedia projects. Max is a dataflow system in which patches/programs are created by arranging building blocks on a visual canvas. These objects can receive input, generate output, or both, and they pass messages and other information between each other.

While other people in my class developed instruments or analyzed the sinusoidal components of music, I chose to build something that the program was not really set up to accommodate. I used Max to web-scrape the [open data for every bike-sharing system in the United States.](https://github.com/NABSA/gbfs/blob/master/systems.csv) Having spent the year using the Bay Area’s Ford GoBike, I became interested in the flow of riders and bikes and wanted to sonify this information.

Thus, I used Max to scrape the bike-sharing information for each of the 87 systems in the United States. Every time someone docked or removed a bike in a given system, I played a sound related to that bike-sharing system. For example, if someone docked a bike in Miami, I had Max play a sample of Cuban music, while if someone began to ride a bike in Detroit, I had Max play a sample of Motown. The result was an interesting little generative composition.

[The code and images of the project are available here.](https://drive.google.com/open?id=11HyjFIO4MvrjeYIR--Eov1iaOCzkk-A5olokgfU12Do)
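A Max patch doesn't paste well into a blog post, but the core polling logic is small enough to sketch. Below is a minimal Python illustration, assuming the GBFS `station_status` feed layout from the public spec; the feed URL and the `play_sample` stand-in are placeholders (the real project did this in Max, with per-city samples, and the current feed URLs live in `systems.csv`).

```python
"""Minimal sketch of the dock/undock detection loop (illustrative only)."""
import time

import requests

# Ford GoBike's GBFS status feed at the time; pull current URLs from systems.csv.
STATUS_URL = "https://gbfs.fordgobike.com/gbfs/en/station_status.json"


def snapshot(url):
    """Map station_id -> number of bikes currently available."""
    stations = requests.get(url, timeout=10).json()["data"]["stations"]
    return {s["station_id"]: s["num_bikes_available"] for s in stations}


def play_sample(name):
    """Hypothetical stand-in for triggering a sample in Max."""
    print(f"play: {name}")


prev = snapshot(STATUS_URL)
while True:
    time.sleep(60)
    curr = snapshot(STATUS_URL)
    for sid, bikes in curr.items():
        delta = bikes - prev.get(sid, bikes)
        if delta > 0:
            play_sample("dock")    # someone returned a bike
        elif delta < 0:
            play_sample("undock")  # someone took a bike
    prev = curr
```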
117.736842
477
0.798838
eng_Latn
0.999811
3ee13113be5a0a477721bfb9e8ede121cec12947
2,010
md
Markdown
README.md
ErrorxCode/CloremDB
2fe565e06e622dc49c5477212019a09c0169f04f
[ "Apache-2.0" ]
1
2022-03-03T02:29:12.000Z
2022-03-03T02:29:12.000Z
README.md
ErrorxCode/CloremDB
2fe565e06e622dc49c5477212019a09c0169f04f
[ "Apache-2.0" ]
3
2021-10-17T16:32:13.000Z
2022-03-30T12:19:28.000Z
README.md
ErrorxCode/CloremDB
2fe565e06e622dc49c5477212019a09c0169f04f
[ "Apache-2.0" ]
null
null
null
# CloremDB ~ Key-value pair store

<p align="left">
  <a href="#"><img alt="Version" src="https://img.shields.io/badge/Language-Java-1DA1F2?style=flat-square&logo=java"></a>
  <a href="#"><img alt="Bot" src="https://img.shields.io/badge/Version-2.8-green"></a>
  <a href="https://www.instagram.com/x__coder__x/"><img alt="Instagram - x__coder__" src="https://img.shields.io/badge/Instagram-x____coder____x-lightgrey"></a>
  <a href="#"><img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/ErrorxCode/OTP-Verification-Api?style=social"></a>
</p>

CloremDB is a key-value pair NoSQL database written in Java, for Java programs. The data is stored as a JSON tree with nodes and children. It has a powerful query engine: you can perform low-level and high-level queries on the database to sort data. Given a node, you can reach/find any node nested at any level under it and sort the results on the basis of a property.

![image](https://cdn.educba.com/academy/wp-content/uploads/2019/05/what-is-Nosql-database1.png)

## Features
- Easy, lightweight and fast
- Data sorting using queries
- Direct object deserialization
- Capable of storing almost all primitive datatypes
- Uses a JSON structure for storing data
- Supports `List<Integer>` & `List<String>`

## Acknowledgements
- [What is No-SQL](https://en.wikipedia.org/wiki/Key%E2%80%93value_database)

## Documentation
- [Javadocs](https://errorxcode.github.io/docs/clorem/index.html)
- [Guide](https://github.com/ErrorxCode/CloremDB/wiki/Guide)

## Deployment / Installation
In your project build.gradle:
```groovy
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```
In your app build.gradle:
```groovy
dependencies {
    implementation 'com.github.ErrorxCode:CloremDB:v2.8'
}
```

## It's easy
```java
Clorem.getInstance().addMyData().commit();
```

## Powered by ❤
#### [Clorabase](https://clorabase.netlify.app)
> An account-less platform-as-a-service (PaaS) for Android apps.
32.419355
233
0.71791
eng_Latn
0.552596
3ee1bf49419b6b07c15e6f5f58ab5ff1512bf2c3
5,482
md
Markdown
README.md
Pocc/meraki-client-vpn
bdb33b047b0a71b69c199db399c59ac5bfecd64c
[ "Apache-2.0" ]
4
2018-04-15T07:31:27.000Z
2020-03-26T04:20:31.000Z
README.md
Pocc/meraki-client-vpn
bdb33b047b0a71b69c199db399c59ac5bfecd64c
[ "Apache-2.0" ]
60
2018-04-15T08:25:49.000Z
2021-12-13T19:47:10.000Z
README.md
Pocc/meraki-client-vpn
bdb33b047b0a71b69c199db399c59ac5bfecd64c
[ "Apache-2.0" ]
1
2018-04-05T07:16:04.000Z
2018-04-05T07:16:04.000Z
[![Build Status](https://travis-ci.org/pocc/merlink.svg?branch=master)](https://travis-ci.org/pocc/merlink) [![Build status](https://ci.appveyor.com/api/projects/status/ktmvfms5ithcevcl/branch/master?svg=true)](https://ci.appveyor.com/project/pocc/merlink/branch/master) [![BCH compliance](https://bettercodehub.com/edge/badge/pocc/merlink?branch=master)](https://bettercodehub.com/)

# MerLink

This program will connect desktop devices to Meraki firewalls via an L2TP/IPSEC connection. It uses a Meraki dashboard admin's credentials to pull the data required for a Client VPN connection, create the connection with OS built-ins, and then connect.

## Current Feature Set (targeting v1.0.0)

* Authentication/Authorization
  * Dashboard admins/guest users supported with Meraki Auth
  * TFA prompt supported
  * Only networks/organizations that the user has access to are shown
* VPN Connection (Windows-only)
  * Split Tunnel
  * Remember Credential
* Platforms
  * Windows 7/8/10
  * macOS 10.7-13
  * Linux (requires network-manager)
* CI/CD on tagged commits
  * Windows 10
  * macOS 10.13
  * Ubuntu 14.04
  * Ubuntu 16.04
* Troubleshooting tests on connection failure (see the sketch below)
  * Is the MX online?
  * Can the client ping the firewall's public IP?
  * Is the user behind the firewall?
  * Is Client VPN enabled?
  * Is the authentication type Meraki Auth?
  * Are UDP ports 500/4500 port forwarded through the firewall?

The goals for future major versions can be found in the [Project list](https://github.com/pocc/merlink/projects).

## Installing Merlink

### Executables

Download the executables [here](https://github.com/pocc/merlink/releases).

### Building from Source

**1.** Clone the repository:

```git clone https://github.com/pocc/merlink```

**2.** Download the required libraries with pip3:

```pip3 install -r requirements.txt```

**3.** Execute the file:

```python3 merlink.py```

## Contributing

Please read [contributing.md](https://github.com/pocc/merlink/blob/master/docs/contributing.md) for the process of submitting pull requests.

### Setting up your environment

To set up your Windows environment, please read [pycharm_setup.md](https://github.com/pocc/merlink/blob/master/docs/pycharm_setup.md).

### Versioning

[SemVer](http://semver.org/) is used for versioning:

* MAJOR version: Incompatible UI from the previous version, from a user's perspective
* MINOR version: Functionality is added to the UI, from a user's perspective
* PATCH version: Minor enhancements and bug fixes

For the versions available, see the [tags on this repository](https://github.com/pocc/merlink/tags).

### Branching

Adapting [Git Branching](http://nvie.com/posts/a-successful-git-branching-model/) for this project:

* **iss#-X.Y**: Branch from dev-X.Y and reintegrate to dev-X.Y. Should be tied to an issue tagged with 'bug', 'feature', or 'enhancement' on the repo.
* **dev-X.Y**: Development branch. When it's ready for a release, branch into a release.
* **rel-X.Y**: Release candidate targeting version X.Y. When it is ready, it should be merged into master tagged with version X.Y.
* **master**: Master branch.

## Addenda

### Reference Material

#### Language and Libraries

* [Python 3](https://www.python.org/) - Base language
* [Qt5](https://doc.qt.io/qt-5/index.html) - Comprehensive Qt reference made by the Qt company. It is made for C++, but will supply the information you need about classes and functions.
* [PyQt5](http://pyqt.sourceforge.net/Docs/PyQt5/) - Documentation for PyQt5. This is a copy-paste of the Qt docs applied to Python, and generally contains less useful information.
* [Mechanical Soup](https://github.com/MechanicalSoup/MechanicalSoup) - Web scraper for Python 3

#### Environment

* [PyCharm](https://www.jetbrains.com/pycharm/) - IDE used

#### General Documentation

* [Powershell VPN Client docs](https://docs.microsoft.com/en-us/powershell/module/vpnclient/?view=win10-ps) - Collection of manpages for VPN Client-specific PowerShell functions.

#### Style Guide

* [Google Python Style Guide (2018)](https://github.com/google/styleguide/blob/gh-pages/pyguide.md)

#### Building

* [PyInstaller](https://pyinstaller.readthedocs.io/en/v3.3.1/) - Python bundler used as part of this project
  * Make sure you install the latest PyInstaller directly: `pip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip`
* [NSIS](http://nsis.sourceforge.net/Docs/) - Windows program installer system
  * [NSIS Wizard + IDE](http://hmne.sourceforge.net/) - Will build and debug NSIS scripts
  * [NSIS Sample Installers](http://nsis.sourceforge.net/Category:Real_World_Installers) - To learn how to build your own installer by example
* [FPM](https://github.com/jordansissel/fpm) - A way to package to targets deb, rpm, pacman, and osxpkg

### Linting

* coala
  * On Ubuntu, be sure to install these libraries as well: `sudo apt install libxml2-utils libxml2-dev libxslt-dev libxml2`

### License

This project is licensed under the Apache License 2.0 - see the [LICENSE.md](LICENSE.md) file for details.

### Authors

* **Ross Jacobs** - *Initial work* - [pocc](https://github.com/pocc)

See also the list of [contributors](https://github.com/pocc/merlink/contributors) who participated in this project.

### Acknowledgments

Praise be Stack Overflow!
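For a sense of what the first two troubleshooting checks involve, here is a rough Python sketch (not merlink's actual code) that shells out to the system `ping`; it assumes a Unix-like `ping -c` flag and a placeholder public IP.

```python
"""Illustrative reachability check, under the assumptions stated above."""
import subprocess


def can_ping(host, count=2):
    """True if the host answers ICMP echo (uses 'ping -c'; Windows needs '-n')."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


if __name__ == "__main__":
    # 203.0.113.1 is a documentation-range placeholder for the MX's public IP.
    print("MX reachable:", can_ping("203.0.113.1"))
```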
39.438849
185
0.738234
eng_Latn
0.787336
3ee1d8ca5d76d0327021b7d12787cf37042887d7
364
md
Markdown
docs/V4ReleaseListItemChangelog.md
giantswarm/giantswarm-js-client
7dc20cc3eae929665d73fa38b5ca9157112d2e14
[ "Apache-2.0" ]
6
2015-07-09T09:12:03.000Z
2021-03-30T01:50:10.000Z
docs/V4ReleaseListItemChangelog.md
giantswarm/giantswarm-js-client
7dc20cc3eae929665d73fa38b5ca9157112d2e14
[ "Apache-2.0" ]
69
2015-06-15T10:14:13.000Z
2022-01-27T13:56:21.000Z
docs/V4ReleaseListItemChangelog.md
giantswarm/giantswarm-js-client
7dc20cc3eae929665d73fa38b5ca9157112d2e14
[ "Apache-2.0" ]
2
2018-01-22T21:07:52.000Z
2018-01-22T21:07:58.000Z
# GiantSwarm.V4ReleaseListItemChangelog

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**component** | **String** | If the changed item was a component, this attribute is the name of the component. | [optional]
**description** | **String** | Human-friendly description of the change | [optional]
36.4
125
0.598901
eng_Latn
0.90755
3ee202cf3ae15e96644e2f5bd5640fd66c9c3e54
5,493
md
Markdown
documents/amazon-elasticache-docs/doc_source/memcache/elasticache-use-cases.md
siagholami/aws-documentation
2d06ee9011f3192b2ff38c09f04e01f1ea9e0191
[ "CC-BY-4.0" ]
5
2021-08-13T09:20:58.000Z
2021-12-16T22:13:54.000Z
documents/amazon-elasticache-docs/doc_source/memcache/elasticache-use-cases.md
siagholami/aws-documentation
2d06ee9011f3192b2ff38c09f04e01f1ea9e0191
[ "CC-BY-4.0" ]
null
null
null
documents/amazon-elasticache-docs/doc_source/memcache/elasticache-use-cases.md
siagholami/aws-documentation
2d06ee9011f3192b2ff38c09f04e01f1ea9e0191
[ "CC-BY-4.0" ]
null
null
null
# Common ElastiCache Use Cases and How ElastiCache Can Help<a name="elasticache-use-cases"></a>

Whether serving the latest news or a product catalog, or selling tickets to an event, speed is the name of the game. The success of your website and business is greatly affected by the speed at which you deliver content.

In "[For Impatient Web Users, an Eye Blink Is Just Too Long to Wait](http://www.nytimes.com/2012/03/01/technology/impatient-web-users-flee-slow-loading-sites.html?pagewanted=all&_r=0)," the New York Times noted that users can register a 250-millisecond (1/4 second) difference between competing sites. Users tend to opt out of the slower site in favor of the faster site. Tests done at Amazon, cited in [How Webpage Load Time Is Related to Visitor Loss](http://pearanalytics.com/blog/2009/how-webpage-load-time-related-to-visitor-loss/), revealed that for every 100-ms (1/10 second) increase in load time, sales decrease 1 percent.

If someone wants data, you can deliver that data much faster if it's cached. That's true whether it's for a webpage or a report that drives business decisions. Can your business afford to not cache your webpages so as to deliver them with the shortest latency possible?

It might seem intuitively obvious that you want to cache your most heavily requested items. But why not cache your less frequently requested items? Even the most optimized database query or remote API call is noticeably slower than retrieving a flat key from an in-memory cache. *Noticeably slower* tends to send customers elsewhere.

The following examples illustrate some of the ways using ElastiCache can improve overall performance of your application.

## In-Memory Data Store<a name="elasticache-use-cases-data-store"></a>

The primary purpose of an in-memory key-value store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Most data stores have areas of data that are frequently accessed but seldom updated. Additionally, querying a database is always slower and more expensive than locating a key in a key-value pair cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.

The following image shows ElastiCache caching.

![Image: ElastiCache Caching](http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/./images/ElastiCache-Caching.png)

### What Should I Cache?<a name="elasticache-use-cases-data-store-what-to-cache"></a>

When deciding what data to cache, consider these factors:

**Speed and expense** – It's always slower and more expensive to get data from a database than from a cache. Some database queries are inherently slower and more expensive than others. For example, queries that perform joins on multiple tables are much slower and more expensive than simple, single-table queries. If the interesting data requires a slow and expensive query to get, it's a candidate for caching. If getting the data requires a relatively quick and simple query, it might still be a candidate for caching, depending on other factors.

**Data and access pattern** – Determining what to cache also involves understanding the data itself and its access patterns. For example, it doesn't make sense to cache data that changes quickly or is seldom accessed. For caching to provide a real benefit, the data should be relatively static and frequently accessed. An example is a personal profile on a social media site. On the other hand, you don't want to cache data if caching it provides no speed or cost advantage. For example, it doesn't make sense to cache webpages that return search results because the queries and results are usually unique.

**Staleness** – By definition, cached data is stale data. Even if in certain circumstances it isn't stale, it should always be considered and treated as stale. To tell whether your data is a candidate for caching, determine your application's tolerance for stale data. Your application might be able to tolerate stale data in one context, but not another. For example, suppose that your site serves a publicly traded stock price. Your customers might accept some staleness with a disclaimer that prices might be *n* minutes delayed. But if you serve that stock price to a broker making a sale or purchase, you want real-time data.

Consider caching your data if the following is true:
+ Your data is slow or expensive to get when compared to cache retrieval.
+ Users access your data often.
+ Your data stays relatively the same, or if it changes quickly, staleness is not a large issue.

For more information, see the following:
+ [Caching Strategies](https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html) in the *ElastiCache for Memcached User Guide*

## ElastiCache Customer Testimonials<a name="elasticache-use-cases-testimonials"></a>

To learn about how businesses like Airbnb, PBS, Esri, and others use Amazon ElastiCache to grow their businesses with improved customer experience, see [How Others Use Amazon ElastiCache](https://aws.amazon.com/elasticache/testimonials/).

You can also watch the [ElastiCache Videos](Tutorials.md#tutorial-videos) for additional ElastiCache customer use cases.
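The lazy-loading pattern behind these guidelines is small enough to sketch. The snippet below is a minimal cache-aside example in Python, assuming the `pymemcache` client, a placeholder cache endpoint, and a hypothetical `query_database()` helper; it is illustrative, not part of the ElastiCache documentation.

```python
"""Cache-aside (lazy loading) sketch against a Memcached-compatible endpoint."""
from pymemcache.client.base import Client

# Placeholder for an ElastiCache for Memcached endpoint.
cache = Client(("my-cache.example.com", 11211))


def query_database(key):
    """Hypothetical stand-in for an expensive database query."""
    return b"result-for-" + key.encode()


def get_with_cache(key, ttl=300):
    value = cache.get(key)                 # fast path: in-memory lookup
    if value is None:                      # cache miss: pay the query cost once...
        value = query_database(key)
        cache.set(key, value, expire=ttl)  # ...then serve later reads from cache
    return value
```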
122.066667
687
0.788822
eng_Latn
0.997163
3ee2babb021e5f907a8e0b18b5037726c180cba1
90
md
Markdown
src/Camera agents for multiple cars/README.md
SahilDhull/autonomous
378fc7d6c5a9c34c4e915f080fb78ed5c11195d6
[ "MIT" ]
3
2020-02-28T12:04:26.000Z
2022-02-27T00:42:56.000Z
src/Camera agents for multiple cars/README.md
SahilDhull/autonomous
378fc7d6c5a9c34c4e915f080fb78ed5c11195d6
[ "MIT" ]
null
null
null
src/Camera agents for multiple cars/README.md
SahilDhull/autonomous
378fc7d6c5a9c34c4e915f080fb78ed5c11195d6
[ "MIT" ]
null
null
null
To add multiple cars in Webots, look at these files. They are not needed for the current scenario.
22.5
55
0.788889
eng_Latn
0.999595
3ee33d55ed1179c150e8742a606516fb29fd05d6
84
md
Markdown
vault/tn/HEB-x7px.md
mandolyte/uw-obsidian
39e987c4cdc49d2a68e3af6b4e3fc84d1cda916d
[ "MIT" ]
null
null
null
vault/tn/HEB-x7px.md
mandolyte/uw-obsidian
39e987c4cdc49d2a68e3af6b4e3fc84d1cda916d
[ "MIT" ]
null
null
null
vault/tn/HEB-x7px.md
mandolyte/uw-obsidian
39e987c4cdc49d2a68e3af6b4e3fc84d1cda916d
[ "MIT" ]
null
null
null
# Connecting Statement:

This is the first of five urgent warnings the author gives.
28
59
0.797619
eng_Latn
0.99998
3ee4d68ce482391b354ca3ede46f45fec81033c1
1,749
md
Markdown
Search/README.md
LihaoWang1991/LeetCode
391b3beeefe1d32c8a4935a66175ab94445a1160
[ "Apache-2.0" ]
null
null
null
Search/README.md
LihaoWang1991/LeetCode
391b3beeefe1d32c8a4935a66175ab94445a1160
[ "Apache-2.0" ]
null
null
null
Search/README.md
LihaoWang1991/LeetCode
391b3beeefe1d32c8a4935a66175ab94445a1160
[ "Apache-2.0" ]
null
null
null
Search
======

### BFS:

(a Python sketch of this pattern follows the lists below)

* [Problem 1091: Shortest Path in Binary Matrix](https://leetcode.com/problems/shortest-path-in-binary-matrix/)
* [Problem 279: Perfect Squares](https://leetcode.com/problems/perfect-squares/)
* [Problem 127: Word Ladder](https://leetcode.com/problems/word-ladder/)

### DFS:

* [Problem 695: Max Area of Island](https://leetcode.com/problems/max-area-of-island/)
* [Problem 200: Number of Islands](https://leetcode.com/problems/number-of-islands/)
* [Problem 547: Friend Circles](https://leetcode.com/problems/friend-circles/)
* [Problem 130: Surrounded Regions](https://leetcode.com/problems/surrounded-regions/)
* [Problem 417: Pacific Atlantic Water Flow](https://leetcode.com/problems/pacific-atlantic-water-flow/)

### Backtracking:

* [Problem 17: Letter Combinations of a Phone Number](https://leetcode.com/problems/letter-combinations-of-a-phone-number/)
* [Problem 93: Restore IP Addresses](https://leetcode.com/problems/restore-ip-addresses/)
* [Problem 79: Word Search](https://leetcode.com/problems/word-search/)
* [Problem 257: Binary Tree Paths](https://leetcode.com/problems/binary-tree-paths/)
* [Problem 46: Permutations](https://leetcode.com/problems/permutations/)
* [Problem 47: Permutations II](https://leetcode.com/problems/permutations-ii/)
* [Problem 77: Combinations](https://leetcode.com/problems/combinations/)
* [Problem 39: Combination Sum](https://leetcode.com/problems/combination-sum/)
* [Problem 40: Combination Sum II](https://leetcode.com/problems/combination-sum-ii/)
* [Problem 216: Combination Sum III](https://leetcode.com/problems/combination-sum-iii/)
* [Problem 78: Subsets](https://leetcode.com/problems/subsets/)
* [Problem 90: Subsets II](https://leetcode.com/problems/subsets-ii/)
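As a taste of the BFS pattern these problems share, here is a sketch in Python for Problem 1091 (shortest path in a binary matrix): a queue of frontier cells, a visited set, and 8-directional expansion. This is one straightforward solution, not the only one.

```python
"""BFS over a grid: shortest clear path from (0, 0) to (n-1, n-1), 8 directions."""
from collections import deque


def shortest_path_binary_matrix(grid):
    n = len(grid)
    if grid[0][0] or grid[n - 1][n - 1]:
        return -1                      # start or goal is blocked
    q = deque([(0, 0, 1)])             # (row, col, path length so far)
    seen = {(0, 0)}
    while q:
        r, c, d = q.popleft()
        if (r, c) == (n - 1, n - 1):
            return d                   # BFS reaches the goal via a shortest path
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < n and 0 <= nc < n
                        and not grid[nr][nc] and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    q.append((nr, nc, d + 1))
    return -1                          # goal unreachable


print(shortest_path_binary_matrix([[0, 1], [1, 0]]))  # -> 2
```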
62.464286
123
0.751858
yue_Hant
0.264445
3ee75bb935372b7916bd388a382b2864085ef13e
899
md
Markdown
content/projects.md
IgorVaryvoda/new-varyvoda.com
df805d439898407498496ad095792a87439fc9c2
[ "Apache-2.0" ]
null
null
null
content/projects.md
IgorVaryvoda/new-varyvoda.com
df805d439898407498496ad095792a87439fc9c2
[ "Apache-2.0" ]
null
null
null
content/projects.md
IgorVaryvoda/new-varyvoda.com
df805d439898407498496ad095792a87439fc9c2
[ "Apache-2.0" ]
null
null
null
---
template: page
title: Projects by Igor Varyvoda
slug: projects
draft: false
---

## UaHelp

<a href="https://www.uahelp.me" target="_blank"><img src="https://cdn.earthroulette.com/varyvoda/uahelp.png" alt="UaHelp screenshot"></a>

[A curated list of resources to help Ukraine](https://www.uahelp.me)

## Earth Roulette

<a href="https://earthroulette.com" target="_blank"><img src="https://iantiark.sirv.com/varyvoda/er.png" alt="Earth Roulette screenshot"></a>

A random travel destination generator. It's a pretty fun project which I thoroughly enjoyed creating from scratch :)

[Visit Earth Roulette](https://earthroulette.com)

## CryptoTracker

<a href="https://cryptotracker.xyz" target="_blank"><img src="https://iantiark.sirv.com/varyvoda/ct.png"></a>

Track cryptocurrency prices. Simple, yet good looking. Set it as a homepage.

[Visit CryptoTracker](https://cryptotracker.xyz)
37.458333
167
0.74416
eng_Latn
0.338705
3ee8b33b1c9e2acdf44332e8822405e6dc313faf
146
md
Markdown
README.md
michelescandola/BayesianCorrelations
b5e4867e501bd221f136816fa4bbb39328065b81
[ "MIT" ]
null
null
null
README.md
michelescandola/BayesianCorrelations
b5e4867e501bd221f136816fa4bbb39328065b81
[ "MIT" ]
null
null
null
README.md
michelescandola/BayesianCorrelations
b5e4867e501bd221f136816fa4bbb39328065b81
[ "MIT" ]
null
null
null
# BayesianCorrelations

Here is a list of Stan scripts to compute Bayesian correlations in Stan.

More info at https://michelescandola.netlify.app/
24.333333
69
0.808219
eng_Latn
0.549938
3ee8cf08ca5b9211197b539ef30cabecc571395a
11,874
md
Markdown
applications/ori.md
syntifi/Grants-Program
879bee4f9bdfa6a022a40f519be8d676bfb79b34
[ "Apache-2.0" ]
null
null
null
applications/ori.md
syntifi/Grants-Program
879bee4f9bdfa6a022a40f519be8d676bfb79b34
[ "Apache-2.0" ]
null
null
null
applications/ori.md
syntifi/Grants-Program
879bee4f9bdfa6a022a40f519be8d676bfb79b34
[ "Apache-2.0" ]
null
null
null
# W3F Grant Proposal

- **Project Name:** ORI (Onchain Risk Intelligence)
- **Team Name:** SyntiFi
- **Payment Address:** 0x5E89f8d81C74E311458277EA1Be3d3247c7cd7D1 (USDT)
- **[Level](https://github.com/w3f/Grants-Program/tree/master#level_slider-levels):** 2

## Project Overview :page_facing_up:

The main issue in fighting financial crime is that traditional and crypto financial institutions manage their own Anti Money Laundering (AML) processes independently. These same institutions continue to invest enormous resources in INTERNAL systems to fight financial crime and fulfill regulatory requirements. Yet financial crime occurs across institutions and jurisdictions, in both the crypto and fiat worlds. Closer collaboration between the market players is fundamental, but privacy regulations and banking secrecy rules prevent data sharing. The result is an industry on the verge of a revolution that still relies on regulatory processes of the last century.

Blockchain technology solves some of these problems by providing a secure and reliable ledger to all stakeholders. SyntiFi is here to close the gap and provide on-chain tools to investigate and monitor financial transactions stored across chains.

### Overview

SyntiFi is actively developing ORI, an On-chain Risk Intelligence tool to fight and prevent financial crime. We believe that with the great power of decentralized finance comes the great responsibility of ensuring that no financial crime, money laundering or terrorist financing is taking place. For this reason, some crucial capabilities of our tool are made available open source. In this grant we ask for support to add Polkadot to our growing ecosystem of chains and tokens.

### Project Details

In a nutshell, this project is composed of two main steps:

1. the implementation of a job to crawl and index the Polkadot chain and persist the information according to the ORI data model (Token, Block, Account and Transaction);
2. the development of a REST API with resources to query the indexed chain, trace the coin and rate a given account according to some AML metrics.

The tool is implemented in Java and uses the microservice framework Quarkus, as well as Spring jobs for the crawler. Up until now we were indexing directly on Elasticsearch, but we are currently changing it to use PostgreSQL as the persistence layer together with Elasticsearch for efficient search (more precisely, we use a hibernate-elastic-search module to facilitate the sync between the SQL DB and Elasticsearch).

Alternatively, we could have decided to use any of the already available indexers for the different chains. The issue is that these indexers often store the whole information of the blocks and Merkle trees, but we just need a smaller portion of that. For this reason we decided to create our own simple crawler, to index just what we need, and to replicate that for the different chains. We believe that in doing so we can facilitate the integration of other chains into the system.

ORI also comes with a front end implemented in React. The further development/improvement of the front end is **NOT** part of this proposal.

For more details please check our Git repository: [ORI](https://www.github.com/syntifi/ori)

### Ecosystem Fit

Following compliance rules and ensuring that no financial crime is taking place is paramount to the success of any DeFi application. In fact, governments are starting to tighten regulatory requirements for crypto-related transactions. According to [Reuters](https://www.reuters.com/technology/eu-tighten-rules-cryptoasset-transfers-2021-07-20/), “Companies that transfer BTC or other crypto-assets must collect details of senders and recipients to help authorities crack down on dirty money, EU policymakers proposed on Tuesday in the latest efforts to tighten regulation of the sector. The law, which is one of the recommendations of the inter-governmental watchdog, the Financial Action Task Force (FATF), already applies to wire transfers. Providing anonymous crypto-asset wallets will also be prohibited, just as anonymous bank accounts are already banned under EU anti-money laundering rules.”

That being said, we believe that applications on Polkadot could use an open-source tool such as ORI to facilitate the job of their compliance teams when fulfilling regulatory requirements.

## Team :busts_in_silhouette:

### Team members

- Andre Bertolace
- Remo Stieger
- Alexandre Carvalho

### Contact

- **Contact Name:** Andre Bertolace
- **Contact Email:** [email protected]
- **Website:** www.syntifi.com

### Legal Structure

- **Registered Address:** Baarerstasse 10, 6300 Zug
- **Registered Legal Entity:** SyntiFi GmbH

### Team's experience

Andre is an entrepreneur with an engineering background who acquired over 15+ years of experience in the financial industry in several positions and institutions, always in a quantitative analyst, trader or developer role. He founded a fintech start-up delivering financial data analytics to wealth and asset managers.

Remo has 15+ years of financial markets experience, having worked in investment banking where he held various leading positions at global financial institutions. Remo previously co-founded a technology start-up solving the legal and technological challenges of inheriting digital assets by leveraging a blockchain-based ecosystem.

Alexandre is an electrical engineer with more than 15 years working in IT. His focus has been on enterprise software development, acting as solution architect, software engineer and developer in both the private and public sector. Alexandre works with a team of diverse specialties building strategic systems while implementing DevOps practices and infrastructure. As a self-learner, he is always experimenting with new and emerging tech.

### Team Code Repos

- https://github.com/syntifi
- https://github.com/syntifi/ori
- https://github.com/syntif/casper-sdk
- https://github.com/syntif/near-java-api

Please also provide the GitHub accounts of all team members. If they contain no activity, references to projects hosted elsewhere or live are also fine.

- https://github.com/AB3rt0
- https://github.com/AB3rtz
- https://github.com/oak

### Team LinkedIn Profiles (if available)

- https://www.linkedin.com/in/andre-bertolace-87983426/
- https://www.linkedin.com/in/remostieger/

## Development Status :open_book:

ORI is an active/ongoing project. Please take a look at [ORI](https://github.com/syntifi/ori) for the latest development. At the moment, we are in the process of finalizing a front end and a dashboard, as well as refactoring the back-end code.

## Development Roadmap :nut_and_bolt:

### Overview

- **Total Estimated Duration:** 3 months
- **Full-Time Equivalent (FTE):** 1.5 FTE
- **Total Costs:** 49'500 USD

### Milestone 1 — Index the Polkadot chain

- **Estimated duration:** 1 month
- **FTE:** 1.5
- **Costs:** 20'500 USD

| Number | Deliverable | Specification |
| -----: | ----------- | ------------- |
| 0a. | License | Apache 2.0 |
| 0b. | Documentation | We will provide both **low-level/inline documentation** of the code and a basic **tutorial** that explains how a user can crawl the Polkadot chain and populate the database for later use. |
| 0c. | Testing Guide | Core functions will be fully covered by unit tests to ensure functionality and robustness. In the guide, we will describe how to run these tests. |
| 0d. | Docker | We will provide Dockerfile(s) that can be used to test all the functionality delivered with this milestone. |
| 1. | Polkadot Crawler Job | We will create a Spring batch job to crawl the Polkadot chain and populate the transaction/account tables in our indexed database |
| 2. | Polkadot Updater Job | We will create a Spring batch job to update the transaction/account tables in our indexed database with the latest transactions in the Polkadot network |

### Milestone 2 — REST API to query the indexed chain

- **Estimated duration:** 1 month
- **FTE:** 1.5
- **Costs:** 12'500 USD

| Number | Deliverable | Specification |
| -----: | ----------- | ------------- |
| 0a. | License | Apache 2.0 |
| 0b. | Documentation | We will provide both **low-level/inline documentation** of the code and a basic **tutorial** that explains how a user can crawl the Polkadot chain and populate the database for later use. |
| 0c. | Testing Guide | Core functions will be fully covered by unit tests to ensure functionality and robustness. In the guide, we will describe how to run these tests. |
| 0d. | Docker | We will provide Dockerfile(s) that can be used to test all the functionality delivered with this milestone. |
| 1. | Data model | Block, Account and Transaction POJO models |
| 2. | API endpoint: *account/* | GET and POST methods to list accounts and create a new account |
| 3. | API endpoint: *account/hash/{hash}* | GET and DELETE methods to retrieve and remove a specific account given the hash |
| 4. | API endpoint: *block/* | GET method to list blocks currently in the system |
| 5. | API endpoint: *block/hash/{hash}* | GET and DELETE methods to retrieve and remove a specific block given the hash |
| 6. | API endpoint: *block/parent/{hash}* | POST method to add a new block given the parent block hash |
| 7. | API endpoint: *transaction/* | GET method to list transactions currently in the system |
| 8. | API endpoint: *transaction/account/{account}* | GET method to list the transactions for a given account |
| 9. | API endpoint: *transaction/hash/{hash}* | GET method to retrieve a transaction given the transaction hash |
| 10. | API endpoint: *transaction/hash/{hash}* | DELETE method to remove a transaction given the hash |
| 11. | API endpoint: *transaction/incoming/account/{account}* | GET method to list the incoming transactions to the given account |
| 12. | API endpoint: *transaction/outgoing/account/{account}* | GET method to list the outgoing transactions from the given account |
| 13. | API endpoint: *block/{hash}/from/{from}/to/{to}* | POST method to add a new transaction registered on the given block hash, from one account to another |

### Milestone 3 — REST API to trace the coin

- **Estimated duration:** 1 month
- **FTE:** 1.5
- **Costs:** 16'500 USD

| Number | Deliverable | Specification |
| -----: | ----------- | ------------- |
| 0a. | License | Apache 2.0 |
| 0b. | Documentation | We will provide both **low-level/inline documentation** of the code and a basic **tutorial** that explains how a user can crawl the Polkadot chain and populate the database for later use. |
| 0c. | Testing Guide | Core functions will be fully covered by unit tests to ensure functionality and robustness. In the guide, we will describe how to run these tests. |
| 0d. | Docker | We will provide Dockerfile(s) that can be used to test all the functionality delivered with this milestone. |
| 0e. | Article | We will publish a **blog entry** highlighting the addition of another chain to the group of chains covered by ORI. |
| 1. | API endpoint: */score/{account}* | GET method to retrieve certain AML scores (structuring over time, unusual outgoing volume, unusual behavior score and flow-through score) for a given account |
| 2. | API endpoint: */trace/back/{account}* | GET method to trace back the origin of the coin given a date/time and an account |
| 3. | API endpoint: */trace/forward/{account}* | GET method to trace forward the destination of the coin given a date/time and an account |

## Future Plans

- Short-term: add the major chains and tokens to ORI (BTC, ETH, ...)
- Mid-term: run the platform as a service for DeFi apps that need compliance tools

## Additional Information :heavy_plus_sign:

**How did you hear about the Grants Program?** Web3 Foundation Website
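To make the endpoint tables above concrete, here is a hypothetical client-side sketch in Python against a placeholder ORI deployment. The service itself is a Java/Quarkus application; the base URL and the assumption that the endpoints return JSON are illustrative, not confirmed by the proposal.

```python
"""Illustrative client for the proposed ORI REST endpoints (assumed base URL)."""
import requests

BASE = "http://localhost:8080"  # placeholder for an ORI deployment


def get_transaction(tx_hash):
    """GET transaction/hash/{hash}: fetch one transaction by its hash."""
    r = requests.get(f"{BASE}/transaction/hash/{tx_hash}")
    r.raise_for_status()
    return r.json()


def trace_back(account):
    """GET /trace/back/{account}: trace the origin of the coin for an account."""
    r = requests.get(f"{BASE}/trace/back/{account}")
    r.raise_for_status()
    return r.json()


if __name__ == "__main__":
    print(trace_back("some-account-hash"))  # placeholder account identifier
```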
50.101266
213
0.756864
eng_Latn
0.994625
3ee91ffa0aee43ed8228f4a745714d0d1aa4d91e
302
md
Markdown
includes/iot-hub-pii-note-naming-device.md
grayknight2/mc-docs.zh-cn
dc705774cac09f2b3eaeec3c0ecc17148604133e
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/iot-hub-pii-note-naming-device.md
grayknight2/mc-docs.zh-cn
dc705774cac09f2b3eaeec3c0ecc17148604133e
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/iot-hub-pii-note-naming-device.md
grayknight2/mc-docs.zh-cn
dc705774cac09f2b3eaeec3c0ecc17148604133e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
ms.openlocfilehash: ceca1d1061b80b7d117cfc775bf3f6930c761e48
ms.sourcegitcommit: c1ba5a62f30ac0a3acb337fb77431de6493e6096
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/17/2020
ms.locfileid: "63823842"
---
> [!IMPORTANT]
> Device IDs may appear in logs collected for customer support and troubleshooting, so be sure to avoid including any sensitive information when naming your devices.
>
27.454545
60
0.824503
yue_Hant
0.634186
3ee9e84d9483e2b9fd0af15a75edd50859223069
724
md
Markdown
README.md
nadhirvince/Boston-Housing-Prices
93546e42fce4736282e33b036cd742f0c7a3bc59
[ "MIT" ]
null
null
null
README.md
nadhirvince/Boston-Housing-Prices
93546e42fce4736282e33b036cd742f0c7a3bc59
[ "MIT" ]
null
null
null
README.md
nadhirvince/Boston-Housing-Prices
93546e42fce4736282e33b036cd742f0c7a3bc59
[ "MIT" ]
null
null
null
# Boston-Housing-Prices
![CI status](https://img.shields.io/badge/build-passing-brightgreen.svg)

## Statistical Analysis of Housing Prices

### Requirements

* Linux
* Python 3.3 and up

`$ pip install foobar`

## Usage

```python
import foobar

foobar.pluralize('word')        # returns 'words'
foobar.pluralize('goose')       # returns 'geese'
foobar.singularize('phenomena') # returns 'phenomenon'
```

## Development

```
$ virtualenv foobar
$ . foobar/bin/activate
$ pip install -e .
```

## Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

## License

[MIT](https://choosealicense.com/licenses/mit/)
19.052632
114
0.726519
eng_Latn
0.88506
3eea3ceeb299fe90f02654932a63aba29e0c6526
1,860
md
Markdown
README.md
erkandem/ogame-model
2e18f7d0b8d177a6cd55a95fd236861d9fbb8d1e
[ "MIT" ]
null
null
null
README.md
erkandem/ogame-model
2e18f7d0b8d177a6cd55a95fd236861d9fbb8d1e
[ "MIT" ]
null
null
null
README.md
erkandem/ogame-model
2e18f7d0b8d177a6cd55a95fd236861d9fbb8d1e
[ "MIT" ]
null
null
null
# ogame-model

Python representation of objects in ogame

* * *

## about

This code base contains objects which model the in-game objects of `ogame.org`. Yes, it is a stupid "game". But still quite interesting. I created it as an experiment to calculate mine production, but coding escalated quickly.

## how do I use it?

Not ready to use. I created it as an experiment; it needs your help, ideas, and guidance to become something meaningful.

## construction sites

#### state

Options include a database (see the `db` sub folder), a simple dictionary (see `config.py`) which plays nicely with JSON, or something completely different.

#### server specific settings

Object properties are not constant among all universes/servers - the macroscopic unit of the game. That's why I avoided hard-coding them into the objects. But something messy like in `properties.py` and `ogame.py` isn't a solution either. There must be a better way (one possible direction is sketched below).

#### tests

They are mostly missing. Really a shame.

#### and many more

Really, it's not ready.

## other projects

A quick search on GitHub reveals that most other projects are written either in JavaScript or PHP. My weapon of choice is Python.

|project| python version| description|
|----|----|----|
|[alaingilbert/pyogame](https://github.com/alaingilbert/pyogame) | python3 | API to interact with the account |
|[esp1337/ogame-testing](https://github.com/esp1337/ogame-testing) | python2 | undocumented |
|[erkandem/ogame-stats](https://github.com/erkandem/ogame-stats) | python3 | public game statistics API |
|[erkandem/ogame-raid-radar](https://github.com/erkandem/ogame-raid-radar) | python3 | sample application using the statistics API |

Thanks to the work by Alain Gilbert (https://github.com/alaingilbert). I don't remember where I took most of the constants used in attributes (e.g. prices) from. Credit to the website I've forgotten.
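One possible direction for the "server specific settings" construction site, sketched in Python: keep universe-specific multipliers in a small settings object that is passed into the game objects instead of being hard-coded on them. The class and field names here are hypothetical, not the ones in `properties.py`/`ogame.py`, and the production formula is the commonly cited metal-mine formula, used only as an example.

```python
"""Sketch: universe settings passed in, not baked into the objects."""
from dataclasses import dataclass


@dataclass(frozen=True)
class ServerSettings:
    economy_speed: int = 1  # universes multiply production by this


@dataclass
class MetalMine:
    level: int

    def hourly_production(self, settings: ServerSettings) -> float:
        # commonly cited base formula, scaled by the universe's economy speed
        return settings.economy_speed * 30 * self.level * 1.1 ** self.level


# The same MetalMine object works on a 1x and a 4x universe.
print(MetalMine(level=10).hourly_production(ServerSettings(economy_speed=4)))
```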
37.959184
132
0.75
eng_Latn
0.994842
3eea6fc7e1be01816868f2fd822cf868e48c5df6
43
md
Markdown
README.md
bobbyroe/graphql-lambda
ecd641306a6d3d621018f354eaefd63373a93a80
[ "MIT" ]
null
null
null
README.md
bobbyroe/graphql-lambda
ecd641306a6d3d621018f354eaefd63373a93a80
[ "MIT" ]
null
null
null
README.md
bobbyroe/graphql-lambda
ecd641306a6d3d621018f354eaefd63373a93a80
[ "MIT" ]
null
null
null
# graphql-lambda

getting comfy with Lambda
14.333333
25
0.813953
eng_Latn
0.976737
3eea81c2d593ec271328a9c4b3b7e6347e6bd00b
2,115
md
Markdown
_posts/2018-02-14-speyside-03.md
whiskeybobby/whiskeybobby.github.io
887f7af49a3092de0f6268c90a726c803a281023
[ "MIT" ]
null
null
null
_posts/2018-02-14-speyside-03.md
whiskeybobby/whiskeybobby.github.io
887f7af49a3092de0f6268c90a726c803a281023
[ "MIT" ]
null
null
null
_posts/2018-02-14-speyside-03.md
whiskeybobby/whiskeybobby.github.io
887f7af49a3092de0f6268c90a726c803a281023
[ "MIT" ]
null
null
null
---
layout: post
title: You Spey, I Spey - Part 3
date: 2018-02-14
category: blog
tags: [Scotch, Speyside, Glenlivet 12, Glenfiddich 12]
---

Cracking open two more Speyside 50 mL samplers today - Glenlivet 12 and Glenfiddich 12. I get these confused all the time. I poured 15 mL of each into Glencairns and promptly forgot which was which. This bodes well for the tasting...

Contrary to my usual taste-first style, I'm going to grab the tasting notes from the respective websites to see if I can figure out which is which.

### Glenlivet 12

From the [official website](https://www.theglenlivet.com/en-us/the-glenlivet-12-year-old/):

* 40% ABV
* Cask: Traditional and American oak
* Flavor: Delicately balanced with strong pineapple notes
* Colour: Bright, vibrant gold
* Nose: Fruity and summery
* Palate: Well balanced and fruity, with strong pineapple notes
* Finish: Creamy, smooth, with marzipan and fresh hazelnuts
* $5.29/50mL -> $1.59/15mL pour

### Glenfiddich 12

From the [official website](https://www.glenfiddich.com/us/collection/product-collection/core-range/12-year-old/):

* 40% ABV
* Fresh pear, subtle oak
* $5.99/50mL -> $1.80/15mL pour

>With a unique freshness from the same Highland spring water we’ve used since 1887, its distinctive fruitiness comes from the high cut point William Grant always insisted upon.

>Carefully matured in the finest American oak and European oak sherry casks for at least 12 years, it is mellowed in oak marrying tuns to create its sweet and subtle oak flavours.

>Creamy with a long, smooth and mellow finish, our 12 Year Old is the perfect example of Glenfiddich’s unique Speyside style and is widely proclaimed the best dram in the valley.

### My take

I'm trying to pick out the "strong pineapple notes" but can't find them. Both are light and fruity. One has a bit more woodiness and kick on the back end. That one might be the Glenfiddich, with the "subtle oak" description. Neither one has a terribly long finish. These seem really interchangeable upon first taste. Maybe a couple more samplings will reveal more differences...

Whisky Bob signing off.
49.186047
377
0.768794
eng_Latn
0.996207
3eeab6df35082052f8738c39ab12edd0e61f7907
65
md
Markdown
README.md
All-the-stuff/stress_test.sh
1ad4e5a740fc5d5697fde228d10593914607b5ac
[ "MIT" ]
null
null
null
README.md
All-the-stuff/stress_test.sh
1ad4e5a740fc5d5697fde228d10593914607b5ac
[ "MIT" ]
null
null
null
README.md
All-the-stuff/stress_test.sh
1ad4e5a740fc5d5697fde228d10593914607b5ac
[ "MIT" ]
null
null
null
# stress_test.sh

Stress test the overclocking on a Raspberry Pi.
21.666667
47
0.8
eng_Latn
0.984224
3eec6302147692d771068d48e1d3d56ecf8fda3c
173
md
Markdown
README.md
ramesh-kamath/hello-world
490cd6c5d413087e105a4baf43015007fde6ddc1
[ "MIT" ]
null
null
null
README.md
ramesh-kamath/hello-world
490cd6c5d413087e105a4baf43015007fde6ddc1
[ "MIT" ]
null
null
null
README.md
ramesh-kamath/hello-world
490cd6c5d413087e105a4baf43015007fde6ddc1
[ "MIT" ]
null
null
null
# hello-world

This is a repository created to learn how GitHub works. I am a newbie to GitHub and that is the reason I am trying out this Hello World project as suggested.
28.833333
101
0.780347
eng_Latn
0.999962
3eede6c724a5e009888789b294d8f940e8057953
1,171
md
Markdown
README.md
ito-soft-design/isd-color-palette
f3939494c38369e2c3e6e2050afdcb7ce5dc7073
[ "MIT" ]
null
null
null
README.md
ito-soft-design/isd-color-palette
f3939494c38369e2c3e6e2050afdcb7ce5dc7073
[ "MIT" ]
null
null
null
README.md
ito-soft-design/isd-color-palette
f3939494c38369e2c3e6e2050afdcb7ce5dc7073
[ "MIT" ]
null
null
null
# ISDColorPalette

ISDColorPalette is a color selection panel for RubyMotion iOS apps.

![Screenshot](https://raw.github.com/ito-soft-design/isd-color-palette/master/screenshots/screenshot-0.1.1.png)

## Installation

```ruby
# in Gemfile
gem 'isd-color-palette'
```

ISDColorPalette uses Sugarcube and BubbleWrap.

```ruby
# in Gemfile
gem 'bubble-wrap'

# minimum set
gem 'sugarcube', :require => [
  'sugarcube-core',
  'sugarcube-localized',
  'sugarcube-color',
  'sugarcube-uikit',
  'sugarcube-nsuserdefaults'
]
```

## Usage

```ruby
# attr_accessor :color

# get a controller.
c = ISDColorPaletteViewController.colorPaletteViewController

# set an initial color.
c.selectedColor = self.color

# set a callback.
c.selected_color_block = Proc.new { |color|
  did_select_color color
}

# push the controller.
self.navigationController.pushViewController c, animated:true
```

If you want to get nil instead of the clear color, set `#return_nil` to true.

```ruby
c.return_nil = true
```

The `selected_color_block` is called after a color is selected.

```ruby
def did_select_color color
  p color
  self.color = color
end
```

## License

MIT License
17.742424
111
0.721605
eng_Latn
0.510778
3eee75d2df1a8a93ef3e75d108d5ec6ff6f147f0
1,555
md
Markdown
settings.md
shreyas-dev/CueObserve
c573f6f5f2725696b3fceabf9955d9013fe3d77d
[ "Apache-2.0" ]
null
null
null
settings.md
shreyas-dev/CueObserve
c573f6f5f2725696b3fceabf9955d9013fe3d77d
[ "Apache-2.0" ]
null
null
null
settings.md
shreyas-dev/CueObserve
c573f6f5f2725696b3fceabf9955d9013fe3d77d
[ "Apache-2.0" ]
null
null
null
# Settings

## Slack

CueObserve can send two types of Slack alerts:

1. Anomaly alerts are sent when an anomaly is detected
2. App Monitoring alerts are sent when an anomaly detection job fails

To get these alerts, enter your Slack Bot User OAuth Access Token. To create a Slack Bot User OAuth Access Token, follow the steps outlined in the [Slack documentation](https://api.slack.com/messaging/webhooks).

1. Create a Slack app.
2. Once you create the app, you will be redirected to your app’s `Basic Information` screen. In `Add features and functionality`, click on `Bots`.
3. On the next screen, click on `add a scope` and you will be redirected to the OAuth & Permissions page.
4. On the next screen, go to the Scopes section and click on `Add an OAuth Scope` to add the `files:write` and `chat:write` permissions. Then click on `Install to Workspace` to create the `Bot User OAuth Token`.
5. Copy the `Bot User OAuth Token` and paste it in the CueObserve Settings screen (a quick way to test the token is sketched below).

Next, create two channels in Slack and add the app to these two channels.

1. To find your Slack channel's ID, right-click the channel in Slack and then click on `Open channel details`. You'll find the channel ID at the bottom. Copy and paste it in the CueObserve Settings screen.
2. Click on the `Save` button.

## Email

1. Make sure you enabled email alerts during installation.
2. Add an email ID to the `Send Email To` input field. If you have to add more than one email ID, separate them with commas in the input field, as shown below.

![](.gitbook/assets/screenshot-from-2021-08-26-17-52-09.png)
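If you want to sanity-check the bot token and channel ID outside CueObserve, a short script using the official `slack_sdk` Python package will do; the token and channel values below are placeholders, and this is not CueObserve's own alerting code.

```python
"""Post a test message with the Bot User OAuth Token and channel ID."""
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token="xoxb-your-token-here")  # Bot User OAuth Token

try:
    # Requires the chat:write scope and the bot added to the channel.
    client.chat_postMessage(channel="C0123456789", text="CueObserve alert test")
except SlackApiError as e:
    print("Slack rejected the call:", e.response["error"])
```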
48.59375
207
0.75627
eng_Latn
0.987012
3eeef4a67e2db2da9da343491c2effa65648789c
5,301
md
Markdown
articles/sql-database/sql-database-elastic-pool-manage-tsql.md
OpenLocalizationTestOrg/azure-docs-pr15_hu-HU
ac1600ab65c96c83848e8b2445ac60e910561a25
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-elastic-pool-manage-tsql.md
OpenLocalizationTestOrg/azure-docs-pr15_hu-HU
ac1600ab65c96c83848e8b2445ac60e910561a25
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-elastic-pool-manage-tsql.md
OpenLocalizationTestOrg/azure-docs-pr15_hu-HU
ac1600ab65c96c83848e8b2445ac60e910561a25
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties
    pageTitle="Create an Azure SQL database in an elastic pool, or move it in and out of one, with T-SQL | Microsoft Azure"
    description="Use T-SQL to create an Azure SQL database in an elastic pool, or to move a database into and out of elastic pools with T-SQL."
    services="sql-database"
    documentationCenter=""
    authors="srinia"
    manager="jhubbard"
    editor=""/>

<tags
    ms.service="sql-database"
    ms.devlang="NA"
    ms.topic="article"
    ms.tgt_pltfrm="NA"
    ms.workload="data-management"
    ms.date="05/27/2016"
    ms.author="srinia"/>

# <a name="monitor-and-manage-an-elastic-database-pool-with-transact-sql"></a>Monitor and manage an elastic database pool with Transact-SQL

> [AZURE.SELECTOR]
- [Azure portal](sql-database-elastic-pool-manage-portal.md)
- [PowerShell](sql-database-elastic-pool-manage-powershell.md)
- [C#](sql-database-elastic-pool-manage-csharp.md)
- [T-SQL](sql-database-elastic-pool-manage-tsql.md)

Use the [CREATE DATABASE (Azure SQL Database)](https://msdn.microsoft.com/library/dn268335.aspx) and [ALTER DATABASE (Azure SQL Database)](https://msdn.microsoft.com/library/mt574871.aspx) commands to create databases in elastic pools and to move databases into and out of elastic pools. The elastic pool must exist before you can use these commands. These commands apply only to databases. New pools cannot be created, and pool properties (such as min and max eDTUs) cannot be set, with T-SQL commands.

## <a name="create-a-new-database-in-an-elastic-pool"></a>Create a new database in an elastic pool

Use the CREATE DATABASE command with the SERVICE_OBJECTIVE option.

    CREATE DATABASE db1 ( SERVICE_OBJECTIVE = ELASTIC_POOL (name = [S3M100] ));
    -- Create a database named db1 in a pool named S3M100.

Databases in an elastic pool inherit the service tier of the elastic pool (Basic, Standard, Premium).

## <a name="move-a-database-between-elastic-pools"></a>Move a database between elastic pools

Use the ALTER DATABASE command with the MODIFY option, and set SERVICE_OBJECTIVE to ELASTIC_POOL; set the name to the name of the target pool.

    ALTER DATABASE db1 MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL (name = [PM125] ));
    -- Move the database named db1 to a pool named P1M125

## <a name="move-a-database-into-an-elastic-pool"></a>Move a database into an elastic pool

Use the ALTER DATABASE command with the MODIFY option, and set SERVICE_OBJECTIVE to ELASTIC_POOL; set the name to the name of the target pool.

    ALTER DATABASE db1 MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL (name = [S3100] ));
    -- Move the database named db1 to a pool named S3100.

## <a name="move-a-database-out-of-an-elastic-pool"></a>Move a database out of an elastic pool

Use the ALTER DATABASE command and set the SERVICE_OBJECTIVE to one of the performance levels (such as S0 or S1).

    ALTER DATABASE db1 MODIFY ( SERVICE_OBJECTIVE = 'S1');
    -- Changes the database into a stand-alone database with the service objective S1.

## <a name="list-databases-in-an-elastic-pool"></a>List databases in an elastic pool

Use the [sys.database_service_objectives view](https://msdn.microsoft.com/library/mt712619) to list the databases in an elastic pool. Log in to the master database to query the view.

    SELECT d.name, slo.*
    FROM sys.databases d
    JOIN sys.database_service_objectives slo
    ON d.database_id = slo.database_id
    WHERE elastic_pool_name = 'MyElasticPool';

## <a name="get-resource-usage-data-for-a-pool"></a>Get resource usage data for a pool

Use the [sys.elastic_pool_resource_stats view](https://msdn.microsoft.com/library/mt280062.aspx) to examine the resource usage statistics of an elastic pool on a logical server. Log in to the master database to query the view.

    SELECT * FROM sys.elastic_pool_resource_stats
    WHERE elastic_pool_name = 'MyElasticPool'
    ORDER BY end_time DESC;

## <a name="get-resource-usage-for-an-elastic-database"></a>Get resource usage for an elastic database

Use the [sys.dm_db_resource_stats view](https://msdn.microsoft.com/library/dn800981.aspx) or the [sys.resource_stats view](https://msdn.microsoft.com/library/dn269979.aspx) to examine the resource usage statistics of a database in an elastic pool. This process is similar to querying resource usage for any single database.

## <a name="next-steps"></a>Next steps

After creating an elastic database pool, you can manage the elastic databases in the pool by creating elastic jobs. Elastic jobs facilitate running T-SQL scripts against any number of databases in the pool. For more information, see [Elastic database jobs overview](sql-database-elastic-jobs-overview.md).

See [Scaling out with Azure SQL Database](sql-database-elastic-scale-introduction.md): use elastic database tools to scale out, move data, query, or create transactions.
64.646341
542
0.777966
hun_Latn
0.999946
3ef00dd5336293e125393eecca1b287c587ee201
8,141
md
Markdown
articles/supply-chain/warehousing/packing-vs-storage-dimensions.md
MicrosoftDocs/Dynamics-365-Operations.pl-pl
fabc82553d43158349e740e44634860e5c927b6d
[ "CC-BY-4.0", "MIT" ]
5
2020-05-18T17:14:43.000Z
2022-03-02T03:47:15.000Z
articles/supply-chain/warehousing/packing-vs-storage-dimensions.md
MicrosoftDocs/Dynamics-365-Operations.pl-pl
fabc82553d43158349e740e44634860e5c927b6d
[ "CC-BY-4.0", "MIT" ]
8
2017-12-12T13:01:05.000Z
2021-01-17T16:41:42.000Z
articles/supply-chain/warehousing/packing-vs-storage-dimensions.md
MicrosoftDocs/Dynamics-365-Operations.pl-pl
fabc82553d43158349e740e44634860e5c927b6d
[ "CC-BY-4.0", "MIT" ]
4
2019-10-12T18:17:43.000Z
2021-01-17T16:37:51.000Z
---
title: Set different dimensions for packing and storage
description: This topic shows how to specify which process (packing, storage, or nested packing) each given dimension is used for.
author: mirzaab
ms.date: 01/28/2021
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: EcoResPhysicalProductDimensions, WHSPhysDimUOM
audience: Application User
ms.reviewer: kamaybac
ms.search.scope: Core, Operations
ms.search.region: Global
ms.author: mirzaab
ms.search.validFrom: 2021-01-28
ms.dyn365.ops.version: 10.0.17
ms.openlocfilehash: 0e8ce576f21f1f5ea5f3acb7d43bbe68826e6f39
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: pl-PL
ms.lasthandoff: 09/29/2021
ms.locfileid: "7580079"
---

# <a name="set-different-dimensions-for-packing-and-storage"></a>Set different dimensions for packing and storage

[!include [banner](../../includes/banner.md)]

Some items are packed or stored in such a way that their physical dimensions may need to be tracked differently for each of several different processes. The *Packaging product dimensions* feature lets you set up one or more dimension types for each product. Each dimension type has its own set of physical measurements (weight, width, depth, and height) and establishes the process where those measurement values apply.

When this feature is turned on, the system supports the following dimension types:

- *Storage* - Storage dimensions are used together with location volumetric data to determine how much of each item can be stored in different warehouse locations.
- *Packing* - Packing dimensions are used during containerization and the manual packing process to determine how much of each item fits into different container types.
- *Nested packing* - Nested packing dimensions are used when the packing process includes multiple levels.

*Storage* dimensions are supported even when the *Packaging product dimensions* feature is not turned on. They are set up on the **Physical dimensions** page in Supply Chain Management and are used by every process for which no packing or nested packing dimensions are specified. *Packing* and *nested packing* dimensions are set up on the **Physical dimensions of products** page, which is added when the *Packaging product dimensions* feature is turned on.

This topic includes a scenario that illustrates how to use the feature.

## <a name="turn-on-the-packaging-product-dimensions-feature"></a>Turn on the Packaging product dimensions feature

Before you can use this feature, it must be turned on in your system. Admins can use the [Feature management](../../fin-ops-core/fin-ops/get-started/feature-management/feature-management-overview.md) workspace to check the status of the feature and turn it on if required. The feature is listed as follows:

- **Module:** *Warehouse management*
- **Feature name:** *Packaging product dimensions*

## <a name="example-scenario"></a>Example scenario

### <a name="set-up-the-scenario"></a>Set up the scenario

Before you run the example scenario, prepare the system as described in this section.

#### <a name="enable-demo-data"></a>Enable demo data

To work through this scenario using the specific demo records and values given here, you must use a system where the standard [demo data](../../fin-ops-core/dev-itpro/deployment/deploy-demo-environment.md) is installed. You must also select the *USMF* company before you begin.

#### <a name="add-a-new-physical-dimension-to-a-product"></a>Add a new physical dimension to a product

Add a new physical dimension for a product by following these steps:

1. Go to **Product information management \> Products \> Released products**.
1. Select the product with **Item number** *A0001*.
1. On the Action Pane, open the **Manage inventory** tab and, in the **Warehouse** group, select **Physical dimensions of products**.
1. The **Physical dimensions of products** page opens. On the Action Pane, select **New** to add a new dimension to the grid, using the following settings:

    - **Physical dimension type** - *Packing*
    - **Physical unit** - *pcs*
    - **Weight** - *4*
    - **Weight unit** - *kg*
    - **Depth** - *3*
    - **Height** - *4*
    - **Width** - *3*
    - **Dimension unit** - *cm*
    - **Volume unit** - *cm3*

    The **Volume** field is calculated automatically from the **Depth**, **Height**, and **Width** settings.

#### <a name="create-a-new-container-type"></a>Create a new container type

Go to **Warehouse management \> Setup \> Containers \> Container types** and create a new record with the following settings:

- **Container type code** - *Short box*
- **Description** - *Short box*
- **Maximum net weight** - *50*
- **Volume** - *144*
- **Length** - *6*
- **Width** - *6*
- **Height** - *4*

#### <a name="create-a-container-group"></a>Create a container group

Go to **Warehouse management \> Setup \> Containers \> Container groups** and create a new record with the following settings:

- **Container group ID** - *Short box*
- **Description** - *Short box*

Add a new line in the **Details** section and set the **Container type** to *Short box*.

#### <a name="set-up-a-container-build-template"></a>Set up a container build template

Go to **Warehouse management \> Setup \> Containers \> Container build templates** and select **Boxes**. Change the **Container group ID** to *Short box*.

### <a name="run-the-scenario"></a>Run the scenario

After the system has been prepared as described in the previous section, you can run the scenario described in the next section.

#### <a name="create-a-sales-order-and-create-a-shipment"></a>Create a sales order and create a shipment

In this procedure you create a shipment based on the *packing* dimensions of the items, whose height is less than 3.

1. Go to **Sales and marketing \> Sales orders \> All sales orders**.
1. On the Action Pane, select **New**.
1. In the **Create sales order** dialog box, set the following values:

    - **Customer account:** *US-001*
    - **Warehouse:** *63*

1. Select **OK** to close the dialog box and create the new sales order.
1. The new sales order opens. It should contain an empty line in the grid on the **Sales order lines** FastTab. Set the following values on the new line:

    - **Item number:** *A0001*
    - **Quantity:** *5*

1. On the **Sales order lines** FastTab, select **Inventory \> Reservation**.
1. On the **Reservation** page, on the Action Pane, select **Reserve lot** to reserve the inventory.
1. Close the page.
1. On the Action Pane, open the **Warehouse** tab and select **Release to warehouse** to create work for the warehouse.
1. On the **Sales order lines** FastTab, select **Warehouse \> Shipment details**.
1. On the Action Pane, open the **Transportation** tab and select **View containers**. Confirm that the item has been put into two *Short box* containers.

#### <a name="place-an-item-into-storage"></a>Place an item into storage

1. Open the mobile device, sign in to warehouse 63, and go to **Inventory \> Adjustment in**.
1. Enter **Location** = *SHORT-01*. Create a new ID number with **Item** = *A0001* and **Quantity** = *1 pcs*.
1. Select **OK**. The error "Location SHORT-01 failure, item A0001 does not fit the dimensions specified for the location" appears. This is because the *Storage* type dimensions are larger than the dimensions specified in the location profile.

[!INCLUDE[footer-include](../../includes/footer-banner.md)]
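As a quick check on the scenario's numbers (a worked example added here, not part of the original topic), the automatically calculated **Volume** values are just depth × height × width:

```latex
V_{\mathrm{item}} = D \times H \times W = 3 \times 4 \times 3 = 36\ \mathrm{cm}^3, \qquad
V_{\mathrm{container}} = 6 \times 6 \times 4 = 144\ \mathrm{cm}^3
```

By volume, at most 144 / 36 = 4 pieces of *A0001* fit in one *Short box* (the weight limit of 50 kg / 4 kg = 12 pieces is not binding here), which is consistent with the order quantity of 5 being split across two containers.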
59.860294
517
0.763665
pol_Latn
0.999873
3ef07a70e9ca3f8f6a11a184d7cce6b703e4bcee
6,271
md
Markdown
_posts/2016-09-14-java-thread-exclusion-sync.md
jingboli/jingbolee.github.io
9a6e51806aec8525d2048f06e4e232bbdee8fe40
[ "MIT" ]
null
null
null
_posts/2016-09-14-java-thread-exclusion-sync.md
jingboli/jingbolee.github.io
9a6e51806aec8525d2048f06e4e232bbdee8fe40
[ "MIT" ]
null
null
null
_posts/2016-09-14-java-thread-exclusion-sync.md
jingboli/jingbolee.github.io
9a6e51806aec8525d2048f06e4e232bbdee8fe40
[ "MIT" ]
null
null
null
--- layout: post title: "Java 多线程: 线程安全、同步、互斥" subtitle: "线程安全、同步、互斥" date: 2016-09-18 author: "Jerome" header-img: "img/post-bg-06.jpg" --- ## Java 多线程 ### 线程安全的产生 - 在单线程的情况下,不会出现线程安全。当运行在多线程的情况下,需要访问同一个资源的情况下,可能会存在线程安全。 - 当多个线程同时访问临界资源(也称为共享资源,可以是一个对象,对象中的属性,一个文件,一个数据库等)时,就可能会产生线程安全问题。 ### 线程安全的解决-同步互斥 - 基本上所有的开发模式在解决线程安全问题时,都采用“序列化访问临界资源”的方案,即**在同一时刻,只能有一个线程访问临界资源,也称作同步互斥访问**。 - 通常来说,是在访问临界资源的代码前面加上一个锁,当访问完临界资源后释放锁,让其他线程继续访问。 - 在 Java 中,提供了两种方式来实现同步互斥访问: **synchronized** 和 **Lock** ### synchronized - 互斥锁:在临界资源上加上互斥锁,当一个线程在访问该临界资源时,其他线程便只能等待。 - 在 Java 中,**每一个对象都拥有一个锁标记( monitor),也成为监视器**,多线程同时访问某个对象时,线程只有获取了该对象的锁才能访问 - 在 Java 中,可以使用 synchronized 关键字来标记一个方法或者代码块,当某个线程调用该对象的 synchronized 方法或者访问 synchronized 代码块时,这个线程便获取了该对象的锁,其他线程暂时无法访问这个方法,只有等待这个方法执行完毕或者代码块执行完毕,这个线程才会释放该对象的锁,其他线程才能执行这个方法或者代码块。 - 没有使用 synchronized 关键字修饰方法 - InsertData.java public class InsertData { private ArrayList<Integer> datas = new ArrayList<>(); public void insert(Thread thread) { for (int i = 0; i < 5; i++) { System.out.println(thread.getName() + "插入数据:" + i); datas.add(i); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } } } - SynchronizedTest.java public class SynchronizedTest { public static void main(String[] args) { InsertData insertData = new InsertData(); new Thread("t1"){ @Override public void run() { insertData.insert(Thread.currentThread()); } }.start(); new Thread("t2"){ @Override public void run() { insertData.insert(Thread.currentThread()); } }.start(); } } - 测试结果:(说明: 2 个线程同时对 ArrayList 进行了插入数据的操作,数据在某个时刻可能会出现预期以外的结果) t2插入数据:0 t1插入数据:0 t1插入数据:1 t2插入数据:1 t1插入数据:2 t2插入数据:2 t1插入数据:3 t2插入数据:3 t1插入数据:4 t2插入数据:4 - 使用 synchronized 关键字修饰方法 - InsertData.java public class InsertData { private ArrayList<Integer> datas = new ArrayList<>(); public synchronized void insert(Thread thread) { for (int i = 0; i < 5; i++) { System.out.println(thread.getName() + "插入数据:" + i); datas.add(i); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } } } - 测试结果:(说明: t1 线程获取到锁,执行完毕以后,释放锁, t2才执行插入操作) t1插入数据:0 t1插入数据:1 t1插入数据:2 t1插入数据:3 t1插入数据:4 t2插入数据:0 t2插入数据:1 t2插入数据:2 t2插入数据:3 t2插入数据:4 - 注意点: 1. 当一个线程正在访问一个对象的 synchronized 方法,那么其他线程不能访问该对象的其他的 synchronized 方法,因为**一个对象只有一个锁,当一个线程获取了该对象的锁之后,其他线程无法获取该对象的锁**,所以无法访问该对象的其他的 synchronized 方法。 2. 当一个线程访问一个对象的 sychronized 方法,其他线程可以访问非 synchronized 方法,因为**访问非 synchronized 方法不需要获取该对象的锁**。 3. 
访问同一类型的不同对象的 synchronized 方法,不存在互斥对象,因为访问的不是同一个对象。 - synchronized 代码块 - 格式: synchronized (synObject) { } - 当在某个线程中执行这段代码块,该线程获取对象 synObject 的锁,从而使得其他线程无法同时访问该代码块 - synObject 可以是 this,代表获取当前对象的锁,也可以是类中的一个属性,代表获取该属性的锁。 - InsertData.java(版本一) public class InsertData { private ArrayList<Integer> datas = new ArrayList<>(); public void insert(Thread thread) { synchronized (this) { for (int i = 0; i < 5; i++) { System.out.println(thread.getName() + "插入数据:" + i); datas.add(i); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } } } } - InsertData.java(版本二) public class InsertData { private ArrayList<Integer> datas = new ArrayList<>(); private Object synObject = new Object(); public void insert(Thread thread) { synchronized (synObject) { for (int i = 0; i < 5; i++) { System.out.println(thread.getName() + "插入数据:" + i); datas.add(i); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } } } } - synchronized 代码块使用起来比 synchronized 方法要灵活得多,一个方法可能只有一部分是需要同步的,如果此时对整个方法用 synchronized 进行同步,会影响程序的执行效率,而使用 synchronized 代码块就可以避免这个问题,synchronized 代码块可以只对需要同步的地方进行同步 - 每个类也会有一个锁,它可以用来控制对 static 数据成员的并发访问 - 如果一个线程执行一个对象的非 static synchronized 方法,另外一个线程执行这个对象所属类的 static synchronized 方法,此时不会发生互斥现象,因为访问 static synchronized 方法占用的是类锁,而访问非 static synchronized 方法占用的是对象锁,所以不存在互斥现象 - InsertData.java public class InsertData { private static int staticNumber = 1; public static synchronized void insert(Thread thread) { for (int i = 0; i < 5; i++) { System.out.println(thread.getName() + "数据:" + staticNumber); staticNumber++; try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } } public synchronized void insert1() { System.out.println("insert1执行"); } } - SynchronizedTest.java public class SynchronizedTest { public static void main(String[] args) { InsertData insertData = new InsertData(); new Thread("t1") { @Override public void run() { InsertData.insert(Thread.currentThread()); } }.start(); new Thread("t2") { @Override public void run() { insertData.insert1(); } }.start(); } } - 测试结果:(第一个线程执行的是 insert() 方法,不会导致第二个线程执行 insert1() 方法发送阻塞现象) t1数据:1 insert1执行 t1数据:2 t1数据:3 t1数据:4 t1数据:5 ### 注意: - 对于 synchronized 方法或者 synchronized 代码块,当出现异常时, JVM 会自动释放当前线程占用的锁,因此不会由于异常导致出现死锁现象。
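The post names `Lock` as Java's second mutual-exclusion mechanism but only demonstrates `synchronized`. As a minimal cross-language sketch of the same explicit acquire/release pattern (shown here in Python with `threading.Lock`; Java's `ReentrantLock` with `lock()`/`unlock()` in a `finally` block works analogously, and the class and variable names below are illustrative, not from the original post):

```python
import threading

class InsertData:
    """Shared list whose inserts are serialized by an explicit lock."""

    def __init__(self):
        self.datas = []
        self.lock = threading.Lock()

    def insert(self, name):
        # 'with' acquires the lock and releases it on exit, even on exceptions --
        # the explicit-lock analogue of a synchronized method.
        with self.lock:
            for i in range(5):
                print(f"{name} inserts: {i}")
                self.datas.append(i)

data = InsertData()
threads = [threading.Thread(target=data.insert, args=(f"t{n}",)) for n in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the whole loop runs under one lock, the output is serialized exactly like the synchronized-method version above: all of t1's inserts, then all of t2's.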
26.348739
178
0.559719
yue_Hant
0.275875
3ef09582cdfbc0e69822320e15c0a110f9c1403d
75
md
Markdown
tag/twitter.md
andrewsio/andrewsio-blog.github.io
008e349a3aeb8ba3e162b5cf72e15093ee63eea2
[ "MIT" ]
2
2021-10-02T23:32:37.000Z
2022-01-03T21:57:43.000Z
tag/twitter.md
andrewsio/andrewsio-blog.github.io
008e349a3aeb8ba3e162b5cf72e15093ee63eea2
[ "MIT" ]
1
2015-08-03T19:02:29.000Z
2017-01-25T16:51:24.000Z
tag/twitter.md
andrewsio/andrewsio-blog.github.io
008e349a3aeb8ba3e162b5cf72e15093ee63eea2
[ "MIT" ]
1
2020-01-02T19:15:55.000Z
2020-01-02T19:15:55.000Z
---
layout: tagpage
title: "Tag: twitter"
tag: twitter
robots: noindex
---
10.714286
21
0.68
eng_Latn
0.241862
3ef1d712ed1fe67c4eb2d763748007f280417c5b
5,046
md
Markdown
list.md
gtg7784/List-Data-Structure-Presentation
7c62cc0c6acce8ea10c8729ff080381bbf77fd35
[ "MIT" ]
2
2021-04-23T01:34:12.000Z
2021-10-14T01:23:52.000Z
list.md
gtg7784/List-Data-Structure-Presentation
7c62cc0c6acce8ea10c8729ff080381bbf77fd35
[ "MIT" ]
null
null
null
list.md
gtg7784/List-Data-Structure-Presentation
7c62cc0c6acce8ea10c8729ff080381bbf77fd35
[ "MIT" ]
null
null
null
---
marp: true
style: |
  img[alt~="center"] {
    display: block;
    margin: 0 auto;
  }
  table {
    margin-left: 30px
  }
footer: '2021 Sunrin Internet High School Data Structures - Lists'
---

# Lists

### 30406 고태건, 30413 박경민

---

# Contents

1. What is a list?
2. Arrays
3. Searching an array
4. Inserting into / deleting from an array
5. Linked lists
6. Searching a linked list
7. Inserting into / deleting from a linked list
8. Circular linked lists
9. Doubly linked lists

---

# What is a list?

**A list is a data structure that holds values linearly.**

There are two kinds:

- array lists
- linked lists

---

<style scoped>
table {
  margin-top: 80px;
}
</style>

# Array lists

- **A data structure that stores data contiguously**
- Each index maps to a stored value; an index expresses a position relative to the first element
- Search is fast, but insertion and deletion are slow

0 | 1 | 2 | 3 | 4 | 5 | 6
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Orange | Coconut | Pizza | Papaya | Melon

---

<style scoped>
table {
  margin-top: 20px;
  margin-bottom: 20px;
}
</style>

# Searching an array

- Search is very fast
- The picture below shows why

0 | 1 | 2 | 3 | 4 | **5** | 6
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Orange | Coconut | Pizza | **Papaya** | Melon

```
What is at index 5?
= address of the array's first element + 5 -> Papaya
```

---

<style scoped>
table {
  margin-top: 20px;
  margin-bottom: 20px;
}
</style>

# Inserting into / deleting from an array

- Arrays are a poor fit when insertions and deletions are frequent.
- Every such operation forces the elements after it to be shifted in memory.

0 | 1 | 2 | 3 | **4** | 5 | 6
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Orange | Coconut | **Pizza** | Papaya | Melon

```
Why is there pizza in the fruit array!
> Let's delete the pizza
```

---

0 | 1 | 2 | 3 | **4** | 5 | 6
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Orange | Coconut | **Pizza** | Papaya | Melon

0 | 1 | 2 | 3 | 4 | 5 | 6
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Orange | Coconut | | Papaya | Melon

0 | 1 | 2 | 3 | 4 | 5 | 6
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Orange | Coconut | Papaya | | Melon

0 | 1 | 2 | 3 | 4 | 5 | 6
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Orange | Coconut | Papaya | Melon |

---

# Linked lists

- **Like an array, a data structure for storing a sequence of data**
- Devised to address the weaknesses of arrays (insertion/deletion) - that does not make it strictly better
- Insertion and deletion are fast, but search is slow.
- Instead of storing data contiguously in memory like an array, it uses the concept of a **node**.
- A **node** consists of a data field holding the value and a link field (pointer) holding the address of the next node.

![left](https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fl0VVL%2FbtquxsmG8P6%2Fnxm8KVIBfzq4LttQHf2CvK%2Fimg.png)

---

<style scoped>
table {
  margin-top: 20px;
  margin-bottom: 20px;
}
</style>

# Searching a linked list

- Search is slow
- The picture below shows why

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Pizza | Orange | Coconut | Papaya | Melon
210 | 130 | 200 | 300 | 160 | 250 |

```
What is that pizza doing there?! Find it!
> Let's find the pizza
```

---

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
**Banana** | Apple | Pizza | Orange | Coconut | Papaya | Melon
**210** | 130 | 200 | 300 | 160 | 250 |

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Pizza | Orange | **Coconut** | Papaya | Melon
210 | 130 | 200 | 300 | **160** | 250 |

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | **Pizza** | Orange | Coconut | Papaya | Melon
210 | 130 | 200 | 300 | **160** | 250 |

---

# Inserting into / deleting from a linked list

<style scoped>
table {
  margin-top: 20px;
  margin-bottom: 20px;
}
</style>

- Insertion and deletion are blazingly fast
- The picture below shows why

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | Pizza | Orange | Coconut | Papaya | Melon
210 | 130 | 200 | 300 | 160 | 250 |

```
What is that pizza doing there?! Delete it!!
> Let's delete the pizza
```

---

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | **Pizza** | Orange | Coconut | Papaya | Melon
210 | 130 | **200** | 300 | **160** | 250 |

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | **Pizza** | Orange | Coconut | Papaya | Melon
210 | 130 | **200** | 300 | **200** | 250 |

100 | 130 | 160 | 200 | 210 | 250 | 300
:----:|:----:|:-----:|:----:|:----:|:----:|:----:
Banana | Apple | | Orange | Coconut | Papaya | Melon
210 | 130 | | 300 | 200 | 250 |

---

# Circular linked lists

![center width:500px](https://t1.daumcdn.net/cfile/tistory/99B1E73B5A50DA8F18)

---

<style scoped>
h1 {
  margin-bottom: 80px;
}
img {
  margin-bottom: 80px;
}
</style>

# Doubly linked lists

![center width: 3000px](https://s3.ap-northeast-2.amazonaws.com/opentutorials-user-file/module/1335/2949.png)

---

# Use cases

- Lists are used to implement many other data structures, such as queues, trees, and graphs

---

# Sources

- https://velog.io/@kiiim/10%EC%9B%9412%EC%9D%BC-%EB%B0%B0%EC%97%B4-rhcqdr1x
- https://opentutorials.org/module/1335/8940
- https://medium.com/@yeonghun4051/%EC%97%B0%EA%B2%B0%EB%A6%AC%EC%8A%A4%ED%8A%B8ii-c06443f705fd
- https://bbmsk2.tistory.com/112
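The slides show nodes only as tables and pictures; as a minimal sketch (the `Node`/`LinkedList` names here are illustrative, not from the slides), the same idea in code: search walks the links one by one, while deletion is just re-pointing a single link once the previous node is known.

```python
class Node:
    """Data field plus link field (pointer to the next node)."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):
        # O(1): the new node simply points at the old head
        self.head = Node(data, self.head)

    def delete(self, data):
        # O(n) search for the node (and the node before it)...
        prev, cur = None, self.head
        while cur and cur.data != data:
            prev, cur = cur, cur.next
        if cur is None:
            return  # not found
        # ...but the removal itself is O(1): re-point one link
        if prev is None:
            self.head = cur.next
        else:
            prev.next = cur.next

fruits = LinkedList()
for f in ["Melon", "Papaya", "Pizza", "Orange", "Banana"]:
    fruits.push_front(f)   # list: Banana -> Orange -> Pizza -> Papaya -> Melon
fruits.delete("Pizza")     # Orange's link now skips straight to Papaya
```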
19.333333
173
0.478201
kor_Hang
0.992374
3ef2fe3238eda1a45dd22212cdb2a2b730c69ed5
217
md
Markdown
readme.md
shipengyan/tomcat_8.0.0_source
af71014fb36c2df758c3a8d0f04d627bd401432c
[ "Apache-2.0" ]
null
null
null
readme.md
shipengyan/tomcat_8.0.0_source
af71014fb36c2df758c3a8d0f04d627bd401432c
[ "Apache-2.0" ]
null
null
null
readme.md
shipengyan/tomcat_8.0.0_source
af71014fb36c2df758c3a8d0f04d627bd401432c
[ "Apache-2.0" ]
null
null
null
# Debugging the Tomcat Source

## Steps

- Download the source code.
- Download the original binary distribution, unpack it, and copy the `conf`, `lib`, `logs`, and `webapps` directories into a separate directory, say `tomcat-catalina-home`.
- Before running `Bootstrap.main`, set the `VM Options`: `-Dcatalina.home=D:\xxx\tomcat-catalina-home`

## Just Debug
18.083333
83
0.718894
yue_Hant
0.285174
3ef34e3f087fea2dbcab4b0ffb402eabece0443e
1,863
md
Markdown
content/book_3/005_fermentum_tristique_rhoncus_fringilla/008_auctor_facilisis_dictum/002_ultrices_mauris.md
wernerstrydom/sample-book
dbe00c6a6e2c9c227a6eb8955371d394b9398e48
[ "MIT" ]
null
null
null
content/book_3/005_fermentum_tristique_rhoncus_fringilla/008_auctor_facilisis_dictum/002_ultrices_mauris.md
wernerstrydom/sample-book
dbe00c6a6e2c9c227a6eb8955371d394b9398e48
[ "MIT" ]
null
null
null
content/book_3/005_fermentum_tristique_rhoncus_fringilla/008_auctor_facilisis_dictum/002_ultrices_mauris.md
wernerstrydom/sample-book
dbe00c6a6e2c9c227a6eb8955371d394b9398e48
[ "MIT" ]
null
null
null
### Ultrices mauris

Lectus facilisis aliquet adipiscing. Vestibulum, lectus, inceptos tempor bibendum. Odio, erat, consectetur metus pulvinar, massa, platea semper magna nunc, rhoncus, convallis ad posuere Quam mollis rhoncus porta Platea integer vitae, elit, sodales dolor, ad lobortis commodo, maecenas libero nibh, consequat est. Magna, sollicitudin convallis sem, himenaeos sodales iaculis varius pulvinar. Facilisis massa habitasse mauris, scelerisque faucibus dolor, enim, augue fermentum ex Dolor risus euismod rutrum magna condimentum suspendisse. Sollicitudin volutpat erat hac Odio, tellus, ex, vestibulum, finibus, auctor, lorem sollicitudin ex volutpat, fusce nulla. Finibus a dolor, urna, fringilla, congue auctor, amet tempor. Fringilla, ut mauris, odio tellus, habitasse nisi maecenas laoreet per Vel id litora purus hendrerit amet urna, auctor vestibulum lacinia. Condimentum aliquam a, torquent auctor feugiat neque, venenatis tempor, facilisis luctus metus donec ligula Congue, ultricies non elementum urna, accumsan id, habitasse lobortis blandit. Lorem, pellentesque dictum Ante nisi, vestibulum, tortor, urna blandit amet metus amet. Eleifend finibus, mattis, euismod Lectus, quam porta tincidunt sit rutrum tortor aptent torquent eros, faucibus. Vestibulum vehicula volutpat id eget laoreet, faucibus Bibendum, sagittis tincidunt ligula. Primis arcu, posuere, nostra, varius, integer diam arcu at platea Quis, mauris, volutpat, efficitur vehicula mattis Id, eros, eleifend, elit, etiam. Commodo maecenas maximus hendrerit nisl magna, pharetra a mattis pulvinar, donec Vitae, a praesent augue id, diam suspendisse mi, turpis leo dapibus arcu sollicitudin mauris Est vehicula nunc id, dictum commodo, nulla, consequat dui, auctor, leo. Fames amet, nunc, viverra consectetur quis, sem cursus, pulvinar gravida aptent dui, vitae
58.21875
265
0.80891
cat_Latn
0.261991
3ef52c6db87a263a3169d145980e821becdb62b9
300
md
Markdown
dev/devlog.md
HendrickZhou/shazam-air
2c6b17604dc492a4631f9ea7da3c463b2f912db0
[ "MIT" ]
19
2020-03-12T19:14:38.000Z
2022-03-22T19:51:55.000Z
dev/devlog.md
vsc-hvdc/shazam-air
2c6b17604dc492a4631f9ea7da3c463b2f912db0
[ "MIT" ]
null
null
null
dev/devlog.md
vsc-hvdc/shazam-air
2c6b17604dc492a4631f9ea7da3c463b2f912db0
[ "MIT" ]
3
2020-11-06T15:59:06.000Z
2021-08-11T08:59:05.000Z
Python libs needed

numpy
pandas
matplotlib
scipy - basic signal processing
pyaudio - audio device IO
librosa - feature extraction (another option: pyAudioAnalysis, but librosa is better maintained)
scikit-learn - classifier
keras - classifier (probably)

## deploy:
conda env export > environment.yml
17.647059
96
0.79
eng_Latn
0.851179
3ef5c23f7783c1e1732c5c239c93e026c455d427
4,922
md
Markdown
blog/posts/2021-05-28-1398205535974723584.md
systemime/webify
586163b37b0985adc5fcf2326759e8510e5b57c7
[ "Apache-2.0" ]
null
null
null
blog/posts/2021-05-28-1398205535974723584.md
systemime/webify
586163b37b0985adc5fcf2326759e8510e5b57c7
[ "Apache-2.0" ]
null
null
null
blog/posts/2021-05-28-1398205535974723584.md
systemime/webify
586163b37b0985adc5fcf2326759e8510e5b57c7
[ "Apache-2.0" ]
1
2021-11-05T09:09:14.000Z
2021-11-05T09:09:14.000Z
---
title: Multi-way Merge Sort in Python
subtitle: Tech share
author: systemime
date: 2021-05-28
header_img: /img/in-post/header/12.jpg
catalog: True
tags:
  - Data structures and algorithms
  - python
---

Implement a multi-way (K-way) external merge sort in Python, to sort files far larger than the available memory.

<!-- more -->

In the previous post we implemented ordinary merge sort: [Merge sort, recursive and iterative, in Python](2021-05-28-1398166358587478016.md)

In real work it is quite common to need to merge several sorted sequences into one, or to merge one or more huge files into a single sorted file. Let's start with the case of several sorted sequences.

## Merging several sorted arrays

Say we have four ways:

- a0: [1, 3, 6, 7]
- a1: []
- a2: [3, 5, 7, 19]
- a3: [9, 12, 87, 98]

### Track each way's minimum

The first step is to know the minimum of each way. If a way is represented as an array, we also need to keep the corresponding index, and we store everything in `min_map`:

- way 0: 1
- way 1: no value
- way 2: 3
- way 3: 9

The initial `min_map`:

```python
{0: (1, 0), 2: (3, 0), 3: (9, 0)}
```

### Take the minimum of the minimums

The second step is to pop the overall minimum, then check whether the way it was taken from still has elements left. If it does, update that way's entry in min_map; if not, delete the entry, marking that way as fully traversed.

Code:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Multi-way merge: combine several already-sorted arrays
def nw_merge(arrs):
    """
    We need the minimum of each way:
      way 0: 1
      way 1: no value
      way 2: 3
      way 3: 9
    """
    result = []
    min_map = {}  # holds the current minimum of each way
    for inx, arr in enumerate(arrs):
        if arr:
            min_map[inx] = (arr[0], 0)
    print("initial per-way minimums min_map", min_map)
    while min_map:
        """
        Find which way currently holds the smallest minimum,
        along with that value's index inside its way.
        """
        min_ = min(min_map.items(), key = lambda m: m[1][0])
        way_num, (way_min_v, way_inx) = min_
        result.append(way_min_v)
        """
        Check whether the way we just took a value from still has elements.
        If so, update its entry in min_map; if not, delete the entry to mark
        that way as fully consumed.
        """
        way_inx += 1
        if way_inx < len(arrs[way_num]):
            min_map[way_num] = (arrs[way_num][way_inx], way_inx)
        else:
            del min_map[way_num]
    return result

a0 = [1, 3, 6, 7]
a1 = []
a2 = [3, 5, 7, 19]
a3 = [9, 12, 87, 98]
arrs = [a0, a1, a2, a3]
print("a0:", a0)
print("a1:", a1)
print("a2:", a2)
print("a3:", a3)
result = nw_merge(arrs)
print("final merged:", result)
```

### Output

```python
"""
a0: [1, 3, 6, 7]
a1: []
a2: [3, 5, 7, 19]
a3: [9, 12, 87, 98]
initial per-way minimums min_map {0: (1, 0), 2: (3, 0), 3: (9, 0)}
"""
# final merged: [1, 3, 3, 5, 6, 7, 7, 9, 12, 19, 87, 98]
```

## Sorting a huge file (a 10 GB log with 512 MB of memory)

There is no way around the core idea of merge sort: divide and conquer. First split the input into small sorted files, then merge all the chunk files back into one big sorted file.

### Splitting the file

Step one: split the big file into x chunk files of roughly block_size each, each chunk sorted:

```python
def save_file(l, fileno):
    filepath = f"/home/xxx/{fileno}"
    f = open(filepath, 'a')
    for i in l:
        f.write(f"{i}\n")
    f.close()
    return filepath

def split_file(file_path, block_size):
    f = open(file_path, 'r')
    fileno = 1
    files = []
    while True:
        lines = f.readlines(block_size)
        if not lines:
            break
        lines = [int(i.strip()) for i in lines]
        lines.sort()
        files.append(save_file(lines, fileno))
        fileno += 1
    return files
```

### Merging

Merge the chunk files and write the merged output to the big file, using the multi-way merge technique from above:

```python
def nw_merge(files):
    fs = [open(file_) for file_ in files]
    min_map = {}
    out = open("/home/xxx/out", "a")
    for f in fs:
        read = f.readline()
        if read:
            min_map[f] = int(read.strip())
    while min_map:
        min_ = min(min_map.items(), key = lambda x: x[1])
        min_f, min_v = min_
        out.write("{}".format(min_v))
        out.write("\n")
        nextline = min_f.readline()
        if nextline:
            min_map[min_f] = int(nextline.strip())
        else:
            del min_map[min_f]
```

### Full code

```python
import os
from pathlib import Path

def nw_merge(files):
    fs = [open(file_) for file_ in files]
    min_map = {}  # tracks the current minimum of each way
    out = open(Path(".") / "out/integration.txt", "a+")
    for f in fs:
        read = f.readline()
        if read:
            min_map[f] = int(read.strip())
    while min_map:
        # pop the smallest value and advance the way it came from
        min_ = min(min_map.items(), key=lambda x: x[1])
        min_f, min_v = min_
        out.write(f"{min_v}\n")
        nextline = min_f.readline()
        if nextline:
            min_map[min_f] = int(nextline.strip())
        else:
            del min_map[min_f]
    for f in fs:
        f.close()
    out.close()

def save_file(l, fileno):
    path = Path(".") / "split"
    filepath = path / f"{fileno}"
    info = '\n'.join(map(str, l))
    with open(filepath, "a+") as f:
        f.write(f"{info}")
    return filepath

def split_file(file_path, block_size):
    fileno = 1  # file counter
    files = []  # list of chunk files
    with open(file_path, 'r') as f:
        while True:
            lines = f.readlines(block_size)
            if not lines:
                break
            lines = [int(i.strip()) for i in lines]  # parse into a list of ints
            lines.sort()  # sort the chunk
            files.append(save_file(lines, fileno))
            fileno += 1
    return files

if __name__ == "__main__":
    # one number per line
    file_path = Path(".") / "tests.txt"
    block_size = 500 * 1024 * 1024  # 500 MB
    num_blocks = os.stat(file_path).st_size / block_size
    files = split_file(file_path, block_size)
    nw_merge(files)
```
19.767068
75
0.563389
yue_Hant
0.159958
3ef5d8270539e3a3f0af8eb52cf300b370d1cc3e
333
md
Markdown
_avabel/category/001/合成屋(武器カスタマイズ).md
game-z/game-z.github.io
c5d3a591b488cc4c9ffcb403b44327b9c45c8310
[ "BSD-3-Clause" ]
null
null
null
_avabel/category/001/合成屋(武器カスタマイズ).md
game-z/game-z.github.io
c5d3a591b488cc4c9ffcb403b44327b9c45c8310
[ "BSD-3-Clause" ]
null
null
null
_avabel/category/001/合成屋(武器カスタマイズ).md
game-z/game-z.github.io
c5d3a591b488cc4c9ffcb403b44327b9c45c8310
[ "BSD-3-Clause" ]
null
null
null
---
title: Synthesis Shop (Weapon Customization)
layout: document
---

### Weapon customization

Using [Custom Tools](カスタムツール) as material, increase the lower and upper bounds of the ATK and MATK of a weapon you own.

The production cost of a customized weapon is determined by the weapon's equipment level; the higher the level, the higher the cost.

There are three kinds of adjustment, depending on the weapon type.

|Weapon type|Adjustment|Upgrade material|
|---|---|---|
|戦|Raises the lower and upper bounds by about 3%|[Custom Tool](カスタムツール)x5|
|散|Raises the upper bound by about 6%|[Custom Tool](カスタムツール)x5|
|閃|Raises the lower bound by about 6%|[Custom Tool](カスタムツール)x5|
17.526316
59
0.741742
jpn_Jpan
0.454586
3ef60c3c5204091c9c1cf0d2561bd90670775fec
740
md
Markdown
content/project/denoising.md
ShayanPersonal/website-source
01fbaf71ad1deeadbb09424e2a3d5084db9bd7a3
[ "MIT" ]
null
null
null
content/project/denoising.md
ShayanPersonal/website-source
01fbaf71ad1deeadbb09424e2a3d5084db9bd7a3
[ "MIT" ]
null
null
null
content/project/denoising.md
ShayanPersonal/website-source
01fbaf71ad1deeadbb09424e2a3d5084db9bd7a3
[ "MIT" ]
null
null
null
+++
# Date this page was created.
date = "2017-01-21"

# Project title.
title = "Denoising CNN for Realtime Path Tracing"

# Project summary to display on homepage.
summary = "A convolutional neural network for denoising incomplete path tracing renders using Feature Pyramid Networks."

# Optional image to display on homepage (relative to `static/img/` folder).
image_preview = "noisy_trace.png"

# Tags: can be used for filtering projects.
# Example: `tags = ["machine-learning", "deep-learning"]`
tags = []

# Optional external URL for project (replaces project detail page).
external_link = "https://github.com/ShayanPersonal/Denoise-CNN-for-realtime-path-tracing"

# Does the project detail page use math formatting?
math = false
+++
29.6
120
0.75
eng_Latn
0.902155
3ef60f8949dce5e4ea7ae68aa407611bea15609c
1,594
md
Markdown
_posts/Algo/boj/2021-07-26-14725.md
hajungkim/hajungkim.github.io
8f83245848d57fe8b0e833f812c46b7f253674cc
[ "MIT" ]
null
null
null
_posts/Algo/boj/2021-07-26-14725.md
hajungkim/hajungkim.github.io
8f83245848d57fe8b0e833f812c46b7f253674cc
[ "MIT" ]
null
null
null
_posts/Algo/boj/2021-07-26-14725.md
hajungkim/hajungkim.github.io
8f83245848d57fe8b0e833f812c46b7f253674cc
[ "MIT" ]
null
null
null
--- title: "[알고리즘] 백준 14725 개미굴 / 자바, Trie" categories: - AlgoBoj toc: true toc_sticky: true date: 2021-07-26 --- <https://www.acmicpc.net/problem/14725> ```java import java.io.*; import java.util.*; public class Main_개미굴 { public static void main(String[] args) throws IOException { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); int N = Integer.parseInt(br.readLine()); //먹이의 개수 Trie trie = new Trie(); StringTokenizer st; for (int i = 0; i < N; i++) { st = new StringTokenizer(br.readLine()); int K = Integer.parseInt(st.nextToken()); String[] strArr = new String[K]; //각 층의 먹이정보 for (int j = 0; j < K; j++) { strArr[j] = st.nextToken(); } trie.insert(strArr); } TrieNode thisNode = trie.rootNode; for (String key : thisNode.childNodes.keySet()) { //루트노드부터 자식노드 출력 System.out.println(key); print(thisNode.childNodes.get(key), 1); } } static void print(TrieNode thisNode, int depth) { for (String key : thisNode.childNodes.keySet()) { for (int i = 0; i < depth; i++) { System.out.print("--"); } System.out.println(key); print(thisNode.childNodes.get(key), depth + 1); } } static class Trie { TrieNode rootNode; Trie() { rootNode = new TrieNode(); } void insert(String[] strArr) { TrieNode thisNode = rootNode; for (String str : strArr) { thisNode = thisNode.childNodes.computeIfAbsent(str, s -> new TrieNode()); //해당 계층 문자의 자식노드가 없으면 추가 } } } static class TrieNode { Map<String, TrieNode> childNodes = new TreeMap<>(); } } ```
21.253333
102
0.629235
kor_Hang
0.466689
3ef67eb3b77534008cbd48517d513e69884883b7
1,442
md
Markdown
README.md
AymaneMachrouki/IPASS_Aymane_Machrouki
760bed96c18162766ae9b573e62597ad6e50611d
[ "BSL-1.0" ]
null
null
null
README.md
AymaneMachrouki/IPASS_Aymane_Machrouki
760bed96c18162766ae9b573e62597ad6e50611d
[ "BSL-1.0" ]
null
null
null
README.md
AymaneMachrouki/IPASS_Aymane_Machrouki
760bed96c18162766ae9b573e62597ad6e50611d
[ "BSL-1.0" ]
null
null
null
# IPASS-AymaneMachrouki

This is the repository for my IPASS project.

## Contents

This repository contains the following:

**MPU6050Library:** This folder contains the library for the MPU6050. It utilizes most of its functions. All used functions are explained in the included documentation.

**doxygenForGameObjects:** This folder contains the generated Doxygen for gameObjects.hpp, which is used for "THE TILT MAZE".

**doxygenForMPU6050:** This folder contains the generated Doxygen for MPU6050.hpp, which is used for "THE TILT MAZE" and the test code.

**libraryTest:** This folder contains test code for the MPU6050 library, used to check that all functions work correctly.

**tiltMaze:** This folder contains the game "THE TILT MAZE". It uses the MPU6050's accelerometer to control the ball: you steer by tilting the chip. It uses the hwlib library and my MPU6050 library. The documentation and makefiles are also included.

**IPASS-Poster.pdf:** The poster for this project. It gives some information about the project.

**LICENSE:** This project's license.

**MPU6050-Datasheet.pdf:** The MPU6050's datasheet. It contains important information about the chip.

**MPU6050-RegisterMap.pdf:** The MPU6050's register map. It contains important information about the chip's registers.

**Makefiles:** These are necessary for the project to work with hwlib.
51.5
268
0.771845
eng_Latn
0.998725
3ef73963a2c7a985c61a22c0f1bb950b143c2deb
4,350
md
Markdown
blog/_posts/2017-05-18-two-comment-plugins-2.md
LanternD/lanternd.github.io
ac5414d5c8ccf0aa759d0fb53306a3ba768df19a
[ "MIT" ]
3
2016-10-10T02:22:20.000Z
2020-11-20T14:07:06.000Z
blog/_posts/2017-05-18-two-comment-plugins-2.md
LanternD/lanternd.github.io
ac5414d5c8ccf0aa759d0fb53306a3ba768df19a
[ "MIT" ]
4
2020-04-01T04:07:09.000Z
2021-11-23T05:57:47.000Z
blog/_posts/2017-05-18-two-comment-plugins-2.md
LanternD/lanternd.github.io
ac5414d5c8ccf0aa759d0fb53306a3ba768df19a
[ "MIT" ]
2
2020-11-20T02:32:23.000Z
2020-11-20T14:02:03.000Z
---
layout: post
title: "LiveRe + Disqus: Is It Worth Running Two Comment Systems at Once?"
description: A follow-up to a similar post from over a year ago, written as Duoshuo was about to shut down.
permalink: /two-comment-plugins-2/
categories: [blog, 视·界]
tags: [LiveRe, 来必力, 第三方, 多说, disqus, 评论]
date: 2017-05-18 12:33:44
---

## The companion post

This post is, more or less, an update to "[Disqus + Duoshuo: is it worth running two comment systems at once?](/two-comment-plugins/)".

## Duoshuo goes offline

Anyone who has been around the blogging scene already knows: in March this year, **Duoshuo** announced it would go offline on June 1, 2017 and cease operations. As I write this, that date is not far away.

![Goodbye Duoshuo]({{site.img-hosting}}/Pic4Post/goodbye-duoshuo/goodbye-duoshuo.png)

![Goodbye Duoshuo]({{site.img-hosting}}/Pic4Post/goodbye-duoshuo/goodbye-duoshuo-0.png)

### Stray thoughts

The two screenshots above are for the record; we may never get to see these pages again. When I searched for "third-party comment systems" I found a pile of posts, **all published in the last month or two**, with other bloggers' takes on the news. Everyone knows Duoshuo failed mainly because it never found a reasonable way to monetize its users; presumably the product management just wasn't up to it. [Hindsight mode on] In truth, Duoshuo never had a clear product positioning from the start: no product tiers, no paid plans, not even ads (though if it had forced ads on me, I would have stopped using it). You could say it carried the seeds of failure from day one. [Hindsight mode off] And yet, two years ago, looking at a stable, fast, complete, free, and un-blocked third-party comment system, who could have foreseen this ending?

Duoshuo's demise is **a party for users of built-in comment systems**: WordPress, Typecho, and the like ship with their own databases and comments, and feel no pressure at all.

### A messy exit

Worse still, Duoshuo did not even manage a clean shutdown. From some point in April, **spam comments** started pouring into my dashboard at a rate of 30 to 50 a day; before long I had accumulated about 1,200 comments, nearly three times the original count. I then **disabled Duoshuo comments**, but the spam **kept growing**. My guess is that Duoshuo's backend broke down: not only did it stop filtering spam, its own system seems to have been breached. The spam accounts were all named "新用户XXXX" ("new user XXXX"), and their comments read like replies to other people's articles, so it's also possible their backend mixed up comment ownership.

![Goodbye Duoshuo]({{site.img-hosting}}/Pic4Post/goodbye-duoshuo/goodbye-duoshuo-1.png)

At first I dutifully deleted a batch of spam, then gave up; wasting time on that is pointless. When I disabled Duoshuo, I also exported my comments, selected all the spam, and deleted it. I had planned to clean everything out and re-import to keep the dashboard tidy, but Duoshuo had apparently disabled imports too; the upload failed. It really did look like a final wave goodbye.

But we can't just sit and wait. I started looking into third-party comment systems.

## Surveying third-party comment systems

I had prepared for the worst case: running my own server (a VPS, AWS, or a box at home) and hosting my own comment system. Technically my knowledge would be up to it, and there are plenty of seemingly solid open-source options, but the effort would not be small, so a third-party system remained my first choice.

While searching, the following link helped me enormously:

"[水八口记 - Recommended third-party comment systems](https://blog.shuiba.co/comment-systems-recommendation)"

My thanks to the author! It is probably the most complete round-up of comment systems I have seen, and I learned a great deal from it.

### Changyan and Uyan

I did consider **Changyan** (畅言) and **Uyan** (友言). From several blog posts it seems [Uyan](http://www.uyan.cc/) is fading and will sooner or later go the way of Duoshuo. Another worrying sign: scroll to the bottom of its home page and the copyright notice still reads "2011-2012". Good grief. Changyan requires ICP filing for your domain; supposedly you can enter any number, but that didn't inspire much confidence either.

### NetEase Yungentie

NetEase Yungentie (网易云跟帖) looks a lot like the comment area under NetEase News, and its backend is somewhat more robust. But it brought back unpleasant associations with the NetEase comment-stacking meme about second-floor posters, and it displays "from city X" next to each comment, which doesn't suit my blog. In short: pass.

### HyperComments

Site: [HyperComments](https://www.hypercomments.com)

From Russia, not blocked, allows guest comments, meets most needs, and looks clean and simple. Given that it has paid plans, it probably won't disappear any time soon. All in all quite good, and I hesitated for a while, but I am unfamiliar with Russian web products. Pass.

### "Anyway"

After all that, anyway, I found **LiveRe**.

## LiveRe

Site: [来必力/LiveRe](https://livere.com/) (scroll to the bottom to switch languages).

It is a comment system from South Korea, with a fairly lively home page. It is about as old as Duoshuo but was never very popular; it added Chinese support around the end of 2015 and picked up somewhat after that. Its biggest advantage is sign-in via **WeChat** and QQ; even Baidu and Douban are supported, which is rare among comment systems.

I don't know who translated the name as 来必力, but I find it **rather awkward**; "LiveRe" sounds better.

Compared with Duoshuo, the system looks closer to Disqus, and its embed code is less hacky than Duoshuo's.

Its **drawback** is equally obvious: it supports neither comment import nor export. For a third-party comment system, that feels like missing half the product. Fortunately, I don't care that much about the comments themselves. I don't blog to collect comments; interaction is nice to have, but its absence is neither terrible nor unbearable. As the earlier post said, the reason for running two comment systems is to balance **reliability** and **audience coverage**. Disqus is well known among bloggers and genuinely reliable; it is just blocked here, or slow to load. Some people literally climb over the wall just to leave a comment. For those who won't use a proxy, won't register yet another account, and still want fast loading, LiveRe is a better replacement than Duoshuo, or Changyan, or Uyan, thanks to its many sign-in options.

**I believe** that if LiveRe ever shuts down, it will let me export my comments. Honestly, export isn't all that essential: in over two years of using Duoshuo, I never once thought I would need it. I don't need to be able to export whenever I like; I only need the system to work while it works, and to leave me a way out when it doesn't. That is enough.

The lack of *import*, though, puzzles me. Maybe it is to stop people flooding the servers? That seems a little stingy; either way, it is very unfriendly to blog migration. If I had been running only one comment system, I would never have chosen LiveRe.

Of course, how LiveRe actually fares is entirely out of my hands; "I believe" counts for little in practice.

One more thing, and I can't tell whether it is a flaw or a feature: LiveRe can barely be customized, and its API surface is pitifully small. Minimalism, or an understaffed engineering team? Who knows.

Installing LiveRe was painless: the JS snippet follows the usual routine, and an hour or two was enough to wire it up.

## Migrating the old Duoshuo comments

The second task was migrating the old Duoshuo comments. Since LiveRe doesn't support import, Disqus was the only destination.

Time to bring out Urouge's tool (not sure whether I should reveal who he is, haha):

- "[Migrating Duoshuo comments to Disqus](http://urouge.github.io/migrate-to-disqus/)"

Detailed instructions are on that page. You need to download a PHP runtime yourself; installing something like XAMPP also works. If PHP is not on your system PATH, just `cd` into the PHP folder.

On my machine, for example:

```
cd C:\xampp\php
```

then run:

```
.\php.exe -f migrate.php
```

The script prints nothing when it finishes, but if `disqus.xml` is produced, it worked. Then upload the `.xml` file on the Disqus import page.

### Debug

I hit a bug along the way. Disqus's [Import portal](https://import.disqus.com) gave me this:

> &#8722; disqus.xml
>
> XML sytax error: Input is not proper UTF-8, indicate encoding ! Bytes: 0xE6 0xB7 0x8B 0xE6, line 3129, column 38 (line 3129)

(Even the first sentence of that message reads oddly.) At first I assumed the UTF-8 encoding was wrong and reworked it back and forth several times, to no avail. The offender was a comment written in traditional Chinese; the details got messy, and deleting comments seemed to be the only thorough fix.

What finally worked: **converting the traditional Chinese text to simplified Chinese**. A heads-up for anyone who needs it. Also, orphaned replies whose parent comments are missing should be deleted too, to avoid surprises.

## A short postscript

Updating and upgrading the blog's comment system took about a day. Choosing which comment system to use was the most time-consuming part: you have to patiently gather information from all sides and then decide, which is genuinely tiring.

In the post mentioned at the top, I wrote:

> One idea is to show comment counts: display the counts from both systems before the visitor even clicks the load button, so it is obvious at a glance which side has the livelier discussion, and the indecisive get one more data point.
>
> The other plan is to observe and track usage of the two comment boxes; if one sees little use, simply keep the other. The current trend is that Disqus gets more use (well, neither gets much, haha). I will also have to work out how to migrate comments from one system to the other.
>
> Time will decide what stays.

Looking back now: the first item is done, although LiveRe doesn't seem to expose an interface for displaying comment counts. As for the second, I have decided to keep both systems long-term. An extra backup is a good thing, and Disqus, at least, must stay.

That's it for this post. Making the comment boxes load asynchronously cost me many times more effort than all of the above; I'll save that for the next post.
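As a small aside (a sketch, not part of the author's original workflow): Disqus reports the offending byte offset as a line/column pair, and a local check like the one below finds such spots before uploading. If Python decodes the whole file cleanly, the bytes are valid UTF-8 and the problem is at the XML/character level instead, which matches the fix above of rewriting the traditional-Chinese comment.

```python
def check_utf8(path):
    """Report the first byte span in `path` that is not valid UTF-8."""
    data = open(path, "rb").read()
    try:
        data.decode("utf-8")
        print("OK: file is valid UTF-8; the error is likely XML-level")
    except UnicodeDecodeError as e:
        bad = data[e.start:e.start + 4]
        line = data.count(b"\n", 0, e.start) + 1          # 1-based line number
        col = e.start - data.rfind(b"\n", 0, e.start)     # 1-based column
        print(f"Invalid byte(s) {bad!r} at line {line}, column {col}")

check_utf8("disqus.xml")
```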
30.633803
275
0.808276
yue_Hant
0.597868
3ef799696190e25cba3b4877534d2fee23f19956
2,491
md
Markdown
docs/embassy-deployment.md
joj0s/trait-curation
a61d1e7aebf3ab444373be03c100236d4c955e19
[ "Apache-2.0" ]
2
2020-05-12T16:51:43.000Z
2020-09-22T10:40:04.000Z
docs/embassy-deployment.md
joj0s/trait-curation
a61d1e7aebf3ab444373be03c100236d4c955e19
[ "Apache-2.0" ]
97
2020-05-12T09:17:44.000Z
2021-09-22T19:03:58.000Z
docs/embassy-deployment.md
joj0s/trait-curation
a61d1e7aebf3ab444373be03c100236d4c955e19
[ "Apache-2.0" ]
2
2020-05-28T10:34:39.000Z
2020-10-29T10:11:27.000Z
# Instructions to deploy the app in the EBI Embassy Cloud

[Link to the Embassy Cloud documentation](http://docs.embassy.ebi.ac.uk/)

## Initial Embassy set up

1. Request the tenancy through the Resource Usage Portal. See docs on the EBI intranet for details.
1. Once accepted, generate the Embassy admin password as described in the email.
1. Log in to the OpenStack dashboard. The URL is also provided in the email.

## Set up keys and network interfaces

1. **Key pairs.**
   * Go to Compute → Access & Security → Key Pairs.
   * Click on “Import Key Pair” and add public SSH keys for every collaborator who will need to have SSH access to the instances.
1. **Floating IPs.**
   * Go to Compute → Access & Security → Floating IPs.
   * Create the necessary number of floating IPs (one for each instance you would like to make publicly accessible). Use the external network available to your tenancy.
1. **Security group.**
   * Go to Compute → Access & Security → Security Groups.
   * Click on “Create Security Group”. Name = “Web app”, description = “Allows inbound SSH and ICMP access. Opens ports 80, 443, and 8000.”.
   * Click on “Manage rules” for that security group.
   * Add the rules for: “SSH”, “ALL ICMP”, “HTTP”, “HTTPS”, and “Custom TCP Rule” with Port=8000.

## Create and set up the instances

1. Go to Compute → Instances and click on “Launch Instance”. Set the parameters:
   * **Instance Name** = `trait-curation` (or anything else)
   * **Count** = 2 (or however many you need). If you choose more than one, a numeric suffix will be appended to the instance name, e.g. `trait-curation-1`, `trait-curation-2`, etc.
   * **Source** = `Ubuntu18.04LTS`
   * **Flavor** = `s1.large` (4 VCPUs, 8 GB RAM, 60 GB total disk)
   * **Networks** = add the default one (created with the project)
   * **Security Groups** = use the “Web app” one created during the set-up stage.
   * **Key Pair** = choose the key pair of **one** of the collaborators who will need to have access to the instance. Only one key pair can be chosen at this stage.
1. For each of the instances, click Actions → Associate floating IP.

## SSH into an instance and set up

Use the command `ssh ubuntu@${FLOATING_IP}`, substituting the actual floating IP of the instance you're trying to get into.

Configure SSH access for other collaborators by adding their public keys to `~/.ssh/authorized_keys`. They will also need to use the `ubuntu` user, which has root privileges.
65.552632
185
0.715777
eng_Latn
0.987964
3ef881f78eec9ccf8613e78d2e672167f842ed9c
2,286
md
Markdown
README.md
toyama0919/embulk-filter-azure_computer_vision_api
c25f86a0e7ca546b330577a8c3e1941623a3bdcc
[ "MIT" ]
1
2018-04-07T10:55:46.000Z
2018-04-07T10:55:46.000Z
README.md
toyama0919/embulk-filter-azure_computer_vision_api
c25f86a0e7ca546b330577a8c3e1941623a3bdcc
[ "MIT" ]
null
null
null
README.md
toyama0919/embulk-filter-azure_computer_vision_api
c25f86a0e7ca546b330577a8c3e1941623a3bdcc
[ "MIT" ]
null
null
null
# Azure Computer Vision Api filter plugin for Embulk

[![Gem Version](https://badge.fury.io/rb/embulk-filter-azure_computer_vision_api.svg)](http://badge.fury.io/rb/embulk-filter-azure_computer_vision_api)

## Overview

* **Plugin type**: filter

## Configuration

- **api_type**: which API to call, one of `ocr`, `analyze`, `tag`, or `describe` (string, required)
- **out_key_name**: name of the output column (string, required)
- **image_path_key_name**: name of the column holding the image path (string, required)
- **params**: extra request parameters (hash, default: {})
- **delay**: delay between requests (integer, default: 0)
- **retry_wait**: wait before retrying (integer, default: 10)
- **subscription_key**: Azure subscription key (string, required)

## Example

### OCR (text recognition)

```yaml
- type: azure_computer_vision_api
  api_type: ocr
  image_path_key_name: {{ image_path_key_name }}
  out_key_name: azure_text
  params:
    language: "ja"
    detectOrientation: true
  subscription_key: {{ env.AZURE_COMPUTER_VISION_SUBSCRIPTION_KEY }}
```

### analyze (Categories, Tags, Description, Faces, ImageType, Color, Adult)

```yaml
- type: azure_computer_vision_api
  api_type: analyze
  image_path_key_name: {{ image_path_key_name }}
  out_key_name: azure_analyze
  params:
    visualFeatures: "Categories,Tags,Description,Faces,ImageType,Color,Adult"
    language: en
  subscription_key: {{ env.AZURE_COMPUTER_VISION_SUBSCRIPTION_KEY }}
```

### tag

```yaml
- type: azure_computer_vision_api
  api_type: tag
  image_path_key_name: {{ image_path_key_name }}
  out_key_name: azure_tag
  subscription_key: {{ env.AZURE_COMPUTER_VISION_SUBSCRIPTION_KEY }}
```

### describe

```yaml
- type: azure_computer_vision_api
  api_type: describe
  image_path_key_name: {{ image_path_key_name }}
  out_key_name: azure_describe
  subscription_key: {{ env.AZURE_COMPUTER_VISION_SUBSCRIPTION_KEY }}
```

## Reference

[Computer Vision—Image Processing and Analytics \| Microsoft Azure](https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/)

[Microsoft Cognitive Services \- Documentation](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api/documentation)

[Cognitive Services APIs Reference](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa)

## Build

```
$ rake
```
28.936709
154
0.744532
yue_Hant
0.332846
3ef94803be061dc9caac9da3db4962863054529f
490
md
Markdown
c9/ex/readme.md
fATwaer/APUE
148c4aaba9d61e00e11dfb86e2e5bfedd700fe0a
[ "MIT" ]
4
2018-10-20T15:01:36.000Z
2020-09-09T17:21:37.000Z
c9/ex/readme.md
fATwaer/APUE
148c4aaba9d61e00e11dfb86e2e5bfedd700fe0a
[ "MIT" ]
null
null
null
c9/ex/readme.md
fATwaer/APUE
148c4aaba9d61e00e11dfb86e2e5bfedd700fe0a
[ "MIT" ]
null
null
null
```
[moonlight@ArchLinux ex]$ ./a.out
parent: pid = 5192, ppid = 3405, pgrp = 5192, sid = 3405, tpgrp = 5192
child:  pid = 5193, ppid = 5192, pgrp = 5192, sid = 3405, tpgrp = 5192
new pg: pid = 5193, ppid = 5192, pgrp = 5193, sid = 5193, tpgrp = -1
```

According to the values of `sid` and `pgrp`, the child process becomes a session leader and a process-group leader. The new session leader has no controlling terminal: `tcgetpgrp()` returns -1, which is why it prints `tpgrp = -1`.
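The C source of the APUE exercise is not included in this note; as a rough, Unix-only Python equivalent of the same printout (a sketch: `os.setsid()` succeeds in the child precisely because the child is not already a process-group leader, and it detaches the process from its controlling terminal):

```python
import os, sys

def report(tag):
    try:
        tpgrp = os.tcgetpgrp(sys.stdout.fileno())
    except OSError:              # no controlling terminal any more
        tpgrp = -1
    print(f"{tag}: pid = {os.getpid()}, ppid = {os.getppid()}, "
          f"pgrp = {os.getpgrp()}, sid = {os.getsid(0)}, tpgrp = {tpgrp}")

report("parent")
pid = os.fork()
if pid == 0:                     # child
    report("child")
    os.setsid()                  # new session: becomes session + group leader
    report("new pg")
else:
    os.wait()
```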
70
227
0.667347
eng_Latn
0.892236
3efb52c682a55128140227338342a8a3abf6907e
1,664
md
Markdown
tests/Secrets.md
chmreid/centillion
91e64330ad2881ff19253d54992ae989bc3a67ff
[ "MIT" ]
6
2019-03-09T18:32:55.000Z
2020-06-29T15:38:36.000Z
tests/Secrets.md
chmreid/centillion
91e64330ad2881ff19253d54992ae989bc3a67ff
[ "MIT" ]
20
2020-02-27T06:54:22.000Z
2020-07-11T21:38:06.000Z
tests/Secrets.md
chmreid/centillion
91e64330ad2881ff19253d54992ae989bc3a67ff
[ "MIT" ]
2
2019-08-20T01:49:15.000Z
2020-01-19T09:25:58.000Z
Contents of `secrets.tar.gz`:

- `credentials.json` - file containing OAuth key/token for `[email protected]` Google account (used for tests)
- `secrets.py` - python file containing variables with Github and Disqus API tokens (used for tests)
    - `GITHUB_OAUTH_CLIENT_ID` (access protection)
    - `GITHUB_OAUTH_CLIENT_SECRET` (access protection)
    - `GITHUB_TOKEN` (github)
    - `DISQUS_TOKEN` (disqus)
    - `SECRET_KEY` (flask)

Instructions for encrypting/decrypting can be found in the Travis documentation here:

<https://docs.travis-ci.com/user/encrypting-files/#manual-encryption>

**Step 1: Encrypt Secret File**

To encrypt a file, pick a passphrase and use OpenSSL to encrypt the file with that passphrase:

```
openssl aes-256-cbc -k "<your password>" -in secrets.tar.gz -out secrets.tar.gz.enc
```

**Step 2: Add password to repository's Travis settings.**

Log in to Travis and navigate to the project. Modify the settings of the repository. There is a section where you can add environment variables. Add a new environment variable named `credentials_password` with the value of `<your password>` (the same password used in the above command).

Now you can add the following command in your `.travis.yml` file to decrypt the secrets file:

```
before_install:
  - ...
  - cd tests/
  - openssl aes-256-cbc -k "$credentials_password" -in secrets.tar.gz.enc -out secrets.tar.gz -d
  - ...
```

Once you've added the encrypted secrets file (don't add the original, unencrypted secrets file!), you can commit it along with the `.travis.yml` file, and Travis should be able to access the secrets using the secret password provided via the environment variable.
30.814815
94
0.751202
eng_Latn
0.963169
3efe8738527e2cf46a32217151ed6a09d0374882
208
md
Markdown
environment-configs/README.md
punitrathore/live-coding-repos
d2457ba974b7ea731cf61bae68b019aec64f4490
[ "MIT" ]
null
null
null
environment-configs/README.md
punitrathore/live-coding-repos
d2457ba974b7ea731cf61bae68b019aec64f4490
[ "MIT" ]
null
null
null
environment-configs/README.md
punitrathore/live-coding-repos
d2457ba974b7ea731cf61bae68b019aec64f4490
[ "MIT" ]
null
null
null
# Contacts Snapshot starter project

1. Create your database: `psql -f schema.sql`
1. Add data to the database: `psql -f contacts.sql`
1. Install your dependencies: `npm install`
1. Run the server: `nodemon`
29.714286
51
0.740385
eng_Latn
0.818452
3efe9a56a87e5dd308931bf938a92ec9c1342056
2,995
md
Markdown
_posts/2007/2007-06-08-Deliver-Marrow-to-Amoy.md
youngbug/youngbug.github.io
8ef59830d26a1c3203bfc23f5a87448520e9a3da
[ "Apache-2.0" ]
null
null
null
_posts/2007/2007-06-08-Deliver-Marrow-to-Amoy.md
youngbug/youngbug.github.io
8ef59830d26a1c3203bfc23f5a87448520e9a3da
[ "Apache-2.0" ]
null
null
null
_posts/2007/2007-06-08-Deliver-Marrow-to-Amoy.md
youngbug/youngbug.github.io
8ef59830d26a1c3203bfc23f5a87448520e9a3da
[ "Apache-2.0" ]
null
null
null
--- layout: post title: 厦门送髓 author: Zhao Yang([email protected]) time: 2007年06月08日 location: 北京 pulished: true category: Volunteer tags: [volunteer,blog] excerpt_separator: <!--more--> --- > **FOREWORD:** *这篇日志是2007年去厦门运送造血干细胞后写的记录,当时发在了和讯博客和北京市红十字会造血干细胞捐献志愿者之家的论坛上,文笔虽然很差,但是和讯当时还是把这篇日志推送到首页头条,是那几天的最热文章。不过时间过去了十多年,和讯博客和志愿者之家论坛都已经不在了,不过好在和讯数据库还保存着当时的原始数据,向和讯申请下载了博客的历史数据,重新发出来。* 2007年6月6日有幸护送中华骨髓库北京分库第57例供者第二天采集的造血干细胞去厦门,一路谨慎小心,晚上将干细胞安全送到患者医生手中。下面将6月6日送干细胞的经历发出来,跟大家分享一下,给将来有机会护送干细胞的战友们做个参考。 <!--more--> 上午11点赶到北三环的造血干细胞捐献者资料库北京管理中心,李涛向我交代了送骨髓的注意事项。快十二点的时候在空军总医院采集的造血干细胞交到了我手中。箱子不重但责任重大,提在手里还是沉甸甸的。出发前王凯又把送干细胞的注意事项给我说了一遍。 ![img](/assets/blog_image/2007/20070608001-bjredcross-blood-center.jpg) 12点坐车赶到首都机场的二号航站楼。换登机牌的时候才发现走错了航站楼,一路小跑赶到了一号航站楼。过安检的时候,拿出造血干细胞免过安检的申请,他们好象是第一遇到这种情况,还跑去请示了一下领导。请示的结果是可以不过X光机,不过他们要开箱子看一下。我打开箱子让他们看了一眼血袋,那会儿对什么都特别敏感,生怕出什么问题,不过还是顺利地通过了安检。 > **NOTE:** *造血干细胞经过X光照射后,会丧失移植效果,航空运输造血干细胞时中华骨髓库的省级管理中心会开出一个运输造血干细胞申请造血干细胞免X光机检查或者申请手工安检的介绍信。在当时中国红十字会总会也曾经跟民航部门联合发过文,来给与航空运输造血干细胞提供便利。当时2007年6月份,中华骨髓库只有几百例捐献的案例,北京地区也只有57例,经民航送往外地的案例就更少了,很多民航机场并没有什么经验,不过一般经过请示也都能顺利通过安检。经过这15年的发展,到2022年3月中华骨髓库已经实现了超过12800例的捐献,北京分库也超过了500例的捐献,中华骨髓库的工作人员和志愿者坐飞机运输造血干细胞也已经是稀松平常的事情,民航局和红十字会也有了SOP,申请造血干细胞免过X光机检查的程序也更加高效和规范。* ![img](/assets/blog_image/2007/20070608003-captial-airport.jpg) 当日通过的安检通道 侯机的时候一个人抱着箱子做在那等,箱子上贴着一个大大的中华骨髓库的标志挺引人注目的,不时听见有人在旁边偷偷议论说“这人是送骨髓的”。候机的时候跟厦门那边的接收骨髓的鹿大夫联系了一下,把航班号和起飞时间告诉他。 ![img](/assets/blog_image/2007/20070608004-my-bag-my-box.jpg) 跟随我上学的挎包和骨髓库的冷链箱 登机的时候我告诉乘务长,骨髓要送到厦门做移植,希望机长把这个情况告诉地面,让飞机准时起飞。下午2点40飞机正点起飞,飞机飞平稳后,一个男乘务员跑过来兴奋地给我说,地面真的让我们正点起飞了,刚才还让一架国航的飞机等我们呢。我又让乘务长帮我在前排找个位置,这样下飞机的时候会比较方便。有个乘务员超级帮忙,不但帮我在前面找到一个位置,还怕太阳光射的箱子上,劝说我周围的几个乘客把窗户上的遮阳板拉下来。飞机快到厦门的时候,一个乘务员跑过来问我,“骨髓是什么样子的啊?量有多少啊?”我告诉她,跟血差不多,量就跟一瓶墨水那么多,然后又给她说了一点骨髓血干细胞和外周血造血干细胞的事情。 下午5点10分飞机在厦门机场降落,跟乘务长和诸乘务员说声谢谢后,我抱着箱子跑出去,路上一个安检员竟然朝我敬礼。走出机场也没看见接我的鹿医生,打电话过去问了一下,五点多是厦门交通拥堵的高峰,他们医院派过来的救护车堵在路上了。等了十多分钟,看见了一辆救护车开了过来,车上下来一个人朝我挥手喊出我的名字,这位应该就是鹿大夫了。 ![img](/assets/blog_image/2007/20070608005-sign.jpg) 我和患者医院大夫签造血干细胞交接单 ![img](/assets/blog_image/2007/20070608006-handover.jpg) 我将从北京运来的造血干细胞交给患者的主治大夫手中 五点五十我们来到血液科病房,我和鹿医生办完交接,把那袋移交给医院,并跟鹿医生一起把干细胞送到层流病房。一直守在病房门口的患者父亲,见我送来了干细胞,高兴的和我握手,不停的说谢谢,辛苦了。最后亲眼看见鹿大夫把干细胞交到层流病房里的医生手中时,心离绷着的那跟弦才松了下来。 ![img](/assets/blog_image/2007/20070608007-delivery-receipt.jpg) 造血干细胞交接单,隐去了患者和捐献者的信息 > **NOTE:** *需要进行造血干细胞移植的患者,一般患者体重大的需要的造血干细胞就多一些,体重小的需要的造血干细胞数量就少一些,骨髓库和采集医院会根据捐献造血干细胞的志愿者和患者的体重等因素制定采集计划,有的时候时候志愿者捐献一次就达到患者移植所需的细胞数了,有的时候志愿者需要连续两天分别采集两次,才能达到患者需要的细胞计数。这一次我就是负责运送志愿者第二天采集捐献的造血干细胞到患者医院,头一天因为航班延误,造血干细胞很晚才送到患者所在的医院,我这次比较顺利六点多就送到了。* > *造血干细胞送进无菌仓的时候,见到了患者的父亲,厦门很热,穿了一件很旧的短袖,应该是孩子长期在医院治疗,父亲花了大量的财力和精力,人感觉也很憔悴,但是看到第二天的造血干细胞及时送到,脸上也露出了笑容。非亲缘的造血干细胞移植,要花费大量的费用,顺利的几十万,病情复杂的要上百万,那个时候一般的家庭如果有人要做骨髓移植,基本上就是要卖房子了。真是希望在花费了这么多费用,患者吃了这么多苦之后,接受造血干细胞移植后能够顺利出仓,早日康复。* > *患者进行造血干细胞移植,也就是俗称的骨髓移植时,需要提前进入无菌仓,首先清除掉患者自身的造血功能,这时患者就进入了不可逆的状态,患者自己已经没有了造血功能,必须等待输注志愿者捐献的造血干细胞完成移植。这时如果承诺捐献的志愿者出现任何意外,比如反悔,这时对患者的打击都是致命的。这时一般会紧急动员其他合适的志愿者进行捐献。所以这里也希望骨髓库的志愿者们,加入前深思熟虑,做出捐献承诺的准捐献者,能够义无反顾。*
50.762712
355
0.877129
yue_Hant
0.317895
3efeaff67d3fbbd62125f537113d9cf567de85a1
346
md
Markdown
src/RiotQuest/Codegen/out/MatchParticipantFrameList.md
MasterShan/RiotQuest
292bc31bde926dad6dbfd65ab1a516d9b70ea480
[ "MIT" ]
12
2019-04-25T02:52:30.000Z
2019-11-02T21:51:55.000Z
src/RiotQuest/Codegen/out/MatchParticipantFrameList.md
MasterShan/RiotQuest
292bc31bde926dad6dbfd65ab1a516d9b70ea480
[ "MIT" ]
28
2019-07-08T16:08:07.000Z
2019-11-06T18:13:01.000Z
src/RiotQuest/Codegen/out/MatchParticipantFrameList.md
MasterShan/RiotQuest
292bc31bde926dad6dbfd65ab1a516d9b70ea480
[ "MIT" ]
3
2019-08-27T13:11:48.000Z
2020-05-06T20:56:14.000Z
---
title: MatchParticipantFrameList
extends: _layouts.documentation
section: content
---

# MatchParticipantFrameList

This page describes the methods for the MatchParticipantFrameList Collection.

[Collection Source Code](https://github.com/supergrecko/RiotQuest/blob/master/src/RiotQuest/Components/Collections/MatchParticipantFrameList.php)
26.615385
145
0.83815
yue_Hant
0.309062
3eff2a512807e58a02d2f99379e02903d0cdddad
13,191
md
Markdown
README.md
rtiangha/CowPrompt
a1b67bf0c77518942330edd950b602a5ac2116cb
[ "Unlicense" ]
null
null
null
README.md
rtiangha/CowPrompt
a1b67bf0c77518942330edd950b602a5ac2116cb
[ "Unlicense" ]
null
null
null
README.md
rtiangha/CowPrompt
a1b67bf0c77518942330edd950b602a5ac2116cb
[ "Unlicense" ]
null
null
null
# CowPrompt **CowPrompt** is a simple wrapper script for `xcowsay` (or `cowsay`) and `fortune` that can be used to display any message contained in a single, specific fortune data file to the screen. It comes in two versions. `cowprompt` displays prompts to any screen using an X Windows environment, and `cowprompt-cli` displays prompts to terminal screens. ## What is it for? The main intent for CowPrompt is to display writing or creative prompts for writers taking part in self-challenges such as [NaNoWriMo](https://nanowrimo.org), but can also be used for any purpose where prompts or messages need to be displayed onto the screen on demand. ## Why does CowPrompt need to exist? Why not just use `xcowfortune` which is also installed with `xcowsay`, or use `xcowsay's` existing functionality? You could do that, but it would also pull from every fortune data file installed on the system by default, rather than just one (which may or may not be what you want). You could also pipe the output of fortune (or a specific fortune data file) through `xcowsay` on the command line. However, I wanted a simple way to be able to: 1. Configure the command to use only one fortune data file rather than all of them in *specific* cases (in this case, a fortune data file that specifically contained various creative prompts to try and help me get through writer's block), and more importantly, 2. Be able to invoke the command through a launcher on the desktop, application menu, or quick launch panel rather than through the command line, hence the desire for a wrapper script. I'd essentially have to write my own script to do the above (the alternative being to modify xcowfortune to take an additional argument to specify a specific fortune data file), so I made one for myself and decided to share it. It needed a fancy name, and thus, `cowprompt` was born. ## How it Works By default, CowPrompt displays quotes from a combined set of **Brian Eno's** [Oblique Strategies](https://en.wikipedia.org/wiki/Oblique_Strategies) cards (specifically, Editions 1-4 with duplicates removed, which is [available for viewing online](http://www.rtqe.net/ObliqueStrategies/Edition1-3.html)) in fortune data file format, but CowPrompt can be configured to use any specific fortune data file. Various display options for `cowprompt` including display position and image file can be configured through editing the `cowprompt.conf` file and the `cowprompt` wrapper executable script. Display options for `cowprompt-cli` can be modified using the standard `cowsay` command line arguments by editing that script directly. ## Dependencies `cowprompt` can work on any Linux or Unix based system running an X Windows environment that has [xcowsay](https://github.com/nickg/xcowsay) and [fortune](https://en.wikipedia.org/wiki/Fortune_(Unix)) (or [fortune-mod](https://github.com/shlomif/fortune-mod)) available and installed on the system. `cowprompt-cli` can work on any terminal system that has fortune (or fortune-mod) installed. Both CowPrompt packages also require the `cowprompt-data` to be installed alongside. However, `cowprompt-data` can be installed on its own in case you only want access to the data sets to use with `fortune`. ## How to Install CowPrompt can be installed via binary package (.deb and .rpm provided via the [Releases](https://github.com/rtiangha/CowPrompt/Releases) section), or by copying the various files in the `unix` directory to their respective places onto the file system. 1. 
First, ensure that the software requirements (`xcowsay` and `fortune` for the graphical version, `cowsay` and `fortune` for the command line version) are installed first. Some examples on how to do so on various distributions are provided below. ### Debian/Ubuntu `sudo apt-get install xcowsay fortune-mod` OR/AND `sudo apt-get install cowsay fortune-mod` ### Fedora/RHEL/CentOS `sudo dnf install xcowsay fortune-mod` OR/AND `sudo dnf install cowsay fortune-mod` ### openSUSE `sudo zypper install xcowsay fortune` OR/AND `sudo zypper install cowsay fortune` ### Arch Linux `cowsay` and `fortune-mod` are available through the standard repositories, while `xcowsay` is available through the [AUR](https://aur.archlinux.org/packages/xcowsay/). `sudo pacman -S cowsay fortune-mod` If you're using something like `yay` to access the AUR, you can install `xcowsay` with `yay -S xcowprompt` Else, download xcowsay's [PKGBUILD](https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=xcowsay) file, run `makepkg` and manually install it. ### Void Linux `sudo xbps-install xcowsay fortune-mod` OR/AND `sudo xbps-install cowsay fortune-mod` 2. Next, install the **CowPrompt** and **cowprompt-data** packages. ### DEB-based Distributions (ex. Debian/Ubuntu, etc.) `sudo dpkg -i cowprompt_<VERSION>.deb cowprompt-data_<VERSION>.deb` OR/AND `sudo dpkg -i cowprompt-cli_<VERSION>.deb cowprompt-data_<VERSION>.deb` If it complains that you're missing dependencies because you forgot to install `xcowsay` and `fortune-mod`first, simply run: `sudo apt-get install -f` and it will automatically fetch any missing dependencies. ### RPM-based Distributions (ex. Fedora/RHEL/CentOS/openSUSE, etc.) `sudo rpm -ivh cowprompt-<VERSION>.noarch.rpm cowprompt-data-<VERSION>.noarch.rpm` OR/AND `sudo rpm -ivh cowprompt-cli-<VERSION>.noarch.rpm cowprompt-data-<VERSION>.noarch.rpm` ### Arch Linux You can install CowPrompt by downloading the binary .zst files from the [Releases](http://github.com/rtiangha/CowPrompt/releases) page using `pacman` or by downloading the PKGBUILD file and manually creating the packages. #### Install via `pacman`: `sudo pacman -U cowprompt-<VERSION>-any.pkg.tar.zst cowprompt-data-<VERSION>-any.pkg.tar.rst` OR/AND `sudo pacman -U cowprommpt-cli-<VERSION>-any.pkg.tar.zst cowprompt-data-<VERSION>-any.pkg.tar.rst` #### Install with `PKGBUILD` Download a copy of the [PKGBUILD](https://github.com/rtiangha/CowPrompt/blob/main/build-arch/PKGBUILD) file from the [build-arch](https://github.com/rtiangha/CowPrompt/tree/main/build-arch) directory, run: `makepkg` to create the packages, then use the `pacman` instructions above to install the packages. ### Void Linux You can use either the [XBPS source packages collection](https://github.com/void-linux/void-packages/) to build the packages, or the instructions in the **Other Distributions** section below. To use the XBPS source packages collection: Install `xtools`: `sudo xbps-install xtools` Clone the XBPS source packages collection directory: `git clone https://github.com/void-linux/void-packages.git` Enter the project directory: `cd void-packages` Install the bootstrap packages: `./xbps-src binary-bootstrap` Copy the contents of `build-void` into `void-packages/srcpkgs`. 
For example, assuming that the `CowPrompt` and `void-packages` project directories are on the same level: `cp -ar ../CowPrompt/build-void/* srcpkgs/` Build the packages: `./xbps-src pkg CowPrompt` Install the packages using `xi` `xi CowPrompt CowPrompt-cli` ### Other Distributions Ensure that `xcowsay` (or/and `cowsay`) and `fortune` are installed in your system (either through your distribution's package manager or by manually compiling it). Then, you can use the included `Makefile` to install/uninstall the various pieces of CowPrompt. 1. Edit the various Configuration Options in the `Makefile` to point to the proper paths in your file system. For example, the location where `fortune` stores its data files can vary depending on the distribution (ex. `/usr/share/games/forunes` vs. `/usr/share/fortune`). 2. To install: - Install cowprompt: `make install-cowprompt` - Install cowprompt-cli: `make install-cowprompt-cli` - Install cowprompt-data: `make install-cowprompt-data` - Install everything: `make install` 3. To uninstall: - Uninstall cowprompt: `make uninstall-cowprompt` - Uninstall cowprompt-cli: `make uninstall-cowprompt-cli` - Uninstall cowprompt-data: `make uninstall-cowprompt-data` - Uninstall everything: `make uninstall` Alternatively, copy the files in the `unix` directory of the CowPrompt project page to their equivalent places in your distribution's file system. ## How to Configure Options to change how/what CowPrompt displays are available via editing the `/etc/cowprompt.conf` file and/or the `/usr/bin/cowprompt` wrapper script for the graphical version, or the `/usr/bin/cowprompt-cli` wrapper script for the command line version. Instructions and examples are included within the files. ## How to Use For the graphical version, to display a random prompt, simply click on the CowPrompt application launcher in your window manager's application menu. You can also invoke it on the command line by typing `cowprompt` in a terminal. To make access easier, you may want to consider adding shortcuts to your Desktop or Quick Launch panel. For the command line version, type `cowprompt-cli` in a terminal window. You can also [alias](https://www.computerworld.com/article/2598087/how-to-use-aliases-in-linux-shell-commands.html) the command to something shorter to make access easier. ## Included Prompts **NOTE**: CowPrompt now requires the `cowprompt-data` package to be installed as well. By default, CowPrompt is configured by default to use the `Oblique` data set, which includes all of the strategies included in Editions 1 through 4 of Oblique Strategies, minus the duplicates. Included in the CowPrompt package are the following data files: ### Oblique Strategies - `Oblique`: All of the Oblique Strategies from Edition 1-4 - `Oblique-ed1`: Oblique Strategies, Edition 1 - `Oblique-ed2`: Oblique Strategies, Edition 2 - `Oblique-ed3`: Oblique Strategies, Edition 3 - `Oblique-ed4`: Oblique Strategies, Edtion 4 ### [Other Strategies](http://www.rtqe.net/ObliqueStrategies/EditionOther.html) - `Acute`: [The Acute Strategies](http://www.rtqe.net/ObliqueStrategies/Acute.html) (Strategies submitted by Oblique Strategies fans) - `Diary`: Strategies included in Brian Eno's diary - `Signal`: Strategies published in *Signal: A whole earth catalog - Communication Tools for the Information Age (ed. Kevin Kelly, fwd by Stewart Brand, 1988, Harmony Books, P. 17)* ### Everything - `Complete`: A data file that combines all of the data sets above. 
To switch data files, edit the `/usr/bin/cowprompt` file and change the `DECKNAME` variable to the name of the deck that you want. For example:

    DECKNAME=Complete

## How to Create New Prompts

CowPrompt can pull from any message contained in a valid fortune data file installed on the system. There are many sources for fortune data files. Your operating system distribution may have additional ones that you can install outside of the default set, and you can find many more on the internet that other people have made as well. Creating your own is easy too; for example, [here is a tutorial](http://bradthemad.org/tech/notes/fortune_makefile.php).

Once you've obtained or created your own custom fortune text and `.dat` files, make sure to copy them to where the other fortune data files live on your system (usually in `/usr/share/games/fortunes`, but your distribution may vary) and then edit the `/usr/bin/cowprompt` wrapper script to use them instead of the default `Oblique` file (for example, replace `Oblique` with `Oblique-ed3` to specifically pull from Oblique Strategies, 3rd Edition).

## CREDITS

Special thanks to Nick Gasson for creating [xcowsay](http://www.doof.me.uk/xcowsay) as a super-simple way to output text to a screen in a graphical way, to Tony Monroe for creating the original [cowsay](https://github.com/tnalpgge/rank-amateur-cowsay) terminal program, and to [The Oblique Strategies](http://www.rtqe.net/ObliqueStrategies/) website for making the text of Editions 1-4 and more available for [online viewing](http://www.rtqe.net/ObliqueStrategies/Edition1-3.html).

## LICENSE

```
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.

In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <http://unlicense.org/>
```
# docker-alpine-mongodb
![Yaktocat Pose](https://octodex.github.com/images/yaktocat.png)
# Sample List

This directory contains the following samples.

|name|description|exec|
|----|------------|----|
||||
---
layout: post
title: 'Comparing Kubernetes to Pivotal Cloud Foundry — A Developer’s Perspective'
description: What are the benefits and drawbacks of each platform? Is this even a valid comparison?
image: assets/images/apps-manager.png
type: article
source: https://medium.com/@odedia/comparing-kubernetes-to-pivotal-cloud-foundry-a-developers-perspective-6d40a911f257
---

* * *

_Author’s disclosure: I’m currently a Sr. Platform Architect at Pivotal, however I’ve written this article almost a whole year before joining the company. It is based on my own unsolicited experience working with the platform as a vendor for a third party._

* * *

I’ve been developing on Pivotal Cloud Foundry for several years. Working with the Spring Boot stack, I was able to create CI/CD pipelines very easily and deployments were a breeze. I found it to be a truly agile platform (is that still a word in 2018?). On the other hand, Kubernetes is gaining serious traction, so I’ve spent a few weeks trying to get a grasp on the platform. I’m by no means as experienced working with Kubernetes as I am with Cloud Foundry, so my observations are strictly those of a novice to the platform.

In this article I will try to explain the differences between the two platforms as I see them from a _developer’s_ perspective. There will not be a lot of internal under-the-hood architecture diagrams here; this is purely from a user experience point of view, specifically for a Java developer that is new to either platform.

Such a comparison is also inherently subjective, so although I will try to provide an objective comparison of each aspect, I will also give my personal thoughts on which aspect works best for _me_. These may or may not be conclusions you would agree with.

* * *

### Overview

#### Cloud Foundry

Cloud Foundry is a cloud-agnostic platform-as-a-service solution. The open source Cloud Foundry is developed and supported by the [cloud foundry foundation](https://www.cloudfoundry.org), which includes the likes of Pivotal, Dell EMC, IBM, VMWare, and [many others](https://www.cloudfoundry.org/members/). There are enterprise versions developed based on the open source project, such as IBM Bluemix and Pivotal Cloud Foundry (PCF for short).

#### Kubernetes

Kubernetes is an open source cloud platform that originated from Google’s [Project Borg](http://blog.kubernetes.io/2015/04/borg-predecessor-to-kubernetes.html). It is sponsored by the [Cloud Native Computing Foundation](https://www.cncf.io/), whose members include top names of the industry such as AWS, Azure, Intel, IBM, RedHat, Pivotal and many others.

Kubernetes is first and foremost a _container runtime_. Although not limited to it, it is mostly used to run docker containers. There are several solutions that offer a PaaS experience on top of Kubernetes, such as [RedHat OpenShift](https://www.openshift.com/).

### Similarities

* Both solutions use the idea of containers to isolate your application from the rest of the system.
* Both Cloud Foundry and Kubernetes are designed to let you run either on public cloud infrastructure (AWS, Azure, GCP etc.), or on-prem using solutions such as VMWare vSphere.
* Both solutions offer the ability to run in hybrid/multi-cloud environments, allowing you to spread your availability among different cloud providers and even split the workload between on-prem and public cloud.
* Starting with the latest release of Pivotal Cloud Foundry, _both_ solutions support Kubernetes as a generic container runtime. More on that below.
### PaaS vs IaaS+

#### Cloud Foundry

First and foremost, Cloud Foundry is a PaaS. I don’t feel Kubernetes fits this description. Some regard it as an _IaaS+_. Even Kubernetes’ own documentation describes itself as “[_not_ a traditional, all-inclusive PaaS](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not)”.

As a developer, the biggest differentiator for me is how Cloud Foundry takes a very Spring-like, opinionated approach to development, deployments and management. If you ever used Spring Boot, you know that one of its strengths is the ability to auto-configure itself just by looking at your maven/gradle dependencies. For example, if you have a dependency on the mysql JDBC driver, your Spring Data framework would auto-configure to use mysql. If no driver is provided, it would fall back to the h2 in-memory database. As we’ll see in this article, PCF seems to take a similar approach to application deployment and service binding.

#### Kubernetes

Kubernetes takes a different approach. It is inherently a generic container runtime that knows very little about the inner workings of your application. Its main purpose is to provide a simple infrastructure solution to run your container; everything else is up to you as a developer.

### Supported Containers

#### Kubernetes

Kubernetes runs Docker containers. As such, it supports a very wide range of applications, from a message broker to a Redis database to your own custom java application to anything you can find on [Docker Hub](https://hub.docker.com).

Anyone who had a chance to write a Dockerfile knows it can be either a trivial task of writing a few lines of descriptor code, or it can get complicated rather quickly. Here’s a simple example I [pulled off of github](https://github.com/kstaken/dockerfile-examples/blob/master/nodejs-mongodb/Dockerfile), and that’s a fairly simple example:

<script src="https://gist.github.com/odedia/ce6d76d1766235c24331ac4a60646ef0.js"></script>

This example should not seem intimidating to the average developer, but it does immediately show you there is a learning curve here. Since Docker is a generic container solution, it can run almost anything. It is your job as the developer to define how the operating system inside the container will execute your code. It is very powerful, but with great power comes great responsibility.

#### Cloud Foundry

Cloud Foundry takes a very opinionated approach to containers. It uses a container solution called _garden_. The original container in earlier versions of PCF was called _warden_, which actually predates docker itself.

> Cloud Foundry _itself_ actually predates Kubernetes. The first release was in 2011, while Kubernetes has been available since 2014.

More important than the container runtime being used is _how_ you create a container. Let’s take the example of a developer that needs to host a Spring Boot Java application.

With Docker, you need to define a Dockerfile to support running a java-based application. You can define this container in many different ways. You can choose different base operating systems, different JDK versions from different providers, you can expose different ports and make security assumptions on how your container would be available. There is no standard for what a java-based Spring Boot application container looks like.
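To make that concrete, here is a minimal sketch of just one of the many possible Dockerfiles for such an app. The base image, jar name and port are illustrative assumptions on my part, not a standard:

```dockerfile
# One of many possible ways to containerize a Spring Boot fat jar.
# Base image, paths, and port are arbitrary choices for illustration.
FROM openjdk:8-jre-alpine

# Copy the build artifact into the image
COPY target/myapp.jar /app/myapp.jar

# The port the app listens on (Spring Boot's default)
EXPOSE 8080

# Launch the application
CMD ["java", "-jar", "/app/myapp.jar"]
```

Another developer could just as reasonably pick a different base OS, JDK vendor or port, which is exactly the lack of a standard being described here.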
In Cloud Foundry, you have one baseline _buildpack_ for all java-based applications, and this buildpack is provided by the vendor. A buildpack is a template for creating an application runtime in a given language. Buildpacks are [managed by cloud foundry itself](https://github.com/cloudfoundry/java-buildpack).

Cloud Foundry takes the guesswork that is part of defining a container out of the hands of the developer. It defines a “standard” for what a java-based container should look like, and all developers, devops teams and IT professionals can sync up on this template. You can rest assured that your container will run just like other containers provided by other developers, either in your existing cluster or if you’ll move to a public cloud solution tomorrow.

Of course, sometimes the baseline is not enough. For example — you might want to add your own self-signed SSL certificates to the buildpack. You can extend the base buildpack in these scenarios, but that still allows you to use a shared default as the baseline.

Continuing with the opinionated approach, Cloud Foundry can identify which buildpack to use _automatically_, based on the contents of the provided build artifact. This artifact might be a jar file, a php folder, a nodejs folder, a .NET executable etc. Once identified, Cloud Foundry will create the garden container for you.

* * *

All this means that with PCF, your build artifact is your native deployment artifact, while in Kubernetes your build artifact is a docker image. With Kubernetes, you need to define the template for this docker image yourself in a Dockerfile, while in PCF you get this template automatically from a buildpack.

### Management Console

#### Cloud Foundry

PCF separates the web dashboard into two parts for two separate target audiences.

* Ops Manager is targeted at the IT professional that is responsible for setting up the virtual machines or hardware that will be used to create the PCF cluster.

![](https://cdn-images-1.medium.com/max/1600/1*ROb-p7HjPF6h-50KPNpqJQ.png)

* Apps Manager is targeted at the developer that is responsible for pushing application code to testing or production environments. The developer is completely unaware of the underlying infrastructure that runs the PCF cluster. All they can really see are the quotas assigned to their organization, such as memory limits.

![](https://cdn-images-1.medium.com/max/1600/1*KlGrkdMr7INimpDiV-DNXA.png)

#### Kubernetes

Kubernetes takes a different approach. You get one dashboard to manage everything. Here’s a typical Kubernetes dashboard:

![](https://cdn-images-1.medium.com/max/2400/1*krDR3tEbmr369sY7nRQ_-A.png)

As you can see from the left-hand side, there is a lot of data to process here. You have access to persistent volumes, daemons, definition of roles, replication controllers etc. It’s hard to separate the developer’s needs from the IT needs. Some might tell you this is the same person in a Devops culture, and that’s a fair point. Still, in reality it is a more confusing paradigm compared to a simple application manager.

### Command Line Interface

#### Cloud Foundry

Cloud Foundry uses a command line interface called cf. It is a cli that lets you control all aspects of the developer interaction. Following in the footsteps of simplicity that you might have already noticed, the idea is to take an opinionated view of practically everything.
For example, if you are in a folder that contains a spring boot jar file called myapp.jar, you can deploy this application to PCF with the following command:

```
cf push myapp -p myapp.jar
```

That’s it! That’s all you need. PCF will look in the current working directory and find the jar executable. It will then upload the bits to the platform, where the java buildpack would create a container, calculate the required memory settings, deploy it to the currently logged-in org and space in PCF, and set a route based on the application name:

```
wabelhlp0655019:test odedia$ cf push myapp -p myapp.jar
Updating app myapp in org OdedShopen / space production as user…
OK
Uploading myapp…
Uploading app files from: /var/folders/_9/wrmt9t3915lczl7rf5spppl597l2l9/T/unzipped-app271943002
Uploading 977.4K, 148 files
Done uploading
OK
Starting app myapp in org OdedShopen / space production as user…
Downloading pcc_php_buildpack…
Downloading binary_buildpack…
Downloading python_buildpack…
Downloading staticfile_buildpack…
Downloading java_buildpack…
Downloaded binary_buildpack (61.6K)
Downloading ruby_buildpack…
Downloaded ruby_buildpack
Downloading nodejs_buildpack…
Downloaded pcc_php_buildpack (951.7K)
Downloading go_buildpack…
Downloaded staticfile_buildpack (7.7M)
Downloading ibm-websphere-liberty-buildpack…
Downloaded nodejs_buildpack (111.6M)
Downloaded ibm-websphere-liberty-buildpack (178.4M)
Downloaded java_buildpack (224.8M)
Downloading php_buildpack…
Downloading dotnet_core_buildpack…
Downloaded python_buildpack (341.6M)
Downloaded go_buildpack (415.1M)
Downloaded php_buildpack (341.7M)
Downloaded dotnet_core_buildpack (919.8M)
Creating container
Successfully created container
Downloading app package…
Downloaded app package (40.7M)
Staging…
-----> Java Buildpack Version: v3.18 | https://github.com/cloudfoundry/java-buildpack.git#841ecb2
-----> Downloading Open Jdk JRE 1.8.0_131 from https://java-buildpack.cloudfoundry.org/openjdk/trusty/x86_64/openjdk-1.8.0_131.tar.gz (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.1s)
-----> Downloading Open JDK Like Memory Calculator 2.0.2_RELEASE from https://java-buildpack.cloudfoundry.org/memory-calculator/trusty/x86_64/memory-calculator-2.0.2_RELEASE.tar.gz (found in cache)
       Memory Settings: -Xmx681574K -XX:MaxMetaspaceSize=104857K -Xss349K -Xms681574K -XX:MetaspaceSize=104857K
-----> Downloading Container Security Provider 1.5.0_RELEASE from https://java-buildpack.cloudfoundry.org/container-security-provider/container-security-provider-1.5.0_RELEASE.jar (found in cache)
-----> Downloading Spring Auto Reconfiguration 1.11.0_RELEASE from https://java-buildpack.cloudfoundry.org/auto-reconfiguration/auto-reconfiguration-1.11.0_RELEASE.jar (found in cache)
Exit status 0
Uploading droplet, build artifacts cache…
Uploading build artifacts cache…
Uploading droplet…
Staging complete
Uploaded build artifacts cache (109B)
Uploaded droplet (86.2M)
Uploading complete
Destroying container
Successfully destroyed container
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running
```

Although you can start with barely any intervention, this doesn’t mean you give up any control. You have a lot of customizations available in PCF. You can define your own routes, set the number of instances, max memory and disk space, environment variables etc.
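For instance, a few illustrative commands (the app name, domain and variable names below are placeholders I chose, not anything mandated by the platform):

```
# scale to 4 instances with 1 GB of memory each
cf scale myapp -i 4 -m 1G

# set an environment variable, then restage so it takes effect
cf set-env myapp PARAM1 PARAM1VALUE
cf restage myapp

# map an additional route to the app
cf map-route myapp mysite.com --hostname myapp
```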
All of this can be done in the cf cli or by having a manifest.yml file available as a parameter to the cf push command. A typical manifest.yml file can be as simple as the following:

```
applications:
- name: my-app
  memory: 512M
  instances: 2
  env:
    PARAM1: PARAM1VALUE
    PARAM2: PARAM2VALUE
```

The main takeaway is this: with PCF, provide the information you know, and the platform will imply the rest.

Cloud Foundry’s [haiku](https://twitter.com/onsijoe/status/598235841635360768?lang=en) is:

> Here’s my code <br>
> Run it on the cloud for me.<br>
> I don’t care how.

#### Kubernetes

In Kubernetes, you interact with the kubectl cli. The commands are not complicated at all, but there is still a higher learning curve from what I’ve experienced so far.

For starters, a basic assumption is that you have a private docker registry available and configured (unless you only plan to deploy images available on public registries such as docker hub). Once you have that registry up and running, you will need to push your Docker image to that registry. Now that the registry contains your image, you can initiate commands to kubectl to deploy the image. The Kubernetes documentation gives the example of starting up an nginx server:

```
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
deployment "nginx-app" created
```

The above command only spins up a kubernetes [_pod_](https://kubernetes.io/docs/concepts/workloads/pods/pod/) and runs the container. A pod is an abstraction that groups one or more containers to the same network ip and storage. It’s actually the smallest deployable unit available in Kubernetes. You can’t access a docker container directly, you only access its pod. Usually, a pod would contain a single docker container, but you can run more. For example, an application container might want to have some monitoring daemon container in the same pod.

In order to make the container accessible to other pods in the Kubernetes cluster, you need to wrap the pod with a _service_:

```
# expose a port through a service
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http
service "nginx-http" exposed
```

Your container is now accessible inside the kubernetes cluster, but it is still not exposed to the outside world. For that, you need to wrap your service with an _ingress_.

> Note: Ingress is [still considered a beta feature](https://github.com/kubernetes/ingress-gce/blob/master/BETA_LIMITATIONS.md#glbc-beta-limitations)!

I could not find a simple command to expose an ingress at this point (please correct me if I’m wrong!). It appears that you must create an ingress descriptor file first, for example:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```

Once that file is available, you can create the ingress by issuing the command:

```
kubectl create -f my-ingress.yaml
```

Note that unlike the single manifest.yml in PCF, the deployment yml files in Kubernetes are separated — there is one for pod creation, one for service creation and, as you saw above, one for ingress creation. A typical descriptor file is not entirely overwhelming, but I wouldn’t call it the most user friendly either.
For example, here’s a descriptor file for nginx deployment: ``` apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 ``` All this to say — with kubernetes, you need to be specific. Don’t expect deployments to be implied. If I had to create a haiku for Kubernetes, it’ll be probably something like this: > Here’s my code<br> > I’ll tell you exactly how you should run it on the cloud for me<br> > And don’t you dare make any assumptions on the delployment without my written consent! ### Zero Downtime Deployments Both platforms support the ability to deploy applications with zero downtime, however this is one area where Kubernetes wins in my opinion, since it provides a built-in mechanism for zero downtime deployments with rollback. > _2019 update: As of Pivotal Cloud Foundry 2.4,_ [_native zero-downtime deployments_](https://docs.pivotal.io/pivotalcf/2-4/devguide/deploy-apps/zero-downtime.html) _are available out of the box!_ #### Cloud Foundry With Pivotal Cloud Foundry, t̶h̶e̶r̶e̶’̶s̶ ̶n̶o̶ ̶b̶u̶i̶l̶t̶-̶i̶n̶ ̶m̶e̶c̶h̶a̶n̶i̶s̶m̶ ̶t̶o̶ ̶s̶u̶p̶p̶o̶r̶t̶ ̶a̶ ̶r̶o̶l̶l̶i̶n̶g̶ ̶u̶p̶d̶a̶t̶e̶, you’re basically expected to do some cf cli trickery to perform the update with zero downtime. The concept is called [blue-green deployment](https://martinfowler.com/bliki/BlueGreenDeployment.html). If I had to explain it in step-by-step guide, it’ll probably be something like this: * Starting point: you have myApp in production, and you want to deploy a new version of this app — v2. * Deploy v2 under a new application name, for example — myApp-v2 * The new app will have its own initial route — myApp-v2.mysite.com * Perform testing and verification on the new app. * Map an additional route to the myApp-v2 application, using the same route as the original application. For example: ``` cf map-route myApp-v2 mysite.com —hostname myApp ``` * Now requests to your application are load balanced between v1 and v2\. Based of the number of instances available to each version, you can perform A/B testing. For example — if you have 4 instances of v1 and 1 instance of v2, 20% of your clients will be routed to the new codebase. * If you identify issues at any point — simply remove v2\. No harm done. * Once you are satisfied, scale the number of available instances of v2, and reduce or completely delete the instances of v1. * Remove the myApp-v2.mysite.com route from v2 of your application. You have now fully migrated to the new codebase with zero downtime, including sanity testing phase and potentially A/B testing phase. > Note: The cf cli supports plugin extensions. Some of them provide automated blue-green deployments, such as [blue-green-deploy](https://github.com/bluemixgaragelondon/cf-blue-green-deploy), [autopilot](https://github.com/contraband/autopilot) and [zdd](https://github.com/Comcast/cf-zdd-plugin). I personally found blue-green-deploy to be very easy and intuitive, especially due to its support for automated smoke tests as part of the deployment. #### Kubernetes kubectl has a built-in support for rolling updates. 
You basically pass a new docker image for a given deployment, for example:

```
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jon/kubernetes-bootcamp:v2
```

The command above tells kubernetes to perform a rolling update of all pods of the kubernetes-bootcamp deployment from the current image to the new v2 image. During this rollout, your application remains available.

Even more impressive — you can always revert back to the previous version by issuing the undo command:

```
kubectl rollout undo deployments/kubernetes-bootcamp
```

### External Load Balancing

As we saw previously, both PCF and Kubernetes provide load balancing for your application instances/pods. Once a route or an ingress is added, your application is exposed to the outside world. If we take an external view of the levels of abstraction that are needed to reach your application, we can describe them as follows:

#### Kubernetes

```
ingress → service → pod → container
```

#### Cloud Foundry

```
route → container
```

### Internal Load Balancing (Service Discovery)

#### Cloud Foundry

PCF supports two methods of load balancing inside the cluster:

* Route-based load balancing, in the traditional server-side configuration. This is similar to the external load balancing mentioned above, however you can specify certain domains to only be accessible from within PCF, thereby making them internal.
* Client-side load balancing by using Spring Cloud Services. This set of services offers features from the Spring Cloud frameworks that are based on Netflix OSS. For service discovery, Spring Cloud Services uses Netflix Eureka.

Eureka runs as its own service in the PCF environment. Other applications register themselves with Eureka, thereby publishing themselves to the rest of the cluster. The Eureka server maintains a heartbeat health check of all registered clients to keep an up-to-date list of healthy instances.

Registered clients can connect to Eureka and ask for available endpoints based on a service id (the value of _spring.application.name_ in case of Spring Boot applications). Eureka would return a list of all available endpoints, and that’s it. It is up to the client to actually access one of these endpoints. That is usually done using frameworks like Ribbon or Feign for client-side load balancing, but that is an implementation detail of the application, and not related to PCF itself. Client-side load balancing can theoretically scale better since each client keeps a cache of all available endpoints, and can continue working even if Eureka is temporarily down.

If your application already uses Spring Cloud and the Netflix OSS stack, PCF fits your needs like a glove.

#### Kubernetes

Kubernetes uses [DNS resolution](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names) to identify other services within the cluster. Inside the same namespace, you can look up another service by its name. In another namespace, you can look up the service’s name followed by a dot and then the other namespace.

The major benefit of Kubernetes’ load balancing is that it does not require any special client libraries. Eureka is mostly targeted at java-based applications (although solutions exist for other languages, such as Steeltoe for .NET). With Kubernetes, you can make load-balanced http calls to any Kubernetes service that exposes pods, regardless of the implementation of the client _or_ the server. The load-balancing domain name is simply the name of the service that exposes the pods.
For example:

* You have an application called my-app in namespace zone1
* It exposes a GET /myApi REST endpoint
* There are 10 pods of this container in the cluster
* You created a service called my-service that exposes this application to the cluster
* From any other pod inside the namespace, you can call:

```
GET https://my-service/myApi
```

* From any other pod in any other namespace in the cluster, you can call:

```
GET https://my-service.zone1/myApi
```

And the API would load-balance over the available instances. It doesn’t matter if your client is written in Java, PHP, Ruby, .NET or any other technology.

> 2018 update: Pivotal Cloud Foundry now supports [polyglot, platform-managed service discovery](https://www.cloudfoundry.org/blog/polyglot-service-discovery-container-networking-cloud-foundry/) similar to Kubernetes, using the Envoy proxy, the **apps.internal** domain and BOSH DNS.

### Marketplace

PCF offers a services marketplace. It provides a ridiculously simple way to bind your application to a service. The term _service_ in PCF is not the same as a service in Kubernetes. A PCF service binds your application to things like a database, a monitoring tool, a message broker etc. Some example services are:

* Spring Cloud Services, which provides access to Eureka, a Config Server and a Hystrix Dashboard.
* RabbitMQ message broker.
* MySQL.

Third party vendors can implement their own services as well. Some of the vendor offerings include MongoDB Enterprise and Redislabs for the Redis in-memory database. Here’s a screenshot of available services on [Pivotal Web Services](https://run.pivotal.io/):

![](https://cdn-images-1.medium.com/max/1600/1*XSa3PvzuQmAema5fhfb0WQ.png)

IBM Bluemix is another Cloud Foundry provider that offers its own services, such as IBM Watson for AI and machine learning applications.

Every service has different plans available based on your SLA needs, such as a small database for development or a highly-available database for a production environment.

Last but not least, you have the option to define user-provided services. These allow you to bind your application to an existing service that you already have, such as an Oracle database or an Apache Kafka message broker. A user-provided service is simply a group of key-value pairs that you can then inject into your application as environment variables. This offloads any specific configuration such as URLs, usernames or passwords to the environment itself — services are bound to a given PCF space.

#### Kubernetes

Kubernetes does not offer a marketplace out of the box. There is a [service catalog extension](https://kubernetes.io/docs/concepts/service-catalog/) that allows for a similar service catalog, however it is still in beta. Note that since it can run any docker container, Docker Hub can be considered a Kubernetes marketplace in a way. You can basically run anything that can run in a container.

Kubernetes does have a concept similar to user-provided services. Any configuration or environment variables can live in ConfigMaps, which allow you to externalise configuration artifacts away from your container, thus making it more portable.

### Configuration

Speaking of configuration — one of the features of the Spring Cloud Services service is [Spring Cloud Config](http://cloud.spring.io/spring-cloud-static/spring-cloud-config/1.4.0.RELEASE/single/spring-cloud-config.html). It is another service that is targeted specifically at Spring Boot applications.
The config service serves configuration artifacts from a git repository of your choosing, and allows for zero-downtime configuration changes. If your Spring beans are annotated with @RefreshScope, they can be reloaded with updated configuration by issuing a _POST /refresh_ API call to your application. The property files that are available as configuration sources are loaded based on a pre-defined loading order, which provides a sort of inheritance-based mechanism for how the configuration is loaded.

It’s a great solution, but again assumes that your applications are based on the Spring Cloud (or .NET Steeltoe) stack. If you’re already using Spring Boot with a config server today — PCF fits like a glove. In Kubernetes, you can still run a config server as a container, but that would probably become unneeded operational overhead since you already have built-in support for ConfigMaps. Use the native solution for the platform you go with.

### Storage Volumes

A big differentiator of Kubernetes is the ability to attach a storage volume to your container. Kubernetes manages storage volumes as part of the cluster state (which is kept in _etcd_), and you can attach such a volume to any of your containers. This means you get a reliable storage solution, which lets you run storage-based containers like a database or a file server.

In PCF, your application is fully stateless. PCF follows the 12-factor apps model, and one of those factors assumes that your application has no state. You should theoretically take the same application that runs today in your on-prem data center, move it to AWS, and provided there is adequate connectivity, it should just work. Any storage-based solution should be offloaded to either a PCF service, or to a storage solution outside the PCF cluster itself.

This may be regarded as an advantage or a disadvantage depending on your application and architecture. For stateless application runtimes such as web servers, it is always a good idea to decouple them from any internal storage facility.

### Onboarding

#### Kubernetes

Getting started with Kubernetes was not easy. As mentioned above, you can’t just start with a 5-minute quick start guide; there are just too many things you need to know and too many assumptions about what you already have (a docker registry and a git repository are often taken for granted).

Just taking a look at the excellent [Kubernetes Basics interactive tutorial](https://kubernetes.io/docs/tutorials/kubernetes-basics/) shows the level of knowledge required on the platform. For a basic onboarding, there are 6 steps, each one of them containing quite a few commands and terminologies you need to understand. Trying to follow the tutorial on a local minikube vm instead of the pre-configured online cluster is quite difficult.

#### Cloud Foundry

Getting started with PCF is easy. Your developers already know how to develop their spring boot / nodejs / php / ruby / .NET application. They already know what its artifacts are. They probably already have some Jenkins pipeline in place. They just want to run the same thing in a cloud environment.

If we take a look at PCF’s “[Getting Started with Pivotal Cloud Foundry](https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry/deploy-the-sample-app)”, it’s almost comical how little is required to get something up and running.
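As a sketch of the entire loop (the API endpoint and app name here are placeholders):

```
cf login -a api.run.pivotal.io
cf push myapp -p myapp.jar
```

Two commands, and the platform takes care of the rest.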
When you need more complex interaction, it’s all available for you, either in the cf cli, as part of a manifest.yml, or in the web console, but this doesn’t prevent you from getting started quickly. If you mainly develop server-based applications in java or nodejs, PCF gets you to the cloud simply, quickly and more elegantly.

### Vision

#### Kubernetes

Kubernetes is truly a great open source platform. Kudos to Google for giving up control and letting the community do its thing. That’s probably the number one reason why Kubernetes has taken off so quickly while other solutions like Docker Swarm are falling behind. Other vendors also offer solutions that provide a more PaaS-like experience on top of Kubernetes, such as RedHat OpenShift.

But with such a diverse and thriving eco-system, the path forward can be one of many different directions. It really does feel like a Google product in a way — maybe it will remain supported by Google for years, maybe it will change with barely any backwards compatibility, or maybe they’ll kill it and move to the next big thing (does anyone remember Google Buzz, Google Wave or Google Reader?). Any AngularJS developer who’s trying to move to Angular 5 can tell you that backwards compatibility is not a top priority.

#### Cloud Foundry

Cloud Foundry is also a thriving open source platform, but it is pretty clear who sets the tone here. It is Pivotal, with additional contributions from IBM. Yes, it’s open source, but the enterprise play here is **_Pivotal_** Cloud Foundry, which provides added value like the services marketplace, Ops Manager etc. And on that side, it’s a limited democracy. This is a product that is meant to serve enterprise customers, and the feature set will first and foremost answer those needs.

If Kubernetes is Google, then PCF is Apple. A little more of a walled garden, more controlled, a better design/experience layer, and a commitment to delivering a great product. I feel like the platform is more _focused_, and focus is critical in my line of work.

#### PKS

The real surprise of the recently announced PCF 2.0 was that everything I’ve been talking about throughout this article is now just one part of a larger offering. The application runtime (everything that is referred to as PCF in this article) is now called Pivotal Application Service (PAS). There is also a new serverless solution called Pivotal Function Service (PFS), and lastly — a new Kubernetes runtime called Pivotal Container Service (PKS).

This means that Pivotal Cloud Foundry now gives you the best of both worlds: a great application runtime for fast onboarding of cloud-native applications, as well as a great container runtime when you need to develop generic low-level containers.

### Conclusion

In this article I tried to share my personal experiences of working with both platforms. Although I am a bit biased towards PCF, it is for a good reason — it has served me well. I approached Kubernetes with an open mind, and found it to be a very versatile platform, but also one that requires a steeper learning curve. Maybe I got spoiled by living in the Spring eco-system for too long :).

With the latest announcement of PKS, it appears that Pivotal Cloud Foundry is set to offer the best integrated PaaS — one that lets you run cloud native applications as quickly and simply as possible, while also exposing the best generic container runtime when that is needed. I can see this becoming very useful in many scenarios.
For example, Apache Kafka is one of the best message brokers available today, but this message broker still doesn’t have a PCF service available, so it has to run externally on virtual machines. Now with PCF 2.0, I can run Apache Kafka in docker containers inside PCF itself. The main conclusion is that this is definitely not a this _or_ that discussion. Since both the application runtime and the container runtime now live side by side in the same product, the future seems promising for both. Thank you for reading, and happy coding! Oded Shopen
---
title: API permissions and consent
ms.author: v-aiyengar
author: AshaIyengar21
manager: dansimp
ms.date: 01/25/2021
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "9004343"
- "7756"
ms.openlocfilehash: c45bab67d414c8f0f2ca1c5275084d4ecce538c5256154292302080ba5bd8175
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: lt-LT
ms.lasthandoff: 08/05/2021
ms.locfileid: "53932105"
---

# <a name="api-permissions-and-consent"></a>API permissions and consent

Applications that integrate with the Microsoft identity platform follow an authorization model that gives users and administrators control over how data can be accessed.

The implementation of the authorization model has been updated on the Microsoft identity platform endpoint, changing how an application must interact with the Microsoft identity platform. [Permissions and consent in the Microsoft identity platform endpoint](https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent) covers the basic concepts of this authorization model, including scopes, permissions, and consent.

The [Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/active-directory/develop/consent-framework) consent framework makes it easy to build multi-tenant web and native client applications. These applications allow sign-in by user accounts from an Azure AD tenant that is different from the one where the application is registered. They may also need to access web APIs such as the Microsoft Graph API (to access Azure AD, Intune, and Microsoft 365 services) and other Microsoft service APIs, in addition to your own web APIs.
# hello-world

Hello GitHub, a new repository
# Tafsir

Tafsir is about tafsir
---
title: "GitHub Blog Trial and Error 2"
toc: true
toc_sticky: true
categories:
  - 📂 all
  - sapjil
tags:
  - GitHub
  - blog
  - trial and error
last_modified_at: 2021-09-01
---

# Applying a GitHub Blog Theme

> fork, copy-paste, remote . . .

As introduced at the end of the previous post, [this link](http://jekyllthemes.org) hosts web pages for a wide variety of themes built with `jekyll`. I used `minimal-mistake` among them, and out of the many ways to apply a theme (fork, remote, copy-paste, and so on), I followed the method introduced in **[an amazing blog](https://devinlife.com/howto/)** that saved me.

I originally meant to build the blog with a different theme, but things kept getting tangled, so I switched to a theme that more people use. When choosing a theme, picking one that matches your own style is good, but choosing one that many people use, so that bugs and features are continuously updated, is also a good choice.

<br/>

### Wiping the github.io built earlier

Why wipe the thing I worked so hard to build? Since I'm going to use a site built entirely from the theme, I simply delete all the files I've applied so far (in truth, I had barely applied anything anyway).

Instead of deleting, you can also download the theme you want as a `zip` file and drag all of its contents over to replace the files in your folder. However, if you do that, pages with the same names and functions created in different directories will collide (speaking from experience). So, since I hadn't changed much anyway, I coolly wiped it all.

### Getting the source of the theme you want

Once you've found a theme through the link above, a GitHub link is sure to be attached there. The most common approach is to go to the GitHub repo, `fork` it, and then create the blog as described in its Readme. In my case, though, something wouldn't work properly while renaming the forked repo to `username.github.io`, so I just gave up on forking.

> Not important, but possibly more important than you'd think: no matter how many commits you make to a forked repo, no contribution "grass" gets planted on your profile.

That not-so-important aside ran too long... Anyway, go to the GitHub page of the theme you want to use and download the source locally with `git clone`.

```zsh
git clone https://github.com/whatever...
```

When that completes, move into the source folder and, as in the previous post, fetch the packages with `bundle`.

```zsh
cd the-folder
bundle
```

If everything downloaded fine, bring the web page up locally with the serve command.

```zsh
bundle exec jekyll serve
```

If the web page comes up properly at localhost:4000, you're ready to use GitHub Pages hosting.

### Transplanting it into my repo

This is the process of connecting the locally downloaded source to my own GitHub repo. Right now the local source is still connected to the theme's git repo. To connect it to my repo instead, run the following.

```zsh
git remote remove origin
git remote add origin https://github.com/my-repo-address
git push -u origin master
```

Think of it as cutting the connection to the currently linked git repo, connecting to my own repo, and then saving that state. Now that it's my repo, let's rename the folder too.

```zsh
mv original-folder-name new-folder-name
```

Note that if the new folder name is `username.github.io` and a folder named `username.github.io` already exists from following the earlier tutorial, the folder will be moved into it rather than renamed, so first check whether a folder with that name exists.

> The `mv` command: renames a file, or moves it to a new location

### Checking it in Github Pages

Once everything is done, go to GitHub. In your repo's Settings -> Pages tab, you'll see a screen like the following.

![](/assets/images/sap-2/ghpages.png)

As in the image above, check that the green bar says it is published at an address of the form `username.github.io`. If the green bar doesn't appear and the **Source** field below it is disabled, set the source to the `master` branch's `/root`, then press `save` to activate it.

### Configuring your blog's personal information

Every theme has a file called `_config.yml`. This file holds the settings for the components of the web page: the blogger's information, the local time zone, whether a `post` accepts comments, settings for `tags`, `category`, and much more. In most cases the comments kindly spell out what information goes where and in what format, so just fill in the parts you want accordingly.

### Writing the first post

One of jekyll's conventions is that posts must live in a folder called `_posts`. A `_posts` folder may already exist, but if not, just create one at the root path.

```zsh
mkdir _posts
```

Once the folder exists, write a markdown document named according to the prescribed format.

```zsh
vim 2021-09-01-first-post.md
```

I used the `vim` editor, but if you're not comfortable with it, create the document using `echo` instead.

```zsh
echo "hello" > 2021-09-01-first-post.md
```

The document you write here must be a **markdown document**! Go back to the root path and push, and you'll be able to see your first post on the blog. The screenshot... I forgot to take one. Heh.

Done!!
![profile image](https://avatars0.githubusercontent.com/u/73598013?s=400&u=030f4c472c334b7613a1d04fa992cddae0c0c9f9&v=4)
# PSNDiscord

This is a simple web-scraping script that scrapes my.playstation.com in order to update your Discord "Now Playing" status when playing a game on PSN.

# DISCLAIMER

Usage of this script with your personal (non-bot) Discord account is technically considered "self-botting", as it externally uses the API to update your profile. Self-botting is against Discord ToS, though some examples show that Discord is generally lenient about this (see [here](https://github.com/Favna/Discord-Self-Bot/wiki) or [here](https://github.com/discordapp/discord-api-docs/issues/69#issuecomment-223898291)). I am not liable for any administrative or disciplinary action Discord takes against you or your account for using this script. **_USE AT YOUR OWN RISK!_**

# Installing

- Requires [Python 3.5.2](https://www.python.org/downloads/) or higher

Run the following command to install the Discord.py API wrapper:

```
python -m pip install -U https://github.com/Rapptz/discord.py/archive/rewrite.zip#egg=discord.py
```

**For Windows users**: If Python is not in your system PATH, look [here](https://www.howtogeek.com/118594/how-to-edit-your-system-path-for-easy-command-line-access/) for information on adding it.

---

Run the following command to install the Selenium webdriver for Python:

```
python -m pip install -U selenium
```

---

[Download chromedriver version 2.40](https://chromedriver.storage.googleapis.com/index.html?path=2.40/) for your operating system and ensure the extracted file is in your system PATH.

- For Windows, you should be able to place it in C:/Windows, or look [here](https://www.howtogeek.com/118594/how-to-edit-your-system-path-for-easy-command-line-access/) for instructions on how to add chromedriver's location to your PATH.

---

Modify the PSNDiscord.py script to include the necessary information:

1. **url**- The URL for your my.playstation.com user profile. Replace "YourName" with your PSN username or simply copy the full url from a browser.
2. **dataDirectory**- Modify where you'd like to store session data for Chrome. e.g. "C:/Python/PSNDiscord/"
3. **twitchUrl**- Enter a twitch.tv profile URL if you have one. This link will appear when in-game if someone right-clicks on your profile in Discord.
4. **userToken**- In Discord, press Ctrl+Shift+i to open the developer console. Navigate to the "Application" tab, then under the Storage section click Local Storage -> https://discordapp.com. Copy the value of the "token" variable under this section and paste it into the script between the quotes, replacing all x's. **_WARNING_: THIS TOKEN IS ESSENTIALLY YOUR DISCORD LOGIN AND PASSWORD COMBINED. DO NOT SHARE THIS TOKEN WITH ANYONE FOR ANY REASON, AND BE SURE NOT TO SHARE A VERSION OF THIS SCRIPT WITH YOUR TOKEN ALREADY ENTERED.**
5. **hideChrome**- This should be set to False on the first run so you can properly log in to Sony's website. After successfully logging in, for future runs you can set this to True to completely hide Chrome's window.

# Usage

Windows users can simply run the included PSNDiscord.bat file.

The first run of the script will require you to log in to my.playstation.com. After logging in once, all future runs of the script should save your login session, and the hideChrome option can be set to True.

Exit the script with a keyboard interrupt for now (Ctrl+C).

# Other parameters

- **noGameRefreshTime**- Time in seconds between each page refresh when not playing a game on PSN. Setting this higher will reduce system usage but of course make your playing status change less frequently.
- **inGameRefreshTime**- Time in seconds between refreshes when playing a game.

**NOTE**: Setting these two parameters lower than 50 seconds will have virtually no effect due to my.playstation.com rate limiting. **_SETTING THEM BELOW 15 SECONDS COULD VIOLATE DISCORD API TERMS OF SERVICE AND IS NOT RECOMMENDED_**.

- **loadTime**- How long in seconds the script will wait for your profile page to load. Slower internet connections may require a higher loadTime setting.
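Putting the configuration steps from the Installing section together, the top of the script ends up looking something like the following sketch. Every value shown is a placeholder; only the variable names come from the steps above:

```python
# Illustrative values only -- replace each one with your own settings.
url = "https://my.playstation.com/profile/YourName"  # your PSN profile URL
dataDirectory = "C:/Python/PSNDiscord/"              # where Chrome session data is stored
twitchUrl = "https://www.twitch.tv/yourchannel"      # optional twitch.tv profile link
userToken = "xxxxxxxxxxxxxxxxxxxxxxxx"               # NEVER share or publish this token
hideChrome = False       # set to True after the first successful login
noGameRefreshTime = 60   # seconds between refreshes when not in a game
inGameRefreshTime = 50   # seconds between refreshes while in a game
loadTime = 10            # seconds to wait for the profile page to load
```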
# Advanced Config Example

This example demonstrates ways to customize the docker-compose file to create your own environment (a hypothetical sketch follows the list below).

1. Custom bitcoin.conf
2. JSON-RPC Support
3. Custom datadir path for volume
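A hypothetical sketch of what such a customized docker-compose file might look like; the service name, image, port and paths below are illustrative assumptions, not values taken from this repository (bitcoind's standard JSON-RPC port is 8332):

```yaml
version: "3"
services:
  bitcoind:
    image: your-bitcoin-image               # placeholder image name
    ports:
      - "8332:8332"                         # 2. expose JSON-RPC
    volumes:
      - ./bitcoin.conf:/data/bitcoin.conf   # 1. custom bitcoin.conf
      - /srv/bitcoin-data:/data             # 3. custom datadir path
```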
#### ElkArte Utility Scripts

This repository contains several useful scripts for ElkArte, such as install/upgrade scripts, repair scripts, database cleaning, etc.

All scripts in this repository are under the BSD 3-clause license, unless specified otherwise. Most of the scripts are developed or maintained by [emanuele45](https://github.com/emanuele45).

#### Description

* **databasecleanup**: Analyses a database and compares it to a fresh install. Displays added settings and columns, with options to remove them.
* **install_script**: A template that can be used to create manual installation scripts for mods. At the moment the hook part is fully working; the database part is still WIP.
* **ban_script.php**: A script that allows performing multiple user bans at once. You can provide a list of usernames that you want to ban, or you can ask the script to scan a board in which you have collected all the users you want to ban (the name must be the subject of the topic).
* **fix_packages.php**: After a large upgrade (to clean up the forum) the mods are still marked as installed; with this script you can invert that state.
* **populate.php**: A script that can be used to populate a forum with dummy users / topics / posts (useful for testing), originally written by SlammedDime http://code.mattzuba.com/populator
* **Populator/populate.php**: As above, but uses the Faker project to populate the database with "real" names, subjects, text, ip, email, etc.
* **repair_settings.php**: A script that can detect the correct value for a number of fields and settings on your forum. Useful for fixing broken installs.
* **elkinfo.php**: A script that will provide detailed information to help with support issues. Output includes details of the system, PHP, file versions, database, error log, and addons installed. Can provide password access to output for trusted users.
* **status.php**: A script that can be used to analyse mySQL database performance and provide suggestions on how to improve settings (experimental)

###### Repo

Feel free to fork this repository and make your desired changes.

Please see the [Developer's Certificate of Origin](https://github.com/elkarte/tools/blob/master/DCO.txt) in the repository; by this you acknowledge that you can and do license your code under the license of the project.

###### How to contribute:

* Fork the repository. If you are not used to Github, please check out [fork a repository](http://help.github.com/fork-a-repo).
* Branch your repository to commit the desired changes.
* An easy way to do so is to define an alias for the git commit command which includes the -s switch (reference: [How to create Git aliases](http://githacks.com/post/1168909216/how-to-create-git-aliases)); a minimal sketch follows after this list.
* Send us a pull request.
* This implies that your submission is done under the license of the project.
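As a concrete example of the alias suggested above (the alias name `cs` is an arbitrary choice):

```
git config --global alias.cs "commit -s"
git cs -m "My signed-off change"
```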
--- title: Flight Penguin is a new flight search Chrome app that promises ‘no collusion’ date: "2021-04-06 00:30:00" author: The Verge authorlink: https://www.theverge.com/2021/4/5/22368466/flight-penguin-search-chrome-extension-hipmunk tags: - The-Verge --- <figure> <img alt="" src="https://cdn.vox-cdn.com/thumbor/xIcf7nLy-zJBBC-yjntUfKO-Pu0=/204x0:963x506/1310x873/cdn.vox-cdn.com/uploads/chorus_image/image/69081000/33t687fbfzy4vdol.0.png" /> </figure> <p id="4mY874">The founders of Hipmunk are launching a new startup today that’s aimed squarely at taking on their own former product. Called <a href="https://flightpenguin.com/">Flight Penguin</a>, it’s a Chrome browser extension that simultaneously searches a bunch of airline websites and then presents the results in a familiar format. Rather than taking a commission or affiliate fee, Flight Penguin will instead charge its users $10 per month — it’s designed for people who travel a lot (or, since there’s still a pandemic on, people who will imminently travel a lot).</p> <p id="GGhJMw">Flight Penguin is also not pulling any punches when it comes to its rhetoric: it promises that there will be “no collusion” with the airline industry, specifically noting that “Some of the largest travel sites hide flights from...</p> <p> <a href="https://www.theverge.com/2021/4/5/22368466/flight-penguin-search-chrome-extension-hipmunk">Continue reading&hellip;</a> </p>
84.941176
579
0.762465
eng_Latn
0.95533
410a3f86a665f6b3e151b74fe0657d536e5bab0e
718
md
Markdown
README.md
cleansoftmods/cleansoftpagesmod
acc5e14a1708f9dcb3339bdb0cbf57c6069e2dab
[ "MIT" ]
null
null
null
README.md
cleansoftmods/cleansoftpagesmod
acc5e14a1708f9dcb3339bdb0cbf57c6069e2dab
[ "MIT" ]
null
null
null
README.md
cleansoftmods/cleansoftpagesmod
acc5e14a1708f9dcb3339bdb0cbf57c6069e2dab
[ "MIT" ]
null
null
null
# WebEd pages

![Total downloads](https://poser.pugx.org/sgsoft-studio/pages/d/total.svg) ![Latest Stable Version](https://poser.pugx.org/sgsoft-studio/pages/v/stable.svg) ![License](https://poser.pugx.org/sgsoft-studio/pages/license.svg)

#### Vendor publish

```
php artisan vendor:publish --provider=WebEd\Base\Pages\Providers\ModuleProvider
php artisan vendor:publish --provider=WebEd\Base\Pages\Providers\ModuleProvider --tag=lang
php artisan vendor:publish --provider=WebEd\Base\Pages\Providers\ModuleProvider --tag=views
php artisan vendor:publish --provider=WebEd\Base\Pages\Providers\ModuleProvider --tag=config
php artisan vendor:publish --provider=WebEd\Base\Pages\Providers\ModuleProvider --tag=migrations
```
55.230769
96
0.795265
yue_Hant
0.427458
410b8edceb6aa92457da80e3da859dcecc2b29da
5,767
md
Markdown
_pages/projects.md
MCroci/MCroci.github.io
0f53a1d8c55e33032d32c6ba17718988383ca59a
[ "MIT" ]
null
null
null
_pages/projects.md
MCroci/MCroci.github.io
0f53a1d8c55e33032d32c6ba17718988383ca59a
[ "MIT" ]
null
null
null
_pages/projects.md
MCroci/MCroci.github.io
0f53a1d8c55e33032d32c6ba17718988383ca59a
[ "MIT" ]
null
null
null
---
layout: archive
title: "Projects"
permalink: /projects/
author_profile: true
redirect_from:
  - /projects
---

{% include base_path %}

Here's a list of the projects I've followed over the past four years:

### Mo.Re Farming - a MOnitoring & REmote system for a MORE sustainable farming 🚜

The *[Mo.Re Farming](http://www.morefarming.it/index.html)* project aimed to develop a platform for the collection and management of spatial and farm data, to provide the user (technician or farmer) with information to support decision-making and promote more sustainable farming techniques, such as site-specific management (precision farming).

### Nutrivigna 🍇

The *[Nutrivigna](http://www.nutrivigna.it)* project aimed to improve the nutrient efficiency and reduce the environmental impact of wine production, through the development and dissemination of different tools:

* innovative techniques of spectral observation from proximity (optical sensors mounted on operating machines) and from remote (drones and earth observation satellites) for the determination of mineral requirements;
* advanced web services for the nutritional balance, integrated with procedures for "mapping" the areas of the vineyard with different nutritional needs;
* definition of vineyard management systems with low environmental impact.

[Here](http://www.nutrivigna.it/media/documents/nutrivigna_www/eventi/convegno%20finale/Vincini_Calegari_Croci_Nutrivigna_31_05_2018.pdf?v=20180606) are some of the results shown during the final conference held on 31-05-2018.

### Positive - Scalable Operational Protocols for precision agriculture 💧

The *[Positive](http://www.progettopositive.it)* project had the following objectives:

* to establish a stable service, with coverage at regional scale and managed by institutional players, making available on a regular basis updated maps of the most significant agronomic indices with a resolution of 10 x 10 m, and to connect it with the [*IRRINET*](https://www.irriframe.it/irriframe/home/index_er) system;
* to develop components and interfaces to manage the flow of information from the creation of vegetation index maps (including those generated by proximity or in vivo sensors) to irrigation advisory systems and eventually to precision irrigation systems;
* to implement a demonstration system for the management of precision irrigation with a high degree of automation that relies on these services and on protocols agreed upon with irrigation machine manufacturers;
* to improve the functionality of FERTIRRINET, part of the IRRINET service:
  * in terms of quality of the irrigation advice, thanks to a systematic integration of satellite data and, where available, ground sensors;
  * by setting up variable-rate irrigation plans for soils and crops where appropriate.

The standards and protocols developed during project POSITIVE will be available and fully documented, to be freely usable by all companies (both farms and equipment providers) interested in precision irrigation and fertigation.

### Soipomi 🍅

The *[SOIPOMI](https://progetti.crpv.it/Home/ProjectDetail/60)* project aims to make ever more central the role of the [*O.I. of processing tomatoes*](https://oipomodoronorditalia.it) in the process that generates information to support the supply chain, so as to be able to manage it and play a leading role as a hinge between the agricultural and industrial worlds, for a greater appreciation of the tomato on the markets.

This happens through the development of a system of crop classification and production forecasting, starting from Sentinel-2 ESA satellite images, to derive technical information that can be used to better plan the various interventions both by the farm and by the processing industry. Another objective is to have real-time data during the campaign to know the quantities delivered to industries and their quality levels, so as to better manage the planning of harvesting and processing.

For further information click [here](https://oipomodoronorditalia.it/2019/09/24/lo-studio-delle-immagini-satellitari-per-migliorare-la-produzione-progetto-di-oi-e-regione-emilia-romagna/)

### Agro.Big.Data.Science 🥬🍐 🥝

The *[agro.big.data.science](http://agrobigdatascience.it/)* project intends to apply the data-driven logic to 3 production chains (kiwi, pear and spinach), complete with the necessary sensors for real-time data collection. Agro.Big.Data.Science, the result of the project, will be the landing point for the development of specialized solutions for the agri-food domain and has the following objectives:

* the solution of specific problems of the three supply chains considered;
* the validation of the data-driven methodology on the agri-food supply chains;
* the verification of the maturity and improvements of IoT systems already available to the supply chains;
* the engineering of a Big Data platform specific to the agri-food sector, flexible and usable also by supply chains different from those considered in the project.

### GRACE BBI 🌾🌾

The Bioeconomy project *[GRACE](https://www.grace-bbi.eu/)* is made up of a unique consortium of 22 partners from both academia and industry, and also includes SMEs, farmers and an industrial cluster. These are joining forces to demonstrate three goals:

* the upscaling of miscanthus crop production
* the production of both miscanthus and hemp on lands of low productivity, abandoned land or land with contaminated soil
* the establishment of 10 biobased value chains at a scale of relevance to industry.

In this project, I assisted my colleague *Giorgio Impollonia* (the one and only) in the development of an algorithm for the estimation of Miscanthus moisture and an algorithm for the estimation of biophysical parameters using UAV multispectral images 🚁.
104.854545
898
0.807179
eng_Latn
0.997825
410c9e21f22bf7a1ee86b501af729d366658c78f
2,504
md
Markdown
design-docs/features/continuous-build/improved-responsiveness/README.md
stefb965/gradle
eb65d008dcce11907e123a45eb1b51584cc0edea
[ "Apache-2.0" ]
null
null
null
design-docs/features/continuous-build/improved-responsiveness/README.md
stefb965/gradle
eb65d008dcce11907e123a45eb1b51584cc0edea
[ "Apache-2.0" ]
null
null
null
design-docs/features/continuous-build/improved-responsiveness/README.md
stefb965/gradle
eb65d008dcce11907e123a45eb1b51584cc0edea
[ "Apache-2.0" ]
1
2018-08-26T23:41:56.000Z
2018-08-26T23:41:56.000Z
# Continuous build: Improved responsiveness

- [ ] Continuous build does not trigger rebuild when an intermediate file is changed
- [ ] Provide optimal feedback to user when continuous build is triggered while a build is in progress

## Backlog / Open Issues / Not in scope

- Certain inputs might be known to be immutable (e.g. cached repository dependencies have a checksum in their path and will not change, system header files)

## Stories

## Story: Provide optimal feedback to user when continuous build is triggered while a build is in progress

This is an extension to the story "Continuous build will trigger a rebuild when an input file is changed during build execution".

There are several options for providing the feedback quickly to the user:

1. A) When a change is detected, the currently executing build is canceled. After a quiet period a new build is triggered.
2. B) When a change is detected and a build is currently executing, use an adaptive solution to complete the current build:
   - skip tasks that are non-incremental, such as test execution tasks
   - complete incremental tasks, such as compilation tasks

The implementation strategy will be chosen when this story is designed and reviewed.

### Test coverage

- all current continuous build tests should pass

Assuming implementation strategy A) is chosen:

- additional test cases for a test scenario with a build with tasks A, B, C, D, each having its own directory as input. Tasks have dependencies so that B depends on A, C on B and D on C. Request building task D.
  - change input files of tasks after each task has been executed, but before the build has completed
    - check that the currently running build gets canceled and a new build gets triggered
  - change the input file of a task while that task is being executed
    - check that the currently running build gets canceled and a new build gets triggered

## Story: Continuous build does not trigger rebuild when an intermediate file is changed

The current implementation of continuous build registers to watch changes to the input files of all tasks that are executed.

Benefits of not watching for changes in intermediate files:

- Some builds might end up in a loop when their intermediate files keep changing. These builds would be able to use continuous build mode without changing the build logic.
- It would reduce file system I/O when we watch for changes in fewer files.

Intermediate files are outputs from one task and inputs to another task in the same task graph.

TBD
52.166667
209
0.787141
eng_Latn
0.999879
410cadd729892f6b7bd8b808d4783601df88eb69
38
md
Markdown
README.md
Tentelian-Sports-Association/api.tsa.gg
c9eed895a1932cce33d9dc7eea92bddcc0f4c845
[ "BSD-3-Clause" ]
3
2020-07-23T15:37:27.000Z
2021-03-10T12:14:09.000Z
README.md
Tentelian-Sports-Association/api.tsa.gg
c9eed895a1932cce33d9dc7eea92bddcc0f4c845
[ "BSD-3-Clause" ]
2
2020-08-05T07:05:19.000Z
2020-08-05T09:28:45.000Z
README.md
Tentelian-Sports-Association/api.tsa.gg
c9eed895a1932cce33d9dc7eea92bddcc0f4c845
[ "BSD-3-Clause" ]
1
2020-07-23T09:30:56.000Z
2020-07-23T09:30:56.000Z
# tsa.gg

Tentelian Sports Association
12.666667
28
0.815789
eng_Latn
0.285907
410cfdc62b345de5ef5dcc280817be21d54064de
2,686
md
Markdown
docs/logging-guidelines.md
stfung77/java-stellar-anchor-sdk
4444f876922909a07423a8cd4265c820aa0abbf0
[ "Apache-2.0" ]
4
2022-01-28T16:55:18.000Z
2022-03-27T21:08:35.000Z
docs/logging-guidelines.md
stfung77/java-stellar-anchor-sdk
4444f876922909a07423a8cd4265c820aa0abbf0
[ "Apache-2.0" ]
130
2022-01-28T17:08:17.000Z
2022-03-31T23:10:06.000Z
docs/logging-guidelines.md
stfung77/java-stellar-anchor-sdk
4444f876922909a07423a8cd4265c820aa0abbf0
[ "Apache-2.0" ]
4
2022-02-15T06:01:27.000Z
2022-03-02T17:51:13.000Z
# Logging Guidelines

## Don't Do

## Logging Levels

We follow the SLF4J logging levels `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, plus `FATAL` (not part of the SLF4J API itself, but supported by common backends). Here are the guidelines for each logging level.

## TRACE

The most fine-grained information, only used in rare cases where you need full visibility of what is happening in your application and inside the third-party libraries that you use. You can expect the TRACE logging level to be very verbose. You can use it, for example, to annotate each step in an algorithm or each individual query with parameters in your code.

## DEBUG

Less granular compared to the TRACE level, but more than you will need in everyday use. The DEBUG log level should be used for information that may be needed for diagnosing issues and troubleshooting, or when running the application in the test environment to make sure everything is running correctly.

## INFO

The standard log level indicating that something happened: the application entered a certain state, etc. For example, a controller of your authorization API may include an INFO log level with information on which user requested authorization and whether the authorization was successful. The information logged using the INFO log level should be purely informative, and not looking at it on a regular basis shouldn't result in missing any important information.

## WARN

The log level that indicates that something unexpected happened in the application: a problem, or a situation that might disturb one of the processes. It doesn't mean that the application failed. The WARN level should be used in situations that are unexpected, but where the code can continue the work. For example, a parsing error occurred that resulted in a certain document not being processed.

## ERROR

The log level that should be used when the application hits an issue preventing one or more functionalities from properly functioning. The ERROR log level can be used when one of the payment systems is not available, but there is still the option to check out the basket in the e-commerce application, or when your social media login option is not working for some reason. It can be used to log specific details of an error prior to returning a more generic and human-readable error response to the user.

## FATAL

The log level that tells that the application encountered an event or entered a state in which one of the crucial business functionalities is no longer working. A FATAL log level may be used when the application is not able to connect to a crucial data store like a database, or when all the payment systems are not available and users can't check out their baskets in your e-commerce.
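A minimal Java sketch may help make the levels concrete (the class, messages, and scenario below are illustrative and not taken from this project; note that the SLF4J API exposes TRACE through ERROR, while FATAL comes from the backend):

```java
// Illustrative SLF4J usage only; class and messages are hypothetical.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {
    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    void checkout(String user, String basketId) {
        log.trace("Entering checkout(user={}, basketId={})", user, basketId);          // step-by-step detail
        log.debug("Loaded basket {} for troubleshooting purposes", basketId);          // diagnostics
        log.info("User {} requested checkout of basket {}", user, basketId);           // normal state change
        log.warn("Price document for basket {} could not be parsed; using cache", basketId); // unexpected but recoverable
        log.error("Payment provider unavailable for basket {}; offering invoice", basketId); // functionality impaired
    }
}
```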
99.481481
504
0.801936
eng_Latn
0.999878
410d9dee9ef860a42aa400aaab27cd6540cee1bc
788
md
Markdown
Solutions/0343. 整数拆分.md
itcharge/LeetCode-Py
4330344de6fc1c1a4d81354ecb00f992480c90f8
[ "MIT" ]
1,237
2021-01-27T02:41:17.000Z
2022-03-31T07:08:30.000Z
Solutions/0343. 整数拆分.md
itcharge/LeetCode-Py
4330344de6fc1c1a4d81354ecb00f992480c90f8
[ "MIT" ]
1
2022-01-20T02:18:42.000Z
2022-01-21T01:10:59.000Z
Solutions/0343. 整数拆分.md
itcharge/LeetCode-Py
4330344de6fc1c1a4d81354ecb00f992480c90f8
[ "MIT" ]
245
2021-11-02T04:13:45.000Z
2022-03-30T15:42:46.000Z
## [0343. Integer Break](https://leetcode-cn.com/problems/integer-break)

- Tags: Math, Dynamic Programming
- Difficulty: Medium

## Problem

Given a positive integer n, split it into the sum of at least two positive integers and maximize the product of those integers. Return the maximum product you can get.

## Approach

This can be solved with dynamic programming.

Define the state `dp[i]` as: the maximum product obtainable by splitting the integer `i`.

Iterate `j` from `1` to `i - 1`; `dp[i]` can be obtained in two ways:

- `(i - j) * j`: split `i` directly into `i - j` and `j`, and take the product of the two.
- `dp[i - j] * j`: further split the `i - j` part of `i` to get `dp[i - j]`, and multiply it by `j`.

`dp[i]` takes the maximum of the two. Iterating over `j` gives the maximum value of `dp[i]`.

The state transition equation is therefore: `dp[i] = max(dp[i], (i - j) * j, dp[i - j] * j)`.

Finally, output `dp[n]`.

## Code

```Python
class Solution:
    def integerBreak(self, n: int) -> int:
        dp = [0 for _ in range(n + 1)]
        dp[2] = 1
        for i in range(3, n + 1):
            for j in range(1, i):
                dp[i] = max(dp[i], (i - j) * j, dp[i - j] * j)
        return dp[n]
```
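A quick sanity check of the solution above (the expected values follow directly from the problem: 2 = 1 + 1 gives product 1, and 10 = 3 + 3 + 4 gives product 36):

```Python
# Quick check of the DP solution defined above.
s = Solution()
assert s.integerBreak(2) == 1    # 2 = 1 + 1 -> product 1
assert s.integerBreak(10) == 36  # 10 = 3 + 3 + 4 -> product 36
print("ok")
```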
19.7
67
0.497462
yue_Hant
0.877995
410e2e44486c1da45e538488307fb0c1f0528d4e
52
md
Markdown
README.md
SidVer312/SidVer312.github.io
e1e0de0f2647d47803bb3f02d25affaecde2a255
[ "Apache-2.0" ]
null
null
null
README.md
SidVer312/SidVer312.github.io
e1e0de0f2647d47803bb3f02d25affaecde2a255
[ "Apache-2.0" ]
null
null
null
README.md
SidVer312/SidVer312.github.io
e1e0de0f2647d47803bb3f02d25affaecde2a255
[ "Apache-2.0" ]
null
null
null
# SidVer312.github.io

My personal Portfolio website
17.333333
29
0.826923
eng_Latn
0.553241
410e3281b6036d9e5a24b88a00537b4847b944ee
83
md
Markdown
src/main/resources/react4xp/_entries/variables/Variables.md
runejo/mimir
8ab69a39624ed5f93757f78fad0729295646924b
[ "Apache-2.0" ]
4
2020-03-10T11:25:12.000Z
2022-02-23T08:09:13.000Z
src/main/resources/react4xp/_entries/variables/Variables.md
statisticsnorway/mimir
9544e08b399f4deb2c89bda76b03df8e9946ae91
[ "Apache-2.0" ]
205
2020-03-23T16:22:51.000Z
2022-03-09T14:07:12.000Z
src/main/resources/react4xp/_entries/variables/Variables.md
runejo/mimir
8ab69a39624ed5f93757f78fad0729295646924b
[ "Apache-2.0" ]
2
2021-02-25T10:29:49.000Z
2021-09-16T10:04:19.000Z
## Variables

- List of Variables
- Displayed as Cards
- Displayed as a table?
16.6
26
0.686747
eng_Latn
0.993264
410fdd79811c7a1bf29e294cd116d5793e7d4a00
28
md
Markdown
README.md
shreyadoshi26/portfolio
5fddf3ac086410f89a5ad2b73befb758e05da912
[ "MIT" ]
null
null
null
README.md
shreyadoshi26/portfolio
5fddf3ac086410f89a5ad2b73befb758e05da912
[ "MIT" ]
null
null
null
README.md
shreyadoshi26/portfolio
5fddf3ac086410f89a5ad2b73befb758e05da912
[ "MIT" ]
null
null
null
SHREYA DOSHI - PORTFOLIO
7
25
0.714286
kor_Hang
0.951692
4110737a15b7127725283af2dd2df591a5dd8976
2,198
md
Markdown
aspnet/web-forms/videos/aspnet-ajax/how-do-i-trigger-an-updatepanel-refresh-from-a-dropdownlist-control.md
Vehache/Docs.fr-fr
5d95349c491e02135a4dbae1171bbcb8526fe327
[ "CC-BY-4.0", "MIT" ]
1
2021-08-17T15:51:26.000Z
2021-08-17T15:51:26.000Z
aspnet/web-forms/videos/aspnet-ajax/how-do-i-trigger-an-updatepanel-refresh-from-a-dropdownlist-control.md
Vehache/Docs.fr-fr
5d95349c491e02135a4dbae1171bbcb8526fe327
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnet/web-forms/videos/aspnet-ajax/how-do-i-trigger-an-updatepanel-refresh-from-a-dropdownlist-control.md
Vehache/Docs.fr-fr
5d95349c491e02135a4dbae1171bbcb8526fe327
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
uid: web-forms/videos/aspnet-ajax/how-do-i-trigger-an-updatepanel-refresh-from-a-dropdownlist-control
title: "[How Do I:] Trigger an UpdatePanel Refresh from a DropDownList Control? | Microsoft Docs"
author: JoeStagner
description: "In most of our ASP.NET AJAX UpdatePanel videos, we used a Button control to cause an UpdatePanel to refresh its content. However, we can use any event..."
ms.author: aspnetcontent
manager: wpickett
ms.date: 08/22/2007
ms.topic: article
ms.assetid: e90defdb-b6b1-4f38-8f6a-7adccbb426ef
ms.technology: dotnet-webforms
ms.prod: .net-framework
msc.legacyurl: /web-forms/videos/aspnet-ajax/how-do-i-trigger-an-updatepanel-refresh-from-a-dropdownlist-control
msc.type: video
ms.openlocfilehash: 8286f5add8e2c26f98b895869be4960cf4d96694
ms.sourcegitcommit: 9a9483aceb34591c97451997036a9120c3fe2baf
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/10/2017
---
<a name="how-do-i-trigger-an-updatepanel-refresh-from-a-dropdownlist-control"></a>[How Do I:] Trigger an UpdatePanel Refresh from a DropDownList Control?
====================
by [Joe Stagner](https://github.com/JoeStagner)

In most of our ASP.NET AJAX UpdatePanel videos, we used a Button control to cause an UpdatePanel to refresh its content. However, we can use any event raised by any other ASP.NET server control. This video uses the DropDownList control's SelectedIndexChanged event as the trigger for refreshing an UpdatePanel control. We also see how we can dynamically change the style sheet class associated with the controls contained inside the UpdatePanel control.

[&#9654; Watch video (9 minutes)](https://channel9.msdn.com/Blogs/ASP-NET-Site-Videos/how-do-i-trigger-an-updatepanel-refresh-from-a-dropdownlist-control)

>[!div class="step-by-step"]
[Previous](how-do-i-implement-the-persistent-communications-pattern-using-web-services.md)
[Next](how-do-i-create-an-aspnet-ajax-extender-from-scratch.md)
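As a rough illustration of the pattern the video demonstrates, markup along these lines wires a DropDownList to an UpdatePanel (control IDs and the handler name are assumptions for illustration, not taken from the video):

```aspx
<%-- DropDownList as an async trigger for a conditional UpdatePanel.
     IDs and handler name are illustrative. --%>
<asp:ScriptManager ID="ScriptManager1" runat="server" />

<asp:DropDownList ID="ColorList" runat="server" AutoPostBack="true"
    OnSelectedIndexChanged="ColorList_SelectedIndexChanged" />

<asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <asp:Label ID="StatusLabel" runat="server" />
    </ContentTemplate>
    <Triggers>
        <%-- Refresh the panel when the list selection changes --%>
        <asp:AsyncPostBackTrigger ControlID="ColorList"
            EventName="SelectedIndexChanged" />
    </Triggers>
</asp:UpdatePanel>
```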
68.6875
587
0.799363
fra_Latn
0.723445
4110a4b5e57a87feb9ddec642e3fffe259d3fda7
3,994
md
Markdown
_posts/2021-03-07-desk-bell-alerts.md
Billiam/billiam.github.io
8d9a1fe89484c1430402492c81d004da8412cd23
[ "MIT" ]
null
null
null
_posts/2021-03-07-desk-bell-alerts.md
Billiam/billiam.github.io
8d9a1fe89484c1430402492c81d004da8412cd23
[ "MIT" ]
5
2020-02-25T02:04:45.000Z
2021-05-10T00:01:46.000Z
_posts/2021-03-07-desk-bell-alerts.md
Billiam/billiam.github.io
8d9a1fe89484c1430402492c81d004da8412cd23
[ "MIT" ]
null
null
null
---
layout: post
title: Arduino desk bell notifications
date: 2021-03-07 22:46 -0600
excerpt: ESP8266 and solenoid for alerts
comments: false
share: false
tags: []
---

An itch.io [project of mine](https://billiam.itch.io/deepdwn) has been getting a little attention recently, and I wanted to get alerts for new purchases, instead of obsessively checking the website.

I saw this [kickstarter alert desk bell project](https://aaronparecki.com/2017/11/13/5/kickstarter-desk-bell) a few years ago, and thought it would work great.

Here's my finished project:

{% include youtube.html id="h5CqjYG4t0E" %}

It runs a web server waiting for a JSON payload, and then rings the bell the appropriate number of times.

## Build

Parts list:

* $3 Wemos D1 clone arduino board
* $5 5v mini solenoid. This is perfect for this use case
* 1k resistor
* TIP120 transistor
* 1n4004 diode
* Electrocookie solderable perfboard (really nice)
* USB breakout board [from another project](https://www.billiam.org/2019/05/29/sherbet-an-ergonomic-keypad).

To mount the solenoid to the bell frame I 3D printed a small mount. The solenoid frame had two M2 threaded holes that made mounting easier. The mount clips onto the frame, but ought to sit a few mm lower. The nice thing about this design is that the bell can still be used normally if needed... Not sure when I'd need that.

{% include figure.html url="images/post/2021/bell/solenoid_mount.jpg" description="3D printed mount attached to desk bell" %}

I did a bunch of tests on a breadboard since I'm still new to electronics projects, first with just the solenoid to make sure it would ring clearly, and later with the arduino. I did most of the design with a NodeMCU but switched to the smaller Wemos D1 when I ran out of space.

{% include figure.html url="images/post/2021/bell/breadboard.jpg" description="Testing the circuit on a breadboard" %}

One thing I didn't anticipate when I started is that the clapper (the part of the bell that swings) sits low into the bottom base in its resting position. This reduced the available space underneath by about half, so I made a paper template and then cut an arc into one side of the (previously square) perfboard with a jewelers saw.

I also 3D printed this simple mount, mostly to keep any of the circuit from contacting the metal frame. The board holds to it nicely, but I haven't designed a good mount for it, so I just hotglued it in place for now.

{% include figure.html url="images/post/2021/bell/mount.jpg" description="Small mounting plate for the board" %}

{% include figure.html url="images/post/2021/bell/soldered.jpg" description="Board done soldering" %}

{% include figure.html url="images/post/2021/bell/done.jpg" description="Mounted inside bell" %}

There's more stuff I'd like to do:

* 3D print the whole base for better mounting points and more space
* LEDs (I have an LED ring that fits really nicely in the diameter of the bell, but there isn't really enough room for it right now)
* Proper outlet mounting instead of just sneaking a thin cable underneath the base

## Software

For the firmware, I'm using [arduino fsm](https://www.arduino.cc/reference/en/libraries/arduino-fsm/) to handle state changes and delays, since I want the solenoid to activate for about 150ms and then wait a couple of seconds before it can activate again. I need this to be non-blocking, so that I can also respond to web requests and later do some LED animation.

The webserver and wifi code is mostly taken from the default ESP8266 examples. For some reason, the `D1` etc. pin constants did not work for me when using the Wemos D1 board profiles, but using the raw GPIO pin numbers instead did work, so I didn't investigate further.

It waits for a request with valid basic auth credentials, and a JSON body with a `count` value, ex:

```sh
# <device-ip> is the bell's address on the local network (the original
# command omitted the target URL, without which curl has nothing to call)
curl -s -i -u username:password \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{"count": 2}' \
  http://<device-ip>/
```

{% gist 9d24c6534ba7ffb43d6e2c568773f758 %}
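For illustration, here is a rough sketch of how the non-blocking ring/cooldown logic can be expressed with the arduino-fsm library (the pin number, event id, timings, and state names are assumptions based on the description above; the gist contains the actual firmware):

```cpp
// Sketch of non-blocking solenoid timing with arduino-fsm.
// Pin, event id, timings, and state names are illustrative assumptions.
#include <Fsm.h>

const int SOLENOID_PIN = 5;  // GPIO driving the TIP120 base
const int RING_EVENT = 1;    // event fired when a valid web request arrives

void on_ring_enter() { digitalWrite(SOLENOID_PIN, HIGH); }
void on_ring_exit()  { digitalWrite(SOLENOID_PIN, LOW); }

State state_idle(NULL, NULL, NULL);
State state_ring(&on_ring_enter, NULL, &on_ring_exit);
State state_cooldown(NULL, NULL, NULL);

Fsm fsm(&state_idle);

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
  fsm.add_transition(&state_idle, &state_ring, RING_EVENT, NULL);
  // Energize the solenoid for ~150 ms, then wait ~2 s before re-arming.
  fsm.add_timed_transition(&state_ring, &state_cooldown, 150, NULL);
  fsm.add_timed_transition(&state_cooldown, &state_idle, 2000, NULL);
}

void loop() {
  fsm.run_machine();  // non-blocking; web server handling can run alongside
  // on each accepted request: fsm.trigger(RING_EVENT);
}
```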
55.472222
442
0.765148
eng_Latn
0.996835
4111427a83056217e8e19505e6a75ed0dd298440
42
md
Markdown
Readme.md
anhdhbn/SmartThreadPool
f34458018ca1261a995667c1236b4a54018eecc4
[ "MS-PL" ]
null
null
null
Readme.md
anhdhbn/SmartThreadPool
f34458018ca1261a995667c1236b4a54018eecc4
[ "MS-PL" ]
null
null
null
Readme.md
anhdhbn/SmartThreadPool
f34458018ca1261a995667c1236b4a54018eecc4
[ "MS-PL" ]
null
null
null
https://github.com/amibar/SmartThreadPool
21
41
0.833333
kor_Hang
0.480161
41119b28022d7b31eb2ec1a0807f7e1101cbadfc
332
md
Markdown
resources/istio-resources/files/dashboards/README.md
eagle-dai/kyma
da00872461205932a47f2eb3e4ab7bfad28bf06b
[ "Apache-2.0" ]
1
2022-03-30T10:19:39.000Z
2022-03-30T10:19:39.000Z
resources/istio-resources/files/dashboards/README.md
eagle-dai/kyma
da00872461205932a47f2eb3e4ab7bfad28bf06b
[ "Apache-2.0" ]
28
2022-02-18T10:06:08.000Z
2022-03-28T06:30:20.000Z
resources/istio-resources/files/dashboards/README.md
VOID404/kyma
e1491c4678e07afcafd33c0977dff46e0a7f451b
[ "Apache-2.0" ]
null
null
null
# Istio Dashboards

Istio Dashboards in Kyma are based on the official [Istio dashboards](https://istio.io/latest/docs/ops/integrations/grafana/#configuration) with minor modifications:

- Removed the `DS_PROMETHEUS` Grafana variable
- Removed the **__inputs** and **__requires** fields
- Added tags: **"tags": ["service-mesh", "kyma"],**
55.333333
165
0.746988
eng_Latn
0.750526
411333e1b2d9fe1921feae641940020355eeb816
67
md
Markdown
README.md
Ryan-Moe/ry-chat
257464b2b09010c1c9cc34d852afc32217fe2a70
[ "MIT" ]
null
null
null
README.md
Ryan-Moe/ry-chat
257464b2b09010c1c9cc34d852afc32217fe2a70
[ "MIT" ]
1
2021-06-04T23:46:25.000Z
2021-06-04T23:46:25.000Z
README.md
Ryan-Moe/ry-chat
257464b2b09010c1c9cc34d852afc32217fe2a70
[ "MIT" ]
null
null
null
# ry-chat

A Django-based forum app I designed to play around with.
22.333333
56
0.761194
eng_Latn
0.999874
4113b73d2d47f61ea569bec73e5a66617e40aff1
2,617
md
Markdown
README.md
lee-jingu/SSMOECHS
5afb0899304689c05a68580a9eb5610dd83ea76a
[ "MIT" ]
1
2021-02-12T01:32:23.000Z
2021-02-12T01:32:23.000Z
README.md
lee-jingu/SSMOECHS
5afb0899304689c05a68580a9eb5610dd83ea76a
[ "MIT" ]
null
null
null
README.md
lee-jingu/SSMOECHS
5afb0899304689c05a68580a9eb5610dd83ea76a
[ "MIT" ]
null
null
null
# SSMOECHS

SSMOECHS for WSNs, or Discrete Multiple Selection <br>
An algorithm for selecting nodes in a discontinuous environment while adapting as the environment changes over time. The algorithm's strategy is as follows (a loose illustrative sketch of this loop appears at the end of this file):

```
0) Every agent selects its own set of nodes. The initial selection is a random selection from a uniform distribution.
1) The score of each agent's selection is determined by the environment (fitness function).
2) Every agent first updates its selection considering the best selection of the whole group (global best), its own selection, and the selection of another agent in the group.
3) Agents that completed 2) then update their selection considering the best selection in their local group (local best), their own selection, and the selection of another agent in the local group.
4) If the global best does not change for a certain period, each local group splits in half; splitting continues up to a preselected maximum number of groups (MG).
5) Once the number of local groups exceeds MG, the selection of the global best is returned.
```

In step 2), "considering" a selection means using that selection's score to update the distribution from which selections are drawn.

# Abstract

Extending the lifetime and stability of wireless sensor networks (WSNs) through efficient energy consumption remains challenging. Though clustering has improved energy efficiency through cluster-head selection, its application is still complicated. In existing cluster-head selection methods, the locations where cluster-heads are desirable are first searched. Next, the nodes closest to these locations are selected as the cluster-heads. This location-based approach causes problems such as increased computation, poor selection accuracy, and the selection of duplicate nodes. To solve these problems, we propose the sampling-based spider monkey optimization (SMO) method. If the sampling population consists of nodes to select cluster-heads, the cluster-heads are selected among the nodes. Thus, the problems caused by different locations of nodes and cluster-heads are resolved. Consequently, we improve lifetime and stability of WSNs through sampling-based spider monkey optimization and energy-efficient cluster head selection (SSMOECHS). This study describes how the sampling method is used in basic SMO and how to select cluster-heads using sampling-based SMO. The experimental results are compared to similar protocols, namely low-energy adaptive clustering hierarchy centralized (LEACH-C), particle swarm optimization clustering protocol (PSO-C), and SMO based threshold-sensitive energy-efficient delay-aware routing protocol (SMOTECP), and the results are shown in both homogeneous and heterogeneous setups. In these setups, SSMOECHS improves network lifetime and stability periods by averages of 13.4%, 7.1%, 34.6%, and 1.8%, respectively.

# Paper Link

[Energy-Efficient Cluster-Head Selection for Wireless Sensor Networks Using Sampling-Based Spider Monkey Optimization](https://www.mdpi.com/1424-8220/19/23/5281)

## License

This project is licensed under the MIT License.
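A loose Python sketch of the selection loop described above, for illustration only; the fitness function, the weight-update rule, and all constants are placeholder assumptions and do not reproduce the paper's actual update equations:

```python
# Loose sketch of the discrete multiple-selection strategy above.
# fitness(), the update rule, and all constants are placeholder assumptions.
import random

N_NODES, N_AGENTS, K, MG, PATIENCE = 50, 10, 5, 4, 3

def fitness(selection):
    # Placeholder: a real WSN fitness would score residual energy, distance, etc.
    return -sum(selection)

def select(weights, k):
    # Weighted sampling without replacement (exponential-key trick).
    keys = [random.random() ** (1.0 / w) for w in weights]
    return sorted(range(N_NODES), key=lambda i: keys[i], reverse=True)[:k]

agents = [[1.0] * N_NODES for _ in range(N_AGENTS)]   # per-agent node weights
groups = [list(range(N_AGENTS))]                      # one local group to start
global_best, best_score, stale = None, float("-inf"), 0

while len(groups) <= MG:
    selections = [select(w, K) for w in agents]
    scores = [fitness(s) for s in selections]
    top = max(range(N_AGENTS), key=lambda i: scores[i])
    if scores[top] > best_score:
        global_best, best_score, stale = selections[top], scores[top], 0
    else:
        stale += 1
    for group in groups:
        local_best = max(group, key=lambda i: scores[i])
        for i in group:
            peer = random.choice(group)
            # Placeholder update: nudge weights toward nodes picked by the
            # global best, the local best, a random peer, and the agent itself.
            for src in (global_best, selections[local_best], selections[peer], selections[i]):
                for node in src:
                    agents[i][node] += 0.1
    if stale >= PATIENCE:  # global best stagnated: split the local groups
        groups = [g[: len(g) // 2] for g in groups if len(g) > 1] + \
                 [g[len(g) // 2 :] for g in groups if len(g) > 1]
        stale = 0

print("best selection:", sorted(global_best), "score:", best_score)
```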
109.041667
1,650
0.802828
kor_Hang
0.915542
41154e11144bb321633019e899a7ca70b3313f24
15,167
md
Markdown
_posts/2020-08-18-摆脱追踪者:您的行动必须是匿名的,否则没有可持续性.md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
21
2020-07-20T16:10:55.000Z
2022-03-14T14:01:14.000Z
_posts/2020-08-18-摆脱追踪者:您的行动必须是匿名的,否则没有可持续性.md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
1
2020-07-19T21:49:44.000Z
2021-09-16T13:37:28.000Z
_posts/2020-08-18-摆脱追踪者:您的行动必须是匿名的,否则没有可持续性.md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
1
2021-05-29T19:48:01.000Z
2021-05-29T19:48:01.000Z
---
layout: post
title: "Shaking off trackers: your operations must be anonymous, or they are not sustainable"
date: 2020-08-18T08:22:12.000Z
author: 火光
from: https://2049post.wordpress.com/2020/08/18/v4tech3/
tags: [ 火光 ]
categories: [ 火光 ]
---
<!--1597738932000-->

[Shaking off trackers: your operations must be anonymous, or they are not sustainable](https://2049post.wordpress.com/2020/08/18/v4tech3/)

------

[Link](https://www.iyouport.org/%E6%91%86%E8%84%B1%E8%BF%BD%E8%B8%AA%E8%80%85%EF%BC%9A%E6%82%A8%E7%9A%84%E8%A1%8C%E5%8A%A8%E5%BF%85%E9%A1%BB%E6%98%AF%E5%8C%BF%E5%90%8D%E7%9A%84%EF%BC%8C%E5%90%A6%E5%88%99%E6%B2%A1%E6%9C%89%E5%8F%AF/)
Editor's note: the translation has been slightly modified.

* *Here are some basics. Being easy to track and surveil is the nature of the internet; if you want to use this internet to do anything serious, you must be able to achieve a sufficient degree of anonymity.*

![](https://i1.wp.com/www.iyouport.org/wp-content/uploads/2019/11/%E5%A4%B4-5.jpg?resize=1100%2C1100&ssl=1)

Unlike investigations in the physical world, open source intelligence investigations must be anonymous and must leave as few digital footprints and traces as possible; otherwise the target can track you back, not only blocking your investigation but even threatening your basic personal safety.

**Mastering the protective skills needed to shake off tracking is the most basic step toward becoming a citizen investigator.**

We have previously introduced [a lot of security and self-defense knowledge](https://www.iyouport.org/category/%e6%8a%80%e6%9c%af%e9%98%b2%e8%ba%ab-%e8%87%aa%e6%88%91%e4%bf%9d%e6%8a%a4%e6%96%b9%e6%b3%95/). This article covers the basics: **which network technologies can be used to track you, and which targeted countermeasures you should take when collecting open source intelligence.**

Note: security knowledge is not only for investigators; it is for any citizen who respects their own rights and does not want to become prey.

Many different actors are interested in tracking internet users, each with their own motives.

At the most basic level, advertisers increasingly monitor people's online behavior to tailor ads to their targets. This type of tracking is called online behavioral advertising (OBA), and **it has, to some extent, driven the explosive growth of social networking sites**, all of which track your every move. "Sign up for free, come play!"

The hottest thing today is so-called web analytics services (for example, Cambridge Analytica), which **track online users to collect information that can be used to manipulate people**; government spy agencies, meanwhile, track online users at scale to analyze global digital information and predict future political, military, and economic changes.

In addition, **internet police, regular police, and intelligence agencies** use publicly available online information to gather intelligence about their targets. **Social engineering experts and hackers** also track and monitor people in order to craft targeted attack plans.

Information obtained from public sources is called [open source intelligence (OSINT)](https://www.iyouport.org/category/osint/), which refers to all publicly available information. OSINT sources are distinguished from other forms of intelligence in that they must be obtained legally, without violating any copyright or privacy law.

! *As an investigator, what if your target discovers that you are searching for them? What if they can learn the searcher's origin, the organization behind them, their location or identity? What will they do to you?*

**You must stay alert to this, because hiding your identity on the internet is not as simple as wearing sunglasses and a wig in real life.**

! *If the searcher's identity is exposed during OSINT collection, in many cases it can bring serious danger, even legal consequences.*

The same problem applies in the commercial sphere. Consider a company seeking to enter a new market: what happens if other competitors in the same field discover the company's search and investigation activity?

**Protecting the privacy and security of the operation is key to the success of an OSINT investigation.**

This article is in two parts. The first explores the concept of "online tracking" and introduces the different technical methods used for online tracking, in order to propose corresponding countermeasures.

Many OSINT beginners believe that using a VPN service is enough to prevent being tracked online; this is completely **wrong**.

**Using a reliable VPN only hides your IP address; to the other trackers following you, your digital footprints remain plainly visible.**

Complete online anonymity is very hard to achieve. To reach it, you need a whole set of tools and strategies to hide any information that could reveal your identity, even hiding traces of the hardware and connection type you use to access the internet. That requires specialized technical skills.

Only espionage-grade operations need that level of anonymity, usually conducted by intelligence agencies that know very well how to hide their collection activities, not by citizen investigators.

However, **for routine OSINT collection you need to be anonymized to an appropriate level, so that the target cannot discover that you are trying to find information about them.**

The second part of this article introduces various online tracking techniques and how to counter them with a range of tools and strategies, and proposes a solution that lets anyone online **use virtual machines to isolate their browsing activity from their local network and infrastructure**.

![](https://i2.wp.com/www.iyouport.org/wp-content/uploads/2019/11/11-1.jpg?resize=1100%2C1100&ssl=1)

### What is online tracking?

Online tracking can be defined as the process of following an internet user's browsing history across different websites, and sometimes their online behavior.

To connect browsing history to a target user, a tracker can use an identifier that distinguishes each online user as a trackable individual.

That identifier is like a person's fingerprint: it can pick out a particular user device among millions of connected users.

The next section demonstrates how online tracking works technically.

### Online tracking techniques

Online tracking usually exploits one or more of the following methods:

**1. IP address tracking**

You cannot be online without an IP. An IP address is a unique identifier used to identify any network-capable device connected to the internet.

No two devices can have the same IP address on the same IP network, which makes IP addresses the first choice for tracking users online.

When connecting to the Internet, you either use the same IP address every time (a static IP) or a different number each time (a dynamic IP).

A static IP address is assigned by your internet service provider (ISP) and does not change over time; a dynamic IP is assigned by the ISP each time you connect to the internet.

Each time you restart your computing device or router, it uses something called the Dynamic Host Configuration Protocol (DHCP) to assign you a new IP address. Some ISPs may assign you the same IP address you had before more than once, but that is not the rule.

Note that IP addresses can be spoofed (hidden) by going online through different technologies, such as VPNs and anonymity networks like TOR. Users can also use a NAT router, which lets all devices on the same network share a single public IP address.

For these reasons, an IP address alone is not enough to pinpoint a particular online user on today's internet. It remains, however, the preferred way to track a target.

To learn how to choose a VPN provider, see the [Security Handbook](https://www.iyouport.org/%e5%ae%89%e5%85%a8%e6%89%8b%e5%86%8c%ef%bc%9a%e8%bf%99%e9%87%8c%e6%98%af%e4%bd%a0%e9%9c%80%e8%a6%81%e7%9a%84%e5%87%a0%e4%b9%8e%e6%89%80%e6%9c%89%e5%ae%89%e5%85%a8%e4%b8%8a%e7%bd%91%e5%b7%a5%e5%85%b7/); to learn how to judge a VPN's security, see [How to verify whether your VPN connection is secure?](https://www.iyouport.org/%e5%a6%82%e4%bd%95%e9%aa%8c%e8%af%81%e6%82%a8%e7%9a%84-vpn-%e8%bf%9e%e6%8e%a5%e6%98%af%e5%90%a6%e5%ae%89%e5%85%a8%ef%bc%9f/).

**2. Cookie tracking**

Cookies are the most common technique for tracking online users, a household name by now.

A cookie is a small text file created when a user visits a particular website. The standard information it contains includes a unique ID distinguishing the client device, a validity date, and the cookie's website name.

When the user returns to the same website, the cookie is used to recognize the client device. Websites use cookies for two main purposes: storing login credentials and tracking users' online behavior.

Most people talking about web cookies mean the basic cookie file (also called the HTTP cookie). An HTTP cookie is a simple text file used to track the user's visits to the website that deployed it.

Expired HTTP cookies are deleted automatically when you close the browser, but the expiry date may be **many years** in the future. In terms of lifetime, there are two main types of cookies: session cookies and persistent cookies.

Session cookies are stored in temporary memory and erased when the user closes the browser. They have no expiry date and store no information about the user's client device. They are typically used to maintain shopping-cart contents on e-commerce sites.

Persistent cookies (such as Flash cookies and evercookies) raise serious privacy concerns.

Half of cookie content is first-party, belonging to the website you visit; the other half is third-party, belonging to partners, services, or advertisers cooperating with the website.

Third-party cookies **are used for (cross-site) tracking** and for recognizing frequent and returning visitors, so that content can be tailored based on the cookie's history, ads optimized, or the user experience improved. The two main types of persistent cookies are Flash cookies and evercookies.

Flash cookies are more persistent than traditional cookie files, which have expiry dates and are stored in a specific folder on the client hard drive. **Merely deleting the web browser's cookie folder will not delete this type of cookie.**

Flash cookies are used to store the user's browsing history across multiple websites, and can be used to re-instantiate HTTP cookies the user has deleted.

To access all Flash cookies stored on your computer (under Windows), go to Control Panel ➤ Flash Player and select the "Block all sites from storing information on this computer" option.

You can also use a tool that **shows the list of Flash cookies present in your system and deletes them**; **[FlashCookiesView](https://www.secjuice.com/tracking-osint-hunters/www.nirsoft.net/utils/flash_cookies_view.html) is a portable tool created for this purpose.**

Evercookies: according to developer Samy Kamkar, an evercookie is a JavaScript-based cookie that **can survive even after the user has deleted HTTP and Flash cookies from their computer.**

It achieves its persistence by storing its data in multiple locations in the client browser/computer (e.g., HTTP cookies, Flash cookies, HTML5 local storage, web history, Silverlight).

If one of those locations is deleted by the user, the evercookie detects it and regenerates itself there. Fortunately, modern web browsers can block or detect evercookies.

**3. ETag tracking**

ETags are another way to track users without using cookies (HTTP or Flash), JavaScript, HTML storage, or IP addresses.

The ETag, or entity tag, is part of the Hypertext Transfer Protocol (HTTP) mechanism that provides web cache validation; it is used to control how long a particular file is cached on the client side.

ETags help web browsers avoid loading the same web resource twice. For example, when a user visits a website that plays music in the background, the browser checks its locally cached copy; if the ETag differs, the client browser downloads the new version of the audio file.

**ETags behave much like persistent cookies and can be used to track users.** Moreover, even when the content on the server does not change, a tracking server can keep sending the ETag to the client browser. In this way the tracking server can maintain a persistent session with the client machine indefinitely.

To get rid of ETags, you must clear your browser's cache.

**4. Digital fingerprint tracking**

A browser fingerprint is a set of technical information about the user's system and browser that can distinguish their machine online.

This information includes the following: browser type, operating system (OS) version, installed add-ons, user agent, installed fonts, language settings, time zone, screen size and color depth, and more.

**Even with cookies and JavaScript disabled, a fingerprint lets trackers distinguish a user's machine.**

Browser fingerprinting is stateless and transparent to both the user and the computer.

The information collected by digital fingerprinting looks generic, not enough on its own to identify a single computer among millions of connected devices. **Combined, however, it can paint a comprehensive and unique picture of each user's computer; and if it is then combined with other personally identifiable information, it can be traced back to a real identity.**

Digital fingerprint tracking lets various actors profile targets easily and effectively without using traditional tracking techniques such as the computer's IP address or cookies.

The conclusion: **most internet users can be profiled and tracked online using only a small amount of technical information from their browser.**

There are two main types of device fingerprinting: script-based and canvas-based.

**Script-based fingerprinting**

Script fingerprinting works by loading a script (usually JavaScript) into the user's browser. Once the script loads successfully, it executes to extract a wealth of technical information about the current browser and system configuration.

The extracted information includes the user agent, installed add-ons/extensions, installed fonts, screen resolution, time zone, OS type and version, CPU and GPU types, and many other details about the target system.

A hash is then computed from the information the script collected. **That hash value can identify and track your device, just like an IP address.**

Trackers can use Flash, Silverlight, or Java applets instead of JavaScript to perform the fingerprinting. They all return the same results.

The main way to resist this technique is to disable JavaScript in the browser. However, that approach is impractical and can render a large number of websites unusable (most web design frameworks now rely on JavaScript for functionality).

**Disabling Java** (see the figure below) does not cause the kinds of problems that disabling JavaScript does.

![](https://i1.wp.com/www.iyouport.org/wp-content/uploads/2019/11/2-1.png?resize=1100%2C769&ssl=1)

**Canvas fingerprinting**

Canvas is an HTML5 element originally developed by Apple; it uses a JavaScript API to draw graphics (lines, shapes, text, images) and animations (such as games and banner ads) on a web page. Advertisers can exploit the canvas feature to identify and track users.

Canvas fingerprinting is a newer way to track users' online activity. It works simply by drawing an invisible image in the user's client browser.

The image is different for every user, and once drawn in the client browser it collects various technical information about the user's browser and computer. A hash is then computed from the collected information.

That hash value stays consistent across all the websites the user visits (the hash is generated from the canvas data); **this effectively records the user's browsing history.**

Although the information gathered from a canvas fingerprint cannot identify a user on its own, it can be combined with other sources to **identify you completely**.

### Countermeasures to browser fingerprint tracking

Fingerprinting is currently considered the biggest risk users face when surfing online.

To understand how to block this intrusion and protect our privacy, first look at what your current digital fingerprint is showing trackers. Below are popular websites offering this kind of check for free.

1. Panopticlick (<https://panopticlick.eff.org>)
2. DeviceInfo (<https://www.deviceinfo.me>)
3. Browserleaks (<https://browserleaks.com>)
4. AmIUnique (<https://amiunique.org>)

### Configuring your web browser for better security

Mainstream web browsers can be configured to be more privacy-friendly (for example, automatically deleting browsing history and cookies), and they ship with extensive privacy settings toward this goal.

Users can also use private browsing modes to automatically delete browsing history, saved passwords, and cookies.

In Firefox this is called "Private Browsing" and can be opened with the key combination Ctrl + Shift + P; in Chrome it is called "Incognito" mode.

See more explanation here: [Opsec operational security: which pitfalls should you watch out for during counter-surveillance?](https://www.iyouport.org/opsec-%e6%93%8d%e4%bd%9c%e5%ae%89%e5%85%a8%ef%bc%9a%e5%9c%a8%e5%8f%8d%e4%be%a6%e5%af%9f%e7%9a%84%e8%bf%87%e7%a8%8b%e4%b8%ad%e6%82%a8%e5%ba%94%e8%af%a5%e6%b3%a8%e6%84%8f%e5%93%aa%e4%ba%9b%e9%99%b7/)

The Brave browser is a privacy-focused web browser based on the Chromium project, and it accepts Chrome extensions.

By default, Brave blocks other online tracking mechanisms, and you can **enable fingerprinting protection** by going to Settings and clicking "Shields" (see the figure below).

![](https://i2.wp.com/www.iyouport.org/wp-content/uploads/2019/11/3-2.png?resize=1100%2C445&ssl=1)

### Browser extensions that protect privacy

Many privacy add-ons can block or mislead online trackers. Here are the best known:

1. Privacy Badger — block invisible trackers (<https://www.eff.org/privacybadger>)
2. Disconnect — Block invisible websites (<https://addons.mozilla.org/en-US/firefox/addon/disconnect>)
3. uBlock Origin — another efficient blocker (<https://addons.mozilla.org/en-US/firefox/addon/ublock-origin>)
4. HTTPS Everywhere — encrypts your communications with many major websites, making your browsing more secure (<https://www.eff.org/HTTPS-EVERYWHERE>)
5. CanvasBlocker — Alters some JS APIs to prevent fingerprinting (<https://addons.mozilla.org/en-US/firefox/addon/canvasblocker>)
6. Cookie AutoDelete — Automatically delete cookies upon tab closes (<https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete>)
7. Decentraleyes — Block Content Delivery Networks (<https://decentraleyes.org>)
8. uMatrix — A point-and-click matrix-based firewall, with many privacy-enhancing tools (<https://addons.mozilla.org/en-US/firefox/addon/umatrix>)
9. User-Agent Switcher — switch between popular user-agent strings (<https://addons.mozilla.org/en-US/firefox/addon/user-agent-switcher-revived>)

### Search engine tracking

Typical search engines track their users' search history to tailor ads and customize the returned search results.

For example, most Google search engine users have a Gmail account (Google's free email service). When such a user searches online, that activity is recorded and linked to his or her Gmail account (usually the user's real identity).

**Even when the user is not logged in to a Gmail account, Google can still link the user's browsing history to their real identity using any of the tracking techniques already mentioned.**

**When searching OSINT sources, it is recommended to use a privacy-oriented search engine: one that does not record your search history and does not return results filtered by criteria set by the search provider.**

There are many anonymous search engines that do not track their users' activity; you can find them here: [If you are good at searching, you can find everything: the powerful search world without Google](https://www.iyouport.org/%e5%a6%82%e6%9e%9c%e4%bd%a0%e6%93%85%e9%95%bf%e6%90%9c%e7%b4%a2%ef%bc%8c%e4%bd%a0%e8%83%bd%e6%89%be%e5%88%b0%e4%b8%80%e5%88%87%ef%bc%9a%e6%b2%a1%e6%9c%89%e8%b0%b7%e6%ad%8c%e7%9a%84%e5%bc%ba%e5%a4%a7/).

### Social network tracking

Social networking sites such as Facebook and Twitter can track users across websites (they can in fact track the browsing history of most internet users), **even when those users are not currently logged in to Facebook or Twitter!**

You have surely noticed that most websites carry Facebook "Like" and "Share" buttons, which help share content to a user's Facebook news feed. What you should know, though, is that **as long as you visit a web page with a Facebook "Like" or "Share" button, Facebook records the visit, even if you never click the button!**

Facebook's tracking does not stop there: using hidden code embedded in the "Like" and "Share" buttons, it can **track non-Facebook users across different websites without their knowledge, and it has been doing exactly that!**

Twitter's "Follow" button plays the same role in tracking online users as Facebook's "Like" and "Share" buttons.

![](https://i0.wp.com/www.iyouport.org/wp-content/uploads/2019/11/22.jpg?resize=1100%2C1100&ssl=1)

### Escaping online tracking

As we have stressed throughout our security guides, users can be tracked online with different techniques, and countering them requires the correct use of different tools and strategies.

To prevent online tracking, users need to take three **basic** steps:

* **Use a reliable VPN service to hide your IP address.**
* **Delete (or refuse) cookies and the web browser cache after closing the browser.**
* **Prevent digital fingerprinting techniques from profiling your computer.**

This requires certain technical skills and a clear defensive mindset. Countering tracking and fingerprinting is hard, and there is **no** technical solution guaranteed to solve this problem 100%.

Know that even with a browser configured for better privacy and many add-ons installed, a skilled adversary can still largely identify your digital fingerprint.

The best technical countermeasure to browser fingerprinting is to make your browser look like the fingerprint of most browsers.

To that end, it is recommended to use a freshly installed copy of a web browser, manually configured with hardened privacy settings and without any add-ons installed. **That browser should run inside an equally freshly installed virtual machine (e.g., VirtualBox).**

With this technique your browser will look like most running browsers, effectively hiding your real digital footprint.

Of course, you still need a VPN to encrypt your connection and hide your real IP address.

We especially recommend **the anonymity tool Whonix**; you can find an installation and usage tutorial [here](https://www.iyouport.org/%e5%a6%88%e5%a6%88%e8%af%b4%ef%bc%8c%e6%93%8d%e4%bd%9c%e5%ae%89%e5%85%a8%e6%b0%b8%e8%bf%9c%e4%b8%8d%e8%83%bd%e8%a2%ab%e5%bf%bd%e8%a7%86%e2%80%8a-%e2%80%8a%e5%8c%bf%e5%90%8d%e5%b7%a5%e5%85%b7%ef%bc%9a/).

[Back to Issue 4 index](https://2049post.wordpress.com/v4index/)
892.176471
14,857
0.771016
yue_Hant
0.593621
41155fc039366c05ffe6650981cb6d1ee9b14b72
816
md
Markdown
README.md
igoradamenko/node-gitlab-ci
28530585dcbbc32e8e8a10833ce21a59f6804c51
[ "MIT" ]
null
null
null
README.md
igoradamenko/node-gitlab-ci
28530585dcbbc32e8e8a10833ce21a59f6804c51
[ "MIT" ]
null
null
null
README.md
igoradamenko/node-gitlab-ci
28530585dcbbc32e8e8a10833ce21a59f6804c51
[ "MIT" ]
null
null
null
# Node.js GitLab CI

[![Docker Pulls](https://img.shields.io/docker/pulls/igoradamenko/node-gitlab-ci.svg)](https://hub.docker.com/r/igoradamenko/node-gitlab-ci/) [![](https://images.microbadger.com/badges/image/igoradamenko/node-gitlab-ci.svg)](https://microbadger.com/images/igoradamenko/node-gitlab-ci)

Docker image that I use in my GitLab projects to build them on CI.

## Example

Create `.gitlab-ci.yml` in the root of your project like this:

```yaml
image: igoradamenko/node-gitlab-ci:latest
```

Then configure your jobs as described in [the documentation](https://docs.gitlab.com/ee/ci/yaml/).

## Why?

In the past I used to write the commands from the [Dockerfile](Dockerfile) in `.gitlab-ci.yml`, but it slowed down builds, and moving them to a separate image saved me 30-60 seconds on each build.
37.090909
143
0.75
eng_Latn
0.795066
41161ed8d883b5f72b7dab48de71c1ad7cc4e373
1,408
md
Markdown
README.md
iscasur/simple-chat
838775f2866cb831491b3b5a1d50ff3c3d97b8fd
[ "MIT" ]
null
null
null
README.md
iscasur/simple-chat
838775f2866cb831491b3b5a1d50ff3c3d97b8fd
[ "MIT" ]
null
null
null
README.md
iscasur/simple-chat
838775f2866cb831491b3b5a1d50ff3c3d97b8fd
[ "MIT" ]
null
null
null
# Simple Chat

This is a simple chat using React.js, Node.js and Socket.io.

![Simple Chat](./img/simple-chat.png)

You can see it alive [here](https://simple-chat-moons.netlify.app/).

### Features

- Multiple users can join the chat room by entering their names
- Users can type chat messages to the chat room
- A notification is sent to all users when a user joins/leaves the room

### Links

- Website: https://simple-chat-moons.netlify.app/

## Get started

If you want to run this project locally, you can:

1. Clone this project

```bash
git clone https://github.com/iscasur/simple-chat.git
```

2. Go to the project's folder

```bash
cd simple-chat
```

### Backend 💻

3. Go to the server side

```bash
cd server
```

4. Install the dependencies locally

```bash
npm install
```

5. Run the development environment (server)

```bash
npm run dev
```

### Frontend 🎨

Split your terminal or open a new one.

3. Inside the simple-chat folder, go to the client side

```bash
cd client
```

4. Install the dependencies locally

```bash
npm install
```

5. Run the app (client)

```bash
npm run start
```

It will open the app automatically; if it doesn't, point your browser to `http://localhost:3000`.

## Technologies

| Frontend | Backend    |
| -------- | ---------- |
| React    | Node.js    |
| CSS      | Express.js |
| Netlify  | Socket.io  |
|          | Heroku     |

## Licence

This project is MIT licensed
15.472527
90
0.665483
eng_Latn
0.966101
4116442c63b785b48d896f1df2937beb6d925a23
605
md
Markdown
Second exercise/How many days/README.md
pouyaardehkhani/Fundamentals-of-Programming-Course-Exercises-CPP
a6bd550b5db46f6349ec418f25246f7578cdd015
[ "MIT" ]
1
2022-02-18T22:26:41.000Z
2022-02-18T22:26:41.000Z
Second exercise/How many days/README.md
pouyaardehkhani/Fundamentals-of-Programming-Course-Exercises-CPP
a6bd550b5db46f6349ec418f25246f7578cdd015
[ "MIT" ]
null
null
null
Second exercise/How many days/README.md
pouyaardehkhani/Fundamentals-of-Programming-Course-Exercises-CPP
a6bd550b5db46f6349ec418f25246f7578cdd015
[ "MIT" ]
null
null
null
# question

Write a program that receives a number as a year and indicates whether or not it is a leap year. The rule is as follows: years that are multiples of 100 are leap years only if they are also multiples of 400; all other years are leap years if they are multiples of 4.

# input

Contains a natural number as a Gregorian year.

# output

Print "leap year" if it is a leap year and "normal year" otherwise.

# example:

## Sample input 1:

```
1600
```

## Sample output 1:

```
leap year
```

## Sample input 2:

```
2028
```

## Sample output 2:

```
leap year
```

## Sample input 3:

```
2021
```

## Sample output 3:

```
normal year
```
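One possible solution sketch in C++ (illustrative; not part of the original exercise files):

```cpp
// Leap-year check following the rule stated above:
// multiples of 100 are leap years only if divisible by 400;
// all other years are leap years if divisible by 4.
#include <iostream>

int main() {
    long long year;
    std::cin >> year;

    bool leap;
    if (year % 100 == 0)
        leap = (year % 400 == 0);  // century years: leap only if divisible by 400
    else
        leap = (year % 4 == 0);    // other years: leap if divisible by 4

    std::cout << (leap ? "leap year" : "normal year") << '\n';
    return 0;
}
```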
17.794118
150
0.682645
eng_Latn
0.999753
411698938a70ad3dc6e09c4ad589e1503f779494
371
md
Markdown
catalog/pm-5-00-koi-ochiru-toki/en-US_pm-5-00-koi-ochiru-toki.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/pm-5-00-koi-ochiru-toki/en-US_pm-5-00-koi-ochiru-toki.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/pm-5-00-koi-ochiru-toki/en-US_pm-5-00-koi-ochiru-toki.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# PM 5:00 - Koi Ochiru Toki

![pm-5-00-koi-ochiru-toki](https://cdn.myanimelist.net/images/manga/3/11652.jpg)

- **type**: manga
- **volumes**: 1
- **chapters**: 5
- **original-name**: PM5:00~恋堕ちる時

## Tags

- shoujo

## Authors

- Nakamura - Sayumi (Story & Art)

## Links

- [My Anime list](https://myanimelist.net/manga/8516/PM_5_00_-_Koi_Ochiru_Toki)
16.863636
81
0.628032
kor_Hang
0.118065
4117e4eb32a848b203a432c0a70f9b729efe97f7
20,462
md
Markdown
archives/2022-01-25.md
erbanku/github-hot-hub
a76cf8a23317f5d67b28ac3c64c11e8e559a0067
[ "MIT" ]
null
null
null
archives/2022-01-25.md
erbanku/github-hot-hub
a76cf8a23317f5d67b28ac3c64c11e8e559a0067
[ "MIT" ]
null
null
null
archives/2022-01-25.md
erbanku/github-hot-hub
a76cf8a23317f5d67b28ac3c64c11e8e559a0067
[ "MIT" ]
null
null
null
# GitHub Trending

`Last updated: 2022-01-25 11:13:51 +0800`

## Today's trending repositories

1. [veler / DevToys](https://github.com/veler/DevToys)
   - A Swiss Army knife for developers.
   - language: **C#** &nbsp;&nbsp; stars: **4,193** &nbsp;&nbsp; folks: **183** &nbsp;&nbsp; `996 stars today`
1. [doocs / leetcode](https://github.com/doocs/leetcode)
   - 😏 LeetCode solutions in any programming language | Solutions in multiple programming languages for LeetCode, *Jianzhi Offer (2nd ed.)*, and *Cracking the Coding Interview (6th ed.)*
   - language: **Java** &nbsp;&nbsp; stars: **10,441** &nbsp;&nbsp; folks: **2,014** &nbsp;&nbsp; `247 stars today`
1. [public-apis / public-apis](https://github.com/public-apis/public-apis)
   - A collective list of free APIs
   - language: **Python** &nbsp;&nbsp; stars: **177,497** &nbsp;&nbsp; folks: **20,434** &nbsp;&nbsp; `411 stars today`
1. [yangshun / tech-interview-handbook](https://github.com/yangshun/tech-interview-handbook)
   - 💯 Curated interview preparation materials for busy engineers
   - language: **JavaScript** &nbsp;&nbsp; stars: **64,730** &nbsp;&nbsp; folks: **8,998** &nbsp;&nbsp; `238 stars today`
1. [khuedoan / homelab](https://github.com/khuedoan/homelab)
   - My self-hosting infrastructure, fully automated from empty disk to operating services
   - language: **Python** &nbsp;&nbsp; stars: **4,244** &nbsp;&nbsp; folks: **201** &nbsp;&nbsp; `893 stars today`
1. [sickcodes / Docker-OSX](https://github.com/sickcodes/Docker-OSX)
   - Run Mac in a Docker! Run near native OSX-KVM in Docker! X11 Forwarding! CI/CD for OS X!
   - language: **Shell** &nbsp;&nbsp; stars: **20,506** &nbsp;&nbsp; folks: **964** &nbsp;&nbsp; `237 stars today`
1. [sindresorhus / awesome](https://github.com/sindresorhus/awesome)
   - 😎 Awesome lists about all kinds of interesting topics
   - language: **none** &nbsp;&nbsp; stars: **186,409** &nbsp;&nbsp; folks: **22,694** &nbsp;&nbsp; `280 stars today`
1. [DustinBrett / daedalOS](https://github.com/DustinBrett/daedalOS)
   - Desktop environment in the browser.
   - language: **JavaScript** &nbsp;&nbsp; stars: **2,996** &nbsp;&nbsp; folks: **136** &nbsp;&nbsp; `125 stars today`
1. [IBAX-io / go-ibax](https://github.com/IBAX-io/go-ibax)
   - An innovative Blockchain Protocol Platform, which everyone can deploy their own applications quickly and easily, such as Dapp, DeFi, DAO, Cross-Blockchain transactions, etc.
   - language: **Go** &nbsp;&nbsp; stars: **2,675** &nbsp;&nbsp; folks: **2,089** &nbsp;&nbsp; `1,291 stars today`
1. [Ryujinx / Ryujinx](https://github.com/Ryujinx/Ryujinx)
   - Experimental Nintendo Switch Emulator written in C#
   - language: **C#** &nbsp;&nbsp; stars: **10,948** &nbsp;&nbsp; folks: **1,309** &nbsp;&nbsp; `128 stars today`
1. [yuzu-emu / yuzu](https://github.com/yuzu-emu/yuzu)
   - Nintendo Switch Emulator
   - language: **C++** &nbsp;&nbsp; stars: **18,183** &nbsp;&nbsp; folks: **1,531** &nbsp;&nbsp; `152 stars today`
1. [ocornut / imgui](https://github.com/ocornut/imgui)
   - Dear ImGui: Bloat-free Graphical User interface for C++ with minimal dependencies
   - language: **C++** &nbsp;&nbsp; stars: **34,290** &nbsp;&nbsp; folks: **5,912** &nbsp;&nbsp; `65 stars today`
1. [jwasham / coding-interview-university](https://github.com/jwasham/coding-interview-university)
   - A complete computer science study plan to become a software engineer.
   - language: **none** &nbsp;&nbsp; stars: **204,591** &nbsp;&nbsp; folks: **55,212** &nbsp;&nbsp; `307 stars today`
1. [papers-we-love / papers-we-love](https://github.com/papers-we-love/papers-we-love)
   - Papers from the computer science community to read and discuss.
   - language: **Shell** &nbsp;&nbsp; stars: **52,147** &nbsp;&nbsp; folks: **4,383** &nbsp;&nbsp; `80 stars today`
1. [akutz / go-generics-the-hard-way](https://github.com/akutz/go-generics-the-hard-way)
   - A hands-on approach to getting started with Go generics.
   - language: **Go** &nbsp;&nbsp; stars: **866** &nbsp;&nbsp; folks: **51** &nbsp;&nbsp; `210 stars today`
1. [kedro-org / kedro](https://github.com/kedro-org/kedro)
   - A Python framework for creating reproducible, maintainable and modular data science code.
   - language: **Python** &nbsp;&nbsp; stars: **5,841** &nbsp;&nbsp; folks: **577** &nbsp;&nbsp; `236 stars today`
1. [bregman-arie / devops-exercises](https://github.com/bregman-arie/devops-exercises)
   - Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions
   - language: **Python** &nbsp;&nbsp; stars: **20,843** &nbsp;&nbsp; folks: **4,284** &nbsp;&nbsp; `191 stars today`
1. [chiru-labs / ERC721A](https://github.com/chiru-labs/ERC721A)
   - https://ERC721A.org
   - language: **Solidity** &nbsp;&nbsp; stars: **193** &nbsp;&nbsp; folks: **44** &nbsp;&nbsp; `35 stars today`
1. [jonmatthis / freemocap](https://github.com/jonmatthis/freemocap)
   - Free like Freedom
   - language: **Python** &nbsp;&nbsp; stars: **1,499** &nbsp;&nbsp; folks: **95** &nbsp;&nbsp; `50 stars today`
1. [tokio-rs / axum](https://github.com/tokio-rs/axum)
   - Ergonomic and modular web framework built with Tokio, Tower, and Hyper
   - language: **Rust** &nbsp;&nbsp; stars: **3,268** &nbsp;&nbsp; folks: **203** &nbsp;&nbsp; `66 stars today`
1. [flxzt / rnote](https://github.com/flxzt/rnote)
   - A simple drawing application to create handwritten notes.
   - language: **Rust** &nbsp;&nbsp; stars: **1,381** &nbsp;&nbsp; folks: **32** &nbsp;&nbsp; `349 stars today`
1. [arvidn / libtorrent](https://github.com/arvidn/libtorrent)
   - an efficient feature complete C++ bittorrent implementation
   - language: **C++** &nbsp;&nbsp; stars: **3,690** &nbsp;&nbsp; folks: **808** &nbsp;&nbsp; `30 stars today`
1. [mljar / mercury](https://github.com/mljar/mercury)
   - Mercury: easily convert Python notebook to web app and share with others
   - language: **TypeScript** &nbsp;&nbsp; stars: **742** &nbsp;&nbsp; folks: **41** &nbsp;&nbsp; `116 stars today`
1. [trekhleb / javascript-algorithms](https://github.com/trekhleb/javascript-algorithms)
   - 📝 Algorithms and data structures implemented in JavaScript with explanations and links to further readings
   - language: **JavaScript** &nbsp;&nbsp; stars: **133,312** &nbsp;&nbsp; folks: **21,895** &nbsp;&nbsp; `202 stars today`
1. [radareorg / radare2](https://github.com/radareorg/radare2)
   - UNIX-like reverse engineering framework and command-line toolset
   - language: **C** &nbsp;&nbsp; stars: **15,665** &nbsp;&nbsp; folks: **2,620** &nbsp;&nbsp; `73 stars today`

## Trending this week

1. [imcuttle / mometa](https://github.com/imcuttle/mometa)
   - 🛠 [Beta] Low-code metaprogramming for developers: visual code editing and coding-assistance tooling
   - language: **TypeScript** &nbsp;&nbsp; stars: **1,557** &nbsp;&nbsp; folks: **218** &nbsp;&nbsp; `1,062 stars this week`
1. [NVlabs / instant-ngp](https://github.com/NVlabs/instant-ngp)
   - Instant neural graphics primitives: lightning fast NeRF and more
   - language: **Cuda** &nbsp;&nbsp; stars: **2,083** &nbsp;&nbsp; folks: **161** &nbsp;&nbsp; `647 stars this week`
1. [mattermost / focalboard](https://github.com/mattermost/focalboard)
   - Focalboard is an open source, self-hosted alternative to Trello, Notion, and Asana.
   - language: **TypeScript** &nbsp;&nbsp; stars: **9,000** &nbsp;&nbsp; folks: **667** &nbsp;&nbsp; `1,607 stars this week`
1. [gchq / CyberChef](https://github.com/gchq/CyberChef)
   - The Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis
   - language: **JavaScript** &nbsp;&nbsp; stars: **14,991** &nbsp;&nbsp; folks: **1,904** &nbsp;&nbsp; `1,095 stars this week`
1. [public-apis / public-apis](https://github.com/public-apis/public-apis)
   - A collective list of free APIs
   - language: **Python** &nbsp;&nbsp; stars: **177,497** &nbsp;&nbsp; folks: **20,434** &nbsp;&nbsp; `3,125 stars this week`
1. [craftzdog / dotfiles-public](https://github.com/craftzdog/dotfiles-public)
   - My personal dotfiles
   - language: **Vim script** &nbsp;&nbsp; stars: **1,354** &nbsp;&nbsp; folks: **334** &nbsp;&nbsp; `77 stars this week`
1. [tauri-apps / tauri](https://github.com/tauri-apps/tauri)
   - Build smaller, faster, and more secure desktop applications with a web frontend.
   - language: **Rust** &nbsp;&nbsp; stars: **29,311** &nbsp;&nbsp; folks: **710** &nbsp;&nbsp; `1,447 stars this week`
1. [revoxhere / duino-coin](https://github.com/revoxhere/duino-coin)
   - ᕲ Duino-Coin is a coin that can be mined with almost everything, including Arduino boards.
   - language: **Python** &nbsp;&nbsp; stars: **659** &nbsp;&nbsp; folks: **415** &nbsp;&nbsp; `111 stars this week`
1. [fastlane / fastlane](https://github.com/fastlane/fastlane)
   - 🚀 The easiest way to automate building and releasing your iOS and Android apps
   - language: **Ruby** &nbsp;&nbsp; stars: **33,905** &nbsp;&nbsp; folks: **5,098** &nbsp;&nbsp; `439 stars this week`
1. [sunym1993 / flash-linux0.11-talk](https://github.com/sunym1993/flash-linux0.11-talk)
   - You call this shabby thing operating-system source code: reading the Linux 0.11 core code like a novel
   - language: **C** &nbsp;&nbsp; stars: **6,267** &nbsp;&nbsp; folks: **668** &nbsp;&nbsp; `901 stars this week`
1. [Tencent / libpag](https://github.com/Tencent/libpag)
   - A real-time rendering library for PAG (Portable Animated Graphics) files that renders After Effects animations natively across multiple platforms.
   - language: **C++** &nbsp;&nbsp; stars: **1,169** &nbsp;&nbsp; folks: **140** &nbsp;&nbsp; `261 stars this week`
1. [andrecronje / solidly](https://github.com/andrecronje/solidly)
   - none
   - language: **Solidity** &nbsp;&nbsp; stars: **365** &nbsp;&nbsp; folks: **53** &nbsp;&nbsp; `163 stars this week`
1. [TandoorRecipes / recipes](https://github.com/TandoorRecipes/recipes)
   - Application for managing recipes, planning meals, building shopping lists and much much more!
   - language: **HTML** &nbsp;&nbsp; stars: **2,532** &nbsp;&nbsp; folks: **205** &nbsp;&nbsp; `252 stars this week`
1. [faker-js / faker](https://github.com/faker-js/faker)
   - Generate massive amounts of fake data in the browser and node.js
   - language: **TypeScript** &nbsp;&nbsp; stars: **2,826** &nbsp;&nbsp; folks: **266** &nbsp;&nbsp; `786 stars this week`
1. [remix-run / remix](https://github.com/remix-run/remix)
   - Build Better Websites. Create modern, resilient user experiences with web fundamentals.
   - language: **TypeScript** &nbsp;&nbsp; stars: **11,798** &nbsp;&nbsp; folks: **778** &nbsp;&nbsp; `629 stars this week`
1. [osmosis-labs / osmosis](https://github.com/osmosis-labs/osmosis)
   - The AMM Laboratory
   - language: **Go** &nbsp;&nbsp; stars: **360** &nbsp;&nbsp; folks: **108** &nbsp;&nbsp; `34 stars this week`
1. [yuzu-emu / yuzu](https://github.com/yuzu-emu/yuzu)
   - Nintendo Switch Emulator
   - language: **C++** &nbsp;&nbsp; stars: **18,183** &nbsp;&nbsp; folks: **1,531** &nbsp;&nbsp; `504 stars this week`
1. [sunface / rust-course](https://github.com/sunface/rust-course)
   - "The world's most loved language for six consecutive years: no GC and no manual memory management, extremely high performance and safety, procedural/OO/functional programming, and excellent package management; the future cornerstone of JS." Try Rust as a second language in your spare time. *The Rust Course* offers comprehensive and in-depth explanations, vivid and fitting examples, silky-smooth content, and even special topics JS programmers care about, such as wasm and deno. It may be the most carefully crafted open-source Chinese Rust tutorial available.
   - language: **Rust** &nbsp;&nbsp; stars: **3,376** &nbsp;&nbsp; folks: **208** &nbsp;&nbsp; `1,175 stars this week`
1. [Textualize / rich](https://github.com/Textualize/rich)
   - Rich is a Python library for rich text and beautiful formatting in the terminal.
   - language: **Python** &nbsp;&nbsp; stars: **34,178** &nbsp;&nbsp; folks: **1,099** &nbsp;&nbsp; `824 stars this week`
1. [parcel-bundler / parcel](https://github.com/parcel-bundler/parcel)
   - The zero configuration build tool for the web. 📦🚀
   - language: **JavaScript** &nbsp;&nbsp; stars: **40,051** &nbsp;&nbsp; folks: **2,072** &nbsp;&nbsp; `142 stars this week`
1. [hashicorp / consul](https://github.com/hashicorp/consul)
   - Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.
   - language: **Go** &nbsp;&nbsp; stars: **24,112** &nbsp;&nbsp; folks: **3,969** &nbsp;&nbsp; `251 stars this week`
1. [IBAX-io / go-ibax](https://github.com/IBAX-io/go-ibax)
   - An innovative Blockchain Protocol Platform, which everyone can deploy their own applications quickly and easily, such as Dapp, DeFi, DAO, Cross-Blockchain transactions, etc.
   - language: **Go** &nbsp;&nbsp; stars: **2,675** &nbsp;&nbsp; folks: **2,089** &nbsp;&nbsp; `2,612 stars this week`
1. [sivan / heti](https://github.com/sivan/heti)
   - Heti (赫蹏, hètí) is a typography-style enhancement designed specifically for presenting Chinese content. Based on common Chinese typesetting conventions, it brings site readers a better article-reading experience.
   - language: **SCSS** &nbsp;&nbsp; stars: **3,945** &nbsp;&nbsp; folks: **166** &nbsp;&nbsp; `1,767 stars this week`
1. [DataTalksClub / data-engineering-zoomcamp](https://github.com/DataTalksClub/data-engineering-zoomcamp)
   - Code for Data Engineer Zoomcamp course
   - language: **Jupyter Notebook** &nbsp;&nbsp; stars: **2,547** &nbsp;&nbsp; folks: **434** &nbsp;&nbsp; `781 stars this week`
1. [hannahcode / wordle](https://github.com/hannahcode/wordle)
   - A clone of the popular game Wordle made using React, Typescript, and Tailwind
   - language: **TypeScript** &nbsp;&nbsp; stars: **452** &nbsp;&nbsp; folks: **212** &nbsp;&nbsp; `286 stars this week`

## Trending this month

1. [Asabeneh / 30-Days-Of-JavaScript](https://github.com/Asabeneh/30-Days-Of-JavaScript)
   - 30 days of JavaScript programming challenge is a step-by-step guide to learn JavaScript programming language in 30 days. This challenge may take more than 100 days, please just follow your own pace.
   - language: **JavaScript** &nbsp;&nbsp; stars: **14,825** &nbsp;&nbsp; folks: **3,044** &nbsp;&nbsp; `3,489 stars this month`
1. [apache / incubator-seatunnel](https://github.com/apache/incubator-seatunnel)
   - SeaTunnel is a distributed, high-performance data integration platform for the synchronization and transformation of massive data (offline & real-time).
   - language: **Java** &nbsp;&nbsp; stars: **2,930** &nbsp;&nbsp; folks: **312** &nbsp;&nbsp; `1,098 stars this month`
1. [files-community / Files](https://github.com/files-community/Files)
   - A modern file manager that pushes the boundaries of the platform.
   - language: **C#** &nbsp;&nbsp; stars: **18,882** &nbsp;&nbsp; folks: **1,023** &nbsp;&nbsp; `3,700 stars this month`
1.
[bevyengine / bevy](https://github.com/bevyengine/bevy) - A refreshingly simple data-driven game engine built in Rust - language: **Rust** &nbsp;&nbsp; stars: **13,726** &nbsp;&nbsp; folks: **1,227** &nbsp;&nbsp; `1,450 stars this month` 1. [dataease / dataease](https://github.com/dataease/dataease) - 人人可用的开源数据可视化分析工具。 - language: **Java** &nbsp;&nbsp; stars: **4,830** &nbsp;&nbsp; folks: **878** &nbsp;&nbsp; `749 stars this month` 1. [marktext / marktext](https://github.com/marktext/marktext) - 📝A simple and elegant markdown editor, available for Linux, macOS and Windows. - language: **JavaScript** &nbsp;&nbsp; stars: **28,309** &nbsp;&nbsp; folks: **2,018** &nbsp;&nbsp; `4,636 stars this month` 1. [CleverRaven / Cataclysm-DDA](https://github.com/CleverRaven/Cataclysm-DDA) - Cataclysm - Dark Days Ahead. A turn-based survival game set in a post-apocalyptic world. - language: **C++** &nbsp;&nbsp; stars: **6,457** &nbsp;&nbsp; folks: **3,092** &nbsp;&nbsp; `569 stars this month` 1. [sxyu / svox2](https://github.com/sxyu/svox2) - Plenoxels: Radiance Fields without Neural Networks, Code release WIP - language: **Python** &nbsp;&nbsp; stars: **1,449** &nbsp;&nbsp; folks: **177** &nbsp;&nbsp; `824 stars this month` 1. [dgtlmoon / changedetection.io](https://github.com/dgtlmoon/changedetection.io) - changedetection.io - The best and simplest self-hosted open source website change detection monitoring and notification service. An alternative to Visualping, Watchtower etc. Designed for simplicity - the main goal is to simply monitor which websites had a text change. Open source web page change detection - Now also includes XPATH and JSON API … - language: **Python** &nbsp;&nbsp; stars: **3,199** &nbsp;&nbsp; folks: **169** &nbsp;&nbsp; `1,436 stars this month` 1. [danielyxie / bitburner](https://github.com/danielyxie/bitburner) - Bitburner Game - language: **JavaScript** &nbsp;&nbsp; stars: **1,506** &nbsp;&nbsp; folks: **454** &nbsp;&nbsp; `838 stars this month` 1. [tauri-apps / tauri](https://github.com/tauri-apps/tauri) - Build smaller, faster, and more secure desktop applications with a web frontend. - language: **Rust** &nbsp;&nbsp; stars: **29,311** &nbsp;&nbsp; folks: **710** &nbsp;&nbsp; `4,010 stars this month` 1. [HashLips / hashlips_art_engine](https://github.com/HashLips/hashlips_art_engine) - HashLips Art Engine is a tool used to create multiple different instances of artworks based on provided layers. - language: **JavaScript** &nbsp;&nbsp; stars: **3,368** &nbsp;&nbsp; folks: **1,708** &nbsp;&nbsp; `1,038 stars this month` 1. [ssssssss-team / spider-flow](https://github.com/ssssssss-team/spider-flow) - 新一代爬虫平台,以图形化方式定义爬虫流程,不写代码即可完成爬虫。 - language: **Java** &nbsp;&nbsp; stars: **5,856** &nbsp;&nbsp; folks: **1,034** &nbsp;&nbsp; `2,276 stars this month` 1. [babysor / MockingBird](https://github.com/babysor/MockingBird) - 🚀AI拟声: 5秒内克隆您的声音并生成任意语音内容 Clone a voice in 5 seconds to generate arbitrary speech in real-time - language: **JavaScript** &nbsp;&nbsp; stars: **19,060** &nbsp;&nbsp; folks: **2,548** &nbsp;&nbsp; `3,967 stars this month` 1. [ja-netfilter / ja-netfilter](https://github.com/ja-netfilter/ja-netfilter) - A javaagent framework - language: **Java** &nbsp;&nbsp; stars: **3,790** &nbsp;&nbsp; folks: **955** &nbsp;&nbsp; `1,806 stars this month` 1. [emilk / egui](https://github.com/emilk/egui) - egui: an easy-to-use immediate mode GUI in pure Rust - language: **Rust** &nbsp;&nbsp; stars: **6,665** &nbsp;&nbsp; folks: **360** &nbsp;&nbsp; `658 stars this month` 1. 
[sunym1993 / flash-linux0.11-talk](https://github.com/sunym1993/flash-linux0.11-talk) - 你管这破玩意叫操作系统源码 — 像小说一样品读 Linux 0.11 核心代码 - language: **C** &nbsp;&nbsp; stars: **6,267** &nbsp;&nbsp; folks: **668** &nbsp;&nbsp; `2,923 stars this month` 1. [withastro / astro](https://github.com/withastro/astro) - Build fast websites, faster. 🚀🧑‍🚀✨ - language: **TypeScript** &nbsp;&nbsp; stars: **10,021** &nbsp;&nbsp; folks: **480** &nbsp;&nbsp; `1,361 stars this month` 1. [QSCTech / zju-icicles](https://github.com/QSCTech/zju-icicles) - 浙江大学课程攻略共享计划 - language: **HTML** &nbsp;&nbsp; stars: **25,061** &nbsp;&nbsp; folks: **7,355** &nbsp;&nbsp; `1,105 stars this month` 1. [trekhleb / javascript-algorithms](https://github.com/trekhleb/javascript-algorithms) - 📝 Algorithms and data structures implemented in JavaScript with explanations and links to further readings - language: **JavaScript** &nbsp;&nbsp; stars: **133,312** &nbsp;&nbsp; folks: **21,895** &nbsp;&nbsp; `4,042 stars this month` 1. [teslamotors / light-show](https://github.com/teslamotors/light-show) - Tesla Light Show - language: **Python** &nbsp;&nbsp; stars: **2,086** &nbsp;&nbsp; folks: **204** &nbsp;&nbsp; `1,544 stars this month` 1. [fanux / sealos](https://github.com/fanux/sealos) - 一条命令离线安装高可用 Kubernetes,3min 装完,500M,100年证书,版本不要太全,生产环境稳如老狗🔥 ⎈ 🐳 - language: **Go** &nbsp;&nbsp; stars: **7,891** &nbsp;&nbsp; folks: **1,275** &nbsp;&nbsp; `2,371 stars this month` 1. [Textualize / rich](https://github.com/Textualize/rich) - Rich is a Python library for rich text and beautiful formatting in the terminal. - language: **Python** &nbsp;&nbsp; stars: **34,178** &nbsp;&nbsp; folks: **1,099** &nbsp;&nbsp; `2,552 stars this month` 1. [tachiyomiorg / tachiyomi](https://github.com/tachiyomiorg/tachiyomi) - Free and open source manga reader for Android. - language: **Kotlin** &nbsp;&nbsp; stars: **14,524** &nbsp;&nbsp; folks: **1,822** &nbsp;&nbsp; `636 stars this month` 1. [iptv-org / iptv](https://github.com/iptv-org/iptv) - Collection of publicly available IPTV channels from all over the world - language: **JavaScript** &nbsp;&nbsp; stars: **45,025** &nbsp;&nbsp; folks: **4,609** &nbsp;&nbsp; `1,682 stars this month`
65.583333
354
0.673297
eng_Latn
0.306644
411906b8a4ea843ec1e4a18191775a8cb625e6a3
2,702
md
Markdown
cinemas/mesinkasirtoko.md
mesinkasir/eleventyblog
6783e3e4c6be79070eb28590cc5e5d2d0fff1be2
[ "MIT" ]
1
2022-03-13T18:30:41.000Z
2022-03-13T18:30:41.000Z
cinemas/mesinkasirtoko.md
mesinkasir/eleventyblog
6783e3e4c6be79070eb28590cc5e5d2d0fff1be2
[ "MIT" ]
null
null
null
cinemas/mesinkasirtoko.md
mesinkasir/eleventyblog
6783e3e4c6be79070eb28590cc5e5d2d0fff1be2
[ "MIT" ]
null
null
null
---
title: Mesin Kasir Toko
description: A complete, ready-to-use barcode cash register machine for minimarket stores.
date: 2022-03-08
cover: hzXc74f5KP8
image: https://images.unsplash.com/photo-1541877944-ac82a091518a?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxzZWFyY2h8N3x8eW91dHViZSUyMHRodW1ibmFpbHxlbnwwfHwwfHw%3D&auto=format&fit=crop&w=500&q=60
tags:
  - mesin kasir toko
  - mesin kasir
  - pos
  - barcode
  - kasir
  - stok
  - admin
  - touchscreen pos
layout: layouts/cinema.njk
---

A minimarket truly needs a cash register system that is stable, reliable and fast at work, so make sure you entrust your point-of-sale machine to the Windows OS, which is well known for its capability and stability; this makes it the best and most dependable cash register for supporting your business. Even in the modern era, PC computers are still widely used for work, which shows that implementing a point of sale demands stable, fast performance, and the Windows OS is the best fit for that. It integrates with a full range of cashier peripherals, from barcode label printers to barcode laser scanners and receipt printers, and everything works seamlessly once connected. This is certainly not a flimsy machine either: it uses Asus all-in-one touchscreen technology, merging the monitor and desktop PC into a single unit that is compact, slim and elegant for today's modern cash registers. It also comes with free installation and configuration so it is ready to use, and full remote-desktop support will help with any future difficulties; training can likewise be done through the remote-desktop app for online learning, which makes learning the cash register much easier.

We provide a range of systems for this machine, from an offline mode that works without an internet connection to an online system that lets you connect easily from any device while you work. With the online mode you also get a full year of the online cash register service for free, and afterwards you can continue the service with a one-year contract at an affordable price, with open access for all your staff to work with the online cash register.

Learn about our best store cash registers here [https://www.hockeycomputindo.com/p/daftar-mesin-kasir-terbaru.html](https://www.hockeycomputindo.com/p/daftar-mesin-kasir-terbaru.html)
112.583333
1,446
0.821984
ind_Latn
0.980055
4119e34f7bf01b37b3ee483ebed333326667cea4
20,559
md
Markdown
README.md
mathew-fleisch/bashbot-example
368be6b7af7cbe5414c2a2fffac839349a678064
[ "MIT" ]
1
2021-09-10T19:41:38.000Z
2021-09-10T19:41:38.000Z
README.md
mathew-fleisch/bashbot-example
368be6b7af7cbe5414c2a2fffac839349a678064
[ "MIT" ]
1
2021-09-01T09:37:28.000Z
2021-09-03T17:59:18.000Z
README.md
mathew-fleisch/bashbot-example
368be6b7af7cbe5414c2a2fffac839349a678064
[ "MIT" ]
1
2022-01-29T15:36:13.000Z
2022-01-29T15:36:13.000Z
# Bashbot Setup/Deployment Examples

[BashBot](https://github.com/mathew-fleisch/bashbot) is a chat-ops tool written in golang for infrastructure/devops teams. [A json configuration file](bashbot/config.json), saved in this repository, is used to extend custom scripts and automation from your existing processes, to Slack. Commands can be restricted to private channels and/or use metadata from the user who triggers each command, as input, when Bashbot executes your scripts and automation. This repository shows some examples of how you can deploy Bashbot in your infrastructure and run locally for testing. Fork this repository and use the method that makes sense for your team's needs. Multiple instances of Bashbot can be run using the same slack token, but the triggers should be different to avoid multiple responses. Contributions to [BashBot](https://github.com/mathew-fleisch/bashbot) are welcome!

<img src="https://i.imgur.com/s0cf2Hl.gif" />

## Table of Contents

- ***First-Time Setup***
  - [Setup Step 0: Make the slack app and get a token](#setup-step-0-make-the-slack-app-and-get-a-token)
  - [Setup Step 1: Fork this repository](#setup-step-1-fork-this-repository)
  - [Setup Step 2: Create deploy key](#setup-step-2-create-deploy-key) (handy for private "forks")
  - [Setup Step 3: Upload public deploy key](#setup-step-3-upload-public-deploy-key)
  - [Setup Step 4: Save deploy key as github secret](#setup-step-4-save-deploy-key-as-github-secret) (optional: used in github-actions)
- Running Bashbot Locally
  - [***Go-Binary***](#run-bashbot-locally-as-go-binary)
  - [***Docker***](#run-bashbot-locally-from-docker)
- Deploy Bashbot
  - [Kubernetes](#run-bashbot-in-kubernetes)
  - [Makefile targets](#makefile-targets)

------------------------------------------------------------------------

### Setup Step 0 Make the slack app and get a token

Create a classic slack app and export the "Bot User OAuth Access Token" as the environment variable `SLACK_TOKEN` in a .env file in the same directory as the configuration for that instance of bashbot. The .env file can be mounted as a configmap in kubernetes, saved as a github secret for use in github-actions, or used locally to source tokens that Bashbot can leverage in scripts, in each deployment type.

- Log into slack in a browser and 'launch' your workspace from
  - https://app.slack.com/apps-manage/
- Create a new "classic app" by filling out the form
  - [https://api.slack.com/apps?new_classic_app=1](https://api.slack.com/apps?new_classic_app=1)

<img src="https://i.imgur.com/xgUDAOj.png" />

- Click on "Bots" from the `basic information` screen

<img src="https://i.imgur.com/UHSVuYg.png" />

- Add Legacy Bot User

<img src="https://i.imgur.com/R7XYWvi.png" />
<img src="https://i.imgur.com/q18MFSz.png" />

- Install Bashbot in your workspace

<img src="https://i.imgur.com/zwiSQWq.png" />
<img src="https://i.imgur.com/ppvjveV.png" />

- From the `OAuth & Permissions` screen, note the `Bot User OAuth Token` (starts with 'xoxb-') as the environment variable `SLACK_TOKEN` used later on

<img src="https://i.imgur.com/EXvWLmT.png" />

------------------------------------------------------------------------

### Setup Step 1 Fork this repository

The json configuration can be saved in a private repository; use steps 2-4 to set up a deploy key for read or read/write access. These steps are not necessary for a public fork of this repository.

------------------------------------------------------------------------

### Setup Step 2 Create deploy key

Replace "[email protected]" with your email and run the following command to generate a new ssh key

```bash
ssh-keygen \
  -q -N "" \
  -t rsa -b 4096 \
  -C "[email protected]" \
  -f ${PWD}/bashbot/id_rsa
```

------------------------------------------------------------------------

### Setup Step 3 Upload public deploy key

Using the id_rsa.pub file that is generated from the previous command, paste the contents in the key section and give it a title like `BASHBOT_READONLY`:

```
https://github.com/[FORK-OWNER]/bashbot-example/settings/keys
```

Note: Check the 'allow write access' box if this key should be able to modify its own configuration. This is usually not necessary, and easy to change, so I'd recommend read-only unless you have a specific use case for Bashbot to need write access to its own configuration.

<img src="https://i.imgur.com/gmb66jy.png" />

------------------------------------------------------------------------

### Setup Step 4 Save deploy key as github secret

If you want to use github actions to manage a kubernetes deployment, save the id_rsa private key as a github secret

```
https://github.com/[FORK-OWNER]/bashbot-example/settings/secrets/actions
```

Click "New Repository Secret" on top right

<img src="https://i.imgur.com/ZjaTDTN.png" />

Paste the id_rsa in the 'value' box and give it a name like `BASHBOT_RO`

<img src="https://i.imgur.com/0Iva5gt.png" />

Verify secret is saved

<img src="https://i.imgur.com/QPHX7KS.png" />

------------------------------------------------------------------------

## Run Bashbot Locally As Go-Binary

The easiest way to run Bashbot as a go-binary is by using the makefile targets:

```bash
export BASHBOT_CONFIG_FILEPATH=${PWD}/bashbot/config.json
export SLACK_TOKEN=xoxb-xxxxx-xxxxxxx
make install-latest
make run-binary
# ctrl+c to quit
```

Or "the hard way"

```bash
export BASHBOT_CONFIG_FILEPATH=${PWD}/bashbot/config.json
export SLACK_TOKEN=xoxb-xxxxx-xxxxxxx

# ----------- Install binary -------------- #
# os: linux, darwin
export os=$(uname | tr '[:upper:]' '[:lower:]')
# arch: amd64, arm64
export arch=amd64
test "$(uname -m)" == "aarch64" && export arch=arm64
# Latest bashbot version/tag
export latest=$(curl -s https://api.github.com/repos/mathew-fleisch/bashbot/releases/latest | grep tag_name | cut -d '"' -f 4)
# Remove any existing bashbot binaries
rm -rf /usr/local/bin/bashbot || true
# Get correct binary for host machine and place in user's path
wget -qO /usr/local/bin/bashbot https://github.com/mathew-fleisch/bashbot/releases/download/${latest}/bashbot-${os}-${arch}
# Make bashbot binary executable
chmod +x /usr/local/bin/bashbot

# To verify installation run version or help commands
bashbot --version
# bashbot-darwin-amd64 v1.6.15
bashbot --help
#  ____            _     ____        _
# |  _ \          | |   |  _ \      | |
# | |_) | __ _ ___| |__ | |_) | ___ | |_
# |  _ < / _` / __| '_ \|  _ < / _ \| __|
# | |_) | (_| \__ \ | | | |_) | (_) | |_
# |____/ \__,_|___/_| |_|____/ \___/ \__|
# Bashbot is a slack bot, written in golang, that can be configured
# to run bash commands or scripts based on a configuration file.
# Usage: ./bashbot [flags]
#   -config-file string
#         [REQUIRED] Filepath to config.json file (or environment variable BASHBOT_CONFIG_FILEPATH set)
#   -help
#         Help/usage information
#   -install-vendor-dependencies
#         Cycle through dependencies array in config file to install extra dependencies
#   -log-format string
#         Display logs as json or text (default "text")
#   -log-level string
#         Log level to display (info,debug,warn,error) (default "info")
#   -send-message-channel string
#         Send stand-alone slack message to this channel (requires -send-message-text)
#   -send-message-ephemeral
#         Send stand-alone ephemeral slack message to a specific user (requires -send-message-channel -send-message-text and -send-message-user)
#   -send-message-text string
#         Send stand-alone slack message (requires -send-message-channel)
#   -send-message-user string
#         Send stand-alone ephemeral slack message to this slack user (requires -send-message-channel -send-message-text and -send-message-ephemeral)
#   -slack-token string
#         [REQUIRED] Slack token used to authenticate with api (or environment variable SLACK_TOKEN set)
#   -version
#         Get current version
```

Once the binary is installed, and environment variables have been set, install vendor dependencies and then run the bashbot binary with no parameters.

```bash
# ----------- Run binary -------------- #
# From bashbot source root, create a "vendor" directory for vendor dependencies
mkdir -p vendor && cd vendor
# Use bashbot binary to install vendor dependencies from the vendor directory
bashbot --install-vendor-dependencies
# From bashbot source root run the bashbot binary
cd ..
bashbot
```

------------------------------------------------------------------------

## Run Bashbot Locally From Docker

```bash
export BASHBOT_CONFIG_FILEPATH=${PWD}/bashbot/config.json
export SLACK_TOKEN=xoxb-xxxxx-xxxxxxx
make docker-run
```

Or "the hard way"

```bash
export BASHBOT_CONFIG_FILEPATH=${PWD}/bashbot/config.json
export SLACK_TOKEN=xoxb-xxxxx-xxxxxxx
docker run --rm \
  --name bashbot \
  -v ${BASHBOT_CONFIG_FILEPATH}:/bashbot/config.json \
  -e BASHBOT_CONFIG_FILEPATH="/bashbot/config.json" \
  -e SLACK_TOKEN=${SLACK_TOKEN} \
  -e LOG_LEVEL="info" \
  -e LOG_FORMAT="text" \
  -it mathewfleisch/bashbot:latest

# Or mount secrets and docker socket (mac)
docker run --rm \
  --name bashbot \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ${BASHBOT_CONFIG_FILEPATH}:/bashbot/config.json \
  -e BASHBOT_CONFIG_FILEPATH="/bashbot/config.json" \
  -e SLACK_TOKEN=${SLACK_TOKEN} \
  -v ${PWD}/bashbot/.env:/bashbot/.env \
  -v ${PWD}/bashbot/id_rsa:/root/.ssh/id_rsa \
  -v ${PWD}/bashbot/id_rsa.pub:/root/.ssh/id_rsa.pub \
  -e LOG_LEVEL="info" \
  -e LOG_FORMAT="text" \
  -it mathewfleisch/bashbot:latest

# Or mount secrets and docker socket (linux)
docker run --rm \
  --name bashbot \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add $(stat -c '%g' /var/run/docker.sock) \
  -v ${BASHBOT_CONFIG_FILEPATH}:/bashbot/config.json \
  -e SLACK_TOKEN=${SLACK_TOKEN} \
  -e BASHBOT_CONFIG_FILEPATH="/bashbot/config.json" \
  -v ${PWD}/bashbot/.env:/bashbot/.env \
  -v ${PWD}/bashbot/id_rsa:/root/.ssh/id_rsa \
  -v ${PWD}/bashbot/id_rsa.pub:/root/.ssh/id_rsa.pub \
  -e LOG_LEVEL="info" \
  -e LOG_FORMAT="text" \
  -it mathewfleisch/bashbot:latest

# Exec into bashbot container
docker exec -it $(docker ps -aqf "name=bashbot") bash

# Remove existing bashbot container
docker rm $(docker ps -aqf "name=bashbot")
```

The previous examples use the dockerhub container, and the next section will describe how to build a container ([alpine](Dockerfile.alpine) or [ubuntu](Dockerfile.ubuntu)) and install bashbot with the [asdf bashbot plugin](https://github.com/mathew-fleisch/asdf-bashbot). This method will allow more granular control over the tools bashbot can leverage in the [.tool-versions](.tool-versions) file.

```bash
# Build alpine container
make docker-build-alpine
# or build ubuntu container
make docker-build-ubuntu
# Run local container
make docker-run-local
```

------------------------------------------------------------------------

## Run Bashbot in Kubernetes

***Requirements***

- kubernetes
- configmaps
  - id_rsa, id_rsa.pub (deploy key to bashbot config json)
  - config json
  - .env file (with `SLACK_TOKEN` and `BASHBOT_CONFIG_FILEPATH` set)

In this deployment method, a "seed" configuration is set as the default configuration json for bashbot. The seed configuration will use a pem file to clone a private repository with the actual configuration. The seed configuration is executed when a bashbot pod first spins up, and replaces itself with the latest from a configuration repository (that you set up and can/should be private). This method ensures bashbot is using the latest configuration every time a new pod spins up (deleting a pod will update the running configuration), and the configuration is not baked into the container running the bashbot go-binary. A `.env` file is also injected as a configmap containing the `SLACK_TOKEN` and `BASHBOT_CONFIG_FILEPATH` and any other passwords or tokens exported as environment variables. After the seed configuration json is replaced with a configuration json from the private repository `bashbot-example`, the bashbot binary is used to install any tools or packages defined in the `dependencies` section of the json, using the `--install-vendor-dependencies` flag.

It is also possible to mount the main configuration json as a configmap, but the seed method allows the configuration json to be modified in place. If the configuration json is mounted directly (via configmap), commands like [add-example](examples/add-example) and [remove-example](examples/remove-example) would not be able to modify the configmap holding the configuration json, and would error.

Note: If the configuration json is mounted directly, remove `cp seed.json config.json &&` from the args value in the deployment yaml.

Start by pushing the [sample-config.json](sample-config.json) file to a private repository and set up a [deploy key](https://docs.github.com/en/developers/overview/managing-deploy-keys) following the setup steps:

- id_rsa
- id_rsa.pub
- seed/config.json
- config.json
- .env

Example .env

```bash
export SLACK_TOKEN=xoxb-xxxxxxxxx-xxxxxxx
export BASHBOT_CONFIG_FILEPATH=/bashbot/config.json
export AIRQUALITY_API_KEY=1234567890
```

Example seed/config.json uses the mounted configmaps to authenticate with github via ssh, and clone the repository where the bashbot configs are saved. Using this method allows the pod running Bashbot to easily update Bashbot to use the latest configuration from github. Deleting the pod running Bashbot forces this seed process to execute when the pod is replaced by the Kubernetes deployment.

Note: filename must be config.json to match deployment yaml

```json
{
  "admins": [
    {
      "trigger": "bashbot",
      "appName": "BashBotSeed",
      "userIds": [],
      "privateChannelId": "",
      "logChannelId": ""
    }],
  "messages": [],
  "tools": [],
  "dependencies": [
    {
      "name": "Private configuration repository",
      "install": [
        "if [[ -f /root/.ssh/keys/id_rsa ]]; then",
        "cp /root/.ssh/keys/id_rsa /root/.ssh/id_rsa",
        "&& cp /root/.ssh/keys/id_rsa.pub /root/.ssh/id_rsa.pub",
        "&& chmod 600 /root/.ssh/id_rsa",
        "&& ssh-keyscan -t rsa github.com >> /root/.ssh/known_hosts",
        "&& git config user.name bashbot-github-user",
        "&& git config user.email [email protected]",
        "&& rm -rf bashbot-example || true",
        "&& git clone git@github.com:mathew-fleisch/bashbot-example.git",
        "&& source /bashbot/.env",
        "&& cp bashbot-example/bashbot/config.json $BASHBOT_CONFIG_FILEPATH",
        "&& cd /bashbot",
        "&& bashbot --install-vendor-dependencies",
        "&& echo \"Bashbot's configuration is up to date\";",
        "else",
        "echo \"id_rsa missing, skipping private repo: bashbot-example\";",
        "fi"
      ]
    }
  ]
}
```

Example deployment yaml mounts the configmaps and copies the seed configuration in place before executing the entrypoint script. A service account and rolebinding are also included to access kube-api from inside the cluster. Note: Bashbot is currently not set up for federation, so there should only ever be one replica of each deployment.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"arm64-bashbot"},"name":"bashbot","namespace":"bashbot"}}
  name: bashbot
  namespace: bashbot
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: bashbot
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: bashbot
    namespace: bashbot
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: bashbot
  name: bashbot
  namespace: bashbot
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: bashbot
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bashbot
    spec:
      containers:
        - env:
            - name: BASHBOT_ENV_VARS_FILEPATH
              value: /bashbot/.env
          image: mathewfleisch/bashbot:v1.6.15
          imagePullPolicy: IfNotPresent
          name: bashbot
          command: ["/bin/sh"]
          args: ["-c", "cp seed.json config.json && ./entrypoint.sh"]
          # To override entrypoint, and run container without bashbot process, comment out the above line and uncomment the following line:
          # args: ["-c", "while true; do echo hello; sleep 10;done"]
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          workingDir: /bashbot
          volumeMounts:
            - name: config-json
              mountPath: /bashbot/seed.json
              subPath: config.json
            - name: id-rsa
              mountPath: /root/.ssh/keys/id_rsa
              subPath: id_rsa
            - name: id-rsa-pub
              mountPath: /root/.ssh/keys/id_rsa.pub
              subPath: id_rsa.pub
            - name: env-vars
              mountPath: /bashbot/.env
              subPath: .env
      volumes:
        - name: id-rsa
          configMap:
            name: bashbot-id-rsa
        - name: id-rsa-pub
          configMap:
            name: bashbot-id-rsa-pub
        - name: config-json
          configMap:
            name: bashbot-config
        - name: env-vars
          configMap:
            name: bashbot-env
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccount: bashbot
      serviceAccountName: bashbot
      automountServiceAccountToken: true
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
```

With the configuration json, .env file and deployment public/private keys saved, they can be mounted as configmaps and the deployment can be applied to the cluster.

```bash
# Create bashbot namespace
kubectl create namespace bashbot
# Define configmaps
bot_name=bashbot make config-maps
# Finally apply the deployment
kubectl -n bashbot apply -f bashbot/deployment-bashbot.yaml
# List bashbot instances
kubectl -n bashbot get pods
# NAME                       READY   STATUS    RESTARTS   AGE
# bashbot-7cd5577c5c-5dwsl   1/1     Running   0          5d23h
```

------------------------------------------------------------------------

## Makefile Targets

A Makefile is included in this repository to make common actions easier to execute.

- `make install-latest` - This target will download the latest go-binary to `/usr/local/bin/bashbot`
- `make run-binary` - This target will attempt to install vendor dependencies and run bashbot
- `make int-run-binary` - This target is not meant to be run externally, and is used as a workaround to clean-up vendor dependencies after exiting `make run-binary`
- `make docker-build-alpine` - This target will build an alpine container, install linters and bashbot through asdf
- `make docker-build-ubuntu` - This target will build an ubuntu container, install linters and bashbot through asdf
- `make docker-run` - This target will run bashbot in the dockerhub docker container
- `make docker-run-local` - This target will run bashbot in a docker container built by `make docker-build-alpine` or `make docker-build-ubuntu`
- `make docker-exec` - This target will exec into a running docker container named 'bashbot'
- `bot_name=bashbot make config-maps` - This target will overwrite any existing configmaps for the bot/directory `bashbot`
- `bot_name=bashbot make get` - This target will print pod information on the bot/directory `bashbot`
- `bot_name=bashbot make delete` - This target will delete the pod with the bot/directory `bashbot`
- `bot_name=bashbot make describe` - This target will describe the pod with the bot/directory `bashbot`
- `bot_name=bashbot make exec` - This target will exec into the pod with the bot/directory `bashbot`
- `bot_name=bashbot make logs` - This target will tail the logs of the pod with the bot/directory `bashbot`

------------------------------------------------------------------------

## To Do

Define examples for storing/encrypting secrets like .env and any deployment keys.
38.644737
1,598
0.679265
eng_Latn
0.889659
411b4720d824895f9f6402f772dc0fd953d7b1f4
50
md
Markdown
README.md
asanders1017/aws-lambda-graphql
c021ac1ee23321f34dd6169eb557fed9d63464c5
[ "MIT" ]
null
null
null
README.md
asanders1017/aws-lambda-graphql
c021ac1ee23321f34dd6169eb557fed9d63464c5
[ "MIT" ]
null
null
null
README.md
asanders1017/aws-lambda-graphql
c021ac1ee23321f34dd6169eb557fed9d63464c5
[ "MIT" ]
null
null
null
# aws-lambda-graphql

GraphQL App using AWS Lambda
16.666667
28
0.8
kor_Hang
0.402462
411c031a3cccedffdb75cd01b038629a5c8fae57
707
md
Markdown
README.md
hurtstotouchfire/pwned_passwords
3b209f7f6dac65cf7f5cbed053df06fa41298919
[ "MIT" ]
null
null
null
README.md
hurtstotouchfire/pwned_passwords
3b209f7f6dac65cf7f5cbed053df06fa41298919
[ "MIT" ]
null
null
null
README.md
hurtstotouchfire/pwned_passwords
3b209f7f6dac65cf7f5cbed053df06fa41298919
[ "MIT" ]
1
2019-09-13T17:05:12.000Z
2019-09-13T17:05:12.000Z
# pwned_passwords

Ruby gem to search against the Pwned Passwords API without using Devise. Borrows heavily from [devise-pwned_password](https://github.com/michaelbanfield/devise-pwned_password/).

# importing data

The Pwned Passwords list is available for download here: https://haveibeenpwned.com/Passwords

The file only includes the SHA-1 hash of each password with the count of how many times it's been observed in public data breaches.

To import data from the SHA-1 archive:

1. create a target sql file, e.g. `touch pwned-passwords-v5.sql`
2. run the ruby script `ruby prepare-sql.rb pwned-passwords-sha1-ordered-by-count-v5.txt pwned-passwords-v5.sql`
3. import "pwned-passwords-v5.sql" into a database.
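Since the README mentions searching against the Pwned Passwords API, here is a minimal sketch of the k-anonymity range lookup that such a check performs. This is not the gem's code: it is Python rather than Ruby and assumes the `requests` package; `https://api.pwnedpasswords.com/range/` is the public HIBP range API.

```python
# Minimal sketch of a k-anonymity check against the Pwned Passwords API.
# Illustrative only; not the gem's implementation. Assumes `requests`.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the first 5 hex chars of the hash are ever sent to the API.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "<hash suffix>:<count>".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # prints a large breach count
```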
44.1875
177
0.78925
eng_Latn
0.946223
411c298dec94207e9a33e05a74cdc7e5ba41e6e5
4,114
md
Markdown
fitness/bonk-fatigue-cramp.md
mmshress/lifelong-learning
803f8ed5ea40c83b38014fc52d5af8e22a44366f
[ "Unlicense" ]
593
2015-01-05T09:06:53.000Z
2022-03-25T16:59:21.000Z
fitness/bonk-fatigue-cramp.md
global-open-source-society/lifelong-learning
f193d32b29854b3ce377771320c8c09bcb681eda
[ "Unlicense" ]
2
2015-07-03T17:55:34.000Z
2019-12-11T17:00:12.000Z
fitness/bonk-fatigue-cramp.md
global-open-source-society/lifelong-learning
f193d32b29854b3ce377771320c8c09bcb681eda
[ "Unlicense" ]
112
2015-01-05T09:14:41.000Z
2022-03-03T11:16:28.000Z
## Bonking vs. Fatigue vs. Cramping: What You Need to Know

[Reference](https://runnersconnect.net/bonk-fatigue-cramp/)

### Bonking

- Bonking/hitting the wall is a term used to describe what happens when your body runs low on glycogen to burn as a fuel source.
- While your body can burn fat for energy, it tends to prefer glycogen.
- When your glycogen stores begin to run low, your body recognizes the potential danger and slows the body down gradually to conserve energy.
- At this point, you can still run, but your pace will begin to slow unless you increase your effort.
- However, if you continue, your glycogen stores will get so low that your body will basically shut down and even jogging will be almost impossible.
- *Bonking is not feeling tired; bonking is not an inability to move your legs faster. Bonking is when your glycogen stores get low enough that your brain shuts down your body.*
- A "true" bonk will almost always result in you not being able to physically run any longer.
- Anything that resembles running is likely out the window. More than likely, you'll feel dizzy or light-headed (and some runners feel nauseous).
- Preventing a true bonk: train yourself to burn fat more efficiently as a fuel source.
- Over-fueling is bad--your body can only process a finite amount of carbohydrate per hour. If you try to take in more carbs than you can handle, the digestive system starts to shut down and you don't absorb anything.

### Fatigue

- This is what runners "mislabel" as bonking--getting tired.
- What causes this: muscle fatigue and damage to the muscle fibers.
- The damage to the muscle fibers reduces their ability to produce the powerful contractions needed to maintain marathon pace effort. It also causes that soreness and dead-leg feeling you get late in a race.
- As you begin to increase your effort to make up for the muscle damage, you begin to produce more lactate, which interferes with your body's ability to clear hydrogen and results in a build-up of acid in the muscles.
- The more you have to rely on glycogen as a fuel source, the more the brain will signal your body to slow down.
- It is not useful to run the full marathon distance in training due to how long it would take to recover.
- Accumulated fatigue--the fatigue from one workout accumulates and transfers to the next so that you're always starting a workout or a long run a little tired from your previous training.

### Cramping

- The research says that it's very clear that only a very small percentage of muscle cramps in runners are caused by fluid or electrolyte loss.
- Swimmers in very cold water can suffer from muscle cramping.
- Neither body weight changes (due to dehydration) nor blood electrolyte levels were correlated with suffering cramps through a race.
- Most likely, these are "muscle overloading" or fatigue cramps.
- These occur when the neural mechanisms that are supposed to inhibit muscle contraction are depressed and the chemical and electrical synapses that fire the muscle fibers are enhanced.
- Your slow-twitch fibers get tired and you can no longer fire them as efficiently, so you try using intermediate fibers to help maintain pace, but these require more glycogen and are not as fatigue-resistant as slow-twitch fibers.
- Ex: when your glute muscle fatigues, your legs won't simply stop working. Instead, your brain tells your muscles, "this isn't getting the job done, let's fire the calves more forcefully to make up for the lack of power." Your calf isn't as strong or powerful as the glute, and it's not trained to handle this type of stress, so the calf cramps. :(
- Preventing cramps:
  - *Improve form and posture.* Need to perform the strengthening exercises that target the mechanics that commonly deteriorate late in a race. Quad/calf cramps: perform drills that focus on improving your hip extension and posture.
  - *Simulate late race fatigue in the gym.* Morning session to lift heavy/maximize recruitment of muscle fibers. Then they come back for a second evening session where they use lighter weights/higher reps to blast their fatigued muscles. This stimulates tremendous growth.
105.487179
345
0.787798
eng_Latn
0.999887
411c89217a21582fd329983ef01bf793c615afac
3,227
md
Markdown
business/processes/production/pulpAndPaper/pulpingliquor/README.md
mamhoff/datasets
b33b25ba5a27abd354c110520eaa969bffddd4a6
[ "MIT" ]
7
2019-07-23T07:19:36.000Z
2020-03-08T15:23:25.000Z
business/processes/production/pulpAndPaper/pulpingliquor/README.md
mamhoff/datasets
b33b25ba5a27abd354c110520eaa969bffddd4a6
[ "MIT" ]
14
2019-07-22T18:36:37.000Z
2019-09-09T11:45:00.000Z
business/processes/production/pulpAndPaper/pulpingliquor/README.md
mamhoff/datasets
b33b25ba5a27abd354c110520eaa969bffddd4a6
[ "MIT" ]
1
2019-07-22T18:22:28.000Z
2019-07-22T18:22:28.000Z
**Paper industry methodology, pulping liquors. Calculates carbon dioxide (CO<sub>2</sub>) emissions associated with the combustion of pulping liquors. Globally applicable.**

## Summary

This methodology represents **carbon dioxide** (CO<sub>2</sub>) emissions associated with the combustion of spent pulping liquors. The data and calculation methodology are sourced from the guidelines published by the International Council of Forest and Paper Associations ([ICFPA](http://www.wbcsd.org/web/projects/forestry/Pulp-and-Paper-Tool-Guidance.pdf)).

-----

## The methodology

### Emissions model

This methodology is based upon a basic mass-balance approach which considers the quantity of carbon which is oxidised during the combustion of pulping liquors. This quantity is derived from a simple multiplication of the quantity of pulping liquor (mass), its carbon content (fraction by mass) and the fraction undergoing complete oxidation (decimal fraction). This quantity of carbon is then converted into its corresponding quantity of CO<sub>2</sub> using the known ratio of their respective atomic/molecular masses. (A small worked sketch of this calculation appears at the end of this article.)

The methodology also provides 'heating' or 'calorific' values for biomass fuels. This enables the conversion of quantities of energy into the corresponding mass of biomass fuel and therefore enables calculations to be made on the basis of energy consumed as well as mass of biomass.

### Model data

The emissions intensity of pulping liquors varies according to their carbon content (percent by weight) and, when expressed in relation to energy yielded, their energy content. Therefore, 9 types of pulping liquor are represented, differentiated in terms of the wood type from which they were derived. Each is, in turn, represented by a characteristic carbon content and both gross and net calorific values. A default oxidation factor of 0.99 (99%) is provided.

### Activity data required

CO<sub>2</sub> emissions are directly proportionate to the **quantity of pulping liquor consumed**, which therefore must be provided in order to calculate. Quantities can be expressed in terms of mass or energy. When calculating using energetic quantities, the basis of the energy quantity - net or gross - can be stipulated.

### Calculation and results

**CO<sub>2</sub>** emissions are calculated by simply multiplying the specified quantity of pulping liquor consumed by the carbon content, oxidation and stoichiometric factors. By default, energy-based activity data is considered to represent the *net* calorific basis. These emissions represent those attributable to the specified quantity of pulping liquor consumed.

-----

## Related methodologies

Other available methodologies relating to the paper industry relate to [biomass](Pulp_and_paper_biomass_emissions) and [process carbonates](Pulp_and_Paper_Direct_Emissions).

-----

## Notes

This methodology represents emissions of biogenic CO<sub>2</sub>. This means that the CO<sub>2</sub> released was only removed from the atmosphere relatively recently, during the growth of the biomass. In some contexts this may mean that the emissions can be considered neutral with respect to their effects on atmospheric carbon concentrations and warming.
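To make the mass balance concrete, here is a minimal sketch of the calculation described above (my illustration, not ICFPA code). The 0.99 oxidation factor comes from the methodology text; the carbon fraction and net calorific value below are hypothetical placeholders, since the real per-wood-type values live in the ICFPA tables.

```python
# Sketch of the pulping-liquor mass balance described above:
# CO2 = liquor mass * carbon fraction * oxidation fraction * (44/12).
CO2_PER_C = 44.01 / 12.01  # molecular mass of CO2 over atomic mass of C

def liquor_co2_tonnes(liquor_tonnes: float,
                      carbon_fraction: float,
                      oxidation_fraction: float = 0.99) -> float:
    """CO2 emissions (tonnes) from combusting a mass of pulping liquor."""
    return liquor_tonnes * carbon_fraction * oxidation_fraction * CO2_PER_C

def liquor_mass_from_energy(energy_gj: float, net_cv_gj_per_tonne: float) -> float:
    """Convert an energy quantity (net basis) to liquor mass via the NCV."""
    return energy_gj / net_cv_gj_per_tonne

# Illustrative numbers only -- real carbon contents and calorific values
# come from the ICFPA tables for the relevant wood type.
mass = liquor_mass_from_energy(1000.0, net_cv_gj_per_tonne=12.0)
print(round(liquor_co2_tonnes(mass, carbon_fraction=0.35), 1))
```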
40.848101
87
0.802603
eng_Latn
0.999185
411cff104756f9cadaca14383ef08b2fd0121127
244
md
Markdown
LPA/README.md
zhongqin0820/SPL
38ad431ad90174494f4e5e876aa3490ed9e5906f
[ "MIT" ]
null
null
null
LPA/README.md
zhongqin0820/SPL
38ad431ad90174494f4e5e876aa3490ed9e5906f
[ "MIT" ]
5
2018-07-29T14:13:03.000Z
2018-10-03T09:14:22.000Z
LPA/README.md
zhongqin0820/Misc-Algorithm-Implement
38ad431ad90174494f4e5e876aa3490ed9e5906f
[ "MIT" ]
null
null
null
# Created
2018/07/23

# References
- [Label Propagation algorithm and a Python implementation](https://blog.csdn.net/zouxy09/article/details/49105265)

# Optimization
Parallelizing the algorithm generally takes one of two forms: data parallelism and model parallelism.

For an iterative algorithm, besides checking for convergence, we can also evaluate the model on the test labels every few iterations to see how overall training performance is developing. This is especially effective for judging whether training is overfitting, so the code includes this step.
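For context, below is a toy sketch of the basic label propagation loop with the periodic test-set evaluation suggested above. It is illustrative only (not the implementation from the referenced blog post) and assumes `numpy`; `W` is an affinity matrix, `Y0` a one-hot label matrix, and `labeled` a boolean mask of the labeled points.

```python
# Toy label propagation: propagate labels over a row-normalized affinity
# matrix, clamping the labeled points, and probe test accuracy every few
# iterations (useful for spotting overfitting, as noted above).
import numpy as np

def label_propagation(W, Y0, labeled, Y_test=None, test_idx=None,
                      max_iter=200, eval_every=10, tol=1e-6):
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Y = Y0.copy()
    for it in range(max_iter):
        Y_new = P @ Y                      # one propagation step
        Y_new[labeled] = Y0[labeled]       # clamp the known labels
        if Y_test is not None and it % eval_every == 0:
            acc = (Y_new[test_idx].argmax(axis=1) == Y_test).mean()
            print(f"iter {it}: test accuracy {acc:.3f}")
        if np.abs(Y_new - Y).sum() < tol:  # convergence check
            Y = Y_new
            break
        Y = Y_new
    return Y.argmax(axis=1)                # predicted label per point
```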
22.181818
94
0.807377
yue_Hant
0.625939
411e73aba37f359ab52aa002db0050b3ba5b5c8e
29
md
Markdown
README.md
piresjonas/funcoes-php
b8bcee4b2e9c0b81004411ac022bb25db5a04f60
[ "MIT" ]
null
null
null
README.md
piresjonas/funcoes-php
b8bcee4b2e9c0b81004411ac022bb25db5a04f60
[ "MIT" ]
null
null
null
README.md
piresjonas/funcoes-php
b8bcee4b2e9c0b81004411ac022bb25db5a04f60
[ "MIT" ]
null
null
null
# funcoes-php

Functions in PHP (Funções em PHP)
9.666667
14
0.758621
por_Latn
0.997146
411e7a8e6468efa1adeadf30b7f0cddb5200bb9e
323
md
Markdown
_posts/2015/2015-09-03-entrainement-septembre-2015.md
bdossantos/bds.run
8a11943e861b00010ff1fab1224338b5091f7dba
[ "WTFPL" ]
2
2018-02-01T22:30:41.000Z
2018-02-02T10:15:07.000Z
_posts/2015/2015-09-03-entrainement-septembre-2015.md
bdossantos/runner.sh
c41e89f56bb633d9a9eacf0028feb246c285d774
[ "WTFPL" ]
6
2019-05-23T15:31:57.000Z
2021-04-19T04:45:53.000Z
_posts/2015/2015-09-03-entrainement-septembre-2015.md
bdossantos/runner.sh
c41e89f56bb633d9a9eacf0028feb246c285d774
[ "WTFPL" ]
null
null
null
---
layout: post
title: September 2015 training
description: Training summary for the month of September 2015
category: entrainement
---

| Sessions       | 5              |
| Distance       | 75.02 km       |
| Duration       | 06:24:51 h:m:s |
| Elevation gain | 517 m          |
| Average speed  | 11.7 km/h      |
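A quick arithmetic check of the table (my addition, not part of the original log): the average speed follows directly from the distance and duration.

```python
# Check the monthly average speed from the distance and duration above.
distance_km = 75.02
hours = 6 + 24 / 60 + 51 / 3600       # 06:24:51 as decimal hours
print(round(distance_km / hours, 1))  # -> 11.7 km/h, matching the table
```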
24.846154
52
0.547988
fra_Latn
0.56813
411edc7f59bda27109320afa1c7418877ced1ade
1,212
md
Markdown
class/week11/readings.md
AvijeetPrasad/jb_course
d52ce9e14781bcac3db9e736140fb8895bf9162b
[ "MIT" ]
50
2020-09-01T06:35:34.000Z
2022-03-23T09:53:39.000Z
class/week11/readings.md
AvijeetPrasad/jb_course
d52ce9e14781bcac3db9e736140fb8895bf9162b
[ "MIT" ]
5
2020-09-01T04:58:39.000Z
2021-01-10T04:59:23.000Z
class/week11/readings.md
AvijeetPrasad/jb_course
d52ce9e14781bcac3db9e736140fb8895bf9162b
[ "MIT" ]
12
2020-09-01T06:26:34.000Z
2022-01-08T20:38:10.000Z
# Readings

This week's reading assignments are listed below:

<label><input type="checkbox" id="week11_reading1" class="box"> **Readings 10.1: ** </input></label>
<label><input type="checkbox" id="week11_reading2" class="box"> **Readings 10.2: ** </input></label>
<label><input type="checkbox" id="week11_reading3" class="box"> **Readings 10.3: ** </input></label>
<label><input type="checkbox" id="week11_reading4" class="box"> **Readings 10.4: ** </input></label>
<label><input type="checkbox" id="week11_reading5" class="box"> **Readings 10.5: ** </input></label>
<label><input type="checkbox" id="week11_reading6" class="box"> **Readings 10.6: ** </input></label>
<label><input type="checkbox" id="week11_reading7" class="box"> **Readings 10.7: ** </input></label>
<label><input type="checkbox" id="week11_reading8" class="box"> **Readings 10.8: ** </input></label>

Click the button below to be taken to the Pearson textbook to access the eText

````{panels}
If you have access to the eText, you can go to the eText from here

++++

```{link-button} https://portal.mypearson.com
:text: Pearson eText
:type: url
:classes: btn-outline-warning btn-block stretched-link text-dark
```
````
32.756757
101
0.681518
eng_Latn
0.598068
411ee043b46641c0b9ed6e65a314cd1c4958d703
1,660
md
Markdown
docs/automation/netserver-scripting/reference/ITicketAgent/index.md
SuperOfficeDocs/superoffice-docs
6696af195598bf1baebada1c0624b5d833cd68e3
[ "MIT" ]
2
2022-02-15T22:41:17.000Z
2022-03-30T07:17:15.000Z
docs/automation/netserver-scripting/reference/ITicketAgent/index.md
acdavidh/superoffice-docs
873de11300c32857c73c4131b8fb50152931ac28
[ "MIT" ]
155
2021-04-20T11:50:13.000Z
2022-03-30T11:23:26.000Z
docs/automation/netserver-scripting/reference/ITicketAgent/index.md
acdavidh/superoffice-docs
873de11300c32857c73c4131b8fb50152931ac28
[ "MIT" ]
8
2021-05-14T15:14:04.000Z
2022-03-31T08:07:12.000Z
---
uid: iticketagent-script-events
title: ITicketAgent script event methods
description: NetServer script event methods.
so.generated: true
keywords:
- "netserver"
- "scripting"
so.date: 03.19.2021
so.topic: reference
so.envir:
- "onsite"
---

# ITicketAgent method listing

Service methods defined on <see cref='T:SuperOffice.CRM.Services.ITicketAgent'>ITicketAgent</see> that can trigger server-side event scripts.

* [AddAttachments](addattachments.md)
* [CreateDefaultAttachmentEntity](createdefaultattachmententity.md)
* [CreateDefaultTicketEntity](createdefaultticketentity.md)
* [CreateDefaultTicketMessageEntity](createdefaultticketmessageentity.md)
* [DeleteTicketEntity](deleteticketentity.md)
* [DeleteTicketMessageEntity](deleteticketmessageentity.md)
* [GetAttachmentEntity](getattachmententity.md)
* [GetAttachmentInfo](getattachmentinfo.md)
* [GetAttachmentStream](getattachmentstream.md)
* [GetTicket](getticket.md)
* [GetTicketAttachments](getticketattachments.md)
* [GetTicketEntity](getticketentity.md)
* [GetTicketMessage](getticketmessage.md)
* [GetTicketMessageEntity](getticketmessageentity.md)
* [GetTickets](gettickets.md)
* [Html2Text](html2text.md)
* [NotifyNewTicket](notifynewticket.md)
* [NotifyNewTicketMessage](notifynewticketmessage.md)
* [SanitizeMailContent](sanitizemailcontent.md)
* [SanitizeMailContents](sanitizemailcontents.md)
* [SaveAttachmentEntity](saveattachmententity.md)
* [SaveTicketEntity](saveticketentity.md)
* [SaveTicketMessageEntity](saveticketmessageentity.md)
* [SendTicketMessage](sendticketmessage.md)
* [SetTicketReadByOwner](setticketreadbyowner.md)
* [UploadAttachment](uploadattachment.md)
36.086957
141
0.813855
yue_Hant
0.160203
411f27e8e71d16f3f7ed35f00b34dca0f5531936
358
md
Markdown
assets/texts/en/Changelog/1.5.md
hellotinh03/offline-qr-code
6b85ddce3d20d1fbae6f74326efb1f050f375a23
[ "CC0-1.0", "CC-BY-4.0" ]
257
2018-04-12T18:39:46.000Z
2022-03-25T05:14:39.000Z
assets/texts/en/Changelog/1.5.md
hellotinh03/offline-qr-code
6b85ddce3d20d1fbae6f74326efb1f050f375a23
[ "CC0-1.0", "CC-BY-4.0" ]
246
2018-04-13T17:39:52.000Z
2022-02-27T18:07:48.000Z
assets/texts/en/Changelog/1.5.md
hellotinh03/offline-qr-code
6b85ddce3d20d1fbae6f74326efb1f050f375a23
[ "CC0-1.0", "CC-BY-4.0" ]
120
2018-04-25T12:16:43.000Z
2022-02-21T22:41:03.000Z
* **New:** Many useful tips were added.
* **Enhanced:** A new placeholder image is used.
* **Enhanced:** Adjustments for Firefox 66, which will allow you to customize the hot key of the add-on.
* **Fixed:** Accessibility of the (error) message boxes has been improved and their descriptions are now translated.
* **Fixed:** Other minor bugs have been fixed.
59.666667
116
0.72905
eng_Latn
0.999391
411f3dd273a70c68f5053c49dc2e933447daebf3
404
md
Markdown
README.md
Sammaye/cly
5bb3ea7786ef9dd77a22ad86df48643344816572
[ "BSD-3-Clause" ]
null
null
null
README.md
Sammaye/cly
5bb3ea7786ef9dd77a22ad86df48643344816572
[ "BSD-3-Clause" ]
null
null
null
README.md
Sammaye/cly
5bb3ea7786ef9dd77a22ad86df48643344816572
[ "BSD-3-Clause" ]
null
null
null
Sammaye's Comics
=========

This is a personal project, made in Yii2 and backed by MongoDB, which will aggregate my favourite comics each day and put them in a nice email for me.

It is written as though it could be released to the general public and actually used as a platform to serve comics to the entire world.

You can find my version here [https://sammaye.com/comics](https://sammaye.com/comics).
44.888889
150
0.759901
eng_Latn
0.99987
41202c4eb46b6e143ac1b943a79cfb003677963d
1,109
md
Markdown
README.md
yulduzetta/book-search-engine
acd1cfe975a6678c1076ef00db1ab114f38ae983
[ "MIT" ]
1
2021-12-08T13:39:50.000Z
2021-12-08T13:39:50.000Z
README.md
yulduzetta/book-search-engine
acd1cfe975a6678c1076ef00db1ab114f38ae983
[ "MIT" ]
null
null
null
README.md
yulduzetta/book-search-engine
acd1cfe975a6678c1076ef00db1ab114f38ae983
[ "MIT" ]
2
2022-03-13T17:31:32.000Z
2022-03-19T00:59:55.000Z
# Book Search Engine.

![MIT License](https://img.shields.io/badge/mit-brightgreen)
![Javascript](https://img.shields.io/github/languages/top/nielsenjared/badmath)

### Description 🤓

This app converts a fully functioning Google Books API search engine built with a RESTful API into a GraphQL API built with Apollo Server. Uses the MERN stack, with a React front end, MongoDB database, and Node.js/Express.js server and API.

## Deployed version can be found on https://yulduz-search-books-mern.herokuapp.com

<img width="1433" alt="demo" src="https://user-images.githubusercontent.com/13324397/116847510-4f6bf100-abb0-11eb-8d9e-0493767811ae.png">

## Table of Contents

* 🔧 [Installation](#installation)
* 🗒️ [Usage](#usage)
* ⚖️ [License](#license)

## Installation

```sh
$ git clone git@github.com:yulduzetta/book-search-engine.git
$ cd book-search-engine
$ npm install
$ npm start
```

## Usage

```sh
$ node index.js
```

## License

<a href="http://choosealicense.com/licenses/mit/" target="_blank">MIT License</a>

![MIT License](https://img.shields.io/badge/mit-brightgreen)
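For context on the description above: the search that the GraphQL resolver exposes ultimately hits the public Google Books volumes endpoint. Below is a minimal illustrative sketch of that call in Python (not the repo's Node/Apollo code), assuming the `requests` package.

```python
# Sketch of the Google Books lookup that a searchBooks resolver would wrap.
# Illustrative only; the real app performs this inside a Node/Apollo resolver.
import requests

def search_books(query: str, limit: int = 5) -> list[dict]:
    resp = requests.get(
        "https://www.googleapis.com/books/v1/volumes",
        params={"q": query, "maxResults": limit},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    # Keep only the fields a typical Book type would expose.
    return [
        {
            "bookId": item.get("id"),
            "title": item.get("volumeInfo", {}).get("title"),
            "authors": item.get("volumeInfo", {}).get("authors", []),
        }
        for item in items
    ]

if __name__ == "__main__":
    for book in search_books("graphql"):
        print(book["title"], "-", ", ".join(book["authors"]))
```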
33.606061
240
0.733093
eng_Latn
0.26034
4121b3eb9df62f45b2f38b9be1d7cf12d43ff11e
160
md
Markdown
_posts/0000-01-02-TechVignesh10101.md
TechVignesh10101/github-slideshow
2b5f71ac19ba32028ee6a739bbd962abbe78195a
[ "MIT" ]
null
null
null
_posts/0000-01-02-TechVignesh10101.md
TechVignesh10101/github-slideshow
2b5f71ac19ba32028ee6a739bbd962abbe78195a
[ "MIT" ]
5
2020-05-03T11:20:37.000Z
2022-02-26T07:36:57.000Z
_posts/0000-01-02-TechVignesh10101.md
TechVignesh10101/github-slideshow
2b5f71ac19ba32028ee6a739bbd962abbe78195a
[ "MIT" ]
null
null
null
layout: slide
title: "Welcome to our second slide!"

Hello everyone, how was your day? I hope you are doing fine. I am a beginner, and I hope I will learn a lot from here on.
26.666667
63
0.75
eng_Latn
0.999906
4124068f94a99fc3b873765e1bcd0898fd71b8aa
2,210
md
Markdown
_posts/2019-05-22-PALIM.md
spoj-solution/spoj-solution.github.io
9b31cccb4ba76c1c78262683ceff15cb89934160
[ "MIT" ]
null
null
null
_posts/2019-05-22-PALIM.md
spoj-solution/spoj-solution.github.io
9b31cccb4ba76c1c78262683ceff15cb89934160
[ "MIT" ]
1
2022-02-26T04:57:17.000Z
2022-02-26T04:57:17.000Z
_posts/2019-05-22-PALIM.md
spoj-solution/spoj-solution.github.io
9b31cccb4ba76c1c78262683ceff15cb89934160
[ "MIT" ]
2
2019-07-10T14:10:59.000Z
2020-05-10T05:23:53.000Z
---
layout: post
title: PALIM - Yet Another Longest Palindrome Problem
categories: ['uncategorized']
code: PALIM
src: PALIM.cpp
---

### **Statement**

A string is called a palindrome if it's the same when read from left to right and from right to left. For example, `"abdba"` is a palindrome, but `"abbaa"` is not.

Given a string, print the length of the longest consecutive sequence of characters that occurs at least once in this string and is a palindrome.

### Input

* Line 1: a string consisting of at most 100000 characters. The ASCII codes of all characters are between 32 and 127, inclusive.
* Line 2: a magical key (for security purposes).

### Output

* Line 1: the length of the longest palindrome.
* Line 2: the magical key.

### Example

Input:

    abaabbabaaba
    MAGICAL KEY

Output:

    6
    MAGICAL KEY

### Restriction

Only C++ is allowed in this problem now. In addition, you will receive `"wrong answer"` if your program doesn't start with [this](https://www.spoj.com/content/crazyb0y:PALIM.cpp). You can't use the macro `"#undef"` in your solution either. If you want to solve this problem in another language, please send me the header file for your language.

warning: Don't try to access the memory of the tester, or I will reject your solution manually, and you will lose the chance to enjoy this problem as well.

### Hint

hint on using the tester library: you can't read anything from stdin, and you can't print anything either; your program will be terminated once you call answer().

hint on viewing feedback: You can click on the `"wrong answer"` link to view the judge's feedback: whether your solution didn't include the testlib, or failed on the sample. (If neither, your solution failed on a further test case.)

### Notice

update on Oct. 24: I have updated the header file for C++; now you will receive `"Runtime Error (NZEC)"` if your solution calls isSame() illegally. Submissions with the old version of the header file are still acceptable.

rejudge on Oct. 24: some test cases were added; three submissions were rejudged as TLE instead of AC.

#### **Solution**
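The task is the classic longest-palindromic-substring problem; as a reference point, here is a minimal sketch of one standard O(n) approach (Manacher's algorithm). This is an illustration only, not the author's PALIM.cpp — actual submissions must be C++ built on the judge's header and cannot read stdin directly.

```python
# Manacher's algorithm: longest palindromic substring length in O(n).
# Reference sketch only; real PALIM submissions must be C++ and must use
# the judge's header (no stdin/stdout) instead of plain I/O.
def longest_palindrome_length(s: str) -> int:
    # Interleave sentinels so even- and odd-length palindromes are uniform.
    t = "\x00" + "\x00".join(s) + "\x00"
    n = len(t)
    radius = [0] * n
    center = right = 0
    best = 0
    for i in range(n):
        if i < right:
            # Mirror the radius from the symmetric position, capped at right.
            radius[i] = min(right - i, radius[2 * center - i])
        # Expand around i as far as the palindrome reaches.
        while (i - radius[i] - 1 >= 0 and i + radius[i] + 1 < n
               and t[i - radius[i] - 1] == t[i + radius[i] + 1]):
            radius[i] += 1
        if i + radius[i] > right:
            center, right = i, i + radius[i]
        # A radius in t equals the palindrome's length in s.
        best = max(best, radius[i])
    return best

print(longest_palindrome_length("abaabbabaaba"))  # -> 6, matching the sample
```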
27.974684
129
0.699548
eng_Latn
0.998816
4125a25182d32efb60e451703d4d6cc2d9e90140
3,541
md
Markdown
articles/commerce/dam-upload-video.md
MicrosoftDocs/Dynamics-365-Operations.nl-nl
e1579117bba04a73a7505bd75ca96612ede76a98
[ "CC-BY-4.0", "MIT" ]
3
2020-05-18T17:14:39.000Z
2021-11-22T14:11:57.000Z
articles/commerce/dam-upload-video.md
MicrosoftDocs/Dynamics-365-Operations.nl-nl
e1579117bba04a73a7505bd75ca96612ede76a98
[ "CC-BY-4.0", "MIT" ]
7
2017-12-12T13:10:45.000Z
2019-04-30T11:45:57.000Z
articles/commerce/dam-upload-video.md
MicrosoftDocs/Dynamics-365-Operations.nl-nl
e1579117bba04a73a7505bd75ca96612ede76a98
[ "CC-BY-4.0", "MIT" ]
1
2019-10-12T18:21:21.000Z
2019-10-12T18:21:21.000Z
---
title: Upload videos
description: This topic describes how to upload videos in Microsoft Dynamics 365 Commerce site builder.
author: psimolin
ms.date: 06/09/2021
ms.topic: article
ms.prod: ''
ms.technology: ''
audience: Application User
ms.reviewer: v-chgri
ms.custom: ''
ms.assetid: ''
ms.search.region: Global
ms.search.industry: ''
ms.author: psimolin
ms.search.validFrom: 2019-10-31
ms.dyn365.ops.version: ''
ms.openlocfilehash: f481e5d3f323b0c86d637b67c119d13b956d5714dc0d990004834e2be05b370e
ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb
ms.translationtype: HT
ms.contentlocale: nl-NL
ms.lasthandoff: 08/05/2021
ms.locfileid: "6735625"
---
# <a name="upload-videos"></a>Upload videos

[!include [banner](includes/banner.md)]

This topic describes how to upload videos in Microsoft Dynamics 365 Commerce site builder.

The Commerce site builder media library lets you upload videos. You should always upload the version of a video that has the highest bit rate and resolution, because the video is automatically converted to suit different viewports and their associated breakpoints.

### <a name="video-information-specified-during-upload"></a>Video information specified during upload

The following information can be specified while uploading a video.

- **Title, description, keywords**: the video's metadata.
- **Auto-generate closed captions**: specifies whether closed captions should be generated automatically for the video (only the English language is supported).
- **Closed captions**: specifies whether closed captions should be used.
- **Regular audio**: specifies that the regular audio track should be used.
- **Thumbnail**: specifies the thumbnail for the video. If you don't specify one, the thumbnail is generated automatically.
- **Descriptive audio**: specifies that the descriptive audio track should be used.

## <a name="upload-a-video"></a>Upload a video

Follow these steps to upload a video in site builder.

1. Select **Media library** in the navigation pane on the left.
1. Select **Upload \> Upload media items** on the command bar.
1. In the File Explorer window, go to one or more video files that you want to upload, and then select **Open**.
1. In the **Upload media item** dialog box, enter the required title and alternative text.
1. Enter an optional description and keywords, and select a category if desired.
1. If you want to publish the image(s) immediately after upload, select the **Publish media items after upload** check box.
1. Select **OK**.

If you upload multiple types of files at once (such as images and videos), the **Upload media item** dialog box only lets you specify keywords, whether the files should be published immediately after upload, and whether closed captions should be generated automatically for video files. All files will have the same keywords.

## <a name="additional-resources"></a>Additional resources

[Digital asset management overview](dam-overview.md)

[Upload images](dam-upload-images.md)

[Upload files](dam-upload-files.md)

[Crop images](dam-crop-images.md)

[Customize image focal points](dam-custom-focal-point.md)

[Upload and serve static files](upload-serve-static-files.md)

[!INCLUDE[footer-include](../includes/footer-banner.md)]
47.851351
350
0.792432
nld_Latn
0.99854
41265626e326c128fb728d2c44832a9fcb499aee
1,402
md
Markdown
index.md
paceholder/paceholder.github.io
819a3caf488522331d9eea655c514abe15116ab0
[ "MIT" ]
1
2019-05-20T17:43:31.000Z
2019-05-20T17:43:31.000Z
index.md
paceholder/paceholder.github.io
819a3caf488522331d9eea655c514abe15116ab0
[ "MIT" ]
null
null
null
index.md
paceholder/paceholder.github.io
819a3caf488522331d9eea655c514abe15116ab0
[ "MIT" ]
null
null
null
---
layout: page
title: Hello World!
tagline: Supporting tagline
---
{% include JB/setup %}

<!-- Show last 5 posts here -->
{% for post in paginator.posts %}
<article>
  <header>
    <h2><a href="{{site.baseurl}}{{post.url}}">{{ post.title }}</a></h2>
    <span class="date"><i class="icon-clock"></i><time datetime="{{post.date|date:"%F"}}">{{post.date|date:"%b %d, %Y"}}</time></span><br/>
    <span class="category"><i class="icon-tag"></i> {{ post.categories | category_links }}</span><br/>
    <span class="author"><i class="icon-user"></i> {% if post.author %}{{post.author}}{% else %}{{site.author}}{% endif %}</span>
  </header>
  <div class="entry">{{ post.excerpt }}</div>
</article>
{% endfor %}

<div id="paginator">
  {% if paginator.next_page %}
  <a href="{{site.baseurl}}/page{{paginator.next_page}}"> &laquo; Older Posts</a>
  {% endif %}
  {% if paginator.previous_page %}
    {% if paginator.previous_page == 1 %}
    <span class="more"> <a href="{{site.baseurl}}/"> Newer Posts &raquo; </a> </span>
    {% else %}
    <span class="more"> <a href="{{site.baseurl}}/page{{paginator.previous_page}}"> Newer Posts &raquo; </a> </span>
    {% endif %}
  {% endif %}
</div>
31.155556
147
0.500713
eng_Latn
0.241736
412a43e885b1c4025893aba2f8b18c487090b113
1,978
markdown
Markdown
README.markdown
mikebell90/aws-elasticache-cluster-client-memcached-for-java
cb5e1e4adaa006cf5d3a73cc520f37a7f0d1ab99
[ "MIT" ]
null
null
null
README.markdown
mikebell90/aws-elasticache-cluster-client-memcached-for-java
cb5e1e4adaa006cf5d3a73cc520f37a7f0d1ab99
[ "MIT" ]
null
null
null
README.markdown
mikebell90/aws-elasticache-cluster-client-memcached-for-java
cb5e1e4adaa006cf5d3a73cc520f37a7f0d1ab99
[ "MIT" ]
null
null
null
# Amazon ElastiCache Cluster Client

Amazon ElastiCache Cluster Client is an enhanced Java library to connect to ElastiCache clusters. This client library has been built upon Spymemcached and is released under the [Amazon Software License](http://aws.amazon.com/asl/).

# Building

Amazon ElastiCache Cluster Client can be compiled using Apache Ant by running the following command:

    ant

This will generate binary, source, and javadoc jars in the build directory of the project.

More test info will be updated shortly.

# Testing

_Note: The ant test target is in the process of being updated to run the additional tests written for Auto Discovery._

The latest version of Amazon ElastiCache Cluster Client has a set of command line arguments that can be used to configure the location of your testing server. The arguments are listed below.

    -Dserver.address_v4=ipv4_address_of_testing_server

This argument is used to specify the IPv4 address of your testing server. By default it is set to localhost.

    -Dserver.port_number=port_number_of_memcached

This argument is used when memcached is started on a port other than 11211.

    -Dtest.type=ci

This argument is used for CI testing where certain unit tests might be temporarily failing.

# More Information for Amazon ElastiCache Cluster Client

Github link: https://github.com/amazonwebservices/aws-elasticache-cluster-client-memcached-for-java

This repository is a fork of the spymemcached Java client for connecting to memcached (specifically the https://github.com/dustin/java-memcached-client repo). Additional changes have been made to support Amazon ElastiCache Auto Discovery. To read more about Auto Discovery, please go here: http://docs.amazonwebservices.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.html.

For more information about Spymemcached see the link below:

[Spymemcached Project Home](http://code.google.com/p/spymemcached/) contains a wiki, issue tracker, and downloads section.
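Putting those arguments together, a test run against a non-default memcached instance could look like the following sketch. The host, port, and the use of the `test` target are illustrative assumptions, not values from this README:

```sh
# Hypothetical invocation: point the tests at memcached on 10.0.0.5:11212
# and mark the run as CI. Host and port are made-up example values.
ant test -Dserver.address_v4=10.0.0.5 -Dserver.port_number=11212 -Dtest.type=ci
```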
39.56
231
0.808392
eng_Latn
0.984367
412beb29181a5090fdb57f094ed522f07ba92e48
1,465
md
Markdown
JS/code-fun/add-minus.md
NARUTOne/interview-note
fa498c8dae8daee453271813def5ebdde7517f58
[ "MIT" ]
1
2021-03-21T09:38:52.000Z
2021-03-21T09:38:52.000Z
JS/code-fun/add-minus.md
NARUTOne/interview-note
fa498c8dae8daee453271813def5ebdde7517f58
[ "MIT" ]
null
null
null
JS/code-fun/add-minus.md
NARUTOne/interview-note
fa498c8dae8daee453271813def5ebdde7517f58
[ "MIT" ]
null
null
null
# 5.add(3).minus(2)

> Chained calls + adding methods to the Number/Object prototype

**⚡Problem**: ❓ Implement 5.add(3).minus(2)

## Preferred solution 🔥

> Extended to support floating-point numbers.
> [Large-number addition/subtraction: checked directly against Number's built-in safe limits; anything beyond them is clamped to the safe limit.]
> [Decimals with very many digits: the number of digits in the JS safe-integer limit minus 2 is used as the maximum supported number of decimal places.]

```js
Number.MAX_SAFE_DIGITS = Number.MAX_SAFE_INTEGER.toString().length - 2;

// Number of decimal places, capped at MAX_SAFE_DIGITS.
Number.prototype.digits = function () {
  let result = (this.valueOf().toString().split('.')[1] || '').length;
  return result > Number.MAX_SAFE_DIGITS ? Number.MAX_SAFE_DIGITS : result;
};

Number.prototype.add = function (i = 0) {
  if (typeof i !== 'number') {
    throw new Error('Please pass in a valid number');
  }
  const v = this.valueOf();
  const thisDigits = this.digits();
  const iDigits = i.digits();
  // Scale both operands to integers to avoid floating-point drift.
  const baseNum = Math.pow(10, Math.max(thisDigits, iDigits));
  const result = (v * baseNum + i * baseNum) / baseNum;
  if (result > 0) {
    return result > Number.MAX_SAFE_INTEGER ? Number.MAX_SAFE_INTEGER : result;
  } else {
    return result < Number.MIN_SAFE_INTEGER ? Number.MIN_SAFE_INTEGER : result;
  }
};

Number.prototype.minus = function (i = 0) {
  if (typeof i !== 'number') {
    throw new Error('Please pass in a valid number');
  }
  const v = this.valueOf();
  const thisDigits = this.digits();
  const iDigits = i.digits();
  const baseNum = Math.pow(10, Math.max(thisDigits, iDigits));
  const result = (v * baseNum - i * baseNum) / baseNum;
  if (result > 0) {
    return result > Number.MAX_SAFE_INTEGER ? Number.MAX_SAFE_INTEGER : result;
  } else {
    return result < Number.MIN_SAFE_INTEGER ? Number.MIN_SAFE_INTEGER : result;
  }
};
```
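A quick sanity check of the chained API, added here for illustration. Note the parentheses: a bare `5.add(3)` is a syntax error, because `5.` is consumed as a number literal before the method name is read:

```js
console.log((5).add(3).minus(2)); // 6
console.log((0.1).add(0.2));      // 0.3 -- while plain 0.1 + 0.2 yields 0.30000000000000004
```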
31.847826
92
0.680546
yue_Hant
0.782776
412c6e0f200e4fb6c7a20d53c9d0e3232ea81d23
680
md
Markdown
_posts/2014-12-11-tales-of-three-hemispheres-fantasy-by-lord-dunsany.md
UniversalAdministrator/ufu
45552018807a488c4d1e8f77a56b0d3e02cd280b
[ "MIT" ]
null
null
null
_posts/2014-12-11-tales-of-three-hemispheres-fantasy-by-lord-dunsany.md
UniversalAdministrator/ufu
45552018807a488c4d1e8f77a56b0d3e02cd280b
[ "MIT" ]
2
2018-01-03T00:41:55.000Z
2020-08-08T02:47:55.000Z
_posts/2014-12-11-tales-of-three-hemispheres-fantasy-by-lord-dunsany.md
UniversalAdministrator/ufu
45552018807a488c4d1e8f77a56b0d3e02cd280b
[ "MIT" ]
null
null
null
---
ID: 1897
post_title: >
  Tales of Three Hemispheres, Fantasy, by Lord Dunsany
author: abbie04m553726
post_excerpt: ""
layout: post
permalink: >
  https://universalflowuniversity.com/uncategorized/tales-of-three-hemispheres-fantasy-by-lord-dunsany/
published: true
post_date: 2014-12-11 15:06:03
---
[embed]https://www.youtube.com/watch?v=MJj3Uojl3ek[/embed]<br/><br/>
<p>If you have already listened to "Idle Days on the Yann" from my previous upload, then you might want to skip it and move to the next story, which starts at the 2:07:55 mark. ("Idle Days on the Yann" starts at the 1:24:31 mark and ends at the 2:07:53 mark.) Tales of Three Hemispheres, Fantasy Audiobook, by Lord Dunsany</p>
40
164
0.75
eng_Latn
0.854105
412c78c570fbe8f6b41a71e4cfa3cbb76dd08e87
1,189
md
Markdown
trivial-http-server/plan.md
Madoc/fp-learning
e2f084b65c5ee3750d07404cee5fefd99ed6c94c
[ "MIT" ]
null
null
null
trivial-http-server/plan.md
Madoc/fp-learning
e2f084b65c5ee3750d07404cee5fefd99ed6c94c
[ "MIT" ]
null
null
null
trivial-http-server/plan.md
Madoc/fp-learning
e2f084b65c5ee3750d07404cee5fefd99ed6c94c
[ "MIT" ]
null
null
null
# Trivial HTTP server plan

## Framework candidates

* [lolhttp](https://github.com/criteo/lolhttp): Looks quite easy to handle. Uses Cats.
* [Spray](http://spray.io/): Works on top of Akka.
* [http4s](https://http4s.org/): Also based on Cats, among other libraries.

## Plan per framework

Create a simple HTTP server in the following steps (a framework-agnostic sketch of the step-3 contract follows the list):

1. It returns `Hello, world!` on `GET /hello`.
2. It returns a random number on `GET /random`. (That is, a new one each time.)
3. A simple accumulator:
    * `GET /accumulator` returns the current number.
    * `POST /accumulator` with a number in the body adds that number to the current accumulator number.
    * `PUT /accumulator` overwrites the current accumulator number entirely.
    * `DELETE /accumulator` resets the current accumulator number to zero.
4. Combining HTTP and file I/O:
    * `GET /storage/<filename>` reads a text file from a special folder and responds with the contents. (No relative paths allowed.)
    * `PUT /storage/<filename>` creates or overwrites a certain file. (Again, no relative paths.)
    * `DELETE /storage/<filename>` deletes a certain file.
5. Logging: Integrate logging. This can also be done at any other time.
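To pin down the step-3 accumulator contract before committing to one of the Scala frameworks above, here is a neutral sketch using Node's built-in `http` module. It is an illustration of the endpoint behavior only (single mutable value, no input validation), not part of the plan itself:

```typescript
import * as http from "http";

// Mutable server-side state, as described in step 3 of the plan.
let accumulator = 0;

const server = http.createServer((req, res) => {
  if (req.url !== "/accumulator") {
    res.statusCode = 404;
    res.end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    switch (req.method) {
      case "GET":
        break; // just report the current value
      case "POST":
        accumulator += Number(body); // add the posted number (no validation here)
        break;
      case "PUT":
        accumulator = Number(body); // overwrite the value entirely
        break;
      case "DELETE":
        accumulator = 0; // reset to zero
        break;
      default:
        res.statusCode = 405;
        res.end();
        return;
    }
    res.setHeader("Content-Type", "text/plain");
    res.end(String(accumulator));
  });
});

server.listen(8080);
```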
49.541667
131
0.727502
eng_Latn
0.946611
412cd88fd3391719535df5e9ad0edd1a07365b0f
116
md
Markdown
README.md
DataAutomatica/justus.automatica.love
6e320b540b28ce26725f214810f2f49599407265
[ "MIT" ]
null
null
null
README.md
DataAutomatica/justus.automatica.love
6e320b540b28ce26725f214810f2f49599407265
[ "MIT" ]
null
null
null
README.md
DataAutomatica/justus.automatica.love
6e320b540b28ce26725f214810f2f49599407265
[ "MIT" ]
null
null
null
# Website for justus.automatica.love

Uses the Minimal Block Jekyll theme - `https://github.com/drvy/minimal-block`
29
77
0.775862
kor_Hang
0.532074
412d1275c4b72c72bcd5df3b13956af6159abc47
4,852
md
Markdown
README.md
Rolice/laravel-db-switcher
ca5dcb45ee9fdc4eaf8f8af5c7ff263266f33d0c
[ "MIT" ]
3
2018-01-24T23:49:07.000Z
2020-03-08T06:10:53.000Z
README.md
Rolice/laravel-db-switcher
ca5dcb45ee9fdc4eaf8f8af5c7ff263266f33d0c
[ "MIT" ]
1
2018-03-19T23:32:23.000Z
2018-05-15T00:02:58.000Z
README.md
Rolice/laravel-db-switcher
ca5dcb45ee9fdc4eaf8f8af5c7ff263266f33d0c
[ "MIT" ]
1
2018-03-15T23:31:19.000Z
2018-03-15T23:31:19.000Z
# Database Switch for Laravel and Lumen

Composer package for Laravel that enables easy replacement of the database instance. The package is highly suitable for similar databases that run different copies of the same program.

This package is being developed and tested under **Laravel 5.3**, **Laravel 5.4** and **Lumen 5.4**. However, it should be compatible with older releases of Laravel, at least as far back as version 5.0.

## Prerequisites

### Composer

You will need Composer to set up your project. You can most likely skip this point, since it is a must for your Laravel or Lumen project, but if for some reason you are new to it, you will find the official [composer website](http://getcomposer.org/) very useful. There you will find detailed documentation, use cases and scenarios, plus a full set of instructions for downloading and installing Composer, including all the files needed.

### Laravel/Lumen Project

We assume you have already prepared and installed your Laravel or Lumen project and have navigated to its folder with your console/terminal application.

## Installation

The package is installed in the traditional way through Composer. You can do this by executing the following command in the folder of your Laravel project:

```sh
composer require 'rolice/laravel-db-switch' # with globally installed composer
```

or, if no global Composer installation exists with your project but simply a `composer.phar` file:

```sh
php /path/to/composer.phar require 'rolice/laravel-db-switch' # with local composer.phar file
```

The above should add the package to your project directly. Alternatively, you can manually add `rolice/laravel-db-switch` to the `require` section of your `composer.json` file and then install it with:

```sh
composer install
```

or again, if no global Composer installation exists:

```sh
php /path/to/composer.phar install
```

**Note**: You will have to replace */path/to/composer.phar* in the examples above with the actual path where you have downloaded a copy of the official *composer.phar*. More information can be found in the previous section, [Prerequisites](#prerequisites).

After the package installation is done, we have to enable the service provider inside the application config.

**For Laravel**: Just open the `{your/project/folder}/config/app.php` file of your application (by default it should be located there). Add the service provider inside the `providers` section (array):

```php
Rolice\LaravelDbSwitch\DbSwitchServiceProvider::class,
```

Preferably under a comment like:

```php
/*
 * Package Service Providers...
 */
Rolice\LaravelDbSwitch\DbSwitchServiceProvider::class,
```

**For Lumen**: You should register the service provider inside `bootstrap/app.php` like:

```php
/*
|--------------------------------------------------------------------------
| Register Service Providers
|--------------------------------------------------------------------------
|
| Here we will register all of the application's service providers which
| are used to bind services into the container. Service providers are
| totally optional, so you are not required to uncomment this line.
|
*/

// ...
$app->register(Rolice\LaravelDbSwitch\DbSwitchServiceProvider::class);
// ...
```

**For Laravel**: Now you can register the facade in the section below named `aliases` (array), in the same file `config/app.php`, by adding the following line there:

```php
'DbSwitch' => Rolice\LaravelDbSwitch\Facades\DbSwitch::class,
```

**For Lumen**: You can enable facades and pass it along with:

```php
$app->withFacades(true, [
    Rolice\LaravelDbSwitch\Facades\DbSwitch::class => 'DbSwitch'
]);
```

...or you can directly enable it the same way, but with raw code; an example:

```php
/*
|--------------------------------------------------------------------------
| Register Facades
|--------------------------------------------------------------------------
|
| A config section for registering facades through class aliases.
|
*/

class_alias(\Rolice\LaravelDbSwitch\Facades\DbSwitch::class, 'DbSwitch');
```

Now the package should be available and running with your project.

## Usage

You can use the package service either through the facade `DbSwitch` or through the singleton instance inside the IoC service container of Laravel:

```php
// Usage through the facade - DbSwitch
DbSwitch::to('my-cool-db'); // The default connection
DbSwitch::connectionTo('my-cool-connection', 'my-cool-db'); // A specific connection database

// Usage through the Laravel Service Container (IoC)
app('db.switch')->to('my-cool-db'); // The default connection
app('db.switch')->connectionTo('my-cool-connection', 'my-cool-db'); // A specific connection database
```

That is the whole scope of this package. **Enjoy** switching your databases! :P
34.906475
163
0.712077
eng_Latn
0.996516