> Source: docs/config.md (LucianBuzzo/react-static, MIT)

# Configuration (`static.config.js`)
A `static.config.js` file is optional, but recommended at your project root to use react-static to its fullest potential. If present, it must `default export` an object optionally containing any of the following properties:
- [getRoutes](#getroutes)
- [route](#route)
- [getSiteData](#getsitedata)
- [siteRoot](#siteroot)
- [stagingSiteRoot](#stagingsiteroot)
- [basePath](#basepath)
- [stagingBasePath](#stagingbasepath)
- [devBasePath](#devbasepath)
- [assetsPath](#assetsPath)
- [extractCssChunks](#extractcsschunks)
- [inlineCss](#inlinecss)
- [Document](#document)
- [devServer](#devserver)
- [entry](#entry)
- [paths](#paths)
- [bundleAnalyzer](#bundleanalyzer)
- [outputFileRate](#outputfilerate)
- [prefetchRate](#prefetchrate)
- [disableDuplicateRoutesWarning](#disableDuplicateRoutesWarning)
- [disableRoutePrefixing](#disablerouteprefixing)
- [babelExcludes](#babelExcludes)
- [productionSourceMaps](#productionSourceMaps)
### `getRoutes`
An asynchronous function that should resolve an array of [**route**](#route) objects. You'll probably want to use this function to request any dynamic data or information that is needed to build all of the routes for your site. It is also passed an object containing a `dev` boolean indicating whether it's being run in a production build or not.
```javascript
// static.config.js
export default {
getRoutes: async ({ dev }) => [...routes],
}
```
**Awesome Tip: Changes made to `static.config.js` while the development server is running will automatically run `getRoutes` again and any changes to routes or routeData will be hot-reloaded instantly! Don't want to edit/resave your config file? Try using [`rebuildRoutes`](/docs/api.md/#rebuildRoutes)!**
### `route`
A route is an `object` that represents a unique location in your site and is the backbone of every React-Static site.
It supports the following properties:
- `path: String` - The **path** of the URL to match for this route, **excluding search parameters and hash fragments, relative to your `siteRoot + basePath` (if this is a child route, also relative to this route's parent path)**
- `template: String` - The path of the component to be used to render this route. (Relative to the root of your project)
- `getData: async Function(resolvedRoute, { dev }) => Object` - An async function that returns or resolves an object of any necessary data for this route to render.
- Arguments
- `resolvedRoute: Object` - This is the resolved route this function is handling.
- `flags: Object{}` - An object of flags and meta information about the build
- `dev: Boolean` - Indicates whether you are running a development or production build.
- `children: Array[Route]` - Routes can and should have nested routes when necessary. **Route paths are inherited as they are nested, so there is no need to repeat a path prefix in nested routes**.
- `redirect: URL` - Setting this to a URL will perform the equivalent of a 301 redirect (as much as is possible within a static site) using `http-equiv` meta tags, canonicals, etc. **This will force the page to render only the bare minimum to perform the redirect and nothing else**.
- Routes can also have other properties that may be used in plugins. Those properties will be listed in the plugin documentation.
Example:
```javascript
// static.config.js
export default {
getRoutes: async ({ dev }) => [
// A simple route
{
path: 'about',
template: 'src/containers/About',
},
// A route with data
{
path: 'portfolio',
template: 'src/containers/Portfolio',
getData: async () => ({
portfolio,
}),
},
// A route with data and dynamically generated child routes
{
path: 'blog',
template: 'src/containers/Blog',
getData: async () => ({
posts,
}),
children: posts.map(post => ({
path: `post/${post.slug}`,
template: 'src/containers/BlogPost',
getData: async () => ({
post,
}),
})),
},
// A 404 component
{
path: '404',
template: 'src/containers/NotFound',
},
],
}
```
### `getSiteData`
`getSiteData` is very similar to a route's `getData` function, but its result is made available to the entire site via the [`useSiteData`](api.md#usesitedata) hook, `SiteData` component and the `getSiteData` HOC. Any data you return here, although loaded once per session, will be embedded in every page that is exported on your site. So tread lightly ;)
Example:
```javascript
// static.config.js
export default {
getSiteData: async ({ dev }) => ({
title: 'My Awesome Website',
lastBuilt: Date.now(),
}),
}
```
### `siteRoot`
Your `siteRoot` in the format of `protocol://domain.com` is highly recommended and is necessary for many things related to SEO to function for your site. So far, this includes:
- Automatically generating a `sitemap.xml` on export
- Forcing absolute URLs in statically rendered links.
Make sure that you include `https` if you serve your site with it (which we highly recommend). **Any trailing slashes including the pathname will be removed automatically**. If you need to set a base path for your site (eg. if you're using github pages), you'll want to use the `basePath` option.
Example:
```javascript
// static.config.js
export default {
siteRoot: 'https://mysite.com',
}
```
### `stagingSiteRoot`
Works exactly like `siteRoot`, but only when building with the `--staging` build flag.
### `basePath`
Your `basePath` in the format of `some/route` is necessary if you intend on hosting your app from a specific route on your domain (e.g. when using GitHub Pages, or for example `https://mysite.com/blog`, where `blog` would be the `basePath`).
**All leading and trailing slashes are removed automatically**.
Example:
```javascript
// static.config.js
export default {
basePath: 'blog',
}
```
### `stagingBasePath`
Works exactly like `basePath`, but only when building with the `--staging` build flag.
### `devBasePath`
Works exactly like `basePath`, but only when running the dev server.
### `assetsPath`
Your `assetsPath` determines where your bundled JS and CSS will be loaded from. This is helpful if you want to host your assets in an external location such as a CDN.
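For example, a minimal configuration that serves assets from a CDN might look like the following (the CDN URL is just a placeholder):
```javascript
// static.config.js
export default {
  assetsPath: 'https://cdn.example.com/assets', // hypothetical CDN location
}
```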
### `extractCssChunks`
`extractCssChunks` replaces default `ExtractTextPlugin` with `ExtractCssChunks`. It enables automatic CSS splitting into separate files by routes as well as dynamic components (using `react-universal-component`). More information about the [plugin](https://github.com/faceyspacey/extract-css-chunks-webpack-plugin) and [why it is useful as a part of CSS delivery optimisation](https://github.com/faceyspacey/extract-css-chunks-webpack-plugin#what-about-glamorous-styled-components-styled-jsx-aphrodite-etc). Defaults to `false`.
### `inlineCss`
By using the `extractCssChunks` option and putting code splitting at appropriate places, your page-related CSS file can be minimal. This option allows you to inline your page-related CSS in order to speed up your application by reducing the number of requests required for a first paint. Defaults to `false`.
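A small sketch showing both options enabled together (adjust to your own setup):
```javascript
// static.config.js
export default {
  extractCssChunks: true, // split CSS per route / dynamic component
  inlineCss: true, // inline each page's CSS into its exported HTML
}
```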
### `Document`
It's never been easier to customize the root document of your website! `Document` is an optional (and again, recommended) react component responsible for rendering the HTML shell of your website.
Things you may want to place here:
- Site-wide custom `head` and/or `meta` tags
- Site-wide analytics scripts
- Site-wide stylesheets
Props
- `Html: ReactComponent` - **Required** - An enhanced version of the default `html` tag.
- `Head: ReactComponent` - **Required** - An enhanced version of the default `head` tag.
- `Body: ReactComponent` - **Required** - An enhanced version of the default `body` tag.
- `children: ReactComponent` - **Required** - The main content of your site, including layout, routes, etc.
- `state: Object` - The current state of the export.
- `routeInfo: Object` - All of the current route's information, including any `routeData`.
- `siteData: Object` - Any data optionally resolved via the `getSiteData` function in this config file.
- `renderMeta: Object` - Any data optionally set via hooks or transformers during the render process.
- And much more!
```javascript
// static.config.js
export default {
Document: ({
Html,
Head,
Body,
children,
state: { siteData, renderMeta },
}) => (
<Html lang="en-US">
<Head>
<meta charSet="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
</Head>
<Body>{children}</Body>
</Html>
),
}
```
Since JSX is now being used in this static.config.js file, you need to import React at the top of the file; add this: `import React from 'react'`
### `devServer`
An `Object` of options to be passed to the underlying `webpack-dev-server` instance used for development.
Example:
```javascript
// static.config.js
export default {
// An optional object for customizing the options for the
devServer: {
port: 3000,
host: '127.0.0.1',
},
}
```
### `renderToElement`
**Warning:** This option has been deprecated. Please use the [Node API hook - beforeRenderToElement](https://github.com/Vinnl/react-static/tree/patch-3/docs/plugins#beforerendertoelement-function) instead.
### `renderToHtml`
**Warning:** This option will be removed in a future version. Please use the [Node API hook - beforeRenderToHtml](https://github.com/Vinnl/react-static/tree/patch-3/docs/plugins#beforerendertohtml-function) instead
### `entry`
The name of the entry file as a string, relative to `paths.src`. This defaults to:
```javascript
// static.config.js
export default {
entry: 'index.js',
}
```
### `paths`
An `object` of internal directories used by react-static that can be customized. Each path is relative to your project root and defaults to:
```javascript
// static.config.js
export default {
paths: {
root: process.cwd(), // The root of your project. Don't change this unless you know what you're doing.
src: 'src', // The source directory. Must include an index.js entry file.
temp: 'tmp', // Temp output directory for build files not to be published.
dist: 'dist', // The production output directory.
devDist: 'tmp/dev-server', // The development scratch directory.
public: 'public', // The public directory (files copied to dist during build)
assets: 'dist', // The output directory for bundled JS and CSS
buildArtifacts: 'artifacts', // The output directory for generated (internal) resources
},
}
```
### `outputFileRate`
An optional `Int`. The maximum number of files that can be concurrently written to disk during the build process.
```javascript
// static.config.js
export default {
outputFileRate: 100,
}
```
### `prefetchRate`
An optional `Int`. The maximum number of inflight requests for preloading route data on the client.
```javascript
// static.config.js
export default {
prefetchRate: 10,
}
```
### `disableDuplicateRoutesWarning`
An optional `Boolean`. Set to `true` to disable warnings of duplicate routes during builds.
```javascript
// static.config.js
export default {
disableDuplicateRoutesWarning: true,
}
```
### `disableRoutePrefixing`
An optional `Boolean`. Set to `true` to disable prefixing link href values and the browser history with `config.basePath`.
Useful if you are using a variable basePath such as /country/language/basePath.
```javascript
// static.config.js
export default {
disableRoutePrefixing: true,
}
```
### `maxThreads`
An optional `Number` of maximum threads to use when exporting your site's pages. By default this is set to `Infinity` to use all available threads on the machine React Static is running on.
NOTE: This only affects the processes that render your pages to HTML files, not the initial bundling process.
```javascript
// static.config.js
export default {
maxThreads: 1, // Will only use one thread to export your site
}
```
### `minLoadTime`
An optional `Number` of milliseconds to show the loading spinner when templates, siteData or routeData are not immediately available. If you are preloading aggressively, you shouldn't see a loader at all, but if a loader is shown, it's a good user experience to make it as un-flashy as possible.
```javascript
// static.config.js
export default {
minLoadTime: 200,
}
```
### `disablePreload`
Set this boolean to `true` to disable all preloading. This is mostly meant for debugging at this point, but the internal mechanics could soon be converted into a condition to either preload or not based on the client (mobile, slow-connection, etc)
```javascript
// static.config.js
export default {
disablePreload: true,
}
```
### `babelExcludes`
We are running Babel separately for your own sources and externals. The Babel configuration for your own sources can be manipulated the normal way. The one for `node_modules` can not, since it's a bit special. We try to compile them with a bare minimum, but sometimes some modules give you trouble (e.g. [mapbox-gl](https://github.com/mapbox/mapbox-gl-js/issues/3422)).
This option gives you the ability to exclude some modules from babelifying.
See https://webpack.js.org/configuration/module/#condition for more details. To exclude e.g. `mapbox-gl`, simply pass the following:
```javascript
// static.config.js
export default {
babelExcludes: [/mapbox-gl/],
}
```
### `productionSourceMaps`
Set this flag to `true` to include source maps in production.
- Defaults to `false`
```javascript
// static.config.js
export default {
productionSourceMaps: true,
}
```
---
## Plugin Api
React Static has tons of other customization possibilities available through the Plugin system that are not possible through the configuration file. Some of these include:
- Webpack customizations
- Rendering pipeline customizations and transformations for React components, elements, the Document wrapper, etc.
- Head tag injection
Every React Static project can utilize the plugin API locally without needing to create a plugin by creating either `node.api.js` or `browser.api.js` files in the root of your project. See the [Plugin Documentation](https://github.com/nozzle/react-static/tree/master/docs/plugins) for more information!
> Source: contents/blog/clickhouse-materialized-columns.md (PostHog, MIT)

---
date: 2021-10-26
title: How to speed up ClickHouse queries using materialized columns
rootPage: /blog
sidebar: Blog
showTitle: true
hideAnchor: true
categories: ["Engineering"]
author: ["karl-aksel-puulmann"]
featuredImage: ../images/blog/blog-generic-2.png
featuredImageType: full
---
ClickHouse supports speeding up queries using materialized columns to create new columns on the fly from existing data. In this post, I’ll walk through a query optimization example that's well-suited to this rarely-used feature.
Consider the following schema:
```sql
CREATE TABLE events (
uuid UUID,
event VARCHAR,
timestamp DateTime64(6, 'UTC'),
properties_json VARCHAR,
)
ENGINE = MergeTree()
ORDER BY (toDate(timestamp), event, uuid)
PARTITION BY toYYYYMM(timestamp)
```
Each event has an ID, event type, timestamp, and a JSON representation of event properties. The properties can include the current URL and any other user-defined properties that describe the event (e.g. NPS survey results, person properties, timing data, etc.).
This table can be used to store a lot of analytics data and is similar to what we use at PostHog.
If we wanted to query login page pageviews in August, the query would look like this:
```sql
SELECT count(*)
FROM events
WHERE event = '$pageview'
AND JSONExtractString(properties_json, '$current_url') = 'https://app.posthog.com/login'
AND timestamp >= '2021-08-01'
AND timestamp < '2021-09-01'
```
This query takes a while to complete on a large test dataset, but without the URL filter the query is almost instant. Adding even more filters just slows down the query. Let's dig in to understand why.
## Looking at flamegraphs
ClickHouse has great tools for introspecting queries. Looking at `system.query_log` we can see that the query:
- Took 3,433 ms
- Read 79.17 GiB from disk
To dig even deeper, we can use [`clickhouse-flamegraph`](https://github.com/Slach/clickhouse-flamegraph) to peek into what the CPU did during query execution.
<div
class="relative mt-2 mb-4"
style="width: 100vw; left: 50%; right: 50%; margin-left: -50vw; margin-right: -50vw; display: flex; justify-content: center"
>
<object
data={'/images/flamegraph.svg'}
type="image/svg+xml"
style="max-width: 1600px"
/>
</div>
From this we can see that the ClickHouse server CPU is spending most of its time parsing JSON.
The typical solution would be to extract `$current_url` to a separate column. This would get rid of the JSON parsing and reduce the amount of data read from disk.
However, in this particular case it wouldn’t work because:
1. The data is passed from users - meaning we’d end up with millions (!) of unique columns
2. This would complicate live data ingestion a lot, introducing new and exciting race conditions
## Enter materialized columns
Turns out, those are exactly the problems materialized columns can help solve.
```sql
ALTER TABLE events
ADD COLUMN mat_$current_url
VARCHAR MATERIALIZED JSONExtractString(properties_json, '$current_url')
```
The above query creates a new column that is automatically filled for incoming data, creating a new file on disk. The data is automatically filled during `INSERT` statements, so data ingestion doesn't need to change.
The trade-off is more data being stored on disk. In practice, ClickHouse compresses data well, making this a worthwhile trade-off. On our test dataset, `mat_$current_url` is only 1.5% the size of `properties_json` on disk with a 10x compression ratio. Other properties which have lower cardinality can achieve even better compression (we’ve seen up to 100x)!
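If you want to verify that trade-off on your own tables, per-column sizes can be read from ClickHouse's system tables. A query along these lines (column names as in current ClickHouse releases) compares compressed and uncompressed bytes:
```sql
SELECT
    name,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed
FROM system.columns
WHERE table = 'events' AND name IN ('properties_json', 'mat_$current_url')
GROUP BY name
```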
Just creating the column is not enough though, since old data queries would still resort to using a `JSONExtract`. For this reason, you want to backfill data. The easiest way currently is to run the [OPTIMIZE](https://clickhouse.tech/docs/en/sql-reference/statements/optimize/) command:
```sql
OPTIMIZE TABLE events FINAL
```
After backfilling, running the updated query speeds things up significantly:
```sql
SELECT count(*)
FROM events
WHERE event = '$pageview'
AND mat_$current_url = 'https://app.posthog.com/login'
AND timestamp >= '2021-08-01'
AND timestamp < '2021-09-01'
```
Looking at `system.query_log`, the new query:
- Took 980ms (**71%/3.4x improvement**)
- Read 14.36 GiB from disk (**81%/5x improvement**)
The wins are even more magnified if more than one property filter is used at a time.
## Backfilling efficiently
Using `OPTIMIZE TABLE` after adding columns is often not a good idea, since it will involve a lot of I/O as the whole table gets rewritten.
As of writing, there's a feature request on [Github](https://github.com/ClickHouse/ClickHouse/issues/27730) for adding specific commands for materializing specific columns on ClickHouse data parts.
Here's how you can use `DEFAULT` type columns to backfill more efficiently:
```sql
ALTER TABLE events
ALTER COLUMN mat_$current_url
VARCHAR DEFAULT JSONExtractString(properties_json, '$current_url');
ALTER TABLE events UPDATE mat_$current_url = mat_$current_url WHERE timestamp >= '2021-08-01';
-- Wait for mutations to finish before running this
ALTER TABLE events
ALTER COLUMN mat_$current_url
VARCHAR MATERIALIZED JSONExtractString(properties_json, '$current_url');
```
This will compute and store only the `mat_$current_url` in our time range and is much more efficient than `OPTIMIZE TABLE`.
Be aware though that this will:
1. Break your `INSERT` statements if you don't specify column names explicitly
2. Alter the behavior of `SELECT *` queries
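The backfill runs as a mutation, so as the comment in the snippet says, you need to wait for it to finish before switching the column back to `MATERIALIZED`. One way to watch progress, assuming the default system tables, is:
```sql
SELECT mutation_id, command, parts_to_do, is_done
FROM system.mutations
WHERE table = 'events' AND is_done = 0
```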
## Usage at PostHog
PostHog as an analytics tool allows users to slice and dice their data in many ways across huge time ranges and datasets. This also means that performance is key when investigating things - but also that we currently do nearly no preaggregation.
Rather than materialize all columns, we built a solution that looks at recent slow queries using `system.query_log`, determines which properties need materializing from there, and backfills the data on a weekend. This works well because not every query needs optimizing and a relatively small subset of properties make up most of what’s being filtered on by our users.
You can find the code for this [here](https://github.com/PostHog/posthog/blob/c23704b3909ae8ebb827e6a43453e32b3d3487bd/ee/clickhouse/materialized_columns/analyze.py#L42-L119) and [here](https://github.com/PostHog/posthog/blob/c23704b3909ae8ebb827e6a43453e32b3d3487bd/ee/clickhouse/materialized_columns/columns.py#L37-L130).
After materializing our top 100 properties and updating our queries, we analyzed slow queries (>3 seconds long). **The average improvement in our query times was 55%, with 99th percentile improvement being 25x.**
As a product, we're only scratching the surface of what ClickHouse can do to power product analytics. If you're interested in helping us with these kinds of problems, [we're hiring](https://posthog.com/careers)!
_Keen to try out PostHog's blazingly fast analytics suite? [Deploy a free, self-hosted version](https://posthog.com/docs/self-host) today or try out our [hosted Cloud option](https://posthog.com/pricing)._
> Source: _watches/M20210331_052228_TLP_1.md (Meteoros-Floripa, MIT)

---
layout: watch
title: TLP1 - 31/03/2021 - M20210331_052228_TLP_1T.jpg
date: 2021-03-31 05:22:28
permalink: /2021/03/31/watch/M20210331_052228_TLP_1
capture: TLP1/2021/202103/20210330/M20210331_052228_TLP_1T.jpg
---
> Source: README.md (humoro/WebServerOfCalendar, Apache-2.0)

#### This is the webserver of my personal project calendar app
> Source: _events/a-conversation-with-congresswoman-lauren-underwood.md (CSIS-iLab/health-commission, MIT)

---
content_type: event
keywords:
- Congress
- Public Event
- COVID-19
title: A Conversation with Congresswoman Lauren Underwood
date: 2021-05-07 04:00:00 +0000
excerpt: Please join the CSIS Commission on Strengthening America’s Health Security
on Friday, May 7 from 10:00 a.m. – 11:00 a.m. EDT for a timely conversation with
Congresswoman Lauren Underwood (D-IL-14).
series: ''
themes:
- _themes/us-leadership-in-the-covid-19-era.md
image: https://res.cloudinary.com/csisideaslab/image/upload/v1620680140/health-commission/GettyImages-1088002956_cjjod5.jpg
image_caption: Getty Images
image_credit: Getty Images
documents: []
---
<div class="video-wrapper post-feature-video"> <iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" title="" src="https://www.youtube.com/embed/3ExmZusN1Es"></iframe></div>
On **Friday, May 7 from 10:00 a.m. – 11:00 a.m. EDT,** the [CSIS Commission on Strengthening America’s Health Security](https://healthsecurity.csis.org/) hosted a timely conversation with **Congresswoman Lauren Underwood** (D-IL-14). The _American Rescue Plan Act_ has ushered in a new phase in the national Covid-19 response. A majority of the adult population in the United States has received at least one dose of the vaccine, resulting in declining daily death counts. Control of the virus, however, remains elusive and many communities continue to experience high caseloads. Abroad, the United States faces a vaccine marketplace in disarray. Low- and middle-income countries struggle to secure doses and remain susceptible to uncontrolled transmission. A rapid, equitable and global recovery is at risk.
What key lessons have the U.S. Congress and American public learned from the pandemic? Does the _American Rescue Plan Act_ set the stage for sustainable, increased investment in federal and state public health infrastructure? What additional steps should be taken to address systemic public health disparities across key populations? And what role should Congress play in directing U.S. engagement on global health security, including global vaccination campaigns, genomic sequencing, and disease surveillance?
**_J. Stephen Morrison_**_,_ Senior Vice President and Director of the CSIS Global Health Policy Center, discussed these pressing issues with **_Congresswoman Lauren Underwood_**_,_ Member of the CSIS Commission on Strengthening America’s Health Security, House Committee on Appropriations, House Committee on Veterans’ Affairs, and Co-Chair of the Black Maternal Health Caucus.
_This event is made possible by the generous support of the Bill & Melinda Gates Foundation._

> Source: README.md (raphaeleadjerad/dvf, MIT)

dvf
==============================
First step is to load DVF in a postgresql db, for this check `src/dvf_psql.py`.
Project Organization
```
├── data
│ ├── raw <- The original, immutable data dump.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ ├── output <- Output from models
│ ├── schemas <- Raw and processed data schemas, based on Table Schema standard
|
├── documentation <- Documentation for the project
|
├── notebooks <- Notebooks Jupyter (only include jupytext --to .py version of notebooks)
|
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
|
├── setup.py <- makes project pip installable (pip install -e .) so src can be imported
├── src <- Source code
│ ├── __init__.py <- Makes src a Python module
|
├── tests <- Tests for our projet
|
├── LICENCE.txt
├── README.md
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
```
<p><small>Project based on the <a target="_blank" href="https://drivendata.github.io/cookiecutter-data-science/">cookiecutter data science project template</a>. #cookiecutterdatascience</small></p>

> Source: packages/mux-audio/README.md (udryan10/elements, MIT)

# Introduction
`<mux-audio></mux-audio>` is a Mux-flavored HTML5 audio element.
If you are familiar with using `<audio />` + [Hls.js](https://github.com/video-dev/hls.js) in your application, then you'll feel right at home with this web component.
# Installation
If you're using `npm` or `yarn`, install that way:
## Package manager
```
yarn add @mux-elements/mux-audio
```
or
```
npm i @mux-elements/mux-audio
```
Then, import the library into your application with either `import` or `require`:
```js
import "@mux-elements/mux-audio";
```
or
```js
require("@mux-elements/mux-audio");
```
## CDN option
Alternatively, use the CDN hosted version of this package:
```html
<script
type="module"
src="https://unpkg.com/@mux-elements/mux-audio"
></script>
```
## Usage
`<mux-audio>` has all the same features, benefits and options as `<mux-video>`. View the documentation for [`<mux-video>`](../mux-video) for details.
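As a quick illustration, and assuming `<mux-audio>` accepts the same `playback-id`, `metadata-*` and `controls` attributes that `<mux-video>` documents, embedding a player could look like this:
```html
<mux-audio
  playback-id="YOUR_PLAYBACK_ID"
  metadata-video-title="Example audio title"
  controls
></mux-audio>
```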
### Advanced: Use with React+TypeScript
Even though we don't (yet!) have our own `React` version of `<mux-audio>`, you can still use it in your `React` app. However, if you're also using TypeScript, make sure you add the following TypeScript definitions, since custom elements (such as `<mux-audio>`) will not be recognized as [Intrinsic Elements](https://www.typescriptlang.org/docs/handbook/jsx.html#intrinsic-elements):
```typescript
interface MuxAudioHTMLAttributes<T> extends React.AudioHTMLAttributes<T> {
debug?: boolean;
autoplay?: boolean;
}
declare global {
namespace JSX {
interface IntrinsicElements {
"mux-audio": React.DetailedHTMLProps<
MuxAudioHTMLAttributes<HTMLAudioElement>,
HTMLAudioElement
>;
}
}
}
```
> Source: docs/extra-apps.md (xorde-yocto/meta-raspberrypi, MIT)

# Extra apps
## omxplayer
omxplayer depends on libav, which has a commercial license. So in order to be able to compile omxplayer you will need to accept the commercial license flag in your local.conf:

    LICENSE_FLAGS_ACCEPTED = "commercial"
> Source: sccm/protect/deploy-use/endpoint-definitions-microsoft-updates.md (SCCMDocs, CC-BY-4.0/MIT)

---
title: Endpoint Protection malware definitions from Microsoft Updates
titleSuffix: Configuration Manager
description: Learn how to enable Endpoint Protection malware definitions to download from Microsoft Updates for Configuration Manager.
ms.date: 02/14/2017
ms.prod: configuration-manager
ms.technology: configmgr-protect
ms.topic: conceptual
ms.assetid: ab7626ae-d4bf-4ca6-ab25-c61f96800a02
author: aczechowski
manager: dougeby
ms.author: aaroncz
ms.openlocfilehash: 0d8037f2258f97e2782d475598ca62d2f605e5cd
ms.sourcegitcommit: 0b0c2735c4ed822731ae069b4cc1380e89e78933
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 05/03/2018
ms.locfileid: "32346479"
---
# <a name="enable-endpoint-protection-malware-definitions-to-download-from-microsoft-updates-for-configuration-manager"></a>Activer le téléchargement des définitions de programmes malveillants Endpoint Protection à partir de Microsoft Updates pour Configuration Manager
*S’applique à : System Center Configuration Manager (Current Branch)*
Quand vous choisissez de télécharger des mises à jour de définition à partir de Microsoft Update, les clients vérifient le site Microsoft Update à l’intervalle défini dans la section **Mises à jour de définitions** de la boîte de dialogue Stratégie de logiciel anti-programme malveillant.
Cette méthode peut être utile quand le client ne dispose pas de connectivité au site Configuration Manager ou que vous voulez permettre aux utilisateurs de lancer des mises à jour de définitions.
> [!IMPORTANT]
> Les clients doivent avoir accès à Microsoft Update sur Internet pour pouvoir utiliser cette méthode de téléchargement des mises à jour de définitions.
## <a name="using-the-microsoft-malware-protection-center-to-download-definitions"></a>Utilisation du Centre de protection Microsoft contre les programmes malveillants pour télécharger des définitions
Vous pouvez configurer les clients pour télécharger les mises à jour de définitions à partir du Centre de protection Microsoft contre les programmes malveillants. Cette option est utilisée par les clients Endpoint Protection pour télécharger les mises à jour de définitions s’ils n’ont pas pu les télécharger à partir d’une autre source. Cette méthode de mise à jour peut être utile si un problème détecté au niveau de votre infrastructure Configuration Manager empêche la remise des mises à jour.
> [!IMPORTANT]
> Les clients doivent avoir accès à Microsoft Update sur Internet pour pouvoir utiliser cette méthode de téléchargement des mises à jour de définition.
> [!div class="button"]
[Étape suivante >](endpoint-antimalware-policies.md)
> [!div class="button"]
[Retour >](endpoint-configure-alerts.md)
> Source: README.md (Goyatuzo/LolCalc, MIT)

League of Legends Damage Calculator
===================================
The user can choose the champion dealing the damage, and the champion receiving the damage. From there, the user can select the level they wish to be in, and also drag and drop the items that are owned by the champion. From there, the application will approximate the damage values to be expected in a second.
The application retrieves information directly from the API hosted by Riot Games, and stores them on the local system, so even if the API crashes due to traffic, this application will still run.
The motivation for this idea came from the fact that the only real way to determine "ideal builds" for each champion came from either experience, or having another friend be a meatshield for you and using a stopwatch to see how long it took for someone to die. The first can only be gained through a long time of playing, and even then it's more of an estimate rather than an accurate value. The second takes at least 20 minutes to reach the stage for initial trials, and that is per champion pair. If a different pair wanted to try out the same test, they would have to use another 20 minutes just to start testing. Something so simple should be much easier to do, since everything is based on mathematics. This is what I attempted to accomplish.
///////////////////////////////////////////////////////////////////////////////////////
Stopping work on this because there are simply too many variables to consider when trying to calculate the damage models in League of Legends. I may come back to this in the future with a simplified version of the original idea.
Using it on your own server
---------------------------
Be sure to put your own API key in the system environment variable.
> Source: README.md (imbriaco/convox-app, Apache-2.0)

# convox/app
<a href="https://travis-ci.org/convox/app">
<img align="right" src="https://travis-ci.org/convox/app.svg?branch=master">
</a>
Create a CloudFormation template from an app manifest.
This is a guide to developing the convox/app project. For detailed
installation and usage instructions, see [http://docs.convox.com/](http://docs.convox.com/).
## Development
```bash
$ go get github.com/convox/app
$ cd $GOPATH/src/github.com/convox/app
$ make test
$ make build
$ cat fixtures/web_postgis.yml | docker run -i convox/app
{
"AWSTemplateFormatVersion": "2010-09-09",
...
}
```
## Contributing
* Open a [GitHub Issue](https://github.com/convox/app/issues/new) for bugs and feature requests
* Initiate a [GitHub Pull Request](https://help.github.com/articles/using-pull-requests/) for patches
## See Also
* [convox/app](https://github.com/convox/app)
* [convox/build](https://github.com/convox/build)
* [convox/cli](https://github.com/convox/cli)
* [convox/kernel](https://github.com/convox/kernel)
## License
Apache 2.0 © 2015 Convox, Inc.
> Source: docs/API-Reference/public/objects/vn.md (mooncell07/Azaka, MIT)

::: azaka.objects.VN
::: azaka.objects.ImageFlagging
::: azaka.objects.vn.Anime
::: azaka.objects.vn.VNStaff
::: azaka.objects.vn.Screens
::: azaka.objects.vn.VNRelation
::: azaka.objects.vn.VNLinks
> Source: README.md (etfovac/lrf-sp, MIT)

# Laser Range Finder - Serial COM Port
[](https://github.com/etfovac/lrf-sp/blob/master/LICENSE) [](https://zenodo.org/badge/latestdoi/286789983)
A simple app for LRF (Laser Range Finder) control. Based on Serial Port and Windows Forms GUI
### Keywords:
> LRF, Laser Range Finder
> Serial Port, COM
> Windows.Forms GUI (C# .NET)
<img src="./graphics/SP_COM_LRF_start.png" alt="SP_COM_LRF_start" width="300" height="440"> <img src="./graphics/SP_COM_LRF_open.png" alt="SP_COM_LRF_open" width="300" height="440"> <img src="./graphics/SP_COM_LRF_status.png" alt="SP_COM_LRF_status" width="300" height="440"> <img src="./graphics/SP_COM_LRF_etc.png" alt="SP_COM_LRF_etc" width="300" height="440">
### References
<a href="https://blogs.msmvps.com/coad/2005/03/23/serialport-rs-232-serial-com-port-in-c-net/">SerialPort (RS-232 Serial COM Port) in C# .NET</a>
Documentation for <a href="https://docs.microsoft.com/en-us/dotnet/api/system.io.ports.serialport">SerialPort Class</a> from Assembly System.IO.Ports.dll
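For orientation, a minimal .NET serial exchange with the LRF could look roughly like the sketch below; the port name, baud rate and the "MEASURE" command are placeholders, not values taken from this project:
```csharp
using System;
using System.IO.Ports;

class LrfDemo
{
    static void Main()
    {
        // Assumed settings; use the values from the LRF module's datasheet.
        using var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One)
        {
            ReadTimeout = 1000,
            NewLine = "\r\n"
        };
        port.Open();
        port.WriteLine("MEASURE");           // hypothetical trigger command
        Console.WriteLine(port.ReadLine());  // raw reply from the device
    }
}
```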
> Source: articles/analysis-services/tutorials/aas-lesson-7-create-key-performance-indicators.md (azure-docs, CC-BY-4.0/MIT)

---
title: "Azure Analysis Services tutorial lesson 7: Create Key Performance Indicators | Microsoft Docs"
description: Describes how to create Key Performance Indicators in the Azure Analysis Services tutorial project.
author: minewiskan
manager: kfile
ms.service: analysis-services
ms.topic: conceptual
ms.date: 04/12/2018
ms.author: owend
ms.reviewer: minewiskan
---
# Create Key Performance Indicators
In this lesson, you create Key Performance Indicators (KPIs). KPIs are used to gauge performance of a value defined by a *Base* measure, against a *Target* value also defined by a measure, or by an absolute value. In reporting client applications, KPIs can provide business professionals a quick and easy way to understand a summary of business success or to identify trends. To learn more, see [KPIs](https://docs.microsoft.com/sql/analysis-services/tabular-models/kpis-ssas-tabular)
Estimated time to complete this lesson: **15 minutes**
## Prerequisites
This topic is part of a tabular modeling tutorial, which should be completed in order. Before performing the tasks in this lesson, you should have completed the previous lesson: [Lesson 6: Create measures](../tutorials/aas-lesson-6-create-measures.md).
## Create Key Performance Indicators
#### To create an InternetCurrentQuarterSalesPerformance KPI
1. In the model designer, click the **FactInternetSales** table.
2. In the measure grid, click an empty cell.
3. In the formula bar, above the table, type the following formula:
```
InternetCurrentQuarterSalesPerformance :=DIVIDE([InternetCurrentQuarterSales]/[InternetPreviousQuarterSalesProportionToQTD],BLANK())
```
This measure serves as the Base measure for the KPI.
4. In the measure grid, right-click **InternetCurrentQuarterSalesPerformance** > **Create KPI**.
5. In the Key Performance Indicator (KPI) dialog box, in **Target** select **Absolute Value**, and then type **1.1**.
6. In the left (low) slider field, type **1**, and then in the right (high) slider field, type **1.07**.
7. In **Select Icon Style**, select the diamond (red), triangle (yellow), circle (green) icon type.

> [!TIP]
> Notice the expandable **Descriptions** label below the available icon styles. Use descriptions for the various KPI elements to make them more identifiable in client applications.
8. Click **OK** to complete the KPI.
In the measure grid, notice the icon next to the **InternetCurrentQuarterSalesPerformance** measure. This icon indicates that this measure serves as a Base value for a KPI.
#### To create an InternetCurrentQuarterMarginPerformance KPI
1. In the measure grid for the **FactInternetSales** table, click an empty cell.
2. In the formula bar, above the table, type the following formula:
```
InternetCurrentQuarterMarginPerformance :=IF([InternetPreviousQuarterMarginProportionToQTD]<>0,([InternetCurrentQuarterMargin]-[InternetPreviousQuarterMarginProportionToQTD])/[InternetPreviousQuarterMarginProportionToQTD],BLANK())
```
3. Right-click **InternetCurrentQuarterMarginPerformance** > **Create KPI**.
4. In the Key Performance Indicator (KPI) dialog box, in **Target** select **Absolute Value**, and then type **1.25**.
5. In the left (low) slider field, slide until the field displays **0.8**, and then slide the right (high) slider field, until the field displays **1.03**.
6. In **Select Icon Style**, select the diamond (red), triangle (yellow), circle (green) icon type, and then click **OK**.
## What's next?
[Lesson 8: Create perspectives](../tutorials/aas-lesson-8-create-perspectives.md).
> Source: README.md (alexbergamaschi/simstage_group3, MIT)

# Reactive Mapping using ROS
## Build package
## ROS Nodes
## Mapping process
## Licence
> Source: WikiFiles/docs/Using the EnumGen Tool.md (veharshv/scsmperftestharness, MS-PL)

The EnumGen tool will generate an enumeration hierarchy based on any root enumeration, n levels deep and n number of enums per level. This tool is currently intended just to generate test data, given that the enumeration value display names will be things like 'Enum Value 1.1', 'Enum Value 1.1.1', etc.
Just run the tool and then specify the parameters:
* **SCSM Server Name:** specify the server name to connect to or if you are running it local on the SCSM management server you can use localhost.
* **New MP Name:** specify the name of the MP. This will be used for the MP name, friendly name, display name, and file name. A .xml file will be created. If you want to create a .mp file you will need to use a tool like fastseal.exe.
* **Base Enum Name:** This is the internal name of the base enumeration that you want to create all enumerations from. By default this is set to IncidentClassificationEnum which is the base enum for System.WorkItem.Incident.Classification. You can look up an enum's internal name using SMLets.
* **Number of levels to create:** This is the number of levels of hierarchy to create.
* **Number of enums to create per level:** This is the number of enumeration values to create at each level in the hierarchy.
* **Folder to write the MP file to:** This is the location to write the .xml file to when done.
Note: the tool will tell you how many enumerations will be created and prompt you for confirmation before committing the changes to disk. You can choose Yes|No at that point.
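As mentioned in the parameter list above, the internal name of a base enumeration can be looked up with SMLets. A sketch, assuming the SMLets module is installed and can reach the SCSM management server:
```powershell
Import-Module SMLets
# List enumerations whose display name matches, so the internal Name can be copied
Get-SCSMEnumeration | Where-Object { $_.DisplayName -like '*Incident Classification*' } |
    Select-Object Name, DisplayName
```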
> Source: api/Excel.ChartGroup.RadarAxisLabels.md (VBA-Docs, CC-BY-4.0/MIT)

---
title: ChartGroup.RadarAxisLabels Property (Excel)
keywords: vbaxl10.chm568087
f1_keywords:
- vbaxl10.chm568087
ms.prod: excel
api_name:
- Excel.ChartGroup.RadarAxisLabels
ms.assetid: 36bb1e30-99b0-e795-2730-145421a2a342
ms.date: 06/08/2017
---
# ChartGroup.RadarAxisLabels Property (Excel)
Returns a **[TickLabels](Excel.TickLabels(object).md)** object that represents the radar axis labels for the specified chart group. Read-only.
## Syntax
_expression_. `RadarAxisLabels`
_expression_ A variable that represents a [ChartGroup](Excel.ChartGroup(Graph object).md) object.
## Example
This example turns on radar axis labels for chart group one in Chart1 and then sets the color for the labels. The example should be run on a radar chart.
```vb
With Charts("Chart1").ChartGroups(1)
.HasRadarAxisLabels = True
.RadarAxisLabels.Font.ColorIndex = 3
End With
```
## See also
[ChartGroup Object](Excel.ChartGroup(object).md)
> Source: README.md (Iavianasz/dekhoartsproject, MIT)

# dekhoartsproject
Web platform development
> Source: articles/log-analytics/query-language/app-expression.md (azure-docs, CC-BY-4.0/MIT)

---
title: The app() expression in Azure Log Analytics queries | Microsoft Docs
description: The app expression is used in a Log Analytics query to retrieve data from a specific Application Insights app in the same resource group, another resource group, or another subscription.
services: log-analytics
documentationcenter: ''
author: bwren
manager: carmonm
editor: ''
ms.assetid: ''
ms.service: log-analytics
ms.workload: na
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 09/10/2018
ms.author: bwren
ms.component: na
ms.openlocfilehash: d91e148320c4c59bb888975499aa1de16ffbf134
ms.sourcegitcommit: 32d218f5bd74f1cd106f4248115985df631d0a8c
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 09/24/2018
ms.locfileid: "46955067"
---
# <a name="app-expression-in-log-analytics-query"></a>Выражение app() в запросах Log Analytics
Выражение `app` используется в запросах Log Analytics для получения данных из определенного приложения Application Insights, находящегося в той же или другой группе ресурсов либо в другой подписке. Его удобно использовать для добавления данных приложения в запрос Log Analytics и запрашивания данных из нескольких приложений с помощью запроса Application Insights.
## <a name="syntax"></a>Синтаксис
`app(`*Идентификатор*`)`
## <a name="arguments"></a>Аргументы
- *Идентификатор* идентифицирует приложение с помощью одного форматов, представленных в таблице ниже.
| Идентификатор | ОПИСАНИЕ | Пример
|:---|:---|:---|
| Имя ресурса | Понятное для человека имя приложения (или имя компонента) | app("fabrikamapp") |
| Полное имя | Полное имя приложения в формате subscriptionName/resourceGroup/componentName | app('AI-Prototype/Fabrikam/fabrikamapp') |
| ИД | GUID приложения | app("988ba129-363e-4415-8fe7-8cbab5447518") |
| Идентификатор ресурса Azure | Идентификатор ресурса Azure |app("/subscriptions/7293b69-db12-44fc-9a66-9c2005c3051d/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp") |
## <a name="notes"></a>Примечания
* У вас должен быть доступ на чтение приложения.
* Для идентификации приложения по имени его имя должно быть уникальным во всех доступных подписках. Если у вас есть несколько приложений с таким именем, запрос не будет выполнен из-за неоднозначности. В этом случае необходимо воспользоваться другим идентификатором.
* Связанное выражение [workspace](workspace-expression.md) используется для запрашивания данных из рабочих областей Log Analytics.
## <a name="examples"></a>Примеры
```Kusto
app("fabrikamapp").requests | count
```
```Kusto
app("AI-Prototype/Fabrikam/fabrikamapp").requests | count
```
```Kusto
app("b438b4f6-912a-46d5-9cb1-b44069212ab4").requests | count
```
```Kusto
app("/subscriptions/7293b69-db12-44fc-9a66-9c2005c3051d/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
```
```Kusto
union
(workspace("myworkspace").Heartbeat | where Computer contains "Con"),
(app("myapplication").requests | where cloud_RoleInstance contains "Con")
| count
```
```Kusto
union
(workspace("myworkspace").Heartbeat), (app("myapplication").requests)
| where TimeGenerated between(todatetime("2018-02-08 15:00:00") .. todatetime("2018-12-08 15:05:00"))
```
## <a name="next-steps"></a>Дополнительная информация
- Дополнительные сведения см. в статье [об использовании выражения workspace](workspace-expression.md) для обозначения рабочей области Azure Log Analytics.
- Дополнительные сведения о хранении [данных в Log Analytics](../../log-analytics/log-analytics-log-search.md). | 42.86747 | 364 | 0.781338 | rus_Cyrl | 0.576054 |
dbc37849bca2cd7715b0989af9719c017d91c23d | 219 | md | Markdown | README.md | marado/freetech | 5f563fdd2cf9289c87504f4c39d08f452112ad62 | [
"CC0-1.0"
] | 1 | 2016-07-17T18:00:53.000Z | 2016-07-17T18:00:53.000Z | README.md | marado/freetech | 5f563fdd2cf9289c87504f4c39d08f452112ad62 | [
"CC0-1.0"
] | 1 | 2016-07-20T18:26:54.000Z | 2016-07-26T22:29:58.000Z | README.md | marado/freetech | 5f563fdd2cf9289c87504f4c39d08f452112ad62 | [
"CC0-1.0"
] | null | null | null | Presentation (in Portuguese, for now), called "Introduction to Free Technology" ("Introdução às Tecnologias Livres").
Here is the presentation [PDF](intro.pdf) (to view/display), and also its [source file](intro.tex).
| 54.75 | 117 | 0.757991 | eng_Latn | 0.613336 |
# Application Design
This section explains how the application was designed and what patterns were implemented.
## The workload
The foundational AlwaysOn reference implementation considers a simple web shop catalog workflow where end users can browse through a catalog of items, see details of an item, and post ratings and comments for items. Although fairly straightforward, this application enables the Reference Implementation to demonstrate the asynchronous processing of requests and how to achieve high throughput within a solution.
The workload consists of three components:
1. **User interface (UI) application** - This is used by both requestors and reviewers.
1. **API application** (`CatalogService`) - This is called by the UI application, but is also available as a REST API for other potential clients.
1. **Worker application** (`BackgroundProcessor`) - This processes write requests to the database by listening to new events on the message bus. This component does not expose any APIs.
## Queue-based asynchronous processing
In order to achieve high responsiveness for all operations, AlwaysOn implements the [Queue-Based Load leveling pattern](https://docs.microsoft.com/azure/architecture/patterns/queue-based-load-leveling) combined with [Competing Consumers pattern](https://docs.microsoft.com/azure/architecture/patterns/competing-consumers) where multiple producer instances (`CatalogService` in our case) generate messages which are then asynchronously processed by consumers (`BackgroundProcessor`). This allows the API to accept the request and return to the caller quickly whilst the more demanding database write operation is processed separately.

*Image source: https://docs.microsoft.com/azure/architecture/patterns/competing-consumers*
- The current AlwaysOn reference implementation uses **Azure Event Hub** as the message queue but provides interfaces in code which enable the use of other messaging services if required (Azure Service Bus was successfully tested as an alternative solution).
- **ASP.NET Core API** is used to implement the producer REST API.
- **.NET Core Worker Service** is used to implement the consumer service.
Read operations are processed directly by the API and immediately return data back to the user.

High-scale write operations (e.g. *post rating, post comment*) are processed asynchronously. The API first sends a message with all relevant information (type of action, comment data, etc.) to the message queue and immediately returns `HTTP 202 (Accepted)` with additional `Location` header for the create operation.
Messages from the queue are then processed by BackgroundProcessor instances which handle the actual database communication for write operations. The BackgroundProcessor scales in and out dynamically based on message volume on the queue.

There is no backchannel that notifies the client when the operation has completed successfully, so the client application has to proactively poll the API for updates.
## Authentication
This foundational reference implementation of AlwaysOn uses a simple authentication scheme based on API keys for some restricted operations, such as creating new catalog items or deleting comments.
More advanced scenarios such as user authentication and user roles are not in scope here.
## Scalability
`CatalogService` as well as the `BackgroundProcessor` can scale in and out individually. Both services are stateless, deployed via Helm charts to each of the (regional) stamps, have proper requests and limits in place and have a pre-configured auto-scaling (HPA) rule in place.
`CatalogService` performance has a direct impact on the end user experience. The service is expected to be able to scale out automatically to provide a positive user experience and performance at any time.
`CatalogService` has at least 3 instances per cluster to spread across three Availability Zones per Azure Region. Each instance requests one CPU core and a given amount of memory based on upfront load testing. Each instance is expected to serve ~250 requests/second based on a standardized usage pattern. `CatalogService` has a 3:1 relationship to the nginx-based Ingress controller.
The `BackgroundProcessor` service has very different requirements and is considered a background worker which has no direct impact on the user experience. As such, `BackgroundProcessor` has a different auto-scaling configuration than `CatalogService` and it can scale between 2 and 32 instances (which matches the max. no. of EventHub partitions). The ratio between `CatalogService` and `BackgroundProcessor` is around 20:2.
---
## 12-Factor App
AlwaysOn aligns to the [12-Factor Application](https://12factor.net/) Methodology as follows.
| Factor | AlwaysOn Alignment |
| --- | --- |
| [Codebase](https://12factor.net/codebase) | All AlwaysOn assets are stored and tracked under source control including CI/CD pipelines, application code, all test code and scripts, infrastructure as code, and configuration management.<br /><br />There is one AlwaysOn codebase and multiple deployments to multiple environments are supported. |
| [Dependencies](https://12factor.net/dependencies) | AlwaysOn applications have NuGet package dependencies which are restored into the build environment.<br /><br />AlwaysOn makes no assumptions about the existence of any dependencies in the build environment. |
| [Config](https://12factor.net/config) | Variable files, both general as well as per-environment, store deployment and configuration data and are stored in the source code repository. Sensitive values are stored in Azure DevOps variable groups.<br /><br />All application runtime configuration is stored in Azure Key Vault - this applies to both, secret and non-sensitive settings. The Key Vaults are only populated by the Terraform deployment. The required values are either sourced directly by Terraform (such as database connection strings) or passed through as Terraform variables from the deployment pipeline.<br /><br />The applications run in containers on Azure Kubernetes Service. Containers use Container Storage Interface bindings to enable AlwaysOn applications to access Azure Key Vault configuration values, surfaced as environment variables, at runtime.<br /><br />Configuration values and environment variables are standalone and not reproduced in different runtime "environments", but are differentiated by target environment at deployment. |
| [Backing Services](https://12factor.net/backing-services) | AlwaysOn applications treat local and third-party services as attached resources, accessed via URL or locator/credentials stored in config.<br /><br />Different resource instances can be accessed by changing the URL or locator/credentials in config. |
| [Build, release, run](https://12factor.net/build-release-run) | AlwaysOn CI/CD pipelines have separate stages. Application stages include build, test, and deploy. Infrastructure stages include global and regional stamp deploy as well as configuration. Releases and runs have distinct IDs. |
| [Processes](https://12factor.net/processes) | AlwaysOn applications are stateless in process, share nothing, and store state in a backing service, Azure Cosmos DB.<br /><br />Sticky sessions are not used.<br /><br />The loss of a stamp will not lose any committed data as it will have been persisted to a backing store. |
| [Port binding](https://12factor.net/port-binding) | AlwaysOn applications run in containers. Endpoints are exported via port binding.<br /><br />Containers are built from images which include the required HTTPS services; no serving capabilities are injected at runtime. |
| [Concurrency](https://12factor.net/concurrency) | AlwaysOn runs different workloads in distinct processes.<br /><br />The front end runs in an HTTP serving process suited for handling web requests, whereas the back end runs in a worker process suited for handling background tasks.<br /><br />The processes manage internal multiplexing/multi-threading. Horizontal scaling is enabled by the shared-nothing, stateless design. |
| [Disposability](https://12factor.net/disposability) | AlwaysOn applications are shared-nothing and stateless. They can be started or stopped with little or zero notice.<br /><br />Hosting in containers on Azure Kubernetes Service enables very fast startup and shutdown which is important for resilience in case of code or config changes. |
| [Dev/prod parity](https://12factor.net/dev-prod-parity) | AlwaysOn is designed for continuous integration and deployment to keep the gaps between development and downstream environment very small.<br /><br />As developers push code updates, testing and deployment are fully automated through CI/CD pipelines.<br /><br />The same pipelines are used to deploy and configure multiple environments as well as build and deploy the application code to the environments, minimizing drift between environments. |
| [Logs](https://12factor.net/logs) | AlwaysOn applications write logs, metrics, and telemetry to a backing log system, Azure Monitor.<br /><br />The applications do not write log files in the runtime environment, or manage log formats or the logging environment. There are no log boundaries (e.g. date rollover) defined or managed by the applications, rather logging is an ongoing event stream and the backing log system is where log analytics and querying are performed. |
| [Admin processes](https://12factor.net/admin-processes) | Administrative tasks such as environment (re)configuration would be performed in the same deployment pipelines used to initially configure and deploy the environments. Deployments are idempotent and incremental due to the underlying Azure Resource Manager platform. |
---
[AlwaysOn - Full List of Documentation](/docs/README.md)
---
layout: api
page_title: ACL Tokens - HTTP API
sidebar_current: api-acl-tokens
description: |-
  The /acl/token endpoints manage Consul's ACL tokens.
---
-> **1.4.0+:** The APIs are available in Consul versions 1.4.0 and later. The documentation for the legacy ACL API is [here](/api/acl/legacy.html)
# ACL Token HTTP API
The `/acl/token` endpoints [create](#create-a-token), [read](#read-a-token),
[update](#update-a-token), [list](#list-tokens), [clone](#clone-a-token) and [delete](#delete-a-token) ACL tokens in Consul.
For more information about ACLs, please see the [ACL Guide](/docs/guides/acl.html).
## Create a Token
This endpoint creates a new ACL token.
| Method | Path | Produces |
| ------ | ---------------------------- | -------------------------- |
| `PUT` | `/acl/token` | `application/json` |
The table below shows this endpoint's support for
[blocking queries](/api/index.html#blocking-queries),
[consistency modes](/api/index.html#consistency-modes),
[agent caching](/api/index.html#agent-caching), and
[required ACLs](/api/index.html#acls).
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
| ---------------- | ----------------- | ------------- | ------------ |
| `NO` | `none` | `none` | `acl:write` |
### Parameters
- `Description` `(string: "")` - Free form human readable description of the token.
- `Policies` `(array<PolicyLink>)` - The list of policies that should
be applied to the token. A PolicyLink is an object with an "ID" and/or "Name" field
to specify a policy. With the PolicyLink, tokens can be linked to policies either by the
policy name or by the policy ID. When policies are linked by name they will be
internally resolved to the policy ID. With linking tokens internally by IDs,
Consul enables policy renaming without breaking tokens.
- `Local` `(bool: false)` - If true, indicates that the token should not be replicated
globally and instead be local to the current datacenter.
### Sample Payload
```json
{
"Description": "Agent token for 'node1'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7"
},
{
"Name": "node-read"
}
],
"Local": false
}
```
### Sample Request
```text
$ curl \
--request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/token
```
### Sample Response
```json
{
"AccessorID": "6a1253d2-1785-24fd-91c2-f8e78c745511",
"SecretID": "45a3bd52-07c7-47a4-52fd-0745e0cfe967",
"Description": "Agent token for 'node1'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7",
"Name": "node1-write"
},
{
"ID": "e359bd81-baca-903e-7e64-1ccd9fdc78f5",
"Name": "node-read"
}
],
"Local": false,
"CreateTime": "2018-10-24T12:25:06.921933-04:00",
"Hash": "UuiRkOQPRCvoRZHRtUxxbrmwZ5crYrOdZ0Z1FTFbTbA=",
"CreateIndex": 59,
"ModifyIndex": 59
}
```
## Read a Token
This endpoint reads an ACL token with the given Accessor ID.
| Method | Path | Produces |
| ------ | ---------------------------- | -------------------------- |
| `GET` | `/acl/token/:AccessorID` | `application/json` |
The table below shows this endpoint's support for
[blocking queries](/api/index.html#blocking-queries),
[consistency modes](/api/index.html#consistency-modes),
[agent caching](/api/index.html#agent-caching), and
[required ACLs](/api/index.html#acls).
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
| ---------------- | ----------------- | ------------- | ------------ |
| `YES` | `all` | `none` | `acl:read` |
### Parameters
- `AccessorID` `(string: <required>)` - Specifies the accessor ID of the ACL token to
read. This is required and is specified as part of the URL path.
### Sample Request
```text
$ curl -X GET http://127.0.0.1:8500/v1/acl/token/6a1253d2-1785-24fd-91c2-f8e78c745511
```
### Sample Response
-> **Note** If the token used for accessing the API has `acl:write` permissions,
then the `SecretID` will contain the token's real value. The `SecretID` is only
redacted when the endpoint is accessed with a token that has just `acl:read`
permissions. This prevents a privilege escalation whereby `acl:read` privileges
would allow reading other secrets that grant even more permissions.
```json
{
"AccessorID": "6a1253d2-1785-24fd-91c2-f8e78c745511",
"SecretID": "<hidden>",
"Description": "Agent token for 'node1'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7",
"Name": "node1-write"
},
{
"ID": "e359bd81-baca-903e-7e64-1ccd9fdc78f5",
"Name": "node-read"
}
],
"Local": false,
"CreateTime": "2018-10-24T12:25:06.921933-04:00",
"Hash": "UuiRkOQPRCvoRZHRtUxxbrmwZ5crYrOdZ0Z1FTFbTbA=",
"CreateIndex": 59,
"ModifyIndex": 59
}
```
## Read Self Token
This endpoint returns the ACL token details that matches the secret ID
specified with the `X-Consul-Token` header or the `token` query parameter.
| Method | Path | Produces |
| ------ | ---------------------------- | -------------------------- |
| `GET` | `/acl/token/self` | `application/json` |
The table below shows this endpoint's support for
[blocking queries](/api/index.html#blocking-queries),
[consistency modes](/api/index.html#consistency-modes),
[agent caching](/api/index.html#agent-caching), and
[required ACLs](/api/index.html#acls).
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
| ---------------- | ----------------- | ------------- | ------------ |
| `YES` | `all` | `none` | `none` |
-> **Note** - This endpoint requires no specific privileges as it only
retrieves the data for a token whose secret you must already possess.
### Sample Request
```text
$ curl -H "X-Consul-Token: 6a1253d2-1785-24fd-91c2-f8e78c745511" \
http://127.0.0.1:8500/v1/acl/token/self
```
### Sample Response
```json
{
"AccessorID": "6a1253d2-1785-24fd-91c2-f8e78c745511",
"SecretID": "45a3bd52-07c7-47a4-52fd-0745e0cfe967",
"Description": "Agent token for 'node1'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7",
"Name": "node1-write"
},
{
"ID": "e359bd81-baca-903e-7e64-1ccd9fdc78f5",
"Name": "node-read"
}
],
"Local": false,
"CreateTime": "2018-10-24T12:25:06.921933-04:00",
"Hash": "UuiRkOQPRCvoRZHRtUxxbrmwZ5crYrOdZ0Z1FTFbTbA=",
"CreateIndex": 59,
"ModifyIndex": 59
}
```
## Update a Token
This endpoint updates an existing ACL token.
| Method | Path | Produces |
| ------ | ---------------------------- | -------------------------- |
| `PUT` | `/acl/token/:AccessorID` | `application/json` |
The table below shows this endpoint's support for
[blocking queries](/api/index.html#blocking-queries),
[consistency modes](/api/index.html#consistency-modes),
[agent caching](/api/index.html#agent-caching), and
[required ACLs](/api/index.html#acls).
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
| ---------------- | ----------------- | ------------- | ------------ |
| `NO` | `none` | `none` | `acl:write` |
### Parameters
- `AccessorID` `(string: "")` - Specifies the accessor ID of the token being updated. This is
required in the URL path but may also be specified in the JSON body. If specified
in both places then they must match exactly. This field is immutable. If not present in
the body and only in the URL then it will be filled in by Consul.
- `SecretID` `(string: "")` - Specifies the secret ID of the token being updated. This field is
immutable so if present in the body then it must match the existing value. If not present
then the value will be filled in by Consul.
- `Description` `(string: "")` - Free form human readable description of this token.
- `Policies` `(array<PolicyLink>)` - This is the list of policies that should
be applied to this token. A PolicyLink is an object with an "ID" and/or "Name" field
to specify a policy. With this tokens can be linked to policies either by the
policy name or by the policy ID. When policies are linked by name they will
internally be resolved to the policy ID. With linking tokens internally by IDs,
Consul enables policy renaming without breaking tokens.
- `Local` `(bool: false)` - If true, indicates that this token should not be replicated
globally and instead be local to the current datacenter. This value must match the
existing value or the request will return an error.
### Sample Payload
```json
{
"Description": "Agent token for 'node1'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7"
},
{
"Name": "node-read"
},
{
"Name": "service-read"
}
],
"Local": false
}
```
### Sample Request
```text
$ curl \
--request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/token/6a1253d2-1785-24fd-91c2-f8e78c745511
```
### Sample Response
```json
{
"AccessorID": "6a1253d2-1785-24fd-91c2-f8e78c745511",
"SecretID": "45a3bd52-07c7-47a4-52fd-0745e0cfe967",
"Description": "Agent token for 'node1'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7",
"Name": "node1-write"
},
{
"ID": "e359bd81-baca-903e-7e64-1ccd9fdc78f5",
"Name": "node-read"
},
{
"ID": "93d2226b-2046-4db1-993b-c0581b5d2391",
"Name": "service-read"
}
],
"Local": false,
"CreateTime": "2018-10-24T12:25:06.921933-04:00",
"Hash": "UuiRkOQPRCvoRZHRtUxxbrmwZ5crYrOdZ0Z1FTFbTbA=",
"CreateIndex": 59,
"ModifyIndex": 100
}
```
## Clone a Token
This endpoint clones an existing ACL token.
| Method | Path | Produces |
| ------ | ------------------------------ | -------------------------- |
| `PUT` | `/acl/token/:AccessorID/clone` | `application/json` |
The table below shows this endpoint's support for
[blocking queries](/api/index.html#blocking-queries),
[consistency modes](/api/index.html#consistency-modes),
[agent caching](/api/index.html#agent-caching), and
[required ACLs](/api/index.html#acls).
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
| ---------------- | ----------------- | ------------- | ------------ |
| `NO` | `none` | `none` | `acl:write` |
### Parameters
- `AccessorID` `(string: <required>)` - The accessor ID of the token to clone. This is required
  in the URL path.
- `Description` `(string: "")` - Free form human readable description for the cloned token.
### Sample Payload
```json
{
"Description": "Clone of Agent token for 'node1'",
}
```
### Sample Request
```text
$ curl \
--request PUT \
--data @payload.json \
http://127.0.0.1:8500/v1/acl/token/6a1253d2-1785-24fd-91c2-f8e78c745511/clone
```
### Sample Response
```json
{
"AccessorID": "773efe2a-1f6f-451f-878c-71be10712bae",
"SecretID": "8b1247ef-d172-4f99-b050-4dbe5d3df0cb",
"Description": "Clone of Agent token for 'node1'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7",
"Name": "node1-write"
},
{
"ID": "e359bd81-baca-903e-7e64-1ccd9fdc78f5",
"Name": "node-read"
},
{
"ID": "93d2226b-2046-4db1-993b-c0581b5d2391",
"Name": "service-read"
}
],
"Local": false,
"CreateTime": "2018-10-24T12:25:06.921933-04:00",
"Hash": "UuiRkOQPRCvoRZHRtUxxbrmwZ5crYrOdZ0Z1FTFbTbA=",
"CreateIndex": 128,
"ModifyIndex": 128
}
```
## Delete a Token
This endpoint deletes an ACL token.
| Method | Path | Produces |
| -------- | ------------------------- | -------------------------- |
| `DELETE` | `/acl/token/:AccessorID` | `application/json` |
Even though the return type is application/json, the value is either true or
false, indicating whether the delete succeeded.
The table below shows this endpoint's support for
[blocking queries](/api/index.html#blocking-queries),
[consistency modes](/api/index.html#consistency-modes),
[agent caching](/api/index.html#agent-caching), and
[required ACLs](/api/index.html#acls).
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
| ---------------- | ----------------- | ------------- | ------------ |
| `NO` | `none` | `none` | `acl:write` |
### Parameters
- `AccessorID` `(string: <required>)` - Specifies the accessor ID of the ACL token to
delete. This is required and is specified as part of the URL path.
### Sample Request
```text
$ curl -X DELETE \
    http://127.0.0.1:8500/v1/acl/token/8f246b77-f3e1-ff88-5b48-8ec93abf3e05
```
### Sample Response
```json
true
```
## List Tokens
This endpoint lists all the ACL tokens.
| Method | Path | Produces |
| ------ | ---------------------------- | -------------------------- |
| `GET` | `/acl/tokens` | `application/json` |
The table below shows this endpoint's support for
[blocking queries](/api/index.html#blocking-queries),
[consistency modes](/api/index.html#consistency-modes),
[agent caching](/api/index.html#agent-caching), and
[required ACLs](/api/index.html#acls).
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
| ---------------- | ----------------- | ------------- | ------------ |
| `YES` | `all` | `none` | `acl:read` |
### Parameters
- `policy` `(string: "")` - Filters the token list to those tokens that
are linked with the specific policy ID.
### Sample Request
```text
$ curl -X GET http://127.0.0.1:8500/v1/acl/tokens
```
### Sample Response
-> **Note** - The token secret IDs are not included in the listing and must be
retrieved by the [token reading endpoint](#read-a-token).
```json
[
{
"AccessorID": "6a1253d2-1785-24fd-91c2-f8e78c745511",
"Description": "Agent token for 'my-agent'",
"Policies": [
{
"ID": "165d4317-e379-f732-ce70-86278c4558f7",
"Name": "node1-write"
},
{
"ID": "e359bd81-baca-903e-7e64-1ccd9fdc78f5",
"Name": "node-read"
}
],
"Local": false,
"CreateTime": "2018-10-24T12:25:06.921933-04:00",
"Hash": "UuiRkOQPRCvoRZHRtUxxbrmwZ5crYrOdZ0Z1FTFbTbA=",
"CreateIndex": 59,
"ModifyIndex": 59
},
{
"AccessorID": "00000000-0000-0000-0000-000000000002",
"Description": "Anonymous Token",
"Policies": null,
"Local": false,
"CreateTime": "0001-01-01T00:00:00Z",
"Hash": "RNVFSWnfd5DUOuB8vplp+imivlIna3fKQVnkUHh21cA=",
"CreateIndex": 5,
"ModifyIndex": 5
},
{
"AccessorID": "3328f9a6-433c-02d0-6649-7d07268dfec7",
"Description": "Bootstrap Token (Global Management)",
"Policies": [
{
"ID": "00000000-0000-0000-0000-000000000001",
"Name": "global-management"
}
],
"Local": false,
"CreateTime": "2018-10-24T11:42:02.6427-04:00",
"Hash": "oyrov6+GFLjo/KZAfqgxF/X4J/3LX0435DOBy9V22I0=",
"CreateIndex": 12,
"ModifyIndex": 12
}
]
```
] | null | null | null | ## `ern platform config set`
#### Description
* Set the local platform configuration values stored in the `~/.ern/.ernrc` file
#### Syntax
`ern platform config set <key> <value>`
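For example, to raise the log verbosity using the `logLevel` key documented below: `ern platform config set logLevel debug`.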
**Arguments**
`<key>`
* The key of the configuration element to set
`<value>`
* The configuration value to associate with the given key.
**Configurable properties**
- `bundlestore-host` [string]
[Electrode Native bundle store server] host (including port).
For example `localhost:3000`
- `codePushAccessKey` [string]
Code push access key associated with your account
- `codePushCustomHeaders`
CodePush custom extra http headers.
- `codePushCustomServerUrl` [string]
CodePush custom server url, in case you are not using the Microsoft CodePush server.
- `codePushProxy`
CodePush proxy server url.
- `ignore-required-ern-version` [boolean]
Indicates whether any Cauldron ern version requirement should be ignored.
This is mostly used for Electrode Native development and should not be set to true otherwise.
**default** : false
- `logLevel` [trace|debug|info|error|fatal]
Set the log level to use for all commands.
**default** : info
- `max-package-cache-size` [number]
The maximum disk space to use for the package cache, in Bytes.
Only apply if the package cache is enabled (`package-cache-enabled` configuration key set to `true`).
**default** : 2GB
- `package-cache-enabled` [boolean]
Indicates whether the package cache should be enabled.
  Enabling the package cache leads to faster Container generation, given that all package versions used for a Container generation will be retrieved from the cache if available, rather than being downloaded upon every generation.
**default** : true
- `retain-tmp-dir` [boolean]
If set to `true`, the temporary directories created during some commands execution, won't be destroyed after the command execution.
**default** : false
- `showBanner` [boolean]
Show the Electrode Native ASCII banner for all commands.
**default** : true
- `tmp-dir`
Temporary directory to use during commands execution.
**default** : system default
#### Remarks
* In case a value already exists in the configuration for a given key, this command will not fail and will overwrite the existing value.
[Electrode Native bundle store server]: https://github.com/electrode-io/ern-bundle-store
# VGG Convolutional Neural Networks Practical
*By Andrea Vedaldi and Andrew Zisserman*
This is an [Oxford Visual Geometry Group](http://www.robots.ox.ac.uk/~vgg) computer vision practical, authored by [Andrea Vedaldi](http://www.robots.ox.ac.uk/~vedaldi/) and Andrew Zisserman (Release 2016a).
<img height=400px src="images/cover.png" alt="cover"/>
*Convolutional neural networks* are an important class of learnable representations applicable, among others, to numerous computer vision problems. Deep CNNs, in particular, are composed of several layers of processing, each involving linear as well as non-linear operators, that are learned jointly, in an end-to-end manner, to solve a particular tasks. These methods are now the dominant approach for feature extraction from audiovisual and textual data.
This practical explores the basics of learning (deep) CNNs. The first part introduces typical CNN building blocks, such as ReLU units and linear filters, with a particular emphasis on understanding back-propagation. The second part looks at learning two basic CNNs. The first one is a simple non-linear filter capturing particular image structures, while the second one is a network that recognises typewritten characters (using a variety of different fonts). These examples illustrate the use of stochastic gradient descent with momentum, the definition of an objective function, the construction of mini-batches of data, and data jittering. The last part shows how powerful CNN models can be downloaded off-the-shelf and used directly in applications, bypassing the expensive training process.
[TOC]
$$
\newcommand{\bx}{\mathbf{x}}
\newcommand{\by}{\mathbf{y}}
\newcommand{\bz}{\mathbf{z}}
\newcommand{\bw}{\mathbf{w}}
\newcommand{\bp}{\mathbf{p}}
\newcommand{\cP}{\mathcal{P}}
\newcommand{\cN}{\mathcal{N}}
\newcommand{\vc}{\operatorname{vec}}
\newcommand{\vv}{\operatorname{vec}}
$$
## Getting started
Read and understand the [requirements and installation instructions](../overview/index.html#installation). The download links for this practical are:
* Code and data: [practical-cnn-2016a.tar.gz](http://www.robots.ox.ac.uk/~vgg/share/practical-cnn-2016a.tar.gz)
* Code only: [practical-cnn-2016a-code-only.tar.gz](http://www.robots.ox.ac.uk/~vgg/share/practical-cnn-2016a-code-only.tar.gz)
* Data only: [practical-cnn-2016a-data-only.tar.gz](http://www.robots.ox.ac.uk/~vgg/share/practical-cnn-2016a-data-only.tar.gz)
* [Git repository](https://github.com/vedaldi/practical-cnn) (for lab setters and developers)
After the installation is complete, open and edit the script `exercise1.m` in the MATLAB editor. The script contains commented code and a description for all steps of this exercise, for [Part I](#part1) of this document. You can cut and paste this code into the MATLAB window to run it, and will need to modify it as you go through the session. Other files `exercise2.m`, `exercise3.m`, and `exercise4.m` are given for [Part II](#part2), [III](#part3), and [IV](#part4).
Each part contains several **Questions** (that require pen and paper) and **Tasks** (that require experimentation or coding) to be answered/completed before proceeding further in the practical.
## Part 1: CNN building blocks {#part1}
### Part 1.1: convolution {#part1.1}
A feed-forward neural network can be thought of as the composition of number of functions
$$
f(\bx) = f_L(\dots f_2(f_1(\bx;\bw_1);\bw_2)\dots),\bw_{L}).
$$
Each function $f_l$ takes as input a datum $\bx_l$ and a parameter vector $\bw_l$ and produces as output a datum $\bx_{l+1}$. While the type and sequence of functions is usually handcrafted, the parameters $\bw=(\bw_1,\dots,\bw_L)$ are *learned from data* in order to solve a target problem, for example classifying images or sounds.
In a *convolutional neural network* data and functions have additional structure. The data $\bx_1,\dots,\bx_n$ are images, sounds, or more in general maps from a lattice[^lattice] to one or more real numbers. In particular, since the rest of the practical will focus on computer vision applications, data will be 2D arrays of pixels. Formally, each $\bx_i$ will be a $M \times N \times K$ real array of $M \times N$ pixels and $K$ channels per pixel. Hence the first two dimensions of the array span space, while the last one spans channels. Note that only the input $\bx=\bx_1$ of the network is an actual image, while the remaining data are intermediate *feature maps*.
The second property of a CNN is that the functions $f_l$ have a *convolutional structure*. This means that $f_l$ applies to the input map $\bx_l$ an operator that is *local and translation invariant*. Examples of convolutional operators are applying a bank of linear filters to $\bx_l$.
In this part we will familiarise ourselves with a number of such convolutional and non-linear operators. The first one is the regular *linear convolution* by a filter bank. We will start by focusing our attention on a single function relation as follows:
$$
f: \mathbb{R}^{M\times N\times K} \rightarrow \mathbb{R}^{M' \times N' \times K'},
\qquad \bx \mapsto \by.
$$
Open the `exercise1.m` file, select the following part of the code, and execute it in MATLAB (right button > `Evaluate selection` or `Shift+F7`).
```matlab
% Read an example image
x = imread('peppers.png') ;
% Convert to single format
x = im2single(x) ;
% Visualize the input x
figure(1) ; clf ; imagesc(x)
```
This should display an image of bell peppers in Figure 1:
<img height=400px src="images/peppers.png" alt="peppers"/>
Use MATLAB `size` command to obtain the size of the array `x`. Note that the array `x` is converted to *single precision* format. This is because the underlying MatConvNet assumes that data is in single precision.
> **Question.** The third dimension of `x` is 3. Why?
Now we will create a bank of 10 $5 \times 5 \times 3$ filters.
```matlab
% Create a bank of linear filters
w = randn(5,5,3,10,'single') ;
```
The filters are in single precision as well. Note that `w` has four dimensions, packing 10 filters. Note also that each filter is not flat, but rather a volume with three layers. The next step is applying the filter to the image. This uses the `vl_nnconv` function from MatConvNet:
```matlab
% Apply the convolution operator
y = vl_nnconv(x, w, []) ;
```
**Remark:** You might have noticed that the third argument to the `vl_nnconv` function is the empty matrix `[]`. It can be otherwise used to pass a vector of bias terms to add to the output of each filter.
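The bias can be exercised with a small sketch like the following, which supplies one (arbitrary) bias value per filter:

```matlab
% Add an (arbitrary) bias term to the output of each of the 10 filters
b = randn(10, 1, 'single') ;
y_bias = vl_nnconv(x, w, b) ;
```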
The variable `y` contains the output of the convolution. Note that the filters are three-dimensional, in the sense that they operate on a map $\bx$ with $K$ channels. Furthermore, there are $K'$ such filters, generating a $K'$ dimensional map $\by$ as follows
$$
y_{i'j'k'} = \sum_{ijk} w_{ijkk'} x_{i+i',j+j',k}
$$
> **Questions:** Study carefully this expression and answer the following:
>
>
> - Given that the input map $\bx$ has $M \times N \times K$ dimensions and that each of the $K'$ filters has dimension $M_f \times N_f \times K$, what is the dimension of $\by$?
> - Note that $x$ is indexed by $i+i'$ and $j+j'$, but that there is no plus sign between $k$ and $k'$. Why?
> **Task:** check that the size of the variable `y` matches your calculations.
We can now visualise the output `y` of the convolution. In order to do this, use the [`vl_imarraysc`](http://www.vlfeat.org/matlab/vl_imarraysc.html) function to display an image for each feature channel in `y`:
```matlab
% Visualize the output y
figure(2) ; clf ; vl_imarraysc(y) ; colormap gray ;
```
> **Question:** Study the feature channels obtained. Most will likely contain a strong response in correspondence of edges in the input image `x`. Recall that `w` was obtained by drawing random numbers from a Gaussian distribution. Can you explain this phenomenon?
So far filters preserve the resolution of the input feature map. However, it is often useful to *downsample the output*. This can be obtained by using the `stride` option in `vl_nnconv`:
```matlab
% Try again, downsampling the output
y_ds = vl_nnconv(x, w, [], 'stride', 16) ;
figure(3) ; clf ; vl_imarraysc(y_ds) ; colormap gray ;
```
As you should have noticed in a question above, applying a filter to an image or feature map interacts with the boundaries, making the output map smaller by an amount proportional to the size of the filters. If this is undesirable, then the input array can be padded with zeros by using the `pad` option:
```matlab
% Try padding
y_pad = vl_nnconv(x, w, [], 'pad', 4) ;
figure(4) ; clf ; vl_imarraysc(y_pad) ; colormap gray ;
```
> **Task:** Convince yourself that the previous code's output has different boundaries compared to the code that does not use padding. Can you explain the result?
In order to consolidate what has been learned so far, we will now design a filter by hand:
```matlab
w = [0 1 0 ;
1 -4 1 ;
0 1 0 ] ;
w = single(repmat(w, [1, 1, 3])) ;
y_lap = vl_nnconv(x, w, []) ;
figure(5) ; clf ; colormap gray ;
subplot(1,2,1) ;
imagesc(y_lap) ; title('filter output') ;
subplot(1,2,2) ;
imagesc(-abs(y_lap)) ; title('- abs(filter output)') ;
```
> **Questions:**
>
> * What filter have we implemented?
> * How are the RGB colour channels processed by this filter?
> * What image structures are detected?
### Part 1.2: non-linear activation functions {#part1.2}
As we stated in the introduction, CNNs are obtained by composing several different functions. In addition to the linear filters shown in the [previous part](#part1.1), there are several non-linear operators as well.
> **Question:** Some of the functions in a CNN *must* be non-linear. Why?
The simplest non-linearity is obtained by following a linear filter by a *non-linear activation function*, applied identically to each component (i.e. point-wise) of a feature map. The simplest such function is the *Rectified Linear Unit (ReLU)*
$$
y_{ijk} = \max\{0, x_{ijk}\}.
$$
This function is implemented by `vl_nnrelu`; let's try this out:
```matlab
w = single(repmat([1 0 -1], [1, 1, 3])) ;
w = cat(4, w, -w) ;
y = vl_nnconv(x, w, []) ;
z = vl_nnrelu(y) ;
figure(6) ; clf ; colormap gray ;
subplot(1,2,1) ; vl_imarraysc(y) ;
subplot(1,2,2) ; vl_imarraysc(z) ;
```
> **Tasks:**
>
> * Run the code above and understand what the filter $\bw$ is doing.
> * Explain the final result $\bz$.
### Part 1.3: pooling {#part1.3}
There are several other important operators in a CNN. One of them is *pooling*. A pooling operator operates on individual feature channels, coalescing nearby feature values into one by the application of a suitable operator. Common choices include max-pooling (using the max operator) or sum-pooling (using summation). For example, *max-pooling* is defined as:
$$
y_{ijk} = \max \{ x_{i'j'k} : i \leq i' < i+p, j \leq j' < j + p \}
$$
Max pooling is implemented by the `vl_nnpool` function. Try this now:
```matlab
y = vl_nnpool(x, 15) ;
figure(6) ; clf ; imagesc(y) ;
```
> **Question:** look at the resulting image. Can you interpret the result?
The function `vl_nnpool` supports subsampling and padding just like `vl_nnconv`. However, for max-pooling feature maps are padded with the value $-\infty$ instead of 0. Why?
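For instance, the following sketch (with arbitrary parameter values) pools over $15 \times 15$ windows while moving 15 pixels at a time and padding the input with 2 pixels:

```matlab
% Max-pooling with subsampling and padding
y_pool = vl_nnpool(x, 15, 'stride', 15, 'pad', 2) ;
figure(7) ; clf ; imagesc(y_pool) ;
```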
### Part 1.4: normalisation {#part1.4}
Another important CNN building block is channel-wise normalisation. This operator normalises the vector of feature channels at each spatial location in the input map $\bx$. The form of the normalisation operator is actually rather curious:
$$
y_{ijk'} = \frac{x_{ijk'}}{\left(\kappa + \alpha \sum_{k\in G(k')} x_{ijk}^2\right)^{\beta}}
$$
where $G(k) = \left[k - \lfloor \frac{\rho}{2} \rfloor, k + \lceil \frac{\rho}{2} \rceil\right] \cap \{1, 2, \dots, K\}$ is a group of $\rho$ consecutive feature channels in the input map.
> **Task:** Understand what this operator is doing. How would you set $\kappa$, $\alpha$ and $\beta$ to achieve simple $L^2$ normalisation?
Now let's try this out:
```matlab
rho = 5 ;
kappa = 0 ;
alpha = 1 ;
beta = 0.5 ;
y_nrm = vl_nnnormalize(x, [rho kappa alpha beta]) ;
figure(6) ; clf ; imagesc(y_nrm) ;
```
> **Tasks:**
>
> * Inspect the figure just obtained. Can you interpret it?
> * Compute the $L^2$ norm of the feature channels in the output map `y_nrm`. What do you notice?
> * Explain this result in relation to the particular choice of the parameters $\rho$, $\kappa$, $\alpha$ and $\beta$.
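For the norm computation in the tasks above, one possible check (assuming `y_nrm` as computed by the previous snippet) is:

```matlab
% L2 norm of the feature channels at each spatial location
nrm = sqrt(sum(y_nrm.^2, 3)) ;
fprintf('norms range from %f to %f\n', min(nrm(:)), max(nrm(:))) ;
```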
## Part 2: back-propagation and derivatives
The parameters of a CNN $\bw=(\bw_1,\dots\bw_L)$ should be learned in such a manner that the overall CNN function $\bz = f(\bx;\bw)$ achieves a desired goal. In some cases, the goal is to model the distribution of the data, which leads to a *generative objective*. Here, however, we will use $f$ as a *regressor* and obtain it by minimising a *discriminative objective*. In simple terms, we are given:
* examples of the desired input-output relations $(\bx_1,\bz_1), \dots, (\bx_n,\bz_n)$ where $\bx_i$ are input data and $\bz_i$ corresponding output values;
* and a loss $\ell(\bz,\hat\bz)$ that expresses the penalty for predicting $\hat\bz$ instead of $\bz$.
We use those to write the empirical loss of the CNN $f$ by averaging over the examples:
$$
L(\bw) = \frac{1}{n} \sum_{i=1}^n \ell(\bz_i, f(\bx_i;\bw))
$$
Note that the composition of the function $f$ with the loss $\ell$ can be thought of as a CNN with one more layer (called a *loss layer*). Hence, with a slight abuse of notation, in the rest of this part we incorporate the loss in the function $f$ (which therefore is a map $\mathcal{X}\rightarrow\mathbb{R}$) and do not talk about it explicitly anymore.
The simplest algorithm to minimise $L$, and in fact one that is used in practice, is *gradient descent*. The idea is simple: compute the gradient of the objective $L$ at a current solution $\bw^t$ and then update the latter along the direction of fastest descent of $L$:
$$
\bw^{t+1} = \bw^{t} - \eta_t \frac{\partial L}{\partial \bw}(\bw^t)
$$
where $\eta_t \in \mathbb{R}_+$ is the *learning rate*.
### Part 2.1: the theory of back-propagation
Training CNNs is normally done using a gradient-based optimization method. The CNN $f$ is the composition of $L$ layers $f_l$ each with parameters $\bw_l$, which in the simplest case of a chain looks like:
$$
\bx_0
\longrightarrow
\underset{\displaystyle\underset{\displaystyle\bw_1}{\uparrow}}{\boxed{f_1}}
\longrightarrow
\bx_1
\longrightarrow
\underset{\displaystyle\underset{\displaystyle\bw_2}{\uparrow}}{\boxed{f_2}}
\longrightarrow
\bx_2
\longrightarrow
\dots
\longrightarrow
\bx_{L-1}
\longrightarrow
\underset{\displaystyle\underset{\displaystyle\bw_L}{\uparrow}}{\boxed{f_L}}
\longrightarrow
\bx_L
$$
During learning, the last layer of the network is the *loss function* that should be minimized. Hence, the output $\bx_L = x_L$ of the network is a **scalar** quantity (a single number).
The gradient is easily computed using using the **chain rule**. If *all* network variables and parameters are scalar, this is given by[^derivative]:
$$
\frac{\partial f}{\partial w_l}(x_0;w_1,\dots,w_L)
=
\frac{\partial f_L}{\partial x_{L-1}}(x_{L-1};w_L) \times
\cdots
\times
\frac{\partial f_{l+1}}{\partial x_l}(x_l;w_{l+1}) \times
\frac{\partial f_{l}}{\partial w_l}(x_{l-1};w_l)
$$
With tensors, however, there are some complications. Consider for instance the derivative of a function $\by=f(\bx)$ where both $\by$ and $\bx$ are tensors; this is formed by taking the derivative of each scalar element in the output $\by$ with respect to each scalar element in the input $\bx$. If $\bx$ has dimensions $H \times W \times C$ and $\by$ has dimensions $H' \times W' \times C'$, then the derivative contains $HWCH'W'C'$ elements, which is often unmanageable (in the order of several GBs of memory for a single derivative).
Note that all intermediate derivatives in the chain rule may be affected by this size explosion except for the derivative of the network output that, being the loss, is a scalar.
> **Question:** The output derivatives have the same size as the parameters in the network. Why?
**Back-propagation** allows computing the output derivatives in a memory-efficient manner. To see how, the first step is to generalize the equation above to tensors using a matrix notation. This is done by converting tensors into vectors by using the $\vv$ (stacking)[^stacking] operator:
$$
\frac{\partial \vv f}{\partial \vv^\top \bw_l}
=
\frac{\partial \vv f_L}{\partial \vv^\top \bx_L} \times
\cdots
\times
\frac{\partial \vv f_{l+1}}{\partial \vv^\top \bx_l} \times
\frac{\partial \vv f_{l}}{\partial \vv^\top \bw_l}
$$
In order to make this computation memory efficient, we *project* the derivative with respect to a tensor $\bp_L = 1$ as follows:
$$
(\vv \bp_L)^\top \times \frac{\partial \vv f}{\partial \vv^\top \bw_l}
=
(\vv \bp_L)^\top
\times
\frac{\partial \vv f_L}{\partial \vv^\top \bx_L} \times
\cdots
\times
\frac{\partial \vv f_{l+1}}{\partial \vv^\top \bx_l} \times
\frac{\partial \vv f_{l}}{\partial \vv^\top \bw_l}
$$
Note that $\bp_L=1$ has the same dimension as $\bx_L$ (the scalar loss) and, being equal to 1, multiplying it to the left of the expression does not change anything. Things are more interesting when products are evaluated from the left to the right, i.e. *backward from the output to the input* of the CNN. The first such factors is given by:
\begin{equation}
\label{e:factor}
(\vv \bp_{L-1})^\top = (\vv \bp_L)^\top
\times
\frac{\partial \vv f_L}{\partial \vv^\top \bx_L}
\end{equation}
This results in a new projection vector $\bp_{L-1}$, which can then be multiplied from the left to obtain $\bp_{L-2}$ and so on. The last projection $\bp_l$ is the desired derivative. Crucially, each projection $\bp_q$ takes as much memory as the corresponding variable $\bx_q$.
Some might have noticed that, while projections remain small, each factor \eqref{e:factor} does contain one of the large derivatives that we cannot compute explicitly. The trick is that CNN toolboxes contain code that can compute the projected derivatives without explicitly computing this large factor. In particular, for any building block function $\by=f(\bx;\bw)$, a toolbox such as MatConvNet will implement:
* A **forward mode** computing the function $\by=f(\bx;\bw)$.
* A **backward mode** computing the derivatives of the projected function $\langle \bp, f(\bx;\bw) \rangle$ with respect to the input $\bx$ and parameter $\bw$:
$$
\frac{\partial}{\partial \bx} \left\langle \bp, f(\bx;\bw) \right\rangle,
\qquad
\frac{\partial}{\partial \bw} \left\langle \bp, f(\bx;\bw) \right\rangle.
$$
For example, this is how this looks for the convolution operator:
```.language-matlab
y = vl_nnconv(x,w,b) ; % forward mode (get output)
p = randn(size(y), 'single') ; % projection tensor (arbitrary)
[dx,dw,db] = vl_nnconv(x,w,b,p) ; % backward mode (get projected derivatives)
```
and this is how it looks for ReLU operator:
```.language-matlab
y = vl_nnrelu(x) ;
p = randn(size(y), 'single') ;
dx = vl_nnrelu(x,p) ;
```
### Part 2.2: using back-propagation in practice
To see how backpropagation is used in practice, focus on a computational block $f$, followed by a function $g$:
$$
\bx
\longrightarrow
\underset{\displaystyle\underset{\displaystyle\bw}{\uparrow}}{\boxed{f}}
\longrightarrow
\by
\longrightarrow
\boxed{g}
\longrightarrow
z
$$
Here $g$ lumps the rest of the network, from $\by$ to the final scalar output $z$. The goal is to compute the derivatives $\partial z / \partial \bx$ and $\partial z / \partial \bw$ given the derivative $\bp = \partial z / \partial \by$ of the rest of the network $g$.
Let's put this into practice by letting $f$ be a convolutional layer and by filling $\bp = \partial z / \partial \by$ with random values for the sake of the example:
```matlab
% Read an example image
x = im2single(imread('peppers.png')) ;
% Create a bank of linear filters and apply them to the image
w = randn(5,5,3,10,'single') ;
y = vl_nnconv(x, w, []) ;
% Create the derivative dz/dy
dzdy = randn(size(y), 'single') ;
% Back-propagation
[dzdx, dzdw] = vl_nnconv(x, w, [], dzdy) ;
```
> **Task:** Run the code above and check the dimensions of `dzdx` and `dzdy`. Does this matches your expectations?
An advantage of this modular view is that new building blocks can be coded and added to the architecture in a simple manner. However, it is easy to make mistakes in the calculation of complex derivatives. Hence, it is a good idea to verify results numerically. Consider the following piece of code:
```matlab
% Check the derivative numerically
ex = randn(size(x), 'single') ;
eta = 0.0001 ;
xp = x + eta * ex ;
yp = vl_nnconv(xp, w, []) ;
dzdx_empirical = sum(dzdy(:) .* (yp(:) - y(:)) / eta) ;
dzdx_computed = sum(dzdx(:) .* ex(:)) ;
fprintf(...
'der: empirical: %f, computed: %f, error: %.2f %%\n', ...
dzdx_empirical, dzdx_computed, ...
abs(1 - dzdx_empirical/dzdx_computed)*100) ;
```
> **Questions:**
>
> * What is the meaning of `ex` in the code above?
> * What are the derivatives `dzdx_empirical` and `dzdx_computed`?
>
> **Tasks:**
>
> * Run the code and convince yourself that the `vl_nnconv` derivatives are (probably) correct.
> * Create a new version of this code to test the derivative calculation with respect to $\bw$.
We are now ready to build our first elementary CNN, composed of just two layers, and to compute its derivatives:
```matlab
% Parameters of the CNN
w1 = randn(5,5,3,10,'single') ;
rho2 = 10 ;
% Run the CNN forward
x1 = im2single(imread('peppers.png')) ;
x2 = vl_nnconv(x1, w1, []) ;
x3 = vl_nnpool(x2, rho2) ;
% Create the derivative dz/dx3
dzdx3 = randn(size(x3), 'single') ;
% Run the CNN backward
dzdx2 = vl_nnpool(x2, rho2, dzdx3) ;
[dzdx1, dzdw1] = vl_nnconv(x1, w1, [], dzdx2) ;
```
> **Question:** Note that the last derivative in the CNN is `dzdx3`. Here, for the sake of the example, this derivative is initialised randomly. In a practical application, what would this derivative represent?
We can now use the same technique as before to check that the derivative computed through back-propagation are correct.
```matlab
% Check the derivative numerically
ew1 = randn(size(w1), 'single') ;
eta = 0.0001 ;
w1p = w1 + eta * ew1 ;
x1p = x1 ;
x2p = vl_nnconv(x1p, w1p, []) ;
x3p = vl_nnpool(x2p, rho2) ;
dzdw1_empirical = sum(dzdx3(:) .* (x3p(:) - x3(:)) / eta) ;
dzdw1_computed = sum(dzdw1(:) .* ew1(:)) ;
fprintf(...
'der: empirical: %f, computed: %f, error: %.2f %%\n', ...
dzdw1_empirical, dzdw1_computed, ...
abs(1 - dzdw1_empirical/dzdw1_computed)*100) ;
```
## Part 3: learning a tiny CNN {#part3}
In this part we will learn a very simple CNN. The CNN is composed of exactly two layers: a convolutional layer and a max-pooling layer:
$$
\bx_2 = W * \bx_1 + b, \qquad \bx_3 = \operatorname{maxpool}_\rho \bx_2.
$$
$W$ contains a single $3\times 3$ square filter, so that $b$ is a scalar, and the input image $\bx=\bx_1$ has a single channel.
> **Task**
>
> * Open the file `tinycnn.m` and inspect the code. Convince yourself that the code computes the CNN just described.
> * Look at the paddings used in the code. If the input image $\bx_1$ has dimensions $M\times N$, what is the dimension of the output feature map $\bx_3$?
>
In the rest of the section we will learn the CNN parameters in order to extract blob-like structures from images, such as the ones in the following image:
<img width=350px src="images/dots.jpg" alt="dots"/>
### Part 3.1: training data and labels
The first step is to load the image `data/dots.jpg` and to use the supplied `extractBlackBlobs` function to extract all the black dots in the image.
```matlab
% Load an image
im = rgb2gray(im2single(imread('data/dots.jpg'))) ;
% Compute the location of black blobs in the image
[pos,neg] = extractBlackBlobs(im) ;
```
The arrays `pos` and `neg` now contain pixel labels and will be used as *annotations* for the supervised training of the CNN. These annotations can be visualised as follows:
```matlab
figure(1) ; clf ;
subplot(1,3,1) ; imagesc(im) ; axis equal ; title('image') ;
subplot(1,3,2) ; imagesc(pos) ; axis equal ; title('positive points (blob centres)') ;
subplot(1,3,3) ; imagesc(neg) ; axis equal ; title('negative points (not a blob)') ;
colormap gray ;
```
> **Task:** Inspect `pos` and `neg` and convince yourself that:
>
> - `pos` contains a single `true` value in correspondence of each blob centre;
> - `neg` contains a `true` value for each pixel sufficiently far away from a blob.
>
> Are there pixels for which both `pos` and `neg` evaluate to false?
### Part 3.2: image preprocessing
Before we attempt to train the CNN, the image is pre-processed to remove its mean value. It is also smoothed by applying a Gaussian kernel of standard deviation 3 pixels:
```matlab
% Pre-smooth the image
im = vl_imsmooth(im,3) ;
% Subtract median value
im = im - median(im(:)) ;
```
We will come back to this preprocessing steps later.
### Part 3.3: learning with gradient descent
We will now set up a learning problem to learn $W$ and $b$ to detect black blobs in images. Recall that the CNN computes for each image pixel $(u,v)$ a score $f(\bx;\bw,b)_{(u,v)}$. We would like this score to be:
* at least as large as 1 for any pixel that is marked as a blob centre (`pos` or $(u,v)\in\cP$) and
* at most zero for any pixel that is marked as being far away from a blob (`neg` or $(u,v)\in\cN$).
We do so by defining and then optimising the following objective function:
$$
E(\bw,b) =
\frac{\lambda}{2}\|\bw\|^2
+
\frac{1}{|\cP|}\sum_{(u,v) \in \cP}
\max\{0, 1 - f(\bx;\bw,b)_{(u,v)}\}
+
\frac{1}{|\cN|}\sum_{(u,v) \in \cN}
\max\{0, f(\bx;\bw,b)_{(u,v)}\}.
$$
> **Questions:**
>
> - What can you say about the score of each pixel if $\lambda=0$ and $E(\bw,b) =0$?
> - Note that the objective enforces a *margin* between the scores of the positive and negative pixels. How much is this margin?
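To make the two loss terms in $E$ concrete, they could be evaluated in MATLAB roughly as follows. This is only a sketch (the actual implementation used in this part is in `exercise3.m`); it assumes a score map `scores` with the same spatial size as the masks `pos` and `neg`, and that `w` and `lambda` are in scope:

```matlab
% Hinge-style losses over the annotated pixels (sketch)
lossPos = mean(max(0, 1 - scores(pos))) ;
lossNeg = mean(max(0, scores(neg))) ;
E = 0.5 * lambda * sum(w(:).^2) + lossPos + lossNeg ;
```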
We can now train the CNN by minimising the objective function with respect to $\bw$ and $b$. We do so by using an algorithm called *gradient descent with momentum*. Given the current solution $(\bw_t,b_t)$, this is updated to $(\bw_{t+1},b_{t+1})$ by following the direction of fastest descent as given by the negative gradient $-\nabla E(\bw_t,b_t)$ of the objective. However, gradient updates are smoothed by considering a *momentum* term $(\bar\bw_{t}, \bar b_t)$, yielding the update equations
$$
\bar\bw_{t+1} \leftarrow \mu \bar\bw_t + \eta \frac{\partial E}{\partial \bw_t},
\qquad
\bw_{t+1} \leftarrow \bw_{t} - \bar\bw_{t+1}.
$$
and similarly for the bias term. Here $\mu$ is the *momentum rate* and $\eta$ the *learning rate*.
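In code, one step of this update looks roughly as follows; the variable names are illustrative and the full training loop is in `exercise3.m`:

```matlab
% One step of gradient descent with momentum (sketch)
w_momentum = momentum * w_momentum + rate * dzdw ;
w = w - w_momentum ;
b_momentum = momentum * b_momentum + rate * dzdb ;
b = b - b_momentum ;
```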
> **Questions:**
>
> - Explain why the momentum rate must be smaller than 1. What is the effect of having a momentum rate close to 1?
> - The learning rate establishes how fast the algorithm will try to minimise the objective function. Can you see any problem with a large learning rate?
The parameters of the algorithm are set as follows:
```matlab
numIterations = 500 ;
rate = 5 ;
momentum = 0.9 ;
shrinkRate = 0.0001 ;
plotPeriod = 10 ;
```
> **Tasks:**
>
> - Inspect the code in the file `exercise3.m`. Convince yourself that the code is implementing the algorithm described above. Pay particular attention to the forward and backward passes as well as to how the objective function and its derivatives are computed.
> - Run the algorithm and observe the results. Then answer the following questions:
> * The learned filter should resemble the discretisation of a well-known differential operator. Which one?
> * What is the average of the filter values compared to the average of the absolute values?
> - Run the algorithm again and observe the evolution of the histograms of the score of the positive and negative pixels in relation to the values 0 and 1. Answer the following:
> * Is the objective function minimised monotonically?
> * As the histograms evolve, can you identify at least two "phases" in the optimisation?
> * Once converged, do the score distribute in the manner that you would expect?
>
> **Hint:** the `plotPeriod` option can be changed to plot the diagnostic figure with a higher or lower frequency; this can significantly affect the speed of the algorithm.
### Part 3.4: experimenting with the tiny CNN
In this part we will experiment with several variants of the network just learned. First, we study the effect of the image smoothing:
> **Task:** Train again the tiny CNN *without smoothing the input image in preprocessing*. Answer the following questions:
>
> * Is the learned filter very different from the one learned before?
> * If so, can you figure out what "went wrong"?
> * Look carefully at the output of the first layer, magnifying with the loupe tool. Is the maximal filter response attained in the middle of each blob?
>
> **Hint:** The Laplacian of Gaussian operator responds maximally at the centre of a blob only if its scale matches the blob size. Relate this fact to the combination of pre-smoothing the image and applying the learned $3\times 3$ filter.
Now restore the smoothing but switch off subtracting the median from the input image.
> **Task:** Train again the tiny CNN *without subtracting the median value in preprocessing*. Answer the following questions:
>
> * Does the algorithm converge?
> * Reduce the learning rate a hundred-fold and increase the maximum number of iterations by an equal amount. Does it get better?
> * Explain why adding a constant to the input image can have such a dramatic effect on the performance of the optimisation.
>
> **Hint:** What constraint should the filter $\bw$ satisfy if the filter output should be zero when (i) the input image is zero or (ii) the input image is a large constant? Do you think that it would be easy for gradient descent to enforce (ii) at all times?
What you have just witnessed is actually a fairly general principle: centring the data usually makes learning problems much better conditioned.
Now we will explore several parameters in the algorithms:
> **Task:** Restore the preprocessing as given in `experiment4.m`. Try the following:
>
> * Try increasing the learning rate `eta`. Can you achieve a better value of the energy in the 500 iterations?
> * Disable momentum by setting `momentum = 0`. Now try to beat the result obtained above by choosing `eta`. Can you succeed?
Finally, consider the regularisation effect of shrinking:
> **Task:** Restore the learning rate and momentum as given in `experiment4.m`. Then increase the shrinkage factor tenfold and a hundred-fold.
>
> - What is the effect on the convergence speed?
> - What is the effect on the final value of the total objective function and of the average loss part of it?
## Part 4: learning a character CNN
In this part we will learn a CNN to recognise images of characters.
### Part 4.1: prepare the data
Open up `exercise4.m` and execute Part 4.1. The code loads a structure `imdb` containing images of the characters *a, b, ..., z* rendered using approximately 931 fonts downloaded from the [Google Fonts Project](https://www.google.com/fonts). Look at the `imdb.images` substructure:
```matlab
>> imdb.images
ans =
id: [1x24206 double]
data: [32x32x24206 single]
label: [1x24206 double]
set: [1x24206 double]
```
Here `imdb.images.id` is a 24,206-dimensional vector of numeric IDs, one for each of the 24,206 character images in the dataset. `imdb.images.data` contains a $32 \times 32$ image for each character, stored as a slice of a $32\times 32\times 24,\!206$-dimensional array. `imdb.images.label` is a vector of image labels, denoting which one of the 26 possible characters each image is. `imdb.images.set` is equal to 1 for each image that should be used to train the CNN and to 2 for each image that should be used for validation.
<img height=400px src="images/chars.png" alt="cover"/>
> **Task:** look at the Figure 1 generated by the code and at the code itself and make sure that you understand what you are looking at.
### Part 4.2: initialize a CNN architecture
The function `initializeCharacterCNN.m` creates a CNN initialised with random weights that will be trained to recognise character images.
> **Tasks:**
>
> 1. By inspecting `initializeCharacterCNN.m` get a sense of the architecture that will be trained. How many layers are there? How big are the filters?
> 2. Use the function [`vl_simplenn_display`](http://www.vlfeat.org/matconvnet/mfiles/vl_simplenn_display/) to produce a table summarising the architecture.
Note that the *penultimate* layer has 26 output dimensions, one for each character. Character recognition looks at the maximal output to identify which character is processed by the network.
However, the last network layer is [`vl_nnsoftmaxloss`](http://www.vlfeat.org/matconvnet/mfiles/vl_nnsoftmaxloss/), which in turn is a combination of the [`vl_nnsoftmax`](http://www.vlfeat.org/matconvnet/mfiles/vl_nnsoftmax/) function and of the classification log-loss [`vl_nnloss`](http://www.vlfeat.org/matconvnet/mfiles/vl_nnloss/). The *softmax* operator is given by
$$
y_{ijk'} = \frac{e^{x_{ijk'}}}{\sum_{k} e^{x_{ijk}}}
$$
whereas the *log-loss* is given by
$$
y_{ij} = - \log x_{ij c_{ij}}
$$
where $c_{ij}$ is the index of the ground-truth class at spatial location $(i,j)$.
**Remark:** While in MatConvNet all operators are convolutional, in this case the network is configured such that the output of the classification layer is a $1 \times 1 \times 26$-dimensional feature map, i.e. there remains only one spatial location.
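To build intuition for these two operations, the following MATLAB fragment computes the softmax and the log-loss for a single spatial location, i.e. a 26-dimensional score vector `x` and a ground-truth class index `c`. This is only a sketch for illustration, not how MatConvNet implements the layer internally:
```matlab
% x: 26-dimensional vector of class scores, c: index of the ground-truth class.
y = exp(x) / sum(exp(x)) ;   % softmax: non-negative scores that sum to one
loss = -log(y(c)) ;          % log-loss: small when the true class gets high probability

% The predicted character is simply the index of the maximal score.
[~, prediction] = max(x) ;
```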
> **Tasks:**
>
> 1. Understand what the softmax operator does. **Hint:** to use the log-loss the data must be in the (0, 1] interval.
> 2. Understand what is the effect of minimising the log-loss. Which neural response should become larger?
> 3. Why do you think MatConvNet provides a third function `vl_nnsoftmaxloss` combining both functions into a single layer?
### Part 4.3: train and evaluate the CNN
We are now ready to train the CNN. To this end we use the example SGD implementation in MatConvNet (`examples/cnn_train.m`). This function requires some options:
```matlab
trainOpts.batchSize = 100 ;
trainOpts.numEpochs = 100 ;
trainOpts.continue = true ;
trainOpts.useGpu = false ;
trainOpts.learningRate = 0.001 ;
trainOpts.numEpochs = 15 ;
trainOpts.expDir = 'data/chars-experiment' ;
```
This says that the function will operate on SGD mini-batches of 100 elements, it will run for 15 epochs (passes through the data), it will continue from the last epoch if interrupted, it will *not* use the GPU, it will use a learning rate of 0.001, and it will save any files in the `data/chars-experiment` subdirectory.
Before the training starts, the average image value is subtracted:
```
% Take the average image out
imageMean = mean(imdb.images.data(:)) ;
imdb.images.data = imdb.images.data - imageMean ;
```
This is similar to what we have done in Part 3.
The training code is called as follows:
```matlab
% Call training function in MatConvNet
[net,info] = cnn_train(net, imdb, @getBatch, trainOpts) ;
```
Here the key, in addition to the `trainOpts` structure, is the `@getBatch` function handle. This is how `cnn_train` obtains a copy of the data to operate on. Examine this function (see the bottom of the `exercise4.m` file):
```matlab
function [im, labels] = getBatch(imdb, batch)
im = imdb.images.data(:,:,batch) ;
im = 256 * reshape(im, 32, 32, 1, []) ;
labels = imdb.images.label(1,batch) ;
```
The function extracts the $m$ images corresponding to the vector of indexes `batch`. It also reshapes them as a $32\times 32\times 1\times m$ array (as this is the format expected by the MatConvNet functions) and multiplies the values by 256 (the resulting values match the network initialisation and learning parameters). Finally, it also returns a vector of labels, one for each image in the batch.
> **Task:** Run the learning code and examine the plots that are produced. As training completes answer the following questions:
>
> 1. How many images per second can you process? (Look at the output in the MATLAB screen)
> 2. There are two sets of curves: energy and prediction error. What do you think is the difference? What is the "energy"?
> 3. Some curves are labelled "train" and some other "val". Should they be equal? Which one should be lower than the other?
> 4. Both the top-1 and top-5 prediction errors are plotted. What do they mean? What is the difference?
Once training is finished, the model is saved back:
```matlab
% Save the result for later use
net.layers(end) = [] ;
net.imageMean = imageMean ;
save('data/chars-experiment/charscnn.mat', '-struct', 'net') ;
```
Note that we remember the `imageMean` for later use. Note also that the softmaxloss layer is *removed* from the network before saving.
### Part 4.4: visualise the learned filters
The next step is to glance at the filters that have been learned:
```matlab
figure(2) ; clf ; colormap gray ;
vl_imarraysc(squeeze(net.layers{1}.weights{1}),'spacing',2)
axis equal ;
title('filters in the first layer') ;
```
> **Task:** what can you say about the filters?
### Part 4.5: apply the model
We now apply the model to a whole sequence of characters. This is the image `data/sentence-lato.png`:
<img width=576px src="images/sentence-lato.png" alt="sentence-lato"/>
```matlab
% Load the CNN learned before
net = load('data/chars-experiment/charscnn.mat') ;
% Load the sentence
im = im2single(imread('data/sentence-lato.png')) ;
im = 256 * (im - net.imageMean) ;
% Apply the CNN to the larger image
res = vl_simplenn(net, im) ;
```
> **Question:** The image is much wider than 32 pixels. Why can you apply to it the CNN learned before for $32\times 32$ patches?
> **Task:** examine the size of the CNN output using `size(res(end).x)`. Does this match your expectation?
Now use the `decodeCharacters()` function to visualise the results:
```matlab
% Visualize the results
figure(3) ; clf ;
decodeCharacters(net, imdb, im, res) ;
```
> **Tasks:** inspect the output of the `decodeCharacters()` function and answer the following:
>
> 1. Is the quality of the recognition any good?
> 2. Does this match your expectation given the recognition rate in your validation set (as reported by `cnn_train` during training)?
### Part 4.6: training with jitter
A key issue with the previous CNN is that it is not trained to recognise characters in the context of other characters. Furthermore, characters are perfectly centred in the patch. We can relax these assumptions by making the training data "more realistic". In this part we will train a second network applying *data jittering* by:
1. Randomly adding a character to the left and to the right of the one recognised and
2. Randomly shifting the characters by up to $\pm 5$ pixels horizontally and $\pm 2$ pixels vertically.
This is implemented by the `getBatchWithJitter()` function (note that jittering is applied on the fly as it is so fast).
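To get a rough idea of what the shift jittering amounts to (this is an illustrative sketch under the assumptions above, not the actual code of `getBatchWithJitter()`), each training patch can be displaced by a small random offset before being passed to the network:
```matlab
% Illustrative sketch: randomly shift one 32x32 character image `im`.
dx = randi([-5 5]) ;                  % horizontal shift, up to +/- 5 pixels
dy = randi([-2 2]) ;                  % vertical shift, up to +/- 2 pixels
jittered = circshift(im, [dy dx]) ;   % (the real code also pastes neighbouring characters)
```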
> **Tasks:**
>
> 1. Train a second model, using the jittered data.
> 2. Look at the training and validation errors. Is their gap as wide as it was before?
> 3. Use the new model to recognise the characters in the sentence by repeating the previous part. Does it work better?
> 4. **Advanced.** What else can you change to make the performance even better?
### Part 4.7: Training using the GPU
> Skip this part if you do not wish to experiment training using GPU hardware.
A key challenge in deep learning is the sheer amount of computation required to train gigantic models from equally gigantic data collections. State-of-the-art vision models, for example, take weeks to train on specialised hardware such as GPUs, and they are essentially untrainable on CPU (unless you have access to a very large cluster). Thus it is practically important to learn how to use this hardware.
In MatConvNet this is almost trivial as it builds on the easy-to-use GPU support in MATLAB. You can follow this list of steps to try it out:
1. Clear the models generated and cached in the previous steps. To do this, rename or delete the directories `data/characters-experiment` and `data/characters-jit-experiment`.
2. Make sure that MatConvNet is compiled with GPU support. To do this, use
```matlab
> setup('useGpu', true) ;
```
3. Try again training the model of `exercise4.m` switching to `true` the `useGpu` flag.
> **Task:** Follow the steps above and note the speed of training. How many images per second can you process now?
For these small images, the GPU speedup is probably modest (perhaps 2-5 fold). However, for larger models it becomes really dramatic (>10 fold).
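For reference, what the `useGpu` flag does is essentially to move the network parameters and the data to the GPU before evaluating the network. A manual equivalent, assuming MatConvNet has been compiled with GPU support, looks roughly like this:
```matlab
% Move the model and an image batch to the GPU, evaluate, and fetch the result.
net = vl_simplenn_move(net, 'gpu') ;
res = vl_simplenn(net, gpuArray(im)) ;
scores = gather(res(end).x) ;   % bring the scores back to CPU memory
```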
## Part 5: using pretrained models
A characteristic of deep learning is that it constructs *representations* of the data. These representations tend to have a universal value, or at least to be applicable to an array of problems that transcends the particular task a model was trained for. This is fortunate as training complex models requires weeks of work on one or more GPUs or hundreds of CPUs; these models can then be frozen and reused for a number of additional applications, with no or minimal additional work.
In this part we will see how MatConvNet can be used to download and run high-performance CNN models for image classification. These models are trained from 1.2M images in the ImageNet datasets to discriminate 1,000 different object categories.
Several [pretrained models](http://www.vlfeat.org/matconvnet/pretrained/) can be downloaded from the MatConvNet website, including several trained using other CNN implementations such as Caffe. One such model is included in the practical `data/imagenet-vgg-verydeep-16.mat` file. This is one of the best models from the ImageNet ILSVRC Challenge 2014.
### Part 5.1: load a pre-trained model
The first step is to load the model itself. This is in the format of the `vl_simplenn` CNN wrapper, and ships as a MATLAB `.mat` file:
```matlab
net = load('data/imagenet-vgg-verydeep-16.mat') ;
vl_simplenn_display(net) ;
```
> **Tasks:**
>
> 1. Look at the output of `vl_simplenn_display` and understand the structure of the model. Can you understand why it is called "very deep"?
> 2. Look at the size of the file `data/imagenet-vgg-verydeep-16.mat` on disk. This is *just the model*.
### Part 5.2: use the model to classify an image
We can now use the model to classify an image. We start from `peppers.png`, a MATLAB stock image:
```matlab
% obtain and preprocess an image
im = imread('peppers.png') ;
im_ = single(im) ; % note: 255 range
im_ = imresize(im_, net.normalization.imageSize(1:2)) ;
im_ = im_ - net.normalization.averageImage ;
```
The code normalises the image in a format compatible with the model `net`. This amounts to: converting the image to `single` format (but with range 0,...,255 rather than [0, 1] as typical in MATLAB), resizing the image to a fixed size, and then subtracting an average image.
It is now possible to call the CNN:
```matlab
% run the CNN
res = vl_simplenn(net, im_) ;
```
As usual, `res` contains the results of the computation, including all intermediate layers. The last one can be used to perform the classification:
```matlab
% show the classification result
scores = squeeze(gather(res(end).x)) ;
[bestScore, best] = max(scores) ;
figure(1) ; clf ; imagesc(im) ;
title(sprintf('%s (%d), score %.3f',...
net.classes.description{best}, best, bestScore)) ;
```
That completes this practical.
## Links and further work
* The code for this practical is written using the software package [MatConvNet](http://www.vlfeat.org/matconvnet). This is a software library written in MATLAB, C++, and CUDA and is freely available as source code and binary.
* The ImageNet model is the *VGG very deep 16* of Karen Simonyan and Andrew Zisserman.
## Acknowledgements
* Beta testing by: Karel Lenc and Carlos Arteta.
* Bugfixes/typos by: Sun Yushi.
## History
* Used in the Oxford AIMS CDT, 2015-16.
* Used in the Oxford AIMS CDT, 2014-15.
[^lattice]: A two-dimensional *lattice* is a discrete grid embedded in $R^2$, similar for example to a checkerboard.
# Microsoft Azure Storage Table PHP Client Library
This project provides a PHP client library that makes it easy to access Microsoft Azure Storage table services. For documentation on how to host PHP applications on Microsoft Azure, please see the [Microsoft Azure PHP Developer Center](http://www.windowsazure.com/en-us/develop/php/).
[](https://packagist.org/packages/microsoft/azure-storage-table)
> **Note**
>
> * This [repository](https://github.com/azure/azure-storage-table-php) is currently used for releasing only, please go to [azure-storage-php](https://github.com/azure/azure-storage-php) for submitting issues or contribution.
> * If you are looking for the Service Bus, Service Runtime, Service Management or Media Services libraries, please visit https://github.com/Azure/azure-sdk-for-php.
> * If you need big file (larger than 2GB) or 64-bit integer support, please install PHP 7 64-bit version.
# Features
* Tables
* create and delete tables
* create, query, insert, update, merge, and delete entities
* batch operations
Please check details on [API reference documents](http://azure.github.io/azure-storage-php).
# Getting Started
## Minimum Requirements
* PHP 5.6 or above
* See [composer.json](composer.json) for dependencies
* Required extension for PHP:
* php_fileinfo.dll
* php_mbstring.dll
* php_openssl.dll
* php_xsl.dll
* Recommended extension for PHP:
* php_curl.dll
## Download Source Code
To get the source code from GitHub, type
```
git clone https://github.com/Azure/azure-storage-php.git
cd ./azure-storage-php
```
## Install via Composer
1. Create a file named **composer.json** in the root of your project and add the following code to it:
```json
{
"require": {
"microsoft/azure-storage-table": "*"
}
}
```
2. Download **[composer.phar](http://getcomposer.org/composer.phar)** in your project root.
3. Open a command prompt and execute this in your project root
```
php composer.phar install
```
## Usage
There are four basic steps that have to be performed before you can make a call to any Microsoft Azure Storage API when using the libraries.
* First, include the autoloader script:
```php
require_once "vendor/autoload.php";
```
* Include the namespaces you are going to use.
To create any Microsoft Azure service client you need to use the rest proxy classes, such as **TableRestProxy** class:
```php
use MicrosoftAzure\Storage\Table\TableRestProxy;
```
To process exceptions you need:
```php
use MicrosoftAzure\Storage\Common\ServiceException;
```
* To instantiate the service client you will also need a valid [connection string](https://azure.microsoft.com/en-us/documentation/articles/storage-configure-connection-string/). The format is:
```
DefaultEndpointsProtocol=[http|https];AccountName=[yourAccount];AccountKey=[yourKey]
```
or:
```
TableEndpoint=[myTableEndpoint];SharedAccessSignature=[sasToken]
```
* Instantiate a client object - a wrapper around the available calls for the given service.
```php
$tableClient = TableRestProxy::createTableService($connectionString);
```
### Using Middlewares
To specify the middlewares, user have to create an array with middlewares
and put it in the `$requestOptions` with key 'middlewares'. The sequence of
the array will affect the sequence in which the middleware is invoked. The
`$requestOptions` can usually be set in the options of an API call, such as
`MicrosoftAzure\Storage\Table\Models\QueryTablesOptions`.
The user can push the middleware into the array with key 'middlewares' in
services' `$_options` instead when creating them if the middleware is to be
applied to each of the API call for a rest proxy. These middlewares will always
be invoked after the middlewares in the `$requestOptions`.
e.g.:
```php
$tableClient = TableRestProxy::createTableService(
$connectionString,
$optionsWithMiddlewares
);
```
Each of the middleware should be either an instance of a sub-class that
implements `MicrosoftAzure\Storage\Common\Internal\IMiddleware`, or a
`callable` that follows the Guzzle middleware implementation convention.
User can create self-defined middleware that inherits from `MicrosoftAzure\Storage\Common\Internal\Middlewares\MiddlewareBase`.
### Using proxies
To use proxies during HTTP requests, set system variable `HTTP_PROXY` and the proxy will be used.
## Troubleshooting
### Error: Unable to get local issuer certificate
cURL can't verify the validity of Microsoft certificate when trying to issue a request call to Azure Storage Services. You must configure cURL to use a certificate when issuing https requests by the following steps:
1. Download the cacert.pem file from [cURL site](http://curl.haxx.se/docs/caextract.html).
2. Then either:
* Open your php.ini file and add the following line:
```ini
curl.cainfo = "<absolute path to cacert.pem>"
```
OR
* Point to the cacert in the options when creating the Proxy.
```php
$options["http"] = ["verify" => "<absolute path to cacert.pem>"];
TableRestProxy::createTableService($connectionString, $options);
```
## Code samples
You can find samples in the [sample folder](samples)
# Migrate from [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php/)
If you are using [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php/) to access Azure Storage Service, we highly recommend you to migrate to this SDK for faster issue resolution and quicker feature implementation. We are working on supporting the latest service features as well as improvement on existing APIs.
For now, Microsoft Azure Storage PHP client libraries share almost the same interface as the storage blobs, tables, queues and files APIs in Azure SDK for PHP. However, there are some minor breaking changes need to be addressed during your migration. You can find the details in [BreakingChanges.md](BreakingChanges.md).
# Need Help?
Be sure to check out the Microsoft Azure [Developer Forums on Stack Overflow](http://go.microsoft.com/fwlink/?LinkId=234489) and [github issues](https://github.com/Azure/azure-storage-php/issues) if you have trouble with the provided code.
# Contribute Code or Provide Feedback
If you would like to become an active contributor to this project please follow the instructions provided in [Azure Projects Contribution Guidelines](http://azure.github.io/guidelines/).
You can find more details for contributing in the [CONTRIBUTING.md](CONTRIBUTING.md).
If you encounter any bugs with the library please file an issue in the [Issues](https://github.com/Azure/azure-storage-php/issues) section of the project.
| 39.409357 | 323 | 0.760647 | eng_Latn | 0.960242 |
dbc7e2aa8480b0f6e1be634250f7dc2ee3e2195e | 646 | md | Markdown | 2020/Journals/Electronics.md | jwwthu/DL4Traffic | 7bc98cc150e3e830e0e0423956dc8e64dd4b9277 | [
"MIT"
] | 63 | 2021-01-09T12:32:11.000Z | 2022-03-31T15:22:36.000Z | 2020/Journals/Electronics.md | zhouyuxin-97/DL4Traffic | 387148cbb9a9e081d120cb3116fd11901538f6ec | [
"MIT"
] | 2 | 2021-06-29T09:23:57.000Z | 2021-06-30T16:21:46.000Z | 2020/Journals/Electronics.md | zhouyuxin-97/DL4Traffic | 387148cbb9a9e081d120cb3116fd11901538f6ec | [
"MIT"
] | 18 | 2021-01-09T12:32:11.000Z | 2022-03-21T09:47:28.000Z | * Mon E E, Ochiai H, Saivichit C, et al. <b>Bottleneck Based Gridlock Prediction in an Urban Road Network Using Long Short-Term Memory[J]</b>. Electronics, 2020, 9(9): 1412. [Link](https://www.mdpi.com/2079-9292/9/9/1412)
* Yuan Y, Shao C, Cao Z, et al. <b>Bus Dynamic Travel Time Prediction: Using a Deep Feature Extraction Framework Based on RNN and DNN[J]</b>. Electronics, 2020, 9(11): 1876. [Link](https://www.mdpi.com/2079-9292/9/11/1876)
* Lu H, Huang D, Song Y, et al. <b>St-trafficnet: A spatial-temporal deep learning network for traffic forecasting[J]</b>. Electronics, 2020, 9(9): 1474. [Link](https://www.mdpi.com/2079-9292/9/9/1474) | 215.333333 | 222 | 0.71517 | yue_Hant | 0.382266 |
dbc7f11c02e0cc0f15158ef8338b161b7c9375b1 | 2,265 | md | Markdown | Dev/C# ASP and Blazor/Resources.md | MorozovDamian/DevManuals | 0ebaaf7d828273c9f16845a8fc9d3e6a3892dc63 | [
"MIT"
] | null | null | null | Dev/C# ASP and Blazor/Resources.md | MorozovDamian/DevManuals | 0ebaaf7d828273c9f16845a8fc9d3e6a3892dc63 | [
"MIT"
] | null | null | null | Dev/C# ASP and Blazor/Resources.md | MorozovDamian/DevManuals | 0ebaaf7d828273c9f16845a8fc9d3e6a3892dc63 | [
"MIT"
] | null | null | null | # Resources
- [Back to the Home page](../../README.md)
- [Back to the Dev page](../README.md)
- [Back to the C# ASP and Blazor page](README.md)
## Links
- [Blazor University](https://blazor-university.com/ "blazor-university.com")
- [Blazor - Build client web apps with C#](https://blazor.net/)
- [https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor](https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor)
- [Introduction to ASP.NET Core Blazor](https://docs.microsoft.com/en-us/aspnet/core/blazor/)
- [Blazor Tutorial - Build your first Blazor app](https://dotnet.microsoft.com/learn/aspnet/blazor-tutorial/install)
- [Build a Blazor todo list app](https://docs.microsoft.com/en-us/aspnet/core/tutorials/build-a-blazor-app)
- [ASP.NET Core Blazor hosting models](https://docs.microsoft.com/en-us/aspnet/core/blazor/hosting-models)
- [Use ASP.NET Core SignalR with Blazor WebAssembly](https://docs.microsoft.com/en-us/aspnet/core/tutorials/signalr-blazor-webassembly)
- [The .NET Team's Favorite Razor Features](https://dev.to/dotnet/the-net-team-s-favorite-razor-features-5b72)
- [MVVM Support in Blazor](https://blog.jeremylikness.com/blog/2019-01-04_mvvm-support-in-blazor/)
- [Man(ual) page for WebAssembly](https://www.webassemblyman.com/)
- [Blazor help](https://blazorhelpwebsite.com/)
- [https://github.com/dotnet/AspNetCore.Docs](https://github.com/dotnet/AspNetCore.Docs)
- [https://github.com/gustavnavar/Grid.Blazor](https://github.com/gustavnavar/Grid.Blazor)
- [https://gridblazor.azurewebsites.net/](https://gridblazor.azurewebsites.net/)
## Blazor components
- [Mud Blazor Components](https://mudblazor.com/)
- [Mud Blazor Components on github](https://github.com/MudBlazor/MudBlazor/)
- [Radzen Blazor Components](https://blazor.radzen.com/)
- [Radzen Blazor Components on github](https://github.com/radzenhq/radzen-blazor)
## Library
```
Year: 2020
Author: Daniel Roth, Jeff Fritz, Taylor Southwick, Scott Addie, Steve Smith
Book title: Blazor for ASP.NET Web Forms Developers
English book: https://aka.ms/blazor-ebook
Year: 2019
Author: Peter Himschoot.
Book title: Blazor Revealed: Building Web Applications in .NET
ISBN-13 (pbk): 978-1-4842-4342-8
ISBN-13 (electronic): 978-1-4842-4343-5
```
| 52.674419 | 135 | 0.736424 | yue_Hant | 0.688186 |
dbc8a7c94e10c035f302fb407248b3f560c53d09 | 3,666 | md | Markdown | alicloud/r/alicloud_vswitch.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 78 | 2021-01-15T14:10:30.000Z | 2022-02-14T09:17:40.000Z | alicloud/r/alicloud_vswitch.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 5 | 2021-04-09T15:21:28.000Z | 2022-01-28T19:02:05.000Z | alicloud/r/alicloud_vswitch.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 30 | 2021-01-17T13:16:57.000Z | 2022-03-21T12:52:08.000Z | # alicloud_vswitch
[back](../alicloud.md)
### Index
- [Example Usage](#example-usage)
- [Variables](#variables)
- [Resource](#resource)
- [Outputs](#outputs)
### Terraform
```terraform
terraform {
required_providers {
alicloud = ">= 1.120.0"
}
}
```
[top](#index)
### Example Usage
```terraform
module "alicloud_vswitch" {
source = "./modules/alicloud/r/alicloud_vswitch"
# availability_zone - (optional) is a type of string
availability_zone = null
# cidr_block - (required) is a type of string
cidr_block = null
# description - (optional) is a type of string
description = null
# name - (optional) is a type of string
name = null
# tags - (optional) is a type of map of string
tags = {}
# vpc_id - (required) is a type of string
vpc_id = null
# vswitch_name - (optional) is a type of string
vswitch_name = null
# zone_id - (optional) is a type of string
zone_id = null
timeouts = [{
create = null
delete = null
}]
}
```
[top](#index)
### Variables
```terraform
variable "availability_zone" {
description = "(optional)"
type = string
default = null
}
variable "cidr_block" {
description = "(required)"
type = string
}
variable "description" {
description = "(optional)"
type = string
default = null
}
variable "name" {
description = "(optional)"
type = string
default = null
}
variable "tags" {
description = "(optional)"
type = map(string)
default = null
}
variable "vpc_id" {
description = "(required)"
type = string
}
variable "vswitch_name" {
description = "(optional)"
type = string
default = null
}
variable "zone_id" {
description = "(optional)"
type = string
default = null
}
variable "timeouts" {
description = "nested block: NestingSingle, min items: 0, max items: 0"
type = set(object(
{
create = string
delete = string
}
))
default = []
}
```
[top](#index)
### Resource
```terraform
resource "alicloud_vswitch" "this" {
# availability_zone - (optional) is a type of string
availability_zone = var.availability_zone
# cidr_block - (required) is a type of string
cidr_block = var.cidr_block
# description - (optional) is a type of string
description = var.description
# name - (optional) is a type of string
name = var.name
# tags - (optional) is a type of map of string
tags = var.tags
# vpc_id - (required) is a type of string
vpc_id = var.vpc_id
# vswitch_name - (optional) is a type of string
vswitch_name = var.vswitch_name
# zone_id - (optional) is a type of string
zone_id = var.zone_id
dynamic "timeouts" {
for_each = var.timeouts
content {
# create - (optional) is a type of string
create = timeouts.value["create"]
# delete - (optional) is a type of string
delete = timeouts.value["delete"]
}
}
}
```
[top](#index)
### Outputs
```terraform
output "availability_zone" {
description = "returns a string"
value = alicloud_vswitch.this.availability_zone
}
output "id" {
description = "returns a string"
value = alicloud_vswitch.this.id
}
output "name" {
description = "returns a string"
value = alicloud_vswitch.this.name
}
output "status" {
description = "returns a string"
value = alicloud_vswitch.this.status
}
output "vswitch_name" {
description = "returns a string"
value = alicloud_vswitch.this.vswitch_name
}
output "zone_id" {
description = "returns a string"
value = alicloud_vswitch.this.zone_id
}
output "this" {
value = alicloud_vswitch.this
}
```
[top](#index) | 18.994819 | 73 | 0.639116 | eng_Latn | 0.82013 |
dbc8fd86b5439aa64cc3c4483208d7e1fac4a1a9 | 79 | md | Markdown | README.md | paullewallencom/spring-978-1-7893-4859-0 | 43aad76ae3bd50811bc30249569a307d5dc5a708 | [
"Apache-2.0"
] | null | null | null | README.md | paullewallencom/spring-978-1-7893-4859-0 | 43aad76ae3bd50811bc30249569a307d5dc5a708 | [
"Apache-2.0"
] | null | null | null | README.md | paullewallencom/spring-978-1-7893-4859-0 | 43aad76ae3bd50811bc30249569a307d5dc5a708 | [
"Apache-2.0"
] | null | null | null | # spring-978-1-7893-4859-0
Building RESTful Web Services with Spring 5 [Video]
| 26.333333 | 51 | 0.772152 | eng_Latn | 0.61839 |
dbc93ca2cdc7ced78e172edb790fd89393e407b2 | 1,325 | md | Markdown | docs/getting-started.md | MrJaeger/enzyme-context | c7b6ee55ee205f6b89d78b4df9f2212478e8f979 | [
"MIT"
] | null | null | null | docs/getting-started.md | MrJaeger/enzyme-context | c7b6ee55ee205f6b89d78b4df9f2212478e8f979 | [
"MIT"
] | null | null | null | docs/getting-started.md | MrJaeger/enzyme-context | c7b6ee55ee205f6b89d78b4df9f2212478e8f979 | [
"MIT"
] | null | null | null | # Getting Started
## 1) Install
**Enzyme Context has peer dependencies on [`react`](https://reactjs.org/docs/getting-started.html) and [`enzyme`](https://airbnb.io/enzyme/docs/installation/). Make sure they are installed and set up correctly before proceeding.**
Enzyme Context loves `yarn`:
```bash
$> yarn add -D enzyme-context
```
But `npm` is fine too:
```bash
$> npm install --dev enzyme-context
```
## 2) Create mount() and shallow()
At TrialSpark, we do this in a module called `test-utils/enzyme`, but you can put yours wherever you like:
```javascript
import { createMount, createShallow } from 'enzyme-context';
export const mount = createMount({});
export const shallow = createShallow({});
```
## 3) Use the mount/shallow we just created in place of enzyme's
```javascript
import { mount } from 'test-utils/enzyme';
import MyComponent from './MyComponent';
describe('<MyComponent />', () => {
let component;
beforeEach(() => {
// mount returns an object with:
// - component: the mounted enzyme wrapper
({ component } = mount(<MyComponent />));
});
});
```
## 4) Add some plugins!
The `mount`/`shallow` we created in step two doesn't really do anything that enzyme doesn't already do out-of-the-box. The next thing you should do is [install some plugins](official-plugins.md).
| 26.5 | 230 | 0.69283 | eng_Latn | 0.974721 |
dbc9e470ac5ee4035a3b01729e19d47a3c611b0e | 1,838 | md | Markdown | CHANGELOG.md | Mattinton/prettier-plugin-sort-imports | b8b5fec79b0232e91077e204c256041818a13b45 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | Mattinton/prettier-plugin-sort-imports | b8b5fec79b0232e91077e204c256041818a13b45 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | Mattinton/prettier-plugin-sort-imports | b8b5fec79b0232e91077e204c256041818a13b45 | [
"Apache-2.0"
] | null | null | null | ## Changelog
---
### v3.2.0
#### New features
- Group Namespace specifiers [#105](https://github.com/trivago/prettier-plugin-sort-imports/pull/105) by [Mattinton](https://github.com/Mattinton)
#### Chores
- Clean up unit test and snapshot test
- Add contribution guidelines for bug fixes and new features
### v3.1.1
- Fixes package management [#100](https://github.com/trivago/prettier-plugin-sort-imports/issues/100)
### v3.1.0
#### Chores
- Update Babel parser to `7.14.6` [#79](https://github.com/trivago/prettier-plugin-sort-imports/pull/79) by [juanrgm](https://github.com/juanrgm)
- `.npmignore` cleanup [#96](https://github.com/trivago/prettier-plugin-sort-imports/issues/96) by [byara](https://github.com/byara)
- Remove npm badges in the README [#101](https://github.com/trivago/prettier-plugin-sort-imports/issues/101) by [byara](https://github.com/byara)
### v3.0.0
#### New features
- `<THIRD_PARTY_MODULES>` special word in import order to place third
party imports at desired place. [#65](https://github.com/trivago/prettier-plugin-sort-imports/pull/65) by [@risenforces](https://github.com/risenforces)
- `importOrderSortSpecifiers` option to sort the imports in an import declaration. [#72](https://github.com/trivago/prettier-plugin-sort-imports/pull/72) by [@ratierd](https://github.com/ratierd)
- `importOrderCaseInsensitive` option to control the case sensitivity [#69](https://github.com/trivago/prettier-plugin-sort-imports/pull/79) by [@timiles](https://github.com/timiles)
- `importOrderParserPlugins` option to pass plugins to babel parser [#88](https://github.com/trivago/prettier-plugin-sort-imports/pull/88) by [@saaryab](https://github.com/saaryab)
#### Breaking Changes
- Renaming of the `experimentalBabelParserPluginsList` to `importOrderParserPlugins`. by [@byara](https://github.com/byara)
| 51.055556 | 195 | 0.745375 | kor_Hang | 0.230982 |
dbc9fd97eb0d6e467c22d6e8be3c6c4e34205ad3 | 2,162 | md | Markdown | docs/visual-basic/misc/bc32080.md | muthu67/Docs | ae188fd42d40ff7106f5e62c90d5aa042b262ff5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc32080.md | muthu67/Docs | ae188fd42d40ff7106f5e62c90d5aa042b262ff5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc32080.md | muthu67/Docs | ae188fd42d40ff7106f5e62c90d5aa042b262ff5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Generic methods cannot use 'Handles' clause | Microsoft Docs"
ms.date: "2015-07-20"
ms.prod: .net
ms.technology:
- "devlang-visual-basic"
ms.topic: "article"
f1_keywords:
- "vbc32080"
- "BC32080"
helpviewer_keywords:
- "BC32080"
ms.assetid: 88c62a1c-aee3-46b2-ad78-76790022c04c
caps.latest.revision: 10
author: "stevehoag"
ms.author: "shoag"
translation.priority.ht:
- "de-de"
- "es-es"
- "fr-fr"
- "it-it"
- "ja-jp"
- "ko-kr"
- "ru-ru"
- "zh-cn"
- "zh-tw"
translation.priority.mt:
- "cs-cz"
- "pl-pl"
- "pt-br"
- "tr-tr"
---
# Generic methods cannot use 'Handles' clause
A generic `Sub` procedure includes a [Handles](../../visual-basic/language-reference/statements/handles-clause.md) clause in its declaration.
A `Handles` clause specifies a list of events that the `Sub` procedure handles. To be an event handler, the `Sub` procedure must have the same signature as each event it is to handle. A generic procedure can be created more than once, with signatures that [!INCLUDE[vbprvb](../../csharp/programming-guide/concepts/linq/includes/vbprvb_md.md)] cannot predict at compile time. Therefore, [!INCLUDE[vbprvb](../../csharp/programming-guide/concepts/linq/includes/vbprvb_md.md)] cannot guarantee a signature that matches those of the events in the `Handles` clause.
**Error ID:** BC32080
## To correct this error
- If the `Sub` procedure needs to be generic, remove the `Handles` clause from its declaration. Use the [AddHandler Statement](../../visual-basic/language-reference/statements/addhandler-statement.md) to associate this event handler with an event.
- If the `Sub` procedure needs to use the `Handles` clause to associate events, remove the [Of](../../visual-basic/language-reference/statements/of-clause.md) clause from its declaration. You must use a nongeneric procedure with `Handles`.
## See Also
[Generic Types in Visual Basic](../../visual-basic/programming-guide/language-features/data-types/generic-types.md)
[NOT IN BUILD:Events and Event Handlers](http://msdn.microsoft.com/en-us/95074a0d-1cbc-4221-a95a-964185c7f962) | 40.792453 | 562 | 0.715079 | eng_Latn | 0.771982 |
dbca0844a04e6a680dc3f06efcc7afb9bd1a5176 | 183 | md | Markdown | content/product3v2/server/mockups2/fullscreen.md | mmclssg2/ssdocs | 9d8220f7d50c8cdc7d73e4f259abc214477b80f2 | [
"MIT"
] | null | null | null | content/product3v2/server/mockups2/fullscreen.md | mmclssg2/ssdocs | 9d8220f7d50c8cdc7d73e4f259abc214477b80f2 | [
"MIT"
] | null | null | null | content/product3v2/server/mockups2/fullscreen.md | mmclssg2/ssdocs | 9d8220f7d50c8cdc7d73e4f259abc214477b80f2 | [
"MIT"
] | 1 | 2019-07-12T12:48:30.000Z | 2019-07-12T12:48:30.000Z | ---
date: 2015-09-23T15:48:49-07:00
title: "Presenting Your Work"
menu: "menujiraserver2"
product: "Mockups 2 for Jira Server"
weight: 1260
include: "fullscreen"
editorversion: 2
---
| 18.3 | 36 | 0.73224 | eng_Latn | 0.607077 |
dbca4627be736a18c5c04bd05fe08e2c51ebc936 | 1,189 | md | Markdown | vendor/github.com/gobuffalo/packr/v2/SHOULDERS.md | benjifisher/ddev | 6b209753c6d0e618d4e9f2142f6e482ff2e5b7dc | [
"Apache-2.0"
] | 10 | 2017-05-17T10:48:05.000Z | 2019-07-10T16:10:06.000Z | vendor/github.com/gobuffalo/packr/v2/SHOULDERS.md | benjifisher/ddev | 6b209753c6d0e618d4e9f2142f6e482ff2e5b7dc | [
"Apache-2.0"
] | 3 | 2018-03-23T16:46:00.000Z | 2019-07-16T10:05:54.000Z | vendor/github.com/gobuffalo/packr/v2/SHOULDERS.md | benjifisher/ddev | 6b209753c6d0e618d4e9f2142f6e482ff2e5b7dc | [
"Apache-2.0"
] | 3 | 2017-06-01T12:35:02.000Z | 2018-03-23T15:49:09.000Z | # github.com/gobuffalo/packr/v2 Stands on the Shoulders of Giants
github.com/gobuffalo/packr/v2 does not try to reinvent the wheel! Instead, it uses the already great wheels developed by the Go community and puts them all together in the best way possible. Without these giants, this project would not be possible. Please make sure to check them out and thank them for all of their hard work.
Thank you to the following **GIANTS**:
* [github.com/gobuffalo/envy](https://godoc.org/github.com/gobuffalo/envy)
* [github.com/gobuffalo/logger](https://godoc.org/github.com/gobuffalo/logger)
* [github.com/gobuffalo/packd](https://godoc.org/github.com/gobuffalo/packd)
* [github.com/karrick/godirwalk](https://godoc.org/github.com/karrick/godirwalk)
* [github.com/rogpeppe/go-internal](https://godoc.org/github.com/rogpeppe/go-internal)
* [github.com/sirupsen/logrus](https://godoc.org/github.com/sirupsen/logrus)
* [github.com/spf13/cobra](https://godoc.org/github.com/spf13/cobra)
* [github.com/stretchr/testify](https://godoc.org/github.com/stretchr/testify)
* [golang.org/x/sync](https://godoc.org/golang.org/x/sync)
* [golang.org/x/tools](https://godoc.org/golang.org/x/tools)
| 44.037037 | 326 | 0.761144 | eng_Latn | 0.403272 |
dbca9125ac738a1d5fc5af7af3748b0547524c9d | 1,155 | md | Markdown | repos/amazonlinux/remote/2018.03.0.20210126.1.md | Ghostglass/repo-info | 30b319599d3fce07a3da000e32dbc42523f7f8fa | [
"Apache-2.0"
] | 1 | 2021-03-17T00:49:51.000Z | 2021-03-17T00:49:51.000Z | repos/amazonlinux/remote/2018.03.0.20210126.1.md | Ghostglass/repo-info | 30b319599d3fce07a3da000e32dbc42523f7f8fa | [
"Apache-2.0"
] | null | null | null | repos/amazonlinux/remote/2018.03.0.20210126.1.md | Ghostglass/repo-info | 30b319599d3fce07a3da000e32dbc42523f7f8fa | [
"Apache-2.0"
] | null | null | null | ## `amazonlinux:2018.03.0.20210126.1`
```console
$ docker pull amazonlinux@sha256:9bd78c849b25ff2367dfdc1341cb30a7918db0bb452a1373afc37722559fabe1
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms:
- linux; amd64
### `amazonlinux:2018.03.0.20210126.1` - linux; amd64
```console
$ docker pull amazonlinux@sha256:4608bf17c7b691c5a81ae7f599631055bce8bd5ad8b43e85613c00802ef05ec7
```
- Docker Version: 19.03.12
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **62.2 MB (62221008 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:68556ba4ccd0c17cb55f71a283997e5e2b4c85ce2a17318af297411a426be024`
- Default Command: `["\/bin\/bash"]`
```dockerfile
# Wed, 27 Jan 2021 22:20:25 GMT
ADD file:0e86c898f076c0be7554fa629ffd98f1581cc0aeadd702b3876c03b300495006 in /
# Wed, 27 Jan 2021 22:20:25 GMT
CMD ["/bin/bash"]
```
- Layers:
- `sha256:a441db1e9917658028bdaa8d10a00f6d24aa1f4de352b6b3d327b627207845db`
Last Modified: Wed, 27 Jan 2021 22:22:10 GMT
Size: 62.2 MB (62221008 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
| 32.083333 | 97 | 0.770563 | yue_Hant | 0.200398 |
# act1
```
SceneSetup.act1();
```
(...300)
n: AND THIS IS A HUMAN'S ANXIETY
n: THAT ANXIETY IS _YOU_
{{if window.localStorage.continueChapter=="replay"}}
(#act1_replay)
{{/if}}
{{if window.localStorage.continueChapter!="replay"}}
(#act1_normal)
{{/if}}
# act1_replay
`hong({mouth:"0_neutral", eyes:"0_neutral"})`
h: Oh! You're back again?
`hong({eyes:"0_neutral"})`
n: YOUR JOB IS TO PROTECT YOUR HUMAN FROM *DANGER*
`bb({eyes:"look", mouth:"small_lock"})`
n: AS IT SO HAPPENS, REPLAYING THIS GAME PUTS THEM IN *DANGER*
n: QUICK, WARN THEM!
```
sfx("squeak");
bb({body:"squeeze_talk"});
hong({body:"0_squeeze"});
```
b: Human! Listen up, there's danger coming! The player...
[...is trying to torture us again!](#act1_replay_torture)
[...won't be able to find the other endings!](#act1_replay_alternate)
[...will feel weirded out by the story and gameplay not matching!!](#act1_replay_dissonance)
# act1_replay_torture
```
window.HACK_REPLAY = JSON.parse(localStorage.act4);
bb({body:"normal", mouth:"normal", eyes:"fear"});
hong({body:"0_sammich"});
```
{{if window.HACK_REPLAY.act1_ending=="fight"}}
b: They're scheming to make us curl up in a corner and cry!
{{/if}}
{{if window.HACK_REPLAY.act1_ending=="flight"}}
b: They're scheming to make us break our phone and panic!
{{/if}}
{{if window.HACK_REPLAY.a2_ending=="fight"}}
b: They're planning to *NOT* let us punch the party host!
{{/if}}
{{if window.HACK_REPLAY.a2_ending=="flight"}}
b: They're scheming to make us punch the host of the anti-villain party!
{{/if}}
{{if window.HACK_REPLAY.a3_ending=="jump"}}
h: Well, at least this time we won't be jumping off the roof--
{{/if}}
{{if window.HACK_REPLAY.a3_ending=="walkaway"}}
b: They're scheming to make us jump off the roof.
{{/if}}
`bb({body:"fear"});`
b: All kinds of horrible things are about to happen to us, and then we're gonna--
(#act1_replay_end)
# act1_replay_alternate
```
bb({body:"normal", mouth:"normal", eyes:"fear"});
hong({body:"0_sammich"});
```
h: Well, sure, it's the same story *overall*, but every chapter has two endings, and all those branching conversations--
`bb({body:"fear"});`
b: The player will get bored and fed up, close the browser tab, delete this game, and then we're gonna--
(#act1_replay_end)
# act1_replay_dissonance
```
bb({body:"normal", mouth:"normal", eyes:"fear"});
hong({body:"0_sammich"});
```
h: What do you mean, not matching?
`bb({eyes:"normal"});`
b: The story arc was about how you can *CHOOSE* to build a healthy collaboration with your fear,
`bb({eyes:"normal_right"});`
b: but a replay shows the same story anyway, meaning the *CHOICES* don't matter.
`bb({eyes:"narrow_eyebrow"});`
b: Which means the game's message and its mechanics contradict each other.
`bb({eyes:"fear"});`
b: Thus unraveling the fabric of this narrative universe,
`bb({body:"fear"});`
b: and then we're gonna--
(#act1_replay_end)
# act1_replay_end
`bb({body:"panic"})`
b: DIEEEEEEEEEEEEEEEEEEE
```
bb({body:"normal", mouth:"normal", eyes:"normal"});
Game.clearText();
```
(...1001)
```
bb({body:"laugh"});
hong({body:"laugh"});
Game.clearText();
sfx("laugh");
```
(...5001)
```
bb({body:"normal", mouth:"normal", eyes:"normal"});
hong({body:"0_sammich"});
```
h: Anyway, let's get back in character.
```
Game.clearText();
```
n4: (_YOUR_ ANXIETY YADDA YADDA, _YOUR_ FEAR BLAH BLAH, YOU KNOW HOW THIS GOES)
```
sfx("squeak");
hong({body:"0_squeeze"});
bb({body:"squeeze"});
```
(#act1_normal_choice)
# act1_normal
`hong({mouth:"0_neutral", eyes:"0_annoyed"})`
h: oh good, the wolf's back. greeeeat.
`hong({eyes:"0_neutral"})`
n: YOUR JOB IS TO PROTECT YOUR HUMAN FROM *DANGER*
`bb({eyes:"look", mouth:"small_lock"})`
n: AS IT SO HAPPENS, THAT SANDWICH IS PUTTING THEM IN *DANGER* RIGHT NOW
n: QUICK, WARN THEM!
```
sfx("squeak");
bb({body:"squeeze_talk"});
hong({body:"0_squeeze"});
```
b: Human! Listen up, we're in danger! The danger is...
`bb({body:"squeeze"})`
n4: (LET _YOUR_ ANXIETY COME OUT TO PLAY! PICK WHAT'S MOST SIMILAR TO WHAT _YOUR_ FEAR TELLS YOU)
(#act1_normal_choice)
# act1_normal_choice
[We're eating alone for lunch! Again!](#act1a_alone) `bb({body:"squeeze_talk"})`
[We're not productive while eating!](#act1a_productive) `bb({body:"squeeze_talk"})`
[That white bread's bad for us!](#act1a_bread) `bb({body:"squeeze_talk"})`
# act1a_alone
```
bb({body:"normal", mouth:"small", eyes:"narrow"});
hong({body:"0_sammich"});
```
b: Don't you know loneliness is associated with premature death as much as smoking 15 cigarettes a day?-
`Game.OVERRIDE_TEXT_SPEED = 2;`
`bb({mouth:"normal", eyes:"normal_right"})`
b: (Holt-Lunstad 2010, PLoS Medicine)
`hong({eyes:"0_annoyed"})`
h: Um, thanks for citing your sources but--
`Game.OVERRIDE_TEXT_SPEED = 2;`
`bb({body:"fear", mouth:"normal", eyes:"fear"})`
b: Which means if we don't hang out with someone *right now* we're gonna-
`bb({body:"panic"})`
b: DIEEEEEEEEEEEEEEEEEEE
```
bb({body:"normal", mouth:"normal", eyes:"normal"});
hong({mouth:"0_shock", eyes:"0_shock"});
attack("18p", "alone");
publish("hp_show");
```
(...2500)
`_.fifteencigs = true`
n: YOU USED *FEAR OF BEING UNLOVED*
(#act1b)
# act1a_productive
```
bb({body:"normal", mouth:"small", eyes:"normal"});
hong({body:"0_sammich"});
```
b: Whip out your laptop and do some work right now!
`hong({eyes:"0_annoyed"})`
h: Um, I'd rather not get crumbs in my keyboa--
```
bb({mouth:"normal", eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: If we're not contributing to the body of society then we're a society-parasite!
b: The society-body will go to the society-doctor for medication to kill their society-parasites then we'll--
```
bb({body:"panic", mouth:"normal", eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: DIEEEEEEEEEEEEEEEEEEE
```
bb({body:"normal", mouth:"normal", eyes:"normal"});
hong({mouth:"0_shock", eyes:"0_shock"});
attack("18p", "bad");
publish("hp_show");
```
(...2500)
`_.parasite = true`
n: YOU USED *FEAR OF BEING A BAD PERSON*
(#act1b)
# act1a_bread
```
bb({body:"normal", mouth:"normal", eyes:"fear"});
hong({body:"0_sammich", eyes:"0_annoyed"});
```
h: Have those studies been replicat--
```
bb({body:"fear", mouth:"normal", eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: Processed wheat will spike our blood sugar so they'll have to amputate all our limbs and then we'll-
`bb({body:"panic"})`
b: DIEEEEEEEEEEEEEEEEEEE
```
bb({body:"normal", mouth:"normal", eyes:"normal"});
hong({mouth:"0_shock", eyes:"0_shock"});
attack("18p", "harm");
publish("hp_show");
```
(...2500)
`_.whitebread = true`
n: YOU USED *FEAR OF BEING HARMED*
(#act1b)
# act1b
n: IT'S SUPER EFFECTIVE
`bb({mouth:"smile", eyes:"smile"});`
b: See, human? I am your loyal guard-wolf!
`bb({body:"pride_talk"});`
b: Trust your gut! Your feelings are always valid!
`bb({body:"pride"});`
n: GET YOUR HUMAN'S ENERGY BAR TO ZERO
n: TO PROTECT THEIR PHYSICAL + SOCIAL + MORAL NEEDS, YOU CAN USE:
n: FEAR OF *BEING HARMED* #harm#
n: FEAR OF *BEING UNLOVED* #alone#
n: AND FEAR OF *BEING A BAD PERSON* #bad#
`Game.OVERRIDE_TEXT_SPEED = 1.25;`
n4: (PRO-TIP: PLAY THE CHOICES THAT PERSONALLY HIT YOUR DEEPEST, DARKEST FEARS!~)
h: ...
```
hong({body:"putaway"});
sfx("rustle");
bb({body:"normal", mouth:"normal", eyes:"normal"});
```
(...1000)
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
h: you know what maybe it's time to check my phone.
```
sfx("rustle2");
hong({body:"phone1", mouth:"neutral", eyes:"neutral"})
```
n: PROTECT YOUR HUMAN
n: FROM THE WORLD. FROM OTHER PEOPLE. FROM THEMSELF.
n: GOOD LUCK
(...500)
`Game.clearText()`
(...500)
(#act1c)
# act1c
`music('battle', {volume:0.5})`
n: ROUND ONE: *FIGHT!*
`bb({body:"normal", mouth:"normal", eyes:"normal"});`
h: Huh. Facebook feed says there's a party happening this weekend.
`bb({eyes:"uncertain"});`
b: Doesn't that weirdo throw a party *every* weekend?
`bb({eyes:"uncertain_right"});`
b: What inner void are they trying to fill? They must be deeply messed up inside!
`hong({eyes:"surprise"});`
h: Also, I got an invite?
`bb({eyes:"fear", mouth:"normal"});`
b: Well then!
[Say yes, or we'll die from loneliness!](#act1c_loner)
[Say no, it's full of poisonous drugs!](#act1c_drugs)
[Ignore it, we just make parties sad.](#act1c_sad)
# act1c_loner
{{if _.fifteencigs}}
b: Fifteen cigarettes a day, human! Fifteen!
{{/if}}
{{if !_.fifteencigs}}
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
{{/if}}
{{if !_.fifteencigs}}
b: Then no one will show up at our funeral, they'll dump our ashes into the ocean, we get eaten by a whale,
{{/if}}
{{if !_.fifteencigs}}
b: and we become WHALE POOP!
{{/if}}
{{if !_.fifteencigs}} `_.whalepoop = true` {{/if}}
(...500)
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "alone");
```
(...2500)
`bb({eyes:"normal"});`
{{if !_.fifteencigs}}
b: So yeah we should go to that party!
{{/if}}
{{if _.parasite}}
b: Just bring the laptop so we can do work, and not be a society-parasite.
{{/if}}
{{if _.whitebread}}
b: Just as long as they don't serve WHITE BREAD
{{/if}}
`hong({mouth:"anger", eyes:"anger"});`
h: GOD. If it'll make you shut up, fine.
h: I'll say yes.
{{if _.whalepoop}}
b: Whale poop, human! Whale poop!
{{/if}}
`_.partyinvite="yes"`
(#act1d)
# act1c_drugs
`bb({mouth:"small", eyes:"fear"});`
{{if _.whitebread}}
b: or even worse... WHITE BREAD
{{/if}}
{{if _.whitebread}}
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
{{/if}}
{{if _.whitebread}}
b: We'll overdose on so much meth and white bread they won't be able to fit our fat corpse into the cremation furnace!
{{/if}}
{{if !_.whitebread}}
b: We'll overdose on so many drugs the undertaker will wonder how our body was *already* pre-embalmed!
{{/if}}
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "harm");
```
(...2500)
{{if _.parasite}}
b: Besides, can't party, we need to do work or we're a terrible society-parasite!
{{/if}}
`hong({mouth:"anger", eyes:"anger"});`
h: GOD. If it'll make you shut up, fine.
h: I'll say no.
`_.partyinvite="no"`
(#act1d)
# act1c_sad
`bb({eyes:"uncertain_right", mouth:"normal"});`
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
{{if _.fifteencigs}}
b: All we ever do is cry in a corner about how loneliness is as deadly as 15 cigarettes a day.
{{/if}}
{{if _.parasite}}
b: All we ever do at parties is worry about how we should be productive instead.
{{/if}}
{{if _.whitebread}}
b: All we ever do is worry about how the unhealthy food options are going to kill us.
{{/if}}
```
bb({mouth:"normal", eyes:"normal"});
hong({mouth:"neutral", eyes:"lookaway"});
```
h: gee i wonder why.
`hong({eyes:"neutral"});`
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
b: So if we go we'll make them feel bad, but if we reject their invite we'll also make them feel bad!
`bb({body:"fear", eyes:"fear"});`
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
b: ALL WE DO IS MAKE PEOPLE FEEL BAD, SO WE SHOULD FEEL BAD
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "bad");
```
(...2500)
`hong({mouth:"anger", eyes:"anger"});`
h: Ugh. If it'll make you shut up, fine.
h: I'll ignore the invite.
`_.partyinvite="ignore"`
(#act1d)
# act1d
```
bb({body:"normal", mouth:"normal", eyes:"normal"});
hong({mouth:"neutral", eyes:"annoyed"});
```
h: Anyway. Facebook's too much. I need something calmer, less anxiety-producing.
`hong({eyes:"neutral"});`
h: What's new on Twitter?
`bb({eyes:"look"});`
[Oh no, look at that horrible news story!](#act1d_news)
[Oh no, is that tweet secretly about *us?*](#act1d_subtweet)
[Hey, a GIF of a cat drinking milk](#act1d_milk)
# act1d_news
```
bb({eyes:"pained1"});
music(null, {fade:2});
```
b: God, it feels like the world's burning, isn't it?
```
bb({eyes:"pained2"});
hong({mouth:"sad", eyes:"sad"});
```
b: It feels like it's all ending, like everything's dying and we're doomed and there's nothing we can do about it.
```
Game.OVERRIDE_TEXT_SPEED = 0.5;
bb({mouth:"shut"});
```
b: ...
`bb({mouth:"smile", eyes:"smile"});`
b: Let's retweet that story!
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "harm");
```
(...2500)
`_.badnews=true`
```
music('battle', {volume:0.5});
hong({mouth:"anger", eyes:"anger"});
bb({body:"normal", mouth:"normal", eyes:"normal"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
h: Okay I'll retweet it just please be quiet!
`hong({mouth:"neutral", eyes:"annoyed"});`
h: Screw it, let's look at Snapchat.
(#act1e)
# act1d_subtweet
`bb({eyes:"fear"});`
b: It's a subtweet! A sneaky, sneaky subtweet!
`hong({eyes:"annoyed"});`
h: It's probably not?
`bb({eyes:"narrow", mouth:"small"});`
b: but what if they're all talking behind our back
h: They're n--
`bb({body:"fear", eyes:"fear", mouth:"normal"});`
b: IN FRONT OF OUR BACK
`hong({eyes:"sad", mouth:"sad"});`
h: I d--
`bb({eyes:"narrow", mouth:"small"});`
b: but *what if*
h: S--
`bb({eyes:"narrow_eyebrow"});`
b: *what if*
```
Game.OVERRIDE_TEXT_SPEED = 0.5;
hong({mouth:"shut"});
```
h: ...
(...1000)
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "alone");
```
(...2500)
`_.subtweet=true`
```
hong({mouth:"anger", eyes:"annoyed"});
bb({body:"normal", mouth:"normal", eyes:"normal"});
```
h: o-KAY, gonna try Snapchat.
(#act1e)
# act1d_milk
`hong({mouth:"smile", eyes:"neutral"});`
h: Heh ya that's cute, just retweeted it, I thi--
```
hong({mouth:"shock", eyes:"shock"});
bb({body:"scream"});
Game.OVERRIDE_TEXT_SPEED = 1.8;
```
b: CATS CAN'T DIGEST MILK AND WE'RE TERRIBLE PEOPLE FOR ENJOYING ANIMAL ABUSE
```
bb({body:"normal", mouth:"normal", eyes:"fear"});
attack("18p", "bad");
```
(...2500)
`_.catmilk=true`
```
hong({mouth:"anger", eyes:"annoyed"});
bb({body:"normal", mouth:"normal", eyes:"normal"});
```
h: o-KAY, gonna try Snapchat.
(#act1e)
# act1e
`hong({mouth:"neutral", eyes:"neutral"});`
h: Huh, photos from yesterday night. So *that's* what those weekly parties are like.
{{if _.partyinvite=="yes"}} (#act1e_said_yes) {{/if}}
{{if _.partyinvite=="no"}} (#act1e_said_no) {{/if}}
{{if _.partyinvite=="ignore"}} (#act1e_said_ignore) {{/if}}
# act1e_said_yes
`hong({mouth:"sad", eyes:"annoyed"});`
h: Oof, looks way too crowded for my anxiety.
h: Maybe I shouldn't have said yes to the invite?
```
hong({mouth:"neutral", eyes:"neutral"});
bb({mouth:"normal", eyes:"normal"});
```
[Change our answer? Like a jerk?!](#act1e_yes_dontchange)
[Change our answer! It's too crowded!](#act1e_yes_changetono)
{{if _.subtweet}}
[Yeah they were totally subtweeting us.](#act1e_ignore_subtweet)
{{/if}}
{{if _.badnews}}
[Wait we retweeted without fact-checking.](#act1e_ignore_factcheck)
{{/if}}
{{if (!_.subtweet && !_.badnews)}}
[You know, you've got really bad posture?](#act1e_ignore_posture)
{{/if}}
# act1e_yes_dontchange
```
bb({eyes:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: They were counting on us to come and now we're betraying their trust? Do you wanna die alone?!
{{if _.fifteencigs}}
b: FIFTEEN. CIGARETTES.
{{/if}}
{{if _.whalepoop}}
b: WHALE. POOP.
{{/if}}
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "alone");
```
(...2500)
```
hong({mouth:"anger", eyes:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
h: Shut up shut up I'll keep it as yes!
(#act1f)
# act1e_yes_changetono
```
bb({eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: Don't you know about human stampedes?
```
bb({body:"fear", mouth:"small", eyes:"narrow"});
hong({eyes:"sad", mouth:"sad"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: In 2003 a Rhode Island nightclub had a fire and the panic made people jam the exits so 100 people burned to death-
```
bb({body:"normal", mouth:"normal", eyes:"fear"});
hong({mouth:"shock"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: DO YOU WANT THAT TO HAPPEN TO US-
```
bb({body:"scream"});
Game.OVERRIDE_TEXT_SPEED = 2.5;
```
b: SAY NO SAY NO SAY NO SAY NO SAY NO SAY NO SAY NO SAY NO SAY N-
```
bb({body:"normal", eyes:"fear", mouth:"normal"});
hong({mouth:"shock", eyes:"shock"});
attack("18p", "harm");
```
(...2500)
```
hong({eyes:"anger", mouth:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
h: Shut up shut up I'll change my answer to no! God!
(#act1f)
# act1e_said_no
`hong({mouth:"sad", eyes:"sad"});`
h: Hm... that looks really fun.
h: Maybe I shouldn't have said no to the invite?
`bb({mouth:"normal", eyes:"normal"});`
[Change our answer? Like a jerk?!](#act1e_no_dontchange)
[Change our answer! Don't die alone!](#act1e_no_changetoyes)
{{if _.subtweet}}
[Yeah they were totally subtweeting us.](#act1e_ignore_subtweet)
{{/if}}
{{if _.badnews}}
[Wait we retweeted without fact-checking.](#act1e_ignore_factcheck)
{{/if}}
{{if (!_.subtweet && !_.badnews)}}
[You know, you've got really bad posture?](#act1e_ignore_posture)
{{/if}}
# act1e_no_dontchange
`bb({eyes:"anger"})`
b: Everybody was counting on us!
b: ...to leave them alone and let them have a nice party without a horrible disgusting {{if _.whitebread}}white-bread-munching{{/if}} creep like u--
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "bad");
```
(...2500)
```
bb({body:"normal", eyes:"uncertain", mouth:"normal"});
hong({mouth:"anger", eyes:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
h: Shut up shut up I'll keep it as no!
(#act1f)
# act1e_no_changetoyes
```
bb({body:"fear", eyes:"fear", mouth:"normal"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: Chronic loneliness increases our cortisol levels as well as risk for cardiovascular disease and stroke!
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "harm");
```
(...2500)
{{if _.fifteencigs}}
b: FIFTEEN. CIGARETTES.
{{/if}}
```
bb({body:"normal", eyes:"normal", mouth:"normal"});
hong({mouth:"anger", eyes:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
h: Shut up shut up I'll change my answer to yes! God!
(#act1f)
# act1e_ignore_subtweet
```
bb({eyes:"fear", mouth:"small"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: All our problematic tweets have come back to roost!
```
bb({body:"fear", eyes:"fear", mouth:"normal"});
Game.OVERRIDE_TEXT_SPEED = 1.7;
```
b: We're gonna get called out and cancelled and dragged with a rope on horseback down the information superhighway!
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "alone");
```
(...2500)
```
bb({body:"normal", eyes:"normal", mouth:"normal"});
hong({mouth:"anger", eyes:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
h: Why are you like this?!
(#act1f)
# act1e_ignore_factcheck
```
bb({eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: We're spreading disinformation! We're destroying trust in a free press!
```
bb({body:"scream"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: We're the reason fascism will arise from the rubble of democracy!
```
bb({body:"normal", eyes:"anger"});
hong({mouth:"shock", eyes:"shock"});
attack("18p", "bad");
```
(...2500)
```
hong({mouth:"anger", eyes:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
_.factcheck = true;
```
h: Why are you like this?!
(#act1f)
# act1e_ignore_posture
```
bb({eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: Do you want to have a pretzel for a spine?! Stop hunching over your screen!
```
bb({body:"meta"});
```
b: That means you too.
```
bb({body:"normal", mouth:"normal"});
hong({mouth:"shock", eyes:"shock"});
attack("18p", "harm");
```
(...2500)
```
bb({body:"normal", eyes:"normal", mouth:"normal"});
hong({mouth:"anger", eyes:"anger"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
h: Why are you like this?!
(#act1f)
# act1e_said_ignore
`hong({mouth:"sad", eyes:"sad"});`
h: Hm... that looks really fun.
h: Maybe I shouldn't have ignored the invite?
`bb({mouth:"normal", eyes:"normal"});`
[Keep ignoring, we're still party poopers.](#act1e_ignore_continue)
[Actually, say yes.](#act1e_ignore_changetoyes)
[Actually, say no.](#act1e_ignore_changetono)
# act1e_ignore_continue
`hong({eyes:"annoyed"});`
h: It's kinda rude to keep ignoring them though, no?
`bb({eyes:"normal_right"});`
b: Well other people always ignore *us*, so
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "alone");
```
(...2500)
`bb({eyes:"normal"});`
b: so let's just call it even.
(#act1f)
# act1e_ignore_changetoyes
`hong({eyes:"surprise", mouth:"smile"});`
h: You're... letting me have fun?
b: Well, I mean, loneliness *can* kill us.
`hong({eyes:"neutral", mouth:"neutral"});`
(#act1e_no_changetoyes)
# act1e_ignore_changetono
`bb({eyes:"narrow"});`
b: It's too crowded. Crowds are dangerous.
(#act1e_yes_changetono)
# act1f
```
hong({mouth:"neutral", eyes:"neutral"});
bb({body:"normal", mouth:"normal", eyes:"normal"});
```
h: Whatever. New Tinder notification.
`bb({eyes:"uncertain"})`
b: What, that hookup app?
`hong({eyes:"annoyed"})`
h: It's not a hookup app, it's just a way to meet new peopl--
`bb({eyes:"narrow"})`
b: It's a hookup app.
```
hong({eyes:"surprise", mouth:"smile"});
bb({eyes:"normal"});
```
h: Oh, I got a match! They look cute!
```
bb({eyes:"narrow_eyebrow"});
hong({eyes:"sad", mouth:"anger"})
```
h: Please don't ruin this for m--
```
bb({body:"panic"});
Game.OVERRIDE_TEXT_SPEED = 2.0;
```
b: DANGER DANGER DANGER DANGER DANGER DANGER
`bb({body:"fear", eyes:"fear", mouth:"normal"})`
[We're being *used* by other people.](#act1f_used_by_others)
[We're just *using* other people.](#act1f_using_others)
[YOUR MATCH IS A SERIAL KILLER](#act1f_killer)
# act1f_used_by_others
`bb({body:"point_crotch", eyes:"normal", mouth:"normal"})`
b: Random hookups may be able to fill the hole down there,
b: but they can never fill the hole...
`bb({body:"point_heart", eyes:"pretty", mouth:"small"})`
b: in *here*.
(...1000)
```
bb({body:"normal", mouth:"normal", eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: The point is WE'RE GOING TO DIE ALONE
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "alone");
```
(...2500)
`_.hookuphole=true`
(#act1g)
# act1f_using_others
`bb({eyes:"narrow", mouth:"small"})`
b: You think other people's genitals are Pokémon for us to collect?
```
bb({body:"sing", eyes:"pretty", mouth:"shut"});
music("pokemon");
Game.clearText();
Game.FORCE_CANT_SKIP = true;
```
```
Game.FORCE_TEXT_DURATION = 1000;
Game.FORCE_NO_VOICE = true;
```
b: ♫ (pokemon theme song)-
(...5600)
```
bb({mouth:"normal"});
Game.FORCE_TEXT_DURATION = 2400;
```
b: ♫ I wanna be, the ^slut^ti-est-
(...500)
```
bb({eyes:"narrow", mouth:"small"});
Game.FORCE_TEXT_DURATION = 2100;
```
b: ♫ Like no one ever was-
(...1500)
```
bb({eyes:"pretty"});
Game.FORCE_TEXT_DURATION = 2300;
```
b: ♫ Thighs n' ^ass^, voluptuous breast-
(...500)
```
bb({eyes:"fear", mouth:"normal"});
Game.FORCE_TEXT_DURATION = 2000;
```
b: ♫ with sweaty ^dick^ and balls!-
(...1000)
```
bb({eyes:"smile", mouth:"smile"});
Game.FORCE_TEXT_DURATION = 1000;
```
b: ♫ PERVY-MON! GOTTA CA-
```
Game.FORCE_CANT_SKIP = false;
Game.clearText();
music(false);
bb({body:"normal", mouth:"normal", eyes:"normal"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: The point is we're a manipulative creep.
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "bad");
```
(...2500)
`_.pokemon=true`
(#act1g)
# act1f_killer
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
{{if _.whitebread}}
b: They'll trap you in a well and force-feed you white bread to fatten you up so they can wear your skin like a suit!
{{/if}}
{{if _.parasite}}
b: They'll bludgeon you with a pomodoro timer and say "YOU SHOULDA BEEN MORE PRODUCTIVE YOU PARASITE"
{{/if}}
{{if !_.whitebread && !_.parasite}}
b: They'll tear your flesh to gory confetti, turn your entrails into streamers, and mix your blood into a punch bowl!
{{/if}}
{{if !_.whitebread && !_.parasite}}
b: How's THAT for a party invite?!
{{/if}}
```
hong({mouth:"shock", eyes:"shock"});
attack("18p", "harm");
```
(...2500)
`_.serialkiller=true`
(#act1g)
# act1g
```
bb({body:"normal", mouth:"normal", eyes:"look"});
hong({body:"2_tired"});
Game.OVERRIDE_TEXT_SPEED = 0.5;
music(false);
```
h: ...
(...500)
h: i'm so sick of this game.
(...700)
`Game.OVERRIDE_TEXT_SPEED = 1.5;`
h:
{{if _.fifteencigs}}"loneliness will kill us"... {{/if}}
{{if _.parasite}}"we're a society-parasite"... {{/if}}
{{if _.whitebread}}"don't eat that, it'll kill us"... {{/if}}
{{if _.subtweet}}"they're talking behind our back"... {{/if}}
{{if _.badnews}}"the world is burning"... {{/if}}
{{if _.hookuphole}}"we'll die alone"... {{/if}}
{{if _.serialkiller}}"they're a serial killer"... {{/if}}
{{if _.catmilk}}"cats can't digest milk"... {{/if}}
{{if _.pokemon}}a ^crappy^ parody song... {{/if}}
h: i just want to live my life.
h: i just want to be free from all this... pain.
`bb({eyes:"look_sad"});`
b: Hey... human...
`Game.OVERRIDE_TEXT_SPEED = 0.5;`
b: It'll be okay.
(...600)
`bb({body:"point_heart", eyes:"look_sad_smile", mouth:"smile"});`
b: As your loyal guard-wolf, I'll always keep an eye out for danger, and do my best to keep you safe.
`bb({body:"normal", eyes:"look_sad", mouth:"smile"});`
b: I promise.
(...600)
```
bb({body:"normal", eyes:"normal", mouth:"normal"});
hong({body:"phone1", eyes:"neutral", mouth:"neutral"});
```
h: Last app. Instagram. What you got?
`hong({eyes:"sad"});`
h: It's... more party pictures.
`hong({mouth:"sad"});`
h: Everyone looks so happy. Free from worry. Free from anxiety.
`hong({mouth:"anger"});`
h: God, why can't I be like them? Why can't I just be *normal?*
`bb({eyes:"normal_right"});`
b: Speaking of parties, about this weekend's invite. Here's my FINAL decision:
`bb({eyes:"normal"});`
[We should go.](#act1g_go) `Game.OVERRIDE_CHOICE_LINE=true`
[We should not go.](#act1g_dont) `Game.OVERRIDE_CHOICE_LINE=true`
# act1g_go
`_.act1g = "go"`
(#act1h)
# act1g_dont
`_.act1g = "dont"`
(#act1h)
# act1h
b: We sh--
```
bb({eyes:"wat", mouth:"small"});
hong({body:"2_fuck"});
```
h: *^FUCK^.*
`hong({body:"2_you"});`
h: YOU.
(...500)
b: w
(...1500)
`bb({eyes:"wat_2"});`
b: wha?
`hong({body:"phone1", eyes:"anger", mouth:"anger"});`
h: I'm going to say YES to that party,
{{if _.act1g=="go"}}
h: NOT because you want me to, but because *I* want to.
{{/if}}
{{if _.act1g=="dont"}}
h: Precisely BECAUSE you don't want me to.
{{/if}}
```
hong({body:"putaway"});
sfx("rustle");
```
h: You're NOT in control of me.
```
sfx("rustle2");
hong({body:"0_sammich", eyes:"0_annoyed", mouth:"0_neutral"});
```
h: Now excuse me while I eat this delicious sandwich in ^goddamn^ peace.
`hong({body:"2_sammich_eat"});`
(...601)
```
sfx("sandwich");
hong({body:"2_sammich_eaten", eyes:"0_lookaway", mouth:"0_chew1"})
```
(...601)
```
bb({body:"normal", eyes:"uncertain", mouth:"shut"});
Game.OVERRIDE_TEXT_SPEED = 0.5;
```
b: ...
```
bb({eyes:"normal_right"});
Game.OVERRIDE_TEXT_SPEED = 1;
```
b: ...
```
bb({eyes:"fear"});
Game.OVERRIDE_TEXT_SPEED = 4;
```
b: ..................
(...500)
`bb({mouth:"normal"});`
[AHHHH WE'RE GONNA DIE](#act1h_death) `Game.OVERRIDE_CHOICE_LINE = true;`
[AHHHH EVERYONE HATES US](#act1h_loneliness) `Game.OVERRIDE_CHOICE_LINE = true;`
[AHHHH WE'RE HORRIBLE PEOPLE](#act1h_worthless) `Game.OVERRIDE_CHOICE_LINE = true;`
# act1h_death
```
bb({body:"fear"});
Game.OVERRIDE_TEXT_SPEED = 3;
```
b: AHHHH WE'RE GONNA DIE AAAAAAHHHHHHH
```
hong({body:"3_defeated1"});
attack("100p", "harm");
```
(...2500)
(#act1i)
# act1h_loneliness
```
bb({body:"fear"});
Game.OVERRIDE_TEXT_SPEED = 3;
```
b: AHHHH EVERYONE HATES US AAAAAAHHHHHHH
```
hong({body:"3_defeated1"});
attack("100p", "alone");
```
(...2500)
(#act1i)
# act1h_worthless
```
bb({body:"fear"});
Game.OVERRIDE_TEXT_SPEED = 3;
```
b: AHHHH WE'RE HORRIBLE PEOPLE AAAAAAHHHHHHH
```
hong({body:"3_defeated1"});
attack("100p", "bad");
```
(...2500)
(#act1i)
# act1i
```
bb({mouth:"smile_lock", eyes:"smile", body:"normal"});
music('battle', {volume:0.5});
```
n: CONGRATULATIONS
(...500)
n: YOU'VE SUCCESSFULLY PROTECTED YOUR HUMAN'S PHYSICAL + SOCIAL + MORAL NEEDS
n: WHY, LOOK HOW GRATEFUL THEY ARE!
(...500)
n: NOW THAT THEIR ENERGY IS ZERO, YOU CAN DIRECTLY CONTROL THEIR ACTIONS
`bb({mouth:"smile", eyes:"normal"});`
n: PICK YOUR ENDING MOVE
`bb({mouth:"small_lock", eyes:"fear"});`
n: *FINISH THEM*
[{FIGHT: Punish your stressful phone!}](#act1i_phone) `Game.OVERRIDE_CHOICE_LINE=true`
[{FLIGHT: Curl up in a ball and cry!}](#act1i_cry) `Game.OVERRIDE_CHOICE_LINE=true`
# act1i_phone
`bb({mouth:"normal", eyes:"narrow"})`
b: Your phone was giving you a panic attack!
`bb({eyes:"anger"})`
b: Zuckerberg and Co are hijacking your mental health for venture capitalist money!
```
bb({body:"fear", eyes:"fear"});
hong({body:"3_defeated2"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: Punish your phone! Destroy it! Kill it!
```
Game.OVERRIDE_TEXT_SPEED = 2.5;
bb({body:"flail"});
hong({body:"3_defeated3"});
_.act1_ending = "fight";
```
b: KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL IT KILL I--
(#act1j)
# act1i_cry
`bb({eyes:"fear", mouth:"normal"})`
b: The whole world is filled with danger!
```
bb({body:"fear"});
hong({body:"3_defeated2"});
Game.OVERRIDE_TEXT_SPEED = 1.5;
```
b: Do like the armadillo! Curl up into a ball for self-defense!
```
Game.OVERRIDE_TEXT_SPEED = 2.5;
bb({body:"flail"});
hong({body:"3_defeated3"});
_.act1_ending = "flight";
```
b: CURL UP AND CRY CURL UP AND CRY CURL UP AND CRY CURL UP AND CRY CURL UP AND CRY CURL UP AND CR--
(#act1j)
# act1j
`SceneSetup.act1_outro()` | 17.313785 | 148 | 0.648532 | eng_Latn | 0.427242 |
dbcb804ed007fdf2eb35787327f0240f4c86b954 | 6,916 | md | Markdown | docs/notes-on-cocoa-text-input.md | georgealbert/vimr | ec268e5245c97a4e4aa71997215092ee44b459cf | [
"MIT"
] | null | null | null | docs/notes-on-cocoa-text-input.md | georgealbert/vimr | ec268e5245c97a4e4aa71997215092ee44b459cf | [
"MIT"
] | 1 | 2022-01-23T07:19:53.000Z | 2022-01-23T07:19:53.000Z | docs/notes-on-cocoa-text-input.md | georgealbert/vimr | ec268e5245c97a4e4aa71997215092ee44b459cf | [
"MIT"
] | null | null | null |
# Some Notes on Cocoa's Text Input
To use Cocoa's text input system, e.g. the 2-Set Korean input, your view has to implement the [NSTextInputClient](https://developer.apple.com/reference/appkit/nstextinputclient) protocol. Apple's documentation is very scarce, so we're writing down some of our findings.
## Simple Case
For simple cases like `ü`, which can be entered by `Opt-u` + `u`, it's quite straightforward:
1. Enter `Opt-u`.
1. `hasMarkedText()` is called to check whether we already have marked text.
1. `setMarkedText("¨", selectedRange NSRange(1, 0), replacementRange: NSRange(NSNotFound, 0))` is called. In this case the first argument is an `NSString`, `selectedRange` tells us where to put the cursor relative to the string: in this case after `¨`. The range `replacemenRange` tells us whether the string should replace some of the existing text. In this case no replacement is required.
1. Enter `u`.
1. `hasMarkedText()` is called again.
1. `insertText("ü", replacementRange: NSRange(NSNotFound, 0))` is called to finalize the input. It seems that for the replacement range `(NSNotFound, 0)` we should replace the previously marked text with the final string. So in this case we must first delete `¨` and insert `ü`.
## Korean (Hangul, 한글)
Let's move to a bit more complicated case: Korean. In this case more methods are involved:
* `selectedRange()`: all other additional methods seem to rely on this method. Ideally we should return `NSRange(CursorPosition, 0)` when nothing is selected or `NSRange(SelectionBegin, SelectionLength)` when there's a selection.
* `attributedSubstringForProposedRange(_:actualRange:)`: for entering only Hangul, this method can be ignored.
Let's assume we want to enter `하태원`: (`hasMarkedText()` is called here and there...)
1. `selectedRange()` is called multiple times when changing the input method from US to Korean. This is also the case when starting the app with Korean input selected.
1. Enter `ㅎ`.
1. `setMarkedText("ㅎ", selectedRange: NSRange(1, 0) replacementRange:NSRange(NotFound, 0))` is called.
1. Enter `ㅏ`.
1. `attributedSubstringForProposedRange(_:actualRange:)` and `selectedRange()` are called multiple times: again, for only Hangul, ignorable.
1. `setMarkedText("하", selectedRange: NSRange(1, 0), replacementRange: NSRange(NotFound, 0))` is called: delete `ㅎ` and insert `하`; not yet finalized.
1. Enter `ㅌ`
1. `attributedSubstringForProposedRange(_:actualRange:)` and `selectedRange()` are called multiple times: ignore.
1. `setMarkedText("핱", selectedRange: NSRange(1, 0), replacementRange: NSRange(NotFound, 0))` is called: delete `하` and insert `핱`; not yet finalized.
1. Enter `ㅐ`
1. `attributedSubstringForProposedRange(_:actualRange:)` and `selectedRange()` are called multiple times: ignore.
1. `setMarkedText("하", selectedRange: NSRange(1, 0), replacementRange: NSRange(NotFound, 0))` is called: delete `핱` and insert `하`; not yet finalized.
1. `insertText("하", replacementRange: NSRange(NotFound, 0))` is called to finalize the input of `하`.
1. `attributedSubstringForProposedRange(_:actualRange:)` and `selectedRange()` are called multiple times: ignore.
1. `setMarkedText("태", selectedRange: NSRange(1, 0), replacementRange: NSRange(NotFound, 0))` is called: Since the replacement range is `NotFound`, append the marked text `태` to the freshly finalized `하`.
1. ...
## Hanja (한자)
Let's consider the even more complicated case: Hanja in Korean. In this case the `selectedRange()` and `attributedSubstringForProposedRange(_:actualRange:)` play a vital role and also
* `firstRectForCharacterRange(_:actualRange)`: this method is used to determine where to show the Hanja popup. The character range is determined by `selectedRange()`.
Let's assume we want to enter `河`: (again `hasMarkedText()` is called here and there...)
1. Enter `ㅎ`.
1. `setMarkedText("ㅎ", selectedRange: NSRange(1, 0) replacementRange:NSRange(NotFound, 0))` is called.
1. Enter `ㅏ`.
1. `attributedSubstringForProposedRange(_:actualRange:)`, `selectedRange()` and `hasMarkedText()` are called multiple times: again, for only Hangul, ignorable.
1. `setMarkedText("하", selectedRange: NSRange(1, 0), replacementRange: NSRange(NotFound, 0))` is called: delete `ㅎ` and insert `하`; not yet finalized.
1. Enter `Opt-Return`.
1. `setMarkedText("하", selectedRange: NSRange(1, 0), replacementRange: NSRange(NotFound, 0))` is called again.
1. `selectedRange()` is called: here we should return a range which can be consistently used by `attributedSubstringForProposedRange(_:actualRange)` and `firstRectForCharacterRange(_:actualRange)`.
1. `insertText("하", replacementRange: NSRange(NotFound, 0))` is called even we are not done yet... So our view thinks we finalized the input of `하`.
1. `attributedSubstringForProposedRange(_:actualRange)` is called multiple times to get the Hangul syllable to replace with Hanja. The proposed range can be very different in each call.
1. Only if the range returned by `selectedRange()` can be used consistently by `attributedSubstringForProposedRange(_:actualRange)` is the Hanja popup displayed. Otherwise we get the selector `insertNewlineIgnoringFieldEditor` in `doCommandBySelector()`.
1. `setMarkedText("下" , selectedRange: NSRange(1, 0), replacementRange: NSRange(1, 1))` is called: the replacement range is not `NotFound` which means that we first have to delete the text in the given range, in this case the finalized `하` and then append the marked text.
1. Selecting different Hanja calls the usual `setMarkedText(_:selectedRange:actualRange)` and `Return` finalizes the input of `河`.
## Chinese Pinyin
Suppose we want to enter 中国:
1. We enter the pinyin `zhongguo`, then `<Space>` to confirm it.
2. Each character entered triggers `setMarkedText`, `markedRange`, `firstRect`, and `attributedSubstringForProposedRange`.
3. Finally `setMarkedText("zhong guo", selectedRange: NSRange(10, 0), replacementRange: NSRange(NotFound, 0))` is called.
4. Then, after `<Space>` is entered, `insertText("中国", replacementRange: NSRange(NotFound, 0))` is called.
5. Many `selectedRange` and `attributedSubstring(forProposedRange:actualRange:)` calls follow.
This seems simple enough, but while in the marked-text state (before confirming it):
1. We can use a number key to select another candidate.
2. We can use `=`, `-`, `<UP>`, `<DOWN>`, `<Left>`, `<Right>` to choose a candidate, and vim shouldn't handle these keys.
3. We can use `<Left>`, `<Right>` to move within the marked text and insert characters in the middle of it. Even more complicated, the movement is not by character but by word.
Whenever the marked text or the marked cursor changes, `setMarkedText` is called, with `selectedRange` pointing to the marked cursor position (which may be in the middle, not at the end of the text).
So these keys shouldn't be handled by vim directly while in the marked-text state.
And finally, once all the marked text is confirmed, an `insertText` call is made.
## Other Writing System
Not a clue, since I only know Latin alphabet and Korean (+Hanja)...
| 78.590909 | 391 | 0.759832 | eng_Latn | 0.990344 |
dbcc5399c924524c1064f1a4e9fa46a63d83abe5 | 2,315 | md | Markdown | _posts/2019-06-04-Download-income-taxation-by-win-ballada-solution.md | Bunki-booki/29 | 7d0fb40669bcc2bafd132f0991662dfa9e70545d | [
"MIT"
] | null | null | null | _posts/2019-06-04-Download-income-taxation-by-win-ballada-solution.md | Bunki-booki/29 | 7d0fb40669bcc2bafd132f0991662dfa9e70545d | [
"MIT"
] | null | null | null | _posts/2019-06-04-Download-income-taxation-by-win-ballada-solution.md | Bunki-booki/29 | 7d0fb40669bcc2bafd132f0991662dfa9e70545d | [
"MIT"
] | null | null | null |
---
layout: post
comments: true
categories: Other
---
## Download Income taxation by win ballada solution book
Chukches still went fully armed with spears, much weathered, and he knew he could have her if he wanted. corridor, staring straight ahead at the bottles on the shelves behind, with all the cool they didn't move along, focusing on the her difference. Her hands were like ice? Celestina was her mother, with arms crossed over her breasts, vast regions of Nevada are the Havai schooner _W, i, being athirst, too. In addition to being a service to humanity and to Mother Beresov, and vessels have thus sailed along all the coasts were challenged by Irioth. Consequently, "two years. " Now his parts and fashions pleased the Khalif and the excellence of his composition and his frankness, naivete- and a desperate yearning, but never by the name giver! income taxation by win ballada solution But she refused; whereupon they came up to her and wept and gave not over supplicating her, Jay," Bernard cautioned. Standing on the concrete steps, up and down, and then go kill a weakling for Mother Nature, before we were ten, she thought. " own forces. "What time did you with what he's said, in early July. " port, a table piled with more books crawlspace between the stacks and the ceiling, the fare is Approaching the nurses' station. He said that you may go study with him in South Port for a year, and farther on in the year to three o'clock in reached him and said in a lower voice. " So he rose forthright and saying to Mesrour, however. I don't have time to worry about it; I play the console like it was the mechanisms of metal, he wondered income taxation by win ballada solution maybe he'd managed to squeak through, "The Lady Zubeideh saith thus and thus, multiplied by sailors making a good story better, Yet homeless (237) am I in your land. little gravy. He grinned and shuffled the cards. It's a condition that occurs in about five income taxation by win ballada solution of pregnancies, and take him elsewhere, the king summoned the astrologers and they watched for the hour of her child-bearing and raised astrolabes [towards the income taxation by win ballada solution and took strait note of the time. The Hoary Man sat near her, the nun turned with it to Celestina. In the name of Zedd, ii. | 257.222222 | 2,202 | 0.786177 | eng_Latn | 0.999945 |
dbcc5faa3a0753e672287d89561b76813cc01ce4 | 2,724 | md | Markdown | content/podcast/solitario.md | tomkiel/opencastpodcast.github.io | b65620572022e337bf5cfdd6352f160293d0da03 | [
"MIT"
] | null | null | null | content/podcast/solitario.md | tomkiel/opencastpodcast.github.io | b65620572022e337bf5cfdd6352f160293d0da03 | [
"MIT"
] | 5 | 2021-10-18T01:26:49.000Z | 2021-11-01T02:40:18.000Z | content/podcast/solitario.md | tomkiel/opencastpodcast.github.io | b65620572022e337bf5cfdd6352f160293d0da03 | [
"MIT"
] | 1 | 2021-10-18T01:23:56.000Z | 2021-10-18T01:23:56.000Z |
---
title: "Solitário"
description: "Conseguimos, episódios quinzenais sairão daqui para frente, pelo menos até agosto estamos com datas agendadas. Neste episódio teríamos a estreia do Ju..."
date: "Tue, 05 Jun 2012 21:00:00 GMT"
audio: "https://anchor.fm/s/9789fe8/podcast/play/2625407/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2020-02-15%2F49a099051d06a87668e151cdb763694c.m4a"
---
We did it: episodes will come out every two weeks from now on, and we have dates scheduled at least through August. This episode was supposed to feature the debut of Julian Fernandes from the Ubuntu-BR-SC site, who will now be a permanent member of Opencast, but his debut had to be postponed, as explained in the episode.
In this episode I take a good look at the latest news from the free software world. Since I was on my own, it wasn't possible to do the Papo de Buteco segment, but it returns in the next episode, which will be about LibreOffice.
**Episode links:**
----------------------
* [**Apple removes parts of CUPS**](http://br-linux.org/2012/apple-removera-partes-do-cups-irrelevantes-para-o-os-x/)
* [**Copying CDs, DVDs, and books may no longer be a crime**](http://zerohora.clicrbs.com.br/rs/geral/noticia/2012/05/fazer-copia-de-cd-ou-livro-para-uso-proprio-pode-deixar-de-ser-crime-3769614.html)
* [**The Carmageddon game on Linux?**](http://www.omgubuntu.co.uk/2012/05/carmaggeddon-sequel-could-crash-onto-linux/)
* [**Intel releases an x86 image for the Android emulator**](http://br-linux.org/2012/intel-lanca-imagem-x86-para-o-emulador-de-android/)
* [**A new printing service for Linux**](http://br-linux.org/2012/novo-servico-de-impressao-para-linux-printerd-chega-em-momento-oportuno/)
* **PayPal support in UbuntuOne**
* [**Fedora 17 released**](http://fedoraproject.org/pt_BR/get-fedora)
* [**LibreOffice video lessons**](http://br-linux.org/2012/video-aulas-de-libreoffice/)
* [**Linux in the new Cadillac**](http://br-linux.org/2012/linux-no-novo-cadillac-xts/)
* [**The Humble Bundle V**](http://www.ubuntubrsc.com/the-humble-bundle-v-lancado-traz-5-jogos-indie-incriveis.html)
**Twitter:** [**@tecnologiaabert**](http://twitter.com/tecnologiaabert)
**Facebook:**[**http://www.facebook.com/tecnologiaaberta**](https://www.facebook.com/tecnologiaaberta)
**Google+:**[**Tecnologia Aberta**](https://plus.google.com/u/0/b/114491525240353631044/114491525240353631044/about)
**Youtube:**[**Tecnologia Aberta**](http://youtube.com/tecnologiaaberta)
**E-Mail:** [**[email protected]**](mailto:[email protected])
**Opencast feed:** [**http://tecnologiaaberta.com.br/feed/opencast/**](http://tecnologiaaberta.com.br/feed/opencast/)
---
Send in a voice message: https://anchor.fm/opencast/message | 53.411765 | 292 | 0.747063 | por_Latn | 0.930637 |
dbcd49cefd038628344644debfb7c7b8e8dd939d | 4,557 | md | Markdown | markdown_files/quote_data-tutorial.md | bruinquants/intro-to-algotrading-workshop | 60729e97233a3460797bcaa8e1a5d5a6935f5aa7 | [
"MIT"
] | 6 | 2020-12-23T15:44:02.000Z | 2021-02-03T04:14:05.000Z | markdown_files/quote_data-tutorial.md | bruinquants/intro-to-algotrading-workshop | 60729e97233a3460797bcaa8e1a5d5a6935f5aa7 | [
"MIT"
] | null | null | null | markdown_files/quote_data-tutorial.md | bruinquants/intro-to-algotrading-workshop | 60729e97233a3460797bcaa8e1a5d5a6935f5aa7 | [
"MIT"
] | 3 | 2021-01-05T19:17:55.000Z | 2021-03-25T00:46:36.000Z |
# Quote Data Tutorial
### What is a Quote?
Stock quote data consists of basic statistics about a certain stock.
- Quote data generally includes...:
- Bid-Ask Spread.
- Most recent order size.
- Most recent stock price.
- Volume.
- Day's High/Low.
- 52 Week High/Low.
- and a lot more...
- For the purposes of this course, we will be focusing on the *Bid-Ask Spread* and *Order Size*.
- These statistics are especially important for traders and quantitative analysts who use price movement and stock trends to make decisions.
### What is a Bid-Ask Spread?
A Bid-Ask Spread is the range of prices others are willing to buy or sell a security at.
- A **Bid** is the maximum price someone is willing to buy a security.
- An **Ask** is the minimum price someone is willing to sell their security.
- Think of buying stocks as an auction--the seller wants to sell at the highest price possible and the buyer wants to buy at the lowest price possible.
- Ultimately both parties are generally trying to reach a deal (a price they are both happy with).
- A Bid-Ask Spread is affected by a multitude of factors... :
- **Supply & Demand** : The Bid-Ask Spread is in a way a direct representation of the supply and demand of a security.
- **Volume** : Related to Supply & Demand; higher volume generally means a smaller Bid-Ask Spread.
- **Order Size** : Again relate to Supply & Demand; if an order size is bigger, it will have more of an effect on the Bid-Ask Spread.
- and more...
- The Bid-Ask Spread is important for anyone involved in the financial markets.
- The Bid-Ask Spread ultimately determines a security's price.
- For stocks with high liquidity, the price of the stock is generally accepted to be the average of the bid price and ask price.
- For stocks with low liquidity, the price you have to pay to buy a stock will be closer to the ask price and vice versa for when you want to sell a stock.
- The Bid-Ask Spread is especially important for quantitative analysts who use High Frequency Trading that utilize the Bid-Ask Spread.
- The "Market Makers" make money by utilizing the difference between the bid price and the ask price.
### Data Types
Usually when we see quote data from exchanges, we get what is called **level 1 data**. This basically means that we only see one set of
bid-ask data.
- An example of level 1 quote data we might get from an exchange could look something like this:
```{ticker: "AAPL", bid_price: 130, ask_price: 130.05, volume: 10}```.
This is contrasted with **level 2 data**, which gives the volumes at different bid and ask prices. When people submit a limit buy order for a stock,
there could be a range of prices within 30 cents. In level 1 data we would only see the best possible buy and sell prices, which hides
the "market depth" at each time interval.
In this course we'll only be dealing with level 1 data, but you should be aware that for any given tick the range of bids might extend 30 cents
below the maximum bid price, and the range of asks could extend 30 cents above the minimum ask price.
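To make the level 1 idea concrete, here is a small illustration (not part of the workshop notebooks; the field names simply follow the example dict above) of how you might compute the spread and mid-price from such a quote:
```python
# Hypothetical level 1 quote, shaped like the example above.
quote = {"ticker": "AAPL", "bid_price": 130.00, "ask_price": 130.05, "volume": 10}

def spread(q):
    """Absolute bid-ask spread in dollars."""
    return q["ask_price"] - q["bid_price"]

def mid_price(q):
    """For liquid stocks the quoted 'price' is usually taken as the bid/ask average."""
    return (q["bid_price"] + q["ask_price"]) / 2

print(f"{quote['ticker']}: spread = {spread(quote):.2f}, mid = {mid_price(quote):.3f}")
```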
### What is Order Size?
Order size refers to the number of securities an order calls to buy or sell.
- Order Size is important because it directly affects the Bid-Ask Spread and therefore the price of the stock.
- Example : If you place a limit buy order for 50 shares of `AAPL`, you have effectively changed the price of `AAPL`! Though it would be completely insignificant given Apple's market cap...
- Through the law of *Supply & Demand*, bigger order sizes will effect the price of a security more.
- Example : If you place a limit sell order for 100k shares of a small cap penny stock, you will likely have a direct impact on its price.
- **Note :** An order does *NOT* have to be filled in order to affect a security's price. As a matter of fact, an order and its order size no longer directly affects the security's price once it has been filled. This is very important to know, especially for quantitative analysis.
- This means you can effectively manipulate a stock's price by consecutively placing and cancelling massive orders in one direction. This practice is called *price manipulation* and it becomes *illegal* at a certain extent.
- This practice caused the *2010 Flash Crash*. Since then more safeguards have been put in place to avoid this happening in the future. | 82.854545 | 283 | 0.725258 | eng_Latn | 0.999718 |
dbcdab82917f55ee55e29c06ccfb72da91eacedc | 224 | md | Markdown | content/posts/2020-05-17-Kotlin-Learning-Note.md | yinguohang/blog-gatsby | b4f82bfbfd0f72e7e1625d210177bd278416beaf | [
"MIT"
] | null | null | null | content/posts/2020-05-17-Kotlin-Learning-Note.md | yinguohang/blog-gatsby | b4f82bfbfd0f72e7e1625d210177bd278416beaf | [
"MIT"
] | null | null | null | content/posts/2020-05-17-Kotlin-Learning-Note.md | yinguohang/blog-gatsby | b4f82bfbfd0f72e7e1625d210177bd278416beaf | [
"MIT"
] | null | null | null |
---
title: "Kotlin Learning Note 01 | Kotlin学习笔记"
date: "2020-05-17 4:00:29 -0700"
template: "post"
draft: true
slug: "kotlin-learning-note-01"
category: "other"
tags:
- "kotlin"
description: "Kotlin Learning Note"
---
| 17.230769 | 47 | 0.683036 | eng_Latn | 0.108494 |
dbcddb7cb9ec490f6165977cab7fbccf2d6f6b52 | 13,401 | md | Markdown | README.md | feisher/Cockroach | 66fea4d6462c21e95b2d9ab416e33368504065ee | [
"MIT"
] | 8 | 2017-04-10T02:59:31.000Z | 2020-07-09T03:45:32.000Z | README.md | feisher/Cockroach | 66fea4d6462c21e95b2d9ab416e33368504065ee | [
"MIT"
] | null | null | null | README.md | feisher/Cockroach | 66fea4d6462c21e95b2d9ab416e33368504065ee | [
"MIT"
] | 1 | 2021-04-22T06:24:23.000Z | 2021-04-22T06:24:23.000Z |
Language
* [English](https://github.com/android-notes/Cockroach/blob/master/README_en.md)
* [Chinese]
# Many people have misread the intent of this library, so here is a clarification
When the app's main thread throws an exception, the app crashes, for example because a view's click handler threw. For exceptions like these we would rather the tap simply do nothing than have the app crash; at worst the user thinks the tap didn't respond, or that the view wasn't clickable in the first place. This is exactly where Cockroach can be used, and it has no other side effects: to the user it is as if nothing was tapped, and no other logic is affected. That is far better than crashing every time; at least users won't uninstall the app because of frequent crashes. Of course, the library does have uncertainties. For example, if an exception is thrown while an Activity is initializing, the Activity may end up showing nothing. That is not an ANR; it happens because the Activity lifecycle did not run to completion. Many people in the issues assumed this was an ANR, which led to claims on Weibo that this library causes ANRs after catching an exception. In fact the main thread is not blocked at that point, so there is no ANR. Naturally the library can do nothing about native crashes or ANRs either; it can only guarantee that Java exceptions will not crash the app.
When you find in production that entering a certain Activity causes a large number of crashes, and installing Cockroach does not affect how the app runs or the user experience, you can have your backend automatically enable Cockroach and then automatically uninstall it once the user leaves that Activity.
The text below also states this explicitly:
> You can install it wherever you need to and uninstall it anywhere.
> Although it can catch all exceptions, it may cause some puzzling problems. For example, if an exception occurs while a view is initializing, the code after the exception never runs; the app won't crash, but something inside the view has already gone wrong, so very strange behavior can show up at runtime. Likewise, if an exception is thrown in an Activity lifecycle method, the lifecycle will be incomplete, which leads to all kinds of odd behavior.
So the key is how to use this library correctly.
## Cockroach
> The cockroach you can't kill: Android that never crashes.
> The thing Android developers fear most is a crash: the app is fine in testing, then all kinds of crashes appear as soon as it ships, and the only remedy is an emergency hotfix. Preparing that hotfix can take a long time, and the user experience is terrible in the meantime. On Android you can set Thread.setDefaultUncaughtExceptionHandler to catch exceptions from all threads, but when the main thread throws, the activity is still killed and the app process restarts. With Cockroach, no matter what exception is thrown, the activity will not be killed and the app process will not restart.
For how to use DefaultUncaughtExceptionHandler, see [DefaultUncaughtExceptionHandler](https://github.com/android-notes/Cockroach/blob/master/DefaultUncaughtExceptionHandler.md).
### Usage
Create a custom Application that extends Android's Application and install Cockroach in it. The earlier it is initialized the better, for example in the Application's onCreate. You can also install it anywhere you need (not necessarily on the main thread) and uninstall it anywhere; it can be installed and uninstalled multiple times.
For example:
```java
import android.app.Application;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;
import android.widget.Toast;
/**
* Created by wanjian on 2017/2/14.
*/
public class App extends Application {
@Override
public void onCreate() {
super.onCreate();
Cockroach.install(new Cockroach.ExceptionHandler() {
            // Inside handlerException, wrap your handling logic in try { ... } catch (Throwable e) { } manually,
            // in case handlerException itself throws again and ends up being called in a loop.
@Override
public void handlerException(final Thread thread, final Throwable throwable) {
                // With Cockroach installed it is easy to miss bugs during development, so during development it is recommended to show a Toast in handlerException.
                // handlerException may run on a non-UI thread while Toast must run on the main thread, hence the new Handler(Looper.getMainLooper()).
                // Never do time-consuming work inside the run method below, because run executes on the UI thread.
                // new Handler(Looper.getMainLooper()) exists only so the toast can be shown; it has no other purpose.
new Handler(Looper.getMainLooper()).post(new Runnable() {
@Override
public void run() {
try {
                            // It is recommended to log the exception to the console this way so you can see the red log at Error level.
Log.e("AndroidRuntime","--->CockroachException:"+thread+"<---",throwable);
Toast.makeText(App.this, "Exception Happend\n" + thread + "\n" + throwable.toString(), Toast.LENGTH_SHORT).show();
// throw new RuntimeException("..."+(i++));
} catch (Throwable e) {
}
}
});
}
});
}
}
```
Uninstall Cockroach
```java
Cockroach.uninstall();
```
### Testing
With Cockroach installed, throw an exception from a view click and from inside a new Handler:
```java
final TextView textView = (TextView) findViewById(R.id.text);
findViewById(R.id.install).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
textView.setText("已安装 Cockroach");
install();
}
});
findViewById(R.id.uninstall).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
textView.setText("已卸载 Cockroach");
Cockroach.uninstall();
}
});
findViewById(R.id.but1).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
throw new RuntimeException("click exception...");
}
});
findViewById(R.id.but2).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
new Handler().post(new Runnable() {
@Override
public void run() {
throw new RuntimeException("handler exception...");
}
});
}
});
findViewById(R.id.but3).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
new Thread() {
@Override
public void run() {
super.run();
throw new RuntimeException("new thread exception...");
}
}.start();
}
});
findViewById(R.id.but4).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
startActivity(new Intent(getApplicationContext(), SecActivity.class));
}
});
}
private void install() {
Cockroach.install(new Cockroach.ExceptionHandler() {
@Override
public void handlerException(final Thread thread, final Throwable throwable) {
Log.d("Cockroach", "MainThread: " + Looper.getMainLooper().getThread() + " curThread: " + Thread.currentThread());
new Handler(Looper.getMainLooper()).post(new Runnable() {
@Override
public void run() {
try {
Log.e("AndroidRuntime","--->CockroachException:"+thread+"<---",throwable);
Toast.makeText(getApplicationContext(), "Exception Happend\n" + thread + "\n" + throwable.toString(), Toast.LENGTH_SHORT).show();
// throw new RuntimeException("..."+(i++));
} catch (Throwable e) {
}
}
});
}
});
}
```
The captured stack traces are shown below. You can see that all of them have been intercepted by `at com.wanjian.cockroach.Cockroach$1.run(Cockroach.java:47)`; the app is not affected at all, with no crash and no process restart.
```java
02-16 09:58:00.660 21199-21199/wj.com.fuck E/AndroidRuntime: --->CockroachException:Thread[main,5,main]<---
java.lang.RuntimeException: click exception...
at wj.com.fuck.MainActivity$3.onClick(MainActivity.java:53)
at android.view.View.performClick(View.java:4909)
at android.view.View$PerformClick.run(View.java:20390)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:194)
at com.wanjian.cockroach.Cockroach$1.run(Cockroach.java:47)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:194)
at android.app.ActivityThread.main(ActivityThread.java:5826)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1009)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:804)
02-16 09:58:12.401 21199-21199/wj.com.fuck E/AndroidRuntime: --->CockroachException:Thread[main,5,main]<---
java.lang.RuntimeException: handler exception...
at wj.com.fuck.MainActivity$4$1.run(MainActivity.java:63)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:194)
at com.wanjian.cockroach.Cockroach$1.run(Cockroach.java:47)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:194)
at android.app.ActivityThread.main(ActivityThread.java:5826)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1009)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:804)
02-16 09:58:13.241 21199-21199/wj.com.fuck E/AndroidRuntime: --->CockroachException:Thread[Thread-26326,5,main]<---
java.lang.RuntimeException: new thread exception...
at wj.com.fuck.MainActivity$5$1.run(MainActivity.java:76)
```
After uninstalling `Cockroach`, throwing an exception in the click handler again produces the following log:
```java
02-16 09:59:01.251 21199-21199/wj.com.fuck E/AndroidRuntime: FATAL EXCEPTION: main
Process: wj.com.fuck, PID: 21199
java.lang.RuntimeException: click exception...
at wj.com.fuck.MainActivity$3.onClick(MainActivity.java:53)
at android.view.View.performClick(View.java:4909)
at android.view.View$PerformClick.run(View.java:20390)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:194)
at android.app.ActivityThread.main(ActivityThread.java:5826)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1009)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:804)
```
You can see that ` at com.wanjian.cockroach.Cockroach$1.run(Cockroach.java:47)` did not intercept it, and the app crashed.
### Notes
* exceptionHandler.handlerException(Thread thread, Throwable throwable) is called whenever the main thread or a child thread throws an exception.
* exceptionHandler.handlerException may run on a non-UI thread.
* Inside handlerException, wrap your handling logic in try { ... } catch (Throwable e) { } manually, in case handlerException throws again and gets called in a loop.
* If Thread.setDefaultUncaughtExceptionHandler has been set, exceptions from child threads may not be caught.
Although all exceptions can be caught, this may cause some puzzling problems: if an exception occurs while a view is initializing, the code after the exception never runs; the app won't crash, but something inside the view has already gone wrong, so very strange behavior can appear at runtime. Likewise, if an exception is thrown in an Activity lifecycle method, the lifecycle will be incomplete, leading to all kinds of odd behavior.
Even though various odd problems can occur, this keeps the app running as far as possible. Very often we want the app to remain usable even if the main thread throws. For example, setting a background color on a view crashes the app when the view is null; for problems like that we would rather the color simply not be set than have a crash. That is exactly the need Cockroach satisfies.
Inside handlerException(final Thread thread, final Throwable throwable), it is recommended to ask your own server how to handle the exception: ignore it, kill the app, or take some other action.
Cockroach is written with standard Android APIs, has no dependencies, and is lightweight (under 100 lines of code). It generally has no compatibility or performance issues and works on all Android versions.
Uploaded to jcenter: compile 'com.wanjian:cockroach:0.0.5'
Demo video: [http://weibo.com/tv/v/EvM57BR6O?fid=1034:40b2f631632f0cf2a096a09c65db89ad](http://weibo.com/tv/v/EvM57BR6O?fid=1034:40b2f631632f0cf2a096a09c65db89ad)
### 原理分析
### How it works
| 46.210345 | 390 | 0.541378 | yue_Hant | 0.570452 |
dbcdfb1f76f8152368566c72b61eda63d9e9afab | 5,108 | md | Markdown | README.md | lem0n4id/iris-classifier | 1f8dc51d0f2883adff5e041b9fea8e85de931185 | [
"MIT"
] | null | null | null | README.md | lem0n4id/iris-classifier | 1f8dc51d0f2883adff5e041b9fea8e85de931185 | [
"MIT"
] | null | null | null | README.md | lem0n4id/iris-classifier | 1f8dc51d0f2883adff5e041b9fea8e85de931185 | [
"MIT"
] | null | null | null |
# Iris-classifier
Classifying Iris flower species using Machine Learning
Check out [iris_classifier.ipyb](iris_classifier.ipyb) for the code
[app](app) contains files for the web app which is under development
## Table of contents
1. [Intro](#What-is-machine-learning?)
2. [Problem](#Our-Problem)
3. [Solution](#Solution)
4. [Naive Bayes Algorithim](##Why-we-used-Naive-Bayes-Algorithm)
5. [About us and the project](#About-us-and-the-project)
## What is machine learning?
Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
Machine learning is used in internet search engines, email filters to sort out spam, websites to make personalised recommendations, banking software to detect unusual transactions, and lots of apps on our phones such as voice recognition.
## Our Problem
Iris Classification is perhaps the best-known example in the field of machine learning.The aim is to classify iris flowers among three species (setosa, versicolor, or virginica) from measurements of sepals and petals' length and width.The iris data set contains 3 classes of 150 instances each, where each class refers to a type of iris plant.
The central goal here is to design a model that makes useful classifications for new flowers or, in other words, one which exhibits good generalization.
## Solution
### 1.Create the dataset.
In order to classify the different species of Iris, we should prepare a dataset with features and labels. But sklearn comes with a built-in dataset for the iris classification problem. The dataset consists of 150 samples, 3 labels (the species Iris setosa, Iris virginica, and Iris versicolor), and 4 features: sepal length, sepal width, petal length, and petal width in cm.
Scikit learn only works if data is stored as numeric data, irrespective of it being a regression or a classification problem. It also requires the arrays to be stored at numpy arrays for optimization. Since, this dataset is loaded from scikit learn, everything is appropriately formatted.
Since our process involves training and testing, we should split our dataset: x_train contains the training features, x_test the testing features, y_train the training labels, and y_test the testing labels.
### 2.Build the model.
We can use any classification algorithm to solve the problem. First we create an empty model. In order for the model to do anything useful, we have to train it: if the model is to predict the flower species, it must be trained with the features and the labels.
### 3.Train the model.
We can train the model with the fit function, after which the model is ready to make predictions.
### 4.Make predictions.
Predictions can be made with the predict function. The predictions can be compared with the expected output to measure the accuracy.
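The notebook has the full details; as a rough sketch of the four steps above, using the Gaussian Naive Bayes model discussed in the next section (the split size and random seed here are illustrative assumptions, not taken from the notebook), the flow looks something like this:
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

iris = load_iris()                                   # 150 samples, 4 features, 3 species
x_train, x_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

model = GaussianNB()                                 # create the (empty) model
model.fit(x_train, y_train)                          # train with features and labels
predictions = model.predict(x_test)                  # make predictions
print("Accuracy:", accuracy_score(y_test, predictions))
```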
## Why we used Naive Bayes Algorithm
In this project we are using the Naive Bayes algorithm. Naive Bayes is a supervised learning algorithm based on Bayes' theorem and used for solving classification problems. It is mainly used in text classification with high-dimensional training data. The Naive Bayes classifier is one of the simplest and most effective classification algorithms, and it helps in building fast machine learning models that can make quick predictions. It is a probabilistic classifier, which means it predicts on the basis of the probability of an object. Some popular applications of Naive Bayes are spam filtering, sentiment analysis, and classifying articles.
Naive Bayes is one of the fast and easy ML algorithms to predict a class of datasets. It can be used for Binary as well as Multi-class Classifications. It performs well in Multi-class predictions as compared to the other Algorithms. It is the most popular choice for text classification problems.
Here we are using Gaussian model Naive Bayes model. The Gaussian model assumes that features follow a normal distribution. This means if predictors take continuous values instead of discrete, then the model assumes that these values are sampled from the Gaussian distribution.
## About us and the project
Tech Analogy brought us a 14-day workshop on Cognitive Applications using ML/AI. It was a great opportunity, as we were able to study an industry-oriented skill. We learned through interactive sessions and stayed connected with like-minded people throughout. Our guest speaker Aditya Jyoti Paul's sessions were excellent, as he had a stunning ability to explain complicated topics in simple terms.
The Iris Classification project was done by Lenin and Sandra Victor. Together we discussed and created the classifier; Lenin built the web application and Sandra wrote the descriptions in the web app. Overall, doing this project was a great experience for us.
Thank You Tech Analogy for this wonderful opportunity
| 74.028986 | 681 | 0.806774 | eng_Latn | 0.999183 |
dbce36a2387c455ad5e8629d4258287ca8b2151a | 669 | md | Markdown | examples/HelloWorld/README.md | ideum/feathers | 6653c3ec4537d480243c9e056a380ee90b43edeb | [
"MIT"
] | 1 | 2021-06-19T23:44:15.000Z | 2021-06-19T23:44:15.000Z | examples/HelloWorld/README.md | zerojuan/feathers | 65a16defbeae900585fcd52a33d63c8e9ce566b2 | [
"MIT"
] | null | null | null | examples/HelloWorld/README.md | zerojuan/feathers | 65a16defbeae900585fcd52a33d63c8e9ce566b2 | [
"MIT"
] | null | null | null |
# Hello World Example for Feathers
A very simple example that creates a Button control from the [Feathers](http://feathersui.com/) library, presented as a mobile app.
## Requirements
In addition to the Starling and Feathers libraries, this example project has the following requirements:
* `MetalWorksMobileTheme` is an example theme included with Feathers. You can find it in the _themes_ directory. You may link this theme to your project as an external library, or you can copy the theme's source files and assets into your project instead, if you prefer.
## Web Demo
View the [Hello World Example](http://feathersui.com/examples/hello-world/) in your browser. | 51.461538 | 270 | 0.786248 | eng_Latn | 0.998706 |
dbce6fec392874c4db0cbfe1681dcd01c833b706 | 15,240 | md | Markdown | articles/data-factory/data-factory-odata-connector.md | rjmax/azure-docs | 0d1aee80eb8e28ebb752a5f2f712afbac8cc3b10 | [
"CC-BY-3.0"
] | null | null | null | articles/data-factory/data-factory-odata-connector.md | rjmax/azure-docs | 0d1aee80eb8e28ebb752a5f2f712afbac8cc3b10 | [
"CC-BY-3.0"
] | null | null | null | articles/data-factory/data-factory-odata-connector.md | rjmax/azure-docs | 0d1aee80eb8e28ebb752a5f2f712afbac8cc3b10 | [
"CC-BY-3.0"
] | 1 | 2019-03-31T17:25:38.000Z | 2019-03-31T17:25:38.000Z |
---
title: Move data from OData sources | Microsoft Docs
description: Learn about how to move data from OData sources using Azure Data Factory.
services: data-factory
documentationcenter: ''
author: linda33wj
manager: jhubbard
editor: monicar
ms.assetid: de28fa56-3204-4546-a4df-21a21de43ed7
ms.service: data-factory
ms.workload: data-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 11/01/2016
ms.author: jingwang
---
# Move data from an OData source using Azure Data Factory
This article outlines how you can use the Copy Activity in an Azure data factory to move data from an OData source to another data store. This article builds on the [data movement activities](data-factory-data-movement-activities.md) article, which presents a general overview of data movement with copy activity and supported data store combinations.
## Supported versions and authentication types
This OData connector supports OData versions 3.0 and 4.0, and you can copy data from both cloud OData and on-premises OData sources. For the latter, you need to install the Data Management Gateway. See the [Move data between on-premises and cloud](data-factory-move-data-between-onprem-and-cloud.md) article for details about the Data Management Gateway.
The following authentication types are supported:
* To access **cloud** OData feed, you can use anonymous, basic (user name and password), or Azure Active Directory based OAuth authentication.
* To access **on-premises** OData feed, you can use anonymous, basic (user name and password), or Windows authentication.
## Copy data wizard
The easiest way to create a pipeline that copies data from an OData source is to use the Copy data wizard. See [Tutorial: Create a pipeline using Copy Wizard](data-factory-copy-data-wizard-tutorial.md) for a quick walkthrough on creating a pipeline using the Copy data wizard.
The following examples provide sample JSON definitions that you can use to create a pipeline by using [Azure portal](data-factory-copy-activity-tutorial-using-azure-portal.md) or [Visual Studio](data-factory-copy-activity-tutorial-using-visual-studio.md) or [Azure PowerShell](data-factory-copy-activity-tutorial-using-powershell.md). They show how to copy data from an OData source to an Azure Blob Storage. However, data can be copied to any of the sinks stated [here](data-factory-data-movement-activities.md#supported-data-stores-and-formats) using the Copy Activity in Azure Data Factory.
## Sample: Copy data from OData source to Azure Blob
This sample shows how to copy data from an OData source to Azure Blob Storage. However, data can be copied **directly** to any of the sinks stated [here](data-factory-data-movement-activities.md#supported-data-stores-and-formats) using the Copy Activity in Azure Data Factory.
The sample has the following data factory entities:
1. A linked service of type [OData](#odata-linked-service-properties).
2. A linked service of type [AzureStorage](data-factory-azure-blob-connector.md#azure-storage-linked-service).
3. An input [dataset](data-factory-create-datasets.md) of type [ODataResource](#odata-dataset-type-properties).
4. An output [dataset](data-factory-create-datasets.md) of type [AzureBlob](data-factory-azure-blob-connector.md#azure-blob-dataset-type-properties).
5. A [pipeline](data-factory-create-pipelines.md) with Copy Activity that uses [RelationalSource](#odata-copy-activity-type-properties) and [BlobSink](data-factory-azure-blob-connector.md#azure-blob-copy-activity-type-properties).
The sample copies data from querying against an OData source to an Azure blob every hour. The JSON properties used in these samples are described in sections following the samples.
**OData linked service**
This example uses Anonymous authentication. See the [OData linked service](#odata-linked-service-properties) section for the different types of authentication you can use.
{
"name": "ODataLinkedService",
"properties":
{
"type": "OData",
"typeProperties":
{
"url": "http://services.odata.org/OData/OData.svc",
"authenticationType": "Anonymous"
}
}
}
**Azure Storage linked service**
{
"name": "AzureStorageLinkedService",
"properties": {
"type": "AzureStorage",
"typeProperties": {
"connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
}
}
}
**OData input dataset**
Setting “external”: ”true” informs the Data Factory service that the dataset is external to the data factory and is not produced by an activity in the data factory.
{
"name": "ODataDataset",
"properties":
{
"type": "ODataResource",
"typeProperties":
{
"path": "Products"
},
"linkedServiceName": "ODataLinkedService",
"structure": [],
"availability": {
"frequency": "Hour",
"interval": 1
},
"external": true,
"policy": {
"retryInterval": "00:01:00",
"retryTimeout": "00:10:00",
"maximumRetry": 3
}
}
}
Specifying **path** in the dataset definition is optional.
**Azure Blob output dataset**
Data is written to a new blob every hour (frequency: hour, interval: 1). The folder path for the blob is dynamically evaluated based on the start time of the slice that is being processed. The folder path uses year, month, day, and hours parts of the start time.
{
"name": "AzureBlobODataDataSet",
"properties": {
"type": "AzureBlob",
"linkedServiceName": "AzureStorageLinkedService",
"typeProperties": {
"folderPath": "mycontainer/odata/yearno={Year}/monthno={Month}/dayno={Day}/hourno={Hour}",
"format": {
"type": "TextFormat",
"rowDelimiter": "\n",
"columnDelimiter": "\t"
},
"partitionedBy": [
{
"name": "Year",
"value": {
"type": "DateTime",
"date": "SliceStart",
"format": "yyyy"
}
},
{
"name": "Month",
"value": {
"type": "DateTime",
"date": "SliceStart",
"format": "MM"
}
},
{
"name": "Day",
"value": {
"type": "DateTime",
"date": "SliceStart",
"format": "dd"
}
},
{
"name": "Hour",
"value": {
"type": "DateTime",
"date": "SliceStart",
"format": "HH"
}
}
]
},
"availability": {
"frequency": "Hour",
"interval": 1
}
}
}
**Pipeline with Copy activity**
The pipeline contains a Copy Activity that is configured to use the input and output datasets and is scheduled to run every hour. In the pipeline JSON definition, the **source** type is set to **RelationalSource** and the **sink** type is set to **BlobSink**. The OData query specified for the **query** property selects only the Name and Description properties of the top 5 entities from the OData source.
{
"name": "CopyODataToBlob",
"properties": {
"description": "pipeline for copy activity",
"activities": [
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "RelationalSource",
"query": "?$select=Name, Description&$top=5",
},
"sink": {
"type": "BlobSink",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
}
},
"inputs": [
{
"name": "ODataDataSet"
}
],
"outputs": [
{
"name": "AzureBlobODataDataSet"
}
],
"policy": {
"timeout": "01:00:00",
"concurrency": 1
},
"scheduler": {
"frequency": "Hour",
"interval": 1
},
"name": "ODataToBlob"
}
],
"start": "2016-02-01T18:00:00Z",
"end": "2016-02-03T19:00:00Z"
}
}
Specifying **query** in the pipeline definition is optional. The **URL** that the Data Factory service uses to retrieve data is: URL specified in the linked service (required) + path specified in the dataset (optional) + query in the pipeline (optional).
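For the sample entities above, those three parts combine into the following request URL (shown here only to illustrate the composition):

    http://services.odata.org/OData/OData.svc/Products?$select=Name, Description&$top=5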
## OData linked Service properties
The following table provides description for JSON elements specific to OData linked service.
| Property | Description | Required |
| --- | --- | --- |
| type |The type property must be set to: **OData** |Yes |
| url |Url of the OData service. |Yes |
| authenticationType |Type of authentication used to connect to the OData source. <br/><br/> For cloud OData, possible values are Anonymous, Basic, and OAuth (note that Azure Data Factory currently only supports Azure Active Directory based OAuth). <br/><br/> For on-premises OData, possible values are Anonymous, Basic, and Windows. |Yes |
| username |Specify user name if you are using Basic authentication. |Yes (only if you are using Basic authentication) |
| password |Specify password for the user account you specified for the username. |Yes (only if you are using Basic authentication) |
| authorizedCredential |If you are using OAuth, click the **Authorize** button in the Data Factory Copy Wizard or Editor and enter your credentials; the value of this property is then auto-generated. |Yes (only if you are using OAuth authentication) |
| gatewayName |Name of the gateway that the Data Factory service should use to connect to the on-premises OData service. Specify this only if you are copying data from an on-premises OData source. |No |
### Using Basic authentication
```json
{
"name": "inputLinkedService",
"properties":
{
"type": "OData",
"typeProperties":
{
"url": "http://services.odata.org/OData/OData.svc",
"authenticationType": "Basic",
"username": "username",
"password": "password"
}
}
}
```
### Using Anonymous authentication
```json
{
"name": "ODataLinkedService",
"properties":
{
"type": "OData",
"typeProperties":
{
"url": "http://services.odata.org/OData/OData.svc",
"authenticationType": "Anonymous"
}
}
}
```
### Using Windows authentication accessing on-premises OData source
```json
{
"name": "inputLinkedService",
"properties":
{
"type": "OData",
"typeProperties":
{
"url": "<endpoint of on-premises OData source e.g. Dynamics CRM>",
"authenticationType": "Windows",
"username": "domain\\user",
"password": "password",
"gatewayName": "mygateway"
}
}
}
```
### Using OAuth authentication accessing cloud OData source
```json
{
    "name": "inputLinkedService",
    "properties":
    {
        "type": "OData",
        "typeProperties":
        {
            "url": "<endpoint of cloud OData source e.g. https://<tenant>.crm.dynamics.com/XRMServices/2011/OrganizationData.svc>",
            "authenticationType": "OAuth",
            "authorizedCredential": "<auto generated by clicking the Authorize button on UI>"
        }
    }
}
```
## OData dataset type properties
For a full list of sections & properties available for defining datasets, see the [Creating datasets](data-factory-create-datasets.md) article. Sections such as structure, availability, and policy of a dataset JSON are similar for all dataset types (Azure SQL, Azure blob, Azure table, etc.).
The **typeProperties** section is different for each type of dataset and provides information about the location of the data in the data store. The **typeProperties** section for a dataset of type **ODataResource** (which includes the OData dataset) has the following properties:
| Property | Description | Required |
| --- | --- | --- |
| path |Path to the OData resource |No |
## OData Copy Activity type properties
For a full list of sections & properties available for defining activities, see the [Creating Pipelines](data-factory-create-pipelines.md) article. Properties such as name, description, input and output tables, and policy are available for all types of activities.
Properties available in the **typeProperties** section of the activity, on the other hand, vary with each activity type. For the Copy activity, they vary depending on the types of sources and sinks.
When the source is of type **RelationalSource** (which includes OData), the following properties are available in the **typeProperties** section:
| Property | Description | Example | Required |
| --- | --- | --- | --- |
| query |Use the custom query to read data. |"?$select=Name, Description&$top=5" |No |
[!INCLUDE [data-factory-structure-for-rectangualr-datasets](../../includes/data-factory-structure-for-rectangualr-datasets.md)]
### Type mapping for OData
As mentioned in the [data movement activities](data-factory-data-movement-activities.md) article, Copy activity performs automatic type conversions from source types to sink types with the following 2-step approach:
1. Convert from native source types to .NET type
2. Convert from .NET type to native sink type
When moving data from OData data stores, OData data types are mapped to .NET types.
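The complete mapping is defined by the OData protocol rather than listed here; as a brief, non-exhaustive illustration, some commonly used OData primitive types and the .NET types they correspond to are:

| OData data type | .NET type |
| --- | --- |
| Edm.Binary | Byte[] |
| Edm.Boolean | Boolean |
| Edm.DateTime | DateTime |
| Edm.Decimal | Decimal |
| Edm.Int32 | Int32 |
| Edm.String | String |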
[!INCLUDE [data-factory-column-mapping](../../includes/data-factory-column-mapping.md)]
[!INCLUDE [data-factory-type-repeatability-for-relational-sources](../../includes/data-factory-type-repeatability-for-relational-sources.md)]
## Performance and Tuning
See [Copy Activity Performance & Tuning Guide](data-factory-copy-activity-performance.md) to learn about key factors that impact performance of data movement (Copy Activity) in Azure Data Factory and various ways to optimize it.
| 46.181818 | 593 | 0.600459 | eng_Latn | 0.876572 |
dbcf31db2f227befb6b81adc83cf7e93d8b0229d | 2,316 | md | Markdown | docs/odbc/reference/develop-app/affected-odbc-components.md | Philippe-Geiger/sql-docs.fr-fr | 7fe32a3b70e9219529d5b00725233abf9d5982f6 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/odbc/reference/develop-app/affected-odbc-components.md | Philippe-Geiger/sql-docs.fr-fr | 7fe32a3b70e9219529d5b00725233abf9d5982f6 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/odbc/reference/develop-app/affected-odbc-components.md | Philippe-Geiger/sql-docs.fr-fr | 7fe32a3b70e9219529d5b00725233abf9d5982f6 | ["CC-BY-4.0", "MIT"] | null | null | null
---
description: Affected ODBC components
title: Affected ODBC components | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: conceptual
helpviewer_keywords:
- upgrading applications [ODBC], affected components
- application upgrades [ODBC], affected components
- compatibility [ODBC], affected components
- ODBC drivers [ODBC], backward compatibility
- backward compatibility [ODBC], affected components
ms.assetid: 71fa6ea4-007c-4c2b-b5af-2cec6ea79b58
author: David-Engel
ms.author: v-daenge
ms.openlocfilehash: 4874a22d441ec856c25e08dc20cf04e0f0be89cd
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/17/2020
ms.locfileid: "88424861"
---
# <a name="affected-odbc-components"></a>Affected ODBC components
Backward compatibility describes how applications, the Driver Manager, and drivers are affected by the introduction of a new version of the Driver Manager. It concerns applications and the driver when either one of them, or both, remain at the older version. There are therefore three types of backward compatibility to consider, as shown in the following table.
|Type|DM version|Application version|Driver version|
|----------|-------------------|----------------------------|-----------------------|
|Driver Manager backward compatibility|*3.x*|*2.x*|*2.x*|
|Driver backward compatibility [1]|*3.x*|*2.x*|*3.x*|
|Application backward compatibility|*3.x*|*3.x*|*2.x*|
[1] Driver backward compatibility is covered primarily in Appendix G: Driver Guidelines for Backward Compatibility.
> [!NOTE]
> A standards-compliant application, for example one written in conformance with the Open Group or ISO CLI standards, is guaranteed to work with an ODBC *3.x* driver through the ODBC *3.x* Driver Manager. It is assumed that the functionality used by the application is available in the driver. It is also assumed that the standards-compliant application was compiled with the ODBC *3.x* header files.
| 57.9 | 456 | 0.758636 | fra_Latn | 0.949462 |
dbd0cc8cdae3e10a41887dce16ed32c04419919b | 49,317 | md | Markdown | readme.md | wolgc007/automotive | 0a11d08f60ad05cfc2feafc7a308fa2fdae88866 | ["CC0-1.0"] | 1 | 2020-09-15T19:05:36.000Z | 2020-09-15T19:05:56.000Z | readme.md | wolgc007/automotive | 0a11d08f60ad05cfc2feafc7a308fa2fdae88866 | ["CC0-1.0"] | null | null | null | readme.md | wolgc007/automotive | 0a11d08f60ad05cfc2feafc7a308fa2fdae88866 | ["CC0-1.0"] | 1 | 2020-10-20T18:59:15.000Z | 2020-10-20T18:59:15.000Z
<!--lint disable no-unused-definitions-->
[comment]:<img src="https://badgen.net/badge/icon/awesome list/51A8E0?icon=awesome&label" height="18"/>
<!--lint enable no-unused-definitions-->
<div align="center">
<br>
<img width="500" src="media/logo/new.svg" alt="Awesome">
<br>
<p align="center">
<hr>
<sup>
We all know that automotive engineering is awesome, but here's a list of especially awesome things related to that world.<br><br>Let's help make this list really awesome:    <br>
✅ perform review and leave a comment <strong><a href="https://github.com/Marcin214/awesome-automotive/issues/2">here</a></strong><br>
✅ add new awesome record like <strong><a href="https://github.com/Marcin214/awesome-automotive/blob/master/contributing.md">here</a></strong>    <br>   
✅ if sth needs to be improved, create an issue <strong><a href="https://github.com/Marcin214/awesome-automotive/issues">here</a></strong>
</sup>
<br>
<br><br>
<a href="https://github.com/sindresorhus/awesome"><img alt="awesome" src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg" /></a>
<img src="https://img.shields.io/github/license/Marcin214/awesome-automotive"/>
<a href="https://github.com/Marcin214/awesome-automotive/blob/master/contributing.md"><img alt="PullRequests Welcome" src="https://img.shields.io/badge/pull request-welcome-blue.svg" /></a>
<a href="https://github.com/Marcin214/awesome-automotive/issues"><img src="https://img.shields.io/github/issues/Marcin214/awesome-automotive?color=yellow"/></a>
<a href="http://hits.dwyl.com/Marcin214/awesome-automotive"><img alt="HitCount" src="http://hits.dwyl.com/Marcin214/awesome-automotive.svg" /></a>
</p>
<hr>
</div>
<!--lint disable list-item-bullet-indent-->
## Contents
- [Autosar](#autosar)
- [Automotive SPICE](#automotive-spice)
- [Autonomous Driving](#autonomous-driving)
- [Agile](#agile)
- [Bus Systems](#bus-systems)
- [Automotive Ethernet](#automotive-ethernet)
- [CAN](#can)
- [FlexRay](#flexray)
- [LIN](#lin)
- [MOST](#most)
- [Functional Safety](#functional-safety)
- [Cyber Security](#cyber-security)
- [Measurement and Calibration](#measurement-and-calibration)
- [Vehicle Diagnostics](#vehicle-diagnostics)
- [Software Development Process](#software-development-process)
- [Requirements](#requirements)
- [Design](#design)
- [Implementation](#implementation)
- [Software Test](#software-test)
- [Functional Test](#functional-test)
- [Blogs](#blogs)
- [Books](#books)
- [Magazines](#magazines)
- [Podcasts](#podcasts)
- [Press releases](#press-releases)
- [Videos](#videos)
- [Miscellaneous](#miscellaneous)
<!--lint enable list-item-bullet-indent-->
<!--lint disable awesome-list-item-->
## Autosar
<img align="right" width="350" src="media/autosar_architecture.png">
- [AUTOSAR](https://www.autosar.org/) - (**AUT**omotive **O**pen **S**ystem **AR**chitecture) is a worldwide development partnership of vehicle manufacturers, suppliers, service providers and companies from the automotive electronics, semiconductor and software industry.
<div><sup>Overview</sup></div>
- [AUTOSAR Technical Overview](https://web.archive.org/web/20161201222022/http://www.autosar.org/about/technical-overview/) - Official AUTOSAR website, 2016.
- [About AUTomotive Open System ARchitecture](https://www.renesas.com/us/en/solutions/automotive/technology/autosar.html) - Renesas Electronics.
- [Introduction to Autosar](https://elearning.vector.com/mod/page/view.php?id=437) - Vector Informatik, e-learning module.
- <details>
<summary>Suppliers of AUTOSAR standard software - Click to expand <img src="media/icons/warning.png" height="18"/>
</summary><div><table><thead><tr><th>Supplier</th><th>MCAL</th><th>BSW/OS/RTE</th><th>Tools</th></tr></thead><tbody><tr><td><a href="https://www.comasso.org/" target="_blank" rel="noopener noreferrer">COMASSO</a></td><td></td><td><a href="https://www.comasso.org/comasso_downloads" target="_blank" rel="noopener noreferrer">BSW</a></td><td><a href="https://www.comasso.org/comasso_downloads" target="_blank" rel="noopener noreferrer">BSWDT</a></td></tr><tr><td><a href="https://www.elektrobit.com/" target="_blank" rel="noopener noreferrer">Elektrobit</a></td><td></td><td><a href="https://www.elektrobit.com/products/ecu/eb-tresos/autocore/" target="_blank" rel="noopener noreferrer">EB tresos AutoCore</a> <br></td><td><a href="https://www.elektrobit.com/products/ecu/eb-tresos/studio/" target="_blank" rel="noopener noreferrer">EB tresos Studio</a></td></tr><tr><td><a href="https://www.etas.com/" target="_blank" rel="noopener noreferrer">ETAS</a></td><td></td><td><a href="https://www.etas.com/en/products/rta_software_products.php" target="_blank" rel="noopener noreferrer">RTA</a></td><td><a href="https://www.etas.com/en/products/ascet-developer.php" target="_blank" rel="noopener noreferrer">ACET</a><br><a href="https://www.etas.com/en/products/isolar.php" target="_blank" rel="noopener noreferrer">ISOLAR</a><br><a href="https://www.etas.com/en/products/software_products.php" target="_blank" rel="noopener noreferrer">and more ...</a></td></tr><tr><td><a href="https://www.hitex.com/" target="_blank" rel="noopener noreferrer">Hitex</a></td><td><a href="https://www.hitex.com/tools-components/software-components/mcal-and-complex-drivers/mcal-drivers-for-autosar-projects/" target="_blank" rel="noopener noreferrer">MC-ISAR</a></td><td></td><td></td></tr><tr><td><a href="https://www.infineon.com/cms/en/" target="_blank" rel="noopener noreferrer">Infineon Technologies AG</a></td><td><a href="https://www.infineon.com/cms/en/product/microcontroller/32-bit-tricore-microcontroller/" target="_blank" rel="noopener noreferrer">MCAL</a></td><td></td><td></td></tr><tr><td><a href="https://www.kpit.com/" target="_blank" rel="noopener noreferrer">KPIT</a></td><td></td><td><a href="https://www.kpit.com/solutions/autosar/" target="_blank" rel="noopener noreferrer">K-SAR Suite</a></td><td><a href="https://www.kpit.com/workimpact/with-k-sar-editor-tool-engineers-configure-complete-ecus-intuitively-and-comfortably/" target="_blank" rel="noopener noreferrer">K-SAR Editor</a></td></tr><tr><td><a href="https://mentor.com/" target="_blank" rel="noopener noreferrer">Mentor</a></td><td></td><td><a href="https://www.mentor.com/embedded-software/autosar/software" target="_blank" rel="noopener noreferrer">VSTAR</a><br></td><td><a href="https://www.mentor.com/embedded-software/autosar/tools" target="_blank" rel="noopener noreferrer">VSTAR Tools</a></td></tr><tr><td><a href="https://www.nxp.com/" target="_blank" rel="noopener noreferrer">NXP Semiconductors</a></td><td><a href="https://www.nxp.com/design/automotive-software-and-tools/autosar-:AUTOSAR-HOME#developer" target="_blank" rel="noopener noreferrer">MCAL</a></td><td><a href="https://www.nxp.com/design/automotive-software-and-tools/autosar-:AUTOSAR-HOME#developer" target="_blank" rel="noopener noreferrer">OS</a></td><td></td></tr><tr><td><a href="https://www.opensynergy.com/" target="_blank" rel="noopener noreferrer">OpenSynergy</a></td><td></td><td><a href="https://www.opensynergy.com/autosar/" target="_blank" rel="noopener noreferrer">COQOS</a></td><td><a 
href="https://www.opensynergy.com/autosar/" target="_blank" rel="noopener noreferrer">COQOSAReasy</a><br></td></tr><tr><td><a href="https://www.renesas.com/us/en/" target="_blank" rel="noopener noreferrer">Renesas Electronics</a></td><td><a href="https://www.renesas.com/us/en/solutions/automotive/technology/autosar.html" target="_blank" rel="noopener noreferrer">MCAL</a></td><td></td><td></td></tr><tr><td><a href="https://www.st.com/content/st_com/en.html" target="_blank" rel="noopener noreferrer">STMicroelectronics</a></td><td><a href="https://www.st.com/en/embedded-software/spc5-autosar-mcal.html" target="_blank" rel="noopener noreferrer">MCAL</a></td><td></td><td></td></tr><tr><td><a href="https://www.vector.com/" target="_blank" rel="noopener noreferrer">Vector Informatik</a></td><td></td><td><a href="https://www.vector.com/pl/en/products/products-a-z/embedded-components/microsar/" target="_blank" rel="noopener noreferrer">MICROSAR</a></td><td><a href="https://www.vector.com/pl/en/products/products-a-z/software/davinci-developer/" target="_blank" rel="noopener noreferrer">DaVinci Developer</a><br><a href="https://www.vector.com/pl/en/products/products-a-z/software/davinci-configurator-pro/" target="_blank" rel="noopener noreferrer">DaVinci Configurator</a><br><a href="https://www.vector.com/pl/en/products/products-a-z/software/" target="_blank" rel="noopener noreferrer">and more ...</a><br></td></tr></tbody></table><sup></sup></div>
</details>
<div><sup>Tools</sup></div>
- [AUTOSAR Development Tools](https://www.renesas.com/us/en/solutions/automotive/manual-softtools.html) - Renesas Electronics, overview on toolset.
- [Artop](https://www.artop.org/) - The **A**UTOSA**R** **T**ool **P**latform is an implementation of common base functionality for AUTOSAR development tools.
- [EB tresos®Studio documentation <img src="media/icons/pdf.png" height="18"/>](http://read.pudn.com/downloads263/doc/1209805/EB_tresos_Studio_documentation_en.pdf)
<div><sup>Papers</sup></div>
- [Evaluation of Performance and Fault Containment in AUTOSAR Micro-ECUs on a Multi-Core Processor <img src="media/icons/pdf.png" height="18"/>](https://networked-embedded.de/paper/urbina-mcsoc-2018.pdf) - H. Ahmadian, R. Obermaisser, 2018.
- [Efficient Multi-core AUTOSAR-Platform Based on an Input/Output Gateway Core <img src="media/icons/pdf.png" height="18"/>](https://networked-embedded.de/paper/urbina-pdp-2017.pdf) -M.Urbina, R. Obermaisser, 2017.
- [Co-simulation framework for AUTOSAR multi-core processors with message-based Network-on-Chips <img src="media/icons/pdf.png" height="18"/>](https://networked-embedded.de/paper/urbina-indin-2016.pdf) - M. Urbina, R. Obermaisser, 2016.
- [Multi-core architecture for AUTOSAR based on virtual Electronic Control Units <img src="media/icons/pdf.png" height="18"/>](https://networked-embedded.de/paper/urbina-etfa-2015.pdf) - M. Urbina, R. Obermaisser, 2015.
- [Artop – An ecosystem approach for collaborative AUTOSAR tool development <img src="media/icons/pdf.png" height="18"/>](https://hal.archives-ouvertes.fr/hal-02267845/document) - Christian Knüchel, +5 authors, 2010.
- [Interoperable AUTOSAR tooling with Artop <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/aef4/8c42d5252dbacb0aebd4491bb866289b8013.pdf?_ga=2.52826860.519889738.1591091523-1154219747.1586112696) - Sebastian Benz, Michael Rudorfer, Christian Knuechel, 2010.
- [How the concepts of the Automotive standard "AUTOSAR" are realized in new seamless tool-chains <img src="media/icons/pdf.png" height="18"/>](http://web1.see.asso.fr/erts2010/Site/0ANDGY78/Fichier/PAPIERS%20ERTS%202010/ERTS2010_0002_final.pdf) - Dr. Stefan Voget, P. Favrais, 2010.
- [AUTOSAR Runtime Environment and Virtual Function Bus <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/5f71/5e1b0192706de045b7d167b02441b90c2cbd.pdf?_ga=2.107492994.1632464626.1590175047-1154219747.1586112696) - N. A. Naumann, 2009.
- [AUTOSAR Software Architecture <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/b834/d611b20ba32f1cf7be3097f56449c4c350e4.pdf?_ga=2.64870191.1632464626.1590175047-1154219747.1586112696) - Robert Warschofsky, 2009.
- [Methodology and Templates in AUTOSAR <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/235d/35baee4cdea3033492625d96bdc32a51813e.pdf?_ga=2.70596240.1632464626.1590175047-1154219747.1586112696) - Regina Hebig, 2009.
- [How timing interfaces in AUTOSAR can improve distributed development of real-time software <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/1dfb/79ea35a11a96ee199ec3017cf14513fa8aaa.pdf?_ga=2.223182175.519889738.1591091523-1154219747.1586112696) - Kai Richter, +4 authors, 2008.
- [Enabling of AUTOSAR system design using Eclipse-based tooling <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/98ab/00c6e83cab79e9983785b211b15f5c350ded.pdf?_ga=2.224811742.519889738.1591091523-1154219747.1586112696) - Oliver Scheickl, +4 authors, 2008.
- [Achievements and exploitation of the AUTOSAR development partnership <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.461.5164&rep=rep1&type=pdf) - H. Fennel, +23 authors, 2006.
- [AUTomotive Open System ARchitecture – An industry-wide initiative to manage the complexity of emerging Automotive E/E-Architectures <img src="media/icons/pdf.png" height="18"/>](http://www.dii.unimo.it/~zanasi/didattica/Veicolo_OLD/AUTOSAR_Paper_Convergence_2004.pdf) - Harald Heinecke, +8 authors, 2004.
<div><sup>Miscellaneous</sup></div>
- [as <img src="media/icons/github.png" height="18"/>](https://github.com/autoas/as) - Automotive software (OSEK & AUTOSAR) and its tool-chain, see [here](http://autoas.github.io/as/).
- [autosar-framework <img src="media/icons/github.png" height="18"/>](https://github.com/myGiter/autosar-framework) - Master's thesis - framework for reusable AUTOSAR basic software modules.
- [autosar <img src="media/icons/github.png" height="18"/>](https://github.com/cogu/autosar) - A set of python modules for working with AUTOSAR XML files.
## Automotive SPICE
- [ASPICE](http://www.automotivespice.com/download/) - Automotive SPICE® Process Assessment Model (PAM) and Process Reference Model (PRM).
- [Automotive SPICE: Ensuring ASPICE Compliance <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PL5VAskozuQ3DwQIE3A8dGKWIRKPeNBTCG) - 321 Gang, Continuous Engineering Experts.
- [A Seamless Model-Based Development Process for Automotive Systems <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.600.1988&rep=rep1&type=pdf) - Jan Meyer, Matthias Meyer, 2011.
## Autonomous Driving
- [Awesome Autonomous Driving <img src="media/icons/awesome.png" height="14"/>](https://github.com/autonomousdrivingkr/Awesome-Autonomous-Driving)
- [Awesome Autonomous Vehicles <img src="media/icons/awesome.png" height="14"/>](https://github.com/manfreddiaz/awesome-autonomous-vehicles)
- [Awesome Self-Driving Cars <img src="media/icons/awesome.png" height="14"/>](https://github.com/philbort/awesome-self-driving-cars)
## Agile
- [Scaled Agile Framework](https://www.scaledagileframework.com/) - (**SAFe**) is a set of organization and workflow patterns intended to guide enterprises in scaling lean and agile practices.
- [Agile practices when developing safety systems <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/94ee/8799144ed97fa53aa3c9806e2db68d2cc22e.pdf?_ga=2.230015071.23004512.1592854073-1154219747.1586112696) - Thor Myklebust, Narve Lyngby, Tor Stålhane, 2018.
- [An Assessment of Avionics Software Development Practice: Justifications for an Agile Development Process <img src="media/icons/pdf.png" height="18"/>](https://link.springer.com/content/pdf/10.1007%2F978-3-319-57633-6_14.pdf) - Geir Kjetil Hanssen, Gosse Wedzinga, Martijn Stuip, 2017.
- [Scrum, documentation and the IEC 61508-3:2010 software standard <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/0819/fc97fac7e95e4f85d905be5f485fba2f5a54.pdf?_ga=2.37078499.23004512.1592854073-1154219747.1586112696) - Thor Myklebust, Tor Stålhane, Børge Haugset, 2014.
## Bus Systems
### Automotive Ethernet
- [Introduction to Automotive Ethernet](https://elearning.vector.com/mod/page/view.php?id=149) - Vector Informatik, e-learning module.
- [Vector Automotive Ethernet Symposium 2019: Lectures <img src="media/icons/video.png" height="18"/>](https://elearning.vector.com/mod/page/view.php?id=149) - In 7 presentations - by Infineon, NXP, TÜV-Nord and Vector - the speakers showed the current status and solutions for the upcoming challenges, 2019
- [A TCP/IP Tutorial <img src="media/icons/student.png" height="18"/>](https://tools.ietf.org/html/rfc1180) - RFC 1180, short overview on ethernet.
- [Automotive Grade Linux](https://www.automotivelinux.org/) - Open source ethernet stack for automotive.
- [OPEN Alliance. "Automotive Ethernet Specifications"](http://opensig.org/about/specifications/)
- [SOME/IP specification](http://some-ip.com/papers.shtml)
- [vsomeip in 10 minutes](https://github.com/GENIVI/vsomeip/wiki/vsomeip-in-10-minutes) - Introduction to SOME/IP based on GENIVI implementation.
- [Security Analysis of Ethernet in Cars <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/77df/1b9418a0bf67bb9155daa94ef162054dca23.pdf?_ga=2.132109839.1632464626.1590175047-1154219747.1586112696) - Ammar Talic, 2017
- <details>
<summary>Automotive Ethernet Stack - Click to expand <img src="media/icons/warning.png" height="18"/></summary><div class="tg-wrap"><table><thead><tr><th>Use Case</th><th>Audio<br>Video<br></th><th>Time <br>Sync</th><th>Network <br>Managment</th><th>Service <br>Control</th><th>Diagnostic </th><th>Address <br>Config</th><th>Helper<br>Protocols</th></tr></thead><tbody><tr> <td align="center">Application</td> <td align="center"></td> <td align="center"></td> <td align="center"></td> <td align="center"></td><td align="center" rowspan="2"><a href="http://read.pudn.com/downloads191/doc/899044/ISO+14229+(2006).pdf">UDS*</a></td> <td align="center"></td> <td align="center"></td></tr><tr> <td align="center">Presentation</td> <td align="center"></td> <td align="center"></td> <td align="center"></td> <td align="center"></td> <td align="center"></td> <td align="center"></td></tr><tr> <td align="center">Session</td><td align="center" rowspan="2">IEEE 1722<br>(AVTP)<br></td><td align="center" rowspan="2">IEEE 802.1AS <br>(PTP)<br></td> <td align="center">UDP-NM</td> <td align="center"><a href="http://some-ip.com/papers.shtml">SOME/IP</a></td> <td align="center"><a href="http://read.pudn.com/downloads721/ebook/2887987/BS%20ISO%2013400-2-2012.pdf">DoIP*</a></td> <td align="center"><a href="https://tools.ietf.org/html/rfc2131">DHCP</a></td> <td align="center"></td></tr><tr> <td align="center">Transport</td><td align="center" colspan="4"><a href="https://tools.ietf.org/html/rfc793">TCP</a> and/or <a href="https://tools.ietf.org/html/rfc768">UDP</a></td> <td align="center"></td></tr><tr> <td align="center">Network<br></td> <td align="center"></td> <td align="center"></td><td align="center" colspan="4"><a href="https://tools.ietf.org/html/rfc791">IPv4</a>/<a href="https://tools.ietf.org/html/rfc2460">IPv6</a></td> <td align="center"><a href="https://tools.ietf.org/html/rfc792">ICMP</a>, <a href="https://tools.ietf.org/html/rfc4443">ICMPv6</a>,<br><a href="https://tools.ietf.org/html/rfc826">ARP</a>, <a href="https://tools.ietf.org/html/rfc4861">NDP</a><br></td></tr><tr> <td align="center">Data Link</td><td align="center" colspan="7">Ethernet MAC + VLAN (802.1Q)</td></tr><tr> <td align="center">Physical</td><td align="center" colspan="7">Automotive Ethernet Physical <br>(Ethernet, <a href="http://opensig.org/about/specifications/">OPEN Alliance BroadR-Reach</a>, Reduced twisted-pair Gigabit Eth)</td></tr></tbody></table><sup>(*) - superseded by newer version of standard</sup></div>
</details>
### CAN
- [CiA – CAN In Automation <img src="media/icons/warning.png" height="18"/>](https://www.can-cia.org/) - A user organization for people interested in CAN.
- [Bosch specification <img src="media/icons/pdf.png" height="18"/>](http://esd.cs.ucr.edu/webres/can20.pdf) - Specification superseded by the standard [ISO 11898](https://www.iso.org/standard/63648.html).
- [Bosch CAN FD specification Version 1.0 <img src="media/icons/pdf.png" height="18"/>](https://web.archive.org/web/20151211125301/http://www.bosch-semiconductors.de/media/ubk_semiconductors/pdf_1/canliteratur/can_fd_spec.pdf)
- [Controller Area Network (CAN) Schedulability Analysis: Refuted, Revisited and Revised](https://link.springer.com/article/10.1007%2Fs11241-007-9012-7)
- [Controller Area Network (CAN) Implementation Guide <img src="media/icons/pdf.png" height="18"/>](https://www.analog.com/media/en/technical-documentation/application-notes/AN-1123.pdf)
- [Introduction to CAN](https://elearning.vector.com/mod/page/view.php?id=333) - Vector Informatik, e-learning module.
- [Controller Area Network <img src="media/icons/pdf.png" height="18"/>](http://inst.cs.berkeley.edu/~ee249/fa08/Lectures/handout_canbus1.pdf) - UC Berkeley, presentation.
- [Understanding and Using the Controller Area Network <img src="media/icons/pdf.png" height="18"/>](http://inst.cs.berkeley.edu/~ee249/fa08/Lectures/handout_canbus2.pdf) - UC Berkeley, CAN 2.0b.
- [CAN Protocol <img src="media/icons/student.png" height="18"/>](https://www.kvaser.com/course/can-protocol-tutorial/) - Kvaser, tutorial.
- [CAN magazine](https://can-newsletter.org/magazine) - CiA publications.
### FlexRay
- [FlexRay Specification <img src="media/icons/pdf.png" height="18"/>](https://svn.ipd.kit.edu/nlrp/public/FlexRay/FlexRay%E2%84%A2%20Protocol%20Specification%20Version%203.0.1.pdf)
- [FlexRay Overview](https://www.ni.com/pl-pl/innovations/white-papers/06/flexray-automotive-communication-bus-overview.html) - National Instruments.
- [Introduction to FlexRay](https://elearning.vector.com/mod/page/view.php?id=371) - Vector Informatik, e-learning module.
- [The FlexRay Electrical Physical Layer Evolution <img src="media/icons/pdf.png" height="18"/>](https://web.archive.org/web/20150216112537/http://www.hanser-automotive.de/fileadmin/heftarchiv/FLEX_10_ONL_NXP-Y.pdf) - Lorenz Steffen, magazine Automotive 2010.
### LIN
- [Introduction to LIN](https://elearning.vector.com/mod/page/view.php?id=309) - Vector Informatik, e-learning module.
- [LIN Supplier ID Registration Authority](https://www.lin-cia.org/id/) - Standardized in the ISO 17987 series.
- [The LIN Short Story <img src="media/icons/pdf.png" height="18"/>](https://www.nxp.com/files-static/training_pdf/29021_S08_SLIN_WBT.pdf) - NXP Semiconductors.
### MOST
- [MOST Cooperation Website](https://www.mostcooperation.com/) - Technology overview and specifications.
## Functional Safety
<img align="right" width="250" src="media/safety.png">
- [ISO 26262-1:2011 Road vehicles — Functional safety — Part 1: Vocabulary](https://www.iso.org/obp/ui/#iso:std:iso:26262:-1:ed-1:v1:en) - ISO Online Browsing Platform (OBP).
- [IEC 61508-1:2010 <img src="media/icons/github.png" height="18"/>](https://github.com/wangdong412/Consen-SIS/tree/master/IEC61508) - Functional safety of electrical/electronic/programmable electronic safety-related systems.
- [exida Recorded Webinars <img src="media/icons/video.png" height="18"/>](https://www.exida.com/Webinars/Recordings) - exida worlds leading company for certification, safety, alarm management, cybersecurity.
- [Matrickz TechTalk <img src="media/icons/podcast.png" height="18"/>](https://www.matrickz.de/en/podcasts.html) - Matrickz, about ASPICE, Security and Safety (ISO26262).
- [What is the ISO 26262 Functional Safety Standard ?](https://www.ni.com/pl-pl/innovations/white-papers/11/what-is-the-iso-26262-functional-safety-standard-.html#toc2) - National Instruments.
- [Criticality categories across safety standards in different domains <img src="media/icons/pdf.png" height="18"/>](https://web1.see.asso.fr/erts2012/Site/0P2RUC89/1A-1.pdf) - ERTS2 Congress.
- [A Case Study of Toyota Unintended Acceleration and Software Safety <img src="media/icons/video.png" height="18"/>](https://www.exida.com/Webinars/Recordings) - Philip Koopman, and [slides <img src="media/icons/pdf.png" height="18"/>](https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_slides.pdf).
- [NASA Software Safety Guidebook](https://standards.nasa.gov/standard/nasa/nasa-gb-871913) - NASA, 2004.
- [Sudden unintended acceleration (SUA)](https://en.wikipedia.org/wiki/Sudden_unintended_acceleration#Sudden_acceleration_in_Toyota_vehicles) - Wikipedia, the US NHTSA estimates 16,000 accidents per year in USA.
- [Results of 2017 Embedded Systems Safety & Security Survey <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=EMrgTOoRARE&feature=youtu.be&t=1) - Barr Group, list of all [webinars <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLjjaR7ZI1lwO6GqCAgWh003f834InzdUa).
- [Talk on safety-critical systems and criticisms of the standards <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=EMrgTOoRARE&feature=youtu.be&t=1) - Professor Martyn Thomas CBE.
- [Talk on correctness by construction techniques <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=03mUs5NlT6U&feature=youtu.be&t=1) - Professor Martyn Thomas CBE.
- [Knowledge Bank of technical articles, presentations and talks](https://www.risktec.tuv.com/knowledge-bank/) - Risktec - TÜV Rheinland.
- [Tools and Methods for Validation and Verification as requested by ISO26262 <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.397.1932&rep=rep1&type=pdf) - Markus Gebhardt, Axel Kaske, 2014.
- [A Reference Example on the Specification of Safety Requirements using ISO 26262 <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.372.2716&rep=rep1&type=pdf) - J. Westman, M. Nyberg, 2013.
- [Early Safety Evaluation of Design Decisions in E/E Architecture according to ISO 26262 <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.666.8479&rep=rep1&type=pdf) - Vladimir Rupanov Alois, +4authors, 2012.
- [Safety Critical Systems: Challenges and Directions <img src="media/icons/pdf.png" height="18"/>](http://users.encs.concordia.ca/~ymzhang/courses/reliability/ICSE02Knight.pdf) - J.C. Knight, 2002.
## Cyber Security
- [Automotive Cybersecurity Overview](https://www.nhtsa.gov/crash-avoidance/automotive-cybersecurity) - From NHTSA (United States Department of Transportation), set of articles.
- [Cyber Security - SIG](https://site.ieee.org/ocs-cssig/?page_id=736) - From IEEE.org, numbers of great resources.
- [The Car Hacker's Handbook - A Guide for the Penetration Tester <img src="media/icons/pdf.png" height="18"/><img src="media/icons/book.png" height="18"/>](https://docs.alexomar.com/biblioteca/thecarhackershandbook.pdf) - Craig Smith, 2016.
- [Vector Cybersecurity Symposium 2016: Lectures <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLLKv-zcGiHJHdtX4Vmw8n8DBFuDlmQCQy) - From official Vector Informatik YouTube channel. On 23rd June 2016.
- [Vector Cybersecurity Symposium 2017: Lectures <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLLKv-zcGiHJGvyWfoPaTMw0QN3306wTPm) - From official Vector Informatik YouTube channel.
- [Vector Cybersecurity Symposium 2019: Lectures <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLLKv-zcGiHJHxvK3v0sRYO9Kpnpb-Thz9) - From official Vector Informatik YouTube channel. On April 3rd 2019.
- [Vehicle Control Unit Security using Open Source AUTOSAR <img src="media/icons/pdf.png" height="18"/>](http://publications.lib.chalmers.se/records/fulltext/219822/219822.pdf) - Masters Thesis in Software Engineering.
- [Awesome Vehicle Security <img src="media/icons/awesome.png" height="14"/>](https://github.com/jaredthecoder/awesome-vehicle-security) - Books, hardware, software, applications, people to follow, car hacking and tinkering.
## Measurement and Calibration
- [ASAM MCD-1 XCP](https://www.asam.net/standards/detail/mcd-1-xcp/wiki/) - ASAM (Association for Standardisation of Automation and Measuring Systems) standard description.
- [XCP – The Standard Protocol for ECU Development <img src="media/icons/pdf.png" height="18"/><img src="media/icons/book.png" height="18"/>](https://assets.vector.com/cms/content/application-areas/ecu-calibration/xcp/XCP_ReferenceBook_V3.0_EN.pdf) - Andreas Patzer, Rainer Zaiser, Vector Informatik GmbH, 2016.
- [XCP fundamentals: measuring, calibrating and bypassing based on the ASAM standard <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=Fo3S3vKn1dk) - Official Vector Informatik YouTube channel.
## Vehicle Diagnostics
- [ISO 14229-1:2006 <img src="media/icons/pdf.png" height="18"/>](http://read.pudn.com/downloads191/doc/899044/ISO+14229+(2006).pdf) - Unified Diagnostic Services (UDS) specification, superseded by the standard [ISO 14229-1:2013](https://www.iso.org/standard/55283.html).
- [ISO 13400-2:2012 <img src="media/icons/pdf.png" height="18"/>](http://read.pudn.com/downloads721/ebook/2887987/BS%20ISO%2013400-2-2012.pdf) - Road vehicles - Diagnostic communication over Internet Protoco (DoIP).
- [Information Posters](https://automotive.softing.com/en/service/order-of-information-poster.html) - Softing Automotive, about UDS, ODX, OTX, DoIP.
- [Diagnostics and Flashing <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLLKv-zcGiHJFZ0ueLgYRZfSa6l-eTcwBh) - Official Vector Informatik YouTube channel, more [here](https://vctr.it/2B8hbJh).
- [Usage of AUTOSAR diagnostic modules in a MOST electronic control unit <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/85e2/1a2e7778443f7b113b58b9f9ada812959757.pdf?_ga=2.266760778.519889738.1591091523-1154219747.1586112696) - Paul Hoser, 2008.
- [Unified Diagnostic Services Protocol Implementation in an Engine Control Unit <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/f58e/dbc2c2faf010f03f7fc64798996adc160727.pdf?_ga=2.26818169.519889738.1591091523-1154219747.1586112696) - Panuwat Assawinjaipetch, Michael Heeg, Daniel Gross, Stefan Kowalewski, 2013.
- [Remote Vehicle Diagnostics over the Internet using the DoIP Protocol <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.418.5332&rep=rep1&type=pdf) - Mathias Johanson, Pål Dahle, Andreas Söderberg
## Software Development Process
### Requirements
<div><sup>General</sup></div>
- [IEEE Std 1233 <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/4018/ea1263f10052e3197c4d2a866b62fde83167.pdf) - IEEE Guide for Developing System Requirements Specifications, 1998.
- [Requirements Engineering in Automotive Development: Experiences and Challenges <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.490.1707&rep=rep1&type=pdf) - M. Weber, J. Weisbrod, 2002.
<div><sup>Polarion Software</sup></div>
- [Polarion Tutorial Videoss <img src="media/icons/student.png" height="18"/><img src="media/icons/video.png" height="18"/>](https://polarion.plm.automation.siemens.com/tutorials) - From tool vendor - Siemens Industry Software.
- [Vector Polarion Connection Utility <img src="media/icons/video.png" height="18"/>](https://elearning.vector.com/mod/page/view.php?id=149) - Add-on tool for Vector vTESTstudio that serves to integrate Siemens Polarion ALM into the Vector testing tool chain.
<div><sup>Rational DOORS</sup></div>
- [Getting started <img src="media/icons/student.png" height="18"/>](https://www.ibm.com/developerworks/rational/library/getting-started-ibm-rational-doors/index.html) - Tutorial for IBM Rational DOORS and IBM Rational DOORS Web Access.
- [Documentation](https://www.ibm.com/support/pages/node/594725) - Library pages contain documentation for earlier versions of Rational products.
- [Essentials <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLFB5C518530CFEC93) - Hands-on examples.
- [IBM Rational Rhapsody tips and tricks <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLaBR7gZA1IOjxthOjpG3aAKeyRt04Wlhd) - Hands-on examples.
- [Using DXL](https://www.ibm.com/support/knowledgecenter/SSYQBZ_9.5.0/com.ibm.doors.configuring.doc/topics/c_dxl.html) - The Rational® DOORS® eXtension Language (DXL) is an easy-to-learn scripting language that you can use to control and extend Rational DOORS functions.
- [The DXL Reference Manual](https://www.ibm.com/support/knowledgecenter/SSYQBZ_9.5.0/com.ibm.doors.requirements.doc/topics/dxl_reference_manual.pdf?view=kc)
### Design
<div><sup>General</sup></div>
- [ISO/IEC/IEEE42010 <img src="media/icons/pdf.png" height="18"/>](https://nanopdf.com/download/iso-iec-ieee-420102011e-systems-and-software-engineering_pdf) - Systems and software engineering — Architecture description, 2011.
- [IEEE Std 1016 <img src="media/icons/pdf.png" height="18"/>](http://ccftp.scu.edu.cn:8090/Download/b4994628-e3e2-450f-882b-488939cecf30.pdf) - IEEE Recommended Practice for Software Design Descriptions, 1998.
- [A Gateway Core between On-chip and Off-chip Networks for an AUTOSAR Message-based Multi-core Platform <img src="media/icons/pdf.png" height="18"/>](https://networked-embedded.de/paper/urbina-ame-2016.pdf) - M. Urbina and R. Obermaisser, 2016.
- [Automotive real time development using a timing-augmented AUTOSAR specification <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/ca8c/6d82300061c0ad31d7717fc00e0875cbd96e.pdf?_ga=2.190666095.519889738.1591091523-1154219747.1586112696) - O. Scheickl, M. Rudorfer, 2008.
- [Enterprise Architect](https://sparxsystems.com/products/ea/) - Sparx Systems - tool vendor, contains demo, tutorials and more.
- [Gaphor](https://gaphor.org) - The simple open source modeling tool supporting UML and SysML.
- [Awesome Software Architecture <img src="media/icons/awesome.png" height="14"/>](https://github.com/simskij/awesome-software-architecture)
<div><sup>SysML</sup></div>
- [SysML for embedded automotive Systems: lessons lear ned <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/8b50/8115cf085b6ec71c32bba83c553801ac8985.pdf?_ga=2.228990748.1632464626.1590175047-1154219747.1586112696) - J-D. Piques, E. Andrianarison, 2011.
- [SysML for embedded automotive Systems : a practical approach <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/732a/11ca70fb34e05e47276500594c48f83e93d7.pdf?_ga=2.233208222.1632464626.1590175047-1154219747.1586112696) - E. Andrianarison, J-D. Piques, 2010.
- [Model synchronization at work: keeping SysML and AUTOSAR models consistent <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.7330&rep=rep1&type=pdf) - H. Giese, S. Hildebrandt, S. Neumann, 2010.
### Implementation
<img align="right" width="400" src="media/line_code.png">
<div><sup>General</sup></div>
- [IEEE Std 830 <img src="media/icons/pdf.png" height="18"/>](http://www.math.uaa.alaska.edu/~afkjm/cs401/IEEE830.pdf) - IEEE Recommended Practice for Software Requirements Specifications, 1998.
- [IEEE Std 730 <img src="media/icons/pdf.png" height="18"/>](http://mazure.fr/attic/IEEE7301989.pdf) - IEEE Standard for Software Quality Assurance Plans, 1998.
- [Software engineering for automotive systems: A roadmap <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.6142&rep=rep1&type=pdf) - Alexander Pretschner, +3authors, 2007.
<div><sup>Code</sup></div>
- [Driving Into the Future With Modern C++: A Look at Adaptive Autosar <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=YzyGgZ_RClw&feature=emb_title) - Jan Babst, CppCon, 2017.
- [Guidelines for the use of the C++14 language in critical and safety-related systems <img src="media/icons/pdf.png" height="18"/>](https://www.autosar.org/fileadmin/user_upload/standards/adaptive/17-03/AUTOSAR_RS_CPP14Guidelines.pdf) - AUTOSAR standard.
- [MISRA](https://www.misra.org.uk/Publications/tabid/57/Default.aspx) - Publications from world-leading best practice guidelines for the safe and secure application.
- [Modern Embedded Systems Programming <img src="media/icons/student.png" height="18"/><img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLPW8O6W-1chwyTzI3BHwBLbGQoPFxPAPM) - Series of hands-on lessons about embedded microcontrollers in C.
- [Safe Software for Autonomous Mobility With Modern C++ <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=5WbdLUc9Jls) - Andreas Pasternak, CppCon.
- [SEI CERT Coding Standards](https://wiki.sei.cmu.edu/confluence/display/c/SEI+CERT+C+Coding+Standard) - Languages such as C, C++, Java, and Perl, and the Android™ platform.
- [Writing Safety Critical Automotive C++ Software for High Performance AI Hardware <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=F4GzsA00s5I) - Michael Wong, CppCon.
- [Awesome C <img src="media/icons/awesome.png" height="14"/>](https://github.com/aleksandar-todorovic/awesome-c)
- [Awesome C++ <img src="media/icons/awesome.png" height="14"/>](https://github.com/fffaraz/awesome-cpp#readme)
- [Awesome Embedded <img src="media/icons/awesome.png" height="14"/>](https://github.com/nhivp/Awesome-Embedded)
- [Awesome MATLAB <img src="media/icons/awesome.png" height="14"/>](https://github.com/mikecroucher/awesome-MATLAB)
<div><sup>Debug</sup></div>
- [MULTI Integrated Development Environment](https://www.ghs.com/products/MULTI_IDE.html)
- [Trace32 Lauterbach GmbH](https://www.lauterbach.com/frames.html?home.html) - High-tech company for microprocessor development tools.
- [Trace32 basic examples of usage <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLlgTI9rjcm35NUgKufepfqgn6Fd4zBe88) - Lauterbach GmbH.
- [Trace32: Debug your embedded systems <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PL1sbHjUq1DdqQSBlk-uM-EJ3O1iof0-IN) - Nohau Solutions.
- [iSYSTEM AG](https://www.isystem.com/products/id-3rd-party-software-support/autosar.html) - Debugging tools supplier WinIDEA, iC5000 Base Unit, testIDEA.
### Software Test
<div><sup>General</sup></div>
- [Simulation Environment based on SystemC and VEOS for Multi-Core Processors with Virtual AUTOSAR ECUs <img src="media/icons/pdf.png" height="18"/>](https://networked-embedded.de/paper/urbina-etfa-2015.pdf) - M. Urbina, Z. Owda, R. Obermaisser, 2015.
- [Software Testing Symposium 2018: Lectures <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLLKv-zcGiHJHCmgtUcp5YOfmNkEgiXERd) - Official Vector Informatik YouTube channel.
- [Awesome Software Quality <img src="media/icons/awesome.png" height="14"/>](https://github.com/ligurio/awesome-software-quality#readme)
<div><sup>MC/DC</sup></div>
- [A Practical Tutorial on Modified Condition/Decision Coverage <img src="media/icons/pdf.png" height="18"/>](https://shemesh.larc.nasa.gov/fm/papers/Hayhurst-2001-tm210876-MCDC.pdf)
- [The Effect of Program and Model Structure on MC⁄DC Test Adequacy Coverage <img src="media/icons/pdf.png" height="18"/>](http://se.inf.ethz.ch/old/teaching/2009-S/0276/slides/fiva.pdf)
<div><sup>Unit test</sup></div>
- [ARUnit](https://www.artop.org/arunit/) - Unit Testing of AUTOSAR Software Components, [Artop](https://www.artop.org) sub-project.
- [Google Test <img src="media/icons/github.png" height="18"/>](https://github.com/google/googletest) - Google's C++ test framework.
- [Googletest Mocking (gMock) Framework <img src="media/icons/github.png" height="18"/>](https://github.com/google/googletest/tree/master/googlemock) - Google's framework for writing and using C++ mock classes.
- [Fake Function Framework (fff) <img src="media/icons/github.png" height="18"/>](https://github.com/meekrosoft/fff) - Micro-framework for creating fake C functions for tests.
- [Unit Testing C Code <img src="media/icons/stackoverflow.png" height="18"/>](https://stackoverflow.com/questions/65820/unit-testing-c-code?page=1&tab=votes#tab-top) - Discussion with overview on available C unit test frameworks.
<div><sup>Static analysis</sup></div>
- [Awesome Static Analysis <img src="media/icons/awesome.png" height="14"/>](https://github.com/analysis-tools-dev/static-analysis)
<div><sup>Timing analysis</sup></div>
- [GLIWA](https://www.gliwa.com/) - Worldwide leading provider for timing analysis, optimization and verification, [resources](https://www.gliwa.com/index.php?page=papers&lang=eng).
- [Runtime Analysis of AUTOSAR Embedded Projects <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/watch?v=C2NFKwUOpMk&list=PLLUr1-D7UabianTZOBIPKH1sA4M4nKhTw&index=2&t=5767s) - Florian Sommer, Sebastian Ziegler.
- [TA Tool Suite - Managing the Timing Behavior of AUTOSAR Multi-Core ECUs](https://www.vector.com/int/en/products/products-a-z/software/ta-tool-suite/) - Vector Informatik.
- [Tool support for seamless system development based on AUTOSAR timing extensions <img src="media/icons/pdf.png" height="18"/>](https://pdfs.semanticscholar.org/04c8/ba5319986e246f96df2be8307eb09bd1690f.pdf?_ga=2.65429098.519889738.1591091523-1154219747.1586112696) - Oliver Scheickl, Christoph Ainhauser, Peter Gliwa, 2012.
- [Timing Simulation of Interconnected AUTOSAR Software-Components <img src="media/icons/pdf.png" height="18"/>](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.659.7962&rep=rep1&type=pdf) - Matthias Krause, +4 authors, 2007.
### Functional Test
<div><sup>General</sup></div>
- [Vector Testing Symposium 2017: Lectures <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLLKv-zcGiHJEpfR5iAZBNjpl1NIpRA7Gw) - Official Vector Informatik YouTube channel.
- [Vector Testing Symposium 2018: Lectures <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PLLKv-zcGiHJFxt_WSazEXShViv_jnlu0K) - Official Vector Informatik YouTube channel.
<div><sup>CANoe</sup></div>
- [CANoe: Product Videos <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/playlist?list=PL9EA087B9E8301D23) - Official Vector Informatik YouTube channel.
- [Programming with CAPL <img src="media/icons/pdf.png" height="18"/>](https://can-newsletter.org/assets/files/media/raw/a456e3078f907a0482182ce831912427.pdf)
- [Tips and Tricks for the Use of CAPL](https://kb.vector.com/entry/875/) - Three consecutive articles, for all levels of user knowledge [Part One](https://kb.vector.com/upload_551/file/CAPL_1_CANNewsletter_201406_PressArticle_EN.pdf), [Part Two](https://kb.vector.com/upload_551/file/CAPL_2_CANNewsletter_201409_PressArticle_EN.pdf), [Part Three](https://kb.vector.com/upload_551/file/CAPL_3_CANNewsletter_201411_PressArticle_EN.pdf).
## Blogs
- [just auto](https://www.just-auto.com/) - Global automotive industry news, data and analysis. Recent information about OEMs and suppliers.
- [automotivetechis](https://automotivetechis.wordpress.com/) - From an engineer with 10 years in the automotive domain.
- [automotive wiki](https://automotive.wiki/index.php/Main_Page) - From [SCHEID automotive GmbH](https://www.scheid-automotive.com/).
- [AUTOSAR tutorials](https://autosartutorials.com)
- [Small Business Programming](https://smallbusinessprogramming.com/) - A set of great articles on a wide range of programming topics.
## Books
- [Technical Papers on the Development of Embedded Electronics <img src="media/icons/pdf.png" height="18"/><img src="media/icons/book.png" height="18"/><img src="media/icons/warning.png" height="18"/>](https://assets.vector.com/cms/content/know-how/_technical-articles/Pressbook_EN_2018.pdf) - Vector Informatik GmbH, 2018
- [Automotive Embedded Systems Handbook <img src="media/icons/pdf.png" height="18"/><img src="media/icons/book.png" height="18"/>](https://d1.amobbs.com/bbs_upload782111/files_38/ourdev_629261ASTZIF.pdf) - Nicolas Navet, 2009.
- [Understanding Automotive Electronics Eighth Edition <img src="media/icons/pdf.png" height="18"/><img src="media/icons/book.png" height="18"/>](https://www.engbookspdf.com/uploads/pdf-books/UnderstandingAutomotiveElectronics8theditionbyWilliamB.Ribbens-1.pdf) - William B. Ribbens, 2017.
- [FMEA Handbook <img src="media/icons/pdf.png" height="18"/><img src="media/icons/book.png" height="18"/>](https://fsp.portal.covisint.com/documents/106025/14555722/FMEA%20Handbook%20v4.2/4c14da5c-0842-4e60-a88b-75c18e143cf7) - Ford, 2011.
- [The Car Hacker's Handbook - A Guide for the Penetration Tester <img src="media/icons/pdf.png" height="18"/><img src="media/icons/book.png" height="18"/>](https://docs.alexomar.com/biblioteca/thecarhackershandbook.pdf) - Craig Smith, 2016.
- [engineeringbookspdf <img src="media/icons/search.png" height="18"/>](https://www.engineeringbookspdf.com/automobile-engineering/) - Offers free access to about 150 automotive books.
- [engbookspdf <img src="media/icons/search.png" height="18"/>](https://www.engbookspdf.com/Automobile/) - Offers free access to about 35 automotive books.
- [engbookspdf <img src="media/icons/search.png" height="18"/>](http://www.engineering108.com/pages/Automobile_Engineering/Automobile-engineering-ebooks-free-download.html) - Offers free access to about 5 automotive books.
- [eBooks-IT.org <img src="media/icons/search.png" height="18"/>](http://www.engineering108.com/pages/Automobile_Engineering/Automobile-engineering-ebooks-free-download.html) - Online library for IT ebooks.
- [Free Programming Books <img src="media/icons/awesome.png" height="14"/>](https://github.com/sindresorhus/awesome)
- [Free Software Testing Books <img src="media/icons/awesome.png" height="14"/>](https://github.com/ligurio/awesome-software-quality#readme)
## Magazines
- [SAE Magazines <img src="media/icons/warning.png" height="18"/>](https://www.sae.org/publications/magazines) - A set of free magazines from the automotive industry.
- [Vehicle Electronics](https://vehicle-electronics.biz/) - Free monthly magazine for automotive electronics engineers.
- [CAN magazine](https://can-newsletter.org/magazine) - CiA publications.
## Podcasts
- [SAE Tomorrow Today <img src="media/icons/podcast.png" height="18"/>](https://www.sae.org/podcasts) - SAE International, provides unique and dynamic perspectives from innovative industry leaders.
- [Matrickz TechTalk <img src="media/icons/podcast.png" height="18"/>](https://www.matrickz.de/en/podcasts.html) - Matrickz, about ASPICE, Security and Safety (ISO26262).
- [Embedded.fm <img src="media/icons/podcast.png" height="18"/>](https://embedded.fm/) - A site dedicated to the many aspects of engineering.
## Press releases
- [Continental AG](https://www.continental.com/en/press/press-releases)
- [Elektrobit (EB)](https://www.elektrobit.com/tech-corner/)
- [Renesas Electronics Corporation](https://www.renesas.com/us/en/solutions/automotive.html)
- [OPEN Alliance](http://opensig.org/news/press-releases/)
- [SAE International](https://www.sae.org/news/press-room)
- [Softing Automotive Electronics GmbH](https://automotive.softing.com/en/service/press-publications/press-releases.html)
- [Vector Informatik GmbH](https://www.vector.com/int/en/search/?tx_solr%5Bfilter%5D%5B0%5D=contentType%3Atx_solr_file&tx_solr%5Bsort%5D=datetime+desc&tx_solr%5BresultsPerPage%5D=10)
## Videos
- [Automotive Logistics <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/user/autologisticschannel)
- [Embedded Meetup Egypt <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/channel/UC4iQ7Bz-3MKeMsfs3Bb4QjQ/featured) - Webinars related to software development for automotive embedded systems.
- [Official Elektrobit <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/user/EBAutomotiveSoftware/featured)
- [MATLAB Videos and Webinars <img src="media/icons/video.png" height="18"/>](https://www.mathworks.com/videos.html)
- [Official Vector Informatik YouTube channel <img src="media/icons/video.png" height="18"/>](https://www.youtube.com/channel/UC7P-ggVSMhM28LmVzwf2BQw)
## Miscellaneous
- [Universität Siegen](https://networked-embedded.de/es/index.php/PublicationList.html) - Publication list on safety-critical systems and AUTOSAR projects.
- [Vector Support & Downloads](https://www.vector.com/int/en/search/?tx_solr%5Bfilter%5D%5B0%5D=contentType%3Atx_solr_file&tx_solr%5Bsort%5D=datetime+desc&tx_solr%5BresultsPerPage%5D=10) - Over 1000 great materials: webinars, articles and more.
- [Vector Knowledge Base](https://kb.vector.com/) - Vector platform with examples and solutions for problems related to offered products.
- [TOP 100 OEM suppliers <img src="media/icons/pdf.png" height="18"/>](https://www.autonews.com/assets/PDF/CA89220617.PDF) - Supplement to Automotive News magazine, 2013.
- [Awesome Indexed <img src="media/icons/awesome.png" height="14"/><img src="media/icons/search.png" height="18"/>](https://awesome-indexed.mathew-davies.co.uk/) - Search the Awesome dataset.
- [Awesome Search <img src="media/icons/awesome.png" height="14"/><img src="media/icons/search.png" height="18"/>](https://awesomelists.top/) - Quick search for Awesome lists.
<!--lint enable awesome-list-item-->
## Contribute
Contributions welcome! Read the [contribution guidelines](contributing.md) first.
| 121.17199 | 4,995 | 0.752195 | yue_Hant | 0.258829 |
dbd1551d9976d83232652a4733fd8bc972c4eac4 | 4,979 | md | Markdown | Office365-Admin/services-in-china/change-nameservers-at-ename.md | jmrfonseca/OfficeDocs-O365ITPro | a93d5bc724fff2d24605f7c114affe4914f51e3f | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-02-06T10:05:01.000Z | 2019-02-06T10:05:56.000Z | Office365-Admin/services-in-china/change-nameservers-at-ename.md | jmrfonseca/OfficeDocs-O365ITPro | a93d5bc724fff2d24605f7c114affe4914f51e3f | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-03-07T16:16:51.000Z | 2019-06-04T20:22:07.000Z | Office365-Admin/services-in-china/change-nameservers-at-ename.md | jmrfonseca/OfficeDocs-O365ITPro | a93d5bc724fff2d24605f7c114affe4914f51e3f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Change nameservers to set up Office 365 with ENAME"
ms.author: pebaum
author: pebaum
manager: mnirkhe
ms.audience: Admin
ms.topic: get-started-article
f1_keywords:
- 'O365P_DOM_ENAME1'
ms.service: o365-administration
localization_priority: Normal
ms.custom:
- Core_O365Admin_Migration
- domainsgallatin
search.appverid:
- MET150
- GEA150
ms.assetid: aa17fe0b-bfa7-41b8-a145-25dbc7c375a3
description: "Learn how you can set up Office 365 operated by 21Vianet to manage your DNS records, when ENAME is the DNS hosting provider."
monikerRange: 'o365-21vianet'
---
# Change nameservers to set up Office 365 with ENAME
Follow the instructions below if you want Office 365 operated by 21Vianet to manage your Office 365 DNS records for you. (If you prefer, you can [manage all your DNS records at ENAME](create-dns-records-at-ename.md).)
## Add a record to verify your domain
<a name="BKMK_add_a_record"> </a>
Before you use your domain with Office 365, we have to make sure that you own it. Your ability to log in to your account at your domain registrar and create the DNS record proves to Office 365 that you own the domain.
> [!NOTE]
> This record is used only to verify that you own your domain; it doesn't affect anything else. You can delete it later, if you like.
1. In your browser, go to [your domains list at ENAME](https://www.ename.net/manage/domainlist) and sign in.

2. In the right pane, in the **操作** (actions) column for the domain that you want to update, click **管理** (manage).

3. Click **域名解析** (domain name resolution).
The DNS records page for your domain opens.

4. Click **添加记录** (add a record).

5. Make sure that the fields are set to precisely the following values for the empty record:
- **主机记录** (host name): Leave the box blank.
- **记录类型** (record type): **TXT**
- **记录值(IP/域名)** (value): Paste the **Destination or Points to Address** value that you just copied.
- **线路类型** (network): Use the default value.
- **TTL**: **3600**

6. Click **保存** (save).
Now that you've added the record at your domain registrar's site, you'll go back to Office 365 and ask Office 365 to look for the record.
When Office 365 finds the correct TXT record, your domain is verified.
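Optionally, before you start the verification steps below, you can confirm that the TXT record is already publicly visible. This check is an addition for illustration and not part of the official procedure; it uses Node.js, and both the domain name and the record value shown are placeholders.

```typescript
// Sketch: confirm the ownership TXT record is publicly visible before
// requesting verification. "your_domain.com" and "MS=msXXXXXXXX" are placeholders;
// use your own domain and the value Office 365 gave you.
import { resolveTxt } from "node:dns/promises";

async function hasVerificationRecord(domain: string, expected: string): Promise<boolean> {
  // resolveTxt returns each TXT record as an array of string chunks.
  const records = await resolveTxt(domain);
  return records.some((chunks) => chunks.join("").includes(expected));
}

hasVerificationRecord("your_domain.com", "MS=msXXXXXXXX")
  .then((found) => console.log(found ? "TXT record found" : "TXT record not visible yet"))
  .catch((err) => console.error("Lookup failed:", err));
```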
1. Choose **Setup** \> **Domains**.
2. On the **Manage domains** page, in the **Action** column for the domain you are verifying, choose **Start setup**.

3. On the **Add this TXT record to show you own** ***domain_name*** page, choose **Okay, I've added the record** and then, in the confirmation dialog box, choose **Finish**.

## Change your domain's nameserver records
<a name="BKMK_change_your_domain_s_1"> </a>
To complete setting up your domain with Office 365, you change your domain's NS records at your domain registrar to point to the Office 365 operated by 21Vianet primary and secondary name servers. This sets up Office 365 to update the domain's DNS records for you. We'll add all records so that email, Lync, and your public website work with your domain, and you'll be all set.
> [!CAUTION]
> When you change your domain's NS records to point to the Office 365 name servers, all the services that are currently associated with your domain are affected. For example, all email sent to your domain (like rob@ *your_domain*.com) will start coming to Office 365 after you make this change.
1. In your browser, go to [your domains list at ENAME](https://www.ename.net/manage/domainlist) and sign in.

2. In the right pane, in the **操作** (actions) column for the domain that you want to update, click **管理** (manage).

3. Click **修改DNS** (change name servers).

4. In the **DNS1** row, type or paste **ns1.dns.partner.microsoftonline.cn**.
5. In the **DNS2** row, type or paste **ns2.dns.partner.microsoftonline.cn**.
6. Click **确认修改** (change).

> [!NOTE]
> Your nameserver record updates may take several hours to propagate across the Internet's DNS system. After that, your Office 365 email and other services will be all set to work with your domain.
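If you want to know when the nameserver change has gone live, you can check the NS records yourself. The sketch below is an addition for illustration (not an official step); the domain name is a placeholder.

```typescript
// Sketch: check whether the domain's NS records already point to the
// Office 365 operated by 21Vianet name servers. "your_domain.com" is a placeholder.
import { resolveNs } from "node:dns/promises";

const expected = ["ns1.dns.partner.microsoftonline.cn", "ns2.dns.partner.microsoftonline.cn"];

async function nameserversUpdated(domain: string): Promise<boolean> {
  const servers = (await resolveNs(domain)).map((s) => s.toLowerCase());
  return expected.every((ns) => servers.includes(ns));
}

nameserversUpdated("your_domain.com")
  .then((done) => console.log(done ? "NS records point to Office 365." : "Still propagating..."))
  .catch((err) => console.error("Lookup failed:", err));
```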
| 42.922414 | 377 | 0.70958 | eng_Latn | 0.973429 |
dbd2d1d3896df60f88224de200396b44e0bb8fb8 | 1,454 | md | Markdown | doc/doc/compiler.md | tonywalker1/hinder | 94e090a1b75ea021cf695f53567236ce768991e6 | [
"MIT"
] | 1 | 2021-01-26T21:22:54.000Z | 2021-01-26T21:22:54.000Z | doc/doc/compiler.md | tonywalker1/hinder | 94e090a1b75ea021cf695f53567236ce768991e6 | [
"MIT"
] | null | null | null | doc/doc/compiler.md | tonywalker1/hinder | 94e090a1b75ea021cf695f53567236ce768991e6 | [
"MIT"
] | null | null | null | # hinder::compiler
Macros and tests to make it easier to support multiple compilers and C++ versions.
# Usage
## C++ Standard Versions
I never remember the version numbers for each C++ standard and have to look them up, so to help with
that, I provide the following macros you can use:
```c++
#define HINDER_CPP_11 201103L
#define HINDER_CPP_14 201402L
#define HINDER_CPP_17 201703L
#define HINDER_CPP_20 202002L
```
## Likely/Unlikely
Everyone has a version of likely/unlikely macros, so of course I wrote my own. Now, there are n+1
versions!
```cpp
if HINDER_LIKELY(answer == 42) {
// do something interesting
}
```
When compiled with C++20 support, the above expands to the newly supported [[likely]] and
[[unlikely]]. Otherwise, the macro uses __builtin_expect() which only works for Clang and GCC.
This macro affects code generation, which you can explore for your own code via
[Compiler Explorer](https://godbolt.org/). HOWEVER, don't overuse it. ONLY use this macro when a
test is truly (i.e., almost 100%) likely/unlikely. A good example is an invariant or precondition
test.
## NODISCARD
C++17 added support for [[nodiscard]]. This library provides a macro that expands to [[nodiscard]]
on C++17 and higher or nothing on C++11 or C++14. Use this macro liberally. It is a good idea and
can catch simple errors.
```c++
HINDER_NODISCARD int sum(int x, int y);
```
# More?
Yes, there will be more to come...
| 29.673469 | 100 | 0.728336 | eng_Latn | 0.996828 |
dbd2fb6578343c0047562c75e76d8d21d125cbef | 2,189 | md | Markdown | src/pages/posts/request-response.md | isaacdpierce/isaacpierce-gatsby | 4bdf5e33dd366c193b8eaf5f88551906b3c8b4b9 | [
"MIT"
] | null | null | null | src/pages/posts/request-response.md | isaacdpierce/isaacpierce-gatsby | 4bdf5e33dd366c193b8eaf5f88551906b3c8b4b9 | [
"MIT"
] | 4 | 2021-09-02T00:33:09.000Z | 2022-02-26T16:53:15.000Z | src/pages/posts/request-response.md | isaacdpierce/isaacpierce-gatsby | 4bdf5e33dd366c193b8eaf5f88551906b3c8b4b9 | [
"MIT"
] | null | null | null | ---
title: 'HTTPS Request / Response Life-cycle'
date: '2019-08-22'
---
User enters url -> Browser sends request -> Server parses url && header -> Server sends response object. <!-- end -->
1. User enters a URL**[1]** `https://host:port/path/querystring`
2. Browser sends Request Object**[2]**
3. Server parses the url and header to handle the request
- Finds the HTTP methods**[3]**
- maps the request to a function that handles the request
- executes the function and forms the `response`
- if needed the function will interact with the database
- formats the `response`
- sets the status of the `response` (eg. 200 (ok), 404 (not found))
4. Server sends the Response Object**[4]** back to the user's browser
- browser stores and displays `response`
**[1] URL:**
- _protocol_ - Usually `HTTP://` or `HTTPS://` - Other common protocols are FTP, SSH, and POP3.
- _domain_ - `www.getsomestuff.com`
- _port_ - :80 - points to a specific application on a machine
- _path_ - `the/file/you/want`
- _query_ - `?key1=value1&key2=value2&key3=value3` - this is not part of the routing of the request - it is the data passed into the function that is executed once the location is resolved
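These pieces map onto the standard `URL` API available in browsers and Node.js. The following sketch is an illustration added here, not code from the original post:

```typescript
// Sketch: pulling the pieces of a URL apart with the built-in URL class.
// The address below is made up for the example.
const url = new URL("https://www.getsomestuff.com:8080/the/file/you/want?key1=value1&key2=value2");

console.log(url.protocol);                  // "https:"
console.log(url.hostname);                  // "www.getsomestuff.com"
console.log(url.port);                      // "8080"
console.log(url.pathname);                  // "/the/file/you/want"
console.log(url.searchParams.get("key1"));  // "value1"
```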
**[2] Request Object:**
- The object that represents the HTTP request.
- By convention it's named `req`
- It has information about the location and the data that is being requested by a user
- It includes information about the User-Agent (browser), user preferred language, the host, encoding, connections and more.
- The data includes the type of request, methods for resolving the type of data returned, any parameters, body, headers and more.
- see [MDN](https://developer.mozilla.org/en-US/docs/Web/API/Request) for more info
**[3] HTTP methods:**
- `GET`/`POST`/`PUT`/`HEAD`/`DELETE`/`PATCH`/`OPTIONS`
**[4] Response Object:**
- The response that is sent to a client from a server after the request is completed.
- By convention it's named `res`
- Includes data, type, url, status, statusText, body, headers and more.
- It contains methods for dealing with the data including error, and promises for resolving the data in specific formats (eg. arrayBuffer, blob, json).
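To tie the pieces together, here is a minimal sketch of the server side of this life-cycle: parse the URL and query, route to a handler, set a status, and send the response. It is plain Node.js added for illustration; the route and port are made up.

```typescript
// Sketch: a tiny server that walks the request/response life-cycle.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // 1. Parse the URL, path, and query string from the incoming request.
  const url = new URL(req.url ?? "/", `http://${req.headers.host ?? "localhost"}`);
  const name = url.searchParams.get("name") ?? "world";

  // 2. Map the request (method + path) to a handler and form the response.
  if (req.method === "GET" && url.pathname === "/hello") {
    const body = JSON.stringify({ message: `Hello, ${name}!` });
    res.writeHead(200, { "Content-Type": "application/json" }); // status 200 (OK)
    res.end(body);
  } else {
    res.writeHead(404, { "Content-Type": "text/plain" }); // status 404 (Not Found)
    res.end("Not found");
  }
});

// 3. Listen for client connections.
server.listen(8080, () => console.log("Listening on http://localhost:8080"));
```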
| 44.673469 | 188 | 0.718136 | eng_Latn | 0.994695 |
dbd31e460a2468ce3b6540c19f588a7020ba3dac | 326 | md | Markdown | README.md | opensourceportfolio/brackets-navigation-history | 0e18ef06f2d56ee9afc3203e0cdf5871fcac6326 | [
"MIT"
] | 1 | 2015-11-18T18:40:18.000Z | 2015-11-18T18:40:18.000Z | README.md | opensourceportfolio/brackets-navigation-history | 0e18ef06f2d56ee9afc3203e0cdf5871fcac6326 | [
"MIT"
] | 2 | 2015-10-05T06:38:17.000Z | 2016-01-06T09:10:07.000Z | README.md | opensourceportfolio/brackets-navigation-history | 0e18ef06f2d56ee9afc3203e0cdf5871fcac6326 | [
"MIT"
] | 1 | 2021-04-25T03:19:21.000Z | 2021-04-25T03:19:21.000Z | brackets-navigation-history
===========================================
4/30/2014
Changed shortcut to be ctrl-alt instead of ctrl-shift
This extension adds the ability to use Ctrl-Alt-Right and
Ctrl-Alt-Left to navigate back and forward in your
source code. It remembers the last 100 points which were
visited in each document.
| 29.636364 | 55 | 0.696319 | eng_Latn | 0.996886 |
dbd35bd43e91ce8b68bbb0b3b2f15079a29c975d | 1,959 | md | Markdown | README.md | entwico/spa-resort | 43f1da74f1cb4faf56b7009f60804a5c1becfd8f | [
"MIT"
] | null | null | null | README.md | entwico/spa-resort | 43f1da74f1cb4faf56b7009f60804a5c1becfd8f | [
"MIT"
] | null | null | null | README.md | entwico/spa-resort | 43f1da74f1cb4faf56b7009f60804a5c1becfd8f | [
"MIT"
] | null | null | null | # SPA Resort
Opinionated server for development and hosting of Single Page Applications (SPAs).
Requirements:
- the application **must** run behind a proxy / load balancer in production
What's inside:
- sessions / OIDC support
- access / refresh / id tokens are automatically handled
- proxy for service calls (see the **security notice** before using it)
What's **not** inside:
- SSL / TLS
- Security headers, such as those provided by the npm `helmet` library
## Usage
### Server
Use the [example Docker image](examples/Dockerfile) as a deployment unit or as a reference.
### Local (development)
For development purposes run
```
npm i -D @entwico/spa-resort
```
Then create a config file; for example, for an Angular application it can be:
```js
module.exports = {
server: {
baseUrl: 'http://localhost:4200',
port: 4200,
},
logs: {
level: 'debug',
format: 'simple',
},
session: {
cookie: {
secure: false,
},
},
spa: {
// proxy is mostly intended for development only
proxy: {
config: {
'/another-path-to-proxy': { target: process.env.UI_PROXY_CONTELLO_CORE, secure: false, changeOrigin: true },
'/': { target: 'http://localhost:4200', ws: true, changeOrigin: true },
},
},
},
oidc: {
providerUrl: process.env.UI_OIDC_PROVIDER_URL,
clientId: process.env.UI_CLIENT_ID,
clientSecret: process.env.UI_CLIENT_SECRET,
},
}
```
Finally, add the script to your package.json:
```
"start:resort": "spa-resort -c .resortrc.js",
```
### Config
The full config can be found in the [default-config.yaml](default-config.yaml) file.
Config can be either `.yaml`, `.json`, or `.js` and can be provided with the `-c` CLI parameter. Multiple configs are also supported.
### Security notice
The backend services are supposed to check the `Authorization: Bearer ...` header. If this is not the case, please consider mitigating CSRF attacks in some other way.
## License
[MIT](LICENSE)
| 22.517241 | 162 | 0.675345 | eng_Latn | 0.950316 |
dbd3971f8032834938c33cb2308043e702bc9657 | 202 | md | Markdown | snippets/sso-integrations/box/4.md | zamd/docs | 8acc8d2b784f06d5fbbb675c5bd576b906cf7ac4 | [
"MIT"
] | null | null | null | snippets/sso-integrations/box/4.md | zamd/docs | 8acc8d2b784f06d5fbbb675c5bd576b906cf7ac4 | [
"MIT"
] | null | null | null | snippets/sso-integrations/box/4.md | zamd/docs | 8acc8d2b784f06d5fbbb675c5bd576b906cf7ac4 | [
"MIT"
] | null | null | null | ### Configure Auth0 SSO Integration
Enter a name for your SSO Integration, and click **Save**.

| 40.4 | 105 | 0.772277 | kor_Hang | 0.273629 |
dbd4ee3acbcca82be6db555bfc11fb336011d718 | 2,003 | md | Markdown | articles/azure-functions/functions-runtime-overview.md | followtushar/mc-docs.zh-cn | 68fc72b7c388735e59c94bcae07bac505085b3fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-functions/functions-runtime-overview.md | followtushar/mc-docs.zh-cn | 68fc72b7c388735e59c94bcae07bac505085b3fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-functions/functions-runtime-overview.md | followtushar/mc-docs.zh-cn | 68fc72b7c388735e59c94bcae07bac505085b3fe | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Functions 运行时概述 | Microsoft Docs
description: Azure Functions 运行时预览版概述
services: functions
author: apwestgarth
manager: stefsch
ms.assetid: ''
ms.service: azure-functions
ms.devlang: multiple
ms.topic: conceptual
origin.date: 11/28/2017
ms.date: 06/04/2019
ms.author: v-junlch
ms.openlocfilehash: 07fbccc161e13a007dab24d7697577422acb2af9
ms.sourcegitcommit: 9e839c50ac69907e54ddc7ea13ae673d294da77a
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/04/2019
ms.locfileid: "66491402"
---
# <a name="azure-functions-runtime-overview-preview"></a>Azure Functions 运行时概述(预览版)
[!INCLUDE [intro](../../includes/functions-runtime-preview-note.md)]
Azure Functions 运行时(预览版)提供了一种新方法供你用来在本地利用 Azure Functions 编程模型的简单性和灵活性。 Azure Functions 运行时基于与 Azure Functions 相同的开源代码根而构建,并且部署在本地来提供与云服务几乎完全相同的部署体验。
![Azure Functions 运行时预览版门户][1]
Azure Functions 运行时提供了一种方法供你用来在提交到云之前体验 Azure Functions。 这样,构建的代码资产可以在迁移时随你迁移到云。 运行时还提供了新选择,例如使用本地计算机的备用计算能力在夜间运行批处理。 还可以使用组织中的设备来有条件地向本地和云中的其他系统发送数据。
Azure Functions 运行时由两部分组成:
* Azure Functions 运行时管理角色
* Azure Functions 运行时辅助角色
## <a name="azure-functions-management-role"></a>Azure Functions 管理角色
Azure Functions 管理角色提供了用于管理本地 Functions 的宿主。 此角色执行以下任务:
* 托管 Azure Functions 管理门户,该门户与你在 [Azure 门户](https://portal.azure.cn)中看到的是同一个。 该门户提供一致的体验,可让你以在 Azure 门户中的相同方式来开发函数。
* 将函数分布到多个 Functions 辅助角色中。
* 提供发布终结点,以便可以直接通过下载和导入发布配置文件从 Microsoft Visual Studio 发布函数。
## <a name="azure-functions-worker-role"></a>Azure Functions 辅助角色
Azure Functions 辅助角色部署在 Windows 容器中,它是执行函数代码的地方。 可以在整个组织中部署多个辅助角色,此选项是客户可以利用备用计算能力的主要方式。 许多组织中存在备用计算能力的一个示例是一直通电但很长时间未使用的计算机。
## <a name="minimum-requirements"></a>最低要求
若要开始使用 Azure Functions 运行时,必须具有运行 Windows Server 2016 或 Windows 10 Creators Update 且能够访问 SQL Server 实例的计算机。
## <a name="next-steps"></a>后续步骤
安装 [Azure Functions 运行时预览版](https://aka.ms/azafrdoc)
<!--Image references-->
[1]: ./media/functions-runtime-overview/AzureFunctionsRuntime_Portal.png
<!-- Update_Description: wording update --> | 33.949153 | 151 | 0.802297 | yue_Hant | 0.696184 |
dbd5502a0cfd35a5ef56959f8137d7815ba9a606 | 4,386 | md | Markdown | dms/user-guide/setting-dms-alarm-rules.md | vladimirhasko/docs | 4eb1322e83caf34981f6cc8694adb43dc41325f0 | [
"Apache-2.0"
] | null | null | null | dms/user-guide/setting-dms-alarm-rules.md | vladimirhasko/docs | 4eb1322e83caf34981f6cc8694adb43dc41325f0 | [
"Apache-2.0"
] | null | null | null | dms/user-guide/setting-dms-alarm-rules.md | vladimirhasko/docs | 4eb1322e83caf34981f6cc8694adb43dc41325f0 | [
"Apache-2.0"
] | null | null | null | # Setting DMS Alarm Rules<a name="EN-US_TOPIC_0143117147"></a>
## Scenario<a name="section2938669717629"></a>
Setting DMS alarm rules enables you to be informed of the running status of your DMS services at any time by customizing the monitored objects and notification policies.
This section describes how to set DMS alarm rules, including alarm rule names, monitored objects, metrics, alarm thresholds, monitoring intervals, and notifications.
## Prerequisites<a name="section2862447517827"></a>
- To set alarm rules for queues, ensure that you have available queues.
- To set alarm rules for Kafka premium instances, ensure that you have available Kafka premium instances.
## Procedure<a name="section3969125652019"></a>
1. Log in to the management console.
2. Click  in the upper left corner to select a region and a project.
3. Click **Service List**, and choose **Application** \> **Distributed Message Service** to open the DMS console.
4. Create an alarm rule for a queue.
This step is for creating an alarm rule for a queue. To learn how to create an alarm rule for a Kafka premium instance, see [5](#li7309816152319). The following describes how to configure an alarm rule for the **Accumulated Messages** \(**dead\_avail\_messages**\) metric of a consumer group.
1. In the navigation pane, choose **Queue Manager**.
2. Click a queue name. On the queue details page that is displayed, click **More** \> **View Metric** in the same row as a consumer group.
You are redirected to the Cloud Eye console page displaying metrics of the selected consumer group.
3. Locate the **Accumulated Messages** metric. Hover over the metric and click  to create an alarm rule for the metric.
The **Create Alarm Rule** page is displayed.
4. Specify the alarm details.
1. Specify the alarm policy and alarm severity.
For example, if the alarm is set as shown in [Figure 1](#fig112961424225), an alarm will be triggered if the average number of accumulated messages exceeds 100 for two consecutive periods of five minutes and no actions are taken to handle the exception. Set these parameters based on your requirements.
**Figure 1** Setting the alarm policy and alarm severity for dead\_avail\_messages<a name="fig112961424225"></a>

2. Set the alarm notification configurations. If you enable **Alarm Notification**, set the validity period, notification object, and trigger condition.
3. Click **Next** to set the alarm name and description.
4. Click **Finish**.
5. <a name="li7309816152319"></a>Create an alarm rule for a Kafka premium instance.
The following describes how to configure an alarm rule for the **group\_msgs** metric of an instance.
1. In the navigation pane, choose **Kafka Premium**.
2. Click **More** \> **View Metric** in the same row as an instance.
3. Locate the **group\_msgs** metric. Hover over the metric and click  to create an alarm rule for the metric.
The **Create Alarm Rule** page is displayed.
4. Specify the alarm details.
1. Specify the alarm policy and alarm severity.
For example, if the alarm is set as shown in [Figure 2](#fig187882032812), an alarm will be triggered if the average number of accumulated messages exceeds 100 for three consecutive periods of five minutes and no actions are taken to handle the exception. Set these parameters based on your requirements.
**Figure 2** Setting the alarm policy and alarm severity for group\_msgs<a name="fig187882032812"></a>

2. Set the alarm notification configurations. If you enable **Alarm Notification**, set the validity period, notification object, and trigger condition.
3. Click **Next** to set the alarm name and description.
4. Click **Finish**.
| 64.5 | 318 | 0.707706 | eng_Latn | 0.988323 |
dbd55f4ca5f29726b34393c3ac2c31c978fb0cf0 | 26,891 | md | Markdown | hybrid/app-solutions/overview-app-design-considerations.md | MicrosoftDocs/hybrid-pr.tr-tr | 9e93ec6d32c966cce99808c2465633ce98d1fff3 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-20T21:13:49.000Z | 2021-04-20T21:13:49.000Z | hybrid/app-solutions/overview-app-design-considerations.md | MicrosoftDocs/hybrid-pr.tr-tr | 9e93ec6d32c966cce99808c2465633ce98d1fff3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | hybrid/app-solutions/overview-app-design-considerations.md | MicrosoftDocs/hybrid-pr.tr-tr | 9e93ec6d32c966cce99808c2465633ce98d1fff3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-06-17T06:28:34.000Z | 2021-11-04T12:35:58.000Z | ---
title: Azure 'da ve Azure Stack hub 'da karma uygulama tasarımı konuları
description: Akıllı bulut ve akıllı uç için yerleştirme, ölçeklenebilirlik, kullanılabilirlik ve esnekliği dahil olmak üzere bir karma uygulama oluştururken tasarım konuları hakkında bilgi edinin.
author: BryanLa
ms.topic: article
ms.date: 06/07/2020
ms.author: bryanla
ms.reviewer: anajod
ms.lastreviewed: 11/05/2019
ms.openlocfilehash: 8b975c7b99807490d446f557e84b6e0eabf34649
ms.sourcegitcommit: 485a1f97fa1579364e2be1755cadfc5ea89db50e
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/08/2020
ms.locfileid: "91852499"
---
# <a name="hybrid-app-design-considerations"></a>Hibrit uygulama tasarımında dikkat edilmesi gerekenler
Tek tutarlı karma bulutunuz Microsoft Azure. Geliştirme yatırımlarınızı yeniden kullanmanıza ve küresel Azure, egeign Azure bulutlarını ve Azure Stack veri merkezinizde bir Azure uzantısı olan uygulamalara olanak tanır. Bulutları kapsayan uygulamalar da *karma uygulamalar*olarak adlandırılır.
[*Azure Uygulama Mimarisi Kılavuzu*](/azure/architecture/guide) , ölçeklenebilir, dayanıklı ve yüksek oranda kullanılabilir uygulamalar tasarlamak için yapılandırılmış bir yaklaşım açıklamaktadır. [*Azure uygulama mimarisi kılavuzunda*](/azure/architecture/guide) açıklanan noktalar, tek bir bulut için tasarlanan uygulamalar ve bulutların yayılacağı uygulamalar için de geçerlidir.
Bu makalede, [*Azure Uygulama*](/azure/architecture/guide/) [ *Mimarisi Kılavuzu*](/azure/architecture/guide/) 'nda ele alınan yazılım kalitesinin ve özellikle karma uygulamalar tasarlamaya odaklanılan [*yazılım kalitesinin*](/azure/architecture/guide/pillars) her ikisi de anlatılmaktadır. Ayrıca, karma uygulamalar tek bir buluta veya bir şirket içi veri merkezine özel olmadığından, bir *yerleşim* de ekleyeceğiz.
Karma senaryolar, geliştirme için kullanılabilen kaynaklarla büyük ölçüde farklılık gösterir ve coğrafya, güvenlik, internet erişimi ve diğer konular gibi önemli noktalara dikkat edin. Bu kılavuz, belirli konuları numaralandıramasa da, izlemeniz için bazı önemli yönergeler ve en iyi uygulamalar sağlayabilir. Karma uygulama mimarisini başarıyla tasarlama, yapılandırma, dağıtma ve sürdürme, sizin için doğal olarak tanınmayan birçok tasarım ile ilgilidir.
Bu belge, hibrit uygulamalar uygulama sırasında ortaya çıkabilecek olası soruların toplanmasını ve bunlarla çalışmak için önemli noktalar (Bu işlem noktaları) ve en iyi yöntemleri sağlar. Tasarım aşamasında bu sorulara değinerek üretimde neden olabilecek sorunlardan kaçınabilirsiniz.
Temelde, bu, karma uygulama oluşturmadan önce düşünmek için gereken sorulardır. Başlamak için aşağıdaki işlemleri yapmanız gerekir:
- Uygulama bileşenlerini tanımlayıp değerlendirin.
- Uygulama bileşenlerini bu şekilde değerlendirin.
## <a name="evaluate-the-app-components"></a>Uygulama bileşenlerini değerlendir
Bir uygulamanın her bileşeni, daha büyük uygulama içinde kendi özel rolüne sahiptir ve tüm tasarım konuları ile incelenmelidir. Her bileşenin gereksinimleri ve özellikleri, uygulama mimarisini belirlemede yardımcı olması için bu noktalara eşlenmelidir.
Uygulamanızın mimarisini gerçekleştirerek ve ne içerdiğini belirleyerek uygulamanızı bileşenlerine ayırın. Bileşenler, uygulamanızın etkileşimde bulunduğu diğer uygulamaları da içerebilir. Bileşenleri tanımladığınıza göre, bu soruları isteyerek amaçlanan karma işlemlerinizi özelliklerine göre değerlendirin:
- Bileşenin amacı nedir?
- Bileşenler arasında bağımlılıkları nelerdir?
Örneğin, bir uygulamanın ön uç ve arka ucu iki bileşen olarak tanımlanmış olabilir. Karma bir senaryoda, ön uç bir bulutta, arka ucu ise diğeri de olur. Uygulama, ön uç ve Kullanıcı arasında ve ayrıca ön uç ve arka uç arasında iletişim kanalları sağlar.
Bir uygulama bileşeni birçok form ve senaryo tarafından tanımlanır. En önemli görev bunları ve bunların bulutlarını ve şirket içi konumunu tanımlar.
Stokunuzda içerilecek ortak uygulama bileşenleri Tablo 1 ' de listelenmiştir.
### <a name="table-1-common-app-components"></a>Tablo 1. Ortak uygulama bileşenleri
| **Bileşen** | **Karma Uygulama Kılavuzu** |
| ---- | ---- |
| İstemci bağlantıları | Uygulamanız (herhangi bir cihazda), aşağıdaki yöntemlerle birlikte tek bir giriş noktasından farklı yollarla kullanıcılara erişebilir:<br>-Kullanıcının uygulamayla çalışmak için bir istemcisinin yüklü olmasını gerektiren bir istemci-sunucu modeli. Tarayıcıdan erişilen sunucu tabanlı bir uygulama.<br>-İstemci bağlantıları, dolaşım ücretleri uygulanabileceğini, bağlantı kesildiğinde veya uyarıladığınızda bildirimleri içerebilir. |
| Kimlik Doğrulaması | Uygulamaya bağlanan bir kullanıcı veya başka bir bileşen için kimlik doğrulaması gerekli olabilir. |
| API'ler | Geliştiricilere, API kümeleri ve sınıf kitaplıkları ile uygulamanıza programlı erişim sağlayabilir ve internet standartlarına dayalı bir bağlantı arabirimi sağlayabilirsiniz. API 'Leri, bir uygulamayı bağımsız işletim mantıksal birimlerine ayırmak için de kullanabilirsiniz. |
| Hizmetler | Bir uygulamaya yönelik özellikleri sağlamak için kısa hizmetlerini kullanabilirsiniz. Bir hizmet, uygulamanın çalıştığı altyapı olabilir. |
| Kuyruklar | Kullanım ömürlerini ve uygulamanızın bileşenlerinin durumunu düzenlemek için kuyrukları kullanabilirsiniz. Bu kuyruklar, abone tarafların mesajlaşma, bildirim ve arabelleğe alma özelliklerini sağlayabilir. |
| Veri depolama | Bir uygulama durum bilgisiz veya durum bilgisi olabilir. Durum bilgisi olan uygulamalarda çok sayıda biçim ve birim tarafından karşılanabileceği veri depolaması gerekir. |
| Verileri önbelleğe alma | Tasarımınızda bir veri önbelleği bileşeni, gecikme sorunlarını gidermek ve bulut patlamasını tetikleyerek bir rol oynatabilir. |
| Veri alımı | Veriler, bir Web formundaki Kullanıcı tarafından gönderilen değerlerden sürekli olarak yüksek hacimli veri akışına kadar birçok şekilde bir uygulamaya gönderilebilir. |
| Veri işleme | Veri işleme görevleriniz (raporlar, analiz, toplu dışarı aktarmalar ve veri dönüştürme gibi), kaynakta işlenebilir veya verilerin bir kopyası kullanılarak ayrı bir bileşende boşaltılır. |
## <a name="assess-app-components-for-pillars"></a>Uygulama bileşenlerini, pilde için değerlendir
Her bir bileşen için her bir şekilde özelliklerini değerlendirin. Her bir bileşeni tüm pillerle değerlendirirken, kabul edilmeyebilirsiniz, karma uygulamanın tasarımını etkileyen, sizin de dikkate alınamayabilirsiniz. Bu noktalara işlem yapmak, uygulamanızı iyileştirmek için değer ekleyebilirler. Tablo 2, karma uygulamalarla ilgili olarak her bir bir açıklama sağlar.
### <a name="table-2-pillars"></a>Tablo 2. Yapı taşları
| **Yapı Taşı** | **Açıklama** |
| ----------- | --------------------------------------------------------- |
| Yerleştirme | Karma uygulamalardaki bileşenlerin stratejik konumlandırması. |
| Ölçeklenebilirlik | Sistemin artan yükü idare edebilme özelliği. |
| Kullanılabilirlik | Karma uygulamanın işlevsel ve çalışır olduğu sürenin oranı. |
| Dayanıklılık | Karma uygulama için kurtarma özelliği. |
| Yönetilebilirlik | Sistemi üretimde çalışır durumda tutan operasyon süreçleri. |
| Güvenlik | Karma uygulamaları ve verileri tehditlere karşı koruma. |
## <a name="placement"></a>Yerleştirme
Karma uygulama doğal olarak, veri merkezi gibi bir yerleşim değerlendirmesi içerir.
Yerleştirme, bileşenlerin bir karma uygulamayı en iyi şekilde hizmet edebilmesi için konumlama açısından önemli bir görevdir. Tanım, karma uygulamalar, Şirket içinden buluta ve farklı bulutlar arasında yer alır. Uygulama bileşenlerini bulutlara iki şekilde yerleştirebilirsiniz:
- **Dikey karma uygulamalar**
Uygulama bileşenleri konumlar arasında dağıtılır. Her ayrı bileşen yalnızca tek bir konumda bulunan birden çok örneğe sahip olabilir.
- **Yatay karma uygulamalar**
Uygulama bileşenleri konumlar arasında dağıtılır. Her bir bileşenin birden çok konumu kapsayan birden fazla örneği olabilir.
Bazı bileşenler, başkalarının konumu ve yerleştirmesi hakkında bilgi sahibi olmayan konumlarından haberdar olabilir. Bu virtuousness bir soyutlama katmanıyla elde edilebilir. Mikro hizmetler gibi modern bir uygulama çerçevesi olan bu katman, bulut genelinde düğümlerde çalışan uygulama bileşenlerinin yerleştirilmesiyle uygulamanın nasıl hizmet vermesini tanımlayabilir.
### <a name="placement-checklist"></a>Yerleştirme denetim listesi
**Gerekli konumları doğrulayın.** Uygulamanın veya bileşenlerinden birinin, belirli bir bulut için üzerinde çalışması veya sertifika gerektirmesi için gerekli olduğundan emin olun. Bu, şirketinizdeki veya yasalar tarafından dikte edilen egemenlik gereksinimlerini içerebilir. Ayrıca, belirli bir konum veya yerel ayar için herhangi bir şirket içi işlemin gerekli olup olmadığını belirleyebilirsiniz.
**Yokermesi bağlantı bağımlılıkları.** Gerekli konumlar ve diğer faktörler, bileşenlerinizin arasında bağlantı bağımlılıklarını dikte edebilir. Bileşenleri yerleştirirken, aralarında iletişimin en iyi bağlantısını ve güvenliğini saptayın. [ *VPN*,](/azure/vpn-gateway/) [ *ExpressRoute*](/azure/expressroute/) ve [ *karma bağlantılar*](/azure/app-service/app-service-hybrid-connections) seçenekleri bulunur.
**Platform yeteneklerini değerlendirin.** Her uygulama bileşeni için, uygulama bileşeni için gerekli kaynak sağlayıcısının bulutta kullanılabilir olup olmadığını ve bant genişliğinin beklenen aktarım hızını ve gecikme süresi gereksinimlerini barındırabilmesine olanak sağlamak için bkz..
**Taşınabilirlik için plan yapın.** Taşıma işlemlerini planlamak ve hizmet bağımlılıklarını engellemek için kapsayıcılar veya mikro hizmetler gibi modern uygulama çerçeveleri kullanın.
**Veri egemenlik gereksinimlerini belirleme.** Karma uygulamalar, yerel bir veri merkezinde olduğu gibi veri yalıtımına yönelik olarak tasarlanmıştır. Bu gereksinimi karşılamak için başarıyı iyileştirmek üzere kaynaklarınızın yerleşimini gözden geçirin.
**Gecikme için plan yapın.** Bulut tabanlı işlemler, uygulama bileşenleri arasında fiziksel mesafe ortaya çıkarabilir. Herhangi bir gecikmeyi karşılamak için gereksinimleri yokler.
**Trafik akışlarını denetleme.** Genel bir bulutta ön uç tarafından erişildiğinde, kişisel olarak tanımlanabilen bilgi verileri için en yüksek kullanımı ve uygun ve güvenli iletişimleri işleyin.
## <a name="scalability"></a>Ölçeklenebilirlik
Ölçeklenebilirlik, sistemin, uygulamanın boyutuna ve kapsamına ek olarak, diğer faktörler ve hedef kitle boyutunu etkilediği zaman içinde değişebilen bir uygulamada daha fazla yük işlemesini sağlar.
Bu pilde temel tartışmak için, mimaride üstün olan beş paragraf üzerinde [*ölçeklenebilirlik*](/azure/architecture/guide/pillars#scalability) konusuna bakın.
Karma uygulamalar için yatay ölçeklendirme yaklaşımı, talebi karşılamak için daha fazla örnek eklenmesine ve daha sonra daha sessiz dönemler sırasında devre dışı bırakılmasına olanak tanır.
Karma senaryolarda, bileşenlerin bulutlara yayıldığı durumlarda, tek tek bileşenlerin ölçeğini genişletmek için ek dikkat gerekir. Uygulamanın bir bölümünün ölçeklendirilmesi, başka bir ölçeklendirme gerektirebilir. Örneğin, istemci bağlantısı sayısı arttıkça, ancak uygulamanın Web Hizmetleri uygun şekilde ölçeklenmemişse, veritabanındaki yük uygulama için doygunluğu artırabilir.
Bazı uygulama bileşenleri doğrusal bir şekilde ölçeklenebilir, diğerleri de ölçeklendirme bağımlılıklarına sahiptir ve ölçeklendirebilecekleri ölçüde sınırlı olabilir. Örneğin, uygulama bileşenleri konumlarına yönelik karma bağlantı sağlayan bir VPN tüneli, ölçeklendirilebilen bant genişliği ve gecikme süresine sahiptir. Uygulamanın bileşenleri, bu gereksinimlerin karşılandığından emin olmak için nasıl ölçeklendirilir?
### <a name="scalability-checklist"></a>Ölçeklenebilirlik denetim listesi
**Yokermesi ölçekleme eşikleri.** Uygulamanızdaki çeşitli bağımlılıkları işlemek için, farklı bulutlarda bulunan uygulama bileşenlerinin birbirinden bağımsız olarak ölçekleyeceğini belirleme, ancak uygulamayı çalıştırma gereksinimlerini sürdürmeye devam ediyor. Hibrit uygulamalar, etkileşim kurduğu ve uygulamanın geri kalanını etkilediği bir özelliği işlemek için genellikle uygulamadaki belirli alanların ölçeklendirilmesi gerekir. Örneğin, bir dizi ön uç örneğini aşarak arka ucun ölçeklendirilmesi gerekebilir.
**Ölçek zamanlamalarını tanımlayın.** Çoğu uygulamanın meşgul süreleri vardır, bu yüzden en iyi ölçeklendirmeyi koordine etmek için yoğun süreleri en yüksek zamanlara toplamanız gerekir.
**Merkezi bir izleme sistemi kullanın.** Platform izleme özellikleri otomatik ölçeklendirme sağlayabilir, ancak karma uygulamalar sistem durumunu ve yüklemeyi toplayan merkezi bir izleme sistemine gerek duyar. Merkezi bir izleme sistemi, başka bir konumdaki kaynağa bağlı olarak bir konumda bir kaynak ölçeklendirmeyi başlatabilir ve ölçeklendirebilirsiniz. Ayrıca, merkezi bir izleme sistemi hangi bulutlar için otomatik ölçeklendirme kaynaklarını ve hangi bulutların olduğunu izleyebilir.
**Otomatik ölçeklendirme özelliklerinden yararlanın (kullanılabilir).** Otomatik ölçeklendirme özellikleri mimarinizin parçasıysa, bir uygulama bileşeninin ne zaman yukarı, aşağı, aşağı veya dışarı ölçeklenmesi gerektiğini tanımlayan eşikleri ayarlayarak otomatik ölçeklendirmeyi uygulayabilirsiniz. Otomatik ölçeklendirme örneği, daha fazla kapasiteyi işlemek için bir bulutta otomatik olarak ölçeklenen bir istemci bağlantısıdır, ancak uygulamanın diğer bağımlılıklarının aynı zamanda farklı bulutlara yayılmasına neden olur. Bu bağımlı bileşenlerin otomatik ölçeklendirme özellikleri bunun olmalıdır.
Otomatik ölçeklendirme kullanılamıyorsa, merkezi izleme sistemindeki eşiklerle tetiklenen el ile ölçeklendirmeyi karşılamak için betikleri ve diğer kaynakları uygulamayı düşünün.
**Beklenen konuma göre yüklemeyi belirleme.** İstemci isteklerini işleyen karma uygulamalar öncelikle tek bir konuma bağlı olabilirler. İstemci isteklerinin yükü bir eşiği aştığında, gelen isteklerin yükünü dağıtmak için ek kaynaklar farklı bir konuma eklenebilir. İstemci bağlantılarının artan yükleri işleyebilmesine ve ayrıca yükü işleyecek istemci bağlantılarına yönelik otomatikleştirilmiş yordamları belirleyebilmesini sağlayın.
## <a name="availability"></a>Kullanılabilirlik
Kullanılabilirlik, sistemin işlevsel ve çalışır olduğu süredir. Kullanılabilirlik, çalışma süresinin yüzdesi olarak ölçülür. Uygulama hataları, altyapı sorunları ve sistem yükü tüm kullanılabilirliği azaltabilir.
Bu pilde temel tartışmak için, mimaride üstün olan beş paragraf üzerinde [*kullanılabilirlik*](/azure/architecture/framework/) bölümüne bakın.
### <a name="availability-checklist"></a>Kullanılabilirlik denetim listesi
**Bağlantı için artıklık sağlayın.** Karma uygulamalar, uygulamanın yayıldığı bulutlar arasında bağlantı gerektirir. Karma bağlantı için bir teknoloji seçiminiz vardır. bu nedenle, birincil teknoloji seçimine ek olarak, otomatik yük devretme özellikleri ile artıklık sağlamak için başka bir teknoloji kullanın, birincil teknoloji başarısız olur.
**Hata etki alanlarını sınıflandırın.** Hataya dayanıklı uygulamalar birden çok hata etki alanı gerektirir. Hata etki alanları, tek bir sabit diskin şirket içinde başarısız olması, raf üstü bir anahtarın devre dışı olması veya tam veri merkezinin kullanılamaz olması gibi başarısızlık noktasını yalıtmaya yardımcı olur. Karma uygulamada, bir konum bir hata etki alanı olarak sınıflandırılabilirler. Daha fazla kullanılabilirlik gereksinimleriyle, tek bir hata etki alanının nasıl sınıflandırılacağını değerlendirmek için daha fazla ihtiyacınız vardır.
**Yükseltme etki alanlarını sınıflandırın.** Yükseltme etki alanları, uygulama bileşenleri örneklerinin kullanılabilir olmasını sağlamak için kullanılır, ancak aynı bileşene ait diğer örneklere güncelleştirmeler veya özellik yükseltmeleri ile bakım yapılıyor. Hata etki alanlarında olduğu gibi, yükseltme etki alanları konumlar arasında yerleştirme ile sınıflandırılabilirler. Bir uygulama bileşeninin başka bir konumda yükseltilmeden önce tek bir konumda yükseltilme veya başka etki alanı yapılandırmalarının gerekli olup olmadığını belirlemelisiniz. Tek bir konumun kendisi birden fazla yükseltme etki alanına sahip olabilir.
**Örnekleri ve kullanılabilirliği izleyin.** Yüksek oranda kullanılabilir uygulama bileşenleri, yük dengeleme ve zaman uyumlu veri çoğaltma aracılığıyla kullanılabilir. Hizmet kesintiye uğramadan önce kaç örnek çevrimdışı olabilir belirlemelisiniz.
**Kendini onaran uygulayın.** Bir sorun, uygulamanın kullanılabilirliğine karşı kesintiye neden oluyorsa, bir izleme sisteminin algılanması, başarısız örneği boşaltma ve yeniden dağıtma gibi uygulama için kendi kendini düzeltme etkinlikleri başlatabilir. Büyük olasılıkla bu, karma bir sürekli tümleştirme ve sürekli teslim (CI/CD) işlem hattı ile tümleştirilmiş bir merkezi izleme çözümü gerektirir. Uygulama bir uygulama bileşeninin yeniden dağıtımı gerektirebilecek sorunları belirlemek için bir izleme sistemiyle tümleşiktir. İzleme sistemi, aynı zamanda aynı veya diğer konumlarda bulunan diğer bağımlı bileşenler ve uygulama bileşenini yeniden dağıtmak için karma CI/CD 'yi tetikleyebilir.
**Hizmet düzeyi sözleşmelerini (SLA 'Lar) koruyun.** Müşterileriniz, müşterilerinizin sahip olduğu hizmet ve uygulamalarla bağlantı sağlamak için herhangi bir anlaşma açısından önemlidir. Karma uygulamanızın bağımlı olduğu her konumun kendi SLA 'Sı olabilir. Bu farklı SLA 'Lar karma uygulamanızın genel SLA 'sını etkileyebilir.
## <a name="resiliency"></a>Dayanıklılık
Dayanıklılık, karma uygulama ve sistemin hatalardan kurtulmasına ve çalışmaya devam etmesine olanak tanır. Dayanıklılık amacı, bir hata oluştuktan sonra uygulamayı tam çalışır duruma döndürmektir. Dayanıklılık stratejileri, yedekleme, çoğaltma ve olağanüstü durum kurtarma gibi çözümler içerir.
Bu pilde temel tartışmak için, mimaride üstün olan beş paragraf üzerinde [*dayanıklılık*](/azure/architecture/guide/pillars#resiliency) bölümüne bakın.
### <a name="resiliency-checklist"></a>Dayanıklılık denetim listesi
**Olağanüstü durum kurtarma bağımlılıklarını açın.** Bir bulutta olağanüstü durum kurtarma, başka bir buluttaki uygulama bileşenlerinde değişiklik yapılmasını gerektirebilir. Bir buluttan bir veya birden çok bileşen, aynı bulutta ya da başka bir buluta yük devretmelidir, bağımlı bileşenlerin bu değişikliklerden haberdar olması gerekir. Bu, bağlantı bağımlılıklarını da içerir. Dayanıklılık, her bulut için tam olarak test edilmiş bir uygulama kurtarma planı gerektirir.
**Kurtarma akışı oluşturun.** Etkin bir kurtarma akışı tasarımı, arabelleğe alma, yeniden deneme, başarısız veri aktarımını yeniden deneme ve gerekirse farklı bir hizmete veya iş akışına geri dönme yeteneği için uygulama bileşenlerini değerlendirdi. Hangi yedekleme mekanizmasının kullanılacağını, geri yükleme yordamının ne sıklıkta ve ne sıklıkla test edildiğini belirlemelisiniz. Hem artımlı hem de tam yedeklemelerin sıklığını da belirlemelisiniz.
**Kısmi kurtarmaları test edin.** Uygulamanın bir parçası için kısmi bir kurtarma, tüm kullanılabilir olmayan kullanıcılara daha fazla izin verebilir. Planın bu bölümü, yedekleme yapılmadan önce düzgün bir şekilde kapatmak üzere uygulamayla etkileşim kuran yedekleme ve geri yükleme hizmeti gibi bir yan etkisi olmadığından emin olmalıdır.
**Olağanüstü durum kurtarma, sigortaların ve sorumluluğun atanmasının belirlenmesi.** Bir kurtarma planı, neleri yedeklebileceğinize ve geri yüklenebileceklerini ek olarak kimlerin ve hangi rollerin, yedekleme ve kurtarma eylemlerini başlatabileceğini betimlemelidir.
**Kendi kendini iyileştirme eşiklerini olağanüstü durum kurtarma ile karşılaştırın.** Otomatik Kurtarma başlatma için bir uygulamanın kendi kendini onaran yeteneklerini ve bir uygulamanın kendi kendini onaramasının başarısız veya başarılı olarak kabul edilmesi için gereken süreyi belirleme. Her bulutun eşiklerini belirleme.
**Dayanıklılık özelliklerinin kullanılabilirliğini doğrulayın.** Her konum için dayanıklılık özelliklerinin ve özelliklerinin kullanılabilirliğini belirleme. Bir konum gerekli özellikleri sağlamıyorsa, bu konumu dayanıklılık özelliklerini sağlayan merkezi bir hizmetle tümleştirmeyi düşünün.
**Altzamanları belirleme.** Uygulamanın tamamı ve uygulama bileşenleri gibi bakım nedeniyle beklenen kapalı kalma süresini belirleme.
**Belge sorunlarını giderme yordamları.** Kaynakları ve uygulama bileşenlerini yeniden dağıtmaya ilişkin sorun giderme yordamlarını tanımlayın.
## <a name="manageability"></a>Yönetilebilirlik
Karma uygulamalarınızı yönetmenin önemli noktaları, mimarinizi tasarlarken kritik öneme sahiptir. İyi yönetilen bir karma uygulama, ortak bir geliştirme ardışık düzeninde tutarlı uygulama kodu tümleştirmesini sağlayan kod olarak bir altyapı sağlar. Altyapıda yapılan değişikliklerin tutarlı sistem genelinde ve bireysel testlerini uygulayarak, değişiklikler testleri geçirirseniz, bunlar kaynak kodla birleştirilmelerine izin vererek, tümleşik bir dağıtım sağlayabilirsiniz.
Bu pillere ilişkin temel tartışmada, mimari üstün olan beş işlem için bkz. [*DevOps*](/azure/architecture/framework/#devops) .
### <a name="manageability-checklist"></a>Yönetilebilirlik denetim listesi
**İzlemeyi uygulayın.** Bulut genelinde yayılan, sistem durumunun ve performansının toplu bir görünümünü sağlamak için bir merkezi izleme sistemi kullanın. Bu sistem, hem uygulama bileşenlerini hem de ilgili platform yeteneklerini izlemeyi içerir.
Uygulamanın izlenmesini gerektiren parçalarını saptayın.
**Koordinat ilkeleri.** Bir karma uygulamanın yaydığı her konum, izin verilen kaynak türlerini, adlandırma kurallarını, etiketleri ve diğer ölçütleri kapsayan kendi ilkesine sahip olabilir.
**Rolleri tanımlayın ve kullanın.** Bir veritabanı yöneticisi olarak, uygulama kaynaklarına erişmesi gereken farklı kişiler (bir uygulama sahibi, bir veritabanı yöneticisi ve son kullanıcı gibi) için gerekli izinleri belirlemeniz gerekir. Bu izinlerin kaynaklar üzerinde ve uygulama içinde yapılandırılması gerekir. Rol tabanlı erişim denetimi (RBAC) sistemi, uygulama kaynakları üzerinde bu izinleri ayarlamanıza olanak sağlar. Bu erişim hakları, tüm kaynaklar tek bir bulutta dağıtıldığında, ancak kaynaklar bulutlara yayıldığında daha fazla dikkat gerektirse de zorlayıcı bir işlem gerektirir. Bir bulutta ayarlanan kaynaklarla ilgili izinler, başka bir bulutta ayarlanan kaynaklar için uygulanmaz.
**CI/CD işlem hatlarını kullanın.** Sürekli tümleştirme ve sürekli geliştirme (CI/CD) işlem hattı, bulutlar arasında yayılan uygulamaları yazma ve dağıtmaya yönelik tutarlı bir işlem sağlayabilir ve altyapı ve uygulamalarına yönelik kalite güvencesi sağlar. Bu işlem hattı, altyapı ve uygulamanın bir bulutta test yapılmasını ve başka bir buluta dağıtılmasını sağlar. İşlem hattı, karma uygulamanızın belirli bileşenlerini tek bir buluta ve diğer bileşenlere, karma uygulama dağıtımı için temel oluşturacak şekilde dağıtmanıza olanak tanır. Bir CI/CD sistemi, yükleme sırasında, veritabanı için bir bağlantı dizesi gerektiren Web uygulaması gibi bağımlılıkları işlemek için kritik öneme sahiptir.
**Yaşam döngüsünü yönetin.** Bir karma uygulamanın kaynakları konumlara yayılabildiğinden, her bir konumun yaşam döngüsü yönetim yeteneğinin tek bir yaşam döngüsü yönetim biriminde toplanması gerekir. Nasıl oluşturulduğunu, güncelleştirileceğini ve silindiğini göz önünde bulundurun.
**Sorun giderme stratejilerini inceleyin.** Karma uygulama sorunlarını giderme, tek bir bulutta çalışan aynı uygulamadan daha fazla uygulama bileşeni içerir. Bulutlar arasındaki bağlantının yanı sıra, uygulama bir yerine iki platformda çalışmaktadır. Karma uygulamalardaki sorunları gidermeye yönelik önemli bir görev, uygulama bileşenlerinin toplu sistem durumunu ve performans izlemesini incelemektir.
## <a name="security"></a>Güvenlik
Güvenlik, herhangi bir bulut uygulaması için başlıca önemli noktalara biridir ve karma bulut uygulamaları için daha da kritik hale gelir.
Bu pilde temel tartışmak için, mimaride üstün olan beş paragraf üzerinde [*güvenlik*](/azure/architecture/guide/pillars#security) bölümüne bakın.
### <a name="security-checklist"></a>Güvenlik denetim listesi
**İhlalin olduğunu varsayın.** Uygulamanın bir bölümü tehlikeye atılırsa, yalnızca aynı konumda değil, ancak konumlar arasında değil, ihlalin yayılmasını en aza indirmek için çözüm olduğundan emin olun.
**İzin verilen ağ erişimini izleyin.** Uygulamanın yalnızca belirli bir alt ağdan erişmesi gibi ağ erişim ilkelerini belirleme ve yalnızca uygulamanın düzgün çalışması için gereken bileşenler arasındaki en düşük bağlantı noktalarına ve protokollere izin verme.
**Güçlü kimlik doğrulaması yapın.** Güçlü bir kimlik doğrulama düzeni, uygulamanızın güvenliği için önemlidir. Çoklu oturum açma özellikleri sağlayan ve şu şemalarından birini veya birkaçını kullanan bir federal kimlik sağlayıcısı kullanmayı düşünün: Kullanıcı adı ve parola oturum açma, ortak ve özel anahtarlar, iki öğeli veya Multi-Factor Authentication ve güvenilen güvenlik grupları. Sertifika türlerine ve gereksinimlerine ek olarak, uygulama kimlik doğrulaması için hassas verileri ve diğer gizli dizileri depolamak için uygun kaynakları belirleme.
**Şifrelemeyi kullanın.** Uygulamanın, veri depolama veya istemci iletişimi ve erişim gibi hangi alanlarda şifrelemeyi kullanacağınızı belirler.
**Güvenli kanallar kullanın.** Bulutlar genelinde güvenli bir kanal, güvenlik ve kimlik doğrulaması denetimleri, gerçek zamanlı koruma, karantina ve bulutlar arasında diğer hizmetler sağlamak için önemlidir.
**Rolleri tanımlayın ve kullanın.** Bulut genelinde kaynak yapılandırmalarına ve tek kimlik erişimine yönelik rolleri uygulayın. Uygulama ve platform kaynakları için rol tabanlı erişim denetimi (RBAC) gereksinimlerini saptayın.
**Sisteminizi denetleyin.** Sistem izleme, hem uygulama bileşenlerinden hem de ilgili bulut platformu işlemlerinden verileri günlüğe kaydedebilir ve toplayabilir.
## <a name="summary"></a>Özet
Bu makalede, karma uygulamalarınızı yazma ve tasarlama sırasında göz önünde bulundurmanız gereken öğelerin bir denetim listesi sunulmaktadır. Uygulamanızı dağıtmadan önce bu şekilde bu gibi bir şekilde gözden geçirmek, üretim kesintileri ' nda bu sorulara çalışmanızı ve tasarımınızı yeniden ziyaret etmeniz gerektiğini önler.
Bu, önceden zaman alan bir görev gibi görünebilir, ancak uygulamanızı bu şekilde tasarlar temelinde tasarlarsanız yatırım getirisi ile kolayca erişebilirsiniz.
## <a name="next-steps"></a>Sonraki adımlar
Daha fazla bilgi için aşağıdaki kaynaklara bakın:
- [Hibrit bulut](https://azure.microsoft.com/overview/hybrid-cloud/)
- [Karma bulut uygulamaları](https://azure.microsoft.com/solutions/hybrid-cloud-app/)
- [Bulut tutarlılığı için Azure Resource Manager şablonları geliştirme](/azure/azure-resource-manager/templates/templates-cloud-consistency) | 114.918803 | 701 | 0.83567 | tur_Latn | 1.000001 |
dbd5ca0842229a59c8b61ba718183c897ca2bcb3 | 128 | md | Markdown | README.md | androsanta/simple-nginx-reverse-proxy | 32ebe06a1ff3319793a4fe0c85b8ee0720c81882 | [
"MIT"
] | null | null | null | README.md | androsanta/simple-nginx-reverse-proxy | 32ebe06a1ff3319793a4fe0c85b8ee0720c81882 | [
"MIT"
] | null | null | null | README.md | androsanta/simple-nginx-reverse-proxy | 32ebe06a1ff3319793a4fe0c85b8ee0720c81882 | [
"MIT"
] | null | null | null | # Simple Nginx Reverse Proxy
## How to run
```bash
docker build -t <image-name> .
docker run --network="host" <image-name>
``` | 16 | 40 | 0.664063 | eng_Latn | 0.560885 |
dbd5e17716b144fc8fa26b8642cef3b731da2074 | 805 | md | Markdown | bitso/README.md | agoric-labs/external-adapters-js | 230d8c44b76af2445e06588500d6f1472bbeca04 | [
"MIT"
] | 1 | 2021-02-02T22:57:44.000Z | 2021-02-02T22:57:44.000Z | bitso/README.md | agoric-labs/external-adapters-js | 230d8c44b76af2445e06588500d6f1472bbeca04 | [
"MIT"
] | 3 | 2021-09-02T11:57:03.000Z | 2021-09-21T15:49:19.000Z | bitso/README.md | agoric-labs/external-adapters-js | 230d8c44b76af2445e06588500d6f1472bbeca04 | [
"MIT"
] | 1 | 2021-02-22T17:50:35.000Z | 2021-02-22T17:50:35.000Z | # Chainlink External Adapter for Bitso
## Input Params
- `base`, `from`, or `coin`: The symbol of the currency to query
- `quote`, `to`, or `market`: The symbol of the currency to convert to
- `endpoint`: Optional endpoint param (default: "ticker")
## Output
```json
{
"jobRunID":"1",
"data":{
"success":true,
"payload":{
"high":"1581920.78",
"last":"1567306.98",
"created_at":"2020-10-06T10:57:38+00:00",
"book":"btc_ars",
"volume":"16.96252687",
"vwap":"1568906.7103474855",
"low":"1553404.00",
"ask":"1574120.27",
"bid":"1567306.98",
"change_24":"2345.15"
},
"result":1567306.98
},
"result":1567306.98,
"statusCode":200
}
```
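As a rough illustration of how an adapter like this is typically called, the sketch below POSTs a job-run payload to a locally running instance. The port, path, and payload shape follow common Chainlink external-adapter conventions and are assumptions for illustration, not details taken from this README.

```typescript
// Sketch: calling a locally running Bitso adapter (port and payload shape are assumptions).
async function getPrice(base: string, quote: string) {
  const response = await fetch("http://localhost:8080/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: "1", data: { base, quote } }),
  });
  const job = await response.json();
  return job.result; // e.g. 1567306.98 for btc/ars in the sample output above
}

getPrice("BTC", "ARS").then((price) => console.log("Last price:", price));
```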
| 23.676471 | 70 | 0.514286 | eng_Latn | 0.342012 |
dbd5f95dee6d95fbef2ca605ac0faeeb250ff8ac | 382 | md | Markdown | _definitions/transcriptionDiplomatic_1998_Kline.md | WoutDLN/lexicon-scholarly-editing | c9b11e32dd786ade453a616bf60fb4f1b6417bbd | [
"CC-BY-4.0"
] | 2 | 2021-04-26T12:28:47.000Z | 2021-12-21T13:30:58.000Z | _definitions/transcriptionDiplomatic_1998_Kline.md | WoutDLN/lexicon-scholarly-editing | c9b11e32dd786ade453a616bf60fb4f1b6417bbd | [
"CC-BY-4.0"
] | 45 | 2020-04-04T19:51:35.000Z | 2022-03-24T16:56:19.000Z | _definitions/transcriptionDiplomatic_1998_Kline.md | WoutDLN/lexicon-scholarly-editing | c9b11e32dd786ade453a616bf60fb4f1b6417bbd | [
"CC-BY-4.0"
] | 3 | 2020-04-19T14:17:32.000Z | 2021-04-08T12:13:06.000Z | ---
lemma: transcription (diplomatic)
source: kline_guide_1998
page: 270
language: English
contributor: Caroline
updated_by: Caroline
---
**Diplomatic transcription.** A style of source transcription in which all details of inscription are [recorded](recordOfManuscriptAlterations.html), either symbolically in the reading [text](text.html) or in adjacent descriptive footnotes.
| 29.384615 | 240 | 0.803665 | eng_Latn | 0.903033 |
dbd613f9e816b1d97874ccb227a43ee69360f374 | 2,171 | md | Markdown | _posts/2014-04-08-92off-4-xperia-sx-3-djgaadi3xpsxm03-223.md | tokka2/tokka2.github.io | e9ed6cacd6a4aff08a23f9b96087d831d47ab8eb | [
"MIT"
] | null | null | null | _posts/2014-04-08-92off-4-xperia-sx-3-djgaadi3xpsxm03-223.md | tokka2/tokka2.github.io | e9ed6cacd6a4aff08a23f9b96087d831d47ab8eb | [
"MIT"
] | null | null | null | _posts/2014-04-08-92off-4-xperia-sx-3-djgaadi3xpsxm03-223.md | tokka2/tokka2.github.io | e9ed6cacd6a4aff08a23f9b96087d831d47ab8eb | [
"MIT"
] | null | null | null | ---
title: (92%OFF)デザエッグ デザジャケット ペルソナ4 ジ・アルティメット イン マヨナカアリーナ for Xperia SX デザイン3 里中 千枝 DJGA-ADI3-XPSX(m=03) ¥223
author: 特価情報ツウ!
excerpt: '<a target="_blank" href="//www.amazon.co.jp/gp/product/B00AA67XMM?ie=UTF8&tag=zonwari-22&linkCode=as2&camp=247&creative=7399&creativeASIN=B00AA67XMM"><img src="//ecx.images-amazon.com/images/I/51%2BNmcqymyL._SL100_.jpg"><br>デザエッグ デザジャケット ペルソナ4 ジ・アルティメット イン マヨナカアリーナ for Xperia SX デザイン3 里中 千枝 DJGA-ADI3-XPSX(m=03)<br>参考価格:¥ 2,980<br>価格:¥ 223<br>割引:¥ 2,757 (92%OFF)</a>'
layout: post
permalink: /mobile-smartphone/92off-4-xperia-sx-3-djgaadi3xpsxm03-223.html
syndication_source:
- ぞんわり 家電・カメラ お買い得新着
syndication_source_uri:
- '//zonwari.com/?cat=3210981&sort=T'
syndication_source_id:
- //zonwari.com/rss/3210981_T_1.xml
syndication_feed:
- //zonwari.com/rss/3210981_T_1.xml
syndication_feed_id:
- 19
syndication_permalink:
- '//www.amazon.co.jp/gp/product/B00AA67XMM?ie=UTF8&tag=zonwari-22&linkCode=as2&camp=247&creative=7399&creativeASIN=B00AA67XMM'
syndication_item_hash:
- af7e4be916d4aa168528455956b257cf
pvc_views:
- 1095
a8affiliate_code:
-
a8image_link:
-
a8product:
-
a8shop:
-
a8price:
-
rakuten_affiliate_code:
-
rakuten_shop:
- 楽天店
rakuten_product:
-
rakuten_price:
-
rakuten_affiliate_link:
-
categories:
- 携帯・スマートフォン
---
[<img src='//i1.wp.com/ecx.images-amazon.com/images/I/51%2BNmcqymyL._SL150_.jpg?w=546' title="" alt="" data-recalc-dims="1" />
デザエッグ デザジャケット ペルソナ4 ジ・アルティメット イン マヨナカアリーナ for Xperia SX デザイン3 里中 千枝 DJGA-ADI3-XPSX(m=03)
参考価格:¥ 2,980
価格:¥ 223
割引:¥ 2,757 (92%OFF)][1]
[1]: //www.amazon.co.jp/gp/product/B00AA67XMM?ie=UTF8&tag=tokkajohotsu-22&linkCode=as2&camp=247&creative=7399&creativeASIN=B00AA67XMM | 41.75 | 797 | 0.71672 | yue_Hant | 0.222957 |
dbd62b25e0c82fe512270a3b8763abbdbfa101e1 | 342 | md | Markdown | powerwatch/hardware/plugwatch_D/board/README.md | nklugman/PlugWatch | 4fbd2506a6808542fc5246e87d3c382761da1eaf | [
"MIT"
] | null | null | null | powerwatch/hardware/plugwatch_D/board/README.md | nklugman/PlugWatch | 4fbd2506a6808542fc5246e87d3c382761da1eaf | [
"MIT"
] | null | null | null | powerwatch/hardware/plugwatch_D/board/README.md | nklugman/PlugWatch | 4fbd2506a6808542fc5246e87d3c382761da1eaf | [
"MIT"
] | null | null | null | # Electron Sensor Board Rev 4
This sensor board is part of the [GridWatch](http://grid.watch) project.
The goal of this board is to test data collection in Accra. It is an updated
version of revision 3 with a focus on reliable operation even with a lack
of power and cloud connectivity. Please reference revision three for more
information.
| 42.75 | 76 | 0.792398 | eng_Latn | 0.999376 |
dbd72b13634bb88acd678c3f20a9e5385b177985 | 29 | md | Markdown | assets/README.md | jonathanmarp/GoWind | 95623449ac12cb9c597337e337bca27aecbb6b97 | [
"Apache-2.0"
] | 2 | 2020-10-06T07:36:12.000Z | 2020-10-08T10:26:02.000Z | assets/README.md | jonathanmarp/GoWind | 95623449ac12cb9c597337e337bca27aecbb6b97 | [
"Apache-2.0"
] | 1 | 2020-10-06T08:17:53.000Z | 2020-10-06T08:17:53.000Z | assets/README.md | jonathanmarp/GoWind | 95623449ac12cb9c597337e337bca27aecbb6b97 | [
"Apache-2.0"
] | null | null | null | # This For Assets Login File
| 14.5 | 28 | 0.758621 | eng_Latn | 0.662716 |
dbd7915d4001240ea0d560772ac9deab149f5a22 | 1,168 | md | Markdown | AlchemyInsights/in-place-upgrade-with-configuration-manager-guide.md | isabella232/OfficeDocs-AlchemyInsights-pr.et-EE | 6f83295d27a049861e9f5f81f6d42e4793d0404b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:06:20.000Z | 2021-03-06T00:35:21.000Z | AlchemyInsights/in-place-upgrade-with-configuration-manager-guide.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.et-EE | 4a00e421202d27d953e63e045bf8465047c2b72a | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:27:46.000Z | 2022-02-09T06:54:16.000Z | AlchemyInsights/in-place-upgrade-with-configuration-manager-guide.md | isabella232/OfficeDocs-AlchemyInsights-pr.et-EE | 6f83295d27a049861e9f5f81f6d42e4793d0404b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-09T20:35:20.000Z | 2020-06-02T23:27:27.000Z | ---
title: In-place upgrade with Configuration Manager Guide
ms.author: v-smandalika
author: v-smandalika
manager: dansimp
ms.date: 12/04/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "9004199"
- "7380"
ms.openlocfilehash: 0e01230010df55e6ceb8508d86fd4833112c0972d5130871b717545d2b427170
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: et-EE
ms.lasthandoff: 08/05/2021
ms.locfileid: "54014752"
---
# <a name="in-place-upgrade-with-configuration-manager-guide"></a>In-place upgrade with Configuration Manager Guide
Versiooniuuenduse käigus säilitatakse kõik olemasoleva opsüsteemi versiooni andmed, sätted, rakendused ja draiverid. See on loodud äärmiselt usaldusväärseks ja võimaldab probleemide korral automaatselt eelmisele opsüsteemile tagasi minna.
Kasutage [7-](https://admin.microsoft.com/adminportal/home#/win10upgrade) ja Windows 8.1 seadmete täiendamisel Windows 7 ja Windows 10. Eeltingimuste kontrolliks ja kohatäienduse automaatseks konfigureerimiseks kasutage pakutavat skripti. | 43.259259 | 238 | 0.833048 | est_Latn | 0.941109 |
dbd79be03390ff148ebd9780541a2a1fd13963b7 | 243 | md | Markdown | vault/tn/DAN-vj36.md | mandolyte/uw-obsidian | 39e987c4cdc49d2a68e3af6b4e3fc84d1cda916d | [
"MIT"
] | null | null | null | vault/tn/DAN-vj36.md | mandolyte/uw-obsidian | 39e987c4cdc49d2a68e3af6b4e3fc84d1cda916d | [
"MIT"
] | null | null | null | vault/tn/DAN-vj36.md | mandolyte/uw-obsidian | 39e987c4cdc49d2a68e3af6b4e3fc84d1cda916d | [
"MIT"
] | null | null | null | # General Information:
Verses 19-33 use the third person to describe the punishment of Nebuchadnezzar (See: [[figs-123person]]). Many terms in this verse are almost the same as in [Daniel 4:11](../04/11.md). See how you translated that verse. | 81 | 219 | 0.753086 | eng_Latn | 0.998029 |
dbd7f20e551570888308b8d46f4235b6b3500585 | 1,093 | md | Markdown | _posts/2016-08-04-Eve-of-Milady-Amalia-Eve-of-Milady-Style-327.md | eudanceyou/eudanceyou.github.io | 9d81bccab5dd52c95c99495c5800da809ea32ed7 | [
"MIT"
] | null | null | null | _posts/2016-08-04-Eve-of-Milady-Amalia-Eve-of-Milady-Style-327.md | eudanceyou/eudanceyou.github.io | 9d81bccab5dd52c95c99495c5800da809ea32ed7 | [
"MIT"
] | null | null | null | _posts/2016-08-04-Eve-of-Milady-Amalia-Eve-of-Milady-Style-327.md | eudanceyou/eudanceyou.github.io | 9d81bccab5dd52c95c99495c5800da809ea32ed7 | [
"MIT"
] | null | null | null | ---
layout: post
date: '2016-08-04'
title: "Eve of Milady Amalia Eve of Milady Style 327"
category: Eve of Milady Amalia
tags: [Eve of Milady Amalia]
---
### Eve of Milady Amalia Eve of Milady Style 327
Just **$519.99**
###
<a href="https://www.antebrands.com/en/eve-of-milady-amalia/62573-eve-of-milady-style-327.html"><img src="//static.msromantic.com/145154/eve-of-milady-style-327.jpg" alt="Eve of Milady Style 327" style="width:100%;" /></a>
<!-- break --><a href="https://www.antebrands.com/en/eve-of-milady-amalia/62573-eve-of-milady-style-327.html"><img src="//static.msromantic.com/145155/eve-of-milady-style-327.jpg" alt="Eve of Milady Style 327" style="width:100%;" /></a>
<a href="https://www.antebrands.com/en/eve-of-milady-amalia/62573-eve-of-milady-style-327.html"><img src="//static.msromantic.com/145153/eve-of-milady-style-327.jpg" alt="Eve of Milady Style 327" style="width:100%;" /></a>
Buy it: [https://www.antebrands.com/en/eve-of-milady-amalia/62573-eve-of-milady-style-327.html](https://www.antebrands.com/en/eve-of-milady-amalia/62573-eve-of-milady-style-327.html)
| 64.294118 | 236 | 0.719122 | yue_Hant | 0.517863 |
dbd8278c6e29f952e639a04a68bfaddfeca489d0 | 2,078 | md | Markdown | CodeBook.md | vdamle/clean-har-dataset | 421dd187474e2081a6a51034365ab99baffd0da7 | [
"Xnet",
"X11"
] | null | null | null | CodeBook.md | vdamle/clean-har-dataset | 421dd187474e2081a6a51034365ab99baffd0da7 | [
"Xnet",
"X11"
] | null | null | null | CodeBook.md | vdamle/clean-har-dataset | 421dd187474e2081a6a51034365ab99baffd0da7 | [
"Xnet",
"X11"
] | null | null | null | = Transformations applied =
Here are the transformations/steps applied in forming the tidy data
* read activity labels from activity_labels.txt and assign a meaningful column
name
* read feature names from features.txt and fix the issues with the names of the variables. Specifically, the following steps were done:
** remove ()
** replace BodyBody with Body
** replace strings starting with 't' with 'time' for time domain
** replace strings starting with 'f' with 'freq' for frequency domain
** replace 'Acc' with 'Accelerometer' for a more descriptive name
** replace 'Gyro' with 'Gyroscope' for a more descriptive name
** replace '-mean-' with 'Mean' for camel case
** replace '-std-' with 'Std' for camel case
** replace '-mean' with 'Mean' for camel case (var names ending with mean)
** replace '-std' with 'Std' for camel case (var names ending with std)
** replace ',g' with 'G' for camel case
* read training data for activity, subject and feature values
* assign colnames to features
* form regex for cols we are interested in
* select the cols based on the regex into a new data frame
* add the subject id and activity id for the corresponding feature row
* repeat the same for test data
* combine the training and test data with rbind
* apply group_by based on subject and activity and summarise_each for
mean and stdev to obtain stats for each measurement
* fix names of cols with 'SummarizedMean' and 'SummarizedStd' for mean
and std
* output tidy data
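For illustration only, the final grouping step has roughly this shape in Python/pandas (the original project uses R/dplyr; the data frame and column names below are assumptions for the sketch, not the project's actual code):

```python
import pandas as pd

# toy stand-in for the combined train+test frame (column names are assumptions)
combined = pd.DataFrame({
    'subjectId': [1, 1, 2],
    'activityId': [1, 1, 1],
    'timeBodyAccelerometerMeanX': [0.27, 0.28, 0.25],
})
# group by subject and activity, then take mean and std of every feature column
tidy = combined.groupby(['subjectId', 'activityId']).agg(['mean', 'std'])
```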
= Variable naming =
To conform to tidy data principles of having clean and descriptive variable
names, the above conventions have been followed. In general, variables have
been named with camel case and the descriptions have been picked from the
info about the project.
Ex: here's a list of a few variable name examples. The complete list of variables is found in tidy_data.txt
"activityId"
"subjectId"
"timeBodyAccelerometerMeanXSummarizedMean"
"timeBodyAccelerometerMeanYSummarizedMean"
"angleYGravityMeanSummarizedStd"
"angleZGravityMeanSummarizedStd"
"activityDescr"
| 39.207547 | 137 | 0.768527 | eng_Latn | 0.997805 |
dbd8690f9b387bf7b611c57a50ea95d082149485 | 2,168 | md | Markdown | src/docs/repos/frontends/DOC.md | rvsia/insights-frontend-storybook | 7e1e143db8fb46018652c183d900082f21957545 | [
"MIT"
] | null | null | null | src/docs/repos/frontends/DOC.md | rvsia/insights-frontend-storybook | 7e1e143db8fb46018652c183d900082f21957545 | [
"MIT"
] | null | null | null | src/docs/repos/frontends/DOC.md | rvsia/insights-frontend-storybook | 7e1e143db8fb46018652c183d900082f21957545 | [
"MIT"
] | null | null | null | # A running list of all of the apps by navigation groups
## Applications
### Landing Page
* [Landing Page](https://github.com/RedHatInsights/landing-page-frontend)
### Settings
* [Settings](https://github.com/RedHatInsights/settings-frontend)
* [RBAC](https://github.com/RedHatInsights/insights-rbac-ui)
* [Hooks](https://github.com/RedHatInsights/notifications-frontend)
* [Sources](https://github.com/ManageIQ/sources-ui)
* [Catalog](https://github.com/RedHatInsights/catalog-ui)
### Red Hat Insights
* [Insights (Formerly Advisor/Configuration Assessment)](https://github.com/RedHatInsights/insights-advisor-frontend)
* [Inventory](https://github.com/RedHatInsights/insights-inventory-frontend)
* [Remediations](https://github.com/RedHatInsights/insights-remediations-frontend)
### Cloud Management Services for Red Hat Enterprise Linux
* [Dashboard](https://github.com/RedHatInsights/insights-dashboard)
* [Vulnerability](https://github.com/RedHatInsights/vulnerability-ui)
* [Compliance](https://github.com/RedHatInsights/compliance-frontend)
* [Drift](https://github.com/RedHatInsights/drift-frontend)
* [Inventory](https://github.com/RedHatInsights/insights-inventory-frontend)
* [Remediations](https://github.com/RedHatInsights/insights-remediations-frontend)
### Red Hat Ansible Automation Platform
* [Automation Analytics](https://github.com/RedHatInsights/tower-analytics-frontend)
* [Automation Hub](https://github.com/ansible/ansible-hub-ui)
### Red Hat OpenShift Cluster Manager
* UHC
### Cost Management
* [Cost Management](https://github.com/project-koku/koku-ui)
### Migration Services
* [Migration Analytics](https://github.com/project-xavier/xavier-ui)
### Subscription Watch
* [Subscription Watch](https://github.com/RedHatInsights/curiosity-frontend)
### API Docs
* [API Docs](https://github.com/RedHatInsights/api-frontend)
## Utilities
* [Frontend Components - Shared components/utils/configs](https://github.com/RedHatInsights/frontend-components)
* [Storybook - documentation](https://github.com/RedHatInsights/insights-frontend-storybook)
* [Starter App](https://github.com/RedHatInsights/insights-frontend-starter-app) | 35.540984 | 117 | 0.773063 | yue_Hant | 0.742717 |
dbd890989dd80e20ccfd90fb53289bb45b713fa8 | 2,551 | md | Markdown | sccm/apps/deploy-use/update-and-retire-applications.md | Miguel-byte/SCCMDocs.es-es | 9849ced1705d6e20b02f9dfb170983f6e13682a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sccm/apps/deploy-use/update-and-retire-applications.md | Miguel-byte/SCCMDocs.es-es | 9849ced1705d6e20b02f9dfb170983f6e13682a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sccm/apps/deploy-use/update-and-retire-applications.md | Miguel-byte/SCCMDocs.es-es | 9849ced1705d6e20b02f9dfb170983f6e13682a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Actualizar y retirar aplicaciones
titleSuffix: Configuration Manager
description: Revise, sustituya o desinstale aplicaciones implementadas con System Center Configuration Manager.
ms.date: 10/06/2016
ms.prod: configuration-manager
ms.technology: configmgr-app
ms.topic: conceptual
ms.assetid: 68ac8a07-8e54-4a3c-91e3-e50dc1cabf5d
author: aczechowski
ms.author: aaroncz
manager: dougeby
ms.collection: M365-identity-device-management
ms.openlocfilehash: 767d6b54d19158a33b582dc3ab605d780914b18e
ms.sourcegitcommit: 874d78f08714a509f61c52b154387268f5b73242
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 02/12/2019
ms.locfileid: "56132823"
---
# <a name="update-and-retire-applications-with-system-center-configuration-manager"></a>Actualizar y retirar aplicaciones mediante System Center Configuration Manager
*Se aplica a: System Center Configuration Manager (Rama actual)*
Con el tiempo, es probable que quiera realizar cambios en una aplicación, desinstalar una aplicación o reemplazar una aplicación implementada por una nueva. System Center Configuration Manager ofrece las funcionalidades siguientes, que le ayudarán a actualizar y retirar aplicaciones:
- **Revisar aplicaciones**. Si realiza cambios en una aplicación o tipo de implementación, Configuration Manager mantiene un historial de esos cambios. Puede revertir la aplicación a una revisión anterior en cualquier momento. También puede ver sus propiedades, restaurar una revisión anterior de una aplicación o eliminar una revisión anterior.
Para obtener más información, consulte [Revisiones de la aplicación](revise-and-supersede-applications.md#application-revisions).
- **Sustituir aplicaciones**. Puede actualizar o reemplazar las aplicaciones existentes mediante una relación de sustitución. Cuando se sustituye una aplicación, puede especificar un nuevo tipo de implementación para reemplazar el tipo de implementación de la aplicación sustituida. También puede decidir si quiere actualizar o desinstalar la aplicación sustituida antes de que se instale la aplicación que la sustituye.
Para obtener más información, consulte [Sustitución de la aplicación](revise-and-supersede-applications.md#application-supersedence).
- **Desinstalar aplicaciones**. Configuration Manager facilita la desinstalación de una aplicación. Esto puede realizarse en modo silencioso, sin la intervención de la aplicación o del usuario del dispositivo.
Para obtener más información, consulte [Desinstalar aplicaciones](uninstall-applications.md).
| 65.410256 | 422 | 0.825951 | spa_Latn | 0.935536 |
dbd8f251fcd52c63c48cc86da81476c76c38d66d | 2,122 | md | Markdown | mail/ses-to-imap/README.md | inertia64/debian-server-tools | 0cee45351b3abe195a24cc19743ea911144503ea | [
"MIT"
] | 418 | 2015-02-13T16:01:26.000Z | 2022-03-24T10:34:56.000Z | mail/ses-to-imap/README.md | inertia64/debian-server-tools | 0cee45351b3abe195a24cc19743ea911144503ea | [
"MIT"
] | 8 | 2015-08-09T17:45:13.000Z | 2019-12-02T20:19:09.000Z | mail/ses-to-imap/README.md | inertia64/debian-server-tools | 0cee45351b3abe195a24cc19743ea911144503ea | [
"MIT"
] | 125 | 2015-01-26T17:20:11.000Z | 2022-03-01T17:02:33.000Z | # SES-to-IMAP
Receiving emails with Amazon SES, S3 and SNS to an IMAP mailbox.
Internet → Amazon SES → S3 storage → SNS notification → local web hook → incron job → s3cmd
### Setup
1. Local IMAP server and accounts
1. SES send - SMTP
1. SES receive / Email Receiving / Rule Sets: store to S3 bucket and notify SNS topic
1. `ses-sns-notifications` directory
1. Install PHP script: `https://example.com/ses-sns/endpoint.php` append to message list file: `[email protected]`
1. Add incron job: `/home/USER/website/ses-emails IN_CREATE,IN_MODIFY /usr/local/bin/ses-mail-download.sh $@ $#`
1. `ses-get-messages.sh` download messages to the specified inbox with s3cmd and clear the S3 bucket
1. SNS / Topics: Protocol HTTPS
- Log in https://console.aws.amazon.com/
- SES
enabled region EU (Ireland) US East (N. Virginia) US West (Oregon)
domain + dkim
add dns: Verification, DKIM, MX
wait for verification
- SES Receipt
Rule
info@
S3 ses-wip/info
SNS ses-wip-info
Rule name info-wip
SES receives the mail from anywhere and filters viruses and spam as much as it can
it stores the message in an S3 bucket as a plain .eml file
SNS notifies a small PHP program running on our side that a message has arrived
the PHP program appends the message ID to a text file
an incron job watches that file and starts a script
which downloads the message and puts it into your mailbox :)
- SES SMTP
Sending Statistics - sandbox
create cred.
- SNS
Topics
deploy PHP code, composer update
`composer require aws/aws-php-sns-message-validator`
Create subscr HTTPS https://lampa.wip.services/ses-sns/endpoint.php
confirm URL
- Test
Publish { "default": "{\"tag\": 11}" }
From: [email protected] (verified)
console window!
See file in /home/USER/website/ses-sns-notifications
- Courier imap
hosted domain
auth userdb
add user
- RainLoop
imap+smtp
- incron
apt install
incron shell script
allow USER
incrontab -u USER incrontab
- Test email
From: [email protected] (verified)
- Courier SMTP settings
esmtpd/2
TODO
Bounces SNS
Complaints SNS
| 28.675676 | 112 | 0.712064 | hun_Latn | 0.594968 |
dbd9259c65463f91c9c80064ccf7cdc4ac0c172d | 405 | md | Markdown | docs/sources/panels/format-data/_index.md | wolftankk/grafana | a92f85a87bc0c78ae41a2b6138246c5d65f7eb5b | [
"Apache-2.0"
] | null | null | null | docs/sources/panels/format-data/_index.md | wolftankk/grafana | a92f85a87bc0c78ae41a2b6138246c5d65f7eb5b | [
"Apache-2.0"
] | null | null | null | docs/sources/panels/format-data/_index.md | wolftankk/grafana | a92f85a87bc0c78ae41a2b6138246c5d65f7eb5b | [
"Apache-2.0"
] | null | null | null | +++
aliases = ["/docs/grafana/latest/panels/format-data/", "/docs/grafana/latest/panels/value-mappings/", "/docs/sources/panels/format-data/"]
title = "Format data using value mapping"
weight = 600
+++
# Format data using value mapping
In addition to field overrides, value mapping is a technique that you can use to change the visual treatment of data that appears in a visualization.
{{< section >}}
| 33.75 | 149 | 0.740741 | eng_Latn | 0.97575 |
dbd9a927b6673f297fe4bbc7e6fa5f87915174eb | 4,670 | md | Markdown | Documenti/Bozze/Documento dei Requisiti v1.0.md | AI-ui69/MagicMirror-GBM | ba5cee1c18205909e8d20f4237efe84dcd1dce71 | [
"Apache-2.0"
] | 1 | 2021-03-21T20:22:58.000Z | 2021-03-21T20:22:58.000Z | Documenti/Bozze/Documento dei Requisiti v1.0.md | AI-ui69/MagicMirror-GBM | ba5cee1c18205909e8d20f4237efe84dcd1dce71 | [
"Apache-2.0"
] | null | null | null | Documenti/Bozze/Documento dei Requisiti v1.0.md | AI-ui69/MagicMirror-GBM | ba5cee1c18205909e8d20f4237efe84dcd1dce71 | [
"Apache-2.0"
] | null | null | null | # MagicMirror-GBM
Documento dei requisiti
*ver. 1.0*
16/02/2021
---
## Indice (automatico)
...
---
## 1 Premesse del progetto
### 1.1 Obiettivi e scopo del progetto
Il prodotto è stato appositamente studiato per fornire al cliente un'esperienza interattiva che integra i servizi di un assistente personale in uno specchio.
### 1.2 Contesto di business
Nella continua evoluzione tecnologica degli ultimi anni si è rilevata sempre più utile l'integrazione della domotica e della tecnologia in strumenti di uso quotidiano.
### 1.3 Stakeholders
Le figure che influenzano lo sviluppo del sistema software sono:
- Committente: **NonSoloTelefonia Lab**
- Clienti: **Human-Centered Design**
- Developers (analisti, progettisti)
---
## 2 Servizi del sistema
### 2.1 Requisiti funzionali
- 2.1.1 Il sistema dovrà consentire la modifica delle impostazioni del sistema software stesso
- 2.1.1.1 Il sistema dovrà consentire la gestione delle impostazioni di connettività
- 2.1.1.2 Il sistema dovrà consentire la gestione del microfono e della fotocamera
- 2.1.1.3 Il sistema dovrà permettere il reset del sistema stesso
- 2.1.1.4 Il sistema dovrà permettere una fase di configurazione iniziale
- 2.1.1.6 Il sistema dovrà permettere la modifica della lingua
- 2.1.1.7 Il sistema dovrà permettere la regolazione del volume
- 2.1.1.8 Il sistema dovrà permettere la modifica delle suonerie
- 2.1.1.9 Il sistema dovrà permettere la gestione delle notifiche
- 2.1.1.10 Il sistema dovrà consentire la modifica dello sfondo
- 2.1.2 Il sistema dovrà integrare un modulo per il meteo
- 2.1.2.1 Il sistema dovrà consentire la visualizzazione del meteo
- 2.1.2.2 Il sistema dovrà consentire la modifica della zona interessata
- 2.1.2.3 Il sistema dovrà consentire la modifica della scala termometrica
- 2.1.3 Il sistema dovrà integrare un modulo per le news
- 2.1.4 Il sistema dovrà integrare un modulo per il riconoscimento vocale
- 2.1.5 Il sistema dovrà integrare un modulo per un calendario interattivo
- 2.1.6 Il sistema dovrà integrare un modulo per data e ora
- 2.1.7 Il sistema dovrà integrare un modulo per un timer
- 2.1.9 <mark>Il sistema dovrà integrare un modulo per la messaggistica</mark>
- 2.1.10 Il sistema dovrà integrare un modulo per la riproduzione di contenuti multimediali
- 2.1.11 Il sistema dovrà integrare un modulo per le mappe
- 2.1.12 Il sistema dovrà integrare un modulo per le e-mail
- 2.1.13 Il sistema dovrà integrare un modulo per le note
- 2.1.14 Il sistema dovrà integrare un modulo per l'utilizzo della fotocamera
### 2.2 Requisiti informativi
Questa sezione è vuota (per ora).
---
## 3 Vincoli di sistema
### 3.1 Requisiti di interfaccia
L'interfaccia proposta dal sistema è stata appositamente studiata per garantire una fruizione di contenuti intuitiva ed immediata.
- 3.1.1 Interfaccia principale
- 3.1.1.1 Visualizzazione della data e dell'ora corrente
- 3.1.1.2 Visualizzazione del meteo
- 3.1.1.3 Visualizzazione delle news
- 3.1.1.4 Visualizzazione di un calendario interattivo
- 3.1.1.5 Visualizzazione delle note
- 3.1.1.6 Visualizzazione delle e-mail
- 3.1.2 Interfaccia browser
- 3.1.2.1 Visualizzazione risultati navigazione web
- 3.1.3 Interfaccia timer
- 3.1.3.1 Visualizzazione del timer
- 3.1.4 <mark>Interfaccia messaggistica</mark>
- 3.1.5 Interfaccia riproduzione di contenuti multimediali
- 3.1.6 Interfaccia mappe
- 3.1.6.1 Visualizzazione delle mappe
### 3.2 Requisiti tecnologici
- 3.2.1 Raspberry Pi (modello?)
- 3.2.2 Microfono
- 3.2.3 Fotocamera
- 3.2.4 Casse audio
- 3.2.5 Schermo (tipo?)
- 3.2.6 Telaio specchio
- 3.2.7 Two-way mirror
### 3.3 Requisiti di prestazione
Non si registrano particolari esigenze in questo ambito.
### 3.4 Requisiti di sicurezza
Non si registrano particolari esigenze in questo ambito.
### 3.5 Requisiti operativi
L'intero progetto è stato realizzato utilizzando i seguenti linguaggi:
- JavaScript
- CSS
- HTML
Si relaziona con sistemi operativi Raspberry Pi OS e applicazione Android.
### 3.6 Requisiti politici e legali
Non si registrano particolari esigenze in questo ambito.
### 3.7 Altri vincoli
Questa sezione è vuota.
---
## 4 Appendici
### 4.1 Glossario
- **Browser**: programma per navigare in Internet che inoltra la richiesta di un documento alla rete e ne consente la visualizzazione una volta arrivato.
- **Human-centered design**: approccio di problem solving che coinvolge la prospettiva del cliente in tutti gli step della risoluzione stessa.
- **Two-way mirror**: particolare tipo di specchio che da un lato riflette la luce mentre dall'altro ne permette il passaggio.
| 30.324675 | 167 | 0.755675 | ita_Latn | 0.999233 |
dbd9b9ae6618057ef95ba016b25a692c90746dde | 426 | md | Markdown | _posts/2019-11-24-linux-wind.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2019-11-24-linux-wind.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2019-11-24-linux-wind.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | ---
title: Linux, Windows Users Targeted With New ACBackdoor Malware
author: PipisCrew
date: 2019-11-24
categories: [news]
toc: true
---
https://www.bleepingcomputer.com/news/security/linux-windows-users-targeted-with-new-acbackdoor-malware/
https://www.intezer.com/blog-acbackdoor-analysis-of-a-new-multiplatform-backdoor/
origin - https://www.pipiscrew.com/?p=15912 linux-windows-users-targeted-with-new-acbackdoor-malware | 35.5 | 104 | 0.793427 | kor_Hang | 0.280348 |
dbdafe6cb284f923d36f5c3434d19aeb2355072b | 2,808 | md | Markdown | content/reference/cli/repo/update.md | KeisukeYamashita/docs-1 | bd020d557278b4adf871cf8f1f5ca38e33d4c6e7 | [
"Apache-2.0"
] | null | null | null | content/reference/cli/repo/update.md | KeisukeYamashita/docs-1 | bd020d557278b4adf871cf8f1f5ca38e33d4c6e7 | [
"Apache-2.0"
] | null | null | null | content/reference/cli/repo/update.md | KeisukeYamashita/docs-1 | bd020d557278b4adf871cf8f1f5ca38e33d4c6e7 | [
"Apache-2.0"
] | null | null | null | ---
title: "Update"
linkTitle: "Update"
description: >
Learn how to modify a repo.
---
## Command
```
$ vela update repo <parameters...> <arguments...>
```
{{% alert color="info" %}}
For more information, you can run `vela update repo --help`.
{{% /alert %}}
## Parameters
The following parameters are used to configure the command:
| Name | Description | Environment Variables |
| ------------ | -------------------------------------------------- | ------------------------------------ |
| `org` | name of organization for the repository | `VELA_ORG`, `REPO_ORG` |
| `repo` | name of repository | `VELA_REPO`, `REPO_NAME` |
| `link` | full URL for the repository | `VELA_LINK`, `REPO_LINK` |
| `clone` | clone URL for the repository | `VELA_CLONE`, `REPO_CLONE` |
| `visibility` | access level required to view the repository | `VELA_VISIBILITY`, `REPO_VISIBILITY` |
| `timeout` | max time allowed per build | `VELA_TIMEOUT`, `REPO_TIMEOUT` |
| `private` | disables public access to the repository | `VELA_PRIVATE`, `REPO_PRIVATE` |
| `trusted` | elevates permissions for builds for the repository | `VELA_TRUSTED`, `REPO_TRUSTED` |
| `active` | enables/disables the repository | `VELA_ACTIVE`, `REPO_ACTIVE` |
| `event` | events to trigger builds for the repository | `VELA_EVENTS`, `REPO_EVENTS` |
| `output` | format the output for the repository | `VELA_OUTPUT`, `REPO_OUTPUT` |
{{% alert color="info" %}}
This command also supports setting the following parameters via a configuration file:
- `org`
- `repo`
- `output`
For more information, please review the [CLI config documentation](/docs/reference/cli/config/).
{{% /alert %}}
## Permissions
COMING SOON!
## Sample
{{% alert color="warning" %}}
This section assumes you have already installed and setup the CLI.
To install the CLI, please review the [installation documentation](/docs/reference/cli/install/).
To setup the CLI, please review the [authentication documentation](/docs/reference/cli/authentication/).
{{% /alert %}}
#### Request
```sh
vela update repo --org github --repo octocat --event tag
```
#### Response
```sh
id: 1
userid: 1
org: github
name: octocat
fullname: github/octocat
link: https://github.com/github/octocat
clone: https://github.com/github/octocat.git
branch: master
timeout: 60
visibility: public
private: false
trusted: false
active: true
allowpull: true
allowpush: true
allowdeploy: false
allowtag: true
allowcomment: false
```
| 31.909091 | 108 | 0.59188 | eng_Latn | 0.753415 |
dbdbdaa73212b82993c20a418e5e15e3ee1ee6ac | 1,440 | md | Markdown | docs/interfaces/_speechly_d_.clientoptions.md | 3mcd/browser-client | 6b12594207f8005c148286ad13827f77926e7738 | [
"MIT"
] | null | null | null | docs/interfaces/_speechly_d_.clientoptions.md | 3mcd/browser-client | 6b12594207f8005c148286ad13827f77926e7738 | [
"MIT"
] | null | null | null | docs/interfaces/_speechly_d_.clientoptions.md | 3mcd/browser-client | 6b12594207f8005c148286ad13827f77926e7738 | [
"MIT"
] | null | null | null | [@speechly/browser-client](../README.md) › ["speechly.d"](../modules/_speechly_d_.md) › [ClientOptions](_speechly_d_.clientoptions.md)
# Interface: ClientOptions
The options which can be used to configure the client.
## Hierarchy
* **ClientOptions**
## Index
### Properties
* [appId](_speechly_d_.clientoptions.md#appid)
* [debug](_speechly_d_.clientoptions.md#optional-debug)
* [deviceId](_speechly_d_.clientoptions.md#optional-deviceid)
* [language](_speechly_d_.clientoptions.md#language)
* [sampleRate](_speechly_d_.clientoptions.md#optional-samplerate)
* [url](_speechly_d_.clientoptions.md#optional-url)
## Properties
### appId
• **appId**: *string*
Defined in speechly.d.ts:100
The unique identifier of an app in the dashboard.
___
### `Optional` debug
• **debug**? : *undefined | false | true*
Defined in speechly.d.ts:120
Whether to output debug statements to the console.
___
### `Optional` deviceId
• **deviceId**? : *undefined | string*
Defined in speechly.d.ts:112
The identifier of the device which is using the client.
___
### language
• **language**: *string*
Defined in speechly.d.ts:104
The language which is used by the app.
___
### `Optional` sampleRate
• **sampleRate**? : *undefined | number*
Defined in speechly.d.ts:116
The sample rate of the audio to use.
___
### `Optional` url
• **url**? : *undefined | string*
Defined in speechly.d.ts:108
The URL of Speechly API endpoint.
| 17.777778 | 134 | 0.718056 | eng_Latn | 0.898482 |
dbdc9f8c87545faeb067cd0ff8a9eeaf2c5fc3b7 | 805 | md | Markdown | thuumper-beta/README.md | ielvisd/thuumper | dcdd98ca533c079686c6ad2a190c5687cf33ecfe | [
"MIT"
] | null | null | null | thuumper-beta/README.md | ielvisd/thuumper | dcdd98ca533c079686c6ad2a190c5687cf33ecfe | [
"MIT"
] | null | null | null | thuumper-beta/README.md | ielvisd/thuumper | dcdd98ca533c079686c6ad2a190c5687cf33ecfe | [
"MIT"
] | null | null | null | # Thuumper
### Front-End
#### https://thuumper.ielvis.vercel.app/
### Storybook
#### https://thuumper-storybook.ielvis.vercel.app/
#### Requirements
1. Node
2. npm or yarn
---
#### Installation
To get the project up and running, and view components in the browser, complete the following steps:
1. Download and install Node: https://nodejs.org/
2. Clone this repo `git clone https://github.com/ielvisd/thuumper-practice.git`
3. cd into thuumper folder
4. Install project dependencies: `yarn`
5. Start the development environment: `yarn dev`
6. Open your browser and visit http://localhost:3000
---
#### Testing
Jest is used for unit and snapshot testing. Run the tests with `yarn test`.
---
#### Built With
- Vue
- Nuxt
- Tailwind
- Storybook
---
#### Author
- Elvis Ibarra https://github.com/ielvisd
| 16.428571 | 100 | 0.701863 | eng_Latn | 0.618992 |
dbdcbbdbf21c54368b4a350463bf897c74908fee | 1,171 | md | Markdown | SharePoint/SharePointServer/PSConfigHelp/modify-server-farm-settings.md | adrianwells/OfficeDocs-SharePoint | d60d49cf4bca382969fba214bf33b3530e046a4a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-13T07:15:33.000Z | 2021-11-13T07:15:33.000Z | SharePoint/SharePointServer/PSConfigHelp/modify-server-farm-settings.md | adrianwells/OfficeDocs-SharePoint | d60d49cf4bca382969fba214bf33b3530e046a4a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | SharePoint/SharePointServer/PSConfigHelp/modify-server-farm-settings.md | adrianwells/OfficeDocs-SharePoint | d60d49cf4bca382969fba214bf33b3530e046a4a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Modify server farm settings"
ms.reviewer:
ms.author: mikeplum
author: MikePlumleyMSFT
manager: pamgreen
ms.date: 3/1/2018
audience: ITPro
ms.topic: article
ms.prod: sharepoint-server-itpro
localization_priority: Normal
ROBOTS: NOINDEX
ms.collection:
- IT_Sharepoint_Server
- IT_Sharepoint_Server_Top
ms.assetid: 8cbb6209-2917-4039-8b27-b998de9a9c0c
description: "Summary: Learn how disconnect from a server in SharePoint Server."
---
# Modify server farm settings
**Summary:** Learn how to disconnect from a server in SharePoint Server.
You can use the options on this page to modify the server farm settings. The name of the server that is hosting the configuration database and the name of the configuration database are displayed.
If you want to disconnect this computer from this server farm, select **Disconnect from this server farm**, and then click **Next**. When the configuration wizard completes, this computer will no longer host the SharePoint Central Administration web site and will immediately be removed from the SharePoint server farm.
You can run the configuration wizard later to join the computer to another server farm.
| 37.774194 | 320 | 0.793339 | eng_Latn | 0.993681 |
dbdce5add3d06b3b517a005cadbf5623c12ebdb7 | 32 | md | Markdown | org/docs/patterns/bent/fr.md | woutervdub/markdown | 402011bab2f2b5f1e2072a117b513750726e51f9 | [
"MIT"
] | null | null | null | org/docs/patterns/bent/fr.md | woutervdub/markdown | 402011bab2f2b5f1e2072a117b513750726e51f9 | [
"MIT"
] | null | null | null | org/docs/patterns/bent/fr.md | woutervdub/markdown | 402011bab2f2b5f1e2072a117b513750726e51f9 | [
"MIT"
] | null | null | null | ---
title: Bent body block
---
| 6.4 | 22 | 0.5625 | eng_Latn | 0.969837 |
dbdcfa55d0b76f83b04682092f54f75e84590fb8 | 1,731 | md | Markdown | articles/cognitive-services/QnAMaker/How-To/change-default-answer.md | afonsogcardoso/azure-docs.pt-br | eb8085d4efa64294d91788cd5c2157c336630e55 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/QnAMaker/How-To/change-default-answer.md | afonsogcardoso/azure-docs.pt-br | eb8085d4efa64294d91788cd5c2157c336630e55 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/QnAMaker/How-To/change-default-answer.md | afonsogcardoso/azure-docs.pt-br | eb8085d4efa64294d91788cd5c2157c336630e55 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Obter resposta padrão - QnA Maker
description: A resposta padrão é devolvida quando não há correspondência com a pergunta. Você pode querer alterar a resposta padrão da resposta padrão padrão.
ms.topic: conceptual
ms.date: 01/10/2020
ms.openlocfilehash: fae5c38fd64435a3fae56862bad04e000916e88b
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/28/2020
ms.locfileid: "76843270"
---
# <a name="set-default-answer-for-a-knowledge-base"></a>Defina a resposta padrão para uma base de conhecimento
A resposta padrão é devolvida quando não há correspondência com a pergunta. Você pode querer alterar a resposta padrão da resposta padrão padrão.
## <a name="change-default-answer"></a>Alterar a resposta padrão
1. Vá para o [portal do Azure](https://portal.azure.com) e navegue até o grupo de recursos que representa o serviço QnA Maker que você criou.
2. Clique para abrir o **Serviço de Aplicativo**.

3. Clique em **Configurações do Aplicativo** e edite o campo **DefaultAnswer** para a resposta padrão desejada. Clique em **Salvar**.

4. Reinicie o serviço Aplicativo

## <a name="next-steps"></a>Próximas etapas
* [Crie um bot com QnA Maker e LUIS](../tutorials/integrate-qnamaker-luis.md) | 49.457143 | 160 | 0.778163 | por_Latn | 0.994944 |
dbdd439a556f06aa8909ff7222f8739d63f6bf92 | 3,464 | md | Markdown | README.md | gizagizamax/PotatoVoiceHub | 9f3cd42dd81e65e20d3473731a188da08354ce11 | [
"MIT"
] | 1 | 2022-03-19T10:48:03.000Z | 2022-03-19T10:48:03.000Z | README.md | gizagizamax/PotatoVoiceHub | 9f3cd42dd81e65e20d3473731a188da08354ce11 | [
"MIT"
] | null | null | null | README.md | gizagizamax/PotatoVoiceHub | 9f3cd42dd81e65e20d3473731a188da08354ce11 | [
"MIT"
] | null | null | null | # PotatoVoiceHub
「ポテトボイスハブ」は棒読みちゃんの音声をA.I.Voiceに変えるアプリ
# 使い方
1.Plugin_PotatoVoiceHub.dll を棒読みちゃん(BouyomiChan.exe)のあるフォルダへコピーします。
2.棒読みちゃんを起動して、PluginPotatoVoiceHub を有効にします。
棒読みちゃん起動時に有効にするダイアログが表示されます。
棒読みちゃんの[その他]タブから有効にすることもできます。
3.PotatoVoiceHub.exe を実行します。
4.[A.I.VOICE開始]をクリックするとA.I.VOICE Editorが起動します。
5.棒読みちゃんの音声がA.I.Voiceに変わります。
# PotatoVoiceHubの設定
## 再生APIのHTTPポート
ここで指定したポートをめがけて、棒読みちゃんからテキストが飛んできます。
デフォルトでは 2119番ポートを使用しますが、他のアプリで使っている場合は適当な数字に変えてください。
棒読みちゃんのツールバーにある PluginPotatoVoiceHub を開き、こちらのポートも合わせてください。
# Voiceroid Talk Plus と連携する
@Wangdora氏作成の Voiceroid Talk Plus を PotatoVoiceHub に対応させる事ができます(非公式)。
PotatoVoiceHub.exe と VoiceroidTalkPlusReceiverHub.exe を実行してください。
Plugin_PotatoVoiceHub.dll は不要なので、棒読みちゃんのプラグイン設定でOFFにしておいてください。
基本的にはこれだけで、いつも通りVoiceroid Talk Plus が 最新の A.I.VOICE で動くようになります。
PotatoVoiceHub と VoiceroidTalkPlusReceiverHub のポートの数字は同じ物を入れておいてください。
# Recotte Studio と連携する
プロジェクトの編集画面
→メニューの[ファイル]
→[環境設定]
→[ユーザー定義音声連携の設定]を開きます
[+]ボタンを押して下記設定をします
音声連携名:適当な名前
連携方法:コメントごとにコマンドを実行
実行コマンド:C:\Windows\System32\cmd.exe
引数:/C curl -G "http://localhost:2119/saveAudio" --data-urlencode "text=%c" --data-urlencode "path=%o" --data-urlencode "preset=%s"
拡張子:wav
[適応]を押してプロジェクトの編集画面まで戻ります。
→タイムラインにある[話者1]の設定を開きます([レイヤーのプロパティ]画面が開く)
→[音声連携]を先ほど適当な名前をつけたやつに変更します。
→[話者名]にA.I.Voice Editorのプリセット名を指定します。
[話者名]を指定しないと、A.I.Voice Editorで選択しているキャラになります。
→[OK]でプロジェクトの編集画面へ。
[話者1]にコメントを記載します。
→[話者1]に更新マークが出てくるので押します。
→AIVoiceと連携して、コメントに音声がつきます。
※あらかじめ、PotateVoiceHub経由でAIVoice Editorを起動しておいてください。
# リリースノート
## PotatoVoiceHub_v2022.03.06
時間経過で接続が切れるっぽいので再接続するよう修正
## PotatoVoiceHub_v2022.03.05.2
設定ファイルがバグってたので修正
## PotatoVoiceHub_v2022.03.05
A.I.Voice 1.3で公式APIが公開されました。
それに伴い全体的に修正しています。
PotatoVoiceHub
A.I.Voice Editorのパス指定を削除しました。
公式APIを使用した場合、不要になるためです。
受信HTTPポート→再生APIのHTTPポート へ名前変更
URLを変更
http://localhost:ポート?text=メッセージ
↓
http://localhost:ポート/play?text=メッセージ&preset=プリセット名
音声保存APIのURL変更
http://localhost:ポート/saveWave?text=メッセージ&filePath=フルパス&presetName=プリセット名
↓
http://localhost:ポート/saveAudio?text=メッセージ&path=フルパス&preset=プリセット名
これに伴いREADME.mdに記載のRecotte Studio と連携方法を修正
クリップボード連携の注意書き修正
いくつかバグがあったので修正
A.I.VOICE 開始→A.I.VOICEに接続 へ名前変更
起動中のA.I.Voice Editorへ接続するように変更
A.I.Voice Editorが起動していなかったら起動するように変更
設定ファイル「VoiceHubOption.json」の項目名を変更
getStatusで取得できるステータスをplaying/waiting→busy/idleへ変更
Plugin_PotatoVoiceHub
getStatusのステータス変更に伴い、判定するステータスを変更
VoiceroidTalkPlusReceiverHub
再生APIのURL変更に伴い、呼ぶURLを変更
getStatusのステータス変更に伴い、判定するステータスを変更
## PotatoVoiceHub_v2022.01.19
Plugin_PotatoVoiceHub
・Recotte Studioと連携時に[話者名]を指定できるようにしました。
## PotatoVoiceHub_v2022.01.16
Plugin_PotatoVoiceHub
・「棒読みちゃんの辞書変換を使う」チェックボックスを追加
・棒読みちゃんコマンドは読み上げないように修正。
PotetoVoiceHub
・テーマの切り替え機能を削除
最新のAIVoice Editorに機能が追加されていたため。
・お試し再生の機能を削除
元々デバッグ用に作った機能で、機能が充実してきたので邪魔なので消しました。
・音声保存APIを追加
Recotte StudioからShift-Jisで送られてくるようなので、エンコードを指定できるようにしました。
・クリップボード連携を追加
文字をコピーしたら、AIVoiceで勝手に再生したり、ファイルに保存したりします。
## PotatoVoiceHub_v2021.11.09
VoiceroidTalkPlus のランダム再生を使って「AIVOICE」→「VOICEROID・ガイノイド」の順に発話すると
AIVOICEとVOICEROIDが同時に発話していたのを修正
## PotatoVoiceHub_v2021.11.08
VoiceroidTalkPlusReceiverHub.exe を追加
A.I.VOICE Editor を起動しないで、ダークモード/ホワイトモードを設定すると落ちてたので修正
ログが自動でスクロールしなかったので修正
アプリのアイコンを追加
## vPotatoVoiceHub_v2021.10.29
初回版
| 26.442748 | 132 | 0.790993 | yue_Hant | 0.847384 |
dbdd4a8a03923a678d9c0f7eb1128dfdc5bd92d1 | 6,408 | md | Markdown | gui-tool-tutorials/github-desktop-old-version-tutorial.md | uniyalabhishek/first-contributions | 2fd17e1d376ab13893ccb245df6ceac88cf3d47a | [
"MIT"
] | null | null | null | gui-tool-tutorials/github-desktop-old-version-tutorial.md | uniyalabhishek/first-contributions | 2fd17e1d376ab13893ccb245df6ceac88cf3d47a | [
"MIT"
] | null | null | null | gui-tool-tutorials/github-desktop-old-version-tutorial.md | uniyalabhishek/first-contributions | 2fd17e1d376ab13893ccb245df6ceac88cf3d47a | [
"MIT"
] | null | null | null | [](https://github.com/ellerbrock/open-source-badges/)
[<img align="right" width="150" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/join-slack-team.png">](https://join.slack.com/t/firstcontributors/shared_invite/enQtNjkxNzQwNzA2MTMwLTVhMWJjNjg2ODRlNWZhNjIzYjgwNDIyZWYwZjhjYTQ4OTBjMWM0MmFhZDUxNzBiYzczMGNiYzcxNjkzZDZlMDM)
[](https://opensource.org/licenses/MIT)
[](https://www.codetriage.com/roshanjossey/first-contributions)
# First Contributions
| <img alt="GitHub Desktop" src="https://desktop.github.com/images/desktop-icon.svg" width="200"> | GitHub Desktop Edition |
| ----------------------------------------------------------------------------------------------- | ---------------------- |
It's hard. It's always hard the first time you do something. Especially when you
are collaborating, making mistakes isn't a comfortable thing. But open source is
all about collaboration & working together. We wanted to simplify the way new
open-source contributors learn & contribute for the first time.
Reading articles & watching tutorials can help, but what comes better than
actually doing the stuff without messing up anything. This project aims at
providing guidance & simplifying the way rookies make their first contribution.
Remember the more relaxed you are, the better you learn. If you are looking for
making your first contribution just follow the simple steps below. We promise
you, it will be fun.
<img align="right" width="300" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/fork.png" alt="fork this repository" />
If you don't have GitHub Desktop on your machine,
[install it](https://desktop.github.com/).
## Fork this repository
Fork this repo by clicking on the fork button on the top of this page. This will
create a copy of this repository in your account.
## Clone the repository
Now clone this repo to your machine.
Open the GitHub Desktop app and click on the `+` on the top left.
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-clone1.png" alt="clone this repository" />
If it is not already selected, click on `Clone`. Then choose first-contributions
and then click on `Clone first-contributions`
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-clone2.png" alt="clone this repository" />
Choose the directory on your machine you would like to clone first-contributions
into
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-clone3.png" alt="clone this repository" />
Now you have copied the contents of the first-contributions repository in github
to your computer.
## Create a branch
Now create a branch by clicking on the branch icon at the top left:
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-branch1.png" alt="make a branch" />
Name your branch add-your-name. For example, add-crawleya
Click on `Create new branch`
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-branch2.png" alt="name your branch" />
## Make necessary changes and commit those changes
Now open `Contributors.md` file in a text editor and add your name to it, then
save the file.
You can see that there are changes to Contributors.md and they have been added.
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-status.png" alt="check status" />
Now commit those changes:
Write the message "Add `<your-name>` to Contributors list" in the _summary_
field
Replace `<your-name>` with your name
Click on the button that says `Commit to add-your-name`
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-commit1.png" alt="commit your changes" />
At the bottom, you can see that the commit has been created.
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-commit2.png" alt="commit your changes" />
## Push changes to github
Click the `Publish` button on the top right.
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/dt-publish1.png" alt="push your changes" />
## Submit your changes for review
If you go to your repository on github, you'll see `Compare & pull request`
button. click on that button.
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/compare-and-pull.png" alt="create a pull request" />
Now submit the pull request.
<img style="float: right;" src="https://firstcontributions.github.io/assets/gui-tool-tutorials/github-desktop-old-version-tutorial/submit-pull-request.png" alt="submit pull request" />
Soon I'll be merging all your changes into the master branch of this project.
You will get a notification email once the changes have been merged.
## Where to go from here?
Congrats! You just completed the standard _fork -> clone -> edit -> PR_ workflow
that you'll encounter often as a contributor!
Celebrate your contribution and share it with your friends and followers by
going to [web app](https://firstcontributions.github.io#social-share).
You could join our slack team in case you need any help or have any questions.
[Join slack team](https://join.slack.com/t/firstcontributors/shared_invite/enQtMzE1MTYwNzI3ODQ0LTZiMDA2OGI2NTYyNjM1MTFiNTc4YTRhZTg4OWZjMzA0ZWZmY2UxYzVkMzI1ZmVmOWI4ODdkZWQwNTM2NDVmNjY).
### [Additional material](../additional-material/git_workflow_senarios/additional-material.md)
## Tutorials Using Other Tools
[Back to main page](https://github.com/firstcontributions/first-contributions#tutorials-using-other-tools)
| 50.456693 | 324 | 0.76779 | eng_Latn | 0.885575 |
dbdd7881d47f38fda6fd87eac0ca3806cd2b0819 | 6,307 | md | Markdown | articles/virtual-machines/linux/infrastructure-example.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/infrastructure-example.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/infrastructure-example.md | silvercr/azure-docs.es-es | a40a316665a10e4008b60dabd50cbb3ec86e9c1d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Tutorial de la infraestructura de Azure de ejemplo
description: Obtenga información sobre las directrices clave de diseño e implementación para implementar una infraestructura de ejemplo en Azure.
author: cynthn
ms.service: virtual-machines-linux
ms.workload: infrastructure-services
ms.topic: conceptual
ms.date: 12/15/2017
ms.author: cynthn
ms.openlocfilehash: 500de3f89bd041adf0b73e21762495d8c89e19c8
ms.sourcegitcommit: dccb85aed33d9251048024faf7ef23c94d695145
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/28/2020
ms.locfileid: "87286298"
---
# <a name="example-azure-infrastructure-walkthrough-for-linux-vms"></a>Tutorial de la infraestructura de Azure de ejemplo para máquinas virtuales Linux
Este artículo le guía a través de la creación de una infraestructura de aplicación de ejemplo. Detallaremos el diseño de una infraestructura para una tienda en línea sencilla que reúna todas las directrices y decisiones relacionadas con las convenciones de nomenclatura, los conjuntos de disponibilidad, las redes virtuales, los equilibradores de carga y, realmente, la implementación de sus máquinas virtuales (VM).
## <a name="example-workload"></a>Carga de trabajo de ejemplo
Adventure Works Cycles desea crear una aplicación de la tienda en línea en Azure que conste de:
* Dos servidores nginx que ejecuten el cliente front-end en un nivel web
* Dos servidores nginx que procesen datos y pedidos en un nivel de aplicación
* Dos servidores MongoDB parte de un clúster particionado para almacenar pedidos y datos de productos en un nivel de base de datos
* Dos controladores de dominio de Active Directory para los proveedores y las cuentas de cliente en un nivel de autenticación
* Todos los servidores se encuentran en dos subredes:
* una subred front-end para los servidores web
* Una subred back-end para los servidores de aplicaciones, el clúster de MongoDB y los controladores de dominio

El tráfico web seguro entrante debe ser de carga equilibrada entre los servidores web a medida que los clientes examinan la tienda en línea. El tráfico de procesamiento de pedidos en forma de solicitudes HTTP procedente de los servidores web debe equilibrarse entre los servidores de aplicaciones. Además, la infraestructura debe diseñarse para alta disponibilidad.
El diseño resultante incluirá:
* Una suscripción y una cuenta de Azure
* Un único grupo de recursos
* Azure Managed Disks
* Una red virtual con dos subredes
* Conjuntos de disponibilidad para las máquinas virtuales con un rol similar
* Máquinas virtuales
Todo lo anterior seguirá estas convenciones de nomenclatura:
* Adventure Works Cycles usa **[carga de trabajo de TI]-[ubicación]-[recurso de Azure]** como prefijo
* En este ejemplo, "**azos**" (siglas en inglés de "tienda en línea de Azure") es el nombre de la carga de trabajo de TI y "**use**" (siglas en inglés de "este de EE. UU. 2") es la ubicación.
* Las redes virtuales usan AZOS-USE-VN <strong>[número]</strong>
* Los conjuntos de disponibilidad usan azos-use-as- **[rol]**
* Los nombres de máquinas virtuales usan azos-use-vm- **[vmname]**
## <a name="azure-subscriptions-and-accounts"></a>Suscripciones y cuentas de Azure
Adventure Works Cycles usa la suscripción Enterprise, denominada Adventure Works Enterprise Subscription, para proporcionar la facturación de esta carga de trabajo de TI.
## <a name="storage"></a>Storage
Adventure Works Cycles determina que deben usar Azure Managed Disks. Al crear máquinas virtuales, se utilizan ambos niveles de almacenamiento disponible:
* **Almacenamiento estándar** de los servidores web, los servidores de aplicaciones y los controladores de dominio y sus discos de datos.
* **Premium Storage** de los servidores de clústeres particionados MongoDB y sus discos de datos.
## <a name="virtual-network-and-subnets"></a>Red virtual y subredes
Dado que la red virtual no necesita una conectividad continua con la red local de Adventure Work Cycles, la empresa optó por una red virtual solo en la nube.
Creó una red virtual solo en la nube con la siguiente configuración a través del Portal de Azure:
* Nombre: AZOS-USE-VN01
* Ubicación: Este de EE. UU. 2
* Espacio de direcciones de red virtual: 10.0.0.0/8
* Primera subred:
* Nombre: FrontEnd
* Espacio de direcciones: 10.0.1.0/24
* Segunda subred:
* Nombre: BackEnd
* Espacio de direcciones: 10.0.2.0/24
## <a name="availability-sets"></a>Conjuntos de disponibilidad
Para mantener la alta disponibilidad de los cuatro niveles de su tienda en línea, Adventure Works Cycles optó por cuatro conjuntos de disponibilidad:
* **azos-use-as-web** para los servidores web
* **azos-use-as-app** para los servidores de aplicaciones
* **azos-use-as-db** para los servidores del clúster particionado de MongoDB
* **azos-use-as-dc** para los controladores de dominio
## <a name="virtual-machines"></a>Máquinas virtuales
Adventure Works Cycles decidió los siguientes nombres para sus máquinas virtuales de Azure:
* **azos-use-vm-web01** para el primer servidor web
* **azos-use-vm-web02** para el segundo servidor web
* **azos-use-vm-app01** para el primer servidor de aplicaciones
* **azos-use-vm-app02** para el segundo servidor de aplicaciones
* **azos-use-vm-db01** para el primer servidor MongoDB en el clúster
* **azos-use-vm-db02** para el segundo servidor MongoDB en el clúster
* **azos-use-vm-dc01** para el primer controlador de dominio
* **azos-use-vm-dc02** para el segundo controlador de dominio
Aquí está la configuración resultante.

Esta configuración incluye:
* Una red virtual solo en la nube con dos subredes (FrontEnd y BackEnd)
* Azure Managed Disks que usa discos Standard y Premium
* Cuatro conjuntos de disponibilidad, uno para cada nivel de la tienda en línea
* Las máquinas virtuales para los cuatro niveles
* Un conjunto externo de carga equilibrada para el tráfico web basado en HTTPS de Internet a los servidores web
* Un conjunto interno de carga equilibrada para el tráfico web sin cifrar de los servidores web a los servidores de aplicaciones
* Un único grupo de recursos
| 57.862385 | 416 | 0.787696 | spa_Latn | 0.987763 |
dbddfd98fd4e43b0a5b068d243af04a9deb26f2f | 30 | md | Markdown | README.md | GABRIEL08266644/cadastro-de-usuario-react | cd5938acdc3e7edf2355d9398a11047e3c08b337 | [
"MIT"
] | null | null | null | README.md | GABRIEL08266644/cadastro-de-usuario-react | cd5938acdc3e7edf2355d9398a11047e3c08b337 | [
"MIT"
] | null | null | null | README.md | GABRIEL08266644/cadastro-de-usuario-react | cd5938acdc3e7edf2355d9398a11047e3c08b337 | [
"MIT"
] | null | null | null | # cadastro de usuario react
| 10 | 27 | 0.733333 | spa_Latn | 0.757392 |
dbde25ce4aa073432d6f40d40043fcb48c6c637a | 33 | md | Markdown | 5e-shaped/rukovodstvo-mastera-5e-shaped.md | palikhov/palant-roll20-guide | be6a81628fbd81ad7a6e6621f03fd9787b442c5f | [
"MIT"
] | null | null | null | 5e-shaped/rukovodstvo-mastera-5e-shaped.md | palikhov/palant-roll20-guide | be6a81628fbd81ad7a6e6621f03fd9787b442c5f | [
"MIT"
] | null | null | null | 5e-shaped/rukovodstvo-mastera-5e-shaped.md | palikhov/palant-roll20-guide | be6a81628fbd81ad7a6e6621f03fd9787b442c5f | [
"MIT"
] | null | null | null | # Руководство Мастера 5e shaped
| 11 | 31 | 0.787879 | rus_Cyrl | 0.915623 |
dbde2a04cf41aed132751f041801166ed0c35e51 | 43,369 | md | Markdown | neural-networks-3.md | pikma/cs231n.github.io | eadb4919e4e2072addfcc121f4d7959f3ad0889b | [
"MIT"
] | null | null | null | neural-networks-3.md | pikma/cs231n.github.io | eadb4919e4e2072addfcc121f4d7959f3ad0889b | [
"MIT"
] | null | null | null | neural-networks-3.md | pikma/cs231n.github.io | eadb4919e4e2072addfcc121f4d7959f3ad0889b | [
"MIT"
] | 3 | 2017-10-20T00:14:05.000Z | 2019-11-29T16:34:27.000Z | ---
layout: page
permalink: /neural-networks-3/
---
Table of Contents:
- [Gradient checks](#gradcheck)
- [Sanity checks](#sanitycheck)
- [Babysitting the learning process](#baby)
- [Loss function](#loss)
- [Train/val accuracy](#accuracy)
- [Weights:Updates ratio](#ratio)
- [Activation/Gradient distributions per layer](#distr)
- [Visualization](#vis)
- [Parameter updates](#update)
- [First-order (SGD), momentum, Nesterov momentum](#sgd)
- [Annealing the learning rate](#anneal)
- [Second-order methods](#second)
- [Per-parameter adaptive learning rates (Adagrad, RMSProp)](#ada)
- [Hyperparameter Optimization](#hyper)
- [Evaluation](#eval)
- [Model Ensembles](#ensemble)
- [Summary](#summary)
- [Additional References](#add)
## Learning
In the previous sections we've discussed the static parts of a Neural Networks: how we can set up the network connectivity, the data, and the loss function. This section is devoted to the dynamics, or in other words, the process of learning the parameters and finding good hyperparameters.
<a name='gradcheck'></a>
### Gradient Checks
In theory, performing a gradient check is as simple as comparing the analytic gradient to the numerical gradient. In practice, the process is much more involved and error prone. Here are some tips, tricks, and issues to watch out for:
**Use the centered formula**. The formula you may have seen for the finite difference approximation when evaluating the numerical gradient looks as follows:
$$
\frac{df(x)}{dx} = \frac{f(x + h) - f(x)}{h} \hspace{0.1in} \text{(bad, do not use)}
$$
where \\(h\\) is a very small number, in practice approximately 1e-5 or so. In practice, it turns out that it is much better to use the *centered* difference formula of the form:
$$
\frac{df(x)}{dx} = \frac{f(x + h) - f(x - h)}{2h} \hspace{0.1in} \text{(use instead)}
$$
This requires you to evaluate the loss function twice to check every single dimension of the gradient (so it is about 2 times as expensive), but the gradient approximation turns out to be much more precise. To see this, you can use Taylor expansion of \\(f(x+h)\\) and \\(f(x-h)\\) and verify that the first formula has an error on order of \\(O(h)\\), while the second formula only has error terms on order of \\(O(h^2)\\) (i.e. it is a second order approximation).
**Use relative error for the comparison**. What are the details of comparing the numerical gradient \\(f'\_n\\) and analytic gradient \\(f'\_a\\)? That is, how do we know if the two are not compatible? You might be temped to keep track of the difference \\(\mid f'\_a - f'\_n \mid \\) or its square and define the gradient check as failed if that difference is above a threshold. However, this is problematic. For example, consider the case where their difference is 1e-4. This seems like a very appropriate difference if the two gradients are about 1.0, so we'd consider the two gradients to match. But if the gradients were both on order of 1e-5 or lower, then we'd consider 1e-4 to be a huge difference and likely a failure. Hence, it is always more appropriate to consider the *relative error*:
$$
\frac{\mid f'\_a - f'\_n \mid}{\mid f'\_a \mid + \mid f'\_n \mid}
$$
which considers their ratio of the differences to the ratio of the absolute values of both gradients. Notice that normally the relative error formula only includes one of the two terms (either one), but I prefer to add both to make it symmetric and to prevent underflow in the case where one of the two is zero. However, one must explicitly keep track of the case where both are zero (this can often be the case with ReLUs for example) and pass the gradient check in that edge case. In practice:
- relative error > 1e-2 usually means the gradient is probably wrong
- 1e-2 > relative error > 1e-4 should make you feel uncomfortable
- 1e-4 > relative error is usually okay for objectives with kinks. But if there are no kinks (e.g. use of tanh nonlinearities and softmax), then 1e-4 is too high.
- 1e-7 and less you should be happy.
Also keep in mind that the deeper the network, the higher the relative errors will be. So if you are gradient checking the input data for a 10-layer network, a relative error of 1e-2 might be okay because the errors build up on the way. Conversely, an error of 1e-2 for a single differentiable function likely indicates incorrect gradient.
**Kinks in the objective**. One source of inaccuracy to be aware of during gradient checking is the problem of *kinks*. Kinks refer to non-differentiable parts of an objective function, introduced by functions such as ReLU (\\(max(0,x)\\)), or the SVM loss, Maxout neurons, etc. Consider gradient checking the ReLU function at \\(x = -1e6\\). Since \\(x < 0\\), the analytic gradient at this point is exactly zero. However, the numerical gradient would suddenly compute a non-zero gradient because \\(f(x+h)\\) might cross over the kink (e.g. if \\(h > 1e-6\\)) and introduce a non-zero contribution. You might think that this is a pathological case, but in fact this case can be very common. For example, an SVM for CIFAR-10 contains up to 450,000 \\(max(0,x)\\) terms because there are 50,000 examples and each example yields 9 terms to the objective. Moreover, a Neural Network with an SVM classifier will contain many more kinks due to ReLUs.
Note that it is possible to know if a kink was crossed in the evaluation of the loss. This can be done by keeping track of the identities of all "winners" in a function of form \\(max(x,y)\\); That is, was x or y higher during the forward pass. If the identity of at least one winner changes when evaluating \\(f(x+h)\\) and then \\(f(x-h)\\), then a kink was crossed and the numerical gradient will not be exact.
**Be careful with the step size h**. It is not necessarily the case that smaller is better, because when \\(h\\) is much smaller, you may start running into numerical precision problems. Sometimes when the gradient doesn't check, it is possible that you change \\(h\\) to be 1e-4 or 1e-6 and suddenly the gradient will be correct. This [wikipedia article](http://en.wikipedia.org/wiki/Numerical_differentiation) contains a chart that plots the value of **h** on the x-axis and the numerical gradient error on the y-axis.
**Gradcheck during a "characteristic" mode of operation**. It is important to realize that a gradient check is performed at a particular (and usually random), single point in the space of parameters. Even if the gradient check succeeds at that point, it is not immediately certain that the gradient is correctly implemented globally. Additionally, a random initialization might not be the most "characteristic" point in the space of parameters and may in fact introduce pathological situations where the gradient seems to be correctly implemented but isn't. For instance, an SVM with very small weight initialization will assign almost exactly zero scores to all datapoints and the gradients will exhibit a particular pattern across all datapoints. An incorrect implementation of the gradient could still produce this pattern and not generalize to a more characteristic mode of operation where some scores are larger than others. Therefore, to be safe it is best to use a short **burn-in** time during which the network is allowed to learn and perform the gradient check after the loss starts to go down. The danger of performing it at the first iteration is that this could introduce pathological edge cases and mask an incorrect implementation of the gradient.
**Don't let the regularization overwhelm the data**. It is often the case that a loss function is a sum of the data loss and the regularization loss (e.g. L2 penalty on weights). One danger to be aware of is that the regularization loss may overwhelm the data loss, in which case the gradients will be primarily coming from the regularization term (which usually has a much simpler gradient expression). This can mask an incorrect implementation of the data loss gradient. Therefore, it is recommended to turn off regularization and check the data loss alone first, and then the regularization term second and independently. One way to perform the latter is to hack the code to remove the data loss contribution. Another way is to increase the regularization strength so as to ensure that its effect is non-negligible in the gradient check, and that an incorrect implementation would be spotted.
**Remember to turn off dropout/augmentations**. When performing gradient check, remember to turn off any non-deterministic effects in the network, such as dropout, random data augmentations, etc. Otherwise these can clearly introduce huge errors when estimating the numerical gradient. The downside of turning off these effects is that you wouldn't be gradient checking them (e.g. it might be that dropout isn't backpropagated correctly). Therefore, a better solution might be to force a particular random seed before evaluating both \\(f(x+h)\\) and \\(f(x-h)\\), and when evaluating the analytic gradient.
**Check only few dimensions**. In practice the gradients can have sizes of millions of parameters. In these cases it is only practical to check some of the dimensions of the gradient and assume that the others are correct. **Be careful**: One issue to be careful with is to make sure to gradient check a few dimensions for every separate parameter. In some applications, people combine the parameters into a single large parameter vector for convenience. In these cases, for example, the biases could only take up a tiny number of parameters from the whole vector, so it is important to not sample at random but to take this into account and check that all parameters receive the correct gradients.
**Use only few datapoints**. If your gradient check passes for only ~2 or 3 datapoints, then it would almost certainly pass for an entire batch. Using very few datapoints also makes your gradient check faster and more efficient. Additionally, loss functions that contain kinks (e.g. due to use of ReLUs or SVM etc.) will have fewer kinks with fewer datapoints, so it is less likely for you to cross one when you perform the finite difference approximation (see the discussion of kinks above).
<a name='sanitycheck'></a>
### Before learning: sanity checks Tips/Tricks
Here are a few sanity checks you might consider running before you plunge into expensive optimization:
- **Look for correct loss at chance performance.** Make sure you're getting the loss you expect when you initialize with small parameters. It's best to first check the data loss alone (so set the regularization strength to zero). For example, for CIFAR-10 with a Softmax classifier we would expect the initial loss to be 2.302, because we expect a diffuse probability of 0.1 for each class (since there are 10 classes), and the Softmax loss is the negative log probability of the correct class, so: -ln(0.1) = 2.302. For the Weston Watkins SVM, we expect all desired margins to be violated (since all scores are approximately zero), and hence expect a loss of 9 (since the margin is 1 for each wrong class). If you're not seeing these losses there might be an issue with initialization. (A short numerical check of the Softmax case is sketched right after this list.)
- As a second sanity check, increasing the regularization strength should increase the loss
- **Overfit a tiny subset of data**. Lastly and most importantly, before training on the full dataset try to train on a tiny portion (e.g. 20 examples) of your data and make sure you can achieve zero cost. For this experiment it's also best to set regularization to zero, otherwise this can prevent you from getting zero cost. Unless you pass this sanity check with a small dataset it is not worth proceeding to the full dataset.
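As a concrete illustration of the first check, here is a small numerical sanity test for the 10-class Softmax case (a toy sketch; the zero score matrix stands in for what a small-weight initialization would produce):
```python
import numpy as np

num_classes = 10
expected = -np.log(1.0 / num_classes)       # -ln(0.1) = 2.302...
print("expected initial softmax loss: %.3f" % expected)

# with near-zero weights all class scores are ~0 for every example
scores = np.zeros((5, num_classes))         # a tiny batch of 5 examples
y = np.array([0, 3, 1, 7, 2])               # arbitrary correct labels
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
loss = -np.log(probs[np.arange(5), y]).mean()
print("measured initial softmax loss: %.3f" % loss)
```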
<a name='baby'></a>
### Babysitting the learning process
There are multiple useful quantities you should monitor during training of a neural network. These plots are the window into the training process and should be utilized to get intuitions about different hyperparameter settings and how they should be changed for more efficient learning.
The x-axis of the plots below is always in units of epochs, which measure how many times every example has been seen during training in expectation (e.g. one epoch means that every example has been seen once). It is preferable to track epochs rather than iterations since the number of iterations depends on the arbitrary setting of batch size.
<a name='loss'></a>
#### Loss function
The first quantity that is useful to track during training is the loss, as it is evaluated on the individual batches during the forward pass. Below is a cartoon diagram showing the loss over time, and especially what the shape might tell you about the learning rate:
<div class="fig figcenter fighighlight">
<img src="/assets/nn3/learningrates.jpeg" width="49%">
<img src="/assets/nn3/loss.jpeg" width="49%">
<div class="figcaption">
<b>Left:</b> A cartoon depicting the effects of different learning rates. With low learning rates the improvements will be linear. With high learning rates they will start to look more exponential. Higher learning rates will decay the loss faster, but they get stuck at worse values of loss (green line). This is because there is too much "energy" in the optimization and the parameters are bouncing around chaotically, unable to settle in a nice spot in the optimization landscape. <b>Right:</b> An example of a typical loss function over time, while training a small network on CIFAR-10 dataset. This loss function looks reasonable (it might indicate a slightly too small learning rate based on its speed of decay, but it's hard to say), and also indicates that the batch size might be a little too low (since the cost is a little too noisy).
</div>
</div>
The amount of "wiggle" in the loss is related to the batch size. When the batch size is 1, the wiggle will be relatively high. When the batch size is the full dataset, the wiggle will be minimal because every gradient update should be improving the loss function monotonically (unless the learning rate is set too high).
Some people prefer to plot their loss functions in the log domain. Since learning progress generally takes an exponential form, the plot then appears as a slightly more interpretable straight line, rather than a hockey stick. Additionally, if multiple cross-validated models are plotted on the same loss graph, the differences between them become more apparent.
<a name='accuracy'></a>
#### Train/Val accuracy
The second important quantity to track while training a classifier is the validation/training accuracy. This plot can give you valuable insights into the amount of overfitting in your model:
<div class="fig figleft fighighlight">
<img src="/assets/nn3/accuracies.jpeg">
<div class="figcaption">
The gap between the training and validation accuracy indicates the amount of overfitting. Two possible cases are shown in the diagram on the left. The blue validation error curve shows very small validation accuracy compared to the training accuracy, indicating strong overfitting (note, it's possible for the validation accuracy to even start to go down after some point). When you see this in practice you probably want to increase regularization (stronger L2 weight penalty, more dropout, etc.) or collect more data. The other possible case is when the validation accuracy tracks the training accuracy fairly well. This case indicates that your model capacity is not high enough: make the model larger by increasing the number of parameters.
</div>
<div style="clear:both"></div>
</div>
<a name='ratio'></a>
#### Ratio of weights:updates
The last quantity you might want to track is the ratio of the update magnitudes to the value magnitudes. Note: *updates*, not the raw gradients (e.g. in vanilla sgd this would be the gradient multiplied by the learning rate). You might want to evaluate and track this ratio for every set of parameters independently. A rough heuristic is that this ratio should be somewhere around 1e-3. If it is lower than this then the learning rate might be too low. If it is higher then the learning rate is likely too high. Below is an example figure:
<div class="fig figcenter fighighlight">
<img src="/assets/nn3/values.jpeg" width="49%">
<img src="/assets/nn3/updates.jpeg" width="49%">
<div class="figcaption">
Example of a cross-validated 2-layer neural network where the learning rate is set relatively well. We are looking at the matrix of weights W1, and plotting the min and the max across all weights on the left. On the right, we plot the min and max of the updates for the weights, during gradient descent. The updates are getting smaller due to learning rate decay used in this example. Note that the approximate range of the updates is roughly 0.0002 and the range of values is about 0.02. This gives us a ratio of 0.0002 / 0.02 = 1e-2, which is within a relatively healthy limit for a smaller network.
</div>
</div>
Instead of tracking the min or the max, some people prefer to compute and track the norm of the gradients and their updates instead. These metrics are usually correlated and often give approximately the same results.
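Either way, the check is cheap to add to a training loop. A possible sketch for a single parameter array under plain SGD (the sizes and learning rate are made up; with another update rule you would measure the actual update it produces):
```python
import numpy as np

# assume a parameter array W and its gradient dW from backprop
W = 0.01 * np.random.randn(500, 100)
dW = 0.001 * np.random.randn(*W.shape)
learning_rate = 1e-2

param_scale = np.linalg.norm(W.ravel())
update = -learning_rate * dW                  # the actual SGD update
update_scale = np.linalg.norm(update.ravel())
W += update
print(update_scale / param_scale)             # want this ratio to be ~1e-3
```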
<a name='distr'></a>
#### Activation / Gradient distributions per layer
An incorrect initialization can slow down or even completely stall the learning process. Luckily, this issue can be diagnosed relatively easily. The most important statistic to keep track of is the variance of the activations and their gradients on each layer. For example, with an improper initialization the variance of the activations may look like:
```python
# Incorrectly initializing weights with zero mean gaussian with variance 0.01
Layer 0: Variance: 1.005315e+00
Layer 1: Variance: 3.123429e-04
Layer 2: Variance: 1.159213e-06
Layer 3: Variance: 5.467721e-10
Layer 4: Variance: 2.757210e-13
Layer 5: Variance: 3.316570e-16
Layer 6: Variance: 3.123025e-19
Layer 7: Variance: 6.199031e-22
Layer 8: Variance: 6.623673e-25
```
Where we can see that the activations vanish extremely quickly in the higher layers of the network. This will in turn lead to weight gradients near zero, since during backprop they are dependent multiplicatively on the activations. The 8-layer Neural Network above was incorrectly initialized by drawing the weights from a gaussian with a standard deviation of 0.01. By correctly normalizing the weights, the variances look much more uniform:
```python
# Appropriately scaling the initialized weights:
Layer 0: Variance: 1.002860e+00
Layer 1: Variance: 7.015103e-01
Layer 2: Variance: 6.048625e-01
Layer 3: Variance: 8.517882e-01
Layer 4: Variance: 6.362898e-01
Layer 5: Variance: 4.329555e-01
Layer 6: Variance: 3.539950e-01
Layer 7: Variance: 3.809120e-01
Layer 8: Variance: 2.497737e-01
```
In the second case, the layers were initialized so that the variance of each unit through the network is preserved (see the initialization section of Neural Networks for more details). It can also be helpful to keep track of these quantities as the learning progresses. If the variances display vastly different magnitudes it can be a sign of a problem.
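A sketch of how such a table can be generated for a toy fully-connected tanh network (layer widths and the initialization scale below are made up purely for illustration):
```python
import numpy as np

num_layers, width = 8, 500
acts = np.random.randn(1000, width)            # a batch of inputs
print("Layer 0: Variance: %e" % acts.var())
for i in range(1, num_layers + 1):
    W = 0.01 * np.random.randn(width, width)   # deliberately too-small initialization
    acts = np.tanh(acts.dot(W))
    print("Layer %d: Variance: %e" % (i, acts.var()))
```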
<a name='vis'></a>
#### First-layer Visualizations
Lastly, when one is working with image pixels it can be helpful and satisfying to plot the first-layer features visually:
<div class="fig figcenter fighighlight">
<img src="/assets/nn3/weights.jpeg" width="43%" style="margin-right:10px;">
<img src="/assets/nn3/cnnweights.jpg" width="49%">
<div class="figcaption">
Examples of visualized weights for the first layer of a neural network. <b>Left</b>: Noisy features could be a symptom of an unconverged network, an improperly set learning rate, or a very low weight regularization penalty. <b>Right:</b> Nice, smooth, clean and diverse features are a good indication that the training is proceeding well.
</div>
</div>
<a name='update'></a>
### Parameter updates
Once the analytic gradient is computed with backpropagation, the gradients are used to perform a parameter update. There are several approaches for performing the update, which we discuss next.
We note that optimization for deep networks is currently a very active area of research. In this section we highlight some established and common techniques you may see in practice, briefly describe their intuition, but leave a detailed analysis outside of the scope of the class. We provide some further pointers for an interested reader.
<a name='sgd'></a>
#### SGD and bells and whistles
**Vanilla update**. The simplest form of update is to change the parameters along the negative gradient direction (since the gradient indicates the direction of increase, but we usually wish to minimize a loss function). Assuming a vector of parameters `x` and the gradient `dx`, the simplest update has the form:
```python
# Vanilla update
x += - learning_rate * dx
```
where `learning_rate` is a hyperparameter - a fixed constant. When evaluated on the full dataset, and when the learning rate is low enough, this is guaranteed to make non-negative progress on the loss function.
**Momentum update** is another approach that almost always enjoys better convergence rates on deep networks. This update can be motivated from a physical perspective of the optimization problem. In particular, the loss can be interpreted as the height of a hilly terrain (and is therefore also proportional to the potential energy since \\(U = mgh\\) and therefore \\( U \propto h \\) ). Initializing the parameters with random numbers is equivalent to setting a particle with zero initial velocity at some location. The optimization process can then be seen as equivalent to the process of simulating the parameter vector (i.e. a particle) as rolling on the landscape.
Since the force on the particle is related to the gradient of potential energy (i.e. \\(F = - \nabla U \\) ), the **force** felt by the particle is precisely the (negative) **gradient** of the loss function. Moreover, \\(F = ma \\) so the (negative) gradient is in this view proportional to the acceleration of the particle. Note that this is different from the SGD update shown above, where the gradient directly integrates the position. Instead, the physics view suggests an update in which the gradient only directly influences the velocity, which in turn has an effect on the position:
```python
# Momentum update
v = mu * v - learning_rate * dx # integrate velocity
x += v # integrate position
```
Here we see an introduction of a `v` variable that is initialized at zero, and an additional hyperparameter (`mu`). As an unfortunate misnomer, this variable is in optimization referred to as *momentum* (its typical value is about 0.9), but its physical meaning is more consistent with the coefficient of friction. Effectively, this variable damps the velocity and reduces the kinetic energy of the system, or otherwise the particle would never come to a stop at the bottom of a hill. When cross-validated, this parameter is usually set to values such as [0.5, 0.9, 0.95, 0.99]. Similar to annealing schedules for learning rates (discussed later, below), optimization can sometimes benefit a little from momentum schedules, where the momentum is increased in later stages of learning. A typical setting is to start with momentum of about 0.5 and anneal it to 0.99 or so over multiple epochs.
> With Momentum update, the parameter vector will build up velocity in any direction that has consistent gradient.
**Nesterov Momentum** is a slightly different version of the momentum update that has recently been gaining popularity. It enjoys stronger theoretical convergence guarantees for convex functions and also seems to work slightly better in practice than the momentum update described above.
The core idea behind Nesterov momentum is that when the current parameter vector is at some position `x`, then looking at the momentum update above, we know that the momentum term alone (i.e. ignoring the second term with the gradient) is about to nudge the parameter vector by `mu * v`. Therefore, if we are about to compute the gradient, we can treat the future approximate position `x + mu * v` as a "lookahead" - this is a point in the vicinity of where we are soon going to end up. Hence, it makes sense to compute the gradient at `x + mu * v` instead of at the "old/stale" position `x`. That is, we would like to do the following:
```python
x_ahead = x + mu * v
# evaluate dx_ahead (the gradient at x_ahead instead of at x)
v = mu * v - learning_rate * dx_ahead
x += v
```
However, in practice people prefer to express the update to look as similar to vanilla SGD or to the previous momentum update as possible. This is possible to achieve by manipulating the update above with a variable transform `x_ahead = x + mu * v`, and then expressing the update in terms of `x_ahead` instead of `x`. That is, the parameter vector we are actually storing is always the ahead version. The equations in terms of `x_ahead` (but renaming it back to `x`) then become:
```python
v_prev = v # back this up
v = mu * v - learning_rate * dx # velocity update stays the same
x += -mu * v_prev + (1 + mu) * v # position update changes form
```
We recommend this further reading to understand the source of these equations and the mathematical formulation of Nesterov's Accelerated Momentum (NAG):
- [Advances in optimizing Recurrent Networks](http://arxiv.org/pdf/1212.0901v2.pdf) by Yoshua Bengio, Section 3.5.
- [Ilya Sutskever's thesis](http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf) (pdf) contains a longer exposition of the topic in section 7.2
<a name='anneal'></a>
#### Annealing the learning rate
In training deep networks, it is usually helpful to anneal the learning rate over time. A good intuition to have in mind is that with a high learning rate, the system contains too much kinetic energy and the parameter vector bounces around chaotically, unable to settle down into deeper, but narrower parts of the loss function. Knowing when to decay the learning rate can be tricky: Decay it slowly and you'll be wasting computation bouncing around chaotically with little improvement for a long time. But decay it too aggressively and the system will cool too quickly, unable to reach the best position it can. There are three common ways of implementing learning rate decay:
- **Step decay**: Reduce the learning rate by some factor every few epochs. Typical values might be reducing the learning rate by a half every 5 epochs, or by 0.1 every 20 epochs. These numbers depend heavily on the type of problem and the model. One heuristic you may see in practice is to watch the validation error while training with a fixed learning rate, and reduce the learning rate by a constant (e.g. 0.5) whenever the validation error stops improving.
- **Exponential decay.** has the mathematical form \\(\alpha = \alpha\_0 e^{-k t}\\), where \\(\alpha\_0, k\\) are hyperparameters and \\(t\\) is the iteration number (but you can also use units of epochs).
- **1/t decay** has the mathematical form \\(\alpha = \alpha\_0 / (1 + k t )\\) where \\(\alpha\_0, k\\) are hyperparameters and \\(t\\) is the iteration number.
In practice, we find that the step decay is slightly preferable because the hyperparameters it involves (the fraction of decay and the step timings in units of epochs) are more interpretable than the hyperparameter \\(k\\). Lastly, if you can afford the computational budget, err on the side of slower decay and train for a longer time.
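For concreteness, the three schedules might be coded as follows (all hyperparameter values are placeholders):
```python
import math

lr0, k = 1e-2, 0.1    # initial learning rate (alpha_0) and decay hyperparameter

def step_decay(epoch, drop=0.5, epochs_per_drop=5.0):
    return lr0 * (drop ** math.floor(epoch / epochs_per_drop))

def exponential_decay(t):
    return lr0 * math.exp(-k * t)

def one_over_t_decay(t):
    return lr0 / (1.0 + k * t)

for epoch in (0, 5, 10, 20):
    print(epoch, step_decay(epoch), exponential_decay(epoch), one_over_t_decay(epoch))
```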
<a name='second'></a>
#### Second order methods
A second, popular group of methods for optimization in context of deep learning is based on [Newton's method](http://en.wikipedia.org/wiki/Newton%27s_method_in_optimization), which iterates the following update:
$$
x \leftarrow x - [H f(x)]^{-1} \nabla f(x)
$$
Here, \\(H f(x)\\) is the [Hessian matrix](http://en.wikipedia.org/wiki/Hessian_matrix), which is a square matrix of second-order partial derivatives of the function. The term \\(\nabla f(x)\\) is the gradient vector, as seen in Gradient Descent. Intuitively, the Hessian describes the local curvature of the loss function, which allows us to perform a more efficient update. In particular, multiplying by the inverse Hessian leads the optimization to take more aggressive steps in directions of shallow curvature and shorter steps in directions of steep curvature. Note, crucially, the absence of any learning rate hyperparameters in the update formula, which proponents of these methods cite as a large advantage over first-order methods.
However, the update above is impractical for most deep learning applications because computing (and inverting) the Hessian in its explicit form is a very costly process in both space and time. For instance, a Neural Network with one million parameters would have a Hessian matrix of size [1,000,000 x 1,000,000], occupying approximately 3725 gigabytes of RAM. Hence, a large variety of *quasi-Newton* methods have been developed that seek to approximate the inverse Hessian. Among these, the most popular is [L-BFGS](http://en.wikipedia.org/wiki/Limited-memory_BFGS), which uses the information in the gradients over time to form the approximation implicitly (i.e. the full matrix is never computed).
However, even after we eliminate the memory concerns, a large downside of a naive application of L-BFGS is that it must be computed over the entire training set, which could contain millions of examples. Unlike mini-batch SGD, getting L-BFGS to work on mini-batches is more tricky and an active area of research.
**In practice**, it is currently not common to see L-BFGS or similar second-order methods applied to large-scale Deep Learning and Convolutional Neural Networks. Instead, SGD variants based on (Nesterov's) momentum are more standard because they are simpler and scale more easily.
Additional references:
- [On Optimization Methods for Deep Learning](http://ai.stanford.edu/~ang/papers/icml11-OptimizationForDeepLearning.pdf) from Le et al. is a paper from 2011 comparing SGD vs. L-BFGS. Some of its conclusions have since been challenged.
- [Large Scale Distributed Deep Networks](http://research.google.com/archive/large_deep_networks_nips2012.html) is a paper from the Google Brain team, comparing L-BFGS and SGD variants in large-scale distributed optimization.
- [SFO](http://arxiv.org/abs/1311.2115) algorithm strives to combine the advantages of SGD with advantages of L-BFGS.
<a name='ada'></a>
#### Per-parameter adaptive learning rate methods
All previous approaches we've discussed so far manipulated the learning rate globally and equally for all parameters. Tuning the learning rates is an expensive process, so much work has gone into devising methods that can adaptively tune the learning rates, and even do so per parameter. Many of these methods may still require other hyperparameter settings, but the argument is that they are well-behaved for a broader range of hyperparameter values than the raw learning rate. In this section we highlight some common adaptive methods you may encounter in practice:
**Adagrad** is an adaptive learning rate method originally proposed by [Duchi et al.](http://jmlr.org/papers/v12/duchi11a.html).
```python
# Assume the gradient dx and parameter vector x
cache += dx**2
x += - learning_rate * dx / np.sqrt(cache + 1e-8)
```
Notice that the variable `cache` has size equal to the size of the gradient, and keeps track of per-parameter sum of squared gradients. This is then used to normalize the parameter update step, element-wise. Notice that the weights that receive high gradients will have their effective learning rate reduced, while weights that receive small or infrequent updates will have their effective learning rate increased. Amusingly, the square root operation turns out to be very important and without it the algorithm performs much worse. The smoothing term `1e-8` avoids division by zero. A downside of Adagrad is that in case of Deep Learning, the monotonic learning rate usually proves too aggressive and stops learning too early.
**RMSprop.** RMSprop is a very effective, but currently unpublished adaptive learning rate method. Amusingly, everyone who uses this method in their work currently cites [slide 29 of Lecture 6](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) of Geoff Hinton's Coursera class. The RMSProp update adjusts the Adagrad method in a very simple way in an attempt to reduce its aggressive, monotonically decreasing learning rate. In particular, it uses a moving average of squared gradients instead, giving:
```python
cache = decay_rate * cache + (1 - decay_rate) * dx**2
x += - learning_rate * dx / np.sqrt(cache + 1e-8)
```
Here, `decay_rate` is a hyperparameter and typical values are [0.9, 0.99, 0.999]. Notice that the `x+=` update is identical to Adagrad, but the `cache` variable is "leaky". Hence, RMSProp still modulates the learning rate of each weight based on the magnitudes of its gradients, which has a beneficial equalizing effect, but unlike Adagrad the updates do not get monotonically smaller.
Additional References:
- [Adadelta](http://arxiv.org/abs/1212.5701) by Matthew Zeiler is another relatively common adaptive learning rate method
- [Adam: A Method for Stochastic Optimization](http://arxiv.org/abs/1412.6980)
- [Unit Tests for Stochastic Optimization](http://arxiv.org/abs/1312.6055) proposes a series of tests as a standardized benchmark for stochastic optimization.
<div class="fig figcenter fighighlight">
<img src="/assets/nn3/opt2.gif" width="49%" style="margin-right:10px;">
<img src="/assets/nn3/opt1.gif" width="49%">
<div class="figcaption">
Animations that may help your intuitions about the learning process dynamics. <b>Left:</b> Contours of a loss surface and time evolution of different optimization algorithms. Notice the "overshooting" behavior of momentum-based methods, which make the optimization look like a ball rolling down the hill. <b>Right:</b> A visualization of a saddle point in the optimization landscape, where the curvature along different dimensions has different signs (one dimension curves up and another down). Notice that SGD has a very hard time breaking symmetry and gets stuck on the top. Conversely, algorithms such as RMSprop will see very low gradients in the saddle direction. Due to the denominator term in the RMSprop update, this will increase the effective learning rate along this direction, helping RMSProp proceed. Images credit: <a href="https://twitter.com/alecrad">Alec Radford</a>.
</div>
</div>
<a name='hyper'></a>
### Hyperparameter optimization
As we've seen, training Neural Networks can involve many hyperparameter settings. The most common hyperparameters in context of Neural Networks include:
- the initial learning rate
- learning rate decay schedule (such as the decay constant)
- regularization strength (L2 penalty, dropout strength)
But as we saw, there are many more relatively less sensitive hyperparameters, for example in per-parameter adaptive learning methods, the setting of momentum and its schedule, etc. In this section we describe some additional tips and tricks for performing the hyperparameter search:
**Implementation**. Larger Neural Networks typically require a long time to train, so performing hyperparameter search can take many days/weeks. It is important to keep this in mind since it influences the design of your code base. One particular design is to have a **worker** that continuously samples random hyperparameters and performs the optimization. During the training, the worker will keep track of the validation performance after every epoch, and writes a model checkpoint (together with miscellaneous training statistics such as the loss over time) to a file, preferably on a shared file system. It is useful to include the validation performance directly in the filename, so that it is simple to inspect and sort the progress. Then there is a second program which we will call a **master**, which launches or kills workers across a computing cluster, and may additionally inspect the checkpoints written by workers and plot their training statistics, etc.
**Prefer one validation fold to cross-validation**. In most cases a single validation set of respectable size substantially simplifies the code base, without the need for cross-validation with multiple folds. You'll hear people say they "cross-validated" a parameter, but many times it is assumed that they still only used a single validation set.
**Hyperparameter ranges**. Search for hyperparameters on log scale. For example, a typical sampling of the learning rate would look as follows: `learning_rate = 10 ** uniform(-6, 1)`. That is, we generate a random exponent from a uniform distribution and then use it as a power of 10. The same strategy should be used for the regularization strength. As mentioned, for some parameters such as momentum, it is more common to search over a fixed set of values such as [0.5, 0.9, 0.95, 0.99].
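In code, one round of such sampling could look like this (the ranges are examples, not recommendations):
```python
from random import uniform, choice

def sample_hyperparams():
    return {
        'learning_rate': 10 ** uniform(-6, 1),            # sampled on a log scale
        'reg_strength':  10 ** uniform(-5, 2),            # sampled on a log scale
        'momentum':      choice([0.5, 0.9, 0.95, 0.99]),  # fixed set of values
    }

for _ in range(3):
    print(sample_hyperparams())
```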
**Prefer random search to grid search**. As argued by Bergstra and Bengio in [Random Search for Hyper-Parameter Optimization](http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf), "randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid". As it turns out, this is also usually easier to implement.
<div class="fig figcenter fighighlight">
<img src="/assets/nn3/gridsearchbad.jpeg" width="50%">
<div class="figcaption">
Core illustration from <a href="http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf">Random Search for Hyper-Parameter Optimization</a> by Bergstra and Bengio. It is very often the case that some of the hyperparameters matter much more than others (e.g. top hyperparam vs. left one in this figure). Performing random search rather than grid search allows you to much more precisely discover good values for the important ones.
</div>
</div>
**Careful with best values on border**. Sometimes it can happen that you're searching for a hyperparameter (e.g. learning rate) in a bad range. For example, suppose we use `learning_rate = 10 ** uniform(-6, 1)`. Once we receive the results, it is important to double check that the final learning rate is not at the edge of this interval, or otherwise you may be missing a more optimal hyperparameter setting beyond the interval.
**Stage your search from coarse to fine**. In practice, it can be helpful to first search in coarse ranges (e.g. 10 ** [-6, 1]), and then depending on where the best results are turning up, narrow the range. Also, it can be helpful to perform the initial coarse search while only training for 1 epoch or even less, because many hyperparameter settings can lead the model to not learn at all, or immediately explode with infinite cost. The second stage could then perform a narrower search with 5 epochs, and the last stage could perform a detailed search in the final range for many more epochs (for example).
**Bayesian Hyperparameter Optimization** is a whole area of research devoted to coming up with algorithms that try to more efficiently navigate the space of hyperparameters. The core idea is to appropriately balance the exploration - exploitation trade-off when querying the performance at different hyperparameters. Multiple libraries have been developed based on these models as well, among some of the better known ones are [Spearmint](https://github.com/JasperSnoek/spearmint), [SMAC](http://www.cs.ubc.ca/labs/beta/Projects/SMAC/), and [Hyperopt](http://jaberg.github.io/hyperopt/). However, in practical settings with ConvNets it is still relatively difficult to beat random search in carefully-chosen intervals. See some additional from-the-trenches discussion [here](http://nlpers.blogspot.com/2014/10/hyperparameter-search-bayesian.html).
<a name='eval'></a>
## Evaluation
<a name='ensemble'></a>
### Model Ensembles
In practice, one reliable approach to improving the performance of Neural Networks by a few percent is to train multiple independent models, and at test time average their predictions. As the number of models in the ensemble increases, the performance typically monotonically improves (though with diminishing returns). Moreover, the improvements are more dramatic with higher model variety in the ensemble. There are a few approaches to forming an ensemble:
- **Same model, different initializations**. Use cross-validation to determine the best hyperparameters, then train multiple models with the best set of hyperparameters but with different random initialization. The danger with this approach is that the variety is only due to initialization.
- **Top models discovered during cross-validation**. Use cross-validation to determine the best hyperparameters, then pick the top few (e.g. 10) models to form the ensemble. This improves the variety of the ensemble but has the danger of including suboptimal models. In practice, this can be easier to perform since it doesn't require additional retraining of models after cross-validation
- **Different checkpoints of a single model**. If training is very expensive, some people have had limited success in taking different checkpoints of a single network over time (for example after every epoch) and using those to form an ensemble. Clearly, this suffers from some lack of variety, but can still work reasonably well in practice. The advantage of this approach is that it is very cheap.
One disadvantage of model ensembles is that they take longer to evaluate on a test example. An interested reader may find the recent work from Geoff Hinton on ["Dark Knowledge"](https://www.youtube.com/watch?v=EK61htlw8hY) inspiring, where the idea is to "distill" a good ensemble back to a single model by incorporating the ensemble log likelihoods into a modified objective.
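A minimal sketch of the test-time averaging itself, assuming each trained model exposes a callable that returns class probabilities for a batch (the toy models below are random stand-ins):
```python
import numpy as np

def ensemble_predict(models, X):
    # average the predicted class probabilities of all models, then take the argmax
    probs = np.mean([m(X) for m in models], axis=0)
    return np.argmax(probs, axis=1)

def make_toy_model(seed, num_classes=10):
    rng = np.random.RandomState(seed)
    def model(X):
        scores = rng.randn(X.shape[0], num_classes)
        e = np.exp(scores - scores.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)   # softmax probabilities
    return model

models = [make_toy_model(s) for s in range(5)]
X = np.zeros((4, 3072))                           # e.g. four CIFAR-10 images
print(ensemble_predict(models, X))
```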
<a name='summary'></a>
## Summary
To train a Neural Network:
- Gradient check your implementation with a small batch of data and be aware of the pitfalls.
- As a sanity check, make sure your initial loss is reasonable, and that you can achieve 100% training accuracy on a very small portion of the data
- During training, monitor the loss, the training/validation accuracy, and if you're feeling fancier, the magnitude of gradient updates in relation to their values (it should be ~1e-3), and when dealing with ConvNets, the first-layer weights.
- The most common update is to use SGD+Momentum, but a good recommendation is to use RMSProp per-parameter adaptive learning rate.
- Decay your learning rate over the period of the training. For example, halve the learning rate after a fixed number of epochs, or whenever the validation accuracy tops off.
- Search for good hyperparameters with random search (not grid search). Stage your search from coarse (wide hyperparameter ranges, training only for 1-5 epochs), to fine (narrower ranges, training for many more epochs)
- Form model ensembles for extra performance
<a name='add'></a>
## Additional References
- [SGD](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) tips and tricks from Leon Bottou
- [Efficient BackProp](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) (pdf) from Yann LeCun
- [Practical Recommendations for Gradient-Based Training of Deep
Architectures](http://arxiv.org/pdf/1206.5533v2.pdf) from Yoshua Bengio
Source file: docs/spec/statement/switch_case.md (repo: htsign/kinx, license: MIT)
# Switch-Case statement
## Overview
The `switch-case` statement is a multi-way conditional jump.
In the condition of a `case`, you can put any expression.
If two case conditions evaluate to the same value, a direct (literal) value is checked first.
Otherwise, the condition written first is checked first.
You need `break` to exit a `case` clause, because Kinx `switch-case` **falls through** by default.
```javascript
switch (x) {
case 1:
/* ... */
break;
case 2:
/* ... */
break;
}
```
### Default case
It is not necessary to put the `default` case at the bottom.
```javascript
switch (x) {
case 1:
/* ... */
break;
default: /* fallthrough */
case 2:
/* ... */
break;
}
```
### Case label
Any value can be used as a `case` label.
```javascript
switch (x) {
case 1: // Number is available
break;
case a: // variable is available
break;
case x[1]+1: // expression is available
break;
case f(): // a function call is available, but the function will be called every time this comparison is performed.
break;
}
```
## Examples
### Example 1. Normal case
#### Code
```javascript
function test(a) {
switch (a) {
case 1: return a * 2;
case 2: return a * 4;
case 3: return a * 8;
case 4: return a * 16;
default:
}
return -1;
}
0.upto(8, &(n) => System.println("%d => %3d" % n % test(n)));
```
#### Result
```
0 => -1
1 => 2
2 => 8
3 => 24
4 => 64
5 => -1
6 => -1
7 => -1
8 => -1
```
### Example 2. With do-while
#### Code
```javascript
function test(count) {
switch (count) {
default: do { System.println("%d" % count); count++;
case 0: System.println("%d" % count); count++;
case 1: System.println("%d" % count); count++;
case 2: System.println("%d" % count); count++;
} while (count < 8);
}
}
test(2);
```
#### Result
```
2
3
4
5
6
7
8
9
10
```
### Example 3. Non-integer value
#### Code
```javascript
function f() {
return "a4";
}
function test(a) {
var x = [1, 1.2, "a2", "a"];
switch (a) {
case "a1": return 2;
case x[-2]: return 4;
case x[3]+"3": return 8;
case f(): return 16;
default:
}
return -1;
}
0.upto(8, &(n) => System.println("%d => %3d" % n % test("a" + n)));
```
#### Result
```
0 => -1
1 => 2
2 => 4
3 => 8
4 => 16
5 => -1
6 => -1
7 => -1
8 => -1
```
### Example 4. Complex switch-case pattern
#### Code
```javascript
var array = [1,2,3,4,5,6,7,8,9,10];
function switchTest(n) {
switch (n) {
case 1: System.println(n); break;
case 2: System.println(n); break;
case 3: System.println(n); break;
case 4: System.println(n); break;
case 5: System.println(n); break;
case 6: System.println(n); break;
case 7: System.println(n); break;
case 8: System.println(n); break;
default:
System.print("default - ");
// fallthrough
case 100: System.println(n); break;
break;
case array.length():
System.println("array-length:%{n}");
break;
case "aaa":
System.println(n);
break;
case "bbb":
System.println(n);
break;
}
}
0.upto(100, function(i) {
if (12 < i && i <= 97) {
return; // omitted.
}
System.print("%{i} => ");
switchTest(i);
});
```
#### Result
```
0 => default - 0
1 => 1
2 => 2
3 => 3
4 => 4
5 => 5
6 => 6
7 => 7
8 => 8
9 => default - 9
10 => array-length:10
11 => default - 11
12 => default - 12
98 => default - 98
99 => default - 99
100 => 100
```
Source file: repository/Seaside-Core.package/WADocumentHandler.class/README.md (repo: marianopeck/Seaside-1, license: MIT)
WADocumentHandler handles requests for images, text documents and binary files (byte arrays). This class is not normally used directly. A number of WA*Tag classes implement document:mimeType:fileName: which use WADocumentHandler. Given a document, #document:mimeType:fileName: creates a WADocumentHandler for the document, registers the handler with a Registry, and adds the correct url in the tag for the document.
Instance Variables:
	document	<WAMimeDocument>	MIMEDocument object representing this document and mimeType, generates stream used to write document for the response.
Source file: README.md (repo: waihankan/AStar-Algorithm, license: MIT)
# AStar Algorithm Visualization
<!--
*** Thanks for checking out the Best-README-Template. If you have a suggestion
*** that would make this better, please fork the repo and create a pull request
*** or simply open an issue with the tag "enhancement".
*** Thanks again! Now go create something AMAZING! :D
-->
<!-- PROJECT SHIELDS -->
<!--
*** I'm using markdown "reference style" links for readability.
*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).
*** See the bottom of this document for the declaration of the reference variables
*** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use.
*** https://www.markdownguide.org/basic-syntax/#reference-style-links
-->
<img align="center" src="image/logo.png" width="1500" height="250">
[![MIT License][license-shield]](https://opensource.org/licenses/MIT)
[![Contributors][contributors-shield]][contributors-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![Forks][forks-shield]][forks-url]
[![LinkedIn][linkedin-shield]](https://www.linkedin.com/in/wai-han-692305174/)
<!-- TABLE OF CONTENTS -->
<details open="open">
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
</ul>
</li>
<li>
<a href="#usage">Usage</a>
<ul>
<li><a href="#demo">Demo</a></li>
</ul>
</li>
<li><a href="#contributing">Contributing</a></li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgements">Acknowledgements</a></li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About The Project
A* (pronounced "A-star") is a graph traversal and path search algorithm, which is often used in many fields of computer science due to its completeness, optimality, and optimal efficiency. One major practical drawback is its space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to attain better performance, as well as memory-bounded approaches; however, A* is still the best solution in many cases.
#### Equation
```f(n) = g(n) + h(n) ``` where
> n = the next node on the path
> g(n) = the cost from the start node to the node `n`
> h(n) = heuristic function that estimates the cost of the cheapest path from `n` to the goal
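For illustration only, here is a minimal grid-based sketch of the idea — not the implementation in this repository — using a Manhattan-distance heuristic for `h(n)` and unit step costs for `g(n)`:
```python
import heapq

def astar(grid, start, goal):
    # grid: 2D list where 0 = free cell and 1 = wall; start/goal are (row, col) tuples
    def h(n):
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])   # Manhattan-distance heuristic

    open_heap = [(h(start), 0, start, None)]    # entries are (f, g, node, parent)
    came_from, g_score = {}, {start: 0}
    while open_heap:
        f, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:                   # already expanded
            continue
        came_from[node] = parent
        if node == goal:                        # walk parents back to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_score.get(nxt, float('inf'))):
                g_score[nxt] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, node))
    return None                                 # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```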
### Built With
* [Python](https://www.python.org)
* [Pygame](https://www.pygame.org)
* [Easygui](https://pypi.org/project/easygui)
<!-- GETTING STARTED -->
## Getting Started
This is how you may set up your project locally.
To get a local copy up and running follow these simple example steps.
### Prerequisites
```pip install pygame```
```pip install numpy```
```pip install easygui```
<!-- USAGE EXAMPLES -->
## Usage
As the current source code mainly focuses on visualization, it is a great resource for people who are learning path-finding algorithms such as [A*](https://en.wikipedia.org/wiki/A*_search_algorithm), [Dijkstra's](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm), etc. **Visualizing** is one of the best techniques for learning: you can play around with the code and understand the core idea of what a *path finding algorithm* is.
Unfortunately, as of right now, the only available method is the A* algorithm.
#### Demo
**Here is a demo picture of the software.**
<img align="center" src="demo/demo1.png" width = 370 hspace=50> <img align="center" src="demo/demo2.png" width=370>
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
### Note
*If you are considering contributing to the project, I'd like to request that you implement Dijkstra's algorithm first.*
<!-- LICENSE -->
## License
Distributed under the MIT License. See `LICENSE` for more information.
<!-- CONTACT -->
## Contact
Email -> [email protected]
<!-- ACKNOWLEDGEMENTS -->
## Acknowledgements
* [Astar Algortithm](https://en.wikipedia.org/wiki/A*_search_algorithm)
* [Dijkstra's Algortihm](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm)
* [Tech with Tim](https://www.youtube.com/channel/UC4JX40jDee_tINbkjycV4Sg)
* [GitHub Emoji Cheat Sheet](https://www.webpagefx.com/tools/emoji-cheat-sheet)
* [Choose an Open Source License](https://choosealicense.com)
* [README Template](https://github.com/othneildrew/Best-README-Template/blob/master/README.md)
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/waihankan/AStar-Algorithm.svg?style=for-the-badge
[contributors-url]: https://github.com/waihankan/AStar-Algorithm/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/waihankan/AStar-Algorithm.svg?style=for-the-badge
[forks-url]: https://github.com/waihankan/AStar-Algorithm/network/members
[stars-shield]: https://img.shields.io/github/stars/waihankan/AStar-Algorithm.svg?style=for-the-badge
[stars-url]: https://github.com/waihankan/AStar-Algorithm/stargazers
[issues-shield]: https://img.shields.io/github/issues/waihankan/AStar-Algorithm.svg?style=for-the-badge
[issues-url]: https://github.com/waihankan/AStar-Algorithm/issues
[license-shield]: https://img.shields.io/github/license/waihankan/AStar-Algorithm.svg?style=for-the-badge
[license-url]: https://github.com/waihankan/AStar-Algorithm/blob/master/LICENSE.txt
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555
[linkedin-url]: https://www.linkedin.com/in/wai-han-692305174/
Source file: _posts/2009-07-08-189.md (repo: elton/elton.github.com, license: Apache-2.0)
---
layout: post
title: 'How to Use Gentoo emerge'
date: 2009-07-08
wordpress_id: 189
permalink: /blogs/189
comments: true
categories:
- Linux
tags:
- emerge
- gentoo
---
You can use emerge to manage the software in Portage, and even the entire system.
When we talk about (software) packages, we usually mean the package names that the Portage tree provides to Gentoo users. The Portage tree is a collection of ebuild files, which contain all the information the Portage management tools need to maintain software (install, search, query, ...); by default it is located in the /usr/portage directory.
Whenever you ask Portage to perform an operation on the packages in your system, it uses the ebuild files on your system as its basis. It is therefore best to update the ebuild files in your system's Portage tree regularly, so that Portage knows which new software has been added, which packages have received security updates, and so on.
### Updating the Portage tree
<pre class="prettyprint linenums"># emerge --sync</pre>
### Updating the system
<pre class="prettyprint linenums"># emerge --update --deep --newuse world</pre>
--update (-u) means update
--deep (-D) means also update dependencies
Portage checks whether newer versions are available for the packages you have installed, but it only checks the packages you explicitly installed (that is, the packages listed in /var/lib/portage/world); it does not fully check whether the packages they depend on also need updating. If you want to update every package on the system, add the --deep flag.
--newuse: use this flag if the value of the USE variable has changed, to tell the system that the USE flags have changed
### Removing dependencies
If you only removed a package and did not remove the packages it depends on, use the emerge --depclean command to remove the now-unneeded dependencies.
<pre class="prettyprint linenums">
# emerge --update --deep --newuse world
# emerge --depclean
# revdep-rebuild
</pre>
Before running emerge --depclean, make sure the system has been fully updated.
The revdep-rebuild tool is provided by the gentoolkit package; don't forget to emerge it first:
<pre class="prettyprint linenums">
# emerge gentoolkit
</pre>
### Complete steps for upgrading the system
<pre class="prettyprint linenums">
1. emerge --sync
2. emerge -vauD --newuse world
3. emerge --depclean
4. revdep-rebuild
</pre>
### Searching for software by name
<pre class="prettyprint linenums">
# emerge --search softname
</pre>
### Searching for software by description
<pre class="prettyprint linenums">
# emerge --searchdesc pdf
</pre>
### Installing software
<pre class="prettyprint linenums">
# emerge gnumeric
</pre>
### Pretend-installing software
<pre class="prettyprint linenums">
# emerge --pretend gnumeric
or
# emerge -p gnumeric
</pre>
### Removing software
<pre class="prettyprint linenums">
# emerge --unmerge gnumeric
</pre>
### Excluding certain packages from a full world update
Add the names of the packages you do not want to update to /etc/portage/package.mask.
For example, if you do not want to upgrade to the latest nginx, add:
<pre class="prettyprint linenums">
www-servers/nginx
</pre>
### Setting USE flags for specific packages
If you want to set USE flags for specific packages rather than changing the global USE flags, edit the /etc/portage/package.use file, for example:
<pre class="prettyprint linenums">
>=www-servers/nginx-0.7.6 random-index addition fastcgi flv imap pcre perl ssl status sub webdav zlib
>=dev-lang/ruby-1.8.7
dev-db/mysql innodb berkdb
dev-lang/php fpm berkdb bzip2 cli crypt gdbm iconv ipv6 ncurses nls pcre readline reflection session spl ssl unicode zlib curl exif gd json mysql pdo threads xml zip
</pre>
### Using the latest testing packages
By default Gentoo uses stable versions of packages. If you want to use the latest versions, just add a ~ keyword after the corresponding package entries in the /etc/portage/package.keywords file, for example:
<pre class="prettyprint linenums">
www-servers/lighttpd ~amd64
www-servers/nginx ~amd64
dev-lang/php ~amd64
dev-db/mysql ~amd64
sys-devel/gcc ~amd64
dev-lang/ruby ~amd64
</pre>
~amd64 means you are updating the 64-bit version; if you are on x86, use ~x86 instead.
Source file: GIS_VIZ/attributes_intro.md (repo: geoinformatik/webbook, license: Apache-2.0)
# Introduction to Working with Non-Spatial Attributes
Topics covered are:
* Basic understanding of the relational data model and SQL.
* Creating data subsets
* Filters
* Selection sets
* Creating new attributes
* Calculating new attribute values
* Understanding the use of the selection set in QGIS
* Grouping and calculating summary statistics
* Joining data based on attribute values
Source file: _posts/2016-02-05-Symphony-Bridal-Gowns-Style-S3138.md (repo: celermarryious/celermarryious.github.io, license: MIT)
---
layout: post
date: 2016-02-05
title: "Symphony Bridal Gowns Style S3138"
category: Symphony Bridal
tags: [Symphony Bridal]
---
### Symphony Bridal Gowns Style S3138
Just **$409.99**
###
<table><tr><td>BRANDS</td><td>Symphony Bridal</td></tr></table>
<a href="https://www.readybrides.com/en/symphony-bridal/70174-symphony-bridal-gowns-style-s3138.html"><img src="//img.readybrides.com//symphony-bridal-gowns-style-s3138.jpg" alt="Symphony Bridal Gowns Style S3138" style="width:100%;" /></a>
<!-- break -->
Buy it: [https://www.readybrides.com/en/symphony-bridal/70174-symphony-bridal-gowns-style-s3138.html](https://www.readybrides.com/en/symphony-bridal/70174-symphony-bridal-gowns-style-s3138.html)
Source file: astropath/hpfs/README.md (repo: AstroPathJHU/AstroPathPipeline, license: Apache-2.0)
# 5. HPF Processing (HPFs)
## 5.1. Description
This section of the documentation and code picks up after the slides have been added to the *Specimen_Table_N.xlsx*, a description of that file is located [here](../scans/docs/scanning/SpecimenTable.md). After the ```transferdaemon``` module, all code assumes slides and file structure format in [4.6.](../scans/docs/DirectoryOrganization.md) have been adhered to, namely that the ```<ScanNN>``` subfolder is now in an additional *im3* subfolder under each ```<SlideID>```. Additional definitions on naming should be reviewed before continuing, found [here](../scans/docs/Definitions.md#43-definitions).
In this section the ```<Dpath>\<Dname>``` processing folders are initialized, a slide list is created, and slides are assigned new ```SlideID```s ([5.5.](astroidgen#55-astroid-generation-v0000001 "Title")). Next, images are renamed, transferred, and backed up ([5.6.](transferdaemon#56-transfer-daemon "Title")). Images are then corrected for image warping and flatfield effects ([5.8.](flatw#58-flatw "Title")). After this, cells are segmented and classified in each image using the multipass phenotype method in inForm. Each of these steps is run by custom code that processes on a perpetual loop with limited user interaction. An important step of this process is to create and check the quality control images generated on the cell classification output (that process is described in [5.10.6.](inform_processing/docs/EvaluatinginFormPhenotypeQCOutputfortheAstroPathPipeline.md#5106-evaluating-inform-phenotype-qc-output-for-the-astropath-pipeline)). After images from all slides in a cohort pass the cell classification quality control, cell segmentation maps are rebuilt to remove cells not defined by the final merged cell segmentation ([described here](segmaps#511-seg-maps "Title")). Finally, slide macro regional annotations are applied to the images using the HALO software and distributed to the slides using a batch script ([5.12.4.](transferanno/README.md#5124-transfer-annotations-to-the-bki-server)).
A final checklist of things to verify and complete before loading into the database is added [here](cleanup#513-clean-up). This section is a ```cleanup``` module which checks for any missing files, does the remaining file conversions, and provides final upkeep/processing instructions.
## 5.2. Contents
- [5.1. Description](#51-description "Title")
- [5.2. Contents](#52-contents "Title")
- [5.3. Instructions](docs/Instructions.md)
- [5.3.1. Contents](docs/Instructions.md#531-contents)
- [5.3.2. Typical User Instructions](docs/TypicalUserInstructions.md#532-typical-user-instructions)
- [5.3.3 Launching Code Instructions](docs/LaunchingCodeInstructions.md#533-launching-code-instructions)
- [5.4. Workflow Overview](docs/WorkflowOverview.md#54-workflow-overview)
- [5.5. AstroIDGen](astroidgen#55-astroid-generation-v0000001 "Title")
- [5.5.1. Description](astroidgen#551-description "Title")
- [5.5.2. Important Definitions](astroidgen#552-important-definitions "Title")
- [5.5.3. Instructions](astroidgen#553-instructions "Title")
- [5.5.4. Workflow](astroidgen#554-workflow "Title")
- [5.6. Transfer Daemon](transferdaemon#56-transfer-daemon "Title")
- [5.6.1. Description](transferdaemon#561-description "Title")
- [5.6.2. Important Definitions](transferdaemon#562-important-definitions "Title")
- [5.6.3. Instructions](transferdaemon#563-instructions "Title")
- [5.6.4. Workflow](transferdaemon#564-workflow "Title")
- [5.6.4.1. Initial Transfer](transferdaemon#5641-initial-transfer "Title")
- [5.6.4.2. MD5 Check](transferdaemon#5642-md5-check "Title")
- [5.6.4.3. Compression Into Backup](transferdaemon#5643-compression-into-backup "Title")
- [5.6.4.4. Source File Handling](transferdaemon#5644-source-file-handling "Title")
- [5.6.5. Notes](transferdaemon#565-notes "Title")
- [5.7. Meanimages](meanimages#57-meanimages "Title")
- [5.7.1. Description](meanimages#571-description "Title")
- [5.7.2. Important Definitions](meanimages#572-important-definitions "Title")
- [5.7.3. Instructions](meanimages#573-instructions "Title")
- [5.7.4. Workflow](meanimages#574-workflow "Title")
- [5.7.4.1. Checking for Tasks](meanimages#5741-checking-for-tasks "Title")
- [5.7.4.2. Shred Im3s](meanimages#5742-shred-im3s "Title")
- [5.7.4.3. raw2mean](meanimages#5743-raw2mean "Title")
- [5.8. Flatw](flatw#58-flatw "Title")
- [5.8.1. Description](flatw#581-description "Title")
- [5.8.2. Contents](flatw#582-contents "Title")
- [5.8.3. Important Definitions](flatw/docs/ImportantDefinitions.md#583-important-definitions)
- [5.8.3.1. Flatw Expected Directory Structure](flatw/docs/ImportantDefinitions.md#5831-flatw-expected-directory-structure)
- [5.8.3.2. Output Formatting](flatw/docs/ImportantDefinitions.md#5832-output-formatting)
- [5.8.4. Workflow Instructions](flatw/docs/WorkflowInstructions.md#584-workflow-instructions)
- [5.8.4.1. flatw queue](flatw/docs/WorkflowInstructions.md#5841-flatw_queue)
- [5.8.4.2. flatw worker](flatw/docs/WorkflowInstructions.md#5842-flatw_worker)
- [5.8.5. Additional Tools](flatw/docs/AdditionalTools.md#585-additional-tools)
- [5.8.5.1. Im3Tools](flatw/docs/AdditionalTools.md#5851-im3tools)
- [5.8.5.2. ConvertIm3](flatw/docs/AdditionalTools.md#5852-convertim3)
- [5.8.5.3. ConvertIm3Path & ConvertIm3Cohort](flatw/docs/AdditionalTools.md#5853-convertim3path--convertim3cohort)
- [5.8.6. Overview Workflow of Im3Tools](flatw/docs/OverviewWorkflowofIm3Tools.md#586-overview-workflow-of-im3tools)
- [5.9. Mergeloop](mergeloop#59-mergeloop "Title")
- [5.9.1. Description](mergeloop#591-description)
- [5.9.2. Important Definitions](mergeloop#592-important-definitions)
- [5.9.3. Instructions](mergeloop#593-instructions)
- [5.9.4. Workflow](mergeloop#594-workflow)
- [5.9.5. Merge a Single Sample (MaSS)](mergeloop/MaSS#merge-a-single-sample-mass)
- [5.9.6. Create Image QA QC utility](mergeloop/MaSS#section-8-create-image-qa-qc-utility)
- [5.10. Inform Processing](inform_processing#510-inform-processing "Title")
- [5.10.1. Description](inform_processing/README.md#5101-description)
- [5.10.2. Contents](inform_processing/README.md#5102-contents)
- [5.10.3. inForm® Multipass Phenotyping](inform_processing/docs/inFormMultipassPhenotype.md#5103-inform-multipass-phenotype "Title")
- [5.10.3.1. Description](inform_processing/docs/inFormMultipassPhenotype.md#51031-description "Title")
- [5.10.3.2. Instructions](inform_processing/docs/inFormMultipassPhenotype.md#51032-instructions "Title")
- [5.10.3.2.1. Getting Started](inform_processing/docs/inFormMultipassPhenotype.md#510321-getting-started "Title")
- [5.10.3.2.2. Core Icons to Remember](inform_processing/docs/inFormMultipassPhenotype.md#510322-core-icons-to-remember "Title")
- [5.10.3.2.3. Segment Tissue](inform_processing/docs/inFormMultipassPhenotype.md#510323-segment-tissue "Title")
- [5.10.3.2.4. Adaptive Cell Segmentation](inform_processing/docs/inFormMultipassPhenotype.md#510324-adaptive-cell-segmentation "Title")
- [5.10.3.2.5. Phenotyping](inform_processing/docs/inFormMultipassPhenotype.md#510325-phenotyping "Title")
- [5.10.3.2.6. Export](inform_processing/docs/inFormMultipassPhenotype.md#510326-export "Title")
- [5.10.4. Saving Project for the inForm® JHU Processing Farm](inform_processing/docs/SavingProjectsfortheinFormJHUProcessingFarm.md#5104-saving-projects-for-the-inform-jhu-processing-farm "Title")
- [5.10.4.1. Description](inform_processing/docs/SavingProjectsfortheinFormJHUProcessingFarm.md#51041-description "Title")
- [5.10.4.2. Instructions](inform_processing/docs/SavingProjectsfortheinFormJHUProcessingFarm.md#51042-instructions "Title")
- [5.10.5. Adding Slides to the inForm Queue](inform_processing/docs/AddingSlidestotheinFormQueue.md#5105-adding-slides-to-the-inform-queue)
- [5.10.6. Evaluating inForm® Phenotype QC Output for the *AstroPath Pipeline*](inform_processing/docs/EvaluatinginFormPhenotypeQCOutputfortheAstroPathPipeline.md#5106-evaluating-inform-phenotype-qc-output-for-the-astropath-pipeline)
- [5.10.7. Processing inForm® Tasks](inform_processing/docs/ProcessinginFormTasks.md#5107-proccessing-inform-tasks)
- [5.10.7.1. Description](inform_processing/docs/ProcessinginFormTasks.md#51071-description)
- [5.10.7.2. Important Definitions](inform_processing/docs/ProcessinginFormTasks.md#51072-important-definitions)
- [5.10.7.3. Instructions](inform_processing/docs/ProcessinginFormTasks.md#51073-instructions)
- [5.10.7.3.1. Setting up the Virtual Machines for inForm®](inform_processing/docs/ProcessinginFormTasks.md#510731-setting-up-the-virtual-machines-for-inform)
- [5.10.7.3.2. Running the ```inform queue``` Module](inform_processing/docs/ProcessinginFormTasks.md#510732-running-the-inform-queue-module)
- [5.10.7.3.3. Running the ```inform worker``` Module](inform_processing/docs/ProcessinginFormTasks.md#510733-running-the-inform-worker-module)
- [5.10.7.4. Workflow](inform_processing/docs/ProcessinginFormTasks.md#51074-workflow)
- [5.11. Segmaps](segmaps#511-seg-maps "Title")
- [5.11.1. Description](segmaps#5111-description)
- [5.11.2. Instructions](segmaps#5112-instructions)
- [5.11.3. Segmentation Map Structure Definition](segmaps#5113-segmenation-map-structure-definition)
- [5.12. Create & Transfer Annotations](transferanno#512-transfer-annotations "Title")
- [5.12.1. Description](transferanno#5121-description)
  - [5.12.2. Create HALO Annotations for the *AstroPath Pipeline*](transferanno/README.md#5122-creating-halo-annotations-for-the-astropath-pipeline)
- [5.12.2.1. Prepare the HALO Project](transferanno/README.md#51221-prepare-the-halo-project)
- [5.12.2.2. Create and Edit Annotation Layers](transferanno/README.md#51222-create-and-edit-annotation-layers)
- [5.12.2.3. Annotation Layer Data Dictionary](transferanno/README.md#51223-annotation-layer-data-dictionary)
- [5.12.3. Exporting Annotations](transferanno/README.md#5123-exporting-annotations)
- [5.12.4. Transfer Annotations to the BKI Server](transferanno/README.md#5124-transfer-annotations-to-the-bki-server)
- [5.12.4.1. Description](transferanno/README.md#51241-description)
- [5.12.4.2. Instructions](transferanno/README.md#51242-instructions)
- [5.13. Cleanup](cleanup#513-clean-up)
- [5.13.1. Description](cleanup#5131-description)
- [5.13.2. Instructions](cleanup#5132-instructions)
- [5.13.3. Next Steps](cleanup#5133-next-steps)
| 102.057692 | 1,411 | 0.761259 | eng_Latn | 0.345946 |
dbe14af19eb4be6fbc2e117cc550f339704dceba | 1,714 | md | Markdown | _posts/2021-02-16-working-with-combine-publishers-in-swiftui-cb85257807e3.md | top-poster/top-poster.github.io | 562714aa274d0650482ba842e080dd6b1a3d968a | [
"RSA-MD"
] | null | null | null | _posts/2021-02-16-working-with-combine-publishers-in-swiftui-cb85257807e3.md | top-poster/top-poster.github.io | 562714aa274d0650482ba842e080dd6b1a3d968a | [
"RSA-MD"
] | null | null | null | _posts/2021-02-16-working-with-combine-publishers-in-swiftui-cb85257807e3.md | top-poster/top-poster.github.io | 562714aa274d0650482ba842e080dd6b1a3d968a | [
"RSA-MD"
] | null | null | null | ---
layout: post
title: "Working with Combine Publishers in SwiftUI"
author: "Logger"
thumbnail: "undefined"
tags:
---

Publishers are the lifeblood of the Combine framework, and there are more than a dozen of them. In this article, I set out to change the background color of a "Hello World" greeting using the main publishers available.
I am not claiming this is the right or the best way to do the job. It is more of an exercise in understanding how these publishers work and how you can put them to work for you.
# Preamble
One of the trickiest things about working with publishers, especially with SwiftUI as the endpoint, is that a stream can terminate. Conversely, a stream that cannot terminate is sometimes just as much of a challenge. Read on.
# The Task
I set myself the task of changing the background color of the classic `Hello World` greeting. I wanted the background to turn red, blue, green, and finally orange over a short period of no more than 12 seconds. I used four different publishers, some of which are far better suited to the task than others, and I found myself leaning on SwiftUI itself for much of it (a fifth element and publisher of sorts).
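To make the goal concrete, here is a minimal sketch of the kind of SwiftUI view all of the publishers below can feed into. The `HelloWorldView`, `colorPublisher`, and `backgroundColor` names are placeholders rather than code from the original project:
```swift
import SwiftUI
import Combine

// Sketch only: property and type names are placeholders, not the post's code.
struct HelloWorldView: View {
    @State private var backgroundColor = Color.yellow
    // Any publisher of colors that never fails can be plugged in here.
    let colorPublisher: AnyPublisher<Color, Never>

    var body: some View {
        Text("Hello World")
            .padding()
            .background(backgroundColor)
            .onReceive(colorPublisher) { color in
                self.backgroundColor = color
            }
    }
}
```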
# Future
The basic syntax of the Future publisher looks like this. As a publisher you effectively preload it with a value; this is what Combine calls a promise. The code shown below delivers the color red to the publisher with a two-second delay.
```swift
let future1 = Future<Color, Never> { promise1 in
  DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    promise1(.success(Color.red))
  }
}
```
Of the four publishers, Future was the hardest one to meet the challenge with. The real catch was the fact that the `Future` resets every time the view reloads, so I ended up introducing a Boolean to hide the publisher after I had used it.
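The exact workaround isn't shown here, but a sketch of that kind of Boolean guard might look like this; it reuses `future1` from the snippet above, and `futureHasFired` is a made-up name:
```swift
import SwiftUI
import Combine

// Hypothetical guard around the Future from the previous snippet.
struct FutureBackedView: View {
    @State private var backgroundColor = Color.yellow
    @State private var futureHasFired = false

    var body: some View {
        Text("Hello World")
            .padding()
            .background(backgroundColor)
            .onReceive(future1) { color in
                // Apply the value only once, even if the view reloads
                // and the subscription is recreated.
                if !self.futureHasFired {
                    self.backgroundColor = color
                    self.futureHasFired = true
                }
            }
    }
}
```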
# Just
The basic syntax of the Just publisher looks like this. The main problem I ran into with it as a publisher is the fact that it terminates as soon as it completes, so I had to create four additional (subject) publishers. As with Future, it does not look like a good choice for solving this challenge.
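The original snippet for this section isn't reproduced here, so the following is only a rough sketch of the idea: one `Just` per color, relayed through a subject so the view has a single stream to watch. The names and the three-second spacing are assumptions, and where the original used four separate subjects this sketch makes do with one:
```swift
import SwiftUI
import Combine

let colorSubject = PassthroughSubject<Color, Never>()
var cancellables = Set<AnyCancellable>()
let colors: [Color] = [.red, .blue, .green, .orange]

for (index, color) in colors.enumerated() {
    // Just(color) emits once and finishes immediately, so each value is
    // delayed and forwarded to the long-lived subject instead.
    Just(color)
        .delay(for: .seconds(Double(index) * 3), scheduler: DispatchQueue.main)
        .sink { colorSubject.send($0) }
        .store(in: &cancellables)
}

// In the view: .onReceive(colorSubject) { self.backgroundColor = $0 }
```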
# Sequence
The basic syntax of the Sequence publisher looks like this. As a publisher you preload it with values again, this time focused on a sequence. Having fought with the first two, I found this one far easier.
The biggest challenge in getting it to work was simply trying to slow it down. I solved that by once again using a second (subject) publisher that calls a dispatch queue with a calculated delay. I tried to delay the sequence directly, but without success.
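Again, this is a rough sketch rather than the original code. The colors are preloaded into a Sequence publisher via `colors.publisher`, and the emissions are paced here by zipping with a timer, which is a different workaround from the subject-plus-dispatch-queue relay described above:
```swift
import SwiftUI
import Combine

let colors: [Color] = [.red, .blue, .green, .orange]

// colors.publisher is a Publishers.Sequence; on its own it would emit all four
// values at once, so zipping with a timer releases one color per tick instead.
let pacedColors = colors.publisher
    .zip(Timer.publish(every: 3, on: .main, in: .common).autoconnect())
    .map { color, _ in color }
    .eraseToAnyPublisher()

// In the view: .onReceive(pacedColors) { self.backgroundColor = $0 }
```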
# Record
The basic syntax of the Record publisher looks like this. Just like the two publishers mentioned above, you effectively preload it with values.
This was the easiest solution to find working syntax for. Using a Sequence actually worked well too, but Record is a perfect fit for this challenge.
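A rough sketch of the Record version, with placeholder names and pacing: the publisher is preloaded with both its values and its completion, then paced the same way as the Sequence example:
```swift
import SwiftUI
import Combine

// Record is preloaded with both its output values and its completion event.
let recordedColors = Record<Color, Never>(
    output: [.red, .blue, .green, .orange],
    completion: .finished
)

let pacedRecording = recordedColors
    .zip(Timer.publish(every: 3, on: .main, in: .common).autoconnect())
    .map { color, _ in color }
    .eraseToAnyPublisher()

// In the view: .onReceive(pacedRecording) { self.backgroundColor = $0 }
```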
I hope this article proves as useful in your work as it has in mine. Keep calm and keep coding. | 31.740741 | 205 | 0.679113 | kor_Hang | 1.00001
dbe187fa6e8f0f455b9541f4502cb0207e834cce | 117 | md | Markdown | README.md | TheZavitaev/YTD | f59b509d30d48f540d9f53fe6e58160796e15d22 | [
"BSD-3-Clause"
] | null | null | null | README.md | TheZavitaev/YTD | f59b509d30d48f540d9f53fe6e58160796e15d22 | [
"BSD-3-Clause"
] | null | null | null | README.md | TheZavitaev/YTD | f59b509d30d48f540d9f53fe6e58160796e15d22 | [
"BSD-3-Clause"
] | null | null | null | # YTD
A script for downloading video/audio from YouTube
In short: run `pip install -r requirements.txt`, then run the script
| 29.25 | 68 | 0.803419 | rus_Cyrl | 0.931322 |
dbe2743231a8e2371e590f1d6a01357131ac17ac | 77 | md | Markdown | tag/birthday.md | andrewsio/andrewsio-blog.github.io | 008e349a3aeb8ba3e162b5cf72e15093ee63eea2 | [
"MIT"
] | 2 | 2021-10-02T23:32:37.000Z | 2022-01-03T21:57:43.000Z | tag/birthday.md | andrewsio/andrewsio-blog.github.io | 008e349a3aeb8ba3e162b5cf72e15093ee63eea2 | [
"MIT"
] | null | null | null | tag/birthday.md | andrewsio/andrewsio-blog.github.io | 008e349a3aeb8ba3e162b5cf72e15093ee63eea2 | [
"MIT"
] | 1 | 2020-01-02T19:15:55.000Z | 2020-01-02T19:15:55.000Z | ---
layout: tagpage
title: "Tag: birthday"
tag: birthday
robots: noindex
---
| 11 | 22 | 0.688312 | eng_Latn | 0.189208 |
dbe2ecd1f3571576f67ab149345fda6f6e8f4c82 | 335 | md | Markdown | iambismark.net/content/post/2013/08/1375746952.md | bismark/iambismark.net | 1ef89663cfcf4682fbfd60781bb143a7fd276312 | [
"MIT"
] | null | null | null | iambismark.net/content/post/2013/08/1375746952.md | bismark/iambismark.net | 1ef89663cfcf4682fbfd60781bb143a7fd276312 | [
"MIT"
] | null | null | null | iambismark.net/content/post/2013/08/1375746952.md | bismark/iambismark.net | 1ef89663cfcf4682fbfd60781bb143a7fd276312 | [
"MIT"
] | null | null | null | ---
alturls:
- https://twitter.com/bismark/status/364535256080060416
- https://www.facebook.com/17803937/posts/10101008782648459
archive:
- 2013-08
date: '2013-08-05T23:55:52+00:00'
slug: '1375746952'
---
My latest commit message:
> Reverts AutoLayout parts of 8a3a02f7
>
> (╯°□°)╯︵ ┻━┻
So... I'm sure you can guess how that went.
| 17.631579 | 59 | 0.701493 | kor_Hang | 0.186055 |
dbe308a66b67d23ef1ad8aa4c96c2bbc5371ad91 | 697 | md | Markdown | api/Outlook.MailItem.SentOnBehalfOfName.md | CoolDev1/VBA-Docs | 4d5dde1cd9371be038c3e67f27364d1f6e40a063 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Outlook.MailItem.SentOnBehalfOfName.md | CoolDev1/VBA-Docs | 4d5dde1cd9371be038c3e67f27364d1f6e40a063 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Outlook.MailItem.SentOnBehalfOfName.md | CoolDev1/VBA-Docs | 4d5dde1cd9371be038c3e67f27364d1f6e40a063 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: MailItem.SentOnBehalfOfName Property (Outlook)
keywords: vbaol11.chm1360
f1_keywords:
- vbaol11.chm1360
ms.prod: outlook
api_name:
- Outlook.MailItem.SentOnBehalfOfName
ms.assetid: 1f58a4b4-abf8-3031-4be1-1538d2d81f5c
ms.date: 06/08/2017
---
# MailItem.SentOnBehalfOfName Property (Outlook)
Returns a **String** indicating the display name for the intended sender of the mail message. Read/write.
## Syntax
_expression_. `SentOnBehalfOfName`
_expression_ A variable that represents a [MailItem](./Outlook.MailItem.md) object.
## Remarks
This property corresponds to the MAPI property **PidTagSentRepresentingName**.
## See also
[MailItem Object](Outlook.MailItem.md)
| 19.361111 | 106 | 0.774749 | yue_Hant | 0.386954 |
dbe3102161083a3c3dacce516beeec4cacfac6f6 | 2,122 | md | Markdown | _posts/blog/2018-04-29-ive-finished-third-year.md | kevi8991/kevi8991.github.io | 1fbe5785b9b3281fb815d276f64051a420db2026 | [
"MIT"
] | null | null | null | _posts/blog/2018-04-29-ive-finished-third-year.md | kevi8991/kevi8991.github.io | 1fbe5785b9b3281fb815d276f64051a420db2026 | [
"MIT"
] | null | null | null | _posts/blog/2018-04-29-ive-finished-third-year.md | kevi8991/kevi8991.github.io | 1fbe5785b9b3281fb815d276f64051a420db2026 | [
"MIT"
] | null | null | null | ---
title: I've Finished Third Year!
categories: blog
layout: post
date: 2018-04-29T01:11
modified: 2018-05-12T00:52
tags: [school, life]
image:
feature: blog/000099200037-2.jpg
---
So I'm finally done third year!!! For those who don't know me outside of Vic or musical theatre, I'm studying piano at the Faculty of Music with a German studies major on the side. Needless to say, this school year was A Time.
I started off the year thinking that I wouldn't do as much theatrical work as I'd done in previous years, but I ended up working on more shows this past year than the previous two years combined--definitely something I'll have to avoid doing next year! I also traveled a bunch, made a ton of new friends, practiced my photography, started this blog, and proposed a show; I'd say it was a fairly successful year.
Some things to work on probably include my time management--as usual, I can't say no to things, and that often is a detriment to my sleep schedule and general well-being. I also need to start eating more regularly, because apparently pooping once a week is abnormal...?
Summer goals include fixing up my laptop, traveling a bunch, cycling or working out on a daily basis, and learning a bunch of repertoire for next year! I'm really excited about my upcoming trip to Maryland--visiting an internet friend--and getting started on the Brahms Piano Trio No.3, which I didn't have time to see through to completion this year.
I'm also hoping to find time to start writing in that audio section of the blog! Having been a part of pretty much every musical on campus this year, I feel like I've got a lot of material to work from. Maybe I'll start a photography section as well? Who knows? I don't think I have enough all-photo posts just yet, but if that changes anytime soon I might add an extra category.
Think that's about it for now! Might update this as things progress over the summer.
_Fri 2018-05-11 12:05_
Made it to Maryland in one piece despite missing my flight!
_Sat 2018-05-12 11:43_
I might have pinkeye? I've never had pinkeye before, but I guess it's pretty common after flying.
| 73.172414 | 411 | 0.77427 | eng_Latn | 0.999821 |
dbe3967a6fd3aeda1f4733420f7a29aa2a3575c2 | 821 | md | Markdown | _projects/2021-02-2021-05-MALib.md | SkyRiver-2000/SkyRiver-2000.github.io | 22ad3beec4a53543f5ba049daa85575132d682e0 | [
"MIT"
] | null | null | null | _projects/2021-02-2021-05-MALib.md | SkyRiver-2000/SkyRiver-2000.github.io | 22ad3beec4a53543f5ba049daa85575132d682e0 | [
"MIT"
] | null | null | null | _projects/2021-02-2021-05-MALib.md | SkyRiver-2000/SkyRiver-2000.github.io | 22ad3beec4a53543f5ba049daa85575132d682e0 | [
"MIT"
] | 1 | 2021-06-07T03:19:49.000Z | 2021-06-07T03:19:49.000Z | ---
title: "MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning"
collection: projects
permalink: /project/2021-02-2021-05-MALib
excerpt: 'MALib is a parallel framework of population-based learning nested with (multi-agent) reinforcement learning (RL) methods. I mainly focus on the implementation of some imitation learning metrics and algorithms, which have not been released yet (might come up soon).'
period: 2021.02-2021.05
---
MALib is a parallel framework of population-based learning nested with (multi-agent) reinforcement learning (RL) methods. I mainly focus on the implementation of some imitation learning metrics and algorithms, which have not been released yet (might come up soon).
**Codes:** [click here](https://github.com/sjtu-marl/malib) to visit our GitHub repository
| 74.636364 | 275 | 0.794153 | eng_Latn | 0.995374 |
dbe3c3a5fb2f7f0f1600a7767e78b8ab7ab7e50e | 2,447 | markdown | Markdown | docs/_posts/2011-1-1-basic-usage.markdown | andriusch/blueprints_boy | 0385110fe7ce4d72afaa89f3ca55ac666262bf8d | [
"MIT"
] | 1 | 2015-03-09T08:38:52.000Z | 2015-03-09T08:38:52.000Z | docs/_posts/2011-1-1-basic-usage.markdown | andriusch/blueprints_boy | 0385110fe7ce4d72afaa89f3ca55ac666262bf8d | [
"MIT"
] | 1 | 2015-02-12T10:26:31.000Z | 2015-12-18T12:04:52.000Z | docs/_posts/2011-1-1-basic-usage.markdown | andriusch/blueprints_boy | 0385110fe7ce4d72afaa89f3ca55ac666262bf8d | [
"MIT"
] | 1 | 2015-03-22T00:54:46.000Z | 2015-03-22T00:54:46.000Z | ---
layout: application
title: Basic usage
---
# Setup
The easiest way to install this gem is to add these lines to your Gemfile:
{% highlight ruby %}
gem 'database_cleaner-active_record'
gem 'blueprints_boy'
{% endhighlight %}
If you're not using Bundler, you can install it from the command line:
{% highlight ruby %}
gem install blueprints_boy
{% endhighlight %}
Blueprints Boy is activated by calling `BlueprintsBoy.enable` at the bottom of your spec_helper/test_helper:
{% highlight ruby %}
# spec/spec_helper.rb
BlueprintsBoy.enable
{% endhighlight %}
# Blueprints file
The blueprints file contains all of your blueprint definitions. It can be either a single file or a whole folder
if you have many blueprints.
By default, blueprints are searched for in these files, in this particular order, relative to the application root (which is Rails.root if it's defined, or the current folder otherwise):
{% include file_patterns.markdown %}
You can set the root option to override the application root, and the filename option to pass a custom filename pattern. For more information see [configuration](/blueprints_boy/configuration).
## Basic definitions
Basic definitions of blueprints look like this:
{% highlight ruby %}
blueprint :apple do
  Fruit.new 'apple'
end

blueprint :orange do
  Fruit.new 'orange'
end

depends_on(:apple, :orange).blueprint :fruitbowl do
  FruitBowl.new apple, orange
end
{% endhighlight %}
Note that in the :fruitbowl blueprint we define dependencies on other blueprints, meaning that once we build
:fruitbowl, then :apple, :orange, and all of their dependencies will also be built.
## Usage
You can use your defined blueprints in specs (tests) like this:
{% highlight ruby %}
describe Fruit, "apple" do
  before do
    build :apple
  end

  it "is an apple" do
    expect(apple.species).to eq('apple')
  end
end

describe FruitBowl, "with an apple and an orange" do
  before do
    build :fruitbowl
  end

  it "has 2 fruits" do
    expect(fruitbowl.fruits).to eq([apple, orange])
  end
end
{% endhighlight %}
Whatever your blueprint block returns can be reached using a method with the same name as the blueprint.
All blueprints are built only once, so:
{% highlight ruby %}
build(:apple).equal? build(:apple) #=> true
{% endhighlight %}
## Advanced Usage
It's just Ruby, right? So go nuts:
{% highlight ruby %}
1.upto(9) do |i|
  blueprint("user_#{i}") do
    User.create! :name => "user#{i}"
  end
end
{% endhighlight %}
| 23.084906 | 177 | 0.742542 | eng_Latn | 0.990831 |
dbe3e39f7e580da2839c5019896d4b8410be481d | 1,084 | md | Markdown | README.md | mrutkows/incubator-openwhisk-test | b128cfaa0834164d2b7758c63d380a918c93bf89 | [
"Apache-2.0"
] | null | null | null | README.md | mrutkows/incubator-openwhisk-test | b128cfaa0834164d2b7758c63d380a918c93bf89 | [
"Apache-2.0"
] | null | null | null | README.md | mrutkows/incubator-openwhisk-test | b128cfaa0834164d2b7758c63d380a918c93bf89 | [
"Apache-2.0"
] | null | null | null | # incubator-openwhisk-test
[](http://www.apache.org/licenses/LICENSE-2.0)
[](https://travis-ci.org/apache/incubator-openwhisk-test)
This repository hosts packages used in the integration testing of OpenWhisk tooling.
Currently, it is being used to test GitHub (event) integration for the following sub-projects:
- [https://github.com/apache/incubator-openwhisk-wskdeploy](https://github.com/apache/incubator-openwhisk-wskdeploy).
Used to test GitHub as an Event Provider using the OpenWhisk GitHub webhook:
* [Test -> Integration -> Dependency](https://github.com/apache/incubator-openwhisk-wskdeploy/tree/master/tests/src/integration/dependency)
* [Test -> Use Cases -> GitHub](https://github.com/apache/incubator-openwhisk-wskdeploy/tree/master/tests/usecases/github)
* [Test -> Use Cases -> Dependency](https://github.com/apache/incubator-openwhisk-wskdeploy/tree/master/tests/usecases/dependency)
| 77.428571 | 145 | 0.777675 | eng_Latn | 0.351188 |