id | title | fromurl | date | tags | permalink | content | fromurl_status | status_msg | from_content |
---|---|---|---|---|---|---|---|---|---|
11,159 | 精通 Linux 的 ls 命令 | https://opensource.com/article/19/7/master-ls-command | 2019-07-29T10:58:33 | [
"ls"
] | https://linux.cn/article-11159-1.html |
>
> Linux 的 ls 命令拥有数量惊人的选项,可以提供有关文件的重要信息。
>
>
>

`ls` 命令可以列出一个 [POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) 系统上的文件。这是一个简单的命令,但它经常被低估,不是它能做什么(因为它确实只做了一件事),而是你该如何优化对它的使用。
要知道在最重要的 10 个终端命令中,这个简单的 `ls` 命令可以排进前三,因为 `ls` 不会*只是*列出文件,它还会告诉你有关它们的重要信息。它会告诉你诸如拥有文件或目录的人、每个文件修改的时间、甚至是什么类型的文件。它的附带功能能让你了解你在哪里、附近有些什么,以及你可以用它们做什么。
如果你对 `ls` 的体验仅限于你的发行版在 `.bashrc` 中的别名,那么你可能错失了它。
### GNU 还是 BSD?
在了解 `ls` 的隐藏能力之前,你必须确定你正在运行哪个 `ls` 命令。有两个最流行的版本:包含在 GNU coreutils 包中的 GNU 版本,以及 BSD 版本。如果你正在运行 Linux,那么你很可能已经安装了 GNU 版本的 `ls`(LCTT 译注:几乎可以完全确定)。如果你正在运行 BSD 或 MacOS,那么你有的是 BSD 版本。本文会介绍它们的不同之处。
你可以使用 `--version` 选项找出你计算机上的版本:
```
$ ls --version
```
如果它返回有关 GNU coreutils 的信息,那么你拥有的是 GNU 版本。如果它返回一个错误,你可能正在运行的是 BSD 版本(运行 `man ls | head` 以确定)。
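(LCTT 译注:在 GNU 版本上,输出的第一行大致如下,具体版本号因系统而异:)

```
$ ls --version
ls (GNU coreutils) 8.30
```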
你还应该调查你的发行版可能具有哪些预设选项。终端命令的自定义通常放在 `$HOME/.bashrc` 或 `$HOME/.bash_aliases` 或 `$HOME/.profile` 中,它们是通过将 `ls` 别名化为更复杂的 `ls` 命令来完成的。例如:
```
alias ls='ls --color'
```
发行版提供的预设非常有用,但有了它们,确实很难分辨出哪些是 `ls` 本身的特性,哪些是附加选项提供的。你要是想运行 `ls` 命令本身而不是它的别名,可以用反斜杠“转义”该命令:
```
$ \ls
```
### 分类
单独运行 `ls` 会以适合你终端的列数列出文件:
```
$ ls ~/example
bunko jdk-10.0.2
chapterize otf2ttf.ff
despacer overtar.sh
estimate.sh pandoc-2.7.1
fop-2.3 safe_yaml
games tt
```
这是有用的信息,但所有这些文件看起来基本相同,没有方便的图标来快速表示出哪个是目录、文本文件或图像等等。
使用 `-F`(或 GNU 上的长选项 `--classify`)以在每个条目之后显示标识文件类型的指示符:
```
$ ls -F ~/example
bunko jdk-10.0.2/
chapterize* otf2ttf.ff*
despacer* overtar.sh*
estimate.sh pandoc@
fop-2.3/ pandoc-2.7.1/
games/ tt*
```
使用此选项,终端中列出的项目使用简写符号来按文件类型分类:
* 斜杠(`/`)表示目录(或“文件夹”)。
* 星号(`*`)表示可执行文件。这包括二进制文件(编译代码)以及脚本(具有[可执行权限](https://opensource.com/article/19/6/understanding-linux-permissions)的文本文件)。
* 符号(`@`)表示符号链接(或“别名”)。
* 等号(`=`)表示套接字。
* 在 BSD 上,百分号(`%`)表示<ruby> 涂改 <rt> whiteout </rt></ruby>(某些文件系统上的文件删除方法)。
* 在 GNU 上,尖括号(`>`)表示<ruby> 门 <rt> door </rt></ruby>([Illumos](https://www.illumos.org/) 和 Solaris上的进程间通信)。
* 竖线(`|`)表示 [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) 管道。

这个选项的一个更简单的版本是 `-p`,它只区分文件和目录。
(LCTT 译注:在支持彩色的终端上,使用 `--color` 选项可以以不同的颜色来区分文件类型,但要注意如果将输出导入到管道中,则颜色消失。)
### 长列表
从 `ls` 获取“长列表”的做法是如此常见,以至于许多发行版将 `ll` 别名为 `ls -l`。长列表提供了许多重要的文件属性,例如权限、拥有每个文件的用户、文件所属的组、文件大小(以字节为单位)以及文件上次更改的日期:
```
$ ls -l
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth users 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20697793 Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth users 6210 May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
[...]
```
如果你不想以字节为单位,请添加 `-h` 标志(或 GNU 中的 `--human`)以将文件大小转换为更加人性化的表示方法:
```
$ ls -l --human
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth seth 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20M Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth seth 6.1K May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
```
要看到更少的信息,你可以带有 `-o` 选项只显示所有者的列,或带有 `-g` 选项只显示所属组的列:
```
$ ls -o
-rwxrwx---. 1 seth 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth 20M Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth 6.1K May 22 10:22 geteltorito
-rwxrwx---. 1 seth 177 Nov 12 2018 html4mutt.sh
```
也可以将这两个选项组合使用,这样所有者和属组两列都不会显示。
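(LCTT 译注:例如,`ls -og` 会同时省略所有者和属组两列。以上面的列表为例,输出大致如下:)

```
$ ls -og
-rwxrwx---. 1 455 Mar  2  2017 estimate.sh
-rwxrwxr-x. 1 662 Apr 29 22:27 factorial
```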
### 时间和日期格式
`ls` 的长列表格式通常如下所示:
```
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth users 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20697793 Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth users 6210 May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
```
月份的名字不便于排序,无论是通过计算还是识别(取决于你的大脑是否倾向于喜欢字符串或整数)。你可以使用 `--time-style` 选项和格式名称更改时间戳的格式。可用格式为:
* `full-iso`:ISO 完整格式(1970-01-01 21:12:00)
* `long-iso`:ISO 长格式(1970-01-01 21:12)
* `iso`:iso 格式(01-01 21:12)
* `locale`:本地化格式(使用你的区域设置)
* `posix-STYLE`:POSIX 风格(用区域设置定义替换 `STYLE`)
你还可以使用 `date` 命令的正式表示法创建自定义样式。
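(LCTT 译注:例如,下面的命令使用 `date` 风格的 `+FORMAT` 表示法自定义时间戳格式,格式字符串仅为示例:)

```
$ ls -l --time-style="+%Y-%m-%d %H:%M"
```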
### 按时间排序
通常,`ls` 命令按字母顺序排序。你可以使用 `-t` 选项根据文件的最近更改的时间(最新的文件最先列出)进行排序。
例如:
```
$ touch foo bar baz
$ ls
bar baz foo
$ touch foo
$ ls -t
foo bar baz
```
### 列出方式
`ls` 的标准输出平衡了可读性和空间效率,但有时你需要按照特定方式排列的文件列表。
要以逗号分隔文件列表,请使用 `-m`:
```
ls -m ~/example
bar, baz, foo
```
要强制每行一个文件,请使用 `-1` 选项(这是数字 1,而不是小写的 L):
```
$ ls -1 ~/bin/
bar
baz
foo
```
要按文件扩展名而不是文件名对条目进行排序,请使用 `-X`(这是大写 X):
```
$ ls
bar.xfc baz.txt foo.asc
$ ls -X
foo.asc baz.txt bar.xfc
```
### 隐藏杂项
在某些 `ls` 列表中有一些你可能不关心的条目。例如,元字符 `.` 和 `..` 分别代表“本目录”和“父目录”。如果你熟悉在终端中如何切换目录,你可能已经知道每个目录都将自己称为 `.`,并将其父目录称为 `..`,因此当你使用 `-a` 选项显示隐藏文件时并不需要它经常提醒你。
要显示几乎所有隐藏文件(`.` 和 `..` 除外),请使用 `-A` 选项:
```
$ ls -a
.
..
.android
.atom
.bash_aliases
[...]
$ ls -A
.android
.atom
.bash_aliases
[...]
```
有许多优秀的 Unix 工具有保存备份文件的传统,它们会在保存文件的名称后附加一些特殊字符作为备份文件。例如,在 Vim 中,备份会以在文件名后附加 `~` 字符的文件名保存。
这些类型的备份文件已经多次使我免于愚蠢的错误,但是经过多年享受它们提供的安全感后,我觉得不需要用视觉证据来证明它们存在。我相信 Linux 应用程序可以生成备份文件(如果它们声称这样做的话),我很乐意相信它们存在 —— 而不用必须看到它们。
要隐藏备份文件,请使用 `-B` 或 `--ignore-backups` 隐藏常用备份格式(此选项在 BSD 的 `ls` 中不可用):
```
$ ls
bar.xfc baz.txt foo.asc~ foo.asc
$ ls -B
bar.xfc baz.txt foo.asc
```
当然,备份文件仍然存在;它只是过滤掉了,你不必看到它。
除非另有配置,GNU Emacs 在文件名的开头和结尾添加哈希字符(`#`)来保存备份文件(`#file#`)。其他应用程序可能使用不同的样式。使用什么模式并不重要,因为你可以使用 `--hide` 选项创建自己的排除项:
```
$ ls
bar.xfc baz.txt #foo.asc# foo.asc
$ ls --hide="#*#"
bar.xfc baz.txt foo.asc
```
### 递归地列出目录
除非你在指定目录上运行 `ls`,或者使用 `-R`(递归)选项,否则 `ls` 不会列出子目录的内容:
```
$ ls -F
example/ quux* xyz.txt
$ ls -R
quux xyz.txt
./example:
bar.xfc baz.txt #foo.asc# foo.asc
```
### 使用别名使其永久化
`ls` 命令可能是 shell 会话期间最常使用的命令。这是你的眼睛和耳朵,为你提供上下文信息和确认命令的结果。虽然有很多选项很有用,但 `ls` 之美的一部分就是简洁:两个字符和回车键,你就知道你到底在哪里以及附近有什么。如果你不得不停下思考(更不用说输入)几个不同的选项,它会变得不那么方便,所以通常情况下,即使最有用的选项也不会用了。
解决方案是为你的 `ls` 命令添加别名,以便在使用它时,你可以获得最关心的信息。
要在 Bash shell 中为命令创建别名,请在主目录中创建名为 `.bash_aliases` 的文件(必须在开头包含 `.`)。 在此文件中,列出要创建的别名,然后是要为其创建别名的命令。例如:
```
alias ls='ls -A -F -B --human --color'
```
这一行导致你的 Bash shell 将 `ls` 命令解释为 `ls -A -F -B --human --color`。
你不必仅限于重新定义现有命令,还可以创建自己的别名:
```
alias ll='ls -l'
alias la='ls -A'
alias lh='ls -h'
```
要使别名起作用,shell 必须知道 `.bash_aliases` 配置文件存在。在编辑器中打开 `.bashrc` 文件(如果它不存在则创建它),并包含以下代码块:
```
if [ -e "$HOME/.bash_aliases" ]; then
    source "$HOME/.bash_aliases"
fi
```
每次加载 `.bashrc`(这是一个新的 Bash shell 启动的时候),Bash 会将 `.bash_aliases` 加载到你的环境中。你可以关闭并重新启动 Bash 会话,或者直接强制它执行此操作:
```
$ source ~/.bashrc
```
如果你忘了你是否有别名命令,`which` 命令可以告诉你:
```
$ which ls
alias ls='ls -A -F -B --human --color'
/usr/bin/ls
```
如果你将 `ls` 命令别名为带有选项的 `ls` 命令,则可以随时通过在 `ls` 前加上反斜杠来覆盖你的别名。例如,在示例别名中,使用 `-B` 选项隐藏了备份文件,这意味着无法用 `ls` 命令显示备份文件。这时可以覆盖该别名来查看备份文件:
```
$ ls
bar baz foo
$ \ls
bar baz baz~ foo
```
### 做一件事,把它做好
`ls` 命令有很多选项,其中许多是特定用途的或高度依赖于你所使用的终端。在 GNU 系统上查看 `info ls`,或在 GNU 或 BSD 系统上查看 `man ls` 以了解更多选项。
你可能会觉得奇怪的是,一个以每个工具“做一件事,把它做好”的前提而闻名的系统会让其最常见的命令背负 50 个选项。但是 `ls` 只做一件事:它列出文件,而这 50 个选项允许你控制接收列表的方式,`ls` 的这项工作做得非常、*非常*好。
---
via: <https://opensource.com/article/19/7/master-ls-command>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The **ls** command lists files on a [POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) system. It's a simple command, often underestimated, not in what it can do (because it really does only one thing), but in how you can optimize your use of it.
Of the 10 most essential terminal commands to know, the humble **ls** command is in the top three, because **ls** doesn't *just* list files, it tells you important information about them. It tells you things like who owns a file or directory, when each file was last modified, and even what kind of file it is. And then there's its incidental function of giving you a sense of where you are, what nearby objects are lying around, and what you can do with them.
If your experience with **ls** is limited to whatever your distribution aliases it to in **.bashrc**, then you're probably missing out.
## GNU or BSD?
Before looking at the hidden powers of **ls**, you must determine which **ls** command you're running. The two most popular versions are the GNU version, included in the GNU **coreutils** package, and the BSD version. If you're running Linux, then you probably have the GNU version of **ls** installed. If you're running BSD or MacOS, then you have the BSD version. There are differences, for which this article accounts.
You can find out which version is on your computer with the **--version** option:
`$ ls --version`
If this returns information about GNU coreutils, then you have the GNU version. If it returns an error, you're probably running the BSD version (run **man ls | head** to be sure).
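On the GNU version, the first line of output looks something like this (the exact version number will vary):

```
$ ls --version
ls (GNU coreutils) 8.30
```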
You should also investigate what presets your distribution may have in place. Customizations to terminal commands are frequently placed in **$HOME/.bashrc** or **$HOME/.bash_aliases** or **$HOME/.profile**, and they're accomplished by aliasing **ls** to a more complex **ls** command. For example:
`alias ls='ls --color'`
The presets provided by distributions are very helpful, but they do make it difficult to discern what **ls** does on its own and what its additional options provide. Should you ever want to run **ls** and not the alias, you can "escape" the command with a backslash:
`$ \ls`
## Classify
Run on its own, **ls** simply lists files in as many columns as can fit into your terminal:
```
$ ls ~/example
bunko jdk-10.0.2
chapterize otf2ttf.ff
despacer overtar.sh
estimate.sh pandoc-2.7.1
fop-2.3 safe_yaml
games tt
```
It's useful information, but all of those files look basically the same without the convenience of icons to quickly convey which is a directory, or a text file, or an image, and so on.
Use the **-F** option (or **--classify** on GNU) to show indicators after each entry that identify the kind of file it is:
```
$ ls -F ~/example
bunko jdk-10.0.2/
chapterize* otf2ttf.ff*
despacer* overtar.sh*
estimate.sh pandoc@
fop-2.3/ pandoc-2.7.1/
games/ tt*
```
With this option, items listed in your terminal are classified by file type using this shorthand:
- A slash (**/**) denotes a directory (or "folder").
- An asterisk (*****) denotes an executable file. This includes a binary file (compiled code) as well as scripts (text files that have [executable permission](https://opensource.com/article/19/6/understanding-linux-permissions)).
- An at sign (**@**) denotes a symbolic link (or "alias").
- An equals sign (**=**) denotes a socket.
- On BSD, a percent sign (**%**) denotes a whiteout (a method of file removal on certain file systems).
- On GNU, an angle bracket (**>**) denotes a door (inter-process communication on [Illumos](https://www.illumos.org/) and Solaris).
- A vertical bar (**|**) denotes a [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)).
A simpler version of this option is **-p**, which only differentiates a file from a directory.
## Long list
Getting a "long list" from **ls** is so common that many distributions alias **ll** to **ls -l**. The long list form provides many important file attributes, such as permissions, the user who owns each file, the group to which the file belongs, the file size in bytes, and the date the file was last changed:
```
$ ls -l
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth users 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20697793 Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth users 6210 May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
[...]
```
If you don't think in bytes, add the **-h** flag (or **--human** in GNU) to translate file sizes to more human-friendly notation:
```
$ ls -l --human
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth seth 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20M Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth seth 6.1K May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
```
You can see just a little less information by showing only the owner column with **-o** or only the group column with **-g**:
```
$ ls -o
-rwxrwx---. 1 seth 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth 20M Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth 6.1K May 22 10:22 geteltorito
-rwxrwx---. 1 seth 177 Nov 12 2018 html4mutt.sh
```
Combine both options to show neither.
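For example, **ls -og** omits both the owner and the group columns. Based on the listings above, the output would look something like this:

```
$ ls -og
-rwxrwx---. 1 455 Mar  2  2017 estimate.sh
-rwxrwxr-x. 1 662 Apr 29 22:27 factorial
```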
## Time and date format
The long list format of **ls** usually looks like this:
```
-rwxrwx---. 1 seth users 455 Mar 2 2017 estimate.sh
-rwxrwxr-x. 1 seth users 662 Apr 29 22:27 factorial
-rwxrwx---. 1 seth users 20697793 Jun 29 2018 fop-2.3-bin.tar.gz
-rwxrwxr-x. 1 seth users 6210 May 22 10:22 geteltorito
-rwxrwx---. 1 seth users 177 Nov 12 2018 html4mutt.sh
```
The names of months aren't easy to sort, either computationally or (depending on whether your brain tends to prefer strings or integers) by recognition. You can change the format of the time stamp with the **--time-style** option plus the name of a format. Available formats are:
- full-iso (1970-01-01 21:12:00)
- long-iso (1970-01-01 21:12)
- iso (01-01 21:12)
- locale (uses your locale settings)
- posix-STYLE (replace STYLE with a locale definition)
You can also create a custom style using the formal notation of the **date** command.
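For instance, you can pass a **date**-style **+FORMAT** string directly (the format string here is only an example):

```
$ ls -l --time-style="+%Y-%m-%d %H:%M"
```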
## Sort by time
Usually, the **ls** command sorts alphabetically. You can make it sort according to which file was most recently changed (the newest is listed first) with the **-t** option.
For example:
```
$ touch foo bar baz
$ ls
bar baz foo
$ touch foo
$ ls -t
foo bar baz
```
## List type
The standard output of **ls** balances readability with space efficiency, but sometimes you want your file list in a specific arrangement.
For a comma-separated list of files, use **-m**:
```
ls -m ~/example
bar, baz, foo
```
To force one file per line, use the **-1** option (that's the number one, not a lowercase L):
```
$ ls -1 ~/bin/
bar
baz
foo
```
To sort entries by file extension rather than the filename, use **-X** (that's a capital X):
```
$ ls
bar.xfc baz.txt foo.asc
$ ls -X
foo.asc baz.txt bar.xfc
```
## Hide the clutter
There are a few entries in some **ls** listings that you may not care about. For instance, the metacharacters **.** and **..** represent "here" and "back one level," respectively. If you're familiar with navigating in a terminal, you probably already know that each directory refers to itself as **.** and to its parent as **..**, so you don't need to be constantly reminded of it when you use the **-a** option to show hidden files.
To show almost all hidden files (the **.** and **..** excluded), use the **-A** option:
```
$ ls -a
.
..
.android
.atom
.bash_aliases
[...]
$ ls -A
.android
.atom
.bash_aliases
[...]
```
With many good Unix tools, there's a tradition of saving backup files by appending some special character to the name of the file being saved. For instance, in Vim, backups get saved with the **~** character appended to the name.
These kinds of backup files have saved me from stupid mistakes on several occasions, but after years of enjoying the sense of security they provide, I don't feel the need to have visual evidence that they exist. I trust Linux applications to generate backup files (if they claim to do so), and I'm happy to take it on faith that they exist.
To hide backup files from view, use **-B** or **--ignore-backups** to conceal common backup formats (this option is not available in BSD **ls**):
```
$ ls
bar.xfc baz.txt foo.asc~ foo.asc
$ ls -B
bar.xfc baz.txt foo.asc
```
Of course, the backup file still exists; it's just filtered out so that you don't have to look at it.
GNU Emacs saves backup files (unless otherwise configured) with a hash character (**#**) at the start and end of the file name (**#file#**). Other applications may use a different style. It doesn't matter what pattern is used, because you can create your own exclusions with the **--hide** option:
```
$ ls
bar.xfc baz.txt #foo.asc# foo.asc
$ ls --hide="#*#"
bar.xfc baz.txt foo.asc
```
## List directories with recursion
The contents of directories are not listed with the **ls** command unless you run **ls** on that directory specifically or pass the **-R** (recursive) option:
```
$ ls -F
example/ quux* xyz.txt
$ ls -R
quux xyz.txt
./example:
bar.xfc baz.txt #foo.asc# foo.asc
```
## Make it permanent with an alias
The **ls** command is probably the command used most often during any given shell session. It's your eyes and ears, providing you with context and confirming the results of commands. While it's useful to have lots of options, part of the beauty of **ls** is its brevity: two characters and the Return key, and you know exactly where you are and what's nearby. If you have to stop to think about (much less type) several different options, it becomes less convenient, so typically even the most useful options are left off.
The solution is to alias your **ls** command so that when you use it, you get the information you care about the most.
To create an alias for a command in the Bash shell, create a file in your home directory called **.bash_aliases** (you must include the dot at the beginning). In this file, list the alias you want to create, followed by the command you want it to run. For example:
`alias ls='ls -A -F -B --human --color'`
This line causes your Bash shell to interpret the **ls** command as **ls -A -F -B --human --color**.
You aren't limited to redefining existing commands. You can create your own aliases:
```
alias ll='ls -l'
alias la='ls -A'
alias lh='ls -h'
```
For aliases to work, your shell must know that the **.bash_aliases** configuration file exists. Open the **.bashrc** file in an editor (or create it, if it doesn't exist), and include this block of code:
```
if [ -e "$HOME/.bash_aliases" ]; then
    source "$HOME/.bash_aliases"
fi
```
Each time **.bashrc** is loaded (which is any time a new Bash shell is launched), Bash will load **.bash_aliases** into your environment. You can close and relaunch your Bash session or just force it to do that now:
`$ source ~/.bashrc`
If you forget whether you have aliased a command, the **which** command tells you:
```
$ which ls
alias ls='ls -A -F -B --human --color'
/usr/bin/ls
```
If you've aliased the **ls** command to itself with options, you can override your own alias at any time by prefacing **ls** with a backslash. For instance, in the example alias, backup files are hidden using the **-B** option, which means there's no way to see backup files with the **ls** command. Override the alias to see the backup files:
```
$ ls
bar baz foo
$ \ls
bar baz baz~ foo
```
## Do one thing and do it well
The **ls** command has a staggering number of options, many of which are niche or highly dependent upon the terminal you use. Take a look at **info ls** on GNU systems or **man ls** on GNU or BSD systems for more options.
You might find it strange that a system famous for the premise that each tool "does one thing and does it well" would weigh down its most common command with 50 options. But **ls** does only one thing: it lists files. And with 50 options to allow you to control how you receive that list, **ls** does its one job very, *very* well.
|
11,160 | 使用 pandoc 将 Markdown 转换为格式化文档 | https://opensource.com/article/19/5/convert-markdown-to-word-pandoc | 2019-07-29T11:34:22 | [
"Markdown",
"pandoc"
] | https://linux.cn/article-11160-1.html |
>
> 生活在普通文本世界么?以下是无需使用文字处理器而创建别人要的格式化文档的方法。
>
>
>

如果你生活在[普通文本](https://plaintextproject.online/)世界里,总会有人要求你提供格式化文档。我就经常遇到这个问题,特别是在 Day Job™。虽然我已经给与我合作的开发团队之一介绍了用于撰写和审阅发行说明的 [Docs Like Code](https://www.docslikecode.com/) 工作流程,但是还有少数人对 GitHub 和使用 [Markdown](https://en.wikipedia.org/wiki/Markdown) 没有兴趣,他们更喜欢为特定的专有应用格式化的文档。
好消息是,你不会被卡在将未格式化的文本复制粘贴到文字处理器的问题当中。使用 [pandoc](https://pandoc.org/),你可以快速地给人们他们想要的东西。让我们看看如何使用 pandoc 将文档从 Markdown 转换为 Linux 中的文字处理器格式。
请注意,pandoc 也可用于从两种 BSD([NetBSD](https://www.netbsd.org/) 和 [FreeBSD](https://www.freebsd.org/))到 Chrome OS、MacOS 和 Windows 等的各种操作系统。
### 基本转换
首先,在你的计算机上[安装 pandoc](https://pandoc.org/installing.html)。然后,打开控制台终端窗口,并导航到包含要转换的文件的目录。
输入此命令以创建 ODT 文件(可以使用 [LibreOffice Writer](https://www.libreoffice.org/discover/writer/) 或 [AbiWord](https://www.abisource.com/) 等字处理器打开):
```
pandoc -t odt filename.md -o filename.odt
```
记得用实际文件名称替换 `filename`。如果你需要为其他文字处理器(你知道我的意思)创建一个文件,替换命令行的 `odt` 为 `docx`。以下是本文转换为 ODT 文件时的内容:

这些转换结果虽然可用,但有点乏味。让我们看看如何为转换后的文档添加更多样式。
### 带样式转换
`pandoc` 有一个漂亮的功能,使你可以在将带标记的纯文本文件转换为字处理器格式时指定样式模板。在此文件中,你可以编辑文档中的少量样式,包括控制段落、文章标题和副标题、段落标题、说明、基本的表格和超链接的样式。
让我们来看看能做什么。
#### 创建模板
要设置文档样式,你不能只是使用任何一个模板就行。你需要生成 pandoc 称之为引用模板的文件,这是将文本文件转换为文字处理器文档时使用的模板。要创建此文件,请在终端窗口中键入以下内容:
```
pandoc -o custom-reference.odt --print-default-data-file reference.odt
```
此命令创建一个名为 `custom-reference.odt` 的文件。如果你正在使用其他文字处理程序,请将命令行中的 “odt” 更改为 “docx”。
在 LibreOffice Writer 中打开模板文件,然后按 `F11` 打开 LibreOffice Writer 的 “样式” 窗格。虽然 [pandoc 手册](https://pandoc.org/MANUAL.html)建议不要对该文件进行其他更改,但我会在必要时更改页面大小并添加页眉和页脚。
#### 使用模板
那么,你要如何使用刚刚创建的模板?有两种方法可以做到这一点。
最简单的方法是将模板放在家目录的 `.pandoc` 文件夹中,如果该文件夹不存在,则必须先创建该文件夹。当转换文档时,`pandoc` 会使用此模板文件。如果你需要多个模板,请参阅下一节了解如何从多个模板中进行选择。
使用模板的另一种方法是在命令行键入以下转换选项:
```
pandoc -t odt file-name.md --reference-doc=path-to-your-file/reference.odt -o file-name.odt
```
如果你想知道使用自定义模板转换后的文件是什么样的,这是一个示例:

#### 选择模板
很多人只需要一个 `pandoc` 模板,但是,有些人需要不止一个。
例如,在我的日常工作中,我使用了几个模板:一个带有 DRAFT 水印,一个带有表示内部使用的水印,另一个用于文档的最终版本。每种类型的文档都需要不同的模板。
如果你有类似的需求,可以像使用单个模板一样创建文件 `custom-reference.odt`,将生成的文件重命名为例如 `custom-reference-draft.odt` 这样的名字,然后在 LibreOffice Writer 中打开它并修改样式。对你需要的每个模板重复此过程。
接下来,将文件复制到家目录中。如果你愿意,你甚至可以将它们放在 `.pandoc` 文件夹中。
要在转换时选择特定模板,你需要在终端中运行此命令:
```
pandoc -t odt file-name.md --reference-doc=path-to-your-file/custom-template.odt -o file-name.odt
```
改变 `custom-template.odt` 为你的模板文件名。
### 结语
为了不用记住我不经常使用的一组选项,我拼凑了一些简单的、非常蹩脚的单行脚本,这些脚本封装了每个模板的选项。例如,我运行脚本 `todraft.sh` 以使用带有 DRAFT 水印的模板创建文字处理器文档。你可能也想要这样做。
以下是使用包含 DRAFT 水印的模板的脚本示例:
```
pandoc -t odt $1.md -o $1.odt --reference-doc=~/Documents/pandoc-templates/custom-reference-draft.odt
```
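(LCTT 译注:如果想让这类脚本稍微健壮一些,可以加上参数检查和引号。下面是一个示例草稿,其中的模板路径仅为示意,请换成你自己的路径:)

```
#!/bin/sh
# 用法:todraft.sh <文件名>(不带 .md 扩展名)
if [ -z "$1" ]; then
    echo "用法:$0 <文件名>" >&2
    exit 1
fi
pandoc -t odt "$1.md" -o "$1.odt" \
    --reference-doc="$HOME/Documents/pandoc-templates/custom-reference-draft.odt"
```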
使用 pandoc 是一种不必放弃命令行生活而以人们要求的格式提供文档的好方法。此工具也不仅适用于 Markdown。我在本文中讨论的内容还可以让你在各种标记语言之间创建和转换文档。有关更多详细信息,请参阅前面链接的 [pandoc 官网](https://pandoc.org/)。
---
via: <https://opensource.com/article/19/5/convert-markdown-to-word-pandoc>
作者:[Scott Nesbitt](https://opensource.com/users/scottnesbitt) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | If you live your life in [plaintext](https://plaintextproject.online/), there invariably comes a time when someone asks for a word processor document. I run into this issue frequently, especially at the Day Job™. Although I've introduced one of the development teams I work with to a [Docs Like Code](https://www.docslikecode.com/) workflow for writing and reviewing release notes, there are a small number of people who have no interest in GitHub or working with [Markdown](https://en.wikipedia.org/wiki/Markdown). They prefer documents formatted for a certain proprietary application.
The good news is that you're not stuck copying and pasting unformatted text into a word processor document. Using **pandoc**, you can quickly give people what they want. Let's take a look at how to convert a document from Markdown to a word processor format in [Linux](https://opensource.com/resources/linux) using **pandoc**.
Note that **pandoc** is also available for a wide variety of operating systems, ranging from two flavors of BSD ([NetBSD](https://www.netbsd.org/) and [FreeBSD](https://www.freebsd.org/)) to Chrome OS, MacOS, and Windows.
## Converting basics
To begin, [install pandoc](https://pandoc.org/installing.html) on your computer. Then, crack open a console terminal window and navigate to the directory containing the file that you want to convert.
Type this command to create an ODT file (which you can open with a word processor like [LibreOffice Writer](https://www.libreoffice.org/discover/writer/) or [AbiWord](https://www.abisource.com/)):
**pandoc -t odt filename.md -o filename.odt**
Remember to replace **filename** with the file's actual name. And if you need to create a file for that other word processor (you know the one I mean), replace **odt** on the command line with **docx**. Here's what this article looks like when converted to an ODT file:

These results are serviceable, but a bit bland. Let's look at how to add a bit more style to the converted documents.
## Converting with style
**pandoc** has a nifty feature enabling you to specify a style template when converting a marked-up plaintext file to a word processor format. In this file, you can edit a small number of styles in the document, including those that control the look of paragraphs, headings, captions, titles and subtitles, a basic table, and hyperlinks.
Let's look at the possibilities.
### Creating a template
In order to style your documents, you can't just use *any* template. You need to generate what **pandoc** calls a *reference* template, which is the template it uses when converting text files to word processor documents. To create this file, type the following in a terminal window:
**pandoc -o custom-reference.odt --print-default-data-file reference.odt**
This command creates a file called **custom-reference.odt**. If you're using that other word processor, change the references to **odt** on the command line to **docx**.
Open the template file in LibreOffice Writer, and then press **F11** to open LibreOffice Writer's **Styles** pane. Although the [pandoc manual](https://pandoc.org/MANUAL.html) advises against making other changes to the file, I change the page size and add headers and footers when necessary.
### Using the template
So, how do you use that template you just created? There are two ways to do this.
The easiest way is to drop the template in your **/home** directory's **.pandoc** folder—you might have to create the folder first if it doesn't exist. When it's time to convert a document, **pandoc** uses this template file. See the next section on how to choose from multiple templates if you need more than one.
The other way to use your template is to type this set of conversion options at the command line:
**pandoc -t odt file-name.md --reference-doc=path-to-your-file/reference.odt -o file-name.odt**
If you're wondering what a converted file looks like with a customized template, here's an example:

opensource.com
### Choosing from multiple templates
Many people only need one **pandoc** template. Some people, however, need more than one.
At my day job, for example, I use several templates—one with a DRAFT watermark, one with a watermark stating FOR INTERNAL USE, and one for a document's final versions. Each type of document needs a different template.
If you have similar needs, start the same way you do for a single template, by creating the file **custom-reference.odt**. Rename the resulting file—for example, to **custom-reference-draft.odt**—then open it in LibreOffice Writer and modify the styles. Repeat this process for each template you need.
Next, copy the files into your **/home** directory. You can even put them in the **.pandoc** folder if you want to.
To select a specific template at conversion time, you'll need to run this command in a terminal:
**pandoc -t odt file-name.md --reference-doc=path-to-your-file/custom-template.odt -o file-name.odt**
Change **custom-template.odt** to your template file's name.
## Wrapping up
To avoid having to remember a set of options I don't regularly use, I cobbled together some simple, very lame one-line scripts that encapsulate the options for each template. For example, I run the script **todraft.sh** to create a word processor document using the template with a DRAFT watermark. You might want to do the same.
Here's an example of a script using the template containing a DRAFT watermark:
`pandoc -t odt $1.md -o $1.odt --reference-doc=~/Documents/pandoc-templates/custom-reference-draft.odt`
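A slightly more defensive sketch of such a script might look like this (the template path is only an example; adjust it to your own):

```
#!/bin/sh
# Usage: todraft.sh <filename> (without the .md extension)
if [ -z "$1" ]; then
    echo "Usage: $0 <filename>" >&2
    exit 1
fi
pandoc -t odt "$1.md" -o "$1.odt" \
    --reference-doc="$HOME/Documents/pandoc-templates/custom-reference-draft.odt"
```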
Using **pandoc** is a great way to provide documents in the format that people ask for, without having to give up the command line life. This tool doesn't just work with Markdown, either. What I've discussed in this article also allows you to create and convert documents between a wide variety of markup languages. See the **pandoc** site linked earlier for more details.
|
11,163 | 如何在 Firefox 中启用 DNS-over-HTTPS(DoH) | https://www.zdnet.com/article/how-to-enable-dns-over-https-doh-in-firefox/ | 2019-07-30T10:58:00 | [
"Firefox",
"DNS",
"DoH"
] | https://linux.cn/article-11163-1.html | 
DNS-over-HTTPS(DoH)协议目前是谈论的焦点,Firefox 是唯一支持它的浏览器。但是,Firefox 默认不启用此功能,用户必须经历许多步骤并修改多个设置才能启动并运行 DoH。
在开始如何在 Firefox 中启用 DoH 支持的分步教程之前,让我们先描述它的原理。
### DNS-over-HTTPS 的工作原理
DNS-over-HTTPS 协议通过获取用户在浏览器中输入的域名,并向 DNS 服务器发送查询,以了解托管该站点的 Web 服务器的 IP 地址。
这也是正常 DNS 的工作原理。但是,DoH 不在 53 端口上以明文发送 DNS 查询,而是通过 443 端口的加密 HTTPS 连接将其发送到兼容 DoH 的 DNS 服务器(解析器)。这样,DoH 就会在常规 HTTPS 流量中隐藏 DNS 查询,因此第三方监听者将无法嗅探流量,并了解用户的 DNS 查询,从而推断他们将要访问的网站。
此外,DNS-over-HTTPS 的第二个特性是该协议工作在应用层。应用可以带上内部硬编码的 DoH 兼容的 DNS 解析器列表,从而向它们发送 DoH 查询。这种操作模式绕过了系统级别的默认 DNS 设置,在大多数情况下,这些设置是由本地 Internet 服务提供商(ISP)设置的。这也意味着支持 DoH 的应用可以有效地绕过本地 ISP 流量过滤器并访问可能被本地电信公司或当地政府阻止的内容 —— 这也是 DoH 目前被誉为用户隐私和安全的福音的原因。
这是 DoH 在推出后不到两年的时间里获得相当大的普及的原因之一,同时也是一群[英国 ISP 因为 Mozilla 计划支持 DoH 协议而提名它为 2019 年的“互联网恶棍” (Internet Villian)](/article-11068-1.html)的原因,ISP 认为 DoH 协议会阻碍他们过滤不良流量的努力。(LCTT 译注:后来这一奖项的提名被取消。)
作为回应,并且由于英国政府阻止访问侵犯版权内容的复杂情况,以及 ISP 自愿阻止访问虐待儿童网站的情况,[Mozilla 已决定不为英国用户默认启用此功能](https://www.zdnet.com/article/mozilla-no-plans-to-enable-dns-over-https-by-default-in-the-uk/)。
下面的分步指南将向英国和世界各地的 Firefox 用户展示如何立即启用该功能,而不用等到 Mozilla 将来启用它 —— 如果它会这么做的话。在 Firefox 中有两种启用 DoH 支持的方法。
### 方法 1:通过 Firefox 设置
**步骤 1:**进入 Firefox 菜单,选择**工具**,然后选择**首选项**。或者,在 URL 栏中输入 `about:preferences`,然后按下回车。这将打开 Firefox 的首选项。
**步骤 2:**在**常规**中,向下滚动到**网络设置**,然后按**设置**按钮。

**步骤 3:**在弹出窗口中,向下滚动并选择“**Enable DNS over HTTPS**”,然后配置你需要的 DoH 解析器。你可以使用内置的 Cloudflare 解析器(该公司与 Mozilla [达成协议](https://developers.cloudflare.com/1.1.1.1/commitment-to-privacy/privacy-policy/firefox/),记录更少的 Firefox 用户数据),或者你可以在[这个列表](https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-available-servers)中选择一个。

### 方法 2:通过 about:config
**步骤 1:**在 URL 栏中输入 `about:config`,然后按回车访问 Firefox 的隐藏配置面板。在这里,用户需要启用和修改三个设置。
**步骤 2:**第一个设置是 `network.trr.mode`。这打开了 DoH 支持。此设置支持四个值:
* `0` - 标准 Firefox 安装中的默认值(当前为 5,表示禁用 DoH)
* `1` - 启用 DoH,但 Firefox 依据哪个请求更快返回选择使用 DoH 或者常规 DNS
* `2` - 启用 DoH,常规 DNS 作为备用
* `3` - 启用 DoH,并禁用常规 DNS
* `5` - 禁用 DoH
值为 2 的效果最好。

**步骤 3:**需要修改的第二个设置是 `network.trr.uri`。这是与 DoH 兼容的 DNS 服务器的 URL,Firefox 将向它发送 DoH DNS 查询。默认情况下,Firefox 使用 Cloudflare 的 DoH 服务,地址是:<https://mozilla.cloudflare-dns.com/dns-query>。但是,用户可以使用自己的 DoH 服务器 URL。他们可以从[这个列表](https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-available-servers)中选择其中一个可用的。Mozilla 在 Firefox 中使用 Cloudflare 的原因是因为与这家公司[达成了协议](https://developers.cloudflare.com/1.1.1.1/commitment-to-privacy/privacy-policy/firefox/),之后 Cloudflare 将收集来自 Firefox 用户的 DoH 查询的非常少的数据。
![DoH in Firefox](https://zdnet2.cbsistatic.com/hub/i/2019/07/06/4dd1d5c1-6fa7-4f5b-b7cd-b544748edfed/baa7a70ac084861d94a744a57a3147ad/doh-2.png)
**步骤 4:**第三个设置是可选的,你可以跳过此设置。但是如果前面的设置不起作用,你可以将它作为步骤 3 的补充。该选项名为 `network.trr.bootstrapAddress`,它是一个输入字段,用于填写步骤 3 中兼容 DoH 的 DNS 解析器的 IP 地址。对于 Cloudflare,它是 1.1.1.1;对于 Google 的服务,它是 8.8.8.8。如果你使用了其他 DoH 解析器的 URL,必要时你需要查出那台服务器的 IP 地址并填入。

通常,在步骤 3 中输入的 URL 应该足够了。 设置应该立即生效,但如果它们不起作用,请重新启动 Firefox。
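(LCTT 译注:如果你想把这三个设置固化到配置文件中,一种办法是把它们写入 Firefox 配置目录下的 `user.js` 文件。下面是一个示例脚本,其中的配置目录名只是占位符,请替换为你实际的配置目录:)

```
# 将 DoH 相关设置追加到 Firefox 配置目录下的 user.js
# 注意:请把 xxxxxxxx.default 替换为你自己的配置目录名
PROFILE=~/.mozilla/firefox/xxxxxxxx.default
cat >> "$PROFILE/user.js" <<'EOF'
user_pref("network.trr.mode", 2);
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
user_pref("network.trr.bootstrapAddress", "1.1.1.1");
EOF
```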
文章信息来源:[Mozilla Wiki](https://wiki.mozilla.org/Trusted_Recursive_Resolver)
---
via: <https://www.zdnet.com/article/how-to-enable-dns-over-https-doh-in-firefox/>
作者:[Catalin Cimpanu](https://www.zdnet.com/meet-the-team/us/catalin.cimpanu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # How to enable DNS-over-HTTPS (DoH) in Firefox

The DNS-over-HTTPS (DoH) protocol is currently the talk of the town, and the Firefox browser is the only one to support it.
However, the feature is not enabled by default for Firefox users, who will have to go through many hoops and modify multiple settings before they can get the DoH up and running.
But before we go into a step-by-step tutorial on how someone can enable DoH support in Firefox, let's describe what it does first.
### How DNS-over-HTTPS works
The DNS-over-HTTPS protocol works by taking a domain name that a user has typed in their browser and sending a query to a DNS server to learn the numerical IP address of the web server that hosts that specific site.
This is how normal DNS works, too. However, DoH takes the DNS query and sends it to a DoH-compatible DNS server (resolver) via an encrypted HTTPS connection on port 443, rather than plaintext on port 53.
This way, DoH hides DNS queries inside regular HTTPS traffic, so third-party observers won't be able to sniff traffic and tell what DNS queries users have run and infer what websites they are about to access.
Further, a secondary feature of DNS-over-HTTPS is that the protocol works at the app level. Apps can come with internally hardcoded lists of DoH-compatible DNS resolvers where they can send DoH queries.
This mode of operation bypasses the default DNS settings that exist at the OS level, which, in most cases are the ones set by local internet service providers (ISPs).
This also means that apps that support DoH can effectively bypass local ISPs traffic filters and access content that may be blocked by a local telco or local government -- and a reason why DoH is currently hailed as a boon for users' privacy and security.
This is one of the reasons that DoH has gained quite the popularity in less than two years after it launched, and a reason why a group of [UK ISPs nominated Mozilla for the award of 2019 Internet Vilain](https://www.zdnet.com/article/uk-isp-group-names-mozilla-internet-villain-for-supporting-dns-over-https/) for its plans to support the DoH protocol, which they said would thwart their efforts in filtering bad traffic.
As a response, and due to the complex situation in the UK where the government blocks access to copyright-infringing content, and where ISPs voluntarily block access to child abuse website, [Mozilla has decided not to enable this feature by default for British users](https://www.zdnet.com/article/mozilla-no-plans-to-enable-dns-over-https-by-default-in-the-uk/).
The below step-by-step guide will show Firefox users in the UK and Firefox users all over the world how to enable the feature right now, and not wait until Mozilla enables it later down the road -- if it will ever do. There are two methods of enabling DoH support in Firefox.
### Method 1 - via the Firefox settings
**Step 1:** Go to the Firefox menu, choose **Tools**, and then **Preferences**. Optionally type **about:preferences** in the URL bar and press enter. This will open the Firefox preferences section.
**Step 2:** In the **General** section, scroll down to the **Network Settings** panel, and press the **Settings** button.
**Step 3:** In the popup, scroll down and select "**Enable DNS over HTTPS**," then configure your desired DoH resolver. You can use the built in Cloudflare resolver (a company with which Mozilla has [reached an agreement](https://developers.cloudflare.com/1.1.1.1/commitment-to-privacy/privacy-policy/firefox/) to log less data about Firefox users), or use one of your choice, [from this list](https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-available-servers).
### Method 2 - via about:config
**Step 1:** Type **about:config** in the URL bar and press Enter to access Firefox's hidden configuration panel. Here users will need to enable and modify three settings.
**Step 2:** The first setting is **network.trr.mode**. This turns on DoH support. This setting supports four values:
- 0 - Default value in standard Firefox installations (currently is 5, which means DoH is disabled)
- 1 - DoH is enabled, but Firefox picks if it uses DoH or regular DNS based on which returns faster query responses
- 2 - DoH is enabled, and regular DNS works as a backup
- 3 - DoH is enabled, and regular DNS is disabled
- 5 - DoH is disabled
A value of 2 works best.
**Step 3:** The second setting that needs to be modified is **network.trr.uri**. This is the URL of the DoH-compatible DNS server where Firefox will send DoH DNS queries. By default, Firefox uses Cloudflare's DoH service located at *https://mozilla.cloudflare-dns.com/dns-query*. However, users can use their own DoH server URL. They can select one from the many available servers, [from this list, here](https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-available-servers). The reason why Mozilla uses Cloudflare in Firefox is because the companies [reached an agreement](https://developers.cloudflare.com/1.1.1.1/commitment-to-privacy/privacy-policy/firefox/) following which Cloudflare would collect very little data on DoH queries coming from Firefox users.
**Step 4:** The third setting is optional and you can skip this one. But if things don't work, you can use this one as a backup for Step 3. The option is called **network.trr.bootstrapAddress** and is an input field where users can enter the numerical IP address of the DoH-compatible DNS resolver they entered in Step 3. For Cloudflare, that would be 1.1.1.1. For Google's service, that would be 8.8.8.8. If you used another DoH resolver's URL, you'll need to track down that server's IP and enter it here, if ever necessary.
Normally, the URL entered in Step 3 should be enough, though.
Settings should apply right away, but in case they don't work, give Firefox a restart.
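If you'd rather make these three settings persistent in a file, one option is to append them to the **user.js** file in your Firefox profile directory. Here's a sketch; the profile directory name below is a placeholder, so replace it with your actual one:

```
# Append the DoH settings to user.js in the Firefox profile directory
# Note: replace xxxxxxxx.default with your own profile directory name
PROFILE=~/.mozilla/firefox/xxxxxxxx.default
cat >> "$PROFILE/user.js" <<'EOF'
user_pref("network.trr.mode", 2);
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
user_pref("network.trr.bootstrapAddress", "1.1.1.1");
EOF
```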
*Article source: [Mozilla Wiki](https://wiki.mozilla.org/Trusted_Recursive_Resolver)*
|
11,165 | 在树莓派上玩怀旧游戏的 5 种方法 | https://opensource.com/article/18/9/retro-gaming-raspberry-pi | 2019-07-30T15:42:00 | [
"游戏",
"复古"
] | https://linux.cn/article-11165-1.html |
>
> 使用这些用于树莓派的开源平台来重温游戏的黄金时代。
>
>
>

现在的游戏已经做得不像从前那样了,对吧?我是说,电子游戏。
当然,现在的设备更强大了。<ruby> 塞尔达公主 <rt> Princess Zelda </rt></ruby>在过去每个方向只有 16 个像素,而现在的图像处理能力足以渲染她头上的每一根头发。今天的处理器打败 1988 年的处理器简直不费吹灰之力。
但是你知道缺少什么吗?乐趣。
光是通过新手教程任务,你就得学会无数个按钮的用法。可能还有故事情节,但杀坏蛋本不需要什么背景故事,你需要的只是跳跃和射击。因此,毫不奇怪,树莓派最持久的流行用途之一就是重温上世纪八九十年代的 8 位和 16 位游戏的黄金时代。但从哪里开始呢?
在树莓派上有几种方法可以玩怀旧游戏。每一种都有自己的优点和缺点,我将在这里讨论这些。
### RetroPie
[RetroPie](https://retropie.org.uk/) 可能是树莓派上最受欢迎的复古游戏平台。它是一个可靠的万能选手,是模拟经典桌面和控制台游戏系统的绝佳选择。

#### 介绍
RetroPie 构建在 [Raspbian](https://www.raspbian.org/) 上运行。如果你愿意,它也可以安装在现有的 Raspbian 镜像上。它使用 [EmulationStation](https://emulationstation.org/) 作为开源仿真器库(包括 [Libretro](https://www.libretro.com/) 仿真器)的图形前端。
不过,你要玩游戏其实并不需要理解上面的任何一个词。
#### 它有什么好处
入门很容易。你需要做的就是将镜像刻录到 SD 卡,配置你的控制器、复制游戏,然后开始杀死坏蛋。
它的庞大用户群意味着有大量的支持和信息,活跃的在线社区也可以求助问题。
除了随 RetroPie 镜像一起安装的仿真器之外,还有一个可以从包管理器安装的庞大的仿真器库,并且它一直在增长。RetroPie 还提供了用户友好的菜单系统来管理这些,可以节省你的时间。
从 RetroPie 菜单中可以轻松添加 Kodi 和配备了 Chromium 浏览器的 Raspbian 桌面。这意味着你的这套复古游戏装备也适于作为家庭影院、[YouTube](https://www.youtube.com/)、[SoundCloud](https://soundcloud.com/) 以及所有其它“休息室电脑”产品。
RetroPie 还有许多其它自定义选项:你可以更改菜单中的图形,为不同的模拟器设置不同的控制手柄配置,使你的树莓派文件系统的所有内容对你的本地 Windows 网络可见等等。
RetroPie 建立在 Raspbian 上,这意味着你可以探索这个树莓派最受欢迎的操作系统。你所发现的大多数树莓派项目和教程都是为 Raspbian 编写的,因此可以轻松地自定义和安装新内容。我已经使用我的 RetroPie 装备作为无线桥接器,在上面安装了 MIDI 合成器,自学了一些 Python,更重要的是,所有这些都没有影响它作为游戏机的用途。
#### 它有什么不太好的
RetroPie 的安装简单和易用性在某种程度上是一把双刃剑。你可以在 RetroPie 上玩了很长时间,而甚至没有学习过哪怕像 `sudo apt-get` 这样简单的东西,但这也意味着你错过了很多树莓派的体验。
但不一定必须如此;当你需要时,命令行仍然存在于底层,只是用户可能与 Bash shell 有些隔膜,而它实际上远没有看上去那么可怕。另外,RetroPie 的主菜单只能通过控制手柄操作,当你把该系统用于游戏之外的用途而没有接入手柄时,这可能很烦人。
#### 它适用于谁?
任何想直接玩一些游戏的人,任何想拥有最大、最好的模拟器库的人,以及任何想在不玩游戏的时候开始探索 Linux 的人。
### Recalbox
[Recalbox](https://www.recalbox.com/) 是一个较新的树莓派开源模拟器套件。它还支持其它基于 ARM 的小型计算机。

#### 介绍
与 Retropie 一样, Recalbox 基于 EmulationStation 和 Libretro。它的不同之处在于它不是基于 Raspbian 构建的,而是基于它自己的 Linux 发行版:RecalboxOS。
#### 它有什么好处
Recalbox 的设置比 RetroPie 更容易。你甚至不需要做 SD 卡镜像;只需复制一些文件即可。它还为一些游戏控制器提供开箱即用的支持,可以让你更快地开始游戏。它预装了 Kodi。这是一个现成的游戏和媒体平台。
#### 它有什么不太好的
Recalbox 比 RetroPie 拥有更少的仿真器、更少的自定义选项和更小的用户社区。
你的 Recalbox 装备可能一直用于模拟器和 Kodi,安装成什么样就是什么样。如果你想深入了解 Linux,你可能需要为 Raspbian 提供一个新的 SD 卡。
#### 它适用于谁?
如果你想要绝对简单的复古游戏体验,并且不想玩一些比较少见的游戏平台模拟器,或者你害怕一些技术性工作(也没有兴趣去做),那么 Recalbox 非常适合你。
对于大多数读者来说,Recalbox 可能最适合推荐给你那些不太懂技术的朋友或亲戚。它超级简单的设置和几乎没什么选项甚至可以让你免去帮助他们解决问题。
### 做个你自己的
好,你可能已经注意到 Retropie 和 Recalbox 都是由许多相同的开源组件构建的。那么为什么不自己把它们组合在一起呢?
#### 介绍
无论你想要的是什么,开源软件的本质意味着你可以使用现有的模拟器套件作为起点,或者随意使用它们。
#### 它有什么好处
如果你想有自己的自定义界面,我想除了亲自动手别无它法。这也是安装 RetroPie 中还没有的仿真器的方法,例如 [BeebEm](http://www.mkw.me.uk/beebem/) 或 [ArcEm](http://arcem.sourceforge.net/)。
#### 它有什么不太好的
嗯,工作量有点大。
#### 它适用于谁?
喜欢鼓捣的人,有动手能力的人,开发者,经验丰富的业余爱好者等。
### 原生 RISC OS 游戏体验
现在有一匹黑马:[RISC OS](https://opensource.com/article/18/7/gentle-intro-risc-os),它是 ARM 设备的原始操作系统。
#### 介绍
在 ARM 成为世界上最受欢迎的 CPU 架构之前,它最初是作为 Acorn Archimedes 的处理器而开发的。如今它看起来像是一头被遗忘的巨兽,但在那几年里,它作为世界上最强大的台式计算机遥遥领先,并吸引了大量的游戏开发项目。
树莓派中的 ARM 处理器是 Archimedes 处理器的曾孙辈,所以我们仍然可以在树莓派上安装 RISC OS,只要做一点工作,就可以让这些游戏运行起来。这与上面介绍的仿真器方式不同:我们是在这些游戏当初所面向的操作系统和 CPU 架构上玩它们。
#### 它有什么好处
这是 RISC OS 的完美展现,这绝对是操作系统的瑰宝,非常值得一试。
事实上,你使用的是和以前几乎相同的操作系统来加载和玩你的游戏,这使得你的复古游戏装备像是一个时间机器一样,这无疑为该项目增添了一些魅力和复古价值。
有一些精彩的游戏只在 Archimedes 上发布过。Archimedes 的巨大硬件优势也意味着它通常拥有许多多平台游戏大作的最佳图形和最流畅的游戏体验。这类游戏的版权持有者非常慷慨,可以合法地免费下载它们。
#### 它有什么不太好的
安装了 RISC OS 之后,它仍然需要一些努力才能让游戏运行起来。这是 [入门指南](https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/)。
对于客厅娱乐来说,这绝对不是一个很好的全能选手。这里没有类似 [Kodi](https://kodi.tv/) 的东西。它有一个网络浏览器 [NetSurf](https://www.netsurf-browser.org/),但它在支持现代 Web 方面还有很长的路要走。你也不会像使用模拟器套件那样有大量游戏可玩。RISC OS Open 可以供爱好者免费下载和使用,而且大部分源代码已经开放,但尽管名字如此,它并不是一个 100% 开源的操作系统。
#### 它适用于谁?
这是专为追求新奇的人,绝对怀旧的人,想要探索一个来自上世纪 80 年代的有趣的操作系统的人,怀旧过去的 Acorn 机器的人,以及想要一个完全不同的怀旧游戏项目的人而设计的。
### 终端游戏
你是否真的需要安装模拟器或者一个异域风情的操作系统才能重温辉煌的日子?为什么不从命令行安装一些原生 Linux 游戏呢?
#### 介绍
有一系列原生的 Linux 游戏经过测试可以在 [树莓派](https://www.raspberrypi.org/forums/viewtopic.php?f=78&t=51794) 上运行。
#### 它有什么好处
你可以使用命令行从程序包安装其中的大部分,然后开始玩。很容易。如果你已经有了一个跑起来的 Raspbian,那么它可能是你运行游戏的最快途径。
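(LCTT 译注:下面是一个假设的例子,包名仅作示意,不同发行版仓库中的游戏会有差异:)

```
$ sudo apt update
$ sudo apt install frozen-bubble supertux
```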
#### 它有什么不太好的
严格来说,这并不是真正的复古游戏。Linux 诞生于 1991 年,又过了好些年才成为一个游戏平台。这些并不是经典 8 位和 16 位时代的游戏体验;它们是后来移植的版本,或是受复古风格影响的游戏。
#### 它适用于谁?
如果你只是想找点乐子,这没问题。但如果你想重温过去,那就不完全是这样了。
---
via: <https://opensource.com/article/18/9/retro-gaming-raspberry-pi>
作者:[James Mawson](https://opensource.com/users/dxmjames) 选题:[lujun9972](https://github.com/lujun9972) 译者:[canhetingsky](https://github.com/canhetingsky) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | They don't make 'em like they used to, do they? Video games, I mean.
Sure, there's a bit more grunt in the gear now. Princess Zelda used to be 16 pixels in each direction; there's now enough graphics power for every hair on her head. Today's processors could beat up 1988's processors in a cage-fight deathmatch without breaking a sweat.
But you know what's missing? The fun.
You've got a squillion and one buttons to learn just to get past the tutorial mission. There's probably a storyline, too. You shouldn't need a backstory to kill bad guys. All you need is jump and shoot. So, it's little wonder that one of the most enduring popular uses for a Raspberry Pi is to relive the 8- and 16-bit golden age of gaming in the '80s and early '90s. But where to start?
There are a few ways to play old-school games on the Pi. Each has its strengths and weaknesses, which I'll discuss here.
## Retropie
[Retropie](https://retropie.org.uk/) is probably the most popular retro-gaming platform for the Raspberry Pi. It's a solid all-rounder and a great default option for emulating classic desktop and console gaming systems.
### What is it?
Retropie is built to run on [Raspbian](https://www.raspbian.org/). It can also be installed over an existing Raspbian image if you'd prefer. It uses [EmulationStation](https://emulationstation.org/) as a graphical front-end for a library of open source emulators, including the [Libretro](https://www.libretro.com/) emulators.
You don't need to understand a word of that to play your games, though.
### What's great about it
It's very easy to get started. All you need to do is burn the image to an SD card, configure your controllers, copy your games over, and start killing bad guys.
The huge user base means that there is a wealth of support and information out there, and active online communities to turn to for questions.
In addition to the emulators that come installed with the Retropie image, there's a huge library of emulators you can install from the package manager, and it's growing all the time. Retropie also offers a user-friendly menu system to manage this, saving you time.
From the Retropie menu, it's easy to add Kodi and the Raspbian desktop, which comes with the Chromium web browser. This means your retro-gaming rig is also good for home theatre, [YouTube](https://www.youtube.com/), [SoundCloud](https://soundcloud.com/), and all those other “lounge room computer” goodies.
Retropie also has a number of other customization options: You can change the graphics in the menus, set up different control pad configurations for different emulators, make your Raspberry Pi file system visible to your local Windows network—all sorts of stuff.
Retropie is built on Raspbian, which means you have the Raspberry Pi's most popular operating system to explore. Most Raspberry Pi projects and tutorials you find floating around are written for Raspbian, making it easy to customize and install new things on it. I've used my Retropie rig as a wireless bridge, installed MIDI synthesizers on it, taught myself a bit of Python, and more—all without compromising its use as a gaming machine.
### What's not so great about it
Retropie's simple installation and ease of use is, in a way, a double-edged sword. You can go for a long time with Retropie without ever learning simple stuff like `sudo apt-get`, which means you're missing out on a lot of the Raspberry Pi experience.
It doesn't have to be this way; the command line is still there under the hood when you want it, but perhaps users are a bit too insulated from a Bash shell that's ultimately a lot less scary than it looks. Retropie's main menu is operable only with a control pad, which can be annoying when you don't have one plugged in because you've been using the system for things other than gaming.
### Who's it for?
Anyone who wants to get straight into some gaming, anyone who wants the biggest and best library of emulators, and anyone who wants a great way to start exploring Linux when they're not playing games.
## Recalbox
[Recalbox](https://www.recalbox.com/) is a newer open source suite of emulators for the Raspberry Pi. It also supports other ARM-based small-board computers.
### What is it?
Like Retropie, Recalbox is built on EmulationStation and Libretro. Where it differs is that it's not built on Raspbian, but on its own flavor of Linux: RecalboxOS.
### What's great about it
The setup for Recalbox is even easier than for Retropie. You don't even need to image an SD card; simply copy some files over and go. It also has out-of-the-box support for some game controllers, getting you to Level 1 that little bit faster. Kodi comes preinstalled. This is a ready-to-go gaming and media rig.
### What's not so great about it
Recalbox has fewer emulators than Retropie, fewer customization options, and a smaller user community.
Your Recalbox rig is probably always just going to be for emulators and Kodi, the same as when you installed it. If you feel like getting deeper into Linux, you'll probably want a new SD card for Raspbian.
### Who's it for?
Recalbox is great if you want the absolute easiest retro gaming experience and can happily go without some of the more obscure gaming platforms, or if you are intimidated by the idea of doing anything a bit technical (and have no interest in growing out of that).
For most opensource.com readers, Recalbox will probably come in most handy to recommend to your not-so-technical friend or relative. Its super-simple setup and overall lack of options might even help you avoid having to help them with it.
## Roll your own
Ok, if you've been paying attention, you might have noticed that both Retropie and Recalbox are built from many of the same open source components. So what's to stop you from putting them together yourself?
### What is it?
Whatever you want it to be, baby. The nature of open source software means you could use an existing emulator suite as a starting point, or pilfer from them at will.
### What's great about it
If you have your own custom interface in mind, I guess there's nothing to do but roll your sleeves up and get to it. This is also a way to install emulators that haven't quite found their way into Retropie yet, such as [BeebEm](http://www.mkw.me.uk/beebem/) or [ArcEm](http://arcem.sourceforge.net/).
### What's not so great about it
Well, it's a bit of work, isn't it?
### Who's it for?
Hackers, tinkerers, builders, seasoned hobbyists, and such.
## Native RISC OS gaming
Now here's a dark horse: [RISC OS](https://opensource.com/article/18/7/gentle-intro-risc-os), the original operating system for ARM devices.
### What is it?
Before ARM went on to become the world's most popular CPU architecture, it was originally built to be the heart of the Acorn Archimedes. That's kind of a forgotten beast nowadays, but for a few years it was light years ahead as the most powerful desktop computer in the world, and it attracted a lot of games development.
Because the ARM processor in the Pi is the great-grandchild of the one in the Archimedes, we can still install RISC OS on it, and with a little bit of work, get these games running. This is different to the emulator options we've covered so far because we're playing our games on the operating system and CPU architecture for which they were written.
### What's great about it
It's the perfect introduction to RISC OS. This is an absolute gem of an operating system and well worth checking out in its own right.
The fact that you're using much the same operating system as back in the day to load and play your games makes your retro gaming rig just that little bit more of a time machine. This definitely adds some charm and retro value to the project.
There are a few superb games that were released only on the Archimedes. The massive hardware advantage of the Archimedes also means that it often had the best graphics and smoothest gameplay of a lot of multi-platform titles. The rights holders to many of these games have been generous enough to make them legally available for free download.
### What's not so great about it
Once you have installed RISC OS, it still takes a bit of elbow grease to get the games working. Here's a [guide to getting started](https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/).
This is definitely not a great all-rounder for the lounge room. There's nothing like [Kodi](https://kodi.tv/). There's a web browser, [NetSurf](https://www.netsurf-browser.org/), but it's struggling to catch up to the modern web. You won't get the range of titles to play as you would with an emulator suite. RISC OS Open is free for hobbyists to download and use and much of the source code has been made open. But despite the name, it's not a 100% open source operating system.
### Who's it for?
This one's for novelty seekers, absolute retro heads, people who want to explore an interesting operating system from the '80s, people who are nostalgic for Acorn machines from back in the day, and people who want a totally different retro gaming project.
## Command line gaming
Do you really need to install an emulator or an exotic operating system just to relive the glory days? Why not just install some native linux games from the command line?
### What is it?
There's a whole range of native Linux games tested to work on the [Raspberry Pi](https://www.raspberrypi.org/forums/viewtopic.php?f=78&t=51794).
### What's great about it
You can install most of these from packages using the command line and start playing. Easy. If you've already got Raspbian up and running, it's probably your fastest path to getting a game running.
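Here's a hypothetical example; the package names are only illustrations, and availability varies by distribution:

```
$ sudo apt update
$ sudo apt install frozen-bubble supertux
```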
### What's not so great about it
This isn't, strictly speaking, actual retro gaming. Linux was born in 1991 and took a while longer to come together as a gaming platform. This isn't quite gaming from the classic 8- and 16-bit era; these are ports and retro-influenced games that were built later.
### Who's it for?
If you're just after a bucket of fun, no problem. But if you're trying to relive the actual era, this isn't quite it.
|
11,167 | Linux 中的软盘走向终结了吗?Torvalds 将软盘的驱动标记为“孤儿” | https://itsfoss.com/end-of-floppy-disk-in-linux/ | 2019-07-30T23:13:24 | [
"软驱",
"软盘"
] | https://linux.cn/article-11167-1.html |
>
> 在 Linux 内核最近的提交当中,Linus Torvalds 将软盘的驱动程序标记为孤儿。这标志着软盘在 Linux 中步入结束了吗?
>
>
>
有可能你很多年没见过真正的软盘了。如果你正在寻找带软盘驱动器的计算机,可能需要去博物馆里看看。
在二十多年前,软盘是用于存储数据和运行操作系统的流行介质。[早期的 Linux 发行版](https://itsfoss.com/earliest-linux-distros/)使用软盘进行“分发”。软盘也广泛用于保存和传输数据。
你有没有想过为什么许多应用程序中的保存图标看起来像软盘?因为它就是软盘啊!软盘常用于保存数据,因此许多应用程序将其用作保存图标,并且这个传统一直延续至今。

今天我为什么要说起软盘?因为 Linus Torvalds 在一个 Linux 内核代码的提交里标记软盘的驱动程序为“孤儿”。
### 在 Linux 内核中被标记为“孤儿”的软盘驱动程序
正如你可以在 [GitHub 镜像上的提交](https://github.com/torvalds/linux/commit/47d6a7607443ea43dbc4d0f371bf773540a8f8f4)中看到的那样,开发人员 Jiri 不再使用带有软驱的工作计算机了。而如果没有正确的硬件,Jiri 将无法继续开发。这就是 Torvalds 将其标记为孤儿的原因。
>
> 越来越难以找到可以实际工作的软盘的物理硬件,虽然 Willy 能够对此进行测试,但我认为从实际的硬件角度来看,这个驱动程序几乎已经死了。目前仍然销售的硬件似乎主要是基于 USB 的,根本不使用这种传统的驱动器。
>
>
>

### “孤儿”在 Linux 内核中意味着什么?
“孤儿”意味着没有开发人员能够或愿意支持这部分代码。如果没有其他人出现继续维护和开发它,孤儿模块可能会被弃用并最终删除。
### 它没有被立即删除
Torvalds 指出,各种虚拟环境模拟器仍在使用软盘驱动器。所以软盘的驱动程序不会被立即丢弃。
>
> 各种 VM 环境中仍然在仿真旧的软盘控制器,因此该驱动程序不会消失,但让我们看看是否有人有兴趣进一步维护它。
>
>
>
为什么不永远保持内核中的软盘驱动器支持呢?因为这将构成安全威胁。即使没有真正的计算机使用软盘驱动程序,虚拟机仍然拥有它,这将使虚拟机容易受到攻击。
### 一个时代的终结?
这将是一个时代的结束还是会有其他人出现并承担起在 Linux 中继续维护软盘驱动程序的责任?只有时间会给出答案。
对于软盘驱动程序在 Linux 内核中成为孤儿,我并不觉得有什么可惜的。
在过去的十五年里我没有使用过软盘,我怀疑很多人也是如此。那么你呢?你有没有用过软盘?如果是的话,你最后一次使用它的时间是什么时候?
---
via: <https://itsfoss.com/end-of-floppy-disk-in-linux/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | *In a recent commit to the Linux Kernel, Linus Torvalds marked the floppy disk drivers as orphaned. Could this be the beginning of the end of floppy disks in Linux?*
Chances are that you haven’t seen a real floppy disk in years. And if you are looking for a computer with floppy drive, you may have to visit a museum.
More than two decades ago, floppy disks were the popular medium for storing data and running operating systems on it. The [early Linux distributions](https://itsfoss.com/earliest-linux-distros/) were ‘distributed’ using floppy disks. Floppy disks were also used extensively for saving and transferring data.
Have you ever wondered why the save icons in many applications look like a floppy? Because it IS floppy disk. Floppy disks were popular for saving data and hence many applications used it as save icons and the tradition continues till date.

Why am I talking about floppy disks today? Because Linus Torvalds has marked floppy drivers ‘orphaned’ in a commit to the Linux kernel code.
## Floppy disk drivers marked ‘orphaned’ in Linux kernel
As you can read in the [commit on the GitHub mirror](https://github.com/torvalds/linux/commit/47d6a7607443ea43dbc4d0f371bf773540a8f8f4), the developer Jiri doesn’t have a working computer with floppy drive anymore. Without the correct hardware, continuing the development won’t be possible for Jiri. And that’s why Torvalds marked it orphan
Actual working physical floppy hardware is getting hard to find, and while Willy was able to test this, I think the driver can be considered pretty much dead from an actual hardware standpoint. The hardware that is still sold seems to be mainly USB-based, which doesn’t use this legacy driver at all.

## What does ‘orphan’ mean in Linux kernel?
Orphan means that there are no developers able to or willing to support that part of code.
An orphaned module will probably get deprecated and removed eventually if no one else comes forward to continue maintaining and developing it.
## It’s not being removed immediately
Torvalds notes that floppy drives are still used by various virtual environment emulators. So the floppy drivers won’t be discarded straightway.
The old floppy disk controller is still emulated in various VM environments, so the driver isn’t going away, but let’s see if anybody is interested to step up to maintain it.
Why not just keep the floppy drive support in the kernel forever? It’s because this will pose a security threat. Even if there is no real computer using floppy drivers, the VMs still have it and this will leave the VMs vulnerable.
## End of an era?
Will this be the end of an era or will someone else come up and take the responsibility of keeping floppy support alive in Linux? Only time will tell.
I don’t think there is any love lost here with floppy drives being orphaned in Linux kernel.
I haven’t used a floppy disks in last fifteen years and I doubt many people either. What about you? Have you ever used a floppy disk? If yes, when was the last time you used it? |
11,170 | WPS Office:Linux 上的 Microsoft Office 的免费替代品 | https://itsfoss.com/wps-office-linux/ | 2019-07-31T09:53:28 | [
"WPS",
"Office"
] | https://linux.cn/article-11170-1.html |
>
> 如果你在寻找 Linux 上 Microsoft Office 免费替代品,那么 WPS Office 是最佳选择之一。它可以免费使用,并兼容 MS Office 文档格式。
>
>
>

[WPS Office](https://www.wps.com/) 是一个跨平台的办公生产力套件。它轻巧,并且与 Microsoft Office、Google Docs/Sheets/Slide 和 Adobe PDF 完全兼容。
对于许多用户而言,WPS Office 足够直观,并且能够满足他们的需求。由于它在外观和兼容性方面与 Microsoft Office 非常相似,因此广受欢迎。

WPS Office 由中国的金山公司开发。对于 Windows 用户而言,WPS Office 有免费版和高级版。对于 Linux 用户,WPS Office 可通过其[社区项目](http://wps-community.org/)免费获得。
>
> **非 FOSS 警告!**
>
>
> WPS Office 不是开源软件。由于它对 Linux 用户免费,我们在这里介绍了它;有时我们也会介绍并非开源的 Linux 软件。
>
>
>
### Linux 上的 WPS Office

WPS Office 有四个主要组件:
* WPS 文字
* WPS 演示
* WPS 表格
* WPS PDF
WPS Office 与 MS Office 完全兼容,支持 .doc、.docx、.dotx、.ppt、.pptx、.xls、.xlsx、.docm、.dotm、.xml、.txt、.html、.rtf (等其他),以及它自己的格式(.wps、.wpt)。它还默认包含 Microsoft 字体(以确保兼容性),它可以导出 PDF 并提供超过 10 种语言的拼写检查功能。
但是,它在 ODT、ODP 和其他开放文档格式方面表现不佳。
三个主要的 WPS Office 应用都有与 Microsoft Office 非常相似的界面,都有相同的 Ribbon UI。尽管存在细微差别,但使用习惯仍然相对一致。你可以使用 WPS Office 轻松克隆任何 Microsoft Office/LibreOffice 文档。

你可能唯一不喜欢的是一些默认的样式设置(一些标题下面有很多空间等),但这些可以很容易地调整。
默认情况下,WPS 以 .docx、.pptx 和 .xlsx 文件类型保存文件。你还可以将文档保存到 **[WPS 云](https://account.wps.com/?cb=https%3A%2F%2Fdrive.wps.com%2F)**中并与他人协作。另一个不错的功能是能从[这里](https://template.wps.com/)下载大量模板。
### 在 Linux 上安装 WPS Office
WPS 为 Linux 发行版提供 DEB 和 RPM 安装程序。如果你使用的是 Debian/Ubuntu 或基于 Fedora 的发行版,那么安装 WPS Office 就简单了。
你可以在下载区那下载 Linux 中的 WPS:
* [下载 WPS Office for Linux](http://wps-community.org/downloads)
向下滚动,你将看到最新版本包的链接:

下载适合你发行版的文件。只需双击 DEB 或者 RPM 就能[安装它们](https://itsfoss.com/install-deb-files-ubuntu/)。这会打开软件中心,你将看到安装选项:

几秒钟后,应用应该成功安装到你的系统上了!
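如果你更习惯使用终端,也可以用 `apt` 直接安装下载好的 DEB 包。下面的文件名只是假设的示例,请换成你实际下载的文件名:

```
sudo apt install ./wps-office_*.deb
```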
你现在可以在“应用程序”菜单中搜索 **WPS**,查找 WPS Office 套件中所有的应用:

### 你是否使用 WPS Office 或其他软件?
还有其他 [Microsoft Office 的开源替代方案](https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/),但它们与 MS Office 的兼容性很差。
就个人而言,我更喜欢 LibreOffice,但如果你必须要用到 Microsoft Office,你可以尝试在 Linux 上使用 WPS Office。它看起来和 MS Office 类似,并且与 MS 文档格式具有良好的兼容性。它在 Linux 上是免费的,因此你也不必担心 Office 365 订阅。
你在系统上使用什么办公套件?你曾经在 Linux 上使用过 WPS Office 吗?你的体验如何?
---
via: <https://itsfoss.com/wps-office-linux/>
作者:[Sergiu](https://itsfoss.com/author/sergiu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

[WPS Office](https://www.wps.com/?ref=itsfoss.com) is a cross-platform office suite by Kingsoft Corporation based in China. It is free and fully compatible with Microsoft Office, Google Docs/Sheets/Slide, and Adobe PDF.
A good alternative productivity suite with closeness to MS Office in terms of looks and compatibility is a pleasant surprise for those [Linux users who must use Microsoft Office on Linux](https://itsfoss.com/use-microsoft-office-linux/).
For Windows users, WPS Office has both free and premium versions. **For Linux users, WPS Office is available for free through its ****community project****.**
**Non-FOSS Warning!**WPS Office is not an open-source software. We have covered it here because it’s available for free for Linux users and at times, we cover software created for Linux even if they are not open source.
## Quick introduction to WPS: Alternative to MS Office on Linux

WPS Office has four main components:
- WPS Writer
- WPS Presentation
- WPS Spreadsheet
- WPS PDF
WPS Office is fully compatible with MS Office and more, supporting .doc, .docx, .dotx, .ppt, .pptx, .xls, .xlsx, .docm, .dotm, .xml, .txt, .html, .rtf (and others), as well as its own format (.wps, .wpt).
It also includes [Microsoft fonts in Linux](https://itsfoss.com/install-microsoft-fonts-ubuntu/) by default (to ensure compatibility), can export PDFs, and provides spell-checking capabilities in more than 10 languages.
However, it didn’t do very well with ODT, ODP, and other open document formats.
All three main WPS Office applications feature a very similar interface to Microsoft Office, with the same Ribbon UI.
Although there are minor differences, the usage remains relatively the same. You can easily clone any Microsoft Office/LibreOffice document using WPS Office.

The only thing you might dislike are some of the default styling settings (some headers having a lot of space beneath them etc.), but these can be easily tweaked.
By default, WPS saves files in .docx, .pptx, and .xlsx file types. You can also save documents to the [WPS Cloud](https://account.wps.com/?cb=https%3A%2F%2Fdrive.wps.com%2F&ref=itsfoss.com) and collaborate on them with others. Another nice addition is the possibility to download a great number of templates from [here](https://template.wps.com/?ref=itsfoss.com).
## Installing WPS Office on Linux
WPS provides DEB and RPM installer files for Linux distributions.
**This makes installing WPS Office easy if you use Debian/Ubuntu or Fedora-based distributions.**

You can visit [their official site WPS Office](https://www.wps.com/office/linux/?ref=itsfoss.com) for Linux.
If you use Debian or Ubuntu, get the Deb package and if you use Fedora and SUSE, get the RPM package.
The [method for installing RPM files](https://itsfoss.com/install-rpm-files-fedora/) and Deb files is the same. Right click on the downloaded file and look for 'open with software install' option. It will open the Software Center and you can install it there.
I'll share the steps for [installing Deb file here](https://itsfoss.com/install-deb-files-ubuntu/). Select and right-click the Deb package, choose `Open With Other Application` and choose the `Software Install` option:

It will open the software center. Here, click on the `Install` button, enter your password and the WPS Office will be installed:

Now, you can start the WPS Office from the system menu:

Being a closed-source software, you need to accept their terms and conditions to use their service:

And that's it! Enjoy WPS Office on Linux.
## Here are open-source alternatives to MS Office
If you're someone who prefers using FOSS only then we compiled a list of [open-source alternatives to MS Office](https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/) that you can use without any issues:

What's your favorite office suite software? Mine is LibreOffice! |
11,171 | 在 Linux 上用 Bash 脚本监控 messages 日志 | https://www.2daygeek.com/linux-bash-script-to-monitor-messages-log-warning-error-critical-send-email/ | 2019-07-31T21:14:00 | [
"日志",
"邮件"
] | https://linux.cn/article-11171-1.html | 
目前市场上有许多开源监控工具可用于监控 Linux 系统的性能。当系统达到指定的阈值限制时,它将发送电子邮件警报。它可以监视 CPU 利用率、内存利用率、交换利用率、磁盘空间利用率等所有内容。
如果你只有很少的系统并且想要监视它们,那么编写一个小的 shell 脚本可以使你的任务变得非常简单。
在本教程中,我们添加了一个 shell 脚本来监视 Linux 系统上的 messages 日志。
我们过去添加了许多有用的 shell 脚本。如果要查看这些内容,请导航至以下链接。
* [如何使用 shell 脚本监控系统的日常活动?](https://www.2daygeek.com/category/shell-script/)
此脚本将检查 `/var/log/messages` 文件中的 “warning”、“error” 和 “critical”,如果发现任何有关的东西,就给指定电子邮件地址发邮件。
如果服务器日志中有大量匹配的字符串,频繁运行这个脚本会塞满收件箱,所以我们让它每天只运行一次。
为了解决这个问题,我让脚本以不同的方式触发电子邮件。
如果 `/var/log/messages` 文件中昨天的日志中找到任何给定字符串,则脚本将向给定的电子邮件地址发送电子邮件警报。
**注意:**你需要更改电子邮件地址,而不是我们的电子邮件地址。
```
# vi /opt/scripts/os-log-alert.sh
```
```
#!/bin/bash
# Set the variable which is equal to zero
prev_count=0
count=$(grep -i "`date --date='yesterday' '+%b %e'`" /var/log/messages | egrep -wi 'warning|error|critical' | wc -l)
if [ "$prev_count" -lt "$count" ] ; then
# Send a mail to given email id when errors found in log
SUBJECT="WARNING: Errors found in log on `date --date='yesterday' '+%b %e'`"
# This is a temp file, which is created to store the email message.
MESSAGE="/tmp/logs.txt"
TO="[email protected]"
echo "ATTENTION: Errors are found in /var/log/messages. Please Check with Linux admin." >> $MESSAGE
echo "Hostname: `hostname`" >> $MESSAGE
echo -e "\n" >> $MESSAGE
echo "+------------------------------------------------------------------------------------+" >> $MESSAGE
echo "Error messages in the log file as below" >> $MESSAGE
echo "+------------------------------------------------------------------------------------+" >> $MESSAGE
grep -i "`date --date='yesterday' '+%b %e'`" /var/log/messages | awk '{ $3=""; print}' | egrep -wi 'warning|error|critical' >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
# Remove the temp file so that old messages are not repeated in the next mail
rm $MESSAGE
fi
```
为 `os-log-alert.sh` 文件设置可执行权限。
```
$ chmod +x /opt/scripts/os-log-alert.sh
```
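在设置计划任务之前,不妨先手动运行一次脚本,确认邮件能够正常发出(前提是系统中的 `mail` 命令已经配置好发信功能):

```
$ sudo bash /opt/scripts/os-log-alert.sh
```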
最后添加一个 cron 任务来自动执行此操作。它将每天 7 点钟运行。
```
# crontab -e
```
```
0 7 * * * /bin/bash /opt/scripts/os-log-alert.sh
```
**注意:**你将在每天 7 点收到昨天日志的电子邮件提醒。
**输出:**你将收到类似下面的电子邮件提醒。
```
ATTENTION: Errors are found in /var/log/messages. Please Check with Linux admin.
+-----------------------------------------------------+
Error messages in the log file as below
+-----------------------------------------------------+
Jul 3 02:40:11 ns1 kernel: php-fpm[3175]: segfault at 299 ip 000055dfe7cc7e25 sp 00007ffd799d7d38 error 4 in php-fpm[55dfe7a89000+3a7000]
Jul 3 02:50:14 ns1 kernel: lmtp[8249]: segfault at 20 ip 00007f9cc05295e4 sp 00007ffc57bca1a0 error 4 in libdovecot-storage.so.0.0.0[7f9cc04df000+148000]
Jul 3 15:36:09 ns1 kernel: php-fpm[17846]: segfault at 299 ip 000055dfe7cc7e25 sp 00007ffd799d7d38 error 4 in php-fpm[55dfe7a89000+3a7000]
Jul 3 15:45:54 ns1 pure-ftpd: ([email protected]) [WARNING] Authentication failed for user [daygeek]
Jul 3 16:25:36 ns1 pure-ftpd: ([email protected]) [WARNING] Sorry, cleartext sessions and weak ciphers are not accepted on this server.#012Please reconnect using TLS security mechanisms.
Jul 3 16:44:20 ns1 kernel: php-fpm[8979]: segfault at 299 ip 000055dfe7cc7e25 sp 00007ffd799d7d38 error 4 in php-fpm[55dfe7a89000+3a7000]
```
---
via: <https://www.2daygeek.com/linux-bash-script-to-monitor-messages-log-warning-error-critical-send-email/>
作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
11,172 | 如何通过命令行升级 Debian 9 为 Debian 10 | https://www.linuxtechi.com/upgrade-debian-9-to-debian-10-command-line/ | 2019-08-01T08:54:37 | [
"Debian"
] | https://linux.cn/article-11172-1.html | 我们已经在先前的文章中看到如何安装 [Debian 10(Buster)](/article-11083-1.html)。今天,我们将学习如何从 Debian 9 升级为 Debian 10,虽然我们已将看到 Debian 10 和它的特色,所以这里我们不会深入介绍。但是可能读者没有机会读到那篇文章,让我们快速了解一下 Debian 10 和它的新功能。

在差不多两年的开发后,Debian 团队终于发布了一个稳定版本,Debian 10 的代号是 Buster。Buster 是一个 LTS(长期支持)版本,因此未来 5 年将获得 Debian 的支持。
### Debian 10(Buster)新的特色
Debian 10(Buster)为广大 Debian 爱好者带来了大量的新特色。一些特色包括:
* GNOME 桌面 3.30
* 默认启用 AppArmor
* 支持 Linux 内核 4.19.0-4
* 支持 OpenJDk 11.0
* 从 Nodejs 4 ~ 8 升级到 Nodejs 10.15.2
* Iptables 替换为 NFTables
等等。
### 从 Debian 9 到 Debian 10 的逐步升级指南
在我们开始升级 Debian 10 前,让我们看看升级需要的必备条件:
#### 步骤 1) Debian 升级必备条件
* 一个良好的网络连接
* root 用户权限
* 数据备份
备份你所有的应用程序代码库、数据文件、用户账号详细信息、配置文件是极其重要的,以便在升级出错时,你可以总是可以还原到先前的版本。
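例如,下面是一个假设性的最小备份示例,只备份系统配置并记录已安装的软件包清单。路径和文件名均为示例,请按你的环境调整,并记得把真正重要的数据也备份到别处:

```
root@linuxtechi:~$ sudo tar czf ~/etc-backup-$(date +%F).tar.gz /etc
root@linuxtechi:~$ dpkg --get-selections > ~/packages.list
```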
#### 步骤 2) 升级 Debian 9 现有的软件包
接下来的步骤是升级你所有现有的软件包。因为被标记为保留(hold)的软件包无法升级,从 Debian 9 升级到 Debian 10 的过程就有失败或引发一些问题的可能性。所以,我们不冒任何风险,最好先升级软件包。使用下面的命令来升级软件包:
```
root@linuxtechi:~$ sudo apt update && sudo apt upgrade -y
```
#### 步骤 3) 修改软件包存储库文件 /etc/apt/sources.list
接下来的步骤是修改软件包存储库文件 `/etc/apt/sources.list`,你需要用文本 `buster` 替换文本 `stretch`。
但是,在你更改任何东西前,确保如下创建一个 `sources.list` 文件的备份:
```
root@linuxtechi:~$ sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
```
现在使用下面的 `sed` 命令来在软件包存储库文件中使用 `buster` 替换 `stretch`,示例如下显示:
```
root@linuxtechi:~$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list
root@linuxtechi:~$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/*.list
```
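替换完成后,可以用 `grep` 快速确认一下。如果下面的命令没有任何输出,就说明文件中已经没有 `stretch` 字样了:

```
root@linuxtechi:~$ grep stretch /etc/apt/sources.list
```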
更新后,你需要如下更新软件包存储库索引:
```
root@linuxtechi:~$ sudo apt update
```
在开始升级你现有的 Debian 操作系统前,让我们使用下面的命令验证当前版本,
```
root@linuxtechi:~$ cat /etc/*-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@linuxtechi:~$
```
#### 步骤 4) 从 Debian 9 升级到 Debian 10
你做完所有的更改后,是时候从 Debian 9 升级到 Debian 10 了。但是在这之前,再次如下确保更新你的软件包:
```
root@linuxtechi:~$ sudo apt update && sudo apt upgrade -y
```
在软件包升级期间,你将被提示启动服务,所以选择你较喜欢的选项。
一旦你系统的所有软件包升级完成,就升级你的发行版的软件包。使用下面的代码来升级发行版:
```
root@linuxtechi:~$ sudo apt dist-upgrade -y
```
升级过程可能花费一些时间,取决于你的网络速度。记住在升级过程中,你将被询问一些问题,在软件包升级后是否需要重启服务、你是否需要保留现存的配置文件等。如果你不想进行一些自定义更改,简单地键入 “Y” ,来让升级过程继续。
#### 步骤 5) 验证升级
一旦升级过程完成,重启你的机器,并使用下面的方法检测版本:
```
root@linuxtechi:~$ lsb_release -a
```
如果你获得如下输出:
```
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
root@linuxtechi:~$
```
是的,你已经成功地从 Debian 9 升级到 Debian 10。
验证升级的备用方法:
```
root@linuxtechi:~$ cat /etc/*-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@linuxtechi:~$
```
### 结束
希望上面的逐步指南为你提供了从 Debian 9(Stretch)简单地升级为 Debian 10(Buster)的所有信息。在评论部分,请给予你使用 Debian 10 的反馈、建议、体验。
---
via: <https://www.linuxtechi.com/upgrade-debian-9-to-debian-10-command-line/>
作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Hello All!!!, Good to See you! So we saw how to install [Debian 10(Buster)](https://www.linuxtechi.com/debian-10-buster-installation-guide/) in the previous article. Today, we are going to learn how to upgrade from Debian 9 to Debian 10. Since we have already seen about Debian 10 and its features, let’s not go deep into it. But readers who didn’t have the chance to read that article, let’s give a quick update about Debian 10 and its new features.
After almost two years of development, the team at Debian has finally released a stable version of Buster, code name for Debian 10. Buster is a LTS (Long Term Support) version and hence will be supported for the next 5 years by Debian.
#### Debian 10 (Buster) – New Features
Debian 10 (Buster) comes packed with a lot of new features which could be rewarding to most of the Debian fans out there. Some of the features include:
- GNOME Desktop 3.30
- AppArmor enabled by default
- Supports Linux Kernel 4.19.0-4
- Supports OpenJDk 11.0
- Moved from Nodejs 4-8 to Nodejs 10.15.2
- Iptables replaced by NFTables
and a lot more.
#### Step by Step Guide to Upgrade from Debian 9 to Debian 10
Before we start upgrading to Debian 10, let’s look at the prerequisites needed for the upgrade:
#### Step 1) Debian upgrade prerequisites
- A good internet connection
- Root user permission
- Data backup
It is extremely important to backup all your application code bases, data files, user account details, configuration files, so that you can always revert to the previous version if anything goes wrong during the upgrade.
#### Step 2) Upgrade Debian 9 Existing Packages
Next step is to upgrade all your existing packages as any packages that are tagged as held back cannot be upgraded and there is a possibility the upgrade from Debian 9 to Debian 10 may fail or cause some issues. So, let’s not take any chances and better upgrade the packages. Use the following code to upgrade the packages:
pkumar@linuxtechi:~$ sudo apt update && sudo apt upgrade -y
#### Step 3) Modify Package Repository file (/etc/sources.list)
Next step is to modify package repository file “/etc/sources.list” where you need to replace the text “Stretch” with the text “Buster”.
But before you change anything, make sure to create a backup of the sources.list file as shown below:
pkumar@linuxtechi:~$ sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
Now use below sed commands to replace the text ‘**stretch**‘ with ‘**buster**‘ in package repository file, example is shown below,
pkumar@linuxtechi:~$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list pkumar@linuxtechi:~$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/*.list
Once the text is updated, you need to update the package index like shown below:
pkumar@linuxtechi:~$ sudo apt update
Before start upgrading your existing Debian OS , let’s verify the current version using the following command,
pkumar@linuxtechi:~$ cat /etc/*-release PRETTY_NAME="Debian GNU/Linux 9 (stretch)" NAME="Debian GNU/Linux" VERSION_ID="9" VERSION="9 (stretch)" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" pkumar@linuxtechi:~$
#### Step 4) Upgrade from Debian 9 to Debian 10
Once you made all the changes, it is time to upgrade from Debian 9 – Debian 10. But before that, make sure to update your packages again as shown below:
pkumar@linuxtechi:~$ sudo apt update && sudo apt upgrade -y
During packages upgrade you will be prompted to start the services, so choose your preferred option
Once all the packages are updated in your system, it is time to upgrade your distribution package. Use the following code to upgrade the distribution:
pkumar@linuxtechi:~$ sudo apt dist-upgrade -y
The upgrade process may take a few minutes depending upon your internet connection. Remember during the upgrade process, you’ll also be asked a few questions whether you need to restart the services during the packages are upgraded and whether you need to keep the existing configurations files. If you don’t want to make any custom changes, simply type “Y” and let the upgrade process continue.
#### Step 5) Verify Upgrade
Once the upgrade process is completed, reboot your machine and check the version using the following command:
pkumar@linuxtechi:~$ lsb_release -a
If you get the output as shown below:
Distributor ID: Debian Description: Debian GNU/Linux 10 (buster) Release: 10 Codename: buster pkumar@linuxtechi:~$
Yes, you have successfully upgraded from Debian 9 to Debian 10.
Alternate way to verify upgrade
pkumar@linuxtechi:~$ cat /etc/*-release PRETTY_NAME="Debian GNU/Linux 10 (buster)" NAME="Debian GNU/Linux" VERSION_ID="10" VERSION="10 (buster)" VERSION_CODENAME=buster ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" pkumar@linuxtechi:~$
#### Conclusion
Hope the above step by step guide provided you with all the information needed to upgrade from Debian 9(Stretch) to Debian 10 (Buster) easily. Please give us your feedback, suggestions and your experiences with the all new Debian 10 in the comments section. For more such Linux tutorials and articles, keep visiting LinuxTechi.com every now and then. |
11,173 | 在 Linux 上安装 NetData 性能监控工具 | https://www.ostechnix.com/netdata-real-time-performance-monitoring-tool-linux/ | 2019-08-01T09:09:00 | [
"NetData"
] | https://linux.cn/article-11173-1.html | 
**NetData** 是一个用于系统和应用的分布式实时性能和健康监控工具。它提供了对系统中实时发生的所有事情的全面检测。你可以在高度互动的 Web 仪表板中查看结果。使用 Netdata,你可以清楚地了解现在发生的事情,以及之前系统和应用中发生的事情。你无需成为专家即可在 Linux 系统中部署此工具。NetData 开箱即用,零配置、零依赖。只需安装它然后坐等,之后 NetData 将负责其余部分。
它有自己的内置 Web 服务器,以图形形式显示结果。NetData 非常快速高效,安装后可立即开始分析系统性能。它是用 C 编程语言编写的,所以它非常轻量。它占用的单核 CPU 使用率不到 3%,内存占用 10-15MB。我们可以轻松地在任何现有网页上嵌入图表,并且它还有一个插件 API,以便你可以监控任何应用。
以下是 Linux 系统中 NetData 的监控列表。
* CPU 使用率
* RAM 使用率
* 交换内存使用率
* 内核内存使用率
* 硬盘及其使用率
* 网络接口
* IPtables
* Netfilter
* DDoS 保护
* 进程
* 应用
* NFS 服务器
* Web 服务器 (Apache 和 Nginx)
* 数据库服务器 (MySQL),
* DHCP 服务器
* DNS 服务器
* 电子邮件服务
* 代理服务器
* Tomcat
* PHP
* SNMP 设备
* 等等
NetData 是自由开源工具,它支持 Linux、FreeBSD 和 Mac OS。
### 在 Linux 上安装 NetData
Netdata 可以安装在任何安装了 Bash 的 Linux 发行版上。
最简单的安装 Netdata 的方法是从终端运行以下命令:
```
$ bash <(curl -Ss https://my-netdata.io/kickstart-static64.sh)
```
这将下载并安装启动和运行 Netdata 所需的一切。
有些用户可能不想在没有研究的情况下将某些东西直接注入到 Bash。如果你不喜欢此方法,可以按照以下步骤在系统上安装它。
#### 在 Arch Linux 上
Arch Linux 默认仓库中提供了最新版本。所以,我们可以使用以下 [pacman](https://www.ostechnix.com/getting-started-pacman/) 命令安装它:
```
$ sudo pacman -S netdata
```
#### 在基于 DEB 和基于 RPM 的系统上
在基于 DEB (Ubuntu / Debian)或基于 RPM(RHEL / CentOS / Fedora) 系统的默认仓库没有 NetData。我们需要从它的 Git 仓库手动安装 NetData。
首先安装所需的依赖项:
```
# Debian / Ubuntu
$ sudo apt-get install zlib1g-dev uuid-dev libuv1-dev liblz4-dev libjudy-dev libssl-dev libmnl-dev gcc make git autoconf autoconf-archive autogen automake pkg-config curl
# Fedora
$ sudo dnf install zlib-devel libuuid-devel libuv-devel lz4-devel Judy-devel openssl-devel libmnl-devel gcc make git autoconf autoconf-archive autogen automake pkgconfig curl findutils
# CentOS / Red Hat Enterprise Linux
$ sudo yum install epel-release
$ sudo yum install autoconf automake curl gcc git libmnl-devel libuuid-devel openssl-devel libuv-devel lz4-devel Judy-devel lm_sensors make MySQL-python nc pkgconfig python python-psycopg2 PyYAML zlib-devel
# openSUSE
$ sudo zypper install zlib-devel libuuid-devel libuv-devel liblz4-devel judy-devel openssl-devel libmnl-devel gcc make git autoconf autoconf-archive autogen automake pkgconfig curl findutils
```
安装依赖项后,在基于 DEB 或基于 RPM 的系统上安装 NetData,如下所示。
Git 克隆 NetData 仓库:
```
$ git clone https://github.com/netdata/netdata.git --depth=100
```
上面的命令将在当前工作目录中创建一个名为 `netdata` 的目录。
切换到 `netdata` 目录:
```
$ cd netdata/
```
最后,使用命令安装并启动 NetData:
```
$ sudo ./netdata-installer.sh
```
**示例输出:**
```
Welcome to netdata!
Nice to see you are giving it a try!
You are about to build and install netdata to your system.
It will be installed at these locations:
- the daemon at /usr/sbin/netdata
- config files at /etc/netdata
- web files at /usr/share/netdata
- plugins at /usr/libexec/netdata
- cache files at /var/cache/netdata
- db files at /var/lib/netdata
- log files at /var/log/netdata
- pid file at /var/run
This installer allows you to change the installation path.
Press Control-C and run the same command with --help for help.
Press ENTER to build and install netdata to your system > ## Press ENTER key
```
安装完成后,你将在最后看到以下输出:
```
-------------------------------------------------------------------------------
OK. NetData is installed and it is running (listening to *:19999).
-------------------------------------------------------------------------------
INFO: Command line options changed. -pidfile, -nd and -ch are deprecated.
If you use custom startup scripts, please run netdata -h to see the
corresponding options and update your scripts.
Hit http://localhost:19999/ from your browser.
To stop netdata, just kill it, with:
killall netdata
To start it, just run it:
/usr/sbin/netdata
Enjoy!
Uninstall script generated: ./netdata-uninstaller.sh
```

*安装 NetData*
NetData 已安装并启动。
要在其他 Linux 发行版上安装 Netdata,请参阅[官方安装说明页面](https://docs.netdata.cloud/packaging/installer/)。
### 在防火墙或者路由器上允许 NetData 的默认端口
如果你的系统在防火墙或者路由器后面,那么必须允许默认端口 `19999` 以便从任何远程系统访问 NetData 的 web 界面。
#### 在 Ubuntu/Debian 中
```
$ sudo ufw allow 19999
```
#### 在 CentOS/RHEL/Fedora 中
```
$ sudo firewall-cmd --permanent --add-port=19999/tcp
$ sudo firewall-cmd --reload
```
### 启动/停止 NetData
要在使用 Systemd 的系统上启用和启动 Netdata 服务,请运行:
```
$ sudo systemctl enable netdata
$ sudo systemctl start netdata
```
要停止:
```
$ sudo systemctl stop netdata
```
要在使用 Init 的系统上启用和启动 Netdata 服务,请运行:
```
$ sudo service netdata start
$ sudo chkconfig netdata on
```
要停止:
```
$ sudo service netdata stop
```
### 通过 Web 浏览器访问 NetData
打开 Web 浏览器,然后打开 `http://127.0.0.1:19999` 或者 `http://localhost:19999/` 或者 `http://ip-address:19999`。你应该看到如下页面。

*Netdata 仪表板*
在仪表板中,你可以找到 Linux 系统的完整统计信息。向下滚动以查看每个部分。
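如果你是在没有图形界面的服务器上安装的 NetData,也可以先用 `curl` 验证服务是否正常响应(这里假设使用默认端口 19999)。正常情况下,下面的命令应返回类似 `HTTP/1.1 200 OK` 的状态行:

```
$ curl -sI http://localhost:19999/ | head -n 1
```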
你可以随时打开 `http://localhost:19999/netdata.conf` 来下载和/或查看 NetData 默认配置文件。

*Netdata 配置文件*
### 更新 NetData
在 Arch Linux 中,只需运行以下命令即可更新 NetData。如果仓库中提供了更新版本,那么就会自动安装该版本。
```
$ sudo pacman -Syyu
```
在基于 DEB 或 RPM 的系统中,只需进入已克隆它的目录(此例中是 `netdata`)。
```
$ cd netdata
```
拉取最新更新:
```
$ git pull
```
然后,使用命令重新构建并更新它:
```
$ sudo ./netdata-installer.sh
```
### 卸载 NetData
进入克隆 NetData 的文件夹。
```
$ cd netdata
```
然后,使用命令卸载它:
```
$ sudo ./netdata-uninstaller.sh --force
```
在 Arch Linux 中,使用以下命令卸载它。
```
$ sudo pacman -Rns netdata
```
### 资源
* [NetData 网站](http://netdata.firehol.org/)
* [NetData 的 GitHub 页面](https://github.com/firehol/netdata)
---
via: <https://www.ostechnix.com/netdata-real-time-performance-monitoring-tool-linux/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,175 | 如何在 Ubuntu 上安装 MongoDB | https://itsfoss.com/install-mongodb-ubuntu | 2019-08-02T10:46:11 | [
"MongoDB"
] | https://linux.cn/article-11175-1.html |
>
> 本教程介绍了在 Ubuntu 和基于 Ubuntu 的 Linux 发行版上安装 MongoDB 的两种方法。
>
>
>
[MongoDB](https://www.mongodb.com/) 是一个越来越流行的自由开源的 NoSQL 数据库,它将数据存储在类似 JSON 的灵活文档集中,这与 SQL 数据库中常见的表格形式形成对比。
你很可能会发现 MongoDB 被用在现代 Web 应用中。它的文档模型使你可以用各种编程语言非常直观地访问和处理数据。

在本文中,我将介绍两种在 Ubuntu 上安装 MongoDB 的方法。
### 在基于 Ubuntu 的发行版上安装 MongoDB
1. 使用 Ubuntu 仓库安装 MongoDB。简单但不是最新版本的 MongoDB
2. 使用其官方仓库安装 MongoDB。稍微复杂,但你能得到最新版本的 MongoDB。
第一种安装方法更容易,但如果你计划使用官方支持的最新版本,那么我建议使用第二种方法。
有些人可能更喜欢使用 snap 包。Ubuntu 软件中心提供了 snap,但我不建议使用它们,因为它们现在已经过期了,因此我这里不会提到。
### 方法 1:从 Ubuntu 仓库安装 MongoDB
这是在系统中安装 MongoDB 的简便方法,你只需输入一个命令即可。
#### 安装 MongoDB
首先,确保你的包是最新的。打开终端并输入:
```
sudo apt update && sudo apt upgrade -y
```
继续安装 MongoDB:
```
sudo apt install mongodb
```
这就完成了!MongoDB 现在安装到你的计算机上了。
MongoDB 服务应该在安装时自动启动,但要检查服务状态:
```
sudo systemctl status mongodb
```

你可以看到该服务是**活动**的。
#### 运行 MongoDB
MongoDB 目前是一个 systemd 服务,因此我们使用 `systemctl` 来检查和修改它的状态,使用以下命令:
```
sudo systemctl status mongodb
sudo systemctl stop mongodb
sudo systemctl start mongodb
sudo systemctl restart mongodb
```
你也可以修改 MongoDB 是否自动随系统启动(默认:启用):
```
sudo systemctl disable mongodb
sudo systemctl enable mongodb
```
要开始使用(创建和编辑)数据库,请输入:
```
mongo
```
这将启动 **mongo shell**。有关查询和选项的详细信息,请查看[手册](https://docs.mongodb.com/manual/tutorial/getting-started/)。
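例如,下面是一个最小的示例,演示如何插入并查询一个文档。其中的数据库名 `testdb`、集合名 `users` 和文档内容都只是假设的示例:

```
$ mongo
> use testdb
> db.users.insertOne({ name: "alice", age: 30 })
> db.users.find()
> exit
```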
**注意:**根据你计划使用 MongoDB 的方式,你可能需要调整防火墙。不过这超出了本篇的内容,并且取决于你的配置。
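尽管如此,如果你恰好在使用 UFW,这里有一个仅供参考的假设性示例,只允许某台应用服务器(下面的 IP 地址是虚构的)访问 MongoDB 的默认端口 27017:

```
sudo ufw allow from 192.0.2.10 to any port 27017
```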
#### 卸载 MongoDB
如果你从 Ubuntu 仓库安装 MongoDB 并想要卸载它(可能要使用官方支持的方式安装),请输入:
```
sudo systemctl stop mongodb
sudo apt purge mongodb
sudo apt autoremove
```
这应该会完全卸载 MongoDB。确保**备份**你可能想要保留的任何集合或文档,因为它们将被删除!
### 方法 2:在 Ubuntu 上安装 MongoDB 社区版
这是推荐的安装 MongoDB 的方法,它使用包管理器。你需要多打几条命令,对于 Linux 新手而言,这可能会感到害怕。
但没有什么可怕的!我们将一步步说明安装过程。
#### 安装 MongoDB
由 MongoDB Inc. 维护的包称为 `mongodb-org`,而不是 `mongodb`(这是 Ubuntu 仓库中包的名称)。在开始之前,请确保系统上未安装 `mongodb`。因为包之间会发生冲突。让我们开始吧!
首先,我们必须导入公钥:
```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
```
现在,你需要在源列表中添加一个新的仓库,以便你可以安装 MongoDB 社区版并获得自动更新:
```
echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu $(lsb_release -cs)/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
```
要安装 `mongodb-org`,我们需要更新我们的包数据库,以便系统知道可用的新包:
```
sudo apt update
```
现在你可以安装**最新稳定版**的 MongoDB:
```
sudo apt install -y mongodb-org
```
或者某个**特定版本**(在 `=` 后面修改版本号)
```
sudo apt install -y mongodb-org=4.0.6 mongodb-org-server=4.0.6 mongodb-org-shell=4.0.6 mongodb-org-mongos=4.0.6 mongodb-org-tools=4.0.6
```
如果你选择安装特定版本,请确保在所有位置都修改了版本号。如果你只修改了 `mongodb-org=4.0.6` 这一处,那么仍然会安装最新版本。
默认情况下,使用包管理器(`apt-get`)更新时,MongoDB 将更新为最新的版本。要阻止这种情况发生(并冻结为已安装的版本),请使用:
```
echo "mongodb-org hold" | sudo dpkg --set-selections
echo "mongodb-org-server hold" | sudo dpkg --set-selections
echo "mongodb-org-shell hold" | sudo dpkg --set-selections
echo "mongodb-org-mongos hold" | sudo dpkg --set-selections
echo "mongodb-org-tools hold" | sudo dpkg --set-selections
```
你现在已经成功安装了 MongoDB!
#### 配置 MongoDB
默认情况下,包管理器将创建 `/var/lib/mongodb` 和 `/var/log/mongodb`,MongoDB 将使用 `mongodb` 用户帐户运行。
我不会去更改这些默认设置,因为这超出了本指南的范围。有关详细信息,请查看[手册](https://docs.mongodb.com/manual/)。
`/etc/mongod.conf` 中的设置在启动/重新启动 **mongodb** 服务实例时生效。
##### 运行 MongoDB
要启动 mongodb 的守护进程 `mongod`,请输入:
```
sudo service mongod start
```
现在你应该验证 `mongod` 进程是否已成功启动。此信息(默认情况下)保存在 `/var/log/mongodb/mongod.log` 中。我们来看看文件的内容:
```
sudo cat /var/log/mongodb/mongod.log
```

只要你在某处看到:`[initandlisten] waiting for connections on port 27017`,就说明进程正常运行。
**注意**:27017 是 `mongod` 的默认端口。
要停止/重启 `mongod`,请输入:
```
sudo service mongod stop
sudo service mongod restart
```
现在,你可以通过打开 **mongo shell** 来使用 MongoDB:
```
mongo
```
#### 卸载 MongoDB
运行以下命令:
```
sudo service mongod stop
sudo apt purge mongodb-org*
```
要删除**数据库**和**日志文件**(确保**备份**你要保留的内容!):
```
sudo rm -r /var/log/mongodb
sudo rm -r /var/lib/mongodb
```
### 总结
MongoDB 是一个很棒的 NoSQL 数据库,它易于集成到现代项目中。我希望本教程能帮助你在 Ubuntu 上安装它!在下面的评论中告诉我们你计划如何使用 MongoDB。
---
via: <https://itsfoss.com/install-mongodb-ubuntu>
作者:[Sergiu](https://itsfoss.com/author/sergiu/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,176 | Linux 内核的持续集成测试 | https://opensource.com/article/19/6/continuous-kernel-integration-linux | 2019-08-02T11:26:28 | [
"内核",
"CI"
] | https://linux.cn/article-11176-1.html |
>
> CKI 团队是如何防止 bug 被合并到 Linux 内核中。
>
>
>

Linux 内核的每个发布版本都包含了来自 1,700 位开发者的 14,000 个变更集,很显然,这使得 Linux 内核在快速迭代的同时也带来了巨大的复杂性。内核的 Bug 小到恼人的麻烦,大到系统崩溃、数据丢失等严重问题。
随着越来越多的项目对于持续集成(CI)的呼声,[内核持续集成(CKI)](https://cki-project.org/)小组秉承着一个任务目标:防止 Bug 被合并到内核当中。
### Linux 测试问题
许多 Linux 发行版只在需要的时候对 Linux 内核进行测试。而这种测试往往只在版本发布时或者用户发现错误时进行。
有时会出现一些不相关的问题,维护人员不得不在包含数万个补丁的变更集中匆忙寻找是哪个补丁导致了这个新的、不相关的 Bug。诊断 Bug 需要专业的硬件设备、一系列的触发器以及内核相关的专业知识。
#### CI 和 Linux
许多现代软件代码库都采用某种自动化 CI 测试机制,能够在提交进入代码存储库之前对其进行测试。这种自动化测试使得维护人员可以通过查看 CI 测试报告来发现软件质量问题以及大多数的错误。一些简单的项目,比如某个 Python 库,附带的大量工具使得整个检查过程更简单。
在任何测试之前都需要配置和编译 Linux。而这么做将耗费大量的时间和计算资源。此外,Linux 内核必需在虚拟机或者裸机上启动才能进行测试。而访问某些硬件架构需要额外的开销或者非常慢的仿真。因此,人们必须确定一组能够触发错误或者验证修复的测试集。
#### CKI 团队如何运作?
Red Hat 公司的 CKI 团队当前正追踪来自数个内部内核分支和上游的[稳定内核分支树](https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html)等内核分支的更改。我们关注每个代码库的两类关键事件:
1. 当维护人员合并 PR 或者补丁时,代码库变化后的最终结果。
2. 当开发人员通过 Patchwork 或者稳定补丁队列提议合并变更时。
当这些事件发生时,自动化工具开始执行,[GitLab CI 管道](https://docs.gitlab.com/ee/ci/pipelines.html)开始进行测试。一旦管道开始执行 [linting](https://en.wikipedia.org/wiki/Lint_(software)) 脚本、合并每一个补丁,并为多种硬件架构编译内核,真正的测试便开始了。我们会在六分钟内完成四种硬件架构的内核编译工作,并且通常会在两个小时或更短的时间内将反馈提交到稳定邮件列表中。(自 2019 年 1 月起)每月执行超过 100,000 次内核测试,并完成了超过 11,000 个 GitLab 管道。
每个内核都会在本地硬件架构上启动,其中包含:
* [aarch64](https://en.wikipedia.org/wiki/ARM_architecture):64 位 [ARM](https://www.arm.com/),例如 [Cavium(当前是 Marvell)ThunderX](https://www.marvell.com/server-processors/thunderx-arm-processors/)。
* [ppc64/ppc64le](https://en.wikipedia.org/wiki/Ppc64):大端和小端的 [IBM POWER](https://www.ibm.com/it-infrastructure/power) 系统。
* [s390x](https://en.wikipedia.org/wiki/Linux_on_z_Systems):[IBM Zseries](https://www.ibm.com/it-infrastructure/z) 大型机
* [x86\_64](https://en.wikipedia.org/wiki/X86-64):[Intel](https://www.intel.com/) 和 [AMD](https://www.amd.com/) 工作站、笔记本和服务器。
这些内核上运行了包括 [Linux 测试项目(LTP)](https://github.com/linux-test-project/ltp)在内的多个测试,其中包括使用常用测试工具的大量测试。我们 CKI 团队开源了超过 44 个测试并将继续开源更多测试。
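为了直观地说明这类管道各个阶段大致在做什么,下面是一个极简的假设性示意脚本。它并不是 CKI 团队的实际配置,只是在内核源码树中把 lint 和编译两个阶段串联起来演示:

```
#!/bin/bash
# 假设性示意:在内核源码树根目录下串联 lint 和编译两个阶段
set -e
./scripts/checkpatch.pl --git HEAD   # 对最近一个提交做静态检查
make defconfig                       # 生成默认内核配置
make -j"$(nproc)"                    # 编译内核
# 真实的管道还会把生成的内核部署到各种架构的硬件上启动,并运行 LTP 等测试
```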
### 参与其中
上游的内核测试工作日渐增多。包括 [Google](https://www.google.com/)、Intel、[Linaro](https://www.linaro.org/) 和 [Sony](https://www.sony.com/) 在内的许多公司为各种内核提供了测试输出。每一项工作都专注于为上游内核以及每个公司的客户群带来价值。
如果你或者你的公司想要参与这一工作,请参加在 9 月份在葡萄牙里斯本举办的 [Linux Plumbers Conference 2019](https://www.linuxplumbersconf.org/)。在会议结束后的两天加入我们的 Kernel CI hackfest 活动,并推动快速内核测试的发展。
更多详细信息,[请见](https://docs.google.com/presentation/d/1T0JaRA0wtDU0aTWTyASwwy_ugtzjUcw_ZDmC5KFzw-A/edit?usp=sharing)我在 Texas Linux Fest 2019 上的演讲。
---
via: <https://opensource.com/article/19/6/continuous-kernel-integration-linux>
作者:[Major Hayden](https://opensource.com/users/mhayden) 选题:[lujun9972](https://github.com/lujun9972) 译者:[LazyWolfLin](https://github.com/LazyWolfLin) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | With 14,000 changesets per release from over 1,700 different developers, it's clear that the Linux kernel moves quickly, and brings plenty of complexity. Kernel bugs range from small annoyances to larger problems, such as system crashes and data loss.
As the call for continuous integration (CI) grows for more and more projects, the [Continuous Kernel Integration (CKI)](https://cki-project.org/) team forges ahead with a single mission: prevent bugs from being merged into the kernel.
## Linux testing problems
Many Linux distributions test the Linux kernel when needed. This testing often occurs around release time, or when users find a bug.
Unrelated issues sometimes appear, and maintainers scramble to find which patch in a changeset full of tens of thousands of patches caused the new, unrelated bug. Diagnosing the bug may require specialized hardware, a series of triggers, and specialized knowledge of that portion of the kernel.
### CI and Linux
Most modern software repositories have some sort of automated CI testing that tests commits before they find their way into the repository. This automated testing allows the maintainers to find software quality issues, along with most bugs, by reviewing the CI report. Simpler projects, such as a Python library, come with tons of tools to make this process easier.
Linux must be configured and compiled prior to any testing. Doing so takes time and compute resources. In addition, that kernel must boot in a virtual machine or on a bare metal machine for testing. Getting access to certain system architectures requires additional expense or very slow emulation. From there, someone must identify a set of tests which trigger the bug or verify the fix.
### How the CKI team works
The CKI team at Red Hat currently follows changes from several internal kernels, as well as upstream kernels such as the [stable kernel tree](https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html). We watch for two critical events in each repository:
-
When maintainers merge pull requests or patches, and the resulting commits in the repository change.
-
When developers propose changes for merging via patchwork or the stable patch queue.
As these events occur, automation springs into action and [GitLab CI pipelines](https://docs.gitlab.com/ee/ci/pipelines.html) begin the testing process. Once the pipeline runs [linting](https://en.wikipedia.org/wiki/Lint_(software)) scripts, merges any patches, and compiles the kernel for multiple architectures, the real testing begins. We compile kernels in under six minutes for four architectures and submit feedback to the stable mailing list usually in two hours or less. Over 100,000 kernel tests run each month and over 11,000 GitLab pipelines have completed (since January 2019).
Each kernel is booted on its native architecture, which includes:
● [aarch64](https://en.wikipedia.org/wiki/ARM_architecture): 64-bit [ARM](https://www.arm.com/), such as the [Cavium (now Marvell) ThunderX](https://www.marvell.com/server-processors/thunderx-arm-processors/).
● [ppc64/ppc64le](https://en.wikipedia.org/wiki/Ppc64): Big and little endian [IBM POWER](https://www.ibm.com/it-infrastructure/power) systems.
● [s390x](https://en.wikipedia.org/wiki/Linux_on_z_Systems): [IBM Zseries](https://www.ibm.com/it-infrastructure/z) mainframes.
● [x86_64](https://en.wikipedia.org/wiki/X86-64): [Intel](https://www.intel.com/) and [AMD](https://www.amd.com/) workstations, laptops, and servers.
Multiple tests run on these kernels, including the [Linux Test Project (LTP)](https://github.com/linux-test-project/ltp), which contains a myriad of tests using a common test harness. My CKI team open-sourced over 44 tests with more on the way.
## Get involved
The upstream kernel testing effort grows day-by-day. Many companies provide test output for various kernels, including [Google](https://www.google.com/), Intel, [Linaro](https://www.linaro.org/), and [Sony](https://www.sony.com/). Each effort is focused on bringing value to the upstream kernel as well as each company’s customer base.
If you or your company want to join the effort, please come to the [Linux Plumbers Conference 2019](https://www.linuxplumbersconf.org/) in Lisbon, Portugal. Join us at the Kernel CI hackfest during the two days after the conference, and drive the future of rapid kernel testing.
For more details, [review the slides](https://docs.google.com/presentation/d/1T0JaRA0wtDU0aTWTyASwwy_ugtzjUcw_ZDmC5KFzw-A/edit?usp=sharing) from my Texas Linux Fest 2019 talk.
|
11,178 | Debian 10(Buster)安装后要做的前 8 件事 | https://www.linuxtechi.com/things-to-do-after-installing-debian-10/ | 2019-08-02T14:04:40 | [
"Debian"
] | https://linux.cn/article-11178-1.html | Debian 10 的代号是 Buster,它是来自 Debian 家族的最新 LTS 发布版本,并包含大量的特色功能。因此,如果你已经在你的电脑上安装了 Debian 10,并在思考接下来该做什么,那么,请继续阅读这篇文章直到结尾,因为我们为你提供在安装 Debian 10 后要做的前 8 件事。对于还没有安装 Debian 10 的人们,请阅读这篇指南 [图解 Debian 10 (Buster) 安装步骤](/article-11083-1.html)。 让我们继续这篇文章。

### 1) 安装和配置 sudo
在设置完成 Debian 10 后,你需要做的第一件事是安装 sudo 软件包,因为它能够使你获得管理员权限来安装你需要的软件包。为安装和配置 sudo,请使用下面的命令:
变成 root 用户,然后使用下面的命令安装 sudo 软件包,
```
root@linuxtechi:~$ su -
Password:
root@linuxtechi:~# apt install sudo -y
```
添加你的本地用户到 sudo 组,使用下面的 [usermod](https://www.linuxtechi.com/linux-commands-to-manage-local-accounts/) 命令,
```
root@linuxtechi:~# usermod -aG sudo pkumar
root@linuxtechi:~#
```
现在验证本地用户是否获得了 sudo 权限:
```
root@linuxtechi:~$ id
uid=1000(pkumar) gid=1000(pkumar) groups=1000(pkumar),27(sudo)
root@linuxtechi:~$ sudo vi /etc/hosts
[sudo] password for pkumar:
root@linuxtechi:~$
```
### 2) 校正日期和时间
在你成功配置 sudo 软件包后,接下来,你需要根据你的位置来校正日期和时间。为了校正日期和时间,
转到系统 **设置** –> **详细说明** –> **日期和时间** ,然后更改为适合你的位置的时区。

一旦时区被更改,你可以看到时钟中的时间自动更改。
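如果你更喜欢在终端中完成这一步,也可以使用 systemd 自带的 `timedatectl` 命令。下面的时区 `Asia/Shanghai` 只是示例,请换成适合你所在位置的时区:

```
root@linuxtechi:~$ timedatectl list-timezones | grep -i shanghai
root@linuxtechi:~$ sudo timedatectl set-timezone Asia/Shanghai
root@linuxtechi:~$ timedatectl status
```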
### 3) 应用所有更新
在 Debian 10 安装后,建议安装所有 Debian 10 软件包存储库中可用的更新,执行下面的 `apt` 命令:
```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt upgrade -y
```
**注意:** 如果你是 vi 编辑器的忠实粉丝,那么使用下面的 `apt` 命令安装 `vim`:
```
root@linuxtechi:~$ sudo apt install vim -y
```
### 4) 安装 Flash 播放器插件
默认情况下,Debian 10(Buster)存储库不包含 Flash 插件,因此,用户需要按照下面的介绍在他们的系统中安装 Flash 播放器。
为 Flash 播放器配置存储库:
```
root@linuxtechi:~$ echo "deb http://ftp.de.debian.org/debian buster main contrib" | sudo tee -a /etc/apt/sources.list
deb http://ftp.de.debian.org/debian buster main contrib
root@linuxtechi:~$
```
现在使用下面的命令更新软件包索引:
```
root@linuxtechi:~$ sudo apt update
```
使用下面的 `apt` 命令安装 Flash 插件:
```
root@linuxtechi:~$ sudo apt install pepperflashplugin-nonfree -y
```
一旦软件包被成功安装,接下来,尝试播放 YouTube 中的视频:

### 5) 安装软件,如 VLC、Skype、FileZilla 和截图工具
这样,我们已经启用了 Flash 播放器,现在是时候在我们的 Debian 10 系统中安装其它软件了,如 VLC、Skype、Filezilla 和截图工具(flameshot)。
#### 安装 VLC 多媒体播放器
为在你的系统中安装 VLC 播放器,使用下面的 `apt` 命令:
```
root@linuxtechi:~$ sudo apt install vlc -y
```
在成功安装 VLC 播放器后,尝试播放你喜欢的视频。

#### 安装 Skype
首先,下载最新的 Skype 软件包:
```
root@linuxtechi:~$ wget https://go.skype.com/skypeforlinux-64.deb
```
接下来,使用 `apt` 命令安装软件包:
```
root@linuxtechi:~$ sudo apt install ./skypeforlinux-64.deb
```
在成功安装 Skype 后,尝试访问它,并输入你的用户名和密码。

#### 安装 Filezilla
为在你的系统中安装 Filezilla,使用下面的 `apt` 命令,
```
root@linuxtechi:~$ sudo apt install filezilla -y
```
一旦 FileZilla 软件包被成功安装,尝试访问它。

#### 安装截图工具(flameshot)
使用下面的命令来安装截图工具:flameshot,
```
root@linuxtechi:~$ sudo apt install flameshot -y
```
**注意:** Shutter 工具在 Debian 10 中已被移除。

### 6) 启用和启动防火墙
始终建议启用防火墙来保障你的网络安全。如果你希望在 Debian 10 中启用防火墙,**UFW**(Uncomplicated Firewall,简单防火墙)是控制防火墙的最佳工具。UFW 在 Debian 存储库中可用,它非常容易安装,如下:
```
root@linuxtechi:~$ sudo apt install ufw
```
在你安装 UFW 后,接下来的步骤是设置防火墙:先默认拒绝所有的传入流量、允许所有的传出流量,然后只放行需要的传入端口,如 ssh、http 和 https。
```
root@linuxtechi:~$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)
root@linuxtechi:~$ sudo ufw default allow outgoing
Default outgoing policy changed to 'allow'
(be sure to update your rules accordingly)
root@linuxtechi:~$
```
允许 SSH 端口:
```
root@linuxtechi:~$ sudo ufw allow ssh
Rules updated
Rules updated (v6)
root@linuxtechi:~$
```
假使你在系统中已经安装 Web 服务器,那么使用下面的 `ufw` 命令来在防火墙中允许它们的端口:
```
root@linuxtechi:~$ sudo ufw allow 80
Rules updated
Rules updated (v6)
root@linuxtechi:~$ sudo ufw allow 443
Rules updated
Rules updated (v6)
root@linuxtechi:~$
```
最后,你可以使用下面的命令启用 UFW:
```
root@linuxtechi:~$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
root@linuxtechi:~$
```
假使你想检查你的防火墙的状态,你可以使用下面的命令检查它:
```
root@linuxtechi:~$ sudo ufw status
```
### 7) 安装虚拟化软件(VirtualBox)
安装 Virtualbox 的第一步是将 Oracle VirtualBox 存储库的公钥导入到你的 Debian 10 系统:
```
root@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
OK
root@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
OK
root@linuxtechi:~$
```
如果导入成功,你将看到一个 “OK” 显示信息。
接下来,你需要添加存储库到仓库列表:
```
root@linuxtechi:~$ sudo add-apt-repository "deb http://download.virtualbox.org/virtualbox/debian buster contrib"
root@linuxtechi:~$
```
最后,是时候在你的系统中安装 VirtualBox 6.0:
```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install virtualbox-6.0 -y
```
一旦 VirtualBox 软件包被成功安装,尝试访问它,并开始创建虚拟机。

### 8) 安装最新的 AMD 驱动程序
最后,你也可以安装需要的附加 AMD 显卡驱动程序(如 ATI 专有驱动)和 Nvidia 图形驱动程序。为安装最新的 AMD 驱动程序,首先,我们需要修改 `/etc/apt/sources.list` 文件,在包含 **main** 和 **contrib** 的行中添加 **non-free** 单词,示例如下显示:
```
root@linuxtechi:~$ sudo vi /etc/apt/sources.list
```
```
...
deb http://deb.debian.org/debian/ buster main non-free contrib
deb-src http://deb.debian.org/debian/ buster main non-free contrib
deb http://security.debian.org/debian-security buster/updates main contrib non-free
deb-src http://security.debian.org/debian-security buster/updates main contrib non-free
deb http://ftp.us.debian.org/debian/ buster-updates main contrib non-free
...
```
现在,使用下面的 `apt` 命令来在 Debian 10 系统中安装最新的 AMD 驱动程序。
```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install firmware-linux firmware-linux-nonfree libdrm-amdgpu1 xserver-xorg-video-amdgpu -y
```
这就是这篇文章的全部内容,我希望你了解在安装 Debian 10 后应该做什么。请在下面的评论区,分享你的反馈和评论。
---
via: <https://www.linuxtechi.com/things-to-do-after-installing-debian-10/>
作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Debian 10 code name Buster is the latest LTS release from the house of Debian and the latest release comes packed with a lot of features. So if you have already installed the Debian 10 in your system and thinking what next, then please continue reading the article till the end as we provide you with the top 8 things to do after installing Debian 10. For those who haven’t installed Debian 10, please read this guide [ Debian 10 (Buster) Installation Steps with Screenshots](https://www.linuxtechi.com/debian-10-buster-installation-guide/). So lets continue with the article:
#### 1) Install and Configure sudo
Once you complete setting up Debian 10 in your system, the first thing you need to do is install the sudo package as it enables you to get administrative privileges to install any package you need. In order to install and configure sudo, please use the following command:
Become the root user and then install sudo package using the beneath command,
pkumar@linuxtechi:~$ su - Password: root@linuxtechi:~# apt install sudo -y
Add your local user to sudo group using the following [usermod](https://www.linuxtechi.com/linux-commands-to-manage-local-accounts/) command,
root@linuxtechi:~# usermod -aG sudo pkumar root@linuxtechi:~#
Now verify whether local user got the sudo rights or not,
pkumar@linuxtechi:~$ id uid=1000(pkumar) gid=1000(pkumar) groups=1000(pkumar),27(sudo) pkumar@linuxtechi:~$ sudo vi /etc/hosts [sudo] password for pkumar: pkumar@linuxtechi:~$
#### 2) Fix Date and time
Once you’ve successfully configured the sudo package, next thing you need to fix the date and time according to your location. In order to fix the date and time,
Go to System **Settings** –> **Details** –> **Date and Time** and then change your time zone that suits to your location.
Once the time zone is changed, you can see the time changed automatically in your clock
#### 3) Apply all updates
After Debian 10 installation, it is recommended to install all updates which are available via Debian 10 package repositories, execute the beneath apt command,
pkumar@linuxtechi:~$ sudo apt update pkumar@linuxtechi:~$ sudo apt upgrade -y
**Note:** If you are a big fan of vi editor then install vim using the following command apt command,
pkumar@linuxtechi:~$ sudo apt install vim -y
#### 4) Tweak Desktop Settings using Tweak tool
When we install Gnome Desktop then tweak tool gets installed, as the name suggests it helps us to change or tweak our desktop settings,
**Access Tweak tool :**
Click on “Tweaks” icon and change the settings that suits to your desktop
#### 5) Install Software like VLC, SKYPE, FileZilla and Screenshot tool
So now we’ve enabled flash player, it is time to install all other software like VLC, Skype, Filezilla and screenshot tool like flameshot in our Debian 10 system.
**Install VLC Media Player**
To install VLC player in your system using apt command,
pkumar@linuxtechi:~$ sudo apt install vlc -y
After the successful installation of VLC player, try to play your favorite videos
**Install Skype:**
First download the latest Skype package as shown below:
pkumar@linuxtechi:~$ wget https://go.skype.com/skypeforlinux-64.deb
Next install the package using the apt command as shown below:
pkumar@linuxtechi:~$ sudo apt install ./skypeforlinux-64.deb
After successful installation of Skype, try to access it and enter your Credentials,
**Install Filezilla**
To install Filezilla in your system use the following apt command,
pkumar@linuxtechi:~$ sudo apt install filezilla -y
Once FileZilla package is installed successfully, try to access it,
**Install Screenshot tool (flameshot)**
Use the following command to install screenshoot tool flameshot,
pkumar@linuxtechi:~$ sudo apt install flameshot -y
**Note:** Shutter Tool in Debian 10 has been removed
#### 6) Enable and Start Firewall
It is always recommended to start firewall to make your secure over the network. If you are looking to enable firewall in Debian 10, **UFW** (Uncomplicated Firewall) is the best tool handle firewall. Since UFW is available in the Debian repositories, it is quite easy to install as shown below:
pkumar@linuxtechi:~$ sudo apt install ufw
Once you have installed UFW, the next step is to set up the firewall. So, to setup the firewall, disable all incoming traffic by denying the ports and allow only the required ports like ssh, http and https.
pkumar@linuxtechi:~$ sudo ufw default deny incoming Default incoming policy changed to 'deny' (be sure to update your rules accordingly) pkumar@linuxtechi:~$ sudo ufw default allow outgoing Default outgoing policy changed to 'allow' (be sure to update your rules accordingly) pkumar@linuxtechi:~$
Allow SSH port
pkumar@linuxtechi:~$ sudo ufw allow ssh Rules updated Rules updated (v6) pkumar@linuxtechi:~$
In case you have installed Web Server in your system then allow their ports too in the firewall using the following ufw command,
pkumar@linuxtechi:~$ sudo ufw allow 80 Rules updated Rules updated (v6) pkumar@linuxtechi:~$ sudo ufw allow 443 Rules updated Rules updated (v6) pkumar@linuxtechi:~$
Finally, you can enable UFW using the following command
pkumar@linuxtechi:~$ sudo ufw enable Command may disrupt existing ssh connections. Proceed with operation (y|n)? y Firewall is active and enabled on system startup pkumar@linuxtechi:~$
In case if you want to check the status of your firewall, you can check it using the following command
pkumar@linuxtechi:~$ sudo ufw status
#### 7) Install Virtualization Software (VirtualBox)
First step in installing Virtualbox is by importing the public keys of the Oracle VirtualBox repository to your Debian 10 system
pkumar@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - OK pkumar@linuxtechi:~$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add - OK pkumar@linuxtechi:~$
If the import is successful, you will see a “OK” message displayed.
Next you need to add the repository to the source list
pkumar@linuxtechi:~$ sudo add-apt-repository "deb http://download.virtualbox.org/virtualbox/debian buster contrib" pkumar@linuxtechi:~$
Finally, it is time to install VirtualBox 6.0 in your system
pkumar@linuxtechi:~$ sudo apt update pkumar@linuxtechi:~$ sudo apt install virtualbox-6.0 -y
Once VirtualBox packages are installed successfully, try access it and start creating virtual machines,
#### 8) Install latest AMD Drivers
Finally, you can also install additional AMD drivers needed like the graphics card, ATI Proprietary and Nvidia Graphics drivers. To Install the latest AMD Drivers, first we must modify **/etc/apt/sources.list** file, add **non-free** word in lines which contains **main** and **contrib**, example is shown below
pkumar@linuxtechi:~$ sudo vi /etc/apt/sources.list ………………… deb http://deb.debian.org/debian/ buster main non-free contrib deb-src http://deb.debian.org/debian/ buster main non-free contrib deb http://security.debian.org/debian-security buster/updates main contrib non-free deb-src http://security.debian.org/debian-security buster/updates main contrib non-free deb http://ftp.us.debian.org/debian/ buster-updates main contrib non-free ……………………
Now use the following apt commands to install latest AMD drivers in Debian 10 system
pkumar@linuxtechi:~$ sudo apt update pkumar@linuxtechi:~$ sudo apt install firmware-linux firmware-linux-nonfree libdrm-amdgpu1 xserver-xorg-video-amdgpu -y
That’s all from this article, I hope you got an idea what one should after installing Debian 10. Please do share your feedback and comments in comments section below.
|
11,179 | 不可或缺的 Bash 别名 | https://opensource.com/article/19/7/bash-aliases | 2019-08-03T09:59:00 | [
"别名"
] | https://linux.cn/article-11179-1.html |
>
> 厌倦了一遍又一遍地输入相同的长命令?你觉得在命令行上工作效率低吗?Bash 别名可以为你创造一个与众不同的世界。
>
>
>

Bash 别名是一种用新的命令补充或覆盖 Bash 命令的方法。Bash 别名使用户可以轻松地在 [POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) 终端中自定义其体验。它们通常定义在 `$HOME/.bashrc` 或 `$HOME/.bash_aliases` 中(后者由 `$HOME/.bashrc` 加载)。
大多数发行版在新用户帐户的默认 `.bashrc` 文件中至少添加了一些流行的别名。这些可以用来简单演示 Bash 别名的语法:
```
alias ls='ls -F'
alias ll='ls -lh'
```
但并非所有发行版都附带预先添加好的别名。如果你想手动添加别名,则必须将它们加载到当前的 Bash 会话中:
```
$ source ~/.bashrc
```
否则,你可以关闭终端并重新打开它,以便重新加载其配置文件。
通过 Bash 初始化脚本中定义的那些别名,你可以键入 `ll` 而得到 `ls -l` 的结果,当你键入 `ls` 时,得到也不是原来的 [ls](https://opensource.com/article/19/7/master-ls-command) 的普通输出。
那些别名很棒,但它们只是浅尝辄止。以下是十大 Bash 别名,一旦你试过它们,你会发现再也不能离开它们。
### 首先设置
在开始之前,创建一个名为 `~/.bash_aliases` 的文件:
```
$ touch ~/.bash_aliases
```
然后,确认这些代码出现在你的 `~/.bashrc` 文件当中:
```
if [ -e $HOME/.bash_aliases ]; then
source $HOME/.bash_aliases
fi
```
如果你想亲自尝试本文中的任何别名,请将它们输入到 `.bash_aliases` 文件当中,然后使用 `source ~/.bashrc` 命令将它们加载到当前 Bash 会话中。
### 按文件大小排序
如果你最初使用过 GNOME 中的 Nautilus、MacOS 中的 Finder 或 Windows 中的资源管理器等 GUI 文件管理器,那么你很可能习惯了按文件大小排序文件列表。你也可以在终端上做到这一点,但相应的命令不是很简洁。
将此别名添加到 GNU 系统上的配置中:
```
alias lt='ls --human-readable --size -1 -S --classify'
```
此别名将 `lt` 替换为 `ls` 命令,该命令在单个列中显示每个项目的大小,然后按大小对其进行排序,并使用符号表示文件类型。加载新别名,然后试一下:
```
$ source ~/.bashrc
$ lt
total 344K
140K configure*
44K aclocal.m4
36K LICENSE
32K config.status*
24K Makefile
24K Makefile.in
12K config.log
8.0K README.md
4.0K info.slackermedia.Git-portal.json
4.0K git-portal.spec
4.0K flatpak.path.patch
4.0K Makefile.am*
4.0K dot-gitlab.ci.yml
4.0K configure.ac*
0 autom4te.cache/
0 share/
0 bin/
0 install-sh@
0 compile@
0 missing@
0 COPYING@
```
在 MacOS 或 BSD 上,`ls` 命令没有相同的选项,因此这个别名可以改为:
```
alias lt='du -sh * | sort -h'
```
这个版本的结果稍有不同:
```
$ du -sh * | sort -h
0 compile
0 COPYING
0 install-sh
0 missing
4.0K configure.ac
4.0K dot-gitlab.ci.yml
4.0K flatpak.path.patch
4.0K git-portal.spec
4.0K info.slackermedia.Git-portal.json
4.0K Makefile.am
8.0K README.md
12K config.log
16K bin
24K Makefile
24K Makefile.in
32K config.status
36K LICENSE
44K aclocal.m4
60K share
140K configure
476K autom4te.cache
```
实际上,即使在 Linux 上,上面这个命令也很有用,因为使用 `ls` 列出的目录和符号链接的大小为 0,这可能不是你真正想要的信息。使用哪个看你自己的喜好。
*感谢 Brad Alexander 提供的这个别名的思路。*
### 只查看挂载的驱动器
`mount` 命令过去很简单。只需一个命令,你就可以获得计算机上所有已挂载的文件系统的列表,它经常用于概览连接到工作站有哪些驱动器。在过去看到超过三、四个条目就会令人印象深刻,因为大多数计算机没有那么多的 USB 端口,因此这个结果还是比较好查看的。
现在计算机有点复杂,有 LVM、物理驱动器、网络存储和虚拟文件系统,`mount` 的结果就很难一目了然:
```
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8131024k,nr_inodes=2032756,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
[...]
/dev/nvme0n1p2 on /boot type ext4 (rw,relatime,seclabel)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
[...]
gvfsd-fuse on /run/user/100977/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=100977,group_id=100977)
/dev/sda1 on /run/media/seth/pocket type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
/dev/sdc1 on /run/media/seth/trip type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
```
要解决这个问题,试试这个别名:
```
alias mnt="mount | awk -F' ' '{ printf \"%s\t%s\n\",\$1,\$3; }' | column -t | egrep ^/dev/ | sort"
```
此别名使用 `awk` 按列解析 `mount` 的输出,将输出减少到你可能想要查找的内容(挂载了哪些硬盘驱动器,而不是文件系统):
```
$ mnt
/dev/mapper/fedora-root /
/dev/nvme0n1p1 /boot/efi
/dev/nvme0n1p2 /boot
/dev/sda1 /run/media/seth/pocket
/dev/sdc1 /run/media/seth/trip
```
在 MacOS 上,`mount` 命令不提供非常详细的输出,因此这个别名可能过度精简了。但是,如果你更喜欢简洁的报告,请尝试以下方法:
```
alias mnt='mount | grep -E ^/dev | column -t'
```
结果:
```
$ mnt
/dev/disk1s1 on / (apfs, local, journaled)
/dev/disk1s4 on /private/var/vm (apfs, local, noexec, journaled, noatime, nobrowse)
```
### 在你的 grep 历史中查找命令
有时你好不容易弄清楚了如何在终端完成某件事,并觉得自己永远不会忘记你刚学到的东西。然后,一个小时过去之后你就完全忘记了你做了什么。
搜索 Bash 历史记录是每个人不时要做的事情。如果你确切地知道要搜索的内容,可以使用 `Ctrl + R` 对历史记录进行反向搜索,但有时你无法记住要查找的确切命令。
这是使该任务更容易的别名:
```
alias gh='history|grep'
```
这是如何使用的例子:
```
$ gh bash
482 cat ~/.bashrc | grep _alias
498 emacs ~/.bashrc
530 emacs ~/.bash_aliases
531 source ~/.bashrc
```
### 按修改时间排序
每个星期一都会这样:你坐在你的电脑前开始工作,你打开一个终端,你发现你已经忘记了上周五你在做什么。你需要的是列出最近修改的文件的别名。
你可以使用 `ls` 命令创建别名,以帮助你找到上次离开的位置:
```
alias left='ls -t -1'
```
输出很简单,但如果你愿意,可以使用 `-l` 选项(长列表格式)扩展它。这个别名列出的显示如下:
```
$ left
demo.jpeg
demo.xcf
design-proposal.md
rejects.txt
brainstorm.txt
query-letter.xml
```
### 文件计数
如果你需要知道目录中有多少文件,有一个解决方案是 UNIX 命令构造的最经典示例之一:使用 `ls` 命令列出文件,用 `-1` 选项将其输出控制为只有一列,然后通过管道把输出交给 `wc`(单词计数)命令来统计行数。
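这个经典组合写出来大致如下(仅作演示):
```
$ ls -1 | wc -l
```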
这是 UNIX 理念如何允许用户使用小型的系统组件构建自己的解决方案的精彩演示。但如果你碰巧每天都要做几次,这个命令组合输入起来也不短;而且不加 `-R` 选项时它无法统计子目录里的文件,加上 `-R` 又会在输出中引入额外的行,使计数失去意义。
而这个别名使这个过程变得简单:
```
alias count='find . -type f | wc -l'
```
这个别名会计算文件,忽略目录,但**不会**忽略目录的内容。如果你有一个包含两个目录的项目文件夹,每个目录包含两个文件,则该别名将返回 4,因为整个项目中有 4 个文件。
```
$ ls
foo bar
$ count
4
```
### 创建 Python 虚拟环境
你用 Python 编程吗?
你用 Python 编写了很多程序吗?
如果是这样,那么你就知道创建 Python 虚拟环境至少需要 53 次击键。
这个数字里有 49 次是多余的,它很容易被两个名为 `ve` 和 `va` 的新别名所解决:
```
alias ve='python3 -m venv ./venv'
alias va='source ./venv/bin/activate'
```
运行 `ve` 会创建一个名为 `venv` 的新目录,其中包含 Python 3 的常用虚拟环境文件系统。`va` 别名则在当前 shell 中激活该环境:
```
$ cd my-project
$ ve
$ va
(venv) $
```
### 增加一个复制进度条
每个人都会吐槽进度条,因为它们似乎总是不合时宜。然而,在内心深处,我们似乎都想要它们。UNIX 的 `cp` 命令没有进度条,但它有一个 `-v` 选项用于显示详细信息,它会把每个被复制的文件名回显到终端。这是一个相当不错的技巧,但是当你复制的是单个大文件、想知道该文件还有多少内容尚未传输时,它就帮不上什么忙了。
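举个例子,`cp -v` 的详细输出大致是这样的(文件名与路径仅为示意):
```
$ cp -v bigfile.flac /run/media/seth/audio/
'bigfile.flac' -> '/run/media/seth/audio/bigfile.flac'
```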
`pv` 命令可以在复制期间提供进度条,但它并不常用。另一方面,`rsync` 命令包含在几乎所有的 POSIX 系统的默认安装中,并且它被普遍认为是远程和本地复制文件的最智能方法之一。
更好的是,它有一个内置的进度条。
```
alias cpv='rsync -ah --info=progress2'
```
像使用 `cp` 命令一样使用此别名:
```
$ cpv bigfile.flac /run/media/seth/audio/
3.83M 6% 213.15MB/s 0:00:00 (xfr#4, to-chk=0/4)
```
使用此命令的一个有趣的副作用是 `rsync` 无需 `-r` 标志就可以复制文件和目录,而 `cp` 则需要。
### 避免意外删除
你不应该使用 `rm` 命令。`rm` 手册甚至这样说:
>
> **警告:**如果使用 `rm` 删除文件,通常可以恢复该文件的内容。如果你想要更加确保内容真正无法恢复,请考虑使用 `shred`。
>
>
>
如果要删除文件,则应将文件移动到“废纸篓”,就像使用桌面时一样。
POSIX 使这很简单,因为垃圾桶是文件系统中可访问的一个实际位置。该位置可能会发生变化,具体取决于你的平台:在 [FreeDesktop](https://www.freedesktop.org/wiki/) 上,“垃圾桶”位于 `~/.local/share/Trash`,而在 MacOS 上则是 `~/.Trash`,但无论如何,它只是一个目录,你可以将文件藏在那个看不见的地方,直到你准备永久删除它们为止。
这个简单的别名提供了一种从终端将文件扔进垃圾桶的方法:
```
alias tcn='mv --force -t ~/.local/share/Trash '
```
该别名使用一个鲜为人知的 `mv` 标志(`-t`),使你能够提供作为最终移动目标的参数,而忽略了首先列出要移动的文件的通常要求。现在,你可以使用新命令将文件和文件夹移动到系统垃圾桶:
```
$ ls
foo bar
$ tcn foo
$ ls
bar
```
现在文件已经“消失”了,直到你惊出一身冷汗、意识到自己还需要它为止。此时,你可以从系统垃圾桶中把它抢救回来;别忘了顺便感谢一下 Bash 和 `mv` 的开发者。
**注意:**如果你需要一个具有更好的 FreeDesktop 兼容性的更强大的垃圾桶命令,请参阅 [Trashy](https://gitlab.com/trashy/trashy)。
### 简化 Git 工作流
每个人都有自己独特的工作流程,但无论如何,通常都会有重复的任务。如果你经常使用 Git,那么你可能会发现自己经常重复的一些操作序列。也许你会发现自己回到主分支并整天一遍又一遍地拉取最新的变化,或者你可能发现自己创建了标签然后将它们推到远端,抑或可能完全是其它的什么东西。
无论让你厌倦一遍遍输入的 Git 魔咒是什么,你都可以通过 Bash 别名减轻一些痛苦。很大程度上,由于它能够将参数传递给钩子,Git 拥有着丰富的内省命令,可以让你不必在 Bash 中执行那些丑陋冗长的命令。
例如,虽然你可能很难在 Bash 中找到项目的顶级目录(就 Bash 而言,这只是一个随意的指称,因为计算机的绝对顶层是根目录),但 Git 通过一个简单的查询就能知道项目的顶级目录在哪里。如果你研究过 Git 钩子,你会发现自己能够获取 Bash 一无所知的各种信息,而你可以借助 Bash 别名来利用这些信息。
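比如,下面这个简单查询就会直接输出项目的顶级目录(路径仅为示意):
```
$ git rev-parse --show-toplevel
/home/seth/my-project
```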
这是一个来查找 Git 项目的顶级目录的别名,无论你当前在哪个项目中工作,都可以将目录改变为顶级目录,切换到主分支,并执行 Git 拉取:
```
alias startgit='cd `git rev-parse --show-toplevel` && git checkout master && git pull'
```
这种别名绝不是一个普遍有用的别名,但它演示了一个相对简单的别名如何能够消除大量繁琐的导航、命令和等待提示。
一个更简单,可能更通用的别名将使你返回到 Git 项目的顶级目录。这个别名非常有用,因为当你在一个项目上工作时,该项目或多或少会成为你的“临时家目录”。它应该像回家一样简单,就像回你真正的家一样,这里有一个别名:
```
alias cg='cd `git rev-parse --show-toplevel`'
```
现在,命令 `cg` 将你带到 Git 项目的顶部,无论你下潜的目录结构有多深。
### 切换目录并同时查看目录内容
(据称)曾经一位著名科学家提出过,我们可以通过收集极客输入 `cd` 后跟 `ls` 消耗的能量来解决地球上的许多能量问题。
这是一种常见的用法,因为通常当你更改目录时,你都会有查看周围的内容的冲动或需要。
但是在你的计算机的目录树中移动并不一定是一个走走停停的过程。
这是一个作弊,因为它根本不是别名,但它是探索 Bash 功能的一个很好的借口。虽然别名非常适合快速替换一个命令,但 Bash 也允许你在 `.bashrc` 文件中添加本地函数(或者你加载到 `.bashrc` 中的单独函数文件,就像你的别名文件一样)。
为了保持模块化,创建一个名为 `~/.bash_functions` 的新文件,然后让你的 `.bashrc` 加载它:
```
if [ -e $HOME/.bash_functions ]; then
source $HOME/.bash_functions
fi
```
在该函数文件中,添加这些代码:
```
function cl() {
DIR="$*";
# if no DIR given, go home
if [ $# -lt 1 ]; then
DIR=$HOME;
fi;
builtin cd "${DIR}" && \
# use your preferred ls command
ls -F --color=auto
}
```
将函数加载到 Bash 会话中,然后尝试:
```
$ source ~/.bash_functions
$ cl Documents
foo bar baz
$ pwd
/home/seth/Documents
$ cl ..
Desktop Documents Downloads
[...]
$ pwd
/home/seth
```
函数比别名更灵活,但有了这种灵活性,你就有责任确保代码有意义并达到你的期望。别名是简单的,所以要保持简单而有用。要正式修改 Bash 的行为,请使用保存到 `PATH` 环境变量中某个位置的函数或自定义的 shell 脚本。
附注:确实有一些巧妙的技巧可以把 `cd` 加 `ls` 这样的序列实现为别名,所以只要你足够耐心,即使是一个简单的别名也大有可为。
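下面是这类技巧的一个示意:在别名中内联定义并立即调用一个函数(函数名 `f` 只是占位,别名 `cls` 也仅为演示而取):
```
# cls Documents 展开后相当于:f(){ cd "$1" && ls -F; }; f Documents
alias cls='f(){ cd "$1" && ls -F; }; f'
```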
### 开始别名化和函数化吧
能够定制自己的环境,正是 Linux 有趣的地方;而效率的提升,则让 Linux 足以改变生活。从简单的别名开始,进阶到函数,并在评论中分享你那些不可或缺的别名吧!
---
via: <https://opensource.com/article/19/7/bash-aliases>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | A Bash alias is a method of supplementing or overriding Bash commands with new ones. Bash aliases make it easy for users to customize their experience in a [POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) terminal. They are often defined in **$HOME/.bashrc** or **$HOME/bash_aliases** (which must be loaded by **$HOME/.bashrc**).
Most distributions add at least some popular aliases in the default **.bashrc** file of any new user account. These are simple ones to demonstrate the syntax of a Bash alias:
```
alias ls='ls -F'
alias ll='ls -lh'
```
Not all distributions ship with pre-populated aliases, though. If you add aliases manually, then you must load them into your current Bash session:
```
$ source ~/.bashrc
```
Otherwise, you can close your terminal and re-open it so that it reloads its configuration file.
With those aliases defined in your Bash initialization script, you can then type **ll** and get the results of **ls -l**, and when you type **ls**, you get the output of **ls -F** instead of plain old [ls](https://opensource.com/article/19/7/master-ls-command).
Those aliases are great to have, but they just scratch the surface of what’s possible. Here are the top 10 Bash aliases that, once you try them, you won’t be able to live without.
## Set up first
Before beginning, create a file called **~/.bash_aliases**:
```
$ touch ~/.bash_aliases
```
Then, make sure that this code appears in your **~/.bashrc** file:
```
if [ -e $HOME/.bash_aliases ]; then
source $HOME/.bash_aliases
fi
```
If you want to try any of the aliases in this article for yourself, enter them into your **.bash_aliases** file, and then load them into your Bash session with the **source ~/.bashrc** command.
## Sort by file size
If you started your computing life with GUI file managers like Nautilus in GNOME, the Finder in MacOS, or Explorer in Windows, then you’re probably used to sorting a list of files by their size. You can do that in a terminal as well, but it’s not exactly succinct.
Add this alias to your configuration on a GNU system:
```
alias lt='ls --human-readable --size -1 -S --classify'
```
This alias replaces **lt** with an **ls** command that displays the size of each item, and then sorts it by size, in a single column, with a notation to indicate the kind of file. Load your new alias, and then try it out:
```
$ source ~/.bashrc
$ lt
total 344K
140K configure*
44K aclocal.m4
36K LICENSE
32K config.status*
24K Makefile
24K Makefile.in
12K config.log
8.0K README.md
4.0K info.slackermedia.Git-portal.json
4.0K git-portal.spec
4.0K flatpak.path.patch
4.0K Makefile.am*
4.0K dot-gitlab.ci.yml
4.0K configure.ac*
0 autom4te.cache/
0 share/
0 bin/
0 install-sh@
0 compile@
0 missing@
0 COPYING@
```
On MacOS or BSD, the **ls** command doesn’t have the same options, so this alias works instead:
```
alias lt='du -sh * | sort -h'
```
The results of this version are a little different:
```
$ du -sh * | sort -h
0 compile
0 COPYING
0 install-sh
0 missing
4.0K configure.ac
4.0K dot-gitlab.ci.yml
4.0K flatpak.path.patch
4.0K git-portal.spec
4.0K info.slackermedia.Git-portal.json
4.0K Makefile.am
8.0K README.md
12K config.log
16K bin
24K Makefile
24K Makefile.in
32K config.status
36K LICENSE
44K aclocal.m4
60K share
140K configure
476K autom4te.cache
```
In fact, even on Linux, that command is useful, because using **ls** lists directories and symlinks as being 0 in size, which may not be the information you actually want. It’s your choice.
*Thanks to Brad Alexander for this alias idea.*
## View only mounted drives
The **mount** command used to be so simple. With just one command, you could get a list of all the mounted filesystems on your computer, and it was frequently used for an overview of what drives were attached to a workstation. It used to be impressive to see more than three or four entries because most computers don’t have many more USB ports than that, so the results were manageable.
Computers are a little more complicated now, and between LVM, physical drives, network storage, and virtual filesystems, the results of **mount** can be difficult to parse:
```
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8131024k,nr_inodes=2032756,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
[...]
/dev/nvme0n1p2 on /boot type ext4 (rw,relatime,seclabel)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
[...]
gvfsd-fuse on /run/user/100977/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=100977,group_id=100977)
/dev/sda1 on /run/media/seth/pocket type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
/dev/sdc1 on /run/media/seth/trip type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
```
To solve that problem, try an alias like this:
```
alias mnt="mount | awk -F' ' '{ printf \"%s\t%s\n\",\$1,\$3; }' | column -t | egrep ^/dev/ | sort"
```
This alias uses **awk** to parse the output of **mount** by column, reducing the output to what you're probably looking for (what hard drives, and not file systems, are mounted):
```
$ mnt
/dev/mapper/fedora-root /
/dev/nvme0n1p1 /boot/efi
/dev/nvme0n1p2 /boot
/dev/sda1 /run/media/seth/pocket
/dev/sdc1 /run/media/seth/trip
```
On MacOS, the **mount** command doesn’t provide terribly verbose output, so an alias may be overkill. However, if you prefer a succinct report, try this:
```
alias mnt='mount | grep -E ^/dev | column -t'
```
The results:
```
$ mnt
/dev/disk1s1 on / (apfs, local, journaled)
/dev/disk1s4 on /private/var/vm (apfs, local, noexec, journaled, noatime, nobrowse)
```
## Find a command in your grep history
Sometimes you figure out how to do something in the terminal, and promise yourself that you’ll never forget what you’ve just learned. Then an hour goes by, and you’ve completely forgotten what you did.
Searching through your Bash history is something everyone has to do from time to time. If you know exactly what you’re searching for, you can use **Ctrl+R** to do a reverse search through your history, but sometimes you can’t remember the exact command you want to find.
Here’s an alias to make that task a little easier:
```
alias gh='history|grep'
```
Here’s an example of how to use it:
```
$ gh bash
482 cat ~/.bashrc | grep _alias
498 emacs ~/.bashrc
530 emacs ~/.bash_aliases
531 source ~/.bashrc
```
## Sort by modification time
It happens every Monday: You get to work, you sit down at your computer, you open a terminal, and you find you’ve forgotten what you were doing last Friday. What you need is an alias to list the most recently modified files.
You can use the **ls** command to create an alias to help you find where you left off:
```
alias left='ls -t -1'
```
The output is simple, although you can extend it with the **-l** (long listing) option if you prefer. The alias, as listed, displays this:
```
$ left
demo.jpeg
demo.xcf
design-proposal.md
rejects.txt
brainstorm.txt
query-letter.xml
```
## Count files
If you need to know how many files you have in a directory, the solution is one of the most classic examples of UNIX command construction: You list files with the **ls** command, control its output to be only one column with the **-1** option, and then pipe that output to the **wc** (word count) command to count how many lines of single files there are.
It’s a brilliant demonstration of how the UNIX philosophy allows users to build their own solutions using small system components. This command combination is also a lot to type if you happen to do it several times a day, and it doesn’t exactly work for a directory of directories without using the **-R** option, which introduces new lines to the output and renders the exercise useless.
Instead, this alias makes the process easy:
```
alias count='find . -type f | wc -l'
```
This one counts files, ignoring directories, but *not* the contents of directories. If you have a project folder containing two directories, each of which contains two files, the alias returns four, because there are four files in the entire project.
```
$ ls
foo bar
$ count
4
```
## Create a Python virtual environment
Do you code in Python?
Do you code in Python a lot?
If you do, then you know that creating a Python virtual environment requires, at the very least, 53 keystrokes.
That’s 49 too many, but that’s easily circumvented with two new aliases called **ve** and **va**:
```
alias ve='python3 -m venv ./venv'
alias va='source ./venv/bin/activate'
```
Running **ve** creates a new directory, called **venv**, containing the usual virtual environment filesystem for Python3. The **va** alias activates the environment in your current shell:
```
$ cd my-project
$ ve
$ va
(venv) $
```
## Add a copy progress bar
Everybody pokes fun at progress bars because they’re infamously inaccurate. And yet, deep down, we all seem to want them. The UNIX **cp** command has no progress bar, but it does have a **-v** option for verbosity, meaning that it echoes the name of each file being copied to your terminal. That’s a pretty good hack, but it doesn’t work so well when you’re copying one big file and want some indication of how much of the file has yet to be transferred.
The **pv** command provides a progress bar during copy, but it’s not common as a default application. On the other hand, the **rsync** command is included in the default installation of nearly every POSIX system available, and it’s widely recognized as one of the smartest ways to copy files both remotely and locally.
Better yet, it has a built-in progress bar.
```
alias cpv='rsync -ah --info=progress2'
```
Using this alias is the same as using the **cp** command:
```
$ cpv bigfile.flac /run/media/seth/audio/
3.83M 6% 213.15MB/s 0:00:00 (xfr#4, to-chk=0/4)
```
An interesting side effect of using this command is that **rsync** copies both files and directories without the **-r** flag that **cp** would otherwise require.
## Protect yourself from file removal accidents
You shouldn’t use the **rm** command. The **rm** manual even says so:
Warning: If you use ‘rm’ to remove a file, it is usually possible to recover the contents of that file. If you want more assurance that the contents are truly unrecoverable, consider using ‘shred’.
If you want to remove a file, you should move the file to your Trash, just as you do when using a desktop.
POSIX makes this easy, because the Trash is an accessible, actual location in your filesystem. That location may change, depending on your platform: On a [FreeDesktop](https://www.freedesktop.org/wiki/), the Trash is located at **~/.local/share/Trash**, while on MacOS it’s **~/.Trash**, but either way, it’s just a directory into which you place files that you want out of sight until you’re ready to erase them forever.
This simple alias provides a way to toss files into the Trash bin from your terminal:
```
alias tcn='mv --force -t ~/.local/share/Trash '
```
This alias uses a little-known **mv** flag that enables you to provide the file you want to move as the final argument, ignoring the usual requirement for that file to be listed first. Now you can use your new command to move files and folders to your system Trash:
```
$ ls
foo bar
$ tcn foo
$ ls
bar
```
Now the file is "gone," but only until you realize in a cold sweat that you still need it. At that point, you can rescue the file from your system Trash; be sure to tip the Bash and **mv** developers on the way out.
**Note:** If you need a more robust **Trash** command with better FreeDesktop compliance, see [Trashy](https://gitlab.com/trashy/trashy).
## Simplify your Git workflow
Everyone has a unique workflow, but there are usually repetitive tasks no matter what. If you work with Git on a regular basis, then there’s probably some sequence you find yourself repeating pretty frequently. Maybe you find yourself going back to the master branch and pulling the latest changes over and over again during the day, or maybe you find yourself creating tags and then pushing them to the remote, or maybe it’s something else entirely.
No matter what Git incantation you’ve grown tired of typing, you may be able to alleviate some pain with a Bash alias. Largely thanks to its ability to pass arguments to hooks, Git has a rich set of introspective commands that save you from having to perform uncanny feats in Bash.
For instance, while you might struggle to locate, in Bash, a project’s top-level directory (which, as far as Bash is concerned, is an entirely arbitrary designation, since the absolute top level to a computer is the root directory), Git knows its top level with a simple query. If you study up on Git hooks, you’ll find yourself able to find out all kinds of information that Bash knows nothing about, but you can leverage that information with a Bash alias.
Here’s an alias to find the top level of a Git project, no matter where in that project you are currently working, and then to change directory to it, change to the master branch, and perform a Git pull:
```
alias startgit='cd `git rev-parse --show-toplevel` && git checkout master && git pull'
```
This kind of alias is by no means a universally useful alias, but it demonstrates how a relatively simple alias can eliminate a lot of laborious navigation, commands, and waiting for prompts.
A simpler, and probably more universal, alias returns you to the Git project’s top level. This alias is useful because when you’re working on a project, that project more or less becomes your "temporary home" directory. It should be as simple to go "home" as it is to go to your actual home, and here’s an alias to do it:
```
alias cg='cd `git rev-parse --show-toplevel`'
```
Now the command **cg** takes you to the top of your Git project, no matter how deep into its directory structure you have descended.
## Change directories and view the contents at the same time
It was once (allegedly) proposed by a leading scientist that we could solve many of the planet’s energy problems by harnessing the energy expended by geeks typing **cd** followed by **ls**.
It’s a common pattern, because generally when you change directories, you have the impulse or the need to see what’s around.
But "walking" your computer’s directory tree doesn’t have to be a start-and-stop process.
This one’s cheating, because it’s not an alias at all, but it’s a great excuse to explore Bash functions. While aliases are great for quick substitutions, Bash allows you to add local functions in your **.bashrc** file (or a separate functions file that you load into **.bashrc**, just as you do your aliases file).
To keep things modular, create a new file called **~/.bash_functions** and then have your **.bashrc** load it:
```
if [ -e $HOME/.bash_functions ]; then
source $HOME/.bash_functions
fi
```
In the functions file, add this code:
```
function cl() {
DIR="$*";
# if no DIR given, go home
if [ $# -lt 1 ]; then
DIR=$HOME;
fi;
builtin cd "${DIR}" && \
# use your preferred ls command
ls -F --color=auto
}
```
Load the function into your Bash session and then try it out:
```
$ source ~/.bash_functions
$ cl Documents
foo bar baz
$ pwd
/home/seth/Documents
$ cl ..
Desktop Documents Downloads
[...]
$ pwd
/home/seth
```
Functions are much more flexible than aliases, but with that flexibility comes the responsibility for you to ensure that your code makes sense and does what you expect. Aliases are meant to be simple, so keep them easy, but useful. For serious modifications to how Bash behaves, use functions or custom shell scripts saved to a location in your **PATH**.
For the record, there *are* some clever hacks to implement the **cd** and **ls** sequence as an alias, so if you’re patient enough, then the sky is the limit even using humble aliases.
## Start aliasing and functioning
Customizing your environment is what makes Linux fun, and increasing your efficiency is what makes Linux life-changing. Get started with simple aliases, graduate to functions, and post your must-have aliases in the comments!
|
11,181 | 使用 Bitwarden 和 Podman 管理你的密码 | https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/ | 2019-08-03T10:45:41 | [
"密码",
"密码管理器"
] | https://linux.cn/article-11181-1.html | 
在过去的一年中,你可能会遇到一些试图向你推销密码管理器的广告。比如 [LastPass](https://www.lastpass.com)、[1Password](https://1password.com/) 或 [Dashlane](https://www.dashlane.com/)。密码管理器消除了记住所有网站密码的负担。你不再需要使用重复或容易记住的密码。相反,你只需要记住一个可以解锁所有其他密码的密码。
通过使用一个强密码而不是许多弱密码,这可以使你更安全。如果你有基于云的密码管理器(例如 LastPass、1Password 或 Dashlane),你还可以跨设备同步密码。不幸的是,这些产品都不是开源的。幸运的是,还有其他开源替代品。
### 开源密码管理器
替代方案包括 Bitwarden、[LessPass](https://lesspass.com/) 或 [KeePass](https://keepass.info/)。Bitwarden 是一款[开源密码管理器](https://bitwarden.com/),它会将所有密码加密存储在服务器上,它的工作方式与 LastPass、1Password 或 Dashlane 相同。LessPass 有点不同,因为它专注于成为无状态密码管理器。这意味着它根据主密码、网站和用户名生成密码,而不是保存加密的密码。另一方面,KeePass 是一个基于文件的密码管理器,它的插件和应用具有很大的灵活性。
这三个应用中的每一个都有其自身的缺点。Bitwarden 将所有东西保存在一个地方,并通过其 API 和网站接口暴露给网络。LessPass 无法保存自定义密码,因为它是无状态的,因此你需要使用它生成的密码。KeePass 是一个基于文件的密码管理器,因此无法在设备之间轻松同步。你可以使用云存储和 [WebDAV](https://en.wikipedia.org/wiki/WebDAV) 来解决此问题,但是有许多客户端不支持它,如果设备无法正确同步,你可能会遇到文件冲突。
本文重点介绍 Bitwarden。
### 运行非官方的 Bitwarden 实现
有一个名为 [bitwarden\_rs](https://github.com/dani-garcia/bitwarden_rs/) 的服务器及其 API 的社区实现。这个实现是完全开源的,因为它可以使用 SQLite 或 MariaDB/MySQL,而不是官方服务器使用的专有 Microsoft SQL Server。
有一点重要的是要认识到官方和非官方版本之间存在一些差异。例如,[官方服务器已经由第三方审核](https://blog.bitwarden.com/bitwarden-completes-third-party-security-audit-c1cc81b6d33),而非官方服务器还没有。在实现方面,非官方版本缺少[电子邮件确认和采用 Duo 或邮件码的双因素身份验证](https://github.com/dani-garcia/bitwarden_rs/wiki#missing-features)。
让我们在考虑 SELinux 的情况下运行该服务器。根据 bitwarden\_rs 的文档,你可以如下构建一个 Podman 命令:
```
$ podman run -d \
--userns=keep-id \
--name bitwarden \
-e SIGNUPS_ALLOWED=false \
-e ROCKET_PORT=8080 \
-v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
-p 8080:8080 \
bitwardenrs/server:latest
```
这将下载 bitwarden\_rs 镜像并在用户命名空间下的用户容器中运行它。它使用 1024 以上的端口,以便非 root 用户可以绑定它。它还使用 `:Z` 更改卷的 SELinux 上下文,以防止在 `/data` 中的读写权限问题。
如果你在某个域下托管它,建议将此服务器放在 Apache 或 Nginx 的反向代理下。这样,你可以使用 80 和 443 端口指向容器的 8080 端口,而无需以 root 身份运行容器。
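下面是一个极简的 Nginx 反向代理配置示意(其中的域名 `bw.example.com` 与证书路径均为假设值,仅供参考):
```
server {
    listen 443 ssl;
    server_name bw.example.com;

    # 证书路径取决于你的 LetsEncrypt 配置
    ssl_certificate     /etc/letsencrypt/live/bw.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bw.example.com/privkey.pem;

    location / {
        # 转发到容器映射出来的 8080 端口
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```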
### 在 systemd 下运行
Bitwarden 现在运行了,你可能希望保持这种状态。接下来,创建一个使容器保持运行的单元文件,如果它没有响应则自动重新启动,并在系统重启后开始运行。创建文件 `/etc/systemd/system/bitwarden.service`:
```
[Unit]
Description=Bitwarden Podman container
Wants=syslog.service
[Service]
User=egustavs
Group=egustavs
TimeoutStartSec=0
ExecStart=/usr/bin/podman start 'bitwarden'
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
Restart=always
RestartSec=30s
KillMode=none
[Install]
WantedBy=multi-user.target
```
现在使用 [sudo](https://fedoramagazine.org/howto-use-sudo/) 启用并启动该服务:
```
$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
$ systemctl status bitwarden.service
bitwarden.service - Bitwarden Podman container
Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
Main PID: 14861 (podman)
Tasks: 44 (limit: 4696)
Memory: 463.4M
```
成功了!Bitwarden 现在运行了并将继续运行。
### 添加 LetsEncrypt
如果你有域名,强烈建议你使用类似 LetsEncrypt 的加密证书运行你的 Bitwarden 实例。Certbot 是一个为我们创建 LetsEncrypt 证书的机器人,这里有个[在 Fedora 中操作的指南](https://certbot.eff.org/instructions)。
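例如,用独立模式为某个域名签发证书的命令大致如下(域名为假设值;签发期间 80 端口需要空闲):
```
$ sudo certbot certonly --standalone -d bw.example.com
```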
生成证书后,你可以按照 [bitwarden\_rs 指南中关于 HTTPS 的部分](https://github.com/dani-garcia/bitwarden_rs/wiki/Enabling-HTTPS)进行操作。只要记得将 `:Z` 附加到 LetsEncrypt 的卷上来处理权限,而不要更改端口。
---
照片由 [CMDR Shane](https://unsplash.com/@cmdrshane?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) 拍摄,发表在 [Unsplash](https://unsplash.com/search/photos/password?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) 上。
---
via: <https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/>
作者:[Eric Gustavsson](https://fedoramagazine.org/author/egustavs/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | You might have encountered a few advertisements the past year trying to sell you a password manager. Some examples are [LastPass](https://www.lastpass.com), [1Password](https://1password.com/), or [Dashlane](https://www.dashlane.com/). A password manager removes the burden of remembering the passwords for all your websites. No longer do you need to re-use passwords or use easy-to-remember passwords. Instead, you only need to remember one single password that can unlock all your other passwords for you.
This can make you more secure by having one strong password instead of many weak passwords. You can also sync your passwords across devices if you have a cloud-based password manager like LastPass, 1Password, or Dashlane. Unfortunately, none of these products are open source. Luckily there are open source alternatives available.
## Open source password managers
These alternatives include Bitwarden, [LessPass](https://lesspass.com/), or [KeePass](https://keepass.info/). Bitwarden is [an open source password manager](https://bitwarden.com/) that stores all your passwords encrypted on the server, which works the same way as LastPass, 1Password, or Dashlane. LessPass is a bit different as it focuses on being a stateless password manager. This means it derives passwords based on a master password, the website, and your username rather than storing the passwords encrypted. On the other side of the spectrum there’s KeePass, a file-based password manager with a lot of flexibility with its plugins and applications.
Each of these three apps has its own downsides. Bitwarden stores everything in one place and is exposed to the web through its API and website interface. LessPass can’t store custom passwords since it’s stateless, so you need to use their derived passwords. KeePass, a file-based password manager, can’t easily sync between devices. You can utilize a cloud-storage provider together with [WebDAV](https://en.wikipedia.org/wiki/WebDAV) to get around this, but a lot of clients do not support it and you might get file conflicts if devices do not sync correctly.
This article focuses on Bitwarden.
## Running an unofficial Bitwarden implementation
There is a community implementation of the server and its API called [bitwarden_rs](https://github.com/dani-garcia/bitwarden_rs/). This implementation is fully open source as it can use SQLite or MariaDB/MySQL, instead of the proprietary Microsoft SQL Server that the official server uses.
It’s important to recognize some differences exist between the official and the unofficial version. For instance, the [official server has been audited by a third-party](https://blog.bitwarden.com/bitwarden-completes-third-party-security-audit-c1cc81b6d33), whereas the unofficial one hasn’t. When it comes to implementations, the unofficial version lacks [email confirmation and support for two-factor authentication using Duo or email codes](https://github.com/dani-garcia/bitwarden_rs/wiki#missing-features).
Let’s get started running the server with SELinux in mind. Following the documentation for bitwarden_rs you can construct a Podman command as follows:
$ podman run -d \
--userns=keep-id \
--name bitwarden \
-e SIGNUPS_ALLOWED=false \
-e ROCKET_PORT=8080 \
-v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
-p 8080:8080 \
bitwardenrs/server:latest
This downloads the bitwarden_rs image and runs it in a user container under the user’s namespace. It uses a port above 1024 so that non-root users can bind to it. It also changes the volume’s SELinux context with *:Z* to prevent permission issues with read-write on */data*.
If you host this under a domain, it’s recommended to put this server under a reverse proxy with Apache or Nginx. That way you can use port 80 and 443 which points to the container’s 8080 port without running the container as root.
## Running under systemd
With Bitwarden now running, you probably want to keep it that way. Next, create a unit file that keeps the container running, automatically restarts if it doesn’t respond, and starts running after a system restart. Create this file as */etc/systemd/system/bitwarden.service*:
[Unit]
Description=Bitwarden Podman container
Wants=syslog.service

[Service]
User=egustavs
Group=egustavs
TimeoutStartSec=0
ExecStart=/usr/bin/podman start 'bitwarden'
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
Restart=always
RestartSec=30s
KillMode=none

[Install]
WantedBy=multi-user.target
Now, enable and start it using [sudo](https://fedoramagazine.org/howto-use-sudo/):
$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
$ systemctl status bitwarden.service
bitwarden.service - Bitwarden Podman container
Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
Main PID: 14861 (podman)
Tasks: 44 (limit: 4696)
Memory: 463.4M
Success! Bitwarden is now running under systemd and will keep running.
## Adding LetsEncrypt
It’s strongly recommended to run your Bitwarden instance through an encrypted channel with something like LetsEncrypt if you have a domain. Certbot is a bot that creates LetsEncrypt certificates for us, and they have a [guide for doing this through Fedora](https://certbot.eff.org/instructions).
After you generate a certificate, you can follow the [bitwarden_rs guide about HTTPS](https://github.com/dani-garcia/bitwarden_rs/wiki/Enabling-HTTPS). Just remember to append *:Z* to the LetsEncrypt volume to handle permissions while not changing the port.
*Photo by **CMDR Shane** on *[ Unsplash](https://unsplash.com/search/photos/password?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText).
## Sam Warfield
Nice I’ve been looking for a replacement for Lastpass!
## Dave Adelphi
Steve Gibson (GRC Research) has developed a robust next generation digital identity tool called SQRL (“squirrel”). It’s an elegant solution to the password problem.
https://www.grc.com/sqrl/sqrl.htm
## Eric Gustavsson
Sounds like an elegant solution, though at this point I doubt it will catch on given it relies on server implementations
## Dave
True. The server implementation is simple, essentially a one-liner. There are about 1,300 SQRL testers out there. Hopefully, some of them have the sway to implement it as an alternative authentication method in some more-than-niche markets.
SQRL, with its trust no one security model and public/private key pairing digital identity system, facilitates a paradigm shift away from the manual login process – a change on par with eliminating urban horse manure.
## alex
SQRL sign-in app for Windows.
## svsv sarma
Too many password managers but nothing is safe, secure and simple. I rather use a multi lingual unique pattern than relaying on the system stored password managers.
## Eric Gustavsson
If you want something that’s simple and has been used for over a decade, try KeePass. It’s a file-based password manager that’s very secure
Patterns can be broken more easily than password managers as passwords get leaked from websites regularly
## Simone
pass the Standard Unix Password Manager is best for me
## Joe
Gave Bitwarden a chance after seeing article. Thanks!
## chips
What about gnome keyring/seahorse
## Benjamin
Gnome Seahorse is not really a Password Manager.
## Jesse
What is Gnome Seahorse then?
What about KDE Wallet?
thanks,
JV
## Andy Haninger
I put up with seahorse on my Fedora system for a long time before switching to KeePass after using it at work.
The Seahorse UI does integrate well with the rest of Gnome 3, but that’s about all it has going for it. KeePass allows you to store things like a URL for the site you’re logging in to, notes, attached files, etc, etc. Its database is easily portable to other machines. There may be a way to export/import your Seahorse passwords, but it’s not as simple as copying your .kdb and maybe .key files and bingo you’re done.
Seahorse is supposedly still an active project, but wow is it pokey.
## Sergio
IMHO the best password manager is pass (https://www.passwordstore.org/) which relies on standard tools such as gpg2 and git.
## Thomas Preston
Absolutely, standard tools and text files. I sync mine with macOS and iOS too. I bet there’s an Android client as well.
## Mark McIntyre
I also use pass for my password management. Easy to use CLI, self-hosted git server, and an Android app for mobile convenience make it quite the thing.
## tom
thanks for the article. it provides a nice and concise introduction to managing passwords on remote servers. personally i use pass, too. together with pass-tomb the file structure is encrypted. with pass-git you can push it to remote repos. i store my private key (to unlock the tomb) on an external device that I have to carry around with me. so a server solution seems attractive to me.
what I miss to get your concept fully, is a short explanation and link to the podman docs. i dont know at all what podman is or does and how or why i need it.
## jakfrost
Podman is a container manager, like Docker, but rootless.
## Batisteo
It’s not clear for me how to use it with the Android app and Firefox extension.
## Eric Gustavsson
You can follow this article on Bitwardens website. After you have it hosted you can point it to the domain its hosted under or point it to the IP address https://help.bitwarden.com/article/change-client-environment/ |
11,182 | 如何在 Ubuntu 登录屏幕上启用轻击 | https://itsfoss.com/enable-tap-to-click-on-ubuntu-login-screen/ | 2019-08-03T11:20:52 | [
"轻击",
"点击"
] | https://linux.cn/article-11182-1.html |
>
> <ruby> 轻击 <rt> tap to click </rt></ruby>选项在 Ubuntu 18.04 GNOME 桌面的登录屏幕上不起作用。在本教程中,你将学习如何在 Ubuntu 登录屏幕上启用“轻击”。
>
>
>
安装 Ubuntu 后我做的第一件事就是确保启用了轻击功能。作为笔记本电脑用户,我更喜欢轻击触摸板进行左键单击。这比使用触摸板上的左键单击按钮更方便。
我登录并使用操作系统时可以轻击。但是,如果你在登录屏幕上,轻击不起作用,这是一个烦恼。
在 Ubuntu(或使用 GNOME 桌面的其他发行版)的 [GDM 登录屏幕](https://wiki.archlinux.org/index.php/GDM)上,你必须单击用户名才能显示密码字段。现在,如果你习惯了轻击,即使你已启用了它并在登录系统后可以使用,它也无法在登录屏幕上运行。
这是一个轻微的烦恼,但仍然是一个烦恼。好消息是你可以解决这个烦恼。让我告诉你如何在这个快速提示中做到这一点。
### 在 Ubuntu 登录屏幕上启用轻击

你必须在这里使用终端和一些命令。我希望你能够适应。
[在 Ubuntu 中使用 Ctrl + Alt + T 快捷键打开终端](https://itsfoss.com/ubuntu-shortcuts/)。由于 Ubuntu 18.04 仍在使用 X 显示服务器,因此需要启用它才能连接到 [X 服务器](https://en.wikipedia.org/wiki/X.Org_Server)。为此,你可以将 `gdm` 添加到访问控制列表中。
首先切换到 `root` 用户。这是必需的,因为你必须稍后切换为 `gdm` 用户,而不能以非 `root` 用户身份执行此操作。
```
sudo -i
```
[在 Ubuntu 中没有为 root 用户设置密码](https://itsfoss.com/change-password-ubuntu/)。你可以使用管理员用户帐户访问它。因此,当要求输入密码时,请使用你自己的密码。输入密码时,屏幕上不会显示任何输入内容。
```
xhost +SI:localuser:gdm
```
这是我的输出:
```
xhost +SI:localuser:gdm
localuser:gdm being added to access control list
```
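接下来需要切换到 `gdm` 用户(原文此处没有给出对应的命令;一种常见做法大致如下,`-s` 用于指定一个可用的 shell,因为 `gdm` 用户默认的 shell 不允许登录,此命令仅供参考):
```
su gdm -s /bin/bash
```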
现在运行此命令,以便 `gdm` 用户具有正确的轻击设置。
```
gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
```
如果你看到这样的警告:`(process:6339): dconf-WARNING **: 19:52:21.217: Unable to open /root/.local/share/flatpak/exports/share/dconf/profile/user: Permission denied`。别担心。忽略它就行。
这将使你能够在登录屏幕上使用轻击。为什么你之前在系统设置中做了更改,轻击在登录屏幕上却不起作用?这是因为在登录屏幕上,你还没有选择用户名。只有在屏幕上选择了用户,你的帐户(连同它的设置)才会生效。这就是你必须使用 `gdm` 用户并用它添加正确设置的原因。
重新启动 Ubuntu,你会看到现在可以使用轻击来选择你的用户帐户。
#### 还原改变
如果你因为某些原因不喜欢在 Ubuntu 登录界面轻击,可以还原更改。
你必须执行上一节中的所有步骤:切换到 `root`,将 `gdm` 与 X 服务器连接,切换到 `gdm` 用户。但是,你需要运行此命令,而不是上一个命令:
```
gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click false
```
就是这样。
正如我所说,这是一件小事。我的意思是你可以轻松地点击左键而不是轻击。这只是一次单击的问题。但是,当你在几次轻击后被迫使用左键单击时,它会打破操作“连续性”。
我希望你喜欢这个快速的小调整。如果你知道其他一些很酷的调整,请与我们分享。
---
via: <https://itsfoss.com/enable-tap-to-click-on-ubuntu-login-screen/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | *Brief: The tap to click option doesn’t work on the login screen in Ubuntu 18.04 GNOME desktop. In this tutorial, you’ll learn to enable the ‘tap to click’ on the Ubuntu login screen.*
One of the first few things I do after installing Ubuntu is to make sure that tap to click has been enabled. As a laptop user, I prefer to tap the touchpad for making a left click. This is more convenient than using the left click button on the touchpad all the time.
This is what happens when I have logged in and using the operating system. But if you are at the login screen, the tap to click doesn’t work and that’s an annoyance.
On the [GDM login screen](https://wiki.archlinux.org/index.php/GDM) in Ubuntu (or other distributions using GNOME desktop), you have to click the username in order to bring the password field. Now if you are habitual of tap to click, it doesn’t work on the login screen even if you have it enabled and use it after logging into the system.
This is a minor annoyance but an annoyance nonetheless. The good news is that you can fix this annoyance. Let me show you how to do that in this quick tip.
## Enabling tap to click on Ubuntu login screen

You’ll have to use the terminal and a few commands here. I hope you are comfortable with it.
[Open a terminal using Ctrl+Alt+T shortcut in Ubuntu](https://itsfoss.com/ubuntu-shortcuts/). Since Ubuntu 18.04 is still using X server, you need to enable it to connect to the [x server](https://en.wikipedia.org/wiki/X.Org_Server). For that, you can add gdm to access control list.
Switch to root user first. It’s required because you have to switch as gdm user later and you cannot do that as a non-root user.
sudo -i
[There is no password set for root user in Ubuntu](https://itsfoss.com/change-password-ubuntu/). You access it with your admin user account. So when asked for password, use your own password. You won’t see anything being typed on the screen when you type in your password.
xhost +SI:localuser:gdm
Here’s the output for me:
xhost +SI:localuser:gdm
localuser:gdm being added to access control list
Now run this command so that the ‘user gdm’ has the correct tap to click setting.
gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
If you see a warning like this: (process:6339): dconf-WARNING **: 19:52:21.217: Unable to open /root/.local/share/flatpak/exports/share/dconf/profile/user: Permission denied . Don’t worry. Just ignore it.
This will enable you to perform a tap to click on the login screen. Why were you not able to use tap to click when you made the changes in the system settings before? It’s because at the login screen, you haven’t selected your username yet. You get to use your account only when you select the user on the screen. This is why you had to use the user gdm and add the correct settings with it.
Restart Ubuntu and you’ll see that you can now use the tap to select your user account now.
### Revert the changes
If you are not happy with the tap to click on the Ubuntu login screen for some reason, you can revert the changes.
You’ll have to perform all the steps you did in the previous section: switch to root, connect gdm with x server, switch to gdm user. But instead of the last command, you need to run this command:
gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click false
That’s it.
As I said, it’s a tiny thing. I mean you can easily do a left click instead of the tap to click. It’s just a matter of one single click. However, it breaks the ‘continuity’ when you are forced to use the left click after a few taps.
I hope you liked this quick little tweak. If you know some other cool tweaks, do share it with us. |
11,183 | DevOps 团队必备的 3 种指标仪表板 | https://opensource.com/article/19/7/dashboards-devops-teams | 2019-08-04T08:36:18 | [
"DevOps"
] | https://linux.cn/article-11183-1.html |
>
> 仪表板可以帮助 DevOps 团队观测和监控系统,以提高性能。
>
>
>

指标仪表板帮助 [DevOps](https://opensource.com/resources/devops) 团队监控整个 DevOps 平台,以便实时响应<ruby> 议题 <rt> issue </rt></ruby>。在处理生产环境宕机或者应用服务中断等情况时,指标仪表板显得尤为重要。
DevOps 仪表板聚合了多个监测工具的指标,为开发和运维团队生成监控报告。它还允许团队跟踪多项指标,例如服务部署时间、程序 bug、报错信息、工作项、待办事项等等。
下面三种指标仪表板可以帮助 DevOps 团队监测系统,改善服务性能。
### 敏捷项目管理仪表板
这种类型的仪表板为 DevOps 团队的工作项提供可视化视图,优化敏捷项目的工作流。有利于提高团队协作效率,对工作进行可视化并提供灵活的视图 —— 就像我们过去在白板上使用便利贴来共享项目进度、<ruby> 议题 <rt> issue </rt></ruby>和待办事项一样。
* [Kanban boards](https://opensource.com/article/19/1/productivity-tool-taskboard) 允许 DevOps 团队创建卡片、标签、任务和栏目,便于持续交付敏捷项目。
* [Burndown charts](https://openpracticelibrary.com/practice/burndown/) 对指定时间段内未完成的工作或待办事项提供可视化视图,并记录团队当前的效率和轨迹,这些指标通常用于敏捷项目和 DevOps 项目管理。
* [Jira boards](https://www.atlassian.com/software/jira) 帮助 DevOps 团队创建议题、计划迭代并生成团队总结。这些灵活的仪表板还能帮助团队综合考虑并确定个人和团队任务的优先级;实时查看、汇报和跟踪正在进行的工作;并提高团队绩效。
* [GitHub project boards](https://opensource.com/life/15/11/short-introduction-github) 帮助确定团队任务的优先级。它们还支持拉取请求,因此团队成员可以方便地提交 DevOps 项目相关的信息。
### 应用程序监控仪表板
开发者负责优化应用和服务的性能,并开发新功能。应用程序监控面板则帮助开发者在<ruby> 持续集成/持续开发 <rt> CI/CD </rt></ruby>流程下,加快修复 bug、增强程序健壮性、发布安全补丁的进度。另外,这些可视化仪表板有利于查看请求模式、请求耗时、报错和网络拓扑信息。
* [Jaeger](https://www.jaegertracing.io/) 帮助开发人员跟踪请求数量、请求响应时间等。对于分布式网络系统上的云原生应用程序,它还使用 [Istio 服务网格](https://opensource.com/article/19/3/getting-started-jaeger)加强了监控和跟踪。
* [OpenCensus](https://opencensus.io/) 帮助团队查看运行应用程序的主机上的数据,它还提供了一个可插拔的导出系统,用于将数据导出到数据中心。
### DevOps 平台监控面板
你可能使用多种技术和工具在云上或本地构建 DevOps 平台,但 Linux 容器管理工具(如 Kubernetes 和 OpenShift)更利于搭建出一个成功的 DevOps 平台。因为 Linux 容器的不可变性和可移植性使得应用程序从开发环境到生产环境的编译、测试和部署变得更快更容易。
DevOps 平台监控仪表板帮助运营团队从机器/节点故障和服务报错中收集各种按时序排列的数据,用于编排应用程序容器和基于软件的基础架构,如网络(SDN)和存储(SDS)。这些仪表板还能可视化多维数据格式,方便地查询数据模式。
* [Prometheus dashboards](https://opensource.com/article/18/12/introduction-prometheus) 从平台节点或者运行中的容器化应用中收集指标。帮助 DevOps 团队构建基于指标的监控系统和仪表板,监控微服务的客户端/服务器工作负载,及时识别出异常节点故障。
* [Grafana boards](https://opensource.com/article/17/8/linux-grafana) 帮助收集事件驱动的各项指标,包括服务响应持续时间、请求量、<ruby> 客户端/服务器 <rt> client/server </rt></ruby>工作负载、网络流量等,并提供了可视化面板。DevOps 团队可以通过多种方式分享指标面板,也可以生成编码的当前监控数据快照分享给其他团队。
### 总结
这些仪表板提供了可视化的工作流程,能够发现团队协作、应用程序交付和平台状态中的各种问题。它们帮助开发团队增强其在快速应用交付、安全运行和自动化 CI/CD 等领域的能力。
---
via: <https://opensource.com/article/19/7/dashboards-devops-teams>
作者:[Daniel Oh](https://opensource.com/users/daniel-ohhttps://opensource.com/users/daniel-ohhttps://opensource.com/users/heronthecli) 选题:[lujun9972](https://github.com/lujun9972) 译者:[hello-wn](https://github.com/hello-wn) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Metrics dashboards enable [DevOps](https://opensource.com/resources/devops) teams to monitor the entire DevOps platform so they can respond to issues in real-time, which is critical in the event of downtime or disruption in the production environment or application services.
DevOps dashboards aggregate metrics from multiple observation tools to create monitoring reports for dev and ops teams. They also allow teams to track multiple metrics, such as service deployment times, bugs, errors, work items, backlogs, and more.
The three categories of metrics dashboards described below help DevOps teams observe and monitor systems and thereby improve performance.
## Agile project management dashboards
This type of dashboard visualizes work items for DevOps teams to optimize workflows in agile projects. The dashboard should be designed for maximizing team collaboration efficiency, visualizing work, and providing flexible views—just like we used to use sticky notes on a whiteboard to share project progress, issues, and backlogs.
- [Kanban boards](https://opensource.com/article/19/1/productivity-tool-taskboard) enable DevOps teams to create cards, labels, assignments, and columns for continuous delivery of agile projects.
- [Burndown charts](https://openpracticelibrary.com/practice/burndown/) visualize uncompleted work or backlogs in a specified time period and provide the team's current velocity and trajectory, metrics that are typically used in agile and DevOps project management.
- [Jira boards](https://www.atlassian.com/software/jira) enable DevOps teams to create issues, plan sprints, and generate team stories. These flexible dashboards also allow the team to prioritize individual and team tasks in full context; provide visibility to view, report, and track work in progress; and help improve team performance.
- [GitHub project boards](https://opensource.com/life/15/11/short-introduction-github) help prioritize the team's tasks. They also support pull requests so team members can add information related to DevOps projects.
## Application monitoring dashboards
Developers are responsible for improving application and services performance and developing new functions. An app monitoring dashboard enables developers to produce bugfixes, enhance features, and release security patches as soon as possible within a continuous integration/continuous development (CI/CD) pipeline. These dashboards should also visualize request patterns, elapsed time, errors, and network topology.
- [Jaeger](https://www.jaegertracing.io/) enables developers to trace the number of requests, response time for each request, and more. It also improves monitoring and tracing of cloud-native apps on a distributed networking system with the [Istio service mesh](https://opensource.com/article/19/3/getting-started-jaeger).
- [OpenCensus](https://opencensus.io/) allows the team to view data on the host where an application is running, but it also has a pluggable export system for exporting data to central aggregators.
## DevOps platform observation dashboard
You might have combined technologies and tools to build a DevOps platform in the cloud or on-premises, but Linux container management tools, such as Kubernetes and OpenShift, are the foundation of a successful DevOps platform. This is because a Linux container's immutability and portability make it faster and easier to move from app development to building, testing, and deployment in production.
DevOps platform observation dashboards enable the ops teams to orchestrate application containers and software-defined infrastructure, like networking ([SDN](https://opensource.com/article/18/11/intro-software-defined-networking)) and storage ([SDS](https://opensource.com/business/14/10/sage-weil-interview-openstack-ceph)), by collecting numeric time-series data from machine or node failures and services errors. These dashboards also visualize multi-dimensional data formats and query data patterns.
- [Prometheus dashboards](https://opensource.com/article/18/12/introduction-prometheus) scrape metrics from nodes in the platform or directly in running containerized applications. They allow DevOps teams to build a metric-based monitoring system and dashboard to observe microservices' client/server workloads to identify abnormal node failures.
- [Grafana boards](https://opensource.com/article/17/8/linux-grafana) allow DevOps organizations to utilize event-driven metrics and visualize multiple panels, including service response duration, request volume, client/server workloads, network traffic flow, and more. DevOps teams can share metrics panels easily in a variety of ways as well as take the snapshot that encodes current monitoring data and share it with other teams.
## Summary
These dashboards visualize metrics on how your DevOps team works and can help identify current or potential issues in team collaboration, application delivery, and platform health status. They also enable DevOps teams to enhance their capabilities in areas such as fast app delivery, secured runtimes, and automated CI/CD.
|
11,185 | 命令行快速提示:权限进阶 | https://fedoramagazine.org/command-line-quick-tips-more-about-permissions/ | 2019-08-04T09:51:30 | [
"权限"
] | https://linux.cn/article-11185-1.html | 
前一篇文章[介绍了 Fedora 系统上有关文件权限的一些基础知识](/article-11123-1.html)。本部分介绍使用权限管理文件访问和共享的其他方法。它建立在前一篇文章中的知识和示例的基础上,所以如果你还没有阅读过那篇文章,请[查看](/article-11123-1.html)它。
### 符号与八进制
在上一篇文章中,你了解到文件有三个不同的权限集。拥有该文件的用户有一个集合,拥有该文件的组的成员有一个集合,然后最终一个集合适用于其他所有人。在长列表(`ls -l`)中这些权限使用符号模式显示在屏幕上。
每个集合都有 `r`、`w` 和 `x` 条目,表示特定用户(所有者、组成员或其他)是否可以读取、写入或执行该文件。但是还有另一种表达这些权限的方法:八进制模式。
你已经习惯了[十进制](https://en.wikipedia.org/wiki/Decimal)编号系统,它有十个不同的值(`0` 到 `9`)。另一方面,八进制系统有八个不同的值(`0` 到 `7`)。在表示权限时,八进制用作速记来显示 `r`、`w` 和 `x` 字段的值。将每个字段视为具有如下值:
* `r` = 4
* `w` = 2
* `x` = 1
现在,你可以使用单个八进制值表达任何组合。例如,读取和写入权限(但没有执行权限)的值为 `6`;而仅有读取和执行权限时,值为 `5`。文件的 `rwxr-xr-x` 符号权限的八进制值为 `755`。
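以 `rwxr-xr-x` 为例,逐组拆开计算就一目了然:
```
rwx      r-x      r-x
(4+2+1)  (4+0+1)  (4+0+1)  =>  755
```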
与符号值类似,你可以使用八进制值使用 `chmod` 命令设置文件权限。以下两个命令对文件设置相同的权限:
```
chmod u=rw,g=r,o=r myfile1
chmod 644 myfile1
```
### 特殊权限位
文件上还有几个特殊权限位。这些被称为 `setuid`(或 `suid`)、`setgid`(或 `sgid`),以及<ruby> 粘滞位 <rt> sticky bit </rt></ruby>(或<ruby> 阻止删除位 <rt> delete inhibit </rt></ruby>)。 将此视为另一组八进制值:
* `setuid` = 4
* `setgid` = 2
* `sticky` = 1
**除非**该文件是可执行的,否则 `setuid` 位会被忽略。如果文件是可执行的,则该文件(可能是应用程序或脚本)运行起来就像是由拥有该文件的用户启动的一样。`setuid` 的一个很好的例子是 `/bin/passwd` 实用程序,它允许用户设置或更改密码。此实用程序必须能够写入到不允许普通用户更改的文件中(LCTT 译注:此处是指 `/etc/passwd` 和 `/etc/shadow`)。因此它需要精心编写,由 `root` 用户拥有,并具有 `setuid` 位,以便它可以更改密码相关文件。
`setgid` 位对于可执行文件的工作方式类似。该文件将使用拥有它的组的权限运行。但是,`setgid` 对于目录还有一个额外的用途。如果在具有 `setgid` 权限的目录中创建文件,则该文件的组所有者将设置为该目录的组所有者。
最后,虽然文件粘滞位没有意义会被忽略,但它对目录很有用。在目录上设置的粘滞位将阻止用户删除其他用户拥有的该目录中的文件。
在八进制模式下使用 `chmod` 设置这些位的方法是添加一个值前缀,例如 `4755`,可以将 `setuid` 添加到可执行文件中。在符号模式下,`u` 和 `g` 也可用于设置或删除 `setuid` 和 `setgid`,例如 `u+s,g+s`。粘滞位使用 `o+t` 设置。(其他的组合,如 `o+s` 或 `u+t`,是没有意义的,会被忽略。)
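下面是几个设置这些特殊位的例子(文件名与目录名仅为示意):
```
# 为可执行文件添加 setuid:八进制与符号两种等价写法
chmod 4755 myprogram
chmod u+s myprogram

# 为共享目录同时添加 setgid 和粘滞位
chmod g+s,o+t shared_dir
```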
### 共享与特殊权限
回想一下前一篇文章中关于需要共享文件的财务团队的示例。可以想象,特殊权限位有助于更有效地解决问题。原来的解决方案只是创建了一个整个组可以写入的目录:
```
drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance
```
此目录的一个问题是,`finance` 组成员的用户 `dwayne` 和 `jill` 可以删除彼此的文件。这对于共享空间来说不是最佳选择。它在某些情况下可能有用,但在处理财务记录时可能不会!
另一个问题是此目录中的文件可能无法真正共享,因为它们将由 `dwayne` 和 `jill` 的默认组拥有 - 很可能用户私有组也命名为 `dwayne` 和 `jill`,而不是 `finance`。
解决此问题的更好方法是在文件夹上设置 `setgid` 和粘滞位。这将做两件事:使文件夹中创建的文件自动归 `finance` 组所有,并防止 `dwayne` 和 `jill` 删除彼此的文件。下面这些命令中的任何一个都可以工作:
```
sudo chmod 3770 finance
sudo chmod u+rwx,g+rwxs,o+t finance
```
该文件的长列表现在显示了所应用的新特殊权限。粘滞位显示为 `T` 而不是 `t`,因为 `finance` 组之外的用户无法搜索该文件夹。
```
drwxrws--T. 2 root finance 4096 Jul 6 15:35 finance
```
---
via: <https://fedoramagazine.org/command-line-quick-tips-more-about-permissions/>
作者:[Paul W. Frields](https://fedoramagazine.org/author/pfrields/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | A previous article [covered some basics about file permissions](https://fedoramagazine.org/command-line-quick-tips-permissions/) on your Fedora system. This installment shows you additional ways to use permissions to manage file access and sharing. It also builds on the knowledge and examples in the previous article, so if you haven’t read that one, do [check it out](https://fedoramagazine.org/command-line-quick-tips-permissions/).
## Symbolic and octal
In the previous article you saw how there are three distinct permission sets for a file. The user that owns the file has a set, members of the group that owns the file has a set, and then a final set is for everyone else. These permissions are expressed on screen in a long listing (*ls -l*) using symbolic mode.
Each set has **r, w,** and **x** entries for whether a particular user (owner, group member, or other) can read, write, or execute that file. But there’s another way to express these permissions: in **octal** mode.
You’re used to the [decimal](https://en.wikipedia.org/wiki/Decimal) numbering system, which has ten distinct values (0 through 9). The octal system, on the other hand, has eight distinct values (0 through 7). In the case of permissions, octal is used as a shorthand to show the value of the **r, w,** and **x** fields. Think of each field as having a value:
- **r** = 4
- **w** = 2
- **x** = 1
Now you can express any combination with a single octal value. For instance, read and write permission, but no execute permission, would have a value of 6. Read and execute permission only would have a value of 5. A file’s **rwxr-xr-x** symbolic permission has an octal value of **755**.
You can use octal values to set file permissions with the *chmod* command similarly to symbolic values. The following two commands set the same permissions on a file:
chmod u=rw,g=r,o=r myfile1
chmod 644 myfile1
## Special permission bits
There are several special permission bits also available on a file. These are called **setuid** (or **suid**), **setgid** (or **sgid**), and the **sticky bit** (or **delete inhibit**). Think of this as yet another set of octal values:
- setuid = 4
- setgid = 2
- sticky = 1
The **setuid** bit is ignored *unless* the file is executable. If that’s the case, the file (presumably an app or a script) runs as if it were launched by the user who owns the file. A good example of setuid is the */bin/passwd* utility, which allows a user to set or change passwords. This utility must be able to write to files no user should be allowed to change. Therefore it is carefully written, owned by the *root* user, and has a setuid bit so it can alter the password related files.
The **setgid** bit works similarly for executable files. The file will run with the permissions of the group that owns it. However, setgid also has an additional use for directories. If a file is created in a directory with setgid permission, the group owner for the file will be set to the group owner of the directory.
Finally, the sticky bit, while ignored for files, is useful for directories. The sticky bit set on a directory will prevent a user from deleting files in that directory owned by other users.
The way to set these bits with *chmod* in octal mode is to add a value prefix, such as **4755** to add setuid to an executable file. In symbolic mode, the **u** and **g** can be used to set or remove setuid and setgid, such as **u+s,g+s**. The sticky bit is set using **o+t**. (Other combinations, like **o+s** or **u+t**, are meaningless and ignored.)
## Sharing and special permissions
Recall the example from the previous article concerning a finance team that needs to share files. As you can imagine, the special permission bits help to solve their problem even more effectively. The original solution simply made a directory the whole group could write to:
drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance
One problem with this directory is that users *dwayne* and *jill*, who are both members of the *finance* group, can delete each other’s files. That’s not optimal for a shared space. It might be useful in some situations, but probably not when dealing with financial records!
Another problem is that files in this directory may not be truly shared, because they will be owned by the default groups of *dwayne* and *jill* — most likely the user private groups also named *dwayne* and *jill*.
A better way to solve this is to set both setgid and the sticky bit on the folder. This will do two things — cause files created in the folder to be owned by the *finance* group automatically, and prevent *dwayne* and *jill* from deleting each other’s files. Either of these commands will work:
sudo chmod 3770 finance
sudo chmod u+rwx,g+rwxs,o+t finance
The long listing for the file now shows the new special permissions applied. The sticky bit appears as **T** and not **t** because the folder is not searchable for users outside the *finance* group.
drwxrws--T. 2 root finance 4096 Jul 6 15:35 finance
## Elmar Fudd
In the paragraph before the last:
„A better way to solve this is to set both setuid and the sticky bit on the folder.“
Aren’t the group permissions the ones bound to be changed? setgid?
sudo chmod 3770 finance
## Paul W. Frields
@Elmar: Good catch, thanks — fixed.
## Jake
Just want to note that GNU chmod made it impossible to clear the setuid/setgid bits on directories using octal mode. A personal pet peeve that still annoys me to this day.
## Rich
All you need to do is add an additional leading double zero, like 00755.
## Pat
A use-case example I recently had to use, when copying files back from an NTFS partition. All files came back as 777 since NTFS doesn’t support (afaik) Linux user permissions. To get them back to a usable form, i.e. 755 for directories, 644 for files:
$ chmod -R a=r,u+w,a+X .
Where:
-R, is recursive down directories from current (.),
a=r, sets all permissions to read (user, group & others),
u+w, sets files owned by user to be writable,
a+X, sets executable bit only if the entry is a directory (not a file)
Nice article btw! I learned something about special permissions.
Thanks,
-Pat
## Rick
My Fedora chmod man page acknowledges GNU and says “To clear these bits for directories with a numeric mode requires an additional leading zero, or leading = like 00755 , or =755”. I remember this slowed me up for awhile, a long time ago.
## Alex
Normally I would’ve thought you’d carefully ‘only’ permit users in the *finance* group whom you’d wish to be able to *rwx* in that shared folder, but it didn’t even occur to me that the default group on file creation would not be *finance*.
This is a great *chmod* article, explanation, and simple & effective use-cases for the special permissions, which this workstation user rarely uses – Thanks
## dac.override
Nice. Unless I overlooked it, you did not mention the first bit that indicates the object class:
- `-` plain file
- `d` directory
- `l` symlink
- `p` fifo file
- `s` sock file
- `c` character file
- `b` block file
You did mention that bits can have a different meaning depending on the object class they’re associated with (for example x with a plain file means execute, whereas x with a directory means traverse)
It gets even more interesting when you compare and contrast DAC with SELinux. After all: SELinux enhances DAC.
Where DAC is identity-based (a decentralized or discretionary access control framework), SELinux is label-based (a centralized or mandatory access control framework). Where DAC only knows a few object classes to set a few permissions on, SELinux knows many object classes to set many permissions on (i.e. more comprehensive)
What is also interesting is how DAC setuid/setgid compares to SELinux transitions. |
11,187 | 开源新闻综述:有史以来最快的开源 CPU、Facebook 分享对抗有害内容的 AI 算法 | https://opensource.com/article/19/8/news-august-3 | 2019-08-04T23:20:00 | [
"开源"
] | https://linux.cn/article-11187-1.html |
>
> 不要错过最近两周最大的开源新闻。
>
>
>

在本期开源新闻综述中,我们分享了 Facebook 开源了两种算法来查找有害内容,Apple 在数据传输项目中的新角色以及你应该知道的更多新闻。
### Facebook 开源算法用于查找有害内容
Facebook 宣布它[开源两种算法](https://www.theverge.com/2019/8/1/20750752/facebook-child-exploitation-terrorism-open-source-algorithm-pdq-tmk)用于在该平台上发现儿童剥削、恐怖主义威胁和写实暴力。在 8 月 1 日的博客文章中,Facebook 分享了 PDQ 和 TMK + PDQF 这两种将文件存储为数字哈希的技术,然后将它们与已知的有害内容示例进行比较 - [现在已放在 GitHub 上](https://github.com/facebook/ThreatExchange/tree/master/hashing/tmk)。
该代码是在 Facebook 面临尽快将有害内容从平台上移除的压力之下发布的。今年三月,新西兰发生的大规模枪击案曾在 Facebook Live 上直播,此后澳大利亚政府[威胁](https://www.buzzfeed.com/hannahryan/social-media-facebook-livestreaming-laws-christchurch)说,如果视频没有及时删除,Facebook 高管将被处以罚款和监禁。通过发布这些算法的源代码,Facebook 表示希望非营利组织、科技公司和独立开发人员都能帮助他们更快地找到并删除有害内容。
### 阿里巴巴发布了最快的开源 CPU
上个月,阿里巴巴的子公司平头哥半导体公司[发布了其玄铁 910 处理器](https://hexus.net/tech/news/cpu/133229-alibabas-16-core-risc-v-fastest-open-source-cpu-yet/)。它可以用于人工智能、物联网、5G 和自动驾驶汽车等基础设施。它的基准测试成绩达到 7.1 Coremark/MHz,是市场上最快的开源 CPU。
平头哥宣布计划在今年 9 月将其打磨好的代码放到 GitHub 上。分析师认为此次发布旨在帮助中国实现其目标,即到 2021 年由本地供应商满足 40% 的处理器需求。近期美国方面的关税可能会破坏这一目标,这也催生了对开源计算机组件的需求。
### Mattermost 为开源协作应用提供了理由
所有开源社区都受益于拥有一个或多个可以彼此交流的场所。团队聊天应用程序的世界似乎由 Slack 和 Microsoft Teams 这类小巧而强大的工具主导。大多数选择都是基于云的专有产品;而 Mattermost 采用了不同的方法,主打开源协作应用程序的价值。
“人们想要一个开源替代品,因为他们需要信任、灵活性和只有开源才能提供的创新,”Mattermost 的联合创始人兼首席执行官 Ian Tien 说。
凭借从优步到(美国)国防部的各类客户,Mattermost 抓住了一个关键市场:需要可以信任、并能安装在自己服务器上的开源软件的团队。对于需要协作应用程序在其内部基础架构上运行的企业,Mattermost 填补了 [Atlassian 离开后](https://lab.getapp.com/atlassian-slack-on-premise-software/)留下的空白。在 Computerworld 的一篇文章中,Matthew Finnegan [探讨](https://www.computerworld.com/article/3428679/mattermost-makes-case-for-open-source-as-team-messaging-market-booms.html)了为什么本地部署的开源聊天工具尚未消亡。
### Apple 加入了开源数据传输项目
Google、Facebook、Twitter 和微软去年联合创建了<ruby> 数据传输项目 <rt> Data Transfer Project </rt></ruby>(DTP)。DTP 被誉为通过自己的数据提升数据安全性和用户代理的一种方式,是一种罕见的技术上的团结展示。本周,Apple 宣布他们也将[加入](https://www.techspot.com/news/81221-apple-joins-data-transfer-project-open-source-project.html)。
DTP 的目标是帮助用户通过开源平台将数据从一个在线服务转移到另一个在线服务。DTP 旨在通过使用 API 和授权工具来取消中间人,以便用户将数据从一个服务转移到另一个服务。这将消除用户下载数据然后将其上传到另一个服务的需要。Apple 加入 DTP 的选择将允许用户将数据传入和传出 iCloud,这可能是隐私权拥护者的一大胜利。
#### 其它新闻
* [FlexiWAN 的开源 SD-WAN 可在公共测试版中下载](https://www.fiercetelecom.com/telecom/flexiwan-s-open-source-sd-wan-available-for-download-public-beta-release)
* [开源的 Windows 计算器应用程序获得了永远置顶模式](https://mspoweruser.com/open-source-windows-calculator-app-to-get-always-on-top-mode/)
* [通过 Zowe,开源和 DevOps 正在使大型计算机民主化](https://siliconangle.com/2019/07/29/zowe-open-source-devops-democratizing-mainframe-computer/)
* [Mozilla 首次推出 WebThings Gateway 开源路由器固件的实现](https://venturebeat.com/2019/07/25/mozilla-debuts-webthings-gateway-open-source-router-firmware-for-turris-omnia/)
* [更新:向 Mozilla 代码库做出贡献](https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Introduction)
*谢谢 Opensource.com 的工作人员和主持人本周的帮助。*
---
via: <https://opensource.com/article/19/8/news-august-3>
作者:[Lauren Maffeo](https://opensource.com/users/lmaffeo) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In this edition of our open source news roundup, we share Facebook's choice to open source two algorithms for finding harmful content, Apple's new role in the Data Transfer Project, and more news you should know.
## Facebook open sources algorithms used to find harmful content
Facebook announced that it has [open sourced two algorithms](https://www.theverge.com/2019/8/1/20750752/facebook-child-exploitation-terrorism-open-source-algorithm-pdq-tmk) used to find child exploitation, threats of terrorism, and graphic violence on the platform. In a blog post on August 1st, Facebook shared that PDQ and TMK+PDQF–two technologies that store files as digital hashes, then compare them with known examples of harmful content–are [now live on GitHub](https://github.com/facebook/ThreatExchange/tree/master/hashing/tmk).
The code release comes amidst increased pressure to get harmful content off the platform as soon as possible. After March's mass murder in New Zealand was streamed on Facebook Live, the Australian government [threatened Facebook execs with fines and jail time](https://www.buzzfeed.com/hannahryan/social-media-facebook-livestreaming-laws-christchurch) if the video wasn't promptly removed. By releasing the source code for these algorithms, Facebook said that it hopes nonprofits, tech companies, and solo developers can all help them find and remove harmful content faster.
## Alibaba launches the fastest open source CPU
Pingtouge Semiconductor - an Alibaba subsidiary - [announced its Xuantie 910 processor](https://hexus.net/tech/news/cpu/133229-alibabas-16-core-risc-v-fastest-open-source-cpu-yet/) last month. It's equipped to manage infrastructure for AI, the IoT, 5G, and autonomous vehicles, among other projects. It boasts a 7.1 Coremark/MHz score, making it the fastest open source CPU on the market.
Pingtouge announced plans to make its polished code available on GitHub this September. Analysts view this release as a power move to help China hit its goal of using local suppliers to meet 40 percent of processor demand by 2021. Recent tariffs on behalf of the U.S. threaten to derail this goal, creating the need for open source computer components.
## Mattermost makes the case for open source collaboration apps
All open source communities benefit from one or more places to communicate with each other. The world of team chat apps seems dominated by minimal, mighty tools like Slack and Microsoft Teams. Most options are cloud-based and proprietary; Mattermost takes a different approach by selling the value of open source collaboration apps.
“People want an open-source alternative because they need the trust, the flexibility and the innovation that only open source is able to deliver,” said Ian Tien, co-founder and CEO of Mattermost.
With clients that range from Uber to the Department of Defense, Mattermost cornered a crucial market: Teams that want open source software they can trust and install on their own servers. For businesses that need collaboration apps to run on their internal infrastructure, Mattermost fills a gap that [Atlassian left bare](https://lab.getapp.com/atlassian-slack-on-premise-software/). In an article for Computerworld, Matthew Finnegan [explores](https://www.computerworld.com/article/3428679/mattermost-makes-case-for-open-source-as-team-messaging-market-booms.html) why on-premises, open source chat tools aren't dead yet.
## Apple joins the open source Data Transfer Project
Google, Facebook, Twitter, and Microsoft united last year to create the Data Transfer Project (DTP). Hailed as a way to boost data security and user agency over their own data, the DTP is a rare show of solidarity in tech. This week, Apple announced that they'll [join the fold](https://www.techspot.com/news/81221-apple-joins-data-transfer-project-open-source-project.html).
The DTP's goal is to help users transfer their data from one online service to another via an open source platform. DTP aims to take out the middleman by using APIs and authorization tools to let users transfer their data from one service to another. This would erase the need for users to download their data, then upload it to another service. Apple's choice to join the DTP will allow users to transfer data in and out of iCloud, and could be a big win for privacy advocates.
### In other news
-
[FlexiWAN's open source SD-WAN available for download in public beta release](https://www.fiercetelecom.com/telecom/flexiwan-s-open-source-sd-wan-available-for-download-public-beta-release) -
[Open source Windows Calculator app to get Always-on-Top mode](https://mspoweruser.com/open-source-windows-calculator-app-to-get-always-on-top-mode/) -
[With Zowe, open source and DevOps are democratizing the mainframe computer](https://siliconangle.com/2019/07/29/zowe-open-source-devops-democratizing-mainframe-computer/) -
[Mozilla debuts implementation of WebThings Gateway open source router firmware](https://venturebeat.com/2019/07/25/mozilla-debuts-webthings-gateway-open-source-router-firmware-for-turris-omnia/)
*Thanks, as always, to Opensource.com staff members and moderators for their help this week.*
11,189 | Logreduce:用 Python 和机器学习去除日志噪音 | https://opensource.com/article/18/9/quiet-log-noise-python-and-machine-learning | 2019-08-05T08:35:24 | [
"日志"
] | /article-11189-1.html |
>
> Logreduce 可以通过从大量日志数据中挑选出异常来节省调试时间。
>
>
>

持续集成(CI)作业会生成大量数据。当一个作业失败时,弄清楚出了什么问题可能是一个繁琐的过程,它涉及到调查日志以发现根本原因 —— 这通常只能在全部的作业输出的一小部分中找到。为了更容易地将最相关的数据与其余数据分开,可以使用先前成功运行的作业结果来训练 [Logreduce](https://pypi.org/project/logreduce/) 机器学习模型,以从失败的运行日志中提取异常。
此方法也可以应用于其他用例,例如,从 [Journald](http://man7.org/linux/man-pages/man8/systemd-journald.service.8.html) 或其他系统级的常规日志文件中提取异常。
### 使用机器学习来降低噪音
典型的日志文件包含许多标称事件(“基线”)以及与开发人员相关的一些例外事件。基线可能包含随机元素,例如难以检测和删除的时间戳或唯一标识符。要删除基线事件,我们可以使用 [k-最近邻模式识别算法](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm)(k-NN)。

日志事件必须转换为可用于 k-NN 回归的数值。使用通用特征提取工具 [HashingVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html) 可以将该过程应用于任何类型的日志。它散列每个单词并在稀疏矩阵中对每个事件进行编码。为了进一步减少搜索空间,这个标记化过程删除了已知的随机单词,例如日期或 IP 地址。

训练模型后,k-NN 搜索可以告诉我们每个新事件与基线的距离。

这个 [Jupyter 笔记本](https://github.com/TristanCacqueray/anomaly-detection-workshop-opendev/blob/master/datasets/notebook/anomaly-detection-with-scikit-learn.ipynb) 演示了该稀疏矩阵向量的处理和图形。

### Logreduce 介绍
Logreduce Python 软件透明地实现了这个过程。Logreduce 的最初目标是使用构建数据库来协助分析 [Zuul CI](https://zuul-ci.org) 作业的失败问题,现在它已集成到 [Software Factory 开发平台](https://www.softwarefactory-project.io)的作业日志处理中。
最简单的是,Logreduce 会比较文件或目录并删除相似的行。Logreduce 为每个源文件构建模型,并使用以下语法输出距离高于定义阈值的任何目标行:`distance | filename:line-number: line-content`。
```
$ logreduce diff /var/log/audit/audit.log.1 /var/log/audit/audit.log
INFO logreduce.Classifier - Training took 21.982s at 0.364MB/s (1.314kl/s) (8.000 MB - 28.884 kilo-lines)
0.244 | audit.log:19963: type=USER_AUTH acct="root" exe="/usr/bin/su" hostname=managesf.sftests.com
INFO logreduce.Classifier - Testing took 18.297s at 0.306MB/s (1.094kl/s) (5.607 MB - 20.015 kilo-lines)
99.99% reduction (from 20015 lines to 1
```
更高级的 Logreduce 用法可以离线训练模型以便重复使用。可以使用基线的许多变体来拟合 k-NN 搜索树。
```
$ logreduce dir-train audit.clf /var/log/audit/audit.log.*
INFO logreduce.Classifier - Training took 80.883s at 0.396MB/s (1.397kl/s) (32.001 MB - 112.977 kilo-lines)
DEBUG logreduce.Classifier - audit.clf: written
$ logreduce dir-run audit.clf /var/log/audit/audit.log
```
Logreduce 还实现了接口,以发现 Journald 时间范围(天/周/月)和 Zuul CI 作业构建历史的基线。它还可以生成 HTML 报告,该报告在一个简单的界面中将在多个文件中发现的异常进行分组。

### 管理基线
使用 k-NN 回归进行异常检测的关键是拥有一个已知良好基线的数据库,该模型使用数据库来检测偏离太远的日志行。此方法依赖于包含所有标称事件的基线,因为基线中未找到的任何内容都将报告为异常。
CI 作业是 k-NN 回归的重要目标,因为作业的输出通常是确定性的,之前的运行结果可以自动用作基线。 Logreduce 具有 Zuul 作业角色,可以将其用作失败的作业发布任务的一部分,以便发布简明报告(而不是完整作业的日志)。只要可以提前构建基线,该原则就可以应用于其他情况。例如,标称系统的 [SoS 报告](https://sos.readthedocs.io/en/latest/) 可用于查找缺陷部署中的问题。

### 异常分类服务
下一版本的 Logreduce 引入了一种服务器模式,可以将日志处理卸载到外部服务,在外部服务中可以进一步分析该报告。它还支持导入现有报告和请求以分析 Zuul 构建。这些服务以异步方式运行分析,并具有 Web 界面以调整分数并消除误报。

已审核的报告可以作为独立数据集存档,其中包含目标日志文件和记录在一个普通的 JSON 文件中的异常行的分数。
### 项目路线图
Logreduce 已经能有效使用,但是有很多机会来改进该工具。未来的计划包括:
* 整理在日志文件中发现的大量带注释的异常,并生成一个公共领域数据集以供进一步研究。日志文件中的异常检测是一个具有挑战性的课题,拥有一个用于测试新模型的通用数据集将有助于发现新的解决方案。
* 重复使用带注释的异常模型来优化所报告的距离。例如,当用户通过将距离设置为零来将日志行标记为误报时,模型可能会降低未来报告中这些日志行的得分。
* 对存档异常取指纹特征以检测新报告何时包含已知的异常。因此,该服务可以通知用户该作业遇到已知问题,而不是报告异常的内容。解决问题后,该服务可以自动重新启动该作业。
* 支持更多基线发现接口,用于 SOS 报告、Jenkins 构建、Travis CI 等目标。
如果你有兴趣参与此项目,请通过 #log-classify Freenode IRC 频道与我们联系。欢迎反馈!
---
via: <https://opensource.com/article/18/9/quiet-log-noise-python-and-machine-learning>
作者:[Tristan de Cacqueray](https://opensource.com/users/tristanc) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
11,191 | 系统管理员入门:排除故障 | http://northernmost.org/blog/troubleshooting-101/index.html | 2019-08-06T10:07:00 | [
"故障",
"系统管理员"
] | https://linux.cn/article-11191-1.html | 
我通常会严格保持此博客的技术性,将观察、意见等内容保持在最低限度。但是,这篇和接下来的几篇文章将介绍刚进入系统管理/SRE/系统工程师/sysops/devops-ops(无论你想称自己是什么)角色的常见的基础知识。
请跟我来!
>
> “我的网站很慢!”
>
>
>
我只是随机选择了本文的问题类型,这套方法可以应用于任何与系统管理员相关的故障排除。本文不是要炫耀那些能挖出最多信息的最巧妙的单行命令,也不是一个详尽的、一步步指导的、并在最后一个方框中写着“利润”的“流程图”。
我会通过一些例子展示常规的方法。
示例场景仅用于说明本文目的。它们有时会做一些不适用于所有情况的假设,而且肯定会有很多读者在某些时候说“哦,但我觉得你会发现……”。
但那可能会让我们错失重点。
十多年来,我一直在从事于支持工作,或在支持机构工作,有一件事让我一次又一次地感到震惊,这促使我写下了这篇文章。
**有许多技术人员在遇到问题时的本能反应,就是不管三七二十一去尝试可能的解决方案。**
*“我的网站很慢,所以”,*
* 我将尝试增大 `MaxClients`/`MaxRequestWorkers`/`worker_connections`
* 我将尝试提升 `innodb_buffer_pool_size`/`effective_cache_size`
* 我打算尝试启用 `mod_gzip`(遗憾的是,这是真实的故事)
*“我曾经看过这个问题,它是因为某种原因造成的 —— 所以我估计还是这个原因,它应该能解决这个问题。”*
这浪费了很多时间,并会让你在黑暗中盲目乱撞,胡乱鼓捣。
你的 InnoDB 缓冲池也许达到了 100% 的利用率,但这可能只是因为之前有人运行过一次性的大型报告,留下了残余数据。如果缓冲池中并没有发生内容换出(eviction),那你就是在浪费时间。
### 开始之前
在这里,我应该说明一下,虽然这些建议同样适用于许多角色,但我是从一般的支持系统管理员的角度来撰写的。在一个成熟的内部组织中,或与规模较大的、全托管的或“企业级”客户合作时,你通常会对一切都进行检测、测量、绘制图表、设置阈值并配置告警。那样的话,你的方法往往会有所不同。而本文假设我们是两眼一抹黑地开始。
如果你手头没有这类工具,那就接着往下看。
### 澄清问题
首先确定实际的问题是什么。“慢”可以有多种形式。是收到第一个字节的时间过长吗?那与糟糕的 JavaScript 加载、每次页面加载都要拉取 15 MB 静态资源,是完全不同类别的问题。是一直都慢,还是只比平时慢?这是两种截然不同的排查思路!
在你着手做某事之前,确保你知道实际报告和遇到的问题。找到问题的根源通常很困难,但即便找不到也必须找到问题本身。
否则,这相当于系统管理员带着一把刀去参加枪战。
### 唾手可得
首次登录可疑服务器时,你可以查找一些常见的嫌疑对象。事实上,你应该这样做!每当我登录到服务器时,我都会发出一些命令来快速检查一些事情:我们是否发生了页交换(`free` / `vmstat`),磁盘是否繁忙(`top` / `iostat` / `iotop`),是否有丢包(`netstat` / `proc/net/dev`),是否处于连接数过多的状态(`netstat`),有什么东西占用了 CPU(`top`),谁在这个服务器上(`w` / `who`),syslog 和 `dmesg` 中是否有引人注目的消息?
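下面是一个示意性的快速巡检清单,把上面提到的检查串起来(具体命令和选项可能因发行版而异):

```
$ free -m; vmstat 1 5           # 是否发生页交换
$ iostat -x 1 3                 # 磁盘是否繁忙
$ cat /proc/net/dev             # 网卡是否有丢包/错误计数
$ ss -s                         # 各种状态的连接数概览
$ top -b -n 1 | head -n 15      # 什么在占用 CPU
$ w                             # 谁在这台服务器上
$ dmesg | tail -n 20            # 内核里有没有醒目的消息
```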
如果你从 RAID 控制器得到 2000 条抱怨直写式缓存没有生效的消息,那么继续进行是没有意义的。
这用不了半分钟。如果什么都没有引起你的注意 —— 那么继续。
### 重现问题
如果某处确实存在问题,而且找不到唾手可得的线索,
那么采取一切可行的步骤来尝试重现问题。当你可以重现该问题时,你就可以观察它。**当你能观察到时,你就可以解决。**如果问题还不明显,或者第一节没有涵盖到,就询问报告问题的人:要重现该问题需要采取哪些确切的步骤。
对于由太阳耀斑或只能运行在 OS/2 上的客户端引起的问题,重现并不总是可行的。但你首先应该做的是至少尝试一下!在一开始,你所知道的只是“某人认为他们的网站很慢”。就你所知,对方可能正挂在 GPRS 手机网络上,还在同时安装 Windows 更新。在这一步就挖得更深,同样是浪费时间。
尝试重现!
### 检查日志
我对于有必要包括这一点感到很难过。但是我见过一些升级上来的问题,在有人运行了 `tail /var/log/...` 之后几分钟就解决了。如今大多数 \*NIX 工具都很擅长记录日志。任何明显的错误都会在大多数应用程序日志中显得非常突出。检查一下。
### 缩小范围
如果没有明显的问题,但你可以重现所报告的问题,那也很棒。所以,你现在知道网站是慢的。现在你已经把范围缩小到:浏览器的渲染/错误、应用程序代码、DNS 基础设施、路由器、防火墙、网卡(所有的)、以太网电缆、负载均衡器、数据库、缓存层、会话存储、Web 服务器软件、应用程序服务器、内存、CPU、RAID 卡、磁盘等等。
根据设置添加一些其他可能的罪魁祸首。它们也可能是 SAN,也不要忘记硬件 WAF!以及…… 你明白我的意思。
如果问题是收到第一个字节的时间过长,你当然会先对 Web 服务器应用已知的修复方案,毕竟响应慢的就是它,而且你对它最了解,对吧?错了!
你要回去尝试重现这个问题。只是这一次,你要试图消除尽可能多的潜在问题来源。
你可以非常轻松地消除绝大多数可能的罪魁祸首:你能从服务器本地重现问题吗?恭喜,你刚刚节省了自己必须尝试修复 BGP 路由的时间。
如果不能,请尝试使用同一网络上的其他计算机。如果可以的话,至少你可以把防火墙在嫌疑名单上往后排了,(但是仍要对那个交换机保持怀疑!)
是所有的连接都很慢吗?虽然服务器是 Web 服务器,但并不意味着你不应该尝试用其他类型的服务来重现问题。[netcat](http://nc110.sourceforge.net/) 在这些场景中非常有用(不过,很可能你的 SSH 连接这段时间一直就有延迟,这本身就是一条线索)!如果其它服务也很慢,你至少知道这很可能是网络问题,可以先忽略整个 Web 软件栈及其所有组件。带着这个新知识从头再来(可惜没有 200 美元可领),由内到外地逐步排查!
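例如,可以用 netcat 粗略检查到某个端口的建连情况和耗时(这里的主机名是假设的;不同 netcat 实现的选项略有差异):

```
$ time nc -vz web01.example.com 80
```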
即使你可以在本地重现,仍然有很多“因素”留着。让我们再排除一些变量。你能用一个静态文件重现它吗?如果 `i_am_a_1kb_file.html` 也很慢,你就知道问题与数据库、缓存层无关,嫌疑只剩下操作系统和 Web 服务器本身了。
你能用一个需要解释或执行的 `hello_world.(py|php|js|rb..)` 文件重现问题吗?如果可以的话,你已经大大缩小了范围,你可以专注于少数事情。如果 `hello_world` 可以马上工作,你仍然学到了很多东西!你知道了没有任何明显的资源限制、任何满的队列或在任何地方卡住的 IPC 调用,所以这是应用程序正在做的事情或它正在与之通信的事情。
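下面是一个示意性的对比测试,分别对静态文件和最小脚本计时(文件路径是假设的;curl 的 `-w` 选项可以直接输出收到首字节的时间):

```
$ base64 /dev/urandom | head -c 1024 > /var/www/html/i_am_a_1kb_file.html
$ curl -so /dev/null -w 'ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://localhost/i_am_a_1kb_file.html
$ curl -so /dev/null -w 'ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://localhost/hello_world.php
```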
所有页面都慢吗?或者只是从第三方加载“实时分数数据”的页面慢?
**这可以归结为:你仍然可以重现这个问题所涉及的最少量的“因素”是什么?**
我们的示例是一个缓慢的网站,但这同样适用于几乎所有问题。邮件投递?你能在本地投递吗?能发给自己吗?能发给某家常见的邮件服务商吗?先用小的、纯文本的邮件进行测试,再逐步升级到 2MB 的营销群发邮件。分别测试使用和不使用 STARTTLS 的情况。由内到外地逐步排查!
这些步骤中的每一步都只需要几秒钟,远远快于实施大多数“可能的”修复方案。
### 隔离观察
到目前为止,如果你在去除某个特定组件后无法再重现问题,那么你可能已经顺带发现了问题所在。
但如果你还没有,或者你仍然不知道**为什么**:一旦你找到了一种方法,能在你和问题之间只隔着最少的“东西”(这是个很“专业”的术语)的情况下重现问题,那么就该开始隔离和观察了。
请记住,许多服务可以在前台运行和/或启用调试。对于某些类别的问题,执行此操作通常非常有帮助。
这也是你的传统武器库发挥作用的地方。`strace`、`lsof`、`netstat`、`GDB`、`iotop`、`valgrind`、语言分析器(cProfile、xdebug、ruby-prof ……)那些类型的工具。
不过,一旦你走到这一步,通常反而很少需要真正动用剖析器或调试器了。
[strace](https://linux.die.net/man/1/strace) 通常是一个非常好的起点。
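下面是两种常见的示意用法(进程号 1234 只是示例):

```
$ sudo strace -f -tt -T -p 1234 -o /tmp/trace.log   # 附加到进程,记录每个系统调用及其耗时
$ sudo strace -c -p 1234                            # 按系统调用汇总次数与耗时,按 Ctrl+C 结束并打印统计
```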
你可能会注意到应用程序停留在某个连接到端口 3306 的套接字文件描述符上的特定 `read()` 调用上。你会知道该怎么做。
转到 MySQL 并再次从顶部开始。显而易见:“等待某某锁”、死锁、`max_connections` ……进而:是所有查询?还是只写请求?只有某些表?还是只有某些存储引擎?等等……
你可能会注意到调用外部 API 资源的 `connect()` 需要五秒钟才能完成,甚至超时。你会知道该怎么做。
你可能会注意到,在同一对文件中有 1000 个调用 `fstat()` 和 `open()` 作为循环依赖的一部分。你会知道该怎么做。
它可能不是那些特别的东西,但我保证,你会发现一些东西。
如果你只是从这一部分学到一点,那也不错;学习使用 `strace` 吧!**真的**学习它,阅读整个手册页。甚至不要跳过历史部分。`man` 每个你还不知道它做了什么的系统调用。98% 的故障排除会话以 `strace` 而终结。
---
via: <http://northernmost.org/blog/troubleshooting-101/index.html>
作者:[Erik Ljungstrom](http://northernmost.org) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | ## Sysadmin 101: Troubleshooting
I typically keep this blog strictly technical, keeping observations, opinions
and the like to a minimum. But this, and the next few posts will be about
basics and fundamentals for starting out in system administration/SRE/system engineer/sysops/devops-ops
(whatever you want to call yourself) roles more generally.
Bear with me!
*“My web site is slow”*
I just picked the type of issue for this article at random, this can be
applied to pretty much any sysadmin related troubleshooting.
It’s not about showing off the cleverest oneliners to find the most
information. It’s also not an exhaustive, step-by-step “flowchart” with the
word “profit” in the last box.
It’s about general approach, by means of a few examples.
The example scenarios are solely for illustrative purposes. They sometimes
have a basis in assumptions that doesn’t apply to all cases all of the time, and I’m
positive many readers will go *“oh, but I think you will find…”* at some point.
But that would be missing the point.
Having worked in support, or within a support organization for over a decade,
there is one thing that strikes me time and time again and that made me write
this;
**The instinctive reaction many techs have when facing a problem, is
to start throwing potential solutions at it.**
*“My website is slow”*
- I’m going to try upping
`MaxClients/MaxRequestWorkers/worker_connections`
- I’m going to try to increase
`innodb_buffer_pool_size/effective_cache_size`
- I’m going to try to enable
`mod_gzip`
(true story, sadly)
*“I saw this issue once, and then it was because X. So I’m going to try to fix X
again, it might work”*.
This wastes a lot of time, and leads you down a wild goose chase. In the dark. Wearing greased mittens.
InnoDB’s buffer pool may well be at 100% utilization, but that’s just because
there are remnants of a large one-off report someone ran a while back in there.
If there are no evictions, you’ve just wasted time.
#### Quick side-bar before we start
At this point, I should mention that while it’s equally applicable to many roles, I’m writing this from a general support system administrator’s point of view. In a mature, in-house organization or when working with larger, fully managed or “enterprise” customers, you’ll typically have everything instrumented, measured, graphed, thresheld (not even a word) and alerted on. Then your approach will often be rather different. We’re going in blind here.
If you don’t have that sort of thing at your disposal;
### Clarify and First look
Establish what the issue actually is. “Slow” can take many forms. Is it time to first byte? That’s a whole different class of problem from poor Javascript loading and pulling down 15 MB of static assets on each page load. Is it slow, or just slower than it usually is? Two very different plans of attack!
Make sure you know what the issue reported/experienced actually is before you
go off and do something. Finding the source of the problem is often difficult
enough, without also having to find the problem itself.
That is the sysadmin equivalent of bringing a knife to a gunfight.
### Low hanging fruit / gimmies
You are allowed to look for a few usual suspects when you first log in to a
suspect server. In fact, you should! I tend to fire off a smattering of commands
whenever I log in to a server to just very quickly check a few things; Are we
swapping (`free`/`vmstat`), are the disks busy (`top`/`iostat`/`iotop`), are we
dropping packets (`netstat`, `/proc/net/dev`), is there an undue amount of
connections in an undue state (`netstat`), is something hogging the CPUs
(`top`), is someone else on this server (`w`/`who`), any eye-catching messages
in syslog and `dmesg`?
There’s little point to carrying on if you have 2000 messages from your RAID controller about how unhappy it is with its write-through cache.
This doesn’t have to take more than half a minute. If nothing catches your eye – continue.
### Reproduce
If there indeed is a problem somewhere, and there’s no low hanging fruit to be found;
Take all steps you can to try and reproduce the problem. When you can
reproduce, you can observe. **When you can observe, you can solve.**
Ask the person reporting the issue what exact steps to take to reproduce the
issue if it isn’t already obvious or covered by the first section.
Now, for issues caused by solar flares and clients running exclusively on OS/2, it’s not always feasible to reproduce. But your first port of call should be to at least try! In the very beginning, all you know is “X thinks their website is slow”. For all you know at that point, they could be tethered to their GPRS mobile phone and applying Windows updates. Delving any deeper than we already have at that point is, again, a waste of time.
Attempt to reproduce!
### Check the log!
It saddens me that I felt the need to include this. But I’ve seen escalations
that ended mere minutes after someone ran `tail /var/log/..`.
Most *NIX tools these days
are pretty good at logging. Anything blatantly wrong will manifest itself quite
prominently in most application logs. Check it.
### Narrow down
If there are no obvious issues, but you can reproduce the reported problem,
great.
So, you know the website is slow.
Now you’ve narrowed things down to: Browser rendering/bug, application
code, DNS infrastructure, router, firewall, NICs (all eight+ involved),
ethernet cables, load balancer, database, caching layer, session storage, web
server software, application server, RAM, CPU, RAID card, disks.
Add a smattering of other potential culprits depending on the set-up. It could
be the SAN, too. And don’t forget about the hardware WAF! And.. you get my
point.
If the issue is time-to-first-byte you’ll of course start applying known fixes
to the webserver, that’s the one responding slowly and what you know the most
about, right? Wrong!
You go back to trying to reproduce the issue. Only this time, you try to
eliminate as many potential sources of issues as possible.
You can eliminate the vast majority of potential culprits very
easily:
Can you reproduce the issue locally from the server(s)?
Congratulations, you’ve
just saved yourself having to try your fixes for BGP routing.
If you can’t, try from another machine on the same network.
If you can - at least you can move the firewall down your list of suspects, (but do keep
a suspicious eye on that switch!)
Are all connections slow? Just because the
server is a web server, doesn’t mean you shouldn’t try to reproduce with another
type of service. [netcat](http://nc110.sourceforge.net/) is very useful in these scenarios
(but chances are your SSH connection would have been lagging
this whole time, as a clue)! If that’s also slow, you at least know you’ve
most likely got a networking problem and can disregard the entire web
stack and all its components. Start from the top again with this knowledge
(do not collect $200).
Work your way from the inside-out!
Even if you can reproduce locally - there’s still a whole lot of “stuff”
left. Let’s remove a few more variables.
Can you reproduce it with a flat-file? If `i_am_a_1kb_file.html`
is slow,
you know it’s not your DB, caching layer or anything beyond the OS and the webserver
itself.
Can you reproduce with an interpreted/executed
`hello_world.(py|php|js|rb..)`
file?
If you can, you’ve narrowed things down considerably, and you can focus on
just a handful of things.
If `hello_world`
is served instantly, you’ve still learned a lot! You know
there aren’t any blatant resource constraints, any full queues or stuck
IPC calls anywhere. So it’s something the application is doing or
something it’s communicating with.
Are all pages slow? Or just the ones loading the “Live scores feed” from a third party?
**What this boils down to is; What’s the smallest amount of “stuff” that you
can involve, and still reproduce the issue?**
Our example is a slow web site, but this is equally applicable to almost any issue. Mail delivery? Can you deliver locally? To yourself? To <common provider here>? Test with small, plaintext messages. Work your way up to the 2MB campaign blast. STARTTLS and no STARTTLS. Work your way from the inside-out.
Each one of these steps takes mere seconds each, far quicker than implementing most “potential” fixes.
### Observe / isolate
By now, you may already have stumbled across the problem by virtue of being unable to reproduce when you removed a particular component.
But if you haven’t, or you still don’t know **why**;
Once you’ve found a way to reproduce the issue with the smallest amount of
“stuff” (technical term) between you and the issue, it’s time to start
isolating and observing.
Bear in mind that many services can be ran in the foreground, and/or have debugging enabled. For certain classes of issues, it is often hugely helpful to do this.
Here’s also where your traditional armory comes into play. `strace`, `lsof`, `netstat`, `GDB`, `iotop`, `valgrind`, language profilers (cProfile, xdebug, ruby-prof…). Those types of tools.
Once you’ve come this far, you rarely end up having to break out profilers or debuggers though.
[strace](https://linux.die.net/man/1/strace) is often a very good place to start.
You might notice that the application is stuck on a particular `read()` call on a socket file descriptor connected to port 3306 somewhere. You’ll know what to do.
Move on to MySQL and start from the top again. Low hanging fruit: “Waiting_for * lock”, deadlocks, max_connections.. Move on to: All queries? Only writes? Only certain tables? Only certain storage engines?…
You might notice that there’s a `connect()`
to an external API resource that
takes five seconds to complete, or even times out. You’ll know what to do.
You might notice that there are 1000 calls to `fstat()`
and `open()`
on the
same couple of files as part of a circular dependency somewhere. You’ll
know what to do.
It might not be any of those particular things, but I promise you, you’ll notice something.
If you’re only going to take one thing from this section, let it be; learn
to use `strace`
! **Really** learn it, read the *whole* man page. Don’t even skip
the HISTORY section. `man`
each syscall you don’t already know what it
does. 98% of troubleshooting sessions ends with strace. |
11,193 | OpenHMD:用于 VR 开发的开源项目 | https://itsfoss.com/openhmd/ | 2019-08-06T10:50:06 | [
"VR",
"OpenHMD"
] | https://linux.cn/article-11193-1.html |
>
> 在这个时代,有一些开源替代品可满足你的所有计算需求。甚至还有一个 VR 眼镜之类的开源平台。让我们快速看一下 OpenHMD 这个项目。
>
>
>

### 什么是 OpenHMD?

[OpenHMD](http://www.openhmd.net/) 是一个为沉浸式技术创建开源 API 及驱动的项目。这类技术包括带内置头部跟踪的头戴式显示器。
它目前支持很多系统,包括 Android、FreeBSD、Linux、OpenBSD、mac OS 和 Windows。它支持的[设备](http://www.openhmd.net/index.php/devices/)包括 Oculus Rift、HTC Vive、DreamWorld DreamGlass、Playstation Move 等。它还支持各种语言,包括 Go、Java、.NET、Perl、Python 和 Rust。
OpenHMD 项目是在 [Boost 许可证](https://github.com/OpenHMD/OpenHMD/blob/master/LICENSE)下发布的。
### 新版本中的更多功能和改进功能
最近,OpenHMD 项目[发布版本 0.3.0](http://www.openhmd.net/index.php/2019/07/12/openhmd-0-3-0-djungelvral-released/),代号为 Djungelvral([Djungelvral](https://www.youtube.com/watch?v=byP5i6LdDXs) 是来自瑞典的盐渍甘草)。它带来了不少变化。
这次更新添加了对以下设备的支持:
* 3Glasses D3
* Oculus Rift CV1
* HTC Vive 和 HTC Vive Pro
* NOLO VR
* Windows Mixed Reality HMD 支持
* Deepoon E2
* GearVR Gen1
OpenHMD 增加了一个通用扭曲着色器。这一新增功能“可以方便地在驱动程序中设置一些变量,为着色器提供有关镜头尺寸、色差、位置和 Quirks 的信息。”
他们还宣布计划改变构建系统。OpenHMD 增加了对 Meson 的支持,并将在下一个(0.4)版本中删除对 Autotools 的支持。
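如果你想自己动手试试,下面是一个使用 Meson 从源码构建 OpenHMD 的示意流程(仓库地址取自其 GitHub 项目;具体依赖与步骤请以项目文档为准):

```
$ git clone https://github.com/OpenHMD/OpenHMD.git
$ cd OpenHMD
$ meson build
$ ninja -C build
```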
OpenHMD 背后的团队还不得不移除一些功能,因为他们希望自己的系统对所有人都可用。由于 Windows 和 mac OS 上 HID 头不完整的问题,对 PlayStation VR 的支持已被禁用。NOLO 有一堆固件版本,很多版本之间都有小改动。OpenHMD 无法测试所有固件版本,因此某些版本可能无法正常工作。他们建议升级到最新的固件版本。最后,有几个设备只有有限的支持,因此没有包含在此版本中。
他们宣布将加快 OpenHMD 的发布周期,以便更快地推出新功能并支持更多设备。他们的首要任务是“让目前在主干分支中被禁用的设备在下一次补丁发布时可用,同时让所支持的头戴式显示器实现位置跟踪功能。”
### 最后总结
我没有 VR 设备,而且从未使用过。我相信它们有很大的潜力,甚至能超越游戏领域。让我感到兴奋(但并不惊讶)的是,有一个开源实现愿意支持这么多设备。我很高兴他们专注于支持各种各样的设备,而不是只围着某个杂牌 VR 设备打转。
我祝 OpenHMD 团队一切顺利,并希望他们打造出一个能成为 VR 领域首选的平台。
你曾经使用或看到过 OpenHMD 吗?你有没有使用 VR 进行游戏和其他用途?如果是,你是否用过任何开源硬件或软件?请在下面的评论中告诉我们。
---
via: <https://itsfoss.com/openhmd/>
作者:[John Paul](https://itsfoss.com/author/john/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In this day and age, there are open-source alternatives for all your computing needs. There is even an open-source platform for VR goggles and the like. Let’s have a quick look at the OpenHMD project.
## What is OpenHMD?

[OpenHMD](http://www.openhmd.net/) is a project that aims to create an open-source API and drivers for immersive technology. This category includes head-mounted displays with built-in head tracking.
They currently support quite a few systems, including Android, FreeBSD, Linux, OpenBSD, mac OS, and Windows. The [devices](http://www.openhmd.net/index.php/devices/) that they support include Oculus Rift, HTC Vive, DreamWorld DreamGlass, Playstation Move, and others. They also offer support for a wide range of languages, including Go, Java, .NET, Perl, Python, and Rust.
The OpenHMD project is released under the [Boost License](https://github.com/OpenHMD/OpenHMD/blob/master/LICENSE).
## More and Improved Features in the new Release

Recently, the OpenHMD project [released version 0.3.0](http://www.openhmd.net/index.php/2019/07/12/openhmd-0-3-0-djungelvral-released/) codenamed Djungelvral. ([Djungelvral](https://www.youtube.com/watch?v=byP5i6LdDXs) is a salted licorice from Sweden.) This brought quite a few changes.
The update added support for the following devices:
- 3Glasses D3
- Oculus Rift CV1
- HTC Vive and HTC Vive Pro
- NOLO VR
- Windows Mixed Reality HMD support
- Deepoon E2
- GearVR Gen1
A universal distortion shader was added to OpenHMD. This additions “makes it possible to simply set some variables in the drivers that gives information to the shader regarding lens size, chromatic aberration, position and quirks.”
They also announced plans to change the build system. OpenHMD added support for Meson and will remove support for Autotools in the next (0.4) release.
The team behind OpenHMD also had to remove some features because they want their system to work for everyone. Support for PlayStation VR has been disabled because of some issue with Windows and mac OS due to incomplete HID headers. NOLO has a bunch of firmware version, many will small changes. OpenHMD is unable to test all of the firmware versions, so some version might not work. They recommend upgrading to the latest firmware release. Finally, several devices only have limited support and therefore are not included in this release.
They accounted that they will be speeding up the OpenHMD release cycle to get newer features and support for more devices to users quicker. Their main priority will be to get “currently disabled devices in master ready for a patch release will be priority as well, among getting the elusive positional tracking functional for supported HMD’s.”
## Final Thoughts
I don’t have a VR device and have never used one. I do believe that they have great potential, even beyond gaming. I am thrill (but not surprised) that there is an open-source implementation that seeks to support many devices. I’m glad that they are focusing on a wide range of devices, instead of focussing on some off-brand VR effort.
I wish the OpenHMD team well and hope they create a platform that will make them the goto VR project.
Have you ever used or encountered OpenHMD? Have you ever used VR for gaming and other pursuits? If yes, have you encountered any open-source hardware or software? Please let us know in the comments below.
11,194 | 如何在 Linux 上查找硬件规格 | https://www.ostechnix.com/getting-hardwaresoftware-specifications-in-linux-mint-ubuntu/ | 2019-08-06T11:17:43 | [
"硬件"
] | https://linux.cn/article-11194-1.html | 
在 Linux 系统上有许多工具可用于查找硬件规格。在这里,我列出了四种最常用的工具,可以获取 Linux 系统的几乎所有硬件(和软件)细节。好在这些工具在某些 Linux 发行版上是默认预装的。我在 Ubuntu 18.04 LTS 桌面上测试了这些工具,但是它们也适用于其他 Linux 发行版。
### 1、LSHW
lshw(硬件列表)是一个简单但功能齐全的实用程序,它提供了 Linux 系统上的硬件规格的详细信息。它可以报告确切的内存规格、固件版本、主板规格、CPU 版本和速度、缓存规格、总线速度等。信息可以以纯文本、XML 或 HTML 格式输出。
它目前支持 DMI(仅限 x86 和 EFI)、Open Firmware 设备树(仅限 PowerPC)、PCI/AGP、ISA PnP(x86)、CPUID(x86)、IDE/ATA/ATAPI、PCMCIA(仅在 x86 上测试过)、USB 和 SCSI。
就像我已经说过的那样,Ubuntu 默认预装了 lshw。如果它未安装在你的 Ubuntu 系统中,请使用以下命令安装它:
```
$ sudo apt install lshw lshw-gtk
```
在其他 Linux 发行版上,例如 Arch Linux,运行:
```
$ sudo pacman -S lshw lshw-gtk
```
安装后,运行 `lshw` 以查找系统硬件详细信息:
```
$ sudo lshw
```
你将看到输出的详细系统硬件信息。
示例输出:

*使用 lshw 在 Linux 上查找硬件规格*
请注意,如果你没有以 `sudo` 权限运行 `lshw` 命令,则输出可能不完整或不准确。
`lshw` 可以将输出显示为 HTML 页面。为此,请使用:
```
$ sudo lshw -html
```
同样,我们可以将设备树输出为 XML 和 json 格式,如下所示:
```
$ sudo lshw -xml
$ sudo lshw -json
```
要输出显示硬件路径的设备树,请使用 `-short` 选项:
```
$ sudo lshw -short
```

*使用 lshw 显示具有硬件路径的设备树*
要列出设备的总线信息、详细的 SCSI、USB、IDE 和 PCI 地址,请运行:
```
$ sudo lshw -businfo
```
默认情况下,`lshw` 显示所有硬件详细信息。你还可以使用类(class)选项只查看特定硬件的详细信息,例如处理器、内存、显示器等。可用的类选项可以通过 `lshw -short` 或 `lshw -businfo` 找到。
要显示特定硬件详细信息,例如处理器,请执行以下操作:
```
$ sudo lshw -class processor
```
示例输出:
```
*-cpu
description: CPU
product: Intel(R) Core(TM) i3-2350M CPU @ 2.30GHz
vendor: Intel Corp.
physical id: 4
bus info: [email protected]
version: Intel(R) Core(TM) i3-2350M CPU @ 2.30GHz
serial: To Be Filled By O.E.M.
slot: CPU 1
size: 913MHz
capacity: 2300MHz
width: 64 bits
clock: 100MHz
capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm arat pln pts md_clear flush_l1d cpufreq
configuration: cores=2 enabledcores=1 threads=2
```
类似的,你可以得到系统细节:
```
$ sudo lshw -class system
```
硬盘细节:
```
$ sudo lshw -class disk
```
网络细节:
```
$ sudo lshw -class network
```
内存细节:
```
$ sudo lshw -class memory
```
你也可以像下面这样列出多个设备的细节:
```
$ sudo lshw -class storage -class power -class volume
```
如果你想要查看带有硬件路径的细节信息,加上 `-short` 选项即可:
```
$ sudo lshw -short -class processor
```
示例输出:
```
H/W path Device Class Description
=======================================================
/0/4 processor Intel(R) Core(TM) i3-2350M CPU @ 2.30GHz
```
有时,你可能希望将某些硬件详细信息共享给别人,例如客户支持人员。如果是这样,你可以从输出中删除潜在的敏感信息,如 IP 地址、序列号等,如下所示。
```
$ lshw -sanitize
```
#### lshw-gtk GUI 工具
如果你对 CLI 不熟悉,可以使用 `lshw-gtk`,这是 `lshw` 命令行工具的图形界面。
它可以从终端或 Dash 中打开。
要从终端启动它,只需执行以下操作:
```
$ sudo lshw-gtk
```
这是 `lshw` 工具的默认 GUI 界面。

*使用 lshw-gtk 在 Linux 上查找硬件*
只需双击“Portable Computer”即可进一步展开细节。

*使用 lshw-gtk GUI 在 Linux 上查找硬件*
你可以双击后续的硬件选项卡以获取详细视图。
有关更多详细信息,请参阅手册页。
```
$ man lshw
```
### 2、Inxi
Inxi 是我查找 Linux 系统上几乎所有内容的另一个最喜欢的工具。它是一个自由开源的、功能齐全的命令行系统信息工具。它显示了系统硬件、CPU、驱动程序、Xorg、桌面、内核、GCC 版本、进程、RAM 使用情况以及各种其他有用信息。无论是硬盘还是 CPU、主板还是整个系统的完整细节,inxi 都能在几秒钟内更准确地显示它。由于它是 CLI 工具,你可以在桌面或服务器版本中使用它。有关更多详细信息,请参阅以下指南。
* [如何使用 inxi 发现系统细节](https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/)
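作为参考,下面是一个常用的 inxi 调用方式(`-F` 输出完整信息,`-x` 增加额外细节,`-z` 过滤序列号、MAC 地址等敏感信息):

```
$ inxi -Fxz
```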
### 3、Hardinfo
Hardinfo 将为你提供 `lshw` 中没有的系统硬件和软件详细信息。
HardInfo 可以收集有关系统硬件和操作系统的信息,执行基准测试,并以 HTML 或纯文本格式生成可打印的报告。
如果 Ubuntu 中未安装 Hardinfo,请使用以下命令安装:
```
$ sudo apt install hardinfo
```
安装后,可以从终端或应用菜单启动 Hardinfo 工具。
以下是 Hardinfo 默认界面的外观。

*使用 Hardinfo 在 Linux 上查找硬件*
正如你在上面的屏幕截图中看到的,Hardinfo 的 GUI 简单直观。
所有硬件信息分为四个主要组:计算机、设备、网络和基准。每个组都显示特定的硬件详细信息。
例如,要查看处理器详细信息,请单击“设备”组下的“处理器”选项。

*使用 hardinfo 显示处理器详细信息*
与 `lshw` 不同,Hardinfo 可帮助你查找基本软件规范,如操作系统详细信息、内核模块、区域设置信息、文件系统使用情况、用户/组和开发工具等。

*使用 hardinfo 显示操作系统详细信息*
Hardinfo 的另一个显著特点是,它允许我们进行简单的基准测试来测试 CPU 和 FPU 性能,以及一些图形界面相关的性能。

*使用 hardinfo 执行基准测试*
建议阅读:
* [Phoronix 测试套件 - 开源测试和基准测试工具](https://www.ostechnix.com/phoronix-test-suite-open-source-testing-benchmarking-tool/)
* [UnixBench - 类 Unix 系统的基准套件](https://www.ostechnix.com/unixbench-benchmark-suite-unix-like-systems/)
* [如何从命令行对 Linux 命令和程序进行基准测试](https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/)
我们可以生成整个系统以及各个设备的报告。要生成报告,只需单击菜单栏上的“生成报告”按钮,然后选择要包含在报告中的信息。

*使用 hardinfo 生成系统报告*
Hardinfo 也有几个命令行选项。
例如,要生成报告并在终端中显示它,请运行:
```
$ hardinfo -r
```
列出模块:
```
$ hardinfo -l
```
更多信息请参考手册:
```
$ man hardinfo
```
### 4、Sysinfo
Sysinfo 是 HardInfo 和 lshw-gtk 实用程序的另一个替代品,可用于获取下面列出的硬件和软件信息。
* 系统详细信息,如发行版版本、GNOME 版本、内核、gcc 和 Xorg 以及主机名。
* CPU 详细信息,如供应商标识、型号名称、频率、L2 缓存、型号和标志。
* 内存详细信息,如系统全部内存、可用内存、交换空间总量和空闲、缓存、活动/非活动的内存。
* 存储控制器,如 IDE 接口、所有 IDE 设备、SCSI 设备。
* 硬件详细信息,如主板、图形卡、声卡和网络设备。
让我们使用以下命令安装 sysinfo:
```
$ sudo apt install sysinfo
```
Sysinfo 可以从终端或 Dash 启动。
要从终端启动它,请运行:
```
$ sysinfo
```
这是 Sysinfo 实用程序的默认界面。

*sysinfo 界面*
如你所见,所有硬件(和软件)详细信息都分为五类,即系统、CPU、内存、存储和硬件。单击导航栏上的类别以获取相应的详细信息。

*使用 Sysinfo 在 Linux 上查找硬件*
更多细节可以在手册页上找到。
```
$ man sysinfo
```
就这样。就像我前面提到的那样,还有很多工具可用于显示硬件/软件规格。但是,这四个工具足以找到你的 Linux 发行版的几乎所有软硬件规格信息。
---
via: <https://www.ostechnix.com/getting-hardwaresoftware-specifications-in-linux-mint-ubuntu/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,195 | 使用 dd 检查存储性能 | https://fedoramagazine.org/check-storage-performance-with-dd/ | 2019-08-07T07:00:17 | [
"存储",
"dd"
] | https://linux.cn/article-11195-1.html | 
本文包含一些示例命令,向你展示如何使用 `dd` 命令*粗略*估计硬盘驱动器和 RAID 阵列的性能。准确的测量必须考虑诸如[写入放大](https://www.ibm.com/developerworks/community/blogs/ibmnas/entry/misalignment_can_be_twice_the_cost1?lang=en)和[系统调用开销](https://eklitzke.org/efficient-file-copying-on-linux)之类的事情,本指南不会考虑这些。对于可能提供更准确结果的工具,你可能需要考虑使用 [hdparm](https://en.wikipedia.org/wiki/Hdparm)。
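例如,下面是一个使用 hdparm 的粗略读取基准(需要 root 权限;`-T` 测量缓存读取速度,`-t` 测量实际磁盘读取速度;设备名只是示例):

```
$ sudo hdparm -Tt /dev/sda
```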
为了分解与文件系统相关的性能问题,这些示例显示了如何通过直接读取和写入块设备来在块级测试驱动器和阵列的性能。**警告**:*写入*测试将会销毁用来运行测试的块设备上的所有数据。**不要对包含你想要保留的数据的任何设备运行这些测试!**
### 四个测试
下面是四个示例 `dd` 命令,可用于测试块设备的性能:
1、 从 `$MY_DISK` 读取的一个进程:
```
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
2、写入到 `$MY_DISK` 的一个进程:
```
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
3、从 `$MY_DISK` 并发读取的两个进程:
```
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
4、 并发写入到 `$MY_DISK` 的两个进程:
```
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
* 执行读写测试时,相应的 `iflag=nocache` 和 `oflag=direct` 参数非常重要,因为没有它们,`dd` 命令有时会显示从[内存](https://en.wikipedia.org/wiki/Random-access_memory)中传输数据的结果速度,而不是从硬盘。
* `bs` 和 `count` 参数的值有些随意,我选择的值应足够大,以便在大多数情况下为当前硬件提供合适的平均值。
* `null` 和 `zero` 设备在读写测试中分别用于目标和源,因为它们足够快,不会成为性能测试中的限制因素。
* 并发读取测试中第二个 `dd` 命令的 `skip=200` 参数(以及并发写入测试中对应的 `seek=200` 参数)是为了确保 `dd` 的两个副本在硬盘驱动器的不同区域上运行。
### 16 个示例
下面是演示,显示针对以下四个块设备中之一运行上述四个测试中的各个结果:
1. `MY_DISK=/dev/sda2`(用在示例 1-X 中)
2. `MY_DISK=/dev/sdb2`(用在示例 2-X 中)
3. `MY_DISK=/dev/md/stripped`(用在示例 3-X 中)
4. `MY_DISK=/dev/md/mirrored`(用在示例 4-X 中)
首先将计算机置于*救援*模式,以减少后台服务的磁盘 I/O 随机影响测试结果的可能性。**警告**:这将关闭所有非必要的程序和服务。在运行这些命令之前,请务必保存你的工作。你需要知道 `root` 密码才能进入救援模式。`passwd` 命令以 `root` 用户身份运行时,将提示你(重新)设置 `root` 帐户密码。
```
$ sudo -i
# passwd
# setenforce 0
# systemctl rescue
```
你可能还想暂时禁止将日志记录到磁盘:
```
# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
```
如果你有交换设备,可以暂时禁用它并用于执行后面的测试:
```
# swapoff -a
# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
# mdadm --stop /dev/md/swap
# mdadm --zero-superblock $MY_DEVS
```
#### 示例 1-1 (从 sda 读取)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s
```
#### 示例 1-2 (写入到 sda)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s
```
#### 示例 1-3 (从 sda 并发读取)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s
```
#### 示例 1-4 (并发写入到 sda)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s
```
#### 示例 2-1 (从 sdb 读取)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s
```
#### 示例 2-2 (写入到 sdb)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s
```
#### 示例 2-3 (从 sdb 并发读取)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s
```
#### 示例 2-4 (并发写入到 sdb)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s
```
#### 示例 3-1 (从 RAID0 读取)
```
# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
# MY_DISK=/dev/md/stripped
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s
```
#### 示例 3-2 (写入到 RAID0)
```
# MY_DISK=/dev/md/stripped
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s
```
#### 示例 3-3 (从 RAID0 并发读取)
```
# MY_DISK=/dev/md/stripped
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s
```
#### 示例 3-4 (并发写入到 RAID0)
```
# MY_DISK=/dev/md/stripped
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s
```
#### 示例 4-1 (从 RAID1 读取)
```
# mdadm --stop /dev/md/stripped
# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
# MY_DISK=/dev/md/mirrored
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s
```
#### 示例 4-2 (写入到 RAID1)
```
# MY_DISK=/dev/md/mirrored
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s
```
#### 示例 4-3 (从 RAID1 并发读取)
```
# MY_DISK=/dev/md/mirrored
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s
```
#### 示例 4-4 (并发写入到 RAID1)
```
# MY_DISK=/dev/md/mirrored
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s
```
#### 恢复交换设备和日志配置
```
# mdadm --stop /dev/md/stripped /dev/md/mirrored
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
# mkswap /dev/md/swap
# swapon -a
# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
# reboot
```
### 结果解读
示例 1-1、1-2、2-1 和 2-2 表明我的每个驱动器以大约 125 MB/s 的速度读写。
示例 1-3、1-4、2-3 和 2-4 表明,当在同一驱动器上并行完成两次读取或写入时,每个进程的驱动器带宽大约为一半(60 MB/s)。
3-X 示例显示了将两个驱动器组成 RAID0(数据条带化)阵列的性能优势。在所有情况下,这些数字表明 RAID0 阵列的速度是任何单个驱动器的两倍。相应的是,丢失所有内容的可能性也是两倍,因为每个驱动器只包含一半的数据。由三个驱动器组成的阵列的速度(在驱动器规格相同的情况下)是单个驱动器的三倍,但遭受[灾难性故障](https://blog.elcomsoft.com/2019/01/why-ssds-die-a-sudden-death-and-how-to-deal-with-it/)的可能性也是三倍。
4-X 示例显示 RAID1(数据镜像)阵列的性能类似于单个磁盘的性能,只有多个进程并发读取的情况是例外(示例 4-3)。在多个进程并发读取的情况下,RAID1 阵列的性能类似于 RAID0 阵列的性能。这意味着你会看到 RAID1 的性能优势,但仅限于有多个进程同时读取时。例如,当你在前台使用 Web 浏览器或电子邮件客户端时,有多个进程在后台尝试访问大量文件。RAID1 的主要好处是,[如果驱动器出现故障](https://www.computerworld.com/article/2484998/ssds-do-die--as-linus-torvalds-just-discovered.html),你的数据不太可能丢失。
### 故障排除
如果上述测试未按预期执行,则可能是驱动器坏了或出现故障。大多数现代硬盘都内置了自我监控、分析和报告技术([SMART](https://en.wikipedia.org/wiki/S.M.A.R.T.))。如果你的驱动器支持它,`smartctl` 命令可用于查询你的硬盘驱动器的内部统计信息:
```
# smartctl --health /dev/sda
# smartctl --log=error /dev/sda
# smartctl -x /dev/sda
```
另一种可以调整 PC 以获得更好性能的方法是更改 [I/O 调度程序](https://en.wikipedia.org/wiki/I/O_scheduling)。Linux 系统支持多个 I/O 调度程序,Fedora 系统的当前默认值是 [deadline](https://en.wikipedia.org/wiki/Deadline_scheduler) 调度程序的 [multiqueue](https://lwn.net/Articles/552904/) 变体。默认情况下它的整体性能非常好,并且对于具有许多处理器和大型磁盘阵列的大型服务器,其扩展性极为出色。但是,有一些更专业的调度程序在某些条件下可能表现更好。
要查看驱动器正在使用的 I/O 调度程序,请运行以下命令:
```
$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done
```
你可以通过将所需调度程序的名称写入 `/sys/block/<device name>/queue/scheduler` 文件来更改驱动器的调度程序:
```
# echo bfq > /sys/block/sda/queue/scheduler
```
你可以通过为驱动器创建 [udev 规则](http://www.reactivated.net/writing_udev_rules.html)来永久更改它。以下示例显示了如何创建将所有的[旋转式驱动器](https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics)设置为使用 [BFQ](http://algo.ing.unimo.it/people/paolo/disk_sched/) I/O 调度程序的 udev 规则:
```
# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
END
```
这是另一个设置所有的[固态驱动器](https://en.wikipedia.org/wiki/Solid-state_drive)使用 [NOOP](https://en.wikipedia.org/wiki/Noop_scheduler) I/O 调度程序的示例:
```
# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
END
```
更改 I/O 调度程序不会影响设备的原始吞吐量,但通过优先考虑后台任务的带宽或消除不必要的块重新排序,可能会使你的 PC 看起来响应更快。
---
via: <https://fedoramagazine.org/check-storage-performance-with-dd/>
作者:[Gregory Bartholomew](https://fedoramagazine.org/author/glb/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | This article includes some example commands to show you how to get a *rough* estimate of hard drive and RAID array performance using the *dd* command. Accurate measurements would have to take into account things like [write amplification](https://www.ibm.com/support/pages/misalignment-can-be-twice-cost) and [system call overhead](https://eklitzke.org/efficient-file-copying-on-linux), which this guide does not. For a tool that might give more accurate results, you might want to consider using [hdparm](https://en.wikipedia.org/wiki/Hdparm).
To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. **WARNING**: The *write* tests will destroy any data on the block devices against which they are run. **Do not run them against any device that contains data you want to keep!**
## Four tests
Below are four example dd commands that can be used to test the performance of a block device:
- One process reading from $MY_DISK:
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
- One process writing to $MY_DISK:
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
- Two processes reading concurrently from $MY_DISK:
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
- Two processes writing concurrently to $MY_DISK:
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
– The *iflag=nocache* and *oflag=direct* parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from [RAM](https://en.wikipedia.org/wiki/Random-access_memory) rather than the hard drive.
– The values for the *bs* and *count* parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.
– The *null* and *zero* devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.
– The *skip=200* parameter on the second dd command in the concurrent read tests (and the corresponding *seek=200* parameter in the concurrent write tests) is to ensure that the two copies of dd are operating on different areas of the hard drive.
## 16 examples
Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:
- MY_DISK=/dev/sda2 (used in examples 1-X)
- MY_DISK=/dev/sdb2 (used in examples 2-X)
- MY_DISK=/dev/md/stripped (used in examples 3-X)
- MY_DISK=/dev/md/mirrored (used in examples 4-X)
A video demonstration of these tests being run on a PC is provided at the end of this guide.
Begin by putting your computer into *rescue* mode to reduce the chances that disk I/O from background services might randomly affect your test results. **WARNING**: This will shutdown all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your *root* password to get into rescue mode. The *passwd* command, when run as the root user, will prompt you to (re)set your root account password.
$ sudo -i
# passwd
# setenforce 0
# systemctl rescue
You might also want to temporarily disable logging to disk:
# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
If you have a swap device, it can be temporarily disabled and used to perform the following tests:
# swapoff -a
# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
# mdadm --stop /dev/md/swap
# mdadm --zero-superblock $MY_DEVS
### Example 1-1 (reading from sda)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s
### Example 1-2 (writing to sda)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s
### Example 1-3 (reading concurrently from sda)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s
### Example 1-4 (writing concurrently to sda)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s
### Example 2-1 (reading from sdb)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s
### Example 2-2 (writing to sdb)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s
### Example 2-3 (reading concurrently from sdb)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s
### Example 2-4 (writing concurrently to sdb)
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s
### Example 3-1 (reading from RAID0)
# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
# MY_DISK=/dev/md/stripped
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s
### Example 3-2 (writing to RAID0)
# MY_DISK=/dev/md/stripped
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s
### Example 3-3 (reading concurrently from RAID0)
# MY_DISK=/dev/md/stripped
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s
### Example 3-4 (writing concurrently to RAID0)
# MY_DISK=/dev/md/stripped
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s
### Example 4-1 (reading from RAID1)
# mdadm --stop /dev/md/stripped
# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
# MY_DISK=/dev/md/mirrored
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s
### Example 4-2 (writing to RAID1)
# MY_DISK=/dev/md/mirrored
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s
### Example 4-3 (reading concurrently from RAID1)
# MY_DISK=/dev/md/mirrored
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s
### Example 4-4 (writing concurrently to RAID1)
# MY_DISK=/dev/md/mirrored
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s
### Restore your swap device and journald configuration
# mdadm --stop /dev/md/stripped /dev/md/mirrored
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
# mkswap /dev/md/swap
# swapon -a
# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
# reboot
## Interpreting the results
Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.
Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half the drive’s bandwidth (60 MB/s).
The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal) but it would be thrice as likely to suffer a [catastrophic failure](https://blog.elcomsoft.com/2019/01/why-ssds-die-a-sudden-death-and-how-to-deal-with-it/).
The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk except for the case where multiple processes are concurrently reading (example 4-3). In the case of multiple processes reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently: for example, when a process accesses a large number of files in the background while you are using a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost [if a drive fails](https://www.computerworld.com/article/2484998/ssds-do-die--as-linus-torvalds-just-discovered.html).
## Video demo
## Troubleshooting
If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in **S**elf-**M**onitoring, **A**nalysis and **R**eporting **T**echnology ([SMART](https://en.wikipedia.org/wiki/S.M.A.R.T.)). If your drive supports it, the *smartctl* command can be used to query your hard drive for its internal statistics:
# smartctl --health /dev/sda
# smartctl --log=error /dev/sda
# smartctl -x /dev/sda
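If you want to spot-check every drive at once, a simple loop over the device nodes works too (adjust the glob to match your system; this is just a convenience sketch):
# for d in /dev/sd?; do smartctl --health $d; done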
Another way that you might be able to tune your PC for better performance is by changing your [I/O scheduler](https://en.wikipedia.org/wiki/I/O_scheduling). Linux systems support several I/O schedulers and the current default for Fedora systems is the [multiqueue](https://lwn.net/Articles/552904/) variant of the [deadline](https://en.wikipedia.org/wiki/Deadline_scheduler) scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.
To view which I/O scheduler your drives are using, issue the following command:
$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done
You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:
# echo bfq > /sys/block/sda/queue/scheduler
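You can verify the change immediately; the active scheduler is shown in brackets (the exact list of available schedulers depends on your kernel):
$ cat /sys/block/sda/queue/scheduler
mq-deadline kyber [bfq] none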
You can make your changes permanent by creating a [udev rule](http://www.reactivated.net/writing_udev_rules.html) for your drive. The following example shows how to create a udev rule that will set all [rotational drives](https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics) to use the [BFQ](http://algo.ing.unimo.it/people/paolo/disk_sched/) I/O scheduler:
# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
END
Here is another example that sets all [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) to use the [NOOP](https://en.wikipedia.org/wiki/Noop_scheduler) I/O scheduler:
# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
END
Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.
*Photo by **James Donovan** on **Unsplash**.*
## Konstantin
Nice article!
Have you written any book on system administration?
## Gregory Bartholomew
I wish I knew enough to write a book (that would sell)! Unfortunately, I only know just enough to be dangerous.
## Joe Thompson
dd is also great as a source of data for doing ad-hoc network bandwidth tests. dd’ing /dev/urandom and piping it to netcat (or other tool of your choice) is a quick way to fire as much data as you like across a network.
## Gregory Bartholomew
Indeed, dd is a surprisingly versatile tool. I was impressed when I read in its man page that it can even change text from upper case to lower case and vice versa.
## Mehdi
Thanks for the article.
Seems like dangerous commands.
I am afraid of everything that has to do with dd!
## Gregory Bartholomew
Definitely — dd should be used with extreme caution. It is one of the few commands that can destroy data faster than rm. The rm command removes files one at a time, but the dd command can obliterate an entire file system with just one block of output. It is the “Death Star” of Unix commands!
## Batisteo
I know this command as Disk Destroyer 😀
## Realtimecat
I’ve often used DD to get a feel for how systems are performing. It will also help indicate results from disk systems that provide sparse file access and compression. The above commands when written to / from files rather than devices may yield surprisingly different results.
When writing large blocks of zeros to my zfs raid set it pretty much says “ho hum” and returns almost instantly, since it is eliminating most of the data before it ever makes it to the hard drive.
## Patrick Fox
Why are you telling people to use `dd if=/dev/zero of=$MY_DISK …`? If $MY_DISK=/dev/sdaX this will write to the first sector of the hard drive and will corrupt anything there.
This will break your machine!!
## Göran Uddeborg
Thanks again for yet another interesting article!
In the concurrent write tests, shouldn’t you use “seek” rather than “skip” as an option to “dd”? From what I read in the manual, “seek” is about the output while “skip” is about the input. (And skipping in /dev/zero won’t make much difference, I guess. :-))
## Gregory Bartholomew
Hi Göran:
YES! You are correct. Good catch and thanks for letting me know. I’m trying to get the editors to make the corrections now. This is a good case-in-point of Linus’ Law (and of the effect of sleep deprivation on working memory). |
11,196 | 5 个免费的 Linux 分区管理器 | https://itsfoss.com/partition-managers-linux/ | 2019-08-07T07:21:34 | [
"分区"
] | https://linux.cn/article-11196-1.html |
>
> 以下是我们推荐的 Linux 分区工具。它们能让你删除、添加、调整或缩放 Linux 系统上的磁盘分区。
>
>
>
通常,你在安装操作系统时决定磁盘分区。但是,如果你需要在安装后的某个时间修改分区,该怎么办?你无法回到系统安装阶段。这时就需要分区管理器(准确地说,是磁盘分区管理器)出场了。
在大多数情况下,你无需单独安装分区管理器,因为它已预先安装。此外,值得注意的是,你可以选择基于命令行或有 GUI 的分区管理器。
**注意!**
>
> 磁盘分区是一项有风险的任务。除非绝对必要,否则不要这样做。
>
>
> 如果你使用的是基于命令行的分区工具,那么需要学习完成任务的命令。否则,你可能最终会擦除整个磁盘。
>
>
>
### Linux 中的 5 个管理磁盘分区的工具

下面的列表没有特定的排名顺序。大多数分区工具应该存在于 Linux 发行版的仓库中。
#### GParted

这可能是 Linux 发行版中最流行的基于 GUI 的分区管理器。你可能已在某些发行版中预装它。如果还没有,只需在软件中心搜索它即可完成安装。
它会在启动时直接提示你以 root 用户进行身份验证。所以,你根本不需要在这里使用终端。身份验证后,它会分析设备,然后让你调整磁盘分区。如果发生数据丢失或意外删除文件,你还可以找到“尝试数据救援”的选项。
* [GParted](https://gparted.org/)
#### GNOME Disks

一个基于 GUI 的分区管理器,随 Ubuntu 或任何基于 Ubuntu 的发行版(如 Zorin OS)一起出现。
它能让你删除、添加、缩放和微调分区。如果 USB 出了问题,它甚至可以帮助你[在 Ubuntu 中格式化 USB](https://itsfoss.com/format-usb-drive-sd-card-ubuntu/)。
你甚至可以借助此工具尝试修复分区。它的选项还包括编辑文件系统、创建分区镜像、还原镜像以及对分区进行基准测试。
* [GNOME Disks](https://wiki.gnome.org/Apps/Disks)
#### KDE Partition Manager

KDE Partition Manager 应该已经预装在基于 KDE 的 Linux 发行版上了。但是,如果没有,你可以在软件中心搜索并轻松安装它。
如果它不是预装的,那么当你尝试启动它时,可能会提示你没有管理员权限。没有管理员权限,你无法做任何事情。因此,在这种情况下,请输入以下命令:
```
sudo partitionmanager
```
它将扫描你的设备,然后你就可以创建、移动、复制、删除和缩放分区。你还可以导入/导出分区表及使用其他许多调整选项。
* [KDE Partition Manager](https://kde.org/applications/system/org.kde.partitionmanager)
#### Fdisk(命令行)

[fdisk](https://en.wikipedia.org/wiki/Fdisk) 是一个命令行程序,它在每个类 Unix 的系统中都有。不要担心,即使它需要你启动终端并输入命令,但这并不是很困难。但是,如果你在使用基于文本的程序时感到困惑,那么你应该继续使用上面提到的 GUI 程序。它们都做同样的事情。
要启动 `fdisk`,你必须是 root 用户并指定管理分区的设备。以下是该命令的示例:
```
sudo fdisk /dev/sdc
```
你可以参考 [Linux 文档项目的维基页面](https://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html)以获取命令列表以及有关其工作原理的更多详细信息。
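如果你只是想查看分区布局而不做任何修改,可以先运行只读的 `-l` 选项(下面的设备名仅为示例,请替换成你自己的设备):

```
sudo fdisk -l /dev/sdc
```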
#### GNU Parted(命令行)

这是在你 Linux 发行版上预安装的另一个命令行程序。你需要输入下面的命令启动:
```
sudo parted
```
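与 `fdisk` 类似,`parted` 也可以以非交互方式执行单条命令。例如,下面这条命令只会打印指定设备的分区表,不会做任何修改(设备名仅为示例):

```
sudo parted /dev/sdc print
```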
### 总结
最后,不能不提的是,[QtParted](http://qtparted.sourceforge.net/) 也是分区管理器的替代品之一。但它已经多年没有维护了,因此我不建议使用它。
你如何看待这里提到的分区管理器?我有错过你最喜欢的吗?让我知道,我将根据你的建议更新这个 Linux 分区管理器列表。
---
via: <https://itsfoss.com/partition-managers-linux/>
作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | *Here’s our recommended list of partitioning tools for Linux distributions. These tools let you delete, add, tweak or resize the disk partitioning on your Linux system.*
Usually, you decide the disk partitions while installing the OS. But, what if you need to modify the partitions sometime after the installation. You just can’t go back to the setup screen in any way. So, that is where partition managers (or accurately disk partition managers) come in handy.
In most of the cases, you do not need to separately install the partition manager because it comes pre-installed. Also, it is worth noting that you can either opt for a command-line based partition manager or something with a GUI.
Attention!
Playing with disk partitioning is a risky task. Don’t do it unless it’s absolutely necessary.
If you are using a command line based partitioning tool, you need to learn the commands to get the job done. Or else, you might just end up wiping the entire disk.
## 5 Tools To Manage Disk Partitions in Linux

The list below is in no particular order of ranking. Most of these partitioning tools should be available in your Linux distribution’s repository.
### GParted

[GParted](https://itsfoss.com/gparted/) could perhaps be the most popular GUI-based partition manager available for Linux distributions. You might have it pre-installed in some distributions. If you don’t, simply search for it in the software center to get it installed.
It directly prompts you to authenticate as the root user when you launch it. So, you don’t have to utilize the terminal here – at all. After authentication, it analyzes the devices and then lets you tweak the disk partitions. You will also find an option to “Attempt Data Rescue” in case of data loss or accidental deletion of files.
### GNOME Disks

A GUI-based partition manager that comes baked in with Ubuntu or any Ubuntu-based distros like Zorin OS.
It lets you delete, add, resize and tweak the partition. It even helps you in [formatting the USB in Ubuntu](https://itsfoss.com/format-usb-drive-sd-card-ubuntu/) if there is any problem.
You can even attempt to repair a partition with the help of this tool. The options also include editing filesystem, creating a partition image, restoring the image, and benchmarking the partition.
### KDE Partition Manager

KDE Partition Manager should probably come pre-installed on KDE-based Linux distros. But, if it isn’t there – you can search for it on the software center to easily get it installed.
If you didn’t have it pre-installed, you might get the notice that you do not have administrative privileges when you try launching it. Without admin privileges, you cannot do anything. So, in that case, type in the following command to get started:
`sudo partitionmanager`
It will scan your devices and then you will be able to create, move, copy, delete, and resize partitions. You can also import/export partition tables along with a lot of other options available to tweak.
### Fdisk [Command Line]

[Fdisk](https://en.wikipedia.org/wiki/Fdisk) is a command line utility that comes baked in with every unix-like OS. Fret not, even though it requires you to launch a terminal and enter commands – it isn’t very difficult. However, if you are too confused while using a text-based utility, you should stick to the GUI applications mentioned above. They all do the same thing.
To launch fdisk, you will have to be the root user and specify the device to manage partitions. Here’s an example for the command to start with:
sudo fdisk /dev/sdc
You can refer to [The Linux Documentation Project’s wiki page](https://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html) for the list of commands and more details on how it works.
### GNU Parted [Command Line]

Yet another command-line utility that you can find pre-installed on your Linux distro. You just need to enter the following command to get started:
`sudo parted`
**Wrapping Up**
I wouldn’t forget to mention [QtParted](http://qtparted.sourceforge.net/) as one of the alternatives to the list of partition managers. However, it has not been maintained for years now – so I do not recommend using it.
What do you think about the partition managers mentioned here? Did I miss any of your favorites? Let me know and I’ll update this list of partition manager for Linux with your suggestion. |
11,198 | 如何用 Linux 命令行工具解析和格式化输出 JSON | https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/ | 2019-08-07T22:57:16 | [
"JSON"
] | https://linux.cn/article-11198-1.html | 
JSON 是一种轻量级且与语言无关的数据存储格式,易于与大多数编程语言集成,也易于人类理解 —— 当然,如果格式正确的话。JSON 这个词代表 **J**ava **S**cript **O**bject **N**otation,虽然它以 JavaScript 开头,而且主要用于在服务器和浏览器之间交换数据,但现在正在用于许多领域,包括嵌入式系统。在这里,我们将使用 Linux 上的命令行工具解析并格式化输出 JSON。这对于在 shell 脚本中处理大型 JSON 数据,或是在 shell 脚本中操作 JSON 数据非常有用。
### 什么是格式化输出?
JSON 数据的结构更具人性化。但是在大多数情况下,JSON 数据会存储在一行中,甚至没有行结束字符。
显然,这对于手动阅读和编辑不太方便。
这时<ruby> 格式化输出 <rt> pretty print </rt></ruby>就很有用了。该名称不言自明:重新格式化 JSON 文本,使人们读起来更清晰。这被称为 **JSON 格式化输出**。
### 用 Linux 命令行工具解析和格式化输出 JSON
可以使用命令行文本处理器解析 JSON 数据,例如 `awk`、`sed` 和 `grep`。实际上,`JSON.awk` 就是一个用来做这件事的 awk 脚本。但是,也有一些专用工具可用于同一目的。
1. `jq` 或 `jshon`,shell 下的 JSON 解析器,它们都非常有用。
2. Shell 脚本,如 `JSON.sh` 或 `jsonv.sh`,用于在 bash、zsh 或 dash shell 中解析 JSON。
3. `JSON.awk`,JSON 解析器 awk 脚本。
4. 像 `json.tool` 这样的 Python 模块。
5. `underscore-cli`,基于 Node.js 和 JavaScript。
在本教程中,我只关注 `jq`,这是一个 shell 下的非常强大的 JSON 解析器,具有高级过滤和脚本编程功能。
### JSON 格式化输出
JSON 数据可能放在一行上,使人难以解读,因此为了使其具有一定的可读性,就可以使用 JSON 格式化输出。
**示例:**来自 `jsonip.com` 的数据,使用 `curl` 或 `wget` 工具获得 JSON 格式的外部 IP 地址,如下所示。
```
$ wget -cq http://jsonip.com/ -O -
```
实际数据看起来类似这样:
```
{"ip":"111.222.333.444","about":"/about","Pro!":"http://getjsonip.com"}
```
现在使用 `jq` 格式化输出它:
```
$ wget -cq http://jsonip.com/ -O - | jq '.'
```
通过 `jq` 过滤了该结果之后,它应该看起来类似这样:
```
{
"ip": "111.222.333.444",
"about": "/about",
"Pro!": "http://getjsonip.com"
}
```
同样也可以通过 Python `json.tool` 模块做到。示例如下:
```
$ cat anything.json | python -m json.tool
```
这种基于 Python 的解决方案对于大多数用户来说应该没问题,但是如果没有预安装或无法安装 Python 则不行,比如在嵌入式系统上。
然而,`json.tool` Python 模块具有明显的优势,它是跨平台的。因此,你可以在 Windows、Linux 或 Mac OS 上无缝使用它。
### 如何用 jq 解析 JSON
首先,你需要安装 `jq`,它已被大多数 GNU/Linux 发行版收录,可以使用各自的软件包安装命令进行安装。
在 Arch Linux 上:
```
$ sudo pacman -S jq
```
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install jq
```
在 Fedora 上:
```
$ sudo dnf install jq
```
在 openSUSE 上:
```
$ sudo zypper install jq
```
对于其它操作系统或平台参见[官方的安装指导](https://stedolan.github.io/jq/download/)。
#### jq 的基本过滤和标识符功能
`jq` 可以从 `STDIN` 或文件中读取 JSON 数据。你可以根据情况使用。
单个符号 `.` 是最基本的过滤器。这些过滤器也称为**对象标识符-索引**。`jq` 使用单个 `.` 过滤器,基本上相当于将输入的 JSON 文件格式化输出。
* **单引号**:不必始终使用单引号。但是如果你在一行中组合几个过滤器,那么你必须使用它们。
* **双引号**:你必须用一对双引号把任何特殊字符(如 `@`、`#`、`$`)括起来,例如 `jq .foo."@bar"`。
* **原始数据打印**:无论出于什么原因,如果你只需要最终解析出的数据(不带双引号),请使用带有 `-r` 标志的 `jq` 命令,如下所示:`jq -r .foo.bar`。
#### 解析特定数据
要过滤出 JSON 的特定部分,你需要了解格式化输出的 JSON 文件的数据层次结构。
来自维基百科的 JSON 数据示例:
```
{
"firstName": "John",
"lastName": "Smith",
"age": 25,
"address": {
"streetAddress": "21 2nd Street",
"city": "New York",
"state": "NY",
"postalCode": "10021"
},
"phoneNumber": [
{
"type": "home",
"number": "212 555-1234"
},
{
"type": "fax",
"number": "646 555-4567"
}
],
"gender": {
"type": "male"
}
}
```
我将在本教程中将此 JSON 数据用作示例,将其保存为 `sample.json`。
假设我想从 `sample.json` 文件中过滤出地址。所以命令应该是这样的:
```
$ jq .address sample.json
```
示例输出:
```
{
"streetAddress": "21 2nd Street",
"city": "New York",
"state": "NY",
"postalCode": "10021"
}
```
接下来,假设我想要邮政编码,那么就要添加另一个**对象标识符-索引**,即另一个过滤器。
```
$ cat sample.json | jq .address.postalCode
```
另请注意,**过滤器区分大小写**,并且你必须使用完全相同的字符串来获取有意义的输出,否则就是 null。
#### 从 JSON 数组中解析元素
JSON 数组的元素包含在方括号内,这无疑非常灵活。
要解析数组中的元素,你必须使用 `[]` 标识符以及其他对象标识符索引。
在此示例 JSON 数据中,电话号码存储在数组中,要从此数组中获取所有内容,你只需使用括号,像这个示例:
```
$ jq .phoneNumber[] sample.json
```
假设你只想要数组的第一个元素,然后使用从 `0` 开始的数组对象编号,对于第一个项目,使用 `[0]`,对于下一个项目,它应该每步增加 1。
```
$ jq .phoneNumber[0] sample.json
```
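对于本文的 `sample.json`,上面的命令应该输出类似下面的内容:

```
{
  "type": "home",
  "number": "212 555-1234"
}
```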
#### 脚本编程示例
假设我只想要家庭电话,而不是整个 JSON 数组数据。这就是用 `jq` 命令脚本编写的方便之处。
```
$ cat sample.json | jq -r '.phoneNumber[] | select(.type == "home") | .number'
```
首先,我将一个过滤器的结果传递给另一个,然后使用 `select` 属性选择特定类型的数据,再次将结果传递给另一个过滤器。
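对于本文的示例数据,由于使用了 `-r` 标志,上面的管道最终只会输出号码本身:

```
212 555-1234
```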
解释每种类型的 `jq` 过滤器和脚本编程超出了本教程的范围和目的。强烈建议你阅读 `jq` 手册,以便更好地理解下面的内容。
资源:
* <https://stedolan.github.io/jq/manual/>
* <http://www.compciv.org/recipes/cli/jq-for-parsing-json/>
* <https://lzone.de/cheat-sheet/jq>
---
via: <https://www.ostechnix.com/how-to-parse-and-pretty-print-json-with-linux-commandline-tools/>
作者:[ostechnix](https://www.ostechnix.com/author/editor/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,199 | 在系统创建新用户时发送邮件的 Bash 脚本 | https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/ | 2019-08-07T23:28:00 | [
"用户"
] | https://linux.cn/article-11199-1.html | 
目前市场上有许多开源监测工具可用于监控 Linux 系统的性能。当系统到达指定的阈值时,它将发送邮件提醒。
它会监控 CPU 利用率、内存利用率、交换内存利用率、磁盘空间利用率等所有内容。但我不认为它们可以选择监控新用户创建活动,并发送提醒。
如果没有,这并不重要,因为我们可以编写自己的 bash 脚本来实现这一点。
我们过去写了许多有用的 shell 脚本。如果要查看它们,请点击以下链接。
* [如何使用 shell 脚本自动化执行日常任务?](https://www.2daygeek.com/category/shell-script/)
这个脚本做了什么?它监测 `/var/log/secure` 文件,并在系统创建新帐户时提醒管理员。
我们不会经常运行此脚本,因为创建用户不经常发生。但是,我打算一天运行一次这个脚本。因此,我们可以获得有关用户创建的综合报告。
如果在昨天的 `/var/log/secure` 中找到了 “useradd” 字符串,那么该脚本将向指定的邮箱发送邮件提醒,其中包含了新用户的详细信息。
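在部署脚本之前,你可以先手动运行下面这条命令(仅作验证用的示例),确认日志中确实能匹配到这样的记录:

```
grep -i "$(date --date='yesterday' '+%b %e')" /var/log/secure | grep -wi useradd
```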
**注意:**你需要更改邮箱而不是使用我们的邮箱。
```
# vi /opt/scripts/new-user.sh
```
```
#!/bin/bash
#Set the variable which equal to zero
prev_count=0
count=$(grep -i "`date --date='yesterday' '+%b %e'`" /var/log/secure | egrep -wi 'useradd' | wc -l)
if [ "$prev_count" -lt "$count" ] ; then
# Send a mail to given email id when errors found in log
SUBJECT="ATTENTION: New User Account is created on server : `date --date='yesterday' '+%b %e'`"
# This is a temp file, which is created to store the email message.
MESSAGE="/tmp/new-user-logs.txt"
TO="[email protected]"
echo "Hostname: `hostname`" >> $MESSAGE
echo -e "\n" >> $MESSAGE
echo "The New User Details are below." >> $MESSAGE
echo "+------------------------------+" >> $MESSAGE
grep -i "`date --date='yesterday' '+%b %e'`" /var/log/secure | egrep -wi 'useradd' | grep -v 'failed adding'| awk '{print $4,$8}' | uniq | sed 's/,/ /' >> $MESSAGE
echo "+------------------------------+" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
rm $MESSAGE
fi
```
给 `new-user.sh` 添加可执行权限。
```
$ chmod +x /opt/scripts/new-user.sh
```
最后添加一个 cron 任务来自动化执行它。它会在每天 7 点运行。
```
# crontab -e
0 7 * * * /bin/bash /opt/scripts/new-user.sh
```
注意:你将在每天 7 点收到一封邮件提醒,但这是昨天的日志。
你将会看到类似下面的邮件提醒。
```
# cat /tmp/new-user-logs.txt
Hostname: 2g.server10.com
The New User Details are below.
+------------------------------+
2g.server10.com name=magesh
2g.server10.com name=daygeek
+------------------------------+
```
---
via: <https://www.2daygeek.com/linux-bash-script-to-monitor-user-creation-send-email/>
作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
11,200 | 如何检测自动生成的电子邮件 | https://arp242.net/weblog/autoreply.html | 2019-08-08T00:35:27 | [
"电子邮件"
] | https://linux.cn/article-11200-1.html | 
当你用电子邮件系统发送自动回复时,你需要注意不要向自动生成的电子邮件发送回复。最好的情况下,你将获得无用的投递失败消息。更可能的是,你会得到一个无限的电子邮件循环和一个混乱的世界。
事实证明,可靠地检测自动生成的电子邮件并不总是那么容易。以下是基于为此编写的检测器并使用它扫描大约 100,000 封电子邮件(大量的个人存档和公司存档)的观察结果。
### Auto-submitted 信头
由 [RFC 3834](http://tools.ietf.org/html/rfc3834) 定义。
这是表示你的邮件是自动回复的“官方”标准。如果存在 `Auto-Submitted` 信头,并且其值不是 `no`,你就**不**应该发送回复。
### X-Auto-Response-Suppress 信头
[由微软](https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx)定义。
此信头由微软 Exchange、Outlook 和其他一些产品使用。许多新闻订阅等都设定了这个。如果 `X-Auto-Response-Suppress` 包含 `DR`(“抑制投递报告”)、`AutoReply`(“禁止 OOF 通知以外的自动回复消息”)或 `All`,你就**不**应该发送回复。
### List-Id 和 List-Unsubscribe 信头
由 [RFC 2919](https://tools.ietf.org/html/rfc2919) 定义。
你通常不希望给邮件列表或新闻订阅发送自动回复。几乎所有的邮件列表和大多数新闻订阅都至少设置了其中一个信头。如果存在这些信头中的任何一个,你就**不**应该发送回复。这个信头的值不重要。
### Feedback-ID 信头
[由谷歌](https://support.google.com/mail/answer/6254652?hl=en)定义。
Gmail 使用此信头识别邮件是否是新闻订阅,并使用它为这些新闻订阅的所有者生成统计信息或报告。如果此信头存在,你就**不**应该发送回复。这个信头的值不重要。
### 非标准方式
上述方法定义明确(即使有些是非标准的)。不幸的是,有些电子邮件系统不使用它们中的任何一个 :-( 这里有一些额外的措施。
#### Precedence 信头
在 [RFC 2076](http://www.faqs.org/rfcs/rfc2076.html) 中没有真正定义,不鼓励使用它(但通常会遇到此信头)。
请注意,不建议检查是否存在此信头,因为某些邮件使用 `normal` 和其他一些(少见的)值(尽管这不常见)。
我的建议是如果其值不区分大小写地匹配 `bulk`、`auto_reply` 或 `list`,则**不**发送回复。
#### 其他不常见的信头
这是我遇到的另外一些(不常见的)信头。如果设置了其中任何一个,我建议**不要**发送自动回复。大多数邮件也设置了上述信头之一,但有些没有(这并不常见)。
* `X-MSFBL`:无法真正找到定义(Microsoft 信头?),但我只有自动生成的邮件带有此信头。
* `X-Loop`:在任何地方都没有真正定义过,有点罕见,但偶尔会遇到。它通常设置为不应该收到电子邮件的地址,但也会遇到 `X-Loop: yes`。
* `X-Autoreply`:相当罕见,并且似乎总是具有 `yes` 的值。
#### Email 地址
检查 `From` 或 `Reply-To` 信头是否包含 `noreply`、`no-reply` 或 `no_reply`(正则表达式:`^no.?reply@`)。
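例如,下面是一个简单的示意(`message.eml` 是一个假设的、保存了邮件头的文件),用 `grep` 做同样的检查:

```
grep -Ei '^(from|reply-to):.*no.?reply@' message.eml && echo "不要回复这封邮件"
```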
#### 只有 HTML 部分
如果电子邮件只有 HTML 部分,而没有文本部分,则表明这是一个自动生成的邮件或新闻订阅。几乎所有邮件客户端都设置了文本部分。
#### 投递失败消息
许多传递失败消息并不能真正表明它们是失败的。一些检查方法:
* `From` 包含 `mailer-daemon` 或 `Mail Delivery Subsystem`
#### 特定的邮件库特征
许多邮件类库留下了某种痕迹,大多数常规邮件客户端使用自己的数据覆盖它。检查这个似乎工作得相当可靠。
* `X-Mailer: Microsoft CDO for Windows 2000`:由某些微软软件设置;我只能在自动生成的邮件中找到它。是的,在 2015 年它仍然在使用。
* `Message-ID` 信头包含 `.JavaMail.`:我只发现了少量(5 万封中有 5 封)常规消息;绝大多数(数千封)邮件是新闻订阅、订单确认等。
* `^X-Mailer` 以 `PHP` 开头。这应该会同时看到 `X-Mailer: PHP/5.5.0` 和 `X-Mailer: PHPmailer XXX XXX`。与 “JavaMail” 相同。
* 出现了 `X-Library`;似乎只有 [Indy](http://www.indyproject.org/index.en.aspx) 设定了这个。
* `X-Mailer` 以 `wdcollect` 开头。由一些 Plesk 邮件设置。
* `X-Mailer` 以 `MIME-tools` 开头。
### 最后的预防措施:限制回复的数量
即使遵循上述所有建议,你仍可能会遇到一个避开所有这些检测的电子邮件程序。这可能非常危险,因为两个只会“见邮件就回复”的电子邮件系统很可能陷入无限的邮件循环。
出于这个原因,我建议你记录你自动发送的电子邮件,并将此速率限制为在几分钟内最多几封电子邮件。这将打破循环链条。
我们使用每五分钟一封电子邮件的设置,但没这么严格的设置可能也会运作良好。
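具体实现因邮件系统而异;下面是一个基于时间戳文件的最小示意(文件路径和 `send_reply` 函数都是假设的),演示“每五分钟最多一封”的思路:

```
stamp=/var/tmp/autoreply.stamp
now=$(date +%s)
last=$(stat -c %Y "$stamp" 2>/dev/null || echo 0)
if [ $((now - last)) -ge 300 ]; then
    touch "$stamp"
    send_reply # 假设的发送函数
fi
```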
### 你需要为自动回复设置什么信头
具体细节取决于你发送的邮件类型。这是我们用于自动回复邮件的内容:
```
Auto-Submitted: auto-replied
X-Auto-Response-Suppress: All
Precedence: auto_reply
```
### 反馈
你可以发送电子邮件至 [[email protected]](mailto:[email protected]) 或 [创建 GitHub 议题](https://github.com/Carpetsmoker/arp242.net/issues/new)以提交反馈、问题等。
---
via: <https://arp242.net/weblog/autoreply.html>
作者:[Martin Tournoij](https://arp242.net/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,202 | 什么是黄金镜像? | https://opensource.com/article/19/7/what-golden-image | 2019-08-08T18:42:43 | [
"黄金镜像"
] | https://linux.cn/article-11202-1.html |
>
> 正在开发一个将广泛分发的项目吗?了解一下黄金镜像吧,以便在出现问题时轻松恢复到“完美”状态。
>
>
>

如果你从事质量保证、系统管理或媒体制作(没想到吧)方面的工作,你可能听说过<ruby> 正式版 <rt> gold master </rt></ruby>这一术语的某些变体,如<ruby> 黄金镜像 <rt> golden image </rt></ruby>或<ruby> 母片 <rt> master image </rt></ruby>等等。这个术语已经进入了每个参与创建**完美**模具的人的集体意识,然后从该模具中产生许多复制品。母片或黄金镜像就是:一种虚拟模具,你可以从中打造可分发的模型。
在媒体制作中,这就是所有人努力开发母片的过程。这个最终产品是独一无二的。它看起来和听起来像是可以看和听的最好的电影或专辑(或其他任何东西)。可以制作和压缩该母片的副本并发送给急切的公众。
在软件中,与该术语相关联的也是类似的意思。一旦软件经过编译和一再测试,完美的构建成果就会被声明为**黄金版本**,不允许对它进一步更改,并且所有可分发的副本都是从此母片生成的(当软件是用 CD 或 DVD 分发时,这实际上就是母盘)。
在系统管理中,你可能会遇到你的机构所选的操作系统的黄金镜像,其中的重要设置已经就绪,如安装好的虚拟专用网络(VPN)证书、设置好的电子邮件收件服务器的邮件客户端等等。同样,你可能也会在虚拟机(VM)的世界中听到这个术语,其中精心配置了虚拟驱动器的黄金镜像是所有克隆的新虚拟机的源头。
### GNOME Boxes
正式版的概念很简单,但将其付诸实践却往往被忽视。有时,你的团队很高兴能够达成他们的目标,但没有人停下来考虑将这些成就指定为权威版本。在其他时候,没有简单的机制来做到这一点。
黄金镜像等同于部分历史的保存和提前备份计划。一旦你制作了一个完美的模型,无论你正在努力做什么,你都应该为自己保留这项工作,因为它不仅标志着你的进步,而且如果你继续工作时遇到问题,它就会成为一个后备。
[GNOME Boxes](https://wiki.gnome.org/Apps/Boxes),是随 GNOME 桌面一起提供的虚拟化平台,可以用作简单的演示用途。如果你从未使用过 GNOME Boxes,你可以在 Alan Formy-Duval 的文章 [GNOME Boxes 入门](https://opensource.com/article/19/5/getting-started-gnome-boxes-virtualization)中学习它的基础知识。
想象一下,你使用 GNOME Boxes 创建虚拟机,然后将操作系统安装到该 VM 中。现在,你想要制作一个黄金镜像。GNOME Boxes 已经率先摄取了你的安装快照,可以作为更多的操作系统安装的黄金镜像。
打开 GNOME Boxes 并在仪表板视图中,右键单击任何虚拟机,然后选择**属性**。在**属性**窗口中,选择**快照**选项卡。由 GNOME Boxes 自动创建的第一个快照是“Just Installed”。顾名思义,这是你最初安装到虚拟机上的操作系统。

如果你的虚拟机变成了你不想要的状态,你可以随时恢复为“Just Installed”镜像。
当然,如果你已经为自己调整了环境,那么在安装后恢复操作系统将是一个极大的工程。这就是为什么虚拟机的常见工作流程是:首先安装操作系统,然后根据你的要求或偏好修改它,然后拍摄快照,将该快照声明为配置好的黄金镜像。例如,如果你使用虚拟机进行 [Flatpak](https://opensource.com/business/16/8/flatpak) 打包,那么在初始安装之后,你可以添加软件和 Flatpak 开发工具,构建工作环境,然后拍摄快照。创建快照后,你可以重命名该虚拟机以指示其真实用途。
要重命名虚拟机,请在仪表板视图中右键单击其缩略图,然后选择**属性**。在**属性**窗口中,输入新名称:

要克隆你的黄金映像,请右键单击 GNOME Boxes 界面中的虚拟机,然后选择**克隆**。

你现在可以从黄金映像的最新快照中克隆了。
### 黄金镜像
很少有学科无法从黄金镜像中受益。无论你是在 [Git](https://git-scm.com) 中标记版本、在 Boxes 中拍摄快照、出版原型黑胶唱片、打印书籍以进行审核、设计用于批量生产的丝网印刷,还是制作一个真正的模具,原型就是一切。这只是现代技术让我们人类更聪明而不是更辛苦地工作的又一种方式,因此请为你的项目制作一个黄金镜像,并根据需要随时生成克隆吧。
---
via: <https://opensource.com/article/19/7/what-golden-image>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | If you’re in quality assurance, system administration, or (believe it or not) media production, you might have heard some variation of the term *gold master*, *golden image*, or *master image*, and so on. It’s a term that has made its way into the collective consciousness of anyone involved in creating one *perfect* model and then producing many duplicates from that mold. That’s what a gold master, or golden image, is: The virtual mold from which you cast your distributable models.
In media production, the theory is that a crew works toward the gold master. This final product is one of a kind. It looks and sounds the best a movie or an album (or whatever it is) can possibly look and sound. Copies of this master image are made, compressed, and sent out to the eager public.
In software, a similar idea is associated with the term. Once software has been compiled and tested and re-tested, the perfect build is declared *gold*. No further changes are allowed, and all distributable copies are generated from this master image (this used to actually mean something, back when software was distributed on CDs or DVDs).
And in system administration, you may encounter golden images of an organization’s chosen operating system, with the important settings *baked in—*the virtual private network (VPN) certificates are already in place, incoming email servers are already set in the email client, and so on. Similarly, you might also hear this term in the world of virtual machines (VMs), where a *golden image* of a carefully configured virtual drive is the source from which all new virtual machines are cloned.
## GNOME Boxes
The concept of a gold master is simple, but putting it into practice is often overlooked. Sometimes, your team is just so happy to have reached their goal that no one stops to think about designating the achievement as the authoritative version. At other times, there’s no simple mechanism for doing this.
A golden image is equal parts historical preservation and a backup plan in advance. Once you craft a perfect model of whatever it is you were toiling over, you owe it to yourself to preserve that work, because it not only marks your progress, but it serves as a fallback should you stumble as you continue your work.
[GNOME Boxes](https://wiki.gnome.org/Apps/Boxes), the virtualization platform that ships with the GNOME desktop, can provide a simple demonstration. If you’ve never used GNOME Boxes, you can learn the basics from Alan Formy-Duval in his article [Getting started with GNOME Boxes](https://opensource.com/article/19/5/getting-started-gnome-boxes-virtualization).
Imagine that you used GNOME boxes to create a virtual machine, and then installed an operating system into that VM. Now, you want to make a golden image. GNOME Boxes is one step ahead of you: It has already taken a snapshot of your install, which can serve as the golden image for a stock OS installation.
With GNOME Boxes open and in the dashboard view, right-click on any virtual machine and select **Properties**. In the **Properties** window, select the **Snapshots** tab. The first snapshot, created automatically by GNOME Boxes, is **Just Installed**. As its name suggests, this is the operating system as you originally installed it onto its virtual machine.

Should your virtual machine reach a state that you did not intend, you can always revert to this **Just Installed** image.
Of course, reverting back to the OS after it’s just been installed would be a drastic measure if you’ve already fine-tuned the environment for yourself. That’s why it’s a common workflow with virtual machines to first install the OS, then modify it to suit your requirements or preferences, and then take a snapshot, declaring that snapshot as your configured golden image. For instance, if you are using the virtual machine for [Flatpak](https://opensource.com/business/16/8/flatpak) packaging, then after your initial install you might add software and Flatpak development tools, construct your working environment, and then take a snapshot. Once the snapshot is created, you can rename the virtual machine to indicate its true purpose in life.
To rename a virtual machine, right-click on its thumbnail in the dashboard view, and select **Properties**. In the **Properties** window, enter a new name:

To make a clone of your golden image, right-click on the virtual machine in the GNOME Boxes interfaces and select **Clone**.

You now have a clone from your golden image’s latest snapshot.
## Golden
There are few disciplines that can’t benefit from golden images. Whether you’re tagging releases in [Git](https://git-scm.com), taking snapshots in Boxes, pressing a prototype vinyl, printing a book for approval, designing a screen print for mass production, or fashioning a literal mold, the archetype is everything. It’s just one more way that modern technology lets us humans work smarter rather than harder, so make a golden image for your project, and generate clones as often as you need.
11,204 | 两种 cp 命令的绝佳用法的快捷方式 | https://opensource.com/article/18/1/two-great-uses-cp-command-update | 2019-08-08T23:39:14 | [
"cp"
] | https://linux.cn/article-11204-1.html |
>
> 这篇文章是关于如何在使用 cp 命令进行备份以及同步时提高效率。
>
>
>

去年七月,我写了一篇[关于 cp 命令的两种绝佳用法](https://opensource.com/article/17/7/two-great-uses-cp-command)的文章:备份一个文件,以及同步一个文件夹的备份。
虽然这些工具确实很好用,但输入这些命令实在太累赘了。为了解决这个问题,我在我的 Bash 启动文件里创建了一些 Bash 快捷方式。现在,我想把这些快捷方式分享给你们,以便于你们在需要的时候拿来使用,或者给那些还不知道怎么使用 Bash 别名以及函数的用户提供一些思路。
### 使用 Bash 别名来更新一个文件夹的副本
如果要使用 `cp` 来更新一个文件夹的副本,通常会使用到的命令是:
```
cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY
```
其中 `-r` 代表“向下递归访问文件夹中的所有文件”,`-u` 代表“更新目标”,`-v` 代表“详细模式”,`SOURCE-FOLDER` 是包含最新文件的文件夹的名称,`DESTINATION-DIRECTORY` 是包含必须同步的`SOURCE-FOLDER` 副本的目录。
因为我经常使用 `cp` 命令来复制文件夹,我会很自然地想起使用 `-r` 选项。也许再想得深入一些,我还可以想起用 `-v` 选项;如果再深入一层,我会想起用选项 `-u`(不知道这个选项是代表“更新”还是“同步”还是其它什么)。
或者,还可以使用 [Bash 的别名功能](https://opensource.com/article/17/5/introduction-alias-command-line-tool)来将 `cp` 命令以及其后的选项转换成一个更容易记忆的单词,就像这样:
```
alias sync='cp -r -u -v'
```
如果我将其保存在我的主目录中的 `.bash_aliases` 文件中,然后启动一个新的终端会话,我可以使用该别名了,例如:
```
sync Pictures /media/me/4388-E5FE
```
可以将我的主目录中的图片文件夹与我的 USB 驱动器中的相同版本同步。
不清楚 `sync` 是否已经定义了?你可以在终端里输入 `alias` 这个单词来列出所有正在使用的命令别名。
喜欢吗?想要现在就立即使用吗?那就现在打开终端,输入:
```
echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases
```
然后启动一个新的终端窗口并在命令提示符下键入 `alias`。你应该看到这样的东西:
```
me@mymachine~$ alias
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias gvm='sdk'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias sync='cp -r -u -v'
me@mymachine:~$
```
这里你能看到 `sync` 已经定义了。
### 使用 Bash 函数来为备份编号
若要使用 `cp` 来备份一个文件,通常使用的命令是:
```
cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE
```
其中 `--force` 代表“强制制作副本”,`--backup= numbered` 代表“使用数字表示备份的生成”,`WORKING-FILE` 是我们希望保留的当前文件,`BACKED-UP-FILE` 与 `WORKING-FILE` 的名称相同,并附加生成信息。
我们不仅需要记住所有 `cp` 的选项,还需要记得重复输入 `WORKING-FILE` 的名字。但是,既然 [Bash 的函数功能](https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions)可以帮我们承担这些重复劳动,为什么还要自己动手呢?就像这样:
再提醒一次,你可以将下列内容保存到主目录下的 `.bash_aliases` 文件里:
```
function backup {
if [ $# -ne 1 ]; then
echo "Usage: $0 filename"
elif [ -f $1 ] ; then
echo "cp --force --backup=numbered $1 $1"
cp --force --backup=numbered $1 $1
else
echo "$0: $1 is not a file"
fi
}
```
我将此函数称之为 `backup`,因为我的系统上没有任何其他名为 `backup` 的命令,但你可以选择适合的任何名称。
第一个 `if` 语句是用于检查是否提供有且只有一个参数,否则,它会用 `echo` 命令来打印出正确的用法。
`elif` 语句是用于检查提供的参数所指向的是一个文件,如果是的话,它会用第二个 `echo` 命令来打印所需的 `cp` 的命令(所有的选项都是用全称来表示)并且执行它。
如果所提供的参数不是一个文件,文件中的第三个 `echo` 用于打印错误信息。
在我的主目录下,如果我对文件 `checkCounts.sql` 执行上面定义的 `backup` 命令,就可以发现目录下多了一个名为 `checkCounts.sql.~1~` 的文件;如果再执行一次,便又多了一个名为 `checkCounts.sql.~2~` 的文件。
成功了!就像所想的一样,我可以继续编辑 `checkCounts.sql`,但如果我可以经常地用这个命令来为文件制作快照的话,我可以在我遇到问题的时候回退到最近的版本。
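需要回退时,只需把想要的那一代备份复制回来即可。例如,回退到第一代快照(这只是一个示例,请按需选择代数):

```
cp checkCounts.sql.~1~ checkCounts.sql
```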
也许在未来的某个时间,使用 `git` 作为版本控制系统会是一个更好的主意。但在你还没准备好使用 `git`、又需要创建快照时,像上文所介绍的 `backup` 就是一个简单又实惠的好工具。
### 结论
在我的上一篇文章里,我承诺过:通过使用脚本、shell 函数以及别名,通常可以轻松地简化重复性的机械动作,从而提高生产效率。
在这篇文章里,我已经展示了如何在使用 `cp` 命令同步或者备份文件时运用 shell 函数以及别名功能来简化操作。如果你想要了解更多,可以读一下这两篇文章:[怎样通过使用命令别名功能来减少敲击键盘的次数](https://opensource.com/article/17/5/introduction-alias-command-line-tool) 以及由我的同事 Greg 和 Seth 写的 [Shell 编程:shift 方法和自定义函数介绍](https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions)。
---
via: <https://opensource.com/article/18/1/two-great-uses-cp-command-update>
作者:[Chris Hermansen](https://opensource.com/users/clhermansen) 译者:[zyk2290](https://github.com/zyk2290) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Last July, I wrote about [two great uses for the cp command](https://opensource.com/article/17/7/two-great-uses-cp-command): making a backup of a file, and synchronizing a secondary copy of a folder.
Having discovered these great utilities, I find that they are more verbose than necessary, so I created shortcuts to them in my Bash shell startup script. I thought I’d share these shortcuts in case they are useful to others or could offer inspiration to Bash users who haven’t quite taken on aliases or shell functions.
## Updating a second copy of a folder – Bash alias
The general pattern for updating a second copy of a folder with `cp` is:
```
cp -r -u -v SOURCE-FOLDER DESTINATION-DIRECTORY
```
where the `-r` stands for “recursively descend through the folder visiting all files”, `-u` stands for “update the target” and `-v` stands for “verbose mode”, `SOURCE-FOLDER` is the name of the folder that contains the most up-to-date information, and `DESTINATION-DIRECTORY` is the directory containing the copy of the `SOURCE-FOLDER` that must be synchronized.
I can easily remember the `-r` option because I use it often when copying folders around. I can probably, with some more effort, remember `-v`, and with even more effort, `-u` (is it “update” or “synchronize” or…).
Or I can just use the [alias capability in Bash](https://opensource.com/article/17/5/introduction-alias-command-line-tool) to convert the `cp` command and options to something more memorable, like this:
```
alias sync='cp -r -u -v'
```
If I save this in my `.bash_aliases` file in my home directory and then start a new terminal session, I can use the alias, for example:
```
sync Pictures /media/me/4388-E5FE
```
to synchronize my Pictures folder in my home directory with the version of the same in my USB drive.
Not sure if you already have a `sync` alias defined? You can list all your currently defined aliases by typing the word `alias` at the command prompt in your terminal window.
Like this so much you just want to start using it right away? Open a terminal window and type:
```
echo "alias sync='cp -r -u -v'" >> ~/.bash_aliases
```
Then start up a new terminal window and type the word `alias` at the command prompt. You should see something like this:
```
me@mymachine~$ alias
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias gvm='sdk'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias sync='cp -r -u -v'
me@mymachine:~$
```
There you can see the `sync` alias defined.
## Making versioned backups – Bash function
The general pattern for making a backup of a file with `cp` is:
```
cp --force --backup=numbered WORKING-FILE BACKED-UP-FILE
```
where the `--force` stands for “make the copy no matter what”, the `--backup=numbered` stands for “use a number to indicate the generation of backup”, `WORKING-FILE` is the current file we wish to preserve, and `BACKED-UP-FILE` is the same name as the `WORKING-FILE` and will have the generation information appended.
Besides remembering the options to the `cp` command, we also need to remember to repeat the `WORKING-FILE` name a second time. But why repeat ourselves when [a Bash function](https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions) can take care of that overhead for us, like this:
Again, you can save this to your `.bash_aliases` file in your home directory.
```
function backup {
if [ $# -ne 1 ]; then
echo "Usage: $0 filename"
elif [ -f $1 ] ; then
echo "cp --force --backup=numbered $1 $1"
cp --force --backup=numbered $1 $1
else
echo "$0: $1 is not a file"
fi
}
```
I called this function “backup” because I don’t have any other commands called “backup” on my system, but you can choose whatever name suits.
The first `if` statement checks to make sure that only one argument is provided to the function, otherwise printing the correct usage with the `echo` command.
The `elif` statement checks to make sure the argument provided is a file, and if so, it (verbosely) uses the second `echo` to print the `cp` command to be used and then executes it.
If the single argument is not a file, the third `echo` prints an error message to that effect.
In my home directory, if I execute the `backup` command so defined on the file `checkCounts.sql`, I see that `backup` creates a file called `checkCounts.sql.~1~`. If I execute it once more, I see a new file `checkCounts.sql.~2~`.
Success! As planned, I can go on editing `checkCounts.sql`, but if I take a snapshot of it every so often with backup, I can return to the most recent snapshot should I run into trouble.
At some point, it’s better to start using `git` for version control, but `backup` as defined above is a nice cheap tool when you need to create snapshots but you’re not ready for `git`.
## Conclusion
In my last article, I promised you that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases.
Here I’ve shown concrete examples of the use of shell aliases and shell functions to streamline the synchronize and backup functionality of the `cp` command. If you’d like to learn more about this, check out the two articles cited above: [How to save keystrokes at the command line with alias](https://opensource.com/article/17/5/introduction-alias-command-line-tool) and [Shell scripting: An introduction to the shift method and custom functions](https://opensource.com/article/17/1/shell-scripting-shift-method-custom-functions), written by my colleagues Greg and Seth, respectively.
11,206 | COPR 仓库中 4 个很酷的新项目(2019.8) | https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-august-2019/ | 2019-08-09T21:22:04 | [
"COPR"
] | https://linux.cn/article-11206-1.html | 
COPR 是个人软件仓库的[集合](https://copr.fedorainfracloud.org/),其中的软件并不包含在 Fedora 中。这是因为某些软件不符合轻松打包的标准;或者它可能不符合其他 Fedora 标准,尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有由该项目签名。但是,这是一种尝试新的或实验性的软件的巧妙方式。
这是 COPR 中一组新的有趣项目。
### Duc
[duc](https://duc.zevv.nl/) 是磁盘使用率检查和可视化工具的集合。Duc 使用索引数据库来保存系统上文件的大小。索引完成后,你可以通过命令行界面或 GUI 快速查看磁盘使用情况。

#### 安装说明
该[仓库](https://copr.fedorainfracloud.org/coprs/terrywang/duc/)目前为 EPEL 7、Fedora 29 和 30 提供 duc。要安装 duc,请使用以下命令:
```
sudo dnf copr enable terrywang/duc
sudo dnf install duc
```
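如果试用之后不想保留这个仓库,可以随时用 `dnf copr disable` 将其禁用(以 duc 为例):

```
sudo dnf copr disable terrywang/duc
```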
### MuseScore
[MuseScore](https://musescore.org/) 是一个处理乐谱的软件。使用 MuseScore,你可以使用鼠标、虚拟键盘或 MIDI 控制器创建乐谱。然后,MuseScore 可以播放创建的音乐或将其导出为 PDF、MIDI 或 MusicXML。此外,它还有一个由 MuseScore 用户创建的含有大量乐谱的数据库。

#### 安装说明
该[仓库](https://copr.fedorainfracloud.org/coprs/jjames/MuseScore/)目前为 Fedora 29 和 30 提供 MuseScore。要安装 MuseScore,请使用以下命令:
```
sudo dnf copr enable jjames/MuseScore
sudo dnf install musescore
```
### 动态墙纸编辑器
[动态墙纸编辑器](https://github.com/maoschanz/dynamic-wallpaper-editor) 是一个可在 GNOME 中创建和编辑随时间变化的壁纸集合的工具。这可以使用 XML 文件来完成,但是,动态墙纸编辑器通过其图形界面使其变得简单,你可以在其中简单地添加图片、排列图片并设置每张图片的持续时间以及它们之间的过渡。

#### 安装说明
该[仓库](https://copr.fedorainfracloud.org/coprs/atim/dynamic-wallpaper-editor/)目前为 Fedora 30 和 Rawhide 提供动态墙纸编辑器。要安装它,请使用以下命令:
```
sudo dnf copr enable atim/dynamic-wallpaper-editor
sudo dnf install dynamic-wallpaper-editor
```
### Manuskript
[Manuskript](https://www.theologeek.ch/manuskript/) 是一个给作者的工具,旨在让创建大型写作项目更容易。它既可以作为编写文本的编辑器,也可以作为组织故事本身、故事人物和单个情节的注释的工具。

#### 安装说明
该[仓库](https://copr.fedorainfracloud.org/coprs/notsag/manuskript/)目前为 Fedora 29、30 和 Rawhide 提供 Manuskript。要安装 Manuskript,请使用以下命令:
```
sudo dnf copr enable notsag/manuskript
sudo dnf install manuskript
```
---
via: <https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-august-2019/>
作者:[Dominik Turecek](https://fedoramagazine.org/author/dturecek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | COPR is a [collection](https://copr.fedorainfracloud.org/) of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here’s a set of new and interesting projects in COPR.
### Duc
[Duc](https://duc.zevv.nl/) is a collection of tools for disk usage inspection and visualization. Duc uses an indexed database to store sizes of files on your system. Once the indexing is done, you can then quickly overview your disk usage either by its command-line interface or the GUI.

#### Installation instructions
The [repo](https://copr.fedorainfracloud.org/coprs/terrywang/duc/) currently provides duc for EPEL 7, Fedora 29 and 30. To install duc, use these commands:
sudo dnf copr enable terrywang/duc
sudo dnf install duc
### MuseScore
[MuseScore](https://musescore.org/) is a software for working with music notation. With MuseScore, you can create sheet music either by using a mouse, virtual keyboard or a MIDI controller. MuseScore can then play the created music or export it as a PDF, MIDI or MusicXML. Additionally, there’s an extensive database of sheet music created by Musescore users.

#### Installation instructions
The [repo](https://copr.fedorainfracloud.org/coprs/jjames/MuseScore/) currently provides MuseScore for Fedora 29 and 30. To install MuseScore, use these commands:
sudo dnf copr enable jjames/MuseScore
sudo dnf install musescore
### Dynamic Wallpaper Editor
[Dynamic Wallpaper Editor](https://github.com/maoschanz/dynamic-wallpaper-editor) is a tool for creating and editing a collection of wallpapers in GNOME that change in time. This can be done using XML files, however, Dynamic Wallpaper Editor makes this easy with its graphical interface, where you can simply add pictures, arrange them and set the duration of each picture and transitions between them.

#### Installation instructions
The [repo](https://copr.fedorainfracloud.org/coprs/atim/dynamic-wallpaper-editor/) currently provides dynamic-wallpaper-editor for Fedora 30 and Rawhide. To install dynamic-wallpaper-editor, use these commands:
sudo dnf copr enable atim/dynamic-wallpaper-editor
sudo dnf install dynamic-wallpaper-editor
### Manuskript
[Manuskript](https://www.theologeek.ch/manuskript/) is a tool for writers and is aimed to make creating large writing projects easier. It serves as an editor for writing the text itself, as well as a tool for organizing notes about the story itself, characters of the story and individual plots.

#### Installation instructions
The [repo](https://copr.fedorainfracloud.org/coprs/notsag/manuskript/) currently provides Manuskript for Fedora 29, 30 and Rawhide. To install Manuskript, use these commands:
sudo dnf copr enable notsag/manuskript
sudo dnf install manuskript
## Joe Fidler
Really diverse and interesting group of projects. Thanks for this.
## Alan Gruskoff
The MuseScore software installed just fine and works well on my Fedora 30 box.
Thanks
## Jeff Sandys
jjames MuseScore is version 3, MuseScore 2 is in the fedora repos also works great. Version 3 is coming in f31.
## C. W.
Musescore2 is already in Fedora’s repos.
## Ravindu Meegasmulla
This is much useful, Thank you
## Ruben
These are actually interesting suggestions. MuseScore works really nice and I’ll check out the Wallpaper Editor! Thanks 🙂
## User
Why copr? Use flatpak!
## Elliott S
Note, MuseScore is already in Fedora. The copr is still useful though, as it’s a backport of the new 3.x version that isn’t yet in older Fedora.
## hammerhead corvette
Dynamic Wallpaper Editor is available as a flatpak.
## Nandan Bhat
Tried duc. Very nice! |
11,207 | 微软发现由俄罗斯背后支持的利用物联网设备进行的攻击 | https://www.networkworld.com/article/3430356/microsoft-finds-russia-backed-attacks-that-exploit-iot-devices.html | 2019-08-09T23:48:00 | [
"安全",
"IoT"
] | https://linux.cn/article-11207-1.html |
>
> 微软表示,默认密码、未打补丁的设备,物联网设备库存不足是导致俄罗斯的 STRONTIUM 黑客组织发起针对公司的攻击的原因。
>
>
>

在微软安全响应中心周一发布的博客文章中,该公司称,STRONTIUM 黑客组织对未披露名字的微软客户进行了基于 [IoT](https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html) 的攻击,安全研究人员相信 STRONTIUM 黑客组织和俄罗斯 GRU 军事情报机构有密切的关系。
微软[在博客中说](https://msrc-blog.microsoft.com/2019/08/05/corporate-iot-a-path-to-intrusion/),它在 4 月份发现的攻击针对三种特定的物联网设备:一部 VoIP 电话、一部视频解码器和一台打印机(该公司拒绝说明品牌),并将它们用于获得对不特定的公司网络的访问权限。其中两个设备遭到入侵是因为没有更改过制造商的默认密码,而另一个设备则是因为没有应用最新的安全补丁。
以这种方式受到攻击的设备成为了安全的网络的后门,允许攻击者自由扫描这些网络以获得进一步的漏洞,并访问其他系统获取更多的信息。攻击者也被发现其在调查受攻击网络上的管理组,试图获得更多访问权限,以及分析本地子网流量以获取其他数据。
STRONTIUM,也被称为 Fancy Bear、Pawn Storm、Sofacy 和 APT28,被认为是代表俄罗斯政府进行的一系列恶意网络活动的幕后黑手,其中包括 2016 年对民主党全国委员会的攻击,对世界反兴奋剂机构的攻击,针对记者调查马来西亚航空公司 17 号航班在乌克兰上空被击落的情况,向美国军人的妻子发送捏造的死亡威胁等等。
根据 2018 年 7 月特别顾问罗伯特·穆勒办公室发布的起诉书,STRONTIUM 袭击的指挥者是一群俄罗斯军官,所有这些人都被 FBI 通缉与这些罪行有关。
微软通知客户发现其遭到了民族国家的攻击,并在过去 12 个月内发送了大约 1,400 条与 STRONTIUM 相关的通知。微软表示,其中大多数(五分之四)是对政府、军队、国防、IT、医药、教育和工程领域的组织的攻击,其余的则是非政府组织、智囊团和其他“政治附属组织”。
根据微软团队的说法,漏洞的核心是机构缺乏对其网络上运行的所有设备的充分认识。另外,他们建议对在企业环境中运行的所有 IoT 设备进行编目,为每个设备实施自定义安全策略,在可行的情况下在各自独立的网络上屏蔽物联网设备,并对物联网组件执行定期补丁和配置审核。
---
via: <https://www.networkworld.com/article/3430356/microsoft-finds-russia-backed-attacks-that-exploit-iot-devices.html>
作者:[Jon Gold](https://www.networkworld.com/author/Jon-Gold/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,208 | 理解软件设计模式 | https://opensource.com/article/19/7/understanding-software-design-patterns | 2019-08-10T08:09:00 | [] | https://linux.cn/article-11208-1.html |
>
> 设计模式可以帮助消除冗余代码。学习如何利用 Java 使用单例模式、工厂模式和观察者模式。
>
>
>

如果你是一名程序员,或者是正在学习计算机科学或相关学科的学生,很快,你就会遇到一个术语 “<ruby> 软件设计模式 <rt> software design pattern </rt></ruby>”。根据维基百科,“*[软件设计模式](https://en.wikipedia.org/wiki/Software_design_pattern)是在平常的软件设计工作中所遭遇的问题的一种通用的、可重复使用的解决方案*”。我对该定义的理解是:当在从事一个编码项目时,你经常会思考,“嗯,这里貌似是冗余代码,我觉得是否能改变这些代码使之更灵活和便于修改?”因此,你会开始考虑怎样分割那些保持不变的内容和需要经常改变的内容。
>
> **设计模式**是一种通过分割那些保持不变的部分和经常变化的部分,让你的代码更容易修改的方法。
>
>
>
不出意外的话,每个从事编程项目的人都可能会有同样的思考。特别是那些工业级别的项目,在那里通常工作着数十甚至数百名开发者;协作过程表明必须有一些标准和规则来使代码更加优雅并适应变化。这就是为什么我们有了 [面向对象编程](https://en.wikipedia.org/wiki/Object-oriented_programming)(OOP)和 [软件框架工具](https://en.wikipedia.org/wiki/Software_framework)。设计模式有点类似于 OOP,但它通过将变化视为自然开发过程的一部分而进一步发展。基本上,设计模式利用了一些 OOP 的思想,比如抽象和接口,但是专注于改变的过程。
当你开始开发项目时,你经常会听到这样一个术语*重构*,它意味着*通过改变代码使它变得更优雅和可复用*;这就是设计模式耀眼的地方。当你处理现有代码时(无论是由其他人构建还是你自己过去构建的),了解设计模式可以帮助你以不同的方式看待事物,你将发现问题以及改进代码的方法。
有很多种设计模式,其中单例模式、工厂模式和观察者模式三种最受欢迎,在这篇文章中我将会一一介绍它们。
### 如何遵循本指南
无论你是一位有经验的编程工作者,还是一名刚刚接触编程的新手,我都想让这篇教程对每个人来说容易理解。设计模式的概念并不容易理解,降低入门时的学习曲线始终是首要任务。因此,除了这篇带有图表和代码片段的文章外,我还创建了一个 [GitHub 仓库](https://github.com/bryantson/OpensourceDotComDemos/tree/master/TopDesignPatterns),你可以克隆仓库并在你的电脑上运行这些代码来实现这三种设计模式。你也可以观看我创建的 [YouTube 视频](https://www.youtube.com/watch?v=VlBXYtLI7kE&feature=youtu.be)。
#### 必要条件
如果你只是想了解一般的设计模式思想,则无需克隆示例项目或安装任何工具。但是,如果要运行示例代码,你需要安装以下工具:
* **Java 开发套件(JDK)**:我强烈建议使用 [OpenJDK](https://openjdk.java.net/)。
* **Apache Maven**:这个简单的项目使用 [Apache Maven](https://maven.apache.org/) 构建;好在许多 IDE 都自带了 Maven。
* **交互式开发编辑器(IDE)**:我使用 [社区版 IntelliJ](https://www.jetbrains.com/idea/download/#section=mac),但是你也可以使用 [Eclipse IDE](https://www.eclipse.org/ide/) 或者其他你喜欢的 Java IDE。
* **Git**:如果你想克隆这个工程,你需要 [Git](https://git-scm.com/) 客户端。
安装好 Git 后运行下列命令克隆这个工程:
```
git clone https://github.com/bryantson/OpensourceDotComDemos.git
```
然后在你喜欢的 IDE 中,你可以将 TopDesignPatterns 仓库中的代码作为 Apache Maven 项目导入。
我使用的是 Java,但你也可以使用支持[抽象原则](https://en.wikipedia.org/wiki/Abstraction_principle_(computer_programming))的任何编程语言来实现设计模式。
### 单例模式:避免每次创建一个对象
<ruby> <a href="https://en.wikipedia.org/wiki/Singleton_pattern"> 单例模式 </a> <rt> singleton pattern </rt></ruby>是非常流行的设计模式,它的实现相对来说很简单,因为你只需要一个类。然而,许多开发人员争论单例设计模式是否利大于弊,因为它缺乏明显的好处并且容易被滥用。很少有开发人员直接实现单例;相反,像 Spring Framework 和 Google Guice 等编程框架内置了单例设计模式的特性。
但是了解单例模式仍然有巨大的用处。单例模式确保一个类仅创建一次且提供了一个对它的全局访问点。
>
> **单例模式**:确保仅创建一个实例且避免在同一个项目中创建多个实例。
>
>
>
下面这幅图展示了典型的类对象创建过程。当客户端请求创建一个对象时,构造函数会创建或者实例化一个对象,并通过方法调用将这个对象返回给调用者。但是每次请求对象时都会发生同样的事情:构造函数被调用,一个新的对象被创建,并返回一个独一无二的对象。我猜面向对象语言的创建者有每次都创建一个新对象的理由,但是单例模式的支持者说这是冗余的且浪费资源。

下面这幅图使用单例模式创建对象。这里,构造函数仅当对象首次通过调用预先设计好的 `getInstance()` 方法时才会被调用。这通常通过检查该值是否为 `null` 来完成,并且这个对象被作为私有变量保存在单例类的内部。下次 `getInstance()` 被调用时,这个类会返回第一次被创建的对象。而没有新的对象产生;它只是返回旧的那一个。

下面这段代码展示了创建单例模式最简单的方法:
```
package org.opensource.demo.singleton;
public class OpensourceSingleton {
private static OpensourceSingleton uniqueInstance;
private OpensourceSingleton() {
}
public static OpensourceSingleton getInstance() {
if (uniqueInstance == null) {
uniqueInstance = new OpensourceSingleton();
}
return uniqueInstance;
}
}
```
在调用方,这里展示了如何调用单例类来获取对象:
```
OpensourceSingleton newObject = OpensourceSingleton.getInstance();
```
这段代码很好地验证了单例模式的思想:
1. 当 `getInstance()` 被调用时,它通过检查 `null` 值来检查对象是否已经被创建。
2. 如果值为 `null`,它会创建一个新对象并把它保存到私有域,返回这个对象给调用者。否则直接返回之前被创建的对象。
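你可以用下面这个小测试(演示用的补充,不在原项目代码中)验证两次调用返回的确实是同一个对象:

```
OpensourceSingleton a = OpensourceSingleton.getInstance();
OpensourceSingleton b = OpensourceSingleton.getInstance();
System.out.println(a == b); // 输出 true:同一个实例
```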
单例模式实现的主要问题是它忽略了并发。当多个线程同时访问该资源时,问题就出现了。对于这种情况有对应的解决方案,它被称为*双重检查锁*,可以保证多线程安全,如下所示:
```
package org.opensource.demo.singleton;
public class ImprovedOpensourceSingleton {
private volatile static ImprovedOpensourceSingleton uniqueInstance;
private ImprovedOpensourceSingleton() {}
public static ImprovedOpensourceSingleton getInstance() {
if (uniqueInstance == null) {
synchronized (ImprovedOpensourceSingleton.class) {
if (uniqueInstance == null) {
uniqueInstance = new ImprovedOpensourceSingleton();
}
}
}
return uniqueInstance;
}
}
```
再强调一下前面的观点,确保只有在你认为这是一个安全的选择时才直接实现你的单例模式。最好的方法是通过使用一个制作精良的编程框架来利用单例功能。
### 工厂模式:将对象创建委派给工厂类以隐藏创建逻辑
<ruby> <a href="https://en.wikipedia.org/wiki/Factory_method_pattern"> 工厂模式 </a> <rt> factory pattern </rt></ruby>是另一种众所周知的设计模式,但是有一小点复杂。实现工厂模式的方法有很多,而下列的代码示例为最简单的实现方式。为了创建对象,工厂模式定义了一个接口,让它的子类去决定实例化哪一个类。
>
> **工厂模式**:将对象创建委派给工厂类,因此它能隐藏创建逻辑。
>
>
>
下列的图片展示了最简单的工厂模式是如何实现的。

不是由客户端直接创建对象,而是由客户端向工厂类请求某个类型为 x 的对象。工厂模式根据该类型决定要创建和返回哪个对象。
在下列代码示例中,`OpensourceFactory` 是工厂类实现,它从调用者那里获取*类型*并根据该输入值决定要创建和返回的对象:
```
package org.opensource.demo.factory;
public class OpensourceFactory {
public OpensourceJVMServers getServerByVendor(String name) {
if(name.equals("Apache")) {
return new Tomcat();
}
else if(name.equals("Eclipse")) {
return new Jetty();
}
else if (name.equals("RedHat")) {
return new WildFly();
}
else {
return null;
}
}
}
```
`OpensourceJVMServers` 是一个 100% 的抽象类(即接口类),它指示要实现的是什么,而不是怎样实现:
```
package org.opensource.demo.factory;
public interface OpensourceJVMServers {
public void startServer();
public void stopServer();
public String getName();
}
```
这是一个 `OpensourceJVMServers` 类的实现示例。当 `RedHat` 被作为类型传递给工厂类,`WildFly` 服务器将被创建:
```
package org.opensource.demo.factory;
public class WildFly implements OpensourceJVMServers {
public void startServer() {
System.out.println("Starting WildFly Server...");
}
public void stopServer() {
System.out.println("Shutting Down WildFly Server...");
}
public String getName() {
return "WildFly";
}
}
```
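为了把上面几个类串起来,下面给出一段示意性的调用方代码(`FactoryDemo` 这个类名是我假设的,并非仓库中的原始代码):

```
package org.opensource.demo.factory;

public class FactoryDemo {
    public static void main(String[] args) {
        OpensourceFactory factory = new OpensourceFactory();

        // 调用方只依赖 OpensourceJVMServers 接口,不关心工厂具体创建了哪个实现类
        OpensourceJVMServers server = factory.getServerByVendor("RedHat");
        if (server != null) {
            server.startServer(); // 输出 "Starting WildFly Server..."
            server.stopServer();  // 输出 "Shutting Down WildFly Server..."
        }
    }
}
```

这也体现了工厂模式的好处:以后新增一种服务器实现时,只需修改工厂类,调用方代码无需改动。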
### 观察者模式:订阅主题并获取相关更新的通知
最后是<ruby> <a href="https://en.wikipedia.org/wiki/Observer_pattern"> 观察者模式 </a> <rt> observer pattern </rt></ruby>。像单例模式那样,很少有专业的程序员直接实现观察者模式。但是,许多消息队列和数据服务实现都借用了观察者模式的概念。观察者模式在对象之间定义了一对多的依赖关系,当一个对象的状态发生改变时,所有依赖它的对象都将被自动地通知和更新。
>
> **观察者模式**:如果有更新,那么订阅了该话题/主题的客户端将被通知。
>
>
>
理解观察者模式的最简单方法是想象一个邮件列表,你可以在其中订阅任何主题,无论是开源、技术、名人、烹饪还是你感兴趣的任何其他内容。每个主题都维护着一个订阅者列表,在观察者模式中,订阅者就相当于观察者。当某一个主题更新时,它所有的订阅者(观察者)都会被通知这次改变。而且订阅者总是可以取消对某个主题的订阅。
如下图所示,客户端可以订阅不同的主题并注册观察者,以获得最新信息的通知。由于观察者持续监听着主题,一旦发生任何改变,观察者就会通知客户端。

让我们来看看观察者模式的代码示例,从主题/话题类开始:
```
package org.opensource.demo.observer;
public interface Topic {
public void addObserver(Observer observer);
public void deleteObserver(Observer observer);
public void notifyObservers();
}
```
这段代码描述了一个接口,不同的主题要实现其中定义的方法。注意观察者是如何被添加、移除和通知的。
这是一个主题的实现示例:
```
package org.opensource.demo.observer;
import java.util.List;
import java.util.ArrayList;
public class Conference implements Topic {
private List<Observer> listObservers;
private int totalAttendees;
private int totalSpeakers;
private String nameEvent;
public Conference() {
listObservers = new ArrayList<Observer>();
}
public void addObserver(Observer observer) {
listObservers.add(observer);
}
public void deleteObserver(Observer observer) {
int i = listObservers.indexOf(observer);
if (i >= 0) {
listObservers.remove(i);
}
}
public void notifyObservers() {
for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
Observer observer = listObservers.get(i);
observer.update(totalAttendees,totalSpeakers,nameEvent);
}
}
public void setConferenceDetails(int totalAttendees, int totalSpeakers, String nameEvent) {
this.totalAttendees = totalAttendees;
this.totalSpeakers = totalSpeakers;
this.nameEvent = nameEvent;
notifyObservers();
}
}
```
这段代码定义了一个特定主题的实现。当发生改变时,被调用的正是这个实现。注意,观察者以列表方式存储,这个实现既可以通知观察者,也可以维护(添加或删除)观察者。
这是一个观察者类:
```
package org.opensource.demo.observer;
public interface Observer {
public void update(int totalAttendees, int totalSpeakers, String nameEvent);
}
```
这个类定义了一个接口,不同的观察者可以实现该接口以执行特定的操作。
例如,实现了该接口的观察者可以在会议上打印出与会者和发言人的数量:
```
package org.opensource.demo.observer;
public class MonitorConferenceAttendees implements Observer {
private int totalAttendees;
private int totalSpeakers;
private String nameEvent;
private Topic topic;
public MonitorConferenceAttendees(Topic topic) {
this.topic = topic;
topic.addObserver(this);
}
public void update(int totalAttendees, int totalSpeakers, String nameEvent) {
this.totalAttendees = totalAttendees;
this.totalSpeakers = totalSpeakers;
this.nameEvent = nameEvent;
printConferenceInfo();
}
public void printConferenceInfo() {
System.out.println(this.nameEvent + " has " + totalSpeakers + " speakers and " + totalAttendees + " attendees");
}
}
```
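最后,把主题和观察者装配起来运行一下(同样是示意性代码,`ObserverDemo` 这个类名是我假设的):

```
package org.opensource.demo.observer;

public class ObserverDemo {
    public static void main(String[] args) {
        Conference conference = new Conference();

        // 构造函数内部会调用 topic.addObserver(this) 完成订阅
        new MonitorConferenceAttendees(conference);

        // 更新会议信息时,notifyObservers() 会通知所有订阅者
        // 输出:All Things Open has 10 speakers and 150 attendees
        conference.setConferenceDetails(150, 10, "All Things Open");
    }
}
```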
### 接下来
现在你已经读完了这篇设计模式的入门介绍,可以进一步了解其他设计模式,例如外观模式、模板模式和装饰器模式。也有一些针对并发和分布式系统的设计模式,如断路器模式和 Actor 模式。
不过,我认为磨砺这些技能的最好方式,是先在你的业余项目或者练习中实现这些设计模式。你甚至可以开始思考如何在实际项目中应用它们。接下来,我强烈建议你查看 OOP 的 [SOLID 原则](https://en.wikipedia.org/wiki/SOLID)。之后,你就可以准备了解其他设计模式了。
---
via: <https://opensource.com/article/19/7/understanding-software-design-patterns>
作者:[Bryant Son](https://opensource.com/users/brson) 选题:[lujun9972](https://github.com/lujun9972) 译者:[arrowfeng](https://github.com/arrowfeng) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | If you are a programmer or a student pursuing computer science or a similar discipline, sooner or later, you will encounter the term "software design pattern." According to Wikipedia, *"a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design."* Here is my take on the definition: When you have been working on a coding project for a while, you often begin to think, "Huh, this seems redundant. I wonder if I can change the code to be more flexible and accepting of changes?" So, you begin to think about how to separate what stays the same from what needs to change often.
A *design pattern* is a way to make your code easier to change by separating the part that stays the same and the part that needs constant changes.
Not surprisingly, everyone who has worked on a programming project has probably had the same thought. Especially for any industry-level project, where it's common to work with dozens or even hundreds of developers; the collaboration process suggests that there have to be some standards and rules to make the code more elegant and adaptable to changes. That is why we have [object-oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming) (OOP) and [software framework tools](https://en.wikipedia.org/wiki/Software_framework). A design pattern is somewhat similar to OOP, but it goes further by considering changes as part of the natural development process. Basically, the design pattern leverages some ideas from OOP, like abstractions and interfaces, but focuses on the process of changes.
When you start to work on a project, you often hear the term *refactoring*, which means *to change the code to be more elegant and reusable;* this is where the design pattern shines. Whenever you're working on existing code (whether built by someone else or your past self), knowing the design patterns helps you begin to see things differently—you will discover problems and ways to improve the code.
There are numerous design patterns, but three popular ones, which I'll present in this introductory article, are singleton pattern, factory pattern, and observer pattern.
## How to follow this guide
I want this tutorial to be as easy as possible for anyone to understand, whether you are an experienced programmer or a beginner to coding. The design pattern concept is not exactly easy to understand, and reducing the learning curve when you start a journey is always a top priority. Therefore, in addition to this article with diagrams and code pieces, I've also created a [GitHub repository](https://github.com/bryantson/OpensourceDotComDemos/tree/master/TopDesignPatterns) you can clone and run the code to implement the three design patterns on your own. You can also follow along with the following [YouTube video](https://www.youtube.com/watch?v=VlBXYtLI7kE&feature=youtu.be) I created.
### Prerequisites
If you just want to get the idea of design patterns in general, you do not need to clone the sample project or install any of the tools. However, to run the sample code, you need to have the following installed:
- **Java Development Kit (JDK):** I highly recommend [OpenJDK](https://openjdk.java.net/).
- **Apache Maven:** The sample project is built using [Apache Maven](https://maven.apache.org/); fortunately, many IDEs come with Maven installed.
- **Interactive development editor (IDE):** I use [IntelliJ Community Edition](https://www.jetbrains.com/idea/download/#section=mac), but you can use [Eclipse IDE](https://www.eclipse.org/ide/) or any other Java IDE of your choice.
- **Git:** If you want to clone the project, you need a [Git](https://git-scm.com/) client.
To clone the project and follow along, run the following command after you install Git:
`git clone https://github.com/bryantson/OpensourceDotComDemos.git`
Then, in your favorite IDE, you can import the code in the TopDesignPatterns repo as an Apache Maven project.
I am using Java, but you can implement the design pattern using any programming language that supports the [abstraction principle](https://en.wikipedia.org/wiki/Abstraction_principle_(computer_programming)).
## Singleton pattern: Avoid creating an object every single time
The [singleton pattern](https://en.wikipedia.org/wiki/Singleton_pattern) is a very popular design pattern that is also relatively simple to implement because you need just one class. However, many developers debate whether the singleton design pattern's benefits outpace its problems because it lacks clear benefits and is easy to abuse. Few developers implement singleton directly; instead, programming frameworks like Spring Framework and Google Guice have built-in singleton design pattern features.
But knowing about singleton is still tremendously useful. The singleton pattern makes sure that a class is created only once and provides a global point of access to it.
**Singleton pattern:** Ensures that only one instantiation is created and avoids creating multiple instances of the same object.
The diagram below shows the typical process for creating a class object. When the client asks to create an object, the constructor creates, or instantiates, an object and returns it to the caller. However, this happens every single time an object is requested: the constructor is called, a new object is created, and a unique object is returned. I guess the creators of the OOP language had a reason behind creating a new object every single time, but the proponents of the singleton pattern say this is redundant and a waste of resources.

The following diagram creates the object using the singleton pattern. Here, the constructor is called only when the object is requested the first time through a designated getInstance() method. This is usually done by checking the null value, and the object is saved inside the singleton class as a private field value. The next time the getInstance() is called, the class returns the object that was created the first time. No new object is created; it just returns the old one.

The following script shows the simplest possible way to create the singleton pattern:
```
package org.opensource.demo.singleton;
public class OpensourceSingleton {
private static OpensourceSingleton uniqueInstance;
private OpensourceSingleton() {
}
public static OpensourceSingleton getInstance() {
if (uniqueInstance == null) {
uniqueInstance = new OpensourceSingleton();
}
return uniqueInstance;
}
}
```
On the caller side, here is how the singleton class will be called to get an object:
```
OpensourceSingleton newObject = OpensourceSingleton.getInstance();
```
This code demonstrates the idea of a singleton well:
- When getInstance() is called, it checks whether the object was already created by checking the null value.
- If the value is null, it creates a new object, saves it into the private field, and returns the object to the caller. Otherwise, it returns the object that was created previously.
The main problem with this singleton implementation is its disregard for parallel processes. When multiple processes using threads access the resource simultaneously, a problem occurs. There is one solution to this, and it is called *double-checked locking* for multithread safety, which is shown here:
```
package org.opensource.demo.singleton;
public class ImprovedOpensourceSingleton {
private volatile static ImprovedOpensourceSingleton uniqueInstance;
private ImprovedOpensourceSingleton() {}
public static ImprovedOpensourceSingleton getInstance() {
if (uniqueInstance == null) {
synchronized (ImprovedOpensourceSingleton.class) {
if (uniqueInstance == null) {
uniqueInstance = new ImprovedOpensourceSingleton();
}
}
}
return uniqueInstance;
}
}
```
Just to emphasize the previous point, make sure to implement your singleton directly only when you believe it is a safe option to do so. The best way to leverage the singleton feature is by using a well-made programming framework.
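As an aside, another common thread-safe approach in Java is the *enum singleton*, where the JVM guarantees the instance is created only once, so no manual locking is needed. This is my own illustrative sketch, not code from the repo above:

```
package org.opensource.demo.singleton;

// Enum singleton: the JVM guarantees INSTANCE is instantiated exactly once
public enum EnumOpensourceSingleton {

    INSTANCE;

    public void doSomething() {
        System.out.println("Working with the one and only instance");
    }
}
```

It is invoked as `EnumOpensourceSingleton.INSTANCE.doSomething();`.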
## Factory pattern: Delegate object creation to the factory class to hide creation logic
The [factory pattern](https://en.wikipedia.org/wiki/Factory_method_pattern) is another well-known design pattern, but it is a little more complex. There are several ways to implement the factory pattern, but the following sample code demonstrates the simplest possible way. The factory pattern defines an interface for creating an object but lets the subclasses decide which class to instantiate.
**Factory pattern:** Delegates object creation to the factory class so it hides the creation logic.
The diagram below shows how the simplest factory pattern is implemented.

Instead of the client directly calling the object creation, the client asks the factory class for a certain object, type x. Based on the type, the factory pattern decides which object to create and to return.
In this code sample, OpensourceFactory is the factory class implementation that takes the *type* from the caller and decides which object to create based on that input value:
```
package org.opensource.demo.factory;
public class OpensourceFactory {
public OpensourceJVMServers getServerByVendor(String name) {
if(name.equals("Apache")) {
return new Tomcat();
}
else if(name.equals("Eclipse")) {
return new Jetty();
}
else if (name.equals("RedHat")) {
return new WildFly();
}
else {
return null;
}
}
}
```
And OpensourceJVMServers is a 100% abstraction class (or an interface class) that indicates what to implement, not how:
```
package org.opensource.demo.factory;
public interface OpensourceJVMServers {
public void startServer();
public void stopServer();
public String getName();
}
```
Here is a sample implementation class for OpensourceJVMServers. When "RedHat" is passed as the type to the factory class, the WildFly server is created:
```
package org.opensource.demo.factory;
public class WildFly implements OpensourceJVMServers {
public void startServer() {
System.out.println("Starting WildFly Server...");
}
public void stopServer() {
System.out.println("Shutting Down WildFly Server...");
}
public String getName() {
return "WildFly";
}
}
```
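To tie these classes together, here is a sketch of caller code (the class name FactoryDemo is my own assumption, not code from the repo):

```
package org.opensource.demo.factory;

public class FactoryDemo {
    public static void main(String[] args) {
        OpensourceFactory factory = new OpensourceFactory();

        // The caller depends only on the OpensourceJVMServers interface,
        // not on which implementation the factory actually creates
        OpensourceJVMServers server = factory.getServerByVendor("RedHat");
        if (server != null) {
            server.startServer(); // prints "Starting WildFly Server..."
            server.stopServer();  // prints "Shutting Down WildFly Server..."
        }
    }
}
```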
## Observer pattern: Subscribe to topics and get notified about updates
Finally, there is the [observer pattern](https://en.wikipedia.org/wiki/Observer_pattern). Like the singleton pattern, few professional programmers implement the observer pattern directly. However, many messaging queue and data service implementations borrow the observer pattern concept. The observer pattern defines one-to-many dependencies between objects so that when one object changes state, all of its dependents are notified and updated automatically.
**Observer pattern:** Subscribe to the topics/subjects where the client can be notified if there is an update.
The easiest way to think about the observer pattern is to imagine a mailing list where you can subscribe to any topic, whether it is open source, technologies, celebrities, cooking, or anything else that interests you. Each topic maintains a list of its subscribers, which is equivalent to an "observer" in the observer pattern. When a topic is updated, all of its subscribers (observers) are notified of the changes. And a subscriber can always unsubscribe from a topic.
As the following diagram shows, the client can be subscribed to different topics and add the observer to be notified about new information. Because the observer listens continuously to the subject, the observer notifies the client about any change that occurs.

Let's look at the sample code for the observer pattern, starting with the subject/topic class:
```
package org.opensource.demo.observer;
public interface Topic {
public void addObserver(Observer observer);
public void deleteObserver(Observer observer);
public void notifyObservers();
}
```
This code describes an interface for different topics to implement the defined methods. Notice how an observer can be added, removed, or notified.
Here is an example implementation of the topic:
```
package org.opensource.demo.observer;
import java.util.List;
import java.util.ArrayList;
public class Conference implements Topic {
private List<Observer> listObservers;
private int totalAttendees;
private int totalSpeakers;
private String nameEvent;
public Conference() {
listObservers = new ArrayList<Observer>();
}
public void addObserver(Observer observer) {
listObservers.add(observer);
}
public void deleteObserver(Observer observer) {
int i = listObservers.indexOf(observer);
if (i >= 0) {
listObservers.remove(i);
}
}
public void notifyObservers() {
for (int i=0, nObservers = listObservers.size(); i < nObservers; ++ i) {
Observer observer = listObservers.get(i);
observer.update(totalAttendees,totalSpeakers,nameEvent);
}
}
public void setConferenceDetails(int totalAttendees, int totalSpeakers, String nameEvent) {
this.totalAttendees = totalAttendees;
this.totalSpeakers = totalSpeakers;
this.nameEvent = nameEvent;
notifyObservers();
}
}
```
This class defines the implementation of a particular topic. When a change happens, it is this implementation that gets invoked. Notice that the observers are stored as a list, and the implementation can both notify and maintain them.
Here is an observer class:
```
package org.opensource.demo.observer;
public interface Observer {
public void update(int totalAttendees, int totalSpeakers, String nameEvent);
}
```
This class defines an interface that different observers can implement to take certain actions.
For example, the observer implementation can print out the number of attendees and speakers at a conference:
```
package org.opensource.demo.observer;
public class MonitorConferenceAttendees implements Observer {
private int totalAttendees;
private int totalSpeakers;
private String nameEvent;
private Topic topic;
public MonitorConferenceAttendees(Topic topic) {
this.topic = topic;
topic.addObserver(this);
}
public void update(int totalAttendees, int totalSpeakers, String nameEvent) {
this.totalAttendees = totalAttendees;
this.totalSpeakers = totalSpeakers;
this.nameEvent = nameEvent;
printConferenceInfo();
}
public void printConferenceInfo() {
System.out.println(this.nameEvent + " has " + totalSpeakers + " speakers and " + totalAttendees + " attendees");
}
}
```
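Finally, here is a sketch that wires the topic and the observer together (the class name ObserverDemo is my own assumption, not code from the repo):

```
package org.opensource.demo.observer;

public class ObserverDemo {
    public static void main(String[] args) {
        Conference conference = new Conference();

        // The constructor calls topic.addObserver(this) to subscribe
        new MonitorConferenceAttendees(conference);

        // Updating the details triggers notifyObservers(), which prints:
        // All Things Open has 10 speakers and 150 attendees
        conference.setConferenceDetails(150, 10, "All Things Open");
    }
}
```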
## Where to go from here?
Now that you've read this introductory guide to design patterns, you should be in a good place to pursue other design patterns, such as facade, template, and decorator. There are also concurrent and distributed system design patterns like the circuit breaker pattern and the actor pattern.
However, I believe it's best to hone your skills first by implementing these design patterns in your side projects or just as practice. You can even begin to contemplate how you can apply these design patterns in your real projects. Next, I highly recommend checking out the [SOLID principles](https://en.wikipedia.org/wiki/SOLID) of OOP. After that, you will be ready to look into the other design patterns.
|
11,211 | GameMode:提高 Linux 游戏性能的工具 | https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/ | 2019-08-11T07:50:07 | [
"游戏"
] | https://linux.cn/article-11211-1.html | 
去问一些 Linux 用户为什么他们仍然坚持 Windows 双启动,他们的答案可能是 - “游戏!”。这是真的!幸运的是,开源游戏平台如 [Lutris](https://www.ostechnix.com/manage-games-using-lutris-linux/) 和专有游戏平台 Steam 已经为 Linux 平台带来了许多游戏,并且近几年来显著改善了 Linux 的游戏体验。今天,我偶然发现了另一款名为 GameMode 的 Linux 游戏相关开源工具,它能让用户提高 Linux 上的游戏性能。
GameMode 基本上是一组守护进程/库,它可以按需优化 Linux 系统的游戏性能。我原以为 GameMode 是一个会杀掉后台运行的高资源消耗进程的工具,但它并不是。它实际上只是让 CPU **在用户玩游戏时自动运行在高性能模式下**,并帮助 Linux 用户从游戏中获得最佳性能。
在玩游戏时,GameMode 通过对宿主机请求临时应用一组优化来显著提升游戏性能。目前,它支持下面这些优化:
* CPU 调控器
* I/O 优先级
* 进程 nice 值
* 内核调度器(`SCHED_ISO`)
* 禁止屏幕保护
* GPU 高性能模式(NVIDIA 和 AMD)、GPU 超频(NVIDIA)
* 自定义脚本
GameMode 是由世界领先的游戏发行商 [Feral Interactive](http://www.feralinteractive.com/en/) 开发的自由开源的系统工具。
### 安装 GameMode
GameMode 适用于许多 Linux 发行版。
在 Arch Linux 及其变体上,你可以使用任何 AUR 助手程序,如 [Yay](https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/) 从 [AUR](https://aur.archlinux.org/packages/gamemode/) 安装它。
```
$ yay -S gamemode
```
在 Debian、Ubuntu、Linux Mint 和其他基于 Deb 的系统上:
```
$ sudo apt install gamemode
```
如果 GameMode 不适用于你的系统,你可以按照它的 GitHub 页面中开发章节下的描述,从源码手动编译和安装它。
### 激活 GameMode 支持以改善 Linux 上的游戏性能
以下是集成支持了 GameMode 的游戏列表,因此我们无需进行任何其他配置即可激活 GameMode 支持。
* 古墓丽影:崛起
* 全面战争传奇:不列颠尼亚王座
* 全面战争:战锤 2
* 尘埃 4
* 全面战争:三国
只需运行这些游戏,就会自动启用 GameMode 支持。
这里还有一个将 GameMode 与 GNOME Shell 集成的[扩展](https://github.com/gicmo/gamemode-extension)。它会在顶栏指示 GameMode 何时处于活跃状态。
对于其他游戏,你可能需要手动请求 GameMode 支持,如下所示。
```
gamemoderun ./game
```
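如果你通过 Steam 玩游戏,GameMode 的 README 中给出的做法是在某个游戏的“属性 > 启动选项”中填入(以下为其文档中的写法):

```
gamemoderun %command%
```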
我不喜欢游戏,并且我已经很多年没玩游戏了。所以,我无法分享一些实际的基准测试。
但是,我在 Youtube 上找到了一个简短的[视频教程](https://youtu.be/4gyRyYfyGJw),以便为 Lutris 游戏启用 GameMode 支持。对于那些想要第一次尝试 GameMode 的人来说,这是个不错的开始。
通过浏览视频中的评论,我可以说 GameMode 确实提高了 Linux 上的游戏性能。
对于更多细节,请参阅 [GameMode 的 GitHub 仓库](https://github.com/FeralInteractive/gamemode)。
相关阅读:
* [GameHub – 将所有游戏集合在一起的仓库](https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/)
* [如何在 Linux 中运行 MS-DOS 游戏和程序](https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/)
你用过 GameMode 吗?它真的有改善 Linux 上的游戏性能吗?请在下面的评论栏分享你的想法。
---
via: <https://www.ostechnix.com/gamemode-a-tool-to-improve-gaming-performance-on-linux/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,212 | 如何在 Linux 中验证 ISO 镜像 | https://www.ostechnix.com/how-to-verify-iso-images-in-linux/ | 2019-08-11T08:01:20 | [
"ISO"
] | https://linux.cn/article-11212-1.html | 
你从喜爱的 Linux 发行版的官方网站或第三方网站下载了它的 ISO 镜像之后,接下来要做什么呢?是[创建可启动介质](https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/)并开始安装系统吗?并不是,请稍等一下。在开始使用它之前,强烈建议你检查一下你刚下载到本地系统中的 ISO 文件是否是下载镜像站点中 ISO 文件的一个精确拷贝。因为在前几年 [Linux Mint 的网站被攻破了](https://blog.linuxmint.com/?p=2994),并且攻击者创建了一个包含后门的经过修改的 Linux Mint ISO 文件。 所以验证下载的 Linux ISO 镜像的可靠性和完整性是非常重要的一件事儿。假如你不知道如何在 Linux 中验证 ISO 镜像,本次的简要介绍将给予你帮助,请接着往下看!
### 在 Linux 中验证 ISO 镜像
我们可以使用 ISO 镜像的“校验和”来验证 ISO 镜像。校验和是一系列字母和数字的组合,用来检验下载文件的数据是否有错以及验证其可靠性和完整性。当前存在不同类型的校验和,例如 SHA-0、SHA-1、SHA-2(224、256、384、512)和 MD5。MD5 校验和曾经最为常用,但对于现代的 Linux 发行版,最常用的是 SHA-256。
我们将使用名为 `gpg` 和 `sha256sum` 的两个工具来验证 ISO 镜像的可靠性和完整性。
#### 下载校验和及签名
针对本篇指南的目的,我将使用 Ubuntu 18.04 LTS 服务器 ISO 镜像来做验证,但对于其他的 Linux 发行版应该也是适用的。
在靠近 Ubuntu 下载页的最上端,你将看到一些额外的文件(校验和及签名),正如下面展示的图片那样:

其中名为 `SHA256SUMS` 的文件包含了这里所有可获取镜像的校验和,而 `SHA256SUMS.gpg` 文件则是这个文件的 GnuPG 签名。在下面的步骤中,我们将使用这个签名文件来 **验证** 校验和文件。
下载 Ubuntu 的 ISO 镜像文件以及刚才提到的那两个文件,然后将它们放到同一目录下,例如这里的 `ISO` 目录:
```
$ ls ISO/
SHA256SUMS SHA256SUMS.gpg ubuntu-18.04.2-live-server-amd64.iso
```
如你所见,我已经下载了 Ubuntu 18.04.2 LTS 服务器版本的镜像,以及对应的校验和文件和签名文件。
#### 下载有效的签名秘钥
现在,使用下面的命令来下载正确的签名秘钥:
```
$ gpg --keyid-format long --keyserver hkp://keyserver.ubuntu.com --recv-keys 0x46181433FBB75451 0xD94AA3F0EFE21092
```
示例输出如下:
```
gpg: key D94AA3F0EFE21092: 57 signatures not checked due to missing keys
gpg: key D94AA3F0EFE21092: public key "Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>" imported
gpg: key 46181433FBB75451: 105 signatures not checked due to missing keys
gpg: key 46181433FBB75451: public key "Ubuntu CD Image Automatic Signing Key <[email protected]>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 2
gpg: imported: 2
```
#### 验证 SHA-256 校验和
接下来我们将使用签名来验证校验和文件:
```
$ gpg --keyid-format long --verify SHA256SUMS.gpg SHA256SUMS
```
下面是示例输出:
```
gpg: Signature made Friday 15 February 2019 04:23:33 AM IST
gpg: using DSA key 46181433FBB75451
gpg: Good signature from "Ubuntu CD Image Automatic Signing Key <[email protected]>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: C598 6B4F 1257 FFA8 6632 CBA7 4618 1433 FBB7 5451
gpg: Signature made Friday 15 February 2019 04:23:33 AM IST
gpg: using RSA key D94AA3F0EFE21092
gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
```
假如你在输出中看到 `Good signature` 字样,那么该校验和文件便是由 Ubuntu 开发者制作的,并且由秘钥文件的所属者签名认证。
#### 检验下载的 ISO 文件
下面让我们继续检查下载的 ISO 文件是否和所给的校验和相匹配。为了达到该目的,只需要运行:
```
$ sha256sum -c SHA256SUMS 2>&1 | grep OK
ubuntu-18.04.2-live-server-amd64.iso: OK
```
假如校验和是匹配的,你将看到 `OK` 字样,这意味着下载的文件是合法的,没有被改变或篡改过。
假如你没有获得类似的输出,或者看到不同的输出,则该 ISO 文件可能已经被修改过或者没有被正确地下载。你必须从一个更好的下载源重新下载该文件。
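如果你只想校验某一个镜像文件,也可以手动计算它的哈希值,再与 `SHA256SUMS` 中对应的行进行比对(文件名以本文下载的镜像为例):

```
$ sha256sum ubuntu-18.04.2-live-server-amd64.iso
$ grep ubuntu-18.04.2-live-server-amd64.iso SHA256SUMS
```

两条命令输出的哈希值一致,即说明文件完好。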
某些 Linux 发行版已经在它的下载页面中包含了校验和。例如,Pop!_OS 的开发者在他们的下载页面中提供了所有 ISO 镜像的 SHA-256 校验和,这样你就可以快速地验证这些 ISO 镜像。

在下载完 ISO 镜像文件后,可以使用下面的命令来验证它们:
```
$ sha256sum Soft_backup/ISOs/pop-os_18.04_amd64_intel_54.iso
```
示例输出如下:
```
680e1aa5a76c86843750e8120e2e50c2787973343430956b5cbe275d3ec228a6 Soft_backup/ISOs/pop-os_18.04_amd64_intel_54.iso
```

在上面的输出中,以 `680e1aa` 开头的部分为 SHA-256 校验和的值。请将该值与下载页面中提供的 SHA-256 校验和的值进行比较,如果这两个值相同,那说明这个下载的 ISO 文件是完好的,与它的原有状态相比没有被更改或者篡改过。万事俱备,你可以进行下一步了!
上面的内容便是我们如何在 Linux 中验证一个 ISO 文件的可靠性和完整性的方法。无论你是从官方站点或者第三方站点下载 ISO 文件,我们总是推荐你在使用它们之前做一次简单的快速验证。希望本篇的内容对你有所帮助。
参考文献:
* <https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify-ubuntu>
---
via: <https://www.ostechnix.com/how-to-verify-iso-images-in-linux/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[FSSlc](https://github.com/FSSlc) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,213 | 本地开发如何测试 Webhook | https://medium.freecodecamp.org/testing-webhooks-while-using-vagrant-for-development-98b5f3bedb1d | 2019-08-11T09:06:00 | [
"Webhook"
] | https://linux.cn/article-11213-1.html | 
[Webhook](https://sendgrid.com/blog/whats-webhook/) 可用于外部系统通知你的系统发生了某个事件或更新。可能最知名的 [Webhook](https://sendgrid.com/blog/whats-webhook/) 类型是支付服务提供商(PSP)通知你的系统支付状态有了更新。
它们通常表现为一个你预先定义好、用于监听的 URL,例如 `http://example.com/webhooks/payment-update`。同时,另一个系统会向该 URL 发送具有特定有效载荷的 POST 请求(例如包含支付 ID)。一旦请求进来,你就得到了支付 ID,可以通过 PSP 的 API 用这个支付 ID 向它们查询最新状态,然后更新你的数据库。
其他例子可以在这个对 Webhook 的出色的解释中找到:<https://sendgrid.com/blog/whats-webhook/>。
只要系统可通过互联网公开访问(这可能是你的生产环境或可公开访问的临时环境),测试这些 webhook 就相当顺利。而当你在笔记本电脑上或虚拟机内部(例如,Vagrant 虚拟机)进行本地开发时,它就变得困难了。在这些情况下,发送 webhook 的一方无法公开访问你的本地 URL。此外,监视发送的请求也很困难,这可能使开发和调试变得困难。
因此,这个例子将解决:
* 测试来自本地开发环境的 webhook,该环境无法通过互联网访问。从服务器向 webhook 发送数据的服务无法访问它。
* 监控发送的请求和数据,以及应用程序生成的响应。这样可以更轻松地进行调试,从而缩短开发周期。
前置需求:
* *可选*:如果你使用虚拟机(VM)进行开发,请确保它正在运行,并确保在 VM 中完成后续步骤。
* 对于本教程,我们假设你定义了一个 vhost:`webhook.example.vagrant`。我在本教程中使用了 Vagrant VM,但你可以自由选择 vhost。
* 按照这个[安装说明](https://ngrok.com/download)安装 `ngrok`。在 VM 中,我发现它的 Node 版本也很有用:<https://www.npmjs.com/package/ngrok>,但你可以随意使用其他方法。
我假设你没有在你的环境中运行 SSL,但如果你使用了,请将下面示例中的端口 80 替换为端口 443,`http://` 替换为 `https://`。
### 使 webhook 可测试
我们假设以下示例代码。我将使用 PHP,但请将其视作伪代码,因为我留下了一些关键部分(例如 API 密钥、输入验证等)没有编写。
第一个文件:`payment.php`。此文件创建一个 `$payment` 对象,并将其注册到 PSP。然后它获得一个供客户访问的 URL,并将客户重定向到那里以完成支付。
请注意,此示例中的 `webhook.example.vagrant` 是我们为开发设置定义的本地虚拟主机。它无法从外部世界进入。
```
<?php
/*
* This file creates a payment and tells the PSP what webhook URL to use for updates
* After creating the payment, we get a URL to send the customer to in order to pay at the PSP
*/
$payment = [
'order_id' => 123,
'amount' => 25.00,
'description' => 'Test payment',
'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
'webhook_url' => 'http://webhook.example.vagrant/webhook.php',
];
$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```
第二个文件:`webhook.php`。此文件等待 PSP 调用以获得有关更新的通知。
```
<?php
/*
* This file gets called by the PSP and in the $_POST they submit an 'id'
* We can use this ID to get the latest status from the PSP and update our internal systems afterward
*/
$paymentId = $_POST['id'];
$paymentInfo = $paymentProvider->getPayment($paymentId);
$status = $paymentInfo->getStatus();
// Perform actions in here to update your system
if ($status === 'paid') {
..
}
elseif ($status === 'cancelled') {
..
}
```
我们的 webhook URL 无法通过互联网访问(请记住它:`webhook.example.vagrant`)。因此,PSP 永远不可能调用文件 `webhook.php`,你的系统将永远不会知道付款状态,这最终导致订单永远不会被运送给客户。
幸运的是,`ngrok` 可以解决这个问题。 [ngrok](https://ngrok.com/) 将自己描述为:
>
> ngrok 通过安全隧道将 NAT 和防火墙后面的本地服务器暴露给公共互联网。
>
>
>
让我们为我们的项目启动一个基本的隧道。在你的环境中(在你的系统上或在 VM 上)运行以下命令:
```
ngrok http -host-header=rewrite webhook.example.vagrant:80
```
>
> 阅读其文档可以了解更多配置选项:<https://ngrok.com/docs>。
>
>
>
会出现这样的屏幕:

*ngrok 输出*
我们刚刚做了什么?基本上,我们指示 `ngrok` 在端口 80 建立了一个到 `http://webhook.example.vagrant` 的隧道。同一个 URL 也可以通过 `http://39741ffc.ngrok.io` 或 `https://39741ffc.ngrok.io` 访问,它们能被任何知道此 URL 的人通过互联网公开访问。
请注意,你可以同时获得 HTTP 和 HTTPS 两个服务。这个文档提供了如何将此限制为 HTTPS 的示例:<https://ngrok.com/docs#bind-tls>。
那么,我们如何让我们的 webhook 现在工作起来?将 `payment.php` 更新为以下代码:
```
<?php
/*
* This file creates a payment and tells the PSP what webhook URL to use for updates
* After creating the payment, we get a URL to send the customer to in order to pay at the PSP
*/
$payment = [
'order_id' => 123,
'amount' => 25.00,
'description' => 'Test payment',
'redirect_url' => 'http://webhook.example.vagrant/redirect.php',
'webhook_url' => 'https://39741ffc.ngrok.io/webhook.php',
];
$payment = $paymentProvider->createPayment($payment);
header("Location: " . $payment->getPaymentUrl());
```
现在,我们告诉 PSP 通过 HTTPS 调用此隧道 URL。只要 PSP 通过隧道调用 webhook,`ngrok` 将确保使用未修改的有效负载调用内部 URL。
### 如何监控对 webhook 的调用?
你在上面看到的屏幕截图概述了对隧道主机的调用,这些数据相当有限。幸运的是,`ngrok` 提供了一个非常好的仪表板,允许你检查所有调用:

我不会深入介绍这个仪表板,因为它不言自明,你运行起来就知道了。不过,我会解释如何在 Vagrant 虚拟机中访问它,因为这并不是开箱即用的。
仪表板将允许你查看所有调用、其状态代码、标头和发送的数据。你将看到应用程序生成的响应。
仪表板的另一个优点是它允许你重放某个调用。假设你的 webhook 代码遇到了致命的错误,开始新的付款并等待 webhook 被调用将会很繁琐。重放上一个调用可以使你的开发过程更快。
默认情况下,仪表板可在 `http://localhost:4040` 访问。
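除了网页界面,`ngrok` 还在同一地址下提供了一个本地 REST API,方便用脚本查看捕获到的请求(该端点以我当时查阅的 ngrok 文档为准,不同版本可能有差异):

```
$ curl http://localhost:4040/api/requests/http
```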
### 虚拟机中的仪表盘
为了在 VM 中完成此工作,你必须执行一些额外的步骤:
首先,确保可以在端口 4040 上访问 VM。然后,在 VM 内创建一个文件以存放此配置:
```
web_addr: 0.0.0.0:4040
```
现在,杀死仍在运行的 `ngrok` 进程,并使用稍微调整过的命令启动它:
```
ngrok http -config=/path/to/config/ngrok.conf -host-header=rewrite webhook.example.vagrant:80
```
尽管 ID 已经更改,但你将看到类似于上一屏幕截图的屏幕。之前的网址不再有效,但你有了一个新网址。 此外,`Web Interface` URL 已更改:

现在将浏览器指向 `http://webhook.example.vagrant:4040` 以访问仪表板。另外,对 `https://e65642b5.ngrok.io/webhook.php` 做个调用。这可能会导致你的浏览器出错,但仪表板应显示正有一个请求。
### 最后的备注
上面的例子是伪代码。原因是每个外部系统都以不同的方式使用 webhook。我试图基于一个虚构的 PSP 实现给出一个例子,因为可能很多开发人员在某个时刻肯定会处理付款。
请注意,你的 webhook 网址也可能被意图不好的其他人使用。确保验证发送给它的任何输入。
更好的做法是,向 URL 添加一个对每笔支付都唯一的令牌。只有你的系统和发送 webhook 的系统才知道此令牌。
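下面是一段校验令牌的最小化 PHP 草稿(仅为示意:这里假设创建支付时生成的令牌已被存入你的系统,`getWebhookToken()` 是一个假设的方法):

```
<?php

// 从你的系统中取出创建支付时生成的令牌(假设的方法)
$storedToken = $payment->getWebhookToken();
$givenToken = isset($_GET['token']) ? $_GET['token'] : '';

// 使用 hash_equals() 做恒定时间比较,避免时序攻击
if (!hash_equals($storedToken, $givenToken)) {
    http_response_code(403);
    exit('Invalid token');
}
```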
祝你测试和调试你的 webhook 顺利!
注意:我没有在 Docker 上测试过本教程。但是,这个 Docker 容器看起来是一个很好的起点,并包含了明确的说明:<https://github.com/wernight/docker-ngrok> 。
---
via: <https://medium.freecodecamp.org/testing-webhooks-while-using-vagrant-for-development-98b5f3bedb1d>
作者:[Stefan Doorn](https://medium.freecodecamp.org/@stefandoorn) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,214 | IT 灾备:系统管理员对抗自然灾害 | https://www.hpe.com/us/en/insights/articles/it-disaster-recovery-sysadmins-vs-natural-disasters-1711.html | 2019-08-12T08:02:35 | [
"备份"
] | /article-11214-1.html | 
>
> 面对倾泻的洪水或地震时业务需要继续运转。在飓风卡特里娜、桑迪和其他灾难中幸存下来的系统管理员向在紧急状况下负责 IT 的人们分享真实世界中的建议。
>
>
>
说到自然灾害,2017 年可算是多灾多难。(LCTT 译注:本文发表于 2017 年)飓风哈维、厄玛和玛莉亚给休斯顿、波多黎各、弗罗里达和加勒比造成了严重破坏。此外,西部的野火将多处住宅和商业建筑付之一炬。
再来一篇关于[有备无患](https://www.hpe.com/us/en/insights/articles/what-is-disaster-recovery-really-1704.html)的警示文章 —— 当然其中都是好的建议 —— 是很简单的,但这无法帮助网络管理员应对湿漉漉的烂摊子。那些善意的建议中大多数都假定掌权的人乐于投入资金来实施这些建议。
我们对真实世界更感兴趣。不如让我们来充分利用这些坏消息。
一个很好的例子:自然灾害的一个后果是老板可能突然愿意给灾备计划投入预算。如同一个纽约地区的系统管理员所言,“[我发现飓风桑迪的最大好处](https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/)是我们的客户对 IT 投资更有兴趣了,但愿你也能得到更多预算。”
不过别指望这种意愿持续很久。任何想提议改进基础设施的系统管理员最好趁热打铁。如同另一位飓风桑迪中幸存下来的 IT 专员懊悔地提及那样,“[对 IT 开支最初的兴趣持续到当年为止](https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/dma6gse/)。到了第二年,任何尚未开工的计划都因为‘预算约束’被搁置了,大约 6 个月之后则完全被遗忘。”
在管理层忘记恶劣的自然灾害也可能降临到好公司头上之前提醒他们这点会有所帮助。根据<ruby> 商业和家庭安全协会 <rt> Institute for Business & Home Safety </rt></ruby>的说法,[自然灾害后歇业的公司中 25% 再也没能重新开业](https://disastersafety.org/wp-content/uploads/open-for-business-english.pdf)。<ruby> 联邦紧急事务管理署 <rt> FEMA </rt></ruby>认为这过于乐观。根据他们的统计,“灾后 [40% 的小公司再也没能重新开门营业](https://www.fema.gov/protecting-your-businesses)。”
如果你是个系统管理员,你能帮忙挽救你的公司。这里有一些幸存者的最好的主意,这些主意是基于他们从过去几次自然灾害中得到的经验。
### 制订一个计划
当灯光忽明忽暗,狂风象火车机车一样怒号时,就该启动你的业务持续计划和灾备计划了。
有太多的系统管理员报告当暴风雨来临时这两个计划中一个也没有。这并不令人惊讶。2014 年<ruby> <a href="http://drbenchmark.org/about-us/our-council/"> 灾备预备状态委员会 </a> <rt> Disaster Recovery Preparedness Council </rt></ruby>发现[世界范围内被调查的公司中有 73% 没有足够的灾备计划](https://www.prnewswire.com/news-releases/global-benchmark-study-reveals-73-of-companies-are-unprepared-for-disaster-recovery-248359051.html)。
“**足够**”是关键词。正如一个系统管理员 2016 年在 Reddit 上写的那样,“[我们的灾备计划就是一场灾难。](https://www.reddit.com/r/sysadmin/comments/3cob1k/what_does_your_disaster_recovery_plan_look_like/csxh8sn/)我们所有的数据都备份在离这里大约 30 英里的一个<ruby> 存储区域网络 <rt> SAN </rt></ruby>。我们没有将数据重新上线的硬件,甚至好几天过去了都没能让核心服务器启动运行起来。我们是个年营收 40 亿美元的公司,却不愿为适当的设备投入几十万美元,或是在数据中心添置几台服务器。当添置硬件的提案被提出的时候,我们的管理层说,‘嗐,碰到这种事情的机会能有多大呢’。”
同一个帖子中另一个人说得更简洁:“眼下我的灾备计划只能在黑暗潮湿的角落里哭泣,但愿没人在乎损失的任何东西。”
如果你在哭泣,但愿你至少不是独自流泪。任何灾备计划,即便是 IT 部门制订的灾备计划,必须确定[你能跟别人通讯](http://www.theregister.co.uk/2015/07/12/surviving_hurricane_katrina),如同系统管理员 Jim Thompson 从卡特里娜飓风中得到的教训:“确保你有一个与人们通讯的计划。在一场严重的区域性灾难期间,你将无法给身处灾区的任何人打电话。”
有一个选择可能会让有技术头脑的人感兴趣:<ruby> <a href="https://theprepared.com/guides/beginners-guide-amateur-ham-radio-preppers/"> 业余电台 </a> <rt> ham radio </rt></ruby>。[它在波多黎各发挥了巨大作用](http://www.npr.org/2017/09/29/554600989/amateur-radio-operators-stepped-in-to-help-communications-with-puerto-rico)。
### 列一个愿望清单
第一步是承认问题。“许多公司实际上对灾备计划不感兴趣,或是消极对待”,[Micro Focus](https://www.microfocus.com/) 的首席架构师 [Joshua Focus](http://www8.hp.com/us/en/software/joshua-brusse.html) 说。“将灾备看作业务持续性的一个方面是种不同的视角。所有公司都要应对业务持续性,所以灾备应被视为业务持续性的一部分。”
IT 部门需要将其需求书面化以确保适当的灾备和业务持续性计划。即使是你不知道如何着手,或尤其是这种时候,也是如此。正如一个系统管理员所言,“我喜欢有一个‘想法转储’,让所有计划、点子、改进措施毫无保留地提出来。(这)[对一类情况尤其有帮助,即当你提议变更](https://www.reddit.com/r/sysadmin/comments/6wricr/dear_houston_tx_sysadmins/dma87xv/),并付诸实施,接着 6 个月之后你警告过的状况就要来临。”现在你做好了一切准备并且可以开始讨论:“如同我们之前在 4 月讨论过的那样……”
因此,当你的管理层对业务持续性计划回应道“嗐,碰到这种事的机会能有多大呢?”的时候你能做些什么呢?有个系统管理员称这也完全是管理层的正常行为。在这种糟糕的处境下,老练的系统管理员建议用书面形式把这些事情记录下来。记录应清楚表明你告知管理层需要采取的措施,且[他们拒绝采纳建议](https://www.hpe.com/us/en/insights/articles/my-boss-asked-me-to-do-what-how-to-handle-worrying-work-requests-1710.html)。“总的来说就是有足够的书面材料能让他们搓成一根绳子上吊,”该系统管理员补充道。
如果那也不起作用,恢复一个被洪水淹没的数据中心的相关经验对你[找个新工作](https://www.hpe.com/us/en/insights/articles/sysadmin-survival-guide-1707.html)是很有帮助的。
### 保护有形的基础设施
“[我们的办公室是幢摇摇欲坠的建筑](https://www.reddit.com/r/sysadmin/comments/6wk92q/any_houston_admins_executing_their_dr_plans_this/dm8xj0q/),”飓风哈维重创休斯顿之后有个系统管理员提到。“我们盲目地进入那幢建筑,现场的基础设施糟透了。正是我们给那幢建筑里带去了最不想要的一滴水,现在基础设施整个都沉在水下了。”
尽管如此,如果你想让数据中心继续运转——或在暴风雨过后恢复运转 —— 你需要确保该场所不仅能经受住你所在地区那些意料中的灾难,而且能经受住那些意料之外的灾难。一个旧金山的系统管理员知道为什么重要的是确保公司的服务器安置在可以承受里氏 7 级地震的建筑内。一家圣路易斯的公司知道如何应对龙卷风。但你应当为所有可能发生的事情做好准备:加州的龙卷风、密苏里州的地震,或[僵尸末日](https://community.spiceworks.com/how_to/1243-ensure-your-dr-plan-is-ready-for-a-zombie-apocolypse)(给你在 IT 预算里增加一把链锯提供了充分理由)。
在休斯顿的情况下,[多数数据中心保持运转](http://www.datacenterdynamics.com/content-tracks/security-risk/houston-data-centers-withstand-hurricane-harvey/98867.article),因为它们是按照抵御暴风雨和洪水的标准建造的。[Data Foundry](https://www.datafoundry.com/) 的首席技术官 Edward Henigin 说他们公司的数据中心之一,“专门建造的休斯顿 2 号的设计能抵御 5 级飓风的风速。这个场所的公共供电没有中断,我们得以避免切换到后备发电机。”
那是好消息。坏消息是伴随着超级飓风桑迪于 2012 年登场,如果[你的数据中心没准备好应对洪水](http://www.datacenterknowledge.com/archives/2012/10/30/major-flooding-nyc-data-centers),你会陷入一个麻烦不断的世界。一个不能正常运转的数据中心 [Datagram](https://datagram.com/) 服务的客户包括 Gawker、Gizmodo 和 Buzzfeed 等知名网站。
当然,有时候你什么也做不了。正如某个波多黎各圣胡安的系统管理员在飓风厄玛扫过后悲伤地写到,“发电机没油了。服务器机房靠电池在运转但是没有(空调)。[永别了,服务器](https://www.reddit.com/r/sysadmin/comments/6yjb3p/shutting_down_everything_blame_irma/)。”由于 <ruby> MPLS <rt> Multiprotocol Lable Switching </rt></ruby> 线路亦中断,该系统管理员没法切换到灾备措施:“多么充实的一天。”
总而言之,IT 专业人士需要了解他们所处的地区,了解他们面临的风险并将他们的服务器安置在能抵御当地自然灾害的数据中心内。
### 关于云的争议
当暴风雨席卷一切时避免 IT 数据中心失效的最佳方法就是确保灾备数据中心在其他地方。选择地点时需要审慎的决策。你的灾备数据中心不应在会被同一场自然灾害影响到的<ruby> 地域 <rt> region </rt></ruby>;你的资源应安置在多个<ruby> 可用区 <rt> availability zone </rt></ruby>内。考虑一下主备数据中心位于一场地震中的同一条断层带上,或是主备数据中心易于受互通河道导致的洪灾影响这类情况。
有些系统管理员[利用云作为冗余设施](https://www.hpe.com/us/en/insights/articles/everything-you-need-to-know-about-clouds-and-hybrid-it-1701.html)。例如,总是用微软 Azure 云存储服务保存副本以确保持久性和高可用性。根据你的选择,Azure 复制功能将你的数据要么拷贝到同一个数据中心要么拷贝到另一个数据中心。多数公有云提供类似的自动备份服务以确保数据安全,不论你的数据中心发生什么情况——除非你的云服务供应商全部设施都在暴风雨的行进路径上。
昂贵么?是的。跟业务中断 1、2 天一样昂贵么?并非如此。
信不过公有云?可以考虑 <ruby> colo <rt> colocation </rt></ruby> 服务。有了 colo,你依旧拥有你的硬件,运行你自己的应用,但这些硬件可以远离麻烦。例如飓风哈维期间,一家公司“虚拟地”将它的资源从休斯顿搬到了其位于德克萨斯奥斯汀的 colo。但是那些本地数据中心和 colo 场所需要准备好应对灾难;这点是你选择场所时要考虑的一个因素。举个例子,一个寻找 colo 场所的西雅图系统管理员考虑的“全都是抗震和旱灾应对措施(加固的地基以及补给冷却系统的运水卡车)。”
### 周围一片黑暗时
正如 Forrester Research 的分析师 Rachel Dines 在一份为[灾备期刊](https://www.drj.com)所做的调查中报告的那样,宣布的灾难中[最普遍的原因就是断电](https://www.drj.com/images/surveys_pdf/forrester/2011Forrester_survey.pdf)。尽管你能应对一般情况下的断电,飓风、火灾和洪水的考验会超越设备的极限。
某个系统管理员挖苦式的计划是什么呢?“趁 UPS 完蛋之前把你能关的机器关掉,不能关的就让它崩溃咯。然后,[喝个痛快直到供电恢复](https://www.reddit.com/r/sysadmin/comments/4x3mmq/datacenter_power_failure_procedures_what_do_yours/d6c71p1/)。”
在 2016 年德尔塔和西南航空停电事故之后,IT 员工推动的一个更加严肃的计划是由一个有管理的服务供应商为其客户[部署不间断电源](https://www.reddit.com/r/sysadmin/comments/4x3mmq/datacenter_power_failure_procedures_what_do_yours/):“对于至关重要的部分,在供电中断时我们结合使用<ruby> 简单网络管理协议 <rt> SNMP </rt></ruby>信令和 <ruby> PowerChute 网络关机 <rt> PowerChute Network Shutdown </rt></ruby>客户端来关闭设备。至于重新开机,那取决于客户。有些是自动启动,有些则需要人工干预。”
另一种做法是用来自两个供电所的供电线路支持数据中心。例如,[西雅图威斯汀大厦数据中心](https://cloudandcolocation.com/datacenters/the-westin-building-seattle-data-center/)有来自不同供电所的多路 13.4 千伏供电线路,以及多个 480 伏三相变电箱。
预防严重断电的系统不是“通用的”设备。系统管理员应当[为数据中心请求一台定制的柴油发电机](https://www.techrepublic.com/article/what-to-look-for-in-a-data-center-backup-generator/)。除了按你的特定需求定制之外,发电机还必须能迅速升至全速运转并承载全部电力负荷,而不影响系统负载性能。
这些发电机也必须加以保护。例如,将你的发电机安置在泛洪区的一楼就不是个聪明的主意。位于纽约<ruby> 百老街 <rt> Broad street </rt></ruby>的数据中心在超级飓风桑迪期间就是类似情形,备用发电机的燃料油桶在地下室 —— 并且被水淹了。尽管一场[“人力接龙”用容量 5 加仑的水桶将柴油输送到 17 段楼梯之上的发电机](http://www.datacenterknowledge.com/archives/2012/10/31/peer-1-mobilizes-diesel-bucket-brigade-at-75-broad)使 [Peer 1 Hosting](https://www.cogecopeer1.com/) 得以继续运营,但这不是一个切实可行的业务持续计划。
正如多数数据中心专家所知那样,如果你有时间 —— 假设一个飓风离你有一天的距离 —— 确保你的发电机正常工作,加满油,准备好当供电线路被刮断时立即开启,不管怎样你之前应当每月测试你的发电机。你之前是那么做的,是吧?是就好!
### 测试你对备份的信心
普通用户几乎从不备份,检查备份是否实际完好的就更少了。系统管理员对此更加了解。
有些 [IT 部门在寻求将他们的备份迁移到云端](https://www.reddit.com/r/sysadmin/comments/7a6m7n/aws_glacier_archival/)。但有些系统管理员仍对此不买账 —— 他们有很好的理由。最近有人报告,“在用了整整 5 天[从亚马逊 Glacier 恢复了(400 GB)数据](https://www.reddit.com/r/sysadmin/comments/63mypu/the_dangers_of_cloudberry_and_amazon_glacier_how/)之后,我欠了亚马逊将近 200 美元的传输费,并且(我还是)处于未完全恢复状态,还差大约 100 GB 文件。”
结果是有些系统管理员依然喜欢磁带备份。磁带肯定不够时髦,但正如操作系统专家 Andrew S. Tanenbaum 说的那样,“[永远不要低估一辆装满磁带在高速上飞驰的旅行车的带宽](https://en.wikiquote.org/wiki/Andrew_S._Tanenbaum)。”
目前每盘磁带可以存储 10 TB 数据;有的进行中的实验可在磁带上存储高达 200 TB 数据。诸如<ruby> <a href="http://www.snia.org/ltfs"> 线性磁带文件系统 </a> <rt> Linear Tape File System </rt></ruby>之类的技术允许你象访问网络硬盘一样读取磁带数据。
然而对许多人而言,磁带[绝对是最后选择的手段](https://www.reddit.com/r/sysadmin/comments/5visaq/backups_how_many_of_you_still_have_tapes/de2d0qm/)。没关系,因为备份应该有大量的可选方案。在这种情况下,一个系统管理员说到,“故障时我们会用这些方法(恢复备份):(Windows)服务器层面的 VSS (Volume Shadow Storage)快照,<ruby> 存储区域网络 <rt> SAN </rt></ruby>层面的卷快照,以及存储区域网络层面的异地归档快照。但是万一有什么事情发生并摧毁了我们的虚拟机,存储区域网络和备份存储区域网络,我们还是可以取回磁带并恢复数据。”
当麻烦即将到来时,可使用副本工具如 [Veeam](https://helpcenter.veeam.com/docs/backup/vsphere/failover.html?ver=95),它会为你的服务器创建一个虚拟机副本。若出现故障,副本会自动启动。没有麻烦,没有手忙脚乱,正如某个系统管理员在这个流行的系统管理员帖子中所说,“[我爱你 Veeam](https://www.reddit.com/r/sysadmin/comments/5rttuo/i_love_you_veeam/)。”
### 网络?什么网络?
当然,如果员工们无法触及他们的服务,没有任何云、colo 和远程数据中心能帮到你。你不需要一场自然灾害来证明冗余互联网连接的正确性。只需要一台挖断线路的挖掘机或断掉的光缆就能让你在工作中渡过糟糕的一天。
“理想状态下”,某个系统管理员明智地观察到,“你应该有[两路互联网接入线路连接到有独立基础设施的两个 ISP](https://www.reddit.com/r/sysadmin/comments/5rmqfx/ars_surviving_a_cloudbased_disaster_recovery_plan/dd90auv/)。例如,你不希望两个 ISP 都依赖于同一根光缆。你也不希望采用两家本地 ISP,并发现他们的上行带宽都依赖于同一家骨干网运营商。”
聪明的系统管理员知道他们公司的互联网接入线路[必须是商业级别的](http://www.e-vergent.com/what-is-dedicated-internet-access/),带有<ruby> 服务等级协议 <rt> service-level agreement(SLA) </rt></ruby>,其中包含“修复时间”条款。或者更好的是采用<ruby> 互联网接入专线 <rt> dedicated Internet access </rt></ruby>。技术上这与任何其他互联网接入方式没有区别。区别在于互联网接入专线不是一种“尽力而为”的接入方式,而是你会得到明确规定的专供你使用的带宽并附有服务等级协议。这种专线不便宜,但正如一句格言所说的那样,“速度、可靠性、便宜,只能挑两个。”当你的业务跑在这条线路上并且一场暴风雨即将来袭,“可靠性”必须是你挑的两个之一。
### 晴空重现之时
你没法准备应对所有自然灾害,但你可以为其中很多做好计划。有一个深思熟虑且经过测试的灾备和业务持续计划,并逐字逐句严格执行,当竞争对手溺毙的时候,你的公司可以幸存下来。
### 系统管理员对抗自然灾害:给领导者的教训
* 你的 IT 员工得说多少次:不要仅仅备份,还得测试备份?
* 没电就没公司。确保你的服务器有足够的应急电源来满足业务需要,并确保它们能正常工作。
* 如果你的公司在一场自然灾害中幸存下来,或者避开了灾害,明智的系统管理员知道这就是向管理层申请被他们推迟的灾备预算的时候了。因为下次你就未必有这么幸运了。
---
via: <https://www.hpe.com/us/en/insights/articles/it-disaster-recovery-sysadmins-vs-natural-disasters-1711.html>
作者:[Steven-J-Vaughan-Nichols](https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html) 译者:[0x996](https://github.com/0x996) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='www.hpe.com', port=443): Read timed out. (read timeout=10) | null |
11,215 | 如何在 Github 上创建一个拉取请求 | https://opensource.com/article/19/7/create-pull-request-github | 2019-08-12T08:38:26 | [
"GitHub",
"PR"
] | https://linux.cn/article-11215-1.html |
>
> 学习如何复刻一个仓库,进行更改,并要求维护人员审查并合并它。
>
>
>

你知道如何使用 git 了,你有一个 [GitHub](https://github.com/) 仓库并且可以向它推送。这一切都很好。但是你如何为他人的 GitHub 项目做出贡献? 这是我在学习 git 和 GitHub 之后想知道的。在本文中,我将解释如何<ruby> 复刻 <rt> fork </rt></ruby>一个 git 仓库、进行更改并提交一个<ruby> 拉取请求 <rt> pull request </rt></ruby>。
当你想要在一个 GitHub 项目上工作时,第一步是复刻一个仓库。

你可以使用[我的演示仓库](https://github.com/kedark3/demo)试一试。
当你在这个页面时,单击右上角的 “Fork”(复刻)按钮。这将在你的 GitHub 用户账户下创建我的演示仓库的一个新副本,其 URL 如下:
```
https://github.com/<你的用户名>/demo
```
这个副本包含了原始仓库中的所有代码、分支和提交。
接下来,打开你计算机上的终端并运行命令来<ruby> 克隆 <rt> clone </rt></ruby>仓库:
```
git clone https://github.com/<你的用户名>/demo
```
克隆好仓库之后,你需要做两件事:
1、通过发出命令创建一个新分支 `new_branch` :
```
git checkout -b new_branch
```
2、使用以下命令为上游仓库创建一个新的<ruby> 远程 <rt> remote </rt></ruby>:
```
git remote add upstream https://github.com/kedark3/demo
```
在这种情况下,“上游仓库”指的是你的复刻所源自的那个原始仓库。
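这个 `upstream` 远程之后还可以用来让你的复刻与原始仓库保持同步,例如(假设上游的默认分支为 `master`):

```
git fetch upstream
git checkout master
git merge upstream/master
```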
现在你可以更改代码了。以下代码创建一个新分支,进行任意更改,并将其推送到 `new_branch` 分支:
```
$ git checkout -b new_branch
Switched to a new branch ‘new_branch’
$ echo “some test file” > test
$ cat test
Some test file
$ git status
On branch new_branch
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
test
nothing added to commit but untracked files present (use "git add" to track)
$ git add test
$ git commit -S -m "Adding a test file to new_branch"
[new_branch (root-commit) 4265ec8] Adding a test file to new_branch
1 file changed, 1 insertion(+)
create mode 100644 test
$ git push -u origin new_branch
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 918 bytes | 918.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
Remote: Create a pull request for ‘new_branch’ on GitHub by visiting:
Remote: http://github.com/example/Demo/pull/new/new_branch
Remote:
* [new branch] new_branch -> new_branch
```
将更改推送到你的仓库后,GitHub 上就会出现 “Compare & pull request”(比较和拉取请求)按钮。

单击它,你将进入此屏幕:

单击 “Create pull request”(创建拉取请求)按钮打开一个拉取请求。这将允许仓库的维护者们审查你的贡献。如果你的贡献没有问题,他们可以合并它;或者,他们可能会要求你做一些修改。
### 精简版
总之,如果你想为一个项目做出贡献,最简单的方法是:
1. 找到您想要贡献的项目
2. 复刻它
3. 将其克隆到你的本地系统
4. 建立一个新的分支
5. 进行你的更改
6. 将其推送回你的仓库
7. 单击 “Compare & pull request”(比较和拉取请求)按钮
8. 单击 “Create pull request”(创建拉取请求)以打开一个新的拉取请求
如果审阅者要求更改,请重复步骤 5 和 6,为你的拉取请求添加更多提交。
快乐编码!
---
via: <https://opensource.com/article/19/7/create-pull-request-github>
作者:[Kedar Vijay Kulkarni](https://opensource.com/users/kkulkarn) 选题:[lujun9972](https://github.com/lujun9972) 译者:[furrybear](https://github.com/furrybear) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | So, you know how to use git. You have a [GitHub](https://github.com/) repo and can push to it. All is well. But how the heck do you contribute to other people's GitHub projects? That is what I wanted to know after I learned git and GitHub. In this article, I will explain how to fork a git repo, make changes, and submit a pull request.
When you want to work on a GitHub project, the first step is to fork a repo.

Use [my demo repo](https://github.com/kedark3/demo) to try it out.
Once there, click on the **Fork** button in the top-right corner. This creates a new copy of my demo repo under your GitHub user account with a URL like:
`https://github.com/<YourUserName>/demo`
The copy includes all the code, branches, and commits from the original repo.
Next, clone the repo by opening the terminal on your computer and running the command:
`git clone https://github.com/<YourUserName>/demo`
Once the repo is cloned, you need to do two things:
-
Create a new branch by issuing the command:
`git checkout -b new_branch`
-
Create a new remote for the upstream repo with the command:
`git remote add upstream https://github.com/kedark3/demo`
In this case, "upstream repo" refers to the original repo you created your fork from.
Now you can make changes to the code. The following code creates a new branch, makes an arbitrary change, and pushes it to **new_branch**:
```
$ git checkout -b new_branch
Switched to a new branch ‘new_branch’
$ echo “some test file” > test
$ cat test
Some test file
$ git status
On branch new_branch
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
test
nothing added to commit but untracked files present (use "git add" to track)
$ git add test
$ git commit -S -m "Adding a test file to new_branch"
[new_branch (root-commit) 4265ec8] Adding a test file to new_branch
1 file changed, 1 insertion(+)
create mode 100644 test
$ git push -u origin new_branch
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 918 bytes | 918.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
Remote: Create a pull request for ‘new_branch’ on GitHub by visiting:
Remote: http://github.com/example/Demo/pull/new/new_branch
Remote:
* [new branch] new_branch -> new_branch
```
Once you push the changes to your repo, the **Compare & pull request** button will appear in GitHub.

Click it and you'll be taken to this screen:

Open a pull request by clicking the **Create pull request** button. This allows the repo's maintainers to review your contribution. From here, they can merge it if it is good, or they may ask you to make some changes.
## TLDR
In summary, if you want to contribute to a project, the simplest way is to:
- Find a project you want to contribute to
- Fork it
- Clone it to your local system
- Make a new branch
- Make your changes
- Push it back to your repo
- Click the
**Compare & pull request**button - Click
**Create pull request**to open a new pull request
If the reviewers ask for changes, repeat steps 5 and 6 to add more commits to your pull request.
Happy coding!
|
11,216 | 在 Fedora 下使用下拉式终端更快输入命令 | https://fedoramagazine.org/use-a-drop-down-terminal-for-fast-commands-in-fedora/ | 2019-08-12T09:39:31 | [
"终端"
] | https://linux.cn/article-11216-1.html | 
下拉式终端可以一键打开,并快速输入桌面上的任何命令。通常它会以平滑的方式创建终端,有时会带有下拉效果。本文演示了如何使用 Yakuake、Tilda、Guake 和 GNOME 扩展等下拉式终端来改善和加速日常任务。
### Yakuake
[Yakuake](https://kde.org/applications/system/org.kde.yakuake) 是一个基于 KDE Konsole 技术的下拉式终端模拟器。它以 GNU GPLv2 条款分发。它包括以下功能:
* 从屏幕顶部平滑地滚下
* 标签式界面
* 尺寸和动画速度可配置
* 换肤
* 先进的 D-Bus 接口
要安装 Yakuake,请使用以下命令:
```
$ sudo dnf install -y yakuake
```
#### 启动和配置
如果你运行 KDE,请打开系统设置,然后转到“启动和关闭”。将“yakuake”添加到“自动启动”下的程序列表中,如下所示:

Yakuake 运行时很容易配置,首先在命令行启动该程序:
```
$ yakuake &
```
随后出现欢迎对话框。如果标准的快捷键和你已经使用的快捷键冲突,你可以设置一个新的。

点击菜单按钮,出现如下帮助菜单。接着,选择“配置 Yakuake……”访问配置选项。

你可以自定义外观选项,例如透明度、行为(例如当鼠标指针移过它们时聚焦终端)和窗口(如大小和动画)。在窗口选项中,你会发现当你使用两个或更多监视器时最有用的选项之一:“在鼠标所在的屏幕上打开”。
#### 使用 Yakuake
主要的快捷键有:
* `F12` = 打开/撤回 Yakuake
* `Ctrl+F11` = 全屏模式
* `Ctrl+)` = 上下分割
* `Ctrl+(` = 左右分割
* `Ctrl+Shift+T` = 新会话
* `Shift+Right` = 下一个会话
* `Shift+Left` = 上一个会话
* `Ctrl+Alt+S` = 重命名会话
以下是 Yakuake 像[终端多路复用器](https://fedoramagazine.org/4-cool-terminal-multiplexers/)一样分割会话的示例。使用此功能,你可以在一个会话中运行多个 shell。

### Tilda
[Tilda](https://github.com/lanoxx/tilda) 是一个下拉式终端,可与其他流行的终端模拟器相媲美,如 GNOME 终端、KDE 的 Konsole、xterm 等等。
它具有高度可配置的界面。你甚至可以更改终端大小和动画速度等选项。Tilda 还允许你启用热键,以绑定到各种命令和操作。
要安装 Tilda,请运行以下命令:
```
$ sudo dnf install -y tilda
```
#### 启动和配置
大多数用户更喜欢在登录时就在后台运行一个下拉式终端。要设置此选项,请首先转到桌面上的应用启动器,搜索 Tilda,然后将其打开。
接下来,打开 Tilda 配置窗口。选择“启动时隐藏 Tilda”(Start Tilda hidden),这样启动时不会立即显示终端。

接下来,你要设置你的桌面自动启动 Tilda。如果你使用的是 KDE,请转到“系统设置 > 启动与关闭 > 自动启动”并“添加一个程序”。

如果你正在使用 GNOME,则可以在终端中运行此命令:
```
$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/
```
当你第一次运行它时,会出现一个向导来设置首选项。如果需要更改设置,请右键单击终端并转到菜单中的“首选项”。

你还可以创建多个配置文件,并绑定其他快捷键以在屏幕上的不同位置打开新终端。为此,请运行以下命令:
```
$ tilda -C
```
每次使用上述命令时,Tilda 都会在 `~/.config/tilda/` 文件夹中创建名为 `config_0`、`config_1` 之类的新配置文件。然后,你可以映射组合键,以便使用特定的一组选项打开新的 Tilda 终端。
#### 使用 Tilda
主要快捷键有:
* `F1` = 拉下终端 Tilda(注意:如果你有多个配置文件,快捷方式是 F1、F2、F3 等)
* `F11` = 全屏模式
* `F12` = 切换透明模式
* `Ctrl+Shift+T` = 添加标签
* `Ctrl+Page Up` = 下一个标签
* `Ctrl+Page Down` = 上一个标签
### GNOME 扩展
Drop-down Terminal [GNOME 扩展](https://extensions.gnome.org/extension/442/drop-down-terminal/)允许你在 GNOME Shell 中使用这个有用的工具。它易于安装和配置,使你可以快速访问终端会话。
#### 安装
打开浏览器并转到[此 GNOME 扩展的站点](https://extensions.gnome.org/extension/442/drop-down-terminal/)。启用扩展设置为“On”,如下所示:

然后选择“Install”以在系统上安装扩展。

执行此操作后,无需设置任何自动启动选项。只要你登录 GNOME,扩展程序就会自动运行!
#### 配置
安装后,将打开 Drop-down Terminal 配置窗口以设置首选项。例如,可以设置终端大小、动画、透明度和使用滚动条。

如果你将来需要更改某些首选项,请运行 `gnome-shell-extension-prefs` 命令并选择“Drop Down Terminal”。
#### 使用该扩展
快捷键很简单:
* 反引号 `` ` `` (通常是 `Tab` 键上方的按键) = 打开/撤回终端
* `F12` (可以定制) = 打开/撤回终端
---
via: <https://fedoramagazine.org/use-a-drop-down-terminal-for-fast-commands-in-fedora/>
作者:[Guilherme Schelp](https://fedoramagazine.org/author/schelp/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | A **drop-down terminal** lets you tap a key and quickly enter any command on your desktop. Often it creates a terminal in a smooth way, sometimes with effects. This article demonstrates how it helps to improve and speed up daily tasks, using drop-down terminals like Yakuake, Tilda, Guake and a GNOME extension.
## Yakuake
[Yakuake](https://kde.org/applications/system/org.kde.yakuake) is a drop-down terminal emulator based on KDE Konsole techonology. It is distributed under the terms of the GNU GPL Version 2. It includes features such as:
- Smoothly rolls down from the top of your screen
- Tabbed interface
- Configurable dimensions and animation speed
- Skinnable
- Sophisticated D-Bus interface
To install Yakuake, use the following command:
$ sudo dnf install -y yakuake
### Startup and configuration
If you’re running KDE, open the System Settings and go to *Startup and Shutdown*. Add *yakuake* to the list of programs under *Autostart*, like this:

It’s easy to configure Yakuake while running the app. To begin, launch the program at the command line:
$ yakuake &
The following welcome dialog appears. You can set a new keyboard shortcut if the standard one conflicts with another keystroke you already use:

Now click the menu button, and the following help menu appears. Next, select *Configure Yakuake…* to access the configuration options.

You can customize the options for appearance, such as opacity; behavior, such as focusing terminals when the mouse pointer is moved over them; and window, such as size and animation. In the window options you’ll find one of the most useful options is you use two or more monitors: *Open on screen: At mouse location*.
### Using Yakuake
The main shortcuts are:
- **F12** = Open/Retract Yakuake
- **Ctrl+F11** = Full Screen Mode
- **Ctrl+)** = Split Top/Bottom
- **Ctrl+(** = Split Left/Right
- **Ctrl+Shift+T** = New Session
- **Shift+Right** = Next Session
- **Shift+Left** = Previous Session
- **Ctrl+Alt+S** = Rename Session
Below is an example of Yakuake being used to split the session like a [terminal multiplexer](https://fedoramagazine.org/4-cool-terminal-multiplexers/). Using this feature, you can run several shells in one session.

## Tilda
[Tilda](https://github.com/lanoxx/tilda) is a drop-down terminal that compares with other popular terminal emulators such as GNOME Terminal, KDE’s Konsole, xterm, and many others.
It features a highly configurable interface. You can even change options such as the terminal size and animation speed. Tilda also lets you enable hotkeys you can bind to commands and operations.
To install Tilda, run this command:
$ sudo dnf install -y tilda
### Startup and configuration
Most users prefer to have a drop-down terminal available behind the scenes when they login. To set this option, first go to the app launcher in your desktop, search for Tilda, and open it.
Next, open up the Tilda Config window. Select *Start Tilda hidden*, which means it will not display a terminal immediately when started.

Next, you’ll set your desktop to start Tilda automatically. If you’re using KDE, go to *System Settings* > *Startup and Shutdown* > *Autostart* and use *Add a Program*.

If you’re using GNOME, you can run this command in a terminal:
$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/
When you run for the first time, a wizard shows up to set your preferences. If you need to change something, right click and go to *Preferences* in the menu.

You can also create multiple configuration files, and bind other keys to open new terminals at different places on the screen. To do that, run this command:
$ tilda -C
Every time you use the above command, Tilda creates a new config file located in the *~/.config/tilda/* folder called *config_0*, *config_1*, and so on. You can then map a key combination to open a new Tilda terminal with a specific set of options.
### Using Tilda
The main shortcuts are:
- **F1** = Pull Down Terminal Tilda (Note: If you have more than one config file, the shortcuts are the same, with a different *open/retract* shortcut like F1, F2, F3, and so on)
- **F11** = Full Screen Mode
- **F12** = Toggle Transparency
- **Ctrl+Shift+T** = Add Tab
- **Ctrl+Page Up** = Go to Next Tab
- **Ctrl+Page Down** = Go to Previous Tab
## GNOME Extension
The Drop-down Terminal [GNOME Extension](https://extensions.gnome.org/extension/442/drop-down-terminal/) lets you use this useful tool in your GNOME Shell. It is easy to install and configure, and gives you fast access to a terminal session.
### Installation
Open a browser and go to the [site for this GNOME extension](https://extensions.gnome.org/extension/442/drop-down-terminal/). Enable the extension setting to *On*, as shown here:

Then select *Install* to install the extension on your system.

Once you do this, there’s no reason to set any autostart options. The extension will automatically run whenever you login to GNOME!
### Configuration
After install, the Drop Down Terminal configuration window opens to set your preferences. For example, you can set the size of the terminal, animation, transparency, and scrollbar use.

If you need change some preferences in the future, run the *gnome-shell-extension-prefs* command and choose *Drop Down Terminal*.
### Using the extension
The shortcuts are simple:
- **`** (usually the key above **Tab**) = Open/Retract Terminal
- **F12** (customize as you prefer) = Open/Retract Terminal
## Phantom X
xfce4-terminal supports drop-down too, but launch key must be set manually with a shortcut utility (like xfce4-keyboard-settings):
xfce4-terminal –hide-menubar –drop-down
## Mark
Do these support cut and paste commands? CTRL-Insert and SHIFT-Insert. That’s my major issue with most GUIs, as an admin.
## LUrk
The question is:
why the obvious choice for Gnome centric distro – GUAKE – got ommited?
And second of all
whenFedora will ship new version of Guake? Fedora 30 has obsolete version 0.8.8 from Nov 28, 2016 in its repositories. It’s not only not up-to-date 0.8.x legacy release but it also requires Python2 dependencies which are (hopefully) going to be removed in F31 due to Python2 enf-of-life.I really hope we’ll get the new Guake 3 in Fedora 31.
## Pete
I think it is important to note that this doesn't work, nor does Guake, in GNOME on Wayland.
## chr314
if you are using tilix you can just add a keyboard shortcut to the command
tilix --quake
## Max
I’ve used Tilix for this task. Great piece of software:
https://github.com/gnunn1/tilix/wiki/Quake-Mode
## Neville A. Cross
A friend recommended that I use a drop-down terminal. I still have to practice and remind myself to use it; I haven't fully trained myself yet. In my case I am using Guake.
## Darrin Swanson
You forgot Guake
## Aurelie
Nice article, and my favorite is Yakuake (in KDE). I had a few issues with resolution in Fedora 29 on my HiDPI XPS 13 screen, but that seems to be fixed in the latest versions. I'm a very happy user. Thanks
## Gee Bee
No guake in comparison?
## anon
you forgot to mention that xfce terminal has dropdown mode.
https://docs.xfce.org/apps/terminal/dropdown
## Yury Donskoy
Actually, all this article demonstrates is how to configure the various applications, not how they will help in my daily tasks, assuming the apps even know what my daily tasks are.
When I boot up, I always run Konsole and mark it to be available on all my desktops, so it's always there. Why do I need yet another terminal app?
## Luke
Let’s address the elephant in the room:
there's an obvious choice of a drop-down console for a GNOME-centric distro like Fedora, and it's called Guake. It got mentioned by name once, and that's it.
While I have nothing against Yakuake (which should be recommended for the KDE spin), installing it on default Fedora GNOME involves pulling in 55 (!) KDE-specific dependencies, including an icon theme and Phonon.
On the other hand Guake is built with GTK3. A perfect fit for Fedora Gnome. Yet Fedora ships deprecated unpatched version 0.8.8 from 2016. It’s not even the latest version of legacy branch…
To make it worse Guake 0.8.8 relies on Python 2 which was on its deathbed for the last 5 years. Current plans are to get rid of Python 2 packages for F31.
Can we expect Guake 3 build on GTK3 and Python3 available for Fedora 31?
PS There are currently at least 3 open bug reports about this on Fedora’s bugzilla.
## pamsu
Exactly… Guake is the best drop-down terminal for GNOME Shell.
Even though compatibility is crappy, it can be integrated completely with GNOME Shell.
In its current state it does not work on Wayland: only in a browser does the F12 function work.
## Andy
Following is an easy workaround for Guake hot keys on Wayland
https://github.com/Guake/guake/issues/492#issuecomment-119075514
## Luke
Looks like we’re finally going to get updated Guake in next Fedora release!
https://bugzilla.redhat.com/show_bug.cgi?id=1738955
## Anton
I love your magazine!
## Lars
Maybe you want to add another option… Install xfce4-terminal, if you don't have it already. It comes with a drop-down option on board:
xfce4-terminal --drop-down
Then assign a shortcut for it.
## pamsu
Where is guake?
It still is the best drop-down terminal… even though it does not work properly, it remains the best.
## Blake
The latest Guake is already packaged in the F31 Rawhide repo. Follow these steps to install it as a standalone RPM:
Download guake-3.6.3-1.fc31.src.rpm from https://koji.fedoraproject.org/koji/buildinfo?buildID=1349392
Install the build dependencies: sudo dnf install gnome-common 'python3dist(pbr)'
Rebuild the RPM for the current Fedora (30): rpmbuild --rebuild ./guake-3.6.3-1.fc31.src.rpm
The rebuilt RPM will be placed in ~/rpmbuild/RPMS/noarch/guake-3.6.3-1.fc30.noarch.rpm
Install the RPM: sudo dnf install ~/rpmbuild/RPMS/noarch/guake-3.6.3-1.fc30.noarch.rpm
## rocker
sudo dnf install xdotool
then map this shell script to a key (F12 in my case):
#!/usr/bin/env bash
# Toggle a single gnome-terminal window, drop-down style.
pidterminal=$(pidof gnome-terminal-server)
if [ "$pidterminal" = "" ] ; then
    gnome-terminal-server 2> /dev/null &
fi
window=$(xdotool search --name 'Terminal - .*')
current=$(xdotool getwindowfocus)
if [ "$current" = "$window" ] ; then
    xdotool windowminimize $window
else
    xdotool windowactivate $window
    # disabled (multiple monitors): xdotool windowsize $window 100% 100%
fi
## Tim
“This article demonstrates how it helps to improve and speed up daily tasks, using drop-down terminals like Yakuake, Tilda, Guake..”
Unfortunately for GNOME users, the first one requires many KDE dependencies, and the latter two are broken (in Fedora 30).
## TonyG
If your desktop environment supports configurable hotkeys, set one to open your existing terminal. I set Ctrl+Alt+T to launch gnome-terminal. No new package needed, no new terminal to learn.
## Paul W. Frields
@TonyG: I do the same thing, cool! However, for reasons of screen real estate, drop down may be better for some people. |
11,218 | GNU Autotools 介绍 | https://opensource.com/article/19/7/introduction-gnu-autotools | 2019-08-13T09:48:42 | [
"Autotools",
"make"
] | https://linux.cn/article-11218-1.html |
>
> 如果你仍未使用过 Autotools,那么这篇文章将改变你递交代码的方式。
>
>
>

你有没有下载过流行的软件项目的源代码,要求你输入几乎是仪式般的 `./configure; make && make install` 命令序列来构建和安装它?如果是这样,你已经使用过 [GNU Autotools](https://www.gnu.org/software/automake/faq/autotools-faq.html) 了。如果你曾经研究过这样的项目所附带的一些文件,你可能会对这种构建系统的显而易见的复杂性感到害怕。
好的消息是,GNU Autotools 的设置要比你想象的要简单得多,GNU Autotools 本身可以为你生成这些上千行的配置文件。是的,你可以编写 20 或 30 行安装代码,并免费获得其他 4,000 行。
### Autotools 工作方式
如果你是初次使用 Linux 的用户,正在寻找有关如何安装应用程序的信息,那么你不必阅读本文!如果你想研究如何构建软件,欢迎阅读它;但如果你只是要安装一个新应用程序,请阅读我在[在 Linux 上安装应用程序](/article-9486-1.html)的文章。
对于开发人员来说,Autotools 是一种管理和打包源代码的快捷方式,以便用户可以编译和安装软件。 Autotools 也得到了主要打包格式(如 DEB 和 RPM)的良好支持,因此软件存储库的维护者可以轻松管理使用 Autotools 构建的项目。
Autotools 工作步骤:
1. 首先,在 `./configure` 步骤中,Autotools 扫描宿主机系统(即当前正在运行的计算机)以发现默认设置。默认设置包括支持库所在的位置,以及新软件应放在系统上的位置。
2. 接下来,在 `make` 步骤中,Autotools 通常通过将人类可读的源代码转换为机器语言来构建应用程序。
3. 最后,在 `make install` 步骤中,Autotools 将其构建好的文件复制到计算机上(在配置阶段检测到)的相应位置。
这个过程看起来很简单,和你使用 Autotools 的步骤一样。
### Autotools 的优势
GNU Autotools 是我们大多数人认为理所当然的重要软件。与 [GCC(GNU 编译器集合)](https://en.wikipedia.org/wiki/GNU_Compiler_Collection)一起,Autotools 是支持将自由软件构建和安装到正在运行的系统的脚手架。如果你正在运行 [POSIX](https://en.wikipedia.org/wiki/POSIX) 系统,可以毫不夸张地说,你计算机上的操作系统里大多数可运行的软件,都是靠这些项目构建出来的。
即使你的个人项目并不是操作系统,你也可能会认为 Autotools 对你的需求来说是大材小用。但是,尽管它的名气很大,Autotools 有许多可能对你有益的小功能,即使你的项目只是一个相对简单的应用程序或一系列脚本。
#### 可移植性
首先,Autotools 考虑到了可移植性。虽然它无法使你的项目在所有 POSIX 平台上工作(这取决于你,编码的人),但 Autotools 可以确保你标记为要安装的文件安装到已知平台上最合理的位置。而且由于 Autotools,高级用户可以轻松地根据他们自己的系统情况定制和覆盖任何非最佳设定。
使用 Autotools,你只要知道需要将文件安装到哪个常规位置就行了。它会处理其他一切。不需要可能破坏未经测试的操作系统的定制安装脚本。
#### 打包
Autotools 也得到了很好的支持。将一个带有 Autotools 的项目交给一个发行版打包者,无论他们是打包成 RPM、DEB、TGZ 还是其他任何东西,都很简单。打包工具知道 Autotools,因此可能不需要修补、魔改或调整。在许多情况下,将 Autotools 项目结合到流程中甚至可以实现自动化。
### 如何使用 Autotools
要使用 Autotools,必须先安装它。你的发行版可能提供一个单个的软件包来帮助开发人员构建项目,或者它可能为每个组件提供了单独的软件包,因此你可能需要在你的平台上进行一些研究以发现需要安装的软件包。
Autotools 的组件是:
* `automake`
* `autoconf`
* `make`
虽然你可能需要安装项目所需的编译器(例如 GCC),但 Autotools 可以很好地处理不需要编译的脚本或二进制文件。实际上,Autotools 对于此类项目非常有用,因为它提供了一个 `make uninstall` 脚本,以便于删除。
安装了所有组件之后,现在让我们了解一下你的项目文件的组成结构。
#### Autotools 项目结构
GNU Autotools 有非常具体的预期规范,如果你经常下载和构建源代码,可能大多数都很熟悉。首先,源代码本身应该位于一个名为 `src` 的子目录中。
你的项目不必遵循所有这些预期规范,但如果你将文件放在非标准位置(从 Autotools 的角度来看),那么你将不得不稍后在 `Makefile` 中对其进行调整。
此外,这些文件是必需的:
* `NEWS`
* `README`
* `AUTHORS`
* `ChangeLog`
你不必主动使用这些文件,它们可以是包含所有信息的单个汇总文档(如 `README.md`)的符号链接,但它们必须存在。
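下面是一种常见的做法示意(这里假设你的项目已经有一份 `README.md`;其余文件即使是空的也能满足要求):

```
$ touch NEWS AUTHORS ChangeLog
$ ln -s README.md README
```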
#### Autotools 配置
在你的项目根目录下创建一个名为 `configure.ac` 的文件。`autoconf` 使用此文件来创建用户在构建之前运行的 `configure` shell 脚本。该文件必须至少包含 `AC_INIT` 和 `AC_OUTPUT` [M4 宏](https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Initializing-configure.html)。你不需要了解有关 M4 语言的任何信息就可以使用这些宏;它们已经为你编写好了,并且所有与 Autotools 相关的内容都在该文档中定义好了。
在你喜欢的文本编辑器中打开该文件。`AC_INIT` 宏可以包括包名称、版本、报告错误的电子邮件地址、项目 URL 以及可选的源 TAR 文件名称等参数。
[AC\_OUTPUT](https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Output.html#Output) 宏更简单,不用任何参数。
```
AC_INIT([penguin], [2019.3.6], [[email protected]])
AC_OUTPUT
```
如果你此刻运行 `autoconf`,会依据你的 `configure.ac` 文件生成一个 `configure` 脚本,它是可以运行的。但是,也就是能运行而已,因为到目前为止你所做的就是定义项目的元数据,并要求创建一个配置脚本。
你必须在 `configure.ac` 文件中调用的下一个宏是创建 [Makefile](https://www.gnu.org/software/make/manual/html_node/Introduction.html) 的函数。 `Makefile` 会告诉 `make` 命令做什么(通常是如何编译和链接程序)。
创建 `Makefile` 的宏是 `AM_INIT_AUTOMAKE`,它不接受任何参数,而 `AC_CONFIG_FILES` 接受的参数是你要输出的文件的名称。
最后,你必须添加一个宏来考虑你的项目所需的编译器。你使用的宏显然取决于你的项目。如果你的项目是用 C++ 编写的,那么适当的宏是 `AC_PROG_CXX`,而用 C 编写的项目需要 `AC_PROG_CC`,依此类推,详见 Autoconf 文档中的 [Building Programs and Libraries](https://www.gnu.org/software/automake/manual/html_node/Programs.html#Programs) 部分。
例如,我可能会为我的 C++ 程序添加以下内容:
```
AC_INIT([penguin], [2019.3.6], [[email protected]])
AC_OUTPUT
AM_INIT_AUTOMAKE
AC_CONFIG_FILES([Makefile])
AC_PROG_CXX
```
保存该文件。现在让我们将目光转到 `Makefile`。
#### 生成 Autotools Makefile
`Makefile` 并不难手写,但 Autotools 可以为你编写一个,而它生成的那个将使用在 `./configure` 步骤中检测到的配置选项,并且它将包含比你考虑要包括或想要自己写的还要多得多的选项。然而,Autotools 并不能检测你的项目构建所需的所有内容,因此你必须在文件 `Makefile.am` 中添加一些细节,然后在构造 `Makefile` 时由 `automake` 使用。
`Makefile.am` 使用与 `Makefile` 相同的语法,所以如果你曾经从头开始编写过 `Makefile`,那么这个过程将是熟悉和简单的。通常,`Makefile.am` 文件只需要几个变量定义来指示要构建的文件以及它们的安装位置即可。
以 `_PROGRAMS` 结尾的变量标识了要构建的代码(这通常被认为是<ruby> 原语 <rt> primary </rt></ruby>目标;这是 `Makefile` 存在的主要意义)。Automake 也会识别其他原语,如 `_SCRIPTS`、`_ DATA`、`_LIBRARIES`,以及构成软件项目的其他常见部分。
如果你的应用程序在构建过程中需要实际编译,那么你可以用 `bin_PROGRAMS` 变量将其标记为二进制程序,然后使用该程序名称作为变量前缀引用构建它所需的源代码的任何部分(这些部分可能是将被编译和链接在一起的一个或多个文件):
```
bin_PROGRAMS = penguin
penguin_SOURCES = penguin.cpp
```
`bin_PROGRAMS` 的目标被安装在 `bindir` 中,它在编译期间可由用户配置。
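举个简单的例子,用户可以在配置阶段用 `--prefix`(或更细粒度的 `--bindir`)覆盖默认安装位置,比如把 `penguin` 装进自己的主目录而不是系统目录:

```
$ ./configure --prefix=$HOME/.local
$ make && make install
```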
如果你的应用程序不需要实际编译,那么你的项目根本不需要 `bin_PROGRAMS` 变量。例如,如果你的项目是用 Bash、Perl 或类似的解释语言编写的脚本,那么定义一个 `_SCRIPTS` 变量来替代:
```
bin_SCRIPTS = bin/penguin
```
Automake 期望源代码位于名为 `src` 的目录中,因此如果你的项目使用替代目录结构进行布局,则必须告知 Automake 接受来自外部源的代码:
```
AUTOMAKE_OPTIONS = foreign subdir-objects
```
最后,你可以在 `Makefile.am` 中创建任何自定义的 `Makefile` 规则,它们将逐字复制到生成的 `Makefile` 中。例如,如果你知道一些源代码中的临时值需要在安装前替换,则可以为该过程创建自定义规则:
```
all-am: penguin
touch bin/penguin.sh
penguin: bin/penguin.sh
@sed "s|__datadir__|@datadir@|" $< >bin/$@
```
一个特别有用的技巧是扩展现有的 `clean` 目标,至少在开发期间是这样的。`make clean` 命令通常会删除除了 Automake 基础结构之外的所有生成的构建文件。它是这样设计的,因为大多数用户很少想要 `make clean` 来删除那些便于构建代码的文件。
但是,在开发期间,你可能需要一种方法可靠地将项目返回到相对不受 Autotools 影响的状态。在这种情况下,你可能想要添加:
```
clean-local:
@rm config.status configure config.log
@rm Makefile
@rm -r autom4te.cache/
@rm aclocal.m4
@rm compile install-sh missing Makefile.in
```
这里有很多灵活性,如果你还不熟悉 `Makefile`,那么很难知道你的 `Makefile.am` 需要什么。最基本需要的是原语目标,无论是二进制程序还是脚本,以及源代码所在位置的指示(无论是通过 `_SOURCES` 变量还是使用 `AUTOMAKE_OPTIONS` 告诉 Automake 在哪里查找源代码)。
一旦定义了这些变量和设置,如下一节所示,你就可以尝试生成构建脚本,并调整缺少的任何内容。
#### 生成 Autotools 构建脚本
你已经构建了基础结构,现在是时候让 Autotools 做它最擅长的事情:自动化你的项目工具。对于开发人员(你),Autotools 的接口与构建代码的用户的不同。
构建者通常使用这个众所周知的顺序:
```
$ ./configure
$ make
$ sudo make install
```
但是,要使这种咒语起作用,你作为开发人员必须先引导构建这些基础结构。首先,运行 `autoreconf` 以生成用户在运行 `make` 之前调用的 `configure` 脚本。使用 `--install` 选项引入辅助文件,例如指向 `depcomp`(在编译过程中生成依赖项的脚本)的符号链接,以及 `compile` 脚本(一个编译器的封装脚本,用于兼容不同编译器的语法差异)的副本,等等。
```
$ autoreconf --install
configure.ac:3: installing './compile'
configure.ac:2: installing './install-sh'
configure.ac:2: installing './missing'
```
使用此开发构建环境,你可以创建源代码分发包:
```
$ make dist
```
`dist` 目标是从 Autotools “免费”获得的规则。这是一个内置于 `Makefile` 中的功能,它是通过简单的 `Makefile.am` 配置生成的。该目标可以生成一个 `tar.gz` 存档,其中包含了所有源代码和所有必要的 Autotools 基础设施,以便下载程序包的人员可以构建项目。
此时,你应该仔细查看存档文件的内容,以确保它包含你要发送给用户的所有内容。当然,你也应该尝试自己构建:
```
$ tar --extract --file penguin-0.0.1.tar.gz
$ cd penguin-0.0.1
$ ./configure
$ make
$ DESTDIR=/tmp/penguin-test-build make install
```
如果你的构建成功,你将找到由 `DESTDIR` 指定的已编译应用程序的本地副本(在此示例的情况下为 `/tmp/penguin-test-build`)。
```
$ /tmp/penguin-test-build/usr/local/bin/penguin
hello world from GNU Autotools
```
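在正式发布之前,还可以让 Automake 替你做一次更严格的自检:`make distcheck` 目标会生成源码包、把它解压到临时目录、在那里完成配置和构建,并验证安装与卸载是否都能正常工作:

```
$ make distcheck
```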
### 去使用 Autotools
Autotools 是一个很好的脚本集合,可用于可预测的自动发布过程。如果你习惯使用 Python 或 Bash 构建器,这个工具集对你来说可能是新的,但它为你的项目提供的结构和适应性可能值得学习。
而 Autotools 也不只是用于代码。Autotools 可用于构建 [Docbook](https://opensource.com/article/17/9/docbook) 项目,保持媒体有序(我使用 Autotools 进行音乐发布),文档项目以及其他任何可以从可自定义安装目标中受益的内容。
---
via: <https://opensource.com/article/19/7/introduction-gnu-autotools>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Have you ever downloaded the source code for a popular software project that required you to type the almost ritualistic **./configure; make && make install** command sequence to build and install it? If so, you’ve used [GNU Autotools](https://www.gnu.org/software/automake/faq/autotools-faq.html). If you’ve ever looked into some of the files accompanying such a project, you’ve likely also been terrified at the apparent complexity of such a build system.
Good news! GNU Autotools is a lot simpler to set up than you think, and it’s GNU Autotools itself that generates those 1,000-line configuration files for you. Yes, you can write 20 or 30 lines of installation code and get the other 4,000 for free.
## Autotools at work
If you’re a user new to Linux looking for information on how to install applications, you do not have to read this article! You’re welcome to read it if you want to research how software is built, but if you’re just installing a new application, go read my article about [installing apps on Linux](https://opensource.com/article/18/1/how-install-apps-linux).
For developers, Autotools is a quick and easy way to manage and package source code so users can compile and install software. Autotools is also well-supported by major packaging formats, like DEB and RPM, so maintainers of software repositories can easily prepare a project built with Autotools.
Autotools works in stages:
- First, during the
**./configure**step, Autotools scans the host system (the computer it’s being run on) to discover the default settings. Default settings include where support libraries are located, and where new software should be placed on the system. - Next, during the
**make**step, Autotools builds the application, usually by converting human-readable source code into machine language. - Finally, during the
**make install**step, Autotools copies the files it built to the appropriate locations (as detected during the configure stage) on your computer.
This process seems simple, and it is, as long as you use Autotools.
## The Autotools advantage
GNU Autotools is a big and important piece of software that most of us take for granted. Along with [GCC (the GNU Compiler Collection)](https://en.wikipedia.org/wiki/GNU_Compiler_Collection), Autotools is the scaffolding that allows Free Software to be constructed and installed to a running system. If you’re running a [POSIX](https://en.wikipedia.org/wiki/POSIX) system, it’s not an understatement to say that most of your operating system exists as runnable software on your computer because of these projects.
In the likely event that your pet project isn’t an operating system, you might assume that Autotools is overkill for your needs. But, despite its reputation, Autotools has lots of little features that may benefit you, even if your project is a relatively simple application or series of scripts.
### Portability
First of all, Autotools comes with portability in mind. While it can’t make your project work across all POSIX platforms (that’s up to you, as the coder), Autotools can ensure that the files you’ve marked for installation get installed to the most sensible locations on a known platform. And because of Autotools, it’s trivial for a power user to customize and override any non-optimal value, according to their own system.
With Autotools, all you need to know is what files need to be installed to what general location. It takes care of everything else. No more custom install scripts that break on any untested OS.
### Packaging
Autotools is also well-supported. Hand a project with Autotools over to a distro packager, whether they’re packaging an RPM, DEB, TGZ, or anything else, and their job is simple. Packaging tools know Autotools, so there’s likely to be no patching, hacking, or adjustments necessary. In many cases, incorporating an Autotools project into a pipeline can even be automated.
## How to use Autotools
To use Autotools, you must first have Autotools installed. Your distribution may provide one package meant to help developers build projects, or it may provide separate packages for each component, so you may have to do some research on your platform to discover what packages you need to install.
The primary components of Autotools are:
- **automake**
- **autoconf**
- **make**
While you likely need to install the compiler (GCC, for instance) required by your project, Autotools works just fine with scripts or binary assets that don’t need to be compiled. In fact, Autotools can be useful for such projects because it provides a **make uninstall** script for easy removal.
Once you have all of the components installed, it’s time to look at the structure of your project’s files.
### Autotools project structure
GNU Autotools has very specific expectations, and most of them are probably familiar if you download and build source code often. First, the source code itself is expected to be in a subdirectory called **src**.
Your project doesn’t have to follow all of these expectations, but if you put files in non-standard locations (from the perspective of Autotools), then you’ll have to make adjustments for that in your Makefile later.
Additionally, these files are required:
- **NEWS**
- **README**
- **AUTHORS**
- **ChangeLog**
You don’t have to actively use the files, and they can be symlinks to a monolithic document (like **README.md**) that encompasses all of that information, but they must be present.
### Autotools configuration
Create a file called **configure.ac** at your project’s root directory. This file is used by **autoconf** to create the **configure** shell script that users run before building. The file must contain, at the very least, the **AC_INIT** and **AC_OUTPUT** [M4 macros](https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Initializing-configure.html). You don’t need to know anything about the M4 language to use these macros; they’re already written for you, and all of the ones relevant to Autotools are defined in the documentation.
Open the file in your favorite text editor. The **AC_INIT** macro may consist of the package name, version, an email address for bug reports, the project URL, and optionally the name of the source TAR file.
The **AC_OUTPUT** macro is much simpler and accepts no arguments.
```
AC_INIT([penguin], [2019.3.6], [[email protected]])
AC_OUTPUT
```
If you were to run **autoconf** at this point, a **configure** script would be generated from your **configure.ac** file, and it would run successfully. That’s all it would do, though, because all you have done so far is define your project’s metadata and called for a configuration script to be created.
The next macros you must invoke in your **configure.ac** file are functions to create a [Makefile](https://www.gnu.org/software/make/manual/html_node/Introduction.html). A Makefile tells the **make** command what to do (usually, how to compile and link a program).
The macros to create a Makefile are **AM_INIT_AUTOMAKE**, which accepts no arguments, and **AC_CONFIG_FILES**, which accepts the name you want to call your output file.
Finally, you must add a macro to account for the compiler your project needs. The macro you use obviously depends on your project. If your project is written in C++, the appropriate macro is **AC_PROG_CXX**, while a project written in C requires **AC_PROG_CC**, and so on, as detailed in the [Building Programs and Libraries](https://www.gnu.org/software/automake/manual/html_node/Programs.html#Programs) section in the Autoconf documentation.
For example, I might add the following for my C++ program:
```
AC_INIT([penguin], [2019.3.6], [[email protected]])
AC_OUTPUT
AM_INIT_AUTOMAKE
AC_CONFIG_FILES([Makefile])
AC_PROG_CXX
```
Save the file. It’s time to move on to the Makefile.
### Autotools Makefile generation
Makefiles aren’t difficult to [write manually](https://opensource.com/article/18/8/what-how-makefile), but Autotools can write one for you, and the one it generates will use the configuration options detected during the `./configure`
step, and it will contain far more options than you would think to include or want to write yourself. However, Autotools can’t detect everything your project requires to build, so you have to add some details in the file **Makefile.am**, which in turn is used by **automake** when constructing a Makefile.
**Makefile.am** uses the same syntax as a Makefile, so if you’ve ever written a Makefile from scratch, then this process will be familiar and simple. Often, a **Makefile.am** file needs only a few variable definitions to indicate what files are to be built, and where they are to be installed.
Variables ending in **_PROGRAMS** identify code that is to be built (this is usually considered the *primary* target; it’s the main reason the Makefile exists). Automake recognizes other primaries, like **_SCRIPTS**, **_DATA**, **_LIBRARIES**, and other common parts that make up a software project.
If your application is literally compiled during the build process, then you identify it as a binary program with the **bin_PROGRAMS** variable, and then reference any part of the source code required to build it (these parts may be one or more files to be compiled and linked together) using the program name as the variable prefix:
```
bin_PROGRAMS = penguin
penguin_SOURCES = penguin.cpp
```
The target of **bin_PROGRAMS** is installed into the **bindir**, which is user-configurable during compilation.
If your application isn’t actually compiled, then your project doesn’t need a **bin_PROGRAMS** variable at all. For instance, if your project is a script written in Bash, Perl, or a similar interpreted language, then define a **_SCRIPTS** variable instead:
```
bin_SCRIPTS = bin/penguin
```
Automake expects sources to be located in a directory called **src**, so if your project uses an alternative directory structure for its layout, you must tell Automake to accept code from outside sources:
```
AUTOMAKE_OPTIONS = foreign subdir-objects
```
Finally, you can create any custom Makefile rules in **Makefile.am** and they’ll be copied verbatim into the generated Makefile. For instance, if you know that a temporary value needs to be replaced in your source code before the installation proceeds, you could make a custom rule for that process:
```
all-am: penguin
touch bin/penguin.sh
penguin: bin/penguin.sh
@sed "s|__datadir__|@datadir@|" $< >bin/$@
```
A particularly useful trick is to extend the existing **clean** target, at least during development. The **make clean** command generally removes all generated build files with the exception of the Automake infrastructure. It’s designed this way because most users rarely want **make clean** to obliterate the files that make it easy to build their code.
However, during development, you might want a method to reliably return your project to a state relatively unaffected by Autotools. In that case, you may want to add this:
```
clean-local:
@rm config.status configure config.log
@rm Makefile
@rm -r autom4te.cache/
@rm aclocal.m4
@rm compile install-sh missing Makefile.in
```
There’s a lot of flexibility here, and if you’re not already familiar with Makefiles, it can be difficult to know what your **Makefile.am** needs. The barest necessity is a primary target, whether that’s a binary program or a script, and an indication of where the source code is located (whether that’s through a **_SOURCES** variable or by using **AUTOMAKE_OPTIONS** to tell Automake where to look for source code).
Once you have those variables and settings defined, you can try generating your build scripts as you see in the next section, and adjust for anything that’s missing.
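Putting the pieces together, a minimal **Makefile.am** for the example **penguin** program might look like the following sketch (it assumes the source file lives in a **src** directory, which is why **subdir-objects** is enabled):

```
AUTOMAKE_OPTIONS = foreign subdir-objects
bin_PROGRAMS = penguin
penguin_SOURCES = src/penguin.cpp
```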
### Autotools build script generation
You’ve built the infrastructure, now it’s time to let Autotools do what it does best: automate your project tooling. The way the developer (you) interfaces with Autotools is different from how users building your code do.
Builders generally use this well-known sequence:
```
$ ./configure
$ make
$ sudo make install
```
For that incantation to work, though, you as the developer must bootstrap the build infrastructure. First, run **autoreconf** to generate the configure script that users invoke before running **make**. Use the **--install** option to bring in auxiliary files, such as a symlink to **depcomp**, a script to generate dependencies during the compiling process, and a copy of the **compile** script, a wrapper for compilers to account for syntax variance, and so on.
```
$ autoreconf --install
configure.ac:3: installing './compile'
configure.ac:2: installing './install-sh'
configure.ac:2: installing './missing'
```
With this development build environment, you can then create a package for source code distribution:
```
$ make dist
```
The **dist** target is a rule you get for "free" from Autotools.
It’s a feature that gets built into the Makefile generated from your humble **Makefile.am** configuration. This target produces a **tar.gz** archive containing all of your source code and all of the essential Autotools infrastructure so that people downloading the package can build the project.
At this point, you should review the contents of the archive carefully to ensure that it contains everything you intend to ship to your users. You should also, of course, try building from it yourself:
```
$ tar --extract --file penguin-0.0.1.tar.gz
$ cd penguin-0.0.1
$ ./configure
$ make
$ DESTDIR=/tmp/penguin-test-build make install
```
If your build is successful, you find a local copy of your compiled application specified by **DESTDIR** (in the case of this example, **/tmp/penguin-test-build**).
```
$ /tmp/penguin-test-build/usr/local/bin/penguin
hello world from GNU Autotools
```
## Time to use Autotools
Autotools is a great collection of scripts for a predictable and automated release process. This toolset may be new to you if you’re used to Python or Bash builders, but it’s likely worth learning for the structure and adaptability it provides to your project.
And Autotools is not just for code, either. Autotools can be used to build [Docbook](https://opensource.com/article/17/9/docbook) projects, to keep media organized (I use Autotools for my music releases), documentation projects, and anything else that could benefit from customizable install targets.
|
11,220 | 如何在 Ubuntu 上设置时间同步 | https://www.ostechnix.com/how-to-set-up-time-synchronization-on-ubuntu/ | 2019-08-13T13:54:00 | [
"时间",
"时区"
] | https://linux.cn/article-11220-1.html | 
你可能设置过 [cron 任务](https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/) 来在特定时间备份重要文件或执行系统相关任务。也许你配置了一个日志服务器在特定时间间隔[轮转日志](https://www.ostechnix.com/manage-log-files-using-logrotate-linux/)。但如果你的时钟不同步,这些任务将无法按时执行。这就是要在 Linux 系统上设置正确的时区并保持时钟与互联网同步的原因。本指南介绍如何在 Ubuntu Linux 上设置时间同步。下面的步骤已经在 Ubuntu 18.04 上进行了测试,但是对于使用 systemd 的 `timesyncd` 服务的其他基于 Ubuntu 的系统它们是相同的。
### 在 Ubuntu 上设置时间同步
通常,我们在安装时设置时区。但是,你可以根据需要更改或设置不同的时区。
首先,让我们使用 `date` 命令查看 Ubuntu 系统中的当前时区:
```
$ date
```
示例输出:
```
Tue Jul 30 11:47:39 UTC 2019
```
如上所见,`date` 命令显示实际日期和当前时间。这里,我当前的时区是 **UTC**,代表**协调世界时**。
或者,你可以在 `/etc/timezone` 文件中查找当前时区。
```
$ cat /etc/timezone
UTC
```
现在,让我们看看时钟是否与互联网同步。只需运行:
```
$ timedatectl
```
示例输出:
```
Local time: Tue 2019-07-30 11:53:58 UTC
Universal time: Tue 2019-07-30 11:53:58 UTC
RTC time: Tue 2019-07-30 11:53:59
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
```
如你所见,`timedatectl` 命令显示本地时间、世界时、时区以及系统时钟是否与互联网服务器同步,以及 `systemd-timesyncd.service` 是处于活动状态还是非活动状态。就我而言,系统时钟已与互联网时间服务器同步。
如果时钟不同步,你会看到下面截图中显示的 `System clock synchronized: no`。

*时间同步已禁用。*
注意:上面的截图是旧截图。这就是你看到不同日期的原因。
如果你看到 `System clock synchronized:` 值设置为 `no`,那么 `timesyncd` 服务可能处于非活动状态。因此,只需重启服务并看下是否正常。
```
$ sudo systemctl restart systemd-timesyncd.service
```
现在检查 `timesyncd` 服务状态:
```
$ sudo systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-07-30 10:50:18 UTC; 1h 11min ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 498 (systemd-timesyn)
Status: "Synchronized to time server [2001:67c:1560:8003::c7]:123 (ntp.ubuntu.com)."
Tasks: 2 (limit: 2319)
CGroup: /system.slice/systemd-timesyncd.service
└─498 /lib/systemd/systemd-timesyncd
Jul 30 10:50:30 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:31 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:31 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:32 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:32 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:50:35 ubuntuserver systemd-timesyncd[498]: Network configuration changed, trying to estab
Jul 30 10:51:06 ubuntuserver systemd-timesyncd[498]: Synchronized to time server [2001:67c:1560:800
```
如果此服务已启用并处于活动状态,那么系统时钟应与互联网时间服务器同步。
你可以使用命令验证是否启用了时间同步:
```
$ timedatectl
```
如果仍然不起作用,请运行以下命令以启用时间同步:
```
$ sudo timedatectl set-ntp true
```
现在,你的系统时钟将与互联网时间服务器同步。
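默认情况下,`timesyncd` 使用发行版预置的 NTP 服务器。如果想改用自定义的时间服务器,可以在 `/etc/systemd/timesyncd.conf` 的 `[Time]` 小节中设置 `NTP=` 选项,然后重启服务。下面是一个简单的示意(其中的 `pool.ntp.org` 服务器仅作示例,请按需替换):

```
$ cat << EOF | sudo tee -a /etc/systemd/timesyncd.conf
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
EOF
$ sudo systemctl restart systemd-timesyncd.service
```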
#### 使用 timedatectl 命令更改时区
如果我想使用 UTC 以外的其他时区怎么办?这很容易!
首先,使用命令列出可用时区:
```
$ timedatectl list-timezones
```
你将看到类似于下图的输出。

*使用 timedatectl 命令列出时区*
你可以使用以下命令设置所需的时区(例如,Asia/Shanghai):
(LCTT 译注:本文原文使用印度时区作为示例,这里为了便于使用,换为中国标准时区 CST。另外,在时区设置中,要注意 CST 这个缩写会代表四个不同的时区,因此建议使用城市和 UTC+8 来说设置。)
```
$ sudo timedatectl set-timezone Asia/Shanghai
```
使用 `date` 命令再次检查时区是否已真正更改:
```
$ date
Tue Jul 30 20:22:33 CST 2019
```
或者,如果需要详细输出,请使用 `timedatectl` 命令:
```
$ timedatectl
Local time: Tue 2019-07-30 20:22:35 CST
Universal time: Tue 2019-07-30 12:22:35 UTC
RTC time: Tue 2019-07-30 12:22:36
Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
```
如你所见,我已将时区从 UTC 更改为 CST(中国标准时间)。
要切换回 UTC 时区,只需运行:
```
$ sudo timedatectl set-timezone UTC
```
#### 使用 tzdata 更改时区
在较旧的 Ubuntu 版本中,没有 `timedatectl` 命令。这种情况下,你可以使用 `tzdata`(Time zone data)来设置时间同步。
```
$ sudo dpkg-reconfigure tzdata
```
选择你居住的地理区域。对我而言,我选择 **Asia**。选择 OK,然后按回车键。

接下来,选择与你的时区对应的城市或地区。这里,我选择了 **Kolkata**(LCTT 译注:中国用户请相应使用 Shanghai 等城市)。

最后,你将在终端中看到类似下面的输出。
```
Current default time zone: 'Asia/Shanghai'
Local time is now: Tue Jul 30 21:59:25 CST 2019.
Universal Time is now: Tue Jul 30 13:59:25 UTC 2019.
```
#### 在图形模式下配置时区
有些用户可能对命令行方式不太满意。如果你是其中之一,那么你可以轻松地在图形模式的系统设置面板中进行设置。
点击 Super 键(Windows 键),在 Ubuntu dash 中输入 **settings**,然后点击设置图标。

*从 Ubuntu dash 启动系统的设置*
或者,单击位于 Ubuntu 桌面右上角的向下箭头,然后单击左上角的“设置”图标。

*从顶部面板启动系统的设置*
在下一个窗口中,选择“细节”,然后单击“日期与时间”选项。打开“自动的日期与时间”和“自动的时区”。

*在 Ubuntu 中设置自动时区*
关闭设置窗口就行了!你的系统始终应该与互联网时间服务器同步了。
---
via: <https://www.ostechnix.com/how-to-set-up-time-synchronization-on-ubuntu/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,221 | 如何在 Linux 上安装 Elasticsearch 和 Kibana | https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux | 2019-08-13T14:33:41 | [
"Elasticsearch"
] | https://linux.cn/article-11221-1.html |
>
> 获取我们关于安装两者的简化说明。
>
>
>

如果你渴望学习基于开源 Lucene 库的著名开源搜索引擎 Elasticsearch,那么没有比在本地安装它更好的方法了。这个过程在 [Elasticsearch 网站](https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html)中有详细介绍,但如果你是初学者,官方说明就比必要的信息多得多。本文采用一种简化的方法。
### 添加 Elasticsearch 仓库
首先,将 Elasticsearch 仓库添加到你的系统,以便你可以根据需要安装它并接收更新。如何做取决于你的发行版。在基于 RPM 的系统上,例如 [Fedora](https://getfedora.org)、[CentOS](https://www.centos.org)、[Red Hat Enterprise Linux(RHEL)](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux) 或 [openSUSE](https://www.opensuse.org),(本文任何地方引用 Fedora 或 RHEL 的也适用于 CentOS 和 openSUSE)在 `/etc/yum.repos.d/` 中创建一个名为 `elasticsearch.repo` 的仓库描述文件:
```
$ cat << EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/oss-7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
```
在 Ubuntu 或 Debian 上,不要使用 `add-apt-repository` 工具,因为它的默认设置与 Elasticsearch 仓库的提供方式不匹配,会导致错误。相反,手动设置如下仓库:
```
$ echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
```
在你从该仓库安装之前,导入 GPG 公钥,然后更新:
```
$ sudo apt-key adv --keyserver \
hkp://keyserver.ubuntu.com:80 \
--recv D27D666CD88E42B4
$ sudo apt update
```
此存储库仅包含 Elasticsearch 的开源功能,在 [Apache 许可证](http://www.apache.org/licenses/)下发布,没有提供订阅版本的额外功能。如果你需要仅限订阅的功能(这些功能**并不**开源),那么 `baseurl` 必须设置为:
```
baseurl=https://artifacts.elastic.co/packages/7.x/yum
```
### 安装 Elasticsearch
你需要安装的软件包的名称取决于你使用的是开源版本还是订阅版本。本文使用开源版本,包名最后有 `-oss` 后缀。如果包名后没有 `-oss`,那么表示你请求的是仅限订阅版本。
如果你创建的仓库指向订阅版本,却尝试安装开源版本,那么只会收到一个相当含糊的错误。如果你创建的是开源版本的仓库,却没有将 `-oss` 添加到包名后,那么你也会收到错误。
使用包管理器安装 Elasticsearch。例如,在 Fedora、CentOS 或 RHEL 上运行以下命令:
```
$ sudo dnf install elasticsearch-oss
```
在 Ubuntu 或 Debian 上,运行:
```
$ sudo apt install elasticsearch-oss
```
如果你在安装 Elasticsearch 时遇到错误,那么你可能安装的是错误的软件包。如果你想如本文这样使用开源包,那么请确保使用正确的 apt 仓库或在 Yum 配置正确的 `baseurl`。
### 启动并启用 Elasticsearch
安装 Elasticsearch 后,你必须启动并启用它:
```
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now elasticsearch.service
```
要确认 Elasticsearch 在其默认端口 9200 上运行,请在 Web 浏览器中打开 `localhost:9200`。你可以使用 GUI 浏览器,也可以在终端中执行此操作:
```
$ curl localhost:9200
{
"name" : "fedora30",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "OqSbb16NQB2M0ysynnX1hA",
"version" : {
"number" : "7.2.0",
"build_flavor" : "oss",
"build_type" : "rpm",
"build_hash" : "508c38a",
"build_date" : "2019-06-20T15:54:18.811730Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
```
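为了进一步验证服务可用,你还可以索引一篇文档并把它搜出来。下面是一个简单的示意,其中的索引名 `library` 和文档内容都只是演示用的假设值:

```
$ curl -X POST "localhost:9200/library/_doc?pretty" \
-H 'Content-Type: application/json' \
-d '{"title": "Linux kernel", "year": 2019}'
$ curl "localhost:9200/library/_search?q=title:linux&pretty"
```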
### 安装 Kibana
Kibana 是 Elasticsearch 数据可视化的图形界面。它包含在 Elasticsearch 仓库,因此你可以使用包管理器进行安装。与 Elasticsearch 本身一样,如果你使用的是 Elasticsearch 的开源版本,那么必须将 `-oss` 放到包名最后,订阅版本则不用(两者安装需要匹配):
```
$ sudo dnf install kibana-oss
```
在 Ubuntu 或 Debian 上:
```
$ sudo apt install kibana-oss
```
Kibana 在端口 5601 上运行,因此打开图形化 Web 浏览器并进入 `localhost:5601` 来开始使用 Kibana,如下所示:

### 故障排除
如果在安装 Elasticsearch 时出现错误,请尝试手动安装 Java 环境。在 Fedora、CentOS 和 RHEL 上:
```
$ sudo dnf install java-openjdk-devel java-openjdk
```
在 Ubuntu 上:
```
$ sudo apt install default-jdk
```
如果所有其他方法都失败,请尝试直接从 Elasticsearch 服务器安装 Elasticsearch RPM:
```
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.2.0-x86_64.rpm{,.sha512}
$ shasum -a 512 -c elasticsearch-oss-7.2.0-x86_64.rpm.sha512 && sudo rpm --install elasticsearch-oss-7.2.0-x86_64.rpm
```
在 Ubuntu 或 Debian 上,请使用 DEB 包。
如果你无法使用 Web 浏览器访问 Elasticsearch 或 Kibana,那么可能是你的防火墙阻止了这些端口。你可以通过调整防火墙设置来允许这些端口上的流量。例如,如果你运行的是 `firewalld`(Fedora 和 RHEL 上的默认防火墙,并且可以在 Debian 和 Ubuntu 上安装),那么你可以使用 `firewall-cmd`:
```
$ sudo firewall-cmd --add-port=9200/tcp --permanent
$ sudo firewall-cmd --add-port=5601/tcp --permanent
$ sudo firewall-cmd --reload
```
设置完成了,你可以关注我们接下来的 Elasticsearch 和 Kibana 安装文章。
---
via: <https://opensource.com/article/19/7/install-elasticsearch-and-kibana-linux>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | If you're keen to learn Elasticsearch, the famous open source search engine based on the open source Lucene library, then there's no better way than to install it locally. The process is outlined in detail on the [Elasticsearch website](https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html), but the official instructions have a lot more detail than necessary if you're a beginner. This article takes a simplified approach.
## Add the Elasticsearch repository
First, add the Elasticsearch software repository to your system, so you can install it and receive updates as needed. How you do so depends on your distribution. On an RPM-based system, such as [Fedora](https://getfedora.org), [CentOS](https://www.centos.org), [Red Hat Enterprise Linux (RHEL)](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux), or [openSUSE](https://www.opensuse.org), (anywhere in this article that references Fedora or RHEL applies to CentOS and openSUSE as well) create a repository description file in **/etc/yum.repos.d/** called **elasticsearch.repo**:
```
$ cat << EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/oss-7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
```
On Ubuntu or Debian, do not use the **add-apt-repository** utility. It causes errors due to a mismatch in its defaults and what Elasticsearch’s repository provides. Instead, set up this one:
```
$ echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
```
Before you can install from that repository, import its GPG key, and then update:
```
$ sudo apt-key adv --keyserver \
hkp://keyserver.ubuntu.com:80 \
--recv D27D666CD88E42B4
$ sudo apt update
```
This repository contains only Elasticsearch’s open source features, under an [Apache License](http://www.apache.org/licenses/), with none of the extra features provided by a subscription. If you need subscription-only features (these features are *not* open source), the **baseurl** must be set to:
`baseurl=https://artifacts.elastic.co/packages/7.x/yum`
## Install Elasticsearch
The name of the package you need to install depends on whether you use the open source version or the subscription version. This article uses the open source version, which appends **-oss** to the end of the package name. Without **-oss** appended to the package name, you are requesting the subscription-only version.
If you create a repository pointing to the subscription version but try to install the open source version, you will get a fairly non-specific error in return. If you create a repository for the open source version and fail to append **-oss** to the package name, you will also get an error.
Install Elasticsearch with your package manager. For instance, on Fedora, CentOS, or RHEL, run the following:
```
$ sudo dnf install elasticsearch-oss
```
On Ubuntu or Debian, run:
```
$ sudo apt install elasticsearch-oss
```
If you get errors while installing Elasticsearch, then you may be attempting to install the wrong package. If your intention is to use the open source package, as this article does, then make sure you are using the correct **apt** repository or baseurl in your Yum configuration.
## Start and enable Elasticsearch
Once Elasticsearch has been installed, you must start and enable it:
```
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now elasticsearch.service
```
Then, to confirm that Elasticsearch is running on its default port of 9200, point a web browser to **localhost:9200**. You can use a GUI browser or you can do it in the terminal:
```
$ curl localhost:9200
{
"name" : "fedora30",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "OqSbb16NQB2M0ysynnX1hA",
"version" : {
"number" : "7.2.0",
"build_flavor" : "oss",
"build_type" : "rpm",
"build_hash" : "508c38a",
"build_date" : "2019-06-20T15:54:18.811730Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
```
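Beyond the banner shown above, you can also ask Elasticsearch for its cluster health, which is a quick way to confirm that the node is fully up. (Once you create an index on a single-node setup, a **yellow** status is normal, because replica shards have nowhere to go.)

```
$ curl "localhost:9200/_cluster/health?pretty"
```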
## Install Kibana
Kibana is a graphical interface for Elasticsearch data visualization. It’s included in the Elasticsearch repository, so you can install it with your package manager. Just as with Elasticsearch itself, you must append **-oss** to the end of the package name if you are using the open source version of Elasticsearch, and not the subscription version (the two installations need to match):
```
$ sudo dnf install kibana-oss
```
On Ubuntu or Debian:
```
$ sudo apt install kibana-oss
```
Kibana runs on port 5601, so launch a graphical web browser and navigate to **localhost:5601** to start using the Kibana interface, which is shown below:

## Troubleshoot
If you get errors while installing Elasticsearch, try installing a Java environment manually. On Fedora, CentOS, and RHEL:
```
$ sudo dnf install java-openjdk-devel java-openjdk
```
On Ubuntu:
`$ sudo apt install default-jdk`
If all else fails, try installing the Elasticsearch RPM directly from the Elasticsearch servers:
```
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.2.0-x86_64.rpm{,.sha512}
$ shasum -a 512 -c elasticsearch-oss-7.2.0-x86_64.rpm.sha512 && sudo rpm --install elasticsearch-oss-7.2.0-x86_64.rpm
```
On Ubuntu or Debian, use the DEB package instead.
If you cannot access either Elasticsearch or Kibana with a web browser, then your firewall may be blocking those ports. You can allow traffic on those ports by adjusting your firewall settings. For instance, if you are running **firewalld** (the default on Fedora and RHEL, and installable on Debian and Ubuntu), then you can use **firewall-cmd**:
```
$ sudo firewall-cmd --add-port=9200/tcp --permanent
$ sudo firewall-cmd --add-port=5601/tcp --permanent
$ sudo firewall-cmd --reload
```
You’re now set up and can follow along with our upcoming installation articles for Elasticsearch and Kibana.
|
11,222 | POSIX 是什么?让我们听听 Richard Stallman 的诠释 | https://opensource.com/article/19/7/what-posix-richard-stallman-explains | 2019-08-13T23:18:00 | [
"GNU",
"POSIX"
] | https://linux.cn/article-11222-1.html |
>
> 从计算机自由先驱的口中探寻操作系统兼容性标准背后的本质。
>
>
>

[POSIX](https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/) 是什么?为什么如此重要?你可能在很多的技术类文章中看到这个术语,但往往会在探寻其本质时迷失在<ruby> 技术缩略语 <rt> techno-initialisms </rt></ruby>的海洋或是<ruby> 以 X 结尾的行话 <rt> jargon-that-ends-in-X </rt></ruby>中。我给 [Richard Stallman](https://stallman.org/) 博士(在黑客圈里面常称之为 RMS)发了邮件以探寻这个术语的起源及其背后的概念。
Richard Stallman 认为用 “开源” 和 “闭源” 来归类软件是一种错误的方法。Stallman 将程序分类为 <ruby> 尊重自由的 <rt> freedom-respecting </rt></ruby>(“<ruby> 自由 <rt> free </rt></ruby>” 或 “<ruby> 自由(西语) <rt> libre </rt></ruby>”)和 <ruby> 践踏自由的 <rt> freedom-trampling </rt></ruby>(“<ruby> 非自由 <rt> non-free </rt></ruby>” 或 “<ruby> 专有 <rt> proprietary </rt></ruby>”)。开源讨论通常会为了(用户)实际得到的<ruby> 优势/便利 <rt> advantages </rt></ruby>考虑去鼓励某些做法,而非作为道德层面上的约束。
Stallman 在由其本人于 1984 年发起的<ruby> 自由软件运动 <rt> The free software movement </rt></ruby>表明,不仅仅是这些 <ruby> 优势/便利 <rt> advantages </rt></ruby> 受到了威胁。计算机的用户 <ruby> 理应得到 <rt> deserve </rt></ruby> 计算机的控制权,因此拒绝被用户控制的程序即是 <ruby> 非正义 <rt> injustice </rt></ruby>,理应被<ruby> 拒绝 <rt> rejected </rt></ruby>和<ruby> 排斥 <rt> eliminated </rt></ruby>。对于用户的控制权,程序应当给予用户 [四项基本自由](https://www.gnu.org/philosophy/free-sw.en.html):
* 自由度 0:无论用户出于何种目的,必须可以按照用户意愿,自由地运行该软件。
* 自由度 1:用户可以自由地学习并修改该软件,以便按照自己的意愿进行计算。作为前提,用户必须可以访问到该软件的源代码。
* 自由度 2:用户可以自由地分发该软件的副本,以便可以帮助他人。
* 自由度 3:用户可以自由地分发该软件修改后的副本。借此,你可以让整个社区受益于你的改进。作为前提,用户必须可以访问到该软件的源代码。
### 关于 POSIX
**Seth:** POSIX 标准是由 [IEEE](https://www.ieee.org/) 发布,用于描述 “<ruby> 可移植操作系统 <rt> portable operating system </rt></ruby>” 的文档。只要开发人员编写符合此描述的程序,他们生产的便是符合 POSIX 的程序。在科技行业,我们称之为 “<ruby> 规范 <rt> specification </rt></ruby>” 或将其简写为 “spec”。就技术用语而言,这是可以理解的,但我们不禁要问是什么使操作系统 “可移植”?
**RMS:** 我认为是<ruby> 接口 <rt> interface </rt></ruby>应该(在不同系统之间)是可移植的,而非任何一种*系统*。实际上,内部构造不同的各种系统都支持部分的 POSIX 接口规范。
**Seth:** 因此,如果两个系统皆具有符合 POSIX 的程序,那么它们便可以彼此假设,从而知道如何相互 “交谈”。我了解到 “POSIX” 这个简称是你想出来的。那你是怎么想出来的呢?它是如何就被 IEEE 采纳了呢?
**RMS:** IEEE 已经完成了规范的开发,但还没为其想好简练的名称。标题类似是 “可移植操作系统接口”,虽然我已记不清确切的单词。委员会倾向于将 “IEEEIX” 作为简称。而我认为那不太好。发音有点怪 - 听起来像恐怖的尖叫,“Ayeee!” - 所以我觉得人们反而会倾向于称之为 “Unix”。
但是,由于 <ruby> <a href="http://gnu.org"> GNU 并不是 Unix </a> <rt> GNU’s Not Unix </rt></ruby>,并且它打算取代之,我不希望人们将 GNU 称为 “Unix 系统”。因此,我提出了人们可能会实际使用的简称。那个时候也没有什么灵感,我就用了一个并不是非常聪明的方式创造了这个简称:我使用了 “<ruby> 可移植操作系统 <rp> ( </rp> <rt> portable operating system </rt> <rp> ) </rp></ruby>” 的首字母缩写,并在末尾添加了 “ix” 作为简称。IEEE 也欣然接受了。
**Seth:** POSIX 缩写中的 “操作系统” 是仅涉及 Unix 和类 Unix 的系统(如 GNU)呢?还是意图包含所有操作系统?
**RMS:** 术语 “操作系统” 抽象地说,涵盖了完全不像 Unix 的系统、完全和 POSIX 规范无关的系统。但是,POSIX 规范适用于大量类 Unix 系统;也只有这样的系统才适合 POSIX 规范。
**Seth:** 你是否参与审核或更新当前版本的 POSIX 标准?
**RMS:** 现在不了。
**Seth:** GNU Autotools 工具链可以使应用程序更容易移植,至少在构建和安装时如此。所以可以认为 Autotools 是构建可移植基础设施的重要一环吗?
**RMS:** 是的,因为即使在遵循 POSIX 的系统中,也存在着诸多差异。而 Autotools 可以使程序更容易适应这些差异。顺带一提,如果有人想助力 Autotools 的开发,可以发邮件联系我。
**Seth:** 我想,当 GNU 刚刚开始让人们意识到一个非 Unix 的系统可以从专有的技术中解放出来的时候,关于自由软件如何协作方面,这其间一定存在一些空白区域吧。
**RMS:** 我不认为有任何空白或不确定性。我只是照着 BSD 的接口写而已。
**Seth:** 一些 GNU 应用程序符合 POSIX 标准,而另一些 GNU 应用程序的 GNU 特定的功能,要么不在 POSIX 规范中,要么缺少该规范要求的功能。对于 GNU 应用程序 POSIX 合规性有多重要?
**RMS:** 遵循标准对于利于用户的程度很重要。我们不将标准视为权威,而是且将其作为可能有用的指南来遵循。因此,我们谈论的是<ruby> 遵循 <rt> following </rt></ruby>标准而不是“<ruby> 遵守 <rt> complying </rt></ruby>”。可以参考 <ruby> GNU 编码标准 <rt> GNU Coding Standards </rt></ruby>中的 [非 GNU 标准](https://www.gnu.org/prep/standards/html_node/Non_002dGNU-Standards.html) 段落。
我们努力在大多数问题上与标准兼容,因为在大多数的问题上这最有利于用户。但也偶有例外。
例如,POSIX 指定某些实用程序以 512 字节为单位测量磁盘空间。我要求委员会将其改为 1K,但被拒绝了,说是有个<ruby> 官僚主义的规则 <rt> bureaucratic rule </rt></ruby>强迫选用 512。我不记得有多少人试图争辩说,用户会对这个决定感到满意的。
由于 GNU 在用户的<ruby> 自由 <rt> freedom </rt></ruby>之后的第二优先级,是用户的<ruby> 便利 <rt> convenience </rt></ruby>,我们使 GNU 程序以默认 1K 为单位按块测量磁盘空间。
然而,为了防止竞争对手利用这点给 GNU 安上 “<ruby> 不合规 <rt> noncompliant </rt></ruby>” 的骂名,我们实现了遵循 POSIX 和 ISO C 的可选模式,这种妥协着实可笑。想要遵循 POSIX,只需设置环境变量 `POSIXLY_CORRECT`,即可使程序符合 POSIX 以 512 字节为单位列出磁盘空间。如果有人知道实际使用 `POSIXLY_CORRECT` 或者 GCC 中对应的 `--pedantic` 会为某些用户提供什么实际好处的话,请务必告诉我。
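以 GNU `df` 为例,可以直观地看到上文所说的 1K 与 512 字节计量单位的差别。下面是一个小示意(表头的具体措辞可能因 coreutils 版本而异,仅供参考):

```
$ df . | head -n 1
Filesystem 1K-blocks Used Available Use% Mounted on
$ POSIXLY_CORRECT=1 df . | head -n 1
Filesystem 512-blocks Used Available Use% Mounted on
```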
**Seth:** 符合 POSIX 标准的自由软件项目是否更容易移植到其他类 Unix 系统?
**RMS:** 我认为是这样,但自上世纪 80 年代开始,我决定不再把时间浪费在将软件移植到 GNU 以外的系统上。我开始专注于推进 GNU 系统,使其不必使用任何非自由软件。至于将 GNU 程序移植到非类 GNU 系统就留给想在其他系统上运行它们的人们了。
**Seth:** POSIX 对于软件的自由很重要吗?
**RMS:** 本质上说,(遵不遵循 POSIX)其实没有任何区别。但是,POSIX 和 ISO C 的标准化确实使 GNU 系统更容易迁移,这有助于我们更快地实现从非自由软件中解放用户的目标。这个目标于上世纪 90 年代早期达成,当时Linux成为自由软件,同时也填补了 GNU 中内核的空白。
### POSIX 采纳 GNU 的创新
我还问过 Stallman 博士,是否有任何 GNU 特定的创新或惯例后来被采纳为 POSIX 标准。他无法回想起具体的例子,但友好地代我向几位开发者发了邮件。
开发者 Giacomo Catenazzi,James Youngman,Eric Blake,Arnold Robbins 和 Joshua Judson Rosen 对以前的 POSIX 迭代以及仍在进行中的 POSIX 迭代做出了回应。POSIX 是一个 “<ruby> 活的 <rt> living </rt></ruby>” 标准,因此会不断被行业专业人士更新和评审,许多从事 GNU 项目的开发人员提出了对 GNU 特性的包含。
为了回顾这些有趣的历史,接下来会罗列一些已经融入 POSIX 的流行的 GNU 特性。
#### Make
一些 GNU **Make** 的特性已经被 POSIX 的 `make` 定义所采用。相关的 [规范](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html) 提供了从现有实现中借来的特性的详细归因。
#### Diff 和 patch
[diff](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/diff.html) 和 [patch](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/patch.html) 命令都直接从这些工具的 GNU 版本中引进了 `-u` 和 `-U` 选项。
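下面用一个小演示来说明(文件名与内容均为示例):先用 `diff -u` 生成统一格式(unified)的差异,再用 `patch` 把它应用回去:

```
$ echo 'hello' > old.txt
$ echo 'hello, world' > new.txt
$ diff -u old.txt new.txt > fix.patch
$ patch old.txt fix.patch
```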
#### C 库
POSIX 采用了 GNU C 库 **glibc** 的许多特性。<ruby> 血统 <rt> Lineage </rt></ruby>一时已难以追溯,但 James Youngman 如是写道:
>
> “我非常确定 GCC 首创了许多 ISO C 的特性。例如,**\_Noreturn** 是 C11 中的新特性,但 GCC-1.35 便具有此功能(使用 `volatile` 作为声明函数的修饰符)。另外尽管我不确定,GCC-1.35 支持的可变长度数组似乎与现代 C 中的(<ruby> 柔性数组 <rt> conformant array </rt></ruby>)非常相似。”
>
>
>
Giacomo Catenazzi 援引 Open Group 的 [strftime](https://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html) 文章,并指出其归因:“这是基于某版本 GNU libc 的 `strftime()` 的特性。”
Eric Blake 指出,对于 `getline()` 和各种基于语言环境的 `*_l()` 函数,GNU 绝对是这方面的先驱。
Joshua Judson Rosen 补充道,他清楚地记得,在全然不同的操作系统的代码中奇怪地目睹了熟悉的 GNU 式的行为后,对 `getline()` 函数的采用给他留下了深刻的印象。
“等等……那不是 GNU 特有的吗?哦,显然已经不再是了。”
Rosen 向我指出了 [getline 手册页](http://man7.org/linux/man-pages/man3/getline.3.html) 中写道:
>
> `getline()` 和 `getdelim()` 最初都是 GNU 扩展。在 POSIX.1-2008 中被标准化。
>
>
>
Eric Blake 向我发送了一份其他扩展的列表,这些扩展可能会在下一个 POSIX 修订版中添加(代号为 Issue 8,大约在 2021 年前后):
* [ppoll](http://austingroupbugs.net/view.php?id=1263)
* [pthread\_cond\_clockwait et al.](http://austingroupbugs.net/view.php?id=1216)
* [posix\_spawn\_file\_actions\_addchdir](http://austingroupbugs.net/view.php?id=1208)
* [getlocalename\_1](http://austingroupbugs.net/view.php?id=1220)
* [reallocarray](http://austingroupbugs.net/view.php?id=1218)
### 关于用户空间的扩展
POSIX 不仅为开发人员定义了函数和特性,还为用户空间定义了标准行为。
#### ls
`-A` 选项会从 `ls` 命令的结果中排除 `.`(代表当前目录)和 `..`(代表上一级目录)这两个条目。它在 POSIX 2008 中被采纳。
#### find
`find` 命令是一个<ruby> 特别的 <rp> ( </rp> <rt> ad hoc </rt> <rp> ) </rp></ruby> [for 循环](https://opensource.com/article/19/6/how-write-loop-bash) 工具,也是 [<ruby> 并行 <rt> parallel </rt></ruby>](https://opensource.com/article/18/5/gnu-parallel) 处理的出入口。
一些从 GNU 引入到 POSIX 的<ruby> 便捷操作 <rt> conveniences </rt></ruby>,包括 `-path` 和 `-perm` 选项。
`-path` 选项帮你过滤与文件系统路径模式匹配的搜索结果,并且从 1996 年(根据 `findutils` 的 Git 仓库中最早的记录)GNU 版本的 `find` 便可使用此选项。James Youngman 指出 [HP-UX](https://www.hpe.com/us/en/servers/hp-ux.html) 也很早就有这个选项,所以究竟是 GNU 还是 HP-UX 做出的这一创新(抑或两者兼而有之)无法考证。
`-perm` 选项帮你按文件权限过滤搜索结果。这在 1996 年 GNU 版本的 `find` 中便已存在,随后被纳入 POSIX 标准 “IEEE Std 1003.1,2004 Edition” 中。
`xargs` 命令是 `findutils` 软件包的一部分,1996 年的时候就有一个 `-p` 选项会将 `xargs` 置于交互模式(用户将被提示是否继续),随后被纳入 POSIX 标准 “IEEE Std 1003.1, 2004 Edition” 中。
#### Awk
GNU awk(即 `/usr/bin` 目录中的 `gawk` 命令,可能也是符号链接 `awk` 的目标地址)的维护者 Arnold Robbins 说道,`gawk` 和 `mawk`(另一个 GPL 许可的 `awk` 实现)允许 `RS`(记录分隔符)是一个正则表达式,即这时 `RS` 的长度会大于 1。这一特性还不是 POSIX 的特性,但有 [迹象表明它即将会是](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html):
>
> NUL 在扩展正则表达式中产生的未定义行为允许 GNU `gawk` 程序未来可以扩展以处理二进制数据。
>
>
> 使用多字符 RS 值的未指定行为是为了未来可能的扩展,它是基于用于记录分隔符(RS)的扩展正则表达式的。目前的历史实现为采用该字符串的第一个字符而忽略其他字符。
>
>
>
这是一个重大的增强,因为 `RS` 符号定义了记录之间的分隔符。它可以是逗号、分号、短划线或者任何此类字符;但如果它是一个字符*序列*,则只有第一个字符会被使用,除非你用的是 `gawk` 或 `mawk`。想象一下,在解析一个 IP 地址文档时用省略号(连续的三个点)作为记录分隔符,结果却发现记录在每个 IP 地址内部的每个点处都被拆开了。
[mawk](https://invisible-island.net/mawk/) 首先支持这个功能,但是几年来没有维护者,留下来的火把由 `gawk` 接过。(`mawk` 已然获得了一个新的维护者,可以说是大家薪火传承地将这一特性推向共同的预期值。)
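下面的小例子可以直观地感受这一差异(输入数据仅为示意)。在 `gawk` 中,长度大于 1 的 `RS` 会被当作正则表达式,因此这里把三个点转义成字面量;而严格的 POSIX `awk` 只会取其第一个字符作为分隔符:

```
$ printf 'one...two...three' | gawk 'BEGIN { RS = "\\.\\.\\." } { print NR, $0 }'
1 one
2 two
3 three
```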
### POSIX 规范
总的来说,Giacomo Catenazzi 指出,“……因为 GNU 的实用程序使用广泛,而且许多其他的选项和行为又对标规范。在 shell 的每次更改中,Bash 都会(作为一等公民)被用作比较。” 当某些东西被纳入 POSIX 规范时,无需提及 GNU 或任何其他影响,你可以简单地认为 POSIX 规范会受到许多方面的影响,GNU 只是其中之一。
共识是 POSIX 存在的意义所在。一群技术人员共同努力为了实现共同规范,再分享给数以百计各异的开发人员,经由他们的赋能,从而实现软件的独立性,以及开发人员和用户的自由。
---
via: <https://opensource.com/article/19/7/what-posix-richard-stallman-explains>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[martin2011qi](https://github.com/martin2011qi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | What is [POSIX](https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/), and why does it matter? It's a term you've likely seen in technical writing, but it often gets lost in a sea of techno-initialisms and jargon-that-ends-in-X. I emailed Dr. [Richard Stallman](https://stallman.org/) (better known in hacker circles as RMS) to find out more about the term's origin and the concept behind it.
Richard Stallman says "open" and "closed" are the wrong way to classify software. Stallman classifies programs as *freedom-respecting* ("free" or "libre") and *freedom-trampling* ("non-free" or "proprietary"). Open source discourse typically encourages certain practices for the sake of practical advantages, not as a moral imperative.
The free software movement, which Stallman launched in 1984, says more than *advantages* are at stake. Users of computers *deserve* control of their computing, so programs denying users control are an injustice to be rejected and eliminated. For users to have control, the program must give them the [four essential freedoms](https://www.gnu.org/philosophy/free-sw.en.html):
- The freedom to run the program as you wish, for any purpose (freedom 0).
- The freedom to study how the program works and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help others (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
## About POSIX
**Seth:** The POSIX standard is a document released by the [IEEE](https://www.ieee.org/) that describes a "portable operating system." As long as developers write programs to match this description, they have produced a POSIX-compliant program. In the tech industry, we call this a "specification" or "spec" for short. That's mostly understandable, as far as tech jargon goes, but what makes an operating system "portable"?
**RMS:** I think it was the *interface* that was supposed to be portable (among systems), rather than any one *system*. Indeed, various systems that are different inside do support parts of the POSIX interface spec.
**Seth:** So if two systems feature POSIX-compliant programs, then they can make assumptions about one another, enabling them to know how to "talk" to one another. I read that it was you who came up with the name "POSIX." How did you come up with the term, and how was it adopted by the IEEE?
**RMS:** The IEEE had finished developing the spec but had no concise name for it. The title said something like "portable operating system interface," though I don't remember the exact words. The committee put on "IEEEIX" as the concise name. I did not think that was a good choice. It is ugly to pronounce—it would sound like a scream of terror, "Ayeee!"—so I expected people would instead call the spec "Unix."
Since [GNU's Not Unix](http://gnu.org), and it was intended to replace Unix, I did not want people to call GNU a "Unix system." I, therefore, proposed a concise name that people might actually use. Having no particular inspiration, I generated a name the unclever way: I took the initials of "portable operating system" and added "ix." The IEEE adopted this eagerly.
**Seth:** Does "operating system" in the POSIX acronym refer only to Unix and Unix-like systems such as GNU, or is the intent to encompass all operating systems?
**RMS:** The term "operating system," in the abstract, covers systems that are not at all like Unix, not at all close to the POSIX spec. However, the spec is meant for systems that are a lot like Unix; only such systems will fit the POSIX spec.
**Seth:** Are you involved in reviewing or updating the current POSIX standards?
**RMS:** Not now.
**Seth:** The GNU Autotools toolchain does a lot to make applications easier to port, at least in terms of when it comes time to build and install. Is Autotools an important part of building a portable infrastructure?
**RMS:** Yes, because even among systems that follow POSIX, there are lots of little differences. The Autotools make it easier for a program to adapt to those differences. By the way, if anyone wants to help in the development of the Autotools, please email me.
**Seth:** I imagine, way back when GNU was just starting to make people realize that a (not)Unix liberated from proprietary technology was possible, there must have been a void of clarity about how free software could possibly work together.
**RMS:** I don't think there was any void or any uncertainty. I was simply going to follow the interfaces of BSD.
**Seth:** Some GNU applications are POSIX-compliant, while others have GNU-specific features either not in the POSIX spec or lack features required by the spec. How important is POSIX compliance to GNU applications?
**RMS:** Following a standard is important to the extent it serves users. We do not treat a standard as an authority, but rather as a guide that may be useful to follow. Thus, we talk about following standards rather than "complying" with them. See the section [Non-GNU Standards](https://www.gnu.org/prep/standards/html_node/Non_002dGNU-Standards.html) in the GNU Coding Standards.
We strive to be compatible with standards on most issues because, on most issues, that serves users best. But there are occasional exceptions.
For instance, POSIX specifies that some utilities measure disk space in units of 512 bytes. I asked the committee to change this to 1K, but it refused, saying that a bureaucratic rule compelled the choice of 512. I don't recall much attempt to argue that users would be pleased with that decision.
Since GNU's second priority, after users' freedom, is users' convenience, we made GNU programs measure disk space in blocks of 1K by default.
However, to defend against potential attacks by competitors who might claim that this deviation made GNU "noncompliant," we implemented optional modes that follow POSIX and ISO C to ridiculous extremes. For POSIX, setting the environment variable POSIXLY_CORRECT makes programs specified by POSIX list disk space in terms of 512 bytes. If anyone knows of a case of an actual use of POSIXLY_CORRECT or its GCC counterpart **--pedantic** that provides an actual benefit to some user, please tell me about it.
**Seth:** Are POSIX-compliant free software projects easier to port to other Unix-like systems?
**RMS:** I suppose so, but I decided in the 1980s not to spend my time on porting software to systems other than GNU. I focused on advancing the GNU system towards making it unnecessary to use any non-free software and left the porting of GNU programs to non-GNU-like systems to whoever wanted to run them on those systems.
**Seth:** Is POSIX important to software freedom?
**RMS:** At the fundamental level, it makes no difference. However, standardization by POSIX and ISO C surely made the GNU system easier to migrate to, and that helped us advance more quickly towards the goal of liberating users from non-free software. That was achieved in the early 1990s, when Linux was made free software and then filled the kernel-shaped gap in GNU.
## GNU innovations adopted by POSIX
I also asked Dr. Stallman whether any GNU-specific innovations or conventions had later become adopted as a POSIX standard. He couldn't recall specific examples, but kindly emailed several developers on my behalf.
Developers Giacomo Catenazzi, James Youngman, Eric Blake, Arnold Robbins, and Joshua Judson Rosen all responded with memories of previous POSIX iterations as well as ones still in progress. POSIX is a "living" standard, so it's being updated and reviewed by industry professionals continuously, and many developers who work on GNU projects propose the inclusion of GNU features.
For the sake of historical interest, here are some popular GNU features that have made their way into POSIX.
### Make
Some GNU **Make** features have been adopted into POSIX's definition of **make**. The relevant [specification](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html) provides detailed attribution for features borrowed from existing implementations.
### Diff and patch
Both the **diff** and **[patch](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/patch.html)** commands have gotten **-u** and **-U** options added directly from GNU versions of those tools.
### C library
Many features of the GNU C library, **glibc**, have been adopted in POSIX. Lineage is sometimes difficult to trace, but James Youngman wrote:
"I'm pretty sure there are a number of features of ISO C which were pioneered by GCC. For example,
_Noreturnis new in C11, but GCC-1.35 had this feature (one used thevolatilemodifier on a function declaration). Also—though I'm not certain about this—GCC-1.35 supported Arrays of Variable Length which seem to be very similar to modern C's conformant arrays."
Giacomo Catenazzi cites the Open Group's [strftime article](https://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html), pointing to this attribution: "This is based on a feature in some versions of GNU libc's **strftime()**."
Eric Blake notes that the **getline()** and various ***_l()** locale-based functions were definitely pioneered by GNU.
Joshua Judson Rosen adds to this, saying he clearly remembers being impressed by the adoption of **getline** functions after witnessing strangely familiar GNU-like behavior from code meant for a different OS entirely.
"Wait…that's GNU-specific… isn't it? Oh—not anymore, apparently."
Rosen pointed me to the [getline man page](http://man7.org/linux/man-pages/man3/getline.3.html), which says:

Both **getline()** and **getdelim()** were originally GNU extensions. They were standardized in POSIX.1-2008.
Eric Blake sent me a list of other extensions that may be added in the next POSIX revision (codenamed Issue 8, currently due around 2021).
## Userspace extensions
POSIX doesn't just define functions and features for developers. It defines standard behavior for userspace as well.
### ls
The **-A** option is used to exclude the **.** (representing the current location) and **..** (representing the opportunity to go back one directory) notation from the results of an **ls** command. This was adopted for POSIX 2008.
### find
The **find** command is a useful tool for ad hoc [for loops](https://opensource.com/article/19/6/how-write-loop-bash) and as a gateway into [parallel](https://opensource.com/article/18/5/gnu-parallel) processing.
A few conveniences made their way from GNU to POSIX, including the **-path** and **-perm** options.
The **-path** option lets you filter search results matching a filesystem path pattern and was available in GNU's version of **find** since before 1996 (the earliest record in **findutil**'s Git repository). James Youngman notes that [HP-UX](https://www.hpe.com/us/en/servers/hp-ux.html) also had this option very early on, so whether it's a GNU or an HP-UX innovation (or both) is uncertain.
The **-perm** option lets you filter search results by file permission. This was in GNU's version of **find** by 1996 and arrived later in the POSIX standard "IEEE Std 1003.1, 2004 Edition."
The **xargs** command, part of the **findutils** package, had a **-p** option to put **xargs** into an interactive mode (the user is prompted whether to continue or not) by 1996, and it arrived in POSIX in "IEEE Std 1003.1, 2004 Edition."
### Awk
Arnold Robbins, the maintainer of GNU **awk** (the **gawk** command in your **/usr/bin** directory, probably the destination of the symlink **awk**), says that **gawk** and **mawk** (another GPL **awk** implementation) allow **RS** to be a regular expression, which is the case when **RS** has a length greater than 1. This isn't a feature in POSIX yet, but there's an [indication that it will be](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html):
The undefined behavior resulting from NULs in extended regular expressions allows future extensions for the GNU gawk program to process binary data.
The unspecified behavior from using multi-character RS values is to allow possible future extensions based on extended regular expressions used for record separators. Historical implementations take the first character of the string and ignore the others.
This is a significant enhancement because the **RS** notation defines a separator between records. It might be a comma or a semicolon or a dash or any such character, but if it's a *sequence* of characters, then only the first character is used unless you're working in **gawk** or **mawk**. Imagine parsing a document of IP addresses with records separated by an ellipsis (three dots in a row), only to get back results parsed at every dot in every IP address.
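The difference is easy to demonstrate (a sketch assuming **gawk** is installed; the data mimics the IP-address scenario above):

```
$ printf '10.0.0.1...10.0.0.2...10.0.0.3' | gawk 'BEGIN { RS = "\\.\\.\\." } { print NR ": " $0 }'
1: 10.0.0.1
2: 10.0.0.2
3: 10.0.0.3
```

A strictly POSIX **awk** would take only the first character of **RS** and split the input at every single dot instead.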
**Mawk** supported the feature first, but it was without a maintainer for several years, leaving **gawk** to carry the torch. (**Mawk** has since gained a new maintainer, so arguably credit can be shared for pushing this feature into the collective expectation.)
## The POSIX spec
In general, Giacomo Catenazzi points out, "…because GNU utilities were used so much, a lot of other options and behaviors were aligned. At every change in shell, Bash is used as comparison (as a first-class citizen)." There's no requirement to cite GNU or any other influence when something is rolled into the POSIX spec, and it can safely be assumed that influences to POSIX come from many sources, with GNU being only one of many.
The significance of POSIX is consensus. A group of technologists working together toward common specifications to be shared by hundreds of uncommon developers lends strength to the greater movement toward software independence and developer and user freedom.
|
11,224 | 使用 Postfix 从 Fedora 系统中获取电子邮件 | https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/ | 2019-08-14T10:35:56 | [
"Postfix",
"邮件"
] | https://linux.cn/article-11224-1.html | 
交流是非常重要的。你的电脑可能正试图告诉你一些重要的事情。但是,如果你没有正确配置邮件传输代理([MTA](https://en.wikipedia.org/wiki/Message_transfer_agent)),那么你可能不会收到通知。Postfix 是一个[易于配置且以强大的安全记录而闻名](https://en.wikipedia.org/wiki/Postfix_(software))的 MTA。遵循以下步骤,以确保从本地服务发送的电子邮件通知将通过 Postfix MTA 路由到你的互联网电子邮件账户中。
### 安装软件包
使用 `dnf` 来安装所需的软件包([你应该配置了 sudo,对吧?](https://fedoramagazine.org/howto-use-sudo/)):
```
$ sudo -i
# dnf install postfix mailx
```
如果以前配置了不同的 MTA,那么你可能需要将 Postfix 设置为系统默认。使用 `alternatives` 命令设置系统默认 MTA:
```
$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.
Selection Command
*+ 1 /usr/sbin/sendmail.sendmail
2 /usr/sbin/sendmail.postfix
Enter to keep the current selection[+], or type selection number: 2
```
### 创建一个 password\_maps 文件
你需要创建一个 Postfix 查询表条目,其中包含你要用于发送电子邮件账户的地址和密码:
```
# [email protected]
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c
```
如果你使用的是 Gmail 账户,那么你需要为 Postfix 配置一个“应用程序密码”而不是使用你的 Gmail 密码。有关配置应用程序密码的说明,参阅“[使用应用程序密码登录](https://support.google.com/accounts/answer/185833)”。
接下来,你必须对 Postfix 查询表运行 `postmap` 命令,以创建或更新 Postfix 实际使用的文件的散列版本:
```
# postmap /etc/postfix/password_maps
```
散列后的版本将具有相同的文件名,但后缀为 `.db`。
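可以顺手验证一下;`postmap -q` 会按键查询散列表,并打印出对应的“邮箱:密码”值(注意查询键要与后面 `main.cf` 中的 `relayhost` 完全一致,包括方括号):

```
# ls /etc/postfix/password_maps*
/etc/postfix/password_maps /etc/postfix/password_maps.db
# postmap -q "[smtp.gmail.com]:587" /etc/postfix/password_maps
```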
### 更新 main.cf 文件
更新 Postfix 的 `main.cf` 配置文件,以引用刚刚创建的 Postfix 查询表。编辑该文件并添加以下行:
```
relayhost = [smtp.gmail.com]:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps
```
这里假设你使用 Gmail 作为 `relayhost` 设置,但是你可以用正确的主机名和端口替换系统应该将邮件发送到的邮件主机。注意主机名两侧的方括号:它表示这是一个具体的 SMTP 服务器地址,Postfix 不会再对它做 MX 记录查询;同时它必须与 `password_maps` 文件中的键完全一致(包括方括号)。
有关上述配置选项的最新详细信息,参考 man 帮助:
```
$ man postconf.5
```
### 启用、启动和测试 Postfix
更新 `main.cf` 文件后,启用并启动 Postfix 服务:
```
# systemctl enable --now postfix.service
```
然后,你可以使用 `exit` 命令或 `Ctrl+D` 以 root 身份退出 `sudo` 会话。你现在应该能够使用 `mail` 命令测试你的配置:
```
$ echo 'It worked!' | mail -s "Test: $(date)" [email protected]
```
### 更新服务
如果你安装了像 [logwatch](https://src.fedoraproject.org/rpms/logwatch)、[mdadm](https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/)、[fail2ban](https://fedoraproject.org/wiki/Fail2ban_with_FirewallD)、[apcupsd](https://src.fedoraproject.org/rpms/apcupsd) 或 [certwatch](https://www.linux.com/learn/automated-certificate-expiration-checks-centos) 这样的服务,你现在可以更新它们的配置,以便它们的电子邮件通知转到你的 Internet 电子邮件地址。
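以 logwatch 为例,只需把收件人指向你的互联网邮箱即可。下面是一个最小示意(`MailTo` 是 logwatch 配置中实际使用的键名,此处的邮箱地址是假设的示例;其他服务的配置项请查阅各自的文档):

```
# 写入 /etc/logwatch/conf/logwatch.conf 以覆盖默认值
MailTo = yourname@gmail.com
```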
另外,你可能希望将发送到本地系统 root 账户的所有电子邮件都转到互联网电子邮件地址中。将以下行添加到系统的 `/etc/aliases` 文件中(你需要使用 `sudo` 编辑此文件,或首先切换到 `root` 账户):
```
root: [email protected]
```
现在运行此命令重新读取别名:
```
# newaliases
```
* 提示: 如果你使用的是 Gmail,那么你可以在用户名和 `@` 符号之间[添加字母数字标记](https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-more-from-your.html),以便更轻松地识别和过滤从计算机收到的电子邮件,下面给出一个具体示例。
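例如,假设你的 Gmail 用户名是 `yourname`,这台机器叫 `server01`(两者均为假设的示例),那么 `/etc/aliases` 中可以这样写:

```
root: yourname+server01@gmail.com
```

Gmail 仍会把邮件投递到 `yourname@gmail.com`,而 `+server01` 标记可以用来识别和过滤来自这台机器的邮件。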
### 常用命令
**查看邮件队列:**
```
$ mailq
```
**清除队列中的所有电子邮件:**
```
# postsuper -d ALL
```
**过滤设置,以获得感兴趣的值:**
```
$ postconf | grep "^relayhost\|^smtp_"
```
**查看 `postfix/smtp` 日志:**
```
$ journalctl --no-pager -t postfix/smtp
```
**进行配置更改后重新加载 postfix:**
```
$ systemctl reload postfix
```
---
via: <https://fedoramagazine.org/use-postfix-to-get-email-from-your-fedora-system/>
作者:[Gregory Bartholomew](https://fedoramagazine.org/author/glb/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent ([MTA](https://en.wikipedia.org/wiki/Message_transfer_agent)) isn’t properly configured, you might not be getting the notifications. Postfix is a MTA [that’s easy to configure and known for a strong security record](https://en.wikipedia.org/wiki/Postfix_(software)). Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.
## Install packages
Use *dnf* to install the required packages ([you configured sudo, right?](https://fedoramagazine.org/howto-use-sudo/)):
```
$ sudo -i
# dnf install postfix mailx
```
If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the *alternatives* command to set your system default MTA:
```
$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.

  Selection    Command
*+ 1           /usr/sbin/sendmail.sendmail
   2           /usr/sbin/sendmail.postfix

Enter to keep the current selection[+], or type selection number: 2
```
## Create a *password_maps* file
You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:
```
# [email protected]
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c
```
If you are using a Gmail account, you’ll need to configure an “app password” for Postfix, rather than using your gmail password. See “[Sign in using App Passwords](https://support.google.com/accounts/answer/185833)” for instructions on configuring an app password.
Next, you must run the *postmap* command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:
# postmap /etc/postfix/password_maps
The hashed version will have the same file name but it will be suffixed with *.db*.
## Update the *main.cf* file
Update Postfix’s *main.cf* configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.
```
relayhost = [smtp.gmail.com]:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps
```
The example assumes you’re using Gmail for the *relayhost* setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.
For the most up-to-date details about the above configuration options, see the man page:
$ man postconf.5
## Enable, start, and test Postfix
After you have updated the main.cf file, enable and start the Postfix service:
# systemctl enable --now postfix.service
You can then exit your *sudo* session as root using the *exit* command or **Ctrl+D**. You should now be able to test your configuration with the *mail* command:
$ echo 'It worked!' | mail -s "Test: $(date)" [email protected]
## Update services
If you have services like [logwatch](https://src.fedoraproject.org/rpms/logwatch), [mdadm](https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/), [fail2ban](https://fedoraproject.org/wiki/Fail2ban_with_FirewallD), [apcupsd](https://src.fedoraproject.org/rpms/apcupsd) or [certwatch](https://www.linux.com/learn/automated-certificate-expiration-checks-centos) installed, you can now update their configurations so that their email notifications will go to your internet email address.
Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the */etc/aliases* file on your system (you’ll need to use *sudo* to edit this file, or switch to the *root* account first):
root: [email protected]
Now run this command to re-read the aliases:
# newaliases
- TIP: If you are using Gmail, you can [add an alpha-numeric mark](https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-more-from-your.html) between your username and the **@** symbol as demonstrated above to make it easier to identify and filter the email that you will receive from your computer(s).
## Troubleshooting
**View the mail queue:**
$ mailq
**Clear all email from the queues:**
# postsuper -d ALL
**Filter the configuration settings for interesting values:**
$ postconf | grep "^relayhost\|^smtp_"
**View the postfix/smtp logs:**
$ journalctl --no-pager -t postfix/smtp
**Reload postfix after making configuration changes:**
$ systemctl reload postfix
*Photo by **Sharon McCutcheon** on Unsplash*.
## Caaaaarrrrlll
sSMTP is an easier alternative. https://wiki.archlinux.org/index.php/SSMTP
## Paul W. Frields
@Carl: When you’re talking about email, though, security is a real concern. It’s especially important if the system you’re using could trigger blacklisting or other side effects. Postfix has a good security stance and record (as well as longevity).
## Daniel
If security is the goal then OpenSMTPD is the go-to option.
## Dave Kimble
Configuring postfix is NOT easy. The only people that should attempt it are experts who know it all.
## Paul W. Frields
It’s not too hard to set up the simple configurations like this one. As a proof point, Greg figured it out for this article! You can do very complex things too, of course. Saying no one should do it sounds a lot like “no one should do Linux.” 😉
## Dokter
Are there ways to do this? Not using postfix.
Right now I get a message via Pushover that triggers a script in sshrc when I ssh into my box.
Beyond that I’ve failed to find a good solution, or optional solution that passes system messages via other means, such as the mentioned solution, triggering a script that might send the information via an API.
The solution above wasn’t easy to find might I add.
For system messages Pushover might not be well suited, but I’d consider using Mailgun and their API.
## Gregory Bartholomew
Hi Dokter:
Sorry, I haven’t really looked into alternatives beyond sendmail/postfix. The only alternative that comes to mind would be to use a sms gateway. I’ve never tried it though, so I really can’t vouch for how reliable it is.
## Guus Bonnema
When running the commands you suggested, I got a warning from postmap:
postmap: warning: /etc/postfix/password_maps, line 2: expected format: key whitespace value
So I changed the layout to servername space emailname followed by the same line for password with the “:” replaced by a space. Of course this did not work (I should have used a keyword-value stuff as the message indicated). So when I used exactly what you said it worked.
What is it with this message? Why do I get it? Why do I use a different layout (with [] and “:”) to get it working?
## Paul W. Frields
This was a problem in the initial version but should be fixed now. Something got goofed up in the edits, because Greg’s original version was correct.
## Gregory Bartholomew
Hi Guus:
Glad you figured it out and sorry that the directions weren’t exactly correct initially.
The “key” in the password_maps file should match the value for “relayhost” in main.cf exactly (brackets and all).
If you are still seeing warnings about the formatting of the password_maps file, you might want to open the file with a text editor and delete any extra lines that might be left over from previous attempts.
The brackets around the relayhost address indicate that the address is for an SMTP server. Without the brackets the address is taken to be a domain name against which a MX record lookup should be done to find the address of the SMTP server. I think Google has patched their server in such a way that it might work without the brackets, but the MX mechanism has some limitations and you should probably use the brackets to get the best/nearest SMTP server. Google uses DNS to load balance their SMTP servers. You might be able to see it in action by running “nslookup smtp.gmail.com” several times consecutively.
As for *why* the syntax is what it is, I doubt there is a really good reason for it. A separate setting like “do_mx_lookup = false” would certainly be easier to understand.

## GiP
I don’t understand…
I wanted to be able to use mail from my PC to send the results or errors from cron jobs and I did two things:
install postfix
enable postfix
And it just works! I can use a
mail -s “Cron job Log” myaddress <job.log
from the script.
Is all the other stuff really necessary?
G
## Gregory Bartholomew
Hi GiP:
That sort of auto-configuration is possible but, as I understand it, a couple of things are required that probably aren’t available in most people’s environment.
In order for MTA auto-configuration to work, you need:
1. A network administrator (or possibly a spambot) to configure option 69 on your DHCP server (or home router).
2. An open mail relay on your network (an SMTP server that requires neither authentication nor encryption).
If you are lucky enough that both the above conditions are met, then just turning the postfix service on will work. Unfortunately, I don’t think that is the case for most people, so they will have to explicitly configure Postfix to route their email through a mail server that they trust.
## GiP
Hi,
Thanks for the answer!
I see, but I don’t have either… On the other hand, as I said, I don’t want to receive mail, just send, and for this it seems that postfix can act as SMTP and (at least) Gmail and my own provider accept the messages without problems.
Well, Gmail has in the headers:
” best guess record for domain of gip@ designates as permitted sender)”.
But it just works….
Thanks again,
GiP
## GiP
Hi,
my reply was somehow mangled by the system…
The GM header should have been ” best guess record for domain of gip@ MYHOST designates MYIPADDRESS as permitted sender)” |
11,225 | DF-SHOW:一个基于老式 DOS 应用的终端文件管理器 | https://www.ostechnix.com/df-show-a-terminal-file-manager-based-on-an-old-dos-application/ | 2019-08-14T10:42:01 | [
"文件管理器"
] | https://linux.cn/article-11225-1.html | 
如果你曾经使用过老牌的 MS-DOS,你可能已经使用或听说过 DF-EDIT。DF-EDIT,意即 **D**irectory **F**ile **Edit**,它是一个鲜为人知的 DOS 文件管理器,最初由 Larry Kroeker 为 MS-DOS 和 PC-DOS 系统而编写。它用于在 MS-DOS 和 PC-DOS 系统中显示给定目录或文件的内容。今天,我偶然发现了一个名为 DF-SHOW 的类似实用程序(**D**irectory **F**ile **Show**),这是一个类 Unix 操作系统的终端文件管理器。它是鲜为人知的 DF-EDIT 文件管理器的 Unix 重写版本,其基于 1986 年发布的 DF-EDIT 2.3d。DF-SHOW 完全是自由开源的,并在 GPLv3 下发布。
DF-SHOW 可以:
* 列出目录的内容,
* 查看文件,
* 使用你的默认文件编辑器编辑文件,
* 将文件复制到不同位置,
* 重命名文件,
* 删除文件,
* 在 DF-SHOW 界面中创建新目录,
* 更新文件权限,所有者和组,
* 搜索与搜索词匹配的文件,
* 启动可执行文件。
### DF-SHOW 用法
DF-SHOW 实际上是两个程序的结合,名为 `show` 和 `sf`。
#### Show 命令
`show` 程序(类似于 `ls` 命令)用于显示目录的内容、创建新目录、重命名和删除文件/文件夹、更新权限、搜索文件等。
要查看目录中的内容列表,请使用以下命令:
```
$ show <directory path>
```
示例:
```
$ show dfshow
```
这里,`dfshow` 是一个目录。如果在未指定目录路径的情况下调用 `show` 命令,它将显示当前目录的内容。
这是 DF-SHOW 默认界面的样子。

如你所见,DF-SHOW 的界面不言自明。
在顶部栏上,你会看到可用的选项列表,例如复制、删除、编辑、修改等。
完整的可用选项列表如下:
* `C`opy(复制)
* `D`elete(删除)
* `E`dit(编辑)
* `H`idden(隐藏)
* `M`odify(修改)
* `Q`uit(退出)
* `R`ename(重命名)
* `S`how(显示)
* h`U`nt(文件内搜索)
* e`X`ec(执行)
* `R`un command(运行命令)
* `E`dit file(编辑文件)
* `H`elp(帮助)
* `M`ake dir(创建目录)
* `S`how dir(显示目录)
在每个选项中,有一个字母以大写粗体标记。只需按下该字母即可执行相应的操作。例如,要重命名文件,只需按 `R` 并键入新名称,然后按回车键重命名所选项目。

要显示所有选项或取消操作,只需按 `ESC` 键即可。
此外,你将在 DF-SHOW 界面的底部看到一堆功能键,以浏览目录的内容。
* `UP` / `DOWN` 箭头或 `F1` / `F2` - 上下移动(一次一行),
* `PgUp` / `PgDn` - 一次移动一页,
* `F3` / `F4` - 立即转到列表的顶部和底部,
* `F5` - 刷新,
* `F6` - 标记/取消标记文件(标记的文件将在它们前面用 `*` 表示),
* `F7` / `F8` - 一次性标记/取消标记所有文件,
* `F9` - 按以下顺序对列表排序 - 日期和时间、名称、大小。
按 `h` 了解有关 `show` 命令及其选项的更多详细信息。
要退出 DF-SHOW,只需按 `q` 即可。
#### SF 命令
`sf` (显示文件)用于显示文件的内容。
```
$ sf <file>
```

按 `h` 了解更多 `sf` 命令及其选项。要退出,请按 `q`。
想试试看?很好,让我们继续在 Linux 系统上安装 DF-SHOW,如下所述。
### 安装 DF-SHOW
DF-SHOW 在 [AUR](https://aur.archlinux.org/packages/dfshow/) 中可用,因此你可以使用 AUR 程序(如 [yay](https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/))在任何基于 Arch 的系统上安装它。
```
$ yay -S dfshow
```
在 Ubuntu 及其衍生版上:
```
$ sudo add-apt-repository ppa:ian-hawdon/dfshow
$ sudo apt-get update
$ sudo apt-get install dfshow
```
在其他 Linux 发行版上,你可以从源代码编译和构建它,如下所示。
```
$ git clone https://github.com/roberthawdon/dfshow
$ cd dfshow
$ ./bootstrap
$ ./configure
$ make
$ sudo make install
```
DF-SHOW 的作者目前只重写了 DF-EDIT 实用程序中的部分程序。由于源代码在 GitHub 上可以自由获取,因此你可以添加更多功能、改进代码,并提交或修复错误(如果有的话)。它仍处于 beta 阶段,但功能齐全。
你试过了吗?如果试过,觉得如何?请在下面的评论部分告诉我们你的体验。
不管如何,希望这有用。还有更多好东西。敬请关注!
---
via: <https://www.ostechnix.com/df-show-a-terminal-file-manager-based-on-an-old-dos-application/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,227 | 探索 Linux 内核:Kconfig/kbuild 的秘密 | https://opensource.com/article/18/10/kbuild-and-kconfig | 2019-08-15T09:40:04 | [
"内核"
] | https://linux.cn/article-11227-1.html |
>
> 深入理解 Linux 配置/构建系统是如何工作的。
>
>
>

自从 Linux 内核代码迁移到 Git 以来,Linux 内核配置/构建系统(也称为 Kconfig/kbuild)已存在很长时间了。然而,作为支持基础设施,它很少成为人们关注的焦点;甚至在日常工作中使用它的内核开发人员也从未真正思考过它。
为了探索如何编译 Linux 内核,本文将深入介绍 Kconfig/kbuild 内部的过程,解释如何生成 `.config` 文件和 `vmlinux`/`bzImage` 文件,并介绍一个巧妙的依赖性跟踪技巧。
### Kconfig
构建内核的第一步始终是配置。Kconfig 有助于使 Linux 内核高度模块化和可定制。Kconfig 为用户提供了许多配置目标:
| 配置目标 | 解释 |
| --- | --- |
| `config` | 利用命令行程序更新当前配置 |
| `nconfig` | 利用基于 ncurses 菜单的程序更新当前配置 |
| `menuconfig` | 利用基于菜单的程序更新当前配置 |
| `xconfig` | 利用基于 Qt 的前端程序更新当前配置 |
| `gconfig` | 利用基于 GTK+ 的前端程序更新当前配置 |
| `oldconfig` | 基于提供的 `.config` 更新当前配置 |
| `localmodconfig` | 更新当前配置,禁用没有载入的模块 |
| `localyesconfig` | 更新当前配置,转换本地模块到核心 |
| `defconfig` | 带有来自架构提供的 `defconfig` 默认值的新配置 |
| `savedefconfig` | 保存当前配置为 `./defconfig`(最小配置) |
| `allnoconfig` | 所有选项回答为 `no` 的新配置 |
| `allyesconfig` | 所有选项回答为 `yes` 的新配置 |
| `allmodconfig` | 尽可能选择所有模块的新配置 |
| `alldefconfig` | 所有符号(选项)设置为默认值的新配置 |
| `randconfig` | 所有选项随机选择的新配置 |
| `listnewconfig` | 列出新选项 |
| `olddefconfig` | 同 `oldconfig` 一样,但设置新符号(选项)为其默认值而无须提问 |
| `kvmconfig` | 启用支持 KVM 访客内核模块的附加选项 |
| `xenconfig` | 启用支持 xen 的 dom0 和 访客内核模块的附加选项 |
| `tinyconfig` | 配置尽可能小的内核 |
我认为 `menuconfig` 是这些目标中最受欢迎的。这些目标由不同的<ruby> 主程序 <rt> host program </rt></ruby>处理,这些程序由内核提供并在内核构建期间构建。一些目标有 GUI(为了方便用户),而大多数没有。与 Kconfig 相关的工具和源代码主要位于内核源代码中的 `scripts/kconfig/` 下。从 `scripts/kconfig/Makefile` 中可以看到,这里有几个主程序,包括 `conf`、`mconf` 和 `nconf`。除了 `conf` 之外,每个都负责一个基于 GUI 的配置目标,因此,`conf` 处理大多数目标。
从逻辑上讲,Kconfig 的基础结构有两部分:一部分实现一种[新语言](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt)来定义配置项(参见内核源代码下的 Kconfig 文件),另一部分解析 Kconfig 语言并处理配置操作。
大多数配置目标具有大致相同的内部过程(如下所示):

请注意,所有配置项都具有默认值。
第一步读取源代码根目录下的 Kconfig 文件,构建初始配置数据库;然后它根据如下优先级读取现有配置文件来更新初始数据库:
1. `.config`
2. `/lib/modules/$(shell,uname -r)/.config`
3. `/etc/kernel-config`
4. `/boot/config-$(shell,uname -r)`
5. `ARCH_DEFCONFIG`
6. `arch/$(ARCH)/defconfig`
如果你通过 `menuconfig` 进行基于 GUI 的配置或通过 `oldconfig` 进行基于命令行的配置,则根据你的自定义更新数据库。最后,该配置数据库被转储到 `.config` 文件中。
但 `.config` 文件不是内核构建的最终素材;这就是 `syncconfig` 目标存在的原因。`syncconfig` 曾经是一个名为 `silentoldconfig` 的配置目标,但它没有做到其旧名称所说的工作,所以它被重命名。此外,因为它是供内部使用的(不适用于用户),所以它已从上述列表中删除。
以下是 `syncconfig` 的作用:

`syncconfig` 将 `.config` 作为输入并输出许多其他文件,这些文件分为三类:
* `auto.conf` & `tristate.conf` 用于 makefile 文本处理。例如,你可以在组件的 makefile 中看到这样的语句:`obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o`。
* `autoconf.h` 用于 C 语言的源文件。
* `include/config/` 下空的头文件用于 kbuild 期间的配置依赖性跟踪。下面会解释。
配置完成后,我们将知道哪些文件和代码片段未编译。
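以 `CONFIG_SMP` 为例,上述前两类输出文件可以这样查看(示例假设当前位于内核源码树根目录,且 `.config` 中该选项为 `y`):

```
$ grep '^CONFIG_SMP=' include/config/auto.conf
CONFIG_SMP=y
$ grep 'CONFIG_SMP ' include/generated/autoconf.h
#define CONFIG_SMP 1
```

可以看到,同一个配置项被分别转换成了 makefile 语法和 C 预处理器宏。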
### kbuild
组件式构建,称为*递归 make*,是 GNU `make` 管理大型项目的常用方法。kbuild 是递归 make 的一个很好的例子。通过将源文件划分为不同的模块/组件,每个组件都由其自己的 makefile 管理。当你开始构建时,顶级 makefile 以正确的顺序调用每个组件的 makefile、构建组件,并将它们收集到最终的执行程序中。
kbuild 指向到不同类型的 makefile:
* `Makefile` 位于源代码根目录的顶级 makefile。
* `.config` 是内核配置文件。
* `arch/$(ARCH)/Makefile` 是架构的 makefile,它用于补充顶级 makefile。
* `scripts/Makefile.*` 描述所有的 kbuild makefile 的通用规则。
* 最后,大约有 500 个 kbuild makefile。
顶级 makefile 会将架构 makefile 包含进去,读取 `.config` 文件,下到子目录,在 `scripts/Makefile.*` 中定义的例程的帮助下,在每个组件的 makefile 上调用 `make`,构建每个中间对象,并将所有的中间对象链接为 `vmlinux`。内核文档 [Documentation/kbuild/makefiles.txt](https://www.mjmwired.net/kernel/Documentation/kbuild/makefiles.txt) 描述了这些 makefile 的方方面面。
作为一个例子,让我们看看如何在 x86-64 上生成 `vmlinux`:

(此插图基于 Richard Y. Steven 的[博客](https://blog.csdn.net/richardysteven/article/details/52502734)绘制,经过更新,并在获得作者许可后使用。)
进入 `vmlinux` 的所有 `.o` 文件首先进入它们自己的 `built-in.a`,它通过变量`KBUILD_VMLINUX_INIT`、`KBUILD_VMLINUX_MAIN`、`KBUILD_VMLINUX_LIBS` 表示,然后被收集到 `vmlinux` 文件中。
在下面这个简化的 makefile 代码的帮助下,了解如何在 Linux 内核中实现递归 make:
```
# In top Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps)
+$(call if_changed,link-vmlinux)
# Variable assignments
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)
export KBUILD_VMLINUX_INIT := $(head-y) $(init-y)
export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y)
export KBUILD_VMLINUX_LIBS := $(libs-y1)
export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds
init-y := init/
drivers-y := drivers/ sound/ firmware/
net-y := net/
libs-y := lib/
core-y := usr/
virt-y := virt/
# Transform to corresponding built-in.a
init-y := $(patsubst %/, %/built-in.a, $(init-y))
core-y := $(patsubst %/, %/built-in.a, $(core-y))
drivers-y := $(patsubst %/, %/built-in.a, $(drivers-y))
net-y := $(patsubst %/, %/built-in.a, $(net-y))
libs-y1 := $(patsubst %/, %/lib.a, $(libs-y))
libs-y2 := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y)))
virt-y := $(patsubst %/, %/built-in.a, $(virt-y))
# Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs
# are phony targets, so every time comes to this rule, the recipe of vmlinux-dirs
# will be executed. Refer "4.6 Phony Targets" of `info make`
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
# Variable vmlinux-dirs is the directory part of each built-in.a
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
$(core-y) $(core-m) $(drivers-y) $(drivers-m) \
$(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))
# The entry of recursive make
$(vmlinux-dirs):
$(Q)$(MAKE) $(build)=$@ need-builtin=1
```
递归 make 的<ruby> 配方 <rt> recipe </rt></ruby>被扩展开是这样的:
```
make -f scripts/Makefile.build obj=init need-builtin=1
```
这意味着 `make` 将进入 `scripts/Makefile.build` 以继续构建每个 `built-in.a`。在`scripts/link-vmlinux.sh` 的帮助下,`vmlinux` 文件最终位于源根目录下。
#### vmlinux 与 bzImage 对比
许多 Linux 内核开发人员可能不清楚 `vmlinux` 和 `bzImage` 之间的关系。例如,这是他们在 x86-64 中的关系:

源代码根目录下的 `vmlinux` 被剥离、压缩后,放入 `piggy.S`,然后与其他对等对象链接到 `arch/x86/boot/compressed/vmlinux`。同时,在 `arch/x86/boot` 下生成一个名为 `setup.bin` 的文件。可能有一个可选的第三个文件,它带有重定位信息,具体取决于 `CONFIG_X86_NEED_RELOCS` 的配置。
由内核提供的称为 `build` 的宿主程序将这两个(或三个)部分构建到最终的 `bzImage` 文件中。
#### 依赖跟踪
kbuild 跟踪三种依赖关系:
1. 所有必备文件(`*.c` 和 `*.h`)
2. 所有必备文件中使用的 `CONFIG_` 选项
3. 用于编译该目标的命令行依赖项
第一个很容易理解,但第二个和第三个呢? 内核开发人员经常会看到如下代码:
```
#ifdef CONFIG_SMP
__boot_cpu_id = cpu;
#endif
```
当 `CONFIG_SMP` 改变时,这段代码应该重新编译。编译源文件的命令行也很重要,因为不同的命令行可能会导致不同的目标文件。
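命令行依赖可以用一个小实验直观体会(`demo.c` 是随手写的示例文件;这里用 `-g` 确保两次编译得到的目标文件必然不同):

```
$ printf 'int f(int a) { return a * 2; }\n' > demo.c
$ gcc -c demo.c -o demo.o && md5sum demo.o
$ gcc -g -c demo.c -o demo.o && md5sum demo.o
```

两次得到的哈希值不同:同样的源码,不同的命令行,产生了不同的目标文件,因此命令行本身也必须被记录为依赖之一。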
当 `.c` 文件通过 `#include` 指令使用头文件时,你需要编写如下规则:
```
main.o: defs.h
recipe...
```
管理大型项目时,需要大量的这些规则;把它们全部写下来会很乏味无聊。幸运的是,大多数现代 C 编译器都可以通过查看源文件中的 `#include` 行来为你编写这些规则。对于 GNU 编译器集合(GCC),只需添加一个命令行参数:`-MD depfile`
```
# In scripts/Makefile.lib
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
-include $(srctree)/include/linux/compiler_types.h \
$(__c_flags) $(modkern_cflags) \
$(basename_flags) $(modname_flags)
```
这将生成一个 `.d` 文件,内容如下:
```
init_task.o: init/init_task.c include/linux/kconfig.h \
include/generated/autoconf.h include/linux/init_task.h \
include/linux/rcupdate.h include/linux/types.h \
...
```
然后,主程序 [fixdep](https://github.com/torvalds/linux/blob/master/scripts/basic/fixdep.c) 以该 depfile 文件和命令行作为输入来处理其余两种依赖项,并以 makefile 格式输出一个 `.<target>.cmd` 文件,其中记录了命令行和目标的所有先决条件(包括配置)。它看起来像这样:
```
# The command line used to compile the target
cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d -nostdinc ...
...
# The dependency files
deps_init/init_task.o := \
$(wildcard include/config/posix/timers.h) \
$(wildcard include/config/arch/task/struct/on/stack.h) \
$(wildcard include/config/thread/info/in/task.h) \
...
include/uapi/linux/types.h \
arch/x86/include/uapi/asm/types.h \
include/uapi/asm-generic/types.h \
...
```
在递归 make 中,`.<target>.cmd` 文件将被包含,以提供所有依赖关系信息并帮助决定是否重建目标。
这背后的秘密是 `fixdep` 将解析 depfile(`.d` 文件),然后解析里面的所有依赖文件,搜索所有 `CONFIG_` 字符串的文本,将它们转换为相应的空的头文件,并将它们添加到目标的先决条件。每次配置更改时,相应的空的头文件也将更新,因此 kbuild 可以检测到该更改并重建依赖于它的目标。因为还记录了命令行,所以很容易比较最后和当前的编译参数。
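这个名字映射规则很容易验证:例如 `CONFIG_POSIX_TIMERS` 对应 `include/config/posix/timers.h`(全部转为小写,下划线变成路径分隔符)。在一棵编译过的内核源码树里,可以看到它确实只是一个起时间戳作用的空文件(示例):

```
$ stat -c '%s %n' include/config/posix/timers.h
0 include/config/posix/timers.h
```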
### 展望未来
Kconfig/kbuild 在很长一段时间内没有什么变化,直到新的维护者 Masahiro Yamada 于 2017 年初加入,现在 kbuild 正在再次积极开发中。如果你不久后看到与本文中的内容不同的内容,请不要感到惊讶。
---
via: <https://opensource.com/article/18/10/kbuild-and-kconfig>
作者:[Cao Jin](https://opensource.com/users/pinocchio) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.
To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.
## Kconfig
The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable. Kconfig offers the user many config targets:
config | Update current config utilizing a line-oriented program |
nconfig | Update current config utilizing a ncurses menu-based program |
menuconfig | Update current config utilizing a menu-based program |
xconfig | Update current config utilizing a Qt-based frontend |
gconfig | Update current config utilizing a GTK+ based frontend |
oldconfig | Update current config utilizing a provided .config as base |
localmodconfig | Update current config disabling modules not loaded |
localyesconfig | Update current config converting local mods to core |
defconfig | New config with default from Arch-supplied defconfig |
savedefconfig | Save current config as ./defconfig (minimal config) |
allnoconfig | New config where all options are answered with 'no' |
allyesconfig | New config where all options are accepted with 'yes' |
allmodconfig | New config selecting modules when possible |
alldefconfig | New config with all symbols set to default |
randconfig | New config with a random answer to all options |
listnewconfig | List new options |
olddefconfig | Same as oldconfig but sets new symbols to their default value without prompting |
kvmconfig | Enable additional options for KVM guest kernel support |
xenconfig | Enable additional options for xen dom0 and guest kernel support |
tinyconfig | Configure the tiniest possible kernel |
I think **menuconfig** is the most popular of these targets. The targets are processed by different host programs, which are provided by the kernel and built during kernel building. Some targets have a GUI (for the user's convenience) while most don't. Kconfig-related tools and source code reside mainly under **scripts/kconfig/** in the kernel source. As we can see from **scripts/kconfig/Makefile**, there are several host programs, including **conf**, **mconf**, and **nconf**. Except for **conf**, each of them is responsible for one of the GUI-based config targets, so, **conf** deals with most of them.
Logically, Kconfig's infrastructure has two parts: one implements a [new language](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kconfig-language.txt) to define the configuration items (see the Kconfig files under the kernel source), and the other parses the Kconfig language and deals with configuration actions.
Most of the config targets have roughly the same internal process (shown below):

Note that all configuration items have a default value.
The first step reads the Kconfig file under source root to construct an initial configuration database; then it updates the initial database by reading an existing configuration file according to this priority:
1. .config
2. /lib/modules/$(shell,uname -r)/.config
3. /etc/kernel-config
4. /boot/config-$(shell,uname -r)
5. ARCH_DEFCONFIG
6. arch/$(ARCH)/defconfig
If you are doing GUI-based configuration via **menuconfig** or command-line-based configuration via **oldconfig**, the database is updated according to your customization. Finally, the configuration database is dumped into the .config file.
But the .config file is not the final fodder for kernel building; this is why the **syncconfig** target exists. **syncconfig** used to be a config target called **silentoldconfig**, but it doesn't do what the old name says, so it was renamed. Also, because it is for internal use (not for users), it was dropped from the list.
Here is an illustration of what **syncconfig** does:

**syncconfig** takes .config as input and outputs many other files, which fall into three categories:
- **auto.conf & tristate.conf** are used for makefile text processing. For example, you may see statements like this in a component's makefile: `obj-$(CONFIG_GENERIC_CALIBRATE_DELAY) += calibrate.o`
- **autoconf.h** is used in C-language source files.
- Empty header files under **include/config/** are used for configuration-dependency tracking during kbuild, which is explained below.
After configuration, we will know which files and code pieces are not compiled.
## kbuild
Component-wise building, called *recursive make*, is a common way for GNU `make` to manage a large project. Kbuild is a good example of recursive make. By dividing source files into different modules/components, each component is managed by its own makefile. When you start building, a top makefile invokes each component's makefile in the proper order, builds the components, and collects them into the final executable.
Kbuild refers to different kinds of makefiles:

- **Makefile** is the top makefile located in source root.
- **.config** is the kernel configuration file.
- **arch/$(ARCH)/Makefile** is the arch makefile, which is the supplement to the top makefile.
- **scripts/Makefile.*** describes common rules for all kbuild makefiles.
- Finally, there are about 500 **kbuild makefiles**.
The top makefile includes the arch makefile, reads the .config file, descends into subdirectories, invokes **make** on each component's makefile with the help of routines defined in **scripts/Makefile.***, builds up each intermediate object, and links all the intermediate objects into vmlinux. Kernel document [Documentation/kbuild/makefiles.txt](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) describes all aspects of these makefiles.
As an example, let's look at how vmlinux is produced on x86-64:

All the **.o** files that go into vmlinux first go into their own **built-in.a**, which is indicated via variables **KBUILD_VMLINUX_INIT**, **KBUILD_VMLINUX_MAIN**, **KBUILD_VMLINUX_LIBS**, then are collected into the vmlinux file.
Take a look at how recursive make is implemented in the Linux kernel, with the help of simplified makefile code:
```
# In top Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps)
+$(call if_changed,link-vmlinux)
# Variable assignments
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS)
export KBUILD_VMLINUX_INIT := $(head-y) $(init-y)
export KBUILD_VMLINUX_MAIN := $(core-y) $(libs-y2) $(drivers-y) $(net-y) $(virt-y)
export KBUILD_VMLINUX_LIBS := $(libs-y1)
export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds
init-y := init/
drivers-y := drivers/ sound/ firmware/
net-y := net/
libs-y := lib/
core-y := usr/
virt-y := virt/
# Transform to corresponding built-in.a
init-y := $(patsubst %/, %/built-in.a, $(init-y))
core-y := $(patsubst %/, %/built-in.a, $(core-y))
drivers-y := $(patsubst %/, %/built-in.a, $(drivers-y))
net-y := $(patsubst %/, %/built-in.a, $(net-y))
libs-y1 := $(patsubst %/, %/lib.a, $(libs-y))
libs-y2 := $(patsubst %/, %/built-in.a, $(filter-out %.a, $(libs-y)))
virt-y := $(patsubst %/, %/built-in.a, $(virt-y))
# Setup the dependency. vmlinux-deps are all intermediate objects, vmlinux-dirs
# are phony targets, so every time comes to this rule, the recipe of vmlinux-dirs
# will be executed. Refer "4.6 Phony Targets" of `info make`
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
# Variable vmlinux-dirs is the directory part of each built-in.a
vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
$(core-y) $(core-m) $(drivers-y) $(drivers-m) \
$(net-y) $(net-m) $(libs-y) $(libs-m) $(virt-y)))
# The entry of recursive make
$(vmlinux-dirs):
$(Q)$(MAKE) $(build)=$@ need-builtin=1
```
The recursive make recipe is expanded, for example:
`make -f scripts/Makefile.build obj=init need-builtin=1`
This means **make** will go into **scripts/Makefile.build** to continue the work of building each **built-in.a**. With the help of **scripts/link-vmlinux.sh**, the vmlinux file is finally under source root.
### Understanding vmlinux vs. bzImage
Many Linux kernel developers may not be clear about the relationship between vmlinux and bzImage. For example, here is their relationship in x86-64:

The source root vmlinux is stripped, compressed, put into **piggy.S**, then linked with other peer objects into **arch/x86/boot/compressed/vmlinux**. Meanwhile, a file called setup.bin is produced under **arch/x86/boot**. There may be an optional third file that has relocation info, depending on the configuration of **CONFIG_X86_NEED_RELOCS**.
A host program called **build**, provided by the kernel, builds these two (or three) parts into the final bzImage file.
### Dependency tracking
Kbuild tracks three kinds of dependencies:
- All prerequisite files (both **.c** and **.h**)
- **CONFIG_** options used in all prerequisite files
- Command-line dependencies used to compile the target
The first one is easy to understand, but what about the second and third? Kernel developers often see code pieces like this:
```
#ifdef CONFIG_SMP
__boot_cpu_id = cpu;
#endif
```
When **CONFIG_SMP** changes, this piece of code should be recompiled. The command line for compiling a source file also matters, because different command lines may result in different object files.
When a **.c** file uses a header file via a **#include** directive, you need to write a rule like this:
```
main.o: defs.h
recipe...
```
When managing a large project, you need a lot of these kinds of rules; writing them all would be tedious and boring. Fortunately, most modern C compilers can write these rules for you by looking at the **#include** lines in the source file. For the GNU Compiler Collection (GCC), it is just a matter of adding a command-line parameter: **-MD depfile**
```
# In scripts/Makefile.lib
c_flags = -Wp,-MD,$(depfile) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) \
-include $(srctree)/include/linux/compiler_types.h \
$(__c_flags) $(modkern_cflags) \
$(basename_flags) $(modname_flags)
```
This would generate a **.d** file with content like:
```
init_task.o: init/init_task.c include/linux/kconfig.h \
include/generated/autoconf.h include/linux/init_task.h \
include/linux/rcupdate.h include/linux/types.h \
...
```
Then the host program **fixdep** takes care of the other two dependencies by taking the **depfile** and command line as input, then outputting a **.<target>.cmd** file in makefile syntax, which records the command line and all the prerequisites (including the configuration) for a target. It looks like this:
```
# The command line used to compile the target
cmd_init/init_task.o := gcc -Wp,-MD,init/.init_task.o.d -nostdinc ...
...
# The dependency files
deps_init/init_task.o := \
$(wildcard include/config/posix/timers.h) \
$(wildcard include/config/arch/task/struct/on/stack.h) \
$(wildcard include/config/thread/info/in/task.h) \
...
include/uapi/linux/types.h \
arch/x86/include/uapi/asm/types.h \
include/uapi/asm-generic/types.h \
...
```
A **.<target>.cmd** file will be included during recursive make, providing all the dependency info and helping to decide whether to rebuild a target or not.
The secret behind this is that **fixdep** will parse the **depfile** (**.d** file), then parse all the dependency files inside, search the text for all the **CONFIG_** strings, convert them to the corresponding empty header file, and add them to the target's prerequisites. Every time the configuration changes, the corresponding empty header file will be updated, too, so kbuild can detect that change and rebuild the target that depends on it. Because the command line is also recorded, it is easy to compare the last and current compiling parameters.
## Looking ahead
Kconfig/kbuild remained the same for a long time until the new maintainer, Masahiro Yamada, joined in early 2017, and now kbuild is under active development again. Don't be surprised if you soon see something different from what's in this article.
|
11,229 | 基于 Linux 的智能手机 Librem 5 开启预售 | https://itsfoss.com/librem-5-available/ | 2019-08-15T10:43:41 | [
"Librem",
"手机"
] | https://linux.cn/article-11229-1.html | Purism 近期[宣布](https://puri.sm/posts/librem-5-smartphone-final-specs-announced/)了 [Librem 5 智能手机](https://itsfoss.com/librem-linux-phone/)的最终规格。它不是基于 Android 或 iOS 的,而是基于 [Android 的开源替代品](https://itsfoss.com/open-source-alternatives-android/)–[PureOS](https://pureos.net/)。
随着这一消息的宣布,Librem 5 也正式[以 649 美元的价格开启预售](https://shop.puri.sm/shop/librem-5/)(这是 7 月 31 日前的早鸟价),在那以后价格将会上涨 50 美元,产品将会于 2019 年第三季度发货。

以下是 Purism 博客文章中关于 Librem 5 的信息:
>
> 我们认为手机不应该跟踪你,也不应该利用你的数字生活。
>
>
> Librem 5 意味着你有机会通过自由开源软件、开放式治理和透明度来收回和保护你的私人信息和数字生活。Librem 5 是一个**基于 [PureOS](https://pureos.net/) 的手机**,这是一个完全免费、符合道德的**不基于 Android 或 iOS** 的开源操作系统(了解更多关于[为什么这很重要](https://puri.sm/products/librem-5/pureos-mobile/)的信息)。
>
>
> 我们已成功超额完成了众筹计划,我们将会一一去实现我们的承诺。Librem 5 的硬件和软件开发正在[稳步前进](https://puri.sm/posts/tag/phones),它计划在 2019 年的第三季度发行初始版本。你可以用 649 美元的价格预购直到产品发货或正式价格生效。现在附赠外接显示器、键盘和鼠标的套餐也可以预购了。
>
>
>
### Librem 5 的配置
从它的预览来看,Librem 5 旨在提供更好的隐私保护和安全性。除此之外,它试图避免使用 Google 或 Apple 的服务。
虽然这个想法够好,它是如何成为一款低于 700 美元的商用智能手机?

让我们来看一下它的配置:

从纸面数据来看,它的配置还算够用:不算很好,也不算很差。但是,性能呢?用户体验呢?
我们并不能够确切地了解到它的信息,除非我们用过它。所以,如果你打算预购,应该要考虑到这一点。
### Librem 5 提供终身软件更新支持
当然,和同价位的智能手机相比,它的这些配置并不是很优秀。
然而,随着他们做出终身软件更新支持的承诺后,它看起来确实像被开源爱好者所钟情的一个好产品。
### 其他关键特性
Purism 还强调,Librem 5 将成为有史以来第一款由 [Matrix](http://matrix.org) 提供支持的智能手机。这意味着它的短信和通话将支持端到端加密的去中心化通信。
除了这些,耳机接口和用户可以自行更换电池使它成为一个可靠的产品。
### 总结
即使它很难与 Android 或 iOS 智能手机竞争,但多一种选择方式总是好的。Librem 5 不可能成为每个用户都喜欢的智能手机,但如果你是一个开源爱好者,而且正在寻找一款尊重隐私和安全,不使用 Google 和 Apple 服务的简单智能手机,那么这就很适合你。
另外,它提供终身的软件更新支持,这让它成为了一个优秀的智能手机。
你如何看待 Librem 5?有在考虑预购吗?请在下方的评论中将你的想法告诉我们。
---
via: <https://itsfoss.com/librem-5-available/>
作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Scvoet](https://github.com/scvoet) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Purism recently [announced](https://puri.sm/posts/librem-5-smartphone-final-specs-announced/) the final specs for its [Librem 5 smartphone](https://itsfoss.com/librem-linux-phone/). This is not based on Android or iOS – but built on [PureOS](https://pureos.net/), which is an [open-source alternative to Android](https://itsfoss.com/open-source-alternatives-android/).
Along with the announcement, the Librem 5 is also available for [pre-orders for $649](https://shop.puri.sm/shop/librem-5/) (as an early bird offer till 31st July) and it will go up by $50 following the date. It will start shipping from Q3 of 2019.

Here’s what Purism mentioned about Librem 5 in its blog post:
**We believe phones should not track you nor exploit your digital life.**
*The Librem 5 represents the opportunity for you to take back control and protect your private information, your digital life through free and open source software, open governance, and transparency. The Librem 5 is **a phone built on PureOS**, a fully free, ethical and open-source operating system that is **not based on Android or iOS** (learn more about [why this is important](https://puri.sm/products/librem-5/pureos-mobile/)).*
*We have successfully crossed our crowdfunding goals and will be delivering on our promise. The Librem 5’s hardware and software development is advancing at a steady pace, and is scheduled for an initial release in Q3 2019. You can preorder the phone at $649 until shipping begins and regular pricing comes into effect. Kits with an external monitor, keyboard and mouse, are also available for preorder.*
## Librem 5 Specifications
From what it looks like, Librem 5 definitely aims to provide better privacy and security. In addition to this, it tries to avoid using anything from Google or Apple.
While the idea is good enough – how does it hold up as a commercial smartphone under $700?

Let us take a look at the tech specs:

On paper the tech specs seems to be good enough. Not too great – not too bad. But, what about the performance? The user experience?
Well, we can’t be too sure about it – unless we use it. So, if you are pre-ordering it – take that into consideration.
## Lifetime software updates for Librem 5
Of course, the specs aren’t very pretty when compared to the smartphones available at this price range.
However, with the promise of lifetime software updates – it does look like a decent offering for open source enthusiasts.
## Other Key Features
Purism also highlights the fact that Librem 5 will be the first-ever [Matrix](https://matrix.org)-powered smartphone. This means that it will support end-to-end decentralized encrypted communications for messaging and calling.
In addition to all these, the presence of headphone jack and a user-replaceable battery makes it a pretty solid deal.
## Wrapping Up
Even though it is tough to compete with the likes of Android/iOS smartphones, having an alternative is always good. Librem 5 may not prove to be an ideal smartphone for every user – but if you are an open-source enthusiast and looking for a simple smartphone that respects privacy and security without utilizing Google/Apple services, this is for you.
Also the fact that it will receive lifetime software updates – makes it an interesting smartphone.
What do you think about Librem 5? Are you thinking to pre-order it? Let us know your thoughts in the comments below. |
11,230 | 如何在 Linux 命令行操作 PDF | https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html | 2019-08-15T11:01:00 | [
"PDF"
] | https://linux.cn/article-11230-1.html |
>
> pdftk 命令提供了许多处理 PDF 的命令行操作,包括合并页面、加密文件、添加水印、压缩文件,甚至还有修复 PDF。
>
>
>

虽然 PDF 通常被认为是相当稳定的文件格式,但在 Linux 和其他系统上你仍可以对它做很多处理,包括合并、拆分、旋转、打散成单页、加密和解密、添加水印、压缩和解压缩,甚至还有修复。`pdftk` 命令能执行所有这些操作,甚至更多。
“pdftk” 代表 “PDF 工具包”(PDF tool kit),这个命令非常易于使用,并且可以很好地操作 PDF。例如,要将独立的文件合并成一个文件,你可以使用以下命令:
```
$ pdftk pg1.pdf pg2.pdf pg3.pdf pg4.pdf pg5.pdf cat output OneDoc.pdf
```
`OneDoc.pdf` 将包含上面显示的所有五个文档,命令将在几秒钟内运行完毕。请注意,`cat` 选项表示将文件连接在一起,`output` 选项指定新文件的名称。
你还可以从 PDF 中提取选定页面来创建单独的 PDF 文件。例如,如果要创建仅包含上面创建的文档的第 1、2、3 和 5 页的新 PDF,那么可以执行以下操作:
```
$ pdftk OneDoc.pdf cat 1-3 5 output 4pgs.pdf
```
另外,如果你想要第 1、3、4 和 5 页(总计 5 页),我们可以使用以下命令:
```
$ pdftk OneDoc.pdf cat 1 3-end output 4pgs.pdf
```
你可以选择单独页面或者页面范围,如上例所示。
下一个命令将从一个包含奇数页(1、3 等)的文件和一个包含偶数页(2、4 等)的文件创建一个整合文档:
```
$ pdftk A=odd.pdf B=even.pdf shuffle A B output collated.pdf
```
请注意,是 `shuffle` 选项完成了这种交错整合,其后的字母则指明了文档的使用顺序。另请注意:虽然上面的例子用的是奇数页/偶数页,但你并不限于只使用两个文件。
如果要创建只能由知道密码的收件人打开的加密 PDF,可以使用如下命令:
```
$ pdftk prep.pdf output report.pdf user_pw AsK4n0thingGeTn0thing
```
选项提供 40(`encrypt_40bit`)和 128(`encrypt_128bit`)位加密。默认情况下使用 128 位加密。
你还可以使用 `burst` 选项将 PDF 文件分成单个页面:
```
$ pdftk allpgs.pdf burst
$ ls -ltr *.pdf | tail -5
-rw-rw-r-- 1 shs shs 22933 Aug 8 08:18 pg_0001.pdf
-rw-rw-r-- 1 shs shs 23773 Aug 8 08:18 pg_0002.pdf
-rw-rw-r-- 1 shs shs 23260 Aug 8 08:18 pg_0003.pdf
-rw-rw-r-- 1 shs shs 23435 Aug 8 08:18 pg_0004.pdf
-rw-rw-r-- 1 shs shs 23136 Aug 8 08:18 pg_0005.pdf
```
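文章开头提到的解密、旋转、加水印、压缩和修复同样都是一条命令的事。下面是一组示例(文件名均为前文或假设的示例文件;旋转语法以 pdftk 2.x 为准,`1-endeast` 表示把第 1 页到最后一页顺时针旋转 90 度):

```
$ pdftk report.pdf input_pw AsK4n0thingGeTn0thing output plain.pdf
$ pdftk OneDoc.pdf cat 1-endeast output rotated.pdf
$ pdftk OneDoc.pdf background watermark.pdf output marked.pdf
$ pdftk OneDoc.pdf output smaller.pdf compress
$ pdftk broken.pdf output fixed.pdf
```

第一条命令用前面设置的用户密码解密 `report.pdf`;`background` 会把 `watermark.pdf` 垫在每一页内容之下作为水印;最后一条命令会在可能的情况下修复损坏 PDF 的交叉引用表。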
`pdftk` 命令使得合并、拆分、重建、加密 PDF 文件非常容易。要了解更多选项,请查看 [PDF 实验室](https://www.pdflabs.com/docs/pdftk-cli-examples/)中的示例页面。
---
via: <https://www.networkworld.com/article/3430781/how-to-manipulate-pdfs-on-linux.html>
作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,231 | 何谓 Linux 用户? | https://opensource.com/article/19/6/what-linux-user | 2019-08-15T21:17:29 | [
"Linux"
] | https://linux.cn/article-11231-1.html |
>
> “Linux 用户”这一定义已经拓展到了更大的范围,同时也发生了巨大的改变。
>
>
>

*编者按: 本文更新于 2019 年 6 月 11 日下午 1:15:19,以更准确地反映作者对 Linux 社区中开放和包容的社区性的看法。*
再有不到两年,Linux 内核就要迎来它 30 岁的生日了。让我们回想一下!1991 年的时候你在哪里?你出生了吗?那年我 13 岁!在 1991 到 1993 年间只推出了少数几款 Linux 发行版,其中至少有三个:Slackware、Debian 和 Red Hat 为 Linux 运动的发展提供了[支柱](https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg)。
当年获得 Linux 发行版的副本,并在笔记本或服务器上进行安装和配置,和今天相比是很不一样的。当时是十分艰难的!也是令人沮丧的!如果你让能让它运行起来,就是一个了不起的成就!我们不得不与不兼容的硬件、设备上的配置跳线、BIOS 问题以及许多其他问题作斗争。甚至即使硬件是兼容的,很多时候,你仍然需要编译内核、模块和驱动程序才能让它们在你的系统上工作。
如果你当时经历过那些,你可能会表示赞同。有些读者甚至称它们为“美好的过往”,因为选择使用 Linux 意味着仅仅是为了让操作系统继续运行,你就必须学习操作系统、计算机体系架构、系统管理、网络,甚至编程。但我并不这么认为,窃以为:Linux 成为每个人技术体验中寻常的一部分,正是我们这个行业最令人惊叹的变化之一!
将近 30 年过去了,无论是桌面和服务器领域 Linux 系统都有了脱胎换骨的变换。你可以在汽车上,在飞机上,家用电器上,智能手机上……几乎任何地方发现 Linux 的影子!你甚至可以购买预装 Linux 的笔记本电脑、台式机和服务器。如果你考虑云计算,企业甚至个人都可以一键部署 Linux 虚拟机,由此可见 Linux 的应用已经变得多么普遍了。
考虑到这些,我想问你的问题是:**这个时代如何定义“Linux 用户”?**
如果你从 System76 或 Dell 为你的父母或祖父母购买一台 Linux 笔记本电脑,为其登录好他们的社交媒体和电子邮件,并告诉他们经常单击“系统升级”,那么他们现在就是 Linux 用户了。如果你是在 Windows 或 MacOS 机器上进行以上操作,那么他们就是 Windows 或 MacOS 用户。令人难以置信的是,与 90 年代不同,现在的 Linux 任何人都可以轻易上手。
在很大程度上,这要归因于 web 浏览器成为了桌面计算机上的“杀手级应用程序”。现在,许多用户并不关心他们使用的是什么操作系统,只要能访问到自己的应用程序或服务就行。
你知道有多少人经常使用他们的电话、桌面或笔记本电脑,但不会管理他们系统上的文件、目录和驱动程序?又有多少人不会安装“应用程序商店”没有收录的二进制文件程序?更不要提从头编译应用程序,对我来说,几乎全是这样的。这正是成熟的开源软件和相应的生态对于易用性的改进的动人之处。
今天的 Linux 用户不需要像上世纪 90 年代或 21 世纪初的 Linux 用户那样了解、学习甚至查询信息,这并不是一件坏事。过去那种认为 Linux 只适合工科男使用的想法已经一去不复返了。
对于那些对计算机、操作系统以及在自由软件上创建、使用和协作的想法感兴趣、好奇、着迷的 Linux 用户来说,Linux 依旧有研究的空间。如今在 Windows 和 MacOS 上也有同样多的空间留给创造性的开源贡献者。今天,成为 Linux 用户就是成为一名与 Linux 系统同行的人。这是一件很棒的事情。
### Linux 用户定义的转变
当我开始使用 Linux 时,作为一个 Linux 用户意味着知道操作系统如何以各种方式、形态和形式运行。Linux 在某种程度上已经成熟,这使得“Linux 用户”的定义可以包含更广泛的领域及那些领域里的人们。这可能是显而易见的一点,但重要的还是要说清楚:任何 Linux 用户皆“生”而平等。
---
via: <https://opensource.com/article/19/6/what-linux-user>
作者:[Anderson Silva](https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qfzy1233](https://github.com/qfzy1233) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Editor's note: this article was updated on Jun 11, 2019, at 1:15:19 PM to more accurately reflect the author's perspective on an open and inclusive community of practice in the Linux community.
In only two years, the Linux kernel will be 30 years old. Think about that! Where were you in 1991? Were you even born? I was 13! Between 1991 and 1993 a few Linux distributions were created, and at least three of them—Slackware, Debian, and Red Hat–provided the [backbone](https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg) the Linux movement was built on.
Getting a copy of a Linux distribution and installing and configuring it on a desktop or server was very different back then than today. It was hard! It was frustrating! It was an accomplishment if you got it running! We had to fight with incompatible hardware, configuration jumpers on devices, BIOS issues, and many other things. Even if the hardware was compatible, many times, you still had to compile the kernel, modules, and drivers to get them to work on your system.
If you were around during those days, you are probably nodding your head. Some readers might even call them the "good old days," because choosing to use Linux meant you had to learn about operating systems, computer architecture, system administration, networking, and even programming, just to keep the OS functioning. I am not one of them though: Linux being a regular part of everyone's technology experience is one of the most amazing changes in our industry!
Almost 30 years later, Linux has gone far beyond the desktop and server. You will find Linux in automobiles, airplanes, appliances, smartphones… virtually everywhere! You can even purchase laptops, desktops, and servers with Linux preinstalled. If you consider cloud computing, where corporations and even individuals can deploy Linux virtual machines with the click of a button, it's clear how widespread the availability of Linux has become.
With all that in mind, my question for you is: **How do you define a "Linux user" today?**
If you buy your parent or grandparent a Linux laptop from System76 or Dell, log them into their social media and email, and tell them to click "update system" every so often, they are now a Linux user. If you did the same with a Windows or MacOS machine, they would be Windows or MacOS users. It's incredible to me that, unlike the '90s, Linux is now a place for anyone and everyone to compute.
In many ways, this is due to the web browser becoming the "killer app" on the desktop computer. Now, many users don't care what operating system they are using as long as they can get to their app or service.
How many people do you know who use their phone, desktop, or laptop regularly but can't manage files, directories, and drivers on their systems? How many can't install a binary that isn't attached to an "app store" of some sort? How about compiling an application from scratch?! For me, it's almost no one. That's the beauty of open source software maturing along with an ecosystem that cares about accessibility.
Today's Linux user is not required to know, study, or even look up information as the Linux user of the '90s or early 2000s did, and that's not a bad thing. The old imagery of Linux being exclusively for bearded men is long gone, and I say good riddance.
There will always be room for a Linux user who is interested, curious, *fascinated* about computers, operating systems, and the idea of creating, using, and collaborating on free software. There is just as much room for creative open source contributors on Windows and MacOS these days as well. Today, being a Linux user is being anyone with a Linux system. And that's a wonderful thing.
## The change to what it means to be a Linux user
When I started with Linux, being a user meant knowing how the operating system functioned in every way, shape, and form. Linux has matured in a way that allows the definition of "Linux users" to encompass a much broader world of possibility and the people who inhabit it. It may be obvious to say, but it is important to say clearly: anyone who uses Linux is an equal Linux user.
|
11,232 | 如何在 Ubuntu 18.04 的右键单击菜单中添加“新建文档”按钮 | https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/ | 2019-08-16T09:27:12 | [
"GNOME"
] | https://linux.cn/article-11232-1.html | 
前几天,我在各种在线资源站点上收集关于 [Linux 包管理器](https://www.ostechnix.com/linux-package-managers-compared-appimage-vs-snap-vs-flatpak/) 的参考资料。当我想创建一个用于保存笔记的文件时,我突然发现我的 Ubuntu 18.04 LTS 桌面上已经没有“新建文档”按钮了,它好像离奇失踪了。在谷歌搜索一番后,我发现原来“新建文档”按钮不再集成在 Ubuntu GNOME 版本中了。庆幸的是,我找到了一个在 Ubuntu 18.04 LTS 桌面的右键单击菜单中添加“新建文档”按钮的简易解决方案。
就像你在下方截图中看到的一样,Nautilus 文件管理器的右键单击菜单中并没有“新建文档”按钮。

*Ubuntu 18.04 移除了右键单击菜单中的“新建文档”选项。*
如果你想添加此按钮,请按照以下步骤进行操作。
### 在 Ubuntu 的右键单击菜单中添加“新建文档”按钮
首先,你需要确保你的系统中有 `~/Templates` 文件夹。如果没有的话,可以按照下面的命令进行创建。
```
$ mkdir ~/Templates
```
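顺便一提,如果你的系统语言不是英语,模板文件夹的名称可能并不叫 `Templates`。你可以用 `xdg-user-dir` 命令确认系统实际使用的模板目录(这里假设系统装有 xdg-user-dirs,Ubuntu 默认自带),它会打印出该目录的完整路径:

```
$ xdg-user-dir TEMPLATES
```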
然后打开终端应用并使用 `cd` 命令进入 `~/Templates` 文件夹:
```
$ cd ~/Templates
```
创建一个空文件:
```
$ touch Empty\ Document
```
或
```
$ touch "Empty Document"
```

重新打开 Nautilus 文件管理器,然后检查右键单击菜单中是否成功添加了“新建文档”按钮。

*在 Ubuntu 18.04 的右键单击菜单中添加“新建文档”按钮*
如上图所示,我们重新启用了“新建文档”按钮。
你还可以为不同文件类型添加按钮。
```
$ cd ~/Templates
$ touch New\ Word\ Document.docx
$ touch New\ PDF\ Document.pdf
$ touch New\ Text\ Document.txt
$ touch New\ PyScript.py
```

*在“新建文档”子菜单中为不同的文件类型添加按钮*
注意,所有文件都应该创建在 `~/Templates` 文件夹下。
现在,打开 Nautilus 并检查“新建文档”菜单中是否有相应的新建文件类型按钮。

如果你要从子菜单中删除任一文件类型,只需在 Templates 目录中移除相应的文件即可。
```
$ rm ~/Templates/New\ Word\ Document.docx
```
我十分好奇为什么最新的 Ubuntu GNOME 版本将这个常用选项移除了。不过,重新启用这个按钮也十分简单,只需要几分钟。
---
via: <https://www.ostechnix.com/how-to-add-new-document-option-in-right-click-context-menu-in-ubuntu-18-04/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[scvoet](https://github.com/scvoet) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,234 | Prometheus 入门 | https://opensource.com/article/18/12/introduction-prometheus | 2019-08-16T11:42:25 | [
"Prometheus"
] | https://linux.cn/article-11234-1.html |
>
> 学习安装 Prometheus 监控和警报系统并编写它的查询。
>
>
>

[Prometheus](https://prometheus.io/) 是一个开源的监控和警报系统,它直接从目标主机上运行的代理程序中抓取指标,并将收集的样本集中存储在其服务器上。也可以使用像 `collectd_exporter` 这样的插件推送指标,尽管这不是 Prometheus 的默认行为,但在主机位于防火墙后面或安全策略禁止打开端口的某些环境中它可能很有用。
Prometheus 是[云原生计算基金会(CNCF)](https://www.cncf.io/)的一个项目。它使用<ruby> 联合模型 <rt> federation model </rt></ruby>进行扩展,该模型使得一个 Prometheus 服务器能够抓取另一个 Prometheus 服务器的数据。这允许创建分层拓扑,其中中央系统或更高级别的 Prometheus 服务器可以抓取已从下级实例收集的聚合数据。
除 Prometheus 服务器外,其最常见的组件是[警报管理器](https://prometheus.io/docs/alerting/alertmanager/)及其输出器。
警报规则可以在 Prometheus 中创建,并配置为向警报管理器发送自定义警报。然后,警报管理器处理和管理这些警报,包括通过电子邮件或第三方服务(如 [PagerDuty](https://en.wikipedia.org/wiki/PagerDuty))等不同机制发送通知。
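举个例子,一个(假设性的)警报规则文件大致如下:当某个实例的 CPU 使用率持续 5 分钟超过 80% 时触发警报。注意,规则文件需要在 Prometheus 的配置中通过 `rule_files` 引用,具体的表达式和阈值应按你的环境调整:

```
groups:
  - name: example-rules
    rules:
      - alert: HighCPUUsage
        expr: 100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{mode="idle"}[5m]))) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "实例 {{ $labels.instance }} 的 CPU 使用率超过 80%"
```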
Prometheus 的输出器可以是库、进程、设备或任何其他能将 Prometheus 抓取的指标公开出去的东西。 这些指标可在端点 `/metrics` 中获得,它允许 Prometheus 无需代理直接抓取它们。本文中的教程使用 `node_exporter` 来公开目标主机的硬件和操作系统指标。输出器的输出是明文的、高度可读的,这是 Prometheus 的优势之一。
此外,你可以将 Prometheus 作为后端,配置 [Grafana](https://grafana.com/) 来提供数据可视化和仪表板功能。
### 理解 Prometheus 的配置文件
抓取 `/metrics` 的间隔秒数控制了时间序列数据库的粒度。这在配置文件中定义为 `scrape_interval` 参数,默认情况下设置为 60 秒。
在 `scrape_configs` 部分中为每个抓取作业设置了目标。每个作业都有自己的名称和一组标签,可以帮助你过滤、分类并更轻松地识别目标。一项作业可以有很多目标。
### 安装 Prometheus
在本教程中,为简单起见,我们将使用 Docker 安装 Prometheus 服务器和 `node_exporter`。Docker 应该已经在你的系统上正确安装和配置。对于更深入、自动化的方法,我推荐 Steve Ovens 的文章《[如何使用 Ansible 与 Prometheus 建立系统监控](https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus)》。
在开始之前,在工作目录中创建 Prometheus 配置文件 `prometheus.yml`,如下所示:
```
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'webservers'
static_configs:
- targets: ['<node exporter node IP>:9100']
```
通过运行以下命令用 Docker 启动 Prometheus:
```
$ sudo docker run -d -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
```
默认情况下,Prometheus 服务器将使用端口 9090。如果此端口已在使用,你可以通过在上一个命令的后面添加参数 `--web.listen-address="<IP of machine>:<port>"` 来更改它。
在要监视的计算机中,使用以下命令下载并运行 `node_exporter` 容器:
```
$ sudo docker run -d \
    -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" \
    --net="host" prom/node-exporter \
    --path.procfs /host/proc --path.sysfs /host/sys \
    --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
```
出于本文练习的目的,你可以在同一台机器上安装 `node_exporter` 和 Prometheus。请注意,生产环境中在 Docker 下运行 `node_exporter` 是不明智的 —— 这仅用于测试目的。
要验证 `node_exporter` 是否正在运行,请打开浏览器并导航到 `http://<IP of Node exporter host>:9100/metrics`,这将显示收集到的所有指标;也即是 Prometheus 将要抓取的相同指标。

要确认 Prometheus 服务器安装成功,打开浏览器并导航至:<http://localhost:9090>。
你应该看到了 Prometheus 的界面。单击“Status”,然后单击“Targets”。在 “Status” 下,你应该看到你的机器被列为 “UP”。

### 使用 Prometheus 查询
现在是时候熟悉一下 [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/)(Prometheus 的查询语法)及其图形化 Web 界面了。转到 Prometheus 服务器上的 `http://localhost:9090/graph`。你将看到一个查询编辑器和两个选项卡:“Graph” 和 “Console”。
Prometheus 将所有数据存储为时间序列,使用指标名称标识每个数据。例如,指标 `node_filesystem_avail_bytes` 显示可用的文件系统空间。指标的名称可以在表达式框中使用,以选择具有此名称的所有时间序列并生成即时向量。如果需要,可以使用选择器和标签(一组键值对)过滤这些时间序列,例如:
```
node_filesystem_avail_bytes{fstype="ext4"}
```
过滤时,你可以匹配“完全相等”(`=`)、“不等于”(`!=`),“正则匹配”(`=~`)和“正则排除匹配”(`!~`)。以下示例说明了这一点:
要过滤 `node_filesystem_avail_bytes` 以显示 ext4 和 XFS 文件系统:
```
node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
```
要排除匹配:
```
node_filesystem_avail_bytes{fstype!="xfs"}
```
你还可以使用方括号得到从当前时间往回的一系列样本。你可以使用 `s` 表示秒,`m` 表示分钟,`h` 表示小时,`d` 表示天,`w` 表示周,而 `y` 表示年。使用时间范围时,返回的向量将是范围向量。
例如,以下命令生成从五分钟前到现在的样本:
```
node_memory_MemAvailable_bytes[5m]
```
Prometheus 还包括了高级查询的功能,例如:
```
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
```
请注意标签如何用于过滤作业和模式。指标 `node_cpu_seconds_total` 返回一个计数器,`irate()`函数根据范围间隔的最后两个数据点计算每秒的变化率(意味着该范围可以小于五分钟)。要计算 CPU 总体使用率,可以使用 `node_cpu_seconds_total` 指标的空闲(`idle`)模式。处理器的空闲比例与繁忙比例相反,因此从 1 中减去 `irate` 值。要使其为百分比,请将其乘以 100。

### 了解更多
Prometheus 是一个功能强大、可扩展、轻量级、易于使用和部署的监视工具,对于每个系统管理员和开发人员来说都是必不可少的。出于这些原因和其他原因,许多公司正在将 Prometheus 作为其基础设施的一部分。
要了解有关 Prometheus 及其功能的更多信息,我建议使用以下资源:
* 关于 [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/)
* 什么是 [node\_exporters 集合](https://github.com/prometheus/node_exporter#collectors)
* [Prometheus 函数](https://prometheus.io/docs/prometheus/latest/querying/functions/)
* [4 个开源监控工具](https://opensource.com/article/18/8/open-source-monitoring-tools)
* [现已推出:DevOps 监控工具的开源指南](https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools)
---
via: <https://opensource.com/article/18/12/introduction-prometheus>
作者:[Michael Zamot](https://opensource.com/users/mzamot) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | [Prometheus](https://prometheus.io/) is an open source monitoring and alerting system that directly scrapes metrics from agents running on the target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like **collectd_exporter**—although this is not Promethius' default behavior, it may be useful in some environments where hosts are behind a firewall or prohibited from opening ports by security policy.
Prometheus, a project of the [Cloud Native Computing Foundation](https://www.cncf.io/), scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. This allows creation of a hierarchical topology, where a central system or higher-level Prometheus server can scrape aggregated data already collected from subordinate instances.
Besides the Prometheus server, its most common components are its [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and its exporters.
Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms like email or third-party services like [PagerDuty](https://en.wikipedia.org/wiki/PagerDuty).
Prometheus' exporters can be libraries, processes, devices, or anything else that exposes the metrics that will be scraped by Prometheus. The metrics are available at the endpoint **/metrics**, which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses **node_exporter** to expose the target hosts' hardware and operating system metrics. Exporters' outputs are plaintext and highly readable, which is one of Prometheus' strengths.
In addition, you can configure [Grafana](https://grafana.com/) to use Prometheus as a backend to provide data visualization and dashboarding functions.
## Making sense of Prometheus' configuration file
The number of seconds between when **/metrics** is scraped controls the granularity of the time-series database. This is defined in the configuration file as the **scrape_interval** parameter, which by default is set to 60 seconds.
Targets are set for each scrape job in the **scrape_configs** section. Each job has its own name and a set of labels that can help filter, categorize, and make it easier to identify the target. One job can have many targets.
## Installing Prometheus
In this tutorial, for simplicity, we will install a Prometheus server and **node_exporter** with docker. Docker should already be installed and configured properly on your system. For a more in-depth, automated method, I recommend Steve Ovens' article [How to use Ansible to set up system monitoring with Prometheus](https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus).
Before starting, create the Prometheus configuration file **prometheus.yml** in your work directory as follows:
```
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'webservers'
static_configs:
- targets: ['<node exporter node IP>:9100']
```
Start Prometheus with Docker by running the following command:
```
$ sudo docker run -d -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
```
By default, the Prometheus server will use port 9090. If this port is already in use, you can change it by adding the parameter **--web.listen-address="<IP of machine>:<port>"** at the end of the previous command.
In the machine you want to monitor, download and run the **node_exporter** container by using the following command:
```
$ sudo docker run -d \
    -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" \
    --net="host" prom/node-exporter \
    --path.procfs /host/proc --path.sysfs /host/sys \
    --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
```
For the purposes of this learning exercise, you can install **node_exporter** and Prometheus on the same machine. Please note that it's not wise to run **node_exporter** under Docker in production—this is for testing purposes only.
To verify that **node_exporter** is running, open your browser and navigate to **http://<IP of Node exporter host>:9100/metrics**. All the metrics collected will be displayed; these are the same metrics Prometheus will scrape.

To verify the Prometheus server installation, open your browser and navigate to [http://localhost:9090](http://localhost:9090).
You should see the Prometheus interface. Click on **Status** and then **Targets**. Under State, you should see your machines listed as **UP**.

## Using Prometheus queries
It's time to get familiar with [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), Prometheus' query syntax, and its graphing web interface. Go to **http://localhost:9090/graph** on your Prometheus server. You will see a query editor and two tabs: Graph and Console.
Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric **node_filesystem_avail_bytes** shows the available filesystem space. The metric's name can be used in the expression box to select all of the time series with this name and produce an instant vector. If desired, these time series can be filtered using selectors and labels—a set of key-value pairs—for example:
`node_filesystem_avail_bytes{fstype="ext4"}`
When filtering, you can match "exactly equal" (**=**), "not equal" (**!=**), "regex-match" (**=~**), and "do not regex-match" (**!~**). The following examples illustrate this:
To filter **node_filesystem_avail_bytes** to show both ext4 and XFS filesystems:
`node_filesystem_avail_bytes{fstype=~"ext4|xfs"}`
To exclude a match:
`node_filesystem_avail_bytes{fstype!="xfs"}`
You can also get a range of samples back from the current time by using square brackets. You can use **s** to represent seconds, **m** for minutes, **h** for hours, **d** for days, **w** for weeks, and **y** for years. When using time ranges, the vector returned will be a range vector.
For example, the following command produces the samples from five minutes to the present:
`node_memory_MemAvailable_bytes[5m]`
Prometheus also includes functions to allow advanced queries, such as this:
`100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))`
Notice how the labels are used to filter the job and the mode. The metric **node_cpu_seconds_total** returns a counter, and the **irate()** function calculates the per-second rate of change based on the last two data points of the range interval (meaning the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the idle mode of the **node_cpu_seconds_total** metric. The idle percent of a processor is the opposite of a busy processor, so the **irate** value is subtracted from 1. To make it a percentage, multiply it by 100.

## Learn more
Prometheus is a powerful, scalable, lightweight, and easy to use and deploy monitoring tool that is indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.
To learn more about Prometheus and its functions, I recommend the following resources:
- [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/)
- [The node_exporter collectors](https://github.com/prometheus/node_exporter#collectors)
- [Prometheus functions](https://prometheus.io/docs/prometheus/latest/querying/functions/)
- [4 open source monitoring tools](https://opensource.com/article/18/8/open-source-monitoring-tools)
- [Now available: The open source guide to DevOps monitoring tools](https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools)
|
11,235 | 试试动态窗口管理器 dwm 吧 | https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/ | 2019-08-16T18:29:09 | [
"dwm"
] | https://linux.cn/article-11235-1.html | 
如果你崇尚效率和极简主义,并且正在为你的 Linux 桌面寻找新的窗口管理器,那么你应该尝试一下<ruby> 动态窗口管理器 <rt> dynamic window manager </rt></ruby> dwm。以不到 2000 标准行的代码写就的 dwm,是一个速度极快而功能强大,且可高度定制的窗口管理器。
你可以在平铺、单片和浮动布局之间动态选择,使用标签将窗口组织到多个工作区,并使用键盘快捷键快速导航。本文将帮助你开始使用 dwm。
### 安装
要在 Fedora 上安装 dwm,运行:
```
$ sudo dnf install dwm dwm-user
```
`dwm` 包会安装窗口管理器本身,`dwm-user` 包显著简化了配置,本文稍后将对此进行说明。
此外,为了能够在需要时锁定屏幕,我们还将安装 `slock`,这是一个简单的 X 显示锁屏。
```
$ sudo dnf install slock
```
当然,你可以根据你的个人喜好使用其它的锁屏。
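安装后,你可以先在终端里直接运行它试试效果(需要在 X 会话中运行):屏幕会立即锁定,输入你的用户密码即可解锁。

```
$ slock
```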
### 快速入门
要启动 dwm,在登录屏选择 “dwm-user” 选项。

登录后,你将看到一个非常简单的桌面。事实上,桌面上唯一的东西就是顶部的一条面板,上面列出了代表工作区的 9 个标签,以及一个代表窗口布局的 `[]=` 符号。
#### 启动应用
在查看布局之前,首先启动一些应用程序,以便你可以随时使用布局。可以通过按 `Alt+p` 并键入应用程序的名称,然后回车来启动应用程序。还有一个快捷键 `Alt+Shift+Enter` 用于打开终端。
现在有一些应用程序正在运行了,请查看布局。
#### 布局
默认情况下有三种布局:平铺布局,单片布局和浮动布局。
平铺布局由顶栏上的 `[]=` 表示,它将窗口组织为两个主要区域:左侧为主区域,右侧为堆叠区。你可以按 `Alt+t` 激活平铺布局。

平铺布局背后的想法是,主窗口放在主区域中,同时仍然可以看到堆叠区中的其他窗口。你可以根据需要在它们之间快速切换。
要在两个区域之间交换窗口,请将鼠标悬停在堆叠区中的一个窗口上,然后按 `Alt+Enter` 将其与主区域中的窗口交换。

单片布局由顶部栏上的 `[N]` 表示,可以使你的主窗口占据整个屏幕。你可以按 `Alt+m` 切换到它。
最后,浮动布局可让你自由移动和调整窗口大小。它的快捷方式是 `Alt+f`,顶栏上的符号是 `><>`。
#### 工作区和标签
每个窗口都分配了一个顶部栏中列出的标签(1-9)。要查看特定标签,请使用鼠标单击其编号或按 `Alt+1..9`。你甚至可以使用鼠标右键单击其编号,一次查看多个标签。
要在不同标签之间移动窗口,先用鼠标将窗口高亮选中,然后按 `Alt+Shift+1..9`。
### 配置
为了使 dwm 尽可能简约,它不使用典型的配置文件。而是你需要修改代表配置的 C 语言头文件,并重新编译它。但是不要担心,在 Fedora 中你只需要简单地编辑主目录中的一个文件,而其他一切都会在后台发生,这要归功于 Fedora 的维护者提供的 `dwm-user` 包。
首先,你需要使用类似于以下的命令将文件复制到主目录中:
```
$ mkdir ~/.dwm
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
```
你可以通过运行 `man dwm-start` 来获取确切的路径。
其次,只需编辑 `~/.dwm/config.h` 文件。例如,让我们配置一个新的快捷方式:通过按 `Alt+Shift+L` 来锁定屏幕。
考虑到我们已经安装了本文前面提到的 `slock` 包,我们需要在文件中添加以下两行以使其工作:
在 `/* commands */` 注释下,添加:
```
static const char *slockcmd[] = { "slock", NULL };
```
添加下列行到 `static Key keys[]` 中:
```
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
```
最终,它应该看起来如下:
```
...
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[] = { "st", NULL };
static const char *slockcmd[] = { "slock", NULL };
static Key keys[] = {
/* modifier key function argument */
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
...
```
保存文件。
最后,按 `Alt+Shift+q` 注销,然后重新登录。`dwm-user` 包提供的脚本将识别你已更改主目录中的`config.h` 文件,并会在登录时重新编译 dwm。因为 dwm 非常小,它快到你甚至都不会注意到它重新编译了。
你现在可以尝试按 `Alt+Shift+L` 锁定屏幕,然后输入密码并按回车键再次登录。
### 总结
如果你崇尚极简主义并想要一个非常快速而功能强大的窗口管理器,dwm 可能正是你一直在寻找的。但是,它可能不适合初学者,你可能需要做许多其他配置才能按照你的喜好进行配置。
要了解有关 dwm 的更多信息,请参阅该项目的主页: <https://dwm.suckless.org/>。
---
via: <https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/>
作者:[Adam Šamalík](https://fedoramagazine.org/author/asamalik/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try *dwm* — dynamic window manager. Written in under 2000 standard lines of code, dwm is extremely fast yet powerful and highly customizable window manager.
You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through using keyboard shortcuts. This article helps you get started using dwm.
**Installation**
To install dwm on Fedora, run:
$ sudo dnf install dwm dwm-user
The *dwm* package installs the window manager itself, and the *dwm-user* package significantly simplifies configuration which will be explained later in this article.
Additionally, to be able to lock the screen when needed, we’ll also install *slock* — a simple X display locker.
$ sudo dnf install slock
However, you can use a different one based on your personal preference.
**Quick start**
To start dwm, choose the *dwm-user* option on the login screen.

After you log in, you’ll see a very simple desktop. In fact, the only thing there will be a bar at the top listing our nine tags that represent workspaces and a *[]=* symbol that represents the layout of your windows.
### Launching applications
Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing *Alt+p* and typing the name of the app followed by *Enter*. There’s also a shortcut *Alt+Shift+Enter* for opening a terminal.
Now that some apps are running, have a look at the layouts.
### Layouts
There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
The tiling layout, represented by *[]=* on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing *Alt+t.*

The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
To swap windows between the two areas, hover your mouse over one in the stack area and press *Alt+Enter* to swap it with the one in the master area.

The monocle layout, represented by *[N]* on the top bar, makes your primary window take the whole screen. You can switch to it by pressing *Alt+m*.
Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is *Alt+f* and the symbol on the top bar is *><>*.
### Workspaces and tags
Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press *Alt+1..9.* You can even view multiple tags at once by clicking on their number using the secondary mouse button.
Windows can be moved between different tags by highlighting them using your mouse, and pressing *Alt+Shift+1..9.*
**Configuration**
To make dwm as minimalistic as possible, it doesn’t use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don’t worry, in Fedora it’s as simple as just editing one file in your home directory and everything else happens in the background thanks to the *dwm-user* package provided by the maintainer in Fedora.
First, you need to copy the file into your home directory using a command similar to the following:
$ mkdir ~/.dwm
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
You can get the exact path by running *man dwm-start.*
Second, just edit the *~/.dwm/config.h* file. As an example, let’s configure a new shortcut to lock the screen by pressing *Alt+Shift+L*.
Considering we’ve installed the *slock* package mentioned earlier in this post, we need to add the following two lines into the file to make it work:
Under the `/* commands */` comment, add:
static const char *slockcmd[] = { "slock", NULL };
And the following line into *static Key keys[]*:
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
In the end, it should look as follows (added lines are highlighted):
...
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[] = { "st", NULL };
static const char *slockcmd[] = { "slock", NULL };
static Key keys[] = {
/* modifier                     key        function        argument */
{ MODKEY|ShiftMask,             XK_l,      spawn,          {.v = slockcmd } },
{ MODKEY,                       XK_p,      spawn,          {.v = dmenucmd } },
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
...
Save the file.
Finally, just log out by pressing *Alt+Shift+q* and log in again. The scripts provided by the *dwm-user* package will recognize that you have changed the *config.h* file in your home directory and recompile dwm on login. And because dwm is so tiny, it’s fast enough you won’t even notice it.
You can try locking your screen now by pressing *Alt+Shift+L*, and then logging back in again by typing your password and pressing enter.
**Conclusion**
If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you’ve been looking for. However, it probably isn’t for beginners. There might be a lot of additional configuration you’ll need to do in order to make it just as you like it.
To learn more about dwm, see the project’s homepage at [https://dwm.suckless.org/](https://dwm.suckless.org/).
## Kerel
Any people favoring dwm over i3wm?
## Adam Šamalík
I think it depends on what you want. i3 is probably easier to configure and customize, there are more resources for it also. dwm on the other hand is really tiny and lets you build a much lighter setup. I used both and I liked both for different reasons.
## Kostas
Looks like an nice window manager. If i want to try it out on a headless system, like Fedora Server, the only other requirement is to install X.org ?
## Saša
If you’re not going to compile it yourself, yes. If you compile it yourself, you’ll need additional libraries for compilation. I’ve been using fedora minimal install with suckless tools for a few months and I am loving it. If you’re interested in making your own suckless setup from scratch take a look at my configuration (https://github.com/saleone/configs). There is an Ansible playbook that you can run right after the base system is installed to set up everything (but you should adjust it based on your computer and needs).
## Adam Šamalík
Nice! Did you try the dwm-user package? It recompiles dwm automatically on login when you change the config.h in your home. It’s really cool.
## Saša
I haven’t tried it. I’ve made my own changes to dwm source and applied a few patches from their website which do not only affect config.h file (autostart, layouts per tag, …). It’s much easier for me to copy my repo than to apply series of patches even if I could change the dwm source with dwm-user package. Doing manual compilation is just one more step, and that is just calling
, so its not a big deal, you still have to log out.
## Adam Šamalík
Not sure if I remember exactly, but when I was building a custom desktop from Fedora minimal I installed X, input and video drivers, xinit, and dwm. My laptop had an integrated intel graphics. I think these were the packages:
xorg-x11-server-Xorg
xorg-x11-drv-evdev
xorg-x11-drv-intel
xinit
dwm
dwm-user
And then I created ~/.xinitrc and added “dwm-user” to it.
Sorry I don’t have that sytem around anymore, would have looked otherwise. Good luck!
## Ilia
Oh, it is my favorite tiling vm.
## Adam Šamalík
I really like it too for how tiny it is.
## Jayanth Varma B
Really impressed by this window manager. If all goes fine I am going to make a shift from XFCE4 to dwm.
## Adam Šamalík
Glad you like it. Good luck!
## owczarz
Haha 🙂 I just have switched from XFCE to i3 😀
Congrats on your decision 🙂
## Gustavo Murillo
I don’t know if this is added by default, but if it’s not you The next line should be added to the dwm session:
dbus-update-activation-environment --systemd DBUS_SESSION_BUS_ADDRESS DISPLAY XAUTHORITY &
If it’s not set some applications don’t launch or take time to launch. Like gnome-terminal and nautilus.
I add it in .xinitrc and launch with xinit, the maintainer should make a patch with this if it’s not already set.
## Adam Šamalík
I think it’s meant to be really minimalistic with as little dependencies as possible, an also flexible — like potentially running without dbus. That’s probably why there’s as little default settings as possible. As I understand it, it’s targeted at advanced users anyway and it might be expected that you need to do things in .xinitrc to make it run the way you like it. But I’m sure the maintainer will give you a better answer than me. 🙂
## Pierre Stassen
Hi, nice to see an article on dwm!
I use it and I love it, but on Fedora it feels like a little step backwards because it’s a window manager for X. On the other hand Sway, the i3-based alternative for Wayland, is not working flawless.
## Lena Kelly
Manger?
## Paul W. Frields
@Lena: Thanks, fixed! |
11,236 | 如何在 Ubuntu 18.04 LTS 中获取 Linux 5.0 内核 | https://itsfoss.com/ubuntu-hwe-kernel/ | 2019-08-17T10:11:02 | [
"内核"
] | https://linux.cn/article-11236-1.html |
>
> 最近发布的 Ubuntu 18.04.3 包括 Linux 5.0 内核中的几个新功能和改进,但默认情况下没有安装。本教程演示了如何在 Ubuntu 18.04 LTS 中获取 Linux 5 内核。
>
>
>

[Ubuntu 18.04 的第三个“小数点版本”已经发布](https://ubuntu.com/blog/enhanced-livepatch-desktop-integration-available-with-ubuntu-18-04-3-lts),它带来了新的稳定版本的 GNOME 组件、livepatch 桌面集成和内核 5.0。
可是等等!什么是“<ruby> 小数点版本 <rt> point release </rt></ruby>”?让我先解释一下。
### Ubuntu LTS 小数点版本
Ubuntu 18.04 于 2018 年 4 月发布,由于它是一个长期支持 (LTS) 版本,它将一直支持到 2023 年。从那时起,已经有许多 bug 修复、安全更新和软件升级。如果你今天下载 Ubuntu 18.04,你需要[在安装 Ubuntu 后首先安装这些更新](https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/)。
当然,这不是一种理想情况。这就是 Ubuntu 提供这些“小数点版本”的原因。小数点版本包含所有功能和安全更新以及自 LTS 版本首次发布以来添加的 bug 修复。如果你今天下载 Ubuntu,你会得到 Ubuntu 18.04.3 而不是 Ubuntu 18.04。这节省了在新安装的 Ubuntu 系统上下载和安装数百个更新的麻烦。
好了!现在你知道“小数点版本”的概念了。你如何升级到这些小数点版本?答案很简单。只需要像平时一样[更新你的 Ubuntu 系统](https://itsfoss.com/update-ubuntu/),这样你就已经处于最新的小数点版本上了。
你可以[查看 Ubuntu 版本](https://itsfoss.com/how-to-know-ubuntu-unity-version/)来了解正在使用的版本。我检查了一下,因为我用的是 Ubuntu 18.04.3,我以为我的内核会是 5。当我[查看 Linux 内核版本](https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/)时,它仍然是基本内核 4.15。

这是为什么?如果 Ubuntu 18.04.3 有 Linux 5.0 内核,为什么它仍然使用 Linux 4.15 内核?这是因为你必须通过选择 LTS <ruby> 支持栈 <rt> Enablement Stack </rt></ruby>(通常称为 HWE)手动请求在 Ubuntu LTS 中安装新内核。
### 使用 HWE 在 Ubuntu 18.04 中获取 Linux 5.0 内核
默认情况下,Ubuntu LTS 将保持在最初发布的 Linux 内核上。<ruby> <a href="https://wiki.ubuntu.com/Kernel/LTSEnablementStack"> 硬件支持栈 </a> <rt> hardware enablement stack </rt></ruby>(HWE)为现有的 Ubuntu LTS 版本提供了更新的内核和 xorg 支持。
最近发生了一些变化。如果你下载了 Ubuntu 18.04.2 或更新的桌面版本,那么就会为你启用 HWE,默认情况下你将获得新内核以及常规更新。
对于服务器版本以及下载了 18.04 和 18.04.1 的人员,你需要安装 HWE 内核。完成后,你将获得 Ubuntu 提供的更新的 LTS 版本内核。
要在 Ubuntu 桌面上安装 HWE 内核以及更新的 xorg,你可以在终端中使用此命令:
```
sudo apt install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04
```
如果你使用的是 Ubuntu 服务器版,那么就不会有 xorg 选项。所以只需在 Ubuntu 服务器版中安装 HWE 内核:
```
sudo apt-get install --install-recommends linux-generic-hwe-18.04
```
完成 HWE 内核的安装后,重启系统。现在你应该拥有更新的 Linux 内核了。
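重启后,你可以用 `uname -r` 确认当前运行的内核版本。输出应该类似 `5.0.0-xx-generic`(具体的小版本号取决于当时可用的更新):

```
uname -r
```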
### 你在 Ubuntu 18.04 中获取 5.0 内核了么?
请注意,下载并安装了 Ubuntu 18.04.2 的用户已经启用了 HWE。所以这些用户将能轻松获取 5.0 内核。
是否值得费这番功夫在 Ubuntu 中启用 HWE 内核呢?这完全取决于你。[Linux 5.0 内核](https://itsfoss.com/linux-kernel-5/)有几项性能改进和更好的硬件支持。你将从新内核获益。
你怎么看?你会安装 5.0 内核还是宁愿留在 4.15 内核上?
---
via: <https://itsfoss.com/ubuntu-hwe-kernel/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | *The recently released Ubuntu 18.04.3 includes Linux Kernel 5.0 among several new features and improvements but you won’t get it by default. This tutorial demonstrates how to get Linux Kernel 5 in Ubuntu 18.04 LTS.*
The [third point release of Ubuntu 18.04 is here](https://ubuntu.com/blog/enhanced-livepatch-desktop-integration-available-with-ubuntu-18-04-3-lts) and it brings new stable versions of GNOME component, livepatch desktop integration and kernel 5.0.
But wait! What is a point release? Let me explain it to you first.
## Ubuntu LTS point release
Ubuntu 18.04 was released in April 2018 and since it’s a long term support (LTS) release, it will be supported till 2023. There have been a number of bug fixes, security updates and software upgrades since then. If you download Ubuntu 18.04 today, you’ll have to install all those updates as one of the first [things to do after installing Ubuntu 18.04](https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/).
That, of course, is not an ideal situation. This is why Ubuntu provides these “point releases”. A point release consists of all the feature and security updates along with the bug fixes that has been added since the initial release of the LTS version. If you download Ubuntu today, you’ll get Ubuntu 18.04.3 instead of Ubuntu 18.04. This saves the trouble of downloading and installing hundreds of updates on a newly installed Ubuntu system.
Okay! So now you know the concept of point release. How do you upgrade to these point releases? The answer is simple. Just [update your Ubuntu system](https://itsfoss.com/update-ubuntu/) like you normally do and you’ll be already on the latest point release.
You can [check Ubuntu version](https://itsfoss.com/how-to-know-ubuntu-unity-version/) to see which point release you are using. I did a check and since I was on Ubuntu 18.04.3, I assumed that I would have gotten Linux kernel 5 as well. When I [check the Linux kernel version](https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/), it was still the base kernel 4.15.

Why is that? If Ubuntu 18.04.3 has Linux kernel 5.0 then why does it still have Linux Kernel 4.15? It’s because you have to manually ask for installing the new kernel in Ubuntu LTS by opting for LTS Enablement Stack popularly known as HWE.
## Get Linux Kernel 5.0 in Ubuntu 18.04 with Hardware Enablement Stack
By default, Ubuntu LTS release stay on the same Linux kernel they were released with. The [hardware enablement stack](https://wiki.ubuntu.com/Kernel/LTSEnablementStack) (HWE) provides newer kernel and xorg support for existing Ubuntu LTS release.
Things have been changed recently. If you downloaded Ubuntu 18.04.2 or newer desktop version, HWE is enabled for you and you’ll get the new kernel along with the regular updates by default.
For server versions and people who downloaded 18.04 and 18.04.1, you’ll have to install the HWE kernel. Once you do that, you’ll get the newer kernel releases provided by Ubuntu to the LTS version.
To install HWE kernel in Ubuntu desktop along with newer xorg, you can use this command in the terminal:
`sudo apt install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04`
If you are using Ubuntu Server edition, you won’t have the xorg option. So just install the HWE kernel in Ubuntu server:
`sudo apt-get install --install-recommends linux-generic-hwe-18.04`
Once you finish installing the HWE kernel, restart your system. Now you should have the newer Linux kernel.
**Are you getting kernel 5.0 in Ubuntu 18.04?**
Do note that HWE is enabled for people who downloaded and installed Ubuntu 18.04.2. So these users will get Kernel 5.0 without any trouble.
Should you go by the trouble of enabling HWE kernel in Ubuntu? It’s entirely up to you. [Linux Kernel 5.0](https://itsfoss.com/linux-kernel-5/) has several performance improvement and better hardware support. You’ll get the benefit of the new kernel.
What do you think? Will you install kernel 5.0 or will you rather stay on the kernel 4.15? |
11,238 | 你的 Linux 系统开机时间已经击败了 99% 的电脑 | https://itsfoss.com/check-boot-time-linux/ | 2019-08-17T10:47:15 | [
"开机",
"时间"
] | https://linux.cn/article-11238-1.html | 当你打开系统电源时,你会等待制造商的徽标出现,屏幕上可能会显示一些消息(以非安全模式启动),然后是 [Grub](https://www.gnu.org/software/grub/) 屏幕、操作系统加载屏幕以及最后的登录屏。
你检查过这花费了多长时间么?也许没有。除非你真的需要知道,否则你不会在意开机时间。
但是如果你很想知道你的 Linux 系统需要很长时间才能启动完成呢?使用秒表是一种方法,但在 Linux 中,你有一种更好、更轻松地了解系统启动时间的方法。
### 在 Linux 中使用 systemd-analyze 检查启动时间

无论你是否喜欢,[systemd](https://en.wikipedia.org/wiki/Systemd) 运行在大多数流行的 Linux 发行版中。systemd 有许多管理 Linux 系统的工具。其中一个就是 `systemd-analyze`。
`systemd-analyze` 命令为你提供最近一次启动时运行的服务数量以及消耗时间的详细信息。
如果在终端中运行以下命令:
```
systemd-analyze
```
你将获得总启动时间以及固件、引导加载程序、内核和用户空间所消耗的时间:
```
Startup finished in 7.275s (firmware) + 13.136s (loader) + 2.803s (kernel) + 12.488s (userspace) = 35.704s
graphical.target reached after 12.408s in userspace
```
正如你在上面的输出中所看到的,我的系统花了大约 35 秒才进入可以输入密码的页面。我用的是戴尔 XPS 的 Ubuntu 版。它使用 SSD 存储,尽管如此,它启动还是需要这么长时间。
不是那么令人印象深刻,是吗?为什么不共享你们系统的启动时间?我们来比较吧。
你可以使用以下命令将启动时间进一步细分为每个单元:
```
systemd-analyze blame
```
这将生成大量输出,所有服务按所用时间的降序列出。请注意,这些服务是并行运行的,因此各项耗时加起来并不等于总的启动时间。
```
7.347s plymouth-quit-wait.service
6.198s NetworkManager-wait-online.service
3.602s plymouth-start.service
3.271s plymouth-read-write.service
2.120s apparmor.service
1.503s user@1000.service
1.213s motd-news.service
908ms snapd.service
861ms keyboard-setup.service
739ms fwupd.service
702ms bolt.service
672ms dev-nvme0n1p3.device
608ms systemd-backlight@backlight:intel_backlight.service
539ms snap-core-7270.mount
504ms snap-midori-451.mount
463ms snap-screencloud-1.mount
446ms snapd.seeded.service
440ms snap-gtk\x2dcommon\x2dthemes-1313.mount
420ms snap-core18-1066.mount
416ms snap-scrcpy-133.mount
412ms snap-gnome\x2dcharacters-296.mount
```
#### 额外提示:改善启动时间
如果查看此输出,你可以看到网络管理器和 [plymouth](https://wiki.archlinux.org/index.php/Plymouth) 都消耗了大量的启动时间。
Plymouth 负责你在 Ubuntu 和其他发行版中在登录页面出现之前的引导页面。网络管理器负责互联网连接,可以关闭它来加快启动时间。不要担心,在你登录后,你可以正常使用 wifi。
```
sudo systemctl disable NetworkManager-wait-online.service
```
如果要还原更改,可以使用以下命令:
```
sudo systemctl enable NetworkManager-wait-online.service
```
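想确认某个服务当前是否处于启用状态,可以使用 `systemctl is-enabled`(这是 systemctl 的标准子命令):

```
systemctl is-enabled NetworkManager-wait-online.service
```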
请不要在不知道用途的情况下自行禁用各种服务。这可能会产生危险的后果。
现在你知道了如何检查 Linux 系统的启动时间,为什么不在评论栏分享你的系统的启动时间?
---
via: <https://itsfoss.com/check-boot-time-linux/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | When you power on your system, you wait for the manufacturer’s logo to come up, a few messages on the screen perhaps (booting in insecure mode), [Grub](https://www.gnu.org/software/grub/) screen, operating system loading screen and finally the login screen.
Did you check how long did it take? Perhaps not. Unless you really need to know, you won’t bother with the boot time details.
But what if you are curious to know long long your Linux system takes to boot? Running a stopwatch is one way to find that but in Linux, you have better and easier ways to find out your system’s start up time.
## Checking boot time in Linux with systemd-analyze

Like it or not, [systemd](https://en.wikipedia.org/wiki/Systemd) is running on most of the popular Linux distributions. The systemd has a number of utilities to manage your Linux system. One of those utilities is systemd-analyze.
The systemd-analyze command gives you a detail of how many services ran at the last start up and how long they took.
If you run the following command in the terminal:
systemd-analyze
You’ll get the total boot time along with the time taken by firmware, boot loader, kernel and the userspace:
Startup finished in 7.275s (firmware) + 13.136s (loader) + 2.803s (kernel) + 12.488s (userspace) = 35.704s
graphical.target reached after 12.408s in userspace
As you can see in the output above, it took about 35 seconds for my system to reach the screen where I could enter my password. I am using Dell XPS Ubuntu edition. It uses SSD storage and despite of that it takes this much time to start.
Not that impressive, is it? Why don’t you share your system’s boot time? Let’s compare.
You can further breakdown the boot time into each unit with the following command:
systemd-analyze blame
This will produce a huge output with all the services listed in the descending order of the time taken.
```
7.347s plymouth-quit-wait.service
6.198s NetworkManager-wait-online.service
3.602s plymouth-start.service
3.271s plymouth-read-write.service
2.120s apparmor.service
1.503s user@1000.service
1.213s motd-news.service
908ms snapd.service
861ms keyboard-setup.service
739ms fwupd.service
702ms bolt.service
672ms dev-nvme0n1p3.device
608ms systemd-backlight@backlight:intel_backlight.service
539ms snap-core-7270.mount
504ms snap-midori-451.mount
463ms snap-screencloud-1.mount
446ms snapd.seeded.service
440ms snap-gtk\x2dcommon\x2dthemes-1313.mount
420ms snap-core18-1066.mount
416ms snap-scrcpy-133.mount
412ms snap-gnome\x2dcharacters-296.mount
```
Please keep in mind that the services run in parallel.
### Bonus Tip: Improving boot time
If you look at this output, you can see that both network manager and [plymouth](https://wiki.archlinux.org/index.php/Plymouth) take a huge bunch of boot time.
Plymouth is responsible for that boot splash screen you see before the login screen in Ubuntu and other distributions. Network manager is responsible for the internet connection and may be turned off to speed up boot time. Don’t worry, once you log in, you’ll have wifi working normally.
sudo systemctl disable NetworkManager-wait-online.service
If you want to revert the change, you can use this command:
sudo systemctl enable NetworkManager-wait-online.service
Now, please don’t go disabling various services on your own without knowing what it is used for. It may have dangerous consequences.
Similarly, you can also [use systemd to investigate why your Linux system takes a long time to shut down](https://itsfoss.com/long-shutdown-linux/).
*Now that you know how to check the boot time of your Linux system, why not share your system’s boot time in the comment section?* |
11,239 | 使用 MacSVG 创建 SVG 动画 | https://opensource.com/article/18/10/macsvg-open-source-tool-animation | 2019-08-18T00:10:15 | [
"svg"
] | https://linux.cn/article-11239-1.html |
>
> 开源 SVG:墙上的魔法字。
>
>
>

新巴比伦的摄政王[伯沙撒](https://en.wikipedia.org/wiki/Belshazzar)没有注意到他在盛宴期间神奇地[书写在墙上的文字](https://en.wikipedia.org/wiki/Belshazzar%27s_feast)。但是,如果他在公元前 539 年有一台笔记本电脑和良好的互联网连接,他可能会通过在浏览器上阅读 SVG 来避开那些讨厌的波斯人。
出现在网页上的动画文本和对象是建立用户兴趣和参与度的好方法。有几种方法可以实现这一点,例如视频嵌入、动画 GIF 或幻灯片 —— 但你也可以使用[可缩放矢量图形(SVG)](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics)。
SVG 图像与 JPG 不同,因为它可以缩放而不会丢失其分辨率。矢量图像是由点而不是像素创建的,所以无论它放大到多大,它都不会失去分辨率或像素化。充分利用可缩放的静态图像的一个例子是网站的徽标。
### 动起来,动起来
你可以使用多种绘图程序创建 SVG 图像,包括开源的 [Inkscape](https://inkscape.org/) 和 Adobe Illustrator。让你的图像“能动起来”需要更多的努力。幸运的是,有一些开源解决方案甚至可以引起伯沙撒的注意。
[MacSVG](https://macsvg.org/) 是一款可以让你的图像动起来的工具。你可以在 [GitHub](https://github.com/dsward2/macSVG) 上找到源代码。
根据其[官网](https://macsvg.org/)说,MacSVG 由阿肯色州康威的 Douglas Ward 开发,是一个“用于设计 HTML5 SVG 艺术和动画的开源 Mac OS 应用程序”。
我想使用 MacSVG 来创建一个动画签名。我承认我发现这个过程有点令人困惑,并且在我第一次尝试创建一个实际的动画 SVG 图像时失败了。

重要的是,首先要了解是什么让“墙上的文字”真正“写”出来。
动画文字背后的属性是 [stroke-dasharray](https://gist.github.com/mbostock/5649592)。将该术语分成三个单词有助于解释正在发生的事情:“stroke” 是指用笔(无论是物理的笔还是数字化笔)制作的线条或笔画。“dash” 意味着将笔划分解为一系列折线。“array” 意味着将整个东西生成为数组。这是一个简单的概述,但它可以帮助我理解应该发生什么以及为什么。
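为了直观理解,下面是一个极简的(假设性的)示例:一条长度为 80 的直线路径,通过对 `stroke-dasharray` 做动画,让笔画从“全是间隙”渐变为“全是实线”,看上去就像被一笔画出来一样:

```
<svg viewBox="0 0 100 40" xmlns="http://www.w3.org/2000/svg">
  <path d="M10,20 L90,20" stroke="black" stroke-width="2" fill="none">
    <animate attributeName="stroke-dasharray" values="0,80;80,0"
             dur="2s" repeatCount="indefinite" />
  </path>
</svg>
```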
使用 MacSVG,你可以导入图形(.PNG)并使用钢笔工具描绘书写路径。我使用了草书来表示我的名字。然后,只需应用该属性来让书法动画起来、增加和减少笔划的粗细、改变其颜色等等。完成后,动画的书法将导出为 .SVG 文件,并可以在网络上使用。除书写外,MacSVG 还可用于许多不同类型的 SVG 动画。
### 在 WordPress 中书写
我准备在我的 [WordPress](https://macharyas.com/) 网站上传和分享我的 SVG 示例,但我发现 WordPress 不允许进行 SVG 媒体导入。庆幸的是,我找到了一个方便的插件:Benbodhi 的 [SVG 支持](https://wordpress.org/plugins/svg-support/)插件允许快速、轻松地导入我的 SVG,就像我将 JPG 导入媒体库一样。我能够向世界各地的巴比伦人展示我[写在墙上的魔法字](https://macharyas.com/index.php/2018/10/14/open-source-svg/)。
我在 [Brackets](http://brackets.io/) 中打开了我的 SVG 源代码,内容如下:
```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
```
你会使用 MacSVG 做什么?
---
via: <https://opensource.com/article/18/10/macsvg-open-source-tool-animation>
作者:[Jeff Macharyas](https://opensource.com/users/rikki-endsley) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The Neo-Babylonian regent [Belshazzar](https://en.wikipedia.org/wiki/Belshazzar) did not heed [the writing on the wall](https://en.wikipedia.org/wiki/Belshazzar%27s_feast) that magically appeared during his great feast. However, if he had had a laptop and a good internet connection in 539 BC, he might have staved off those pesky Persians by reading the SVG on the browser.
Animating text and objects on web pages is a great way to build user interest and engagement. There are several ways to achieve this, such as a video embed, an animated GIF, or a slideshow—but you can also use [scalable vector graphics](https://en.wikipedia.org/wiki/Scalable_Vector_Graphics) (SVG).
An SVG image is different from, say, a JPG, because it is scalable without losing its resolution. A vector image is created by points, not dots, so no matter how large it gets, it will not lose its resolution or pixelate. An example of a good use of scalable, static images would be logos on websites.
## Move it, move it
You can create SVG images with several drawing programs, including open source [Inkscape](https://inkscape.org/) and Adobe Illustrator. Getting your images to “do something” requires a bit more effort. Fortunately, there are open source solutions that would get even Belshazzar’s attention.
[MacSVG](https://macsvg.org/) is one tool that will get your images moving. You can find the source code on [GitHub](https://github.com/dsward2/macSVG).
Developed by Douglas Ward of Conway, Arkansas, MacSVG is an “open source Mac OS app for designing HTML5 SVG art and animation,” according to its [website](https://macsvg.org/).
I was interested in using MacSVG to create an animated signature. I admit that I found the process a bit confusing and failed at my first attempts to create an actual animated SVG image.

It is important to first learn what makes “the writing on the wall” actually write.
The attribute behind the animated writing is [stroke-dasharray](https://gist.github.com/mbostock/5649592). Breaking the term into three words helps explain what is happening: *Stroke* refers to the line or stroke you would make with a pen, whether physical or digital. *Dash* means breaking the stroke down into a series of dashes. *Array* means producing the whole thing into an array. That’s a simple overview, but it helped me understand what was supposed to happen and why.
With MacSVG, you can import a graphic (.PNG) and use the pen tool to trace the path of the writing. I used a cursive representation of my first name. Then it was just a matter of applying the attributes to animate the writing, increase and decrease the thickness of the stroke, change its color, and so on. Once completed, the animated writing was exported as an .SVG file and was ready for use on the web. MacSVG can be used for many different types of SVG animation in addition to handwriting.
## The writing is on the WordPress
I was ready to upload and share my SVG example on my [WordPress](https://macharyas.com/) site, but I discovered that WordPress does not allow for SVG media imports. Fortunately, I found a handy plugin: Benbodhi’s [SVG Support](https://wordpress.org/plugins/svg-support/) allowed a quick, easy import of my SVG the same way I would import a JPG to my Media Library. I was able to showcase my [writing on the wall](https://macharyas.com/index.php/2018/10/14/open-source-svg/) to Babylonians everywhere.
I opened the source code of my SVG in [Brackets](http://brackets.io/), and here are the results:
```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://web.resource.org/cc/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" height="360px" style="zoom: 1;" cursor="default" id="svg_document" width="480px" baseProfile="full" version="1.1" preserveAspectRatio="xMidYMid meet" viewBox="0 0 480 360"><title id="svg_document_title">Path animation with stroke-dasharray</title><desc id="desc1">This example demonstrates the use of a path element, an animate element, and the stroke-dasharray attribute to simulate drawing.</desc><defs id="svg_document_defs"></defs><g id="main_group"></g><path stroke="#004d40" id="path2" stroke-width="9px" d="M86,75 C86,75 75,72 72,61 C69,50 66,37 71,34 C76,31 86,21 92,35 C98,49 95,73 94,82 C93,91 87,105 83,110 C79,115 70,124 71,113 C72,102 67,105 75,97 C83,89 111,74 111,74 C111,74 119,64 119,63 C119,62 110,57 109,58 C108,59 102,65 102,66 C102,67 101,75 107,79 C113,83 118,85 122,81 C126,77 133,78 136,64 C139,50 147,45 146,33 C145,21 136,15 132,24 C128,33 123,40 123,49 C123,58 135,87 135,96 C135,105 139,117 133,120 C127,123 116,127 120,116 C124,105 144,82 144,81 C144,80 158,66 159,58 C160,50 159,48 161,43 C163,38 172,23 166,22 C160,21 155,12 153,23 C151,34 161,68 160,78 C159,88 164,108 163,113 C162,118 165,126 157,128 C149,130 152,109 152,109 C152,109 185,64 185,64 " fill="none" transform=""><animate values="0,1739;1739,0;" attributeType="XML" begin="0; animate1.end+5s" id="animateSig1" repeatCount="indefinite" attributeName="stroke-dasharray" fill="freeze" dur="2"></animate></path></svg>
```
What would you use MacSVG for?
|
11,241 | 开源新闻综述:GNOME 和 KDE 达成合作、Nvidia 开源 GPU 文档 | https://opensource.com/article/19/8/news-august-17 | 2019-08-18T10:38:25 | [
"GNOME",
"KDE",
"GPU"
] | /article-11241-1.html |
>
> 不要错过两周以来最大的开源头条新闻。
>
>
>

在本期开源新闻综述中,我们将介绍两种新的强大数据可视化工具、Nvidia 开源其 GPU 文档、激动人心的新工具、确保自动驾驶汽车的固件安全等等!
### GNOME 和 KDE 在 Linux 桌面上达成合作
Linux 在桌面计算机上一直处于分裂状态。最近的一篇[报道](https://www.zdnet.com/article/gnome-and-kde-work-together-on-the-linux-desktop/)称:“两个主要的 Linux 桌面竞争对手,[GNOME 基金会](https://www.gnome.org/) 和 [KDE](https://kde.org/) 已经同意合作。”
这两个组织将成为今年 11 月在巴塞罗那举办的 [Linux App Summit(LAS)2019](https://linuxappsummit.org/) 的赞助商。这一举措似乎在某种程度上反映出,桌面计算已不再是值得争夺主导地位的战场。无论是什么原因,Linux 桌面的粉丝们都有新的理由希望未来出现一个标准化的 GUI 环境。
### 新的开源数据可视化工具
这个世界上很少有东西不是由数据驱动的。但除非数据以人们可以与之交互的形式呈现,否则它并没有多大用处。最近开源的两个数据可视化项目正在尝试让数据变得更有用。
第一个工具名为 **Neuroglancer**,由 [Google 的研究团队](https://www.cbronline.com/news/brain-mapping-google-ai)创建。它“使神经科学家能够在交互式可视化中建立大脑神经通路的 3D 模型。”Neuroglancer 通过使用神经网络追踪大脑中的神经元路径并构建完整的可视化来实现这一点。科学家已经使用了 Neuroglancer(你可以[从 GitHub 取得](https://github.com/google/neuroglancer))通过扫描果蝇的大脑建立了一个交互式地图。
第二个工具来自一个不太能想到的来源:澳大利亚信号理事会。这是该国类似 NSA 的机构,它“开源了[内部数据可视化和分析工具](https://www.computerworld.com.au/article/665286/australian-signals-directorate-open-sources-data-analysis-tool/)之一。”这个被称为 **[Constellation](https://www.constellation-app.com/)** 的工具可以“识别复杂数据集中的趋势和模式,并且能够扩展到‘数十亿输入’。”该机构总干事迈克•伯吉斯表示,他希望“这一工具将有助于产生有利于所有澳大利亚人的科学和其他方面的突破。”鉴于它是开源的,它可以使整个世界受益。
### Nvidia 开始发布 GPU 文档
多年来,图形处理单元(GPU)制造商 Nvidia 并没有做出什么让开源项目轻松开发其产品的驱动程序的努力。现在,该公司通过[发布 GPU 硬件文档](https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Open-GPU-Docs)向这些项目迈出了一大步。
该公司根据 MIT 许可证发布的文档[可在 GitHub 上获取](https://github.com/nvidia/open-gpu-doc)。它涵盖了几个关键领域,如设备初始化、内存时钟/调整和电源状态。据硬件新闻网站 Phoronix 称,为 Nvidia GPU 开发开源驱动程序的 Nouveau 项目将是率先使用该文档来推动其开发工作的项目之一。
### 用于保护固件的新工具
似乎每周都有消息称,移动设备或连接互联网的小设备中又出现了新漏洞。通常,这些漏洞存在于控制设备的固件中。自动驾驶汽车服务 Cruise [发布了一个开源工具](https://arstechnica.com/information-technology/2019/08/self-driving-car-service-open-sources-new-tool-for-securing-firmware/),用于在这些漏洞成为问题之前捕获它们。
该工具被称为 [FwAnalyzer](https://github.com/cruise-automation/fwanalyzer)。它检查固件代码中是否存在许多潜在问题,包括“识别潜在危险的可执行文件”,并查明“任何因疏忽而遗留的调试代码”。Cruise 的工程师 Collin Mulliner 曾帮助开发该工具,他说通过在代码上运行 FwAnalyzer,固件开发人员“现在能够检测并防止各种安全问题。”
### 其它新闻
* [为什么洛杉矶决定将未来寄予开源](https://www.techrepublic.com/article/why-la-decided-to-open-source-its-future/)
* [麻省理工学院出版社发布了关于开源出版软件的综合报告](https://news.mit.edu/2019/mit-press-report-open-source-publishing-software-0808)
* [华为推出鸿蒙操作系统,不会放弃 Android 智能手机](https://www.itnews.com.au/news/huawei-unveils-harmony-operating-system-wont-ditch-android-for-smartphones-529432)
*一如既往地感谢 Opensource.com 的工作人员和主持人本周的帮助。*
---
via: <https://opensource.com/article/19/8/news-august-17>
作者:[Scott Nesbitt](https://opensource.com/users/scottnesbitt) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
11,243 | 如何在安装之前检查 Linux 软件包的版本? | https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/ | 2019-08-18T11:15:45 | [
"版本"
] | https://linux.cn/article-11243-1.html | 
大多数人都知道如何在 Linux 中[查找已安装软件包的版本](https://www.ostechnix.com/find-package-version-linux/),但是,你会如何查找那些还没有安装的软件包的版本呢?很简单!本文将介绍在 Debian 及其衍生品(如 Ubuntu)中,如何在软件包安装之前检查它的版本。对于那些想在安装之前知道软件包版本的人来说,这个小技巧可能会有所帮助。
### 在安装之前检查 Linux 软件包版本
在基于 DEB 的系统中,即使软件包还没有安装,也有很多方法可以查看它的版本。接下来,我将一一介绍。
#### 方法 1 – 使用 Apt
检查软件包的版本的懒人方法:
```
$ apt show <package-name>
```
**示例:**
```
$ apt show vim
```
**示例输出:**
```
Package: vim
Version: 2:8.0.1453-1ubuntu1.1
Priority: optional
Section: editors
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian Vim Maintainers <pkg-vim-maintainers@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 2,852 kB
Provides: editor
Depends: vim-common (= 2:8.0.1453-1ubuntu1.1), vim-runtime (= 2:8.0.1453-1ubuntu1.1), libacl1 (>= 2.2.51-8), libc6 (>= 2.15), libgpm2 (>= 1.20.7), libpython3.6 (>= 3.6.5), libselinux1 (>= 1.32), libtinfo5 (>= 6)
Suggests: ctags, vim-doc, vim-scripts
Homepage: https://vim.sourceforge.io/
Task: cloud-image, server
Supported: 5y
Download-Size: 1,152 kB
APT-Sources: http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
Description: Vi IMproved - enhanced vi editor
Vim is an almost compatible version of the UNIX editor Vi.
.
Many new features have been added: multi level undo, syntax
highlighting, command line history, on-line help, filename
completion, block operations, folding, Unicode support, etc.
.
This package contains a version of vim compiled with a rather
standard set of features. This package does not provide a GUI
version of Vim. See the other vim-* packages if you need more
(or less).
N: There is 1 additional record. Please use the '-a' switch to see it
```
正如你在上面的输出中看到的,`apt show` 命令显示了软件包许多重要的细节,例如:
1. 包名称,
2. 版本,
3. 来源(vim 来自哪里),
4. 维护者,
5. 包的主页,
6. 依赖,
7. 下载大小,
8. 简介,
9. 其他。
因此,Ubuntu 仓库中可用的 Vim 版本是 **8.0.1453**。如果我把它安装到我的 Ubuntu 系统上,就会得到这个版本。
或者,如果你不想看那么多的内容,那么可以使用 `apt policy` 这个命令:
```
$ apt policy vim
vim:
Installed: (none)
Candidate: 2:8.0.1453-1ubuntu1.1
Version table:
2:8.0.1453-1ubuntu1.1 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
2:8.0.1453-1ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```
甚至更短:
```
$ apt list vim
Listing... Done
vim/bionic-updates,bionic-security 2:8.0.1453-1ubuntu1.1 amd64
N: There is 1 additional version. Please use the '-a' switch to see it
```
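正如输出中的提示所说,加上 `-a`(`--all-versions`)选项就能列出该软件包的所有可用版本:

```
$ apt list -a vim
```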
`apt` 是 Ubuntu 最新版本的默认包管理器。因此,这个命令足以找到一个软件包的详细信息,给定的软件包是否安装并不重要。这个命令将简单地列出给定包的版本以及其他详细信息。
#### 方法 2 – 使用 Apt-get
要查看软件包的版本而不安装它,我们可以使用 `apt-get` 命令和 `-s` 选项。
```
$ apt-get -s install vim
```
**示例输出:**
```
NOTE: This is only a simulation!
apt-get needs root privileges for real execution.
Keep also in mind that locking is deactivated,
so don't depend on the relevance to the real current situation!
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
ctags vim-doc vim-scripts
The following NEW packages will be installed:
vim
0 upgraded, 1 newly installed, 0 to remove and 45 not upgraded.
Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64])
```
这里,`-s` 选项代表 **模拟**。正如你在输出中看到的,它不执行任何操作。相反,它只是模拟执行,好让你知道在安装 Vim 时会发生什么。
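顺便一提,`-s` 还有几个等价的长选项,效果完全相同:

```
$ apt-get --simulate install vim
$ apt-get --dry-run install vim
```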
你可以将 `install` 选项替换为 `upgrade`,以查看升级包时会发生什么。
```
$ apt-get -s upgrade vim
```
#### 方法 3 – 使用 Aptitude
在 Debian 及其衍生品中,`aptitude` 是 APT 包管理器的一个基于 ncurses(LCTT 译注:ncurses 是一个终端下基于文本的字符界面库)和命令行的前端。
使用 aptitude 来查看软件包的版本,只需运行:
```
$ aptitude versions vim
p 2:8.0.1453-1ubuntu1 bionic 500
p 2:8.0.1453-1ubuntu1.1 bionic-security,bionic-updates 500
```
你还可以使用模拟选项(`-s`)来查看安装或升级包时会发生什么。
```
$ aptitude -V -s install vim
The following NEW packages will be installed:
vim [2:8.0.1453-1ubuntu1.1]
0 packages upgraded, 1 newly installed, 0 to remove and 45 not upgraded.
Need to get 1,152 kB of archives. After unpacking 2,852 kB will be used.
Would download/install/remove packages.
```
这里,`-V` 标志用于显示软件包的详细信息。
```
$ aptitude -V -s upgrade vim
```
类似的,只需将 `install` 替换为 `upgrade` 选项,即可查看升级包会发生什么。
```
$ aptitude search vim -F "%c %p %d %V"
```
这里,
* `-F` 用于指定应使用哪种格式来显示输出,
* `%c` – 包的状态(已安装或未安装),
* `%p` – 包的名称,
* `%d` – 包的简介,
* `%V` – 包的版本。
当你不知道完整的软件包名称时,这非常有用。这个命令将列出包含给定字符串(即 vim)的所有软件包。
以下是上述命令的示例输出:
```
[...]
p vim Vi IMproved - enhanced vi editor 2:8.0.1453-1ub
p vim-tlib Some vim utility functions 1.23-1
p vim-ultisnips snippet solution for Vim 3.1-3
p vim-vimerl Erlang plugin for Vim 1.4.1+git20120
p vim-vimerl-syntax Erlang syntax for Vim 1.4.1+git20120
p vim-vimoutliner script for building an outline editor on top of Vim 0.3.4+pristine
p vim-voom Vim two-pane outliner 5.2-1
p vim-youcompleteme fast, as-you-type, fuzzy-search code completion engine for Vim 0+20161219+git
```
#### 方法 4 – 使用 Apt-cache
`apt-cache` 命令用于查询基于 Debian 的系统中的 APT 缓存。对于要在 APT 的包缓存上执行很多操作时,它很有用。一个很好的例子是我们可以从[某个仓库或 ppa 中列出已安装的应用程序](https://www.ostechnix.com/list-installed-packages-certain-repository-linux/)。
不仅是已安装的应用程序,我们还可以找到软件包的版本,即使它没有被安装。例如,以下命令将找到 Vim 的版本:
```
$ apt-cache policy vim
```
示例输出:
```
vim:
Installed: (none)
Candidate: 2:8.0.1453-1ubuntu1.1
Version table:
2:8.0.1453-1ubuntu1.1 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
2:8.0.1453-1ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
```
正如你在上面的输出中所看到的,Vim 并没有安装。如果你想安装它,你会知道它的版本是 **8.0.1453**。它还显示 vim 包来自哪个仓库。
#### 方法 5 – 使用 Apt-show-versions
在 Debian 和基于 Debian 的系统中,`apt-show-versions` 命令用于列出已安装和可用软件包的版本。它还会显示所有可升级软件包的列表。如果你的环境混用了稳定版和测试版软件源,这会非常方便。例如,如果你同时启用了稳定版和测试版仓库,那么你可以轻松地找出来自测试版仓库的应用程序列表,并可以一次性升级测试版仓库中的所有软件包。
默认情况下系统没有安装 `apt-show-versions`,你需要使用以下命令来安装它:
```
$ sudo apt-get install apt-show-versions
```
安装后,运行以下命令查找软件包的版本,例如 Vim:
```
$ apt-show-versions -a vim
vim:amd64 2:8.0.1453-1ubuntu1 bionic archive.ubuntu.com
vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-security security.ubuntu.com
vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-updates archive.ubuntu.com
vim:amd64 not installed
```
这里,`-a` 选项打印给定软件包的所有可用版本。
如果已经安装了给定的软件包,那么就不需要使用 `-a` 选项。在这种情况下,只需运行:
```
$ apt-show-versions vim
```
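如果你经常需要做这种查询,可以把上面任意一种方法包装成一个小脚本。下面是一个示例脚本(脚本名 `pkgver.sh` 是虚构的),它利用方法 4 中的 `apt-cache policy` 打印一个或多个软件包的候选版本:

```
#!/usr/bin/env bash
# pkgver.sh:打印给定软件包的候选版本(无论是否已安装)
# 用法:./pkgver.sh vim nano
for pkg in "$@"; do
    ver=$(apt-cache policy "$pkg" | awk '/Candidate:/ {print $2}')
    echo "$pkg: ${ver:-未找到}"
done
```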
差不多就是这些了。如果你还了解其他方法,请在下面的评论中分享,我会检查并更新本指南。
---
via: <https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,244 | 如何衡量一个开源社区的健康度 | https://opensource.com/article/19/8/measure-project | 2019-08-19T18:47:34 | [
"开源社区"
] | https://linux.cn/article-11244-1.html |
>
> 这比较复杂。
>
>
>

作为一个经常管理软件开发团队的人,多年来我一直很关注度量指标。一次又一次,我发现自己带领团队使用一个又一个的项目平台(例如 Jira、GitLab 和 Rally),生成了大量可测量的数据。随后,我总会投入大量时间,把有用的指标从记录平台中提取成我们能够理解的格式,然后使用这些指标对开发工作的许多方面做出更好的选择。
今年早些时候,我有幸在 [Linux 基金会](https://www.linuxfoundation.org/)遇到了一个名为<ruby> <a href="https://chaoss.community/"> 开源软件社区健康分析 </a> <rt> Community Health Analytics for Open Source Software </rt></ruby>(CHAOSS)的项目。该项目侧重于从各种来源收集和丰富指标,以便开源社区的利益相关者可以衡量他们项目的健康状况。
### CHAOSS 介绍
随着我对该项目的基本指标和目标越来越熟悉,一个问题在我的脑海中不断翻滚。什么是“健康”的开源项目,由谁来定义?
特定角色的人认为健康的东西可能另一个角色的人就不会这样认为。似乎可以用 CHAOSS 收集的细粒度数据进行市场细分实验,重点关注对特定角色可能最有意义的背景问题,以及 CHAOSS 收集哪些指标可能有助于回答这些问题。
CHAOSS 项目创建并维护了一套开源应用程序和度量标准定义,使得这个实验具有可能性,这包括:
* 许多基于服务器的应用程序,用于收集、聚合和丰富度量标准(例如 Augur 和 GrimoireLab)。
* ElasticSearch、Kibana 和 Logstash(ELK)的开源版本。
* 身份服务、数据分析服务和各种集成库。
在我过去负责的一个项目群中,有六个团队从事于复杂程度各不相同的项目,我们找到了一个简洁的工具,它允许我们用简单(或复杂)的 JQL 语句创建我们想要的任何类型的指标,然后在这些指标之间进行计算。不知不觉间,我们仅从 Jira 中就提取了 400 多个指标,而且还有更多指标来自手动收集的数据。
在项目结束时,我们认定这 400 个指标中的大多数,在*以我们的角色*做出决策时并不重要。最终,只有三个对我们真正重要:“缺陷去除效率”、“已完成的条目与承诺的条目之比”,以及“每个开发人员进行中的工作量”。这三个指标最重要,因为它们是我们对自己、客户和团队成员所做出的承诺,因此也最有意义。
带着这些通过经验得到的教训和对什么是健康的开源项目的问题,我跳进了 CHAOSS 社区,开始建立一套角色,以提供一种建设性的方法,从基于角色的角度回答这个问题。
CHAOSS 是一个开源项目,我们尝试使用民主共识来运作。因此,我决定使用<ruby> 组成分子 <rt> constituent </rt></ruby>这个词而不是利益相关者,因为它更符合我们作为开源贡献者的责任,以创建更具共生性的价值链。
虽然创建此组成模型的过程采用了特定的“目标-问题-度量”方法,但有许多方法可以进行细分。CHAOSS 贡献者已经开发了很好的模型,可以按照矢量进行细分,例如项目属性(例如,个人、公司或联盟)和“失败容忍度”。在为 CHAOSS 开发度量定义时,每个模型都会提供建设性的影响。
基于这一切,我开始构建一个谁可能关心 CHAOSS 指标的模型,以及每个组成分子在 CHAOSS 的四个重点领域中最关心的问题:
* [多样性和包容性](https://github.com/chaoss/wg-diversity-inclusion)
* [演化](https://github.com/chaoss/wg-evolution)
* [风险](https://github.com/chaoss/wg-risk)
* [价值](https://github.com/chaoss/wg-value)
在我们深入研究之前,重要的是要注意,CHAOSS 项目明确地将具体情境下的判断留给了实施指标的团队。“什么是有意义的”和“什么是健康的”的答案预计会因团队和项目而异。CHAOSS 软件的现成仪表板尽可能地关注客观指标。在本文中,我们关注项目创始人、项目维护者和贡献者。
### 项目组成分子
虽然这绝不是这些组成分子可能认为重要的问题的详尽清单,但这些选择感觉是一个好的起点。以下每个“目标-问题-度量”标准部分与 CHAOSS 项目正在收集和汇总的指标直接相关。
现在,进入分析的第 1 部分!
#### 项目创始人
作为**项目创始人**,我**最**关心:
* 我的项目**对其他人有用吗?**通过以下测量:
+ 随着时间推移有多少复刻?
- **指标:**存储库复刻数。
+ 随着时间的推移有多少贡献者?
- **指标:**贡献者数量。
+ 贡献净质量。
- **指标:**随着时间的推移提交的错误。
- **指标:**随着时间的回归。
+ 项目的财务状况。
- **指标:**随着时间的推移的捐赠/收入。
- **指标:**随着时间的推移的费用。
* 我的项目对其它人的**可见**程度?
+ 有谁知道我的项目?别人认为它很整洁吗?
- **指标:**社交媒体上的提及、分享、喜欢和订阅的数量。
+ 有影响力的人是否了解我的项目?
- **指标:**贡献者的社会影响力。
+ 人们在公共场所对项目有何评价?是正面还是负面?
- **指标:**跨社交媒体渠道的情感(关键字或 NLP)分析。
* 我的项目**可行性**程度?
+ 我们有足够的维护者吗?该数字是随着时间的推移而上升还是下降?
- **指标:**维护者数量。
+ 改变速度如何随时间变化?
- **指标:**代码随时间的变化百分比。
- **指标:**拉取请求、代码审查和合并之间的时间。
* 我的项目的[多样化 & 包容性](https://github.com/chaoss/wg-diversity-inclusion)如何?
+ 我们是否拥有有效的公开行为准则(CoC)?
- **度量标准:** 检查存储库中的 CoC 文件。
+ 与我的项目相关的活动是否积极包容?
- **指标:**关于活动的票务政策和活动包容性行为的手动报告。
+ 我们的项目在可访问性上做的好不好?
- **指标:**验证发布的文字会议纪要。
- **指标:**验证会议期间使用的隐藏式字幕。
- **指标:**验证在演示文稿和项目前端设计中色盲可访问的素材。
* 我的项目代表了多少[价值](https://github.com/chaoss/wg-value)?
+ 我如何帮助组织了解使用我们的项目将节省多少时间和金钱(劳动力投资)
- **指标:**仓库的议题、提交、拉取请求的数量和估计人工费率。
+ 我如何理解项目创建的下游价值的数量,以及维护我的项目对更广泛的社区的重要性(或不重要)?
- **指标:**依赖我的项目的其他项目数。
+ 为我的项目做出贡献的人有多少机会使用他们学到的东西来找到合适的工作岗位,以及在哪些组织(即生活工资)?
- **指标:**使用或贡献此库的组织数量。
- **指标:**使用此类项目的开发人员的平均工资。
- **指标:**与该项目匹配的关键字的职位发布计数。
### 项目维护者
作为**项目维护者**,我**最**关心:
* 我是**高效的**维护者吗?
+ **指标:**拉取请求在代码审查之前等待的时间。
+ **指标:**代码审查和后续拉取请求之间的时间。
+ **指标:**我的代码审核中有多少被批准?
+ **指标:**我的代码评论中有多少被拒绝或返工?
+ **指标:**代码审查的评论的情感分析。
* 我如何让**更多人**帮助我维护这件事?
+ **指标:**项目贡献者的社交覆盖面数量。
* 我们的**代码质量**随着时间的推移变得越来越好吗?
+ **指标:**计算随着时间的推移引入的回归数量。
+ **指标:**计算随着时间推移引入的错误数量。
+ **指标:**错误归档、拉取请求、代码审查、合并和发布之间的时间。
### 项目开发者和贡献者
作为**项目开发者或贡献者**,我**最**关心:
* 我可以从为这个项目做出贡献中获得哪些有价值的东西,以及实现这个价值需要多长时间?
+ **指标:**下游价值。
+ **指标:**提交、代码审查和合并之间的时间。
* 通过使用我在贡献中学到的东西来增加工作机会,是否有良好的前景?
+ **指标:**生活工资。
* 这个项目有多受欢迎?
+ **指标:**社交媒体帖子、分享和收藏的数量。
* 社区有影响力的人知道我的项目吗?
+ **指标:**创始人、维护者和贡献者的社交范围。
通过创建这个列表,我们才刚刚开始让 CHAOSS 丰满起来。随着该项目今年夏天首次发布这些指标,我迫不及待地想看看更广泛的开源社区还会贡献哪些伟大的想法,以及我们还能从这些贡献中学到(并衡量!)什么。
### 其它角色
接下来,你还需要了解其他角色(例如基金会、企业开源计划办公室、业务风险和法律团队、人力资源等)以及最终用户的“目标-问题-度量”集合,他们关心的开源事物截然不同。
如果你是开源贡献者或组成分子,我们邀请你[来看看这个项目](https://github.com/chaoss/)并参与社区活动!
---
via: <https://opensource.com/article/19/8/measure-project>
作者:[Jon Lawrence](https://opensource.com/users/the3rdlaw) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | As a person who normally manages software development teams, over the years I’ve come to care about metrics quite a bit. Time after time, I’ve found myself leading teams using one project platform or another (Jira, GitLab, and Rally, for example) generating an awful lot of measurable data. From there, I’ve promptly invested significant amounts of time to pull useful metrics out of the platform-of-record and into a format where we could make sense of them, and then use the metrics to make better choices about many aspects of development.
Earlier this year, I had the good fortune of coming across a project at the [Linux Foundation](https://www.linuxfoundation.org/) called [Community Health Analytics for Open Source Software](https://chaoss.community/), or CHAOSS. This project focuses on collecting and enriching metrics from a wide range of sources so that stakeholders in open source communities can measure the health of their projects.
## What is CHAOSS?
As I grew familiar with the project’s underlying metrics and objectives, one question kept turning over in my head. What is a "healthy" open source project, and by whose definition?
What’s considered healthy by someone in a particular role may not be viewed that way by someone in another role. It seemed there was an opportunity to back out from the granular data that CHAOSS collects and do a market segmentation exercise, focusing on what might be the most meaningful contextual questions for a given role, and what metrics CHAOSS collects that might help answer those questions.
This exercise was made possible by the fact that the CHAOSS project creates and maintains a suite of open source applications and metric definitions, including:
- A number of server-based applications for gathering, aggregating, and enriching metrics (such as Augur and GrimoireLab).
- The open source versions of ElasticSearch, Kibana, and Logstash (ELK).
- Identity services, data analysis services, and a wide range of integration libraries.
In one of my past programs, where half a dozen teams were working on projects of varying complexity, we found a neat tool which allowed us to create any kind of metric we wanted from a simple (or complex) JQL statement, and then develop calculations against and between those metrics. Before we knew it, we were pulling over 400 metrics from Jira alone, and more from manual sources.
By the end of the project, we decided that out of the 400-ish metrics, most of them didn’t really matter when it came to making decisions *in our roles*. At the end of the day, there were only three that really mattered to us: "Defect Removal Efficiency," "Points completed vs. Points committed," and "Work-in-Progress per Developer." Those three metrics mattered most because they were promises we made to ourselves, to our clients, and to our team members, and were, therefore, the most meaningful.
Drawing from the lessons learned through that experience and the question of what is a healthy open source project?, I jumped into the CHAOSS community and started building a set of personas to offer a constructive approach to answering that question from a role-based lens.
CHAOSS is an open source project and we try to operate using democratic consensus. So, I decided that instead of stakeholders, I’d use the word *constituent*, because it aligns better with the responsibility we have as open source contributors to create a more symbiotic value chain.
While the exercise of creating this constituent model takes a particular goal-question-metric approach, there are many ways to segment. CHAOSS contributors have developed great models that segment by vectors, like project profiles (for example, individual, corporate, or coalition) and "Tolerance to Failure." Every model provides constructive influence when developing metric definitions for CHAOSS.
Based on all of this, I set out to build a model of who might care about CHAOSS metrics, and what questions each constituent might care about most in each of CHAOSS’ four focus areas:
Before we dive in, it’s important to note that the CHAOSS project expressly leaves contextual judgments to teams implementing the metrics. What’s "meaningful" and the answer to "What is healthy?" is expected to vary by team and by project. The CHAOSS software’s ready-made dashboards focus on objective metrics as much as possible. In this article, we focus on project founders, project maintainers, and contributors.
## Project constituents
While this is by no means an exhaustive list of questions these constituents might feel are important, these choices felt like a good place to start. Each of the Goal-Question-Metric segments below is directly tied to metrics that the CHAOSS project is collecting and aggregating.
Now, on to Part 1 of the analysis!
### Project founders
As a **project founder**, I care **most** about:
-
Is my project
**useful to others?**Measured as a function of:-
How many forks over time?
**Metric:**Repository forks. -
How many contributors over time?
**Metric:**Contributor count. -
Net quality of contributions.
**Metric:**Bugs filed over time.
**Metric:**Regressions over time. -
Financial health of my project.
**Metric:**Donations/Revenue over time.
**Metric:**Expenses over time.
-
-
How
**visible**is my project to others?-
Does anyone know about my project? Do others think it’s neat?
**Metric:**Social media mentions, shares, likes, and subscriptions. -
Does anyone with influence know about my project?
**Metric:**Social reach of contributors. -
What are people saying about the project in public spaces? Is it positive or negative?
**Metric:**Sentiment (keyword or NLP) analysis across social media channels.
-
-
How
**viable**is my project?-
Do we have enough maintainers? Is the number rising or falling over time?
**Metric:**Number of maintainers. -
Do we have enough contributors? Is the number rising or falling over time?
**Metric:**Number of contributors. -
How is velocity changing over time?
**Metric:**Percent change of code over time.
**Metric:**Time between pull request, code review, and merge.
-
-
How
is my project?**diverse & inclusive**-
Do we have a valid, public, Code of Conduct (CoC)?
**Metric:**CoC repository file check. -
Are events associated with my project actively inclusive?
**Metric:**Manual reporting on event ticketing policies and event inclusion activities. -
Does our project do a good job of being accessible?
**Metric:**Validation of typed meeting minutes being posted.
**Metric:**Validation of closed captioning used during meetings.
**Metric:**Validation of color-blind-accessible materials in presentations and in project front-end designs.
-
-
How much
does my project represent?**value**-
How can I help organizations understand how much time and money using our project would save them (labor investment)
**Metric:**Repo count of issues, commits, pull requests, and the estimated labor rate. -
How can I understand the amount of downstream value my project creates and how vital (or not) it is to the wider community to maintain my project?
**Metric:**Repo count of how many other projects rely on my project. -
How much opportunity is there for those contributing to my project to use what they learn working on it to land good jobs and at what organizations (aka living wage)?
**Metric:**Count of organizations using or contributing to this library.
**Metric:**Averages of salaries for developers working with this kind of project.
**Metric:**Count of job postings with keywords that match this project.
-
## Project maintainers
As a **Project Maintainer,** I care **most** about:
-
Am I an
**efficient**maintainer?
**Metric:**Time PR’s wait before a code review.
**Metric:**Time between code review and subsequent PR’s.
**Metric:**How many of my code reviews are approvals?
**Metric:**How many of my code reviews are rejections/rework requests?
**Metric:**Sentiment analysis of code review comments. -
How do I get
**more people**to help me maintain this thing?
**Metric:**Count of social reach of project contributors. -
Is our
**code quality**getting better or worse over time?
**Metric:**Count how many regressions are being introduced over time.
**Metric:**Count how many bugs are being introduced over time.
**Metric:**Time between bug filing, pull request, review, merge, and release.
## Project developers and contributors
As a **project developer or contributor**, I care most about:
-
What things of value can I gain from contributing to this project and how long might it take to realize that value?
**Metric:**Downstream value.
**Metric:**Time between commits, code reviews, and merges. -
Are there good prospects for using what I learn by contributing to increase my job opportunities?
**Metric:**Living wage. -
How popular is this project?
**Metric:**Counts of social media posts, shares, and favorites. -
Do community influencers know about my project?
**Metric:**Social reach of founders, maintainers, and contributors.
By creating this list, we’ve just begun to put meat on the contextual bones of CHAOSS, and with the first release of metrics in the project this summer, I can’t wait to see what other great ideas the broader open source community may have to contribute and what else we can all learn (and measure!) from those contributions.
## Other roles
Next, you need to learn more about goal-question-metric sets for other roles (such as foundations, corporate open source program offices, business risk and legal teams, human resources, and others) as well as end users, who have a distinctly different set of things they care about when it comes to open source.
If you’re an open source contributor or constituent, we invite you to [come check out the project](https://github.com/chaoss/) and get engaged in the community!
|
11,245 | 如何打开和关闭树莓派(绝对新手) | https://itsfoss.com/turn-on-raspberry-pi/ | 2019-08-19T19:28:37 | [
"树莓派"
] | https://linux.cn/article-11245-1.html |
>
> 这篇短文教你如何打开树莓派以及如何在之后正确关闭它。
>
>
>

[树莓派](https://www.raspberrypi.org/)是[最流行的 SBC(单板计算机)](/article-10823-1.html)之一。如果你对这个话题感兴趣,我相信你已经有了一个树莓派。我还建议你备齐[其他树莓派配件](https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/),以便顺利开始使用你的设备。
你已经准备好打开它,开始摆弄一番了。与桌面和笔记本电脑等传统电脑相比,它既有相似之处,也有不同之处。
今天,让我们继续学习如何打开和关闭树莓派,因为它并没有真正的“电源按钮”。
在本文中,我使用的是树莓派 3B+,但对于所有树莓派变体都是如此。
### 如何打开树莓派

micro USB 口为树莓派供电,打开它的方式是将电源线插入 micro USB 口。但是开始之前,你应该确保做了以下事情。
* 根据官方[指南](https://www.raspberrypi.org/documentation/installation/installing-images/README.md)准备带有 Raspbian 的 micro SD 卡并插入 micro SD 卡插槽。
* 插入 HDMI 线、USB 键盘和鼠标。
* 插入以太网线(可选)。
完成上述操作后,请插入电源线。这会打开树莓派,显示屏将亮起并加载操作系统。
如果你将其关闭并且想要再次打开它,则必须从电源插座(首选)或电路板的电源端口拔下电源线,然后再插上。它没有电源按钮。
### 关闭树莓派
关闭树莓派非常简单,单击菜单按钮并选择关闭。

或者,你可以在终端中使用 [shutdown 命令](https://linuxhandbook.com/linux-shutdown-command/):
```
sudo shutdown now
```
`shutdown` 执行后**等待**它完成,接着你可以关闭电源。
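顺带一提,`shutdown` 命令也支持定时关机和取消关机(下面的延迟时间仅为示例):

```
$ sudo shutdown -h +5 # 5 分钟后关机
$ sudo shutdown -c # 取消已计划的关机
$ sudo reboot # 直接重启
```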
再说一次,树莓派关闭后,没有真正的办法可以在不关闭再打开电源的情况下打开树莓派。你可以使用 GPIO 打开树莓派,但这需要额外的改装。
**注意:** Micro USB 口往往比较脆弱,因此请通过接通/断开电源的方式开关机,而不要频繁地插拔 micro USB 口。
好吧,这就是关于打开和关闭树莓派的所有内容,你打算用它做什么?请在评论中告诉我。
---
via: <https://itsfoss.com/turn-on-raspberry-pi/>
作者:[Chinmay](https://itsfoss.com/author/chinmay/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

The [Raspberry Pi](https://www.raspberrypi.org/?ref=itsfoss.com) is one of the [most popular SBC (Single-Board-Computer)](https://itsfoss.com/raspberry-pi-alternatives/). If you are interested in this topic, I believe that you’ve finally got a Pi device. I also advise getting all the [additional Raspberry Pi accessories](https://itsfoss.com/raspberry-pi-accessories/) to get started with your device.
You’re ready to turn it on and start to tinker around with it. It has its own similarities and differences compared to traditional computers like desktops and laptops.
Today, let’s go ahead and learn how to turn on and shutdown a Raspberry Pi, as it doesn’t really feature a ‘power button’ of sorts.
For this article, I’m using a Raspberry Pi 3B+, but it’s the same for all the Raspberry Pi variants.
## How to turn on Raspberry Pi

*If you have never used **Raspberry Pi like devices**, you would probably search for the power button to turn on the Raspberry Pi. Unfortunately, that’s not the case here.*
The micro USB port powers the Raspberry Pi and the way you turn it on is by plugging in the power cable into the micro USB port.
But, before you do that you should make sure that you have done the following things.
- Preparing the micro SD card with Raspbian according to the official
[guide](https://www.raspberrypi.org/documentation/installation/installing-images/README.md?ref=itsfoss.com)and inserting into the micro SD card slot. - Plugging in the HDMI cable, USB keyboard and a Mouse.
- Plugging in the Ethernet Cable(Optional).
Once you have done the above, plug in the power cable. This turns on the Raspberry Pi and the display will light up and load the Operating System. You should see a green light blinking periodically.
**If you turned it off and you want to turn it on again, you’ll have to unplug the power cord either from the power socket (preferred) or from the power port of the board. There is no power button.**## How to shut down the Raspberry Pi
Shutting down the Pi is pretty straightforward, click the menu button and choose shutdown.

Alternatively, you can use the [shutdown command](https://linuxhandbook.com/linux-shutdown-command/?ref=itsfoss.com) in the terminal:
`sudo shutdown now`
Once the shutdown process has started, **wait **till it completely finishes and then you can cut the power to it.
Again, once the Pi shuts down, there is no real way to turn the Pi back on without turning off and turning on the power. You could the GPIO’s to turn on the Pi from the shutdown state but it’ll require additional modding.
Well, that’s about all you should know about turning on and shutting down the Pi, what do you plan to use it for? Let me know in the comments. |
11,247 | SSLH:让 HTTPS 和 SSH 共享同一个端口 | https://www.ostechnix.com/sslh-share-port-https-ssh/ | 2019-08-19T20:14:29 | [
"ssh",
"https"
] | https://linux.cn/article-11247-1.html | 
一些 ISP 和公司可能已经阻止了大多数端口,并且只允许少数特定端口(如端口 80 和 443)访问来加强其安全性。在这种情况下,我们别无选择,但同一个端口可以用于多个程序,比如 HTTPS 端口 443,很少被阻止。通过 SSL/SSH 多路复用器 SSLH 的帮助,它可以侦听端口 443 上的传入连接。更简单地说,SSLH 允许我们在 Linux 系统上的端口 443 上运行多个程序/服务。因此,你可以同时通过同一个端口同时使用 SSL 和 SSH。如果你遇到大多数端口被防火墙阻止的情况,你可以使用 SSLH 访问远程服务器。这个简短的教程描述了如何在类 Unix 操作系统中使用 SSLH 让 https、ssh 共享相同的端口。
### SSLH:让 HTTPS、SSH 共享端口
#### 安装 SSLH
大多数 Linux 发行版上 SSLH 都有软件包,因此你可以使用默认包管理器进行安装。
在 Debian、Ubuntu 及其衍生品上运行:
```
$ sudo apt-get install sslh
```
安装 SSLH 时,将提示你是要将 sslh 作为从 inetd 运行的服务,还是作为独立服务器运行。每种选择都有其自身的优点。如果每天只有少量连接,最好从 inetd 运行 sslh 以节省资源。另一方面,如果有很多连接,sslh 应作为独立服务器运行,以避免为每个传入连接生成新进程。

*安装 sslh*
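如果之后想更改这个选择(从 inetd 运行还是作为独立服务运行),在 Debian/Ubuntu 上可以重新运行软件包的配置向导:

```
$ sudo dpkg-reconfigure sslh
```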
在 Arch Linux 和 Antergos、Manjaro Linux 等衍生品上,使用 Pacman 进行安装,如下所示:
```
$ sudo pacman -S sslh
```
在 RHEL、CentOS 上,你需要添加 EPEL 存储库,然后安装 SSLH,如下所示:
```
$ sudo yum install epel-release
$ sudo yum install sslh
```
在 Fedora:
```
$ sudo dnf install sslh
```
如果它在默认存储库中不可用,你可以如[这里](https://github.com/yrutschle/sslh/blob/master/doc/INSTALL.md)所述手动编译和安装 SSLH。
#### 配置 Apache 或 Nginx Web 服务器
如你所知,Apache 和 Nginx Web 服务器默认会监听所有网络接口(即 `0.0.0.0:443`)。我们需要更改此设置以告知 Web 服务器仅侦听 `localhost` 接口(即 `127.0.0.1:443` 或 `localhost:443`)。
为此,请编辑 Web 服务器(nginx 或 apache)配置文件并找到以下行:
```
listen 443 ssl;
```
将其修改为:
```
listen 127.0.0.1:443 ssl;
```
如果你在 Apache 中使用虚拟主机,请确保你也修改了它。
```
VirtualHost 127.0.0.1:443
```
保存并关闭配置文件。不要重新启动该服务。我们还没有完成。
#### 配置 SSLH
使 Web 服务器仅在本地接口上侦听后,编辑 SSLH 配置文件:
```
$ sudo vi /etc/default/sslh
```
找到下列行:
```
Run=no
```
将其修改为:
```
Run=yes
```
然后,向下滚动一点并修改以下行以允许 SSLH 在所有可用接口上侦听端口 443(例如 `0.0.0.0:443`)。
```
DAEMON_OPTS="--user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:443 --pidfile /var/run/sslh/sslh.pid"
```
这里,
* `--user sslh`:要求以这个特定的用户身份运行。
* `--listen 0.0.0.0:443`:SSLH 监听于所有可用接口的 443 端口。
* `--ssh 127.0.0.1:22`:将 SSH 流量路由到本地的 22 端口。
* `--ssl 127.0.0.1:443`:将 HTTPS/SSL 流量路由到本地的 443 端口。
保存并关闭文件。
最后,启用并启动 `sslh` 服务以更新更改。
```
$ sudo systemctl enable sslh
$ sudo systemctl start sslh
```
#### 测试
检查 SSLH 守护程序是否正在监听 443。
```
$ ps -ef | grep sslh
sslh 2746 1 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
sslh 2747 2746 0 15:51 ? 00:00:00 /usr/sbin/sslh --foreground --user sslh --listen 0.0.0.0 443 --ssh 127.0.0.1 22 --ssl 127.0.0.1 443 --pidfile /var/run/sslh/sslh.pid
sk 2754 1432 0 15:51 pts/0 00:00:00 grep --color=auto sslh
```
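你也可以用 `ss` 命令再确认一下 443 端口确实由 sslh 监听:

```
$ sudo ss -tlnp | grep 443
```

按照上面的配置,输出中应该能看到 `sslh` 进程绑定在 `0.0.0.0:443` 上。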
现在,你可以使用端口 443 通过 SSH 访问远程服务器:
```
$ ssh -p 443 [email protected]
```
示例输出:
```
[email protected]'s password:
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Wed Aug 14 13:11:04 IST 2019
System load: 0.23 Processes: 101
Usage of /: 53.5% of 19.56GB Users logged in: 0
Memory usage: 9% IP address for enp0s3: 192.168.225.50
Swap usage: 0% IP address for enp0s8: 192.168.225.51
* Keen to learn Istio? It's included in the single-package MicroK8s.
https://snapcraft.io/microk8s
61 packages can be updated.
22 updates are security updates.
Last login: Wed Aug 14 13:10:33 2019 from 127.0.0.1
```

*通过 SSH 使用 443 端口访问远程系统*
看见了吗?即使默认的 SSH 端口 22 被阻止,我现在也可以通过 SSH 访问远程服务器。正如你在上面的示例中所看到的,我使用 https 端口 443 进行 SSH 连接。
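为了不用每次都输入 `-p 443`,你可以在 `~/.ssh/config` 中为这台服务器添加一条配置(下面的主机别名、IP 和用户名均为示例,请换成你自己的):

```
Host myserver
    HostName 192.168.225.50
    Port 443
    User sk
```

之后直接运行 `ssh myserver` 即可。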
我在我的 Ubuntu 18.04 LTS 服务器上测试了 SSLH,它如上所述工作得很好。我在受保护的局域网中测试了 SSLH,所以我不知道是否有安全问题。如果你在生产环境中使用它,请在下面的评论部分中告诉我们使用 SSLH 的优缺点。
有关更多详细信息,请查看下面给出的官方 GitHub 页面。
资源:
* [SSLH GitHub 仓库](https://github.com/yrutschle/sslh)
---
via: <https://www.ostechnix.com/sslh-share-port-https-ssh/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,248 | 树莓派 4 开箱记 | https://opensource.com/article/19/8/unboxing-raspberry-pi-4 | 2019-08-20T09:17:59 | [
"树莓派"
] | https://linux.cn/article-11248-1.html |
>
> 树莓派 4 与其前代产品相比具有令人印象深刻的性能提升,而入门套件使其易于快速启动和运行。
>
>
>

当树莓派 4 [在 6 月底宣布发布](https://opensource.com/article/19/6/raspberry-pi-4)时,我没有迟疑,在发布的同一天就从 [CanaKit](https://www.canakit.com/raspberry-pi-4-starter-kit.html) 订购了两套树莓派 4 入门套件。1GB RAM 版本有现货,但 4GB 版本要在 7 月 19 日才能发货。由于我想两个都试试,这两个都订购了让它们一起发货。

这是我开箱我的树莓派 4 后所看到的。
### 电源
树莓派 4 使用 USB-C 连接器供电。虽然 USB-C 电缆现在非常普遍,但你的树莓派 4 [可能不喜欢你的 USB-C 线](https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/)(至少对于树莓派 4 的第一版如此)。因此,除非你确切知道自己在做什么,否则我建议你订购含有官方树莓派充电器的入门套件。如果你想尝试手头的充电设备,那么该设备的输入是 100-240V ~ 50/60Hz 0.5A,输出为 5.1V - 3.0A。

### 键盘和鼠标
官方的键盘和鼠标是和入门套件是[分开出售](https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476)的,总价 25 美元,并不很便宜,因为你的这台树莓派电脑也才只有 35 到 55 美元。但树莓派徽标印在这个键盘上(而不是 Windows 徽标),并且外观相宜。键盘也是 USB 集线器,因此它允许你插入更多设备。我插入了我的 [YubiKey](https://www.yubico.com/products/yubikey-hardware/) 安全密钥,它运行得非常好。我会把键盘和鼠标分类为“值得拥有”而不是“必须拥有”。你的常规键盘和鼠标应该也可以正常工作。
 and mouse.")

### Micro-HDMI 电缆
可能让一些人惊讶的是,与带有 Mini-HDMI 端口的树莓派 Zero 不同,树莓派 4 配备了 Micro-HDMI。它们不是同一个东西!因此,即使你手头有合适的 USB-C 线缆/电源适配器、鼠标和键盘,也很有可能需要使用 Micro-HDMI 转 HDMI 的线缆(或适配器)来将你的新树莓派接到显示器上。
### 外壳
树莓派的外壳已经有了很多年,这可能是树莓派基金会销售的第一批“官方”外围设备之一。有些人喜欢它们,而有些人不喜欢。我认为将一个树莓派放在一个盒子里可以更容易携带它,可以避免静电和针脚弯曲。
另一方面,把你的树莓派装在盒子里会使电路板过热。这款 CanaKit 入门套件还配备了处理器散热器,这可能有所帮助,因为较新的树莓派已经[以运行相当热而闻名](https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/)了。

### Raspbian 和 NOOBS
入门套件附带的另一个东西是 microSD 卡,其中预装了适用于树莓派 4 的 [NOOBS](https://www.raspberrypi.org/downloads/noobs/) 操作系统的正确版本(我拿到的是 3.1.1 版,发布于 2019 年 6 月 24 日)。如果你是第一次使用树莓派并且不确定从哪里开始,这可以为你节省大量时间。入门套件中的 microSD 卡容量为 32GB。
插入 microSD 卡并连接所有电缆后,只需启动树莓派,引导进入 NOOBS,选择 Raspbian 发行版,然后等待安装。

我注意到在安装最新的 Raspbian 时有一些改进。(如果它们已经出现了一段时间,请原谅我 —— 自从树莓派 3 出现以来我没有对树莓派进行过全新安装。)其中一个是 Raspbian 会要求你在安装后的首次启动为你的帐户设置一个密码,另一个是它将运行软件更新(假设你有网络连接)。这些都是很大的改进,有助于保持你的树莓派更安全。我很希望能有一天在安装时看到加密 microSD 卡的选项。


运行非常顺畅!
### 结语
虽然 CanaKit 不是美国唯一授权的树莓派零售商,但我发现它的入门套件的价格物超所值。
到目前为止,我对树莓派 4 的性能提升印象深刻。我打算尝试用一整个工作日将它作为我唯一的计算机,我很快就会写一篇关于我探索了多远的后续文章。敬请关注!
---
via: <https://opensource.com/article/19/8/unboxing-raspberry-pi-4>
作者:[Anderson Silva](https://opensource.com/users/ansilvahttps://opensource.com/users/bennuttall) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | When the Raspberry Pi 4 was [announced at the end of June](https://opensource.com/article/19/6/raspberry-pi-4), I wasted no time. I ordered two Raspberry Pi 4 Starter Kits the same day from [CanaKit](https://www.canakit.com/raspberry-pi-4-starter-kit.html). The 1GB RAM version was available right away, but the 4GB version wouldn't ship until July 19th. Since I wanted to try both, I ordered them to be shipped together.

Here's what I found when I unboxed my Raspberry Pi 4.
## Power supply
The Raspberry Pi 4 uses a USB-C connector for its power supply. Even though USB-C cables are very common now, your Pi 4 [may not like your USB-C cable](https://www.techrepublic.com/article/your-new-raspberry-pi-4-wont-power-on-usb-c-cable-problem-now-officially-confirmed/) (at least with these first editions of the Raspberry Pi 4). So, unless you know exactly what you are doing, I recommend ordering the Starter Kit, which comes with an official Raspberry Pi charger. In case you would rather try whatever you have on hand, the device's input reads 100-240V ~ 50/60Hz 0.5A, and the output says 5.1V --- 3.0A.

## Keyboard and mouse
The official keyboard and mouse are [sold separately](https://www.canakit.com/official-raspberry-pi-keyboard-mouse.html?defpid=4476) from the Starter Kit, and at $25 total, they aren't really cheap, given you're paying only $35 to $55 for a proper computer. But the Raspberry Pi logo is printed on this keyboard (instead of the Windows logo), and there is something compelling about having an appropriate appearance. The keyboard is also a USB hub, so it allows you to plug in even more devices. I plugged in my [YubiKey](https://www.yubico.com/products/yubikey-hardware/) security key, and it works very nicely. I would classify the keyboard and mouse as a "nice to have" versus a "must-have." Your regular keyboard and mouse should work fine.


## Micro-HDMI cable
Something that may have caught some folks by surprise is that, unlike the Raspberry Pi Zero that comes with a Mini-HDMI port, the Raspberry Pi 4 comes with a Micro-HDMI. They are not the same thing! So, even though you may have a suitable USB-C cable/power adaptor, mouse, and keyboard on hand, there is a pretty good chance you will need a Micro-HDMI-to-HDMI cable (or an adapter) to plug your new Raspberry Pi into a display.
## The case
Cases for the Raspberry Pi have been around for years and are probably one of the very first "official" peripherals the Raspberry Pi Foundation sold. Some people like them; others don't. I think putting a Pi in a case makes it easier to carry it around and avoid static electricity and bent pins.
On the other hand, keeping your Pi covered can overheat the board. This CanaKit Starter Kit also comes with heatsink for the processor, which might help, as the newer Pis are already [known for running pretty hot](https://www.theregister.co.uk/2019/07/22/raspberry_pi_4_too_hot_to_handle/).

## Raspbian and NOOBS
The other item that comes with the Starter Kit is a microSD card with the correct version of the [NOOBS](https://www.raspberrypi.org/downloads/noobs/) operating system for the Raspberry Pi 4 pre-installed. (I got version 3.1.1, released June 24, 2019). If you're using a Raspberry Pi for the first time and are not sure where to start, this could save you a lot of time. The microSD card in the Starter Kit is 32GB.
After you insert the microSD card and connect all the cables, just start up the Pi, boot into NOOBS, pick the Raspbian distribution, and wait while it installs.

I noticed a couple of improvements while installing the latest Raspbian. (Forgive me if they've been around for a while—I haven't done a fresh install on a Pi since the 3 came out.) One is that Raspbian will ask you to set up a password for your account at first boot after installation, and the other is that it will run a software update (assuming you have network connectivity). These are great improvements to help keep your Raspberry Pi a little more secure. I would love to see the option to encrypt the microSD card at installation … maybe someday?


It runs very smoothly!
## Wrapping up
Although CanaKit isn't the only authorized Raspberry Pi retailer in the US, I found its Starter Kit to provide great value for the price.
So far, I am very impressed with the performance gains in the Raspberry Pi 4. I'm planning to try spending an entire workday using it as my only computer, and I'll write a follow-up article soon about how far I can go. Stay tuned!
|
11,250 | 使用 LVM 升级 Fedora | https://fedoramagazine.org/use-lvm-upgrade-fedora/ | 2019-08-20T09:57:06 | [
"Fedora",
"LVM"
] | https://linux.cn/article-11250-1.html | 
大多数用户发现,按照标准流程[从一个 Fedora 版本升级到下一个](https://fedoramagazine.org/upgrading-fedora-27-fedora-28/)很简单。但是,Fedora 升级也难免会遇到许多特殊情况。本文介绍了一种使用 DNF 和逻辑卷管理(LVM)进行升级的方法,以便在出现问题时保留一个可引导的备份。这个例子是将 Fedora 26 系统升级到 Fedora 28。
此处展示的过程比标准升级过程更复杂。在使用此过程之前,你应该充分掌握 LVM 的工作原理。如果没有适当的技能和细心,你可能会丢失数据和/或被迫重新安装系统!如果你不知道自己在做什么,那么**强烈建议**你坚持只使用得到支持的升级方法。
### 准备系统
在开始之前,请确保你的现有系统已完全更新。
```
$ sudo dnf update
$ sudo systemctl reboot # 或采用 GUI 方式
```
检查你的根文件系统是否是通过 LVM 挂载的。
```
$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f26 20511312 14879816 4566536 77% /
$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
f22 vg_sdg -wi-ao---- 15.00g
f24_64 vg_sdg -wi-ao---- 20.00g
f26 vg_sdg -wi-ao---- 20.00g
home vg_sdg -wi-ao---- 100.00g
mockcache vg_sdg -wi-ao---- 10.00g
swap vg_sdg -wi-ao---- 4.00g
test vg_sdg -wi-a----- 1.00g
vg_vm vg_sdg -wi-ao---- 20.00g
```
如果你在安装 Fedora 时使用了默认值,你可能会发现根文件系统挂载在名为 `root` 的逻辑卷(LV)上。卷组(VG)的名称可能会有所不同。看看根卷的总大小。在该示例中,根文件系统名为 `f26`,大小为 `20G`。
接下来,确保 LVM 中有足够的可用空间。
```
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg_sdg 1 8 0 wz--n- 232.39g 42.39g
```
该系统有足够的可用空间,可以为升级后的 Fedora 28 的根卷分配 20G 的逻辑卷。如果你使用的是默认安装,则你的 LVM 中将没有可用空间。对 LVM 的一般性管理超出了本文的范围,但这里有一些情形下可能采取的方法:
1、`/home` 在自己的逻辑卷,而且 `/home` 中有大量空闲空间。
你可以从图形界面中注销并切换到文本控制台,以 `root` 用户身份登录。然后你可以卸载 `/home`,并使用 `lvreduce -r` 调整大小并重新分配 `/home` 逻辑卷(本节末尾附有一个示例)。你也可以从<ruby> 现场镜像 <rt> Live image </rt></ruby>启动(以便不使用 `/home`)并使用 gparted GUI 实用程序进行分区调整。
2、大多数 LVM 空间被分配给根卷,该文件系统中有大量可用空间。
你可以从现场镜像启动并使用 gparted GUI 实用程序来减少根卷的大小。此时也可以考虑将 `/home` 移动到另外的文件系统,但这超出了本文的范围。
3、大多数文件系统已满,但你有个已经不再需要逻辑卷。
你可以删除不需要的逻辑卷,释放卷组中的空间以进行此操作。
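例如,对于上面的情形 1,腾出空间的操作大致如下(卷组名、卷名和缩减量均沿用本文示例,实际操作前请务必先备份数据):

```
$ sudo umount /home
$ sudo lvreduce -r -L -25G /dev/vg_sdg/home # -r 会先缩小文件系统,再缩小逻辑卷
$ sudo mount /home
```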
### 创建备份
首先,为升级后的系统分配新的逻辑卷。确保为系统的卷组(VG)使用正确的名称。在这个例子中它是 `vg_sdg`。
```
$ sudo lvcreate -L20G -n f28 vg_sdg
Logical volume "f28" created.
```
接下来,创建当前根文件系统的快照。此示例创建名为 `f26_s` 的快照卷。
```
$ sync
$ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26
Using default stripesize 64.00 KiB.
Logical volume "f26_s" created.
```
现在可以将快照复制到新逻辑卷。当你替换为自己的卷名时,**请确保目标正确**。如果不小心,数据就会被不可挽回地删除。此外,请确保你是从根卷的快照复制,**而不是**从当前正在使用的根卷。
```
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k
81920+0 records in
81920+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
```
给新文件系统一个唯一的 UUID。这不是绝对必要的,但 UUID 应该是唯一的,因此这避免了未来的混淆。以下是在 ext4 根文件系统上的方法:
```
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
```
然后删除不再需要的快照卷:
```
$ sudo lvremove vg_sdg/f26_s
Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y
Logical volume "f26_s" successfully removed
```
如果你单独挂载了 `/home`,你可能希望在此处制作 `/home` 的快照。有时,升级的应用程序会进行与旧版 Fedora 版本不兼容的更改。如果需要,编辑**旧**根文件系统上的 `/etc/fstab` 文件以在 `/home` 上挂载快照。请记住,当快照已满时,它将消失!另外,你可能还希望给 `/home` 做个正常备份。
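例如,下面是一个假设的 `/home` 快照命令(快照大小请根据升级期间的写入量调整):

```
$ sudo lvcreate -s -L2G -n home_s vg_sdg/home
```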
### 配置以使用新的根
首先,安装新的逻辑卷并备份现有的 GRUB 设置:
```
$ sudo mkdir /mnt/f28
$ sudo mount /dev/vg_sdg/f28 /mnt/f28
$ sudo mkdir /mnt/f28/f26
$ cd /boot/grub2
$ sudo cp -p grub.cfg grub.cfg.old
```
编辑 `grub.cfg`,并在第一个菜单项 `menuentry` 之前添加以下内容(除非你已经有了):
```
menuentry 'Old boot menu' {
configfile /grub2/grub.cfg.old
}
```
编辑 `grub.cfg` 并更改默认菜单项以激活并挂载新的根文件系统。改变这一行:
```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```
如你看到的这样。请记住使用你系统上的正确的卷组和逻辑卷条目名称!
```
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
```
编辑 `/mnt/f28/etc/default/grub` 并改变在启动时激活的默认的根卷:
```
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
```
编辑 `/mnt/f28/etc/fstab`,将根文件系统的挂载条目从旧的逻辑卷:
```
/dev/mapper/vg_sdg-f26 / ext4 defaults 1 1
```
改为新的:
```
/dev/mapper/vg_sdg-f28 / ext4 defaults 1 1
```
然后,出于参考的用途,只读挂载旧的根卷:
```
/dev/mapper/vg_sdg-f26 /f26 ext4 ro,nodev,noexec 0 0
```
如果你的根文件系统是通过 UUID 挂载的,你需要改变这个方式。如果你的根文件系统是 ext4 你可以这样做:
```
$ sudo e2label /dev/vg_sdg/f28 F28
```
现在编辑 `/mnt/f28/etc/fstab` 使用该卷标。改变该根文件系统的挂载行,像这样:
```
LABEL=F28 / ext4 defaults 1 1
```
### 重启与升级
重新启动后,你的系统将使用新的根文件系统。它仍然是 Fedora 26,但已是位于新命名逻辑卷上的副本,可以进行 `dnf` 系统升级了!如果出现任何问题,请使用旧引导菜单引导回到仍然可用的旧系统,此过程不会触及旧系统。
```
$ sudo systemctl reboot # or GUI equivalent
...
$ df / /f26
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_sdg-f28 20511312 14903196 4543156 77% /
/dev/mapper/vg_sdg-f26 20511312 14866412 4579940 77% /f26
```
你可能还想验证一下:使用旧的引导菜单,确实可以引导回挂载着旧根文件系统的系统。
现在按照[此维基页面](https://fedoraproject.org/wiki/DNF_system_upgrade)中的说明进行操作。如果系统升级出现任何问题,你还会有一个可以重启回去的工作系统。
### 进一步的考虑
创建新的逻辑卷并将根卷的快照复制到其中的步骤可以使用通用脚本自动完成。它只需要新的逻辑卷的名称,因为现有根的大小和设备很容易确定。例如,可以输入以下命令:
```
$ sudo copyfs / f28
```
提供挂载点以进行复制可以更清楚地了解发生了什么,并且复制其他挂载点(例如 `/home`)可能很有用。
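下面是按这个思路写的一个脚本草稿,仅作示意(`copyfs` 命令并不存在,脚本中的步骤与正文的手动操作一致):

```
#!/usr/bin/env bash
# copyfs:为挂载点所在的逻辑卷做快照,并复制到同卷组中的新逻辑卷
# 用法:copyfs <挂载点> <新LV名>,例如:copyfs / f28
set -euo pipefail
mnt="$1"; newlv="$2"
dev=$(findmnt -n -o SOURCE "$mnt")                # 例如 /dev/mapper/vg_sdg-f26
vg=$(sudo lvs --noheadings -o vg_name "$dev" | tr -d ' ')
size=$(sudo lvs --noheadings -o lv_size --units g "$dev" | tr -d ' ')
sudo lvcreate -L "$size" -n "$newlv" "$vg"        # 新 LV 与原根卷等大
sync
sudo lvcreate -s -L1G -n "${newlv}_s" "$dev"      # 创建临时快照
sudo dd if="/dev/$vg/${newlv}_s" of="/dev/$vg/$newlv" bs=256k
sudo lvremove -y "/dev/$vg/${newlv}_s"            # 删除不再需要的快照
```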
---
via: <https://fedoramagazine.org/use-lvm-upgrade-fedora/>
作者:[Stuart D Gathman](https://fedoramagazine.org/author/sdgathman/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Most users find it simple to upgrade [from one Fedora release to the next](https://fedoramagazine.org/upgrading-fedora-27-fedora-28/) with the standard process. However, there are inevitably many special cases that Fedora can also handle. This article shows one way to upgrade using DNF along with Logical Volume Management (LVM) to keep a bootable backup in case of problems. This example upgrades a Fedora 26 system to Fedora 28.
The process shown here is more complex than the standard upgrade process. You should have a strong grasp of how LVM works before you use this process. Without proper skill and care, you could **lose data and/or be forced to reinstall your system!** If you don’t know what you’re doing, it is **highly recommended** you stick to the supported upgrade methods only.
## Preparing the system
Before you start, ensure your existing system is fully updated.
$ sudo dnf update $ sudo systemctl reboot # or GUI equivalent
Check that your root filesystem is mounted via LVM.
$ df / Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_sdg-f26 20511312 14879816 4566536 77% / $ sudo lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert f22 vg_sdg -wi-ao---- 15.00g f24_64 vg_sdg -wi-ao---- 20.00g f26 vg_sdg -wi-ao---- 20.00g home vg_sdg -wi-ao---- 100.00g mockcache vg_sdg -wi-ao---- 10.00g swap vg_sdg -wi-ao---- 4.00g test vg_sdg -wi-a----- 1.00g vg_vm vg_sdg -wi-ao---- 20.00g
If you used the defaults when you installed Fedora, you may find the root filesystem is mounted on a LV named *root*. The name of your volume group will likely be different. Look at the total size of the root volume. In the example, the root filesystem is named *f26* and is 20G in size.
Next, ensure you have enough free space in LVM.
$ sudo vgs VG #PV #LV #SN Attr VSize VFree vg_sdg 1 8 0 wz--n- 232.39g 42.39g
This system has enough free space to allocate a 20G logical volume for the upgraded Fedora 28 root. If you used the default install, there will be no free space in LVM. Managing LVM in general is beyond the scope of this article, but here are some possibilities:
1. */home* on its own LV, and lots of free space in */home*
You can logout of the GUI and switch to a text console, logging in as root. Then you can unmount */home*, and use *lvreduce -r* to resize and reallocate the */home* LV. You might also boot from a Live image (so as not to use */home*) and use the *gparted* GUI utility.
2. Most of the LVM space allocated to a root LV, with lots of free space in the filesystem
You can boot from a Live image and use the *gparted* GUI utility to reduce the root LV. Consider moving */home* to its own filesystem at this point, but that is beyond the scope of this article.
3. Most of the file systems are full, but you have LVs you no longer need
You can delete the unneeded LVs, freeing space in the volume group for this operation.
## Creating a backup
First, allocate a new LV for the upgraded system. Make sure to use the correct name for your system’s volume group (VG). In this example it’s *vg_sdg*.
$ sudo lvcreate -L20G -n f28 vg_sdg Logical volume "f28" created.
Next, make a snapshot of your current root filesystem. This example creates a snapshot volume named *f26_s.*
$ sync $ sudo lvcreate -s -L1G -n f26_s vg_sdg/f26 Using default stripesize 64.00 KiB. Logical volume "f26_s" created.
The snapshot can now be copied to the new LV. **Make sure you have the destination correct** when you substitute your own volume names. If you are not careful you could delete data irrevocably. Also, make sure you are copying from the snapshot of your root, **not** your live root.
$ sudo dd if=/dev/vg_sdg/f26_s of=/dev/vg_sdg/f28 bs=256k 81920+0 records in 81920+0 records out 21474836480 bytes (21 GB, 20 GiB) copied, 149.179 s, 144 MB/s
Give the new filesystem copy a unique UUID. This is not strictly necessary, but UUIDs are supposed to be unique, so this avoids future confusion. Here is how for an ext4 root filesystem:
$ sudo e2fsck -f /dev/vg_sdg/f28 $ sudo tune2fs -U random /dev/vg_sdg/f28
Then remove the snapshot volume which is no longer needed:
$ sudo lvremove vg_sdg/f26_s Do you really want to remove active logical volume vg_sdg/f26_s? [y/n]: y Logical volume "f26_s" successfully removed
You may wish to make a snapshot of */home* at this point if you have it mounted separately. Sometimes, upgraded applications make changes that are incompatible with a much older Fedora version. If needed, edit the */etc/fstab* file on the **old** root filesystem to mount the snapshot on */home*. Remember that when the snapshot is full, it will disappear! You may also wish to make a normal backup of */home* for good measure.
## Configuring to use the new root
First, mount the new LV and backup your existing GRUB settings:
$ sudo mkdir /mnt/f28 $ sudo mount /dev/vg_sdg/f28 /mnt/f28 $ sudo mkdir /mnt/f28/f26 $ cd /boot/grub2 $ sudo cp -p grub.cfg grub.cfg.old
Edit *grub.conf* and add this before the first *menuentry, *unless you already have it:
menuentry 'Old boot menu' { configfile /grub2/grub.cfg.old }
Edit *grub.conf* and change the default *menuentry* to activate and mount the new root filesystem. Change this line:
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f26 ro rd.lvm.lv=vg_sdg/f26 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
So that it reads like this. Remember to use the correct names for your system’s VG and LV entries!
linux16 /vmlinuz-4.16.11-100.fc26.x86_64 root=/dev/mapper/vg_sdg-f28 ro rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet LANG=en_US.UTF-8
Edit */mnt/f28/etc/default/grub* and change the default root LV activated at boot:
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_sdg/f28 rd.lvm.lv=vg_sdg/swap rhgb quiet"
Edit */mnt/f28/etc/fstab* to change the mounting of the root filesystem from the old volume:
/dev/mapper/vg_sdg-f26 / ext4 defaults 1 1
to the new one:
/dev/mapper/vg_sdg-f28 / ext4 defaults 1 1
Then, add a read-only mount of the old system for reference purposes:
/dev/mapper/vg_sdg-f26 /f26 ext4 ro,nodev,noexec 0 0
If your root filesystem is mounted by UUID, you will need to change this. Here is how if your root is an ext4 filesystem:
$ sudo e2label /dev/vg_sdg/f28 F28
Now edit */mnt/f28/etc/fstab* to use the label. Change the mount line for the root file system so it reads like this:
LABEL=F28 / ext4 defaults 1 1
## Rebooting and upgrading
Reboot, and your system will be using the new root filesystem. It is still Fedora 26, but a copy with a new LV name, and ready for *dnf system-upgrade!* If anything goes wrong, use the Old Boot Menu to boot back into your working system, which this procedure avoids touching.
$ sudo systemctl reboot # or GUI equivalent ... $ df / /f26 Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_sdg-f28 20511312 14903196 4543156 77% / /dev/mapper/vg_sdg-f26 20511312 14866412 4579940 77% /f26
You may wish to verify that using the Old Boot Menu does indeed get you back to having root mounted on the old root filesystem.
Now follow the instructions at [this wiki page](https://fedoraproject.org/wiki/DNF_system_upgrade). If anything goes wrong with the system upgrade, you have a working system to boot back into.
## Future ideas
The steps to create a new LV and copy a snapshot of root to it could be automated with a generic script. It needs only the name of the new LV, since the size and device of the existing root are easy to determine. For example, one would be able to type this command:
$ sudo copyfs / f28
Supplying the mount-point to copy makes it clearer what is happening, and copying other mount-points like */home* could be useful.
## Pierre
Your article is very interesting, I learned about the snapshot logical volume which is demystified now ! Thanks
## crossingtheair.wordpress.com
I have my Fedora as LVM, so this comes just in time. Thanks!
## Stuart D Gathman
Some last minute changes didn’t get in. The most important is that changing the UUID needs to happen while the filesystem is unmounted. Insert after the step using dd:
Give the new filesystem copy a unique UUID. This is not strictly necessary, but UUIDs are supposed to be unique, so let’s avoid confusing ourselves in the future. Here is how for an ext4 root filesystem:
$ sudo e2fsck -f /dev/vg_sdg/f28
$ sudo tune2fs -U random /dev/vg_sdg/f28
## Paul W. Frields
This is now fixed in the article.
## Supersonic Tumbleweed
Thanks for this excellent write up – this is why my experience with Fedora is great so far. It gives me a sense of community because I just was exploring and learning about lvm, moving my install between disks and there you are, explaining the tough parts without me even asking 😀
Great stuff, have a nice day!
## Vratislav Podzimek
Nice article! I wish there was a better general understanding of LVM and its benefits among the desktop users. Just two notes from me:
1) This would be a lot easier and nicer if Fedora was using LVM Thin Provisioning by default.
2) Using XFS instead of ext4 would also make things nicer/faster because XFS comes with the two amazing tools — xfsdump and xfsrestore — that can be used to backup a file system or, like in this use case, clone a filesystem to another device in a lot more efficient way than using dd.
## Stuart D Gathman
Why does Vratislav prefer thinpools?
Would LVM thinpool be easier?
When I made the snapshot in the article, the snapshot is actually writable, and we
couldhave allocated plenty of space for the snapshot and used it directly for the upgraded system. The snapshot initially points to the “origin” volume, but data is gradually copied to the snapshot via a copy on write process as origin or destination is written to.So why didn’t I do that? First, the COW (copy on write) data is shuffled on the disk, making access less efficient. Second, the COW table is kept in memory, and makes activating the snapshot at startup slow. Third, while the first snapshot is reasonably efficient, additional snapshots become much slower, as origin data is copied to
allsnapshots. Fourth, you can’t take a snapshot of a snapshot – so backups would be more difficult.That last is not really an issue. Sometimes, I rename the origin, and make the old Fedora system the snapshot. The old system is not active – except when booting from the old system when something goes wrong. Eventually, the old snapshot can be deleted.
So how does thinpool help? With standard snapshots, each snapshot has its own dedicated storage pool, which you allocate when you create the snapshot. If that pool is exhausted – the snapshot effectively goes away, and all reads return an error.
A thinpool creates a single storage pool, which is shared between many logical volumes. First, any number of snapshots can be created without affecting performance – at most one COW is required for any update. Second, snapshots of snapshots work just fine.
With the root volume in a thinpool, we could just create a snapshot of f26 named f28, point grub at it, and we are done – with no copy required!
So why don’t I use a thinpool for my root fs?
First, there is still in reality a copy required – it is just done gradually via the copy on write process. All volumes in a thinpool perform like a single standard snapshot.
Second, when storage in the thinpool is exhaused,
allyour volumes disappear! Well, hopefully that should be amended to “all volumes are frozen in a read-only mode” by this time. In either case, it is much more catastrophic than the temporary “instant backup” of a standard snapshot. You Never, Never, Never want your thinpool to run out of space – and any write, can consume that space!In contrast, using a standard logical volume (not in a thinpool) performs essentially like a partition. Like a partition, standard LVs are very forgiving of admin errors, as the data is contained in a few large segments, and the segment map is not updated frequently – there will be many copies of the metadata in your backups.
## Vratislav Podzimek
That’s quite a nice summary. I have been using LVM ThinP for my root and home volumes for years now without a single problem. I actually even use it for my backup volume on a backup HDD (creating snapshots so that I can go back in history). The performance has always been great and the flexibility is much greater than with non-ThinP LVM. The only downside is that one has to check one one or two numbers from time to time. Which I do anyway because I check my free space regularly.
And it’s fair to mention that the Fedora installer leaves space in the VG for the thin pool’s data or metadata segments to grow on demand. So it’s not like one just runs into the state with “frozen” volumes easily.
But maybe I’m incorrectly thinking about Fedora Workstation as “a developer workstation” (as used to be stated somewhere, IIRC) which is not the case these days.
## Stuart D Gathman
Why does Vratislav prefer XFS?
The upgrade process presented doesn’t really care what filesystem you use for your root fs. So this is a somewhat of a digression. XFS is a fine filesystem, and performance for large files is especially impressive (hence it is popular for video storage). So why don’t I use it for my root fs?
First, XFS does not support shrinking the filesystem – other than by copying to a new smaller filesystem.
Second, root fs has mostly small files.
Third, ext4 performs better than XFS on simple flash media (i.e. a thumb drive or SD card). It is in many cases convenient to introduce people to Fedora™ by installing to such media when they are not ready to replace or overwrite their hard drive with Windows™ installed.
2a. Why did I use dd instead of rsync or dump/restore to copy the root fs?
First, the root fs in the example was 77% full. Using an image copy like dd is actually pretty efficient in that case. You can also use rsync, when the fs is mostly empty, or when you want to change to a different filesystem.
Second, using rsync -ravhX would complicate the example, because you would then have to format the new filesystem and mount the snapshot and new filesystem. Even with (xfs)dump/restore, you have to format and mount the destination filesystem.
## Vratislav Podzimek
Let’s not hijack this comments thread for a file systems battle. There are various benchmarks and aspects that one has to take into account when comparing file systems (typical workload, physical device, number of CPUs,…). Having fresh data/results for a typical HW these days (multi-core, SSD/NVMe based,…) and the current version of XFS could actually prove you wrong. Simple claims are never correct.
In my comment I was only referring to this particular use case where the xfsdump and xfsrestore tools would do an amazing work. They are not like neither rsync nor dd. They actually make use of the knowledge of the file system metadata to do the dump/restore (and thus ‘clone’ when combined) very efficiently. And one can also do delta backups easily. I’m not aware of similar tools existing for ext4, but if there are some, I’d be happy to learn about them.
## Stuart D Gathman
I agree. I mostly wanted to point out that dd is file system agnostic, and reasonably efficient when the filesystem is mostly full. Any more efficient method of cloning the filesystem is going to be very filesystem specific, and hence not ideal for my article.
## Stuart D Gathman
FYI, ext4 has dump and restore, which efficiently copy only the used blocks of an ext4 filesystem. xfsdump and xfsrestore were named after these venerable utilities.
Here is an example using dump/restore:
https://stackoverflow.com/questions/37488629/how-to-use-dump-and-restore-to-clone-a-linux-os-drive
Using either dump/restore or xfsdump/xfsrestore would complicate the cloning instructions, as both require formatting and mounting the destination filesystem – which is the main reason I didn’t go that route for the article.
I’m more and more seeing a need for an “fsclone” utility that knows about LVM and snapshots (there is no need for an fsclone with thinpool, of course), knows how to most efficiently clone some filesystem types, punts to using ‘dd’ for those it doesn’t know about, and handles formatting and mounting the destination when using (xfs)dump/restore or rsync. It might even use dd for nearly full filesystems, as there is a point at which dd is actually more efficient on any filesystem.
## Stuart D Gathman
Heh. As usual, someone has already done most of it. The utility we want seems to be ‘partclone’ – and it is already in Fedora. It knows how to efficiently clone a variety of filesystems to/from an image backup or directly between block devices. I can use that in my fsclone utility, which will additionally know about LVM!
## Stuart D Gathman
One of the supported filesystems is nilfs, which seems similar to reiserfs in concept. It is much more modest in scope than btrfs, being a simply a filesystem, and not trying to cross over into block device management as well. Hence it is well suited for use with LVM. I
likekeeping the management tools separate like that. LVM for block device management, your favorite filesystem on logical volumes.What log structured filesystems like reiserfs, nilfs, btrfs offer is continuous checkpointing. Instead of reusing the same reserved checkpoint area (log) over and over, the entire filesystem is structured as a giant log. Each point of consistency is a checkpoint. Checkpoints are garbage collected to recycle space, and selected checkpoints can be marked to become snapshots, and are not collected.
nilfs is part of Fedora, and modest enough that I will certainly start giving it a try in virtual machines, and perhaps more. It could be a potential life saver (losing a lot of time is losing part of your life) as a development filesystem. Have you ever typed rm * .o and got the dreaded “.o: not found” ?
## Stuart D Gathman
Here is an example of running partclone instead of dd to clone the filesystem:
[root@melissa ~]# partclone.ext4 -b -s /dev/vg_melissa/f27_s -o /dev/vg_melissa/f28
Partclone v0.3.11 http://partclone.org
Starting to back up device(/dev/vg_melissa/f27_s) to device(/dev/vg_melissa/f28)
Elapsed: 00:00:01, Remaining: 00:00:00, Completed: 100.00%
Total Time: 00:00:01, 100.00% completed!
done!
File system: EXTFS
Device size: 16.1 GB = 3932160 Blocks
Space in use: 10.5 GB = 2563963 Blocks
Free Space: 5.6 GB = 1368197 Blocks
Block size: 4096 Byte
Elapsed: 00:03:42, Remaining: 00:00:00, Completed: 100.00%, Rate: 2.84GB/min,
current block: 3932160, total block: 3932160, Complete: 100.00%
Total Time: 00:03:42, Ave. Rate: 2.8GB/min, 100.00% completed!
Syncing… OK!
Partclone successfully cloned the device (/dev/vg_melissa/f27_s) to the device (/dev/vg_melissa/f28)
Cloned successfully.
[root@melissa ~]# blkid /dev/vg_melissa/f27
/dev/vg_melissa/f27: LABEL=”F27″ UUID=”335c6a7e-a1a7-48a2-90b0-a44bd699c664″ TYPE=”ext4″
[root@melissa ~]# blkid /dev/vg_melissa/f28
/dev/vg_melissa/f28: LABEL="F27" UUID="335c6a7e-a1a7-48a2-90b0-a44bd699c664" TYPE="ext4"
Notice that it also does not change the UUID – which is probably what you want for backup purposes when the filesystem is mounted by UUID.
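If you instead want the clone to have its own identity (say, so that both filesystems can be mounted at once), you can regenerate the UUID and change the label on an unmounted ext4 clone. A sketch, reusing the device name from above:
# tune2fs -U random /dev/vg_melissa/f28
# e2label /dev/vg_melissa/f28 F28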
## Vratislav Podzimek
Nice! I had no idea that the dump/restore tools were available in the old UNIX systems. And now that I see ‘partclone’ I realize it’s what ‘CloneZilla’ has been using for ages! So much wisdom available, one just has to search sometimes. 🙂
## Stuart D Gathman
I just realized an important omission. If you have an EFI install, then your grub.cfg will be under /boot/efi/EFI/fedora instead of /boot/grub2. I’ll have to get an EFI system installed to see what path to use for configfile in that case.
## Bobin
Extraordinary, Awesome, superb |
11,251 | 《代码英雄》第一季(1):操作系统战争(上) | https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1 | 2019-08-20T23:58:00 | [
"代码英雄"
] | https://linux.cn/article-11251-1.html |
>
> <ruby> 代码英雄 <rt> Command Line Heroes </rt></ruby>是世界领先的企业开源软件解决方案供应商<ruby> 红帽 <rt> Red Hat </rt></ruby>精心制作的音频播客,讲述开发人员、程序员、黑客、极客和开源反叛者如何彻底改变技术前景的真实史诗。该音频博客邀请到了谷歌、NASA 等重量级企业的众多技术大牛共同讲述开源、操作系统、容器、DevOps、混合云等发展过程中的动人故事。点击链接<https://www.redhat.com/en/command-line-heroes> 查看更多信息。
>
>
>

本文是《[代码英雄](https://www.redhat.com/en/command-line-heroes)》系列播客[第一季(1):操作系统战争(上)](https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1) 的[音频](https://img.linux.net.cn/static/audio/f7670e99.mp3)脚本。
**Saron Yitbarek:**有些故事如史诗般,惊险万分,在我脑海中似乎出现了星球大战电影开头的爬行文字。你知道的,就像——
配音:“第一集,操作系统大战”
**Saron Yitbarek:**是的,就像那样子。
配音:这是一个局势紧张加剧的时期。<ruby> 比尔·盖茨 <rt> Bill Gates </rt></ruby>与<ruby> 史蒂夫·乔布斯 <rt> Steve Jobs </rt></ruby>的帝国发起了一场无可避免的专有软件之战。[00:00:30] 盖茨与 IBM 结成了强大的联盟,而乔布斯则拒绝了对它的硬件和操作系统开放授权。他们争夺统治地位的争斗在一场操作系统战争中席卷了整个银河系。与此同时,这些帝王们所不知道的偏远之地,开源的反叛者们开始集聚。
**Saron Yitbarek:**好吧。这也许有点戏剧性,但当我们谈论上世纪八九十年代和 2000 年代的操作系统之争时,这也不算言过其实。*[00:01:00]* 确实曾经发生过一场史诗级的统治之战。史蒂夫·乔布斯和比尔·盖茨确实掌握着数十亿人的命运。掌控了操作系统,你就掌握了绝大多数人使用计算机的方式、互相通讯的方式、获取信息的方式。我可以一直罗列下去,不过你知道我的意思。掌握了操作系统,你就是帝王。
我是 Saron Yitbarek,你现在收听的是代码英雄,一款红帽公司原创的播客节目。*[00:01:30]* 你问,什么是<ruby> 代码英雄 <rt> Command Line Hero </rt></ruby>?嗯,如果你愿意创造而不仅仅是使用,如果你相信开发者拥有构建美好未来的能力,如果你希望拥有一个大家都有权利表达科技如何塑造生活的世界,那么你,我的朋友,就是一位代码英雄。在本系列节目中,我们将为你带来那些“白码起家”(LCTT 译注:原文是 “from the command line up”,应该是演绎自 “from the ground up”——白手起家)改变技术的程序员故事。*[00:02:00]* 那么我是谁,凭什么指导你踏上这段艰苦的旅程?Saron Yitbarek 是哪根葱?嗯,事实上我觉得我跟你差不多。我是一名面向初学者的开发人员,我做的任何事都依赖于开源软件,我的世界就是如此。通过在播客中讲故事,我可以跳出无聊的日常工作,鸟瞰全景,希望这对你也一样有用。
我迫不及待地想知道,开源技术从何而来?我的意思是,我对<ruby> 林纳斯·托瓦兹 <rt> Linus Torvalds </rt></ruby>和 Linux<sup> ®</sup> 的荣耀有一些了解,*[00:02:30]* 我相信你也一样,但是说真的,开源并不是一开始就有的,对吗?如果我想发自内心的感激这些最新、最棒的技术,比如 DevOps 和容器之类的,我感觉我对那些早期的开发者缺乏了解,我有必要了解这些东西来自何处。所以,让我们暂时先不用担心内存泄露和缓冲溢出。我们的旅程将从操作系统之战开始,这是一场波澜壮阔的桌面控制之战。*[00:03:00]* 这场战争亘古未有,因为:首先,在计算机时代,大公司拥有指数级的规模优势;其次,从未有过这么一场控制争夺战是如此变化多端。比尔·盖茨和史蒂夫·乔布斯? 他们也不知道结果会如何,但是到目前为止,这个故事进行到一半的时候,他们所争夺的所有东西都将发生改变、进化,最终上升到云端。
*[00:03:30]* 好的,让我们回到 1983 年的秋季。还有六年我才出生。那时候的总统还是<ruby> 罗纳德·里根 <rt> Ronald Reagan </rt></ruby>,美国和苏联扬言要把地球拖入核战争之中。在檀香山(火奴鲁鲁)的市政中心正在举办一年一度的苹果公司销售会议。一群苹果公司的员工正在等待史蒂夫·乔布斯上台。他 28 岁,热情洋溢,看起来非常自信。乔布斯很严肃地对着麦克风说他邀请了三个行业专家来就软件进行了一次小组讨论。*[00:04:00]* 然而随后发生的事情你肯定想不到。超级俗气的 80 年代音乐响彻整个房间。一堆多彩灯管照亮了舞台,然后一个播音员的声音响起-
**配音:**女士们,先生们,现在是麦金塔软件的约会游戏时间。
**Saron Yitbarek:**乔布斯的脸上露出一个大大的笑容,台上有三个 CEO 都需要轮流向他示好。这基本上就是 80 年代钻石王老五,不过是科技界的。*[00:04:30]* 两个软件大佬讲完话后,然后就轮到第三个人讲话了。仅此而已不是吗?是的。新面孔比尔·盖茨带着一个大大的遮住了半张脸的方框眼镜。他宣称在 1984 年,微软的一半收入将来自于麦金塔软件。他的这番话引来了观众热情的掌声。*[00:05:00]* 但是他们不知道的是,在一个月后,比尔·盖茨将会宣布发布 Windows 1.0 的计划。你永远也猜不到乔布斯正在跟苹果未来最大的敌人打情骂俏。但微软和苹果即将经历科技史上最糟糕的婚礼。他们会彼此背叛、相互毁灭,但又深深地、痛苦地捆绑在一起。
**James Allworth:***[00:05:30]* 我猜从哲学上来讲,一个更理想化、注重用户体验高于一切,是一个一体化的组织,而微软则更务实,更模块化 ——
**Saron Yitbarek:**这位是 James Allworth。他是一位多产的科技作家,曾在苹果零售的企业团队工作。注意他给出的苹果的定义,一个一体化的组织,那种只对自己负责的公司,一个不想依赖别人的公司,这是关键。
**James Allworth:***[00:06:00]* 苹果是一家一体化的公司,它希望专注于令人愉悦的用户体验,这意味着它希望控制整个技术栈以及交付的一切内容:从硬件到操作系统,甚至运行在操作系统上的应用程序。当新的创新、重要的创新刚刚进入市场,而你需要横跨软硬件,并且能够根据自己意愿和软件的革新来改变硬件时,这是一个优势。例如 ——,
**Saron Yitbarek:***[00:06:30]* 很多人喜欢这种一体化的模式,并因此成为了苹果的铁杆粉丝。还有很多人则选择了微软。让我们回到檀香山的销售会议上,在同一场活动中,乔布斯向观众展示了他即将发布的超级碗广告。你可能已经亲眼见过这则广告了。想想<ruby> 乔治·奥威尔 <rt> George Orwell </rt></ruby>的《一九八四》。在这个冰冷、灰暗的世界里,无意识的机器人正在独裁者的投射凝视下徘徊。*[00:07:00]* 这些机器人就像是 IBM 的用户们。然后,代表苹果公司的漂亮而健美的<ruby> 安娅·梅杰 <rt> Anya Major </rt></ruby>穿着鲜艳的衣服跑过大厅。她向着大佬们的屏幕猛地投出大锤,将它砸成了碎片。老大哥的咒语解除了,一个低沉的声音响起,苹果公司要开始介绍麦金塔了。
**配音:**这就是为什么现实中的 1984 跟小说《一九八四》不一样了。
**Saron Yitbarek:**是的,现在回顾那则广告,认为苹果是一个致力于解放大众的自由斗士的想法有点过分了。但这件事触动了我的神经。*[00:07:30]* Ken Segal 曾在为苹果制作这则广告的广告公司工作过。在早期,他为史蒂夫·乔布斯做了十多年的广告。
**Ken Segal:**1984 这则广告的风险很大。事实上,它的风险太大,以至于苹果公司在看到它的时候都不想播出它。你可能听说了史蒂夫喜欢它,但苹果公司董事会的人并不喜欢它。事实上,他们很愤怒,这么多钱被花在这么一件事情上,以至于他们想解雇广告代理商。*[00:08:00]* 史蒂夫则为我们公司辩护。
**Saron Yitbarek:**乔布斯,一如既往地,慧眼识英雄。
**Ken Segal:**这则广告在公司内、在业界内都引起了共鸣,成为了苹果产品的代表。无论人们那天是否有在购买电脑,它都带来了一种持续了一年又一年的影响,并有助于定义这家公司的品质:我们是叛军,我们是拿着大锤的人。
**Saron Yitbarek:***[00:08:30]* 因此,在争夺数十亿潜在消费者心智的过程中,苹果公司和微软公司的帝王们正在学着把自己塑造成救世主、非凡的英雄、一种对生活方式的选择。但比尔·盖茨知道一些苹果难以理解的事情。那就是在一个相互连接的世界里,没有人,即便是帝王,也不能独自完成任务。
*[00:09:00]* 1985 年 6 月 25 日。盖茨给当时的苹果 CEO John Scully 发了一份备忘录。那是一个迷失的年代。乔布斯刚刚被逐出公司,直到 1996 年才回到苹果。也许正是因为乔布斯离开了,盖茨才敢写这份东西。在备忘录中,他鼓励苹果授权制造商分发他们的操作系统。我想读一下备忘录的最后部分,让你们知道这份备忘录是多么的有洞察力。*[00:09:30]* 盖茨写道:“如果没有其他个人电脑制造商的支持,苹果现在不可能让他们的创新技术成为标准。苹果必须开放麦金塔的架构,以获得快速发展和建立标准所需的支持。”换句话说,你们不要再自己玩自己的了。你们必须有与他人合作的意愿。你们必须与开发者合作。
*[00:10:00]* 多年后你依然可以看到这条哲学思想,当微软首席执行官<ruby> 史蒂夫·鲍尔默 <rt> Steve Ballmer </rt></ruby>上台做主题演讲时,他开始大喊:“开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。”你懂我的意思了吧。微软喜欢开发人员。虽然目前(LCTT 译注:本播客发布于 2018 年初)他们不打算与这些开发人员共享源代码,但是他们确实想建立起整个合作伙伴生态系统。*[00:10:30]* 而当比尔·盖茨建议苹果公司也这么做时,如你可能已经猜到的,这个想法就被苹果公司抛到了九霄云外。他们的关系产生了间隙,五个月后,微软发布了 Windows 1.0。战争开始了。
>
> 开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者,开发者。
>
>
>
*[00:11:00]* 你正在收听的是来自红帽公司的原创播客《代码英雄》。本集是第一集,我们将回到过去,重温操作系统战争的史诗故事,我们将会发现,科技巨头之间的战争是如何为我们今天所生活的开源世界扫清道路的
好的,让我们先来个背景故事吧。如果你已经听过了,那么请原谅我,但它很经典。当时是 1979 年,史蒂夫·乔布斯开车去<ruby> 帕洛阿尔托 <rt> Palo Alto </rt></ruby>的<ruby> 施乐帕克研究中心 <rt> Xerox PARC research center </rt></ruby>。*[00:11:30]* 那里的工程师一直在为他们所谓的图形用户界面开发一系列的元素。也许你听说过。它们有菜单、滚动条、按钮、文件夹和重叠的窗口。这是对计算机界面的一个美丽的新设想。这是前所未有的。作家兼记者 Steven Levy 会谈到它的潜力。
**Steven Levy:***[00:12:00]* 对于这个新界面来说,有很多令人激动的地方,它比以前的交互界面更友好,以前用的所谓的命令行 —— 你和电脑之间的交互方式跟现实生活中的交互方式完全不同。鼠标和电脑上的图像让你可以做到像现实生活中的交互一样,你可以像指向现实生活中的东西一样指向电脑上的东西。这让事情变得简单多了。你无需要记住所有那些命令。
**Saron Yitbarek:***[00:12:30]* 不过,施乐的高管们并没有意识到他们正坐在金矿上。一如既往地,工程师比主管们更清楚它的价值。因此那些工程师,当被要求向乔布斯展示所有这些东西是如何工作时,有点紧张。然而这是毕竟是高管的命令。乔布斯觉得,用他的话来说,“这个产品天才本来能够让施乐公司垄断整个行业,可是它最终会被公司的经营者毁掉,因为他们对产品的好坏没有概念。”*[00:13:00]* 这话有些苛刻,但是,乔布斯带着一卡车施乐高管错过的想法离开了会议。这几乎包含了他需要革新桌面计算体验的所有东西。1983 年,苹果发布了 Lisa 电脑,1984 年又发布了 Mac 电脑。这些设备的创意是抄袭自施乐公司的。
让我感兴趣的是,乔布斯对控诉他偷了图形用户界面的反应。他对此很冷静。他引用毕加索的话:“好的艺术家抄袭,伟大的艺术家偷窃。”他告诉一位记者,“我们总是无耻地窃取伟大的创意。”*[00:13:30]* 伟大的艺术家偷窃,好吧,我的意思是,我们说的并不是严格意义上的“偷窃”。没人拿到了专有的源代码并公然将其集成到他们自己的操作系统中去。这要更温和些,更像是创意的借用。这就难控制的多了,就像乔布斯自己即将学到的那样。传奇的软件奇才、真正的代码英雄 Andy Hertzfeld 就是麦金塔开发团队的最初成员。
**Andy Hertzfeld:***[00:14:00]* 是的,微软是我们的第一个麦金塔电脑软件合作伙伴。当时,我们并没有把他们当成是竞争对手。他们是苹果之外,我们第一家交付麦金塔电脑原型的公司。我通常每周都会和微软的技术主管聊一次。他们是第一个尝试我们所编写软件的外部团队。*[00:14:30]* 他们给了我们非常重要的反馈,总的来说,我认为我们的关系非常好。但我也注意到,在我与技术主管的交谈中,他开始问一些系统实现方面的问题,而他本无需知道这些,我觉得,他们想要复制麦金塔电脑。我很早以前就向史蒂夫·乔布斯反馈过这件事,但在 1983 年秋天,这件事达到了高潮。*[00:15:00]* 我们发现,他们在 1983 年 11 月的 COMDEX 上发布了 Windows,但却没有提前告诉我们。对此史蒂夫·乔布斯勃然大怒。他认为那是一种背叛。
**Saron Yitbarek:**随着新版 Windows 的发布,很明显,微软从苹果那里学到了苹果从施乐那里学来的所有想法。乔布斯勃然大怒。他那句“伟大的艺术家偷窃”的毕加索名言这下失灵了,恐怕现在轮到盖茨来用这句话了。*[00:15:30]* 据报道,当乔布斯怒斥盖茨偷了他们的东西时,盖茨回应道:“史蒂夫,我觉得这更像是我们都有一个叫施乐的富有邻居,我闯进他家偷电视机,却发现你已经偷过了”。苹果最终以窃取 GUI 的外观和风格为名起诉了微软。这个案子持续了好几年,但是在 1993 年,第 9 巡回上诉法院的一名法官最终站在了微软一边。*[00:16:00]* Vaughn Walker 法官宣布外观和风格不受版权保护。这是非常重要的。这一决定使苹果无法凭借其界面垄断桌面计算。很快,苹果短暂的领先优势消失了。以下是 Steven Levy 的观点。
**Steven Levy:**他们之所以失去领先地位,不是因为微软方面窃取了知识产权,而是因为他们无法巩固自己在上世纪 80 年代拥有的更好的操作系统的优势。坦率地说,他们的电脑索价过高。*[00:16:30]* 因此微软从 20 世纪 80 年代中期开始开发 Windows 系统,但直到 1990 年开发出的 Windows 3,我想,它才真正算是一个为黄金时期做好准备的版本,才真正可供大众使用。*[00:17:00]* 从此以后,微软能够将数以亿计的用户迁移到图形界面,而这是苹果无法做到的。虽然苹果公司有一个非常好的操作系统,但是那已经是 1984 年的产品了。
**Saron Yitbarek:**现在微软主导着操作系统的战场。他们占据了 90% 的市场份额,并且针对各种各样的个人电脑进行了标准化。操作系统的未来看起来会由微软掌控。此后发生了什么?*[00:17:30]* 1997 年,波士顿 Macworld 博览会上,你看到了一个几近破产的苹果,一个谦逊的多的史蒂夫·乔布斯走上舞台,开始谈论伙伴关系的重要性,特别是他们与微软的新型合作伙伴关系。史蒂夫·乔布斯呼吁双方缓和关系,停止火拼。微软将拥有巨大的市场份额。从表面看,我们可能会认为世界和平了。但当利益如此巨大时,事情就没那么简单了。*[00:18:00]* 就在苹果和微软在数十年的争斗中伤痕累累、最终败退到死角之际,一名 21 岁的芬兰计算机科学专业学生出现了。几乎是偶然地,他彻底改变了一切。
我是 Saron Yitbarek,这里是代码英雄。
正当某些科技巨头正忙着就专有软件相互攻击时,自由软件和开源软件的新领军者如雨后春笋般涌现。*[00:18:30]* 其中一位优胜者就是<ruby> 理查德·斯托尔曼 <rt> Richard Stallman </rt></ruby>。你也许对他的工作很熟悉。他想要有自由软件和自由社会。这就像言论自由一样的<ruby> 自由 <rt> free </rt></ruby>,而不是像免费啤酒一样的<ruby> 免费 <rt> free </rt></ruby>。早在 80 年代,斯托尔曼就发现,除了昂贵的专有操作系统(如 UNIX)外,就没有其他可行的替代品。因此他决定自己做一个。斯托尔曼的<ruby> 自由软件基金会 <rt> Free Software Foundation </rt></ruby>开发了 GNU,当然,它的意思是 “GNU’s not UNIX”。它将是一个像 UNIX 一样的操作系统,但不包含所有的 UNIX 代码,而且用户可以自由共享。
*[00:19:00]* 为了让你体会到上世纪 80 年代自由软件概念的重要性,从不同角度来说拥有 UNIX 代码的两家公司,<ruby> AT&T 贝尔实验室 <rt> AT&T Bell Laboratories </rt></ruby>以及<ruby> UNIX 系统实验室 <rt> UNIX System Laboratories </rt></ruby>威胁将会起诉任何看过 UNIX 源代码后又创建自己操作系统的人。这些人是次级专利所属。*[00:19:30]* 用这两家公司的话来说,所有这些程序员都在“精神上受到了污染”,因为他们都见过 UNIX 代码。在 UNIX 系统实验室和<ruby> 伯克利软件设计公司 <rt> Berkeley Software Design </rt></ruby>之间的一个著名的法庭案例中,有人认为任何功能类似的系统,即使它本身没有使用 UNIX 代码,也侵犯版权。Paul Jones 当时是一名开发人员。他现在是数字图书馆 ibiblio.org 的主管。
**Paul Jones:***[00:20:00]* 任何看过代码的人都受到了精神污染是他们的观点。因此几乎所有在安装有与 UNIX 相关操作系统的电脑上工作过的人以及任何在计算机科学部门工作的人都受到精神上的污染。因此,在 USENIX 的一年里,我们都得到了一写带有红色字母的白色小别针,上面写着“精神受到了污染”。我们很喜欢带着这些别针到处走,以表达我们跟着贝尔实验室混,因为我们的精神受到了污染。
**Saron Yitbarek:***[00:20:30]* 整个世界都被精神污染了。想要保持纯粹、保持事物的美好和专有的旧哲学正变得越来越不现实。正是在这被污染的现实中,历史上最伟大的代码英雄之一诞生了,他是一个芬兰男孩,名叫<ruby> 林纳斯·托瓦兹 <rt> Linus Torvalds </rt></ruby>。如果这是《星球大战》,那么林纳斯·托瓦兹就是我们的<ruby> 卢克·天行者 <rt> Luke Skywalker </rt></ruby>。他是赫尔辛基大学一名温文尔雅的研究生。*[00:21:00]* 有才华,但缺乏大志。典型的被逼上梁山的英雄。和其他年轻的英雄一样,他也感到沮丧。他想把 386 处理器整合到他的新电脑中。他对自己的 IBM 兼容电脑上运行的 MS-DOS 操作系统并不感冒,也负担不起 UNIX 软件 5000 美元的价格,而只有 UNIX 才能让他自由地编程。解决方案是托瓦兹于 1991 年春天在 MINIX 系统上开发的一个名为 Linux 的操作系统内核。一个完全属于他自己的操作系统内核。
**Steven Vaughan-Nichols:***[00:21:30]* 林纳斯·托瓦兹真的只是想找点乐子而已。
**Saron Yitbarek:**Steven Vaughan-Nichols 是 ZDNet.com 的特约编辑,而且他从科技行业出现以来就一直在写科技行业相关的内容。
**Steven Vaughan-Nichols:**当时有几个类似的操作系统。他最关注的是一个名叫 MINIX 的操作系统,MINIX 旨在让学生学习如何构建操作系统。林纳斯看到这些,觉得很有趣,但他想建立自己的操作系统。*[00:22:00]* 所以,它实际上始于赫尔辛基的一个 DIY 项目。一切就这样开始了,基本上就是一个大孩子在玩耍,学习如何做些什么。*[00:22:30]* 但不同之处在于,他足够聪明、足够执着,也足够友好,让所有其他人都参与进来,然后他开始把这个项目进行到底。27 年后,这个项目变得比他想象的要大得多。
**Saron Yitbarek:**到 1991 年秋季,托瓦兹发布了 10000 行代码,世界各地的人们开始评头论足,然后进行优化、添加和修改代码。*[00:23:00]* 对于今天的开发人员来说,这似乎很正常,但请记住,在那个时候,像这样的开放协作是对微软、苹果和 IBM 已经做的很好的整个专有系统的道德侮辱。随后这种开放性被奉若神明。托瓦兹将 Linux 置于 GNU 通用公共许可证(GPL)之下。曾经保障斯托尔曼的 GNU 系统自由的许可证现在也将保障 Linux 的自由。Vaughan-Nichols 解释道,这种融入到 GPL 的重要性怎么强调都不过分,它基本上能永远保证软件的自由和开放性。
**Steven Vaughan-Nichols:***[00:23:30]* 事实上,根据 Linux 所遵循的许可协议,即 GPL 第 2 版,如果你想贩卖 Linux 或者向全世界展示它,你必须与他人共享代码,所以如果你对其做了一些改进,仅仅给别人使用是不够的。事实上你必须和他们分享所有这些变化的具体细节。然后,如果这些改进足够好,就会被 Linux 所吸收。
**Saron Yitbarek:***[00:24:00]* 事实证明,这种公开的方式极具吸引力。<ruby> 埃里克·雷蒙德 <rt> Eric Raymond </rt></ruby> 是这场运动的早期传道者之一,他在他那篇著名的文章中写道:“微软和苹果这样的公司一直在试图建造软件大教堂,而 Linux 及类似的软件则提供了一个由不同议程和方法组成的巨大集市,集市比大教堂有趣多了。”
**Stormy Peters:**我认为在那个时候,真正吸引人的是人们终于可以把控自己的世界了。
**Saron Yitbarek:**Stormy Peters 是一位行业分析师,也是自由和开源软件的倡导者。
**Stormy Peters:***[00:24:30]* 当开源软件第一次出现的时候,所有的操作系统都是专有的。如果不使用专有软件,你甚至不能添加打印机,你不能添加耳机,你不能自己开发一个小型硬件设备,然后让它在你的笔记本电脑上运行。你甚至不能放入 DVD 并复制它,因为你不能改变软件,即使你拥有这张 DVD,你也无法复制它。*[00:25:00]* 你无法控制你购买的硬件/软件系统。你不能从中创造出任何新的、更大的、更好的东西。这就是为什么开源操作系统在一开始就如此重要的原因。我们需要一个开源协作环境,在那里我们可以构建更大更好的东西。
**Saron Yitbarek:**请注意,Linux 并不是一个纯粹的平等主义乌托邦。林纳斯·托瓦兹不会批准对内核的所有修改,而是主导了内核的变更。他安排了十几个人来管理内核的不同部分。*[00:25:30]* 这些人也会信任自己下面的人,以此类推,形成信任金字塔。变化可能来自任何地方,但它们都是经过判断和策划的。
然而,考虑到林纳斯的 DIY 项目一开始是多么的简陋和随意,这项成就令人十分惊讶。他完全不知道自己就是这一切中的卢克·天行者。当时他只有 21 岁,一半的时间都在编程。但是当魔盒第一次被打开,人们开始给他反馈。*[00:26:00]* 几十个,然后几百个,成千上万的贡献者。有了这样的众包基础,Linux 很快就开始成长。真的成长得很快。甚至最终引起了微软的注意。他们的首席执行官<ruby> 史蒂夫·鲍尔默 <rt> Steve Ballmer </rt></ruby>将 Linux 称为是“一种癌症,从知识产权的角度来看,它传染了接触到的任何东西”。Steven Levy 解释了鲍尔默这种态度的由来。
**Steven Levy:***[00:26:30]* 一旦微软真正巩固了它的垄断地位,而且它也确实被联邦法院判定为垄断,他们将会对任何可能对其构成威胁的事情做出强烈反应。因此,既然他们对软件收费,很自然地,他们将自由软件的出现看成是一种癌症。他们试图提出一个知识产权理论,来解释为什么这对消费者不利。
**Saron Yitbarek:***[00:27:00]* Linux 在不断传播,微软也开始担心起来。到了 2006 年,Linux 成为仅次于 Windows 的第二大常用操作系统,全球约有 5000 名开发人员在开发它。5000 名开发者。还记得比尔·盖茨给苹果公司的备忘录吗?在那份备忘录中,他向苹果公司的员工们论述了与他人合作的重要性。事实证明,开源将把伙伴关系的概念提升到一个全新的水平,这是比尔·盖茨从未预见到的。
*[00:27:30]* 我们一直在谈论操作系统之间的大战,但是到目前为止,并没有怎么提到无名英雄和开发者们。在下次的代码英雄中,情况就不同了。第二集讲的是操作系统大战的第二部分,是关于 Linux 崛起的。业界醒悟过来,认识到了开发人员的重要性。这些开源反叛者变得越来越强大,战场从桌面转移到了服务器领域。*[00:28:00]* 这里有商业间谍活动、新的英雄人物,还有科技史上最不可思议的改变。这一切都在操作系统大战的后半集内达到了高潮。
要想免费自动获得新一集的代码英雄,请点击订阅苹果播客、Spotify、谷歌 Play,或其他应用获取该播客。在这一季剩下的时间里,我们将参观最新的战场,相互争斗的版图,这里是下一代的代码英雄留下印记的地方。*[00:28:30]* 更多信息,请访问 <https://redhat.com/commandlineheroes> 。我是 Saron Yitbarek。下次之前,继续编码。
---
via: <https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-1>
作者:[redhat](https://www.redhat.com) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The OS wars. It is a period of mounting tensions. The empires of Bill Gates and Steve Jobs careen toward an inevitable battle over proprietary software—only one empire can emerge as the purveyor of a standard operating system for millions of users. Gates has formed a powerful alliance with IBM while Jobs tries to maintain the purity of his brand. Their struggle for dominance threatens to engulf the galaxy. Meanwhile, in distant lands, and unbeknownst to the Emperors, open source rebels have begun to gather...
Veterans from computer history, including [Andy Hertzfeld](https://twitter.com/andyhertzfeld), from the original Macintosh team, and acclaimed tech journalist [Steven Levy](https://twitter.com/StevenLevy), recount the moments of genius, and tragic flaws, that shaped our technology for decades to come.
*Saron Yitbarek*
Some stories are so epic, with such high stakes, that in my head, it's like that crawling text at the start of a Star Wars movie. You know, like-
*Voice Actor*
Episode One, The OS Wars.
*Saron Yitbarek*
Yeah, like that.
**00:30** - *Voice Actor*
It is a period of mounting tensions. The empires of Bill Gates and Steve Jobs careen toward an inevitable battle over proprietary software. Gates has formed a powerful alliance with IBM, while Jobs refuses to license his hardware or operating system. Their battle for dominance threatens to engulf the galaxy in an OS war. Meanwhile, in distant lands, and unbeknownst to the emperors, open source rebels have begun to gather.
**01:00** - *Saron Yitbarek*
Okay. Maybe that's a bit dramatic, but when we're talking about the OS wars of the 1980s, '90s, and 2000s, it's hard to overstate things. There really was an epic battle for dominance. Steve Jobs and Bill Gates really did hold the fate of billions in their hands. Control the operating system, and you control how the vast majority of people use computers, how we communicate with each other, how we source information. I could go on, but you know all this. Control the OS, and you would be an emperor.
**01:30** - *Saron Yitbarek*
I'm Saron Yitbarek, and you're listening to Command Line Heroes, an original podcast from Red Hat. What is a Command Line Hero, you ask? Well, if you would rather make something than just use it, if you believe developers have the power to build a better future, if you want a world where we all get a say in how our technologies shape our lives, then you, my friend, are a command line hero. In this series, we bring you stories from the developers among us who are transforming tech from the command line up.
**02:00** - *Saron Yitbarek*
And who am I to be guiding you on this trek? Who is Saron Yitbarek? Well, actually I'm guessing I'm a lot like you. I'm a developer for starters, and everything I do depends on open source software. It's my world. The stories we tell on this podcast are a way for me to get above the daily grind of my work, and see that big picture. I hope it does the same thing for you, too.
**02:30** - *Saron Yitbarek*
What I wanted to know right off the bat was, where did open source technology even come from? I mean, I know a fair bit about Linus Torvalds and the glories of Linux®, as I'm sure you do, too, but really, there was life before open source, right? And if I want to truly appreciate the latest and greatest of things like DevOps and containers, and on and on, well, I feel like I owe it to all those earlier developers to know where this stuff came from. So, let's take a short break from worrying about memory leaks and buffer overflows.
**03:00** - *Saron Yitbarek*
Our journey begins with the OS wars, the epic battle for control of the desktop. It was like nothing the world had ever seen, and I'll tell you why. First, in the age of computing, you've got exponentially scaling advantages for the big fish; and second, there's never been such a battle for control on ground that's constantly shifting. Bill Gates and Steve Jobs? They don't know it yet, but by the time this story is halfway done, everything they're fighting for is going to change, evolve, and even ascend into the cloud.
**03:30** - *Saron Yitbarek*
Okay, it's the fall of 1983. I was negative six years old. Ronald Reagan was president, and the U.S. and the Soviet Union are threatening to drag the planet into nuclear war. Over at the Civic Center in Honolulu, it's the annual Apple sales conference. An exclusive bunch of Apple employees are waiting for Steve Jobs to get onstage. He's this super bright-eyed 28-year-old, and he's looking pretty confident. In a very serious voice, Jobs speaks into the mic and says that he's invited three industry experts to have a panel discussion on software.
**04:00** - *Saron Yitbarek*
But the next thing that happens is not what you'd expect. Super cheesy '80s music fills the room. A bunch of multi-colored tube lights light up the stage, and then an announcer voice says-
*Voice Actor*
And now, ladies and gentlemen, the Macintosh software dating game.
**04:30** - *Saron Yitbarek*
Jobs has this big grin on his face as he reveals that the three CEOs on stage have to take turns wooing him. It's essentially an '80s version of The Bachelor, but for tech love. Two of the software bigwigs say their bit, and then it's over to contestant number three. Is that? Yup. A fresh-faced Bill Gates with large square glasses that cover half his face. He proclaims that during 1984, half of Microsoft's revenue is going to come from Macintosh software.
**05:00** - *Saron Yitbarek*
The audience loves it, and gives him a big round of applause. What they don't know is that one month after this event, Bill Gates will announce his plans to release Windows 1.0. You'd never guess Jobs is flirting with someone who'd end up as Apple's biggest rival. But Microsoft and Apple are about to live through the worst marriage in tech history. They're going to betray each other, they're going to try and destroy each other, and they're going to be deeply, painfully bound to each other.
**05:30** - *James Allworth*
I guess philosophically, one was more idealistic and focused on the user experience above all else, and was an integrated organization, whereas Microsoft much more pragmatic, a modular focus-
*Saron Yitbarek*
That's James Allworth. He's a prolific tech writer who worked inside the corporate team of Apple Retail. Notice that definition of Apple he gives. An integrated organization. That sense of a company beholden only to itself. A company that doesn't want to rely on others. That's key.
**06:00** - *James Allworth*
Apple was the integrated player, and it wanted to focus on a delightful user experience, and that meant that it wanted to control the entire stack and everything that was delivered, from the hardware to the operating system, to even some of the applications that ran on top of the operating system. That always served it well in periods where new innovations, important innovations, were coming to market where you needed to be across both hardware and software, and where being able to change the hardware based on what you wanted to do and what was new in software was an advantage. For example-
**06:30** - *Saron Yitbarek*
A lot of people loved that integration, and became die hard Apple fans. Plenty of others stuck with Microsoft. Back to that sales conference in Honolulu. At that very same event, Jobs gave his audience a sneak peek at the Superbowl ad he was about to release. You might have seen it for yourself. Think George Orwell's 1984. In this cold and gray world, mindless automatons are shuffling along under a dictator's projected gaze.
**07:00** - *Saron Yitbarek*
They represent IBM users. Then, beautiful, athletic Anya Major, representing Apple, comes running through the hall in full color. She hurls her sledgehammer at Big Brother's screen, smashing it to bits. Big Brother's spell is broken, and a booming voice tells us that Apple is about to introduce the Macintosh.
*Voice Actor*
And you'll see why 1984 will not be like 1984.
**07:30** - *Saron Yitbarek*
And yeah, looking back at that commercial, the idea that Apple was a freedom fighter working to set the masses free is a bit much. But the thing hit a nerve. Ken Segal worked at the advertising firm that made the commercial for Apple. He was Steve Jobs' advertising guy for more than a decade in the early days.
**08:00** - *Ken Segal*
Well, the 1984 commercial came with a lot of risk. In fact, it was so risky that Apple didn't want to run it when they saw it. You've probably heard stories that Steve liked it, but the Apple board did not like it. In fact, they were so outraged that so much money had been spent on such a thing that they wanted to fire the ad agency. Steve was the one sticking up for the agency.
*Saron Yitbarek*
Jobs, as usual, knew a good mythology when he saw one.
*Ken Segal*
That commercial struck such a chord within the company, within the industry, that it became this thing for Apple. Whether or not people were buying computers that day, it had a sort of an aura that stayed around for years and years and years, and helped define the character of the company. We're the rebels. We're the guys with the sledgehammer.
**08:30** - *Saron Yitbarek*
So in their battle for the hearts and minds of literally billions of potential consumers, the emperors of Apple and Microsoft were learning to frame themselves as redeemers. As singular heroes. As lifestyle choices. But Bill Gates knew something that Apple had trouble understanding. This idea that in a wired world, nobody, not even an emperor, can really go it alone.
**09:00** - *Saron Yitbarek*
June 25th, 1985. Gates sends a memo to Apple's then CEO John Scully. This was during the wilderness years. Jobs had just been excommunicated, and wouldn't return to Apple until 1996. Maybe it was because Jobs was out that Gates felt confident enough to write what he wrote. In the memo, he encourages Apple to license their OS to clone makers. I want to read a bit from the end of the memo, just to give you a sense of how perceptive it was.
**09:30** - *Saron Yitbarek*
Gates writes, "It is now impossible for Apple to create a standard out of their innovative technology without support from other personal computer manufacturers. Apple must open the Macintosh architecture to have the independent support required to gain momentum and establish a standard." In other words, no more operating in a silo, you guys. You've got to be willing to partner with others. You have to work with developers.
**10:00** - *Saron Yitbarek*
You see this philosophy years later, when Microsoft CEO Steve Ballmer gets up on stage to give a keynote and he starts shouting, "Developers, developers, developers, developers, developers, developers. Developers, developers, developers, developers, developers, developers, developers, developers." You get the idea. Microsoft likes developers. Now, they're not about to share source code with them, but they do want to build this whole ecosystem of partners. And when Bill Gates suggests that Apple do the same, as you might have guessed, the idea is tossed out the window.
**10:30** - *Saron Yitbarek*
Apple had drawn a line in the sand, and five months after they trashed Gates' memo, Microsoft released Windows 1.0. The war was on.
Developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers, developers.
**11:00** - *Saron Yitbarek*
You're listening to Command Line Heroes, an original podcast from Red Hat. In this inaugural episode, we go back in time to relive the epic story of the OS wars, and we're going to find out, how did a war between tech giants clear the way for the open source world we all live in today?
**11:30** - *Saron Yitbarek*
Okay, a little backstory. Forgive me if you've heard this one, but it's a classic. It's 1979, and Steve Jobs drives up to the Xerox PARC research center in Palo Alto. The engineers there have been developing this whole fleet of elements for what they call a graphical user interface. Maybe you've heard of it. They've got menus, they've got scroll bars, they've got buttons and folders and overlapping windows. It was a beautiful new vision of what a computer interface could look like. And nobody had any of this stuff. Author and journalist Steven Levy talks about its potential.
**12:00** - *Steven Levy*
There was a lot of excitement about this new interface that was going to be much friendlier than what we had before, which used what was called the command line, where there was really no interaction between you and the computer in the way you'd interact with something in real life. The mouse and the graphics on the computer gave you a way to do that, to point to something just like you'd point to something in real life. It made it a lot easier. You didn't have to memorize all these codes.
**12:30** - *Saron Yitbarek*
Except, the Xerox executives did not get that they were sitting on top of a platinum mine. The engineers were more aware than the execs. Typical. So those engineers were, yeah, a little stressed out that they were instructed to show Jobs how everything worked. But the executives were calling the shots. Jobs felt, quote, "The product genius that brought them to that monopolistic position gets rotted out by people running these companies that have no conception of a good product versus a bad product."
**13:00** - *Saron Yitbarek*
That's sort of harsh, but hey, Jobs walked out of that meeting with a truckload of ideas that Xerox executives had missed. Pretty much everything he needed to revolutionize the desktop computing experience. Apple releases the Lisa in 1983, and then the Mac in 1984. These devices were built on the ideas swiped from Xerox.
**13:30** - *Saron Yitbarek*
What's interesting to me is Jobs' reaction to the claim that he stole the GUI. He's pretty philosophical about it. He quotes Picasso, saying, "Good artists copy, great artists steal." He tells one reporter, "We have always been shameless about stealing great ideas." Great artists steal. Okay. I mean, we're not talking about stealing in a hard sense. Nobody's obtaining proprietary source code and blatantly incorporating it into their operating system. This is softer, more like idea borrowing. And that's much more difficult to control, as Jobs himself was about to learn. Legendary software wizard, and true command line hero, Andy Hertzfeld, was an original member of the Macintosh development team.
**14:00** - *Andy Hertzfeld*
Yeah, Microsoft was our first software partner with the Macintosh. At the time, we didn't really consider them a competitor. They were the very first company outside of Apple that we gave Macintosh prototypes to. I talked with the technical lead at Microsoft usually once a week. They were the first outside party trying out the software that we wrote.
**14:30** - *Andy Hertzfeld*
They gave us very important feedback, and in general I would say the relationship was pretty good. But I also noticed in my conversations with the technical lead, he started asking questions that he didn't really need to know about how the system was implemented, and I got the idea that they were trying to copy the Macintosh. I told Steve Jobs about it pretty early on, but it really came to a head in the fall of 1983.
**15:00** - *Andy Hertzfeld*
We discovered that they actually, without telling us ahead of time, they announced Windows at the COMDEX in November 1983 and Steve Jobs hit the roof. He really considered that a betrayal.
**15:30** - *Saron Yitbarek*
As newer versions of Windows were released, it became pretty clear that Microsoft had lifted from Apple all the ideas that Apple had lifted from Xerox. Jobs was apoplectic. His Picasso line about how great artists steal. Yeah. That goes out the window. Though maybe Gates was using it now. Reportedly, when Jobs screamed at Gates that he'd stolen from them, Gates responded, "Well Steve, I think it's more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set, and found out that you'd already stolen it." Apple ends up suing Microsoft for stealing the look and feel of their GUI. The case goes on for years, but in 1993, a judge from the 9th Circuit Court of Appeals finally sides with Microsoft.
**16:00** - *Saron Yitbarek*
Judge Vaughn Walker declares that look and feel are not covered by copyright. This is super important. That decision prevented Apple from creating a monopoly with the interface that would dominate desktop computing. Soon enough, Apple's brief lead had vanished. Here's Steven Levy's take.
**16:30** - *Steven Levy*
They lost the lead not because of intellectual property theft on Microsoft's part, but because they were unable to consolidate their advantage in having a better operating system during the 1980s. They overcharged for their computers, quite frankly. So Microsoft had been developing Windows, starting with the mid-1980s, but it wasn't until Windows 3 in 1990, I believe, where they really came across with a version that was ready for prime time. Ready for masses of people to use.
**17:00** - *Steven Levy*
At that point is where Microsoft was able to migrate huge numbers of people, hundreds of millions, over to the graphical interface in a way that Apple had not been able to do. Even though they had a really good operating system, they'd been using it since 1984.
**17:30** - *Saron Yitbarek*
Microsoft now dominated the OS battlefield. They held 90% of the market, and standardized their OS across a whole variety of PCs. The future of the OS looked like it'd be controlled by Microsoft. And then? Well, at the 1997 Macworld Expo in Boston, you have an almost bankrupt Apple. A more humble Steve Jobs gets on stage, and starts talking about the importance of partnerships, and one in particular, he says, has become very, very meaningful. Their new partnership with Microsoft. Steve Jobs is calling for a détente, a ceasefire. Microsoft could have their enormous market share. If we didn't know better, we might think we were entering a period of peace in the kingdom.
**18:00** - *Saron Yitbarek*
But when stakes are this high, it's never that simple. Just as Apple and Microsoft were finally retreating to their corners, pretty bruised from decades of fighting, along came a 21-year-old Finnish computer science student who, almost by accident, changed absolutely everything.
I'm Saron Yitbarek, and this is Command Line Heroes.
**18:30** - *Saron Yitbarek*
While certain tech giants were busy bashing each other over proprietary software, there were new champions of free and open source software popping up like mushrooms. One of these champions was Richard Stallman. You're probably familiar with his work. He wanted free software and a free society. That's free as in free speech, not free as in free beer. Back in the '80s, Stallman saw that there was no viable alternative to pricey, proprietary OSs, like UNIX. So, he decided to make his own. Stallman's Free Software Foundation developed GNU, which stood for GNU's not UNIX, of course. It'd be an OS like UNIX, but free of all UNIX code, and free for users to share.
**19:00** - *Saron Yitbarek*
Just to give you a sense of how important that idea of free software was in the 1980s, the companies that owned the UNIX code at different points, AT&T Bell Laboratories and then UNIX System Laboratories, they threatened lawsuits on anyone making their own OS after looking at UNIX source code. These guys were next-level proprietary.
**19:30** - *Saron Yitbarek*
All those programmers were, in the words of the two companies, “mentally contaminated,” because they'd seen UNIX code. In a famous court case between UNIX System Laboratories and Berkeley Software Design, it was argued that any functionally similar system, even though it didn't use the UNIX code itself, was a breach of copyright. Paul Jones was a developer at that time. He's now the director of the digital library ibiblio.org.
**20:00** - *Paul Jones*
Anyone who has seen any of the code is mentally contaminated was their argument. That would have made almost anyone who had worked on a computer operating system that involved UNIX, in any computer science department, was mentally contaminated. So in one year at USENIX, we all got little white bar pins with red letters that say mentally contaminated, and we all wear those around to our own great pleasure, to show that we were sticking it to Bell because we were mentally contaminated.
**20:30** - *Saron Yitbarek*
The whole world was getting mentally contaminated. Staying pure, keeping things nice and proprietary, that old philosophy was getting less and less realistic. It was into this contaminated reality that one of history's biggest command line heroes was born, a boy in Finland named Linus Torvalds. If this is Star Wars, then Linus Torvalds is our Luke Skywalker. He was a mild-mannered grad student at the University of Helsinki.
**21:00** - *Saron Yitbarek*
Talented, but lacking in grand visions. The classic reluctant hero. And, like any young hero, he was also frustrated. He wanted to incorporate the 386 processor into his new PC's functions. He wasn't impressed by the MS-DOS running on his IBM clone, and he couldn't afford the $5,000 price tag on the UNIX software that would have given him some programming freedom. The solution, which Torvalds crafted on MINIX in the spring of 1991, was an OS kernel called Linux. The kernel of an OS of his very own.
**21:30** - *Steven Vaughan-Nichols*
Linus Torvalds really just wanted to have something to play with.
*Saron Yitbarek*
Steven Vaughan-Nichols is a contributing editor at ZDNet.com, and he's been writing about the business of technology since there was a business of technology.
*Steven Vaughan-Nichols*
There were a couple of operating systems like it at the time. The main one that he was concerned about was called MINIX. That was an operating system that was meant for students to learn how to build operating systems. Linus looked at that, and thought that it was interesting, but he wanted to build his own.
**22:00** - *Steven Vaughan-Nichols*
So it really started as a do-it-yourself project at Helsinki. That's how it all started, is just basically a big kid playing around and learning how to do things. But what was different in his case is that he was both bright enough and persistent enough, and also friendly enough to get all these other people working on it, and then he started seeing the project through.
**22:30** - *Steven Vaughan-Nichols*
27 years later, it is much, much bigger than he ever dreamed it would be.
**23:00** - *Saron Yitbarek*
By the fall of 1991, Torvalds releases 10,000 lines of code, and people around the world start offering comments, then tweaks, additions, edits. That might seem totally normal to you as a developer today, but remember, at that time, open collaboration like that was a moral affront to the whole proprietary system that Microsoft, Apple, and IBM had done so well by. Then that openness gets enshrined. Torvalds places Linux under the GNU general public license. The license that had kept Stallman's GNU system free was now going to keep Linux free, too. The importance of that move to incorporate GPL, basically preserving the freedom and openness of the software forever, cannot be overstated. Vaughan-Nichols explains.
**23:30** - *Steven Vaughan-Nichols*
In fact, by the license that it's under, which is called GPL version 2, you have to share the code if you're going to try to sell it or present it to the world, so that if you make an improvement, it's not enough just to give someone the improvement. You actually have to share with them the nuts and bolts of all those changes. Then they are adapted into Linux if they're good enough.
**24:00** - *Saron Yitbarek*
That public approach proved massively attractive. Eric Raymond, one of the early evangelists of the movement wrote in his famous essay that, "Corporations like Microsoft and Apple have been trying to build software cathedrals, while Linux and its kind were offering a great babbling bazaar of different agendas and approaches. The bazaar was a lot more fun than the cathedral."
*Stormy Peters*
I think at the time, what attracted people is that they were going to be in control of their own world.
*Saron Yitbarek*
Stormy Peters is an industry analyst, and an advocate for free and open source software.
**24:30** - *Stormy Peters*
When open source software first came out, the OS was all proprietary. You couldn't even add a printer without going through proprietary software. You couldn't add a headset. You couldn't develop a small hardware device of your own, and make it work with your laptop. You couldn't even put in a DVD and copy it, because you couldn't change the software. Even if you owned the DVD, you couldn't copy it.
**25:00** - *Stormy Peters*
You had no control over this hardware/software system that you'd bought. You couldn't create anything new and bigger and better out of it. That's why an open source operating system was so important at the beginning. We needed an open source collaborative environment where we could build bigger and better things.
**25:30** - *Saron Yitbarek*
Mind you, Linux isn't a purely egalitarian utopia. Linus Torvalds doesn't approve everything that goes into the kernel, but he does preside over its changes. He's installed a dozen or so people below him to manage different parts of the kernel. They, in turn, trust people under themselves, and so on, in a pyramid of trust. Changes might come from anywhere, but they're all judged and curated.
**26:00** - *Saron Yitbarek*
It is amazing, though, to think how humble, and kind of random, Linus' DIY project was to begin with. He didn't have a clue he was the Luke Skywalker figure in all this. He was just 21, and had been programming half his life. But this was the first time the silo opened up, and people started giving him feedback. Dozens, then hundreds, and thousands of contributors. With crowdsourcing like that, it doesn't take long before Linux starts growing. Really growing. It even finally gets noticed by Microsoft. Their CEO, Steve Ballmer, called Linux, and I quote, "A cancer that attaches itself in an intellectual property sense to everything it touches." Steven Levy describes where Ballmer was coming from.
**26:30** - *Steven Levy*
Once Microsoft really solidified its monopoly, and indeed it was judged in federal court as a monopoly, anything that could be a threat to that, they reacted very strongly to. So of course, the idea that free software would be emerging, when they were charging for software, they saw as a cancer. They tried to come up with an intellectual property theory about why this was going to be bad for consumers.
**27:00** - *Saron Yitbarek*
Linux was spreading, and Microsoft was worried. By 2006, Linux would become the second most widely used operating system after Windows, with about 5,000 developers working on it worldwide. Five thousand. Remember that memo that Bill Gates sent to Apple, the one where he's lecturing them about the importance of partnering with other people? Turns out, open source would take that idea of partnerships to a whole new level, in a way Bill Gates would have never foreseen.
**27:30** - *Saron Yitbarek*
We've been talking about these huge battles for the OS, but so far, the unsung heroes, the developers, haven't fully made it onto the battlefield. That changes next time, on Command Line Heroes. In episode two, part two of the OS wars, it's the rise of Linux. Businesses wake up, and realize the importance of developers.
**28:00** - *Saron Yitbarek*
These open source rebels grow stronger, and the battlefield shifts from the desktop to the server room. There's corporate espionage, new heroes, and the unlikeliest change of heart in tech history. It all comes to a head in the concluding half of the OS wars.
**28:30** - *Saron Yitbarek*
To get new episodes of Command Line Heroes delivered automatically for free, make sure you hit subscribe on Apple podcasts, Spotify, Google Play, or however you get your podcasts. Over the rest of the season, we're visiting the latest battlefields, the up-for-grab territories where the next generation of Command Line Heroes are making their mark. For more info, check us out at redhat.com/commandlineheroes. I'm Saron Yitbarek. Until next time, keep on coding.
### Keep going
### (Linux and) the enduring magic of UNIX
Ross Turk—Red Hatter, gadget lover, open source advocate—shares the reason why he fell in love with UNIX in his youth and how Linux® has kept that love going for 25 years.
### How to install SQL Server 2017 on Red Hat Enterprise Linux with Ansible
Learn how to get started with Microsoft SQL Server 2017 on Red Hat® Enterprise Linux 7. |
11,252 | 如何在 Debian/Ubuntu 上设置自动安全更新(无人值守更新) | https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/ | 2019-08-21T00:49:56 | [
"更新"
] | https://linux.cn/article-11252-1.html | 
对于 Linux 管理员来说重要的任务之一是让系统保持最新状态,这可以使得你的系统更加稳健并且可以避免不想要的访问与攻击。
在 Linux 上安装软件包是小菜一碟,用相似的方法我们也可以更新安全补丁。
这是一个向你展示如何配置系统以接收自动安全更新的简单教程。不经审查就自动安装安全更新会带来一定风险,但好处也是实实在在的。
如果你不想错过安全补丁,且想要与最新的安全补丁保持同步,那你应该借助无人值守更新机制设置自动安全更新。
如果你不想要自动安全更新的话,你可以[在 Debian/Ubuntu 系统上手动安装安全更新](https://www.2daygeek.com/manually-install-security-updates-ubuntu-debian/)。
我们有许多可以自动化更新的办法,然而我们将先采用官方的方法之后我们会介绍其它方法。
### 如何在 Debian/Ubuntu 上安装无人值守更新包
无人值守更新包默认应该已经装在你的系统上。但万一它没被安装,就用下面的命令来安装。
使用 [APT-GET 命令](https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/)和 [APT 命令](https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/)来安装 `unattended-upgrades` 软件包。
```
$ sudo apt-get install unattended-upgrades
```
下方两个文件可以使你自定义该机制:
```
/etc/apt/apt.conf.d/50unattended-upgrades
/etc/apt/apt.conf.d/20auto-upgrades
```
### 在 50unattended-upgrades 文件中做出必要修改
默认情况下只有安全更新需要的最必要的选项被启用。但并不限于此,你可以配置其中的许多选项以使得这个机制更加有用。
为方便阐述,我精简了该文件,只保留了已启用的行:
```
# vi /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
"${distro_id}ESM:${distro_codename}";
};
Unattended-Upgrade::DevRelease "false";
```
有三个源被启用,细节如下:
* `${distro_id}:${distro_codename}`:这是必须的,因为安全更新可能会从非安全来源拉取依赖。
* `${distro_id}:${distro_codename}-security`:这用来从来源得到安全更新。
* `${distro_id}ESM:${distro_codename}`:这是用来从 ESM(扩展安全维护)获得安全更新。
**启用邮件通知:** 如果你想要在每次安全更新后收到邮件通知,那么就修改以下行段(取消其注释并加上你的 email 账号)。
从:
```
//Unattended-Upgrade::Mail "root";
```
修改为:
```
Unattended-Upgrade::Mail "[email protected]";
```
**自动移除不用的依赖:** 你可能需要在每次更新后运行 `sudo apt autoremove` 命令来从系统中移除不用的依赖。
我们可以通过修改以下行来自动化这项任务(取消注释并将 `false` 改成 `true`)。
从:
```
//Unattended-Upgrade::Remove-Unused-Dependencies "false";
```
修改为:
```
Unattended-Upgrade::Remove-Unused-Dependencies "true";
```
**启用自动重启:** 你可能需要在安全更新安装至内核后重启你的系统。你可以在以下行做出修改:
从:
```
//Unattended-Upgrade::Automatic-Reboot "false";
```
修改为:(取消注释并将 `false` 改成 `true` 以启用自动重启)
```
Unattended-Upgrade::Automatic-Reboot "true";
```
**启用特定时段的自动重启:** 如果自动重启已启用,且你想要在特定时段进行重启,那么做出以下修改。
从:
```
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```
修改为:(取消注释并将时间改成你需要的时间;我把重启时间设置在了早上 5 点)
```
Unattended-Upgrade::Automatic-Reboot-Time "05:00";
```
### 如何启用自动化安全更新?
现在我们已经配置好了必需的选项。接下来,打开以下文件,确认这两个值都已正确设置:值不应为 0(1=启用,0=禁用)。
```
# vi /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```
**详情:**
* 第一行使 `apt` 每天自动运行 `apt-get update`。
* 第二行使 `apt` 每天自动安装安全更新。
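如果你想在不真正安装任何软件包的情况下验证以上配置,可以先做一次“演练”(`unattended-upgrade` 是该软件包自带的命令,`--dry-run` 与 `--debug` 是它的标准选项):

```
$ sudo unattended-upgrade --dry-run --debug
```

它会按照你的配置模拟一次无人值守升级并打印详细过程,便于你确认来源和规则是否正确。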
---
via: <https://www.2daygeek.com/automatic-security-update-unattended-upgrades-ubuntu-debian/>
作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[tomjlw](https://github.com/tomjlw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
11,254 | 修复 Ubuntu 中 “E: The package cache file is corrupted, it has the wrong hash” | https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/ | 2019-08-21T22:20:25 | [] | https://linux.cn/article-11254-1.html | 
今天,我尝试更新我的 Ubuntu 18.04 LTS 的仓库列表,但收到了一条错误消息:“**E: The package cache file is corrupted, it has the wrong hash**”。这是我在终端运行的命令以及输出:
```
$ sudo apt update
```
示例输出:
```
Hit:1 http://it-mirrors.evowise.com/ubuntu bionic InRelease
Hit:2 http://it-mirrors.evowise.com/ubuntu bionic-updates InRelease
Hit:3 http://it-mirrors.evowise.com/ubuntu bionic-backports InRelease
Hit:4 http://it-mirrors.evowise.com/ubuntu bionic-security InRelease
Hit:5 http://ppa.launchpad.net/alessandro-strada/ppa/ubuntu bionic InRelease
Hit:7 http://ppa.launchpad.net/leaeasy/dde/ubuntu bionic InRelease
Hit:8 http://ppa.launchpad.net/rvm/smplayer/ubuntu bionic InRelease
Ign:6 https://dl.bintray.com/etcher/debian stable InRelease
Get:9 https://dl.bintray.com/etcher/debian stable Release [3,674 B]
Fetched 3,674 B in 3s (1,196 B/s)
Reading package lists... Done
E: The package cache file is corrupted, it has the wrong hash
```

*Ubuntu 中的 “The package cache file is corrupted, it has the wrong hash” 错误*
经过一番谷歌搜索,我找到了解决此错误的方法。
如果你遇到过这个错误,不要惊慌。只需运行下面的命令修复。
在运行命令之前,**请再次确认你在最后加入了 `*`**。在命令最后加上 `*` 很重要。如果你没有添加,它会删除整个 `/var/lib/apt/lists/` 目录本身,而且无法恢复。我提醒过你了!
```
$ sudo rm -rf /var/lib/apt/lists/*
```
现在我再次使用下面的命令更新系统:
```
$ sudo apt update
```

现在好了!希望它有帮助。
---
via: <https://www.ostechnix.com/fix-e-the-package-cache-file-is-corrupted-it-has-the-wrong-hash-error-in-ubuntu/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,255 | 如何在 Ubuntu 上设置多语言输入法 | https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/ | 2019-08-21T23:20:10 | [
"输入法"
] | https://linux.cn/article-11255-1.html | 
或许你不知道,在印度有数以百计的语言被使用,其中 22 种被印度机构列为官方语言。我的母语不是英语,因此当我需要从英语输入或者翻译到我的母语泰米尔语时我经常使用**谷歌翻译**。嗯,我估计我不再需要依靠谷歌翻译了。我刚发现在 Ubuntu 上输入印度语的好办法。这篇教程解释了如何配置多语言输入法的方法。这个是为 Ubuntu 18.04 LTS 特别打造的,但是它可以在其它类 Ubuntu 系统例如 Linux mint、Elementary OS 上使用。
### 在 Ubuntu Linux 上设置多语言输入法
通过 **IBus** 的帮助,我们可以轻松在 Ubuntu 及其衍生版上配置多语言输入法。IBus 代表 **I**ntelligent **I**nput **Bus**(智能输入总线),是一种用于类 Unix 操作系统下多语言输入的输入法框架。它使得我们可以在大多数 GUI 应用(例如 LibreOffice)中输入母语。
### 在 Ubuntu 上安装 IBus
要在 Ubuntu 上安装 IBus 包,运行:
```
$ sudo apt install ibus-m17n
```
Ibus-m17n 包提供了许多印度语和其它国家语言包括阿姆哈拉语,阿拉伯语,阿美尼亚语,阿萨姆语,阿萨巴斯卡语,白俄罗斯语,孟加拉语,缅甸语,中高棉语,占文,**汉语**,克里语,克罗地亚语,捷克语,丹麦语,迪维希语,马尔代夫语,世界语,法语,格鲁吉亚语,古/现代希腊语,古吉拉特语,希伯来语,因纽特语,日语,卡纳达语,克什米尔语,哈萨克语,韩语,老挝语,马来语,马拉地语,尼泊尔语,欧吉布威语,欧瑞亚语,旁遮普语,波斯语,普什图语,俄语,梵语,塞尔维亚语,四川彝文,彝文,西格西卡语,信德语,僧伽罗语,斯洛伐克语,瑞典语,泰语,泰米尔语,泰卢固语,藏语,维吾尔语,乌都语,乌兹别克语,越语及意第绪语。
### 添加输入语言
我们可以在系统里的**设置**部分添加语言。点击你的 Ubuntu 桌面右上角的下拉箭头选择底部左下角的设置图标。

*从顶部面板启动系统设置*
从设置部分,点击左侧面板的**区域及语言**选项。再点击右侧**输入来源**标签下的**+**(加号)按钮。

*设置部分的区域及语言选项*
在下个窗口,点击**三个垂直的点**按钮。

*在 Ubuntu 里添加输入来源*
搜寻并选择你想从列表中添加的输入语言。

*添加输入语言*
在本篇教程中,我将加入**泰米尔**语。在选择语言后,点击**添加**按钮。

*添加输入来源*
现在你会看到选中的输入来源已经被添加了。你会在输入来源标签下的区域及语言选项中看到它。

*Ubuntu 里的输入来源选项*
点击输入来源标签下的“管理安装的语言”按钮

*在 Ubuntu 里管理安装的语言*
接下来你会被询问是否想要为选定语言安装翻译包。如果你想的话你可以安装它们,或者仅仅选择“稍后提醒我”按钮,你下次打开的时候会再次收到通知。

*语言支持没完全安装好*
一旦翻译包安装好,点击**安装/移除语言**按钮。同时确保 IBus 在键盘输入法系统中被选中。

*在 Ubuntu 中安装/移除语言*
从列表中选择你想要的语言并点击采用按钮。

*选择输入语言*
到此为止了。我们已成功在 Ubuntu 18.04 桌面上配置好了多语言输入法。同样地,你可以按需添加任意多的输入语言。
在添加完所有语言来源后,登出再登陆回去。
### 用印度语或者你喜欢的语言输入
一旦你添加完所有语言后,你就可以在 Ubuntu 桌面顶栏的下拉菜单中看到它们。

*从 Ubuntu 桌面的顶端栏选择输入语言。*
你也可以使用键盘上的**徽标键+空格键**在不同语言中切换。

*在 Ubuntu 里用**徽标键+空格键**选择输入语言*
打开任何 GUI 文本编辑器/应用开始打字吧!

*在 Ubuntu 中用印度语输入*
### 将 IBus 加入启动应用
我们需要让 IBus 在每次重启后自动打开,这样每次你想要用自己喜欢的语言输入的时候就无须手动打开。
为此,只需在应用面板中搜索“开机应用”,然后打开“开机应用”选项。

在下个窗口,点击添加,在名字栏输入“Ibus”并在命令栏输入“ibus-daemon”点击添加按钮。

*在 Ubuntu 中将 Ibus 添加进开机启动项*
从现在起 IBus 将在系统启动后自动开始。
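如果你不想等到下次重启,也可以立即手动启动 IBus 守护进程。下面是一个常见用法(这些是 `ibus-daemon` 的标准选项:`-d` 后台运行、`-r` 替换已在运行的实例、`-x` 同时提供 XIM 服务):

```
$ ibus-daemon -drx
```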
现在轮到你了。你会在哪些应用/工具中用本地的印度语输入?请在下方评论区告诉我们。
参考:
* [IBus – Ubuntu 社区百科](https://help.ubuntu.com/community/ibus)
---
via: <https://www.ostechnix.com/how-to-setup-multilingual-input-method-on-ubuntu/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[tomjlw](https://github.com/tomjlw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,256 | 如何自定义 GNOME 3 桌面? | https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/ | 2019-08-22T00:02:00 | [
"GNOME"
] | https://linux.cn/article-11256-1.html | 
我们收到很多来自用户的电子邮件,要我们写一篇关于 GNOME 3 桌面自定义的文章,但是,我们没有时间来写这个主题。
在很长时间内,我一直在我的主要笔记本电脑上使用 Ubuntu 操作系统,并且,渐感无聊,我想测试一些与 Arch Linux 相关的其它的发行版。
我比较喜欢 Manjaro,于是在我的笔记本电脑上安装了使用 GNOME 3 桌面的 Manjaro 18.0。
我按照我想要的自定义我的桌面。所以,我想抓住这个机会来详细撰写这篇文章,以帮助其他人。
这篇文章帮助其他人来轻松地自定义他们的桌面。
我不会列出我的全部自定义,但会尽量加入对 Linux 桌面用户有用、必不可少的内容。
如果你觉得这篇文章中缺少某些调整,请在评论区告诉我们,这对其他用户会非常有用。
### 1) 如何在 GNOME 3 桌面中启动活动概述?
按下 `Super` 键,或单击屏幕左上角的“活动”按钮,即可启动活动概述;它会显示所有正在运行的应用程序和打开的窗口。
它允许你来启动一个新的应用程序、切换窗口,和在工作空间之间移动窗口。
你可以通过选择如下任一操作简单地退出活动概述,如选择一个窗口、应用程序或工作区间,或通过按 `Super` 键或 `Esc` 键。

*活动概述屏幕截图*
### 2) 如何在 GNOME 3 桌面中重新调整窗口大小?
通过下面的组合键来将启动的窗口最大化、取消最大化,并吸附到屏幕的一侧(左侧或右侧)。
* `Super Key+下箭头`:取消窗口最大化。
* `Super Key+上箭头`:最大化窗口。
* `Super Key+右箭头`:将窗口吸附到屏幕右半边。
* `Super Key+左箭头`:将窗口吸附到屏幕左半边。

*使用 `Super Key+下箭头` 来取消最大化窗口。*

*使用 `Super Key+上箭头` 来最大化窗口。*

*使用 `Super Key+右箭头` 将窗口吸附到屏幕右半边。*

*使用 `Super Key+左箭头` 将窗口吸附到屏幕左半边。*
这个功能可以让你同时查看两个应用程序,也就是所谓的分屏。

### 3) 如何在 GNOME 3 桌面中显示应用程序?
在 Dash 中,单击“显示应用程序网格”按钮来显示在你的系统上的所有已安装的应用程序。

### 4) 如何在 GNOME 3 桌面中的 Dash 中添加应用程序?
为加速你的日常活动,你可能想要把频繁使用的应用程序添加到 Dash 中,或拖拽应用程序启动器到 Dash 中。
它将允许你直接启动你的收藏夹中的应用程序,而不用先去搜索应用程序。为做到这样,在应用程序上简单地右击,并使用选项“添加到收藏夹”。

为从 Dash 中移除一个应用程序启动器(收藏的程序),要么从 Dash 中拖拽应用程序到网格按钮,或者在应用程序上简单地右击,并使用选项“从收藏夹中移除”。

### 5) 如何在 GNOME 3 桌面中的工作区间之间切换?
工作区间允许你将窗口分组,帮助你恰当地划分工作。如果你同时处理多项工作,想把每项工作及其相关窗口单独分组,那么它会是一个非常方便、完美的选项。
你可以用两种方法切换工作区间,打开活动概述,并从右手边选择一个工作区间,或者使用下面的组合键。
* 使用 `Ctrl+Alt+Up` 切换到上一个工作区间。
* 使用 `Ctrl+Alt+Down` 切换到下一个工作区间。

### 6) 如何在 GNOME 3 桌面中的应用程序之间切换 (应用程序切换器) ?
使用 `Alt+Tab` 或 `Super+Tab` 可以在应用程序之间切换,这也正是启动应用程序切换器的方式。
启动后,只需按住 `Alt` 或 `Super` 键,反复按 `Tab` 键,即可从左到右依次在各个应用程序之间移动。
### 7) 如何在 GNOME 3 桌面中添加用户姓名到顶部面板?
如果你想添加你的用户姓名到顶部面板,那么安装下面的[添加用户姓名到顶部面板](https://extensions.gnome.org/extension/1108/add-username-to-top-panel/) GNOME 扩展。

### 8) 如何在 GNOME 3 桌面中添加微软 Bing 的桌面背景?
安装下面的 [Bing 桌面背景更换器](https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/) GNOME shell 扩展,来每天更改你的桌面背景为微软 Bing 的桌面背景。

### 9) 如何在 GNOME 3 桌面中启用夜光?
夜光应用程序是著名的应用程序之一,它通过在日落后把你的屏幕从蓝光调成暗黄色,来减轻眼睛疲劳。
它在智能手机上也可用。相同目标的其它已知应用程序是 flux 和 [redshift](https://www.2daygeek.com/install-redshift-reduce-prevent-protect-eye-strain-night-linux/)。
为启用这个特色,导航到**系统设置** >> **设备** >> **显示** ,并打开夜光。

在它启用后,状态图标将被放置到顶部面板上。

### 10) 如何在 GNOME 3 桌面中显示电池百分比?
电池百分比将向你精确地显示电池使用情况。为启用这个功能,遵循下面的步骤。
启动 GNOME Tweaks >> **顶部栏** >> **电池百分比** ,并打开它。

在修改后,你能够在顶部面板上看到电池百分比图标。

### 11) 如何在 GNOME 3 桌面中启用鼠标右键单击?
在 GNOME 3 桌面环境中右键单击是默认禁用的。为启用这个特色,遵循下面的步骤。
启动 GNOME Tweaks >> **键盘和鼠标** >> 鼠标点击模拟,并选择“区域”选项。

### 12) 如何在 GNOME 3 桌面中启用单击最小化?
启用单击最小化功能,这将帮助我们最小化打开的窗口,而不必使用最小化选项。
```
$ gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
```
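如果你想先查看当前值,或者之后想恢复默认行为,可以使用 `gsettings` 的标准子命令 `get` 和 `reset`:

```
$ gsettings get org.gnome.shell.extensions.dash-to-dock click-action
$ gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
```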
### 13) 如何在 GNOME 3 桌面中自定义 Dock ?
如果你想更改你的 Dock,类似于 Deepin 桌面或 Mac 桌面,那么使用下面的一组命令。
```
$ gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM
$ gsettings set org.gnome.shell.extensions.dash-to-dock extend-height false
$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode FIXED
$ gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 50
```

### 14) 如何在 GNOME 3桌面中显示桌面?
默认 `Super 键 + D` 快捷键不能显示你的桌面。为配置这种情况,遵循下面的步骤。
打开 设置 >> **设备** >> **键盘**,单击“导航”下的 **隐藏所有普通窗口**,然后按 `Super 键 + D`,最后点击“设置”按钮来启用它。

### 15) 如何自定义日期和时间格式?
GNOME 3 默认用 `Sun 04:48` 的格式来显示日期和时间。它并不清晰易懂,如果你想获得以下格式的输出:`Sun Dec 2 4:49 AM` ,遵循下面的步骤。
**对于日期修改:** 打开 GNOME Tweaks >> **顶部栏** ,并在时钟下启用“星期”选项。

**对于时间修改:** 设置 >> **具体情况** >> **日期和时间** ,然后,在时间格式中选择 `AM/PM` 选项。

在修改后,你能够看到与下面相同的日期和时间格式。

### 16) 如何在启动程序中永久地禁用不使用的服务?
就我来说,我不使用 **蓝牙** & **cups(打印机服务)**。因此,在我的笔记本电脑上禁用这些服务。为在基于 Arch 的系统上禁用服务,使用 [Pacman 软件包管理器](https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/)。
对于蓝牙:
```
$ sudo systemctl stop bluetooth.service
$ sudo systemctl disable bluetooth.service
$ sudo systemctl mask bluetooth.service
$ systemctl status bluetooth.service
```
对于 cups:
```
$ sudo systemctl stop org.cups.cupsd.service
$ sudo systemctl disable org.cups.cupsd.service
$ sudo systemctl mask org.cups.cupsd.service
$ systemctl status org.cups.cupsd.service
```
最后,使用以下命令验证这些服务是否在启动程序中被禁用。如果你想再次确认这一点,你可以重新启动一次,并检查相同的东西。导航到以下链接来了解更多关于 [systemctl](https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/) 的用法。
```
$ systemctl list-unit-files --type=service | grep enabled
[email protected] enabled
dbus-org.freedesktop.ModemManager1.service enabled
dbus-org.freedesktop.NetworkManager.service enabled
dbus-org.freedesktop.nm-dispatcher.service enabled
display-manager.service enabled
gdm.service enabled
[email protected] enabled
linux-module-cleanup.service enabled
ModemManager.service enabled
NetworkManager-dispatcher.service enabled
NetworkManager-wait-online.service enabled
NetworkManager.service enabled
systemd-fsck-root.service enabled-runtime
tlp-sleep.service enabled
tlp.service enabled
```
### 17) 在 GNOME 3 桌面中安装图标和主题?
有大量的图标和主题可供 GNOME 桌面使用,因此,选择吸引你的 [GTK 主题](https://www.2daygeek.com/category/gtk-theme/) 和 [图标主题](https://www.2daygeek.com/category/icon-theme/)。
---
via: <https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/>
作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
11,258 | 如何更改 Linux 控制台字体类型和大小 | https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/ | 2019-08-23T04:21:40 | [
"字体"
] | https://linux.cn/article-11258-1.html | 
如果你有图形桌面环境,那么就很容易更改文本的字体以及大小。但你如何在没有图形环境的 Ubuntu 无头服务器中做到?别担心!本指南介绍了如何更改 Linux 控制台的字体和大小。这对于那些不喜欢默认字体类型/大小或者喜欢不同字体的人来说非常有用。
### 更改 Linux 控制台字体类型和大小
如果你还不知道,这就是无头 Ubuntu Linux 服务器控制台的样子。

*Ubuntu Linux 控制台*
据我所知,我们可以[列出已安装的字体](https://www.ostechnix.com/find-installed-fonts-commandline-linux/),但是没有办法可以像在 Linux 桌面终端仿真器中那样更改 Linux 控制台字体类型或大小。
但这并不意味着我们无法改变它。我们仍然可以更改控制台字体。
如果你正在使用 Debian、Ubuntu 和其他基于 DEB 的系统,你可以使用 `console-setup` 配置文件来设置 `setupcon`,它用于配置控制台的字体和键盘布局。该控制台设置的配置文件位于 `/etc/default/console-setup`。
现在,运行以下命令来设置 Linux 控制台的字体。
```
$ sudo dpkg-reconfigure console-setup
```
选择要在 Linux 控制台上使用的编码。只需保留默认值,选择 “OK” 并按回车继续。

*选择要在 Ubuntu 控制台上设置的编码*
接下来,在列表中选择受支持的字符集。默认情况下,它是最后一个选项,即在我的系统中 **Guess optimal character set**(猜测最佳字符集)。只需保留默认值,然后按回车键。

*在 Ubuntu 中选择字符集*
接下来选择控制台的字体,然后按回车键。我这里选择 “TerminusBold”。

*选择 Linux 控制台的字体*
这里,我们为 Linux 控制台选择所需的字体大小。

*选择 Linux 控制台的字体大小*
几秒钟后,所选的字体及大小将应用于你的 Linux 控制台。
这是在更改字体类型和大小之前,我的 Ubuntu 18.04 LTS 服务器中控制台字体的样子。

这是更改之后。

如你所见,文本更大、更好,字体类型也不同于默认。
你也可以直接编辑 `/etc/default/console-setup`,并根据需要设置字体类型和大小。根据以下示例,我的 Linux 控制台字体类型为 “Terminus Bold”,字体大小为 32。
```
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="guess"
FONTFACE="TerminusBold"
FONTSIZE="16x32"
```
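如果你是直接编辑该文件来修改设置的,可以运行 console-setup 软件包自带的 `setupcon` 命令让新设置立即生效,而无需重新运行 `dpkg-reconfigure`:

```
$ sudo setupcon
```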
### 附录:显示控制台字体
要显示你的控制台字体,只需输入:
```
$ showconsolefont
```
此命令将显示字体的字形或字母表。

*显示控制台字体*
如果你的 Linux 发行版没有 `console-setup`,你可以从[这里](https://software.opensuse.org/package/console-setup)获取它。
在使用 Systemd 的 Linux 发行版上,你可以通过编辑 `/etc/vconsole.conf` 来更改控制台字体。
以下是德语键盘的示例配置。
```
$ vi /etc/vconsole.conf
KEYMAP=de-latin1
FONT=Lat2-Terminus16
```
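修改 `/etc/vconsole.conf` 之后,通常可以重启对应的服务让设置立即生效(前提是你的发行版提供 `systemd-vconsole-setup` 服务):

```
$ sudo systemctl restart systemd-vconsole-setup
```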
希望这篇文章对你有用!
---
via: <https://www.ostechnix.com/how-to-change-linux-console-font-type-and-size/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,259 | 在 Linux 中复制文档 | https://opensource.com/article/19/8/copying-files-linux | 2019-08-23T05:39:14 | [
"复制"
] | https://linux.cn/article-11259-1.html |
>
> 了解在 Linux 中多种复制文档的方式以及各自的优点。
>
>
>

在办公室里复印文档过去需要专门的员工与机器。如今,复制是电脑用户无需多加思考的任务。在电脑里复制数据是如此微不足道的事,以致于你还没有意识到复制就发生了,例如当拖动文档到外部硬盘的时候。
数字实体复制起来十分简单已是一个不争的事实,以致于大部分现代电脑用户从未考虑过复制他们的工作成果还有其它方式。尽管如此,在 Linux 中复制文档仍有几种不同的方式,每种方法各有其独到之处,取决于你想达成的目的。
以下是一系列在 Linux、BSD 及 Mac 上复制文件的方式。
### 在 GUI 中复制
如大多数操作系统一样,如果你想的话,你可以完全用 GUI 来管理文件。
#### 拖拽放下
最浅显的复制文件的方式可能就是你以前在电脑中复制文件的方式:拖拽并放下。在大多数 Linux 桌面上,从一个本地文件夹拖拽放下到另一个本地文件夹默认是*移动*文件;你可以在开始拖拽文件后按住 `Ctrl` 键,将该行为改为复制操作。
你的鼠标指针可能会有一个指示,例如一个加号以显示你在复制模式。

注意如果文件是放在远程系统上的,不管它是一个 Web 服务器还是在你自己网络里用文件共享协议访问的另一台电脑,默认动作经常是复制而不是移动文件。
#### 右击
如果你觉得在你的桌面拖拽文档不够精准或者有点笨拙,或者这么做会让你的手离开键盘太久,你可以经常使用右键菜单来复制文件。这取决于你所用的文件管理器,但通常来说,右键弹出的关联菜单会包括常见的操作。
关联菜单的“复制”动作将你的[文件路径](https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them)(即文件在系统的位置)保存在你的剪贴板中,这样你可以将你的文件*粘贴*到别处:(LCTT 译注:此处及下面的描述不确切,这里并非复制的文件路径的“字符串”,而是复制了代表文件实体的对象/指针)

在这种情况下,你并没有将文件的内容复制到你的剪贴板上。取而代之的是你复制了[文件路径](https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them)。当你粘贴时,你的文件管理器会查看剪贴板上的路径并执行复制命令,将相应路径上的文件粘贴到你准备复制到的路径。
### 用命令行复制
虽然 GUI 通常是相对熟悉的复制文件方式,用终端复制却更有效率。
#### cp
在终端上等同于在桌面上复制和粘贴文件的最显而易见的方式就是 `cp` 命令。这个命令可以复制文件和目录,也相对直接。它使用熟悉的*来源*和*目的*(必须以这样的顺序)句法,因此复制一个名为 `example.txt` 的文件到你的 `Documents` 目录就像这样:
```
$ cp example.txt ~/Documents
```
就像当你拖拽文件放在文件夹里一样,这个动作并不会将 `Documents` 替换为 `example.txt`。取而代之的是,`cp` 察觉到 `Documents` 是一个文件夹,就将 `example.txt` 的副本放进去。
你同样可以便捷有效地重命名你复制的文档:
```
$ cp example.txt ~/Documents/example_copy.txt
```
重要的是,它使得你可以在与原文件相同的目录中生成一个副本:
```
$ cp example.txt example.txt
cp: 'example.txt' and 'example.txt' are the same file.
$ cp example.txt example_copy.txt
```
要复制一个目录,你必须使用 `-r` 选项(代表 `--recursive`,递归)。这个选项会让 `cp` 先作用于目录本身(即目录的 inode),再作用到该目录下的所有文件。如果没有 `-r` 选项,`cp` 甚至不会将目录当成一个可复制的对象:
```
$ cp notes/ notes-backup
cp: -r not specified; omitting directory 'notes/'
$ cp -r notes/ notes-backup
```
#### cat
`cat` 命令是最易被误解的命令,但这只是因为它表现了 [POSIX](/article-11222-1.html) 系统的极致灵活性。在 `cat` 可以做到的所有事情中(包括其原意的连接文件的用途),它也能复制。例如说使用 `cat` 你可以仅用一个命令就[从一个文件创建两个副本](https://opensource.com/article/19/2/getting-started-cat-command)。你用 `cp` 无法做到这一点。
使用 `cat` 复制文档要注意的是系统解释该行为的方式。当你使用 `cp` 复制文件时,该文件的属性跟着文件一起被复制,这意味着副本的权限和原件一样。
```
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
$ cp foo.jpg bar.jpg
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
```
然而,用 `cat` 将一个文件的内容读取至另一个文件,是让系统创建了一个新文件,这些新文件的权限取决于你的默认 umask 设置。要了解 umask 的更多知识,请阅读 Alex Juarez 讲述 [umask](https://opensource.com/article/19/7/linux-permissions-101) 以及权限概览的文章。
运行 `umask` 获取当前设置:
```
$ umask
0002
```
这个设置代表在该处新创建的文档被给予 `664`(`rw-rw-r--`)权限,因为该 umask 设置的前几位数字没有屏蔽任何权限(而且执行位不是文件创建时的默认位),而“其他人”的写入权限被最后一位数字所屏蔽。
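(LCTT 译注:新建文件的权限可以理解为:以普通文件的默认模式 `666` 为基础,按位清除 umask 中置位的权限。下面用 bash 的算术展开演示这一计算,仅作示意:)

```
$ printf '%o\n' $(( 0666 & ~0002 ))
664
```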
当你使用 `cat` 复制时,实际上你并没有真正复制文件。你使用 `cat` 读取文件内容并将输出重定向到了一个新文件:
```
$ cat foo.jpg > baz.jpg
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
-rw-rw-r--. 1 57368 Jul 29 13:42 baz.jpg
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
```
如你所见,`cat` 应用系统默认的 umask 设置创建了一个全新的文件。
最后,当你只是想复制一个文件时,这些技术细节往往无关紧要。但如果你想在复制文件的同时让它获得一组默认权限,用 `cat` 一条命令就能完成一切。
#### rsync
有着著名的同步源和目的文件的能力,`rsync` 命令是一个复制文件的多才多艺的工具。最为简单的,`rsync` 可以类似于 `cp` 命令一样使用。
```
$ rsync example.txt example_copy.txt
$ ls
example.txt example_copy.txt
```
这个命令真正的威力藏在其能够*不做*不必要的复制的能力里。如果你使用 `rsync` 来将文件复制进目录里,且其已经存在在该目录里,那么 `rsync` 不会做复制操作。在本地这个差别不是很大,但如果你将海量数据复制到远程服务器,这个特性的意义就完全不一样了。
甚至在本地中,真正不一样的地方在于它可以分辨具有相同名字但拥有不同数据的文件。如果你曾发现你面对着同一个目录的两个相同副本时,`rsync` 可以将它们同步至一个包含每一个最新修改的目录。这种配置在尚未发现版本控制威力的业界十分常见,同时也作为需要从一个可信来源复制的备份方案。
你可以通过创建两个文件夹有意识地模拟这种情况,一个叫做 `example` 另一个叫做 `example_dupe`:
```
$ mkdir example example_dupe
```
在第一个文件夹里创建文件:
```
$ echo "one" > example/foo.txt
```
用 `rsync` 同步两个目录。这种做法最常见的选项是 `-a`(代表 “archive”,可以保证符号链接和其它特殊文件保留下来)和 `-v`(代表 “verbose”,向你提供当前命令的进度反馈):
```
$ rsync -av example/ example_dupe/
```
两个目录现在包含同样的信息:
```
$ cat example/foo.txt
one
$ cat example_dupe/foo.txt
one
```
如果你作为源的文件发生改变,目的文件也会随之更新:
```
$ echo "two" >> example/foo.txt
$ rsync -av example/ example_dupe/
$ cat example_dupe/foo.txt
one
two
```
注意 `rsync` 命令是用来复制数据的,而不是充当版本控制系统的。例如,假设某个目的文件以某种方式比源文件更新了,该文件仍将被覆盖,因为 `rsync` 在比较文件差异时假设目的文件总是应该镜像源文件:
```
$ echo "You will never see this note again" > example_dupe/foo.txt
$ rsync -av example/ example_dupe/
$ cat example_dupe/foo.txt
one
two
```
如果没有改变,那么就不会有复制动作发生。
`rsync` 命令有许多 `cp` 没有的选项,例如设置目标权限、排除文件、删除没有在两个目录中出现的过时文件,以及更多。可以使用 `rsync` 作为 `cp` 的强力替代或者有效补充。
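(LCTT 译注:作为示意,下面这条命令组合了其中几个常用选项:`--delete` 会删除目的目录中源目录里已不存在的文件,`--exclude` 会跳过匹配模式的文件:)

```
$ rsync -av --delete --exclude='*.tmp' example/ example_dupe/
```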
### 许多复制的方式
在 POSIX 系统中有许多能够达成同样目的的方式,开源以灵活性著称可谓名副其实。我是否遗漏了某种有效的复制数据的方式?请在评论区分享你的复制神技。
---
via: <https://opensource.com/article/19/8/copying-files-linux>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[tomjlw](https://github.com/tomjlw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Copying documents used to require a dedicated staff member in offices, and then a dedicated machine. Today, copying is a task computer users do without a second thought. Copying data on a computer is so trivial that copies are made without you realizing it, such as when dragging a file to an external drive.
The concept that digital entities are trivial to reproduce is pervasive, so most modern computerists don’t think about the options available for duplicating their work. And yet, there are several different ways to copy a file on Linux. Each method has nuanced features that might benefit you, depending on what you need to get done.
Here are a number of ways to copy files on Linux, BSD, and Mac.
## Copying in the GUI
As with most operating systems, you can do all of your file management in the GUI, if that's the way you prefer to work.
Drag and drop
The most obvious way to copy a file is the way you’re probably used to copying files on computers: drag and drop. On most Linux desktops, dragging and dropping from one local folder to another local folder *moves* a file by default. You can change this behavior to a copy operation by holding down the **Ctrl** key after you start dragging the file.
Your cursor may show an indicator, such as a plus sign, to show that you are in copy mode:

Note that if the file exists on a remote system, whether it’s a web server or another computer on your own network that you access through a file-sharing protocol, the default action is often to copy, not move, the file.
### Right-click
If you find dragging and dropping files around your desktop imprecise or clumsy, or doing so takes your hands away from your keyboard too much, you can usually copy a file using the right-click menu. This possibility depends on the file manager you use, but generally, a right-click produces a contextual menu containing common actions.
The contextual menu copy action stores the [file path](https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them) (where the file exists on your system) in your clipboard so you can then *paste* the file somewhere else:

In this case, you’re not actually copying the file’s contents to your clipboard. Instead, you're copying the [file path](https://opensource.com/article/19/7/understanding-file-paths-and-how-use-them). When you paste, your file manager looks at the path in your clipboard and then runs a copy command, copying the file located at that path to the path you are pasting into.
## Copying on the command line
While the GUI is a generally familiar way to copy files, copying in a terminal can be more efficient.
### cp
The obvious terminal-based equivalent to copying and pasting a file on the desktop is the **cp** command. This command copies files and directories and is relatively straightforward. It uses the familiar *source* and *target* (strictly in that order) syntax, so to copy a file called **example.txt** into your **Documents** directory:
```
$ cp example.txt ~/Documents
```
Just like when you drag and drop a file onto a folder, this action doesn’t replace **Documents** with **example.txt**. Instead, **cp** detects that **Documents** is a folder, and places a copy of **example.txt** into it.
You can also, conveniently (and efficiently), rename the file as you copy it:
```
$ cp example.txt ~/Documents/example_copy.txt
```
That fact is important because it allows you to make a copy of a file in the same directory as the original:
```
$ cp example.txt example.txt
cp: 'example.txt' and 'example.txt' are the same file.
$ cp example.txt example_copy.txt
```
To copy a directory, you must use the **-r** option, which stands for --**recursive**. This option runs **cp** on the directory *inode*, and then on all files within the directory. Without the **-r** option, **cp** doesn’t even recognize a directory as an object that can be copied:
```
$ cp notes/ notes-backup
cp: -r not specified; omitting directory 'notes/'
$ cp -r notes/ notes-backup
```
### cat
The **cat** command is one of the most misunderstood commands, but only because it exemplifies the extreme flexibility of a [POSIX](https://opensource.com/article/19/7/what-posix-richard-stallman-explains) system. Among everything else **cat** does (including its intended purpose of con*cat*enating files), it can also copy. For instance, with **cat** you can [create two copies from one file](https://opensource.com/article/19/2/getting-started-cat-command) with just a single command. You can’t do that with **cp**.
The significance of using **cat** to copy a file is the way the system interprets the action. When you use **cp** to copy a file, the file’s attributes are copied along with the file itself. That means that the file permissions of the duplicate are the same as the original:
```
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
$ cp foo.jpg bar.jpg
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
```
Using **cat** to read the contents of a file into another file, however, invokes a system call to create a new file. These new files are subject to your default **umask** settings. To learn more about `umask`
, read Alex Juarez’s article covering [umask](https://opensource.com/article/19/7/linux-permissions-101) and permissions in general.
Run **umask** to get the current settings:
```
$ umask
0002
```
This setting means that new files created in this location are granted **664** (**rw-rw-r--**) permission because nothing is masked by the first digits of the **umask** setting (and the executable bit is not a default bit for file creation), and the write permission is blocked by the final digit.
When you copy with **cat**, you don’t actually copy the file. You use **cat** to read the contents of the file, and then redirect the output into a new file:
```
$ cat foo.jpg > baz.jpg
$ ls -l -G -g
-rw-r--r--. 1 57368 Jul 29 13:37 bar.jpg
-rw-rw-r--. 1 57368 Jul 29 13:42 baz.jpg
-rw-r--r--. 1 57368 Jul 25 23:57 foo.jpg
```
As you can see, **cat** created a brand new file with the system’s default umask applied.
In the end, when all you want to do is copy a file, the technicalities often don’t matter. But sometimes you want to copy a file and end up with a default set of permissions, and with **cat** you can do it all in one command**.**
### rsync
The **rsync** command is a versatile tool for copying files, with the notable ability to synchronize your source and destination. At its most simple, **rsync** can be used similarly to **cp** command:
```
$ rsync example.txt example_copy.txt
$ ls
example.txt example_copy.txt
```
The command’s true power lies in its ability to *not* copy when it’s not necessary. If you use **rsync** to copy a file into a directory, but that file already exists in that directory, then **rsync** doesn’t bother performing the copy operation. Locally, that fact doesn’t necessarily mean much, but if you’re copying gigabytes of data to a remote server, this feature makes a world of difference.
What does make a difference even locally, though, is the command’s ability to differentiate files that share the same name but which contain different data. If you’ve ever found yourself faced with two copies of what is meant to be the same directory, then **rsync** can synchronize them into one directory containing the latest changes from each. This setup is a pretty common occurrence in industries that haven’t yet discovered the magic of version control, and for backup solutions in which there is one source of truth to propagate.
You can emulate this situation intentionally by creating two folders, one called **example** and the other **example_dupe**:
```
$ mkdir example example_dupe
```
Create a file in the first folder:
```
$ echo "one" > example/foo.txt
```
Use **rsync** to synchronize the two directories. The most common options for this operation are **-a** (for *archive*, which ensures symlinks and other special files are preserved) and **-v** (for *verbose*, providing feedback to you on the command’s progress):
```
$ rsync -av example/ example_dupe/
```
The directories now contain the same information:
```
$ cat example/foo.txt
one
$ cat example_dupe/foo.txt
one
```
If the file you are treating as the source diverges, then the target is updated to match:
```
$ echo "two" >> example/foo.txt
$ rsync -av example/ example_dupe/
$ cat example_dupe/foo.txt
one
two
```
Keep in mind that the **rsync** command is meant to copy data, not to act as a version control system. For instance, if a file in the destination somehow gets ahead of a file in the source, that file is still overwritten because **rsync** compares files for divergence and assumes that the destination is always meant to mirror the source:
```
$ echo "You will never see this note again" > example_dupe/foo.txt
$ rsync -av example/ example_dupe/
$ cat example_dupe/foo.txt
one
two
```
If there is no change, then no copy occurs.
The **rsync** command has many options not available in **cp**, such as the ability to set target permissions, exclude files, delete outdated files that don’t appear in both directories, and much more. Use **rsync** as a powerful replacement for **cp**, or just as a useful supplement.
## Many ways to copy
There are many ways to achieve essentially the same outcome on a POSIX system, so it seems that open source’s reputation for flexibility is well earned. Have I missed a useful way to copy data? Share your copy hacks in the comments.
|
11,261 | Podman:一个更安全的运行容器的方式 | https://opensource.com/article/18/10/podman-more-secure-way-run-containers | 2019-08-23T23:50:08 | [
"Podman",
"容器"
] | https://linux.cn/article-11261-1.html |
>
> Podman 使用传统的 fork/exec 模型(相对于客户端/服务器模型)来运行容器。
>
>
>

在进入本文的主要主题 [Podman](https://podman.io) 和容器之前,我需要了解一点 Linux 审计功能的技术。
### 什么是审计?
Linux 内核有一个有趣的安全功能,叫做**审计**。它允许管理员在系统上监视安全事件,并将它们记录到`audit.log` 中,该文件可以本地存储或远程存储在另一台机器上,以防止黑客试图掩盖他的踪迹。
`/etc/shadow` 文件是一个经常要监控的安全文件,因为向其添加记录可能允许攻击者获得对系统的访问权限。管理员想知道是否有任何进程修改了该文件,你可以通过执行以下命令来执行此操作:
```
# auditctl -w /etc/shadow
```
现在让我们看看当我修改了 `/etc/shadow` 文件会发生什么:
```
# touch /etc/shadow
# ausearch -f /etc/shadow -i -ts recent
type=PROCTITLE msg=audit(10/10/2018 09:46:03.042:4108) : proctitle=touch /etc/shadow
type=SYSCALL msg=audit(10/10/2018 09:46:03.042:4108) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb17f6704 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=2712 pid=3727 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=3 comm=touch exe=/usr/bin/touch subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
```
审计记录中有很多信息,但我重点注意到它记录了 root 修改了 `/etc/shadow` 文件,并且该进程的审计 UID(`auid`)的所有者是 `dwalsh`。
内核修改了这个文件了么?
#### 跟踪登录 UID
登录 UID(`loginuid`),存储在 `/proc/self/loginuid` 中,它是系统上每个进程的 proc 结构的一部分。该字段只能设置一次;设置后,内核将不允许任何进程重置它。
当我登录系统时,登录程序会为我的登录过程设置 `loginuid` 字段。
我(`dwalsh`)的 UID 是 3267。
```
$ cat /proc/self/loginuid
3267
```
现在,即使我变成了 root,我的登录 UID 仍将保持不变。
```
$ sudo cat /proc/self/loginuid
3267
```
请注意,从初始登录过程 fork 并 exec 的每个进程都会自动继承 `loginuid`。这就是内核知道登录的人是 `dwalsh` 的方式。
### 容器
现在让我们来看看容器。
```
sudo podman run fedora cat /proc/self/loginuid
3267
```
甚至容器进程也保留了我的 `loginuid`。 现在让我们用 Docker 试试。
```
sudo docker run fedora cat /proc/self/loginuid
4294967295
```
### 为什么不一样?
Podman 对于容器使用传统的 fork/exec 模型,因此容器进程是 Podman 进程的后代。Docker 使用客户端/服务器模型。我执行的 `docker` 命令是 Docker 客户端工具,它通过客户端/服务器操作与 Docker 守护进程通信。然后 Docker 守护程序创建容器并处理 stdin/stdout 与 Docker 客户端工具的通信。
进程的默认 `loginuid`(在设置 `loginuid` 之前)是 `4294967295`(LCTT 译注:2<sup>32</sup> - 1)。由于容器是 Docker 守护程序的后代,而 Docker 守护程序是 init 系统的子代,所以,我们看到 systemd、Docker 守护程序和容器进程全部具有相同的 `loginuid`:`4294967295`,审计系统视其为未设置审计 UID。
```
cat /proc/1/loginuid
4294967295
```
### 怎么会被滥用?
让我们来看看如果 Docker 启动的容器进程修改 `/etc/shadow` 文件会发生什么。
```
$ sudo docker run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i
type=PROCTITLE msg=audit(10/10/2018 10:27:20.055:4569) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:27:20.055:4569) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7ffdb6973f50 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=11863 pid=11882 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=touch exe=/usr/bin/coreutils subj=system_u:system_r:spc_t:s0 key=(null)
```
在 Docker 情形中,`auid` 是未设置的(`4294967295`);这意味着安全人员可能知道有进程修改了 `/etc/shadow` 文件但身份丢失了。
如果该攻击者随后删除了 Docker 容器,那么在系统上谁修改 `/etc/shadow` 文件将没有任何跟踪信息。
现在让我们看看相同的场景在 Podman 下的情况。
```
$ sudo podman run --privileged -v /:/host fedora touch /host/etc/shadow
$ sudo ausearch -f /etc/shadow -i
type=PROCTITLE msg=audit(10/10/2018 10:23:41.659:4530) : proctitle=/usr/bin/coreutils --coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:23:41.659:4530) : arch=x86_64 syscall=openat success=yes exit=3 a0=0xffffff9c a1=0x7fffdffd0f34 a2=O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK a3=0x1b6 items=2 ppid=11671 pid=11683 auid=dwalsh uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=3 comm=touch exe=/usr/bin/coreutils subj=unconfined_u:system_r:spc_t:s0 key=(null)
```
由于它使用传统的 fork/exec 方式,因此 Podman 正确记录了所有内容。
这只是观察 `/etc/shadow` 文件的一个简单示例,但审计系统对于观察系统上的进程非常有用。使用 fork/exec 容器运行时(而不是客户端/服务器容器运行时)来启动容器允许你通过审计日志记录保持更好的安全性。
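(LCTT 译注:实验结束后,你可以用 `auditctl -l` 列出当前生效的审计规则,并用 `-W` 删除之前添加的监视,示例如下:)

```
# auditctl -l
-w /etc/shadow -p rwxa
# auditctl -W /etc/shadow
```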
### 最后的想法
在启动容器时,与客户端/服务器模型相比,fork/exec 模型还有许多其他不错的功能。例如,systemd 功能包括:
* `SD_NOTIFY`:如果将 Podman 命令放入 systemd 单元文件中,容器进程可以通过 Podman 返回通知,表明服务已准备好接收任务。这是在客户端/服务器模式下无法完成的事情。
* 套接字激活:你可以将连接的套接字从 systemd 传递到 Podman,并传递到容器进程以便使用它们。这在客户端/服务器模型中是不可能的。
在我看来,其最好的功能是**作为非 root 用户运行 Podman 和容器**。这意味着你永远不会在宿主机上授予用户 root 权限,而在客户端/服务器模型中(如 Docker 使用的),你必须打开以 root 身份运行的特权守护程序的套接字来启动容器。在那里,你将受到守护程序中实现的安全机制与宿主机操作系统中实现的安全机制的支配 —— 这是一个危险的主张。
---
via: <https://opensource.com/article/18/10/podman-more-secure-way-run-containers>
作者:[Daniel J Walsh](https://opensource.com/users/rhatdan) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Before I get into the main topic of this article, [Podman](https://podman.io) and containers, I need to get a little technical about the Linux audit feature.
## What is audit?
The Linux kernel has an interesting security feature called **audit**. It allows administrators to watch for security events on a system and have them logged to the audit.log, which can be stored locally or remotely on another machine to prevent a hacker from trying to cover his tracks.
The **/etc/shadow** file is a common security file to watch, since adding a record to it could allow an attacker to get return access to the system. Administrators want to know if any process modified the file. You can do this by executing the command:
`# auditctl -w /etc/shadow`
Now let's see what happens if I modify the /etc/shadow file:
`# touch /etc/shadow`
# ausearch -f /etc/shadow -i -ts recent
`type=PROCTITLE msg=audit(10/10/2018 09:46:03.042:4108) : proctitle=touch /etc/shadow`
type=SYSCALL msg=audit(10/10/2018 09:46:03.042:4108) : arch=x86_64 syscall=openat
success=yes exit=3 a0=0xffffff9c a1=0x7ffdb17f6704 a2=O_WRONLY|O_CREAT|O_NOCTTY|
O_NONBLOCK a3=0x1b6 items=2 ppid=2712 pid=3727 auid=dwalsh uid=root gid=root
euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts1 ses=3 comm=touch
exe=/usr/bin/touch subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
There's a lot of information in the audit record, but I highlighted that it recorded that root modified the /etc/shadow file and the owner of the process' audit UID (**auid**) was **dwalsh**.
Did the kernel do that?
### Tracking the login UID
There is a field called **loginuid**, stored in **/proc/self/loginuid**, that is part of the proc struct of every process on the system. This field can be set only once; after it is set, the kernel will not allow any process to reset it.
When I log into the system, the login program sets the loginuid field for my login process.
My UID, dwalsh, is 3267.
`$ cat /proc/self/loginuid`
3267
Now, even if I become root, my login UID stays the same.
`$ sudo cat /proc/self/loginuid`
3267
Note that every process that's forked and executed from the initial login process automatically inherits the loginuid. This is how the kernel knew that the person who logged was dwalsh.
## Containers
Now let's look at containers.
`sudo podman run fedora cat /proc/self/loginuid`
3267
Even the container process retains my loginuid. Now let's try with Docker.
`sudo docker run fedora cat /proc/self/loginuid`
4294967295
## Why the difference?
Podman uses a traditional fork/exec model for the container, so the container process is an offspring of the Podman process. Docker uses a client/server model. The **docker** command I executed is the Docker client tool, and it communicates with the Docker daemon via a client/server operation. Then the Docker daemon creates the container and handles communications of stdin/stdout back to the Docker client tool.
The default loginuid of processes (before their loginuid is set) is 4294967295. Since the container is an offspring of the Docker daemon and the Docker daemon is a child of the init system, we see that systemd, Docker daemon, and the container processes all have the same loginuid, 4294967295, which audit refers to as the *unset *audit UID.
`cat /proc/1/loginuid`
4294967295
## How can this be abused?
Let's look at what would happen if a container process launched by Docker modifies the /etc/shadow file.
`$ sudo docker run --privileged -v /:/host fedora touch /host/etc/shadow`
$ sudo ausearch -f /etc/shadow -i
type=PROCTITLE msg=audit(10/10/2018 10:27:20.055:4569) : proctitle=/usr/bin/coreutils
--coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:27:20.055:4569) : arch=x86_64 syscall=openat
success=yes exit=3 a0=0xffffff9c a1=0x7ffdb6973f50 a2=O_WRONLY|O_CREAT|O_NOCTTY|
O_NONBLOCK a3=0x1b6 items=2 ppid=11863 pid=11882 auid=unset uid=root gid=root
euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset
comm=touch exe=/usr/bin/coreutils subj=system_u:system_r:spc_t:s0 key=(null)
In the Docker case, the auid is unset (4294967295); this means the security officer might know that a process modified the /etc/shadow file but the identity was lost.
If that attacker then removed the Docker container, there would be no trace on the system of who modified the /etc/shadow file.
Now let's look at the exact same scenario with Podman.
`$ sudo podman run --privileged -v /:/host fedora touch /host/etc/shadow`
$ sudo ausearch -f /etc/shadow -i
type=PROCTITLE msg=audit(10/10/2018 10:23:41.659:4530) : proctitle=/usr/bin/coreutils
--coreutils-prog-shebang=touch /usr/bin/touch /host/etc/shadow
type=SYSCALL msg=audit(10/10/2018 10:23:41.659:4530) : arch=x86_64 syscall=openat
success=yes exit=3 a0=0xffffff9c a1=0x7fffdffd0f34 a2=O_WRONLY|O_CREAT|O_NOCTTY|
O_NONBLOCK a3=0x1b6 items=2 ppid=11671 pid=11683 auid=dwalsh uid=root gid=root
euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=3 comm=touch
exe=/usr/bin/coreutils subj=unconfined_u:system_r:spc_t:s0 key=(null)
Everything is recorded correctly with Podman since it uses traditional fork/exec.
This was just a simple example of watching the /etc/shadow file, but the auditing system is very powerful for watching what processes do on a system. Using a fork/exec container runtime for launching containers (instead of a client/server container runtime) allows you to maintain better security through audit logging.
## Final thoughts
There are many other nice features about the fork/exec model versus the client/server model when launching containers. For example, systemd features include:
**SD_NOTIFY:**If you put a Podman command into a systemd unit file, the container process can return notice up the stack through Podman that the service is ready to receive tasks. This is something that can't be done in client/server mode.**Socket activation:**You can pass down connected sockets from systemd to Podman and onto the container process to use them. This is impossible in the client/server model.
The nicest feature, in my opinion, is **running Podman and containers as a non-root user**. This means you never have give a user root privileges on the host, while in the client/server model (like Docker employs), you must open a socket to a privileged daemon running as root to launch the containers. There you are at the mercy of the security mechanisms implemented in the daemon versus the security mechanisms implemented in the host operating systems—a dangerous proposition.
|
11,262 | 如何在双启动或单启动模式下重新安装 Ubuntu | https://itsfoss.com/reinstall-ubuntu/ | 2019-08-24T00:13:52 | [
"安装",
"重装"
] | https://linux.cn/article-11262-1.html | 如果你弄坏了你的 Ubuntu 系统,并尝试了很多方法来修复,你最终放弃并采取简单的方法:重新安装 Ubuntu。
我们一直遇到这样一种情况,重新安装 Linux 似乎比找出问题并解决来得更好。排查 Linux 故障能教你很多,但你不会总是花费更多时间来修复损坏的系统。
据我所知,Ubuntu 中没有像 Windows 那样的系统恢复分区。那么,问题出现了:如何重新安装 Ubuntu?让我告诉你如何重新安装 Ubuntu。
**警告!**
>
> **磁盘分区始终是一项危险的任务。我强烈建议你在外部磁盘上备份数据。**
>
>
>
### 如何重新安装 Ubuntu Linux

以下是重新安装 Ubuntu 的步骤。
#### 步骤 1:创建一个 live USB
首先,在网站上下载 Ubuntu。你可以下载[任何需要的 Ubuntu 版本](https://itsfoss.com/which-ubuntu-install/)。
* [下载 Ubuntu](https://ubuntu.com/download/desktop)
获得 ISO 镜像后,就可以创建 live USB 了。如果 Ubuntu 系统仍然可以使用,那么可以使用 Ubuntu 提供的启动盘创建工具创建它。
如果无法使用你的 Ubuntu,那么你可以使用其他系统。你可以参考这篇文章来学习[如何在 Windows 中创建 Ubuntu 的 live USB](https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/)。
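(LCTT 译注:如果你手头有另一台可用的 Linux 机器,也可以用 `dd` 命令制作 live USB。以下的 ISO 文件名和 `/dev/sdX` 设备名仅为示意,务必替换为你实际的文件和 U 盘设备,写错设备会毁掉其上的数据:)

```
$ sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress && sync  # 文件名与设备名仅为示意
```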
#### 步骤 2:重新安装 Ubuntu
有了 Ubuntu 的 live USB 之后将其插入 USB 端口。重新启动系统。在启动时,按下 `F2`/`F10`/`F12` 之类的键进入 BIOS 设置,并确保已设置 “Boot from Removable Devices/USB”。保存并退出 BIOS。这将启动进入 live USB。
进入 live USB 后,选择安装 Ubuntu。你将看到选择语言和键盘布局这些常用选项。你还可以选择下载更新等。

现在是重要的步骤。你应该看到一个“<ruby> 安装类型 <rt> Installation Type </rt></ruby>”页面。你在屏幕上看到的内容在很大程度上取决于 Ubuntu 如何处理系统上的磁盘分区和安装的操作系统。
在此步骤中仔细阅读选项及它的细节。注意每个选项的说明。屏幕上的选项可能在不同的系统中看上去不同。

在这里,它发现我的系统上安装了 Ubuntu 18.04.2 和 Windows,它给了我一些选项。
第一个选项是擦除 Ubuntu 18.04.2 并重新安装它。它告诉我它将删除我的个人数据,但它没有说删除所有操作系统(即 Windows)。
如果你非常幸运或处于单一启动模式,你可能会看到一个“<ruby> 重新安装 Ubuntu <rt> Reinstall Ubuntu </rt></ruby>”的选项。此选项将保留现有数据,甚至尝试保留已安装的软件。如果你看到这个选项,那么就用它吧。
**双启动系统注意**
>
> **如果你是双启动 Ubuntu 和 Windows,并且在重新安装中,你的 Ubuntu 系统看不到 Windows,你必须选择 “Something else” 选项并从那里安装 Ubuntu。我已经在[在双启动下安装 Linux 的过程](https://itsfoss.com/replace-linux-from-dual-boot/)这篇文章中说明了。**
>
>
>
对我来说,没有重新安装并保留数据的选项,因此我选择了“<ruby> 擦除 Ubuntu 并重新安装 <rt> Erase Ubuntu and reinstall </rt></ruby>”。该选项即使在 Windows 的双启动模式下,也将重新安装 Ubuntu。
正是为了应对重装,我才建议为 `/` 和 `/home` 使用单独的分区。这样,即使重新安装 Linux,也可以保证 `/home` 分区中的数据安全。我已在此视频中演示过:
选择重新安装 Ubuntu 后,剩下就是单击下一步。选择你的位置、创建用户账户。

以上完成后,你就完成重装 Ubuntu 了。
在本教程中,我假设你已经知道我说的东西,因为你之前已经安装过 Ubuntu。如果需要澄清任何一个步骤,请随时在评论栏询问。
---
via: <https://itsfoss.com/reinstall-ubuntu/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

If you have messed up your Ubuntu system and after trying numerous ways to fix it, you finally give up and take the easy way out: you reinstall Ubuntu.
We have all been in a situation when reinstalling Linux seems a better idea than try to troubleshoot and fix the issue for good. Troubleshooting a Linux system teaches you a lot but you cannot always afford to spend more time fixing a broken system.
There is no Windows like recovery drive system in Ubuntu as far as I know. So, the question then arises: how to reinstall Ubuntu? Let me show you the steps for the same.
## How to reinstall Ubuntu Linux

Here are the steps to follow for reinstalling Ubuntu.
### Step 1: Create a live USB
First, download Ubuntu from its website. You can download [whichever Ubuntu version](https://itsfoss.com/which-ubuntu-install/) you want to use.
Once you have got the ISO image, it’s time to create a live USB from it. If your Ubuntu system is still accessible, you can create a live disk using the startup disk creator tool provided by Ubuntu.
If you cannot access your Ubuntu system, you’ll have to use another system. You can refer to this article to learn [how to create live USB of Ubuntu in Windows](https://itsfoss.com/create-live-usb-of-ubuntu-in-windows/).
### Step 2: Reinstall Ubuntu
Once you have got the live USB of Ubuntu, plugin the USB. Reboot your system. At boot time, press F2/10/F12 key to go into the BIOS settings and make sure that you have set Boot from Removable Devices/USB option at the top. Save and exit BIOS. This will allow you to boot into live USB.
Once you are in the live USB, choose to install Ubuntu. You’ll get the usual option for choosing your language and keyboard layout. You’ll also get the option to download updates etc.

The important steps come now. You should see an “Installation Type” screen. What you see on your screen here depends heavily on how Ubuntu sees the disk partitioning and installed operating systems on your system.
Be very careful in reading the options and its details at this step. **Pay attention to what each option says**. The screen options may look different in different systems.

In my case, it finds that I have Ubuntu 18.04.2 and Windows installed on my system and it gives me a few options.
The first option here is to erase Ubuntu 18.04.2 and reinstall it. It tells me that it will delete my personal data but it says nothing about deleting all the operating systems (i.e. Windows).
If you are super lucky or in single boot mode, you may see a "Reinstall Ubuntu" option. This option will keep your existing data and even tries to keep the installed software. If you see this option, you should go for it.
[process of reinstalling Linux in dual boot in this tutorial](https://itsfoss.com/replace-linux-from-dual-boot/).
For me, there was no reinstall and keep the data option so I went for “Erase Ubuntu and reinstall” option. This will install Ubuntu afresh even if it is in dual boot mode with Windows.
The reinstalling part is why I recommend using separate partitions for root and home. With that, you can keep your data in the home partition safe even if you reinstall Linux. I have already demonstrated it in this video:
Once you have chosen the reinstall Ubuntu option, the rest of the process is just clicking next. Select your location and when asked, create your user account.

Once the procedure finishes, you’ll have your Ubuntu reinstalled afresh.
In this tutorial, I have assumed that you know things because [you already had Ubuntu installed before](https://itsfoss.com/install-ubuntu/). If you need clarification at any step, please feel free to ask in the comment section. |
11,264 | 4 种方式来自定义 Xfce 来给它一个现代化外观 | https://itsfoss.com/customize-xfce/ | 2019-08-25T05:07:11 | [
"Xfce"
] | https://linux.cn/article-11264-1.html |
>
> Xfce 是一个非常轻量的桌面环境,但它有一个缺点,它看起来有点老旧。但是你没有必要坚持默认外观。让我们看看你可以自定义 Xfce 的各种方法,来给它一个现代化的、漂亮的外观。
>
>
>

首先,Xfce 是最[受欢迎的桌面环境](https://itsfoss.com/best-linux-desktop-environments/)之一。作为一个轻量级桌面环境,你可以在非常低的资源上运行 Xfce,并且它仍然能很好地工作。这是为什么很多[轻量级 Linux 发行版](https://itsfoss.com/lightweight-linux-beginners/)默认使用 Xfce 的原因之一。
一些人甚至喜欢在高端设备上使用它,说明它的简单性、易用性和非资源依赖性是主要原因。
[Xfce](https://xfce.org/) 自身很小,而且只提供你需要的东西。令人烦恼的一点是会令人觉得它的外观和感觉很老了。然而,你可以简单地自定义 Xfce 以使其看起来现代化和漂亮,但又不会像 Unity/GNOME 会话那样占用系统资源。
### 4 种方式来自定义 Xfce 桌面
让我们看看一些方法,我们可以通过这些方法改善你的 Xfce 桌面环境的外观和感觉。
默认 Xfce 桌面环境看起来有些像这样:

如你所见,默认 Xfce 桌面有点乏味。我们将使用主题、图标包以及更改默认 dock 来使它看起来新鲜和有点惊艳。
#### 1. 在 Xfce 中更改主题
我们将做的第一件事是从 [xfce-look.org](http://xfce-look.org) 中找到一款主题。我最喜欢的 Xfce 主题是 [XFCE-D-PRO](https://www.xfce-look.org/p/1207818/XFCE-D-PRO)。
你可以从[这里](https://www.xfce-look.org/p/1207818/startdownload?file_id=1523730502&file_name=XFCE-D-PRO-1.6.tar.xz&file_type=application/x-xz&file_size=105328&url=https%3A%2F%2Fdl.opendesktop.org%2Fapi%2Ffiles%2Fdownloadfile%2Fid%2F1523730502%2Fs%2F6019b2b57a1452471eac6403ae1522da%2Ft%2F1529360682%2Fu%2F%2FXFCE-D-PRO-1.6.tar.xz)下载主题,并提取到某处。
你可以把提取出的这些主题文件复制到你的家目录中的 `.themes` 文件夹。如果该文件夹尚不存在,你可以创建一个;同样地,图标需要放在家目录中的 `.icons` 文件夹。
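(LCTT 译注:下面的命令演示了如何创建这两个文件夹并解压主题包,文件名以你实际下载到的为准:)

```
$ mkdir -p ~/.themes ~/.icons
$ tar -xf XFCE-D-PRO-1.6.tar.xz -C ~/.themes/  # 文件名以实际下载为准
```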
打开 **设置 > 外观 > 样式** 来选择该主题,注销并重新登录以查看更改。默认的 Adwaita-dark 也是一个不错的主题。

你可以在 Xfce 上使用各种[好的 GTK 主题](https://itsfoss.com/best-gtk-themes/)。
#### 2. 在 Xfce 中更改图标
Xfce-look.org 也提供你可以下载的图标主题,提取并放置图标到你的家目录中 `.icons` 目录。在你添加图标主题到 `.icons` 目录中后,转到 **设置 > 外观 > 图标** 来选择这个图标主题。

我已经安装 [Moka 图标集](https://snwh.org/moka) ,它看起来令人惊艳。

你也可以参考我们[令人惊艳的图标主题](https://itsfoss.com/best-icon-themes-ubuntu-16-04/)列表。
##### 可选: 通过 Synaptic 安装主题
如果你想避免手工搜索和复制文件,在你的系统中安装 Synaptic 软件包管理器。你可以通过网络来查找最佳的主题和图标集,使用 synaptic 软件包管理器,你可以搜索和安装主题。
```
sudo apt-get install synaptic
```
**通过 Synaptic 搜索和安装主题/图标**
打开 synaptic,并单击**搜索**。输入你期望的主题名,它将显示匹配主题的列表。勾选所有所需的附加更改,并单击**应用**。这将下载并安装该主题。

在安装后,你可以打开**外观**选项来选择期望的主题。
在我看来,这不是在 Xfce 中安装主题的最佳方法。
#### 3. 在 Xfce 中更改桌面背景
再强调一次,默认 Xfce 桌面背景也不错。但是你可以把桌面背景更改成与你的图标和主题相匹配的东西。
为在 Xfce 中更改桌面背景,在桌面上右击,并单击**桌面设置**。从文件夹选择中选择**背景**,并选择任意一个默认背景或自定义背景。

#### 4. 在 Xfce 中更改 dock
默认 dock 也不错,恰如其分。但是,再强调一次,它看来有点平平淡淡。

不过,如果你想你的 dock 变得更好,并带有更多一点的自定义选项,你可以安装另一个 dock 。
Plank 是一个简单而轻量级的、高度可配置的 dock。
为安装 Plank,使用下面的命令:
```
sudo apt-get install plank
```
如果 Plank 在默认存储库中没有,你可以从这个 PPA 中安装它。
```
sudo add-apt-repository ppa:ricotz/docky
sudo apt-get update
sudo apt-get install plank
```
在你使用 Plank 前,你应该通过右键单击移除默认的 dock,并在**面板设置**下,单击**删除**。
在完成后,转到 **附件 > Plank** 来启动 Plank dock。

Plank 从你正在使用的图标主题中选取图标。因此,如果你更改图标主题,你也将在 dock 中看到相关的更改。
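(LCTT 译注:如果希望 Plank 在每次登录时自动启动,可以为它创建一个自动启动项,下面是一个最小示意:)

```
$ mkdir -p ~/.config/autostart
$ cat > ~/.config/autostart/plank.desktop << 'EOF'
# 这是一个最小的自动启动项示例
[Desktop Entry]
Type=Application
Name=Plank
Exec=plank
EOF
```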
### 总结
XFCE 是一个轻量级、快速和高度可自定义的桌面环境。如果你的系统资源有限,它服务很好,并且你可以简单地自定义它来看起来更好。这是在应用这些步骤后,我的屏幕的外观。

这只是半个小时的努力成果,你可以使用不同的主题/图标自定义使它看起来更好。请随意在评论区分享你自定义的 XFCE 桌面屏幕,以及你正在使用的主题和图标组合。
---
via: <https://itsfoss.com/customize-xfce/>
作者:[Ambarish Kumar](https://itsfoss.com/author/ambarish/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

To start with, Xfce is one of the most [popular desktop environments](https://itsfoss.com/best-linux-desktop-environments/). Being a lightweight DE, you can run Xfce on very low resource and it still works great. This is one of the reasons why many [lightweight Linux distributions](https://itsfoss.com/lightweight-linux-beginners/) use Xfce by default.
Some people prefer it even on a high-end device stating its simplicity, easy of use and non-resource hungry nature as the main reasons. Even if you use Ubuntu GNOME, you can [install Xfce on Ubuntu](https://itsfoss.com/install-xfce-desktop-xubuntu/) and enjoy the speed.
[Xfce](https://xfce.org/) is in itself minimal and provides just what you need. The one thing that bothers is its look and feel which feel old. However, you can easily customize Xfce to look modern and beautiful without reaching the limit where a Unity/GNOME session eats up system resources.
## Four ways to Customize Xfce desktop
Let’s see some of the ways by which we can improve the look and feel of your Xfce desktop environment.
The default Xfce desktop environment looks something like this :

As you can see, the default Xfce desktop is kinda boring. We will use some themes, icon packs and change the default dock to make it look fresh and a bit revealing.
### 1. Change themes in Xfce
The simplest way to customize is to [change the Xfce theme](https://itsfoss.com/install-themes-xfce-xubuntu/).
The first thing we will do is pick up a theme from [xfce-look.org](https://xfce-look.org/). My favorite Xfce theme is [XFCE-D-PRO](https://www.xfce-look.org/p/1207818/XFCE-D-PRO).
You can download the theme from [here](https://www.xfce-look.org/p/1207818/startdownload?file_id=1523730502&file_name=XFCE-D-PRO-1.6.tar.xz&file_type=application/x-xz&file_size=105328&url=https%3A%2F%2Fdl.opendesktop.org%2Fapi%2Ffiles%2Fdownloadfile%2Fid%2F1523730502%2Fs%2F6019b2b57a1452471eac6403ae1522da%2Ft%2F1529360682%2Fu%2F%2FXFCE-D-PRO-1.6.tar.xz) and extract it somewhere.
You can copy this extracted file to **.themes** folder in your home directory. If the folder is not present by default, you can create one and the same goes for icons which needs a **.icons** folder in the home directory.
Open **Settings > Appearance > Style** to select the theme, log out and login to see the change. Adwaita-dark from default is also a nice one.

You can use any [good GTK theme](https://itsfoss.com/best-gtk-themes/) on Xfce.
### 2. Change icons in Xfce
Xfce-look.org also provides icon themes which you can download, extract and put it in your home directory under **.icons** directory. Once you have added the icon theme in the .icons directory, go to **Settings > Appearance > Icons** to select that icon theme.

I have installed [Moka icon set](https://snwh.org/moka) that looks awesome.

You can also refer to our list of [awesome icon themes](https://itsfoss.com/best-icon-themes-ubuntu-16-04/).
**Optional: Installing themes through Synaptic**
If you want to avoid the manual search and copying of the files, [install Synaptic Manager](https://itsfoss.com/synaptic-package-manager/) in your system. You can look for some best themes over web and icon sets, and using synaptic manager you can search and install it.
`sudo apt-get install synaptic`
**Searching and installing theme/icons through Synaptic**
Open synaptic and click on **Search**. Enter your desired theme, and it will display the list of matching items. Mark all the additional required changes and click on **Apply**. This will download the theme and then install it.

Once done, you can open the **Appearance** option to select the desired theme.
In my opinion, this is not the best way to install themes in Xfce.
### 3. Change wallpapers in Xfce
Again, the default Xfce wallpaper is not bad at all. But you can change the wallpaper to something that matches with your icons and themes.
To change wallpapers in Xfce, right click on the desktop and click on Desktop Settings. You can change the desktop background from your custom collection or the defaults one given.
Right click on the desktop and click on **Desktop Settings**. Choose **Background** from the folder option, and choose any one of the default backgrounds or a custom one.

### 4. Change the dock in Xfce
The default dock is nice and pretty much does what it is for. But again, it looks a bit boring.

However, if you want your dock to be better and with a little more customization options, you can install another dock.
Plank is one of the simplest and lightweight docks and is highly configurable.
To install Plank use the command below:
sudo apt-get install plank
If Plank is not available in the default repository, you can install it from this PPA.
```
sudo add-apt-repository ppa:ricotz/docky
sudo apt-get update
sudo apt-get install plank
```
Before you use Plank, you should remove the default dock by right-clicking in it and under Panel Settings, clicking on delete.
Once done, go to **Accessory > Plank** to launch Plank dock.

Plank picks up icons from the one you are using. So if you change the icon themes, you’ll see the change is reflected in the dock also.
## Wrapping Up
XFCE is a lightweight, fast and highly customizable. If you are limited on system resource, it serves good and you can easily customize it to look better. Here’s how my screen looks after applying these steps.

This is just with half an hour of effort. You can make it look much better with different themes/icons customization. Feel free to share your customized XFCE desktop screen in the comments and the combination of themes and icons you are using. |
11,265 | 怎样通过示弱增强领导力 | https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader | 2019-08-25T05:24:00 | [
"多样性"
] | https://linux.cn/article-11265-1.html | 
传统观念中的领导者总是强壮、大胆、果决的。我也确实见过一些拥有这些特点的领导者。但更多时候,领导者也许看起来比传统印象中的领导者要更脆弱些,他们内心有很多这样的疑问:我的决策正确吗?我真的适合这个职位吗?我有没有在做最该做的事情?
解决这些问题的方法是把问题说出来。把问题憋在心里只会助长它们,一名开明的领导者更倾向于把自己的脆弱之处暴露出来,这样我们才能从有过相同经验的人那里得到慰藉。
为了证明这个观点,我来讲一个故事。
### 一个扰人的想法
假如你在教育领域工作,你会发现大家更倾向于创造[一个包容性的环境](https://opensource.com/open-organization/17/9/building-for-inclusivity) —— 一个鼓励多样性繁荣发展的环境。长话短说,我一直怀疑自己是出于营造包容性环境的考量而被进行的“多样性雇佣”,意思就是人事雇佣我看重的是我的性别而非能力,这个想法一直困扰着我。随之而来的是自我怀疑:我真的是这个岗位的最佳人选吗?还是只是因为我是个女人?许多年来,我都认为公司雇佣我是因为我的能力最好。但如今却发现,对那些雇主们来说,与我的能力相比,他们似乎更关注我的性别。
我开解自己:我到底是因为什么被雇佣并不重要,我知道我是这个职位的最佳人选而且我会用实际行动去证明。我工作很努力,达到过预期,也犯过错,也收获了很多,我做了一个老板想要自己雇员做的一切事情。
但那个“多样性雇佣”问题的阴影并未因此散去。我无法摆脱它,甚至回避一切与之相关的话题如蛇蝎,最终意识到自己拒绝谈论它意味着我能做的只有直面它。如果我继续回避这个问题,早晚会影响到我的工作,这是我最不希望看到的。
### 倾诉心中的困扰
直接谈论多样性和包容性这个话题有点尴尬,在进行自我剖析之前有几个问题需要考虑:
* 我们能够相信我们的同事,能够在他们面前表露脆弱吗?
* 一个团队的领导者在同事面前表露脆弱合适吗?
* 如果我玩脱了呢?会不会影响我的工作?
于是我和一位主管在午餐时间进行了一场小型的 Q&A 会议,这位主管负责着集团很多领域,并且以正直坦率著称。一位女同事问他,“我是因为多样性才被招进来的吗?”,他停下手头工作花了很长时间和一屋子女性员工解释了这件事,我不想复述他讲话的全部内容,我只说对我触动最大的几句:如果你知道自己能够胜任这个职位,并且面试很顺利,那么不必质疑招聘的结果。每个怀疑自己是因为多样性雇佣进公司的人私下都有自己的问题,你不必重蹈他们的覆辙。
完毕。
我很希望我能由衷地说我放下这个问题了,但事实上我没有。这问题挥之不去:万一我就是被破格录取的那个呢?万一我就是多样性雇佣的那个呢?我认识到我不能避免地反复思考这些问题。
几周后我和这位主管进行了一次一对一谈话,在谈话的末尾,我提到作为一位女性,自己很欣赏他那番对于多样性和包容性的坦率发言。当得知领导很有交流的意愿时,谈论这种话题变得轻松许多。我也向他提出了最初的问题,“我是因为多样性才被雇佣的吗?”,他回答得很干脆:“我们谈论过这个问题。”谈话后我意识到,我急切地想找人谈论这些需要勇气的问题,其实只是因为我需要有一个人的关心、倾听和好言劝说。
但正因为我有展露脆弱的勇气——去和那位主管谈论我的问题——我承受我的秘密困扰的能力提高了。我觉得身轻如燕,我开始组织各种对话,主要围绕着内隐偏见及其引起的一系列问题、怎样增加自身的包容性和多样性的表现等。通过这些经历,我发现每个人对于多样性都有不同的认识,如果我只囿于自己的秘密,我不会有机会组织参与这些精彩的对话。
我有谈论内心脆弱的勇气,我希望你也有。
我们可以谈谈那些影响我们领导力的秘密,这样从任何意义上来说,我们距离成为一位开明的领导就近了一些。那么适当示弱有帮助你成为更好的领导者吗?
### 作者简介
Angela Robertson 是微软的一名高管。她和她的团队对社群互助有着极大热情,并参与开源工作。在加入微软之前,Angela 就职于红帽公司。
---
via: <https://opensource.com/article/17/12/how-allowing-myself-be-vulnerable-made-me-better-leader>
作者:[Angela Robertson](https://opensource.com/users/arobertson98) 译者:[Valoniakim](https://github.com/Valoniakim) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,266 | LiVES 视频编辑器 3.0 有了显著的改善 | https://itsfoss.com/lives-video-editor/ | 2019-08-25T05:54:32 | [
"视频"
] | https://linux.cn/article-11266-1.html | 我们最近列出了一个[最佳开源视频编辑器](https://itsfoss.com/open-source-video-editors/)的清单。LiVES 是这些开源视频编辑器之一,可以免费使用。
即使许多用户还在等待 Windows 版本的发行,但在刚刚发行的 LiVES 视频编辑器 Linux 版本中(最新版本 v3.0.1)进行了一个重大更新,包括了一些新的功能和改进。
在这篇文章里,我将会列出新版本中的重要改进,并且我将会提到在 Linux 上安装的步骤。
### LiVES 视频编辑器 3.0:新的改进

总的来说,在这次重大更新中 LiVES 视频编辑器旨在提供更加平滑的回放、防止意外崩溃、优化视频录制,以及让在线视频下载器更加实用。
下面列出了变化:
* 如果需要渲染的话,可以静默渲染直到到视频播放完毕。
* 改进回放插件为 openGL,提供更加平滑的回放。
* 重新启用了 openGL 回放插件的高级选项。
* 在所有帧的 VJ/预解码中允许“Enough”
* 重构了在回放时的时基计算的代码(a/v 同步更好)。
* 彻底修复了外部音频和音频,提高了准确性并减少了 CPU 占用。
* 进入多音轨模式时自动切换至内部音频。
* 重新显示效果映射器窗口时,将会正常展示效果状态(on/off)。
* 解决了音频和视频线程之间的冲突。
* 改进了在线视频下载器,现在可以选择剪辑的大小和格式,并添加了更新选项。
* 对实时效果实例实现了引用计数。
* 大量重写了主界面,清理代码并改进许多视觉效果。
* 优化了视频播放器运行时的录制功能。
* 改进了 projectM 过滤器封装,包括对 SDL2 的支持。
* 添加了一个选项来逆转多轨合成器中的 Z-order(后层现在可以覆盖上层了)。
* 增加了对 musl libc 的支持
* 更新了乌克兰语的翻译
如果你不是一位高级视频编辑师,也许会对上面列出的重要更新提不起太大的兴趣。但正是因为这些更新,才使得“LiVES 视频编辑器”成为了最好的开源视频编辑软件。
### 在 Linux 上安装 LiVES 视频编辑器
LiVES 几乎可以在所有主要的 Linux 发行版中使用。但是,你可能并不能在软件中心找到它的最新版本。所以,如果你想通过这种方式安装,那你就不得不耐心等待了。
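(LCTT 译注:在基于 Debian/Ubuntu 的系统上,你可以先用 `apt policy` 查看仓库中提供的 LiVES 版本,再决定是否需要手动安装更新的版本:)

```
$ apt policy lives
```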
如果你想要手动安装,可以从它的下载页面获取 Fedora/Open SUSE 的 RPM 安装包。它也适用于其他 Linux 发行版。
* [下载 LiVES 视频编辑器](http://lives-video.com/index.php?do=downloads#binaries)
如果你使用的是 Ubuntu(或其他基于 Ubuntu 的发行版),可以安装由 [Ubuntuhandbook](http://ubuntuhandbook.org/index.php/2019/08/lives-video-editor-3-0-released-install-ubuntu/) 维护的[非官方 PPA](https://itsfoss.com/ppa-guide/)。
下面由我来告诉你,你该做些什么:
1、启动终端后输入以下命令:
```
sudo add-apt-repository ppa:ubuntuhandbook1/lives
```
系统将提示你输入密码用于确认添加 PPA。
2、完成后,你现在可以轻松地更新软件包列表并安装 LiVES 视频编辑器。以下是需要你输入的命令段:
```
sudo apt update
sudo apt install lives lives-plugins
```
3、现在,它开始下载并安装这个视频编辑器,等待大约一分钟即可完成。
### 总结
Linux 上有许多[视频编辑器](https://itsfoss.com/best-video-editing-software-linux/)。但它们通常被认为不能用于专业编辑。而我并不是一名专业人士,所以像 LiVES 这样免费的视频编辑器就足以进行简单的编辑了。
你认为怎么样呢?你在 Linux 上使用 LiVES 或其他视频编辑器的体验还好吗?在下面的评论中告诉我们你的感觉吧。
---
via: <https://itsfoss.com/lives-video-editor/>
作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Scvoet](https://github.com/scvoet) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | We recently covered a list of [best open source video editors](https://itsfoss.com/open-source-video-editors/). LiVES is one of those open source video editors, available for free.
Even though a lot of users are still waiting for the release on Windows, a major update just popped up for LiVES Video Editor (i.e v3.0.1 as the latest package) on Linux. The new upgrade includes some new features and improvements.
In this article, I’ll cover the key improvements in the new version and I’ll also mention the steps to install it on your Linux system.
## LiVES Video Editor 3.0: New Changes

Overall, with this major update – LiVES Video Editor aims to have a smoother playback, prevent unwanted crashes, optimized video recording, and making the online video downloader more useful.
The list of changes are:
- Render silence to end of video if necessary during rendering.
- Improvements to openGL playback plugin, including much smoother playback.
- Re-enable Advanced options for the openGL playback plugin.
- Allow “Enough” in VJ / Pre-decode all frames
- Refactor code for timebase calculations during playback (better a/v synch).
- Overhaul external audio and audio recording to improve accuracy and use fewer CPU cycles.
- Auto switch to internal audio when entering multitack mode.
- Show correct effects state (on / off) when reshowing effect mapper window.
- Eliminate some race conditions between the audio and video threads.
- Improvements to online video downloader, clip size and format can now be selected, added an update option.
- Implemented reference counting for realtime effect instances.
- Extensively rewrote the main interface, cleaning up the code and making many visual improvements.
- Optimized recording when video generators are running.
- Improvements to the projectM filter wrapper, including SDL2 support.
- Added an option to invert the Z-order in multitrack compositor (rear layers can now overlay front ones).
- Added support for musl libc
- Updated translations for Ukranian
While some of the points listed can just go over your head if you are not an advanced video editor. But, in a nutshell, all of these things make ‘LiVES Video Editor’ a better open source video editing software.
## Installing LiVES Video Editor on Linux
LiVES is normally available in the repository of all major Linux distributions. However, you may not always find the latest version in your software center.
### Installing LiVES on Ubuntu
On Ubuntu 20.04, you already have LiVES 3.0 in the universe repository. You can install it either from the software center:

Or use the command below:
`sudo apt install lives lives-plugins`
For Ubuntu 18.04 and Linux Mint 19 series, you can add the [unofficial PPA](https://itsfoss.com/ppa-guide/) maintained by [Ubuntuhandbook](http://ubuntuhandbook.org/index.php/2019/08/lives-video-editor-3-0-released-install-ubuntu/). Here’s how to do it:
**1.** Launch the terminal and enter the following command:
sudo add-apt-repository ppa:ubuntuhandbook1/lives
You will be prompted for the password to authenticate the addition of PPA.
**2. **Once done, you can now easily proceed to update the list of packages and get LiVES Video Editor installed. Here’s the set of commands that you need to enter next:
sudo apt update sudo apt install lives lives-plugins
**3.** Now, it will start downloading and installing the video editor. You should be good to go in a minute.
You should [know about removing PPA](https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/) if you want to uninstall LiVES later.
### Installing LiVES on other Linux distributions
If you want to install it on other distributions, you can get the RPM packages for Fedora/Open SUSE from its download page.
The source code is also available for Linux distros.
**Wrapping Up**
There are a handful of [video editors available on Linux](https://itsfoss.com/best-video-editing-software-linux/). But they are not often considered good enough for professional editing. I am not a professional but I do manage simple editing with such freely available video editors like LiVES.
How about you? How’s your experience with LiVES or other video editors on Linux? Let us know your thoughts in the comments below. |
11,268 | Podman 和用户命名空间:天作之合 | https://opensource.com/article/18/12/podman-and-user-namespaces | 2019-08-25T22:02:00 | [
"容器",
"Podman"
] | /article-11268-1.html |
>
> 了解如何使用 Podman 在单独的用户空间运行容器。
>
>
>

[Podman](https://podman.io/) 是 [libpod](https://github.com/containers/libpod) 库的一部分,使用户能够管理 pod、容器和容器镜像。在我的上一篇文章中,我写过将 [Podman 作为一种更安全的运行容器的方式](/article-11261-1.html)。在这里,我将解释如何使用 Podman 在单独的用户命名空间中运行容器。
作为分离容器的一个很棒的功能,我一直在思考<ruby> <a href="http://man7.org/linux/man-pages/man7/user_namespaces.7.html"> 用户命名空间 </a> <rt> user namespace </rt></ruby>,它主要是由 Red Hat 的 Eric Biederman 开发的。用户命名空间允许你指定用于运行容器的用户标识符(UID)和组标识符(GID)映射。这意味着你可以在容器内以 UID 0 运行,在容器外以 UID 100000 运行。如果容器进程逃逸出了容器,内核会将它们视为以 UID 100000 运行。不仅如此,任何未映射到用户命名空间的 UID 所拥有的文件对象都将被视为 `nobody` 所拥有(UID 是 `65534`, 由 `kernel.overflowuid` 指定),并且不允许容器进程访问,除非该对象可由“其他人”访问(即世界可读/可写)。
如果你拥有一个权限为 [660](https://chmodcommand.com/chmod-660/) 的属主为“真实” `root` 的文件,而当用户命名空间中的容器进程尝试读取它时,会阻止它们访问它,并且会将该文件视为 `nobody` 所拥有。
### 示例
以下是它是如何工作的。首先,我在 `root` 拥有的系统中创建一个文件。
```
$ sudo bash -c "echo Test > /tmp/test"
$ sudo chmod 600 /tmp/test
$ sudo ls -l /tmp/test
-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
```
接下来,我将该文件卷挂载到一个使用用户命名空间映射 `0:100000:5000` 运行的容器中。
```
$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
# id
uid=0(root) gid=0(root) groups=0(root)
# ls -l /tmp/test
-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
# cat /tmp/test
cat: /tmp/test: Permission denied
```
上面的 `--uidmap` 设置告诉 Podman 在容器内映射一系列的 5000 个 UID,从容器外的 UID 100000 开始的范围(100000-104999)映射到容器内 UID 0 开始的范围(0-4999)。在容器内部,如果我的进程以 UID 1 运行,则它在主机上为 100001。
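(LCTT 译注:你可以直接读取容器内的 `/proc/self/uid_map` 来查看这一映射。三列分别是:容器内的起始 UID、宿主机上对应的起始 UID、映射的 UID 数量。以下输出为示意:)

```
$ sudo podman run --uidmap 0:100000:5000 fedora cat /proc/self/uid_map
         0     100000       5000
```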
由于实际的 `UID=0` 未映射到容器中,因此 `root` 拥有的任何文件都将被视为 `nobody` 所拥有。即使容器内的进程具有 `CAP_DAC_OVERRIDE` 能力,也无法覆盖此种保护。`DAC_OVERRIDE` 能力使得 root 的进程能够读/写系统上的任何文件,即使进程不是 `root` 用户拥有的,也不是全局可读或可写的。
用户命名空间中的能力与宿主机上的能力不同,它们是命名空间化的能力。这意味着我的容器的 root 只在容器内具有能力,实际上只对映射进该用户命名空间的那段 UID 范围有效。如果容器进程逃逸出了容器,它对任何未映射进该用户命名空间的 UID 都不具备任何能力,这包括 `UID=0`。即使进程可能以某种方式进入另一个容器,如果那个容器使用不同范围的 UID,它们也不具备这些能力。
请注意,SELinux 和其他技术还限制了容器进程破开容器时会发生的情况。
### 使用 podman top 来显示用户命名空间
我们在 `podman top` 中添加了一些功能,允许你检查容器内运行的进程的用户名,并标识它们在宿主机上的真实 UID。
让我们首先使用我们的 UID 映射运行一个 `sleep` 容器。
```
$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000
```
现在运行 `podman top`:
```
$ sudo podman top --latest user huser
USER HUSER
root 100000
$ ps -ef | grep sleep
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```
注意 `podman top` 报告用户进程在容器内以 `root` 身份运行,但在宿主机(`HUSER`)上以 UID 100000 运行。此外,`ps` 命令确认 `sleep` 过程以 UID 100000 运行。
现在让我们运行第二个容器,但这次我们将选择一个单独的 UID 映射,从 200000 开始。
```
$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000
$ sudo podman top --latest user huser
USER HUSER
root 200000
$ ps -ef | grep sleep
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
200000 23644 23632 1 08:08 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
```
请注意,`podman top` 报告第二个容器在容器内以 `root` 身份运行,但在宿主机上是 UID=200000。
另请参阅 `ps` 命令,它显示两个 `sleep` 进程都在运行:一个为 100000,另一个为 200000。
这意味着在单独的用户命名空间内运行容器可以在进程之间进行传统的 UID 分离,而这从一开始就是 Linux/Unix 的标准安全工具。
### 用户命名空间的问题
几年来,我一直主张用户命名空间应该作为每个人应该有的安全工具,但几乎没有人使用过。原因是没有任何文件系统支持,也没有一个<ruby> 移动文件系统 <rt> shifting file system </rt></ruby>。
在容器中,你希望在许多容器之间共享**基本**镜像。上面的每个示例中使用了 Fedora 基本镜像。Fedora 镜像中的大多数文件都由真实的 `UID=0` 拥有。如果我在此镜像上使用用户名称空间 `0:100000:5000` 运行容器,默认情况下它会将所有这些文件视为 `nobody` 所拥有,因此我们需要移动所有这些 UID 以匹配用户名称空间。多年来,我想要一个挂载选项来告诉内核重新映射这些文件 UID 以匹配用户命名空间。上游内核存储开发人员还在继续研究,在此功能上已经取得一些进展,但这是一个难题。
由于由 Nalin Dahyabhai 领导的团队开发的自动 [chown](https://en.wikipedia.org/wiki/Chown) 功能内置于[容器/存储](https://github.com/containers/storage)中,Podman 可以在同一镜像上使用不同的用户命名空间。当 Podman 使用容器/存储,并且在新的用户命名空间中首次使用一个容器镜像时,容器/存储会 “chown”(即,更改所有权)镜像中的所有文件到该用户命名空间中映射的 UID,并创建一个新镜像。可以把它想象成一个 `fedora:0:100000:5000` 镜像。
当 Podman 在具有相同 UID 映射的镜像上运行另一个容器时,它使用“预先 chown”的镜像。当我在`0:200000:5000` 上运行第二个容器时,容器/存储会创建第二个镜像,我们称之为 `fedora:0:200000:5000`。
请注意,如果你正在执行 `podman build` 或 `podman commit` 并将新创建的镜像推送到容器注册库,Podman 将使用容器/存储来反转该移动,并将推送所有文件属主变回真实 UID=0 的镜像。
这可能会导致在新的 UID 映射中创建容器时出现真正的减速,因为 `chown` 可能会很慢,具体取决于镜像中的文件数。此外,在普通的 [OverlayFS](https://en.wikipedia.org/wiki/OverlayFS) 上,镜像中的每个文件都会被复制。普通的 Fedora 镜像最多可能需要 30 秒才能完成 `chown` 并启动容器。
幸运的是,Red Hat 内核存储团队(主要是 Vivek Goyal 和 Miklos Szeredi)在内核 4.19 中为 OverlayFS 添加了一项新功能。该功能称为“仅复制元数据”。如果使用 `metacopy=on` 选项来挂载层叠文件系统,则在更改文件属性时,它不会复制较低层的内容;内核会创建新的 inode,其中包含引用指向较低级别数据的属性。如果内容发生变化,它仍会复制内容。如果你想试用它,可以在 Red Hat Enterprise Linux 8 Beta 中使用此功能。
这意味着容器 `chown` 可能在两秒钟内发生,并且你不会倍增每个容器的存储空间。
这使得像 Podman 这样的工具在不同的用户命名空间中运行容器是可行的,大大提高了系统的安全性。
### 前瞻
我想向 Podman 添加一个新选项,比如 `--userns=auto`,它会为你运行的每个容器自动选择一个唯一的用户命名空间。这类似于 SELinux 与单独的多类别安全(MCS)标签一起使用的方式。如果设置环境变量 `PODMAN_USERNS=auto`,则甚至不需要设置该选项。
Podman 最终允许用户在不同的用户名称空间中运行容器。像 [Buildah](https://buildah.io/) 和 [CRI-O](http://cri-o.io/) 这样的工具也可以利用用户命名空间。但是,对于 CRI-O,Kubernetes 需要了解哪个用户命名空间将运行容器引擎,上游正在开发这个功能。
在我的下一篇文章中,我将解释如何在用户命名空间中将 Podman 作为非 root 用户运行。
---
via: <https://opensource.com/article/18/12/podman-and-user-namespaces>
作者:[Daniel J Walsh](https://opensource.com/users/rhatdan) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
11,270 | 查找 Linux 发行版名称、版本和内核详细信息 | https://www.ostechnix.com/find-out-the-linux-distribution-name-version-and-kernel-details/ | 2019-08-26T11:40:27 | [
"版本"
] | https://linux.cn/article-11270-1.html | 
本指南介绍了如何查找 Linux 发行版名称、版本和内核详细信息。如果你的 Linux 系统有 GUI 界面,那么你可以从系统设置中轻松找到这些信息。但在命令行模式下,初学者很难找到这些详情。没有问题!我这里给出了一些命令行方法来查找 Linux 系统信息。可能有很多,但这些方法适用于大多数 Linux 发行版。
### 1、查找 Linux 发行版名称、版本
有很多方法可以找出 VPS 中运行的操作系统。
#### 方法 1:
打开终端并运行以下命令:
```
$ cat /etc/*-release
```
CentOS 7 上的示例输出:
```
CentOS Linux release 7.0.1406 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CentOS Linux release 7.0.1406 (Core)
CentOS Linux release 7.0.1406 (Core)
```
Ubuntu 18.04 上的示例输出:
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
#### 方法 2:
以下命令也能获取你发行版的详细信息。
```
$ cat /etc/issue
```
Ubuntu 18.04 上的示例输出:
```
Ubuntu 18.04.2 LTS \n \l
```
#### 方法 3:
以下命令能在 Debian 及其衍生版如 Ubuntu、Linux Mint 上获取发行版详细信息。
```
$ lsb_release -a
```
示例输出:
```
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
```
### 2、查找 Linux 内核详细信息
#### 方法 1:
要查找 Linux 内核详细信息,请在终端运行以下命令。
```
$ uname -a
```
CentOS 7 上的示例输出:
```
Linux server.ostechnix.lan 3.10.0-123.9.3.el7.x86_64 #1 SMP Thu Nov 6 15:06:03 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
```
Ubuntu 18.04 上的示例输出:
```
Linux ostechnix 4.18.0-25-generic #26~18.04.1-Ubuntu SMP Thu Jun 27 07:28:31 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
或者,
```
$ uname -mrs
```
示例输出:
```
Linux 4.18.0-25-generic x86_64
```
这里,
* `Linux` – 内核名
* `4.18.0-25-generic` – 内核版本
* `x86_64` – 系统硬件架构(即 64 位系统)
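如果只需要内核版本号本身(例如在脚本中使用),可以单独使用 `-r` 选项:

```
$ uname -r
4.18.0-25-generic
```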
有关 `uname` 命令的更多详细信息,请参考手册页。
```
$ man uname
```
#### 方法 2:
在终端中,运行以下命令:
```
$ cat /proc/version
```
CentOS 7 上的示例输出:
```
Linux version 3.10.0-123.9.3.el7.x86_64 ([email protected]) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC) ) #1 SMP Thu Nov 6 15:06:03 UTC 2014
```
Ubuntu 18.04 上的示例输出:
```
Linux version 4.18.0-25-generic ([email protected]) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #26~18.04.1-Ubuntu SMP Thu Jun 27 07:28:31 UTC 2019
```
这些是查找 Linux 发行版的名称、版本和内核详细信息的几种方法。希望你觉得它有用。
---
via: <https://www.ostechnix.com/find-out-the-linux-distribution-name-version-and-kernel-details/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,271 | Linux 内核生日快乐 —— 那么你喜欢哪个版本? | https://opensource.com/article/19/8/linux-kernel-favorite-release | 2019-08-26T13:07:00 | [
"内核",
"生日"
] | https://linux.cn/article-11271-1.html |
>
> 自从第一个 Linux 内核发布已经过去 28 年了。自 1991 年以来发布了几十个 Linux 内核版本,你喜欢的是哪个?投个票吧!
>
>
>

让我们回到 1991 年 8 月,那个创造历史的时间。科技界经历过许多关键时刻,这些时刻仍在影响着我们。在那个 8 月,Tim Berners-Lee 宣布了一个名为<ruby> 万维网 <rt> World Wide Web </rt></ruby>的有趣项目,并推出了第一个网站;超级任天堂在美国发布,为所有年龄段的孩子们开启了新的游戏篇章;在赫尔辛基大学,一位名叫 Linus Torvalds 的学生于 1991 年 8 月 25 日向同好们询问了他作为[业余爱好](http://lkml.iu.edu/hypermail/linux/kernel/1908.3/00457.html)开发的新的免费操作系统的反馈。那时 Linux 内核诞生了。
如今,我们可以浏览超过 15 亿个网站,在我们的电视机上玩另外五种任天堂游戏机,并维护着六个长期 Linux 内核。以下是我们的一些作者对他们最喜欢的 Linux 内核版本所说的话。
“引入模块的那个版本(1.2 吧?)。这是 Linux 迈向成功的未来的重要一步。” - Milan Zamazal
“2.6.9,因为它是我 2006 年加入 Red Hat 时的版本(在 RHEL4 中)。但我也更钟爱 2.6.18(RHEL5)一点,因为它在大规模部署的、我们最大客户(Telco, FSI)的关键任务工作负载中使用。它还带来了我们最大的技术变革之一:虚拟化(Xen 然后是 KVM)。” - Herve Lemaitre
“4.10。(虽然我不知道如何衡量这一点)。” - Ivan Bazulic
“Fedora 30 附带的新内核修复了我的 Thinkpad Yoga 笔记本电脑的挂起问题;挂起功能现在可以完美运行。我是一个笨人,只是忍受这个问题而从未试着提交错误报告,所以我特别感谢这项工作,我知道一定会解决这个问题。” - MáirínDuffy
“2.6.16 版本将永远在我的心中占有特殊的位置。这是我负责将其转换为在 hertz neverlost gps 系统上运行的第一个内核。我负责这项为那个设备构建内核和根文件系统的工作,对我来说这真的是一个奇迹时刻。我们在初次发布后多次更新了内核,但我想我必须还是推选那个最初版本,不过,我对于它的热爱没有任何技术原因,这纯属感性选择 =)” - Michael McCune
“我最喜欢的 Linux 内核版本是 2.4.0 系列,它集成了对 USB、LVM 和 ext3 的支持。ext3 是第一个具有日志支持的主流 Linux 文件系统,其从 2.4.15 内核可用。我使用的第一个内核版本是 2.2.13。” - Sean Nelson
“也许是 2.2.14,因为它是在我安装的第一个 Linux 上运行的版本(Mandrake Linux 7.0,在 2000 IIRC)。它也是我第一个需要重新编译以让我的视频卡或我的调制解调器(记不清了)工作的版本。” - GermánPulido
“我认为最新的一个!但我有段时间使用实时内核扩展来进行音频制作。” - Mario Torre
*在 Linux 内核超过 [52 个版本](http://phb-crystal-ball.org/)当中,你最喜欢哪一个?参加我们的调查并在评论中告诉我们原因。*
---
via: <https://opensource.com/article/19/8/linux-kernel-favorite-release>
作者:[Lauren Pritchett](https://opensource.com/users/lauren-pritchetthttps://opensource.com/users/sethhttps://opensource.com/users/luis-ibanezhttps://opensource.com/users/mhayden) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Let's take a trip back to August 1991, when history was in the making. The tech world faced many pivotal moments that continue to impact us today. An intriguing project called the World Wide Web was announced by Tim Berners-Lee and the first website was launched. Super Nintendo was released in the United States and a new chapter of gaming began for kids of all ages. At the University of Helsinki, a student named Linus Torvalds asked his peers for feedback on a new free operating system he had been developing as a hobby. It was then that the Linux kernel was born.
Today, we can browse more than 1.5 billion websites, play with five additional Nintendo game consoles on our televisions, and maintain six longterm Linux kernels. Here's what some of our writers had to say about their favorite Linux kernel release.
"The one that introduced modules (was it 1.2?). It was a big step towards a successful Linux future." —Milan Zamazal
"2.6.9 as it was the version at the time when I joined Red Hat in 2006 (in RHEL4). But also a slightly bigger love for 2.6.18 (RHEL5) as it was the one which was deployed at massive scale / for mission critical workloads at all our largest customers (Telco, FSI). It also brought one of our biggest techno change with virtualization (Xen then KVM)." —Herve Lemaitre
"Kernel 4.10. (although I have no idea how to measure this)." —Ivan Bazulic
"The new kernel that shipped with Fedora 30 fixed a suspend issue with my Thinkpad Yoga laptop; suspend now works flawlessly. I'm a jerk and just lived with this and never filed a bug report, so I'm especially appreciative of the work I know must have gone into fixing this." —Máirín Duffy
"I will always have a special place in my heart for the 2.6.16 release. It was the first kernel that I was responsible for converting to run on the hertz neverlost gps system. I lead the effort to build the kernel and root filesystem for that device, it was truly a magical time for me. We updated the kernel several times after that initial release, but I think I'll have to go with the original version. Sadly, I don't really have any technical reasons for this love, it's purely sentimental =)" —Michael McCune
"My favorite Linux kernel release was the 2.4.0 series, it integrated support for USB, the LVM, and ext3. Ext3 being the first mainline Linux filesystem to have journaling support and available with the 2.4.15 kernel. My very first kernel release was 2.2.13." —Sean Nelson
"Maybe it's 2.2.14, because it's the version that ran on the first Linux I ever installed (Mandrake Linux 7.0, in 2000 IIRC). It was also the first version I ever had to recompile to get my video card, or my modem (can't recall) working." —Germán Pulido
"I think the latest one! But I had for a time used the realtime kernel extensions for audio production." —Mario Torre
*Of the more than 52 versions of the Linux kernel, which one is your favorite? Take our poll and tell us why in the comments.*
|
11,272 | 互动小说及其开源简史 | https://opensource.com/article/18/7/interactive-fiction-tools | 2019-08-27T14:27:00 | [
"小说"
] | https://linux.cn/article-11272-1.html |
>
> 了解开源如何促进互动小说的成长和发展。
>
>
>

<ruby> <a href="http://iftechfoundation.org/"> 互动小说技术基金会 </a> <rt> Interactive Fiction Technology Foundation </rt></ruby>(IFTF) 是一个非营利组织,致力于保护和改进那些用来生成我们称之为<ruby> 互动小说 <rt> interactive fiction </rt></ruby>的数字艺术形式的技术。当 Opensource.com 的一位社区版主提出一篇关于 IFTF、它支持的技术与服务,以及它如何与开源相交织的文章时,我发现这对于我讲了几十年的开源故事来说是个新颖的视角。互动小说的历史比<ruby> 自由及开源软件 <rt> Free and Open Source Software </rt></ruby>(FOSS)运动的历史还要长,但同时也与之密切相关。希望你们能喜欢我在这里的分享。
### 定义和历史
对于我来说,互动小说这个术语涵盖了读者主要通过文本与之交互的任何视频游戏或数字化艺术作品。这个术语起源于 20 世纪 80 年代,当时由语法解析器驱动的文本冒险游戏确立了什么是家用电脑娱乐,在美国主要以 [魔域](https://en.wikipedia.org/wiki/Zork)、[银河系漫游指南](https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(video_game)) 和 [Infocom](https://en.wikipedia.org/wiki/Infocom) 公司的其它佳作为代表。在 20 世纪 90 年代,它的主流商业价值被挖掘殆尽,但在线爱好者社区接过了该传统,继续发布这类游戏和游戏创建工具。
在四分之一个世纪之后的今天,互动小说包括了品种繁多并且妙趣横生的作品,如从充满谜题的文字冒险游戏到衍生改良的超文本类型。定期举办的在线竞赛和节日为品鉴和试玩新作品提供了个好地方——英语互动小说世界每年都会举办一些活动,包括 [Spring Thing](http://www.springthing.net/) 和 [IFComp](http://ifcomp.org/)。后者是自 1995 年以来现代互动小说的核心活动,这也使它成为在同类型活动中持续举办时间最长的游戏展示活动。[IFComp 2017 年的评选和排名记录](https://ifcomp.org/comp/2017) 显示了如今基于文本的游戏在形式、风格和主题方面的惊人多样性。
(作者注:以上我特指英语,也许是因为这类技术以写作为核心,互动小说社区倾向于按语言各自划分。例如也有 [法语](http://www.fiction-interactive.fr/) 或 [意大利语](http://www.oldgamesitalia.net/content/marmellata-davventura-2018) 的互动小说年度活动,我就听说过至少一届的中文互动小说节。幸运的是,这些边界易于打破。在我管理 IFComp 的四年中,我们欢迎了来自各个国际社区的英译作品。)

*在解释器 Lectrote 上启动 Emily Short 的“假冒的猴子”新游戏(二者皆为开源软件)。*
此外由于互动小说专注于文本,它为玩家和作者都提供了最方便的平台。几乎所有能阅读数字化文本的人(包括能通过文字转语音软件等辅助技术阅读的用户)都能玩大部分的互动小说作品。同样,互动小说的创作对所有愿意学习和使用其工具和技术的作家开放。
这就引出了互动小说与开源之间的长期关系——自互动小说的商业全盛时期以来,开源一直在帮助维持这种艺术形式的可得性。接下来我将概述当代开源互动小说创建工具,并讨论共享源代码的互动小说作品古老且有点稀奇的传统。
### 开源互动小说工具的世界
一些开发平台(其中大部分是开源的)可用于创建传统的语法解析器驱动互动小说,其中用户可通过输入命令(如 `向北走`、`拾取提灯`、`抚摸小猫` 或 `向 Zoe 询问量子力学`)来与游戏世界交互。20 世纪 90 年代初期出现了几个<ruby> 适于魔改 <rt> hacker-friendly </rt></ruby>的语法解析器游戏开发工具,其中目前还在使用的有 [TADS](http://tads.org/)、[Alan](https://www.alanif.se/) 和 [Quest](http://textadventures.co.uk/quest/),它们都是开放的,其中后两者采用了 FOSS 许可证。
其中最出名的是 [Inform](http://inform7.com/),1993 年 Graham Nelson 发布了第一个版本,目前由 Nelson 领导的一个团队进行维护。Inform 的源代码是不太寻常的半开源:Inform 6 是前一个主要版本,[它通过 Artistic 许可证开放源码](https://github.com/DavidKinder/Inform6)。这其中蕴涵着比我们所看到的更直接的相关性,因为它专有的 Inform 7 将 Inform 6 作为其核心,可以让它在将作品编译为机器码之前,将其 [杰出的自然语言语法](http://inform7.com/learn/man/RB_4_1.html#e307) (LCTT 译注:此链接已遗失)翻译为其前一代的那种更类似 C 的代码。

*Inform 7 集成式开发环境,打开了文档以及一个示例项目*
Inform 游戏运行在虚拟机上,这是 Infocom 时代的遗留产物。当时的发行者为了让同一个游戏可以运行在 Apple II、Commodore 4、Atari 800 以及其它种类的“[家用计算机](https://www.youtube.com/watch?v=bu55q_3YtOY)”上,将虚拟机作为解决方案。这些原本流行的操作系统中只有少数至今仍存在,但 Inform 的虚拟机使得它创建的作品能够通过 Inform 解释器运行在任何的计算机上。这些虚拟机包括相对现代的 [Glulx](http://ifwiki.org/index.php/Glulx),或者通过对 Infocom 过去的虚拟机进行逆向工程克隆得到的可爱的古董 [Z-machine](http://ifwiki.org/index.php/Z-machine)。现在,流行的跨平台解释器包括如 [lectrote](https://github.com/erkyrath/lectrote) 和 [Gargoyle](https://github.com/garglk/garglk/) 等桌面程序,以及如 [Quixe](http://eblong.com/zarf/glulx/quixe/) 和 [Parchment](https://github.com/curiousdannii/parchment) 等基于浏览器的程序。以上所有均为开源软件。
如其它的流行开源项目一样,即便 Inform 的开发节奏随着它的成熟而放缓,活跃而透明的生态系统仍让它保持着生命力。对于 Inform 来说,这个生态包括前面提到的解释器、[一套语言扩展](https://github.com/i7/extensions)(通常混合使用 Inform 6 和 Inform 7 写成),当然也包括所有用它们写成并分享于全世界的作品,有的时候也包括那些源代码。(在这篇文章的后半部分我会回到这个话题)
互动小说创建工具发明于 21 世纪,力求在传统的语法解析器之外探索一种新的玩家交互方式,即创建任何现代 Web 浏览器都能加载的超文本驱动作品。其中的领头羊是 [Twine](http://twinery.org/),原本由 Chris Klimas 在 2009 年开发,目前是 [GNU 许可证开源项目](https://github.com/klembot/twinejs),有许多贡献者正在积极开发。(事实上,[Twine](https://opensource.com/article/18/7/twine-vs-renpy-interactive-fiction) 的开源软件血统可追溯到 [TiddlyWiki](https://tiddlywiki.com/),Klimas 的项目最初是从该项目衍生而来的)
对于互动小说开发来说,Twine 代表着一系列最开放及最可用的方法。由于它天生的 FOSS 属性,它将其输出渲染为一个自包含的网站,不依赖于需要进一步特殊解析的机器码,而是使用开放并且成熟的 HTML、CSS 和 JavaScript 标准。作为一个创建工具,Twine 能够根据创建者的技能等级,展示出与之相匹配的复杂度。拥有很少或没有编程知识的用户能够创建简单但是可玩的互动小说作品,但那些拥有更多编码和设计技能(包括通过开发 Twine 游戏获得的技能提升)的用户能够创建更复杂的项目。这也难怪近几年 Twine 在教育领域的曝光率和流行度有不小的提升。
另一些值得注意的开源互动小说开发项目包括由 Ian Millington 开发的以 MIT 许可证发布的 [Undum](https://github.com/idmillington/undum),以及由 Dan Fabulich 和 [Choice of Games](https://www.choiceofgames.com/) 团队开发的 [ChoiceScript](https://github.com/dfabulich/choicescript),两者也专注于将 Web 浏览器作为游戏平台。除了以上专用的开发系统以外,基于 Web 的互动小说也呈现给我们以开源作品的丰富、变幻的生态。比如 Furkle 的 [Twine 扩展工具集](https://github.com/furkle),以及 Liza Daly 为自己的互动小说游戏创建的名为 [Windrift](https://github.com/lizadaly/windrift) 的 JavaScript 框架。
### 程序、游戏,以及游戏程序
Twine 受益于 [一个致力于提供支持的长期 IFTF 计划](http://iftechfoundation.org/committees/twine/),公众可以为其维护和发展提供资助。IFTF 还直接支持两个长期公共服务:IFComp 和<ruby> 互动小说归档 <rt> IF Archive </rt></ruby>,这两个服务都依赖并回馈开源软件和技术。

*由 Liza Daly 开发的“Harmonia”的开场画面,该游戏使用 Windrift 开源互动小说创建框架创建。*
自 2014 年以来,用于运行 IFComp 网站的基于 Perl 和 JavaScript 的应用程序一直是 [一个共享源代码项目](https://github.com/iftechfoundation/ifcomp),它反映了 [互动小说特有子组件使用的 FOSS 许可证是个大杂烩](https://github.com/iftechfoundation/ifcomp/blob/master/LICENSE.md),其中包括那些可以让语法解析器驱动的参赛作品在 Web 浏览器中运行的各式各样的代码库。在 1992 年上线并 [在 2017 年成为一个 IFTF 项目](http://blog.iftechfoundation.org/2017-06-30-iftf-is-adopting-the-if-archive.html) 的 <ruby> <a href="https://www.ifarchive.org/"> 互动小说归档 </a> <rt> IF Archive </rt></ruby>,是一套完全基于古老且稳定的互联网标准的镜像仓库,只使用了 [一点开源 Python 脚本](https://github.com/iftechfoundation/ifarchive-ifmap-py) 用来处理索引。
### 最后,也是最有趣的部分,让我们聊聊开源文字游戏
互动小说归档的主体 [由游戏组成](https://www.ifarchive.org/indexes/if-archiveXgames),当然,是那些历经岁月的游戏。它们反映了数十年来游戏设计潮流和互动小说工具的演进。
许多互动小说作品都共享其源代码,要快速找到它们很简单 —— [在 IFDB 中搜索标签 “source available”](http://ifdb.tads.org/search?sortby=ratu&searchfor=%22source+available%22)。IFDB 是另一个长期运行的互动小说社区服务,由 TADS 的创立者 Mike Roberts 私人运营。对更加简单的界面感到舒适的用户,也可以浏览互动小说归档的 [games/source 目录](https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html),该目录按开发平台和编写语言对内容进行分组(也有很多作品,由于太繁杂或太古老而无法分类,只能浮于顶级目录)。
对这些代码共享游戏随机抽取的几个样本,揭示了一个有趣的窘境:与更广阔的开源软件世界不同,互动小说社区缺少一种普遍认同的方式来授权它生成的所有代码。与软件工具(包括我们用来创建互动小说的所有工具)不同的是,从字面意思上讲,互动小说游戏是一件艺术作品。这意味着,将面向软件的开源许可证用于互动小说游戏,并不会比将其用于其它像散文或诗歌作品更适合。但同样,互动小说游戏也是一个软件,它展示了创建者希望合法地与世界分享的源代码模式和技术。一个拥有开源意识的互动小说创建者会怎么做呢?
有些游戏通过将其代码放到公共领域来解决这一问题,或者通过明确的许可证,亦或者如 [42 年前由 Crowther 和 Woods 开发的“<ruby> 冒险之旅 <rt> Adventure </rt></ruby>”](http://ifdb.tads.org/viewgame?id=fft6pu91j85y4acv) 一样通过社区发布。一些人试图将其中的不同部分分开,应用他们自己的许可证,允许免费复用游戏公开的业务逻辑,但禁止针对其散文内容的再创作。这是我在开源自己的游戏 <ruby> <a href="https://github.com/jmacdotorg/warblers-nest/"> 莺巢 </a> <rt> The Warbler’s Nest </rt></ruby> 时采取的策略。天知道这是否能在法律上站得住脚,但我当时没有更好的主意。
当然,你会发现一些作品对所有部分使用单一的许可证,而不介意反对者。一个突出的例子就是 [Emily Short 的史诗作品“<ruby> 假冒的猴子 <rt> Counterfeit Monkey </rt></ruby>”](https://github.com/i7/counterfeit-monkey),其全部采用 Creative Commons 4.0 许可证发布。[CC 对其应用于代码感到不满](https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software),但你可以认为 [Inform 7 源码这种不寻常的散文风格特性](https://github.com/i7/counterfeit-monkey/blob/master/Counterfeit%20Monkey.materials/Extensions/Counterfeit%20Monkey/Liquids.i7x) 至少比传统的软件项目更适合 CC 许可证。
### 接下来要做什么呢,冒险者?
如果你希望开始探索互动小说的世界,这里有几个链接可供你参考:
* 如上所述,[IFDB](http://ifdb.tads.org/) 和[互动小说归档](https://ifarchive.org/)都提供了可浏览的界面,用于浏览超过 40 年价值的互动小说作品。其中大部分可以在 Web 浏览器中玩,但有些需要额外的解释器程序。IFDB 能帮助你找到并安装它们。[IFComp 的年度结果页面](https://ifcomp.org/comp/last_comp)展现了另一个视图,帮助你了解最佳的免费和归档可用作品。
* [互动小说技术基金会](http://iftechfoundation.org/)是一个非营利组织,主要帮助并支持 Twine、IFComp 和互动小说归档的发展,以及提升互动小说的无障碍功能、探索互动小说在教育领域中的应用等等。加入其[邮件列表](http://iftechfoundation.org/cgi-bin/mailman/listinfo/friends),可以接收 IFTF 的每月资讯,浏览其[博客](http://blog.iftechfoundation.org/),抑或浏览[一些主题商品](http://blog.iftechfoundation.org/2017-12-20-the-iftf-gift-shop-is-now-open.html)。
* 在今年的早些时候,John Paul Wohlscheid 写了这篇[关于开源互动小说工具](https://itsfoss.com/create-interactive-fiction/)的文章。它涵盖了一些这里没有提及的平台,所以如果你还想了解更多,请看一看这篇文章。
---
via: <https://opensource.com/article/18/7/interactive-fiction-tools>
作者:[Jason Mclntosh](https://opensource.com/users/jmac) 选题:[lujun9972](https://github.com/lujun9972) 译者:[cycoe](https://github.com/cycoe) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The [Interactive Fiction Technology Foundation](http://iftechfoundation.org/) (IFTF) is a non-profit organization dedicated to the preservation and improvement of technologies enabling the digital art form we call *interactive fiction*. When a Community Moderator for Opensource.com suggested an article about IFTF, the technologies and services it supports, and how it all intersects with open source, I found it a novel angle to the decades-long story I’ve so often told. The history of IF is longer than—but quite enmeshed with—the modern FOSS movement. I hope you’ll enjoy my sharing it here.
## Definitions and history
To me, the term *interactive fiction* includes any video game or digital artwork whose audience interacts with it primarily through text. The term originated in the 1980s when parser-driven text adventure games—epitomized in the United States by [Zork](https://en.wikipedia.org/wiki/Zork), [*The Hitchhiker’s Guide to the Galaxy*](https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy_(video_game)), and the rest of [Infocom](https://en.wikipedia.org/wiki/Infocom)’s canon—defined home-computer entertainment. Its mainstream commercial viability had guttered by the 1990s, but online hobbyist communities carried on the tradition, releasing both games and game-creation tools.
After a quarter century, interactive fiction now comprises a broad and sparkling variety of work, from puzzle-laden text adventures to sprawling and introspective hypertexts. Regular online competitions and festivals provide a great place to peruse and play new work: The English-language IF world enjoys annual events including [Spring Thing](http://www.springthing.net/) and [IFComp](http://ifcomp.org/), the latter a centerpiece of modern IF since 1995—which also makes it the longest-lived continually running game showcase event of its kind in any genre. [IFComp’s crop of judged-and-ranked entries from 2017](https://ifcomp.org/comp/2017) shows the amazing diversity in form, style, and subject matter that text-based games boast today.
(I specify "English-language" above because IF communities tend to self-segregate by language, perhaps due to the technology's focus on writing. There are also annual IF events in [French](http://www.fiction-interactive.fr/) and [Italian](http://www.oldgamesitalia.net/content/marmellata-davventura-2018), for example, and I've heard of at least one Chinese IF festival. Happily, these borders are porous; during the four years I managed IFComp, it has welcomed English-translated work from all international communities.)

Starting a new game of Emily Short's "Counterfeit Monkey," running on the interpreter Lectrote (both open source software).
Also due to its focus on text, IF presents some of the most accessible platforms for both play and authorship. Almost anyone who can read digital text—including users of assistive technology such as text-to-speech software—can play most IF works. Likewise, IF creation is open to all writers willing to learn and work with its tools and techniques.
This brings us to IF’s long relationship with open source, which has helped enable the art form’s availability since its commercial heyday. I'll provide an overview of contemporary open-source IF creation tools, and then discuss the ancient and sometimes curious tradition of IF works that share their source code.
## The world of open source IF tools
A number of development platforms, most of which are open source, are available to create traditional parser-driven IF in which the user types commands—for example, `go north,`
`get lamp`
, `pet the cat`
, or `ask Zoe about quantum mechanics`
—to interact with the game’s world. The early 1990s saw the emergence of several hacker-friendly parser-game development kits; those still in use today include [TADS](http://tads.org/), [Alan](https://www.alanif.se/), and [Quest](http://textadventures.co.uk/quest/)—all open, with the latter two bearing FOSS licenses.
But by far the most prominent of these is [Inform](http://inform7.com/), first released by Graham Nelson in 1993 and now maintained by a team Nelson still leads. Inform source is semi-open, in an unusual fashion: Inform 6, the previous major version, [makes its source available through the Artistic License](https://github.com/DavidKinder/Inform6). This has more immediate relevance than may be obvious, since the otherwise proprietary Inform 7 holds Inform 6 at its core, translating its [remarkable natural-language syntax](http://inform7.com/learn/man/RB_4_1.html#e307) into its predecessor’s more C-like code before letting it compile the work down into machine code.

The Inform 7 IDE, loaded up with documentation and a sample project.
Inform games run on a virtual machine, a relic of the Infocom era when that publisher targeted a VM so that it could write a single game that would run on Apple II, Commodore 64, Atari 800, and other flavors of the "[home computer](https://www.youtube.com/watch?v=bu55q_3YtOY)." Fewer popular operating systems exist today, but Inform’s virtual machines—the relatively modern [Glulx](http://ifwiki.org/index.php/Glulx) or the charmingly antique [Z-machine](http://ifwiki.org/index.php/Z-machine), a reverse-engineered clone of Infocom’s historical VM—let Inform-created work run on any computer with an Inform interpreter. Currently, popular cross-platform interpreters include desktop programs like [Lectrote](https://github.com/erkyrath/lectrote) and [Gargoyle](https://github.com/garglk/garglk/) or browser-based ones like [Quixe](http://eblong.com/zarf/glulx/quixe/) and [Parchment](https://github.com/curiousdannii/parchment). All are open source.
If the pace of Inform’s development has slowed in its maturity, it remains vital through an active and transparent ecosystem—just like any other popular open source project. In Inform’s case, this includes the aforementioned interpreters, [a collection of language extensions](https://github.com/i7/extensions) (usually written in a mix of Inform 6 and 7), and of course, all the work created with it and shared with the world, sometimes with source included (I’ll return to that topic later in this article).
IF creation tools invented in the 21st century tend to explore player interactions outside of the traditional parser, generating hypertext-driven work that any modern web browser can load. Chief among these is [Twine](http://twinery.org/), originally developed by Chris Klimas in 2009 and under active development by many contributors today as [a GNU-licensed open source project](https://github.com/klembot/twinejs). (In fact, [Twine](https://opensource.com/article/18/7/twine-vs-renpy-interactive-fiction) can trace its OSS lineage back to [TiddlyWiki](https://tiddlywiki.com/), the project from which Klimas initially derived it.)
Twine represents a sort of maximally [open and accessible approach](https://opensource.com/article/18/7/twine-vs-renpy-interactive-fiction) to IF development: Beyond its own FOSS nature, it renders its output as self-contained websites, relying not on machine code requiring further specialized interpretation but the open and well-exercised standards of HTML, CSS, and JavaScript. As a creative tool, Twine can match its own exposed complexity to the creator’s skill level. Users with little or no programming knowledge can create simple but playable IF work, while those with more coding and design skills—including those developing these skills by making Twine games—can develop more sophisticated projects. Little wonder that Twine’s visibility and popularity in educational contexts has grown quite a bit in recent years.
Other noteworthy open source IF development projects include the MIT-licensed [Undum](https://github.com/idmillington/undum) by Ian Millington, and [ChoiceScript](https://github.com/dfabulich/choicescript) by Dan Fabulich and the [Choice of Games](https://www.choiceofgames.com/) team—both of which also target the web browser as the gameplay platform. Looking beyond strict development systems like these, web-based IF gives us a rich and ever-churning ecosystem of open source work, such as furkle’s [collection of Twine-extending tools](https://github.com/furkle) and Liza Daly’s [Windrift](https://github.com/lizadaly/windrift), a JavaScript framework purpose-built for her own IF games.
## Programs, games, and game-programs
Twine benefits from [a standing IFTF program dedicated to its support](http://iftechfoundation.org/committees/twine/), allowing the public to help fund its maintenance and development. IFTF also directly supports two long-time public services, IFComp and the IF Archive, both of which depend upon and contribute back into open software and technologies.

The opening of Liza Daly's "Harmonia," created with the Windrift open source IF-creation framework.
The Perl- and JavaScript-based application that runs the IFComp’s website has been [a shared-source project](https://github.com/iftechfoundation/ifcomp) since 2014, and it reflects [the stew of FOSS licenses used by its IF-specific sub-components](https://github.com/iftechfoundation/ifcomp/blob/master/LICENSE.md), including the various code libraries that allow parser-driven competition entries to run in a web browser. [The IF Archive](https://www.ifarchive.org/)—online since 1992 and [an IFTF project since 2017](http://blog.iftechfoundation.org/2017-06-30-iftf-is-adopting-the-if-archive.html)—is a set of mirrored repositories based entirely on ancient and stable internet standards, with [a little open source Python script](https://github.com/iftechfoundation/ifarchive-ifmap-py) to handle indexing.
## At last, the fun part: Let's talk about open source text games
The bulk of the archive [comprises games](https://www.ifarchive.org/indexes/if-archiveXgames), of course—years and years of games, reflecting decades of evolving game-design trends and IF tool development.
Lots of IF work shares its source code, and the community’s quick-start solution for finding it is simple: [Search the IFDB for the tag "source available"](http://ifdb.tads.org/search?sortby=ratu&searchfor=%22source+available%22). (The IFDB is yet another long-running IF community service, run privately by TADS creator Mike Roberts.) Users who are comfortable with a more bare-bones interface may also wish to browse [the /games/source directory](https://www.ifarchive.org/indexes/if-archiveXgamesXsource.html) of the IF Archive, which groups content by development platform and written language (there's also a lot of work either too miscellaneous or too ancient to categorize floating at the top).
A little bit of random sampling of these code-sharing games reveals an interesting dilemma: Unlike the wider world of open source software, the IF community lacks a generally agreed-upon way of licensing all the code that it generates. Unlike a software tool—including all the tools we use to build IF—an interactive fiction game is a work of art in the most literal sense, meaning that an open source license intended for software would fit it no better than it would any other work of prose or poetry. But then again, an IF game is *also* a piece of software, and it exhibits source-code patterns and techniques that its creator may legitimately wish to share with the world. What is an open source-aware IF creator to do?
Some games address this by passing their code into the public domain, either through explicit license or—as in the case of [the original 42-year-old Adventure by Crowther and Woods](http://ifdb.tads.org/viewgame?id=fft6pu91j85y4acv)—through community fiat. Some try to split the difference, rolling their own license that allows for free re-use of a game’s exposed business logic but prohibits the creation of work derived specifically from its prose. This is the tack I took when I opened up the source of my own game, [*The Warbler’s Nest*](https://github.com/jmacdotorg/warblers-nest/). Lord knows how well that’d stand up in court, but I didn’t have any better ideas at the time.

Naturally, you can find work that simply puts everything under a single common license and never mind the naysayers. A prominent example is [Emily Short’s epic *Counterfeit Monkey*](https://github.com/i7/counterfeit-monkey), released in its entirety under a Creative Commons 4.0 license. [CC frowns at its application to code](https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software), but you could argue that [the strangely prose-like nature of Inform 7 source](https://github.com/i7/counterfeit-monkey/blob/master/Counterfeit%20Monkey.materials/Extensions/Counterfeit%20Monkey/Liquids.i7x) makes it at least a little more compatible with a CC license than a more traditional software project would be.
## What now, adventurer?
If you are eager to start exploring the world of interactive fiction, here are a few links to check out:
- As mentioned above, [IFDB](http://ifdb.tads.org/) and [the IF Archive](https://ifarchive.org/) both present browsable interfaces to more than 40 years worth of collected interactive fiction work. Much of this is playable in a web browser, but some require additional interpreter programs. IFDB can help you find and install these. [IFComp’s annual results pages](https://ifcomp.org/comp/last_comp) provide another view into the best of this free and archive-available work.
- [The Interactive Fiction Technology Foundation](http://iftechfoundation.org/) is a charitable non-profit organization that helps support Twine, IFComp, and the IF Archive, as well as improve the accessibility of IF, explore IF’s use in education, and more. [Join its mailing list](http://iftechfoundation.org/cgi-bin/mailman/listinfo/friends) to receive IFTF’s monthly newsletter, [peruse its blog](http://blog.iftechfoundation.org/), and browse [some thematic merchandise](http://blog.iftechfoundation.org/2017-12-20-the-iftf-gift-shop-is-now-open.html).
- John Paul Wohlscheid wrote [this article about open-source IF tools](https://itsfoss.com/create-interactive-fiction/) earlier this year. It covers some platforms not mentioned here, so if you’re still hungry for more, have a look.
|
11,274 | Linux 文件系统类型导览 | https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html | 2019-08-27T17:55:00 | [
"文件系统"
] | https://linux.cn/article-11274-1.html |
>
> Linux 文件系统多年来在不断发展,让我们来看一下文件系统类型。
>
>
>

虽然对于普通用户来说可能并不明显,但在过去十年左右的时间里,Linux 文件系统已经发生了显著的变化,这使它们更不容易受到损坏和性能问题的影响。
如今大多数 Linux 系统使用名为 ext4 的文件系统。 “ext” 代表“<ruby> 扩展 <rt> extended </rt></ruby>”,“4” 表示这是此文件系统的第 4 代。随着时间的推移添加的功能包括:能够提供越来越大的文件系统(目前大到 1,000,000 TiB)和更大的文件(高达 16 TiB),更抗系统崩溃,更少碎片(将单个文件分散为存在多个位置的块)以提高性能。
ext4 文件系统还带来了对性能、可伸缩性和容量的其他改进。实现了元数据和日志校验和以增强可靠性。时间戳现在可以跟踪纳秒级变化,以便更精确地记录文件的时间信息(例如,文件创建和最后更新时间)。并且,由于在时间戳字段中增加了两个位,2038 年问题(存储日期/时间的字段将从最大值翻转到零)已被推迟到了 400 多年之后(到 2446 年)。
### 文件系统类型
要确定 Linux 系统上文件系统的类型,请使用 `df` 命令。下面显示的命令中的 `-T` 选项显示文件系统类型。`-h` 则显示“易读的”磁盘大小,换句话说,它会把报告的单位调整为 M、G 等,便于人们理解。
```
$ df -hT | head -10
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 2.9G 0 2.9G 0% /dev
tmpfs tmpfs 596M 1.5M 595M 1% /run
/dev/sda1 ext4 110G 50G 55G 48% /
/dev/sdb2 ext4 457G 642M 434G 1% /apps
tmpfs tmpfs 3.0G 0 3.0G 0% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 3.0G 0 3.0G 0% /sys/fs/cgroup
/dev/loop0 squashfs 89M 89M 0 100% /snap/core/7270
/dev/loop2 squashfs 142M 142M 0 100% /snap/hexchat/42
```
请注意,`/`(根)和 `/apps` 的文件系统都是 ext4,而 `/dev` 是 devtmpfs 文件系统(一个由内核自动填充设备节点的文件系统)。其他的文件系统显示为 tmpfs(驻留在内存和/或交换分区中的临时文件系统)和 squashfs(一种只读的压缩文件系统,用于 snap 软件包)。
还有 proc 文件系统,用于存储正在运行的进程的信息。
```
$ df -T /proc
Filesystem Type 1K-blocks Used Available Use% Mounted on
proc proc 0 0 0 - /proc
```
当你在整个文件系统中游览时,可能会遇到许多其他文件系统类型。例如,当你移动到目录中并想了解它的文件系统时,可以运行以下命令:
```
$ cd /dev/mqueue; df -T .
Filesystem Type 1K-blocks Used Available Use% Mounted on
mqueue mqueue 0 0 0 - /dev/mqueue
$ cd /sys; df -T .
Filesystem Type 1K-blocks Used Available Use% Mounted on
sysfs sysfs 0 0 0 - /sys
$ cd /sys/kernel/security; df -T .
Filesystem Type 1K-blocks Used Available Use% Mounted on
securityfs securityfs 0 0 0 - /sys/kernel/security
```
与其他 Linux 命令一样,这里的 `.` 代表你在整个文件系统中的当前位置。
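此外,如果你的系统带有 util-linux 软件包中的 `findmnt` 工具(几乎所有现代发行版都默认包含它),也可以直接查询某个挂载点的文件系统类型:

```
$ findmnt -n -o FSTYPE /
ext4
```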
这些和其他独特的文件系统提供了一些特殊功能。例如,securityfs 提供支持安全模块的文件系统。
Linux 文件系统需要能够抵抗损坏,能够承受系统崩溃并提供快速、可靠的性能。由几代 ext 文件系统和新一代专用文件系统提供的改进使 Linux 系统更易于管理和更可靠。
---
via: <https://www.networkworld.com/article/3432990/a-guided-tour-of-linux-file-system-types.html>
作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,275 | 区块链 2.0:Hyperledger 项目简介(八) | https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ | 2019-08-27T23:21:00 | [
"区块链",
"Hyperledger"
] | https://linux.cn/article-11275-1.html | 
一旦一个新技术平台在积极发展和商业利益方面达到了一定程度的受欢迎程度,全球的主要公司和小型的初创企业都急于抓住这块蛋糕。在当时 Linux 就是这样的一个平台。一旦实现了其应用程序的普及,个人、公司和机构就开始对其表现出兴趣,到 2000 年,Linux 基金会就成立了。
Linux 基金会旨在通过赞助他们的开发团队来将 Linux 作为一个平台来标准化和发展。Linux 基金会是一个由软件和 IT 巨头([如微软、甲骨文、三星、思科、 IBM 、英特尔等](https://www.theinquirer.net/inquirer/news/2182438/samsung-takes-seat-intel-ibm-linux-foundation))支持的非营利组织。这不包括为改进该平台而提供服务的数百名个人开发者。多年来,Linux 基金会已经在旗下开展了许多项目。**Hyperledger** 项目是迄今为止发展最快的项目。
在将技术推进至可用且有用的方面上,这种联合主导的开发具有很多优势。为大型项目提供开发标准、库和所有后端协议既昂贵又耗费资源,而且不会从中产生丝毫收入。因此,对于公司来说,通过支持这类组织来汇集资源、共同开发那些常见的“烦人”部分,然后在这些标准化部件完成之后即插即用地定制自己的产品,是很有意义的。除了这种模型的经济性之外,这种合作努力还产生了标准,使其容易使用和集成到优秀的产品和服务中。
上述联盟模式,在曾经或当下的创新包括 WiFi(Wi-Fi 联盟)、移动电话等标准。
### Hyperledger 项目(HLP)简介
Hyperledger 项目(HLP)于 2015 年 12 月由 Linux 基金会启动,目前是其孵化的增长最快的项目之一。它是一个<ruby> 伞式组织 <rt> umbrella organization </rt></ruby>,用于合作开发和推进基于[区块链](/article-10650-1.html)的分布式账本技术 (DLT) 的工具和标准。支持该项目的主要行业参与者包括 IBM、英特尔 和 SAP Ariba [等](https://www.hyperledger.org/members)。HLP 旨在为个人和公司创建框架,以便根据需要创建共享和封闭的区块链,以满足他们自己的需求。其设计原则是开发一个专注于隐私和未来可审计性的、可全球部署、可扩展且强大的区块链平台,<sup id="fnref1"> <a href="#fn1" rel="footnote"> 1 </a></sup> 其旗下提出的大多数区块链及相关框架也都遵循这一原则。
### 开发目标和构造:即插即用
虽然面向企业的平台有以太坊联盟之类的产品,但根据定义,HLP 是面向企业的,并得到行业巨头的支持,他们在 HLP 旗下的许多模块中做出贡献并进一步发展。HLP 还孵化周边项目的开发,并将这些创意项目推向公众。HLP 的成员贡献了他们自己的力量,例如 IBM 贡献了他们的 Fabric 平台用于协作开发。该代码库由 IBM 在其项目组内部研发,并开源出来供所有成员使用。
这些过程使得 HLP 中的模块具有高度灵活的插件框架,这将支持企业环境中的快速开发和部署。此外,默认情况下,其他对比的平台是开放的<ruby> 免许可链 <rt> permission-less blockchain </rt></ruby>或是<ruby> 公有链 <rt> public blockchain </rt></ruby>,甚至可以将它们应用到特定应用当中。HLP 模块本身支持该功能。
有关公有链和私有链的差异和用例更多地涵盖在[这篇](/article-11080-1.html)比较文章当中。
根据该项目执行董事 Brian Behlendorf 的说法,Hyperledger 项目的使命有四个。
分别是:
1. 创建企业级 DLT 框架和标准,任何人都可以移植以满足其特定的行业或个人需求。
2. 创建一个强大的开源社区来帮助生态系统发展。
3. 促进所述的生态系统的行业成员(如成员公司)的参与。
4. 为 HLP 社区提供中立且无偏见的基础设施,以收集和分享相关的更新和发展。
可以在这里访问[原始文档](http://www.hitachi.com/rev/archive/2017/r2017_01/expert/index.html)。
### HLP 的架构
HLP 由 12 个项目组成,这些项目被归类为独立的模块,每个项目的组织方式通常都允许独立开发各自的模块。在孵化之前,首先对它们的能力和活力进行研究。该组织的任何成员都可以提出附加建议。在项目孵化后,就会进行积极开发,然后才会推出。这些模块之间的互操作性具有很高的优先级,因此这些组之间的定期通信由社区维护。目前,这些项目中有 4 个被归类为活跃项目。被标为活跃意味着它们已经准备好使用,但还没有准备好发布主要版本。这 4 个模块可以说是推动区块链革命的最重要或相当基本的模块。稍后,我们将详细介绍各个模块及其功能。不过这里先简要介绍一下 Hyperledger Fabric 平台,它可以说是其中最受欢迎的一个。
### Hyperledger Fabric
Hyperledger Fabric 是一个完全开源的、基于区块链的许可(非公开)DLT 平台,设计时考虑了企业的使用。该平台提供了适合企业环境的功能和结构。它是高度模块化的,允许开发人员在不同的共识协议、链上代码协议([智能合约](/article-10956-1.html))或身份管理系统等中进行选择。这是一个基于区块链的许可平台,它利用身份管理系统,这意味着参与者将知道彼此在企业环境中的身份。Fabric 允许以各种主流编程语言(包括 Java、Javascript、Go 等)开发智能合约(“<ruby> 链码 <rt> chaincode </rt></ruby>”,是 Hyperledger 团队使用的术语)。这使得机构和企业可以利用他们在该领域的现有人才,而无需雇佣或重新培训开发人员来开发他们自己的智能合约。与其他提供智能合约功能的平台所使用的标准<ruby> 排序-验证 <rt> order-validate </rt></ruby>系统相比,Fabric 使用<ruby> 执行-排序-验证 <rt> execute-order-validate </rt></ruby>系统来处理智能合约,以提供更好的可靠性。Fabric 的其他功能还有可插拔性能、身份管理系统、数据库管理系统、共识平台等,这些功能使它在竞争中保持领先地位。
### 结论
诸如 Hyperledger Fabric 平台这样的项目能够在主流用例中更快地采用区块链技术。Hyperledger 社区结构本身支持开放治理原则,并且由于所有项目都是作为开源平台引导的,因此这提高了团队在履行承诺时表现出来的安全性和责任感。
由于此类项目的主要应用涉及与企业合作及进一步开发平台和标准,因此 Hyperledger 项目目前在其他类似项目前面处于有利地位。
---
1. E. Androulaki et al., “Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains,” 2018. [↩](#fnref1)
---
via: <https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/>
作者:[ostechnix](https://www.ostechnix.com/author/editor/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[zionfuo](https://github.com/zionfuo) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,276 | 如何使用 sed 命令删除文件中的行 | https://www.2daygeek.com/linux-remove-delete-lines-in-file-sed-command/ | 2019-08-28T10:02:17 | [
"sed"
] | https://linux.cn/article-11276-1.html | 
Sed 代表<ruby> 流编辑器 <rt> Stream Editor </rt></ruby>,常用于 Linux 中基本的文本处理。`sed` 命令是 Linux 中的重要命令之一,在文件处理方面有着重要作用。可用于删除或移动与给定模式匹配的特定行。
它还可以删除文件中的特定行,也能从文件中删除匹配某个表达式的内容,这些内容可以通过指定的分隔符(例如逗号、制表符或空格)来界定。
本文列出了 15 个使用范例,它们可以帮助你掌握 `sed` 命令。
如果你能理解并且记住这些命令,在你需要使用 `sed` 时,这些命令就能派上用场,帮你节约很多时间。
注意:为了方便演示,我在执行 `sed` 命令时不使用 `-i` 选项(因为这个选项会直接修改文件内容),删除相应行之后的文件内容将打印到 Linux 终端。
但是,如果你想在实际环境中从源文件中删除行,请在 `sed` 命令中使用 `-i` 选项。
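作为参考,直接修改源文件的命令形式如下。GNU sed 还允许在 `-i` 后面附加一个后缀,在修改之前先保留一份原文件的备份(文件名仅为示例):

```
# sed -i '1d' sed-demo.txt # 直接在原文件中删除第一行
# sed -i.bak '1d' sed-demo.txt # 同上,但先把原文件备份为 sed-demo.txt.bak
```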
演示之前,我创建了 `sed-demo.txt` 文件,并添加了以下内容和相应行号以便更好地理解。
```
# cat sed-demo.txt
1 Linux Operating System
2 Unix Operating System
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 1) 如何删除文件的第一行?
使用以下语法删除文件首行。
`N` 表示文件中的第 N 行,`d` 选项在 `sed` 命令中用于删除一行。
语法:
```
sed 'Nd' file
```
使用以下 `sed` 命令删除 `sed-demo.txt` 中的第一行。
```
# sed '1d' sed-demo.txt
2 Unix Operating System
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 2) 如何删除文件的最后一行?
使用以下语法删除文件的最后一行:`sed '$d' file`。

`$` 符号表示文件的最后一行。
使用以下 `sed` 命令删除 `sed-demo.txt` 中的最后一行。
```
# sed '$d' sed-demo.txt
1 Linux Operating System
2 Unix Operating System
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
```
### 3) 如何删除指定行?
使用以下 `sed` 命令删除 `sed-demo.txt` 中的第 3 行。
```
# sed '3d' sed-demo.txt
1 Linux Operating System
2 Unix Operating System
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 4) 如何删除指定范围内的行?
使用以下 `sed` 命令删除 `sed-demo.txt` 中的第 5 到 7 行。
```
# sed '5,7d' sed-demo.txt
1 Linux Operating System
2 Unix Operating System
3 RHEL
4 Red Hat
8 Debian
9 Ubuntu
10 openSUSE
```
### 5) 如何删除多行内容?
`sed` 命令能够删除给定行的集合。
本例中,下面的 `sed` 命令删除了第 1 行、第 5 行、第 9 行和最后一行。
```
# sed '1d;5d;9d;$d' sed-demo.txt
2 Unix Operating System
3 RHEL
4 Red Hat
6 Arch Linux
7 CentOS
8 Debian
```
### 5a) 如何删除指定范围以外的行?
使用以下 `sed` 命令删除 `sed-demo.txt` 中第 3 到 6 行范围以外的所有行。
```
# sed '3,6!d' sed-demo.txt
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
```
### 6) 如何删除空行?
使用以下 `sed` 命令删除 `sed-demo.txt` 中的空行。
```
# sed '/^$/d' sed-demo.txt
1 Linux Operating System
2 Unix Operating System
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 7) 如何删除包含某个模式的行?
使用以下 `sed` 命令删除 `sed-demo.txt` 中匹配到 `System` 模式的行。
```
# sed '/System/d' sed-demo.txt
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 8) 如何删除包含多个字符串之一的行?
使用以下 `sed` 命令删除 `sed-demo.txt` 中匹配到 `System` 或 `Linux` 表达式的行。
```
# sed '/System\|Linux/d' sed-demo.txt
3 RHEL
4 Red Hat
5 Fedora
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
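顺带一提,如果使用 GNU sed 的扩展正则表达式选项 `-E`,就不必转义 `|` 字符,效果与上面完全相同:

```
# sed -E '/System|Linux/d' sed-demo.txt
```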
### 9) 如何删除以指定字符开头的行?
为了测试,我创建了 `sed-demo-1.txt` 文件,并添加了以下内容。
```
# cat sed-demo-1.txt
Linux Operating System
Unix Operating System
RHEL
Red Hat
Fedora
debian
ubuntu
Arch Linux - 1
2 - Manjaro
3 4 5 6
```
使用以下 `sed` 命令删除以 `R` 字符开头的所有行。
```
# sed '/^R/d' sed-demo-1.txt
Linux Operating System
Unix Operating System
Fedora
debian
ubuntu
Arch Linux - 1
2 - Manjaro
3 4 5 6
```
使用以下 `sed` 命令删除 `R` 或者 `F` 字符开头的所有行。
```
# sed '/^[RF]/d' sed-demo-1.txt
Linux Operating System
Unix Operating System
debian
ubuntu
Arch Linux - 1
2 - Manjaro
3 4 5 6
```
### 10) 如何删除以指定字符结尾的行?
使用以下 `sed` 命令删除 `m` 字符结尾的所有行。
```
# sed '/m$/d' sed-demo.txt
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
使用以下 `sed` 命令删除 `x` 或者 `m` 字符结尾的所有行。
```
# sed '/[xm]$/d' sed-demo.txt
3 RHEL
4 Red Hat
5 Fedora
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 11) 如何删除所有大写字母开头的行?
使用以下 `sed` 命令删除所有大写字母开头的行。
```
# sed '/^[A-Z]/d' sed-demo-1.txt
debian
ubuntu
2 - Manjaro
3 4 5 6
```
### 12) 如何删除指定范围内匹配模式的行?
使用以下 `sed` 命令删除第 1 到 6 行中包含 `Linux` 表达式的行。
```
# sed '1,6{/Linux/d;}' sed-demo.txt
2 Unix Operating System
3 RHEL
4 Red Hat
5 Fedora
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 13) 如何删除匹配模式的行及其下一行?
使用以下 `sed` 命令删除包含 `System` 表达式的行以及它的下一行。
```
# sed '/System/{N;d;}' sed-demo.txt
3 RHEL
4 Red Hat
5 Fedora
6 Arch Linux
7 CentOS
8 Debian
9 Ubuntu
10 openSUSE
```
### 14) 如何删除包含数字的行?
使用以下 `sed` 命令删除所有包含数字的行。
```
# sed '/[0-9]/d' sed-demo-1.txt
Linux Operating System
Unix Operating System
RHEL
Red Hat
Fedora
debian
ubuntu
```
使用以下 `sed` 命令删除所有以数字开头的行。
```
# sed '/^[0-9]/d' sed-demo-1.txt
Linux Operating System
Unix Operating System
RHEL
Red Hat
Fedora
debian
ubuntu
Arch Linux - 1
```
使用以下 `sed` 命令删除所有以数字结尾的行。
```
# sed '/[0-9]$/d' sed-demo-1.txt
Linux Operating System
Unix Operating System
RHEL
Red Hat
Fedora
debian
ubuntu
2 - Manjaro
```
### 15) 如何删除包含字母的行?
使用以下 `sed` 命令删除所有包含字母的行。
```
# sed '/[A-Za-z]/d' sed-demo-1.txt
3 4 5 6
```
---
via: <https://www.2daygeek.com/linux-remove-delete-lines-in-file-sed-command/>
作者:[Magesh Maruthamuthu](https://www.2daygeek.com/author/magesh/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[hello-wn](https://github.com/hello-wn) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
11,278 | 使用 KeePassXC 管理凭据 | https://fedoramagazine.org/managing-credentials-with-keepassxc/ | 2019-08-28T23:50:51 | [
"密码"
] | https://linux.cn/article-11278-1.html | 
[上一篇文章](/article-11181-1.html)我们讨论了使用服务器端技术的密码管理工具。这些工具非常有趣而且适合云安装。在本文中,我们将讨论 KeePassXC,这是一个简单的多平台开源软件,它使用本地文件作为数据库。
这种密码管理软件的主要优点是简单。无需服务器端技术专业知识,因此可供任何类型的用户使用。
### 介绍 KeePassXC
KeePassXC 是一个开源的跨平台密码管理器:它是作为 KeePassX 的一个分支开始开发的,KeePassX 是个不错的产品,但开发不是非常活跃。KeePassXC 使用 256 位密钥的 AES 算法将机密信息保存在加密数据库中,这使得把数据库保存在云存储服务(如 pCloud 或 Dropbox)中也相当安全。
除了密码,KeePassXC 还允许你在加密皮夹中保存各种信息和附件。它还有一个有效的密码生成器,可以帮助用户正确地管理他的凭据。
### 安装
这个程序在标准的 Fedora 仓库和 Flathub 仓库中都有。不幸的是,在沙箱中运行的程序无法使用浏览器集成,所以我建议通过 dnf 安装程序:
```
sudo dnf install keepassxc
```
### 创建你的皮夹
要创建新数据库,有两个重要步骤:
* 选择加密设置:默认设置相当安全,增加转换轮次也会增加解密时间。
* 选择主密钥和额外保护:主密钥必须易于记忆(如果丢失它,你的皮夹就会丢失!),又要足够强大,一个至少包含 4 个随机单词的口令可能是一个不错的选择。作为额外保护,你可以选择密钥文件(请记住:你必须始终都有它,否则无法打开皮夹)和/或 YubiKey 硬件密钥。


数据库文件将保存到文件系统。如果你想与其他计算机/设备共享,可以将它保存在 U 盘或 pCloud 或 Dropbox 等云存储中。当然,如果你选择云存储,建议使用特别强大的主密码,如果有额外保护则更好。
### 创建你的第一个条目
创建数据库后,你可以开始创建第一个条目。对于 Web 登录,请在“条目”选项卡中输入用户名、密码和 URL。你可以根据个人策略指定凭据的到期日期,也可以通过按右侧的按钮下载网站的 favicon 并将其关联为条目的图标,这是一个很好的功能。


KeePassXC 还提供了一个很好的密码/口令生成器,你可以选择长度和复杂度,并检查对暴力攻击的抵抗程度:

### 浏览器集成
KeePassXC 有一个适用于所有主流浏览器的扩展。该扩展可以为所有指定了 URL 的条目自动填写登录信息。
必须在 KeePassXC(工具菜单 -> 设置)上启用浏览器集成,指定你要使用的浏览器:

安装扩展后,必须与数据库建立连接。要执行此操作,请按扩展按钮,然后按“连接”按钮:如果数据库已打开并解锁,那么扩展程序将创建关联密钥并将其保存在数据库中,该密钥对于浏览器是唯一的,因此我建议对它适当命名:

当你打开 URL 字段中的登录页并且数据库是解锁的,那么这个扩展程序将为你提供与该页面关联的所有凭据:

通过这种方式,你可以通过 KeePassXC 获取互联网凭据,而无需将其保存在浏览器中。
### SSH 代理集成
KeePassXC 的另一个有趣功能是与 SSH 集成。如果你运行着 ssh-agent,KeePassXC 能够与之交互,并把你以附件形式保存在条目中的 ssh 密钥添加到代理中。
首先,在常规设置(工具菜单 -> 设置)中,你必须启用 ssh 代理并重启程序:

此时,你需要以附件方式上传你的 ssh 密钥对到条目中。然后在 “SSH 代理” 选项卡中选择附件下拉列表中的私钥,此时会自动填充公钥。不要忘记选择上面的两个复选框,以便在数据库打开/解锁时将密钥添加到代理,并在数据库关闭/锁定时删除:

现在打开和解锁数据库,你可以使用皮夹中保存的密钥登录 ssh。
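如果想确认密钥确实已被加入代理,可以用 `ssh-add -l` 列出代理中当前持有的密钥(下面的指纹和注释只是示意):

```
$ ssh-add -l
256 SHA256:AbCdEf0123456789AbCdEf0123456789AbCdEf0abcd user@host (ED25519)
```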
唯一的限制是可以添加到代理的最大密钥数:ssh 服务器默认不接受超过 5 次登录尝试,出于安全原因,建议不要增加此值。
---
via: <https://fedoramagazine.org/managing-credentials-with-keepassxc/>
作者:[Marco Sarti](https://fedoramagazine.org/author/msarti/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | A [previous article](https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/) discussed password management tools that use server-side technology. These tools are very interesting and suitable for a cloud installation.
In this article we will talk about KeePassXC, a simple multi-platform open source software that uses a local file as a database.
The main advantage of this type of password management is simplicity. No server-side technology expertise is required and can therefore be used by any type of user.
## Introducing KeePassXC
KeePassXC is an open source cross platform password manager: its development started as a fork of KeePassX, a good product but with a not very active development. It saves the secrets in an encrypted database with AES algorithm using 256 bit key, this makes it reasonably safe to save the database in a cloud drive storage such as pCloud or Dropbox.
In addition to the passwords, KeePassXC allows you to save various information and attachments in the encrypted wallet. It also has a valid password generator that helps the user to correctly manage his credentials.
## Installation
The program is available both in the standard Fedora repository and in the Flathub repository. Unfortunately the integration with the browser does not work with the application running in the sandbox, so I suggest to install the program via dnf:
sudo dnf install keepassxc
## Creating your wallet
To create a new database there are two important steps:
- Choose the encryption settings: the default settings are reasonably safe, increasing the transform rounds also increases the decryption time.
- Choose the master key and additional protections: the master key must be easy to remember (if you lose it your wallet is lost!) but strong enough, a passphrase with at least 4 random words can be a good choice. As additional protection you can choose a key file (remember: you must always have it available otherwise you cannot open the wallet) and / or a YubiKey hardware key.


The database file will be saved to the file system. If you want to share with other computers / devices you can save it on a USB key or in a cloud storage like pCloud or Dropbox. Of course, if you choose a cloud storage, a particularly strong master password is recommended, better if accompanied by additional protection.
## Creating your first entry
Once the database has been created, you can start creating your first entry. For a web login specify a username, password and url in the Entry tab. Optionally you can specify an expiration date for the credentials based on your personal policy: also by pressing the button on the right the favicon of the site is downloaded and associated as an icon of the entry, this is a nice feature.


KeePassXC also offers a good password / passphrase generator, you can choose length and complexity and check the degree of resistance to a brute force attack:

## Browser integration
KeePassXC has an extension available for all major browsers. The extension allows you to fill in the login information for all the entries whose URL is specified.
Browser integration must be enabled on KeePassXC (Tools menu -> Settings) specifying which browsers you intend to use:

Once the extension is installed, it is necessary to create a connection with the database. To do this, press the extension button and then the Connect button: if the database is open and unlocked the extension will create an association key and save it in the database, the key is unique to the browser so I suggest naming it appropriately :

When you reach the login page specified in the Url field and the database is unlocked, the extension will offer you all the credentials you have associated with that page:

In this way, browsing with KeePassXC running you will have your internet credentials available without necessarily saving them in the browser.
## SSH agent integration
Another interesting feature of KeePassXC is the integration with SSH. If you have ssh-agent running KeePassXC is able to interact and add the ssh keys that you have uploaded as attachments to your entries.
First of all in the general settings (Tools menu -> Settings) you have to enable the ssh agent and restart the program:

At this point it is required to upload your ssh key pair as an attachment to your entry. Then in the “SSH agent” tab select the private key in the attachment drop-down list, the public key will be populated automatically. Don’t forget to select the two checkboxes above to allow the key to be added to the agent when the database is opened / unlocked and removed when the database is closed / locked:

Now with the database open and unlocked you can log in ssh using the keys saved in your wallet.
The only limitation is in the maximum number of keys that can be added to the agent: ssh servers do not accept by default more than 5 login attempts, for security reasons it is not recommended to increase this value.
## Milos
In your ssh config, you can specify a
publickey for the IdentityFile option instead of the private key.If you have tons of keys in your agent, that gets around the 5 key limitation as specifying the public key works quite well with the agent.
Try it with ssh -vv. It will be the first key tried.
## Johannes
Exactly. I think you could also add
Hosts *
IdentitiesOnly yes
to your ~/.ssh/config file to work around the maximum keys limitation
## Patrick
Thanks for this article. Typo: Ubikey -> Yubikey
Besides a Yubikey you can also use a Nitrokey.
## Clément Verna
Thanks I fixed the typo 🙂
## Joshua
KeePassXC challenge-response will also support the OnlyKey in version 2.5, which gives you reliable (not time based) MFA for your password database.
## Walter
A bit offtopic, but anyway…
I have about 100 logins/passwords saved in Firefox and am not willing to re-enter them into the keepassxc-database.
Is it possible to import them right from key4.db file ?
I failed to export the passwords from Firefox with pw-exporter or the like.
## hammerhead corvette
There is a “Password Exporter” for Firefox versions 58+(Quantum) to which you can export to CSV & JSON formats. KeepassXC allows for import to CSV.
## John
You could export passwords in CSV, then import it back in Keepass, I used https://github.com/kspearrin/ff-password-exporter but I’m sure there are other alternatives 🙂
## G. Fernandes
I don’t know how Firefox stores it’s password database, but you’d need some way to export the encrypted (I’m assuming it must be, or there wouldn’t be much point!) database as XML data or CSV.
I’ve done this for transferring data from Revelation to KeePassXC, with a simple Java program (https://github.com/g5-f8s/f8s.java.utils/blob/master/src/main/java/org/g5/pwdmgr/converter/RevelationToKeePassConverter.java).
There is a password plugin that will export passwords stored in Firefox (https://www.intowindows.com/3-ways-to-backup-passwords-saved-in-firefox-57-58/).
## Walter
Thanks for all your answers!
But you are referring to the exact tool that didn’t work for me -> passwort-exporter.
I tried it some weeks ago hence I can’t remember the correct error message. But it was something like “wrong master password” .
## Spurdo
Can you recommend an iPhone app to read keepassxc containers?
## Max
You might try these:
Minikeepass: https://apps.apple.com/us/app/minikeepass/id451661808
KeePass Touch https://apps.apple.com/us/app/keepass-touch/id966759076
I tried one of them, but you should really have a go for yourself to see if it fulfil your needs.
Sincerely
Max
## Max
Also Strongbox and keepassium seems nice. will actually try out Keepassium after reading the reddit stream: https://www.reddit.com/r/KeePass/comments/cl35es/in_your_opinion_which_is_the_better_ios_client/
Strongbox: https://strongboxsafe.com/
and
Keepassium: https://keepassium.com/
## hammerhead corvette
KeePassXC has been a blessing for me. I came over to Linux some years ago with a password file in a crypt of my own making (Lol). The Auto Type feature is good, and i typically save my db’s to Mega and sync them across several devices. All you have to do is close you db, mount the new one and you are good to go ! The browser implementation is great. Some sites require me to occasionally Redetect login fields but that is one click away.
## MX
Sorry, this is compatible with the .kdbx file.
https://play.google.com/store/apps/details?id=keepass2android.keepass2android&hl=en
and
https://centos.pkgs.org/7/epel-x86_64/keepassx2-2.0.3-2.el7.x86_64.rpm.html
## Nathan D
I was using Keepassx and Keepassxc for years. I just switched to Buttercup.pw to try it out.
It’s worth a check for anyone interested in a password management tool. You can import from Keepassx without issues too.
## Max
I believe that with KeePassXC you have a more flexible design of your security level. Having the database in an airgapped vm, a file in your dropbox for your phone, mac, Linux, windows needs, etc.
It’s all about choice. And maybe KeePassXC wil come for iPhone one day. that’s the only thing needed.
I decided to use, for password manager, the test repository, because I’d rather have the app not working, instead of having a security issue. But that’s just me 🙂
https://www.militant.dk/2019/06/10/keepassxc-2-4-2-on-fedora-30/
Sincerely
Max
## Serge Meeuwsen
Nice overview and I would love to migrate back to keepassXC and use the YubiKey; The only thing holding me back is the fact that there doesn’t seem to be an Android / iOS version of keepassXC supporting the Yubikey as well…
## Krzysztof
What’s the advantage of using KeePass over saving passwords in Firefox? Firefox password just works on any device set up with Firefox Sync. For KeyPass I have to set it up on all devices (including Dropbox for storing the password database file)
## FOOBAZ
I second this, plus Firefox Lockwise works great on my android phone, I can unlock and autofill passwords in different apps using its fingerprint sensor.
## tuxflo
The main advantage is, that you can also use Keepass for storing non browser related credentials like the mentioned ssh keys or even credit card Pin numbers. Or for example login combinations for remote connections (RDP) or virtual machines.
Also it is possible to store entire documents or files as attachments (like a scanned copy of your ID) inside the Keepass database.
## Kuba
It supporting webdav or nextcloud sync?
## zuser
Yes, I have been using for several years keeping database on my personal nc server. Syncs nicely to all computers in the house and android app on mobile devices. |
11,280 | 学习 Python 的 12 个方式 | https://opensource.com/article/19/8/dozen-ways-learn-python | 2019-08-29T08:35:17 | [
"Python"
] | https://linux.cn/article-11280-1.html |
>
> 这些资源将帮助你入门并熟练掌握 Python。
>
>
>

Python 是世界上[最受欢迎的](https://insights.stackoverflow.com/survey/2019#most-popular-technologies)编程语言之一,它受到了全世界各地的开发者和创客的欢迎。大多数 Linux 和 MacOS 计算机都预装了某个版本的 Python,现在甚至一些 Windows 计算机供应商也开始安装 Python 了。
也许你尚未学会它,想学习但又不知道在哪里入门。这里的 12 个资源将帮助你入门并熟练掌握 Python。
### 课程、书籍、文章和文档
1、[Python 软件基金会](https://www.python.org/)提供了出色的信息和文档,可帮助你迈上编码之旅。请务必查看 [Python 入门指南](https://www.python.org/about/gettingstarted/)。它将帮助你得到最新版本的 Python,并提供有关编辑器和开发环境的有用提示。该组织还有可以来进一步指导你的[优秀文档](https://docs.python.org/3/)。
2、我的 Python 旅程始于[海龟模块](https://opensource.com/life/15/8/python-turtle-graphics)。我首先在 Bryson Payne 的《[教你的孩子编码](https://opensource.com/education/15/9/review-bryson-payne-teach-your-kids-code)》中找到了关于 Python 和海龟的内容。这本书是一个很好的资源,购买这本书可以让你看到几十个示例程序,这将激发你的编程好奇心。Payne 博士还在 [Udemy](https://www.udemy.com/teach-your-kids-to-code/) 上以相同的名称开设了一门便宜的课程。
3、Payne 博士的书激起了我的好奇心,我渴望了解更多。这时我发现了 Al Sweigart 的《[用 Python 自动化无聊的东西](https://automatetheboringstuff.com/)》。你可以购买这本书,也可以使用它的在线版本,它与印刷版完全相同且可根据知识共享许可免费获得和分享。Al 的这本书让我学习到了 Python 的基础知识、函数、列表、字典和如何操作字符串等等。这是一本很棒的书,我已经购买了许多本捐赠给了当地图书馆。Al 还提供 [Udemy](https://www.udemy.com/automate/?couponCode=PAY_10_DOLLARS) 课程;使用他的网站上的优惠券代码,只需 10 美元即可参加。
4、Eric Matthes 撰写了《[Python 速成](https://nostarch.com/pythoncrashcourse2e)》,这是由 No Starch Press 出版的 Python 逐步入门书(如同上面的两本书)。Matthes 还有一个很棒的[配套网站](https://ehmatthes.github.io/pcc/),其中包括了如何在你的计算机上设置 Python 以及用以降低学习门槛的[速查表](https://ehmatthes.github.io/pcc/cheatsheets/README.html)。
5、[Python for Everybody](https://www.py4e.com/) 是另一个很棒的 Python 学习资源。该网站可以免费访问 [Charles Severance](http://www.dr-chuck.com/dr-chuck/resume/bio.htm) 的 Coursera 和 edX 认证课程的资料。该网站分为入门、课程和素材等部分,其中 17 个课程按从安装到数据可视化的主题进行分类组织。Severance([@drchuck on Twitter](https://twitter.com/drchuck/)),是密歇根大学信息学院的临床教授。
6、[Seth Kenlon](https://opensource.com/users/seth),我们 Opensource.com 的 Python 大师,撰写了大量关于 Python 的文章。Seth 有很多很棒的文章,包括“[用 JSON 保存和加载 Python 数据](/article-11133-1.html)”,“[用 Python 学习面向对象编程](https://opensource.com/article/19/7/get-modular-python-classes)”,“[在 Python 游戏中用 Pygame 放置平台](/article-10902-1.html)”,等等。
### 在设备上使用 Python
7、最近我对 [Circuit Playground Express](https://opensource.com/article/19/7/circuit-playground-express) 非常感兴趣,这是一个运行 [CircuitPython](https://circuitpython.org/) 的设备,CircuitPython 是为微控制器设计的 Python 编程语言的子集。我发现 Circuit Playground Express 和 CircuitPython 是向学生介绍 Python(以及一般编程)的好方法。它的制造商 Adafruit 有一个很好的[系列教程](https://learn.adafruit.com/welcome-to-circuitpython),可以让你快速掌握 CircuitPython。
8、[BBC:Microbit](https://opensource.com/article/19/8/getting-started-bbc-microbit) 是另一种入门 Python 的好方法。你可以学习如何使用 [MicroPython](https://micropython.org/) 对其进行编程,这是另一种用于编程微控制器的 Python 实现。
9、学习 Python 的文章如果没有提到[树莓派](https://www.raspberrypi.org/)单板计算机那是不完整的。一旦你有了[舒适](https://projects.raspberrypi.org/en/pathways/getting-started-with-raspberry-pi)而强大的树莓派,你就可以在 Opensource.com 上找到[成吨的](https://opensource.com/sitewide-search?search_api_views_fulltext=Raspberry%20Pi)使用它的灵感,包括“[7 个值得探索的树莓派项目](https://opensource.com/article/19/3/raspberry-pi-projects)”,“[在树莓派上复活 Amiga](https://opensource.com/article/19/3/amiga-raspberry-pi)”,和“[如何使用树莓派作为 VPN 服务器](https://opensource.com/article/19/6/raspberry-pi-vpn-server)”。
10、许多学校为学生提供了 iOS 设备以支持他们的教育。在尝试帮助这些学校的老师和学生学习用 Python 编写代码时,我发现了 [Trinket.io](https://trinket.io/)。Trinket 允许你在浏览器中编写和执行 Python 3 代码。 Trinket 的 [Python 入门](https://docs.trinket.io/getting-started-with-python#/welcome/where-we-ll-go)教程将向你展示如何在 iOS 设备上使用 Python。
### 播客
11、我喜欢在开车的时候听播客,我在 Kelly Paredes 和 Sean Tibor 的 [Teaching Python](https://www.teachingpython.fm/) 播客上找到了大量的信息。他们的内容很适合教育领域。
12、如果你正在寻找一些更通用的东西,我推荐 Michael Kennedy 的 [Talk Python to Me](https://talkpython.fm/) 播客。它提供了有关 Python 及相关技术的最佳信息。
你学习 Python 最喜欢的资源是什么?请在评论中分享。
---
via: <https://opensource.com/article/19/8/dozen-ways-learn-python>
作者:[Don Watkins](https://opensource.com/users/don-watkins) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Python is [one of the most popular](https://insights.stackoverflow.com/survey/2019#most-popular-technologies) programming languages on the planet. It's embraced by developers and makers everywhere. Most Linux and MacOS computers come with a version of Python pre-installed, and now even a few Windows computer vendors are installing Python too.
Maybe you're late to the party, and you want to learn but don't know where to turn. These 12 resources will get you started and well on your way to proficiency with Python.
## Courses, books, articles, and documentation
- The [Python Software Foundation](https://www.python.org/) has excellent information and documentation to help you get started on your coding journey. Be sure to check out the [Python for beginners](https://www.python.org/about/gettingstarted/) guide. It will help you get the latest version of Python and offers helpful tips on editors and development environments. The organization also has [excellent documentation](https://docs.python.org/3/) to guide you.
- My Python journey began with the [Turtle module](https://opensource.com/life/15/8/python-turtle-graphics). I first found answers to my questions about Python and the Turtle in Bryson Payne's [Teach Your Kids to Code](https://opensource.com/education/15/9/review-bryson-payne-teach-your-kids-code). The book is a great resource, and buying it gives you access to dozens of example programs that will spark your programming curiosity. Dr. Payne also teaches an inexpensive course by the same title on [Udemy](https://www.udemy.com/teach-your-kids-to-code/).
- Dr. Payne's book piqued my curiosity, and I yearned to learn more. This was when I discovered [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) by Al Sweigart. You can buy the book or use the online materials, which are identical to the print edition and freely available and shareable under a Creative Commons license. Thanks to Al, I learned Python basics, functions, lists, dictionaries, manipulating strings, and much more. It's a great book, and I have purchased many copies to donate to local libraries. Al also offers a course on [Udemy](https://www.udemy.com/automate/?couponCode=PAY_10_DOLLARS); with a coupon code on his website, you can get it for only $10.
- Eric Matthes wrote [Python Crash Course](https://nostarch.com/pythoncrashcourse2e), a step-by-step introduction to Python published (like the two books above) by No Starch Press. Matthes also has a wonderful [companion website](https://ehmatthes.github.io/pcc/) that includes how to set up Python on your computer as well as links to [cheat sheets](https://ehmatthes.github.io/pcc/cheatsheets/README.html) to ease the learning curve.
- [Python for Everybody](https://www.py4e.com/) is another great Python learning resource. The site offers free access to materials from [Charles Severance](http://www.dr-chuck.com/dr-chuck/resume/bio.htm)'s Coursera and edX certification courses. The site is divided into Get Started, Lessons, and Materials sections, with its 17 lessons well-organized by topic area, from installation to data visualization. Severance, [@drchuck on Twitter](https://twitter.com/drchuck/), is a clinical professor in the School of Information at the University of Michigan.
- [Seth Kenlon](https://opensource.com/users/seth), our master Pythonista at Opensource.com, has written extensively about Python. Seth has many great articles, including "[Save and load Python data with JSON](https://opensource.com/article/19/7/save-and-load-data-python-json)," "[Learn object-oriented programming with Python](https://opensource.com/article/19/7/get-modular-python-classes)," "[Put platforms in a Python game with Pygame](https://opensource.com/article/18/7/put-platforms-python-game)," and many more.
## Use Python on devices
- Recently I have become very interested in the [Circuit Playground Express](https://opensource.com/article/19/7/circuit-playground-express), a device that runs on [CircuitPython](https://circuitpython.org/), a subset of the Python programming language designed for microcontrollers. I have found that the Circuit Playground Express and CircuitPython are great ways to introduce students to Python (and programming in general). Its maker, Adafruit, has an excellent [series of tutorials](https://learn.adafruit.com/welcome-to-circuitpython) that will get you up to speed with CircuitPython.
- A [BBC:Microbit](https://opensource.com/article/19/8/getting-started-bbc-microbit) is another great way to get started with Python. You can learn how to program it with [MicroPython](https://micropython.org/), another Python implementation for programming microcontrollers.
- No article about learning Python would be complete without mentioning the [Raspberry Pi](https://www.raspberrypi.org/) single-board computer. Once you [get comfortable](https://projects.raspberrypi.org/en/pathways/getting-started-with-raspberry-pi) with the mighty Pi, you can find a [ton of ideas](https://opensource.com/sitewide-search?search_api_views_fulltext=Raspberry%20Pi) on Opensource.com for using it, including "[7 Raspberry Pi projects to explore](https://opensource.com/article/19/3/raspberry-pi-projects)," "[Resurrecting the Amiga on the Raspberry Pi](https://opensource.com/article/19/3/amiga-raspberry-pi)," and "[How to use your Raspberry Pi as a VPN server](https://opensource.com/article/19/6/raspberry-pi-vpn-server)."
- A lot of schools provide students with iOS devices to support their education. While trying to help teachers and students in these schools learn to code with Python, I discovered [Trinket.io](https://trinket.io/). Trinket allows you to write and execute Python 3 code in a browser. Trinket's [Getting started with Python](https://docs.trinket.io/getting-started-with-python#/welcome/where-we-ll-go) tutorial will show you how to use Python on your iOS device.
## Podcasts
- I enjoy listening to podcasts when I am driving, and I have found a wealth of information on [Teaching Python](https://www.teachingpython.fm/) with Kelly Paredes and Sean Tibor. Their content is well-tuned to the education space.
- If you're looking for something a little more general, I recommend Michael Kennedy's [Talk Python to Me](https://talkpython.fm/) podcast. It offers excellent information about what's going on in Python and related technologies.
What is your favorite resource for learning Python? Please share it in the comments.
|
11,282 | 如何在 Ubuntu 上安装 VirtualBox | https://itsfoss.com/install-virtualbox-ubuntu | 2019-08-30T07:21:26 | [
"VirtualBox"
] | https://linux.cn/article-11282-1.html |
>
> 本新手教程解释了在 Ubuntu 和其他基于 Debian 的 Linux 发行版上安装 VirtualBox 的各种方法。
>
>
>

Oracle 公司的自由开源产品 [VirtualBox](https://www.virtualbox.org) 是一款出色的虚拟化工具,专门用于桌面操作系统。与另一款虚拟化工具 [Linux 上的 VMWare Workstation](https://itsfoss.com/install-vmware-player-ubuntu-1310/) 相比起来,我更喜欢它。
你可以使用 VirtualBox 等虚拟化软件在虚拟机中安装和使用其他操作系统。
例如,你可以[在 Windows 上的 VirtualBox 中安装 Linux](https://itsfoss.com/install-linux-in-virtualbox/)。同样地,你也可以[用 VirtualBox 在 Linux 中安装 Windows](https://itsfoss.com/install-windows-10-virtualbox-linux/)。
你也可以用 VirtualBox 在你当前的 Linux 系统中安装别的 Linux 发行版。事实上,这就是我用它的原因。如果我听说了一个不错的 Linux 发行版,我会在虚拟机上测试它,而不是安装在真实的系统上。当你想要在安装之前尝试一下别的发行版时,用虚拟机会很方便。

*安装在 Ubuntu 18.04 内的 Ubuntu 18.10*
在本新手教程中,我将向你展示在 Ubuntu 和其他基于 Debian 的 Linux 发行版上安装 VirtualBox 的各种方法。
### 在 Ubuntu 和基于 Debian 的 Linux 发行版上安装 VirtualBox
这里提出的安装方法也适用于其他基于 Debian 和 Ubuntu 的 Linux 发行版,如 Linux Mint、elementary OS 等。
#### 方法 1:从 Ubuntu 仓库安装 VirtualBox
**优点**:安装简便
**缺点**:较旧版本
在 Ubuntu 上下载 VirtualBox 最简单的方法可能是从软件中心查找并下载。

*VirtualBox 在 Ubuntu 软件中心提供*
你也可以使用这条命令从命令行安装:
```
sudo apt install virtualbox
```
然而,如果[在安装前检查软件包版本](https://itsfoss.com/know-program-version-before-install-ubuntu/),你会看到 Ubuntu 仓库提供的 VirtualBox 版本已经很老了。
举个例子,在写下本教程时 VirtualBox 的最新版本是 6.0,但是在软件中心提供的是 5.2。这意味着你无法获得[最新版 VirtualBox](https://itsfoss.com/oracle-virtualbox-release/) 中引入的新功能。
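顺便一提,如果你想在动手安装之前先确认一下仓库里提供的是什么版本,可以用下面的命令查询(这只是一个示例,输出会因你的 Ubuntu 版本而异):
```
$ apt policy virtualbox
```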
#### 方法 2:使用 Oracle 网站上的 Deb 文件安装 VirtualBox
**优点**:安装简便,最新版本
**缺点**:不能更新
如果你想要在 Ubuntu 上使用 VirtualBox 的最新版本,最简单的方法就是[使用 Deb 文件](https://itsfoss.com/install-deb-files-ubuntu/)。
Oracle 为 VirtualBox 的各个版本提供了开箱即用的二进制安装文件。如果查看其下载页面,你将看到为 Ubuntu 和其他发行版下载 deb 安装程序的选项。

你只需要下载 deb 文件并双击它即可安装。就是这么简单。
* [下载 virtualbox for Ubuntu](https://www.virtualbox.org/wiki/Linux_Downloads)
然而,这种方法的问题在于你不能自动更新到最新的 VirtualBox 版本。唯一的办法是移除现有版本,下载最新版本并再次安装。不太方便,是吧?
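如果你更喜欢命令行,也可以用 `apt` 直接安装下载好的 deb 文件,它会顺带解决依赖问题(下面的文件名只是一个通配示例,请替换为你实际下载的文件):
```
$ sudo apt install ./virtualbox-6.0_*_amd64.deb
```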
#### 方法 3:用 Oracle 的仓库安装 VirtualBox
**优点**:自动更新
**缺点**:安装略微复杂
现在介绍的是命令行安装方法,它看起来可能比较复杂,但与前两种方法相比,它更具有优势。你将获得 VirtualBox 的最新版本,并且未来它还将自动更新到更新的版本。我想那就是你想要的。
要通过命令行安装 VirtualBox,请在你的仓库列表中添加 Oracle VirtualBox 的仓库。添加 GPG 密钥以便你的系统信任此仓库。现在,当你安装 VirtualBox 时,它会从 Oracle 仓库而不是 Ubuntu 仓库安装。如果发布了新版本,本地 VirtualBox 将跟随一起更新。让我们看看怎么做到这一点:
首先,添加仓库的密钥。你可以通过这一条命令下载和添加密钥:
```
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
```
>
> Mint 用户请注意:
>
>
> 下一步只适用于 Ubuntu。如果你使用的是 Linux Mint 或其他基于 Ubuntu 的发行版,请将命令行中的 `$(lsb_release -cs)` 替换成你当前版本所基于的 Ubuntu 版本。例如,Linux Mint 19 系列用户应该使用 bionic,Mint 18 系列用户应该使用 xenial,像这样:
>
>
>
> ```
> sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian bionic contrib"
> ```
>
>
现在用以下命令来将 Oracle VirtualBox 仓库添加到仓库列表中:
```
sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib"
```
如果你有读过我的文章[检查 Ubuntu 版本](https://itsfoss.com/how-to-know-ubuntu-unity-version/),你大概知道 `lsb_release -cs` 将打印你的 Ubuntu 系统的代号。
**注**:如果你看到 “[add-apt-repository command not found](https://itsfoss.com/add-apt-repository-command-not-found/)” 错误,你需要下载 `software-properties-common` 包。
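遇到这个错误时,通常只需先安装该软件包即可(示例命令):
```
$ sudo apt update
$ sudo apt install software-properties-common
```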
现在你已经添加了正确的仓库,请通过此仓库刷新可用包列表并安装 VirtualBox:
```
sudo apt update && sudo apt install virtualbox-6.0
```
**提示**:一个好方法是输入 `sudo apt install virtualbox-` 之后按下 `Tab` 键,查看可供安装的各个 VirtualBox 版本,然后从中选择一个补全命令。

### 如何从 Ubuntu 中删除 VirtualBox
现在你已经学会了如何安装 VirtualBox,我还想和你提一下删除它的步骤。
如果你是从软件中心安装的,那么删除它最简单的方法是从软件中心下手。你只需要在[已安装的应用程序列表](https://itsfoss.com/list-installed-packages-ubuntu/)中找到它,然后单击“删除”按钮。
另一种方式是使用命令行:
```
sudo apt remove virtualbox virtualbox-*
```
请注意,这不会删除与你用 VirtualBox 所装操作系统关联的虚拟机和文件。这并不是坏事,因为这些文件留着是安全的,你以后或在其他系统中可能还会用到它们。
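如果你确定连这些虚拟机和配置也不再需要,可以手动清理(示例命令,具体路径可能因 VirtualBox 版本而异,删除前请再三确认):
```
$ sudo apt purge virtualbox virtualbox-*
$ rm -rf ~/.config/VirtualBox ~/VirtualBox\ VMs
```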
### 最后…
我希望你能在以上方法中选择一种安装 VirtualBox。我还将在另一篇文章中写到如何有效地使用 VirtualBox。目前,如果你有点子、建议或任何问题,请随时在下面发表评论。
---
via: <https://itsfoss.com/install-virtualbox-ubuntu>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[beamrolling](https://github.com/beamrolling) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,283 | 如何在 Ubuntu 中修复 VirtualBox 的 “Kernel driver not installed (rc=-1908)” 错 | https://www.ostechnix.com/how-to-fix-kernel-driver-not-installed-rc-1908-virtualbox-error-in-ubuntu/ | 2019-08-30T07:31:05 | [
"Virtualbox"
] | https://linux.cn/article-11283-1.html | 
我使用 Oracle VirtualBox 来测试各种 Linux 和 Unix 发行版。到目前为止,我已经在 VirtualBox 中测试了上百个虚拟机。今天,我在我的 Ubuntu 18.04 桌面上启动了 Ubuntu 18.04 服务器版虚拟机,我收到了以下错误。
```
Kernel driver not installed (rc=-1908)
The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall virtualbox-dkms package and load the kernel module by executing
'modprobe vboxdrv'
as root.
where: suplibOsInit what: 3 VERR_VM_DRIVER_NOT_INSTALLED (-1908) - The support driver is not installed. On linux, open returned ENOENT.
```

*Ubuntu 中的 “Kernel driver not installed (rc=-1908)” 错误*
我点击了 OK 关闭消息框,然后在后台看到了另一条消息。
```
Failed to open a session for the virtual machine Ubuntu 18.04 LTS Server.
The virtual machine 'Ubuntu 18.04 LTS Server' has terminated unexpectedly during startup with exit code 1 (0x1).
Result Code:
NS_ERROR_FAILURE (0x80004005)
Component:
MachineWrap
Interface:
IMachine {85cd948e-a71f-4289-281e-0ca7ad48cd89}
```

*启动期间虚拟机意外终止,退出代码为 1(0x1)*
我不知道该先做什么。我运行以下命令来检查是否有用。
```
$ sudo modprobe vboxdrv
```
我收到了这个错误:
```
modprobe: FATAL: Module vboxdrv not found in directory /lib/modules/5.0.0-23-generic
```
仔细阅读这两个错误消息后,我意识到我应该更新 Virtualbox 程序。
如果你在 Ubuntu 及其衍生版(如 Linux Mint)中遇到此错误,你只需使用以下命令重新安装或更新 `virtualbox-dkms` 包:
```
$ sudo apt install virtualbox-dkms
```
或者,最好更新整个系统:
```
$ sudo apt upgrade
```
错误消失了,我可以正常在 VirtualBox 中启动虚拟机了。
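如果你想进一步确认内核模块确实已经加载,可以运行下面的命令(示例):
```
$ sudo modprobe vboxdrv
$ lsmod | grep vboxdrv
```
如果 `lsmod` 有输出,就说明 `vboxdrv` 模块已经加载到内核中了。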
---
via: <https://www.ostechnix.com/how-to-fix-kernel-driver-not-installed-rc-1908-virtualbox-error-in-ubuntu/>
作者:[sk](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
11,285 | 如何转职为 DevOps 工程师 | https://opensource.com/article/19/7/how-transition-career-devops-engineer | 2019-08-30T17:14:00 | [
"DevOps"
] | /article-11285-1.html |
>
> 无论你是刚毕业的大学生,还是想在职业中寻求进步的经验丰富的 IT 专家,这些提示都可以帮你成为 DevOps 工程师。
>
>
>

DevOps 工程是一个备受称赞的热门职业。不管你是刚毕业正在找第一份工作,还是在利用之前的行业经验的同时寻求学习新技能的机会,本指南都能帮你通过正确的步骤成为 [DevOps 工程师](https://opensource.com/article/19/7/devops-vs-sysadmin)。
### 让自己沉浸其中
首先学习 [DevOps](https://opensource.com/resources/devops) 的基本原理、实践以及方法。在使用工具之前,先了解 DevOps 背后的“为什么”。DevOps 工程师的主要目标是在整个软件开发生命周期(SDLC)中提高速度并保持或提高质量,以提供最大的业务价值。阅读文章、观看 YouTube 视频、参加当地小组聚会或者会议 —— 成为热情的 DevOps 社区中的一员,在那里你将从先行者的错误和成功中学习。
### 考虑你的背景
如果你有从事技术工作的经历,例如软件开发人员、系统工程师、系统管理员、网络运营工程师或者数据库管理员,那么你已经拥有了广泛的见解和有用的经验,它们可以帮助你在未来成为 DevOps 工程师。如果你在完成计算机科学或任何其他 STEM(LCTT 译注:STEM 是<ruby> 科学 <rt> Science </rt></ruby>、<ruby> 技术 <rt> Technology </rt></ruby>、<ruby> 工程 <rt> Engineering </rt></ruby>和<ruby> 数学 <rt> Math </rt></ruby>四个学科的首字母缩略字)领域的学业后刚开始职业生涯,那么你已经拥有了这次转型所需的一些基本的垫脚石。
DevOps 工程师的角色涵盖了广泛的职责。以下是企业最有可能使用他们的三种方向:
* **偏向于开发(Dev)的 DevOps 工程师**,在构建应用中扮演软件开发的角色。他们日常工作的一部分是利用持续集成 / 持续交付(CI/CD)、共享仓库、云和容器,但他们不一定负责构建或实施工具。他们了解基础架构,并且在成熟的环境中,能将自己的代码推向生产环境。
* **偏向于运维技术(Ops)的 DevOps 工程师**,可以与系统工程师或系统管理员相比较。他们了解软件的开发,但并不会把一天的重心放在构建应用上。相反,他们更有可能支持软件开发团队实现手动流程的自动化,并提高人员和技术系统的效率。这可能意味着分解遗留代码,并用不太繁琐的自动化脚本来运行相同的命令,或者可能意味着安装、配置或维护基础结构和工具。他们确保为任何有需要的团队安装可使用的工具。他们也会通过教团队如何利用 CI / CD 和其他 DevOps 实践来帮助他们。
* **网站可靠性工程师(SRE)**,就像解决运维和基础设施的软件工程师。SRE 专注于创建可扩展、高可用且可靠的软件系统。
在理想的世界中,DevOps 工程师将了解以上所有领域;这在成熟的科技公司中很常见。然而,顶级银行和许多财富 500 强企业的 DevOps 职位通常会偏向开发(Dev)或运营(Ops)。
### 要学习的技术
DevOps 工程师需要了解各种技术才能有效完成工作。无论你的背景如何,请从作为 DevOps 工程师需要使用和理解的基本技术开始。
#### 操作系统
操作系统是一切运行的地方,拥有相关的基础知识十分重要。[Linux](https://opensource.com/resources/linux) 是你最有可能每天使用的操作系统,尽管有的组织会使用 Windows 操作系统。要开始使用,你可以在家中安装 Linux,在那里你可以随心所欲地中断,并在此过程中学习。
#### 脚本
接下来,选择一门语言来学习脚本编程。有很多语言可供选择,包括 Python、Go、Java、Bash、PowerShell、Ruby 和 C / C++。我建议[从 Python 开始](https://opensource.com/resources/python),因为它相对容易学习和解释,是最受欢迎的语言之一。Python 通常是遵循面向对象编程(OOP)的准则编写的,可用于 Web 开发、软件开发以及创建桌面 GUI 和业务应用程序。
#### 云
学习了 [Linux](https://opensource.com/resources/linux) 和 [Python](https://opensource.com/resources/python) 之后,我认为下一个该学习的是云计算。基础设施不再只是“运维小哥”的事情了,因此你需要接触云平台,例如 AWS 云服务、Azure 或者谷歌云平台。我会从 AWS 开始,因为它提供了大量免费学习资源,无论你是开发人员、运维人员,还是面向业务的部门成员,都能借助这些资源扫清入门障碍。事实上,它提供的东西多得可能会让你应接不暇。可以考虑从 EC2、S3 和 VPC 开始,然后看看你想从中学到什么。
#### 编程语言
如果你对 DevOps 的软件开发充满热情,请继续提高你的编程技能。DevOps 中的一些优秀和常用的编程语言和你用于脚本编程的相同:Python、Go、Java、Bash、PowerShell、Ruby 和 C / C++。你还应该熟悉 Jenkins 和 Git / Github,你将会在 CI / CD 过程中经常使用到它们。
#### 容器
最后,使用 Docker 和编排平台(如 Kubernetes)等工具开始学习[容器化](https://opensource.com/article/18/8/sysadmins-guide-containers)。网上有大量的免费学习资源,大多数城市都有本地的线下小组,你可以在友好的环境中向有经验的人学习(还有披萨和啤酒哦!)。
#### 其他的呢?
如果你缺乏开发经验,你依然可以通过对自动化的热情,提高效率,与他人协作以及改进自己的工作来[参与 DevOps](https://opensource.com/resources/devops)。我仍然建议学习上述工具,但重点不要放在编程 / 脚本语言上。了解基础架构即服务、平台即服务、云平台和 Linux 会非常有用。你可能会设置工具并学习如何构建具有弹性和容错能力的系统,并在编写代码时利用它们。
### 找一份 DevOps 的工作
求职过程会有所不同,具体取决于你是否一直从事技术工作,是否正在进入 DevOps 领域,或者是刚开始职业生涯的毕业生。
#### 如果你已经从事技术工作
如果你正在从一个技术领域转入 DevOps 角色,首先尝试在你当前的公司寻找机会。你能通过和其他的团队一起工作来重新掌握技能吗?尝试跟随其他团队成员,寻求建议,并在不离开当前工作的情况下获得新技能。如果做不到这一点,你可能需要换另一家公司。如果你能从上面列出的一些实践、工具和技术中学习,你将能在面试时展示相关知识从而占据有利位置。关键是要诚实,不要担心失败。大多数招聘主管都明白你并不知道所有的答案;如果你能展示你一直在学习的东西,并解释你愿意学习更多,你应该有机会获得 DevOps 的工作。
#### 如果你刚开始职业生涯
申请那些招聘初级 DevOps 工程师的公司的空缺职位。不幸的是,许多公司表示他们希望寻找更富有经验的人,并建议你在获得经验后再申请该职位。这是典型的、令人沮丧的“我们要的是有经验的人”困境,而且似乎没人愿意给你第一次机会。
然而,并不是所有求职经历都那么令人沮丧;一些公司专注于培训和提升刚从大学毕业的学生。例如,我工作的 [MThree](https://www.mthreealumni.com/) 会聘请应届毕业生并且对其进行 8 周的培训。当完成培训后,参与者们可以充分了解到整个 SDLC,并充分了解它在财富 500 强公司环境中的运用方式。毕业生被聘为 MThree 的客户公司的初级 DevOps 工程师 —— MThree 在前 18 - 24 个月内支付全职工资和福利,之后他们将作为直接雇员加入客户。这是弥合从大学到技术职业的间隙的好方法。
### 总结
转职成 DevOps 工程师的方法有很多种。这是一条回报丰厚的职业路线,它会让你保持忙碌并不断迎接挑战,还能提升你的收入潜力。
---
via: <https://opensource.com/article/19/7/how-transition-career-devops-engineer>
作者:[Conor Delanbanque](https://opensource.com/users/cdelanbanquehttps://opensource.com/users/daniel-ohhttps://opensource.com/users/herontheclihttps://opensource.com/users/marcobravohttps://opensource.com/users/cdelanbanque) 选题:[lujun9972](https://github.com/lujun9972) 译者:[beamrolling](https://github.com/beamrolling) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
11,286 | 如何在 Debian 10 上安装 Ansible | https://www.linuxtechi.com/install-ansible-automation-tool-debian10/ | 2019-08-31T10:19:47 | [
"Ansible"
] | https://linux.cn/article-11286-1.html | 在如今的 IT 领域,自动化一个是热门话题,每个组织都开始采用自动化工具,像 Puppet、Ansible、Chef、CFEngine、Foreman 和 Katello。在这些工具中,Ansible 是几乎所有 IT 组织中管理 UNIX 和 Linux 系统的首选。在本文中,我们将演示如何在 Debian 10 Sever 上安装和使用 Ansible。

我的实验室环境:
* Debian 10 – Ansible 服务器/ 控制节点 – 192.168.1.14
* CentOS 7 – Ansible 主机 (Web 服务器)– 192.168.1.15
* CentOS 7 – Ansible 主机(DB 服务器)– 192.169.1.17
我们还将演示如何使用 Ansible 服务器管理 Linux 服务器。
### 在 Debian 10 Server 上安装 Ansible
我假设你的 Debian 10 中有一个拥有 root 或 sudo 权限的用户。在我这里,我有一个名为 `pkumar` 的本地用户,它拥有 sudo 权限。
Ansible 2.7 包存在于 Debian 10 的默认仓库中,在命令行中运行以下命令安装 Ansible,
```
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install ansible -y
```
运行以下命令验证 Ansible 版本,
```
root@linuxtechi:~$ sudo ansible --version
```

要安装最新版本的 Ansible 2.8,首先我们必须设置 Ansible 仓库。
一个接一个地执行以下命令,
```
root@linuxtechi:~$ echo "deb http://ppa.launchpad.net/ansible/ansible/ubuntu bionic main" | sudo tee -a /etc/apt/sources.list
root@linuxtechi:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
root@linuxtechi:~$ sudo apt update
root@linuxtechi:~$ sudo apt install ansible -y
root@linuxtechi:~$ sudo ansible --version
```

### 使用 Ansible 管理 Linux 服务器
请参考以下步骤,使用 Ansible 控制节点管理类 Linux 服务器。
#### 步骤 1:在 Ansible 服务器及其主机之间交换 SSH 密钥
在 Ansible 服务器上生成 SSH 密钥,并将公钥复制到各台 Ansible 主机。
```
root@linuxtechi:~$ sudo -i
root@linuxtechi:~# ssh-keygen
root@linuxtechi:~# ssh-copy-id root@192.168.1.15
root@linuxtechi:~# ssh-copy-id root@192.168.1.17
```
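可以尝试一次免密登录来验证密钥是否复制成功(示例,主机 IP 来自上面的实验环境):
```
root@linuxtechi:~# ssh root@192.168.1.15 "hostname"
```
如果无需输入密码即可返回主机名,说明免密登录已配置好。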
#### 步骤 2:创建 Ansible 主机清单
安装 Ansible 后会自动创建 `/etc/ansible/hosts`,在此文件中我们可以编辑 Ansible 主机或其客户端。我们还可以在家目录中创建自己的 Ansible 主机清单,
运行以下命令在我们的家目录中创建 Ansible 主机清单。
```
root@linuxtechi:~$ vi $HOME/hosts
[Web]
192.168.1.15
[DB]
192.168.1.17
```
保存并退出文件。
注意:在上面的主机文件中,我们也可以使用主机名或 FQDN,但为此我们必须确保 Ansible 主机可以通过主机名或者 FQDN 访问。
#### 步骤 3:测试和使用默认的 Ansible 模块
Ansible 附带了许多可在 `ansible` 命令中使用的默认模块,示例如下所示。
语法:
```
# ansible -i <host_file> -m <module> <host>
```
这里:
* `-i ~/hosts`:包含 Ansible 主机列表
* `-m`:在之后指定 Ansible 模块,如 ping 和 shell
* `<host>`:我们要运行 Ansible 模块的 Ansible 主机
使用 Ansible ping 模块验证 ping 连接,
```
root@linuxtechi:~$ sudo ansible -i ~/hosts -m ping all
root@linuxtechi:~$ sudo ansible -i ~/hosts -m ping Web
root@linuxtechi:~$ sudo ansible -i ~/hosts -m ping DB
```
命令输出如下所示:

使用 shell 模块在 Ansible 主机上运行 shell 命令
语法:
```
ansible -i <hosts_file> -m shell -a <shell_commands> <host>
```
例子:
```
root@linuxtechi:~$ sudo ansible -i ~/hosts -m shell -a "uptime" all
192.168.1.17 | CHANGED | rc=0 >>
01:48:34 up 1:07, 3 users, load average: 0.00, 0.01, 0.05
192.168.1.15 | CHANGED | rc=0 >>
01:48:39 up 1:07, 3 users, load average: 0.00, 0.01, 0.04
root@linuxtechi:~$
root@linuxtechi:~$ sudo ansible -i ~/hosts -m shell -a "uptime ; df -Th / ; uname -r" Web
192.168.1.15 | CHANGED | rc=0 >>
01:52:03 up 1:11, 3 users, load average: 0.12, 0.07, 0.06
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 13G 1017M 12G 8% /
3.10.0-327.el7.x86_64
root@linuxtechi:~$
```
上面的命令输出表明我们已成功设置 Ansible 控制器节点。
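除了 `ping` 和 `shell`,你也可以试试其他常用模块。例如,用 `copy` 模块把一个文件分发到所有主机(示例):
```
root@linuxtechi:~$ sudo ansible -i ~/hosts -m copy -a "src=/etc/hosts dest=/tmp/hosts" all
```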
让我们创建一个安装 nginx 的示例剧本。下面的剧本会在属于 Web 主机组的所有服务器上安装 nginx;在我的环境中,该主机组下只有一台 CentOS 7 机器。
```
root@linuxtechi:~$ vi nginx.yaml
---
- hosts: Web
  tasks:
  - name: Install latest version of nginx on CentOS 7 Server
    yum: name=nginx state=latest
  - name: start nginx
    service:
      name: nginx
      state: started
```
现在使用以下命令执行剧本。
```
root@linuxtechi:~$ sudo ansible-playbook -i ~/hosts nginx.yaml
```
上面命令的输出类似下面这样,

这表明 Ansible 剧本成功执行了。
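作为最后的验证,你可以从 Ansible 服务器直接请求一下 Web 主机上的 nginx(示例,IP 为前文实验环境中的 Web 主机):
```
root@linuxtechi:~$ curl -I http://192.168.1.15
```
如果能返回 HTTP 响应头,说明 nginx 已经在运行了。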
本文就是这些了,请分享你的反馈和评论。
---
via: <https://www.linuxtechi.com/install-ansible-automation-tool-debian10/>
作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,288 | Hexdump 如何工作 | https://opensource.com/article/19/8/dig-binary-files-hexdump | 2019-08-31T11:17:33 | [
"hexdump"
] | https://linux.cn/article-11288-1.html |
>
> Hexdump 能帮助你查看二进制文件的内容。让我们来学习 Hexdump 如何工作。
>
>
>

Hexdump 是个用十六进制、十进制、八进制数或 ASCII 码显示二进制文件内容的工具。它是个用于检查的工具,也可用于[数据恢复](https://www.redhat.com/sysadmin/find-lost-files-scalpel)、逆向工程和编程。
### 学习基本用法
Hexdump 让你毫不费力地得到输出结果,依你所查看文件的尺寸,输出结果可能会非常多。本文中我们会创建一个 1x1 像素的 PNG 文件。你可以用图像处理应用如 [GIMP](http://gimp.org) 或 [Mtpaint](https://opensource.com/article/17/2/mtpaint-pixel-art-animated-gifs) 来创建该文件,或者也可以在终端内用 [ImageMagick](https://opensource.com/article/17/8/imagemagick) 创建。
用 ImageMagick 生成 1x1 像素 PNG 文件的命令如下:
```
$ convert -size 1x1 canvas:black pixel.png
```
你可以用 `file` 命令确认此文件是 PNG 格式:
```
$ file pixel.png
pixel.png: PNG image data, 1 x 1, 1-bit grayscale, non-interlaced
```
你可能好奇 `file` 命令是如何判断文件是什么类型。巧的是,那正是 `hexdump` 将要揭示的原理。眼下你可以用你常用的图像查看软件来看看你的单一像素图片(它看上去就像这样:`.`),或者你可以用 `hexdump` 查看文件内部:
```
$ hexdump pixel.png
0000000 5089 474e 0a0d 0a1a 0000 0d00 4849 5244
0000010 0000 0100 0000 0100 0001 0000 3700 f96e
0000020 0024 0000 6704 4d41 0041 b100 0b8f 61fc
0000030 0005 0000 6320 5248 004d 7a00 0026 8000
0000040 0084 fa00 0000 8000 00e8 7500 0030 ea00
0000050 0060 3a00 0098 1700 9c70 51ba 003c 0000
0000060 6202 474b 0044 dd01 138a 00a4 0000 7407
0000070 4d49 0745 07e3 081a 3539 a487 46b0 0000
0000080 0a00 4449 5441 d708 6063 0000 0200 0100
0000090 21e2 33bc 0000 2500 4574 7458 6164 6574
00000a0 633a 6572 7461 0065 3032 3931 302d 2d37
00000b0 3532 3254 3a30 3735 353a 2b33 3231 303a
00000c0 ac30 5dcd 00c1 0000 7425 5845 6474 7461
00000d0 3a65 6f6d 6964 7966 3200 3130 2d39 3730
00000e0 322d 5435 3032 353a 3a37 3335 312b 3a32
00000f0 3030 90dd 7de5 0000 0000 4549 444e 42ae
0000100 8260
0000102
```
透过一个你以前可能从未用过的视角,你所见的是该示例 PNG 文件的内容。它和你在图像查看软件中看到的是完全一样的数据,只是用一种你或许不熟悉的方式编码。
### 提取熟悉的字符串
尽管默认的数据输出结果看上去毫无意义,那并不意味着其中没有有价值的信息。你可以用 `--canonical` 选项将输出结果,或至少是其中可翻译的部分,翻译成更加熟悉的字符集:
```
$ hexdump --canonical pixel.png
00000000 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 |.PNG........IHDR|
00000010 00 00 00 01 00 00 00 01 01 00 00 00 00 37 6e f9 |.............7n.|
00000020 24 00 00 00 04 67 41 4d 41 00 00 b1 8f 0b fc 61 |$....gAMA......a|
00000030 05 00 00 00 20 63 48 52 4d 00 00 7a 26 00 00 80 |.... cHRM..z&...|
00000040 84 00 00 fa 00 00 00 80 e8 00 00 75 30 00 00 ea |...........u0...|
00000050 60 00 00 3a 98 00 00 17 70 9c ba 51 3c 00 00 00 |`..:....p..Q<...|
00000060 02 62 4b 47 44 00 01 dd 8a 13 a4 00 00 00 07 74 |.bKGD..........t|
00000070 49 4d 45 07 e3 07 1a 08 39 35 87 a4 b0 46 00 00 |IME.....95...F..|
00000080 00 0a 49 44 41 54 08 d7 63 60 00 00 00 02 00 01 |..IDAT..c`......|
00000090 e2 21 bc 33 00 00 00 25 74 45 58 74 64 61 74 65 |.!.3...%tEXtdate|
000000a0 3a 63 72 65 61 74 65 00 32 30 31 39 2d 30 37 2d |:create.2019-07-|
000000b0 32 35 54 32 30 3a 35 37 3a 35 33 2b 31 32 3a 30 |25T20:57:53+12:0|
000000c0 30 ac cd 5d c1 00 00 00 25 74 45 58 74 64 61 74 |0..]....%tEXtdat|
000000d0 65 3a 6d 6f 64 69 66 79 00 32 30 31 39 2d 30 37 |e:modify.2019-07|
000000e0 2d 32 35 54 32 30 3a 35 37 3a 35 33 2b 31 32 3a |-25T20:57:53+12:|
000000f0 30 30 dd 90 e5 7d 00 00 00 00 49 45 4e 44 ae 42 |00...}....IEND.B|
00000100 60 82 |`.|
00000102
```
在右侧的列中,你看到的是和左侧一样的数据,但是以 ASCII 码展现的。如果你仔细看,你可以从中挑选出一些有用的信息,如文件格式(PNG)以及文件创建、修改的日期和时间(在靠近文件底部的位置找找看)。那些点号代表 ASCII 字符集中不存在的符号,这很正常,因为二进制格式并不局限于普通的字母和数字。
`file` 命令正是通过头 8 个字节来判断这是什么文件的。程序员会参考 [libpng 规范](http://www.libpng.org/pub/png/spec/1.2/PNG-Structure.html) 来知晓需要查看什么。具体而言,就是你能在该图像文件头 8 个字节中看到的字符串 `PNG`。这个事实很重要,因为它揭示了 `file` 命令是如何知道该报告何种文件类型的。
你也可以控制 `hexdump` 显示多少字节,这在处理大于一个像素的文件时很实用:
```
$ hexdump --length 8 pixel.png
0000000 5089 474e 0a0d 0a1a
0000008
```
`hexdump` 不只限于查看 PNG 或图像文件。你也可以用 `hexdump` 查看你日常使用的二进制文件,如 [ls](https://opensource.com/article/19/7/master-ls-command)、[rsync](https://opensource.com/article/19/5/advanced-rsync),或你想检查的任何二进制文件。
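例如,你可以看看 `ls` 可执行文件的开头几个字节(示例命令,输出因系统而异;在 Linux 上,你应该能在前几个字节里看到 ELF 可执行文件的魔数,其中包含字符串 `ELF`):
```
$ hexdump --length 16 --canonical /usr/bin/ls
```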
### 用 hexdump 实现 cat 命令
阅读 PNG 规范的时候你可能会注意到,头 8 个字节中的数据与 `hexdump` 给出的结果看上去不一样。实际上那是同样的数据,只是用了一种不同的转换方式展现出来。所以 `hexdump` 的输出是正确的,但取决于你在寻找的信息,其输出结果对你而言不总是直截了当的。出于这个原因,`hexdump` 提供了一些选项,用于对它转储的原始数据进行格式化和转换。
转换选项可以很复杂,所以用无关紧要的东西练习会比较实用。下面这个简易的介绍,通过重新实现 [cat](https://opensource.com/article/19/2/getting-started-cat-command) 命令来演示如何格式化 `hexdump` 的输出。首先,对一个文本文件运行 `hexdump` 来查看其原始数据。通常你可以在硬盘上某处找到 <ruby> <a href="https://en.wikipedia.org/wiki/GNU_General_Public_License"> GNU 通用许可证 </a> <rt> GNU General Public License </rt></ruby>(GPL)的一份拷贝,也可以用你手头的任何文本文件。你的输出结果可能不同,但下面是如何在你的系统中找到一份 GPL(或至少其部分)的拷贝:
```
$ find /usr/share/doc/ -type f -name "COPYING" | tail -1
/usr/share/doc/libblkid-devel/COPYING
```
对其运行 `hexdump`:
```
$ hexdump /usr/share/doc/libblkid-devel/COPYING
0000000 6854 7369 6c20 6269 6172 7972 6920 2073
0000010 7266 6565 7320 666f 7774 7261 3b65 7920
0000020 756f 6320 6e61 7220 6465 7369 7274 6269
0000030 7475 2065 7469 6120 646e 6f2f 0a72 6f6d
0000040 6964 7966 6920 2074 6e75 6564 2072 6874
0000050 2065 6574 6d72 2073 666f 7420 6568 4720
0000060 554e 4c20 7365 6573 2072 6547 656e 6172
0000070 206c 7550 6c62 6369 4c0a 6369 6e65 6573
0000080 6120 2073 7570 6c62 7369 6568 2064 7962
[...]
```
如果该文件输出结果很长,用 `--length`(或短选项 `-n`)来控制输出长度使其易于管理。
原始数据对你而言可能没什么意义,但你已经知道如何将其转换为 ASCII 码:
```
$ hexdump --canonical /usr/share/doc/libblkid-devel/COPYING
00000000 54 68 69 73 20 6c 69 62 72 61 72 79 20 69 73 20 |This library is |
00000010 66 72 65 65 20 73 6f 66 74 77 61 72 65 3b 20 79 |free software; y|
00000020 6f 75 20 63 61 6e 20 72 65 64 69 73 74 72 69 62 |ou can redistrib|
00000030 75 74 65 20 69 74 20 61 6e 64 2f 6f 72 0a 6d 6f |ute it and/or.mo|
00000040 64 69 66 79 20 69 74 20 75 6e 64 65 72 20 74 68 |dify it under th|
00000050 65 20 74 65 72 6d 73 20 6f 66 20 74 68 65 20 47 |e terms of the G|
00000060 4e 55 20 4c 65 73 73 65 72 20 47 65 6e 65 72 61 |NU Lesser Genera|
00000070 6c 20 50 75 62 6c 69 63 0a 4c 69 63 65 6e 73 65 |l Public.License|
[...]
```
这个输出结果有帮助但太累赘且难于阅读。要将 `hexdump` 的输出结果转换为其选项不支持的其他格式,可组合使用 `--format`(或 `-e`)和专门的格式代码。用来自定义格式的代码和 `printf` 命令使用的类似,所以如果你熟悉 `printf` 语句,你可能会觉得 `hexdump` 自定义格式不难学会。
在 `hexdump` 中,字符串 `%_p` 告诉 `hexdump` 用你系统的默认字符集输出字符。`--format` 选项的所有格式符号必须以*单引号*包括起来:
```
$ hexdump -e'"%_p"' /usr/share/doc/libblkid-devel/COPYING
This library is fre*
software; you can redistribute it and/or.modify it under the terms of the GNU Les*
er General Public.License as published by the Fre*
Software Foundation; either.version 2.1 of the License, or (at your option) any later.version..*
The complete text of the license is available in the..*
/Documentation/licenses/COPYING.LGPL-2.1-or-later file..
```
这次的输出好些了,但依然不方便阅读。传统上 UNIX 文本文件假定 80 个字符的输出宽度(因为很久以前显示器一行只能显示 80 个字符)。
尽管这个输出结果未被自定义格式限制输出宽度,你可以用附加选项强制 `hexdump` 一次处理 80 字节。具体而言,通过 80 除以 1 这种形式,你可以告诉 `hexdump` 将 80 字节作为一个单元对待:
```
$ hexdump -e'80/1 "%_p"' /usr/share/doc/libblkid-devel/COPYING
This library is free software; you can redistribute it and/or.modify it under the terms of the GNU Lesser General Public.License as published by the Free Software Foundation; either.version 2.1 of the License, or (at your option) any later.version...The complete text of the license is available in the.../Documentation/licenses/COPYING.LGPL-2.1-or-later file..
```
现在该文件被分割成 80 字节的块处理,但没有任何换行。你可以用 `\n` 字符自行添加换行,在 UNIX 中它代表换行:
```
$ hexdump -e'80/1 "%_p""\n"' /usr/share/doc/libblkid-devel/COPYING
This library is free software; you can redistribute it and/or.modify it under th
e terms of the GNU Lesser General Public.License as published by the Free Softwa
re Foundation; either.version 2.1 of the License, or (at your option) any later.
version...The complete text of the license is available in the.../Documentation/
licenses/COPYING.LGPL-2.1-or-later file..
```
现在你已经(大致上)用 `hexdump` 自定义格式实现了 `cat` 命令。
### 控制输出结果
实际上自定义格式是让 `hexdump` 变得有用的方法。现在你已经(至少是原则上)熟悉 `hexdump` 自定义格式,你可以让 `hexdump -n 8` 的输出结果跟 `libpng` 官方规范中描述的 PNG 文件头相匹配了。
首先,你知道你希望 `hexdump` 以 8 字节的块来处理 PNG 文件。此外,通过观察那些整数,你也许已经认出 PNG 格式规范是以十进制数表述的;根据 `hexdump` 文档,十进制用 `%d` 来表示:
```
$ hexdump -n8 -e'8/1 "%d""\n"' pixel.png
13780787113102610
```
你可以在每个整数后面加个空格使输出结果变得完美:
```
$ hexdump -n8 -e'8/1 "%d ""\n"' pixel.png
137 80 78 71 13 10 26 10
```
现在输出结果跟 PNG 规范完美匹配了。
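类似地,如果你更想以十六进制查看同样的 8 个字节,只需把 `%d` 换成 `%02x` 即可(示例;`hexdump` 的格式符号遵循 `printf` 风格):
```
$ hexdump -n8 -e'8/1 "%02x ""\n"' pixel.png
```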
### 好玩又有用
Hexdump 是个迷人的工具,不仅让你更多地领会计算机如何处理和转换信息,而且让你了解文件格式和编译的二进制文件如何工作。日常工作时你可以随机地试着对不同文件运行 `hexdump`。你永远不知道你会发现什么样的信息,或是什么时候具有这种洞察力会很实用。
---
via: <https://opensource.com/article/19/8/dig-binary-files-hexdump>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[0x996](https://github.com/0x996) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Hexdump is a utility that displays the contents of binary files in hexadecimal, decimal, octal, or ASCII. It’s a utility for inspection and can be used for [data recovery](https://www.redhat.com/sysadmin/find-lost-files-scalpel), reverse engineering, and programming.
## Learning the basics
Hexdump provides output with very little effort on your part and depending on the size of the file you’re looking at, there can be a lot of output. For the purpose of this article, create a 1x1 PNG file. You can do this with a graphics application such as [GIMP](http://gimp.org) or [Mtpaint](https://opensource.com/article/17/2/mtpaint-pixel-art-animated-gifs), or you can create it in a terminal with [ImageMagick](https://opensource.com/article/17/8/imagemagick).
Here’s a command to generate a 1x1 pixel PNG with ImageMagick:
```
$ convert -size 1x1 canvas:black pixel.png
```
You can confirm that this file is a PNG with the **file** command:
```
$ file pixel.png
pixel.png: PNG image data, 1 x 1, 1-bit grayscale, non-interlaced
```
You may wonder how the **file** command is able to determine what kind of file it is. Coincidentally, that’s what **hexdump** will reveal. For now, you can view your one-pixel graphic in the image viewer of your choice (it looks like this: **.** ), or you can view what’s inside the file with **hexdump**:
```
$ hexdump pixel.png
0000000 5089 474e 0a0d 0a1a 0000 0d00 4849 5244
0000010 0000 0100 0000 0100 0001 0000 3700 f96e
0000020 0024 0000 6704 4d41 0041 b100 0b8f 61fc
0000030 0005 0000 6320 5248 004d 7a00 0026 8000
0000040 0084 fa00 0000 8000 00e8 7500 0030 ea00
0000050 0060 3a00 0098 1700 9c70 51ba 003c 0000
0000060 6202 474b 0044 dd01 138a 00a4 0000 7407
0000070 4d49 0745 07e3 081a 3539 a487 46b0 0000
0000080 0a00 4449 5441 d708 6063 0000 0200 0100
0000090 21e2 33bc 0000 2500 4574 7458 6164 6574
00000a0 633a 6572 7461 0065 3032 3931 302d 2d37
00000b0 3532 3254 3a30 3735 353a 2b33 3231 303a
00000c0 ac30 5dcd 00c1 0000 7425 5845 6474 7461
00000d0 3a65 6f6d 6964 7966 3200 3130 2d39 3730
00000e0 322d 5435 3032 353a 3a37 3335 312b 3a32
00000f0 3030 90dd 7de5 0000 0000 4549 444e 42ae
0000100 8260
0000102
```
What you’re seeing is the contents of the sample PNG file through a lens you may have never used before. It’s the exact same data you see in an image viewer, encoded in a way that’s probably unfamiliar to you.
## Extracting familiar strings
Just because the default data dump seems meaningless, that doesn’t mean it’s devoid of valuable information. You can translate this output or at least the parts that actually translate, to a more familiar character set with the **--canonical** option:
```
$ hexdump --canonical pixel.png
00000000 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 |.PNG........IHDR|
00000010 00 00 00 01 00 00 00 01 01 00 00 00 00 37 6e f9 |.............7n.|
00000020 24 00 00 00 04 67 41 4d 41 00 00 b1 8f 0b fc 61 |$....gAMA......a|
00000030 05 00 00 00 20 63 48 52 4d 00 00 7a 26 00 00 80 |.... cHRM..z&...|
00000040 84 00 00 fa 00 00 00 80 e8 00 00 75 30 00 00 ea |...........u0...|
00000050 60 00 00 3a 98 00 00 17 70 9c ba 51 3c 00 00 00 |`..:....p..Q<...|
00000060 02 62 4b 47 44 00 01 dd 8a 13 a4 00 00 00 07 74 |.bKGD..........t|
00000070 49 4d 45 07 e3 07 1a 08 39 35 87 a4 b0 46 00 00 |IME.....95...F..|
00000080 00 0a 49 44 41 54 08 d7 63 60 00 00 00 02 00 01 |..IDAT..c`......|
00000090 e2 21 bc 33 00 00 00 25 74 45 58 74 64 61 74 65 |.!.3...%tEXtdate|
000000a0 3a 63 72 65 61 74 65 00 32 30 31 39 2d 30 37 2d |:create.2019-07-|
000000b0 32 35 54 32 30 3a 35 37 3a 35 33 2b 31 32 3a 30 |25T20:57:53+12:0|
000000c0 30 ac cd 5d c1 00 00 00 25 74 45 58 74 64 61 74 |0..]....%tEXtdat|
000000d0 65 3a 6d 6f 64 69 66 79 00 32 30 31 39 2d 30 37 |e:modify.2019-07|
000000e0 2d 32 35 54 32 30 3a 35 37 3a 35 33 2b 31 32 3a |-25T20:57:53+12:|
000000f0 30 30 dd 90 e5 7d 00 00 00 00 49 45 4e 44 ae 42 |00...}....IEND.B|
00000100 60 82 |`.|
00000102
```
In the right column, you see the same data that’s on the left but presented as ASCII. If you look carefully, you can pick out some useful information, such as the file’s format (PNG) and—toward the bottom—the date and time the file was created and last modified. The dots represent symbols that aren’t present in the ASCII character set, which is to be expected because binary formats aren’t restricted to mundane letters and numbers.
The **file** command knows from the first 8 bytes what this file is. The [libpng specification](http://www.libpng.org/pub/png/spec/1.2/PNG-Structure.html) alerts programmers what to look for. You can see that within the first 8 bytes of this image file, specifically, is the string **PNG**. That fact is significant because it reveals how the **file** command knows what kind of file to report.
You can also control how many bytes **hexdump** displays, which is useful with files larger than one pixel:
```
$ hexdump --length 8 pixel.png
0000000 5089 474e 0a0d 0a1a
0000008
```
You don’t have to limit **hexdump** to PNG or graphic files. You can run **hexdump** against binaries you run on a daily basis as well, such as [ls](https://opensource.com/article/19/7/master-ls-command), [rsync](https://opensource.com/article/19/5/advanced-rsync), or any binary format you want to inspect.
## Implementing cat with hexdump
If you read the PNG spec, you may notice that the data in the first 8 bytes looks different than what **hexdump** provides. Actually, it’s the same data, but it’s presented using a different conversion. So, the output of **hexdump** is true, but not always directly useful to you, depending on what you’re looking for. For that reason, **hexdump** has options to format and convert the raw data it dumps.
The conversion options can get complex, so it's useful to practice with something trivial first. Here's a gentle introduction to formatting **hexdump** output by reimplementing the [cat](https://opensource.com/article/19/2/getting-started-cat-command) command. First, run **hexdump** on a text file to see its raw data. You can usually find a copy of the [GNU General Public License (GPL)](https://en.wikipedia.org/wiki/GNU_General_Public_License) license somewhere on your hard drive, or you can use any text file you have handy. Your output may differ, but here's how to find a copy of the GPL on your system (or at least part of it):
```
$ find /usr/share/doc/ -type f -name "COPYING" | tail -1
/usr/share/doc/libblkid-devel/COPYING
```
Run **hexdump** against it:
```
$ hexdump /usr/share/doc/libblkid-devel/COPYING
0000000 6854 7369 6c20 6269 6172 7972 6920 2073
0000010 7266 6565 7320 666f 7774 7261 3b65 7920
0000020 756f 6320 6e61 7220 6465 7369 7274 6269
0000030 7475 2065 7469 6120 646e 6f2f 0a72 6f6d
0000040 6964 7966 6920 2074 6e75 6564 2072 6874
0000050 2065 6574 6d72 2073 666f 7420 6568 4720
0000060 554e 4c20 7365 6573 2072 6547 656e 6172
0000070 206c 7550 6c62 6369 4c0a 6369 6e65 6573
0000080 6120 2073 7570 6c62 7369 6568 2064 7962
[...]
```
If the file’s output is very long, use the **--length** (or **-n** for short) to make it manageable for yourself.
The raw data probably means nothing to you, but you already know how to convert it to ASCII:
```
hexdump --canonical /usr/share/doc/libblkid-devel/COPYING
00000000 54 68 69 73 20 6c 69 62 72 61 72 79 20 69 73 20 |This library is |
00000010 66 72 65 65 20 73 6f 66 74 77 61 72 65 3b 20 79 |free software; y|
00000020 6f 75 20 63 61 6e 20 72 65 64 69 73 74 72 69 62 |ou can redistrib|
00000030 75 74 65 20 69 74 20 61 6e 64 2f 6f 72 0a 6d 6f |ute it and/or.mo|
00000040 64 69 66 79 20 69 74 20 75 6e 64 65 72 20 74 68 |dify it under th|
00000050 65 20 74 65 72 6d 73 20 6f 66 20 74 68 65 20 47 |e terms of the G|
00000060 4e 55 20 4c 65 73 73 65 72 20 47 65 6e 65 72 61 |NU Lesser Genera|
00000070 6c 20 50 75 62 6c 69 63 0a 4c 69 63 65 6e 73 65 |l Public.License|
[...]
```
That output is helpful but unwieldy and difficult to read. To format **hexdump**’s output beyond what’s offered by its own options, use **--format** (or **-e**) along with specialized formatting codes. The shorthand used for formatting is similar to what the **printf** command uses, so if you are familiar with **printf** statements, you may find **hexdump** formatting easier to learn.
In **hexdump**, the character sequence **%_p** tells **hexdump** to print a character in your system’s default character set. All formatting notation for the **--format** option must be enclosed in *single quotes*:
```
$ hexdump -e'"%_p"' /usr/share/doc/libblkid-devel/COPYING
This library is fre*
software; you can redistribute it and/or.modify it under the terms of the GNU Les*
er General Public.License as published by the Fre*
Software Foundation; either.version 2.1 of the License, or (at your option) any later.version..*
The complete text of the license is available in the..*
/Documentation/licenses/COPYING.LGPL-2.1-or-later file..
```
This output is better, but still inconvenient to read. Traditionally, UNIX text files assume an 80-character output width (because long ago, monitors tended to fit only 80 characters across).
While this output isn’t bound by formatting, you can force **hexdump** to process 80 bytes at a time with additional options. Specifically, by dividing 80 by one, you can tell **hexdump** to treat 80 bytes as one unit:
```
$ hexdump -e'80/1 "%_p"' /usr/share/doc/libblkid-devel/COPYING
This library is free software; you can redistribute it and/or.modify it under the terms of the GNU Lesser General Public.License as published by the Free Software Foundation; either.version 2.1 of the License, or (at your option) any later.version...The complete text of the license is available in the.../Documentation/licenses/COPYING.LGPL-2.1-or-later file..
```
Now the file is processed in 80-byte chunks, but it’s lost any sense of new lines. You can add your own with the **\n** character, which in UNIX represents a new line:
```
$ hexdump -e'80/1 "%_p""\n"' /usr/share/doc/libblkid-devel/COPYING
This library is free software; you can redistribute it and/or.modify it under th
e terms of the GNU Lesser General Public.License as published by the Free Softwa
re Foundation; either.version 2.1 of the License, or (at your option) any later.
version...The complete text of the license is available in the.../Documentation/
licenses/COPYING.LGPL-2.1-or-later file..
```
You have now (approximately) implemented the **cat** command with **hexdump** formatting.
## Controlling the output
Formatting is, realistically, how you make **hexdump** useful. Now that you’re familiar, in principle at least, with **hexdump** formatting, you can make the output of **hexdump -n 8** match the output of the PNG header as described by the official **libpng** spec.
First, you know that you want **hexdump** to process the PNG file in 8-byte chunks. Furthermore, you may know by integer recognition that the PNG spec is documented in decimal, which is represented by **%d** according to the **hexdump** documentation:
```
$ hexdump -n8 -e'8/1 "%d""\n"' pixel.png
13780787113102610
```
You can make the output perfect by adding a blank space after each integer:
```
$ hexdump -n8 -e'8/1 "%d ""\n"' pixel.png
137 80 78 71 13 10 26 10
```
The output is now a perfect match to the PNG specification.
## Hexdumping for fun and profit
Hexdump is a fascinating tool that not only teaches you more about how computers process and convert information, but also about how file formats and compiled binaries function. You should try running **hexdump** on files at random throughout the day as you work. You never know what kinds of information you may find, nor when having that insight may be useful.
|
11,289 | 你可能意识不到的使用 Linux 的 11 种方式 | https://opensource.com/article/19/8/everyday-tech-runs-linux | 2019-09-01T00:00:00 | [
"Linux"
] | /article-11289-1.html |
>
> 什么技术运行在 Linux 上?你可能会惊讶于日常生活中使用 Linux 的频率。
>
>
>

现在 Linux 几乎可以运行每样东西,但很多人都没有意识到这一点。有些人可能知道 Linux,可能听说过超级计算机运行着这个操作系统。根据 [Top500](https://www.top500.org/),Linux 现在驱动着世界上最快的 500 台计算机。你可以转到他们的网站并[搜索“Linux”](https://www.top500.org/statistics/sublist/)自己查看一下结果。
### NASA 运行在 Linux 之上
你可能不知道 Linux 为 NASA(美国国家航空航天局)提供支持。NASA 的 [Pleiades](https://www.nas.nasa.gov/hecc/resources/pleiades.html) 超级计算机运行着 Linux。由于操作系统的可靠性,国际空间站六年前[从 Windows 切换到了 Linux](https://www.extremetech.com/extreme/155392-international-space-station-switches-from-windows-to-linux-for-improved-reliability)。NASA 甚至最近给国际空间站部署了三台[运行着 Linux](https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180003515.pdf) 的“Astrobee”机器人。
### 电子书运行在 Linux 之上
我读了很多书,我的首选设备是亚马逊 Kindle Paperwhite,它运行 Linux(虽然大多数人完全没有意识到这一点)。如果你使用亚马逊的任何服务,从[亚马逊弹性计算云(Amazon EC2)](https://aws.amazon.com/amazon-linux-ami/) 到 Fire TV,你就是在 Linux 上运行。当你问 Alexa 现在是什么时候,或者你最喜欢的运动队得分时,你也在使用 Linux,因为 Alexa 由 [Fire OS](https://en.wikipedia.org/wiki/Fire_OS)(基于 Android 的操作系统)提供支持。实际上,[Android](https://en.wikipedia.org/wiki/Android_(operating_system)) 是由谷歌开发的用于移动手持设备的 Linux,而且占据了当今移动电话的[76% 的市场](https://gs.statcounter.com/os-market-share/mobile/worldwide/)。
### 电视运行在 Linux 之上
如果你有一台 [TiVo](https://tivo.pactsafe.io/legal.html#open-source-software),那么你也在运行 Linux。如果你是 Roku 用户,那么你也在使用 Linux。[Roku OS](https://en.wikipedia.org/wiki/Roku) 是专为 Roku 设备定制的 Linux 版本。你可以选择使用在 Linux 上运行的 Chromecast 看流媒体。不过,Linux 不只是为机顶盒和流媒体设备提供支持。它也可能运行着你的智能电视。LG 使用 webOS,它是基于 Linux 内核的。Panasonic 使用 Firefox OS,它也是基于 Linux 内核的。三星、菲利普斯以及更多厂商都使用基于 Linux 的操作系统支持其设备。
### 智能手表和平板电脑运行在 Linux 之上
如果你拥有智能手表,它可能正在运行 Linux。世界各地的学校系统一直在实施[一对一系统](https://en.wikipedia.org/wiki/One-to-one_computing),让每个孩子都有自己的笔记本电脑。越来越多的机构为学生配备了 Chromebook。这些轻巧的笔记本电脑使用 [Chrome OS](https://en.wikipedia.org/wiki/Chrome_OS),它基于 Linux。
### 汽车运行在 Linux 之上
你驾驶的汽车可能正在运行 Linux。[汽车级 Linux(AGL)](https://opensource.com/life/16/8/agl-provides-common-open-code-base) 是一个将 Linux 视为汽车标准代码库的项目,其成员包括丰田、马自达、梅赛德斯-奔驰和大众等汽车制造商。你的[车载信息娱乐(IVI)](https://opensource.com/business/16/5/interview-alison-chaiken-steven-crumb)系统也可能运行着 Linux。[GENIVI 联盟](https://www.genivi.org/faq)在其网站上称,它开发了“用于集成在集中连接的车辆驾驶舱中的操作系统和中间件的标准方法”。
### 游戏运行在 Linux 之上
如果你是游戏玩家,那么你可能正在使用 [SteamOS](https://store.steampowered.com/steamos/),这是一个基于 Linux 的操作系统。此外,如果你使用 Google 的众多服务,那么你也运行在 Linux上。
### 社交媒体运行在 Linux 之上
当你刷屏和评论时,你可能意识不到这些平台背后在做多少工作。也许 Instagram、Facebook、YouTube 和 Twitter 都在 Linux 上运行并不令人惊讶。
此外,新一波去中心化的联邦式社交媒体节点,如 [Mastodon](https://opensource.com/article/17/4/guide-to-mastodon)、[GNU Social](https://www.gnu.org/software/social/)、[Nextcloud](https://apps.nextcloud.com/apps/social)(类似 Twitter 的微博平台)、[Pixelfed](https://pixelfed.org/)(分布式照片共享)和 [Peertube](https://joinpeertube.org/en/)(分布式视频共享),至少在默认情况下也运行在 Linux 上。由于它们是开源的,可以在任何平台上运行,而这本身就是一大优势。
### 商业和政务运行在 Linux 之上
与五角大楼一样,纽约证券交易所也在 Linux 上运行。美国联邦航空管理局每年处理超过 1600 万次航班,他们在 Linux 上运营。美国国会图书馆、众议院、参议院和白宫都使用 Linux。
### 零售运行在 Linux 之上
最新航班座位上的娱乐系统很可能在 Linux 上运行。你最喜欢的商店的 POS 机可能正运行在 Linux 上。基于 Linux 的 [Tizen OS](https://wiki.tizen.org/Devices) 为智能家居和其他智能设备提供支持。许多公共图书馆现在在 [Evergreen](https://evergreen-ils.org/) 和 [Koha](https://koha-community.org/) 上托管他们的综合图书馆系统。这两个系统都在 Linux 上运行。
### Apple 运行在 Linux 之上
如果你是使用 [iCloud](https://toolbar.netcraft.com/site_report?url=https://www.icloud.com/) 的 iOS 用户,那么你也在使用运行在 Linux 上的系统。Apple 公司的网站在 Linux 上运行。如果你想知道在 Linux 上运行的其他网站,请务必使用 [Netcraft](https://www.netcraft.com/) 并检查“该网站运行在什么之上?”的结果。
### 路由器运行在 Linux 之上
在你家里将你连接到互联网的路由器可能正运行在 Linux 上。如果你当前的路由器没有运行 Linux 而你想改变它,那么这里有一个[优秀的方法](https://opensource.com/life/16/6/why-i-built-my-own-linux-router)。
如你所见,Linux 从许多方面为今天的世界提供动力。还有什么运行在 Linux 之上的东西是人们还没有意识到的?请让我们在评论中知道。
---
via: <https://opensource.com/article/19/8/everyday-tech-runs-linux>
作者:[Don Watkins](https://opensource.com/users/don-watkins) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
11,291 | 开源新闻综述:谷歌开源 Android 语音转录和手势追踪、Twitter 的遥测工具 | https://opensource.com/19/8/news-august-31 | 2019-09-01T10:51:00 | [
"语音识别"
] | https://linux.cn/article-11291-1.html |
>
> 不要错过两周以来最大的开源头条新闻。
>
>
>

在本期的开源新闻综述中,我们来看看谷歌发布的两个开源软件、Twitter 的最新可观测性工具、动漫工作室对 Blender 的采用在增多等等新闻!
### 谷歌的开源双响炮
搜索引擎巨头谷歌的开发人员最近一直忙于开源。在过去的两周里,他们以开源的方式发布了两个截然不同的软件。
第一个是 Android 的语音识别和转录工具 Live Transcribe 的[语音引擎](https://venturebeat.com/2019/08/16/google-open-sources-live-transcribes-speech-engine/),它可以“在移动设备上使用机器学习算法将音频变成实时字幕”。谷歌的声明称,它正在开源 Live Transcribe 以“让所有开发人员可以为长篇对话提供字幕”。你可以[在 GitHub 上](https://github.com/google/live-transcribe-speech-engine)浏览或下载 Live Transcribe 的源代码。
谷歌还为 Android 和 iOS 开源了[手势跟踪软件](https://venturebeat.com/2019/08/19/google-open-sources-gesture-tracking-ai-for-mobile-devices/),它建立在其 [MediaPipe](https://github.com/google/mediapipe) 机器学习框架之上。该软件结合了三种人工智能组件:手掌探测器、“返回 3D 手点”的模型和手势识别器。据谷歌研究人员称,其目标是改善“跨各种技术领域和平台的用户体验”。该软件的源代码和文档[可在 GitHub 上获得](https://github.com/google/mediapipe/blob/master/mediapipe/docs/hand_tracking_mobile_gpu.md)。
### Twitter 开源 Rezolus 遥测工具
当想到网络故障时,我们想到的往往是影响站点或服务性能的大崩溃或大减速。可能让我们感到意外的是,那些一点点蚕食性能的小尖峰同样重要。为了更容易地检测这些尖峰,Twitter 开发了一个名为 Rezolus 的工具,该公司[已将其开源](https://blog.twitter.com/engineering/en_us/topics/open-source/2019/introducing-rezolus.html)。
>
> 我们现有的按分钟采样的遥测技术未能反映出这些异常现象。这是因为相对于该异常发生的长度,较低的采样率掩盖了这些持续时间大约为 10 秒的异常。这使得很难理解正在发生的事情并调整系统以获得更高的性能。
>
>
>
Rezolus 旨在检测“非常短暂但有时很显著的性能异常”,这些异常仅持续几秒钟。Twitter 已经运行 Rezolus 大约一年,并一直将它收集到的数据“与后端服务日志结合,来确定尖峰的来源”。
如果你对将 Rezolus 添加到可观测性堆栈中的结果感到好奇,请查看 Twitter 的 [GitHub 存储库](https://github.com/twitter/rezolus)中的源代码。
### 日本的 Khara 动画工作室采用 Blender
Blender 被认为是开源的动画和视觉效果软件的黄金标准。它被几家制作公司采用,其中最新的是[日本动漫工作室 Khara](https://www.neowin.net/news/anime-studio-khara-is-planning-to-use-open-source-blender-software/)。
Khara 正在使用 Blender 开发 Evangelion: 3.0+1.0,这是基于流行动漫系列《Neon Genesis Evangelion》的系列电影的最新一部。虽然该电影的制作无法全部在 Blender 中完成,但 Khara 的员工“将从 2020 年 6 月开始使用 Blender 进行大部分工作”。为了强调其对 Blender 和开源的承诺,Khara 宣布“它将作为企业会员加入 Blender 基金会的发展基金”。
### NSA 分享其固件安全工具
继澳大利亚同行[共享他们的一个工具](/article-11241-1.html)之后,美国国家安全局(NSA)正在[公开](https://www.cyberscoop.com/nsa-firmware-open-source-coreboot-stm-pe-eugene-myers/)一个项目的成果,该项目“可以更好地保护机器免受固件攻击”。这个最新发布的软件,以及其他保护固件的开源工作,都可以在 [Coreboot Gerrit 存储库](https://review.coreboot.org/admin/repos)下找到。
这个名为“具有受保护执行的 SMI 传输监视器”(STM-PE)的软件“将与运行 Coreboot 的 x86 处理器配合使用”以防止固件攻击。根据 NSA 高级网络安全实验室的 Eugene Meyers 的说法,STM-PE 采用低级操作系统代码“并将其置于一个盒子中,以便它只能访问它需要访问的设备系统”。这有助于防止篡改,Meyers 说,“这将提高系统的安全性。”
### 其它新闻
* [Linux 内核中的 exFAT?是的!](https://cloudblogs.microsoft.com/opensource/2019/08/28/exfat-linux-kernel/)
* [瓦伦西亚继续支持 Linux 学校发行版](https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/120000-lliurex-desktops)
* [西班牙首个开源卫星](https://hackaday.com/2019/08/15/spains-first-open-source-satellite/)
* [Western Digital 从开放标准到开源芯片的长途旅行](https://www.datacenterknowledge.com/open-source/western-digitals-long-trip-open-standards-open-source-chips)
* [用于自动驾驶汽车多模传感器的 Waymo 开源数据集](https://venturebeat.com/2019/08/21/waymo-open-sources-data-set-for-autonomous-vehicle-multimodal-sensors/)
一如既往地感谢 Opensource.com 的工作人员和版主本周的帮助。
---
via: <https://opensource.com/19/8/news-august-31>
作者:[Scott Nesbitt](https://opensource.com/users/scottnesbitt) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In this edition of our open source news roundup, we take a look two open source releases from Google, Twitter's latest observability tooling, anime studio adopts Blender, and more!
## A double hit of open source from Google
Developers at search engine giant Google have been busy on the open source front lately. In the last two weeks, they've released two very different systems as open source.
The first of those is the [speech engine for Live Transcribe](https://venturebeat.com/2019/08/16/google-open-sources-live-transcribes-speech-engine/), a speech recognition and transcription tool for Android, which "uses machine learning algorithms to turn audio into real-time captions" on mobile devices. Google's announcement states it is making Live Transcribe open source to "let any developer deliver captions for long-form conversations." You can browse or download Live Transcribe's source code [on GitHub](https://github.com/google/live-transcribe-speech-engine).
Google also open sourced [gesture tracking software](https://venturebeat.com/2019/08/19/google-open-sources-gesture-tracking-ai-for-mobile-devices/) for Android and iOS, built on top of its [MediaPipe](https://github.com/google/mediapipe) machine learning framework. The software combines three artificial intelligence components: a palm detector, a model that "returns 3D hand points," and a gesture recognizer. The goal, according to Google's researchers, is to improve "the user experience across a variety of technological domains and platforms." The source code and documentation for the software is [available on GitHub](https://github.com/google/mediapipe/blob/master/mediapipe/docs/hand_tracking_mobile_gpu.md).
## Twitter open sources Rezolus telemetry tool
When you think of network outages, what comes to mind are big crashes or slowdowns that affect the performance of a site or service. What may surprise us is the importance of small blips that slowly eat away at performance. To make detecting those blips easier, Twitter developed a tool called Rezolus which the company [has open sourced](https://blog.twitter.com/engineering/en_us/topics/open-source/2019/introducing-rezolus.html).
Our existing telemetry, which samples minutely, was failing to reflect these anomalies. This was because the anomalies, which were about 10 seconds in duration, were being masked by a low sample rate relative to the length of the anomalies. This made it difficult to understand what was happening and tune the system for higher performance.
Rezolus is designed to detect "very brief but sometimes significant performance anomalies," which only last a few seconds. Twitter has been running Rezolus for about a year and has been using what it collects "with the backend service logs to determine the source of spikes."
If you're curious about adding Rezolus to your Observability stack, check out the source code in Twitter's [GitHub repository](https://github.com/twitter/rezolus).
## Japan's Khara animation studio adopts Blender
Blender is considered the gold standard of open source animation and visual effects software. It's been adopted by several production companies, the latest of which is [Japanese anime studio Khara](https://www.neowin.net/news/anime-studio-khara-is-planning-to-use-open-source-blender-software/).
Khara is using Blender to develop *Evangelion: 3.0+1.0*, the latest installment of the film series based on the popular anime series *Neon Genesis Evangelion*. While the work for the movie won't be completely done in Blender, Khara's employees "will start using Blender for the majority of their work" starting in June, 2020. To underscore its commitment to both Blender and open source, "Khara announced that it would be joining the Blender Foundation’s Development Fund as a corporate member."
## NSA to share its firmware security tool
Following on the heels of its Australian counterpart [sharing one of its tools](https://opensource.com/article/19/8/news-august-17#ASD), the National Security Agency (NSA) is [making available](https://www.cyberscoop.com/nsa-firmware-open-source-coreboot-stm-pe-eugene-myers/) the fruits of a project that "that could better protect machines from firmware attacks." This latest release, along with other open source efforts to protect firmware, can be found under the [Coreboot Gerrit repository](https://review.coreboot.org/admin/repos).
The catchy name SMI Transfer Monitor with Protected Execution (STM-PE) "will work with x86 processors that run Coreboot" to guard against firmware attacks. According to Eugene Meyers of the NSA's Laboratory for Advanced Cybersecurity, STM-PE takes low-level operating system code "and puts it in a box such that it can only access the device system that it needs to access." This helps prevent tampering and, Myers said, "will improve the security of the system."
### In other news
[exFAT in the Linux kernel? Yes!](https://cloudblogs.microsoft.com/opensource/2019/08/28/exfat-linux-kernel/)[Valencia continues its support for Linux school distro](https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/120000-lliurex-desktops)[Spain's First Open Source Satellite](https://hackaday.com/2019/08/15/spains-first-open-source-satellite/)[Western Digital's Long Trip from Open Standards to Open Source Chips](https://www.datacenterknowledge.com/open-source/western-digitals-long-trip-open-standards-open-source-chips)[Waymo open-sources data set for autonomous vehicle multimodal sensors](ttps://venturebeat.com/2019/08/21/waymo-open-sources-data-set-for-autonomous-vehicle-multimodal-sensors/)
*Thanks, as always, to Opensource.com staff members and moderators for their help this week.*
|
11,293 | 在 Fedora 上开启 Go 语言之旅 | https://fedoramagazine.org/getting-started-with-go-on-fedora/ | 2019-09-01T22:37:00 | [
"Go"
] | https://linux.cn/article-11293-1.html | 
[Go](https://golang.org/) 编程语言于 2009 年首次公开发布,此后被广泛使用。特别是,Go 已经成为云基础设施领域的一种代表性语言,例如 [Kubernetes](https://kubernetes.io/)、[OpenShift](https://www.openshift.com/) 或 [Terraform](https://www.terraform.io/) 等大型项目都使用了 Go。
Go 越来越受欢迎的原因是性能好、易于编写高并发的程序、语法简单和编译快。
让我们来看看如何在 Fedora 上开始 Go 语言编程吧。
### 在 Fedora 上安装 Go
Fedora 可以通过官方库简单快速地安装 Go 语言。
```
$ sudo dnf install -y golang
$ go version
go version go1.12.7 linux/amd64
```
既然装好了 Go ,让我们来写个简单的程序,编译并运行。
### 第一个 Go 程序
让我们来用 Go 语言写一波 “Hello, World!”。首先创建 `main.go` 文件,然后输入或者拷贝以下内容。
```
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
```
运行这个程序很简单。
```
$ go run main.go
Hello, World!
```
Go 会在临时目录将 `main.go` 编译成二进制文件并执行,然后删除临时目录。这个命令非常适合在开发过程中快速运行程序,它还凸显了 Go 的编译速度。
编译一个可执行程序就像运行它一样简单。
```
$ go build main.go
$ ./main
Hello, World!
```
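顺带一提,Go 的交叉编译同样简单。例如(仅为示例),在 Fedora 上为 Windows 构建同一个程序,只需设置两个环境变量:
```
$ GOOS=windows GOARCH=amd64 go build -o main.exe main.go
```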
### 使用 Go 的模块
Go 1.11 和 1.12 引入了对模块的初步支持。模块可用于管理应用程序的各种依赖包。Go 通过 `go.mod` 和 `go.sum` 这两个文件,显式地定义依赖包的版本。
为了演示如何使用模块,让我们为 `hello world` 程序添加一个依赖。
在更改代码之前,需要初始化模块。
```
$ go mod init helloworld
go: creating new go.mod: module helloworld
$ ls
go.mod main main.go
```
然后按照以下内容修改 `main.go` 文件。
```
package main
import "github.com/fatih/color"
func main () {
color.Blue("Hello, World!")
}
```
在修改后的 `main.go` 中,不再使用标准库 `fmt` 来打印 “Hello, World!”,而是使用一个第三方库来打印彩色文本。
让我们来跑一下新版的程序吧。
```
$ go run main.go
Hello, World!
```
因为程序依赖于 `github.com/fatih/color` 库,它需要在编译前下载所有依赖包。 然后把依赖包都添加到 `go.mod` 中,并将它们的版本号和哈希值记录在 `go.sum` 中。
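你可以亲自查看这两个文件,或者用 `go list` 列出模块的全部依赖(示例命令;具体的版本号以你实际下载到的为准):
```
$ cat go.mod
$ go list -m all
```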
---
via: <https://fedoramagazine.org/getting-started-with-go-on-fedora/>
作者:[Clément Verna](https://fedoramagazine.org/author/cverna/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[hello-wn](https://github.com/hello-wn) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The [Go](https://golang.org/) programming language was first publicly announced in 2009, since then the language has become widely adopted. In particular Go has become a reference in the world of cloud infrastructure with big projects like [Kubernetes](https://kubernetes.io/), [OpenShift](https://www.openshift.com/) or [Terraform](https://www.terraform.io/) for example.
Some of the main reasons for Go’s increasing popularity are the performances, the ease to write fast concurrent application, the simplicity of the language and fast compilation time. So let’s see how to get started with Go on Fedora.
## Install Go in Fedora
Fedora provides an easy way to install the Go programming language via the official repository.
```
$ sudo dnf install -y golang
$ go version
go version go1.12.7 linux/amd64
```
Now that Go is installed, let’s write a simple program, compile it and execute it.
## First program in Go
Let’s write the traditional “Hello, World!” program in Go. First create a *main.go* file and type or copy the following.
package main import "fmt" func main() { fmt.Println("Hello, World!") }
Running this program is quite simple.
$ go run main.go Hello, World!
This will build a binary from main.go in a temporary directory, execute the binary, then delete the temporary directory. This command is really great to quickly run the program during development and it also highlights the speed of Go compilation.
Building an executable of the program is as simple as running it.
$ go build main.go $ ./main Hello, World!
## Using Go modules
Go 1.11 and 1.12 introduce preliminary support for modules. Modules are a solution to manage application dependencies. This solution is based on 2 files *go.mod* and *go.sum* used to explicitly define the version of the dependencies.
To show how to use modules, let’s add a dependency to the hello world program.
Before changing the code, the module needs to be initialized.
$ go mod init helloworld go: creating new go.mod: module helloworld $ ls go.mod main main.go
Next modify the main.go file as follow.
package main import "github.com/fatih/color" func main () { color.Blue("Hello, World!") }
In the modified main.go, instead of using the standard library “*fmt*” to print the “Hello, World!”. The application uses an external library which makes it easy to print text in color.
Let’s run this version of the application.
$ go run main.go Hello, World!
Now that the application is depending on the *github.com/fatih/color* library, it needs to download all the dependencies before compiling it. The list of dependencies is then added to *go.mod* and the exact version and commit hash of these dependencies is recorded in *go.sum*.
## hammerhead corvette
Thank you for this post ! I guess, I could do a Getting started with flatpaks or The Rust programming language.
## Clément Verna
Yes please 🙂 that would be great. You can check the contributing guide here https://docs.fedoraproject.org/en-US/fedora-magazine/contributing/
## MX
And now that setting up any go-paths is not needed?
## Clément Verna
If you use modules, yes it is not needed anymore. you can use the go env command to check all the environment variables.
## Robert P. J. Day
It would be massively useful to show how to add Go 1.13-rc1 to an existing Go 1.12 Fedora system, so users can start playing with 1.13.
## Yazan Al Monshed
Thankes you, but that not work on Sliverblue!. How can install it in SB?
## MX
Sorry did not understand the question.
Are you sure to put this from the run in the toolbox?
## Clément Verna
You should probably check Toolbox https://docs.fedoraproject.org/en-US/fedora-silverblue/toolbox/
## DaveO
Caveat – for newbies to golang like me. Don’t create a directory named “go” in you home directory to do your golang programming. You will get error; “$GOPATH/go.mod exists but should not”.
## e4ert
better using crystal
nice language
## leslie Satenstein
I looked at Rust, and it appeared interesting. Rust provides a false sense of security.
It does simple memory management by having the compiler verify when malloc’ed space goes out of scope.
An experienced C programmer, by eyeballing his program, does the same. Generally though, the programmer with Rust will have to verify that the compiler missed some scope rules
The C programmer, due to strong typing will have to verify that his eyeballing of the code or using valgrind, missed some scope rules.
I’ll stick to C. |
11,294 | Emacs 注释中的拼写检查 | https://emacsredux.com/blog/2019/05/24/spell-checking-comments/ | 2019-09-01T23:03:00 | [
"拼写"
] | https://linux.cn/article-11294-1.html | 
我出了名的容易拼错单词(特别是在播客当中)。谢天谢地 Emacs 内置了一个名为 `flyspell` 的超棒模式来帮助像我这样的可怜的打字员。flyspell 会在你输入时突出显示拼错的单词 (也就是实时的) 并提供有用的快捷键来快速修复该错误。
大多数人通常只对派生自 `text-mode` 的主模式(比如 `markdown-mode`、`adoc-mode`)启用 `flyspell`,但是它对程序员也有所帮助,可以指出他们在注释中的拼写错误。所需要的只是启用 `flyspell-prog-mode`。我通常在所有的编程模式中(至少在 `prog-mode` 派生的模式中)都启用它:
```
(add-hook 'prog-mode-hook #'flyspell-prog-mode)
```
现在当你在注释中输入错误时,就会得到即时反馈了。要修复单词只需要将光标置于单词后,然后按下 `C-c $` (`M-x flyspell-correct-word-before-point`)。(还有许多其他方法可以用 `flyspell` 来纠正拼写错误的单词,但为了简单起见,我们暂时忽略它们。)

今天的分享就到这里!我要继续修正这些讨厌的拼写错误了!
---
via: <https://emacsredux.com/blog/2019/05/24/spell-checking-comments/>
作者:[Bozhidar Batsov](https://emacsredux.com) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # Spell Checking Comments
I’m notorious for all the typos I make. Thankfully Emacs features an awesome built-in mode named `flyspell` to help poor typists like me. Flyspell highlights misspelled words as you type (a.k.a. on the fly) and has useful keybindings to quickly fix them.

Most people typically enable `flyspell` only for major modes derived from `text-mode` (e.g. `markdown-mode`, `adoc-mode`), but it can really help programmers as well by pointing out typos they make in comments. All you need to do is enable `flyspell-prog-mode`. I typically enable it for all programming modes like this:

```
(add-hook 'prog-mode-hook #'flyspell-prog-mode)
```

Now you’ll get instant feedback when you make some typo in a comment. To fix a word just press `C-c $` (`M-x flyspell-correct-word-before-point`), while your cursor is behind it.
That’s all I have for you today! Keep fixing those nasty typos! |
11,295 | 使用 Python 函数进行模块化 | https://opensource.com/article/19/7/get-modular-python-functions | 2019-09-01T23:43:25 | [
"Python"
] | /article-11295-1.html |
>
> 使用 Python 函数来最大程度地减少重复任务编码工作量。
>
>
>

你是否对函数、类、方法、库和模块等花哨的编程术语感到困惑?你是否在与变量作用域斗争?无论你是自学成才的还是经过正式培训的程序员,代码的模块化都会令人困惑。但是类和库鼓励模块化代码,因为模块化代码意味着只需构建一个多用途代码块集合,就可以在许多项目中使用它们来减少编码工作量。换句话说,如果你按照本文对 [Python](https://www.python.org/) 函数的研究,你将找到更聪明的工作方法,这意味着更少的工作。
本文假定你对 Python 很熟(LCTT 译注:稍微熟悉就可以),并且可以编写和运行一个简单的脚本。如果你还没有使用过 Python,请首先阅读我的文章:[Python 简介](https://opensource.com/article/17/10/python-10)。
### 函数
函数是迈向模块化过程中重要的一步,因为它们将重复性的操作形式化了。如果在你的程序中,有一个任务需要反复执行,那么你可以将代码放入一个函数中,根据需要随时调用该函数。这样,你只需编写一次代码,就可以随意使用它。
以下一个简单函数的示例:
```
#!/usr/bin/env python3
import time
def Timer():
print("Time is " + str(time.time() ))
```
创建一个名为 `mymodularity` 的目录,并将以上函数代码保存为该目录下的 `timestamp.py`。
除了这个函数,在 `mymodularity` 目录中创建一个名为 `__init__.py` 的文件,你可以在文件管理器或 bash shell 中执行此操作:
```
$ touch mymodularity/__init__.py
```
现在,你已经创建了属于你自己的 Python 库(Python 中称为“模块”),名为 `mymodularity`。它不是一个特别有用的模块,因为它所做的只是导入 `time` 模块并打印一个时间戳,但这只是一个开始。
要使用你的函数,像对待任何其他 Python 模块一样对待它。以下是一个小应用,它使用你的 `mymodularity` 软件包来测试 Python `sleep()` 函数的准确性。将此文件保存为 `sleeptest.py`,注意要在 `mymodularity` 文件夹 *之外*,因为如果你将它保存在 `mymodularity` *里面*,那么它将成为你的包中的一个模块,你肯定不希望这样。
```
#!/usr/bin/env python3
import time
from mymodularity import timestamp
print("Testing Python sleep()...")
# modularity
timestamp.Timer()
time.sleep(3)
timestamp.Timer()
```
在这个简单的脚本中,你从 `mymodularity` 包中调用 `timestamp` 模块两次。从包中导入模块时,通常的语法是从包中导入你所需的模块,然后使用 *模块名称 + 一个点 + 要调用的函数名*(例如 `timestamp.Timer()`)。
你调用了两次 `Timer()` 函数,所以如果你的 `timestamp` 模块比这个简单的例子复杂些,那么你将节省大量重复代码。
保存文件并运行:
```
$ python3 ./sleeptest.py
Testing Python sleep()...
Time is 1560711266.1526039
Time is 1560711269.1557732
```
根据测试,Python 中的 `sleep` 函数非常准确:在三秒钟等待之后,时间戳成功且正确地增加了 3,在微秒单位上差距很小。
Python 库的结构看起来可能令人困惑,但其实它并不是什么魔法。Python 的设计就是如此:一个包含 Python 代码、并附带一个 `__init__.py` 文件的目录,会被当作一个包来对待,并且 Python 会首先在当前目录中查找可用模块。这就是为什么语句 `from mymodularity import timestamp` 有效的原因:Python 在当前目录查找名为 `mymodularity` 的目录,然后查找 `timestamp.py` 文件。
你在这个例子中所做的功能和以下这个非模块化的版本是一样的:
```
#!/usr/bin/env python3
import time
from mymodularity import timestamp
print("Testing Python sleep()...")
# no modularity
print("Time is " + str(time.time() ) )
time.sleep(3)
print("Time is " + str(time.time() ) )
```
对于这样一个简单的例子,其实没有必要以这种方式编写测试,但是对于编写自己的模块来说,最佳实践是你的代码是通用的,可以将它重用于其他项目。
通过在调用函数时传递信息,可以使代码更通用。例如,假设你想要用模块来测试的不是 *系统* 的 `sleep` 函数,而是 *用户* 自己对时间流逝的感知能力,那么可以更改 `timestamp` 代码,使它接受一个名为 `msg` 的传入变量,它将是一个字符串,控制每次调用 `timestamp` 时如何显示:
```
#!/usr/bin/env python3
import time
# 更新代码
def Timer(msg):
print(str(msg) + str(time.time() ) )
```
现在函数比以前更抽象了。它仍会打印时间戳,但是它为用户打印的内容 `msg` 还是未定义的。这意味着你需要在调用函数时定义它。
`Timer` 函数接受的 `msg` 参数是随便命名的,你可以使用参数 `m`、`message` 或 `text`,或是任何对你来说有意义的名称。重要的是,当调用 `timestamp.Timer` 函数时,它接收一个文本作为其输入,将接收到的任何内容放入 `msg` 变量中,并使用该变量完成任务。
以下是一个测试用户能否正确感知时间流逝的新程序:
```
#!/usr/bin/env python3
from mymodularity import timestamp
print("Press the RETURN key. Count to 3, and press RETURN again.")
input()
timestamp.Timer("Started timer at ")
print("Count to 3...")
input()
timestamp.Timer("You slept until ")
```
将你的新程序保存为 `response.py`,运行它:
```
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.
Started timer at 1560714482.3772075
Count to 3...
You slept until 1560714484.1628013
```
### 函数和所需参数
新版本的 `timestamp` 模块现在 *需要* 一个 `msg` 参数。这很重要,因为你的第一个应用程序将无法运行,因为它没有将字符串传递给 `timestamp.Timer` 函数:
```
$ python3 ./sleeptest.py
Testing Python sleep()...
Traceback (most recent call last):
File "./sleeptest.py", line 8, in <module>
timestamp.Timer()
TypeError: Timer() missing 1 required positional argument: 'msg'
```
你能修复你的 `sleeptest.py` 应用程序,以便它能够与更新后的模块一起正确运行吗?
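提示:一种可行的修复方式是在每次调用 `Timer` 时传入一个字符串(以下只是示例写法,提示文字可以随意替换):

```
#!/usr/bin/env python3

import time
from mymodularity import timestamp

print("Testing Python sleep()...")

# 新版 Timer() 需要一个 msg 参数
timestamp.Timer("Time is ")
time.sleep(3)
timestamp.Timer("Time is ")
```

另一种思路是在定义函数时给 `msg` 一个默认值,例如 `def Timer(msg="Time is "):`,这样不带参数的旧式调用也能继续工作。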
### 变量和函数
通过设计,函数限制了变量的范围。换句话说,如果在函数内创建一个变量,那么这个变量 *只* 在这个函数内起作用。如果你尝试在函数外部使用函数内部出现的变量,就会发生错误。
下面是对 `response.py` 应用程序的修改,尝试从 `timestamp.Timer()` 函数外部打印 `msg` 变量:
```
#!/usr/bin/env python3
from mymodularity import timestamp
print("Press the RETURN key. Count to 3, and press RETURN again.")
input()
timestamp.Timer("Started timer at ")
print("Count to 3...")
input()
timestamp.Timer("You slept for ")
print(msg)
```
试着运行它,查看错误:
```
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.
Started timer at 1560719527.7862902
Count to 3...
You slept for 1560719528.135406
Traceback (most recent call last):
File "./response.py", line 15, in <module>
print(msg)
NameError: name 'msg' is not defined
```
应用程序返回一个 `NameError` 消息,因为没有定义 `msg`。这看起来令人困惑,因为你编写的代码定义了 `msg`,但你对代码的了解比 Python 更深入。调用函数的代码,不管函数是出现在同一个文件中,还是打包为模块,都不知道函数内部发生了什么。一个函数独立地执行它的计算,并返回你想要它返回的内容。这其中所涉及的任何变量都只是 *局部的*:它们只存在于函数中,并且只存在于函数完成其目的所需的时间内。
#### Return 语句
如果你的应用程序需要函数中特定包含的信息,那么使用 `return` 语句让函数在运行后返回有意义的数据。
时间就是金钱,所以修改 `timestamp` 函数,以使其用于一个虚构的收费系统:
```
#!/usr/bin/env python3
import time
def Timer(msg):
print(str(msg) + str(time.time() ) )
charge = .02
return charge
```
现在,`timestamp` 模块每次调用都收费 2 美分,但最重要的是,它返回每次调用时所收取的金额。
以下一个如何使用 `return` 语句的演示:
```
#!/usr/bin/env python3
from mymodularity import timestamp
print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")
total = 0
while True:
kbd = input()
if kbd.lower() == "q":
print("You owe $" + str(total) )
exit()
else:
charge = timestamp.Timer("Time is ")
total = total+charge
```
在这个示例代码中,变量 `charge` 接收 `timestamp.Timer()` 函数的返回值,也就是函数返回的任何内容。在本例中,函数返回一个数字,因此使用一个名为 `total` 的新变量来跟踪已经收取了多少费用。当应用程序收到要退出的信号时,它会打印总花费:
```
$ python3 ./charge.py
Press RETURN for the time (costs 2 cents).
Press Q RETURN to quit.
Time is 1560722430.345412
Time is 1560722430.933996
Time is 1560722434.6027434
Time is 1560722438.612629
Time is 1560722439.3649364
q
You owe $0.1
```
#### 内联函数
函数不必在单独的文件中创建。如果你只是针对一个任务编写一个简短的脚本,那么在同一个文件中编写函数可能更有意义。唯一的区别是你不必导入自己的模块,但函数的工作方式是一样的。以下是时间测试应用程序的最新迭代:
```
#!/usr/bin/env python3
import time
total = 0
def Timer(msg):
print(str(msg) + str(time.time() ) )
charge = .02
return charge
print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")
while True:
kbd = input()
if kbd.lower() == "q":
print("You owe $" + str(total) )
exit()
else:
charge = Timer("Time is ")
total = total+charge
```
它没有外部依赖(Python 发行版中包含 `time` 模块),产生与模块化版本相同的结果。它的优点是一切都位于一个文件中,缺点是你不能在其他脚本中使用 `Timer()` 函数,除非你手动复制和粘贴它。
#### 全局变量
在函数外部创建的变量没有限制作用域,因此它被视为 *全局* 变量。
全局变量的一个例子是在 `charge.py` 中用于跟踪当前花费的 `total` 变量。`total` 是在函数之外创建的,因此它绑定到应用程序而不是特定函数。
应用程序中的函数可以访问全局变量,但要将变量传入导入的模块,你必须像发送 `msg` 变量一样将变量传入模块。
全局变量很方便,因为它们似乎随时随地都可用,但也很难跟踪它们,很难知道哪些变量不再需要了但是仍然在系统内存中停留(尽管 Python 有非常好的垃圾收集机制)。
但是,全局变量很重要,因为不是所有的变量都可以是函数或类的本地变量。现在你知道了如何向函数传入变量并获得返回,事情就变得容易了。
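作为补充,下面这个小示例(仅为示意)演示了在函数内读取与修改全局变量的区别:读取不需要任何声明,而要在函数内给全局变量重新赋值,必须先用 `global` 关键字声明,否则 Python 只会创建一个同名的局部变量:

```
total = 0

def show_total():
    # 读取全局变量,无需任何声明
    print("total is " + str(total))

def add_charge():
    # 修改全局变量,必须先用 global 声明
    global total
    total = total + .02

show_total()   # total is 0
add_charge()
show_total()   # total is 0.02
```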
### 总结
你已经学到了很多关于函数的知识,所以开始将它们放入你的脚本中 —— 如果它不是作为单独的模块,那么作为代码块,你不必在一个脚本中编写多次。在本系列的下一篇文章中,我将介绍 Python 类。
---
via: <https://opensource.com/article/19/7/get-modular-python-functions>
作者:[Seth Kenlon](https://opensource.com/users/seth/users/xd-deng/users/nhuntwalker/users/don-watkins) 选题:[lujun9972](https://github.com/lujun9972) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
11,296 | 《代码英雄》第一季(2):操作系统战争(下)Linux 崛起 | https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux | 2019-09-02T10:18:00 | [
"代码英雄"
] | https://linux.cn/article-11296-1.html |
>
> <ruby> 代码英雄 <rp> ( </rp> <rt> Command Line Heroes </rt> <rp> ) </rp></ruby>是世界领先的企业开源软件解决方案供应商<ruby> 红帽 <rp> ( </rp> <rt> Red Hat </rt> <rp> ) </rp></ruby>精心制作的音频播客,讲述开发人员、程序员、黑客、极客和开源反叛者如何彻底改变技术前景的真实史诗。该音频博客邀请到了谷歌、NASA 等重量级企业的众多技术大牛共同讲述开源、操作系统、容器、DevOps、混合云等发展过程中的动人故事。点击链接<https://www.redhat.com/en/command-line-heroes> 查看更多信息。
>
>
>

本文是《[代码英雄](https://www.redhat.com/en/command-line-heroes)》系列播客[第一季(2):操作系统战争(下)](https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux) 的[音频](https://dts.podtrac.com/redirect.mp3/audio.simplecast.com/2199861a.mp3)脚本。
>
> 微软帝国控制着 90% 的用户。操作系统的完全标准化似乎是板上钉钉的事了。但是一个不太可能的英雄出现在开源反叛组织中。戴着眼镜,温文尔雅的<ruby> 林纳斯·托瓦兹 <rt> Linus Torvalds </rt></ruby>免费发布了他的 Linux® 程序。微软打了个趔趄,并且开始重整旗鼓而来,战场从个人电脑转向互联网。
>
>
>
**Saron Yitbarek:** 这玩意开着的吗?让我们进一段史诗般的星球大战的开幕吧,开始了。
配音:第二集:Linux® 的崛起。微软帝国控制着 90% 的桌面用户。操作系统的全面标准化似乎是板上钉钉的事了。然而,互联网的出现将战争的焦点从桌面转向了企业,在该领域,所有商业组织都争相构建自己的服务器。*[00:00:30]*与此同时,一个不太可能的英雄出现在开源反叛组织中。固执、戴着眼镜的 <ruby> 林纳斯·托瓦兹 <rt> Linus Torvalds </rt></ruby>免费发布了他的 Linux 系统。微软打了个趔趄,并且开始重整旗鼓而来。
**Saron Yitbarek:** 哦,我们书呆子就是喜欢那样。上一次我们讲到哪了?苹果和微软互相攻伐,试图在一场争夺桌面用户的战争中占据主导地位。*[00:01:00]* 在第一集的结尾,我们看到微软获得了大部分的市场份额。很快,由于互联网的兴起以及随之而来的开发者大军,整个市场都经历了一场地震。互联网将战场从在家庭和办公室中的个人电脑用户转移到拥有数百台服务器的大型商业客户中。
这意味着巨量资源的迁移。突然间,所有相关企业不仅被迫为服务器空间和网站建设付费,而且还必须集成软件来进行资源跟踪和数据库监控等工作。*[00:01:30]* 你需要很多开发人员来帮助你。至少那时候大家都是这么做的。
在操作系统之战的第二部分,我们将看到优先级的巨大转变,以及像林纳斯·托瓦兹和<ruby> 理查德·斯托尔曼 <rt> Richard Stallman </rt></ruby>这样的开源反逆者是如何成功地在微软和整个软件行业的核心地带引发恐惧的。
我是 Saron Yitbarek,你现在收听的是代码英雄,一款红帽公司原创的播客节目。*[00:02:00]* 每一集,我们都会给你带来“从码开始”改变技术的人的故事。
好。想象一下你是 1991 年时的微软。你自我感觉良好,对吧?满怀信心。确立了全球主导的地位感觉不错。你已经掌握了与其他企业合作的艺术,但是仍然将大部分开发人员、程序员和系统管理员排除在联盟之外,而他们才是真正的步兵。*[00:02:30]* 这时出现了个叫林纳斯·托瓦兹的芬兰极客。他和他的开源程序员团队正在开始发布 Linux,这个操作系统内核是由他们一起编写出来的。
坦白地说,如果你是微软公司,你并不会太在意 Linux,甚至不太关心开源运动,但是最终,Linux 的规模变得如此之大,以至于微软不可能不注意到。*[00:03:00]* Linux 第一个版本出现在 1991 年,当时大概有 1 万行代码。十年后,变成了 300 万行代码。如果你想知道,今天则是 2000 万行代码。
*[00:03:30]* 让我们停留在 90 年代初一会儿。那时 Linux 还没有成为我们现在所知道的庞然大物。这个奇怪的病毒式的操作系统只是正在这个星球上蔓延,全世界的极客和黑客都爱上了它。那时候我还太年轻,但有点希望我曾经经历过那个时候。在那个时候,发现 Linux 就如同进入了一个秘密社团一样。就像其他人分享地下音乐混音带一样,程序员与朋友们分享 Linux CD 集。
开发者 Tristram Oaten *[00:03:40]* 讲讲你 16 岁时第一次接触 Linux 的故事吧。
**Tristram Oaten:** 我和我的家人去了红海的 Hurghada 潜水度假。那是一个美丽的地方,强烈推荐。第一天,我喝了自来水。也许,我妈妈跟我说过不要这么做。我整个星期都病得很厉害,没有离开旅馆房间。*[00:04:00]* 当时我只带了一台新安装了 Slackware Linux 的笔记本电脑,我听说过这玩意并且正在尝试使用它。所有的东西都在 8 张 cd 里面。这种情况下,我只能整个星期都去了解这个外星一般的系统。我阅读手册,摆弄着终端。我记得当时我甚至不知道一个点(表示当前目录)和两个点(表示前一个目录)之间的区别。
*[00:04:30]* 我一点头绪都没有。犯过很多错误,但慢慢地,在这种强迫的孤独中,我突破了障碍,开始理解并明白命令行到底是怎么回事。假期结束时,我没有看过金字塔、尼罗河等任何埃及遗址,但我解锁了现代世界的一个奇迹。我解锁了 Linux,接下来的事大家都知道了。
**Saron Yitbarek:** 你会从很多人那里听到关于这个故事的不同说法。访问 Linux 命令行是一种革命性的体验。
**David Cantrell:** *[00:05:00]* 它给了我源代码。我当时的感觉是,“太神奇了”。
**Saron Yitbarek:** 我们正在参加一个名为 Flock to Fedora 的 2017 年 Linux 开发者大会。
**David Cantrell:** ……非常有吸引力。我觉得我对这个系统有了更多的控制力,它越来越吸引我。我想,从 1995 年我第一次编译 Linux 内核那时起,我就迷上了它。
**Saron Yitbarek:** 开发者 David Cantrell 与 Joe Brockmeier。
**Joe Brockmeier:** *[00:05:30]* 我在翻看廉价软件时发现了一套四张 CD 的 Slackware Linux。它看起来非常令人兴奋而且很有趣,所以我把它带回家,安装在第二台电脑上,开始摆弄它。有两件事情让我感到很兴奋:一个是,我运行的不是 Windows;另一个是 Linux 的开源特性。
**Saron Yitbarek:** *[00:06:00]* 某种程度上来说,对命令行的使用总是存在的。在开源真正开始流行还要早的几十年前,人们(至少在开发人员中是这样)总是希望能够做到完全控制。让我们回到操作系统大战之前的那个时代,在苹果和微软为他们的 GUI 而战之前。那时也有代码英雄。<ruby> 保罗·琼斯 <rt> Paul Jones </rt></ruby>教授(在线图书馆 ibiblio.org 的负责人)在那个古老的时代,就是一名开发人员。
**Paul Jones:** *[00:06:30]* 从本质上讲,互联网在那个时候客户端-服务器架构还是比较少的,而更多的是点对点架构的。确实,我们会说,某种 VAX 到 VAX 的连接(LCTT 译注:DEC 的一种操作系统),某种科学工作站到科学工作站的连接。这并不意味着没有客户端-服务端的架构及应用程序,但这的确意味着,最初的设计是思考如何实现点对点,*[00:07:00]* 它与 IBM 一直在做的东西相对立。IBM 给你的只有哑终端,这种终端只能让你管理用户界面,却无法让你像真正的终端一样为所欲为。
**Saron Yitbarek:** 图形用户界面在普通用户中普及的同时,在工程师和开发人员中总是存在着一股相反的力量。早在 Linux 出现之前的二十世纪七八十年代,这股力量就存在于 Emacs 和 GNU 中。有了斯托尔曼的自由软件基金会后,总有某些人想要使用命令行,但上世纪 90 年代的 Linux 提供了前所未有的东西。
*[00:07:30]* Linux 和其他开源软件的早期爱好者是都是先驱。我正站在他们的肩膀上。我们都是。
你现在收听的是代码英雄,一款由红帽公司原创的播客。这是操作系统大战的第二部分:Linux 崛起。
**Steven Vaughan-Nichols:** 1998 年的时候,情况发生了变化。
**Saron Yitbarek:** *[00:08:00]* Steven Vaughan-Nichols 是 zdnet.com 的特约编辑,他已经写了几十年关于技术商业方面的文章了。他将向我们讲述 Linux 是如何慢慢变得越来越流行,直到自愿贡献者的数量远远超过了在 Windows 上工作的微软开发人员的数量的。不过,Linux 从未真正追上微软桌面客户的数量,这也许就是微软最开始时忽略了 Linux 及其开发者的原因。Linux 真正大放光彩的地方是在服务器机房。当企业开始线上业务时,每个企业都需要一个满足其需求的独特编程解决方案。
*[00:08:30]* Windows NT 于 1993 年问世,当时它已经在与其他的服务器操作系统展开竞争了,但是许多开发人员都在想,“既然我可以通过 Apache 构建出基于 Linux 的廉价系统,那我为什么要购买 AIX 设备或大型 Windows 设备呢?”关键点在于,Linux 代码已经开始渗透到几乎所有网上的东西中。
**Steven Vaughan-Nichols:** *[00:09:00]* 令微软感到惊讶的是,它开始意识到,Linux 实际上已经开始有一些商业应用,不是在桌面环境,而是在商业服务器上。因此,他们发起了一场运动,我们称之为 FUD - <ruby> 恐惧、不确定和怀疑 <rt> fear, uncertainty and double </rt></ruby>。他们说,“哦,Linux 这玩意,真的没有那么好。它不太可靠。你一点都不能相信它”。
**Saron Yitbarek:** 这种软宣传式的攻击持续了一段时间。微软也不是唯一一个对 Linux 感到紧张的公司。这其实是整个行业在对抗这个奇怪新人的挑战。*[00:09:30]* 例如,任何与 UNIX 有利害关系的人都可能将 Linux 视为篡夺者。有一个案例很著名,那就是 SCO 组织(它发行过一种 UNIX 版本)在过去 10 多年里发起一系列的诉讼,试图阻止 Linux 的传播。SCO 最终失败而且破产了。与此同时,微软一直在寻找机会,他们必须要采取动作,只是不清楚具体该怎么做。
**Steven Vaughan-Nichols:** *[00:10:00]* 让微软真正担心的是,第二年,在 2000 年的时候,IBM 宣布,他们将于 2001 年投资 10 亿美元在 Linux 上。现在,IBM 已经不再涉足个人电脑业务。(那时)他们还没有走出去,但他们正朝着这个方向前进,他们将 Linux 视为服务器和大型计算机的未来,在这一点上,剧透警告,IBM 是正确的。*[00:10:30]* Linux 将主宰服务器世界。
**Saron Yitbarek:** 这已经不再仅仅是一群黑客喜欢他们对命令行的绝地武士式的控制了。金钱的投入对 Linux 助力极大。<ruby> Linux 国际 <rt> Linux International </rt></ruby>的执行董事 John “Mad Dog” Hall 有一个故事可以解释为什么会这样。我们通过电话与他取得了联系。
**John Hall:** *[00:11:00]* 我有一个名叫 Dirk Holden 的朋友,他是德国德意志银行的系统管理员,他也参与了个人电脑上早期 X Windows 系统图形项目的工作。有一天我去银行拜访他,我说:“Dirk,你银行里有 3000 台服务器,用的都是 Linux。为什么不用 Microsoft NT 呢?”*[00:11:30]* 他看着我说:“是的,我有 3000 台服务器,如果使用微软的 Windows NT 系统,我需要 2999 名系统管理员。”他继续说道:“而使用 Linux,我只需要四个。”这真是完美的答案。
**Saron Yitbarek:** 程序员们着迷的这些东西恰好对大公司也极具吸引力。但由于 FUD 的作用,一些企业对此持谨慎态度。*[00:12:00]* 他们听到开源,就想:“开源。这看起来不太可靠,很混乱,充满了 BUG”。但正如那位银行经理所指出的,金钱有一种有趣的方式,可以说服人们克服困境。甚至那些只需要网站的小公司也加入了 Linux 阵营。与一些昂贵的专有选择相比,使用一个廉价的 Linux 系统在成本上是无法比拟的。如果你是一家雇佣专业人员来构建网站的商店,那么你肯定想让他们使用 Linux。
让我们快进几年。Linux 运行着每个人的网站。Linux 已经征服了服务器世界,然后智能手机也随之诞生。*[00:12:30]* 当然,苹果和他们的 iPhone 占据了相当大的市场份额,而且微软也希望能进入这个市场,但令人惊讶的是,Linux 也在那里,已经做好准备了,迫不及待要大展拳脚。
作家兼记者 James Allworth。
**James Allworth:** 肯定还有容纳第二个竞争者的空间,那本可以是微软,但是实际上却是 Android,而 Android 基本上是基于 Linux 的。众所周知,Android 被谷歌所收购,现在运行在世界上大部分的智能手机上,谷歌在 Linux 的基础上创建了 Android。*[00:13:00]* Linux 使他们能够以零成本从一个非常复杂的操作系统开始。他们成功地实现了这一目标,最终将微软挡在了下一代设备之外,至少从操作系统的角度来看是这样。
**Saron Yitbarek:** *[00:13:30]* 这可是个大地震,很大程度上,微软有被埋没的风险。John Gossman 是微软 Azure 团队的首席架构师。他还记得当时困扰公司的困惑。
**John Gossman:** 像许多公司一样,微软也非常担心知识产权污染。他们认为,如果允许开发人员使用开源代码,那么他们可能只是将一些代码复制并粘贴到某些产品中,就会让某种病毒式的许可证生效从而引发未知的风险……他们也很困惑,*[00:14:00]* 我认为,这跟公司文化有关,很多公司,包括微软,都对开源开发的意义和商业模式之间的分歧感到困惑。有一种观点认为,开源意味着你所有的软件都是免费的,人们永远不会付钱。
**Saron Yitbarek:** 任何投资于旧的、专有软件模型的人都会觉得这里发生的一切对他们构成了威胁。当你威胁到像微软这样的大公司时,是的,他们一定会做出反应。*[00:14:30]* 他们推动所有这些 FUD —— 恐惧、不确定性和怀疑是有道理的。当时,商业运作的方式基本上就是相互竞争。不过,如果是其他公司的话,他们可能还会一直怀恨在心,抱残守缺,但到了 2013 年的微软,一切都变了。
微软的云计算服务 Azure 上线了,令人震惊的是,它从第一天开始就提供了 Linux 虚拟机。*[00:15:00]* <ruby> 史蒂夫·鲍尔默 <rt> Steve Ballmer </rt></ruby>,这位把 Linux 称为癌症的首席执行官,已经离开了,代替他的是一位新的有远见的首席执行官<ruby> 萨提亚·纳德拉 <rt> Satya Nadella </rt></ruby>。
**John Gossman:** 萨提亚有不同的看法。他属于另一个世代。比保罗、比尔和史蒂夫更年轻的世代,他对开源有不同的看法。
**Saron Yitbarek:** 还是来自微软 Azure 团队的 John Gossman。
**John Gossman:** *[00:15:30]* 大约四年前,出于实际需要,我们在 Azure 中添加了 Linux 支持。如果访问任何一家企业客户,你都会发现他们并不是正在纠结要使用 Windows 还是使用 Linux、使用 .net 还是使用 Java <sup> TM</sup> 。他们在很久以前就做出了决定 —— 那些争论大约是 15 年前的事了。*[00:16:00]* 现在,我见过的每一家公司都混合了 Linux 和 Java、Windows 和 .net、SQL Server、Oracle 和 MySQL —— 基于专有源代码的产品和开放源代码的产品。
如果你打算运维一个云服务,允许这些公司在云上运行他们的业务,那么你根本不能告诉他们,“你可以使用这个软件,但你不能使用那个软件。”
**Saron Yitbarek:** *[00:16:30]* 这正是萨提亚·纳德拉采纳的哲学思想。2014 年秋季,他站在舞台上,希望传递一个重要信息。“微软爱 Linux”。他接着说,“20% 的 Azure 业务量已经是 Linux 了,微软将始终对 Linux 发行版提供一流的支持。”没有哪怕一丝对开源的宿怨。
为了说明这一点,在他们的背后有一个巨大的标志,上面写着:“Microsoft ❤️ Linux”。哇噢。对我们中的一些人来说,这种转变有点令人震惊,但实际上,无需如此震惊。下面是 Steven Levy,一名科技记者兼作家。
**Steven Levy:** *[00:17:00]* 当你在踢足球的时候,如果草坪变滑了,那么你也许会换一种不同的鞋子。他们当初就是这么做的。*[00:17:30]* 他们不能否认现实,而且他们里面也有聪明人,所以他们必须意识到,这就是这个世界的运行方式,不管他们早些时候说了什么,即使他们对之前的言论感到尴尬,但是让他们之前关于开源多么可怕的言论影响到现在明智的决策那才真的是疯了。
**Saron Yitbarek:** 微软低下了它高傲的头。你可能还记得苹果公司,经过多年的孤立无援,最终转向与微软构建合作伙伴关系。现在轮到微软进行 180 度转变了。*[00:18:00]* 经过多年的与开源方式的战斗后,他们正在重塑自己。要么改变,要么死亡。Steven Vaughan-Nichols。
**Steven Vaughan-Nichols:** 即使是像微软这样规模的公司也无法与数千个开发着包括 Linux 在内的其它大项目的开源开发者竞争。很长时间以来他们都不愿意这么做。前微软首席执行官史蒂夫·鲍尔默对 Linux 深恶痛绝。*[00:18:30]* 由于它的 GPL 许可证,他视 Linux 为一种癌症,但一旦鲍尔默被扫地出门,新的微软领导层说,“这就好像试图命令潮流不要过来,但潮水依然会不断涌进来。我们应该与 Linux 合作,而不是与之对抗。”
**Saron Yitbarek:** 事实上,互联网技术史上最大的胜利之一就是微软最终决定做出这样的转变。*[00:19:00]* 当然,当微软出现在开源圈子时,老一代的铁杆 Linux 支持者是相当怀疑的。他们不确定自己是否能接受这些家伙,但正如 Vaughan-Nichols 所指出的,今天的微软已经不是以前的微软了。
**Steven Vaughan-Nichols:** 2017 年的微软既不是史蒂夫·鲍尔默的微软,也不是比尔·盖茨的微软。这是一家完全不同的公司,有着完全不同的方法,而且,一旦使用了开源,你就无法退回到之前。*[00:19:30]* 开源已经吞噬了整个技术世界。从未听说过 Linux 的人可能对它并不了解,但是每次他们访问 Facebook,他们都在运行 Linux。每次执行谷歌搜索时,你都在运行 Linux。
*[00:20:00]* 每次你用 Android 手机,你都在运行 Linux。它确实无处不在,微软无法阻止它,而且我认为以为微软能以某种方式接管它的想法,太天真了。
**Saron Yitbarek:** 开源支持者可能一直担心微软会像混入羊群中的狼一样,但事实是,开源软件的本质保护了它无法被完全控制。*[00:20:30]* 没有一家公司能够拥有 Linux 并以某种特定的方式控制它。Greg Kroah-Hartman 是 Linux 基金会的一名成员。
**Greg Kroah-Hartman:** 每个公司和个人都以自私的方式为 Linux 做出贡献。他们之所以这样做是因为他们想要解决他们所面临的问题,可能是硬件无法工作,或者是他们想要添加一个新功能来做其他事情,又或者想在他们的产品中使用它。这很棒,因为他们会把代码贡献回去,此后每个人都会从中受益,这样每个人都可以用到这份代码。正是因为这种自私,所有的公司,所有的人都能从中受益。
**Saron Yitbarek:** *[00:21:00]* 微软已经意识到,在即将到来的云战争中,与 Linux 作战就像与空气作战一样。Linux 和开源不是敌人,它们是空气。如今,微软以白金会员的身份加入了 Linux 基金会。他们成为 GitHub 开源项目的头号贡献者。*[00:21:30]* 2017 年 9 月,他们甚至加入了<ruby> 开源促进联盟 <rt> Open Source Initiative </rt></ruby>。现在,微软在开源许可证下发布了很多代码。微软的 John Gossman 描述了他们开源 .net 时所发生的事情。起初,他们并不认为自己能得到什么回报。
**John Gossman:** 我们本没有指望来自社区的贡献,然而,三年后,超过 50% 的对 .net 框架库的贡献来自于微软之外。这包括大量的代码。*[00:22:00]* 三星为 .net 提供了 ARM 支持。Intel 和 ARM 以及其他一些芯片厂商已经为 .net 框架贡献了特定于他们处理器的代码生成,以及数量惊人的修复、性能改进等等 —— 既有单个贡献者也有社区。
**Saron Yitbarek:** 直到几年前,今天的这个微软,这个开放的微软,还是不可想象的。
*[00:22:30]* 我是 Saron Yitbarek,这里是代码英雄。好吧,我们已经看到了为了赢得数百万桌面用户的爱而战的激烈场面。我们已经看到开源软件在专有软件巨头的背后悄然崛起,并攫取了巨大的市场份额。*[00:23:00]* 我们已经看到了一批批的代码英雄将编程领域变成了你我今天看到的这个样子。如今,大企业正在吸收开源软件,通过这一切,每个人都从他人那里受益。
在技术的西部荒野,一贯如此。苹果受到施乐的启发,微软受到苹果的启发,Linux 受到 UNIX 的启发。进化、借鉴、不断成长。如果比喻成大卫和歌利亚(LCTT 译注:西方经典的以弱胜强战争中的两个主角)的话,开源软件不再是大卫,但是,你知道吗?它也不是歌利亚。*[00:23:30]* 开源已经超越了传统。它已经成为其他人战斗的战场。随着开源道路变得不可避免,新的战争,那些在云计算中进行的战争,那些在开源战场上进行的战争正在加剧。
这是 Steven Levy,他是一名作者。
**Steven Levy:** 基本上,到目前为止,包括微软在内,有四到五家公司,正以各种方式努力把自己打造成为全方位的平台,比如人工智能领域。你能看到智能助手之间的战争,你猜怎么着?*[00:24:00]* 苹果有一个智能助手,叫 Siri。微软有一个,叫 Cortana。谷歌有谷歌助手。三星也有一个智能助手。亚马逊也有一个,叫 Alexa。我们看到这些战斗遍布各地。也许,你可以说,最热门的人工智能平台将控制我们生活中所有的东西,而这五家公司就是在为此而争斗。
**Saron Yitbarek:** *[00:24:30]* 如果你正在寻找另一个反叛者,它们就像 Linux 奇袭微软那样,偷偷躲在 Facebook、谷歌或亚马逊身后,你也许要等很久,因为正如作家 James Allworth 所指出的,成为一个真正的反叛者只会变得越来越难。
**James Allworth:** 规模一直以来都是一种优势,但规模优势的本质……怎么说呢,我认为以前它们在本质上是线性的,现在则在本质上是指数型的了。所以,一旦你开始以某种方式走在前面,另一个新玩家要想赶上来就变得越来越难了。*[00:25:00]* 我认为在互联网时代这大体上是正确的,无论是因为规模,还是数据赋予组织的竞争力的重要性和优势。一旦你走在前面,你就会吸引更多的客户,这就给了你更多的数据,让你能做得更好。这之后,客户还有什么理由选择排名第二的公司呢,毕竟它已经落后了这么远?*[00:25:30]* 我认为在云的时代这个逻辑也不会有什么不同。
**Saron Yitbarek:** 这个故事始于史蒂夫·乔布斯和比尔·盖茨这样的非凡的英雄,但科技的进步已经呈现出一种众包、有机的感觉。据说我们的开源英雄林纳斯·托瓦兹在第一次发明 Linux 内核时甚至没有一个真正的计划,我觉得这一点很能说明问题。他无疑是一位才华横溢的年轻开发者,但他也像潮汐前的一滴水一样。*[00:26:00]* 变革是不可避免的。据估计,对于一家专有软件公司来说,用他们老式的、专有的方式创建一个 Linux 发行版将花费他们超过 100 亿美元。这说明了开源的力量。
最后,这并不是一个专有模型所能与之竞争的东西。成功的公司必须保持开放。这是最大、最终极的教训。*[00:26:30]* 还有一点要记住:当我们连接在一起的时候,我们在已有基础上成长和建设的能力是无限的。不管这些公司有多大,我们都不必坐等他们给我们更好的东西。想想那些为了纯粹的创造乐趣而学习编码的新开发者,那些自己动手丰衣足食的人。
未来的优秀程序员无管来自何方,只要能够访问代码,他们就能构建下一个大项目。
*[00:27:00]* 以上就是我们关于操作系统战争的两个故事。这场战争塑造了我们的数字生活。争夺主导地位的斗争从桌面转移到了服务器机房,最终进入了云计算领域。过去的敌人难以置信地变成了盟友,众包的未来让一切都变得开放。*[00:27:30]* 听着,我知道,在这段历史之旅中,还有很多英雄我们没有提到,所以给我们写信吧。分享你的故事。[Redhat.com/commandlineheroes](https://www.redhat.com/commandlineheroes) 。我恭候佳音。
在本季剩下的时间里,我们将学习今天的英雄们在创造什么,以及他们要经历什么样的战斗才能将他们的创造变为现实。让我们从壮丽的编程一线回来看看更多的传奇故事吧。我们每两周发布一集新的播客。几周后,我们将为你带来第三集:敏捷革命。
*[00:28:00]* 代码英雄是一款红帽公司原创的播客。要想免费自动获得新一集的代码英雄,请订阅我们的节目。只要在苹果播客、Spotify、 谷歌 Play,或其他应用中搜索“Command Line Heroes”。然后点击“订阅”。这样你就会第一个知道什么时候有新剧集了。
我是 Saron Yitbarek。感谢收听。继续编码。
---
via: <https://www.redhat.com/en/command-line-heroes/season-1/os-wars-part-2-rise-of-linux>
作者:[redhat](https://www.redhat.com) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The empire of Microsoft controls 90% of users. Complete standardization of operating systems seems assured. But an unlikely hero arises from amongst the band of open source rebels. [ Linus Torvalds](https://twitter.com/linus__torvalds)—meek, bespectacled—releases his Linux® program free of charge. While Microsoft reels and regroups, the battleground shifts from personal computers to the Internet.
Acclaimed tech journalist [ Steven Vaughan-Nichols](https://twitter.com/sjvn) is joined by a team of veterans who relive the tech revolution that reimagined our future.
Editor's note: A previous version of this episode featured a short clip with Jon “maddog” Hall. It has been removed at his request.
*Saron Yitbarek*
Is this thing on? Cue the epic Star Wars crawl, and, action.
**00:30** - *Voice Actor*
Episode Two: Rise of Linux® . The empire of Microsoft controls 90 % of desktop users . Complete standardization of operating systems seems assured. However, the advent of the internet swerves the focus of the war from the desktop toward enterprise, where all businesses scramble to claim a server of their own. Meanwhile, an unlikely hero arises from amongst the band of open source rebels . Linus Torvalds, head strong, bespectacled, releases his Linux system free of charge. Microsoft reels — and regroups.
**01:00** - *Saron Yitbarek*
Oh, the nerd in me just loves that. So, where were we? Last time, Apple and Microsoft were trading blows, trying to dominate in a war over desktop users. By the end of episode one, we saw Microsoft claiming most of the prize. Soon, the entire landscape went through a seismic upheaval. That's all because of the rise of the internet and the army of developers that rose with it. The internet moves the battlefield from PC users in their home offices to giant business clients with hundreds of servers.
**01:30** - *Saron Yitbarek*
This is a huge resource shift. Not only does every company out there wanting to remain relevant suddenly have to pay for server space and get a website built — they also have to integrate software to track resources, monitor databases, et cetera, et cetera. You're going to need a lot of developers to help you with that. At least, back then you did.
In part two of the OS wars, we'll see how that enormous shift in priorities, and the work of a few open source rebels like Linus Torvalds and Richard Stallman, managed to strike fear in the heart of Microsoft, and an entire software industry.
**02:00** - *Saron Yitbarek*
I'm Saron Yitbarek and you're listening to Command Line Heroes, an original podcast from Red Hat. In each episode, we're bringing you stories about the people who transform technology from the command line up.
**02:30** - *Saron Yitbarek*
Okay. Imagine for a second that you're Microsoft in 1991. You're feeling pretty good, right? Pretty confident. Assured global domination feels nice. You've mastered the art of partnering with other businesses, but you're still pretty much cutting out the developers, programmers, and sys admins that are the real foot soldiers out there. There is this Finnish geek named Linus Torvalds. He and his team of open source programmers are starting to put out versions of Linux, this OS kernel that they're duct taping together.
**03:00** - *Saron Yitbarek*
If you're Microsoft, frankly, you're not too concerned about Linux or even about open source in general, but eventually, the sheer size of Linux gets so big that it becomes impossible for Microsoft not to notice. The first version comes out in 1991 and it's got maybe 10,000 lines of code. A decade later, there will be three million lines of code. In case you're wondering, today it's at 20 million.
**03:30** - *Saron Yitbarek*
For a moment, let's stay in the early 90s. Linux hasn't yet become the behemoth we know now. It's just this strangely viral OS that's creeping across the planet, and the geeks and hackers of the world are falling in love with it. I was too young in those early days, but I sort of wish I'd been there. At that time, discovering Linux was like gaining access to a secret society. Programmers would share the Linux CD set with friends the same way other people would share mixtapes of underground music.
Developer Tristram Oaten [00:03:40] tells the story of how he first encountered Linux when he was 16 years old.
**04:00** - *Tristram Oaten*
We went on a scuba diving holiday, my family and I, to Hurghada, which is on the Red Sea. Beautiful place, highly recommend it. The first day, I drank the tap water. Probably, my mom told me not to. I was really sick the whole week — didn't leave the hotel room. All I had with me was a laptop with a fresh install of Slackware Linux, this thing that I'd heard about and was giving it a try. There were no extra apps, just what came on the eight CDs. By necessity, all I had to do this whole week was to get to grips with this alien system. I read man pages, played around with the terminal. I remember not knowing the difference between a single dot, meaning the current directory, and two dots, meaning the previous directory.
**04:30** - *Tristram Oaten*
I had no clue. I must have made so many mistakes, but slowly, over the course of this forcible solitude, I broke through this barrier and started to understand and figure out what this command line thing was all about. By the end of the holiday, I hadn't seen the pyramids, the Nile, or any Egyptian sites, but I had unlocked one of the wonders of the modern world. I'd unlocked Linux, and the rest is history.
*Saron Yitbarek*
You can hear some variation on that story from a lot of people. Getting access to that Linux command line was a transformative experience.
**05:00** - *David Cantrell*
This thing gave me the source code. I was like, "That's amazing."
*Saron Yitbarek*
We're at a 2017 Linux developers conference called Flock to Fedora.
*David Cantrell*
... very appealing. I felt like I had more control over the system and it just drew me in more and more. From there, I guess, after my first Linux kernel compile in 1995, I was hooked, so, yeah.
*Saron Yitbarek*
Developers David Cantrell and Joe Brockmire.
**05:30** - *Joe Brockmeier*
I was going through the cheap software and found a four - CD set of Slackware Linux. It sounded really exciting and interesting so I took it home, installed it on a second computer, started playing with it, and really got excited about two things. One was, I was excited not to be running Windows, and I was excited by the open source nature of Linux.
**06:00** - *Saron Yitbarek*
That access to the command line was, in some ways, always there. Decades before open source really took off, there was always a desire to have complete control, at least among developers. Go way back to a time before the OS wars, before Apple and Microsoft were fighting over their GUIs. There were command line heroes then, too. Professor Paul Jones is the director of the online library ibiblio.org. He worked as a developer during those early days.
**06:30** - *Paul Jones*
The internet, by its nature, at that time, was less client server, totally, and more peer to peer. We're talking about, really, some sort of VAX to VAX, some sort of scientific workstation, the scientific workstation. That doesn't mean that client and server relationships and applications weren't there, but it does mean that the original design was to think of how to do peer - to - peer things, the opposite of what IBM had been doing, in which they had dumb terminals that had only enough intelligence to manage the user interface, but not enough intelligence to actually let you do anything in the terminal that would expose anything to it.
**07:00** - *Saron Yitbarek*
As popular as GUI was becoming among casual users, there was always a pull in the opposite direction for the engineers and developers. Before Linux in the 1970s and 80s, that resistance was there, with Emacs and GNU. With Stallman's free software foundation, certain folks were always begging for access to the command line, but it was Linux in the 1990s that delivered like no other.
**07:30** - *Saron Yitbarek*
The early lovers of Linux and other open source software were pioneers. I'm standing on their shoulders. We all are.
You're listening to Command Line Heroes, an original podcast from Red Hat. This is part two of the OS wars: Rise of Linux.
*Steven Vaughan-Nichols*
By 1998, things have changed.
**08:00** - *Saron Yitbarek*
Steven Vaughan-Nichols is a contributing editor at zdnet.com, and he's been writing for decades about the business side of technology. He describes how Linux slowly became more and more popular until the number of volunteer contributors was way larger than the number of Microsoft developers working on Windows. Linux never really went after Microsoft's desktop customers, though, and maybe that's why Microsoft ignored them at first. Where Linux did shine was in the server room. When businesses went online, each one required a unique programming solution for their needs.
**08:30** - *Saron Yitbarek*
Windows NT comes out in 1993 and it's competing with other server operating systems, but lots of developers are thinking, "Why am I going to buy an AIX box or a large windows box when I could set up a cheap Linux-based system with Apache?" Point is, Linux code started seeping into just about everything online.
**09:00** - *Steven Vaughan-Nichols*
Microsoft realizes that Linux, quite to their surprise, is actually beginning to get some of the business, not so much on the desktop, but on business servers. As a result of that, they start a campaign, what we like to call FUD — fear, uncertainty and doubt — saying, "Oh this Linux stuff, it's really not that good. It's not very reliable. You can't trust it with anything."
**09:30** - *Saron Yitbarek*
That soft propaganda style attack goes on for a while. Microsoft wasn't the only one getting nervous about Linux, either. It was really a whole industry versus that weird new guy. For example, anyone with a stake in UNIX was likely to see Linux as a usurper. Famously, the SCO Group, which had produced a version of UNIX, waged lawsuits for over a decade to try and stop the spread of Linux. SCO ultimately failed and went bankrupt. Meanwhile, Microsoft kept searching for their opening. They were a company that needed to make a move. It just wasn't clear what that move was going to be.
**10:00** - *Steven Vaughan-Nichols*
What will make Microsoft really concerned about it is the next year, in 2000, IBM will announce that they will invest a billion dollars in Linux in 2001. Now, IBM is not really in the PC business anymore. They're not out yet, but they're going in that direction, but what they are doing is they see Linux as being the future of servers and mainframe computers, which, spoiler alert, IBM was correct.
**10:30** - *Steven Vaughan-Nichols*
Linux is going to dominate the server world.
*Saron Yitbarek*
This was no longer just about a bunch of hackers loving their Jedi-like control of the command line. This was about the money side working in Linux's favor in a major way. John "Mad Dog" Hall, the executive director of Linux International, has a story that explains why that was. We reached him by phone.
**11:00** - *John Hall*
A friend of mine named Dirk Holden [00:10:56] was a German systems administrator at Deutsche Bank in Germany, and he also worked in the graphics projects for the early days of the X Windows system for PCs. I visited him one day at the bank, and I said, "Dirk, you have 3,000 servers here at the bank and you use Linux.
**11:30** - *John Hall*
Why don't you use Microsoft NT?" He looked at me and he said, "Yes, I have 3,000 servers , and if I used Microsoft Windows NT, I would need 2,999 systems administrators." He says, "With Linux, I only need four." That was the perfect answer.
**12:00** - *Saron Yitbarek*
The thing programmers are getting obsessed with also happens to be deeply attractive to big business. Some businesses were wary. The FUD was having an effect. They heard open source and thought, "Open. That doesn't sound solid. It's going to be chaotic, full of bugs," but as that bank manager pointed out, money has a funny way of convincing people to get over their hangups. Even little businesses, all of which needed websites, were coming on board. The cost of working with a cheap Linux system over some expensive proprietary option, there was really no comparison. If you were a shop hiring a pro to build your website, you wanted them to use Linux.
**12:30** - *Saron Yitbarek*
Fast forward a few years. Linux runs everybody's website. Linux has conquered the server world, and then, along comes the smartphone. Apple and their iPhones take a sizeable share of the market, of course, and Microsoft hoped to get in on that, except, surprise, Linux was there, too, ready and raring to go.
Author and journalist James Allworth.
**13:00** - *James Allworth*
There was certainly room for a second player, and that could well have been Microsoft, but for the fact of Android, which was fundamentally based on Linux, and because Android, famously acquired by Google, and now running a majority of the world's smartphones, Google built it on top of that. They were able to start with a very sophisticated operating system and a cost basis of zero. They managed to pull it off, and it ended up locking Microsoft out of the next generation of devices, by and large, at least from an operating system perspective.
**13:30** - *Saron Yitbarek*
The ground was breaking up, big time, and Microsoft was in danger of falling into the cracks. John Gossman is the chief architect on the Azure team at Microsoft. He remembers the confusion that gripped the company at that time.
**14:00** - *John Gossman*
Like a lot of companies, Microsoft was very concerned about IP pollution. They thought that if you let developers use open source they would likely just copy and paste bits of code into some product and then some sort of a viral license might take effect that ... They were also very confused, I think, it was just culturally, a lot of companies, Microsoft included, were confused on the difference between what open source development meant and what the business model was. There was this idea that open source meant that all your software was free and people were never going to pay anything.
**14:30** - *Saron Yitbarek*
Anybody invested in the old, proprietary model of software is going to feel threatened by what's happening here. When you threaten an enormous company like Microsoft, yeah, you can bet they're going to react. It makes sense they were pushing all that FUD — fear, uncertainty and doubt. At the time, an “ us versus them ” attitude was pretty much how business worked. If they'd been any other company, though, they might have kept that old grudge, that old thinking, but then, in 2013, everything changes.
**15:00** - *Saron Yitbarek*
Microsoft's cloud computing service, Azure, goes online and, shockingly, it offers Linux virtual machines from day one. Steve Ballmer, the CEO who called Linux a cancer, he's out, and a new forward - thinking CEO, Satya Nadella, has been brought in.
*John Gossman*
Satya has a different attitude. He's another generation. He's a generation younger than Paul and Bill and Steve were, and had a different perspective on open source.
*Saron Yitbarek*
John Gossman, again, from Microsoft's Azure team.
**15:30** - *John Gossman*
We added Linux support into Azure about four years ago, and that was for very pragmatic reasons. If you go to any enterprise customer, you will find that they are not trying to decide whether to use Windows or to use Linux or to use .net or to use Java TM . They made all those decisions a long time ago — about 15 years or so ago, there was some of this argument.
**16:00** - *John Gossman*
Now, every company that I have ever seen has a mix of Linux and Java and Windows and .net and SQL Server and Oracle and MySQL — proprietary source code - based products and open source code products.
If you're going to operate a cloud and you're going to allow and enable those companies to run their businesses on the cloud, you simply cannot tell them, "You can use this software but you can't use this software."
**16:30** - *Saron Yitbarek*
That's exactly the philosophy that Satya Nadella adopted. In the fall of 2014, he gets up on stage and he wants to get across one big, fat point. Microsoft loves Linux. He goes on to say that 20 % of Azure is already Linux and that Microsoft will always have first - class support for Linux distros. There's not even a whiff of that old antagonism toward open source.
To drive the point home, there's literally a giant sign behind them that reads, "Microsoft hearts Linux." Aww. For some of us, that turnaround was a bit of a shock, but really, it shouldn't have been. Here's Steven Levy, a tech journalist and author.
**17:00** - *Steven Levy*
When you're playing a football game and the turf becomes really slick, maybe you switch to a different kind of footwear in order to play on that turf. That's what they were doing.
**17:30** - *Steven Levy*
They can't deny reality and there are smart people there so they had to realize that this is the way the world is and put aside what they said earlier, even though they might be a little embarrassed at their earlier statements, but it would be crazy to let their statements about how horrible open source was earlier, affect their smart decisions now.
**18:00** - *Saron Yitbarek*
Microsoft swallowed its pride in a big way. You might remember that Apple, after years of splendid isolation, finally shifted toward a partnership with Microsoft. Now it was Microsoft's turn to do a 180. After years of battling the open source approach, they were reinventing themselves. It was change or perish. Steven Vaughan-Nichols.
**18:30** - *Steven Vaughan-Nichols*
Even a company the size of Microsoft simply can't compete with the thousands of open source developers working on all these other major projects, including Linux. They were very loath to do so for a long time. The former Microsoft CEO, Steve Ballmer, hated Linux with a passion. Because of its GPL license, it was a cancer, but once Ballmer was finally shown the door, the new Microsoft leadership said, "This is like trying to order the tide to stop coming in. The tide is going to keep coming in. We should work with Linux, not against it."
**19:30** - *Steven Vaughan-Nichols*
Microsoft 2017 is not Steve Ballmer's Microsoft, nor is it Bill Gates' Microsoft. It's an entirely different company with a very different approach and, again, once you start using open source, it's not like you can really pull back. Open source has devoured the entire technology world. People who have never heard of Linux as such, don't know it, but every time they're on Facebook, they're running Linux. Every time you do a Google search, you're running Linux.
**20:00** - *Steven Vaughan-Nichols*
Every time you do anything with your Android phone, you're running Linux again. It literally is everywhere, and Microsoft can't stop that, and thinking that Microsoft can somehow take it all over, I think is naïve.
**20:30** - *Saron Yitbarek*
Open source supporters might have been worrying about Microsoft coming in like a wolf in the flock, but the truth is, the very nature of open source software protects it from total domination. No single company can own Linux and control it in any specific way. Greg Kroah-Hartman is a fellow at the Linux Foundation.
*Greg Kroah-Hartman*
Every company and every individual contributes to Linux in a selfish manner. They're doing so because they want to solve a problem that they have, be it hardware isn't working , or they want to add a new feature to do something else , or want to take it in a direction that they'll build that they can use for their product. That's great, because then everybody benefits from that because they're releasing the code back, so that everybody can use it. It's because of that selfishness that all companies and all people have, everybody benefits.
**21:00** - *Saron Yitbarek*
Microsoft has realized that in the coming cloud wars, fighting Linux would be like going to war with, well, a cloud. Linux and open source aren't the enemy, they're the atmosphere. Today, Microsoft has joined the Linux Foundation as a platinum member. They became the number one contributor to open source on GitHub.
**21:30** - *Saron Yitbarek*
In September, 2017, they even joined the Open Source Initiative. These days, Microsoft releases a lot of its code under open licenses. Microsoft's John Gossman describes what happened when they open sourced .net. At first, they didn't really think they'd get much back.
**22:00** - *John Gossman*
We didn't count on contributions from the community, and yet, three years in, over 50 per cent of the contributions to the .net framework libraries, now, are coming from outside of Microsoft. This includes big pieces of code. Samsung has contributed ARM support to .net. Intel and ARM and a couple other chip people have contributed code generation specific for their processors to the .net framework, as well as a surprising number of fixes, performance improvements , and stuff — from just individual contributors to the community.
*Saron Yitbarek*
Up until a few years ago, the Microsoft we have today, this open Microsoft, would have been unthinkable.
**22:30** - *Saron Yitbarek*
I'm Saron Yitbarek, and this is Command Line Heroes. Okay, we've seen titanic battles for the love of millions of desktop users. We've seen open source software creep up behind the proprietary titans, and nab huge market share.
**23:00** - *Saron Yitbarek*
We've seen fleets of command line heroes transform the programming landscape into the one handed down to people like me and you. Today, big business is absorbing open source software, and through it all, everybody is still borrowing from everybody.
**22:30** - *Saron Yitbarek*
In the tech wild west, it's always been that way. Apple gets inspired by Xerox, Microsoft gets inspired by Apple, Linux gets inspired by UNIX. Evolve, borrow, constantly grow. In David and Goliath terms, open source software is no longer a David, but, you know what? It's not even Goliath, either. Open source has transcended. It's become the battlefield that others fight on. As the open source approach becomes inevitable, new wars, wars that are fought in the cloud, wars that are fought on the open source battlefield, are ramping up.
Here's author Steven Levy.
**24:00** - *Steven Levy*
Basically, right now, we have four or five companies, if you count Microsoft, that in various ways are fighting to be the platform for all we do, for artificial intelligence, say. You see wars between intelligent assistants, and guess what? Apple has an intelligent assistant, Siri. Microsoft has one, Cortana. Google has the Google Assistant. Samsung has an intelligent assistant. Amazon has one, Alexa. We see these battles shifting to different areas, there. Maybe, you could say, the hottest one is would be, whose AI platform is going to control all the stuff in our lives there, and those five companies are all competing for that.
**24:30** - *Saron Yitbarek*
If you're looking for another rebel that's going to sneak up behind Facebook or Google or Amazon and blindside them the way Linux blindsided Microsoft, you might be looking a long time, because as author James Allworth points out, being a true rebel is only getting harder and harder.
**25:00** - *James Allworth*
Scale's always been an advantage but the nature of scale advantages are almost ... Whereas, I think previously they were more linear in nature, now it's more exponential in nature, and so, once you start to get out in front with something like this , it becomes harder and harder for a new player to come in and catch up. I think this is true of the internet era in general, whether it's scale like that or the importance and advantages that data bestow on an organization in terms of its ability to compete.
**25:30** - *James Allworth*
Once you get out in front, you attract more customers, and then that gives you more data and that enables you to do an even better job, and then, why on earth would you want to go with the number two player, because they're so far behind? I think it's going to be no different in cloud.
**26:00** - *Saron Yitbarek*
This story began with singular heroes like Steve Jobs and Bill Gates, but the progress of technology has taken on a crowdsourced, organic feel. I think it's telling that our open source hero, Linus Torvalds, didn't even have a real plan when he first invented the Linux kernel. He was a brilliant, young developer for sure, but he was also like a single drop of water at the very front of a tidal wave. The revolution was inevitable. It's been estimated that for a proprietary company to create a Linux distribution in their old - fashioned, proprietary way, it would cost them well over $10 billion. That points to the power of open source.
**26:30** - *Saron Yitbarek*
In the end, it's not something that a proprietary model is going to compete with. Successful companies have to remain open. That's the big, ultimate lesson in all this. Something else to keep in mind: W hen we're wired together, our capacity to grow and build on what we've already accomplished becomes limitless. As big as these companies get, we don't have to sit around waiting for them to give us something better. Think about the new developer who learns to code for the sheer joy of creating, the mom who decides that if nobody's going to build what she needs, then she'll build it herself.
Wherever tomorrow's great programmers come from, they're always going to have the capacity to build the next big thing, so long as there's access to the command line.
**27:00** - *Saron Yitbarek*
That's it for our two - part tale on the OS wars that shaped our digital lives. The struggle for dominance moved from the desktop to the server room, and ultimately into the cloud. Old enemies became unlikely allies, and a crowdsourced future left everything open.
**27:30** - *Saron Yitbarek*
Listen, I know, there are a hundred other heroes we didn't have space for in this history trip, so drop us a line. Share your story. Redhat.com/commandlineheroes. I'm listening.
We're spending the rest of the season learning what today's heroes are creating, and what battles they're going through to bring their creations to life. Come back for more tales — from the epic front lines of programming. We drop a new episode every two weeks. In a couple weeks' time, we bring you episode three: the Agile Revolution.
**28:00** - *Saron Yitbarek*
Command Line Heroes is an original podcast from Red Hat. To get new episodes delivered automatically for free, make sure to subscribe to the show. Just search for “Command Line Heroes” in Apple podcasts, Spotify, Google Play, and pretty much everywhere else you can find podcasts. Then, hit “subscribe” so you will be the first to know when new episodes are available.
I'm Saron Yitbarek. Thanks for listening. Keep on coding.
### Keep going
### From police officer to open source devotee: One man's story
How Red Hat’s Thomas Cameron became an accidental technologist, and fell in love with Linux along the way
### Red Hat + Microsoft: To boldly go where no partnership has gone before
A partnership that would have once been deemed unimaginable?and what it means for developers, sysadmins, and DevOps engineers |
11,298 | 如何升级 Linux Mint 19.1 为 Linux Mint 19.2 | https://www.2daygeek.com/upgrade-linux-mint-19-1-tessa-to-linux-mint-19-2-tina/ | 2019-09-02T23:15:38 | [
"Mint"
] | https://linux.cn/article-11298-1.html | 
Linux Mint 19.2 “Tina” 在 2019 年 8 月 2 日发布,它是一个基于 Ubuntu 18.04 LTS (Bionic Beaver) 的长期支持版本。它将被支持到 2023 年。它带来更新的软件和精细的改进和很多新的特色来使你的桌面使用地更舒适。
Linux Mint 19.2 特色有 Cinnamon 4.2 、Linux 内核 4.15 和 Ubuntu 18.04 基础软件包。
**注意:** 不要忘记备份你的重要数据。如果出了什么问题,你可以在重新安装后从备份中恢复数据。
备份可以通过 rsnapshot 或 timeshift 完成。
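例如,用 timeshift 在命令行中创建一次系统快照大致如下(具体选项请以你所安装的 timeshift 版本的文档为准):

```
$ sudo timeshift --create --comments "Before upgrading to Linux Mint 19.2"
```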
Linux Mint 19.2 “Tina” 发布日志可以在下面的链接中找到。
* [Linux Mint 19.2 (Tina) 发布日志](https://www.linuxtechnews.com/linux-mint-19-2-tina-released-check-what-is-new-feature/)
这里有三种方法,能让我们升级为 Linux Mint 19.2 “Tina”。
* 使用本地方法升级 Linux Mint 19.2 (Tina)
* 使用 Mintupgrade 实用程序方法升级 Linux Mint 19.2 (Tina)
* 使用 GUI 升级 Linux Mint 19.2 (Tina)
### 如何从 Linux Mint 19.1 (Tessa) 升级为 Linux Mint 19.2 (Tina)?
升级 Linux Mint 系统是一项简单轻松的任务。有三种方法可以完成。
### 方法-1:使用本地方法升级 Linux Mint 19.2 (Tina)
这是执行升级 Linux Mint 系统的本地和标准的方法之一。为做到这点,遵循下面的程序步骤。
确保你当前 Linux Mint 系统是最新的。使用下面的命令来更新你现在的软件为最新可用版本。
#### 步骤-1:
通过运行下面的命令来刷新存储库索引。
```
$ sudo apt update
```
运行下面的命令来在系统上安装可用的更新。
```
$ sudo apt upgrade
```
运行下面的命令来在版本中执行可用的次要更新。
```
$ sudo apt full-upgrade
```
默认情况下,它将通过上面的命令来移除过时的软件包。但是,我建议你运行下面的命令。
```
$ sudo apt autoremove
$ sudo apt clean
```
如果安装一个新的内核,你可能需要重启系统。如果是这样,运行下面的命令。
```
$ sudo shutdown -r now
```
最后检查当前安装的版本。
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Linux Mint
Description: Linux Mint 19.1 (Tessa)
Release: 19.1
Codename: Tessa
```
#### 步骤-2:更新/修改 /etc/apt/sources.list 文件
在重启后,修改 `sources.list` 文件,并从 Linux Mint 19.1 (Tessa) 指向 Linux Mint 19.2 (Tina)。
首先,使用 `cp` 命令备份下面的配置文件。
```
$ sudo cp /etc/apt/sources.list /root
$ sudo cp -r /etc/apt/sources.list.d/ /root
```
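如果升级过程中出了问题需要回退,可以用这些备份把配置覆盖回去(这里假设备份仍保存在 `/root` 下):

```
$ sudo cp /root/sources.list /etc/apt/sources.list
$ sudo cp -r /root/sources.list.d/* /etc/apt/sources.list.d/
```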
修改 `sources.list` 文件,并指向 Linux Mint 19.2 (Tina)。
```
$ sudo sed -i 's/tessa/tina/g' /etc/apt/sources.list
$ sudo sed -i 's/tessa/tina/g' /etc/apt/sources.list.d/*
```
通过运行下面的命令来刷新存储库索引。
```
$ sudo apt update
```
运行下面的命令来在系统上安装可用的更新。在升级过程中,你可能需要确认服务重启和配置文件替换,因此,只需遵循屏幕上的指令。
升级可能花费一些时间,具体依赖于更新的数量和你的网络速度。
```
$ sudo apt upgrade
```
运行下面的命令来执行一次完整的系统升级。
```
$ sudo apt full-upgrade
```
默认情况下,上面的命令将移除过时的软件包。但是,我建议你再次运行下面的命令。
```
$ sudo apt autoremove
$ sudo apt clean
```
最后重启系统来启动 Linux Mint 19.2 (Tina)。
```
$ sudo shutdown -r now
```
升级后的 Linux Mint 版本可以通过运行下面的命令验证。
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Linux Mint
Description: Linux Mint 19.2 (Tina)
Release: 19.2
Codename: Tina
```
### 方法-2:使用 Mintupgrade 实用程序升级 Linux Mint 19.2 (Tina)
这是 Mint 官方实用程序,它允许我们对 Linux Mint 系统执行平滑升级。
使用下面的命令来安装 mintupgrade 软件包。
```
$ sudo apt install mintupgrade
```
确保你已经安装 mintupgrade 软件包的最新版本。
```
$ apt version mintupgrade
```
以一个普通用户来运行下面的命令以模拟一次升级,遵循屏幕上的指令。
```
$ mintupgrade check
```
使用下面的命令来下载需要的软件包来升级为 Linux Mint 19.2 (Tina) ,遵循屏幕上的指令。
```
$ mintupgrade download
```
运行下面的命令来应用升级,遵循屏幕上的指令。
```
$ mintupgrade upgrade
```
在成功升级后,重启系统,并检查升级后的版本。
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Linux Mint
Description: Linux Mint 19.2 (Tina)
Release: 19.2
Codename: Tina
```
### 方法-3:使用 GUI 升级 Linux Mint 19.2 (Tina)
或者,我们可以通过 GUI 执行升级。
#### 步骤-1:
通过 Timeshift 创建一个系统快照。如果出了什么问题,你可以轻松地将操作系统恢复到先前的状态。
#### 步骤-2:
打开更新管理器,单击刷新按钮来检查 mintupdate 和 mint-upgrade-info 的任何新版本。如果有这些软件包的更新,应用它们。
通过单击 “编辑-> 升级到 Linux Mint 19.2 Tina”来启动系统升级。

遵循屏幕上的指令。如果被询问是否保留或替换配置文件,选择替换它们。

#### 步骤-3:
在升级完成后,重启你的电脑。
---
via: <https://www.2daygeek.com/upgrade-linux-mint-19-1-tessa-to-linux-mint-19-2-tina/>
作者:[2daygeek](http://www.2daygeek.com/author/2daygeek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[robsean](https://github.com/robsean) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
11,299 | 五大物联网网络安全错误 | https://www.networkworld.com/article/3433476/top-5-iot-networking-security-mistakes.html | 2019-09-03T00:26:00 | [
"IoT"
] | https://linux.cn/article-11299-1.html |
>
> IT 供应商兄弟国际公司分享了五种最常见的物联网安全错误,这是从它们的打印机和多功能设备买家中看到的。
>
>
>

尽管[兄弟国际公司](https://www.brother-usa.com/business)是许多 IT 产品的供应商,从[机床](https://www.brother-usa.com/machinetool/default?src=default)到[头戴式显示器](https://www.brother-usa.com/business/hmd#sort=%40productcatalogsku%20ascending)再到[工业缝纫机](https://www.brother-usa.com/business/industrial-sewing),但它最知名的产品是打印机。在当今世界,这些打印机不再是独立的设备,而是物联网的组成部分。
这也是我为什么对罗伯特•伯内特提供的这份列表感兴趣的原因。伯内特是兄弟公司的总监,负责 B2B 产品和提供解决方案。基本上是该公司负责大客户实施的关键人物。所以他对打印机相关的物联网安全错误非常关注,并且分享了兄弟国际公司对于处理这五大错误的建议。
### #5:不控制访问和授权
伯内特说:“过去,成本控制是管理谁可以使用机器、何时结束工作背后的推动力。”当然,这在今天也仍然很重要,但他指出安全性正迅速成为管理控制打印和扫描设备的关键因素。这不仅适用于大型企业,也适用于各种规模的企业。
### #4:无法定期更新固件
让我们来面对这一现实,大多数 IT 专业人员都忙于保持服务器和其他网络基础设施设备的更新,确保其基础设施尽可能的安全高效。“在这日常的流程中,像打印机这样的设备经常被忽视。”但过时的固件可能会使基础设施面临新的威胁。
### #3:设备意识不足
伯内特说:“正确理解谁在使用什么设备,以及整套设备中所有连接设备各有什么功能,这是至关重要的。使用端口扫描、协议分析和其他检测技术检查这些设备,应作为你的网络基础设施整体安全审查中的一部分。”他常常提醒人们:“处理打印设备的方法是:如果没有损坏,就不要修理!”但即使是可靠运行多年的设备也应该纳入安全审查。这是因为旧设备可能无法提供更强大的安全设置,或者可能需要更新其配置才能满足当今更高的安全要求,这其中包括设备的监控/报告功能。
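举个例子,可以用 `nmap` 之类的端口扫描工具粗略清点网段内开放了常见打印服务端口的设备(示例中的网段只是假设,请换成你自己的环境;515 为 LPD、631 为 IPP、9100 为原始打印端口):
```
$ nmap -p 515,631,9100 192.168.1.0/24
```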
### #2:用户培训不足
“应该把培训团队在工作过程中管理文档的最佳实践作为强有力的安全计划中的一部分。”伯内特说道,“然而,事实却是,无论你如何努力地去保护物联网设备,人为因素通常是一家企业在保护重要和敏感信息方面最薄弱的环节。像这些简单的事情,如无意中将重要文件留在打印机上供任何人查看,或者将文件扫描到错误的目的地,不仅会给企业带来经济损失和巨大的负面影响,还会影响企业的知识产权、声誉,引起合规性/监管问题。”
### #1:使用默认密码
“只是因为它很方便并不意味着它不重要!”伯内特说,“保护打印机和多功能设备免受未经授权的管理员访问不仅有助于保护敏感的机器配置设置和报告信息,还可以防止访问个人信息,例如,像可能用于网络钓鱼攻击的用户名。”
---
via: <https://www.networkworld.com/article/3433476/top-5-iot-networking-security-mistakes.html>
作者:[Fredric Paul](https://www.networkworld.com/author/Fredric-Paul/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Morisun029](https://github.com/Morisun029) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,300 | 如何在 Linux 上重命名一组文件 | https://www.networkworld.com/article/3433865/how-to-rename-a-group-of-files-on-linux.html | 2019-09-03T00:54:09 | [
"重命名"
] | https://linux.cn/article-11300-1.html |
>
> 要用单个命令重命名一组文件,请使用 rename 命令。它需要使用正则表达式,并且可以在开始前告诉你会有什么更改。
>
>
>

几十年来,Linux 用户一直使用 `mv` 命令重命名文件。它很简单,并且能做到你要做的。但有时你需要重命名一大组文件。在这种情况下,`rename` 命令可以使这个任务更容易。它只需要一些正则表达式的技巧。
与 `mv` 命令不同,`rename` 不允许你简单地指定旧名称和新名称,而是要使用类似 Perl 的正则表达式。在下面的例子中,`s` 表示用第二个字符串(`old`)替换第一个字符串(`new`),从而将 `this.new` 变为 `this.old`。
```
$ rename 's/new/old/' this.new
$ ls this*
this.old
```
单独更改一个文件,用 `mv this.new this.old` 会更容易;但只要将字符串 `this` 换成通配符 `*`,你就可以用一条命令将所有的 `*.new` 文件重命名为 `*.old`:
```
$ ls *.new
report.new schedule.new stats.new this.new
$ rename 's/new/old/' *.new
$ ls *.old
report.old schedule.old stats.old this.old
```
正如你所料,`rename` 命令不限于更改文件扩展名。如果你需要将名为 `report.*` 的文件更改为 `review.*`,那么可以使用以下命令做到:
```
$ rename 's/report/review/' *
```
正则表达式中的字符串可以更改文件名的任何部分,无论是文件名还是扩展名。
```
$ rename 's/123/124/' *
$ ls *124*
status.124 report124.txt
```
如果你在 `rename` 命令中添加 `-v` 选项,该命令将提供一些反馈,让你看到所做的更改,其中或许有你没注意到的改动,这样你就能及时发现并按需还原。
```
$ rename -v 's/123/124/' *
status.123 renamed as status.124
report123.txt renamed as report124.txt
```
另一方面,使用 `-n`(或 `--nono`)选项会使 `rename` 命令告诉你将要做的更改,而不会实际执行。这可以让你免于执行不想要的操作,然后不得不再把文件名改回去。
```
$ rename -n 's/old/save/' *
rename(logger.man-old, logger.man-save)
rename(lyrics.txt-old, lyrics.txt-save)
rename(olderfile-, saveerfile-)
rename(oldfile, savefile)
rename(review.old, review.save)
rename(schedule.old, schedule.save)
rename(stats.old, stats.save)
rename(this.old, this.save)
```
如果你对这些更改满意,那么就可以运行不带 `-n` 选项的命令来更改文件名。
但请注意,正则表达式中的 `.` **不会**被视为句点,而是作为匹配任何字符的通配符。上面和下面的示例中,有些更改可能并不是输入命令的人所希望的。
```
$ rename -n 's/.old/.save/' *
rename(logger.man-old, logger.man.save)
rename(lyrics.txt-old, lyrics.txt.save)
rename(review.old, review.save)
rename(schedule.old, schedule.save)
rename(stats.old, stats.save)
rename(this.old, this.save)
```
为确保句点按照字面意思匹配,请在它的前面加一个反斜杠。这将使其不再被解释为可匹配任何字符的通配符。请注意,进行此更改时,仅选中了 `.old` 文件。
```
$ rename -n 's/\.old/.save/' *
rename(review.old, review.save)
rename(schedule.old, schedule.save)
rename(stats.old, stats.save)
rename(this.old, this.save)
```
下面的命令会将文件名中的所有大写字母更改为小写;同时我们使用了 `-n` 选项,以确保在命令真正执行之前先检查将要做的修改。注意正则表达式中使用了 `y`,这是改变大小写所必需的。
```
$ rename -n 'y/A-Z/a-z/' W*
rename(WARNING_SIGN.pdf, warning_sign.pdf)
rename(Will_Gardner_buttons.pdf, will_gardner_buttons.pdf)
rename(Wingding_Invites.pdf, wingding_invites.pdf)
rename(WOW-buttons.pdf, wow-buttons.pdf)
```
在上面的例子中,我们将所有大写字母更改为了小写,但仅限于以大写字母 `W` 开头的文件名。
### 总结
当你需要重命名大量文件时,`rename` 命令非常有用。请注意不要做比预期更多的更改。请记住,`-n`(或者 `--nono`)选项可以帮助你避免耗时的错误。
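把“先预览、后执行”串成一个小流程大致如下(示例文件名仅为示意):
```
$ rename -n 's/\.old$/.bak/' *.old # 先预览将要发生的更改
$ rename -v 's/\.old$/.bak/' *.old # 确认无误后再实际执行并显示反馈
```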
---
via: <https://www.networkworld.com/article/3433865/how-to-rename-a-group-of-files-on-linux.html>
作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,302 | 如何发现截断的数据项 | https://www.polydesmida.info/BASHing/2018-07-04.html | 2019-09-03T17:44:00 | [
"截断"
] | https://linux.cn/article-11302-1.html | 
**截断**(形容词):缩写、删节、缩减、剪切、剪裁、裁剪、修剪……
数据项被截断的一种情况是将其输入到数据库字段中,该字段的字符限制比数据项的长度要短。例如,字符串:
```
Yarrow Ravine Rattlesnake Habitat Area, 2 mi ENE of Yermo CA
```
是 60 个字符长。如果你将其输入到具有 50 个字符限制的“位置”字段,则可以获得:
```
Yarrow Ravine Rattlesnake Habitat Area, 2 mi ENE #末尾带有一个空格
```
截断也可能导致数据错误,比如你打算输入:
```
Sally Ann Hunter (aka Sally Cleveland)
```
但是你忘记了闭合的括号:
```
Sally Ann Hunter (aka Sally Cleveland
```
这会让使用数据的用户疑惑:Sally 是否还有其它被裁剪掉的别名?
截断的数据项很难检测。在审核数据时,我使用三种不同的方法来查找可能的截断,但我仍然可能会错过一些。
**数据项的长度分布。**第一种方法是捕获我在各个字段中找到的大多数截断的数据。我将字段传递给 `awk` 命令,该命令按字段宽度计算数据项,然后我使用 `sort` 以宽度的逆序打印计数。例如,要检查以 `tab` 分隔的文件 `midges` 中的第 33 个字段:
```
awk -F"\t" 'NR>1 {a[length($33)]++} \
END {for (i in a) print i FS a[i]}' midges | sort -nr
```

最长的条目恰好有 50 个字符,这是可疑的,并且在该宽度处存在数据项的“凸起”,这更加可疑。检查这些 50 个字符的项目会发现截断:

我用这种方式检查的其他数据表有 100、200 和 255 个字符的“凸起”。在每种情况下,这种“凸起”都包含明显的截断。
**未匹配的括号。**第二种方法查找类似 `...(Sally Cleveland` 的数据项。一个很好的起点是数据表中所有标点符号的统计。这里我检查文件 `mag2`:
```
grep -o "[[:punct:]]" file | sort | uniqc
```

请注意,`mag2` 中的开括号和闭括号的数量不相等。要查看发生了什么,我使用 `unmatched` 函数,它接受三个参数并检查数据表中的所有字段。第一个参数是文件名,第二个和第三个是开括号和闭括号,用引号括起来。
```
unmatched()
{
awk -F"\t" -v start="$2" -v end="$3" \
'{for (i=1;i<=NF;i++) \
if (split($i,a,start) != split($i,b,end)) \
print "line "NR", field "i":\n"$i}' "$1"
}
```
如果在字段中发现开括号和闭括号的数量不匹配,`unmatched` 会报告行号和字段号。这依赖于 `awk` 的 `split` 函数,它返回由分隔符分隔出的元素个数(包括空字符串)。这个数字总是比分隔符的数量多一个:

这里 `ummatched` 检查 `mag2` 中的圆括号并找到一些可能的截断:

我使用 `unmatched` 来找到不匹配的圆括号 `()`、方括号 `[]`、花括号 `{}` 和尖括号 `<>`,但该函数可用于任何配对的标点字符。
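作为参考,调用方式大致如下(假设该函数已在当前 shell 会话中定义,`mag2` 为待检查的制表符分隔文件):
```
$ unmatched mag2 "(" ")"
$ unmatched mag2 "[" "]"
```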
**意外的结尾。**第三种方法查找以尾随空格或非终止标点符号(如逗号或连字符)结尾的数据项。这可以用 `cut` 提取单个字段再通过管道交给 `grep` 来完成,或者用 `awk` 一步完成。在这里,我正在检查以制表符分隔的表 `herp5` 的字段 47,并提取可疑数据项及其行号:
```
cut -f47 herp5 | grep -n "[ ,;:-]$"
或
awk -F"\t" '$47 ~ /[ ,;:-]$/ {print NR": "$47}' herp5
```

用于制表符分隔文件的 awk 命令的全字段版本是:
```
awk -F"\t" '{for (i=1;i<=NF;i++) if ($i ~ /[ ,;:-]$/) \
print "line "NR", field "i":\n"$i}' file
```
**谨慎的想法。**在我对字段做验证测试时,也会发现截断的情况。例如,我可能会检查“年”字段中的条目是否为合理的 4 位数,而一个 `198` 可能是 198n 漏了末位?还是 1898 输错了?丢失了字符的截断数据项是个谜。作为数据审计员,我只能报告(可能的)字符丢失,并建议数据编制者或管理者恢复(可能)丢失的字符。
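这类验证检查本身写起来很简单,比如下面这个最小示例会打印出“年”字段不是合理 4 位数的行(假设文件为制表符分隔、第 5 个字段是年份,字段号仅为示意):
```
$ awk -F"\t" '$5 !~ /^[12][0-9]{3}$/ {print "line "NR": "$5}' file
```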
---
via: <https://www.polydesmida.info/BASHing/2018-07-04.html>
作者:[polydesmida](https://www.polydesmida.info/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,303 | 用 Git 建立和托管网站 | https://opensource.com/article/19/4/building-hosting-website-git | 2019-09-04T13:43:22 | [
"Git",
"Hugo"
] | https://linux.cn/article-11303-1.html |
>
> 你可以让 Git 帮助你轻松发布你的网站。在我们《鲜为人知的 Git 用法》系列的第一篇文章中学习如何做到。
>
>
>

[Git](https://git-scm.com/) 是一个少有的能将如此多的现代计算封装到一个程序之中的应用程序,它可以用作许多其他应用程序的计算引擎。虽然它以跟踪软件开发中的源代码更改而闻名,但它还有许多其他用途,可以让你的生活更轻松、更有条理。在这个 Git 系列中,我们将分享七种鲜为人知的使用 Git 的方法。
创建一个网站曾经是极其简单的,而同时它又是一种黑魔法。回到 Web 1.0 的旧时代(其实当时没有人这么称呼它),你可以打开任何网站,查看其源代码,并对 HTML 及其内联样式和基于表格的布局进行反向工程,在这样的一两个下午之后,你就会感觉自己像一个程序员一样。不过要把你创建的页面放到互联网上,仍然有一些麻烦,因为这意味着你需要处理服务器、FTP、webroot 目录和文件权限。虽然从那时起,现代网站变得愈加复杂,但如果你让 Git 帮助你,自出版可以同样容易(或更容易!)。
### 用 Hugo 创建一个网站
[Hugo](http://gohugo.io) 是一个开源的静态站点生成器。静态网站是过去的 Web 的基础(如果你回溯到很久以前,那就是 Web 的*全部*了)。静态站点有几个优点:它们相对容易编写,因为你不必编写代码;它们相对安全,因为页面上没有执行代码;并且它们可以非常快,因为除了在页面上传输的任何内容之外没有任何处理。
Hugo 并不是唯一一个静态站点生成器。[Grav](http://getgrav.org)、[Pico](http://picocms.org/)、[Jekyll](https://jekyllrb.com)、[Podwrite](http://slackermedia.info/podwrite/) 以及许多其他的同类软件都提供了一种创建一个功能最少的、只需要很少维护的网站的简单方法。Hugo 恰好是内置集成了 GitLab 集成的一个静态站点生成器,这意味着你可以使用免费的 GitLab 帐户生成和托管你的网站。
Hugo 也有一些非常大的用户。例如,如果你曾经去过 [Let’s Encrypt](https://letsencrypt.org/) 网站,那么你已经用过了一个用 Hugo 构建的网站。

#### 安装 Hugo
Hugo 是跨平台的,你可以在 [Hugo 的入门资源](https://gohugo.io/getting-started/installing)中找到适用于 MacOS、Windows、Linux、OpenBSD 和 FreeBSD 的安装说明。
如果你使用的是 Linux 或 BSD,最简单的方法是从软件存储库或 ports 树安装 Hugo。确切的命令取决于你的发行版,但在 Fedora 上,你应该输入:
```
$ sudo dnf install hugo
```
通过打开终端并键入以下内容确认你已正确安装:
```
$ hugo help
```
这将打印 `hugo` 命令的所有可用选项。如果你没有看到,你可能没有正确安装 Hugo 或需要[将该命令添加到你的路径](https://opensource.com/article/17/6/set-path-linux)。
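如果是路径问题,可以把存放 `hugo` 可执行文件的目录追加到 `PATH` 中(假设二进制文件放在 `~/bin`,该目录仅为示意):
```
$ echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc
$ source ~/.bashrc
```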
#### 创建你的站点
要构建 Hugo 站点,你必须有个特定的目录结构,通过输入以下命令 Hugo 将为你生成它:
```
$ hugo new site mysite
```
你现在有了一个名为 `mysite` 的目录,它包含构建 Hugo 网站所需的默认目录。
Git 是你将网站放到互联网上的接口,因此切换到你新的 `mysite` 文件夹,并将其初始化为 Git 存储库:
```
$ cd mysite
$ git init .
```
Hugo 与 Git 配合的很好,所以你甚至可以使用 Git 为你的网站安装主题。除非你计划开发你正在安装的主题,否则可以使用 `--depth` 选项克隆该主题的源的最新状态:
```
$ git clone --depth 1 https://github.com/darshanbaral/mero.git themes/mero
```
现在为你的网站创建一些内容:
```
$ hugo new posts/hello.md
```
使用你喜欢的文本编辑器编辑 `content/posts` 目录中的 `hello.md` 文件。Hugo 接受 Markdown 文件,并会在发布时将它们转换为经过主题化的 HTML 文件,因此你的内容必须采用 [Markdown 格式](https://commonmark.org/help/)。
如果要在帖子中包含图像,请在 `static` 目录中创建一个名为 `images` 的文件夹。将图像放入此文件夹,并使用以 `/images` 开头的绝对路径在标记中引用它们。例如:
```

```
#### 选择主题
你可以在 [themes.gohugo.io](https://themes.gohugo.io/) 找到更多主题,但最好在测试时保持一个基本主题。标准的 Hugo 测试主题是 [Ananke](https://themes.gohugo.io/gohugo-theme-ananke/)。某些主题具有复杂的依赖关系,而另外一些主题如果没有复杂的配置的话,也许不会以你预期的方式呈现页面。本例中使用的 Mero 主题捆绑了一个详细的 `config.toml` 配置文件,但是(为了简单起见)我将在这里只提供基本的配置。在文本编辑器中打开名为 `config.toml` 的文件,并添加三个配置参数:
```
languageCode = "en-us"
title = "My website on the web"
theme = "mero"
[params]
author = "Seth Kenlon"
description = "My hugo demo"
```
#### 预览
在你准备发布之前不必(预先)在互联网上放置任何内容。在你开发网站时,你可以通过启动 Hugo 附带的仅限本地访问的 Web 服务器来预览你的站点。
```
$ hugo server --buildDrafts --disableFastRender
```
打开 Web 浏览器并导航到 <http://localhost:1313> 以查看正在进行的工作。
### 用 Git 发布到 GitLab
要在 GitLab 上发布和托管你的站点,请为你的站点内容创建一个存储库。
要在 GitLab 中创建存储库,请单击 GitLab 的 “Projects” 页面中的 “New Project” 按钮。创建一个名为 `yourGitLabUsername.gitlab.io` 的空存储库,用你的 GitLab 用户名或组名替换 `yourGitLabUsername`。你必须使用此命名方式作为该项目的名称。你也可以稍后为其添加自定义域。
不要在 GitLab 上包含许可证或 README 文件(因为你已经在本地启动了一个项目,现在添加这些文件会使将你的数据推向 GitLab 时更加复杂,以后你可以随时添加它们)。
在 GitLab 上创建空存储库后,将其添加为 Hugo 站点的本地副本的远程位置,该站点已经是一个 Git 存储库:
```
$ git remote add origin [email protected]:skenlon/mysite.git
```
创建名为 `.gitlab-ci.yml` 的 GitLab 站点配置文件并输入以下选项:
```
image: monachus/hugo

variables:
  GIT_SUBMODULE_STRATEGY: recursive

pages:
  script:
    - hugo
  artifacts:
    paths:
      - public
  only:
    - master
```
`image` 参数定义了一个为你的站点提供服务的容器化镜像。其他参数是告诉 GitLab 服务器在将新代码推送到远程存储库时要执行的操作的说明。有关 GitLab 的 CI/CD(持续集成和交付)选项的更多信息,请参阅 [GitLab 文档的 CI/CD 部分](https://docs.gitlab.com/ee/ci/#overview)。
#### 设置排除的内容
你的 Git 存储库已配置好,在 GitLab 服务器上构建站点的命令也已设置,你的站点已准备好发布了。对于你的第一个 Git 提交,你必须采取一些额外的预防措施,以便你不会对你不打算进行版本控制的文件进行版本控制。
首先,将构建你的站点时 Hugo 创建的 `/public` 目录添加到 `.gitignore` 文件。你无需在 Git 中管理已完成发布的站点;你需要跟踪的是你的 Hugo 源文件。
```
$ echo "/public" >> .gitignore
```
如果不创建 Git 子模块,则无法在 Git 存储库中维护另一个 Git 存储库。为了简单起见,请移除嵌入的存储库的 `.git` 目录,以使主题(存储库)只是一个主题(目录)。
请注意,你**必须**将你的主题文件添加到你的 Git 存储库,以便 GitLab 可以访问该主题。如果不提交主题文件,你的网站将无法成功构建。
```
$ mv themes/mero/.git ~/.local/share/Trash/files/
```
或者,你也可以使用像 [Trashy](http://slackermedia.info/trashy) 这样的 `trash` 命令:
```
$ trash themes/mero/.git
```
现在,你可以将本地项目目录的所有内容添加到 Git 并将其推送到 GitLab:
```
$ git add .
$ git commit -m 'hugo init'
$ git push -u origin HEAD
```
### 用 GitLab 上线
将代码推送到 GitLab 后,请查看你的项目页面。有个图标表示 GitLab 正在处理你的构建。第一次推送代码可能需要几分钟,所以请耐心等待。但是,也不要**过于**耐心地干等,因为该图标并不总是可靠地更新。

当你在等待 GitLab 组装你的站点时,请转到你的项目设置并找到 “Pages” 面板。你的网站准备就绪后,它的 URL 就可以用了。该 URL 是 `yourGitLabUsername.gitlab.io/yourProjectName`。导航到该地址以查看你的劳动成果。

如果你的站点无法正确组装,GitLab 提供了可以深入了解 CI/CD 管道的日志。查看错误消息以找出发生了什么问题。
### Git 和 Web
Hugo(或 Jekyll 等类似工具)只是利用 Git 作为 Web 发布工具的一种方式。使用服务器端 Git 挂钩,你可以用最少的脚本设计你自己的 Git-to-web 工作流(见下面的示意脚本)。使用 GitLab 的社区版,你可以自行托管你自己的 GitLab 实例;或者你可以使用 [Gitolite](http://gitolite.com) 或 [Gitea](http://gitea.io) 等替代方案,并使用本文作为自定义解决方案的灵感来源。祝你玩得开心!
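作为参考,前面提到的服务器端挂钩思路,一个极简的 `post-receive` 脚本大致如下(纯属示意,仓库路径、源码目录和分支名都是假设的,真实环境还需要加上错误处理):
```
#!/bin/sh
# 把推送上来的最新代码检出到工作目录,再用 Hugo 重新生成站点
git --work-tree=/srv/mysite-src --git-dir=/srv/mysite.git checkout -f master
hugo --source /srv/mysite-src --destination /var/www/mysite
```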
---
via: <https://opensource.com/article/19/4/building-hosting-website-git>
作者:[Seth Kenlon](https://opensource.com/users/seth) 选题:[lujun9972](https://github.com/lujun9972) 译者:[wxy](https://github.com/wxy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | [Git](https://git-scm.com/) is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git.
Creating a website used to be both sublimely simple and a form of black magic all at once. Back in the old days of Web 1.0 (that's not what anyone actually called it), you could just open up any website, view its source code, and reverse engineer the HTML—with all its inline styling and table-based layout—and you felt like a programmer after an afternoon or two. But there was still the matter of getting the page you created on the internet, which meant dealing with servers and FTP and webroot directories and file permissions. While the modern web has become far more complex since then, self-publication can be just as easy (or easier!) if you let Git help you out.
## Create a website with Hugo
[Hugo](http://gohugo.io) is an open source static site generator. Static sites are what the web used to be built on (if you go back far enough, it was *all* the web was). There are several advantages to static sites: they're relatively easy to write because you don't have to code them, they're relatively secure because there's no code executed on the pages, and they can be quite fast because there's no processing aside from transferring whatever you have on the page.
Hugo isn't the only static site generator out there. [Grav](http://getgrav.org), [Pico](http://picocms.org/), [Jekyll](https://jekyllrb.com), [Podwrite](http://slackermedia.info/podwrite/), and many others provide an easy way to create a full-featured website with minimal maintenance. Hugo happens to be one with GitLab integration built in, which means you can generate and host your website with a free GitLab account.
Hugo has some pretty big fans, too. For instance, if you've ever gone to the Let's Encrypt website, then you've used a site built with Hugo.

### Install Hugo
Hugo is cross-platform, and you can find installation instructions for MacOS, Windows, Linux, OpenBSD, and FreeBSD in [Hugo's getting started resources](https://gohugo.io/getting-started/installing).
If you're on Linux or BSD, it's easiest to install Hugo from a software repository or ports tree. The exact command varies depending on what your distribution provides, but on Fedora you would enter:
`$ sudo dnf install hugo`
Confirm you have installed it correctly by opening a terminal and typing:
`$ hugo help`
This prints all the options available for the **hugo** command. If you don't see that, you may have installed Hugo incorrectly or need to [add the command to your path](https://opensource.com/article/17/6/set-path-linux).
### Create your site
To build a Hugo site, you must have a specific directory structure, which Hugo will generate for you by entering:
`$ hugo new site mysite`
You now have a directory called **mysite**, and it contains the default directories you need to build a Hugo website.
Git is your interface to get your site on the internet, so change directory to your new **mysite** folder and initialize it as a Git repository:
```
$ cd mysite
$ git init .
```
Hugo is pretty Git-friendly, so you can even use Git to install a theme for your site. Unless you plan on developing the theme you're installing, you can use the **--depth** option to clone the latest state of the theme's source:
```
$ git clone --depth 1 \
https://github.com/darshanbaral/mero.git \
themes/mero
```
Now create some content for your site:
`$ hugo new posts/hello.md`
Use your favorite text editor to edit the **hello.md** file in the **content/posts** directory. Hugo accepts Markdown files and converts them to themed HTML files at publication, so your content must be in [Markdown format](https://commonmark.org/help/).
If you want to include images in your post, create a folder called **images** in the **static** directory. Place your images into this folder and reference them in your markup using the absolute path starting with **/images**. For example:
### Choose a theme
You can find more themes at [themes.gohugo.io](https://themes.gohugo.io/), but it's best to stay with a basic theme while testing. The canonical Hugo test theme is [Ananke](https://themes.gohugo.io/gohugo-theme-ananke/). Some themes have complex dependencies, and others don't render pages the way you might expect without complex configuration. The Mero theme used in this example comes bundled with a detailed **config.toml** configuration file, but (for the sake of simplicity) I'll provide just the basics here. Open the file called **config.toml** in a text editor and add three configuration parameters:
```
languageCode = "en-us"
title = "My website on the web"
theme = "mero"
[params]
author = "Seth Kenlon"
description = "My hugo demo"
```
### Preview your site
You don't have to put anything on the internet until you're ready to publish it. While you work, you can preview your site by launching the local-only web server that ships with Hugo.
`$ hugo server --buildDrafts --disableFastRender`
Open a web browser and navigate to **http://localhost:1313** to see your work in progress.
## Publish with Git to GitLab
To publish and host your site on GitLab, create a repository for the contents of your site.
To create a repository in GitLab, click on the **New Project** button in your GitLab Projects page. Create an empty repository called **yourGitLabUsername.gitlab.io**, replacing **yourGitLabUsername** with your GitLab user name or group name. You must use this scheme as the name of your project. If you want to add a custom domain later, you can.
Do not include a license or a README file (because you've started a project locally, adding these now would make pushing your data to GitLab more complex, and you can always add them later).
Once you've created the empty repository on GitLab, add it as the remote location for the local copy of your Hugo site, which is already a Git repository:
`$ git remote add origin [email protected]:skenlon/mysite.git`
Create a GitLab site configuration file called **.gitlab-ci.yml** and enter these options:
```
image: monachus/hugo

variables:
  GIT_SUBMODULE_STRATEGY: recursive

pages:
  script:
    - hugo
  artifacts:
    paths:
      - public
  only:
    - master
```
The **image** parameter defines a containerized image that will serve your site. The other parameters are instructions telling GitLab's servers what actions to execute when you push new code to your remote repository. For more information on GitLab's CI/CD (Continuous Integration and Delivery) options, see the [CI/CD section of GitLab's docs](https://docs.gitlab.com/ee/ci/#overview).
### Set the excludes
Your Git repository is configured, the commands to build your site on GitLab's servers are set, and your site ready to publish. For your first Git commit, you must take a few extra precautions so you're not version-controlling files you don't intend to version-control.
First, add the **/public** directory that Hugo creates when building your site to your **.gitignore** file. You don't need to manage the finished site in Git; all you need to track are your source Hugo files.
`$ echo "/public" >> .gitignore`
You can't maintain a Git repository within a Git repository without creating a Git submodule. For the sake of keeping this simple, move the embedded **.git** directory so that the theme is just a theme.
Note that you *must* add your theme files to your Git repository so GitLab will have access to the theme. Without committing your theme files, your site cannot successfully build.
`$ mv themes/mero/.git ~/.local/share/Trash/files/`
Alternately, use a **trash** command such as [Trashy](http://slackermedia.info/trashy):
`$ trash themes/mero/.git`
Now you can add all the contents of your local project directory to Git and push it to GitLab:
```
$ git add .
$ git commit -m 'hugo init'
$ git push -u origin HEAD
```
## Go live with GitLab
Once your code has been pushed to GitLab, take a look at your project page. An icon indicates GitLab is processing your build. It might take several minutes the first time you push your code, so be patient. However, don't be *too* patient, because the icon doesn't always update reliably.

While you're waiting for GitLab to assemble your site, go to your project settings and find the **Pages** panel. Once your site is ready, its URL will be provided for you. The URL is **yourGitLabUsername.gitlab.io/yourProjectName**. Navigate to that address to view the fruits of your labor.

If your site fails to assemble correctly, GitLab provides insight into the CI/CD pipeline logs. Review the error message for an indication of what went wrong.
## Git and the web
Hugo (or Jekyll or similar tools) is just one way to leverage Git as your web publishing tool. With server-side Git hooks, you can design your own Git-to-web pipeline with minimal scripting. With the community edition of GitLab, you can self-host your own GitLab instance or you can use an alternative like [Gitolite](http://gitolite.com) or [Gitea](http://gitea.io) and use this article as inspiration for a custom solution. Have fun!
|
11,305 | 你的企业安全软件是否在背后偷传数据? | https://www.networkworld.com/article/3429559/is-your-enterprise-software-committing-security-malpractice.html | 2019-09-04T14:28:58 | [
"安全"
] | https://linux.cn/article-11305-1.html |
>
> ExtraHop 发现一些企业安全和分析软件正在“私下回拨”,悄悄地将信息上传到客户网络外的服务器上。
>
>
>

当初这个博客还专注于微软的一切时,我就经常抱怨并抨击 Windows 10 的间谍行为。嗯,显然,这些跟企业安全、分析和硬件管理工具所做的相比,都不算什么。
一家叫做 ExtraHop 的分析公司检查了其客户的网络,并发现客户的安全和分析软件悄悄地将信息上传到客户网络外的服务器上。这家公司上周发布了一份[报告来进行警示](https://www.extrahop.com/company/press-releases/2019/extrahop-issues-warning-about-phoning-home/)。
ExtraHop 特意选择不对这四个例子中的企业安全工具进行点名,这些工具在没有警告用户或使用者的情况发送了数据。这家公司的一位发言人通过电子邮件告诉我,“ExtraHop 希望关注报告的这个趋势,我们已经多次观察到了这种令人担心的情况。这个严重的问题需要企业的更多关注,而只是关注几个特定的软件会削弱这个严重的问题需要得到更多关注的观点。”
### 产品在安全提交传输方面玩忽职守,并且偷偷地传输数据到异地
[ExtraHop 的报告](https://www.extrahop.com/resources/whitepapers/eh-security-advisory-calling-home-success/)中称发现了一系列的产品在偷偷地传输数据回自己的服务器上,包括终端安全软件、医院设备管理软件、监控摄像头、金融机构使用的安全分析软件等。报告中同样指出,这些应用涉嫌违反了欧洲的[通用数据隐私法规(GDPR)](https://www.csoonline.com/article/3202771/general-data-protection-regulation-gdpr-requirements-deadlines-and-facts.html)。
在每个案例里,ExtraHop 都提供了这些软件传输数据到异地的证据,在其中一个案例中,一家公司注意到,大约每隔 30 分钟,一台连接了网络的设备就会发送 UDP 数据包给一个已知的恶意 IP 地址。有问题的是一台某国制造的安全摄像头,这个摄像头正在偷偷联系一个和某国有联系的已知的恶意 IP 地址。
该摄像头很可能是办公室的一名员工出于个人安全目的私自安装的,这暴露了影子 IT 的隐患。
而对于医院设备的管理工具和金融公司的分析工具,这些工具违反了数据安全法,公司面临着法律风险——即使公司对此并不知情。
该医院的医疗设备管理产品应该只使用医院的 WiFi 网络,以确保患者的数据隐私和 HIPAA 合规。ExtraHop 注意到,管理初始设备部署的工作站正在通过加密的 SSL(443 端口)连接到供应商自己的云存储服务器,这是一次严重的 HIPAA 违规。
ExtraHop 指出,尽管这些例子中可能没有任何恶意活动。但它仍然违反了法律规定,管理员需要密切关注他们的网络,以此来监视异常活动的流量。
“要明确的是,我们不知道供应商为什么要把数据偷偷传回自己的服务器。这些公司都是受人尊敬的安全和 IT 供应商,并且很有可能,这些数据是由他们的程序框架设计用于合法目的的,或者是错误配置的结果”,报告中说。
### 如何减轻数据外传的安全风险
为了解决这种安全方面玩忽职守的问题,ExtraHop 建议公司做下面这五件事:
* 监视供应商的活动:在你的网络上密切注意供应商的非正常活动,无论他们是活跃供应商、以前的供应商,还是评估后的供应商。
* 监控出口流量:了解出口流量,尤其是来自域控制器等敏感资产的出口流量。当检测到出口流量时,始终将其与核准的应用程序和服务进行匹配(参见列表后的示例)。
* 跟踪部署:在评估过程中,跟踪软件代理的部署。
* 理解监管方面的考量因素:了解数据跨越政治、地理边界的监管和合规考量因素。
* 理解合同协议:跟踪数据的使用是否符合供应商合同上的协议。
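针对上面第二条“监控出口流量”,一个最简单的起步示例是用 `tcpdump` 观察敏感主机发往内网之外的连接(仅为示意,网卡名与主机、网段地址都是假设的;生产环境中更常用 Zeek、NetFlow 这类专门工具):
```
$ sudo tcpdump -i eth0 src host 10.0.0.5 and not dst net 10.0.0.0/8
```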
---
via: <https://www.networkworld.com/article/3429559/is-your-enterprise-software-committing-security-malpractice.html>
作者:[Andy Patrizio](https://www.networkworld.com/author/Andy-Patrizio/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[hopefully2333](https://github.com/hopefully2333) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
11,306 | Emacs:Eldoc 全局化了 | https://emacsredux.com/blog/2018/11/13/eldoc-goes-global/ | 2019-09-05T04:58:12 | [
"Emacs"
] | https://linux.cn/article-11306-1.html | 
最近我注意到 Emacs 25.1 增加了一个名为 `global-eldoc-mode` 的模式,它是流行的 `eldoc-mode` 的一个全局化的变体。而且与 `eldoc-mode` 不同的是,`global-eldoc-mode` 默认是开启的!
这意味着你可以删除 Emacs 配置中为主模式开启 `eldoc-mode` 的代码了:
```
;; That code is now redundant
(add-hook 'emacs-lisp-mode-hook #'eldoc-mode)
(add-hook 'ielm-mode-hook #'eldoc-mode)
(add-hook 'cider-mode-hook #'eldoc-mode)
(add-hook 'cider-repl-mode-hook #'eldoc-mode)
```
[有人说](https://emacs.stackexchange.com/questions/31414/how-to-globally-disable-eldoc) `global-eldoc-mode` 在某些不支持的模式中会有性能问题。我自己从未遇到过,但若你想禁用它,则只需要这样:
```
(global-eldoc-mode -1)
```
现在是时候清理我的配置了!删除代码就是这么爽!
---
via: <https://emacsredux.com/blog/2018/11/13/eldoc-goes-global/>
作者:[Bozhidar Batsov](https://emacsredux.com) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # Eldoc Goes Global
I recently noticed that Emacs 25.1 had added a global variant of the popular `eldoc-mode`, called `global-eldoc-mode`. What’s more - unlike `eldoc-mode`, `global-eldoc-mode` is enabled by default!
This means that you can get rid of all the code in your Emacs config that was wiring up `eldoc-mode` for major modes that support it:
```
;; That code is now redundant
(add-hook 'emacs-lisp-mode-hook #'eldoc-mode)
(add-hook 'ielm-mode-hook #'eldoc-mode)
(add-hook 'cider-mode-hook #'eldoc-mode)
(add-hook 'cider-repl-mode-hook #'eldoc-mode)
```
There are [some reports](https://emacs.stackexchange.com/questions/31414/how-to-globally-disable-eldoc) that `global-eldoc-mode` is causing performance issues in modes that don’t support it. I’ve never experienced this myself, but if you want to disable it you can simply do so like this:
```
(global-eldoc-mode -1)
```
Now it’s time to clean up my config! Deleting code always feels so good! |