id (int64) | title (string) | fromurl (string) | date (timestamp[s]) | tags (sequence) | permalink (string) | content (string) | fromurl_status (int64) | status_msg (string) | from_content (string) |
---|---|---|---|---|---|---|---|---|---|
10,030 | Cloud Commander:一个有控制台和编辑器的 Web 文件管理器 | https://www.ostechnix.com/cloud-commander-a-web-file-manager-with-console-and-editor/ | 2018-09-19T09:01:56 | [
"文件管理器"
] | https://linux.cn/article-10030-1.html | 
**Cloud Commander** 是一个基于 Web 的文件管理程序,它允许你通过任何计算机、手机或平板电脑的浏览器查看、访问和管理系统文件或文件夹。它有两个简单而经典的面板,并且会根据你设备的屏幕尺寸自动调整大小。它还拥有两款内置的文本编辑器 **Dword** 和 **Edward**,它们支持语法高亮,并带有一个支持执行系统命令的控制台。因此,您可以随时随地编辑文件。Cloud Commander 服务端是一款可在 Linux、Windows、Mac OS X 上运行的跨平台应用,而其客户端可以在任何一款浏览器上运行。它使用 **JavaScript/Node.js** 编写,并以 **MIT** 许可证发布。
在这个简易教程中,让我们看一看如何在 Ubuntu 18.04 LTS 服务器上安装 Cloud Commander。
### 前提条件
像我之前提到的,Cloud Commander 是用 Node.js 写的。所以为了安装 Cloud Commander,我们需要首先安装 Node.js。要执行安装,请参考下面的指南。
* [如何在 Linux 上安装 Node.js](https://www.ostechnix.com/install-node-js-linux/)
### 安装 Cloud Commander
在安装 Node.js 之后,运行下列命令安装 Cloud Commander:
```
$ npm i cloudcmd -g
```
祝贺!Cloud Commander 已经安装好了。让我们往下继续看看 Cloud Commander 的基本使用。
### 开始使用 Cloud Commander
运行以下命令启动 Cloud Commander:
```
$ cloudcmd
```
**输出示例:**
```
url: http://localhost:8000
```
现在,打开你的浏览器并转到链接:`http://localhost:8000` 或 `http://IP-address:8000`。
从现在开始,您可以通过本地系统、远程系统、移动设备或平板电脑等的 Web 浏览器,直接创建、删除、查看和管理文件或文件夹。

如上面的截图所示,Cloud Commander 有两个面板、十个热键(`F1` 到 `F10`),还有一个控制台。
每个热键各执行一项任务。
* `F1` – 帮助
* `F2` – 重命名文件/文件夹
* `F3` – 查看文件/文件夹
* `F4` – 编辑文件
* `F5` – 复制文件/文件夹
* `F6` – 移动文件/文件夹
* `F7` – 创建新目录
* `F8` – 删除文件/文件夹
* `F9` – 打开菜单
* `F10` – 打开设置
#### Cloud Commander 控制台
点击控制台图标,这将打开系统默认的命令行界面。

在此控制台中,您可以执行各种管理任务,例如安装软件包、删除软件包、更新系统等。您甚至可以关闭或重启系统。因此,Cloud Commander 不仅仅是一个文件管理器,还具有远程管理工具的功能。
#### 创建文件/文件夹
要创建新的文件或文件夹,右键单击任意空白位置,然后选择 “New -> File or Directory”。

#### 查看文件
你可以查看图片,也可以播放音频和视频文件。

#### 上传文件
另一个很酷的特性是我们可以从任何系统或设备简单地上传一个文件到 Cloud Commander 系统。
要上传文件,右键单击 Cloud Commander 面板的任意空白处,并且单击“Upload”选项。

选择你想要上传的文件。
另外,你也可以从 Google 云盘、Dropbox、Amazon 云盘、Facebook、Twitter、Gmail、GitHub、Picasa、Instagram 等诸多云服务上传文件。
要从云端上传文件,右键单击面板的任意空白处,并选择 “Upload from Cloud”。

选择你想使用的网络服务,例如谷歌云盘,然后点击 “Connect to Google drive” 按钮。

下一步,用 Cloud Commander 验证你的谷歌云端硬盘,从谷歌云端硬盘选择文件并点击“Upload”。

#### 更新 Cloud Commander
要更新到最新的可用版本,执行下面的命令:
```
$ npm update cloudcmd -g
```
#### 总结
据我测试,它运行得很好。在我的 Ubuntu 服务器上测试期间,我没有遇到任何问题。此外,Cloud Commander 不仅是一个基于 Web 的文件管理器,还可以充当执行大多数 Linux 管理任务的远程管理工具。您可以创建、重命名、删除、编辑和查看文件/文件夹。此外,您可以像在本地系统的终端中那样安装、更新、升级和删除任何软件包。当然,您甚至可以从 Cloud Commander 的控制台直接关闭或重启系统。还有什么需要的吗?尝试一下,你会发现它很有用。
目前为止就这样吧。我将很快在这里发表另一篇有趣的文章。在此之前,请继续关注我们。
干杯!
---
via: <https://www.ostechnix.com/cloud-commander-a-web-file-manager-with-console-and-editor/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[fuzheng1998](https://github.com/fuzheng1998) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,031 | 如何使用 Apache 构建 URL 缩短服务 | https://opensource.com/article/18/7/apache-url-shortener | 2018-09-19T22:30:27 | [
"短链接",
"Apache"
] | https://linux.cn/article-10031-1.html |
>
> 用 Apache HTTP 服务器的 mod\_rewrite 功能创建你自己的短链接。
>
>
>

很久以前,人们开始在 Twitter 上分享链接。140 个字符的限制意味着 URL 可能消耗一条推文的大部分(或全部),因此人们使用 URL 缩短服务。最终,Twitter 加入了一个内置的 URL 缩短服务([t.co](http://t.co))。
字符数现在不重要了,但还有其他原因要缩短链接。首先,缩短服务可以提供分析功能 —— 你可以看到你分享的链接的受欢迎程度。它还简化了制作易于记忆的 URL。例如,[bit.ly/INtravel](http://bit.ly/INtravel) 比 <https://www.in.gov/ai/appfiles/dhs-countyMap/dhsCountyMap.html> 更容易记住。如果你想预先分享一个链接,但还不知道最终地址,这时 URL 缩短服务就可以派上用场。
与任何技术一样,URL 缩短服务并非都是正面的。通过屏蔽最终地址,缩短的链接可用于指向恶意或冒犯性内容。但是,如果你仔细上网,URL 缩短服务是一个有用的工具。
我们之前在网站上[发布过缩短服务的文章](https://opensource.com/article/17/3/url-link-shortener),但也许你想要运行一些由简单的文本文件支持的缩短服务。在本文中,我们将展示如何使用 Apache HTTP 服务器的 mod\_rewrite 功能来设置自己的 URL 缩短服务。如果你不熟悉 Apache HTTP 服务器,请查看 David Both 关于[安装和配置](https://opensource.com/article/18/2/how-configure-apache-web-server)它的文章。
### 创建一个 VirtualHost
在本教程中,我假设你购买了一个很酷的域名,你将它专门用于 URL 缩短服务。例如,我的网站是 [funnelfiasco.com](http://funnelfiasco.com),所以我买了 [funnelfias.co](http://funnelfias.co) 用于我的 URL 缩短服务(好吧,它不是很短,但它可以满足我的虚荣心)。如果你不将缩短服务作为单独的域运行,请跳到下一部分。
第一步是设置将用于 URL 缩短服务的 VirtualHost。有关 VirtualHost 的更多信息,请参阅 [David Both 的文章](https://opensource.com/article/18/3/configuring-multiple-web-sites-apache)。这步只需要几行:
```
<VirtualHost *:80>
ServerName funnelfias.co
</VirtualHost>
```
### 创建重写规则
此服务使用 HTTPD 的重写引擎来重写 URL。如果你在上面的部分中创建了 VirtualHost,那么下面的配置要放进你的 VirtualHost 部分;否则,放进服务器的 VirtualHost 或 HTTPD 主配置中。
```
RewriteEngine on
RewriteMap shortlinks txt:/data/web/shortlink/links.txt
RewriteRule ^/(.+)$ ${shortlinks:$1} [R=temp,L]
```
第一行只是启用重写引擎。第二行从一个文本文件构建短链接的映射。上面的路径只是一个例子,你需要使用你系统上的有效路径(并确保它可以被运行 HTTPD 的用户帐户读取)。最后一行重写 URL。在此例中,它接受任意字符并在重写映射中查找它们。你可能希望你的重写以特定的字符串开头。例如,如果你希望所有缩短的链接都是 “slX” 的形式(其中 X 是数字),则将上面的 `(.+)` 替换为 `(sl\d+)`。
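例如,把匹配模式限制为 “sl + 数字” 形式之后,规则大致如下(仅为示意,映射文件路径沿用上面的示例):

```
RewriteEngine on
RewriteMap shortlinks txt:/data/web/shortlink/links.txt
RewriteRule ^/(sl\d+)$ ${shortlinks:$1} [R=temp,L]
```

这样,不符合该模式的请求就不会再去查询映射文件。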
我在这里使用了临时重定向(HTTP 302)。这能让我稍后更新目标 URL。如果希望短链接始终指向同一目标,则可以使用永久重定向(HTTP 301)。用 `permanent` 替换第三行的 `temp`。
### 构建你的映射
编辑配置文件中 `RewriteMap` 行所指定的文件。其格式是空格分隔的键值存储。每行放一个链接:
```
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
swody1 https://www.spc.noaa.gov/products/outlook/day1otlk.html
```
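可以在命令行上用 awk 模拟一下这种键值查找,直观感受映射文件的工作方式(仅为示意,HTTPD 内部并不是这样实现的):

```shell
# 创建一个示例映射文件(空格分隔的键值对)
cat > links.txt <<'EOF'
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
EOF

# 给定键 “osdc”,查找并输出对应的目标 URL
awk -v key=osdc '$1 == key { print $2 }' links.txt
# 输出:https://opensource.com/users/bcotton
```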
### 重启 HTTPD
最后一步是重启 HTTPD 进程。这是通过 `systemctl restart httpd` 或类似命令完成的(命令和守护进程名称可能因发行版而不同)。你的链接缩短服务现已启动并运行。当你准备编辑映射时,无需重新启动 Web 服务器。你所要做的就是保存文件,Web 服务器将获取到差异。
### 未来的工作
此示例为你提供了一个基本的 URL 缩短服务。如果你想把开发自己的管理界面作为一个学习项目,它可以是一个很好的起点。或者,你也可以直接用它来为那些难以记住的 URL 分享易记的短链接。
---
via: <https://opensource.com/article/18/7/apache-url-shortener>
作者:[Ben Cotton](https://opensource.com/users/bcotton) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Long ago, folks started sharing links on Twitter. The 140-character limit meant that URLs might consume most (or all) of a tweet, so people turned to URL shorteners. Eventually, Twitter added a built-in URL shortener ([t.co](http://t.co)).
Character count isn't as important now, but there are still other reasons to shorten links. For one, the shortening service may provide analytics—you can see how popular the links are that you share. It also simplifies making easy-to-remember URLs. For example, [bit.ly/INtravel](http://bit.ly/INtravel) is much easier to remember than [https://www.in.gov/ai/appfiles/dhs-countyMap/dhsCountyMap.html](https://www.in.gov/ai/appfiles/dhs-countyMap/dhsCountyMap.html). And URL shorteners can come in handy if you want to pre-share a link but don't know the final destination yet.
Like any technology, URL shorteners aren't all positive. By masking the ultimate destination, shortened links can be used to direct people to malicious or offensive content. But if you surf carefully, URL shorteners are a useful tool.
We [covered shorteners previously](https://opensource.com/article/17/3/url-link-shortener) on this site, but maybe you want to run something simple that's powered by a text file. In this article, we'll show how to use the Apache HTTP server's mod_rewrite feature to set up your own URL shortener. If you're not familiar with the Apache HTTP server, check out David Both's article on [installing and configuring](https://opensource.com/article/18/2/how-configure-apache-web-server) it.
## Create a VirtualHost
In this tutorial, I'm assuming you bought a cool domain that you'll use exclusively for the URL shortener. For example, my website is [funnelfiasco.com](http://funnelfiasco.com), so I bought [funnelfias.co](http://funnelfias.co) to use for my URL shortener (okay, it's not exactly short, but it feeds my vanity). If you won't run the shortener as a separate domain, skip to the next section.
The first step is to set up the VirtualHost that will be used for the URL shortener. For more information on VirtualHosts, see [David Both's article](https://opensource.com/article/18/3/configuring-multiple-web-sites-apache). This setup requires just a few basic lines:
```
<VirtualHost *:80>
ServerName funnelfias.co
</VirtualHost>
```
## Create the rewrites
This service uses HTTPD's rewrite engine to rewrite the URLs. If you created a VirtualHost in the section above, the configuration below goes into your VirtualHost section. Otherwise, it goes in the VirtualHost or main HTTPD configuration for your server.
```
RewriteEngine on
RewriteMap shortlinks txt:/data/web/shortlink/links.txt
RewriteRule ^/(.+)$ ${shortlinks:$1} [R=temp,L]
```
The first line simply enables the rewrite engine. The second line builds a map of the short links from a text file. The path above is only an example; you will need to use a valid path on your system (make sure it's readable by the user account that runs HTTPD). The last line rewrites the URL. In this example, it takes any characters and looks them up in the rewrite map. You may want to have your rewrites use a particular string at the beginning. For example, if you wanted all your shortened links to be of the form "slX" (where X is a number), you would replace `(.+)` above with `(sl\d+)`.
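For example, a rule restricted to that "slX" pattern might look like this (illustrative only; the map path follows the earlier example):

```
RewriteEngine on
RewriteMap shortlinks txt:/data/web/shortlink/links.txt
RewriteRule ^/(sl\d+)$ ${shortlinks:$1} [R=temp,L]
```

Requests that don't match the pattern will then never hit the map at all.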
I used a temporary (HTTP 302) redirect here. This allows me to update the destination URL later. If you want the short link to always point to the same target, you can use a permanent (HTTP 301) redirect instead. Replace `temp` on line three with `permanent`.
## Build your map
Edit the file you specified on the `RewriteMap` line of the configuration. The format is a space-separated key-value store. Put one link on each line:
```
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
swody1 https://www.spc.noaa.gov/products/outlook/day1otlk.html
```
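To get a feel for how this key-value lookup behaves, you can simulate it on the command line with awk (illustrative only; this is not how HTTPD implements RewriteMap internally):

```shell
# create a sample map file (space-separated key-value pairs)
cat > links.txt <<'EOF'
osdc https://opensource.com/users/bcotton
twitter https://twitter.com/funnelfiasco
EOF

# look up the target URL for the key "osdc"
awk -v key=osdc '$1 == key { print $2 }' links.txt
# prints: https://opensource.com/users/bcotton
```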
## Restart HTTPD
The last step is to restart the HTTPD process. This is done with `systemctl restart httpd` or similar (the command and daemon name may differ by distribution). Your link shortener is now up and running. When you're ready to edit your map, you don't need to restart the web server. All you have to do is save the file, and the web server will pick up the differences.
## Future work
This example gives you a basic URL shortener. It can serve as a good starting point if you want to develop your own management interface as a learning project. Or you can just use it to share memorable links to forgettable URLs.
|
10,032 | 我从编程面试中学到的 | https://medium.freecodecamp.org/what-i-learned-from-programming-interviews-29ba49c9b851 | 2018-09-19T23:25:00 | [
"面试"
] | https://linux.cn/article-10032-1.html | 
在 2017 年,我参加了“计算机行业中的女性” [Grace Hopper 庆祝活动](https://anitab.org/event/2017-grace-hopper-celebration-women-computing/)。这个活动是这类科技活动中最大的一个,共有 17,000 名女性 IT 工作者参加。
这个会议有个大型的配套招聘会,会上有招聘公司来面试会议参加者。有些人甚至现场拿到 offer。我在现场晃荡了一下,注意到一些应聘者看上去非常紧张忧虑。我还隐隐听到应聘者之间的谈话,其中一些人谈到在面试中做的并不好。
我走近我听到谈话的那群人并和她们聊了起来并给了一些面试上的小建议。我想我的建议还是比较偏基本的,如“(在面试时)一开始给出个能工作的解决方案也还说的过去”之类的,但是当她们听到我的一些其他的建议时还是颇为吃惊。
为了能更多地帮到像她们一样的面试新手,我收集了一些过去对我有用的小点子,并已经在一期[播客节目](https://thewomenintechshow.com/2017/12/18/programming-interviews/)中分享过。它们也是这篇文章的主题。
为了实习生职位和全职工作,我做过很多次的面试。当我还在大学主修计算机科学时,学校每个秋季学期都有招聘会,第一轮招聘会在校园里举行。(我在第一和最后一轮都搞砸过。)不过,每次面试后,我都会反思哪些方面我能做的更好,我还会和朋友们做模拟面试,这样我就能从他们那儿得到更多的面试反馈。
不管我们是怎么找工作的:通过工作中介、人脉网络,或者学校招聘,招聘流程中都会涉及技术面试。
近年来,我注意到了一些新的不同的面试形式出现了:
* 与招聘方的一位工程师结对编程
* 网络在线测试及在线编码
* 白板编程(LCTT 译注: 这种形式应该不新了)
我将重点谈谈白板面试,这种形式我经历得最多。我有过很多次面试,有些挺不错的,有些则被我搞砸了。
#### 我做错的地方
首先,我想回顾一下我做的不好的地方。知错能改,善莫大焉。
当面试官提出一个要我解决的问题时,我立刻就开始在白板上写代码,什么都不问。
这里我犯了两个错误:
#### 没有澄清对解决问题有关键作用的信息
比如,我们是否只需处理数字,还是也要处理字符串?要支持多种数据类型吗?如果你在开始解题前不问这些问题,你的面试官会有一种不好的印象:这个人进了我们公司,恐怕也会在没问清楚到底要做什么之前就开始项目工作。而问清需求恰恰是在工作场合很重要的一个工作习惯。公司可不像学校,你在开始工作前不会得到写有所有详细步骤的作业说明,你得靠自己找到这些步骤并定义它们。
#### 只会默默思考,不去记录想法或和面试官沟通
在面试中,很多时候我也会傻傻站在那思考,什么都不写。我和一个朋友模拟面试的时候,他告诉我因为他曾经和我一起工作过所以他知道我在思考,但是如果他是个陌生的面试官的话,他会觉得我正站在那冥思苦想,毫无头绪。不要急匆匆的直奔解题而去是很重要的。花点时间多想想各种解题的可能性。有时候面试官会乐意和你一起探索解题的步骤。不管怎样,这就是在一家公司开工作会议的的普遍方式,大家各抒己见,一起讨论如何解决问题。
### 想到一个解题方法
在你开始写代码之前,如果你能总结一下要使用到的算法就太棒了。不要上来就写代码并认为你的代码肯定能解决问题。
这是对我管用的步骤:
1. 头脑风暴
2. 写代码
3. 处理错误路径
4. 测试
#### 1、 头脑风暴
对我来说,我会首先通过一些例子来视觉化我要解决的问题。比如说如果这个问题和数据结构中的树有关,我就会从树底层的空节点开始思考,如何处理一个节点的情况呢?两个节点呢?三个节点呢?这能帮助你从具体例子里抽象出你的解决方案。
在白板上先写下你的算法要做的事情列表。这样做,你往往能在开始写代码前就发现 bug 和缺陷(不过你可得掌握好时间)。我犯过的一个错误是我花了过多的时间在澄清问题和头脑风暴上,最后几乎没有留下时间写代码。你的面试官可能没有机会看到你在白板上写代码,这可太糟了。你可以带块手表,或者如果房间里有钟,你也可以抬头看看时间。有些时候面试官会提醒你你已经得到了所有的信息(这时你就不要再问别的了):“我想我们已经把所有需要的信息都澄清了,让我们写代码实现吧”。
#### 2、 开始写代码,一气呵成
如果你还没有得到问题的完美解决方法,从最原始的解法开始总是可以的。当你在向面试官解释最显而易见的解法时,要想想怎么去完善它,并指明这种做法是最原始的、未加优化的。(请熟悉算法中的 `O()` 的概念,这对面试非常有用。)在向面试官提交前,请仔细检查你的解决方案两三遍。面试官有时会给你些提示,“还有更好的方法吗?”,这句话的意思是面试官在提示你还有更优的解决方案。
#### 3、 错误处理
当你在编码时,对你想做错误处理的代码行加个注释。当面试官说,“很好,这里你想到了错误处理。你想怎么处理呢?抛出异常还是返回错误码?”,这将给你一个机会引出关于代码质量的讨论。当然,这种地方提出几个就够了。有时,面试官为了节省编码时间,会告诉你可以假设外界输入的参数都已经通过了校验。不管怎样,你都要展现出你对错误处理和代码质量重要性的认识。
#### 4、 测试
在编码完成后,用你在前面头脑风暴中写的用例来在你脑子里“跑”一下你的代码,确定万无一失。例如你可以说,“让我用前面写下的树的例子来跑一下我的代码,如果是一个节点是什么结果,如果是两个节点是什么结果……”
在你完成之后,面试官有时会问你打算怎么测试你的代码,会想到哪些测试用例。我建议你按下面不同的分类来组织你的测试用例:
一些分类可以为:
1. 性能
2. 错误用例
3. 期望的正常用例
对于性能测试,要考虑极端数量下的情况。例如,如果问题和列表有关,你可以说你会用一个非常大的列表以及一个非常小的列表来测试。如果和数字有关,你会测试系统中的最大整数和最小整数。我建议读一些有关软件测试的书来获得更多知识。在这个领域我最喜欢的书是《[我们在微软如何测试软件](https://www.amazon.com/How-We-Test-Software-Microsoft/dp/0735624259)》。
对于错误用例,想一下什么是期望的错误情况并一一写下。
对于正向期望用例,想想用户需求是什么?你的解决方案要解决什么问题?这些都可以成为正向期望用例。
### “你还有什么要问我的吗?”
面试最后总是会留几分钟给你提问。我建议你在面试前写下你想问的问题。千万别说“我没什么问题了”,就算你觉得面试砸了,或者你对这家公司不怎么感兴趣,你总有些东西可以问。你甚至可以问面试官最喜欢自己工作的哪一点,最讨厌哪一点;或者你可以问问面试官的工作具体是什么,在用什么技术和实践。不要因为觉得自己在面试中做得不好而心灰意冷,就什么问题都不问。
### 申请一份工作
关于找工作和申请工作,有人曾经告诉我,你应该去找你真正有激情工作的地方。去找一家你喜欢的公司,或者你喜欢使用的产品,看看你能不能去那儿工作。
我个人并不推荐你用上述的方法去找工作。你会排除很多很好的公司,特别是你是在找实习工作或者入门级的职位时。
你也可以把注意力集中在其他目标上,如:我想从这份工作里获得哪方面的更多经验?这份工作是关于云计算、Web 开发,还是人工智能?在招聘会上与招聘公司沟通时,看看他们有没有这些领域的职位。你可能会在一家并不在你心仪列表上的公司(或非盈利机构)里找到你想要的职位。
#### 换组
在这家公司里的第一个组里呆了一年半以后,我觉得是时候去探索一下不同的东西了。我找到了一个我喜欢的组并进行了 4 轮面试。结果我搞砸了。
我什么都没有准备,甚至都没在白板上练练手。我当时的逻辑是,如果我都已经在一家公司干了快 2 年了,我还需要练什么?我完全错了,我在接下去的白板面试中跌跌撞撞。我的板书写得太小,而且因为没有从最左上角开始写代码,我的代码大大超出了一个白板的空间,这些都导致了白板面试失败。
我在面试前也没有刷过数据结构和算法题。如果我做了的话,我将会在面试中更有信心。就算你已经在一家公司担任了软件工程师,在你去另外一个组面试前,我强烈建议你在一块白板上演练一下如何写代码。
对于换项目组这件事,如果你是在公司内部换组的话,事先能同那个组的人非正式聊聊会很有帮助。对于这一点,我发现几乎每个人都很乐于和你一起吃个午饭。人一般都会在中午有空,约不到人或者别人正好有会议冲突的风险会很低。这是一种非正式的途径来了解你想去的组正在干什么,以及这个组成员个性是怎么样的。相信我,你能从一次午餐中得到很多信息,这可会对你的正式面试帮助不小。
非常重要的一点是,当你面试一个特定的组时,就算你在面试中表现得很好,也很可能因为文化不契合而拿不到 offer。这也是为什么我一开始就想见见组里不同的人(虽然有时这不太可能)。我希望你不要被一次拒绝击倒,请保持开放的心态,选择新的机会,并多多练习。
以上内容选自 《[The Women in Tech Show: Technical Interviews with Prominent Women in Tech](https://thewomenintechshow.com/)》的 “[编程面试](https://thewomenintechshow.com/2017/12/18/programming-interviews/)”章节,
---
作者简介:
微软研究院 Software Engineer II, [www.thewomenintechshow.com](http://www.thewomenintechshow.com) 站长,所有观点都只代表本人意见。
---
via: <https://medium.freecodecamp.org/what-i-learned-from-programming-interviews-29ba49c9b851>
作者:[Edaena Salinas](https://medium.freecodecamp.org/@edaenas) 译者:[DavidChenLiang](https://github.com/DavidChenLiang) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,034 | 初学者指南:ZFS 是什么,为什么要使用 ZFS? | https://itsfoss.com/what-is-zfs/ | 2018-09-20T23:17:00 | [
"文件系统",
"ZFS"
] | https://linux.cn/article-10034-1.html | 
今天,我们来谈论一下 ZFS,一个先进的文件系统。我们将讨论 ZFS 从何而来,它是什么,以及为什么它在科技界和企业界如此受欢迎。
虽然我是一个美国人,但我更喜欢读成 ZedFS 而不是 ZeeFS,因为前者听起来更酷一些。你可以根据你的个人喜好来发音。
>
> 注意:在这篇文章中,你将会看到 ZFS 被提到很多次。当我在谈论特性和安装的时候,我所指的是 OpenZFS 。自从<ruby> 甲骨文 <rt> Oracle </rt></ruby>公司放弃 OpenSolaris 项目之后,ZFS(由甲骨文公司开发)和 OpenZFS 已经走向了不同的发展道路。
>
>
>
### ZFS 的历史
<ruby> Z 文件系统 <rt> Z File System </rt></ruby>(ZFS)是由 [Matthew Ahrens 和 Jeff Bonwick](https://wiki.gentoo.org/wiki/ZFS) 在 2001 年开发的。ZFS 是作为[<ruby> 太阳微系统 <rt> Sun MicroSystem </rt></ruby>](http://en.wikipedia.org/wiki/Sun_Microsystems) 公司的 [OpenSolaris](http://en.wikipedia.org/wiki/Opensolaris) 的下一代文件系统而设计的。在 2008 年,ZFS 被移植到了 FreeBSD 。同一年,一个移植 [ZFS 到 Linux](https://zfsonlinux.org/) 的项目也启动了。然而,由于 ZFS 是<ruby> <a href="https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License"> 通用开发和发布许可证 </a> <rt> Common Development and Distribution License </rt></ruby>(CDDL)许可的,它和 [GNU 通用公共许可证](https://en.wikipedia.org/wiki/GNU_General_Public_License) 不兼容,因此不能将它迁移到 Linux 内核中。为了解决这个问题,绝大多数 Linux 发行版提供了一些方法来安装 ZFS 。
在甲骨文公司收购太阳微系统公司之后不久,OpenSolaris 就闭源了,这使得 ZFS 的之后的开发也变成闭源的了。许多 ZFS 开发者对这件事情非常不满。[三分之二的 ZFS 核心开发者](https://wiki.gentoo.org/wiki/ZFS),包括 Ahrens 和 Bonwick,因为这个决定而离开了甲骨文公司。他们加入了其它公司,并于 2013 年 9 月创立了 [OpenZFS](http://www.open-zfs.org/wiki/Main_Page) 这一项目。该项目引领着 ZFS 的开源开发。
让我们回到上面提到的许可证问题上。既然 OpenZFS 项目已经和 Oracle 公司分离开了,有人可能好奇他们为什么不使用和 GPL 兼容的许可证,这样就可以把它加入到 Linux 内核中了。根据 [OpenZFS 官网](http://www.open-zfs.org/wiki/FAQ#Do_you_plan_to_release_OpenZFS_under_a_license_other_than_the_CDDL.3F) 的介绍,更改许可证需要联系所有为当前 OpenZFS 实现贡献过代码的人(包括初始的公共 ZFS 代码以及 OpenSolaris 代码),并得到他们的许可才行。这几乎是不可能的(因为一些贡献者可能已经去世了或者很难找到),因此他们决定保留原来的许可证。
### ZFS 是什么,它有什么特性?
正如前面所说过的,ZFS 是一个先进的文件系统。因此,它有一些有趣的[特性](https://wiki.archlinux.org/index.php/ZFS)。比如:
* 存储池
* 写时拷贝
* 快照
* 数据完整性验证和自动修复
* RAID-Z
* 最大单个文件大小为 16 EB(1 EB = 1024 PB)
* 最大 256 千万亿(256\*10<sup>15</sup>)ZB(1 ZB = 1024 EB)的存储容量
让我们来深入了解一下其中一些特性。
#### 存储池
与大多数文件系统不同,ZFS 结合了文件系统和卷管理器的特性。这意味着,它与其他文件系统不同,ZFS 可以创建跨越一系列硬盘或池的文件系统。不仅如此,你还可以通过添加硬盘来增大池的存储容量。ZFS 可以进行[分区和格式化](https://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/)。

*ZFS 存储池*
#### 写时拷贝
<ruby> <a href="https://www.freebsd.org/doc/handbook/zfs-term.html"> 写时拷贝 </a> <rt> Copy-on-write </rt></ruby>是另一个有趣并且很酷的特性。在大多数文件系统上,当数据被重写时,它将永久丢失。而在 ZFS 中,新数据会写到不同的块。写完成之后,更新文件系统元数据信息,使之指向新的数据块(LCTT 译注:更新之后,原数据块成为磁盘上的垃圾,需要有对应的垃圾回收机制)。这确保了如果在写新数据的时候系统崩溃(或者发生其它事,比如突然断电),那么原数据将会保存下来。这也意味着,在系统发生崩溃之后,不需要运行 [fsck](https://en.wikipedia.org/wiki/Fsck) 来检查和修复文件系统。
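写时拷贝在用户层面的效果,可以借助 GNU coreutils 的 reflink 复制在支持该特性的文件系统上简单体验一下(仅为示意,并非 ZFS 的内部实现;在不支持写时拷贝的文件系统上,`--reflink=auto` 会退化为普通复制):

```shell
# 写入原始数据
printf 'original data\n' > a.txt
# 在支持 COW 的文件系统上,副本与原文件共享数据块
cp --reflink=auto a.txt b.txt
# 修改副本:新数据写入新的块,原文件的块保持不变
printf 'new data\n' > b.txt
cat a.txt   # 输出:original data
```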
#### 快照
写时拷贝使得 ZFS 有了另一个特性:<ruby> 快照 <rt> snapshots </rt></ruby>。ZFS 使用快照来跟踪文件系统中的更改。[快照](https://www.freebsd.org/doc/handbook/zfs-term.html)包含文件系统的原始版本(文件系统的一个只读版本),实时文件系统则包含了自从快照创建之后的任何更改。没有使用额外的空间。因为新数据将会写到实时文件系统新分配的块上。如果一个文件被删除了,那么它在快照中的索引也会被删除。所以,快照主要是用来跟踪文件的更改,而不是文件的增加和创建。
快照可以挂载成只读的,以用来恢复一个文件的过去版本。实时文件系统也可以回滚到之前的快照。回滚之后,自从快照创建之后的所有更改将会丢失。
#### 数据完整性验证和自动修复
当向 ZFS 写入新数据时,会创建该数据的校验和。在读取数据的时候,使用校验和进行验证。如果前后校验和不匹配,那么就说明检测到了错误,然后,ZFS 会尝试自动修正错误。
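这种“写入时记录校验和、读取时验证”的思路,可以用 `sha256sum` 在命令行上简单模拟(仅为示意,ZFS 实际使用的是存放在块指针中的自有校验和机制,而不是这个工具):

```shell
# 写入数据时,记录它的校验和
printf 'hello zfs\n' > data.bin
sha256sum data.bin > data.sha256
# 读取数据时,重新计算并验证校验和;不匹配即说明数据已损坏
sha256sum -c data.sha256
```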
#### RAID-Z
ZFS 不需要任何额外软件或硬件就可以处理 RAID(磁盘阵列)。毫不奇怪,因为 ZFS 有自己的 RAID 实现:RAID-Z 。RAID-Z 是 RAID-5 的一个变种,不过它克服了 RAID-5 的写漏洞:意外重启之后,数据和校验信息会变得不同步(LCTT 译注:RAID-5 的条带在正写入数据时,如果这时候电源中断,那么奇偶校验数据将跟该部分数据不同步,因此前边的写无效;RAID-Z 用了 “可变宽的 RAID 条带” 技术,因此所有的写都是全条带写入)。为了使用[基本级别的 RAID-Z](https://wiki.archlinux.org/index.php/ZFS/Virtual_disks#Creating_and_Destroying_Zpools)(RAID-Z1),你需要至少三块磁盘,其中两块用来存储数据,另外一块用来存储[奇偶校验信息](https://www.pcmag.com/encyclopedia/term/60364/raid-parity)。而 RAID-Z2 需要至少两块磁盘存储数据以及两块磁盘存储校验信息。RAID-Z3 需要至少两块磁盘存储数据以及三块磁盘存储校验信息。另外,向 RAID-Z 池中添加磁盘时,必须按两块的倍数来添加。
#### 巨大的存储潜力
创建 ZFS 的时候,它是作为[最后一个文件系统](https://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/)而设计的 。那时候,大多数文件系统都是 64 位的,ZFS 的创建者决定直接跳到 128 位,等到将来再来证明这是对的。这意味着 ZFS 的容量大小是 32 位或 64 位文件系统的 1600 亿亿倍。事实上,Jeff Bonwick(其中一个创建者)说:“完全填满一个 128 位的存储池所需要的[能量](https://blogs.oracle.com/bonwick/128-bit-storage:-are-you-high),从字面上讲,比煮沸海洋需要的还多。”
### 如何安装 ZFS?
如果你想立刻使用 ZFS(开箱即用),那么你需要安装 [FreeBSD](https://www.freebsd.org/) 或一个[使用 illumos 内核的操作系统](https://wiki.illumos.org/display/illumos/Distributions)。[illumos](https://wiki.illumos.org/display/illumos/illumos+Home) 是 OpenSolaris 内核的一个克隆版本。
事实上,支持 [ZFS 是一些有经验的 Linux 用户选择 BSD 的主要原因](https://itsfoss.com/why-use-bsd/)。
如果你想在 Linux 上尝试 ZFS,那么只能在存储文件系统上使用。据我所知,没有任何 Linux 发行版可以在根目录上安装 ZFS,实现开箱即用。如果你对在 Linux 上尝试 ZFS 感兴趣,那么 [ZFS on Linux 项目](https://zfsonlinux.org/) 上有大量的教程可以指导你怎么做。
### 附加说明
这篇文章论述了 ZFS 的优点。现在,让我来告诉你一个关于 ZFS 很现实的问题。使用 RAID-Z [会很贵](http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html),因为你需要购买大量的磁盘来增大存储空间。
你已经使用过 ZFS 了吗?你的使用经验是什么样的?请在下面的评论中告诉我们。
如果你觉得这篇文章有趣,请花一分钟的时间把它分享到社交媒体、极客新闻或 [Reddit](http://reddit.com/r/linuxusersgroup) 。
---
via: <https://itsfoss.com/what-is-zfs/>
作者:[John Paul](https://itsfoss.com/author/john/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[ucasFL](https://github.com/ucasFL) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

Today, we will take a look at ZFS, an advanced file system. We will discuss where it came from, what it is, and why it is so popular among techies and enterprise.
Even though I’m from the US, I prefer to pronounce it ZedFS instead of ZeeFS because it sounds cooler. You are free to pronounce it however you like.
Oracle [shut down OpenSolaris](https://itsfoss.com/solaris-alternatives/). (More on that later.)
## History of ZFS
The Z File System (ZFS) was created by [Matthew Ahrens and Jeff Bonwick](https://wiki.gentoo.org/wiki/ZFS?ref=itsfoss.com) in 2001. ZFS was designed to be a next generation file system for [Sun Microsystems’](https://en.wikipedia.org/wiki/Sun_Microsystems?ref=itsfoss.com) [OpenSolaris](https://en.wikipedia.org/wiki/Opensolaris?ref=itsfoss.com). In 2008, ZFS was ported to FreeBSD. The same year a project was started to port [ZFS to Linux](https://zfsonlinux.org/?ref=itsfoss.com). However, since ZFS is licensed under the [Common Development and Distribution License](https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License?ref=itsfoss.com), which is incompatible with the [GNU General Public License](https://en.wikipedia.org/wiki/GNU_General_Public_License?ref=itsfoss.com), it cannot be included in the Linux kernel. To get around this problem, most Linux distros offer methods to install ZFS.
Shortly after Oracle purchased Sun Microsystems, OpenSolaris became closed-source. All further development of ZFS became closed source as well. Many of the developers of ZFS were unhappy about this turn of events. [Two-thirds of the core ZFS developers](https://wiki.gentoo.org/wiki/ZFS?ref=itsfoss.com), including Ahrens and Bonwick, left Oracle due to this decision. They joined other companies and created the [OpenZFS project](http://www.open-zfs.org/wiki/Main_Page?ref=itsfoss.com) in September of 2013. The project has spearheaded the open-source development of ZFS.
Let’s go back to the license issue mentioned above. Since the OpenZFS project is separate from Oracle, some probably wonder why they don’t change the license to something that is compatible with the GPL so it can be included in the Linux kernel. According to the [OpenZFS website](http://www.open-zfs.org/wiki/FAQ?ref=itsfoss.com#Do_you_plan_to_release_OpenZFS_under_a_license_other_than_the_CDDL.3F), changing the license would involve contacting anyone who contributed code to the current OpenZFS implementation (including the initial, common ZFS code till OpenSolaris) and get their permission to change the license. Since this job is near impossible (because some contributors may be dead or hard to find), they have decided to keep the license they have.
## What is ZFS? What are its features?

As I said before, ZFS is an advanced file system. As such, it has some interesting [features](https://wiki.archlinux.org/index.php/ZFS?ref=itsfoss.com). Such as:
- Pooled storage
- Copy-on-write
- Snapshots
- Data integrity verification and automatic repair
- RAID-Z
- Maximum 16 Exabyte file size
- Maximum 256 Quadrillion Zettabytes storage
Let’s break down a couple of those features.
### Pooled Storage
Unlike most file systems, ZFS combines the features of a file system and a volume manager. This means that, unlike other file systems, ZFS can create a file system that spans across a series of drives or a pool. Not only that, but you can add storage to a pool by adding another drive. ZFS will handle [partitioning and formatting](https://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/?ref=itsfoss.com).

### Copy-on-write
[Copy-on-write](https://www.freebsd.org/doc/handbook/zfs-term.html?ref=itsfoss.com) is another interesting (and cool) feature. On most file systems, when data is overwritten, it is lost forever. On ZFS, the new information is written to a different block. Once the write is complete, the file system's metadata is updated to point to the new info. This ensures that if the system crashes (or something else happens) while the write is taking place, the old data will be preserved. It also means that the system does not need to run [fsck](https://linuxhandbook.com/fsck-command/?ref=itsfoss.com) after a system crash.
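You can get a user-level feel for copy-on-write with GNU coreutils' reflink copies on a file system that supports them (illustrative only, not ZFS internals; on a non-COW file system, `--reflink=auto` falls back to an ordinary copy):

```shell
# write the original data
printf 'original data\n' > a.txt
# on a COW file system, the copy shares data blocks with the original
cp --reflink=auto a.txt b.txt
# modifying the copy writes new blocks; the original's blocks are untouched
printf 'new data\n' > b.txt
cat a.txt   # prints: original data
```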
### Snapshots
Copy-on-write leads into another ZFS feature: snapshots. ZFS uses snapshots to track changes in the file system. “[The snapshot](https://www.freebsd.org/doc/handbook/zfs-term.html?ref=itsfoss.com) contains the original version of the file system, and the live filesystem contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data.” If a file is deleted, the snapshot reference is removed, as well. So, snapshots are mainly designed to track changes to files, but not the addition and creation of files.
Snapshots can be mounted as read-only to recover a past version of a file. It is also possible to rollback the live system to a previous snapshot. All changes made since the snapshot will be lost.
### Data integrity verification and automatic repair
Whenever new data is written to ZFS, it creates a checksum for that data. When that data is read, the checksum is verified. If the checksum does not match, then ZFS knows that an error has been detected. ZFS will then automatically attempt to correct the error.
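The "record a checksum on write, verify it on read" idea can be simulated on the command line with `sha256sum` (illustrative only; ZFS actually uses its own checksums stored in block pointers, not this tool):

```shell
# when writing data, record its checksum
printf 'hello zfs\n' > data.bin
sha256sum data.bin > data.sha256
# when reading data, recompute and verify the checksum;
# a mismatch means the data has been corrupted
sha256sum -c data.sha256
```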
### RAID-Z
ZFS can handle RAID without requiring any extra software or hardware. Unsurprisingly, ZFS has its own implementation of RAID: RAID-Z. RAID-Z is actually a variation of RAID-5. However, it is designed to overcome the RAID-5 write hole error, "in which the data and parity information become inconsistent after an unexpected restart". To use the basic [level of RAID-Z](https://wiki.archlinux.org/index.php/ZFS/Virtual_disks?ref=itsfoss.com#Creating_and_Destroying_Zpools) (RAID-Z1) you need at least two disks for storage and one for [parity](https://www.pcmag.com/encyclopedia/term/60364/raid-parity?ref=itsfoss.com). RAID-Z2 requires at least two storage drives and two drives for parity. RAID-Z3 requires at least two storage drives and three drives for parity. When drives are added to RAID-Z pools, they have to be added in multiples of two.
### Huge Storage potential
When ZFS was created, it was designed to be [the last word in file systems](https://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/). At a time when most file systems were 64-bit, the ZFS creators decided to jump right to 128-bit to future proof it. This means that ZFS "offers 16 billion billion times the capacity of 32- or 64-bit systems". In fact, Jeff Bonwick (one of the creators) said [that powering](https://blogs.oracle.com/bonwick/128-bit-storage:-are-you-high?ref=itsfoss.com) a "fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans."
## How to Install ZFS?
If you want to use ZFS out of the box, it would require installing either [FreeBSD](https://www.freebsd.org/?ref=itsfoss.com) or an [operating system using the illumos kernel](https://wiki.illumos.org/display/illumos/Distributions?ref=itsfoss.com). [illumos](https://wiki.illumos.org/display/illumos/illumos+Home?ref=itsfoss.com) is a fork of the OpenSolaris kernel.
In fact, support for [ZFS is one of the main reasons why some experienced Linux users opt for BSD](https://itsfoss.com/why-use-bsd/).
If you want to try ZFS on Linux, you can use it as your storage file system. Recently, Ubuntu 19.10 introduced the ability to install ZFS on your root out of the box. Read more about [using ZFS on Ubuntu](https://itsfoss.com/zfs-ubuntu/). If you are interested in trying ZFS on Linux, the [ZFS on Linux project](https://zfsonlinux.org/?ref=itsfoss.com) has a number of tutorials on how to do that.
Using RAID-Z [can be expensive](http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html?ref=itsfoss.com) because of the number of drives you need to purchase to add storage space.
Have you ever used ZFS? What was your experience like? Let us know in the comments below. If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit](http://reddit.com/r/linuxusersgroup?ref=itsfoss.com). |
10,035 | Flash Player 的两种开源替代方案 | https://opensource.com/alternatives/flash-media-player | 2018-09-21T16:10:18 | [
"Flash"
] | https://linux.cn/article-10035-1.html |
>
> Adobe 将于 2020 年终止对 Flash 媒体播放器的支持,但仍有很多人们希望访问的 Flash 视频。这里有两个开源的替代品或许可以帮到你。
>
>
>

2017 年 7 月,Adobe 为 Flash Media Player 敲响了[丧钟](https://theblog.adobe.com/adobe-flash-update/),宣布将在 2020 年终止对曾经无处不在的在线视频播放器的支持。但事实上,在一系列损害了其声誉的零日攻击后,Flash 的份额在过去的 8 年一直在下跌。苹果公司在 2010 年宣布它不会支持这项技术后,其未来趋于黯淡,并且在谷歌停止在 Chrome 浏览器中默认启用 Flash(转而支持 HTML5)后,它的消亡在 2016 年进一步加速。
即便如此,Adobe 仍然每月发布该软件的更新,截至 2018 年 8 月,它在网站的使用率从 2011 年的 28.5% 下降到[仅 4.4%](https://w3techs.com/technologies/details/cp-flash/all/all)。还有更多证据表明 Flash 的下滑:谷歌工程总监 [Parisa Tabriz 说](https://www.bleepingcomputer.com/news/security/google-chrome-flash-usage-declines-from-80-percent-in-2014-to-under-8-percent-today/)通过浏览器访问 Flash 内容的 Chrome 用户数量从 2014 年的 80% 下降到 2018 年的 8%。
尽管如今很少有视频创作者以 Flash 格式发布,但仍有很多人们希望在未来几年内访问的 Flash 视频。鉴于官方支持的日期已经屈指可数,开源软件创建者有很好的机会涉足 Adobe Flash 媒体播放器的替代品。这其中两个应用是 Lightspark 和 GNU Gnash。它们都不是完美的替代品,但来自贡献者的帮助可以使它们成为可行的替代品。
### Lightspark
[Lightspark](http://lightspark.github.io/) 是 Linux 上的 Flash Player 替代品。虽然它仍处于 alpha 状态,但自从 Adobe 在 2017 年宣布废弃 Flash 以来,开发速度已经加快。据其网站称,Lightspark 实现了 60% 的 Flash API,可在许多流行网站包括 BBC 新闻、Google Play 音乐和亚马逊音乐上[使用](https://github.com/lightspark/lightspark/wiki/Site-Support)。
Lightspark 是用 C++/C 编写的,并在 [LGPLv3](https://github.com/lightspark/lightspark/blob/master/COPYING) 下许可。该项目列出了 41 个贡献者,并正在积极征求错误报告和其他贡献。有关更多信息,请查看其 [GitHub 仓库](https://github.com/lightspark/lightspark/wiki/Site-Support)。
### GNU Gnash
[GNU Gnash](https://www.gnu.org/software/gnash/) 是一个用于 GNU/Linux 操作系统,包括 Ubuntu、Fedora 和 Debian 的 Flash Player。它作为独立软件和插件可用于 Firefox 和 Konqueror 浏览器中。
Gnash 的主要缺点是它不支持最新版本的 Flash 文件 —— 它支持大多数 Flash SWF v7 功能,一些 v8 和 v9 功能,不支持 v10 文件。它处于测试阶段,由于它在 [GNU GPLv3 或更高版本](http://www.gnu.org/licenses/gpl-3.0.html)下许可,因此你可以帮助实现它的现代化。访问其[项目页面](http://savannah.gnu.org/projects/gnash/)获取更多信息。
### 想要创建 Flash 吗?
虽然如今大多数人都不再发布 Flash 视频,但这并不意味着永远不需要创建 SWF 文件。如果你发现自己需要,这两个开源工具可能会有所帮助:
* [Motion-Twin ActionScript 2 编译器](http://tech.motion-twin.com/mtasc.html)(MTASC):一个命令行编译器,它可以在没有 Adobe Animate(Adobe 当前的视频创建软件)的情况下生成 SWF 文件。
* [Ming](http://www.libming.org/):用 C 编写的可以生成 SWF 文件的库。它还包含一些可用于处理 Flash 的[程序](http://www.libming.org/WhatsIncluded)。
---
via: <https://opensource.com/alternatives/flash-media-player>
作者:[Opensource.com](https://opensource.com) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In July 2017, Adobe sounded the [death knell](https://theblog.adobe.com/adobe-flash-update/) for its Flash Media Player, announcing it would end support for the once-ubiquitous online video player in 2020. In truth, however, Flash has been on the decline for the past eight years following a rash of zero-day attacks that damaged its reputation. Its future dimmed after Apple announced in 2010 it would not support the technology, and its demise accelerated in 2016 after Google stopped enabling Flash by default (in favor of HTML5) in the Chrome browser.
Even so, Adobe is still issuing monthly updates for the software, which has slipped from being used on 28.5% of all websites in 2011 to [only 4.4.%](https://w3techs.com/technologies/details/cp-flash/all/all) as of August 2018. More evidence of Flash’s decline: Google director of engineering [Parisa Tabriz said](https://www.bleepingcomputer.com/news/security/google-chrome-flash-usage-declines-from-80-percent-in-2014-to-under-8-percent-today/) the number of Chrome users who access Flash content via the browser has declined from 80% in 2014 to under eight percent in 2018.
Although few[*](#*) video creators are publishing in Flash format today, there are still a lot of Flash videos out there that people will want to access for years to come. Given that the official application’s days are numbered, open source software creators have a great opportunity to step in with alternatives to Adobe Flash Media Player. Two of those applications are Lightspark and GNU Gnash. Neither are perfect substitutions, but help from willing contributors could make them viable alternatives.
## Lightspark
[Lightspark](http://lightspark.github.io/) is a Flash Player alternative for Linux machines. While it’s still in alpha, development has accelerated since Adobe announced it would sunset Flash in 2017. According to its website, Lightspark implements about 60% of the Flash APIs and [works](https://github.com/lightspark/lightspark/wiki/Site-Support) on many leading websites including BBC News, Google Play Music, and Amazon Music.
Lightspark is written in C++/C and licensed under [LGPLv3](https://github.com/lightspark/lightspark/blob/master/COPYING). The project lists 41 contributors and is actively soliciting bug reports and other contributions. For more information, check out its [GitHub repository](https://github.com/lightspark/lightspark/wiki/Site-Support).
## GNU Gnash
[GNU Gnash](https://www.gnu.org/software/gnash/) is a Flash Player for GNU/Linux operating systems including Ubuntu, Fedora, and Debian. It works as standalone software and as a plugin for the Firefox and Konqueror browsers.
Gnash’s main drawback is that it doesn’t support the latest versions of Flash files—it supports most Flash SWF v7 features, some v8 and v9 features, and offers no support for v10 files. It’s in beta release, and since it’s licensed under the [GNU GPLv3 or later](http://www.gnu.org/licenses/gpl-3.0.html), you can help contribute to modernizing it. Access its [project page](http://savannah.gnu.org/projects/gnash/) for more information.
## Want to create Flash?
Just because *most* people aren't publishing Flash videos these days, that doesn't mean there will never, ever be a need to create SWF files. If you find yourself in that position, these two open source tools might help:
- [Motion-Twin ActionScript 2 Compiler](http://tech.motion-twin.com/mtasc.html) (MTASC): A command-line compiler that can generate SWF files without Adobe Animate (the current iteration of Adobe's video-creator software).
- [Ming](http://www.libming.org/): A library written in C that can generate SWF files. It also contains some [utilities](http://www.libming.org/WhatsIncluded) you can use to work with Flash files.
Clearly, there’s an opening for open source software to take Flash Player’s place in the broader market. If you know of another open source Flash alternative that’s worth a closer look (or needs contributors), please share it in the comments. Or even better, check out the great Flash-free open source tools for working with animation.
|
10,036 | 区块链简史 | https://www.myblockchainblog.com/blog/a-brief-history-of-blockchain | 2018-09-21T16:42:00 | [
"区块链"
] | https://linux.cn/article-10036-1.html | 
### 序幕
很久以前,在一个遥远的星系……一份题为“[比特币:点对点电子现金系统](http://bitcoin.org/bitcoin.pdf)”的神秘白皮书以笔名<ruby> 中本聪 <rt> Satoshi Nakamoto </rt></ruby>发布。该文件用他们自己的话说,描述了一个<ruby> “不依赖于信任的电子交易系统” <rt> a system for electronic transactions without relying on trust </rt></ruby>。
时间是 2008 年 11 月,我们所说的遥远的星系就是互联网。这是比特币历史的开端。直到今天,没有人确切知道中本聪是谁。我们所知道的是,第一个开源比特币客户端于 2009 年 1 月发布,在接下来的几年中,中本聪积累了大约 100 万比特币,然后在 2010 年中期完全从比特币世界中消失。直到今天,他(或他们)庞大的比特币财富仍原封未动,分布在几个已知的比特币账户中。截至 2017 年中期,这些比特币总价值约为 40 亿美元。

### 比特币的历史
2009 年推出的比特币是区块链技术的第一次真实应用。在接下来的五年里,区块链的历史几乎与比特币的历史同义。以下是此期间的粗略时间表:

### 以太坊的历史
2014 年是区块链历史上一个重要里程碑。在此之前,区块链技术的应用仅限于加密货币。尽管比特币协议已在该领域证明了自己,但它缺乏开发区块链应用程序所需的脚本语言,以拓展到加密货币外的应用领域。
Vitalik Buterin 多年来一直是比特币的重要发烧友,并且在 2012 年他 18 岁时已经共同创立了《<ruby> 比特币杂志 <rt> Bitcoin Magazine </rt></ruby>》!在他更新原始比特币协议的提议未获比特币社区同意后,Vitalik 就聚集了一个超级程序员团队,开发一个全新的区块链协议,其中包含所谓的<ruby> 智能合约 <rt> smart contract </rt></ruby>,允许程序员在其区块链中构建称作合约的脚本,并在满足某些条件时执行。Vitalik 将他的新区块链命名为<ruby> 以太坊 <rt> Ethereum </rt></ruby>。
在以太坊区块链上使用智能合约需要小额支付以太币,即以太坊的加密货币。这些小额支付称为“<ruby> 燃料 <rt> gas </rt></ruby>”,并奖励给“挖出了”包含该交易的数据块的计算机节点。智能合约的使用案例非常多样化,很可能在未来许多年中我们不会完全理解它的用处(就像 90 年代初期互联网刚兴起时,我们不知道 Facebook、YouTube 和 Skype 将怎样改变世界)。
一个有助于描述智能合约有用性的简单例子是去中心化彩票。在下面的示例中,开发了具有以下功能的智能合约并将其存储在以太坊区块链中:
* 任何人可以发送以太币给智能合约。
* 每 24 小时,智能合约随机选择一个贡献地址,并将合约中的所有以太币返回到该地址。
* 你贡献的以太币越多,获胜的机会就越高。
* 由于智能合约存储在以太坊区块链中,其内容是公开的,任何人都可以检查它以确保它不包含任何错误或蹊跷的逻辑。没有人(甚至是开发者)能够动用存储在智能合约上的资金。

从理论上讲,这样的彩票运营支出最小(只有燃料成本和创建者在智能合约中内置的其他费用)。这种彩票相比传统彩票,优势显著:
* 由于运营支出减少,获胜的几率可以大大提高。
* 整个系统是完全透明的,每个参与者将能够在参与彩票之前准确计算他们获胜的机会。
* 由于它是完全去中心化的,区块链彩票将不会面临破产以及许多其他外部风险因素。
* 支付是保证和即时的。
* 参与者是<ruby> 伪匿名 <rt> pseudo-anonymous </rt></ruby>的。

自 2014 年推出以来,以太坊区块链经历了一个显著的增长期,现在成为仅次于比特币的区块链。以下时间表显示了 2014 年以后比特币相关事件的历史。

### 未来会怎样
现在你已经了解了区块链的历史,让我们简单预测一下它的未来。如前所述,与传统的会计和记录保存方法相比,区块链应用程序的去中心化性质提供了显著的优势。在过去的 12 个月中,区块链技术向主流认可迈出了重要一步,数百家蓝筹公司在其基础设施上投入巨资(参见 [Finextra](https://www.finextra.com/newsarticle/29093/2016-wall-street-blockchain-investment-to-top-1bn))。几乎所有主要的咨询公司都公开宣称他们看好区块链技术对一系列行业的潜在影响([埃森哲](https://www.accenture.com/us-en/insight-perspectives-capital-markets-blockchain-future)、[路透社](http://www.reuters.com/article/us-banks-blockchain-accenture-idUSKBN1511OU)、[德勤](https://www2.deloitte.com/global/en/pages/financial-services/articles/gfsi-disruptive-innovation-blockchain.html)、[普华永道](http://www.pwc.com/us/en/energy-mining/exploring-the-disruptive-potential-of-decentralized-storage-and-peer-to-peer-transactions-in-the-energy-industry.html)),一些分析机构预测其价格在未来十年会大幅上涨([NewsBTC](http://www.newsbtc.com/2017/07/06/bitcoin-price-positive-future/)、[CNBC](http://www.cnbc.com/2017/05/31/bitcoin-price-forecast-hit-100000-in-10-years.html)、[Money Morning](https://moneymorning.com/2017/06/15/why-a-bitcoin-price-prediction-of-1-million-isnt-crazy/))。虽然我们并没有能够预言未来的水晶球,而且区块链的大规模使用肯定存在很多障碍,但这种技术的未来似乎比以往更加光明。
你喜欢这篇博文吗?我们是否错过了任何重要的区块链里程碑?您对区块链的未来有何看法?我们很乐意在下面的留言板上收到您的来信!我们的下一篇博文将为您提供我们称之为区块链生态系统的概述。希望能在那里见到你!
---
via: <https://www.myblockchainblog.com/blog/a-brief-history-of-blockchain>
作者:[Vegard Nordahl & Meghana Rao](https://www.myblockchainblog.com/?author=57ca44c0bebafb5698fc5e7d) 选题:[jasminepeng](https://github.com/jasminepeng) 译者:[jasminepeng](https://github.com/jasminepeng) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](/article-9986-1.html) 荣誉推出
| 200 | OK | null |
10,038 | Autotrash:一个自动清除旧垃圾的命令行工具 | https://www.ostechnix.com/autotrash-a-cli-tool-to-automatically-purge-old-trashed-files/ | 2018-09-22T13:30:07 | [
"Autotrash",
"回收站",
"删除"
] | https://linux.cn/article-10038-1.html | 
**Autotrash** 是一个命令行程序,它用于自动清除旧的已删除文件。它将清除在回收站中存放超过指定天数的文件。你不需要清空回收站或执行 `SHIFT+DELETE` 以永久清除文件/文件夹。Autotrash 将处理回收站中的内容,并在特定时间段后自动删除它们。简而言之,Autotrash 永远不会让你的垃圾变得太大。
### 安装 Autotrash
Autotrash 默认存在于基于 Debian 的系统的仓库中。要在 Debian、Ubuntu、Linux Mint 上安装 `autotrash`,请运行:
```
$ sudo apt-get install autotrash
```
在 Fedora 上:
```
$ sudo dnf install autotrash
```
对于 Arch Linux 及其变体,你可以使用任何 AUR 助手程序(如 [**Yay**](https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/))安装它。
```
$ yay -S autotrash-git
```
### 自动清除旧的垃圾文件
每当你运行 `autotrash` 时,它会扫描你的 `~/.local/share/Trash/info` 目录并读取 `.trashinfo` 以找出它们的删除日期。如果文件已在回收站中超过指定的日期,那么就会删除它们。
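作为参考,下面用一个假设的 `.trashinfo` 文件演示 autotrash 所读取的内容(其中的路径与日期纯属示例,真实文件位于 `~/.local/share/Trash/info/`):

```shell
# 构造一个假设的 .trashinfo 文件用于演示
cat > sample.trashinfo <<'EOF'
[Trash Info]
Path=/home/user/document.txt
DeletionDate=2018-08-20T10:15:00
EOF

# autotrash 正是依据 DeletionDate 字段计算文件在回收站中的存放天数
grep '^DeletionDate=' sample.trashinfo | cut -d= -f2
# → 2018-08-20T10:15:00
```

如果该删除日期距今超过了 `-d` 选项指定的天数,autotrash 就会将对应文件清除。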
让我举几个例子。
要删除回收站中超过 30 天的文件,请运行:
```
$ autotrash -d 30
```
如上例所示,如果回收站中的文件超过 30 天,Autotrash 会自动将其从回收站中删除。你无需手动删除它们。只需将没用的文件放到回收站即可忘记。Autotrash 将处理已删除的文件。
以上命令仅处理当前登录用户的垃圾目录。如果要使 autotrash 处理所有用户的垃圾目录(不仅仅是在你的家目录中),请使用 `-t` 选项,如下所示。
```
$ autotrash -td 30
```
Autotrash 还允许你根据回收站可用容量或磁盘可用空间来删除已删除的文件。
例如,看下下面的例子:
```
$ autotrash --max-free 1024 -d 30
```
根据上面的命令,如果回收站的剩余的空间少于 **1GB**,那么 autotrash 将从回收站中清除超过 **30 天**的已删除文件。如果你的回收站空间不足,这可能很有用。
我们还可以从回收站中按最早的时间清除文件直到回收站至少有 1GB 的空间。
```
$ autotrash --min-free 1024
```
在这种情况下,对旧的已删除文件没有限制。
你可以将这两个选项(`--min-free` 和 `--max-free`)组合在一个命令中,如下所示。
```
$ autotrash --max-free 2048 --min-free 1024 -d 30
```
根据上面的命令,只有当可用空间小于 **2GB** 时,`autotrash` 才会处理回收站。此时,它会删除超过 **30 天**的文件;如果可用空间仍少于 **1GB**,则连较新的文件也会被删除。
如你所见,所有命令都应由用户手动运行。你可能想知道,我该如何自动执行此任务?这很容易!只需将 `autotrash` 添加为 crontab 任务即可。现在,命令将在计划的时间自动运行,并根据定义的选项清除回收站中的文件。
要在 crontab 中添加这些命令,请运行:
```
$ crontab -e
```
添加任务,例如:
```
@daily /usr/bin/autotrash -d 30
```
现在,autotrash 将每天清除回收站中超过 30 天的文件。
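顺便一提,如果你不想打开交互式编辑器,也可以先以脚本方式拼出要追加的 crontab 条目。下面只演示生成该行;其中 autotrash 的路径为假设的默认值,请按你的系统调整:

```shell
# 生成要追加到 crontab 的计划任务行(/usr/bin/autotrash 为示例路径)
AUTOTRASH_BIN="${AUTOTRASH_BIN:-/usr/bin/autotrash}"
CRON_LINE="@daily $AUTOTRASH_BIN -d 30"
echo "$CRON_LINE"

# 确认无误后,可以再通过管道写入 crontab(此处注释掉以免误改系统配置):
# (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
```

这种写法便于放进部署脚本中,效果与手动 `crontab -e` 添加同一行相同。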
有关计划任务的更多详细信息,请参阅以下链接。
* [Cron 任务的初学者指南](https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/)
* [如何在 Linux 中轻松安全地管理 Cron 作业](https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/)
请注意,如果你无意中删除了任何重要文件,它们将在规定的日期后永久消失,所以请小心。
请参阅手册页以了解有关 Autotrash 的更多信息。
```
$ man autotrash
```
清空回收站或按 `SHIFT+DELETE` 永久删除 Linux 系统中没用的东西没什么大不了的。它只需要几秒钟。但是,如果你需要额外的程序来处理垃圾文件,Autotrash 可能会有所帮助。试一下,看看它是如何工作的。
就是这些了。希望这个有用。还有更多的好东西。
干杯!
---
via: <https://www.ostechnix.com/autotrash-a-cli-tool-to-automatically-purge-old-trashed-files/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,039 | 增强 Vim 编辑器,提高编辑效率 | https://opensource.com/article/18/9/vi-editor-productivity-powerhouse | 2018-09-22T14:08:00 | [
"Vim"
] | https://linux.cn/article-10039-1.html |
>
> 这 20 多个有用的命令可以增强你使用 Vim 的体验。
>
>
>

*编者注:标题和文章最初称呼的 `vi` 编辑器,现已更新为编辑器的正确名称:`Vim`。*
`Vim` 作为一款功能强大、选项丰富的编辑器,为许多用户所热爱。本文介绍了一些在 `Vim` 中默认未启用但实际非常有用的选项。虽然可以在每个 `Vim` 会话中单独启用,但为了创建一个开箱即用的高效编辑环境,还是建议在 `Vim` 的配置文件中配置这些命令。
### 开始前的准备
这里所说的选项或配置均位于用户主目录中的 `Vim` 启动配置文件 `.vimrc`。 按照下面的说明在 `.vimrc` 中设置选项:
(注意:`vimrc` 文件也用于 `Linux` 中的全局配置,如 `/etc/vimrc` 或 `/etc/vim/vimrc`。本文所说的 `.vimrc` 均是指位于用户主目录中的 `.vimrc` 文件。)
Linux 系统中:
* 用 `Vim` 打开 `.vimrc` 文件: `vim ~/.vimrc`
* 复制本文最后的 “选项列表” 粘贴到 `.vimrc` 文件
* 保存并关闭 (`:wq`)
(LCTT 译注:此处不建议使用 `Vim` 编辑 `.vimrc` 文件,因为很可能无法粘贴成功,可以选择 `gedit` 编辑器编辑 `.vimrc` 文件。)
Windows 系统中:
* 首先,[安装 gvim](https://www.vim.org/download.php#pc)
* 打开 `gvim`
* 单击 “编辑” -> “启动设置”,打开 `_vimrc` 文件
* 复制本文最后的 “选项列表” 粘贴到 `_vimrc` 文件
* 单击 “文件” -> “保存”
(LCTT 译注:此处应注意不要使用 `Windows` 自带的记事本编辑该 `_vimrc` 文件,否则可能会因为行结束符不同而导致问题。)
下面,我们将深入研究提高 `Vim` 编辑效率的选项。主要分为以下几类:
1. 缩进 & 制表符
2. 显示 & 格式化
3. 搜索
4. 浏览 & 滚动
5. 拼写
6. 其他选项
### 1. 缩进 & 制表符
使 `Vim` 在创建新行的时候使用与上一行同样的缩进:
```
set autoindent
```
创建新行时使用智能缩进,主要用于 `C` 语言一类的程序。通常,打开 `smartindent` 时也应该打开 `autoindent`:
```
set smartindent
```
注意:`Vim` 具有语言感知功能,且其默认设置可以基于文件中的编程语言来改变配置以提高效率。有许多默认的配置选项,包括 `axs cindent`,`cinoptions`,`indentexpr` 等,没有在这里说明。 `syn` 是一个非常有用的命令,用于设置文件的语法以更改显示模式。
(LCTT 译注:这里的 `syn` 是指 `syntax`,可用于设置文件所用的编程语言,开启对应的语法高亮,以及执行自动事件 (`autocmd`)。)
设置文件里的制表符(`TAB`)的宽度(以空格的数量表示):
```
set tabstop=4
```
设置移位操作 `>>` 或 `<<` 的缩进长度(以空格的数量表示):
```
set shiftwidth=4
```
如果你更喜欢在编辑文件时使用空格而不是制表符,设置以下选项可以使 `Vim` 在你按下 `Tab` 键时用空格代替制表符。
```
set expandtab
```
注意:这可能会导致依赖于制表符的 `Python` 等编程语言出现问题。这时,你可以根据文件类型设置该选项(请参考 `autocmd`)。
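作为示意,下面给出一种按文件类型覆盖该选项的写法(可按需放入 `.vimrc`;具体文件类型仅为举例):

```
" 示意:Makefile 必须使用真正的制表符,因此对其关闭 expandtab
autocmd FileType make setlocal noexpandtab
```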
### 2. 显示 & 格式化
要在每行的前面显示行号:
```
set number
```

要在文本行超过一定长度时自动换行:
```
set textwidth=80
```
要根据从窗口右侧向左数的列数来自动换行:
```
set wrapmargin=2
```
(LCTT 译注:如果 `textwidth` 选项不等于零,本选项无效。)
当光标遍历文件时经过括号时,高亮标识匹配的括号:
```
set showmatch
```

### 3. 搜索
高亮搜索内容的所有匹配位置:
```
set hlsearch
```

搜索过程中动态显示匹配内容:
```
set incsearch
```

搜索时忽略大小写:
```
set ignorecase
```
在打开 `ignorecase` 选项的条件下,搜索内容包含部分大写字符时,要使搜索大小写敏感:
```
set smartcase
```
例如,如果文件内容是:
```
test
Test
```
当打开 `ignorecase` 和 `smartcase` 选项时,搜索 `test` 时的突出显示:
>
> test
> Test
>
>
>
搜索 `Test` 时的突出显示:
>
> test
> Test
>
>
>
### 4. 浏览 & 滚动
为获得更好的视觉体验,你可能希望将光标放在窗口中间而不是第一行,以下选项使光标距窗口上下保留 5 行。
```
set scrolloff=5
```
一个例子:
第一张图中 `scrolloff=0`,第二张图中 `scrolloff=5`。

提示:如果你没有设置选项 `nowrap`,那么设置 `sidescrolloff` 将非常有用。
在 `Vim` 窗口底部显示一个永久状态栏,可以显示文件名、行号和列号等内容:
```
set laststatus=2
```

### 5. 拼写
`Vim` 有一个内置的拼写检查器,对于文本编辑和编码非常有用。`Vim` 可以识别文件类型并仅对代码中的注释进行拼写检查。使用下面的选项打开英语拼写检查:
```
set spell spelllang=en_us
```
(LCTT 译注:中文、日文或其它东亚语字符通常会在打开拼写检查时被标为拼写错误,因为拼写检查不支持这些语种,可以在 `spelllang` 选项中加入 `cjk` 来忽略这些错误标注。)
### 6. 其他选项
禁止创建备份文件:启用此选项后,`Vim` 将在覆盖文件前创建一个备份,文件成功写入后保留该备份。如果不想保留该备份文件,可以按下面的方式关闭:
```
set nobackup
```
禁止创建交换文件:启用此选项后,`Vim` 将在编辑该文件时创建一个交换文件。 交换文件用于在崩溃或发生使用冲突时恢复文件。交换文件是以 `.` 开头并以 `.swp` 结尾的隐藏文件。
```
set noswapfile
```
如果需要在同一个 `Vim` 窗口中编辑多个文件并进行切换。默认情况下,工作目录是打开的第一个文件的目录。而将工作目录自动切换到正在编辑的文件的目录是非常有用的。要自动切换工作目录:
```
set autochdir
```
`Vim` 自动维护编辑的历史记录,允许撤消更改。默认情况下,该历史记录仅在文件关闭之前有效。`Vim` 包含一个增强功能,使得即使在文件关闭后也可以维护撤消历史记录,这意味着即使在保存、关闭和重新打开文件后,也可以撤消之前的更改。历史记录文件是使用 `.un~` 扩展名保存的隐藏文件。
```
set undofile
```
错误信息响铃,只对错误信息起作用:
```
set errorbells
```
如果你愿意,还可以设置错误视觉提示:
```
set visualbell
```
### 惊喜
`Vim` 提供长格式和短格式命令,两种格式都可用于设置或取消选项配置。
`autoindent` 选项的长格式是:
```
set autoindent
```
`autoindent` 选项的短格式是:
```
set ai
```
要在不更改选项当前值的情况下查看其当前设置,可以在 `Vim` 的命令行上使用在末尾加上 `?` 的命令:
```
set autoindent?
```
在大多数选项前加上 `no` 前缀可以取消或关闭选项:
```
set noautoindent
```
可以为单独的文件配置选项,而不必修改全局配置文件。需要的话,请打开文件并输入 `:`,然后键入 `set` 命令。这样的话,配置仅对当前的文件编辑会话有效。

使用命令行获取帮助:
```
:help autoindent
```

注意:此处列出的命令仅对 Linux 上的 Vim 7.4 版本和 Windows 上的 Vim 8.0 版本进行了测试。
这些有用的命令肯定会增强您的 `Vim` 使用体验。你会推荐哪些其他有用的命令?
### 选项列表
复制该选项列表粘贴到 `.vimrc` 文件中:
```
" Indentation & Tabs
set autoindent
set smartindent
set tabstop=4
set shiftwidth=4
set expandtab
set smarttab
" Display & format
set number
set textwidth=80
set wrapmargin=2
set showmatch
" Search
set hlsearch
set incsearch
set ignorecase
set smartcase
" Browse & Scroll
set scrolloff=5
set laststatus=2
" Spell
set spell spelllang=en_us
" Miscellaneous
set nobackup
set noswapfile
set autochdir
set undofile
set visualbell
set errorbells
```
---
via: <https://opensource.com/article/18/9/vi-editor-productivity-powerhouse>
作者:[Girish Managoli](https://opensource.com/users/gammay) 选题:[lujun9972](https://github.com/lujun9972) 译者:[idea2act](https://github.com/idea2act) 校对:[apemost](https://github.com/apemost), [wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | *Editor's note: The headline and article originally referred to the "vi editor." It has been updated to the correct name of the editor: "vim."*
A versatile and powerful editor, vim includes a rich set of potent commands that make it a popular choice for many users. This article specifically looks at commands that are not enabled by default in vim but are nevertheless useful. The commands recommended here are expected to be set in a vim configuration file. Though it is possible to enable commands individually from each vim session, the purpose of this article is to create a highly productive environment out of the box.
## Before you begin
The commands or configurations discussed here go into the vim startup configuration file, vimrc, located in the user home directory. Follow the instructions below to set the commands in vimrc:
*(Note: The vimrc file is also used for system-wide configurations in Linux, such as /etc/vimrc or /etc/vim/vimrc. In this article, we'll consider only user-specific vimrc, present in user home folder.)*
In Linux:
- Open the file with `vi $HOME/.vimrc`
- Type or copy/paste the commands in the cheat sheet at the end of this article
- Save and close (`:wq`)
In Windows:
- First, [install gvim](https://www.vim.org/download.php#pc)
- Open gvim
- Click Edit --> Startup settings, which opens the _vimrc file
- Type or copy/paste the commands in the cheat sheet at the end of this article
- Click File --> Save
Let's delve into the individual vi productivity commands. These commands are classified into the following categories:
- Indentation & Tabs
- Display & Format
- Search
- Browse & Scroll
- Spell
- Miscellaneous
## 1. Indentation & Tabs
To automatically align the indentation of a line in a file:
`set autoindent`
Smart Indent uses the code syntax and style to align:
`set smartindent`
Tip: vim is language-aware and provides a default setting that works efficiently based on the programming language used in your file. There are many default configuration commands, including `axs cindent`, `cinoptions`, `indentexpr`, etc., which are not explained here. `syn` is a helpful command that shows or sets the file syntax.
To set the number of spaces to display for a tab:
`set tabstop=4`
To set the number of spaces to display for a “shift operation” (such as ‘>>’ or ‘<<’):
`set shiftwidth=4`
If you prefer to use spaces instead of tabs, this option inserts spaces when the Tab key is pressed. This may cause problems for languages such as Python that rely on tabs instead of spaces. In such cases, you may set this option based on the file type (see `autocmd`).
`set expandtab`
## 2. Display & Format
To show line numbers:
`set number`

To wrap text when it crosses the maximum line width:
`set textwidth=80`
To wrap text based on a number of columns from the right side:
`set wrapmargin=2`
To identify open and close brace positions when you traverse through the file:
`set showmatch`

## 3. Search
To highlight the searched term in a file:
`set hlsearch`

To perform incremental searches as you type:
`set incsearch`

To search ignoring case (many users prefer not to use this command; set it only if you think it will be useful):
`set ignorecase`
To search without considering `ignorecase` when both `ignorecase` and `smartcase` are set and the search pattern contains uppercase:
`set smartcase`
For example, if the file contains:
test
Test
When both `ignorecase` and `smartcase` are set, a search for “test” finds and highlights both:
test
Test
A search for “Test” highlights or finds only the second line:
test
Test
## 4. Browse & Scroll
For a better visual experience, you may prefer to have the cursor somewhere in the middle rather than on the first line. The following option sets the cursor position to the 5th row.
`set scrolloff=5`
Example:
The first image is with scrolloff=0 and the second image is with scrolloff=5.

Tip: `set sidescrolloff` is useful if you also set `nowrap`.
To display a permanent status bar at the bottom of the vim screen showing the filename, row number, column number, etc.:
`set laststatus=2`

## 5. Spell
vim has a built-in spell-checker that is quite useful for text editing as well as coding. vim recognizes the file type and checks the spelling of comments only in code. Use the following command to turn on spell-check for the English language:
`set spell spelllang=en_us`
## 6. Miscellaneous
Disable creating backup file: When this option is on, vim creates a backup of the previous edit. If you do not want this feature, disable it as shown below. Backup files are named with a tilde (~) at the end of the filename.
`set nobackup`
Disable creating a swap file: When this option is on, vim creates a swap file that exists until you start editing the file. Swapfile is used to recover a file in the event of a crash or a use conflict. Swap files are hidden files that begin with `.` and end with `.swp`.
`set noswapfile`
Suppose you need to edit multiple files in the same vim session and switch between them. An annoying feature that's not readily apparent is that the working directory is the one from which you opened the first file. Often it is useful to automatically switch the working directory to that of the file being edited. To enable this option:
`set autochdir`
vim maintains an undo history that lets you undo changes. By default, this history is active only until the file is closed. vim includes a nifty feature that maintains the undo history even after the file is closed, which means you may undo your changes even after the file is saved, closed, and reopened. The undo file is a hidden file saved with the `.un~` extension.
`set undofile`
To set audible alert bells (which sound a warning if you try to scroll beyond the end of a line):
`set errorbells`
If you prefer, you may set visual alert bells:
`set visualbell`
## Bonus
vim provides long-format as well as short-format commands. Either format can be used to set or unset the configuration.
Long format for the `autoindent` command:
`set autoindent`
Short format for the `autoindent` command:
`set ai`
To see the current configuration setting of a command without changing its current value, use `?` at the end:
`set autoindent?`
To unset or turn off a command, most commands take `no` as a prefix:
`set noautoindent`
It is possible to set a command for one file but not for the global configuration. To do this, open the file and type `:`, followed by the `set` command. This configuration is effective only for the current file editing session.

For help on a command:
`:help autoindent`

*Note: The commands listed here were tested on Linux with Vim version 7.4 (2013 Aug 10) and Windows with Vim 8.0 (2016 Sep 12).*
These useful commands are sure to enhance your vim experience. Which other commands do you recommend?
## Cheat sheet
Copy/paste this list of commands in your vimrc file:
```
" Indentation & Tabs
set autoindent
set smartindent
set tabstop=4
set shiftwidth=4
set expandtab
set smarttab
" Display & format
set number
set textwidth=80
set wrapmargin=2
set showmatch
" Search
set hlsearch
set incsearch
set ignorecase
set smartcase
" Browse & Scroll
set scrolloff=5
set laststatus=2
" Spell
set spell spelllang=en_us
" Miscellaneous
set nobackup
set noswapfile
set autochdir
set undofile
set visualbell
set errorbells
```
|
10,040 | GNU GPL 许可证常见问题解答(七) | https://www.gnu.org/licenses/gpl-faq.html | 2018-09-22T14:20:00 | [
"GPL"
] | https://linux.cn/article-10040-1.html | 
本文由高级咨询师薛亮据自由软件基金会(FSF)的[英文原文](https://www.gnu.org/licenses/gpl-faq.html)翻译而成,这篇常见问题解答澄清了在使用 GNU 许可证中遇到许多问题,对于企业和软件开发者在实际应用许可证和解决许可证问题时具有很强的实践指导意义。
1. [关于 GNU 项目、自由软件基金会(FSF)及其许可证的基本问题](/article-9062-1.html)
2. [对于 GNU 许可证的一般了解](/article-8834-1.html)
3. [在您的程序中使用 GNU 许可证](/article-8761-1.html)
4. [依据GNU许可证分发程序](/article-9222-1.html)
5. [在编写其他程序时采用依据 GNU 许可证发布的程序](/article-9448-1.html)
6. [将作品与依据 GNU 许可证发布的代码相结合](/article-9826-1.html)
7. 关于违反 GNU 许可证的问题
### 7 关于违反 GNU 许可证的问题
#### 7.1 如果发现了可能违反 GPL 许可证的行为,我该怎么办?
您应该[进行报告](https://www.gnu.org/licenses/gpl-violation.html)。 首先,尽可能检查事实。然后告诉发行者或版权所有者涉及的具体 GPL 程序。如果是自由软件基金会,请写信给 <[email protected]>。此外,程序的维护者可能是版权所有者,他可能会告诉您如何联系版权所有者,因此将其报告给维护者。
#### 7.2 谁有权力执行 GPL 许可证?(同 1.10)
由于 GPL 是版权许可,软件的版权所有者将是有权执行 GPL 的人。如果您发现违反 GPL 的行为,您应该向遵循 GPL 的该软件的开发人员通报。他们是版权所有者,或与版权所有者有关。若要详细了解如何报告 GPL 违规,请参阅“[如果发现了可能违反 GPL 许可证的行为,我该怎么办?](https://www.gnu.org/licenses/gpl-faq.html#ReportingViolation)”。
#### 7.3 我听说有人依据另一个许可证获得了 GPL 程序的副本。这可能吗?
GNU GPL 不允许用户将其他许可证附加到程序。但是,程序的版权持有人可以依据几个不同的许可证并行发布。其中一个可能是 GNU GPL。
对于您的副本中的许可证来说,假设由版权所有者放置,并且您合法地获得该副本,则是适用于您的副本的许可证。
#### 7.4 遵循 GPL 的程序的开发者是否受 GPL 的约束?开发者的行为是否会违反 GPL?
严格来说,GPL 是开发者授予其他人使用、分发和更改程序的许可。开发者本身并不受其约束,所以无论开发者做什么,都不是 GPL 的“违规”。
但是,如果开发者做某些违反 GPL 的行为,那么开发者一定会失去在社区的道德地位。
#### 7.5 我刚刚发现一家公司有一份 GPL 程序的副本,获取该副本需支付费用。他们会因为没有在互联网上提供副本而违反 GPL 吗?(同 4.12)
不会,GPL 不要求任何人使用互联网进行分发。它也不要求任何人特意去再分发程序。而且(一个特殊情况之外),即使有人决定再分发该程序,GPL 也不会要求他必须特意向您或其他人分发副本。
GPL 要求的是,如果他愿意,他必须有权将副本分发给你。一旦版权所有者将程序的副本分发给某人,如果某人认为合适,那么该人可以将程序再分发给您或任何其他人。
#### 7.6 如果设备会在客户停止支付订阅费用后停止运行,我可以在这样的设备上使用遵循 GPL 的软件吗?
不可以。在这种情况下,继续支付费用的要求限制了用户运行程序的能力。 这是在 GPL 之上的额外要求,GPL 禁止这种行为。
#### 7.7 <ruby> “纠正” <rp> ( </rp> <rt> cure </rt> <rp> ) </rp></ruby>违反 GPL v3 的行为意味着什么?
纠正违规行为意味着调整您的做法以符合许可证的要求。
#### 7.8 如果有人在笔记本电脑上安装遵循 GPL 的软件,然后将笔记本电脑借给朋友,而不提供软件的源代码,他们是否违反了 GPL?
在我们调查这个问题的司法管辖区,这种借用不会被算作<ruby> 传递 <rp> ( </rp> <rt> convey </rt> <rp> ) </rp></ruby>。笔记本电脑的所有者不会依据 GPL 承担任何义务。
#### 7.9 假设两家公司试图通过让一家公司发布已签约的软件,另一家公司发布仅运行第一家公司签约软件的用户产品,以此来规避提供安装信息的要求。这是否违反了 GPL v3?
是的。如果双方通过一起工作来逃避 GPL 的要求,就可以追究它们两者的侵权行为。这一点尤其明确,因为<ruby> 传递 <rp> ( </rp> <rt> convey </rt> <rp> ) </rp></ruby>的定义明确地包括会使某人对二次侵权负责的活动。
---
译者介绍:薛亮,集慧智佳知识产权咨询公司高级咨询师,擅长专利检索、专利分析、竞争对手跟踪、FTO 分析、开源软件知识产权风险分析,致力于为互联网企业、高科技公司提供知识产权咨询服务。

| 200 | OK | ## Frequently Asked Questions about the GNU Licenses
### Table of Contents
**Basic questions about the GNU Project, the Free Software Foundation, and its licenses****General understanding of the GNU licenses****Using GNU licenses for your programs****Distribution of programs released under the GNU licenses****Using programs released under the GNU licenses when writing other programs****Combining work with code released under the GNU licenses****Questions about violations of the GNU licenses**
#### Basic questions about the GNU Project, the Free Software Foundation, and its licenses
[What does “GPL” stand for?](#WhatDoesGPLStandFor)[Does free software mean using the GPL?](#DoesFreeSoftwareMeanUsingTheGPL)[Why should I use the GNU GPL rather than other free software licenses?](#WhyUseGPL)[Does all GNU software use the GNU GPL as its license?](#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense)[Does using the GPL for a program make it GNU software?](#DoesUsingTheGPLForAProgramMakeItGNUSoftware)[Can I use the GPL for something other than software?](#GPLOtherThanSoftware)[Why don't you use the GPL for manuals?](#WhyNotGPLForManuals)[Are there translations of the GPL into other languages?](#GPLTranslations)[Why are some GNU libraries released under the ordinary GPL rather than the Lesser GPL?](#WhySomeGPLAndNotLGPL)[Who has the power to enforce the GPL?](#WhoHasThePower)[Why does the FSF require that contributors to FSF-copyrighted programs assign copyright to the FSF? If I hold copyright on a GPLed program, should I do this, too? If so, how?](#AssignCopyright)[Can I modify the GPL and make a modified license?](#ModifyGPL)[Why did you decide to write the GNU Affero GPLv3 as a separate license?](#SeparateAffero)
#### General understanding of the GNU licenses
[Why does the GPL permit users to publish their modified versions?](#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions)[Does the GPL require that source code of modified versions be posted to the public?](#GPLRequireSourcePostedPublic)[Can I have a GPL-covered program and an unrelated nonfree program on the same computer?](#GPLAndNonfreeOnSameMachine)[If I know someone has a copy of a GPL-covered program, can I demand they give me a copy?](#CanIDemandACopy)[What does “written offer valid for any third party” mean in GPLv2? Does that mean everyone in the world can get the source to any GPLed program no matter what?](#WhatDoesWrittenOfferValid)[The GPL says that modified versions, if released, must be “licensed … to all third parties.” Who are these third parties?](#TheGPLSaysModifiedVersions)[Does the GPL allow me to sell copies of the program for money?](#DoesTheGPLAllowMoney)[Does the GPL allow me to charge a fee for downloading the program from my distribution site?](#DoesTheGPLAllowDownloadFee)[Does the GPL allow me to require that anyone who receives the software must pay me a fee and/or notify me?](#DoesTheGPLAllowRequireFee)[If I distribute GPLed software for a fee, am I required to also make it available to the public without a charge?](#DoesTheGPLRequireAvailabilityToPublic)[Does the GPL allow me to distribute a copy under a nondisclosure agreement?](#DoesTheGPLAllowNDA)[Does the GPL allow me to distribute a modified or beta version under a nondisclosure agreement?](#DoesTheGPLAllowModNDA)[Does the GPL allow me to develop a modified version under a nondisclosure agreement?](#DevelopChangesUnderNDA)[Why does the GPL require including a copy of the GPL with every copy of the program?](#WhyMustIInclude)[What if the work is not very long?](#WhatIfWorkIsShort)[Am I required to claim a copyright on my modifications to a GPL-covered program?](#RequiredToClaimCopyright)[What does the GPL say about translating some code to a different programming 
language?](#TranslateCode)[If a program combines public-domain code with GPL-covered code, can I take the public-domain part and use it as public domain code?](#CombinePublicDomainWithGPL)[I want to get credit for my work. I want people to know what I wrote. Can I still get credit if I use the GPL?](#IWantCredit)[Does the GPL allow me to add terms that would require citation or acknowledgment in research papers which use the GPL-covered software or its output?](#RequireCitation)[Can I omit the preamble of the GPL, or the instructions for how to use it on your own programs, to save space?](#GPLOmitPreamble)[What does it mean to say that two licenses are “compatible”?](#WhatIsCompatible)[What does it mean to say a license is “compatible with the GPL”?](#WhatDoesCompatMean)[Why is the original BSD license incompatible with the GPL?](#OrigBSD)[What is the difference between an “aggregate” and other kinds of “modified versions”?](#MereAggregation)[When it comes to determining whether two pieces of software form a single work, does the fact that the code is in one or more containers have any effect?](#AggregateContainers)[Why does the FSF require that contributors to FSF-copyrighted programs assign copyright to the FSF? If I hold copyright on a GPLed program, should I do this, too? If so, how?](#AssignCopyright)[If I use a piece of software that has been obtained under the GNU GPL, am I allowed to modify the original code into a new program, then distribute and sell that new program commercially?](#GPLCommercially)[Can I use the GPL for something other than software?](#GPLOtherThanSoftware)[I'd like to license my code under the GPL, but I'd also like to make it clear that it can't be used for military and/or commercial uses. 
Can I do this?](#NoMilitary)[Can I use the GPL to license hardware?](#GPLHardware)[Does prelinking a GPLed binary to various libraries on the system, to optimize its performance, count as modification?](#Prelinking)[How does the LGPL work with Java?](#LGPLJava)[Why did you invent the new terms “propagate” and “convey” in GPLv3?](#WhyPropagateAndConvey)[Is “convey” in GPLv3 the same thing as what GPLv2 means by “distribute”?](#ConveyVsDistribute)[If I only make copies of a GPL-covered program and run them, without distributing or conveying them to others, what does the license require of me?](#NoDistributionRequirements)[GPLv3 gives “making available to the public” as an example of propagation. What does this mean? Is making available a form of conveying?](#v3MakingAvailable)[Since distribution and making available to the public are forms of propagation that are also conveying in GPLv3, what are some examples of propagation that do not constitute conveying?](#PropagationNotConveying)[How does GPLv3 make BitTorrent distribution easier?](#BitTorrent)[What is tivoization? How does GPLv3 prevent it?](#Tivoization)[Does GPLv3 prohibit DRM?](#DRMProhibited)[Does GPLv3 require that voters be able to modify the software running in a voting machine?](#v3VotingMachine)[Does GPLv3 have a “patent retaliation clause”?](#v3PatentRetaliation)[In GPLv3 and AGPLv3, what does it mean when it says “notwithstanding any other provision of this License”?](#v3Notwithstanding)[In AGPLv3, what counts as “ interacting with [the software] remotely through a computer network?”](#AGPLv3InteractingRemotely)[How does GPLv3's concept of “you” compare to the definition of “Legal Entity” in the Apache License 2.0?](#ApacheLegalEntity)[In GPLv3, what does “the Program” refer to? 
Is it every program ever released under GPLv3?](#v3TheProgram)[If some network client software is released under AGPLv3, does it have to be able to provide source to the servers it interacts with?](#AGPLv3ServerAsUser)[For software that runs a proxy server licensed under the AGPL, how can I provide an offer of source to users interacting with that code?](#AGPLProxy)
#### Using GNU licenses for your programs
- [How do I upgrade from (L)GPLv2 to (L)GPLv3?](#v3HowToUpgrade)
- [Could you give me step by step instructions on how to apply the GPL to my program?](#CouldYouHelpApplyGPL)
- [Why should I use the GNU GPL rather than other free software licenses?](#WhyUseGPL)
- [Why does the GPL require including a copy of the GPL with every copy of the program?](#WhyMustIInclude)
- [Is putting a copy of the GNU GPL in my repository enough to apply the GPL?](#LicenseCopyOnly)
- [Why should I put a license notice in each source file?](#NoticeInSourceFile)
- [What if the work is not very long?](#WhatIfWorkIsShort)
- [Can I omit the preamble of the GPL, or the instructions for how to use it on your own programs, to save space?](#GPLOmitPreamble)
- [How do I get a copyright on my program in order to release it under the GPL?](#HowIGetCopyright)
- [What if my school might want to make my program into its own proprietary software product?](#WhatIfSchool)
- [I would like to release a program I wrote under the GNU GPL, but I would like to use the same code in nonfree programs.](#ReleaseUnderGPLAndNF)
- [Can the developer of a program who distributed it under the GPL later license it to another party for exclusive use?](#CanDeveloperThirdParty)
- [Can the US Government release a program under the GNU GPL?](#GPLUSGov)
- [Can the US Government release improvements to a GPL-covered program?](#GPLUSGovAdd)
- [Why should programs say “Version 3 of the GPL or any later version”?](#VersionThreeOrLater)
- [Is it a good idea to use a license saying that a certain program can be used only under the latest version of the GNU GPL?](#OnlyLatestVersion)
- [Is there some way that I can GPL the output people get from use of my program? For example, if my program is used to develop hardware designs, can I require that these designs must be free?](#GPLOutput)
- [Why don't you use the GPL for manuals?](#WhyNotGPLForManuals)
- [How does the GPL apply to fonts?](#FontException)
- [What license should I use for website maintenance system templates?](#WMS)
- [Can I release a program under the GPL which I developed using nonfree tools?](#NonFreeTools)
- [I use public key cryptography to sign my code to assure its authenticity. Is it true that GPLv3 forces me to release my private signing keys?](#GiveUpKeys)
- [Does GPLv3 require that voters be able to modify the software running in a voting machine?](#v3VotingMachine)
- [The warranty and liability disclaimers in GPLv3 seem specific to U.S. law. Can I add my own disclaimers to my own code?](#v3InternationalDisclaimers)
- [My program has interactive user interfaces that are non-visual in nature. How can I comply with the Appropriate Legal Notices requirement in GPLv3?](#NonvisualLegalNotices)
#### Distribution of programs released under the GNU licenses
- [Can I release a modified version of a GPL-covered program in binary form only?](#ModifiedJustBinary)
- [I downloaded just the binary from the net. If I distribute copies, do I have to get the source and distribute that too?](#UnchangedJustBinary)
- [I want to distribute binaries via physical media without accompanying sources. Can I provide source code by FTP instead of by mail order?](#DistributeWithSourceOnInternet)
- [My friend got a GPL-covered binary with an offer to supply source, and made a copy for me. Can I use the offer to obtain the source?](#RedistributedBinariesGetSource)
- [Can I put the binaries on my Internet server and put the source on a different Internet site?](#SourceAndBinaryOnDifferentSites)
- [I want to distribute an extended version of a GPL-covered program in binary form. Is it enough to distribute the source for the original version?](#DistributeExtendedBinary)
- [I want to distribute binaries, but distributing complete source is inconvenient. Is it ok if I give users the diffs from the “standard” version along with the binaries?](#DistributingSourceIsInconvenient)
- [Can I make binaries available on a network server, but send sources only to people who order them?](#AnonFTPAndSendSources)
- [How can I make sure each user who downloads the binaries also gets the source?](#HowCanIMakeSureEachDownloadGetsSource)
- [Does the GPL require me to provide source code that can be built to match the exact hash of the binary I am distributing?](#MustSourceBuildToMatchExactHashOfBinary)
- [Can I release a program with a license which says that you can distribute modified versions of it under the GPL but you can't distribute the original itself under the GPL?](#ReleaseNotOriginal)
- [I just found out that a company has a copy of a GPLed program, and it costs money to get it. Aren't they violating the GPL by not making it available on the Internet?](#CompanyGPLCostsMoney)
- [A company is running a modified version of a GPLed program on a web site. Does the GPL say they must release their modified sources?](#UnreleasedMods)
- [A company is running a modified version of a program licensed under the GNU Affero GPL (AGPL) on a web site. Does the AGPL say they must release their modified sources?](#UnreleasedModsAGPL)
- [Is use within one organization or company “distribution”?](#InternalDistribution)
- [If someone steals a CD containing a version of a GPL-covered program, does the GPL give him the right to redistribute that version?](#StolenCopy)
- [What if a company distributes a copy of some other developers' GPL-covered work to me as a trade secret?](#TradeSecretRelease)
- [What if a company distributes a copy of its own GPL-covered work to me as a trade secret?](#TradeSecretRelease2)
- [Do I have “fair use” rights in using the source code of a GPL-covered program?](#GPLFairUse)
- [Does moving a copy to a majority-owned, and controlled, subsidiary constitute distribution?](#DistributeSubsidiary)
- [Can software installers ask people to click to agree to the GPL? If I get some software under the GPL, do I have to agree to anything?](#ClickThrough)
- [I would like to bundle GPLed software with some sort of installation software. Does that installer need to have a GPL-compatible license?](#GPLCompatInstaller)
- [Does a distributor violate the GPL if they require me to “represent and warrant” that I am located in the US, or that I intend to distribute the software in compliance with relevant export control laws?](#ExportWarranties)
- [The beginning of GPLv3 section 6 says that I can convey a covered work in object code form “under the terms of sections 4 and 5” provided I also meet the conditions of section 6. What does that mean?](#v3Under4and5)
- [My company owns a lot of patents. Over the years we've contributed code to projects under “GPL version 2 or any later version”, and the project itself has been distributed under the same terms. If a user decides to take the project's code (incorporating my contributions) under GPLv3, does that mean I've automatically granted GPLv3's explicit patent license to that user?](#v2OrLaterPatentLicense)
- [If I distribute a GPLv3-covered program, can I provide a warranty that is voided if the user modifies the program?](#v3ConditionalWarranty)
- [If I give a copy of a GPLv3-covered program to a coworker at my company, have I “conveyed” the copy to that coworker?](#v3CoworkerConveying)
- [Am I complying with GPLv3 if I offer binaries on an FTP server and sources by way of a link to a source code repository in a version control system, like CVS or Subversion?](#SourceInCVS)
- [Can someone who conveys GPLv3-covered software in a User Product use remote attestation to prevent a user from modifying that software?](#RemoteAttestation)
- [What does “rules and protocols for communication across the network” mean in GPLv3?](#RulesProtocols)
- [Distributors that provide Installation Information under GPLv3 are not required to provide “support service” for the product. What kind of “support service” do you mean?](#SupportService)
#### Using programs released under the GNU licenses when writing other programs
- [Can I have a GPL-covered program and an unrelated nonfree program on the same computer?](#GPLAndNonfreeOnSameMachine)
- [Can I use GPL-covered editors such as GNU Emacs to develop nonfree programs? Can I use GPL-covered tools such as GCC to compile them?](#CanIUseGPLToolsForNF)
- [Is there some way that I can GPL the output people get from use of my program? For example, if my program is used to develop hardware designs, can I require that these designs must be free?](#GPLOutput)
- [In what cases is the output of a GPL program covered by the GPL too?](#WhatCaseIsOutputGPL)
- [If I port my program to GNU/Linux, does that mean I have to release it as free software under the GPL or some other free software license?](#PortProgramToGPL)
- [I'd like to incorporate GPL-covered software in my proprietary system. I have no permission to use that software except what the GPL gives me. Can I do this?](#GPLInProprietarySystem)
- [If I distribute a proprietary program that links against an LGPLv3-covered library that I've modified, what is the “contributor version” for purposes of determining the scope of the explicit patent license grant I'm making—is it just the library, or is it the whole combination?](#LGPLv3ContributorVersion)
- [Under AGPLv3, when I modify the Program under section 13, what Corresponding Source does it have to offer?](#AGPLv3CorrespondingSource)
- [Where can I learn more about the GCC Runtime Library Exception?](#LibGCCException)
#### Combining work with code released under the GNU licenses
- [Is GPLv3 compatible with GPLv2?](#v2v3Compatibility)
- [Does GPLv2 have a requirement about delivering installation information?](#InstInfo)
- [How are the various GNU licenses compatible with each other?](#AllCompatibility)
- [What is the difference between an “aggregate” and other kinds of “modified versions”?](#MereAggregation)
- [Do I have “fair use” rights in using the source code of a GPL-covered program?](#GPLFairUse)
- [Can the US Government release improvements to a GPL-covered program?](#GPLUSGovAdd)
- [Does the GPL have different requirements for statically vs dynamically linked modules with a covered work?](#GPLStaticVsDynamic)
- [Does the LGPL have different requirements for statically vs dynamically linked modules with a covered work?](#LGPLStaticVsDynamic)
- [If a library is released under the GPL (not the LGPL), does that mean that any software which uses it has to be under the GPL or a GPL-compatible license?](#IfLibraryIsGPL)
- [You have a GPLed program that I'd like to link with my code to build a proprietary program. Does the fact that I link with your program mean I have to GPL my program?](#LinkingWithGPL)
- [If so, is there any chance I could get a license of your program under the Lesser GPL?](#SwitchToLGPL)
- [If a programming language interpreter is released under the GPL, does that mean programs written to be interpreted by it must be under GPL-compatible licenses?](#IfInterpreterIsGPL)
- [If a programming language interpreter has a license that is incompatible with the GPL, can I run GPL-covered programs on it?](#InterpreterIncompat)
- [If I add a module to a GPL-covered program, do I have to use the GPL as the license for my module?](#GPLModuleLicense)
- [When is a program and its plug-ins considered a single combined program?](#GPLPlugins)
- [If I write a plug-in to use with a GPL-covered program, what requirements does that impose on the licenses I can use for distributing my plug-in?](#GPLAndPlugins)
- [Can I apply the GPL when writing a plug-in for a nonfree program?](#GPLPluginsInNF)
- [Can I release a nonfree program that's designed to load a GPL-covered plug-in?](#NFUseGPLPlugins)
- [I'd like to incorporate GPL-covered software in my proprietary system. I have no permission to use that software except what the GPL gives me. Can I do this?](#GPLInProprietarySystem)
- [Using a certain GNU program under the GPL does not fit our project to make proprietary software. Will you make an exception for us? It would mean more users of that program.](#WillYouMakeAnException)
- [I'd like to incorporate GPL-covered software in my proprietary system. Can I do this by putting a “wrapper” module, under a GPL-compatible lax permissive license (such as the X11 license) in between the GPL-covered part and the proprietary part?](#GPLWrapper)
- [Can I write free software that uses nonfree libraries?](#FSWithNFLibs)
- [Can I link a GPL program with a proprietary system library?](#SystemLibraryException)
- [In what ways can I link or combine AGPLv3-covered and GPLv3-covered code?](#AGPLGPL)
- [What legal issues come up if I use GPL-incompatible libraries with GPL software?](#GPLIncompatibleLibs)
- [I'm writing a Windows application with Microsoft Visual C++ and I will be releasing it under the GPL. Is dynamically linking my program with the Visual C++ runtime library permitted under the GPL?](#WindowsRuntimeAndGPL)
- [I'd like to modify GPL-covered programs and link them with the portability libraries from Money Guzzler Inc. I cannot distribute the source code for these libraries, so any user who wanted to change these versions would have to obtain those libraries separately. Why doesn't the GPL permit this?](#MoneyGuzzlerInc)
- [If license for a module Q has a requirement that's incompatible with the GPL, but the requirement applies only when Q is distributed by itself, not when Q is included in a larger program, does that make the license GPL-compatible? Can I combine or link Q with a GPL-covered program?](#GPLIncompatibleAlone)
- [In an object-oriented language such as Java, if I use a class that is GPLed without modifying, and subclass it, in what way does the GPL affect the larger program?](#OOPLang)
- [Does distributing a nonfree driver meant to link with the kernel Linux violate the GPL?](#NonfreeDriverKernelLinux)
- [How can I allow linking of proprietary modules with my GPL-covered library under a controlled interface only?](#LinkingOverControlledInterface)
- [Consider this situation: 1) X releases V1 of a project under the GPL. 2) Y contributes to the development of V2 with changes and new code based on V1. 3) X wants to convert V2 to a non-GPL license. Does X need Y's permission?](#Consider)
- [I have written an application that links with many different components, that have different licenses. I am very confused as to what licensing requirements are placed on my program. Can you please tell me what licenses I may use?](#ManyDifferentLicenses)
- [Can I use snippets of GPL-covered source code within documentation that is licensed under some license that is incompatible with the GPL?](#SourceCodeInDocumentation)
#### Questions about violations of the GNU licenses
- [What should I do if I discover a possible violation of the GPL?](#ReportingViolation)
- [Who has the power to enforce the GPL?](#WhoHasThePower)
- [I heard that someone got a copy of a GPLed program under another license. Is this possible?](#HeardOtherLicense)
- [Is the developer of a GPL-covered program bound by the GPL? Could the developer's actions ever be a violation of the GPL?](#DeveloperViolate)
- [I just found out that a company has a copy of a GPLed program, and it costs money to get it. Aren't they violating the GPL by not making it available on the Internet?](#CompanyGPLCostsMoney)
- [Can I use GPLed software on a device that will stop operating if customers do not continue paying a subscription fee?](#SubscriptionFee)
- [What does it mean to “cure” a violation of GPLv3?](#Cure)
- [If someone installs GPLed software on a laptop, and then lends that laptop to a friend without providing source code for the software, have they violated the GPL?](#LaptopLoan)
- [Suppose that two companies try to circumvent the requirement to provide Installation Information by having one company release signed software, and the other release a User Product that only runs signed software from the first company. Is this a violation of GPLv3?](#TwoPartyTivoization)
This page is maintained by the Free Software
Foundation's Licensing and Compliance Lab. You can support our efforts by
[making a donation](http://donate.fsf.org) to the FSF.
You can use our publications to understand how GNU licenses work or help you advocate for free software, but they are not legal advice. The FSF cannot give legal advice. Legal advice is personalized advice from a lawyer who has agreed to work for you. Our answers address general questions and may not apply in your specific legal situation.
Have a
question not answered here? Check out some of our other [licensing resources](https://www.fsf.org/licensing) or contact the
Compliance Lab at [[email protected]](mailto:[email protected]).
- What does “GPL” stand for? ([#WhatDoesGPLStandFor](#WhatDoesGPLStandFor))

  “GPL” stands for “General Public License”. The most widespread such license is the GNU General Public License, or GNU GPL for short. This can be further shortened to “GPL”, when it is understood that the GNU GPL is the one intended.
- Does free software mean using the GPL? ([#DoesFreeSoftwareMeanUsingTheGPL](#DoesFreeSoftwareMeanUsingTheGPL))

  Not at all—there are many other free software licenses. We have an [incomplete list](/licenses/license-list.html). Any license that provides the user [certain specific freedoms](/philosophy/free-sw.html) is a free software license.

- Why should I use the GNU GPL rather than other free software licenses? ([#WhyUseGPL](#WhyUseGPL))

  Using the GNU GPL will require that all the [released improved versions be free software](/philosophy/pragmatic.html). This means you can avoid the risk of having to compete with a proprietary modified version of your own work. However, in some special situations it can be better to use a [more permissive license](/licenses/why-not-lgpl.html).

- Does all GNU software use the GNU GPL as its license? ([#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense](#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense))

  Most GNU software packages use the GNU GPL, but there are a few GNU programs (and parts of programs) that use looser licenses, such as the Lesser GPL. When we do this, it is a matter of [strategy](/licenses/why-not-lgpl.html).

- Does using the GPL for a program make it GNU software? ([#DoesUsingTheGPLForAProgramMakeItGNUSoftware](#DoesUsingTheGPLForAProgramMakeItGNUSoftware))

  Anyone can release a program under the GNU GPL, but that does not make it a GNU package.

  Making the program a GNU software package means explicitly contributing to the GNU Project. This happens when the program's developers and the GNU Project agree to do it. If you are interested in contributing a program to the GNU Project, please write to [<[email protected]>](mailto:[email protected]).

- What should I do if I discover a possible violation of the GPL? ([#ReportingViolation](#ReportingViolation))

  You should [report it](/licenses/gpl-violation.html). First, check the facts as best you can. Then tell the publisher or copyright holder of the specific GPL-covered program. If that is the Free Software Foundation, write to [<[email protected]>](mailto:[email protected]). Otherwise, the program's maintainer may be the copyright holder, or else could tell you how to contact the copyright holder, so report it to the maintainer.

- Why does the GPL permit users to publish their modified versions? ([#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions](#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions))

  A crucial aspect of free software is that users are free to cooperate. It is absolutely essential to permit users who wish to help each other to share their bug fixes and improvements with other users.

  Some have proposed alternatives to the GPL that require modified versions to go through the original author. As long as the original author keeps up with the need for maintenance, this may work well in practice, but if the author stops (more or less) to do something else or does not attend to all the users' needs, this scheme falls down. Aside from the practical problems, this scheme does not allow users to help each other.

  Sometimes control over modified versions is proposed as a means of preventing confusion between various versions made by users. In our experience, this confusion is not a major problem. Many versions of Emacs have been made outside the GNU Project, but users can tell them apart. The GPL requires the maker of a version to place his or her name on it, to distinguish it from other versions and to protect the reputations of other maintainers.
- Does the GPL require that source code of modified versions be posted to the public? ([#GPLRequireSourcePostedPublic](#GPLRequireSourcePostedPublic))

  The GPL does not require you to release your modified version, or any part of it. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it internally without ever releasing it outside the organization.

  But *if* you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL.

  Thus, the GPL gives permission to release the modified program in certain ways, and not in other ways; but the decision of whether to release it is up to you.
- Can I have a GPL-covered program and an unrelated nonfree program on the same computer? ([#GPLAndNonfreeOnSameMachine](#GPLAndNonfreeOnSameMachine))

  Yes.
- If I know someone has a copy of a GPL-covered program, can I demand they give me a copy? ([#CanIDemandACopy](#CanIDemandACopy))

  No. The GPL gives a person permission to make and redistribute copies of the program *if and when that person chooses to do so*. That person also has the right not to choose to redistribute the program.

- What does “written offer valid for any third party” mean in GPLv2? Does that mean everyone in the world can get the source to any GPLed program no matter what? ([#WhatDoesWrittenOfferValid](#WhatDoesWrittenOfferValid))

  If you choose to provide source through a written offer, then anybody who requests the source from you is entitled to receive it.

  If you commercially distribute binaries not accompanied with source code, the GPL says you must provide a written offer to distribute the source code later. When users non-commercially redistribute the binaries they received from you, they must pass along a copy of this written offer. This means that people who did not get the binaries directly from you can still receive copies of the source code, along with the written offer.

  The reason we require the offer to be valid for any third party is so that people who receive the binaries indirectly in that way can order the source code from you.
- GPLv2 says that modified versions, if released, must be “licensed … to all third parties.” Who are these third parties? ([#TheGPLSaysModifiedVersions](#TheGPLSaysModifiedVersions))

  Section 2 says that modified versions you distribute must be licensed to all third parties under the GPL. “All third parties” means absolutely everyone—but this does not require you to *do* anything physically for them. It only means they have a license from you, under the GPL, for your version.

- Am I required to claim a copyright on my modifications to a GPL-covered program? ([#RequiredToClaimCopyright](#RequiredToClaimCopyright))

  You are not required to claim a copyright on your changes. In most countries, however, that happens automatically by default, so you need to place your changes explicitly in the public domain if you do not want them to be copyrighted.

  Whether you claim a copyright on your changes or not, either way you must release the modified version, as a whole, under the GPL ([if you release your modified version at all](#GPLRequireSourcePostedPublic)).

- What does the GPL say about translating some code to a different programming language? ([#TranslateCode](#TranslateCode))

  Under copyright law, translation of a work is considered a kind of modification. Therefore, what the GPL says about modified versions applies also to translated versions. The translation is covered by the copyright on the original program.

  If the original program carries a free license, that license gives permission to translate it. How you can use and license the translated program is determined by that license. If the original program is licensed under certain versions of the GNU GPL, the translated program must be covered by the same versions of the GNU GPL.
- If a program combines public-domain code with GPL-covered code, can I take the public-domain part and use it as public domain code? ([#CombinePublicDomainWithGPL](#CombinePublicDomainWithGPL))

  You can do that, if you can figure out which part is the public domain part and separate it from the rest. If code was put in the public domain by its developer, it is in the public domain no matter where it has been.
- Does the GPL allow me to sell copies of the program for money? ([#DoesTheGPLAllowMoney](#DoesTheGPLAllowMoney))

  Yes, the GPL allows everyone to do this. The [right to sell copies](/philosophy/selling.html) is part of the definition of free software. Except in one special situation, there is no limit on what price you can charge. (The one exception is the required written offer to provide source code that must accompany binary-only release.)

- Does the GPL allow me to charge a fee for downloading the program from my distribution site? ([#DoesTheGPLAllowDownloadFee](#DoesTheGPLAllowDownloadFee))

  Yes. You can charge any fee you wish for distributing a copy of the program. Under GPLv2, if you distribute binaries by download, you must provide “equivalent access” to download the source—therefore, the fee to download source may not be greater than the fee to download the binary. If the binaries being distributed are licensed under the GPLv3, then you must offer equivalent access to the source code in the same way through the same place at no further charge.
- Does the GPL allow me to require that anyone who receives the software must pay me a fee and/or notify me? ([#DoesTheGPLAllowRequireFee](#DoesTheGPLAllowRequireFee))

  No. In fact, a requirement like that would make the program nonfree. If people have to pay when they get a copy of a program, or if they have to notify anyone in particular, then the program is not free. See the [definition of free software](/philosophy/free-sw.html).

  The GPL is a free software license, and therefore it permits people to use and even redistribute the software without being required to pay anyone a fee for doing so.

  You *can* charge people a fee to [get a copy](#DoesTheGPLAllowMoney) *from you*. You can't require people to pay you when they get a copy *from someone else*.

- If I distribute GPLed software for a fee, am I required to also make it available to the public without a charge? ([#DoesTheGPLRequireAvailabilityToPublic](#DoesTheGPLRequireAvailabilityToPublic))

  No. However, if someone pays your fee and gets a copy, the GPL gives them the freedom to release it to the public, with or without a fee. For example, someone could pay your fee, and then put her copy on a web site for the general public.
- Does the GPL allow me to distribute copies under a nondisclosure agreement? ([#DoesTheGPLAllowNDA](#DoesTheGPLAllowNDA))

  No. The GPL says that anyone who receives a copy from you has the right to redistribute copies, modified or not. You are not allowed to distribute the work on any more restrictive basis.

  If someone asks you to sign an NDA for receiving GPL-covered software copyrighted by the FSF, please inform us immediately by writing to [[email protected]](mailto:[email protected]).

  If the violation involves GPL-covered code that has some other copyright holder, please inform that copyright holder, just as you would for any other kind of violation of the GPL.
- Does the GPL allow me to distribute a modified or beta version under a nondisclosure agreement? ([#DoesTheGPLAllowModNDA](#DoesTheGPLAllowModNDA))

  No. The GPL says that your modified versions must carry all the freedoms stated in the GPL. Thus, anyone who receives a copy of your version from you has the right to redistribute copies (modified or not) of that version. You may not distribute any version of the work on a more restrictive basis.
- Does the GPL allow me to develop a modified version under a nondisclosure agreement? ([#DevelopChangesUnderNDA](#DevelopChangesUnderNDA))

  Yes. For instance, you can accept a contract to develop changes and agree not to release *your changes* until the client says ok. This is permitted because in this case no GPL-covered code is being distributed under an NDA.

  You can also release your changes to the client under the GPL, but agree not to release them to anyone else unless the client says ok. In this case, too, no GPL-covered code is being distributed under an NDA, or under any additional restrictions.

  The GPL would give the client the right to redistribute your version. In this scenario, the client will probably choose not to exercise that right, but does *have* the right.

- I want to get credit for my work. I want people to know what I wrote. Can I still get credit if I use the GPL? ([#IWantCredit](#IWantCredit))

  You can certainly get credit for the work. Part of releasing a program under the GPL is writing a copyright notice in your own name (assuming you are the copyright holder). The GPL requires all copies to carry an appropriate copyright notice.
- Does the GPL allow me to add terms that would require citation or acknowledgment in research papers which use the GPL-covered software or its output? ([#RequireCitation](#RequireCitation))

  No, this is not permitted under the terms of the GPL. While we recognize that proper citation is an important part of academic publications, citation cannot be added as an additional requirement to the GPL. Requiring citation in research papers which made use of GPLed software goes beyond what would be an acceptable additional requirement under section 7(b) of GPLv3, and therefore would be considered an additional restriction under Section 7 of the GPL. And copyright law does not allow you to place such a [requirement on the output of software](#GPLOutput), regardless of whether it is licensed under the terms of the GPL or some other license.

- Why does the GPL require including a copy of the GPL with every copy of the program? ([#WhyMustIInclude](#WhyMustIInclude))

  Including a copy of the license with the work is vital so that everyone who gets a copy of the program can know what their rights are.

  It might be tempting to include a URL that refers to the license, instead of the license itself. But you cannot be sure that the URL will still be valid, five years or ten years from now. Twenty years from now, URLs as we know them today may no longer exist.

  The only way to make sure that people who have copies of the program will continue to be able to see the license, despite all the changes that will happen in the network, is to include a copy of the license in the program.
- Is it enough just to put a copy of the GNU GPL in my repository? ([#LicenseCopyOnly](#LicenseCopyOnly))

  Just putting a copy of the GNU GPL in a file in your repository does not explicitly state that the code in the same repository may be used under the GNU GPL. Without such a statement, it's not entirely clear that the permissions in the license really apply to any particular source file. An explicit statement saying that eliminates all doubt.

  A file containing just a license, without a statement that certain other files are covered by that license, resembles a file containing just a subroutine which is never called from anywhere else. The resemblance is not perfect: lawyers and courts might apply common sense and conclude that you must have put the copy of the GNU GPL there because you wanted to license the code that way. Or they might not. Why leave an uncertainty?

  This statement should be in each source file. A clear statement in the program's README file is legally sufficient *as long as that accompanies the code*, but it is easy for them to get separated. Why take a risk of [uncertainty about your code's license](#NoticeInSourceFile)?

  This has nothing to do with the specifics of the GNU GPL. It is true for any free license.

- Why should I put a license notice in each source file? ([#NoticeInSourceFile](#NoticeInSourceFile))

  You should put a notice at the start of each source file, stating what license it carries, in order to avoid risk of the code's getting disconnected from its license. If your repository's README says that source file is under the GNU GPL, what happens if someone copies that file to another program? That other context may not show what the file's license is. It may appear to have some other license, or [no license at all](/licenses/license-list.html#NoLicense) (which would make the code nonfree).

  Adding a copyright notice and a license notice at the start of each source file is easy and makes such confusion unlikely.

  This has nothing to do with the specifics of the GNU GPL. It is true for any free license.
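  As an illustration of such a per-file notice (the canonical wording appears in the “How to Apply These Terms to Your New Programs” section at the end of the GPL itself; `<year>` and `<name of author>` are placeholders you fill in), here is how it might look as a C comment at the top of a source file:

  ```c
  /*
   * Copyright (C) <year>  <name of author>
   *
   * This program is free software: you can redistribute it and/or modify
   * it under the terms of the GNU General Public License as published by
   * the Free Software Foundation, either version 3 of the License, or
   * (at your option) any later version.
   *
   * This program is distributed in the hope that it will be useful,
   * but WITHOUT ANY WARRANTY; without even the implied warranty of
   * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   * GNU General Public License for more details.
   *
   * You should have received a copy of the GNU General Public License
   * along with this program.  If not, see <https://www.gnu.org/licenses/>.
   */
  ```

  The same text works in any language's comment syntax; the point is that the copyright line and the license statement travel with the file itself.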
- What if the work is not very long? ([#WhatIfWorkIsShort](#WhatIfWorkIsShort))

  If a whole software package contains very little code—less than 300 lines is the benchmark we use—you may as well use a lax permissive license for it, rather than a copyleft license like the GNU GPL. (Unless, that is, the code is specially important.) We [recommend the Apache License 2.0](/licenses/license-recommendations.html#software) for such cases.

- Can I omit the preamble of the GPL, or the instructions for how to use it on your own programs, to save space? ([#GPLOmitPreamble](#GPLOmitPreamble))

  The preamble and instructions are integral parts of the GNU GPL and may not be omitted. In fact, the GPL is copyrighted, and its license permits only verbatim copying of the entire GPL. (You can use the legal terms to make [another license](#ModifyGPL) but it won't be the GNU GPL.)

  The preamble and instructions add up to some 1000 words, less than 1/5 of the GPL's total size. They will not make a substantial fractional change in the size of a software package unless the package itself is quite small. In that case, you may as well use a simple all-permissive license rather than the GNU GPL.
- What does it
mean to say that two licenses are “compatible”?
(
[#WhatIsCompatible](#WhatIsCompatible)) In order to combine two programs (or substantial parts of them) into a larger work, you need to have permission to use both programs in this way. If the two programs' licenses permit this, they are compatible. If there is no way to satisfy both licenses at once, they are incompatible.
For some licenses, the way in which the combination is made may affect whether they are compatible—for instance, they may allow linking two modules together, but not allow merging their code into one module.
If you just want to install two separate programs in the same system, it is not necessary that their licenses be compatible, because this does not combine them into a larger work.
- What does it mean to say a license is
“compatible with the GPL?”
(
[#WhatDoesCompatMean](#WhatDoesCompatMean)) It means that the other license and the GNU GPL are compatible; you can combine code released under the other license with code released under the GNU GPL in one larger program.
All GNU GPL versions permit such combinations privately; they also permit distribution of such combinations provided the combination is released under the same GNU GPL version. The other license is compatible with the GPL if it permits this too.
GPLv3 is compatible with more licenses than GPLv2: it allows you to make combinations with code that has specific kinds of additional requirements that are not in GPLv3 itself. Section 7 has more information about this, including the list of additional requirements that are permitted.
- Can I write
free software that uses nonfree libraries?
(
[#FSWithNFLibs](#FSWithNFLibs)) If you do this, your program won't be fully usable in a free environment. If your program depends on a nonfree library to do a certain job, it cannot do that job in the Free World. If it depends on a nonfree library to run at all, it cannot be part of a free operating system such as GNU; it is entirely off limits to the Free World.
So please consider: can you find a way to get the job done without using this library? Can you write a free replacement for that library?
If the program is already written using the nonfree library, perhaps it is too late to change the decision. You may as well release the program as it stands, rather than not release it. But please mention in the README that the need for the nonfree library is a drawback, and suggest the task of changing the program so that it does the same job without the nonfree library. Please suggest that anyone who thinks of doing substantial further work on the program first free it from dependence on the nonfree library.
Note that there may also be legal issues with combining certain nonfree libraries with GPL-covered free software. Please see [the question on GPL software with GPL-incompatible libraries](#GPLIncompatibleLibs) for more information.
- Can I link a GPL program with a
proprietary system library? (
[#SystemLibraryException](#SystemLibraryException)) Both versions of the GPL have an exception to their copyleft, commonly called the system library exception. If the GPL-incompatible libraries you want to use meet the criteria for a system library, then you don't have to do anything special to use them; the requirement to distribute source code for the whole program does not include those libraries, even if you distribute a linked executable containing them.
The criteria for what counts as a “system library” vary between different versions of the GPL. GPLv3 explicitly defines “System Libraries” in section 1, to exclude it from the definition of “Corresponding Source.” GPLv2 deals with this issue slightly differently, near the end of section 3.
- In what ways can I link or combine
AGPLv3-covered and GPLv3-covered code?
(
[#AGPLGPL](#AGPLGPL)) Each of these licenses explicitly permits linking with code under the other license. You can always link GPLv3-covered modules with AGPLv3-covered modules, and vice versa. That is true regardless of whether some of the modules are libraries.
- What legal issues
come up if I use GPL-incompatible libraries with GPL software?
(
[#GPLIncompatibleLibs](#GPLIncompatibleLibs)) If you want your program to link against a library not covered by the system library exception, you need to provide permission to do that. Below are two example license notices that you can use to do that; one for GPLv3, and the other for GPLv2. In either case, you should put this text in each file to which you are granting this permission.
Only the copyright holders for the program can legally release their software under these terms. If you wrote the whole program yourself, then assuming your employer or school does not claim the copyright, you are the copyright holder—so you can authorize the exception. But if you want to use parts of other GPL-covered programs by other authors in your code, you cannot authorize the exception for them. You have to get the approval of the copyright holders of those programs.
When other people modify the program, they do not have to make the same exception for their code—it is their choice whether to do so.
If the libraries you intend to link with are nonfree, please also see
[the section on writing Free Software which uses nonfree libraries](#FSWithNFLibs).

If you're using GPLv3, you can accomplish this goal by granting an additional permission under section 7. The following license notice will do that. You must replace all the text in brackets with text that is appropriate for your program. If not everybody can distribute source for the libraries you intend to link with, you should remove the text in braces; otherwise, just remove the braces themselves.
Copyright (C) `[years]` `[name of copyright holder]`

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see <https://www.gnu.org/licenses>.
Additional permission under GNU GPL version 3 section 7
If you modify this Program, or any covered work, by linking or combining it with `[name of library]` (or a modified version of that library), containing parts covered by the terms of `[name of library's license]`, the licensors of this Program grant you additional permission to convey the resulting work. {Corresponding Source for a non-source form of such a combination shall include the source code for the parts of `[name of library]` used as well as that of the covered work.}

If you're using GPLv2, you can provide your own exception to the license's terms. The following license notice will do that. Again, you must replace all the text in brackets with text that is appropriate for your program. If not everybody can distribute source for the libraries you intend to link with, you should remove the text in braces; otherwise, just remove the braces themselves.
Copyright (C) `[years]` `[name of copyright holder]`

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see <https://www.gnu.org/licenses>.
Linking `[name of your program]` statically or dynamically with other modules is making a combined work based on `[name of your program]`. Thus, the terms and conditions of the GNU General Public License cover the whole combination.

In addition, as a special exception, the copyright holders of `[name of your program]` give you permission to combine `[name of your program]` with free software programs or libraries that are released under the GNU LGPL and with code included in the standard release of `[name of library]` under the `[name of library's license]` (or modified versions of such code, with unchanged license). You may copy and distribute such a system following the terms of the GNU GPL for `[name of your program]` and the licenses of the other code concerned{, provided that you include the source code of that other code when and as the GNU GPL requires distribution of source code}.

Note that people who make modified versions of `[name of your program]` are not obligated to grant this special exception for their modified versions; it is their choice whether to do so. The GNU General Public License gives permission to release a modified version without this exception; this exception also makes it possible to release a modified version which carries forward this exception.
- How do I get a copyright on my program
in order to release it under the GPL?
(
[#HowIGetCopyright](#HowIGetCopyright)) Under the Berne Convention, everything written is automatically copyrighted from whenever it is put in fixed form. So you don't have to do anything to “get” the copyright on what you write—as long as nobody else can claim to own your work.
However, registering the copyright in the US is a very good idea. It will give you more clout in dealing with an infringer in the US.
The case when someone else might possibly claim the copyright is if you are an employee or student; then the employer or the school might claim you did the job for them and that the copyright belongs to them. Whether they would have a valid claim would depend on circumstances such as the laws of the place where you live, and on your employment contract and what sort of work you do. It is best to consult a lawyer if there is any possible doubt.
If you think that the employer or school might have a claim, you can resolve the problem clearly by getting a copyright disclaimer signed by a suitably authorized officer of the company or school. (Your immediate boss or a professor is usually NOT authorized to sign such a disclaimer.)
- What if my school
might want to make my program into its own proprietary software product?
(
[#WhatIfSchool](#WhatIfSchool)) Many universities nowadays try to raise funds by restricting the use of the knowledge and information they develop, in effect behaving little different from commercial businesses. (See “The Kept University”, Atlantic Monthly, March 2000, for a general discussion of this problem and its effects.)
If you see any chance that your school might refuse to allow your program to be released as free software, it is best to raise the issue at the earliest possible stage. The closer the program is to working usefully, the more temptation the administration might feel to take it from you and finish it without you. At an earlier stage, you have more leverage.
So we recommend that you approach them when the program is only half-done, saying, “If you will agree to releasing this as free software, I will finish it.” Don't think of this as a bluff. To prevail, you must have the courage to say, “My program will have liberty, or never be born.”
- Could
you give me step by step instructions on how to apply the GPL to my program?
(
[#CouldYouHelpApplyGPL](#CouldYouHelpApplyGPL)) See the page of [GPL instructions](/licenses/gpl-howto.html).
- I heard that someone got a copy
of a GPLed program under another license. Is this possible?
(
[#HeardOtherLicense](#HeardOtherLicense)) The GNU GPL does not give users permission to attach other licenses to the program. But the copyright holder for a program can release it under several different licenses in parallel. One of them may be the GNU GPL.
The license that comes in your copy, assuming it was put in by the copyright holder and that you got the copy legitimately, is the license that applies to your copy.
- I would like to release a program I wrote
under the GNU GPL, but I would
like to use the same code in nonfree programs.
(
[#ReleaseUnderGPLAndNF](#ReleaseUnderGPLAndNF)) To release a nonfree program is always ethically tainted, but legally there is no obstacle to your doing this. If you are the copyright holder for the code, you can release it under various different non-exclusive licenses at various times.
- Is the
developer of a GPL-covered program bound by the GPL? Could the
developer's actions ever be a violation of the GPL?
(
[#DeveloperViolate](#DeveloperViolate)) Strictly speaking, the GPL is a license from the developer for others to use, distribute and change the program. The developer itself is not bound by it, so no matter what the developer does, this is not a “violation” of the GPL.
However, if the developer does something that would violate the GPL if done by someone else, the developer will surely lose moral standing in the community.
- Can the developer of a program who distributed
it under the GPL later license it to another party for exclusive use?
(
[#CanDeveloperThirdParty](#CanDeveloperThirdParty)) No, because the public already has the right to use the program under the GPL, and this right cannot be withdrawn.
- Can I use GPL-covered editors such as
GNU Emacs to develop nonfree programs? Can I use GPL-covered tools
such as GCC to compile them?
(
[#CanIUseGPLToolsForNF](#CanIUseGPLToolsForNF)) Yes, because the copyright on the editors and tools does not cover the code you write. Using them does not place any restrictions, legally, on the license you use for your code.
Some programs copy parts of themselves into the output for technical reasons—for example, Bison copies a standard parser program into its output file. In such cases, the copied text in the output is covered by the same license that covers it in the source code. Meanwhile, the part of the output which is derived from the program's input inherits the copyright status of the input.
As it happens, Bison can also be used to develop nonfree programs. This is because we decided to explicitly permit the use of the Bison standard parser program in Bison output files without restriction. We made the decision because there were other tools comparable to Bison which already permitted use for nonfree programs.
- Do I have “fair use”
rights in using the source code of a GPL-covered program?
(
[#GPLFairUse](#GPLFairUse)) Yes, you do. “Fair use” is use that is allowed without any special permission. Since you don't need the developers' permission for such use, you can do it regardless of what the developers said about it—in the license or elsewhere, whether that license be the GNU GPL or any other free software license.
Note, however, that there is no world-wide principle of fair use; what kinds of use are considered “fair” varies from country to country.
- Can the US Government release a program under the GNU GPL?
(
[#GPLUSGov](#GPLUSGov)) If the program is written by US federal government employees in the course of their employment, it is in the public domain, which means it is not copyrighted. Since the GNU GPL is based on copyright, such a program cannot be released under the GNU GPL. (It can still be [free software](/philosophy/free-sw.html), however; a public domain program is free.)

However, when a US federal government agency uses contractors to develop software, that is a different situation. The contract can require the contractor to release it under the GNU GPL. (GNU Ada was developed in this way.) Or the contract can assign the copyright to the government agency, which can then release the software under the GNU GPL.
- Can the US Government
release improvements to a GPL-covered program?
(
[#GPLUSGovAdd](#GPLUSGovAdd)) Yes. If the improvements are written by US government employees in the course of their employment, then the improvements are in the public domain. However, the improved version, as a whole, is still covered by the GNU GPL. There is no problem in this situation.
If the US government uses contractors to do the job, then the improvements themselves can be GPL-covered.
- Does the GPL have different requirements
for statically vs dynamically linked modules with a covered
work? (
[#GPLStaticVsDynamic](#GPLStaticVsDynamic)) No. Linking a GPL covered work statically or dynamically with other modules is making a combined work based on the GPL covered work. Thus, the terms and conditions of the GNU General Public License cover the whole combination. See also [What legal issues come up if I use GPL-incompatible libraries with GPL software?](#GPLIncompatibleLibs)
- Does the LGPL have different requirements
for statically vs dynamically linked modules with a covered
work? (
[#LGPLStaticVsDynamic](#LGPLStaticVsDynamic)) For the purpose of complying with the LGPL (any extant version: v2, v2.1 or v3):
(1) If you statically link against an LGPLed library, you must also provide your application in an object (not necessarily source) format, so that a user has the opportunity to modify the library and relink the application.
(2) If you dynamically link against an LGPLed library
*already present on the user's computer*, you need not convey the library's source. On the other hand, if you yourself convey the executable LGPLed library along with your application, whether linked with statically or dynamically, you must also convey the library's sources, in one of the ways for which the LGPL provides.
- Is there some way that
I can GPL the output people get from use of my program? For example,
if my program is used to develop hardware designs, can I require that
these designs must be free?
(
[#GPLOutput](#GPLOutput)) In general this is legally impossible; copyright law does not give you any say in the use of the output people make from their data using your program. If the user uses your program to enter or convert her own data, the copyright on the output belongs to her, not you. More generally, when a program translates its input into some other form, the copyright status of the output inherits that of the input it was generated from.
So the only way you have a say in the use of the output is if substantial parts of the output are copied (more or less) from text in your program. For instance, part of the output of Bison (see above) would be covered by the GNU GPL, if we had not made an exception in this specific case.
You could artificially make a program copy certain text into its output even if there is no technical reason to do so. But if that copied text serves no practical purpose, the user could simply delete that text from the output and use only the rest. Then he would not have to obey the conditions on redistribution of the copied text.
- In what cases is the output of a GPL
program covered by the GPL too?
(
[#WhatCaseIsOutputGPL](#WhatCaseIsOutputGPL)) The output of a program is not, in general, covered by the copyright on the code of the program. So the license of the code of the program does not apply to the output, whether you pipe it into a file, make a screenshot, screencast, or video.
The exception would be when the program displays a full screen of text and/or art that comes from the program. Then the copyright on that text and/or art covers the output. Programs that output audio, such as video games, would also fit into this exception.
If the art/music is under the GPL, then the GPL applies when you copy it no matter how you copy it. However, [fair use](#GPLFairUse) may still apply.

Keep in mind that some programs, particularly video games, can have artwork/audio that is licensed separately from the underlying GPLed game. In such cases, the license on the artwork/audio would dictate the terms under which video/streaming may occur. See also: [Can I use the GPL for something other than software?](#GPLOtherThanSoftware)
- If I add a module to a GPL-covered program,
do I have to use the GPL as the license for my module?
(
[#GPLModuleLicense](#GPLModuleLicense)) The GPL says that the whole combined program has to be released under the GPL. So your module has to be available for use under the GPL.
But you can give additional permission for the use of your code. You can, if you wish, release your module under a license which is more lax than the GPL but compatible with the GPL. The [license list page](/licenses/license-list.html) gives a partial list of GPL-compatible licenses.
- If a library is released under the GPL
(not the LGPL), does that mean that any software which uses it
has to be under the GPL or a GPL-compatible license?
(
[#IfLibraryIsGPL](#IfLibraryIsGPL)) Yes, because the program actually links to the library. As such, the terms of the GPL apply to the entire combination. The software modules that link with the library may be under various GPL compatible licenses, but the work as a whole must be licensed under the GPL. See also: [What does it mean to say a license is “compatible with the GPL”?](#WhatDoesCompatMean)
- If a programming language interpreter
is released under the GPL, does that mean programs written to be
interpreted by it must be under GPL-compatible licenses?
(
[#IfInterpreterIsGPL](#IfInterpreterIsGPL)) When the interpreter just interprets a language, the answer is no. The interpreted program, to the interpreter, is just data; a free software license like the GPL, based on copyright law, cannot limit what data you use the interpreter on. You can run it on any data (interpreted program), any way you like, and there are no requirements about licensing that data to anyone.
However, when the interpreter is extended to provide “bindings” to other facilities (often, but not necessarily, libraries), the interpreted program is effectively linked to the facilities it uses through these bindings. So if these facilities are released under the GPL, the interpreted program that uses them must be released in a GPL-compatible way. The JNI or Java Native Interface is an example of such a binding mechanism; libraries that are accessed in this way are linked dynamically with the Java programs that call them. These libraries are also linked with the interpreter. If the interpreter is linked statically with these libraries, or if it is designed to
[link dynamically with these specific libraries](#GPLPluginsInNF), then it too needs to be released in a GPL-compatible way.

Another similar and very common case is to provide libraries with the interpreter which are themselves interpreted. For instance, Perl comes with many Perl modules, and a Java implementation comes with many Java classes. These libraries and the programs that call them are always dynamically linked together.
A consequence is that if you choose to use GPLed Perl modules or Java classes in your program, you must release the program in a GPL-compatible way, regardless of the license used in the Perl or Java interpreter that the combined Perl or Java program will run on.
- I'm writing a Windows application with
Microsoft Visual C++ (or Visual Basic) and I will be releasing it
under the GPL. Is dynamically linking my program with the Visual
C++ (or Visual Basic) runtime library permitted under the GPL?
(
[#WindowsRuntimeAndGPL](#WindowsRuntimeAndGPL)) You may link your program to these libraries, and distribute the compiled program to others. When you do this, the runtime libraries are “System Libraries” as GPLv3 defines them. That means that you don't need to worry about including their source code with the program's Corresponding Source. GPLv2 provides a similar exception in section 3.
You may not distribute these libraries in compiled DLL form with the program. To prevent unscrupulous distributors from trying to use the System Library exception as a loophole, the GPL says that libraries can only qualify as System Libraries as long as they're not distributed with the program itself. If you distribute the DLLs with the program, they won't be eligible for this exception anymore; then the only way to comply with the GPL would be to provide their source code, which you are unable to do.
It is possible to write free programs that only run on Windows, but it is not a good idea. These programs would be “[trapped](/philosophy/java-trap.html)” by Windows, and therefore contribute zero to the Free World.
- Why is the original BSD
license incompatible with the GPL?
(
[#OrigBSD](#OrigBSD)) Because it imposes a specific requirement that is not in the GPL; namely, the requirement on advertisements of the program. Section 6 of GPLv2 states:
You may not impose any further restrictions on the recipients' exercise of the rights granted herein.
GPLv3 says something similar in section 10. The advertising clause provides just such a further restriction, and thus is GPL-incompatible.
The revised BSD license does not have the advertising clause, which eliminates the problem.
- When is a program and its plug-ins considered a single combined program?
(
[#GPLPlugins](#GPLPlugins)) It depends on how the main program invokes its plug-ins. If the main program uses fork and exec to invoke plug-ins, and they establish intimate communication by sharing complex data structures, or shipping complex data structures back and forth, that can make them one single combined program. A main program that uses simple fork and exec to invoke plug-ins and does not establish intimate communication between them results in the plug-ins being a separate program.
If the main program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single combined program, which must be treated as an extension of both the main program and the plug-ins. If the main program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
Using shared memory to communicate with complex data structures is pretty much equivalent to dynamic linking.
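The mechanical side of this distinction can be sketched in a few lines. The example below is only an illustration with hypothetical stand-ins (a one-line child script as the separate program, Python's standard `json` module standing in for a dynamically loaded plug-in); it contrasts fork-and-exec invocation with loading into the same address space, while the legal question also turns on how intimate the communication is.

```python
import importlib
import subprocess
import sys

# Fork-and-exec style: the "plug-in" runs as a separate process, and the
# two sides exchange only a simple string over a pipe.
proc = subprocess.run(
    [sys.executable, "-c", "print('plugin output')"],
    capture_output=True, text=True,
)
print(proc.stdout.strip())  # the host sees only the text from the pipe

# Dynamic-loading style: the "plug-in" is mapped into the same process;
# the host calls its functions and shares in-memory data structures.
plugin = importlib.import_module("json")
shared = {"complex": ["data", "structures"]}
print(plugin.dumps(shared))
```

In the first case the host and the child share nothing but a byte stream; in the second, function calls and shared data structures cross freely between host and plug-in, which is the situation the answer above treats as one combined program.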
- If I write a plug-in to use with a GPL-covered
program, what requirements does that impose on the licenses I can
use for distributing my plug-in?
(
[#GPLAndPlugins](#GPLAndPlugins)) Please see this question [for determining when plug-ins and a main program are considered a single combined program and when they are considered separate works](#GPLPlugins).

If the main program and the plugins are a single combined program then this means you must license the plug-in under the GPL or a GPL-compatible free software license and distribute it with source code in a GPL-compliant way. A main program that is separate from its plug-ins makes no requirements for the plug-ins.
- Can I apply the
GPL when writing a plug-in for a nonfree program?
(
[#GPLPluginsInNF](#GPLPluginsInNF)) Please see this question [for determining when plug-ins and a main program are considered a single combined program and when they are considered separate programs](#GPLPlugins).

If they form a single combined program this means that combination of the GPL-covered plug-in with the nonfree main program would violate the GPL. However, you can resolve that legal problem by adding an exception to your plug-in's license, giving permission to link it with the nonfree main program.
See also the question [I am writing free software that uses a nonfree library.](#FSWithNFLibs)
- Can I release a nonfree program
that's designed to load a GPL-covered plug-in?
(
[#NFUseGPLPlugins](#NFUseGPLPlugins)) Please see this question [for determining when plug-ins and a main program are considered a single combined program and when they are considered separate programs](#GPLPlugins).

If they form a single combined program then the main program must be released under the GPL or a GPL-compatible free software license, and the terms of the GPL must be followed when the main program is distributed for use with these plug-ins.
However, if they are separate works then the license of the plug-in makes no requirements about the main program.
See also the question [I am writing free software that uses a nonfree library.](#FSWithNFLibs)
- You have a GPLed program that I'd like
to link with my code to build a proprietary program. Does the fact
that I link with your program mean I have to GPL my program?
(
[#LinkingWithGPL](#LinkingWithGPL)) Not exactly. It means you must release your program under a license compatible with the GPL (more precisely, compatible with one or more GPL versions accepted by all the rest of the code in the combination that you link). The combination itself is then available under those GPL versions.
- If so, is there
any chance I could get a license of your program under the Lesser GPL?
(
[#SwitchToLGPL](#SwitchToLGPL)) You can ask, but most authors will stand firm and say no. The idea of the GPL is that if you want to include our code in your program, your program must also be free software. It is supposed to put pressure on you to release your program in a way that makes it part of our community.
You always have the legal alternative of not using our code.
- Does distributing a nonfree driver
meant to link with the kernel Linux violate the GPL?
(
[#NonfreeDriverKernelLinux](#NonfreeDriverKernelLinux)) Linux (the kernel in the GNU/Linux operating system) is distributed under GNU GPL version 2. Does distributing a nonfree driver meant to link with Linux violate the GPL?
Yes, this is a violation, because effectively this makes a larger combined work. The fact that the user is expected to put the pieces together does not really change anything.
Each contributor to Linux who holds copyright on a substantial part of the code can enforce the GPL and we encourage each of them to take action against those distributing nonfree Linux-drivers.
- How can I allow linking of
proprietary modules with my GPL-covered library under a controlled
interface only?
(
[#LinkingOverControlledInterface](#LinkingOverControlledInterface)) Add this text to the license notice of each file in the package, at the end of the text that says the file is distributed under the GNU GPL:
Linking ABC statically or dynamically with other modules is making a combined work based on ABC. Thus, the terms and conditions of the GNU General Public License cover the whole combination.
As a special exception, the copyright holders of ABC give you permission to combine ABC program with free software programs or libraries that are released under the GNU LGPL and with independent modules that communicate with ABC solely through the ABCDEF interface. You may copy and distribute such a system following the terms of the GNU GPL for ABC and the licenses of the other code concerned, provided that you include the source code of that other code when and as the GNU GPL requires distribution of source code and provided that you do not modify the ABCDEF interface.
Note that people who make modified versions of ABC are not obligated to grant this special exception for their modified versions; it is their choice whether to do so. The GNU General Public License gives permission to release a modified version without this exception; this exception also makes it possible to release a modified version which carries forward this exception. If you modify the ABCDEF interface, this exception does not apply to your modified version of ABC, and you must remove this exception when you distribute your modified version.
This exception is an additional permission under section 7 of the GNU General Public License, version 3 (“GPLv3”)
This exception enables linking with differently licensed modules over the specified interface (“ABCDEF”), while ensuring that users would still receive source code as they normally would under the GPL.
Only the copyright holders for the program can legally authorize this exception. If you wrote the whole program yourself, then assuming your employer or school does not claim the copyright, you are the copyright holder—so you can authorize the exception. But if you want to use parts of other GPL-covered programs by other authors in your code, you cannot authorize the exception for them. You have to get the approval of the copyright holders of those programs.
- I have written an application that links
with many different components, that have different licenses. I am
very confused as to what licensing requirements are placed on my
program. Can you please tell me what licenses I may use?
(
[#ManyDifferentLicenses](#ManyDifferentLicenses)) To answer this question, we would need to see a list of each component that your program uses, the license of that component, and a brief description (a few sentences for each should suffice) of how your library uses that component. Two examples would be:
- To make my software work, it must be linked to the FOO library, which is available under the Lesser GPL.
- My software makes a system call (with a command line that I built) to run the BAR program, which is licensed under “the GPL, with a special exception allowing for linking with QUUX”.
- What is the difference between an
“aggregate” and other kinds of “modified versions”?
(
[#MereAggregation](#MereAggregation)) An “aggregate” consists of a number of separate programs, distributed together on the same CD-ROM or other media. The GPL permits you to create and distribute an aggregate, even when the licenses of the other software are nonfree or GPL-incompatible. The only condition is that you cannot release the aggregate under a license that prohibits users from exercising rights that each program's individual license would grant them.
Where's the line between two separate programs, and one program with two parts? This is a legal question, which ultimately judges will decide. We believe that a proper criterion depends both on the mechanism of communication (exec, pipes, rpc, function calls within a shared address space, etc.) and the semantics of the communication (what kinds of information are interchanged).
If the modules are included in the same executable file, they are definitely combined in one program. If modules are designed to run linked together in a shared address space, that almost surely means combining them into one program.
By contrast, pipes, sockets and command-line arguments are communication mechanisms normally used between two separate programs. So when they are used for communication, the modules normally are separate programs. But if the semantics of the communication are intimate enough, exchanging complex internal data structures, that too could be a basis to consider the two parts as combined into a larger program.
- When it comes to determining
whether two pieces of software form a single work, does the fact
that the code is in one or more containers have any effect?
(
[#AggregateContainers](#AggregateContainers)) No, the analysis of whether they are a [single work or an aggregate](#MereAggregation) is unchanged by the involvement of containers.
- Why does
the FSF require that contributors to FSF-copyrighted programs assign
copyright to the FSF? If I hold copyright on a GPLed program, should
I do this, too? If so, how?
(
[#AssignCopyright](#AssignCopyright)) Our lawyers have told us that to be in the
[best position to enforce the GPL](/licenses/why-assign.html)in court against violators, we should keep the copyright status of the program as simple as possible. We do this by asking each contributor to either assign the copyright on contributions to the FSF, or disclaim copyright on contributions.We also ask individual contributors to get copyright disclaimers from their employers (if any) so that we can be sure those employers won't claim to own the contributions.
Of course, if all the contributors put their code in the public domain, there is no copyright with which to enforce the GPL. So we encourage people to assign copyright on large code contributions, and only put small changes in the public domain.
If you want to make an effort to enforce the GPL on your program, it is probably a good idea for you to follow a similar policy. Please contact
[<[email protected]>](mailto:[email protected])if you want more information.- Can I modify the GPL
and make a modified license?
(
[#ModifyGPL](#ModifyGPL)) It is possible to make modified versions of the GPL, but it tends to have practical consequences.
You can legally use the GPL terms (possibly modified) in another license provided that you call your license by another name and do not include the GPL preamble, and provided you modify the instructions-for-use at the end enough to make it clearly different in wording and not mention GNU (though the actual procedure you describe may be similar).
If you want to use our preamble in a modified license, please write to
[<[email protected]>](mailto:[email protected])for permission. For this purpose we would want to check the actual license requirements to see if we approve of them.Although we will not raise legal objections to your making a modified license in this way, we hope you will think twice and not do it. Such a modified license is almost certainly
[incompatible with the GNU GPL](#WhatIsCompatible), and that incompatibility blocks useful combinations of modules. The mere proliferation of different free software licenses is a burden in and of itself.Rather than modifying the GPL, please use the exception mechanism offered by GPL version 3.
- If I use a
piece of software that has been obtained under the GNU GPL, am I
allowed to modify the original code into a new program, then
distribute and sell that new program commercially?
(
[#GPLCommercially](#GPLCommercially)) You are allowed to sell copies of the modified program commercially, but only under the terms of the GNU GPL. Thus, for instance, you must make the source code available to the users of the program as described in the GPL, and they must be allowed to redistribute and modify it as described in the GPL.
These requirements are the condition for including the GPL-covered code you received in a program of your own.
- Can I use the GPL for something other than
software?
(
[#GPLOtherThanSoftware](#GPLOtherThanSoftware)) You can apply the GPL to any kind of work, as long as it is clear what constitutes the “source code” for the work. The GPL defines this as the preferred form of the work for making changes in it.
However, for manuals and textbooks, or more generally any sort of work that is meant to teach a subject, we recommend using the GFDL rather than the GPL.
- How does the LGPL work with Java?
(
[#LGPLJava](#LGPLJava)) [See this article for details.](/licenses/lgpl-java.html) It works as designed, intended, and expected.
- Consider this situation:
1) X releases V1 of a project under the GPL.
2) Y contributes to the development of V2 with changes and new code
based on V1.
3) X wants to convert V2 to a non-GPL license.
Does X need Y's permission?
(
[#Consider](#Consider)) Yes. Y was required to release its version under the GNU GPL, as a consequence of basing it on X's version V1. Nothing required Y to agree to any other license for its code. Therefore, X must get Y's permission before releasing that code under another license.
- I'd like to incorporate GPL-covered
software in my proprietary system. I have no permission to use
that software except what the GPL gives me. Can I do this?
(
[#GPLInProprietarySystem](#GPLInProprietarySystem)) You cannot incorporate GPL-covered software in a proprietary system. The goal of the GPL is to grant everyone the freedom to copy, redistribute, understand, and modify a program. If you could incorporate GPL-covered software into a nonfree system, it would have the effect of making the GPL-covered software nonfree too.
A system incorporating a GPL-covered program is an extended version of that program. The GPL says that any extended version of the program must be released under the GPL if it is released at all. This is for two reasons: to make sure that users who get the software get the freedom they should have, and to encourage people to give back improvements that they make.
However, in many cases you can distribute the GPL-covered software alongside your proprietary system. To do this validly, you must make sure that the free and nonfree programs communicate at arms length, that they are not combined in a way that would make them effectively a single program.
The difference between this and “incorporating” the GPL-covered software is partly a matter of substance and partly form. The substantive part is this: if the two programs are combined so that they become effectively two parts of one program, then you can't treat them as two separate programs. So the GPL has to cover the whole thing.
If the two programs remain well separated, like the compiler and the kernel, or like an editor and a shell, then you can treat them as two separate programs—but you have to do it properly. The issue is simply one of form: how you describe what you are doing. Why do we care about this? Because we want to make sure the users clearly understand the free status of the GPL-covered software in the collection.
If people were to distribute GPL-covered software calling it “part of” a system that users know is partly proprietary, users might be uncertain of their rights regarding the GPL-covered software. But if they know that what they have received is a free program plus another program, side by side, their rights will be clear.
- Using a certain GNU program under the
GPL does not fit our project to make proprietary software. Will you
make an exception for us? It would mean more users of that program.
(
[#WillYouMakeAnException](#WillYouMakeAnException)) Sorry, we don't make such exceptions. It would not be right.
Maximizing the number of users is not our aim. Rather, we are trying to give the crucial freedoms to as many users as possible. In general, proprietary software projects hinder rather than help the cause of freedom.
We do occasionally make license exceptions to assist a project which is producing free software under a license other than the GPL. However, we have to see a good reason why this will advance the cause of free software.
We also do sometimes change the distribution terms of a package, when that seems clearly the right way to serve the cause of free software; but we are very cautious about this, so you will have to show us very convincing reasons.
- I'd like to incorporate GPL-covered software in
my proprietary system. Can I do this by putting a “wrapper”
module, under a GPL-compatible lax permissive license (such as the X11
license) in between the GPL-covered part and the proprietary part?
(
[#GPLWrapper](#GPLWrapper)) No. The X11 license is compatible with the GPL, so you can add a module to the GPL-covered program and put it under the X11 license. But if you were to incorporate them both in a larger program, that whole would include the GPL-covered part, so it would have to be licensed
*as a whole* under the GNU GPL.
The fact that proprietary module A communicates with GPL-covered module C only through X11-licensed module B is legally irrelevant; what matters is the fact that module C is included in the whole.
- Where can I learn more about the GCC
Runtime Library Exception?
(
[#LibGCCException](#LibGCCException)) The GCC Runtime Library Exception covers libgcc, libstdc++, libfortran, libgomp, libdecnumber, and other libraries distributed with GCC. The exception is meant to allow people to distribute programs compiled with GCC under terms of their choice, even when parts of these libraries are included in the executable as part of the compilation process. To learn more, please read our
[FAQ about the GCC Runtime Library Exception](/licenses/gcc-exception-faq.html).
- I'd like to
modify GPL-covered programs and link them with the portability
libraries from Money Guzzler Inc. I cannot distribute the source code
for these libraries, so any user who wanted to change these versions
would have to obtain those libraries separately. Why doesn't the
GPL permit this?
(
[#MoneyGuzzlerInc](#MoneyGuzzlerInc)) There are two reasons for this. First, a general one. If we permitted company A to make a proprietary file, and company B to distribute GPL-covered software linked with that file, the effect would be to make a hole in the GPL big enough to drive a truck through. This would be carte blanche for withholding the source code for all sorts of modifications and extensions to GPL-covered software.
Giving all users access to the source code is one of our main goals, so this consequence is definitely something we want to avoid.
More concretely, the versions of the programs linked with the Money Guzzler libraries would not really be free software as we understand the term—they would not come with full source code that enables users to change and recompile the program.
- If the license for a module Q has a
requirement that's incompatible with the GPL,
but the requirement applies only when Q is distributed by itself, not when
Q is included in a larger program, does that make the license
GPL-compatible? Can I combine or link Q with a GPL-covered program?
(
[#GPLIncompatibleAlone](#GPLIncompatibleAlone)) If a program P is released under the GPL that means *any and every part of it* can be used under the GPL. If you integrate module Q, and release the combined program P+Q under the GPL, that means any part of P+Q can be used under the GPL. One part of P+Q is Q. So releasing P+Q under the GPL says that Q, or any part of it, can be used under the GPL. Putting it in other words, a user who obtains P+Q under the GPL can delete P, so that just Q remains, still under the GPL.
If the license of module Q permits you to give permission for that, then it is GPL-compatible. Otherwise, it is not GPL-compatible.
If the license for Q says in no uncertain terms that you must do certain things (not compatible with the GPL) when you redistribute Q on its own, then it does not permit you to distribute Q under the GPL. It follows that you can't release P+Q under the GPL either. So you cannot link or combine P with Q.
- Can I release a modified
version of a GPL-covered program in binary form only?
(
[#ModifiedJustBinary](#ModifiedJustBinary)) No. The whole point of the GPL is that all modified versions must be
[free software](/philosophy/free-sw.html)—which means, in particular, that the source code of the modified version is available to the users.
- I
downloaded just the binary from the net. If I distribute copies,
do I have to get the source and distribute that too?
(
[#UnchangedJustBinary](#UnchangedJustBinary)) Yes. The general rule is, if you distribute binaries, you must distribute the complete corresponding source code too. The exception for the case where you received a written offer for source code is quite limited.
- I want to distribute
binaries via physical media without accompanying sources. Can I provide
source code by FTP?
(
[#DistributeWithSourceOnInternet](#DistributeWithSourceOnInternet)) Version 3 of the GPL allows this; see option 6(b) for the full details. Under version 2, you're certainly free to offer source via FTP, and most users will get it from there. However, if any of them would rather get the source on physical media by mail, you are required to provide that.
If you distribute binaries via FTP,
[you should distribute source via FTP](#AnonFTPAndSendSources).
- My friend got a GPL-covered
binary with an offer to supply source, and made a copy for me.
Can I use the offer myself to obtain the source?
(
[#RedistributedBinariesGetSource](#RedistributedBinariesGetSource)) Yes, you can. The offer must be open to everyone who has a copy of the binary that it accompanies. This is why the GPL says your friend must give you a copy of the offer along with a copy of the binary—so you can take advantage of it.
- Can I put the binaries on my
Internet server and put the source on a different Internet site?
(
[#SourceAndBinaryOnDifferentSites](#SourceAndBinaryOnDifferentSites)) Yes. Section 6(d) allows this. However, you must provide clear instructions people can follow to obtain the source, and you must take care to make sure that the source remains available for as long as you distribute the object code.
- I want to distribute an extended
version of a GPL-covered program in binary form. Is it enough to
distribute the source for the original version?
(
[#DistributeExtendedBinary](#DistributeExtendedBinary)) No, you must supply the source code that corresponds to the binary. Corresponding source means the source from which users can rebuild the same binary.
Part of the idea of free software is that users should have access to the source code for
*the programs they use*. Those using your version should have access to the source code for your version.
A major goal of the GPL is to build up the Free World by making sure that improvements to a free program are themselves free. If you release an improved version of a GPL-covered program, you must release the improved source code under the GPL.
- I want to distribute
binaries, but distributing complete source is inconvenient. Is it ok if
I give users the diffs from the “standard” version along with
the binaries?
(
[#DistributingSourceIsInconvenient](#DistributingSourceIsInconvenient)) This is a well-meaning request, but this method of providing the source doesn't really do the job.
A user that wants the source a year from now may be unable to get the proper version from another site at that time. The standard distribution site may have a newer version, but the same diffs probably won't work with that version.
So you need to provide complete sources, not just diffs, with the binaries.
- Can I make binaries available
on a network server, but send sources only to people who order them?
(
[#AnonFTPAndSendSources](#AnonFTPAndSendSources)) If you make object code available on a network server, you have to provide the Corresponding Source on a network server as well. The easiest way to do this would be to publish them on the same server, but if you'd like, you can alternatively provide instructions for getting the source from another server, or even a
[version control system](#SourceInCVS). No matter what you do, the source should be just as easy to access as the object code, though. This is all specified in section 6(d) of GPLv3.
The sources you provide must correspond exactly to the binaries. In particular, you must make sure they are for the same version of the program—not an older version and not a newer version.
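The easiest arrangement described above — source and object code published together — can be sketched as a release layout. This is a minimal sketch with hypothetical filenames, not a compliance recipe; the `touch` lines merely create stand-ins for real artifacts:

```shell
# Minimal sketch of GPLv3 section 6(d): keep the Corresponding Source
# on the same server, in the same place, as the object code it matches.
mkdir -p releases/foo-1.0
touch foo-1.0-linux-x86_64.tar.gz foo-1.0-src.tar.gz   # stand-ins for real artifacts
cp foo-1.0-linux-x86_64.tar.gz releases/foo-1.0/       # the binaries users download
cp foo-1.0-src.tar.gz          releases/foo-1.0/       # source for exactly that version
ls releases/foo-1.0
```

Publishing both in one directory makes the source as easy to access as the object code, and avoids the failure mode where the source host disappears while the binaries remain available.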
- How can I make sure each
user who downloads the binaries also gets the source?
(
[#HowCanIMakeSureEachDownloadGetsSource](#HowCanIMakeSureEachDownloadGetsSource)) You don't have to make sure of this. As long as you make the source and binaries available so that the users can see what's available and take what they want, you have done what is required of you. It is up to the user whether to download the source.
Our requirements for redistributors are intended to make sure the users can get the source code, not to force users to download the source code even if they don't want it.
- Does the GPL require
me to provide source code that can be built to match the exact
hash of the binary I am distributing?
(
[#MustSourceBuildToMatchExactHashOfBinary](#MustSourceBuildToMatchExactHashOfBinary)) Complete corresponding source means the source that the binaries were made from, but that does not imply your tools must be able to make a binary that is an exact hash of the binary you are distributing. In some cases it could be (nearly) impossible to build a binary from source with an exact hash of the binary being distributed — consider the following examples: a system might put timestamps in binaries; or the program might have been built against a different (even unreleased) compiler version.
- A company
is running a modified version of a GPLed program on a web site.
Does the GPL say they must release their modified sources?
(
[#UnreleasedMods](#UnreleasedMods)) The GPL permits anyone to make a modified version and use it without ever distributing it to others. What this company is doing is a special case of that. Therefore, the company does not have to release the modified sources. The situation is different when the modified program is licensed under the terms of the
[GNU Affero GPL](#UnreleasedModsAGPL).
Compare this to a situation where the web site contains or links to separate GPLed programs that are distributed to the user when they visit the web site (often written in [JavaScript](/philosophy/javascript-trap.html), but other languages are used as well). In this situation the source code for the programs being distributed must be released to the user under the terms of the GPL.
- A company is running a modified
version of a program licensed under the GNU Affero GPL (AGPL) on a
web site. Does the AGPL say they must release their modified
sources?
(
[#UnreleasedModsAGPL](#UnreleasedModsAGPL)) The [GNU Affero GPL](/licenses/agpl.html) requires that modified versions of the software offer all users interacting with it over a computer network an opportunity to receive the source. What the company is doing falls under that meaning, so the company must release the modified source code.
- Is making and using multiple copies
within one organization or company “distribution”?
(
[#InternalDistribution](#InternalDistribution)) No, in that case the organization is just making the copies for itself. As a consequence, a company or other organization can develop a modified version and install that version through its own facilities, without giving the staff permission to release that modified version to outsiders.
However, when the organization transfers copies to other organizations or individuals, that is distribution. In particular, providing copies to contractors for use off-site is distribution.
- If someone steals
a CD containing a version of a GPL-covered program, does the GPL
give the thief the right to redistribute that version?
(
[#StolenCopy](#StolenCopy)) If the version has been released elsewhere, then the thief probably does have the right to make copies and redistribute them under the GPL, but if thieves are imprisoned for stealing the CD, they may have to wait until their release before doing so.
If the version in question is unpublished and considered by a company to be its trade secret, then publishing it may be a violation of trade secret law, depending on other circumstances. The GPL does not change that. If the company tried to release its version and still treat it as a trade secret, that would violate the GPL, but if the company hasn't released this version, no such violation has occurred.
- What if a company distributes a copy of
some other developers' GPL-covered work to me as a trade secret?
(
[#TradeSecretRelease](#TradeSecretRelease)) The company has violated the GPL and will have to cease distribution of that program. Note how this differs from the theft case above; the company does not intentionally distribute a copy when a copy is stolen, so in that case the company has not violated the GPL.
- What if a company distributes a copy
of its own GPL-covered work to me as a trade secret?
(
[#TradeSecretRelease2](#TradeSecretRelease2)) If the program distributed does not incorporate anyone else's GPL-covered work, then the company is not violating the GPL (see “
[Is the developer of a GPL-covered program bound by the GPL?](#DeveloperViolate)” for more information). But it is making two contradictory statements about what you can do with that program: that you can redistribute it, and that you can't. It would make sense to demand clarification of the terms for use of that program before you accept a copy.
- Why are some GNU libraries released under
the ordinary GPL rather than the Lesser GPL?
(
[#WhySomeGPLAndNotLGPL](#WhySomeGPLAndNotLGPL)) Using the Lesser GPL for any particular library constitutes a retreat for free software. It means we partially abandon the attempt to defend the users' freedom, and some of the requirements to share what is built on top of GPL-covered software. In themselves, those are changes for the worse.
Sometimes a localized retreat is a good strategy. Sometimes, using the LGPL for a library might lead to wider use of that library, and thus to more improvement for it, wider support for free software, and so on. This could be good for free software if it happens to a large extent. But how much will this happen? We can only speculate.
It would be nice to try out the LGPL on each library for a while, see whether it helps, and change back to the GPL if the LGPL didn't help. But this is not feasible. Once we use the LGPL for a particular library, changing back would be difficult.
So we decide which license to use for each library on a case-by-case basis. There is a
[long explanation](/licenses/why-not-lgpl.html) of how we judge the question.
- Why should programs say
“Version 3 of the GPL or any later version”?
(
[#VersionThreeOrLater](#VersionThreeOrLater)) From time to time, at intervals of years, we change the GPL—sometimes to clarify it, sometimes to permit certain kinds of use not previously permitted, and sometimes to tighten up a requirement. (The last two changes were in 2007 and 1991.) Using this “indirect pointer” in each program makes it possible for us to change the distribution terms on the entire collection of GNU software, when we update the GPL.
If each program lacked the indirect pointer, we would be forced to discuss the change at length with numerous copyright holders, which would be a virtual impossibility. In practice, the chance of having uniform distribution terms for GNU software would be nil.
Suppose a program says “Version 3 of the GPL or any later version” and a new version of the GPL is released. If the new GPL version gives additional permission, that permission will be available immediately to all the users of the program. But if the new GPL version has a tighter requirement, it will not restrict use of the current version of the program, because it can still be used under GPL version 3. When a program says “Version 3 of the GPL or any later version”, users will always be permitted to use it, and even change it, according to the terms of GPL version 3—even after later versions of the GPL are available.
If a tighter requirement in a new version of the GPL need not be obeyed for existing software, how is it useful? Once GPL version 4 is available, the developers of most GPL-covered programs will release subsequent versions of their programs specifying “Version 4 of the GPL or any later version”. Then users will have to follow the tighter requirements in GPL version 4, for subsequent versions of the program.
However, developers are not obligated to do this; developers can continue allowing use of the previous version of the GPL, if that is their preference.
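The “indirect pointer” discussed above appears in the standard license notice recommended in the GPL's “How to Apply These Terms to Your New Programs” appendix. A program carrying this notice can be used under GPL version 3 now, and under any later version the FSF publishes:

```
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
```

Omitting the final “(at your option) any later version” clause would pin the program to version 3 only, forfeiting the benefits described above.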
- Is it a good idea to use a license saying
that a certain program can be used only under the latest version
of the GNU GPL?
(
[#OnlyLatestVersion](#OnlyLatestVersion)) The reason you shouldn't do that is that it could result some day in automatically withdrawing some permissions that the users previously had.
Suppose a program was released in 2000 under “the latest GPL version”. At that time, people could have used it under GPLv2. The day we published GPLv3 in 2007, everyone would have been suddenly compelled to use it under GPLv3 instead.
Some users may not even have known about GPL version 3—but they would have been required to use it. They could have violated the program's license unintentionally just because they did not get the news. That's a bad way to treat people.
We think it is wrong to take back permissions already granted, except due to a violation. If your freedom could be revoked, then it isn't really freedom. Thus, if you get a copy of a program version under one version of a license, you should
*always* have the rights granted by that version of the license. Releasing under “GPL version N or any later version” upholds that principle.
- Why don't you use the GPL for manuals?
(
[#WhyNotGPLForManuals](#WhyNotGPLForManuals)) It is possible to use the GPL for a manual, but the GNU Free Documentation License (GFDL) is much better for manuals.
The GPL was designed for programs; it contains lots of complex clauses that are crucial for programs, but that would be cumbersome and unnecessary for a book or manual. For instance, anyone publishing the book on paper would have to either include machine-readable “source code” of the book along with each printed copy, or provide a written offer to send the “source code” later.
Meanwhile, the GFDL has clauses that help publishers of free manuals make a profit from selling copies—cover texts, for instance. The special rules for Endorsements sections make it possible to use the GFDL for an official standard. This would permit modified versions, but they could not be labeled as “the standard”.
Using the GFDL, we permit changes in the text of a manual that covers its technical topic. It is important to be able to change the technical parts, because people who change a program ought to change the documentation to correspond. The freedom to do this is an ethical imperative.
Our manuals also include sections that state our political position about free software. We mark these as “invariant”, so that they cannot be changed or removed. The GFDL makes provisions for these “invariant sections”.
- How does the GPL apply to fonts?
(
[#FontException](#FontException)) Font licensing is a complex issue which needs serious consideration. The following license exception is experimental but approved for general use. We welcome suggestions on this subject—please see this [explanatory essay](http://www.fsf.org/blogs/licensing/20050425novalis) and write to [[email protected]](mailto:[email protected]).
To use this exception, add this text to the license notice of each file in the package (to the extent possible), at the end of the text that says the file is distributed under the GNU GPL:
As a special exception, if you create a document which uses this font, and embed this font or unaltered portions of this font into the document, this font does not by itself cause the resulting document to be covered by the GNU General Public License. This exception does not however invalidate any other reasons why the document might be covered by the GNU General Public License. If you modify this font, you may extend this exception to your version of the font, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
- I am writing a website maintenance system (called a “[content management system](/philosophy/words-to-avoid.html#Content)” by some), or some other application which generates web pages from templates. What license should I use for those templates?
(
[#WMS](#WMS)) Templates are minor enough that it is not worth using copyleft to protect them. It is normally harmless to use copyleft on minor works, but templates are a special case, because they are combined with data provided by users of the application and the combination is distributed. So, we recommend that you license your templates under simple permissive terms.
Some templates make calls into JavaScript functions. Since JavaScript is often non-trivial, it is worth copylefting. Because the templates will be combined with user data, it's possible that template+user data+JavaScript would be considered one work under copyright law. A line needs to be drawn between the JavaScript (copylefted), and the user code (usually under incompatible terms).
Here's an exception for JavaScript code that does this:
As a special exception to the GPL, any HTML file which merely makes function calls to this code, and for that purpose includes it by reference shall be deemed a separate work for copyright law purposes. In addition, the copyright holders of this code give you permission to combine this code with free software libraries that are released under the GNU LGPL. You may copy and distribute such a system following the terms of the GNU GPL for this code and the LGPL for the libraries. If you modify this code, you may extend this exception to your version of the code, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
- Can I release a program under the GPL which I developed using nonfree tools? ([#NonFreeTools](#NonFreeTools))
Which programs you used to edit the source code, or to compile it, or study it, or record it, usually makes no difference for issues concerning the licensing of that source code.
However, if you link nonfree libraries with the source code, that would be an issue you need to deal with. It does not preclude releasing the source code under the GPL, but if the libraries don't fit under the “system library” exception, you should affix an explicit notice giving permission to link your program with them.
[The FAQ entry about using GPL-incompatible libraries](#GPLIncompatibleLibs) provides more information about how to do that.
- Are there translations of the GPL into other languages? ([#GPLTranslations](#GPLTranslations))
It would be useful to have translations of the GPL into languages other than English. People have even written translations and sent them to us. But we have not dared to approve them as officially valid. That carries a risk so great we do not dare accept it.
A legal document is in some ways like a program. Translating it is like translating a program from one language and operating system to another. Only a lawyer skilled in both languages can do it—and even then, there is a risk of introducing a bug.
If we were to approve, officially, a translation of the GPL, we would be giving everyone permission to do whatever the translation says they can do. If it is a completely accurate translation, that is fine. But if there is an error in the translation, the results could be a disaster which we could not fix.
If a program has a bug, we can release a new version, and eventually the old version will more or less disappear. But once we have given everyone permission to act according to a particular translation, we have no way of taking back that permission if we find, later on, that it had a bug.
Helpful people sometimes offer to do the work of translation for us. If the problem were a matter of finding someone to do the work, this would solve it. But the actual problem is the risk of error, and offering to do the work does not avoid the risk. We could not possibly authorize a translation written by a non-lawyer.
Therefore, for the time being, we are not approving translations of the GPL as globally valid and binding. Instead, we are doing two things:
- Referring people to unofficial translations. This means that we permit people to write translations of the GPL, but we don't approve them as legally valid and binding.
  An unapproved translation has no legal force, and it should say so explicitly. It should be marked as follows:
  This translation of the GPL is informal, and not officially approved by the Free Software Foundation as valid. To be completely sure of what is permitted, refer to the original GPL (in English).
  But the unapproved translation can serve as a hint for how to understand the English GPL. For many users, that is sufficient.
  However, businesses using GNU software in commercial activity, and people doing public ftp distribution, should check the real English GPL to make sure of what it permits.
- Publishing translations valid for a single country only.
  We are considering the idea of publishing translations which are officially valid only for one country. This way, if there is a mistake, it will be limited to that country, and the damage will not be too great.
It will still take considerable expertise and effort from a sympathetic and capable lawyer to make a translation, so we cannot promise any such translations soon.
- If a programming language interpreter has a license that is incompatible with the GPL, can I run GPL-covered programs on it? ([#InterpreterIncompat](#InterpreterIncompat))
When the interpreter just interprets a language, the answer is yes. The interpreted program, to the interpreter, is just data; the GPL doesn't restrict what tools you process the program with.
However, when the interpreter is extended to provide “bindings” to other facilities (often, but not necessarily, libraries), the interpreted program is effectively linked to the facilities it uses through these bindings. The JNI or Java Native Interface is an example of such a facility; libraries that are accessed in this way are linked dynamically with the Java programs that call them.
So if these facilities are released under a GPL-incompatible license, the situation is like linking in any other way with a GPL-incompatible library. Which implies that:
- If you are writing code and releasing it under the GPL, you can state an explicit exception giving permission to link it with those GPL-incompatible facilities.
- If you wrote and released the program under the GPL, and you designed it specifically to work with those facilities, people can take that as an implicit exception permitting them to link it with those facilities. But if that is what you intend, it is better to say so explicitly.
- You can't take someone else's GPL-covered code and use it that way, or add such exceptions to it. Only the copyright holders of that code can add the exception.
- Who has the power to enforce the GPL? ([#WhoHasThePower](#WhoHasThePower))
Since the GPL is a copyright license, it can be enforced by the copyright holders of the software. If you see a violation of the GPL, you should inform the developers of the GPL-covered software involved. They either are the copyright holders, or are connected with the copyright holders.
In addition, we encourage the use of any legal mechanism available to users for obtaining complete and corresponding source code, as is their right, and enforcing full compliance with the GNU GPL. After all, we developed the GNU GPL to make software free for all its users.
- In an object-oriented language such as Java, if I use a class that is GPLed without modifying it, and subclass it, in what way does the GPL affect the larger program? ([#OOPLang](#OOPLang))
Subclassing is creating a derivative work. Therefore, the terms of the GPL affect the whole program where you create a subclass of a GPLed class.
- If I port my program to GNU/Linux, does that mean I have to release it as free software under the GPL or some other Free Software license? ([#PortProgramToGPL](#PortProgramToGPL))
In general, the answer is no—this is not a legal requirement. In specific, the answer depends on which libraries you want to use and what their licenses are. Most system libraries either use the [GNU Lesser GPL](/licenses/lgpl.html), or use the GNU GPL plus an exception permitting linking the library with anything. These libraries can be used in nonfree programs; but in the case of the Lesser GPL, it does have some requirements you must follow.
Some libraries are released under the GNU GPL alone; you must use a GPL-compatible license to use those libraries. But these are normally the more specialized libraries, and you would not have had anything much like them on another platform, so you probably won't find yourself wanting to use these libraries for simple porting.
Of course, your software is not a contribution to our community if it is not free, and people who value their freedom will refuse to use it. Only people willing to give up their freedom will use your software, which means that it will effectively function as an inducement for people to lose their freedom.
If you hope some day to look back on your career and feel that it has contributed to the growth of a good and free society, you need to make your software free.
- I just found out that a company has a copy of a GPLed program, and it costs money to get it. Aren't they violating the GPL by not making it available on the Internet? ([#CompanyGPLCostsMoney](#CompanyGPLCostsMoney))
No. The GPL does not require anyone to use the Internet for distribution. It also does not require anyone in particular to redistribute the program. And (outside of one special case), even if someone does decide to redistribute the program sometimes, the GPL doesn't say he has to distribute a copy to you in particular, or any other person in particular.
What the GPL requires is that he must have the freedom to distribute a copy to you *if he wishes to*. Once the copyright holder does distribute a copy of the program to someone, that someone can then redistribute the program to you, or to anyone else, as he sees fit.
- Can I release a program with a license which says that you can distribute modified versions of it under the GPL but you can't distribute the original itself under the GPL? ([#ReleaseNotOriginal](#ReleaseNotOriginal))
No. Such a license would be self-contradictory. Let's look at its implications for me as a user.
Suppose I start with the original version (call it version A), add some code (let's imagine it is 1000 lines), and release that modified version (call it B) under the GPL. The GPL says anyone can change version B again and release the result under the GPL. So I (or someone else) can delete those 1000 lines, producing version C which has the same code as version A but is under the GPL.
If you try to block that path, by saying explicitly in the license that I'm not allowed to reproduce something identical to version A under the GPL by deleting those lines from version B, in effect the license now says that I can't fully use version B in all the ways that the GPL permits. In other words, the license does not in fact allow a user to release a modified version such as B under the GPL.
- Does moving a copy to a majority-owned, and controlled, subsidiary constitute distribution? ([#DistributeSubsidiary](#DistributeSubsidiary))
Whether moving a copy to or from this subsidiary constitutes “distribution” is a matter to be decided in each case under the copyright law of the appropriate jurisdiction. The GPL does not and cannot override local laws. US copyright law is not entirely clear on the point, but appears not to consider this distribution.
If, in some country, this is considered distribution, and the subsidiary must receive the right to redistribute the program, that will not make a practical difference. The subsidiary is controlled by the parent company; rights or no rights, it won't redistribute the program unless the parent company decides to do so.
- Can software installers ask people to click to agree to the GPL? If I get some software under the GPL, do I have to agree to anything? ([#ClickThrough](#ClickThrough))
Some software packaging systems have a place which requires you to click through or otherwise indicate assent to the terms of the GPL. This is neither required nor forbidden. With or without a click through, the GPL's rules remain the same.
Merely agreeing to the GPL doesn't place any obligations on you. You are not required to agree to anything to merely use software which is licensed under the GPL. You only have obligations if you modify or distribute the software. If it really bothers you to click through the GPL, nothing stops you from hacking the GPLed software to bypass this.
- I would like to bundle GPLed software with some sort of installation software. Does that installer need to have a GPL-compatible license? ([#GPLCompatInstaller](#GPLCompatInstaller))
No. The installer and the files it installs are separate works. As a result, the terms of the GPL do not apply to the installation software.
- Some distributors of GPLed software require me in their umbrella EULAs or as part of their downloading process to “represent and warrant” that I am located in the US or that I intend to distribute the software in compliance with relevant export control laws. Why are they doing this and is it a violation of those distributors' obligations under GPL? ([#ExportWarranties](#ExportWarranties))
This is not a violation of the GPL. Those distributors (almost all of whom are commercial businesses selling free software distributions and related services) are trying to reduce their own legal risks, not to control your behavior. Export control law in the United States *might* make them liable if they knowingly export software into certain countries, or if they give software to parties they know will make such exports. By asking for these statements from their customers and others to whom they distribute software, they protect themselves in the event they are later asked by regulatory authorities what they knew about where software they distributed was going to wind up. They are not restricting what you can do with the software, only preventing themselves from being blamed with respect to anything you do. Because they are not placing additional restrictions on the software, they do not violate section 10 of GPLv3 or section 6 of GPLv2.
The FSF opposes the application of US export control laws to free software. Not only are such laws incompatible with the general objective of software freedom, they achieve no reasonable governmental purpose, because free software is currently and should always be available from parties in almost every country, including countries that have no export control laws and which do not participate in US-led trade embargoes. Therefore, no country's government is actually deprived of free software by US export control laws, while no country's citizens *should* be deprived of free software, regardless of their governments' policies, as far as we are concerned. Copies of all GPL-licensed software published by the FSF can be obtained from us without making any representation about where you live or what you intend to do. At the same time, the FSF understands the desire of commercial distributors located in the US to comply with US laws. They have a right to choose to whom they distribute particular copies of free software; exercise of that right does not violate the GPL unless they add contractual restrictions beyond those permitted by the GPL.
- Can I use GPLed software on a device that will stop operating if customers do not continue paying a subscription fee? ([#SubscriptionFee](#SubscriptionFee))
No. In this scenario, the requirement to keep paying a fee limits the user's ability to run the program. This is an additional requirement on top of the GPL, and the license prohibits it.
- How do I upgrade from (L)GPLv2 to (L)GPLv3? ([#v3HowToUpgrade](#v3HowToUpgrade))
First, include the new version of the license in your package. If you're using LGPLv3 in your project, be sure to include copies of both GPLv3 and LGPLv3, since LGPLv3 is now written as a set of additional permissions on top of GPLv3.
Second, replace all your existing v2 license notices (usually at the top of each file) with the new recommended text available on [the GNU licenses howto](/licenses/gpl-howto.html). It's more future-proof because it no longer includes the FSF's postal mailing address.
Of course, any descriptive text (such as in a README) which talks about the package's license should also be updated appropriately.
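For illustration, a per-file notice of the kind the howto recommends looks roughly like the following. The program name, author, and year are placeholders, not part of any real project:

```
This file is part of Frobnicator (a hypothetical program).
Copyright (C) 2024  Jane Example

Frobnicator is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

Frobnicator is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
```

Check the howto itself for the current recommended wording before adopting it.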
- How does GPLv3 make BitTorrent distribution easier? ([#BitTorrent](#BitTorrent))
Because GPLv2 was written before peer-to-peer distribution of software was common, it is difficult to meet its requirements when you share code this way. The best way to make sure you are in compliance when distributing GPLv2 object code on BitTorrent would be to include all the corresponding source in the same torrent, which is prohibitively expensive.
GPLv3 addresses this problem in two ways. First, people who download this torrent and send the data to others as part of that process are not required to do anything. That's because section 9 says “Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance [of the license].”
Second, section 6(e) of GPLv3 is designed to give distributors—people who initially seed torrents—a clear and straightforward way to provide the source, by telling recipients where it is available on a public network server. This ensures that everyone who wants to get the source can do so, and it's almost no hassle for the distributor.
- What is tivoization? How does GPLv3 prevent it? ([#Tivoization](#Tivoization))
Some devices utilize free software that can be upgraded, but are designed so that users are not allowed to modify that software. There are lots of different ways to do this; for example, sometimes the hardware checksums the software that is installed, and shuts down if it doesn't match an expected signature. The manufacturers comply with GPLv2 by giving you the source code, but you still don't have the freedom to modify the software you're using. We call this practice tivoization.
When people distribute User Products that include software under GPLv3, section 6 requires that they provide you with information necessary to modify that software. User Products is a term specially defined in the license; examples of User Products include portable music players, digital video recorders, and home security systems.
- Does GPLv3 prohibit DRM? ([#DRMProhibited](#DRMProhibited))
It does not; you can use code released under GPLv3 to develop any kind of DRM technology you like. However, if you do this, section 3 says that the system will not count as an effective technological “protection” measure, which means that if someone breaks the DRM, she will be free to distribute her software too, unhindered by the DMCA and similar laws.
As usual, the GNU GPL does not restrict what people do in software, it just stops them from restricting others.
- Can I use the GPL to license hardware? ([#GPLHardware](#GPLHardware))
Any material that can be copyrighted can be licensed under the GPL. GPLv3 can also be used to license materials covered by other copyright-like laws, such as semiconductor masks. So, as an example, you can release a drawing of a physical object or circuit under the GPL.
In many situations, copyright does not cover making physical hardware from a drawing. In these situations, your license for the drawing simply can't exert any control over making or selling physical hardware, regardless of the license you use. When copyright does cover making hardware, for instance with IC masks, the GPL handles that case in a useful way.
- I use public key cryptography to sign my code to assure its authenticity. Is it true that GPLv3 forces me to release my private signing keys? ([#GiveUpKeys](#GiveUpKeys))
No. The only time you would be required to release signing keys is if you conveyed GPLed software inside a User Product, and its hardware checked the software for a valid cryptographic signature before it would function. In that specific case, you would be required to provide anyone who owned the device, on demand, with the key to sign and install modified software on the device so that it will run. If each instance of the device uses a different key, then you need only give each purchaser a key for that instance.
- Does GPLv3 require that voters be able to modify the software running in a voting machine? ([#v3VotingMachine](#v3VotingMachine))
No. Companies distributing devices that include software under GPLv3 are at most required to provide the source and Installation Information for the software to people who possess a copy of the object code. The voter who uses a voting machine (like any other kiosk) doesn't get possession of it, not even temporarily, so the voter also does not get possession of the binary software in it.
Note, however, that voting is a very special case. Just because the software in a computer is free does not mean you can trust the computer for voting. We believe that computers cannot be trusted for voting. Voting should be done on paper.
- Does GPLv3 have a “patent retaliation clause”? ([#v3PatentRetaliation](#v3PatentRetaliation))
In effect, yes. Section 10 prohibits people who convey the software from filing patent suits against other licensees. If someone did so anyway, section 8 explains how they would lose their license and any patent licenses that accompanied it.
- Can I use snippets of GPL-covered source code within documentation that is licensed under some license that is incompatible with the GPL? ([#SourceCodeInDocumentation](#SourceCodeInDocumentation))
If the snippets are small enough that you can incorporate them under fair use or similar laws, then yes. Otherwise, no.
- The beginning of GPLv3 section 6 says that I can convey a covered work in object code form “under the terms of sections 4 and 5” provided I also meet the conditions of section 6. What does that mean? ([#v3Under4and5](#v3Under4and5))
This means that all the permissions and conditions you have to convey source code also apply when you convey object code: you may charge a fee, you must keep copyright notices intact, and so on.
- My company owns a lot of patents. Over the years we've contributed code to projects under “GPL version 2 or any later version”, and the project itself has been distributed under the same terms. If a user decides to take the project's code (incorporating my contributions) under GPLv3, does that mean I've automatically granted GPLv3's explicit patent license to that user? ([#v2OrLaterPatentLicense](#v2OrLaterPatentLicense))
No. When you convey GPLed software, you must follow the terms and conditions of one particular version of the license. When you do so, that version defines the obligations you have. If users may also elect to use later versions of the GPL, that's merely an additional permission they have—it does not require you to fulfill the terms of the later version of the GPL as well.
Do not take this to mean that you can threaten the community with your patents. In many countries, distributing software under GPLv2 provides recipients with an implicit patent license to exercise their rights under the GPL. Even if it didn't, anyone considering enforcing their patents aggressively is an enemy of the community, and we will defend ourselves against such an attack.
- If I distribute a proprietary program that links against an LGPLv3-covered library that I've modified, what is the “contributor version” for purposes of determining the scope of the explicit patent license grant I'm making—is it just the library, or is it the whole combination? ([#LGPLv3ContributorVersion](#LGPLv3ContributorVersion))
The “contributor version” is only your version of the library.
- Is GPLv3 compatible with GPLv2? ([#v2v3Compatibility](#v2v3Compatibility))
No. Many requirements have changed from GPLv2 to GPLv3, which means that the precise requirement of GPLv2 is not present in GPLv3, and vice versa. For instance, the Termination conditions of GPLv3 are considerably more permissive than those of GPLv2, and thus different from the Termination conditions of GPLv2.
Due to these differences, the two licenses are not compatible: if you tried to combine code released under GPLv2 with code under GPLv3, you would violate section 6 of GPLv2.
However, if code is released under GPL “version 2 or later,” that is compatible with GPLv3 because GPLv3 is one of the options it permits.
- Does GPLv2 have a requirement about delivering installation information? ([#InstInfo](#InstInfo))
GPLv3 explicitly requires redistribution to include the full necessary “Installation Information.” GPLv2 doesn't use that term, but it does require redistribution to include “the scripts used to control compilation and installation of the executable” with the complete and corresponding source code. This covers part, but not all, of what GPLv3 calls “Installation Information.” Thus, GPLv3's requirement about installation information is stronger.
- What does it mean to “cure” a violation of GPLv3? ([#Cure](#Cure))
To cure a violation means to adjust your practices to comply with the requirements of the license.
- The warranty and liability disclaimers in GPLv3 seem specific to U.S. law. Can I add my own disclaimers to my own code? ([#v3InternationalDisclaimers](#v3InternationalDisclaimers))
Yes. Section 7 gives you permission to add your own disclaimers, specifically 7(a).
- My program has interactive user interfaces that are non-visual in nature. How can I comply with the Appropriate Legal Notices requirement in GPLv3? ([#NonvisualLegalNotices](#NonvisualLegalNotices))
All you need to do is ensure that the Appropriate Legal Notices are readily available to the user in your interface. For example, if you have written an audio interface, you could include a command that reads the notices aloud.
- If I give a copy of a GPLv3-covered program to a coworker at my company, have I “conveyed” the copy to that coworker? ([#v3CoworkerConveying](#v3CoworkerConveying))
As long as you're both using the software in your work at the company, rather than personally, then the answer is no. The copies belong to the company, not to you or the coworker. This copying is propagation, not conveying, because the company is not making copies available to others.
- If I distribute a GPLv3-covered program, can I provide a warranty that is voided if the user modifies the program? ([#v3ConditionalWarranty](#v3ConditionalWarranty))
Yes. Just as devices do not need to be warranted if users modify the software inside them, you are not required to provide a warranty that covers all possible activities someone could undertake with GPLv3-covered software.
- Why did you decide to write the GNU Affero GPLv3 as a separate license? ([#SeparateAffero](#SeparateAffero))
Early drafts of GPLv3 allowed licensors to add an Affero-like requirement to publish source in section 7. However, some companies that develop and rely upon free software consider this requirement to be too burdensome. They want to avoid code with this requirement, and expressed concern about the administrative costs of checking code for this additional requirement. By publishing the GNU Affero GPLv3 as a separate license, with provisions in it and GPLv3 to allow code under these licenses to link to each other, we accomplish all of our original goals while making it easier to determine which code has the source publication requirement.
- Why did you invent the new terms “propagate” and “convey” in GPLv3? ([#WhyPropagateAndConvey](#WhyPropagateAndConvey))
The term “distribute” used in GPLv2 was borrowed from United States copyright law. Over the years, we learned that some jurisdictions used this same word in their own copyright laws, but gave it different meanings. We invented these new terms to make our intent as clear as possible no matter where the license is interpreted. They are not used in any copyright law in the world, and we provide their definitions directly in the license.
- I'd like to license my code under the GPL, but I'd also like to make it clear that it can't be used for military and/or commercial uses. Can I do this? ([#NoMilitary](#NoMilitary))
No, because those two goals contradict each other. The GNU GPL is designed specifically to prevent the addition of further restrictions. GPLv3 allows a very limited set of them, in section 7, but any other added restriction can be removed by the user.
More generally, a license that limits who can use a program, or for what, is [not a free software license](/philosophy/programs-must-not-limit-freedom-to-run.html).
- Is “convey” in GPLv3 the same thing as what GPLv2 means by “distribute”? ([#ConveyVsDistribute](#ConveyVsDistribute))
Yes, more or less. During the course of enforcing GPLv2, we learned that some jurisdictions used the word “distribute” in their own copyright laws, but gave it different meanings. We invented a new term to make our intent clear and avoid any problems that could be caused by these differences.
- GPLv3 gives “making available to the public” as an example of propagation. What does this mean? Is making available a form of conveying? ([#v3MakingAvailable](#v3MakingAvailable))
One example of “making available to the public” is putting the software on a public web or FTP server. After you do this, some time may pass before anybody actually obtains the software from you—but because it could happen right away, you need to fulfill the GPL's obligations right away as well. Hence, we defined conveying to include this activity.
- Since distribution and making available to the public are forms of propagation that are also conveying in GPLv3, what are some examples of propagation that do not constitute conveying? ([#PropagationNotConveying](#PropagationNotConveying))
Making copies of the software for yourself is the main form of propagation that is not conveying. You might do this to install the software on multiple computers, or to make backups.
- Does prelinking a GPLed binary to various libraries on the system, to optimize its performance, count as modification? ([#Prelinking](#Prelinking))
No. Prelinking is part of a compilation process; it doesn't introduce any license requirements above and beyond what other aspects of compilation would. If you're allowed to link the program to the libraries at all, then it's fine to prelink with them as well. If you distribute prelinked object code, you need to follow the terms of section 6.
- If someone installs GPLed software on a laptop, and then lends that laptop to a friend without providing source code for the software, have they violated the GPL? ([#LaptopLoan](#LaptopLoan))
No. In the jurisdictions where we have investigated this issue, this sort of loan would not count as conveying. The laptop's owner would not have any obligations under the GPL.
- Suppose that two companies try to circumvent the requirement to provide Installation Information by having one company release signed software, and the other release a User Product that only runs signed software from the first company. Is this a violation of GPLv3? ([#TwoPartyTivoization](#TwoPartyTivoization))
Yes. If two parties try to work together to get around the requirements of the GPL, they can both be pursued for copyright infringement. This is especially true since the definition of convey explicitly includes activities that would make someone responsible for secondary infringement.
- Am I complying with GPLv3 if I offer binaries on an FTP server and sources by way of a link to a source code repository in a version control system, like CVS or Subversion? ([#SourceInCVS](#SourceInCVS))
This is acceptable as long as the source checkout process does not become burdensome or otherwise restrictive. Anybody who can download your object code should also be able to check out source from your version control system, using a publicly available free software client. Users should be provided with clear and convenient instructions for how to get the source for the exact object code they downloaded—they may not necessarily want the latest development code, after all.
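As a small illustration of the practice described above, a distributor can tag the exact revision that produced the shipped binaries so that users can check out precisely that source with a free client. This sketch uses Git and entirely made-up repository names and paths:

```shell
# Hypothetical demo: tag the release that matches the shipped object
# code, then retrieve exactly that source as a user would.
set -e
rm -rf /tmp/gplfaq-demo
mkdir -p /tmp/gplfaq-demo/upstream
cd /tmp/gplfaq-demo/upstream
git init -q
echo 'int main(void) { return 0; }' > hello.c
git add hello.c
git -c user.email=dev@example.org -c user.name=Dev commit -qm 'release 1.0'
git tag v1.0                  # tag matching the shipped binaries

# A user checks out the precise source for the v1.0 binaries:
cd /tmp/gplfaq-demo
git clone -q upstream checkout
cd checkout
git checkout -q v1.0
```

Publishing the tag name alongside the binaries is one convenient way to give the "clear and convenient instructions" the answer calls for.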
- Can someone who conveys GPLv3-covered software in a User Product use remote attestation to prevent a user from modifying that software? ([#RemoteAttestation](#RemoteAttestation))
No. The definition of Installation Information, which must be provided with source when the software is conveyed inside a User Product, explicitly says: “The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.” If the device uses remote attestation in some way, the Installation Information must provide you some means for your modified software to report itself as legitimate.
- What does “rules and protocols for communication across the network” mean in GPLv3? ([#RulesProtocols](#RulesProtocols))
This refers to rules about traffic you can send over the network. For example, if there is a limit on the number of requests you can send to a server per day, or the size of a file you can upload somewhere, your access to those resources may be denied if you do not respect those limits.
These rules do not include anything that does not pertain directly to data traveling across the network. For instance, if a server on the network sent messages for users to your device, your access to the network could not be denied merely because you modified the software so that it did not display the messages.
- Distributors that provide Installation Information
under GPLv3 are not required to provide “support service”
for the product. What kind of “support service” do you mean?
(
[#SupportService](#SupportService)) This includes the kind of service many device manufacturers provide to help you install, use, or troubleshoot the product. If a device relies on access to web services or similar technology to function properly, those should normally still be available to modified versions, subject to the terms in section 6 regarding access to a network.
- In GPLv3 and AGPLv3, what does it mean when it
says “notwithstanding any other provision of this License”?
(
[#v3Notwithstanding](#v3Notwithstanding)) This simply means that the following terms prevail over anything else in the license that may conflict with them. For example, without this text, some people might have claimed that you could not combine code under GPLv3 with code under AGPLv3, because the AGPL's additional requirements would be classified as “further restrictions” under section 7 of GPLv3. This text makes clear that our intended interpretation is the correct one, and you can make the combination.
This text only resolves conflicts between different terms of the license. When there is no conflict between two conditions, then you must meet them both. These paragraphs don't grant you carte blanche to ignore the rest of the license—instead they're carving out very limited exceptions.
- Under AGPLv3, when I modify the Program
under section 13, what Corresponding Source does it have to offer?
(
[#AGPLv3CorrespondingSource](#AGPLv3CorrespondingSource)) “Corresponding Source” is defined in section 1 of the license, and you should provide what it lists. So, if your modified version depends on libraries under other licenses, such as the Expat license or GPLv3, the Corresponding Source should include those libraries (unless they are System Libraries). If you have modified those libraries, you must provide your modified source code for them.
The last sentence of the first paragraph of section 13 is only meant to reinforce what most people would have naturally assumed: even though combinations with code under GPLv3 are handled through a special exception in section 13, the Corresponding Source should still include the code that is combined with the Program this way. This sentence does not mean that you
*only* have to provide the source that's covered under GPLv3; instead it means that such code is *not* excluded from the definition of Corresponding Source.
- In AGPLv3, what counts as
“interacting with [the software] remotely through a computer
network?”
(
[#AGPLv3InteractingRemotely](#AGPLv3InteractingRemotely)) If the program is expressly designed to accept user requests and send responses over a network, then it meets these criteria. Common examples of programs that would fall into this category include web and mail servers, interactive web-based applications, and servers for games that are played online.
If a program is not expressly designed to interact with a user through a network, but is being run in an environment where it happens to do so, then it does not fall into this category. For example, an application is not required to provide source merely because the user is running it over SSH, or a remote X session.
- How does GPLv3's concept of
“you” compare to the definition of “Legal Entity”
in the Apache License 2.0?
(
[#ApacheLegalEntity](#ApacheLegalEntity)) They're effectively identical. The definition of “Legal Entity” in the Apache License 2.0 is very standard in various kinds of legal agreements—so much so that it would be very surprising if a court did not interpret the term in the same way in the absence of an explicit definition. We fully expect them to do the same when they look at GPLv3 and consider who qualifies as a licensee.
- In GPLv3, what does “the Program”
refer to? Is it every program ever released under GPLv3?
(
[#v3TheProgram](#v3TheProgram)) The term “the Program” means one particular work that is licensed under GPLv3 and is received by a particular licensee from an upstream licensor or distributor. The Program is the particular work of software that you received in a given instance of GPLv3 licensing, as you received it.
“The Program” cannot mean “all the works ever licensed under GPLv3”; that interpretation makes no sense for a number of reasons. We've published an
[analysis of the term “the Program”](/licenses/gplv3-the-program.html) for those who would like to learn more about this.
- If I only make copies of a
GPL-covered program and run them, without distributing or conveying them to
others, what does the license require of me?
(
[#NoDistributionRequirements](#NoDistributionRequirements)) Nothing. The GPL does not place any conditions on this activity.
- If some network client software is
released under AGPLv3, does it have to be able to provide source to
the servers it interacts with?
(
[#AGPLv3ServerAsUser](#AGPLv3ServerAsUser)) -
AGPLv3 requires a program to offer source code to “all users interacting with it remotely through a computer network.” It doesn't matter if you call the program a “client” or a “server,” the question you need to ask is whether or not there is a reasonable expectation that a person will be interacting with the program remotely over a network.
- For software that runs a proxy server licensed
under the AGPL, how can I provide an offer of source to users
interacting with that code?
(
[#AGPLProxy](#AGPLProxy)) For software on a proxy server, you can provide an offer of source through a normal method of delivering messages to users of that kind of proxy. For example, a Web proxy could use a landing page. When users initially start using the proxy, you can direct them to a page with the offer of source along with any other information you choose to provide.
The AGPL says you must make the offer to “all users.” If you know that a certain user has already been shown the offer, for the current version of the software, you don't have to repeat it to that user again.
- How are the various GNU licenses
compatible with each other?
(
[#AllCompatibility](#AllCompatibility)) The various GNU licenses enjoy broad compatibility between each other. The only time you may not be able to combine code under two of these licenses is when you want to use code that's
*only* under an older version of a license with code that's under a newer version.
Below is a detailed compatibility matrix for various combinations of the GNU licenses, to provide an easy-to-use reference for specific cases. It assumes that someone else has written some software under one of these licenses, and you want to somehow incorporate code from that into a project that you're releasing (either your own original work, or a modified version of someone else's software). Find the license for your project in a column at the top of the table, and the license for the other code in a row on the left. The cell where they meet will tell you whether or not this combination is permitted.
When we say “copy code,” we mean just that: you're taking a section of code from one source, with or without modification, and inserting it into your own program, thus forming a work based on the first section of code. “Use a library” means that you're not copying any source directly, but instead interacting with it through linking, importing, or other typical mechanisms that bind the sources together when you compile or run the code.
Each place that the matrix states GPLv3, the same statement about compatibility is true for AGPLv3 as well.
I want to license my code under: GPLv2 only | GPLv2 or later | GPLv3 or later | LGPLv2.1 only | LGPLv2.1 or later | LGPLv3 or later
I want to copy code under GPLv2 only: OK | OK | …
[The remaining OK/NO cells of the compatibility matrix were lost in extraction; only a run of footnote markers ([1]–[9]) survived. The footnotes themselves follow intact.]
1: You must follow the terms of GPLv2 when incorporating the code in this case. You cannot take advantage of terms in later versions of the GPL.
2: While you may release under GPLv2-or-later both your original work, and/or modified versions of work you received under GPLv2-or-later, the GPLv2-only code that you're using must remain under GPLv2 only. As long as your project depends on that code, you won't be able to upgrade the license of your own code to GPLv3-or-later, and the work as a whole (any combination of both your project and the other code) can only be conveyed under the terms of GPLv2.
3: If you have the ability to release the project under GPLv2 or any later version, you can choose to release it under GPLv3 or any later version—and once you do that, you'll be able to incorporate the code released under GPLv3.
4: If you have the ability to release the project under LGPLv2.1 or any later version, you can choose to release it under LGPLv3 or any later version—and once you do that, you'll be able to incorporate the code released under LGPLv3.
5: You must follow the terms of LGPLv2.1 when incorporating the code in this case. You cannot take advantage of terms in later versions of the LGPL.
6: If you do this, as long as the project contains the code released under LGPLv2.1 only, you will not be able to upgrade the project's license to LGPLv3 or later.
7: LGPLv2.1 gives you permission to relicense the code under any version of the GPL since GPLv2. If you can switch the LGPLed code in this case to using an appropriate version of the GPL instead (as noted in the table), you can make this combination.
8: LGPLv3 is GPLv3 plus extra permissions that you can ignore in this case.
9: Because GPLv2 does not permit combinations with LGPLv3, you must convey the project under GPLv3's terms in this case, since it will allow that combination. |
10,041 | Git 使用简介 | https://www.linux.com/learn/intro-to-linux/2018/7/introduction-using-git | 2018-09-22T19:38:20 | [
"Git",
"GitHub"
] | https://linux.cn/article-10041-1.html |
>
> 我将向你介绍让 Git 的启动、运行,并和 GitHub 一起使用的基础知识。
>
>
>

如果你是一个开发者,那你应该熟悉许多开发工具。你已经花了多年时间来学习一种或者多种编程语言并打磨你的技巧。你可以熟练运用图形工具或者命令行工具开发。在你看来,没有任何事可以阻挡你。你的代码,就像你的思想,将经由你的手指,创建出一个优雅的、注释完善的应用程序,并风靡世界。
然而,如果你和其他人共同开发一个项目会发生什么呢?或者,你开发的应用程序变得越来越大,下一步你将如何去做?如果你想成功地和其他开发者合作,你一定会想用一个分布式版本控制系统。使用这样一个系统,合作开发一个项目变得非常高效和可靠。这样的一个系统便是 [Git](https://git-scm.com/)。还有一个叫 [GitHub](https://github.com/) 的方便的存储仓库,用来存储你的项目代码,这样你的团队可以检出和修改代码。
我将向你介绍让 Git 启动、运行,并和 GitHub 一起使用的基础知识,让你的应用程序开发提升到一个新的水平。我将在 Ubuntu 18.04 上进行演示,因此如果你选择的发行版本不同,你只需要修改 Git 安装命令以适合你的发行版的软件包管理器。
### Git 和 GitHub
第一件事就是创建一个免费的 GitHub 账号,打开 [GitHub 注册页面](https://github.com/join?source=header-home),然后填上需要的信息。完成这个之后,你就准备好开始安装 Git 了(这两件事谁先谁后都可以)。
安装 Git 非常简单,打开一个命令行终端,并输入命令:
```
sudo apt install git-all
```
这将会安装大量依赖包,但其中包含了你使用 Git 和 GitHub 所需的一切。
附注:我还使用 Git 来下载程序的安装源码。很多时候,内置的软件包管理器并不提供某个软件,这时除了添加第三方软件源之外,我经常直接去这个软件项目的 Git 主页,像这样克隆:
```
git clone ADDRESS
```
“ADDRESS” 就是那个软件项目的 Git 主页。这样我就可以确保自己安装那个软件的最新发行版了。
### 创建一个本地仓库并添加一个文件
下一步就是在你的电脑里创建一个本地仓库(本文称之为 newproject,位于 `~/` 目录下),打开一个命令行终端,并输入下面的命令:
```
cd ~/
mkdir newproject
cd newproject
```
现在你需要初始化这个仓库。在 `~/newproject` 目录下,输入命令 `git init`,当命令运行完,你就可以看到一个刚刚创建的空的 Git 仓库了(图1)。

*图 1: 初始化完成的新仓库*
下一步就是往项目里添加文件。我们在项目根目录(`~/newproject`)输入下面的命令:
```
touch readme.txt
```
现在项目里多了个空文件。输入 `git status` 来验证 Git 已经检测到多了个新文件(图2)。

*图 2: Git 检测到新文件 readme.txt*
即使 Git 检测到新的文件,但它并没有被真正的加入这个项目仓库。为此,你要输入下面的命令:
```
git add readme.txt
```
一旦完成这个命令,再输入 `git status` 命令,可以看到,`readme.txt` 已经是这个项目里的新文件了(图3)。

*图 3: 我们的文件已经被添加进临时环境*
### 第一次提交
当新文件添加进临时环境之后,我们现在就准备好创建第一个<ruby> 提交 <rt> commit </rt></ruby>了。什么是提交呢?简单地说,一个提交就是你对项目文件所做更改的一次记录。创建一个提交也非常简单。但是,为提交附上一条描述信息非常重要。通过这样做,你可以为该提交添加注释,比如你对文件做出了何种修改。然而,在这样做之前,我们需要告知 Git 我们的账户信息,输入以下命令:
```
git config --global user.email EMAIL
git config --global user.name “FULL NAME”
```
“EMAIL” 即你的 email 地址,“FULL NAME” 则是你的姓名。
现在你可以通过以下命令创建一个提交:
```
git commit -m “Descriptive Message”
```
“Descriptive Message” 即为你这次提交的描述信息。比如,你的第一个提交是添加一个 `readme.txt` 文件,你可以这样提交:
```
git commit -m “First draft of readme.txt file”
```
你可以看到输出表明一个文件已经修改,并且,为 `readme.txt` 创建了一个新的文件模式(图4)

*图4:提交成功*
### 创建分支并推送至 GitHub
分支是很重要的,它允许你在项目的不同状态之间切换。假如,你想给你的应用创建一个新的特性,为此,你创建了一个新分支。一旦你完成了新特性,你可以把这个新分支合并到主分支中去。使用以下命令创建一个新分支:
```
git checkout -b BRANCH
```
“BRANCH” 即为你新分支的名字,一旦执行完命令,输入 `git branch` 命令来查看是否创建了新分支(图5)

*图5:名为 featureX 的新分支*
接下来,我们需要在 GitHub 上创建一个仓库。 登录 GitHub 帐户,请单击帐户主页上的“New Repository”按钮。 填写必要的信息,然后单击 “Create repository”(图6)。

*图6:在 GitHub 上新建一个仓库*
在创建完一个仓库之后,你可以看到一个用于推送本地仓库的地址。若要推送,返回命令行窗口(`~/newproject` 目录中),输入以下命令:
```
git remote add origin URL
git push -u origin master
```
“URL” 即为我们 GitHub 上新建的仓库地址。
系统会提示您,输入 GitHub 的用户名和密码,一旦授权成功,你的项目将会被推送到 GitHub 仓库中。
### 拉取项目
如果你的同事改变了你们 GitHub 上项目的代码,并且已经合并那些更改,你可以把那些项目文件拉取到你的本地机器,这样,你系统中的文件就可以和远程用户的文件保持一致。你可以(在 `~/newproject` 目录中)输入以下命令来做这件事:
```
git pull origin master
```
以上的命令可以拉取任何新文件或修改过的文件到你的本地仓库。
### 基础
这就是从命令行使用 Git 来处理存储在 GitHub 上的项目的基础知识。 还有很多东西需要学习,所以我强烈建议你使用 `man git`,`man git-push` 和 `man git-pull` 命令来更深入地了解 `git` 命令可以做什么。
开发快乐!
了解更多关于 Linux 的 内容,请访问来自 Linux 基金会和 edX 的免费的 [Introduction to Linux](https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux)课程。
---
via: <https://www.linux.com/learn/intro-to-linux/2018/7/introduction-using-git>
作者:[Jack Wallen](https://www.linux.com/users/jlwallen) 选题:[lujun9972](https://github.com/lujun9972) 译者:[distant1219](https://github.com/distant1219) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,042 | 如何为 Linux 虚拟控制台配置鼠标支持 | https://www.ostechnix.com/how-to-configure-mouse-support-for-linux-virtual-consoles/ | 2018-09-23T21:49:41 | [
"鼠标"
] | https://linux.cn/article-10042-1.html | 
我使用 Oracle VirtualBox 来测试各种类 Unix 操作系统。我的大多数虚拟机都是<ruby> 无头 <rt> headless </rt></ruby>服务器,它们没有图形桌面环境。很长一段时间,我一直想知道如何在无头 Linux 服务器的基于文本的终端中使用鼠标。感谢 **GPM**,今天我了解到我们可以在虚拟控制台中使用鼠标进行复制和粘贴操作。 **GPM**,是<ruby> 通用鼠标 <rt> General Purpose Mouse </rt></ruby>的首字母缩写,它是一个守护程序,可以帮助你配置 Linux 虚拟控制台的鼠标支持。请不要将 GPM 与 **GDM**(<ruby> GNOME 显示管理器 <rt> GNOME Display manager </rt></ruby>)混淆。两者有完全不同的用途。
GPM 在以下场景中特别有用:
* 新的 Linux 服务器安装或默认情况下不能或不使用 X Windows 的系统,如 Arch Linux 和 Gentoo。
* 在虚拟终端/控制台中使用复制/粘贴操作。
* 在基于文本的编辑器和浏览器中使用复制/粘贴(例如,emacs、lynx)。
* 在文本文件管理器中使用复制/粘贴(例如 Ranger、Midnight commander)。
在这个简短的教程中,我们将看到如何在类 Unix 操作系统中在基于文本的终端中使用鼠标。
### 安装 GPM
要在纯文本 Linux 系统中启用鼠标支持,请安装 GPM 包。它在大多数 Linux 发行版的默认仓库中都有。
在 Arch Linux 及其变体如 Antergos、Manjaro Linux 上,运行以下命令来安装 GPM:
```
$ sudo pacman -S gpm
```
在 Debian、Ubuntu、Linux Mint 中:
```
$ sudo apt install gpm
```
在 Fedora 上:
```
$ sudo dnf install gpm
```
在 openSUSE 上:
```
$ sudo zypper install gpm
```
安装后,使用以下命令启用并启动 GPM 服务:
```
$ sudo systemctl enable gpm
$ sudo systemctl start gpm
```
在基于 Debian 的系统中,gpm 服务将在你安装后自动启动,因此你无需如上所示手动启动服务。
### 为 Linux 虚拟控制台配置鼠标支持
无需特殊配置。GPM 将在你安装并启动 gpm 服务后立即开始工作。
在安装 GPM 之前,看下我的 Ubuntu 18.04 LTS 服务器的屏幕截图:

正如你在上面的截图中看到的,我的 Ubuntu 18.04 LTS 无头服务器中没有可见的鼠标指针。只有一个闪烁的光标,它不能让我选择文本,使用鼠标复制/粘贴文本。在仅限 CLI 的 Linux 服务器中,鼠标根本没用。
在安装 GPM 后查看 Ubuntu 18.04 LTS 服务器的以下截图:

看见了吗?我现在可以选择文字了。
要选择,复制和粘贴文本,请执行以下操作:
* 要选择文本,请按下鼠标左键并拖动鼠标。
* 选择文本后,放开鼠标左键,并按下中键在同一个或另一个控制台中粘贴文本。
* 右键用于扩展选择,就像在 `xterm` 中。
* 如果你使用的是双键鼠标,请使用右键粘贴文本。
就这么简单!
就像我已经说过的那样,GPM 工作得很好,并且不需要额外的配置。以下是 GPM 配置文件 `/etc/gpm.conf`(或在某些发行版中是 `/etc/conf.d/gpm`)的示例内容:
```
# protected from evaluation (i.e. by quoting them).
#
# This file is used by /etc/init.d/gpm and can be modified by
# "dpkg-reconfigure gpm" or by hand at your option.
#
device=/dev/input/mice
responsiveness=
repeat_type=none
type=exps2
append=''
sample_rate=
```
在我的例子中,我使用 USB 鼠标。如果你使用的是其他鼠标,则可能需要更改 `device=/dev/input/mice` 和 `type=exps2` 参数的值。
有关更多详细信息,请参阅手册页。
```
$ man gpm
```
就是这些了。希望这个有用。还有更多的好东西。敬请期待!
干杯!
---
via: <https://www.ostechnix.com/how-to-configure-mouse-support-for-linux-virtual-consoles/>
作者:[SK](https://www.ostechnix.com/author/sk/)
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,043 | 使用 mDNS 在局域网中轻松发现系统 | https://fedoramagazine.org/find-systems-easily-lan-mdns/ | 2018-09-23T22:06:42 | [
"mDNS",
"零配置系统"
] | https://linux.cn/article-10043-1.html | 
mDNS(<ruby> 多播 DNS <rt> Multicast DNS </rt></ruby>)允许系统在局域网中广播查询其他资源的名称。Fedora 用户经常在没有复杂名称服务的路由器上接有多个 Linux 系统。在这种情况下,mDNS 允许你按名称与多个系统通信 —— 多数情况下不用路由器。你也不必在所有本地系统上同步类似 `/etc/hosts` 之类的文件。本文介绍如何设置它。
mDNS 是一个零配置网络服务,它已经诞生了很长一段时间。Fedora Workstation 带有零配置系统 Avahi(它包含 mDNS)。 (mDNS 也是 Bonjour 的一部分,可在 Mac OS 上找到。)
本文假设你有两个系统运行受支持的 Fedora 版本(27 或 28)。它们的主机名是 castor 和 pollux。
### 安装包
确保系统上安装了 nss-mdns 和 avahi 软件包。你可能是不同的版本,这也没问题:
```
$ rpm -q nss-mdns avahi
nss-mdns-0.14.1-1.fc28.x86_64
avahi-0.7-13.fc28.x86_64
```
Fedora Workstation 默认提供这两个包。如果不存在,请安装它们:
```
$ sudo dnf install nss-mdns avahi
```
确保 `avahi-daemon.service` 单元已启用并正在运行。同样,这是 Fedora Workstation 的默认设置。
```
$ sudo systemctl enable --now avahi-daemon.service
```
虽然是可选的,但你可能还需要安装 avahi-tools 软件包。该软件包包括许多方便的程序,用于检查系统上的零配置服务的工作情况。使用这个 `sudo` 命令:
```
$ sudo dnf install avahi-tools
```
`/etc/nsswitch.conf` 控制系统使用哪个服务用于解析,以及它们的顺序。你应该在那个文件中看到这样的一行:
```
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname
```
注意命令 `mdns4_minimal [NOTFOUND=return]`。它们告诉你的系统使用多播 DNS 解析程序将主机名解析为 IP 地址。即使该服务有效,如果名称无法解析,也会尝试其余服务。
如果你没有看到与此类似的配置,则可以(以 root 用户身份)对其进行编辑。但是,nss-mdns 包会为你处理此问题。如果你对自己编辑它感到不舒服,请删除并重新安装该软件包以修复该文件。
在**两个系统**中执行同样的步骤。
### 设置主机名并测试
现在你已完成常见的配置工作,请使用以下方法之一设置每个主机的名称:
1. 如果你正在使用 Fedora Workstation,[你可以使用这个步骤](https://fedoramagazine.org/set-hostname-fedora/)。
2. 如果没有,请使用 `hostnamectl` 来做。在第一台机器上这么做:`$ hostnamectl set-hostname castor`。
3. 你还可以编辑 `/etc/avahi/avahi-daemon.conf`,删除主机名设置行上的注释,并在那里设置名称。但是,默认情况下,Avahi 使用系统提供的主机名,因此你**不应该**需要此方法。
接下来,重启 Avahi 守护进程,以便它接收更改:
```
$ sudo systemctl restart avahi-daemon.service
```
然后正确设置另一台机器:
```
$ hostnamectl set-hostname pollux
$ sudo systemctl restart avahi-daemon.service
```
只要你的路由器没有禁止 mDNS 流量,你现在应该能够登录到 castor 并 ping 通另一台机器。你应该使用默认的 .local 域名,以便解析正常工作:
```
$ ping pollux.local
PING pollux.local (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=1 ttl=64 time=3.17 ms
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=2 ttl=64 time=1.24 ms
...
```
如果你在 pollux `ping castor.local`,同样的技巧也适用。现在在网络中访问你的系统更方便了!
此外,如果你的路由器也支持这个服务,请不要感到惊讶。现代 WiFi 和有线路由器通常提供这些服务,以使消费者的生活更轻松。
此过程适用于大多数系统。但是,如果遇到麻烦,请使用 avahi-browse 和 avahi-tools 软件包中的其他工具来查看可用的服务。
---
via: <https://fedoramagazine.org/find-systems-easily-lan-mdns/>
作者:[Paul W. Frields](https://fedoramagazine.org/author/pfrields/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Multicast DNS, or mDNS, lets systems broadcast queries on a local network to find other resources by name. Fedora users often own multiple Linux systems on a router without sophisticated name services. In that case, mDNS lets you talk to your multiple systems by name — without touching the router in most cases. You also don’t have to keep files like */etc/hosts* in sync on all the local systems. This article shows you how to set it up.
mDNS is a zero-configuration networking service that’s been around for quite a while. Fedora ships Avahi, a zero-configuration stack that includes mDNS, as part of Workstation. (mDNS is also part of Bonjour, found on Mac OS.)
This article assumes you have two systems running supported versions of Fedora (27 or 28). Their host names are meant to be *castor* and *pollux*.
## Installing packages
Make sure the *nss-mdns* and *avahi* packages are installed on your system. You might have a different version, which is fine:
$ rpm -q nss-mdns avahi
nss-mdns-0.14.1-1.fc28.x86_64
avahi-0.7-13.fc28.x86_64
Fedora Workstation provides both of these packages by default. If not present, install them:
$sudo dnf install nss-mdns avahi
Make sure the *avahi-daemon.service* unit is enabled and running. Again, this is the default on Fedora Workstation.
$sudo systemctl enable --now avahi-daemon.service
Although optional, you might also want to install the *avahi-tools* package. This package includes a number of handy utilities for checking how well the zero-configuration services on your system are working. Use this *sudo* command:
$sudo dnf install avahi-tools
The */etc/nsswitch.conf* file controls which services your system uses to resolve services, and in what order. You should see a line like this in that file:
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname
Notice the commands *mdns4_minimal [NOTFOUND=return].* They tell your system to use the multicast DNS resolver to resolve a hostname to an IP address. Even if that service works, the remaining services are tried if the name doesn’t resolve.
If you don’t see a configuration similar to this, you can edit it (as the *root* user). However, the *nss-mdns* package handles this for you. Remove and reinstall that package to fix the file, if you’re uncomfortable editing it yourself.
Follow the steps above for **both systems**.
## Setting host name and testing
Now that you’ve done the common configuration work, set up each host’s name in one of these ways:
- If you’re using Fedora Workstation, [you can use this procedure](https://fedoramagazine.org/set-hostname-fedora/).
- If not, use *hostnamectl* to do the honors. Do this for the first box: `$ hostnamectl set-hostname castor`
- You can also edit the */etc/avahi/avahi-daemon.conf* file, remove the comment on the *host-name* setting line, and set the name there. By default, though, Avahi uses the system provided host name, so you **shouldn’t** need this method.
Next, restart the Avahi daemon so it picks up changes:
$sudo systemctl restart avahi-daemon.service
Then set your other box properly:
$ hostnamectl set-hostname pollux
$ sudo systemctl restart avahi-daemon.service
As long as your network router is not disallowing mDNS traffic, you should now be able to login to *castor* and ping the other box. You should use the default *.local* domain name so resolution works correctly:
$ ping pollux.local
PING pollux.local (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=1 ttl=64 time=3.17 ms
64 bytes from 192.168.0.1 (192.168.0.1): icmp_seq=2 ttl=64 time=1.24 ms
...
The same trick should also work from *pollux* if you ping *castor.local*. It’s much more convenient now to access your systems around the network!
Moreover, don’t be surprised if your router advertises services. Modern WiFi and wired routers often provide these services to make life easier for consumers.
This process works for most systems. However, if you run into trouble, use *avahi-browse* and other tools from the *avahi-tools* package to see what services are available.
## Edgar Hoch
You mention that package nss-mdns modifies /etc/nsswitch.conf in scriptlets. Yes, it does, and this is the problem: It conflicts with authselect on Fedora 28, because no other program than authselect should modify /etc/nsswitch.conf. See https://bugzilla.redhat.com/show_bug.cgi?id=1577243 .
## Paul W. Frields
@Edgar: It could equally be said that authselect’s expectation that only it ever touches nsswitch.conf, without reasonable facilities for handling more situations required by modern users, is also the problem. Apparently the upstreams are sorting the situation out, and I look forward to amending this article if/when that happens.
## Edgar Hoch
Paul, yes, I agree. The change from authconfig to authselect in Fedora 28 was to early, authselect is still not ready for all situations, there should be tools to modify configuration files but only to create a complete new profile. I had to do this for nis, because the developers had decided that only sssd and winbind will be neccessary. But sssd doen’t work well with nis. And I did need nscd, because Fedora 28 had decided to set IPAddressDeny=any in systemd-logind.service. Now they have changed there mind and nss_nis adds a file that set IPAddressAllow=any.
I am already following upstream authselect at github. They have created a nis profile, that’s fine. I am not sure if adding file user-nsswitch.conf is a good solution for the problem. But we man see…
Another note: Is it intended that you set hostname in your example to a name without a domain? Shouldn’t it contain at least .localdomain? I think that some programs may have problems (e.g. sendmail may delay startup).
## Paul W. Frields
@Edgar: Indeed, some users may need to name their box e.g. “castor.localdomain” if they’re using certain other services. That’s not often the case for home network users, but it’s a very good point nonetheless — thanks.
## Slavisa
I have been using Fedora 27 for a long time and I am quite happy. Is there any reason for the update?
Thanks in advance!
Kindest regards,
Slavisa
## vortex
avahi is great, except it is unfortunate that it has never properly supported DNS-SD in that registrations over mdns can be reflected in unicast dynamic DNS, unlike the macos equivalent mDNSresponder (or the posix version).
## Andreas WERNER
Are there more infos about using mDNS?
What happens if there are more than one mDNS services/servers are running in a LAN? Are they concurrent?
I have a prof audio device from MOTU which is also using mDNS.
How to see all the MDNS systems running in a LAN? And all *.local domain names announced?
Best regards
Andreas WERNER
## Paul W. Frields
@Andreas: While I’m not an expert at mDNS, the way the service is designed is that each system that implements it advertises itself. No central server is used or expected, and all the services on the LAN are expected to coexist. The .local domain is the default domain for mDNS so it’s expected that services would support it. Check your unit’s manufacturer documentation or website for more information on their implementation if you don’t see expected results.
## Andreas WERNER
@ Paul Frields:
Thank you for your reply!
## RG
There are firewalls that block mDNS traffic. E.g. Freifunk in Germany does such kind because it heavily increases needed bandwith. For home or a virtual environment, mDNS is great.
## vortex
Hi RG,
mDNS consists of a multicast packet transmitted with a TTL of one, it never leaves your local network segment unless specific multicast routing is present on the LAN.
Cheers.
## dag
Might seem obvious but if you’re trying to setup dual stack ipv4/6 make sure you change the setting from mdns4_minimal to mdns_minimal so that ipv6 addresses are also resolved. |
10,044 | 最好的 3 个开源 JavaScript 图表库 | https://opensource.com/article/18/9/open-source-javascript-chart-libraries | 2018-09-24T07:54:53 | [
"图表",
"JavaScript"
] | https://linux.cn/article-10044-1.html |
>
> 图表及其它可视化方式让传递数据的信息变得更简单。
>
>
>

对于数据可视化和制作精美网站来说,图表和图形很重要。视觉上的展示让分析大块数据及传递信息变得更简单。JavaScript 图表库能让数据以极好的、易于理解的和交互的方式进行可视化,还能够优化你的网站设计。
本文会带你学习最好的 3 个开源 JavaScript 图表库。
### 1、 Chart.js
[Chart.js](https://www.chartjs.org/) 是一个开源的 JavaScript 库,你可以在自己的应用中用它创建生动美丽和交互式的图表。使用它需要遵循 MIT 协议。
使用 Chart.js,你可以创建各种各样令人印象深刻的图表和图形,包括条形图、折线图、范围图、线性标度和散点图。它可以响应各种设备,使用 HTML5 Canvas 元素进行绘制。
示例代码如下,它使用该库绘制了一个条形图。本例中我们使用 Chart.js 的内容分发网络(CDN)来包含这个库。注意这里使用的数据仅用于展示。
```
<!DOCTYPE html>
<html>
<head>
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.min.js"></script>
</head>
<body>
<canvas id="bar-chart" width="300" height="150"></canvas>
<script>
new Chart(document.getElementById("bar-chart"), {
type: 'bar',
data: {
labels: ["North America", "Latin America", "Europe", "Asia", "Africa"],
datasets: [
{
label: "Number of developers (millions)",
backgroundColor: ["red", "blue","yellow","green","pink"],
data: [7,4,6,9,3]
}
]
},
options: {
legend: { display: false },
title: {
display: true,
text: 'Number of Developers in Every Continent'
},
scales: {
yAxes: [{
ticks: {
beginAtZero:true
}
}]
}
}
});
</script>
</body>
</html>
```
如你所见,通过将 `type` 设置为 `bar` 来构造条形图。你也可以改变条形的方向 —— 比如把 `type` 设置成 `horizontalBar`,就能得到水平条形图。
在 `backgroundColor` 数组参数中提供颜色类型,就可以设置条形图的颜色。
颜色被分配给数组中相同索引处的标签和数据。例如,第二个标签 “Latin America”,对应的颜色是 “blue”(颜色数组中的第二项),数值是 4(`data` 中的第二个数字)。
代码的执行结果如下。

### 2、 Chartist.js
[Chartist.js](https://gionkunz.github.io/chartist-js/) 是一个简单的 JavaScript 动画库,你能够自制美丽的响应式图表,或者进行其他创作。使用它需要遵循 WTFPL 或者 MIT 协议。
这个库是由一些对现有图表工具不满的开发者进行开发的,它可以为设计师或程序员提供美妙的功能。
在项目中包含 Chartist.js 库后,你可以使用它们来创建各式各样的图表,包括动画,条形图和折线图。它使用 SVG 来动态渲染图表。
这里是使用该库绘制一个饼图的例子。
```
<!DOCTYPE html>
<html>
<head>
<link href="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.css" rel="stylesheet" type="text/css" />
<style>
.ct-series-a .ct-slice-pie {
fill: hsl(100, 20%, 50%); /* filling pie slices */
stroke: white; /*giving pie slices outline */
stroke-width: 5px; /* outline width */
}
.ct-series-b .ct-slice-pie {
fill: hsl(10, 40%, 60%);
stroke: white;
stroke-width: 5px;
}
.ct-series-c .ct-slice-pie {
fill: hsl(120, 30%, 80%);
stroke: white;
stroke-width: 5px;
}
.ct-series-d .ct-slice-pie {
fill: hsl(90, 70%, 30%);
stroke: white;
stroke-width: 5px;
}
.ct-series-e .ct-slice-pie {
fill: hsl(60, 140%, 20%);
stroke: white;
stroke-width: 5px;
}
</style>
</head>
<body>
<div class="ct-chart ct-golden-section"></div>
<script src="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.js"></script>
<script>
var data = {
series: [45, 35, 20]
};
var sum = function(a, b) { return a + b };
new Chartist.Pie('.ct-chart', data, {
labelInterpolationFnc: function(value) {
return Math.round(value / data.series.reduce(sum) * 100) + '%';
}
});
</script>
</body>
</html>
```
使用 Chartist JavaScript 库,你可以使用各种预先构建好的 CSS 样式,而不是在项目中指定各种与样式相关的部分。你可以使用这些样式来设置已创建的图表的外观。
比如,预创建的 CSS 类 `.ct-chart` 是用来构建饼状图的容器。还有 `.ct-golden-section` 类可用于获取纵横比,它基于响应式设计进行缩放,帮你解决了计算固定尺寸的麻烦。Chartist 还提供了其它类别的比例容器,你可以在自己的项目中使用它们。
为了给各个扇形设置样式,可以使用默认的 `.ct-series-a` 类。字母 `a` 会随着系列的数量变化(a、b、c,等等),因此它与每个要设置样式的扇形相对应。
`Chartist.Pie` 方法用来创建一个饼状图。要创建另一种类型的图表,比如折线图,请使用 `Chartist.Line`。
代码的执行结果如下。

### 3、 D3.js
[D3.js](https://d3js.org/) 是另一个好用的开源 JavaScript 图表库。使用它需要遵循 BSD 许可证。D3 的主要用途是根据所提供的数据来处理文档,并为其添加交互功能。
借助这个库,你可以通过 HTML5、SVG 和 CSS 来可视化你的数据,并且让你的网站变得更精美。更重要的是,使用 D3,你可以把数据绑定到文档对象模型(DOM)上,然后使用基于数据的函数改变文档。
示例代码如下,它使用该库绘制了一个简单的条形图。
```
<!DOCTYPE html>
<html>
<head>
<style>
.chart div {
font: 15px sans-serif;
background-color: lightblue;
text-align: right;
padding:5px;
margin:5px;
color: white;
font-weight: bold;
}
</style>
</head>
<body>
<div class="chart"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.5.0/d3.min.js"></script>
<script>
var data = [342,222,169,259,173];
d3.select(".chart")
.selectAll("div")
.data(data)
.enter()
.append("div")
.style("width", function(d){ return d + "px"; })
.text(function(d) { return d; });
</script>
</body>
</html>
```
使用 D3 库的主要概念是应用 CSS 样式选择器来定位 DOM 节点,然后对其执行操作,就像其它的 DOM 框架,比如 JQuery。
将数据绑定到文档上后,`.enter()` 函数会被调用,为新传入的数据构建新的节点。所有在 `.enter()` 之后调用的方法都会为数据中的每一个项目调用一次。
代码的执行结果如下。

### 总结
[JavaScript](https://www.liveedu.tv/guides/programming/javascript/) 图表库提供了强大的工具,可以帮助你在自己的网络资源上实现数据可视化。通过这三个开源库,你可以把自己的网站变得更好看、更易用。
你知道其它强大的用于创造 JavaScript 动画效果的前端库吗?请在下方的评论区留言分享。
---
via: <https://opensource.com/article/18/9/open-source-javascript-chart-libraries>
作者:[Dr.Michael J.Garbade](https://opensource.com/users/drmjg) 选题:[lujun9972](https://github.com/lujun9972) 译者:[BriFuture](https://github.com/brifuture) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Charts and graphs are important for visualizing data and making websites appealing. Visual presentations make it easier to analyze big chunks of data and convey information. JavaScript chart libraries enable you to visualize data in a stunning, easy to comprehend, and interactive manner and improve your website's design.
In this article, learn about three top open source JavaScript chart libraries.
## 1. Chart.js
[Chart.js](https://www.chartjs.org/) is an open source JavaScript library that allows you to create animated, beautiful, and interactive charts on your application. It's available under the MIT License.
With Chart.js, you can create various impressive charts and graphs, including bar charts, line charts, area charts, linear scale, and scatter charts. It is completely responsive across various devices and utilizes the HTML5 Canvas element for rendering.
Here is example code that draws a bar chart using the library. We'll include it in this example using the Chart.js content delivery network (CDN). Note that the data used is for illustration purposes only.
```
<!DOCTYPE html>
<html>
<head>
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.min.js"></script>
</head>
<body>
<canvas id="bar-chart" width="300" height="150"></canvas>
<script>
new Chart(document.getElementById("bar-chart"), {
type: 'bar',
data: {
labels: ["North America", "Latin America", "Europe", "Asia", "Africa"],
datasets: [
{
label: "Number of developers (millions)",
backgroundColor: ["red", "blue","yellow","green","pink"],
data: [7,4,6,9,3]
}
]
},
options: {
legend: { display: false },
title: {
display: true,
text: 'Number of Developers in Every Continent'
},
scales: {
yAxes: [{
ticks: {
beginAtZero:true
}
}]
}
}
});
</script>
</body>
</html>
```
As you can see from this code, bar charts are constructed by setting **type** to **bar**. You can change the direction of the bar to other types—such as setting **type** to **horizontalBar**.
The bars' colors are set by providing the type of color in the **backgroundColor** array parameter.
The colors are allocated to the label and data that share the same index in their corresponding array. For example, "Latin America," the second label, will be set to "blue" (the second color) and 4 (the second number in the data).
Here is the output of this code.

## 2. Chartist.js
[Chartist.js](https://gionkunz.github.io/chartist-js/) is a simple JavaScript animation library that allows you to create customizable and beautiful responsive charts and other designs. The open source library is available under the WTFPL or MIT License.
The library was developed by a group of developers who were dissatisfied with existing charting tools, so it offers wonderful functionalities to designers and developers.
After including the Chartist.js library and its CSS files in your project, you can use them to create various types of charts, including animations, bar charts, and line charts. It utilizes SVG to render the charts dynamically.
Here is an example of code that draws a pie chart using the library.
```
<!DOCTYPE html>
<html>
<head>
<link href="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.css" rel="stylesheet" type="text/css" />
<style>
.ct-series-a .ct-slice-pie {
fill: hsl(100, 20%, 50%); /* filling pie slices */
stroke: white; /*giving pie slices outline */
stroke-width: 5px; /* outline width */
}
.ct-series-b .ct-slice-pie {
fill: hsl(10, 40%, 60%);
stroke: white;
stroke-width: 5px;
}
.ct-series-c .ct-slice-pie {
fill: hsl(120, 30%, 80%);
stroke: white;
stroke-width: 5px;
}
.ct-series-d .ct-slice-pie {
fill: hsl(90, 70%, 30%);
stroke: white;
stroke-width: 5px;
}
.ct-series-e .ct-slice-pie {
fill: hsl(60, 140%, 20%);
stroke: white;
stroke-width: 5px;
}
</style>
</head>
<body>
<div class="ct-chart ct-golden-section"></div>
<script src="https://cdn.jsdelivr.net/chartist.js/latest/chartist.min.js"></script>
<script>
var data = {
series: [45, 35, 20]
};
var sum = function(a, b) { return a + b };
new Chartist.Pie('.ct-chart', data, {
labelInterpolationFnc: function(value) {
return Math.round(value / data.series.reduce(sum) * 100) + '%';
}
});
</script>
</body>
</html>
```
Instead of specifying various style-related components of your project, the Chartist JavaScript library allows you to use various pre-built CSS styles. You can use them to control the appearance of the created charts.
For example, the pre-created CSS class **.ct-chart** is used to build the container for the pie chart. And, the **.ct-golden-section** class is used to get the aspect ratios, which scale with responsive designs and saves you the hassle of calculating fixed dimensions. Chartist also provides other classes of container ratios you can utilize in your project.
For styling the various pie slices, you can use the default .**ct-series-a** class. The letter **a** is iterated with every series count (a, b, c, etc.) such that it corresponds with the slice to be styled.
The **Chartist.Pie** method is used for creating a pie chart. To create another type of chart, such as a line chart, use **Chartist.Line.**
Here is the output of the code.

## 3. D3.js
[D3.js](https://d3js.org/) is another great open source JavaScript chart library. It's available under the BSD license. D3 is mainly used for manipulating and adding interactivity to documents based on the provided data.
You can use this amazing 3D animation library to visualize your data using HTML5, SVG, and CSS and make your website appealing. Essentially, D3 enables you to bind data to the Document Object Model (DOM) and then use data-based functions to make changes to the document.
Here is example code that draws a simple bar chart using the library.
```
<!DOCTYPE html>
<html>
<head>
<style>
.chart div {
font: 15px sans-serif;
background-color: lightblue;
text-align: right;
padding:5px;
margin:5px;
color: white;
font-weight: bold;
}
</style>
</head>
<body>
<div class="chart"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/5.5.0/d3.min.js"></script>
<script>
var data = [342,222,169,259,173];
d3.select(".chart")
.selectAll("div")
.data(data)
.enter()
.append("div")
.style("width", function(d){ return d + "px"; })
.text(function(d) { return d; });
</script>
</body>
</html>
```
The main concept in using the D3 library is to first apply CSS-style selections to point to the DOM nodes and then apply operators to manipulate them—just like in other DOM frameworks like jQuery.
After the data is bound to a document, the .**enter()** function is invoked to build new nodes for incoming data. All the methods invoked after the .**enter()** function will be called for every item in the data.
Here is the output of the code.

## Wrapping up
[JavaScript](https://www.liveedu.tv/guides/programming/javascript/) charting libraries provide you with powerful tools for implementing data visualization on your web properties. With these three open source libraries, you can enhance the beauty and interactivity of your websites.
Do you know of another powerful frontend library for creating JavaScript animation effects? Please let us know in the comment section below.
*Training on LiveEdu.tv provides one way to learn more about JavaScript.*
|
10,045 | 何谓开源编程? | https://opensource.com/article/18/3/what-open-source-programming | 2018-09-25T14:58:05 | [
"开源"
] | https://linux.cn/article-10045-1.html |
>
> 开源就是丢一些代码到 GitHub 上。了解一下它是什么,以及不是什么?
>
>
>
最简单的来说,开源编程就是编写一些大家可以随意取用、修改的代码。但你肯定听过关于 Go 语言的那个老笑话,说 Go 语言“简单到看一眼就可以明白规则,但需要一辈子去学会运用它”。其实写开源代码也是这样的。往 GitHub、Bitbucket、SourceForge 等网站或者是你自己的博客或网站上丢几行代码不是难事,但想要卓有成效,还需要个人的努力付出和高瞻远瞩。

### 我们对开源编程的误解
首先我要说清楚一点:把你的代码放在 GitHub 的公开仓库中并不意味着把你的代码开源了。在几乎全世界,根本不用创作者做什么,只要作品形成,版权就随之而生了。在创作者进行授权之前,只有作者可以行使版权相关的权力。未经创作者授权的代码,不论有多少人在使用,都是一颗定时炸弹,只有愚蠢的人才会去用它。
有些创作者很善良,认为"很明显,我的代码是免费提供给大家使用的",也并不想起诉那些用了他的代码的人,但这并不意味着这些代码可以放心使用。不论在你眼中创作者们多么善良,他们都 *有权利* 起诉任何未经明确授权就使用、修改其代码,或将其代码嵌入到别处的人。
很明显,你不应该在没有指定开源许可证的情况下将你的源代码发布到网上然后期望别人使用它并为其做出贡献。我建议你也尽量避免使用这种代码,甚至疑似未授权的也不要使用。如果你开发了一个函数和例程,它和之前一个疑似未授权代码很像,源代码作者就可以对你就侵权提起诉讼。
举个例子,Jill Schmill 写了 AwesomeLib 然后未明确授权就把它放到了 GitHub 上,就算 Jill Schmill 不起诉任何人,只要她把 AwesomeLib 的完整版权都卖给 EvilCorp,EvilCorp 就会起诉之前违规使用这段代码的人。这种行为就好像是埋下了计算机安全隐患,总有一天会为人所用。
没有许可证的代码是危险的,切记。
### 选择恰当的开源许可证
假设你正要写一个新程序,而且打算让人们以开源的方式使用它,你需要做的就是选择最贴合你需求的[许可证](https://opensource.com/tags/licensing)。和宣传中说的一样,你可以从 GitHub 所支持的 [choosealicense.com](https://choosealicense.com/) 开始。这个网站设计得像个简单的问卷,特别方便快捷,点几下就能找到合适的许可证。
警示:在选择许可证时不要过于自负,如果你选的是 [Apache 许可证](https://choosealicense.com/licenses/apache-2.0/)或者 [GPLv3](https://choosealicense.com/licenses/gpl-3.0/) 这种广为使用的许可证,人们很容易理解他们和你都有什么权利,你也不需要请律师来排查其中的漏洞。你选择的许可证使用的人越少,带来的麻烦就越多。
最重要的一点是: *千万不要试图自己制造许可证!* 自己制造许可证会给大家带来更多的困惑和困扰,不要这样做。如果在现有的许可证中确实找不到你需要的条款,你可以在现有的许可证中附加上你的要求,并且重点标注出来,提醒使用者们注意。
我知道有些人会站出来说:“我才懒得管什么许可证,我已经把代码发到<ruby> 公开领域 <rt> public domain </rt></ruby>了。”但问题是,公开领域的法律效力并不是受全世界认可的。在不同的国家,公开领域的效力和表现形式不同。在有些国家的政府管控下,你甚至不可以把自己的源代码发到公开领域。万幸,[Unlicense](https://choosealicense.com/licenses/unlicense/) 可以弥补这些漏洞,它语言简洁,使用几个词清楚地描述了“就把它放到公开领域”,但其效力为全世界认可。
### 怎样引入许可证
确定使用哪个许可证之后,你需要清晰而无疑义地指定它。如果你是在 GitHub、GitLab 或 BitBucket 这几个网站发布,你需要构建很多个文件夹,在根文件夹中,你应把许可证创建为一个以 `LICENSE.txt` 命名的明文文件。
创建 `LICENSE.txt` 这个文件之后还有其它事要做。你需要在每个重要文件的头部添加注释块来申明许可证。如果你使用的是一个现有的许可证,这一步对你来说十分简便。一个 `# 项目名 (c)2018 作者名,GPLv3 许可证,详情见 https://www.gnu.org/licenses/gpl-3.0.en.html` 这样的注释块比隐约指代的许可证的效力要强得多。
如果你是要发布在自己的网站上,步骤也差不多。先创建 `LICENSE.txt` 文件,放入许可证,再表明许可证出处。
### 开源代码的不同之处
开源代码和专有代码的一个主要区别是开源代码写出来就是为了给别人看的。我是个 40 多岁的系统管理员,已经写过许许多多的代码。最开始我写代码是为了工作,为了解决公司的问题,所以其中大部分代码都是专有代码。这种代码的目的很简单,只要能在特定场合通过特定方式发挥作用就行。
开源代码则大不相同。在写开源代码时,你知道它可能会被用于各种各样的环境中。也许你的用例的环境条件很局限,但你仍旧希望它能在各种环境下发挥理想的效果。不同的人使用这些代码时会出现各种用例,你会看到各类冲突,还有你没有考虑过的思路。虽然代码不一定要满足所有人,但至少应该得体地处理他们的请求;就算处理不了,也应该以可预见、合乎逻辑的方式报错,而不是给使用者添麻烦。(例如,"第 583 行出现零除错误"就不是对缺少命令行参数的合理响应。)
你的源代码也可能逼疯你,尤其是在你一遍又一遍地修改错误的函数或是子过程后,终于出现了你希望的结果,这时你不会叹口气就继续下一个任务,你会把过程清理干净,因为你不会愿意别人看出你一遍遍尝试的痕迹。比如你会把 `$variable`、`$lol` 全都换成有意义的 `$iterationcounter` 和 `$modelname`。这意味着你要认真专业地进行注释(尽管对于你所处的背景知识热度来说它并不难懂),因为你期望有更多的人可以使用你的代码。
这个过程难免有些痛苦沮丧,毕竟这不是你常做的事,会有些不习惯。但它会使你成为一位更好的程序员,也会让你的代码升华。即使你的项目只有你一位贡献者,清理代码也会节约你后期的很多工作,相信我一年后你再看你的 app 代码时,你会庆幸自己写下的是 `$modelname`,还有清晰的注释,而不是什么不知名的数列,甚至连 `$lol` 也不是。
### 你并不是为你一人而写
开源的真正核心并不是那些代码,而是社区。更大的社区的项目维持时间更长,也更容易为人们所接受。因此不仅要加入社区,还要多多为社区发展贡献思路,让自己的项目能够为社区所用。
蝙蝠侠为了完成目标暗中独自花了很大功夫,你用不着这样,你可以登录 Twitter、Reddit,或者给你项目的相关人士发邮件,发布你正在筹备新项目的消息,仔细聊聊项目的设计初衷和你的计划,让大家一起帮忙,向大家征集数据输入,类似的使用案例,把这些信息整合起来,用在你的代码里。你不用接受所有的建议和请求,但你要对它有个大概把握,这样在你之后完善时可以躲过一些陷阱。
发布了首次通告这个过程还不算完整。如果你希望大家能够接受你的作品并且使用它,你就要以此为初衷来设计。公众说不定可以帮到你,你不必对公开这件事如临大敌。所以不要闭门造车,既然你是为大家而写,那就开设一个真实、公开的项目,想象你在社区的帮助和监督下,认真地一步步完成它。
### 建立项目的方式
你可以在 GitHub、GitLab 或 BitBucket 上免费注册账号来管理你的项目。注册之后,创建知识库,建立 `README` 文件,分配一个许可证,一步步写入代码。这样可以帮你建立好习惯,让你之后和现实中的团队一起工作时,也能目的清晰地朝着目标稳妥地开展工作。这样你做得越久,就越有兴趣 —— 通常会有用户先对你的项目产生兴趣。
用户会开始提一些问题,这会让你开心也会让你不爽,你应该亲切礼貌地对待他们,就算他们很多人对项目有很多误解甚至根本不知道你的项目做的是什么,你也应该礼貌专业地对待。一方面,你可以引导他们,让他们了解你在干什么。另一方面,他们也会慢慢地将你带入更大的社区。
如果你的项目很受用户青睐,总会有高级开发者出现,并表示出兴趣。这也许是好事,也可能激怒你。最开始你可能只会做简单的问题修复,但总有一天你会收到拉取请求,有可能是硬编码或特殊用例(可能会让项目变得难以维护),它可能改变你项目的作用域,甚至改变你项目的初衷。你需要学会分辨哪个有贡献,根据这个决定合并哪个,婉拒哪个。
### 我们为什么要开源?
开源听起来任务繁重,它也确实是这样。但它对你也有很多好处。它可以在无形之中磨练你,让你写出纯净持久的代码,也教会你与人沟通,团队协作。对于一个志向远大的专业开发者来说,它是最好的简历素材。你的未来雇主很有可能点开你的仓库,了解你的能力范围;而社区项目的开发者也有可能给你带来工作。
最后,为开源工作,意味着个人的提升,因为你在做的事不是为了你一个人,这比养活自己重要得多。
---
via: <https://opensource.com/article/18/3/what-open-source-programming>
作者:[Jim Salter](https://opensource.com/users/jim-salter) 译者:[Valoniakim](https://github.com/Valoniakim) 校对:[wxy](https://github.com/wxy)、[pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | At the simplest level, open source programming is merely writing code that other people can freely use and modify. But you've heard the old chestnut about playing Go, right? "So simple it only takes a minute to learn the rules, but so complex it requires a lifetime to master." Writing open source code is a pretty similar experience. It's easy to chuck a few lines of code up on GitHub, Bitbucket, SourceForge, or your own blog or site. But doing it *right* requires some personal investment, effort, and forethought.

opensource.com
## What open source programming *isn't*
Let's be clear up front about something: Just being on GitHub in a public repo does not make your code open source. Copyright in nearly all countries attaches automatically when a work is fixed in a medium, without need for any action by the author. For any code that has not been licensed by the author, it is only the author who can exercise the rights associated with copyright ownership. Unlicensed code—no matter how publicly accessible—is a ticking time bomb for anyone who is unwise enough to use it.
A well-meaning author may think, "well, it's obvious this is free to use," and have no plans ever to sue anyone, but that doesn't mean the code is safe to use. No matter what you think someone will do, that author has the *right* to sue anyone who uses, modifies, or embeds that code anywhere else without an expressly granted license.
Clearly, you shouldn't put your own code out in public without a license and expect others to use or contribute to it. I would also recommend you avoid using (or even *looking* at) such code yourself. If you create a highly similar function or routine to a piece of unlicensed work you inspected at some point in the past, you could open yourself or your employer to infringement lawsuits.
Let's say that Jill Schmill writes AwesomeLib and puts it on GitHub without a license. Even if Jill never sues anybody, she might eventually sell all the rights to AwesomeLib to EvilCorp, who will. (Think of it as a lurking vulnerability, just waiting to be exploited.)
Unlicensed code is unsafe code, period.
## Choosing the right license
OK, you've decided you want to write a new program, and you want people to have open source rights to use it. The next step is figuring out which [license](https://opensource.com/tags/licensing) best fits your needs. You can get started with the GitHub-curated [choosealicense.com](https://choosealicense.com/), which is just what it says on the tin. The site is laid out a bit like a simple quiz, and most people should be one or two clicks at most from finding the right license for their project.
A word of warning: if you stick with a widely used license like the [Apache License](https://choosealicense.com/licenses/apache-2.0/) or the [GPLv3](https://choosealicense.com/licenses/gpl-3.0/), it's easy for people to understand what their rights are and what your rights are without needing a team of lawyers to look for pitfalls and problems. The further you stray from the beaten path, though, the more problems you open yourself and others up to.
Most importantly, *do not write your own license!* Making up your own license is an unnecessary source of confusion for everyone. Don't do it. If you absolutely *must* have your own special terms that you can't find in any existing license, write them as an addendum to an otherwise well-understood license... and keep the main license and your addendum clearly separated so everyone involved knows which parts they've got to be extra careful about.
I know some people stubborn up and say, "I don't care about licenses and don't want to think about them; it's public domain." The problem with that is that "public domain" isn't a universally understood term in a legal sense. It means different things from one country to the next, with different rights and terms attached. In some countries, you can't even place *your own* works in the public domain, because the government reserves control over that. Luckily, the [Unlicense](https://choosealicense.com/licenses/unlicense/) has you covered. The Unlicense uses as few words as possible to clearly describe what "just make it public domain!" means in a clear and universally enforceable way.
## How to apply the license
Once you've chosen a license, you need to clearly and unambiguously apply it. If you're publishing somewhere like GitHub or GitLab or BitBucket, you'll have what amounts to a folder structure for your project's files. In the root folder of your project, you should have a plaintext file called LICENSE.txt that contains the text of the license you selected.
Putting LICENSE.txt in the root folder of your project isn't quite the last step—you also need a comment block declaring the license at the header of each significant file in your project. This is one of those times where it comes in handy to be using a well-established license. A comment that says *# this work (c)2018 myname, licensed GPLv3—see https://www.gnu.org/licenses/gpl-3.0.en.html* is much, *much* stronger and more useful than a comment block that merely makes a cryptic reference to a completely custom license.
If you're self-publishing your code on your own site, you'll want to follow basically the same process. Have a LICENSE.txt, put the full copy of your license in it, and link to your license in an abbreviated comment block at the head of each significant file.
## Open source code is different
A big difference between proprietary and open source code is that open source code is meant to be seen. As a 40-something sysadmin, I've written a lot of code. Most of it has been effectively proprietary—I started out writing code for myself to make my own jobs easier and scratch my own and/or my company's itches. The goal of such code is simple: All it has to do is work, in the exact way and under the exact circumstance its creator planned. As long as the thing you expected to happen when you invoked the program happens more frequently than not, it's a success.
Open source code is very different. When you write open source code, you know that it not only has to work, it has to work in situations you never dreamed of and may not have planned for. Maybe *you* only had one very narrow use case for your code and invoked it in exactly the same way every time. The people you share it with, though... they'll expose use cases, mixtures of arguments, and just plain strange thought processes you never considered. Your code doesn't necessarily have to satisfy all of them—but it at least needs to handle their requests gracefully, and fail in predictable and logical ways when it can't service them. (For example: "Division by zero on line 583" is not an acceptable response to a failure to supply a command-line argument.)
Your open source code also has to avoid unduly embarrassing you. That means that after you struggle and struggle to get a balky function or sub to finally produce the output you expected, you don't just sigh and move on to the next thing—you *clean it up*, because you don't want the rest of the world seeing your obvious house of cards. It means that you stop littering your code with variables like `$variable` and `$lol` and replace them with meaningful names like `$iterationcounter` or `$modelname`. And it means commenting things professionally (even if they're obvious to you in the heat of the moment) since you expect other people to be able to follow your code later.
This can be a little painful and frustrating at first—it's work you're not accustomed to doing. It makes you a better programmer, though, and it makes your code better as well. Just as important: Even if you're the only contributor your project ever has, it saves you work in the long run. Trust me, a year from now when you have to revisit your app, you're going to be very glad that `$modelname`, which gets parsed by several stunningly opaque regular expressions before getting socked into some other array somewhere, isn't named `$lol` anymore.
## You're not writing just for yourself
The true heart of open source isn't the code at all: it's the community. Projects with a strong community survive longer and are adopted much more heavily than those that don't. With that in mind, it's a good idea not only to embrace but actively plan for the community you hope to build around your project.
Batman might spend hundreds of hours in seclusion furiously building a project in secrecy, but you don't have to. Take to Twitter, Reddit, or mailing lists relevant to your project's scope, and announce that you're thinking of creating a new project. Talk about your design goals and how you plan to achieve them. Request input, listen to similar (but maybe not identical) use cases, and build that information into your process as you write code. You don't have to accept every suggestion or request—but if you know about them ahead of time, you can avoid pitfalls that require arduous major overhauls later.
This process doesn't end with the initial announcement. If you want your project to be adopted and used by other people, you need to *develop* it that way too. This isn't a barrier to entry; it's just a pattern to use. So don't just hunker down privately on your own machine with a text editor—start a real, publicly accessible project at one of the big foundries, and treat it as though the community was already there and watching.
## Ways to build a real public project
You can open accounts for open source projects at GitHub, GitLab, or BitBucket for free. Once you've opened your account and created a repository for your project, *use* it—create a README, assign a LICENSE, and push code incrementally as you develop it. This will build the habits you'll need to work with a real team later as you get accustomed to writing your code in measurable, documented commits with clear goals. The further you go, the more likely you'll start generating interest—usually in the form of end users first.
The users will start opening tickets, which will both delight and annoy you. You should take those tickets seriously and treat their owners courteously. Some of them will be based on tremendous misunderstandings of what your project is and what is or isn't within its scope—treat those courteously and professionally, also. In some cases, you'll guide those users into the fold of what you're doing. In others, however haltingly, they'll guide you into realizing the larger—or slightly differently centered—scope you probably should have planned for in the first place.
If you do a good job with the users, eventually fellow developers will show up and take an interest. This will also both delight and annoy you. At first, you'll probably just get trivial bugfixes. Eventually, you'll start to get pull requests that would either hardcode really, really niche special use-cases into your project (which would be a nightmare to maintain) or significantly alter the scope or even the focus of your project. You'll need to learn how to recognize which contributions are which and decide which ones you want to embrace and which you should politely reject.
## Why bother with all of this?
If all of this sounds like a lot of work, there's a good reason: it is. But it's *rewarding* work that you can cash in on in plenty of ways. Open source work sharpens your skills in ways you never realized were dull—from writing cleaner, more maintainable code to learning how to communicate well and work as a team. It's also the best possible resume builder for a working or aspiring professional developer; potential employers can hit your repository and see what you're capable of, and developers you've worked with on community projects may want to bring you in on paying gigs.
Ultimately, working on open source projects—yours or others'—means personal growth, because you're working on something larger than yourself.
|
10,046 | 如何在 Ubuntu 和其他 Linux 发行版中创建照片幻灯片 | https://itsfoss.com/photo-slideshow-ubuntu/ | 2018-09-25T15:08:43 | [
"幻灯片",
"图片"
] | https://linux.cn/article-10046-1.html |
>
> 创建照片幻灯片只需点击几下。以下是如何在 Ubuntu 18.04 和其他 Linux 发行版中制作照片幻灯片。
>
>
>

想象一下,你的朋友和亲戚正在拜访你,并请你展示最近的活动/旅行照片。
你将照片保存在计算机上,并整齐地放在单独的文件夹中。你邀请计算机附近的所有人。你进入该文件夹,单击其中一张图片,然后按箭头键逐个显示照片。
但那太累了!如果这些图片每隔几秒自动更改一次,那将会好很多。
这称之为幻灯片,我将向你展示如何在 Ubuntu 中创建照片幻灯片。这能让你在文件夹中循环播放图片并以全屏模式显示它们。
### 在 Ubuntu 18.04 和其他 Linux 发行版中创建照片幻灯片
虽然有几种图像浏览器可以做到,但我将向你展示大多数发行版中应该提供的两种最常用的工具。
#### 方法 1:使用 GNOME 默认图像浏览器浏览照片幻灯片
如果你在 Ubuntu 18.04 或任何其他发行版中使用 GNOME,那么你很幸运。Gnome 的默认图像浏览器,Eye of GNOME,能够在当前文件夹中显示图片的幻灯片。
只需单击其中一张图片,你将在程序的右上角菜单中看到设置选项。它看起来像堆叠在一起的三条横栏。
你会在这里看到几个选项。勾选幻灯片选项,它将全屏显示图像。

默认情况下,图像以 5 秒的间隔变化。你可以进入 “Preferences -> Slideshow” 来更改幻灯片放映间隔。

#### 方法 2:使用 Shotwell Photo Manager 进行照片幻灯片放映
[Shotwell](https://wiki.gnome.org/Apps/Shotwell) 是一款流行的 [Linux 照片管理程序](https://itsfoss.com/linux-photo-management-software/)。适用于所有主要的 Linux 发行版。
如果尚未安装,请在你的发行版软件中心中搜索 Shotwell 并安装。
Shotwell 的运行略有不同。如果你在 Shotwell Viewer 中直接打开照片,则不会看到首选项或者幻灯片的选项。
对于幻灯片放映和其他选项,你必须打开 Shotwell 并导入包含这些图片的文件夹。导入文件夹后,从左侧窗格中选择该文件夹,然后单击菜单中的 “View”。你应该在此处看到幻灯片选项。只需单击它即可创建所选文件夹中所有图像的幻灯片。

你还可以更改幻灯片设置。当图像以全屏显示时,将显示此选项。只需将鼠标悬停在底部,你就会看到一个设置选项。
#### 创建照片幻灯片很容易
如你所见,在 Linux 中创建照片幻灯片非常简单。我希望你觉得这个简单的提示有用。如果你有任何问题或建议,请在下面的评论栏告诉我们。
---
via: <https://itsfoss.com/photo-slideshow-ubuntu/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Creating a slideshow of photos is a matter of a few clicks. Here’s how to make a slideshow of pictures in Ubuntu and other Linux distributions.

Imagine yourself in a situation where your friends and family are visiting you and request you to show the pictures of a recent event/trip.
You have the photos saved on your computers, neatly in a separate folder. You invite everyone near the computer. You go to the folder, click on one of the pictures and start showing them the photos one by one by pressing the arrow keys.
But that’s tiring! It will be a lot better if those images get changed automatically every few seconds.
That’s called a slideshow and I am going to show you how to create a slideshow of photos in Ubuntu. This will allow you to loop pictures from a folder and display them in fullscreen mode.
## Creating photo slideshow in Ubuntu 18.04 and other Linux distributions
While you could use several image viewers for this purpose, I am going to show you two of the most popular tools that should be available in most distributions.
### Method 1: Photo slideshow with GNOME’s default image viewer
If you are using GNOME in Ubuntu or any other distribution, you are in luck. The default image viewer of Gnome, Eye of GNOME, is well capable of displaying a slideshow of the pictures in the current folder.
Just click on one of the pictures and you’ll see the settings option on the top right side of the application menu. It looks like three bars stacked over the top of one another.
You’ll see several options here. Check the Slideshow box and it will go fullscreen displaying the images.

By default, the images change at an interval of 5 seconds. You can change the slideshow interval by going to the Preferences->Slideshow.

### Method 2: Photo slideshow with Shotwell Photo Manager
[Shotwell](https://wiki.gnome.org/Apps/Shotwell) is a popular [photo management application for Linux](https://itsfoss.com/linux-photo-management-software/). and available for all major Linux distributions.
If it is not installed already, search for Shotwell in your distribution’s software center and install it.
Shotwell works slightly different. If you directly open a photo in Shotwell Viewer, you won’t see preferences or options for a slideshow.
For the slideshow and other options, you must open Shotwell and import the folder containing those pictures. Once the folder is imported, select it in the left pane and then click View in the menu. You should see the Slideshow option there. Just click it to create a slideshow of all the images in the selected folder.

You can also change the slideshow settings. This option is presented when the images are displayed in the full view. Just hover the mouse to the lower bottom and you’ll see a settings option appearing.
### It’s easy to create photo slideshow
As you can see, it’s really simple to create slideshow of photos in Linux. I hope you find this simple tip useful. If you have questions or suggestions, please let me know in the comment section below. |
10,047 | 用 zsh 提高生产力的 5 个技巧 | https://opensource.com/article/18/9/tips-productivity-zsh | 2018-09-25T15:49:44 | [
"zsh",
"命令行"
] | https://linux.cn/article-10047-1.html |
>
> zsh 提供了数之不尽的功能和特性,这里有五个可以让你在命令行效率暴增的方法。
>
>
>

Z shell([zsh](http://www.zsh.org/))是 Linux 和类 Unix 系统中的一个[命令解析器](https://en.wikipedia.org/wiki/Shell_(computing))。 它跟 sh (Bourne shell) 家族的其它解析器(如 bash 和 ksh)有着相似的特点,但它还提供了大量的高级特性以及强大的命令行编辑功能,如增强版 Tab 补全。
在这里不可能涉及到 zsh 的所有功能,[描述](http://zsh.sourceforge.net/Doc/Release/zsh_toc.html)它的特性需要好几百页。在本文中,我会列出 5 个技巧,让你通过在命令行使用 zsh 来提高你的生产力。
### 1、主题和插件
多年来,开源社区已经为 zsh 开发了数不清的主题和插件。主题是一个预定义提示符的配置,而插件则是一组常用的别名命令和函数,可以让你更方便的使用一种特定的命令或者编程语言。
如果你现在想开始用 zsh 的主题和插件,那么使用一种 zsh 的配置框架是你最快的入门方式。在众多的配置框架中,最受欢迎的则是 [Oh My Zsh](https://ohmyz.sh/)。在默认配置中,它就已经为 zsh 启用了一些合理的配置,同时它也自带上百个主题和插件。
主题会在你的命令行提示符之前添加一些有用的信息,比如你 Git 仓库的状态,或者是当前使用的 Python 虚拟环境,所以它会让你的工作更高效。只需要看到这些信息,你就不用再敲命令去重新获取它们,而且这些提示也相当酷炫。下图就是我选用的主题 [Powerlevel9k](https://github.com/bhilburn/powerlevel9k):

*zsh 主题 Powerlevel9k*
除了主题,Oh My Zsh 还自带了大量常用的 zsh 插件。比如,启用 Git 插件后,你就可以使用一组简便的命令别名来操作 Git:
```
$ alias | grep -i git | sort -R | head -10
g=git
ga='git add'
gapa='git add --patch'
gap='git apply'
gdt='git diff-tree --no-commit-id --name-only -r'
gau='git add --update'
gstp='git stash pop'
gbda='git branch --no-color --merged | command grep -vE "^(\*|\s*(master|develop|dev)\s*$)" | command xargs -n 1 git branch -d'
gcs='git commit -S'
glg='git log --stat'
```
zsh 还有针对各种编程语言、打包系统和日常命令行工具的大量插件。以下是我的 Fedora 工作站中用到的插件列表:
```
git golang fedora docker oc sudo vi-mode virtualenvwrapper
```
### 2、智能的命令别名
命令别名在 zsh 中十分有用。为常用的命令定义别名可以节省你的打字时间。Oh My Zsh 默认配置了一些实用的命令别名,包括目录导航别名,还为一些常用命令添加了额外的选项,比如:
```
ls='ls --color=tty'
grep='grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'
```
除了命令别名以外, zsh 还自带两种额外常用的别名类型:后缀别名和全局别名。
后缀别名可以让你基于文件后缀,在命令行中利用指定程序打开这个文件。比如,要用 vim 打开 YAML 文件,可以定义以下命令行别名:
```
alias -s {yml,yaml}=vim
```
现在,只要你在命令行中输入任何以 `yml` 或 `yaml` 为后缀的文件名,zsh 都会用 vim 打开这个文件。
```
$ playbook.yml
# Opens file playbook.yml using vim
```
全局别名可以让你创建一个可在命令行的任何地方展开的别名,而不仅仅是在命令开始的时候。这个在你想替换常用文件名或者管道命令的时候就显得非常有用了。比如:
```
alias -g G='| grep -i'
```
要使用这个别名,只要你在想用管道命令的时候输入 `G` 就好了:
```
$ ls -l G do
drwxr-xr-x. 5 rgerardi rgerardi 4096 Aug 7 14:08 Documents
drwxr-xr-x. 6 rgerardi rgerardi 4096 Aug 24 14:51 Downloads
```
接着,我们就来看看 zsh 是如何导航文件系统的。
### 3、便捷的目录导航
当你使用命令行的时候,在不同的目录之间切换是最常见的操作了。zsh 提供了一些十分有用的目录导航功能来简化这个操作。这些功能已经集成到 Oh My Zsh 中了,你也可以用以下命令来启用它们:
```
setopt autocd autopushd pushdignoredups
```
使用了上面的配置后,你就不用输入 `cd` 来切换目录了,只需要输入目录名称,zsh 就会自动切换到这个目录中:
```
$ pwd
/home/rgerardi
$ /tmp
$ pwd
/tmp
```
如果想要回退,只要输入 `-` 即可。
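比如(示意:假设你刚从主目录切换到了 `/tmp`):

```
$ pwd
/tmp
$ -
$ pwd
/home/rgerardi
```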
zsh 会记录你访问过的目录,这样下次你就可以快速切换到这些目录中。如果想要看这个目录列表,只要输入 `dirs -v`:
```
$ dirs -v
0 ~
1 /var/log
2 /var/opt
3 /usr/bin
4 /usr/local
5 /usr/lib
6 /tmp
7 ~/Projects/Opensource.com/zsh-5tips
8 ~/Projects
9 ~/Projects/ansible
10 ~/Documents
```
如果想要切换到这个列表中的其中一个目录,只需输入 `~#`(`#` 代表目录在列表中的序号)就可以了。比如:
```
$ pwd
/home/rgerardi
$ ~4
$ pwd
/usr/local
```
你甚至可以用别名组合这些命令,这样切换起来就变得更简单:
```
d='dirs -v | head -10'
1='cd -'
2='cd -2'
3='cd -3'
4='cd -4'
5='cd -5'
6='cd -6'
7='cd -7'
8='cd -8'
9='cd -9'
```
现在你可以通过输入 `d` 来查看这个目录列表的前10个,然后用目录的序号来进行切换:
```
$ d
0 /usr/local
1 ~
2 /var/log
3 /var/opt
4 /usr/bin
5 /usr/lib
6 /tmp
7 ~/Projects/Opensource.com/zsh-5tips
8 ~/Projects
9 ~/Projects/ansible
$ pwd
/usr/local
$ 6
/tmp
$ pwd
/tmp
```
最后,你可以在 zsh 中利用 Tab 来自动补全目录名称。你可以先输入目录的首字母,然后按 `TAB` 键来补全它们:
```
$ pwd
/home/rgerardi
$ p/o/z (TAB)
$ Projects/Opensource.com/zsh-5tips/
```
以上仅仅是 zsh 强大的 Tab 补全系统中的一个功能。接来下我们来探索它更多的功能。
### 4、先进的 Tab 补全
zsh 强大的补全系统是它的卖点之一。为了简便起见,我称它为 Tab 补全,然而在系统底层,它起到了几个作用。这里通常包括展开以及命令补全,我会在这里一并讨论它们。如果想了解更多,详见 [用户手册](http://zsh.sourceforge.net/Guide/zshguide06.html#l144)。
在 Oh My Zsh 中,命令补全是默认启用的。如果没有使用它,要启用补全,只需在 `.zshrc` 文件中添加以下命令:
```
autoload -U compinit
compinit
```
zsh 的补全系统非常智能。它只会提示适用于当前上下文的项目 —— 比如,你输入了 `cd` 和 `TAB`,zsh 只会为你提示目录名,因为它知道其它的项目放在 `cd` 后面没用。
反之,如果你使用与用户相关的命令便会提示用户名,而 `ssh` 或者 `ping` 这类则会提示主机名。
zsh 拥有一个庞大而完整的补全库,因此它能识别许多不同的命令。比如,如果你使用 `tar` 命令,你可以按 `TAB` 键,它会展示压缩包中可供解压的文件列表:
```
$ tar -xzvf test1.tar.gz test1/file1 (TAB)
file1 file2
```
如果使用 `git` 的话,这里有个更高级的示例。在这个示例中,当你按 `TAB` 键,zsh 会自动补全仓库中唯一可以暂存的文件名:
```
$ ls
original plan.txt zsh-5tips.md zsh_theme_small.png
$ git status
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: zsh-5tips.md
no changes added to commit (use "git add" and/or "git commit -a")
$ git add (TAB)
$ git add zsh-5tips.md
```
zsh 还能识别命令行选项,并且只会提示与所选子命令相关的选项列表:
```
$ git commit - (TAB)
--all -a -- stage all modified and deleted paths
--allow-empty -- allow recording an empty commit
--allow-empty-message -- allow recording a commit with an empty message
--amend -- amend the tip of the current branch
--author -- override the author name used in the commit
--branch -- show branch information
--cleanup -- specify how the commit message should be cleaned up
--date -- override the author date used in the commit
--dry-run -- only show the list of paths that are to be committed or not, and any untracked
--edit -e -- edit the commit message before committing
--file -F -- read commit message from given file
--gpg-sign -S -- GPG-sign the commit
--include -i -- update the given files and commit the whole index
--interactive -- interactively update paths in the index file
--message -m -- use the given message as the commit message
... TRUNCATED ...
```
在按 `TAB` 键之后,你可以使用方向键来选择你想用的命令。现在你就不用记住所有的 `git` 命令项了。
zsh 还有很多有用的功能。当你用它的时候,你就知道哪些对你才是最有用的。
### 5、命令行编辑与历史记录
zsh 的命令行编辑功能也十分有用。默认条件下,它是模拟 emacs 编辑器的。如果你是跟我一样更喜欢用 vi/vim,你可以用以下命令启用 vi 的键绑定。
```
$ bindkey -v
```
如果你使用 Oh My Zsh,`vi-mode` 插件可以启用额外的绑定,同时会在你的命令提示符上增加 vi 的模式提示 —— 这个非常有用。
当启用 vi 的绑定后,你可以在命令行中使用 vi 命令进行编辑。比如,输入 `ESC+/` 来查找命令行记录。在查找的时候,输入 `n` 来找下一个匹配行,输入 `N` 来找上一个。输入 `ESC` 后,常用的 vi 命令都可以使用,如输入 `0` 跳转到行首,输入 `$` 跳转到行尾,输入 `i` 来插入文本,输入 `a` 来追加文本等等,即使是跟随的命令也同样有效,比如输入 `cw` 来修改单词。
除了命令行编辑,如果你想修复或重新执行之前用过的命令,zsh 还提供了几个有用的命令行历史功能。比如,当你打错了一条命令,输入 `fc`,就可以在你偏好的编辑器中修改最后一条命令。使用哪个编辑器参照 `$EDITOR` 变量,默认使用 vi。
另外一个有用的命令是 `r`, 它会重新执行上一条命令;而 `r <WORD>` 则会执行上一条包含 `WORD` 的命令。
最后,输入两个感叹号(`!!`),可以在命令行的任意位置引用上一条命令。这十分有用,比如,当你忘记使用 `sudo` 去执行需要权限的命令时:
```
$ less /var/log/dnf.log
/var/log/dnf.log: Permission denied
$ sudo !!
$ sudo less /var/log/dnf.log
```
这个功能让查找并且重新执行之前命令的操作更加方便。
### 下一步呢?
这里仅仅介绍了几个可以让你提高生产率的 zsh 特性;其实还有更多功能有待你的发掘;想知道更多的信息,你可以访问以下的资源:
* [An Introduction to the Z Shell](http://zsh.sourceforge.net/Intro/intro_toc.html)
* [A User’s Guide to ZSH](http://zsh.sourceforge.net/Guide/)
* [Archlinux Wiki](https://wiki.archlinux.org/index.php/zsh)
* [zsh-lovers](https://grml.org/zsh/)
你有使用 zsh 提高生产力的技巧可以分享吗?我很乐意在下方评论中看到它们。
---
via: <https://opensource.com/article/18/9/tips-productivity-zsh>
作者:[Ricardo Gerardi](https://opensource.com/users/rgerardi) 选题:[lujun9972](https://github.com/lujun9972) 译者:[tnuoccalanosrep](https://github.com/tnuoccalanosrep) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The Z shell known as [zsh](http://www.zsh.org/) is a [shell](https://en.wikipedia.org/wiki/Shell_(computing)) for Linux/Unix-like operating systems. It has similarities to other shells in the `sh`
(Bourne shell) family, such as `bash`
and `ksh`
, but it provides many advanced features and powerful command line editing options, such as enhanced Tab completion.
It would be impossible to cover all the options of zsh here; there are literally hundreds of pages [documenting](http://zsh.sourceforge.net/Doc/Release/zsh_toc.html) its many features. In this article, I'll present five tips to make you more productive using the command line with zsh.
## 1. Themes and plugins
Through the years, the open source community has developed countless themes and plugins for zsh. A theme is a predefined prompt configuration, while a plugin is a set of useful aliases and functions that make it easier to use a specific command or programming language.
The quickest way to get started using themes and plugins is to use a zsh configuration framework. There are many available, but the most popular is [Oh My Zsh](https://ohmyz.sh/). By default, it enables some sensible zsh configuration options and it comes loaded with hundreds of themes and plugins.
A theme makes you more productive as it adds useful information to your prompt, such as the status of your Git repository or Python virtualenv in use. Having this information at a glance saves you from typing the equivalent commands to obtain it, and it's a cool look. Here's an example of [Powerlevel9k](https://github.com/bhilburn/powerlevel9k), my theme of choice:

The Powerlevel9k theme for zsh
In addition to themes, Oh My Zsh bundles tons of useful plugins for zsh. For example, enabling the Git plugin gives you access to a number of useful aliases, such as:
```
$ alias | grep -i git | sort -R | head -10
g=git
ga='git add'
gapa='git add --patch'
gap='git apply'
gdt='git diff-tree --no-commit-id --name-only -r'
gau='git add --update'
gstp='git stash pop'
gbda='git branch --no-color --merged | command grep -vE "^(\*|\s*(master|develop|dev)\s*$)" | command xargs -n 1 git branch -d'
gcs='git commit -S'
glg='git log --stat'
```
There are plugins available for many programming languages, packaging systems, and other tools you commonly use on the command line. Here's a list of plugins I use in my Fedora workstation:
`git golang fedora docker oc sudo vi-mode virtualenvwrapper`
## 2. Clever aliases
Aliases are very useful in zsh. Defining aliases for your most-used commands saves you a lot of typing. Oh My Zsh configures several useful aliases by default, including aliases to navigate directories and replacements for common commands with additional options such as:
```
ls='ls --color=tty'
grep='grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}'
```
In addition to command aliases, zsh enables two additional useful alias types: the *suffix alias* and the *global alias*.
A suffix alias allows you to open the file you type in the command line using the specified program based on the file extension. For example, to open YAML files using vim, define the following alias:
`alias -s {yml,yaml}=vim`
Now if you type any file name ending with `yml`
or `yaml`
in the command line, zsh opens that file using vim:
```
$ playbook.yml
# Opens file playbook.yml using vim
```
A global alias enables you to create an alias that is expanded anywhere in the command line, not just at the beginning. This is very useful to replace common filenames or piped commands. For example:
`alias -g G='| grep -i'`
To use this alias, type `G`
anywhere you would type the piped command:
```
$ ls -l G do
drwxr-xr-x. 5 rgerardi rgerardi 4096 Aug 7 14:08 Documents
drwxr-xr-x. 6 rgerardi rgerardi 4096 Aug 24 14:51 Downloads
```
Next, let's see how zsh helps to navigate the filesystem.
## 3. Easy directory navigation
When you're using the command line, navigating across different directories is one of the most common tasks. Zsh makes this easier by providing some useful directory navigation features. These features are enabled with Oh My Zsh, but you can enable them by using this command:
`setopt autocd autopushd pushdignoredups`
With these options set, you don't need to type `cd`
to change directories. Just type the directory name, and zsh switches to it:
```
$ pwd
/home/rgerardi
$ /tmp
$ pwd
/tmp
```
To move back, type `-`
:
Zsh keeps the history of directories you visited so you can quickly switch to any of them. To see the list, type `dirs -v`
:
```
$ dirs -v
0 ~
1 /var/log
2 /var/opt
3 /usr/bin
4 /usr/local
5 /usr/lib
6 /tmp
7 ~/Projects/Opensource.com/zsh-5tips
8 ~/Projects
9 ~/Projects/ansible
10 ~/Documents
```
Switch to any directory in this list by typing `~#`
where # is the number of the directory in the list. For example:
```
$ pwd
/home/rgerardi
$ ~4
$ pwd
/usr/local
```
Combine these with aliases to make it even easier to navigate:
```
d='dirs -v | head -10'
1='cd -'
2='cd -2'
3='cd -3'
4='cd -4'
5='cd -5'
6='cd -6'
7='cd -7'
8='cd -8'
9='cd -9'
```
Now you can type `d`
to see the first ten items in the list and the number to switch to it:
```
$ d
0 /usr/local
1 ~
2 /var/log
3 /var/opt
4 /usr/bin
5 /usr/lib
6 /tmp
7 ~/Projects/Opensource.com/zsh-5tips
8 ~/Projects
9 ~/Projects/ansible
$ pwd
/usr/local
$ 6
/tmp
$ pwd
/tmp
```
Finally, zsh automatically expands directory names with Tab completion. Type the first letters of the directory names and `TAB`
to use it:
```
$ pwd
/home/rgerardi
$ p/o/z (TAB)
$ Projects/Opensource.com/zsh-5tips/
```
This is just one of the features enabled by zsh's powerful Tab completion system. Let's look at some more.
## 4. Advanced Tab completion
Zsh's powerful completion system is one of its hallmarks. For simplification, I call it Tab completion, but under the hood, more than one thing is happening. There's usually expansion and command completion. I'll discuss them together here. For details, check this [User's Guide](http://zsh.sourceforge.net/Guide/zshguide06.html#l144).
Command completion is enabled by default with Oh My Zsh. To enable it, add the following lines to your `.zshrc`
file:
```
autoload -U compinit
compinit
```
Zsh's completion system is smart. It tries to suggest only items that can be used in certain contexts—for example, if you type `cd`
and `TAB`
, zsh suggests only directory names as it knows `cd`
does not work with anything else.
Conversely, it suggests usernames when running user-related commands or hostnames when using `ssh`
or `ping`
, for example.
It has a vast completion library and understands many different commands. For example, if you're using the `tar`
command, you can press Tab to see a list of files available in the package as candidates for extraction:
```
$ tar -xzvf test1.tar.gz test1/file1 (TAB)
file1 file2
```
Here's a more advanced example, using `git`
. In this example, when typing `TAB`
, zsh automatically completes the name of the only file in the repository that can be staged:
```
$ ls
original plan.txt zsh-5tips.md zsh_theme_small.png
$ git status
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: zsh-5tips.md
no changes added to commit (use "git add" and/or "git commit -a")
$ git add (TAB)
$ git add zsh-5tips.md
```
It also understands command line options and suggests only the ones that are relevant to the subcommand selected:
```
$ git commit - (TAB)
--all -a -- stage all modified and deleted paths
--allow-empty -- allow recording an empty commit
--allow-empty-message -- allow recording a commit with an empty message
--amend -- amend the tip of the current branch
--author -- override the author name used in the commit
--branch -- show branch information
--cleanup -- specify how the commit message should be cleaned up
--date -- override the author date used in the commit
--dry-run -- only show the list of paths that are to be committed or not, and any untracked
--edit -e -- edit the commit message before committing
--file -F -- read commit message from given file
--gpg-sign -S -- GPG-sign the commit
--include -i -- update the given files and commit the whole index
--interactive -- interactively update paths in the index file
--message -m -- use the given message as the commit message
... TRUNCATED ...
```
After typing `TAB`
, you can use the arrow keys to navigate the options list and select the one you need. Now you don't need to memorize all those Git options.
There are many options available. The best way to find what is most helpful to you is by using it.
## 5. Command line editing and history
Zsh's command line editing capabilities are also useful. By default, it emulates emacs. If, like me, you prefer vi/vim, enable vi bindings with the following command:
`$ bindkey -v`
If you're using Oh My Zsh, the `vi-mode`
plugin enables additional bindings and a mode indicator on your prompt—very useful.
After enabling vi bindings, you can edit the command line using vi commands. For example, press `ESC+/`
to search the command line history. While searching, pressing `n`
brings the next matching line, and `N`
the previous one. Most common vi commands work after pressing `ESC`
such as `0`
to jump to the start of the line, `$`
to jump to the end, `i`
to insert, `a`
to append, etc. Even commands followed by motion work, such as `cw`
to change a word.
In addition to command line editing, zsh provides several useful command line history features if you want to fix or re-execute previous used commands. For example, if you made a mistake, typing `fc`
brings the last command in your favorite editor to fix it. It respects the `$EDITOR`
variable and by default uses vi.
Another useful command is `r`
, which re-executes the last command; and `r <WORD>`
, which executes the last command that contains the string `WORD`
.
Finally, typing double bangs (`!!`
) brings back the last command anywhere in the line. This is useful, for instance, if you forgot to type `sudo`
to execute commands that require elevated privileges:
```
$ less /var/log/dnf.log
/var/log/dnf.log: Permission denied
$ sudo !!
$ sudo less /var/log/dnf.log
```
These features make it easier to find and re-use previously typed commands.
## Where to go from here?
These are just a few of the zsh features that can make you more productive; there are many more. For additional information, consult the following resources:
[An Introduction to the Z Shell](http://zsh.sourceforge.net/Intro/intro_toc.html)
[A User's Guide to ZSH](http://zsh.sourceforge.net/Guide/)
[Archlinux Wiki](https://wiki.archlinux.org/index.php/zsh)
[zsh-lovers](https://grml.org/zsh/)
Do you have any zsh productivity tips to share? I would love to hear about them in the comments below.
|
10,048 | 用 Hugo 30 分钟搭建静态博客 | https://opensource.com/article/18/3/start-blog-30-minutes-hugo | 2018-09-25T22:36:31 | [
"Hugo",
"博客"
] | https://linux.cn/article-10048-1.html |
>
> 了解 Hugo 如何使构建网站变得有趣。
>
>
>

你是不是强烈地想搭建博客来将自己对软件框架等的探索学习成果分享呢?你是不是面对缺乏指导文档而一团糟的项目就有一种想去改变它的冲动呢?或者换个角度,你是不是十分期待能创建一个属于自己的个人博客网站呢?
很多人在想搭建博客之前都有一些严重的迟疑顾虑:感觉自己缺乏内容管理系统(CMS)的相关知识,更缺乏时间去学习这些知识。现在,如果我说不用花费大把的时间去学习 CMS 系统、学习如何创建一个静态网站、更不用操心如何去强化网站以防止它受到黑客攻击的问题,你就可以在 30 分钟之内创建一个博客?你信不信?利用 Hugo 工具,就可以实现这一切。

Hugo 是一个基于 Go 语言开发的静态站点生成工具。也许你会问,为什么选择它?
* 无需数据库、无需需要各种权限的插件、无需跑在服务器上的底层平台,更没有额外的安全问题。
* 都是静态站点,因此拥有轻量级、快速响应的服务性能。此外,所有的网页都是在部署的时候生成,所以服务器负载很小。
* 极易操作的版本控制。一些 CMS 平台使用它们自己的版本控制软件(VCS)或者在网页上集成 Git 工具。而 Hugo,所有的源文件都可以用你所选的 VCS 软件来管理。
### 0-5 分钟:下载 Hugo,生成一个网站
直白的说,Hugo 使得写一个网站又一次变得有趣起来。让我们来个 30 分钟计时,搭建一个网站。
为了简化 Hugo 安装流程,这里直接使用 Hugo 可执行安装文件。
1. 下载和你操作系统匹配的 Hugo [版本](https://github.com/gohugoio/hugo/releases);
2. 压缩包解压到指定路径,例如 windows 系统的 `C:\hugo_dir` 或者 Linux 系统的 `~/hugo_dir` 目录;下文中的变量 `${HUGO_HOME}` 所指的路径就是这个安装目录;
3. 打开命令行终端,进入安装目录:`cd ${HUGO_HOME}`;
4. 确认 Hugo 已经启动:
* Unix 系统:`${HUGO_HOME}/[hugo version]`;
* Windows 系统:`${HUGO_HOME}\[hugo.exe version]`,例如:cmd 命令行中输入:`c:\hugo_dir\hugo version`。为了书写上的简化,下文中的 `hugo` 就是指 hugo 可执行文件所在的路径(包括可执行文件),例如命令 `hugo version` 就是指命令 `c:\hugo_dir\hugo version` 。(LCTT 译注:可以把 hugo 可执行文件所在的路径添加到系统环境变量下,这样就可以直接在终端中输入 `hugo version`)
如果命令 `hugo version` 报错,你可能下载了错误的版本。当然,有很多种方法安装 Hugo,更多详细信息请查阅 [官方文档](https://gohugo.io/getting-started/installing/)。理想情况下,应把 Hugo 可执行文件所在目录加入 PATH 环境变量;就本文的快速入门而言,执行时带上完整路径也没问题。
5. 创建一个新的站点来作为你的博客,输入命令:`hugo new site awesome-blog`;
6. 进入新创建的路径下: `cd awesome-blog`;
恭喜你!你已经创建了自己的新博客。
### 5-10 分钟:为博客设置主题
Hugo 中你可以自己构建博客的主题或者使用网上已经有的一些主题。这里选择 [Kiera](https://github.com/avianto/hugo-kiera) 主题,因为它简洁漂亮。按以下步骤来安装该主题:
1. 进入主题所在目录:`cd themes`;
2. 克隆主题:`git clone https://github.com/avianto/hugo-kiera kiera`。如果你没有安装 Git 工具:
* 从 [Github](https://github.com/avianto/hugo-kiera) 上下载 hugo 的 .zip 格式的文件;
* 解压该 .zip 文件到你的博客主题 `theme` 路径;
* 重命名 `hugo-kiera-master` 为 `kiera`;
3. 返回博客主路径:`cd awesome-blog`;
4. 激活主题;通常来说,主题(包括 Kiera)都自带文件夹 `exampleSite`,里面存放了内容配置的示例文件。激活 Kiera 主题需要拷贝它提供的 `config.toml` 到你的博客下:
* Unix 系统:`cp themes/kiera/exampleSite/config.toml .`;
* Windows 系统:`copy themes\kiera\exampleSite\config.toml .`;
* 选择 `Yes` 来覆盖原有的 `config.toml`;
5. ( 可选操作 )你可以选择可视化的方式启动服务器来验证主题是否生效:`hugo server -D` 然后在浏览器中输入 `http://localhost:1313`。可用通过在终端中输入 `Crtl+C` 来停止服务器运行。现在你的博客还是空的,但这也给你留了写作的空间。它看起来如下所示:

你已经成功的给博客设置了主题!你可以在官方 [Hugo 主题](https://themes.gohugo.io/) 网站上找到上百种漂亮的主题供你使用。
### 10-20 分钟:给博客添加内容
对于碗来说,它是空的时候用处最大,可以用来盛放东西;但对于博客来说不是这样,空博客几乎毫无用处。在这一步,你将会给博客添加内容。Hugo 和 Kiera 主题都为这个工作提供了方便性。按以下步骤来进行你的第一次提交:
1. archetypes 将会是你的内容模板。
2. 添加主题中的 archtypes 至你的博客:
* Unix 系统: `cp themes/kiera/archetypes/* archetypes/`
* Windows 系统:`copy themes\kiera\archetypes\* archetypes\`
* 选择 `Yes` 来覆盖原来的 `default.md` 内容架构类型
3. 创建博客 posts 目录:
* Unix 系统: `mkdir content/posts`
* Windows 系统: `mkdir content\posts`
4. 利用 Hugo 生成你的 post:
* Unix 系统:`hugo new posts/first-post.md`;
* Windows 系统:`hugo new posts\first-post.md`;
5. 在文本编辑器中打开这个新建的 post 文件:
* Unix 系统:`gedit content/posts/first-post.md`;
* Windows 系统:`notepad content\posts\first-post.md`;
此刻,你可以尽情发挥了。注意你的文章文件包括两个部分。第一部分以 `+++` 符号分隔,包括了文章的元数据,例如标题、日期等。在 Hugo 中,这叫做前置数据(front matter)。在前置数据之后,才是正文。下面编辑第一篇文章的内容:
```
+++
title = "First Post"
date = 2018-03-03T13:23:10+01:00
draft = false
tags = ["Getting started"]
categories = []
+++
Hello Hugo world! No more excuses for having no blog or documentation now!
```
现在你要做的就是启动你的服务器:`hugo server -D`;然后打开浏览器,输入 `http://localhost:1313/`。

### 20-30 分钟:调整网站
前面的工作很完美,但还有一些问题需要解决。例如,简单地命名你的站点:
1. 终端中按下 `Ctrl+C` 以停止服务器。
2. 打开 `config.toml`,编辑博客的名称,版权,你的姓名,社交网站等等。
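编辑后的 `config.toml` 开头可能类似下面这样(字段名沿用 Kiera 主题的示例配置,具体取值仅作演示):

```
baseURL = "https://example.com/"
languageCode = "en-us"
title = "Awesome Blog"
theme = "kiera"
copyright = "© 2018 Your Name"
```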
当你再次启动服务器后,你会发现博客私人订制味道更浓了。不过,还少一个重要的基础内容:主菜单。快速的解决这个问题。返回 `config.toml` 文件,在末尾插入如下一段:
```
[[menu.main]]
name = "Home" #Name in the navigation bar
weight = 10 #The larger the weight, the more on the right this item will be
url = "/" #URL address
[[menu.main]]
name = "Posts"
weight = 20
url = "/posts/"
```
上面这段代码添加了 `Home` 和 `Posts` 到主菜单中。你还需要一个 `About` 页面。这次是创建一个 `.md` 文件,而不是编辑 `config.toml` 文件:
1. 创建 `about.md` 文件:`hugo new about.md`。注意它是 `about.md`,不是 `posts/about.md`。About 页面不是博客文章,所以你不希望它出现在 Posts 列表中。
2. 用文本编辑器打开该文件,输入如下一段:
```
+++
title = "About"
date = 2018-03-03T13:50:49+01:00
menu = "main" #Display this page on the nav menu
weight = "30" #Right-most nav item
meta = "false" #Do not display tags or categories
+++
> Waves are the practice of the water. Shunryu Suzuki
```
当你启动你的服务器并输入:`http://localhost:1313/`,你将会看到你的博客。(访问我 Gihub 主页上的 [例子](https://m-czernek.github.io/awesome-blog/) )如果你想让文章的菜单栏和 Github 相似,给 `themes/kiera/static/css/styles.css` 打上这个 [补丁](https://github.com/avianto/hugo-kiera/pull/18/files)。
---
via: <https://opensource.com/article/18/3/start-blog-30-minutes-hugo>
作者:[Marek Czernek](https://opensource.com/users/mczernek)
译者:[jrg](https://github.com/jrglinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Do you want to start a blog to share your latest adventures with various software frameworks? Do you love a project that is poorly documented and want to fix that? Or do you just want to create a personal website?
Many people who want to start a blog have a significant caveat: lack of knowledge about a content management system (CMS) or time to learn. Well, what if I said you don't need to spend days learning a new CMS, setting up a basic website, styling it, and hardening it against attackers? What if I said you could create a blog in 30 minutes, start to finish, with [Hugo](https://gohugo.io/)?

opensource.com
Hugo is a static site generator written in Go. Why use Hugo, you ask?
- Because there is no database, no plugins requiring any permissions, and no underlying platform running on your server, there's no added security concern.
- The blog is a set of static websites, which means lightning-fast serve time. Additionally, all pages are rendered at deploy time, so your server's load is minimal.
- Version control is easy. Some CMS platforms use their own version control system (VCS) or integrate Git into their interface. With Hugo, all your source files can live natively on the VCS of your choice.
To put it bluntly, Hugo is here to make writing a website fun again. Let's time the 30 minutes, shall we?
## Minutes 0-5: Download Hugo and generate a site
To simplify the installation of Hugo, download the binary file. To do so:
- Download the appropriate
[archive](https://github.com/gohugoio/hugo/releases)for your operating system. - Unzip the archive into a directory of your choice, for example
`C:\hugo_dir`
or`~/hugo_dir`
; this path will be referred to as`${HUGO_HOME}`
. - Open the command line and change into your directory:
`cd ${HUGO_HOME}`
. - Verify that Hugo is working:
- On Unix:
`${HUGO_HOME}/[hugo version]`
- On Windows:
`${HUGO_HOME}\[hugo.exe version]`
For example,`c:\hugo_dir\hugo version`
.
For simplicity, I'll refer to the path to the Hugo binary (including the binary) as
`hugo`
. For example,`hugo version`
would translate to`C:\hugo_dir\hugo version`
on your computer.If you get an error message, you may have downloaded the wrong version. Also note there are many possible ways to install Hugo. See the
[official documentation](https://gohugo.io/getting-started/installing/)for more information. Ideally, you put the Hugo binary on PATH. For this quick start, it's fine to use the full path of the Hugo binary. - On Unix:
- Create a new site that will become your blog:
`hugo new site awesome-blog`
. - Change into the newly created directory:
`cd awesome-blog`
.
Congratulations! You have just created your new blog.
## Minutes 5-10: Theme your blog
With Hugo, you can either theme your blog yourself or use one of the beautiful, ready-made [themes](https://themes.gohugo.io/). I chose [Kiera](https://github.com/avianto/hugo-kiera) because it is deliciously simple. To install the theme:
- Change into the themes directory:
`cd themes`
. - Clone your theme:
`git clone https://github.com/avianto/hugo-kiera kiera`
. If you do not have Git installed:- Download the .zip file from
[GitHub](https://github.com/avianto/hugo-kiera). - Unzip it to your site's
`themes`
directory. - Rename the directory from
`hugo-kiera-master`
to`kiera`
.
- Download the .zip file from
- Change the directory to the awesome-blog level:
`cd awesome-blog`
. - Activate the theme. Themes (including Kiera) often come with a directory called
`exampleSite`
, which contains example content and an example settings file. To activate Kiera, copy the provided`config.toml`
file to your blog:- On Unix:
`cp themes/kiera/exampleSite/config.toml .`
- On Windows:
`copy themes\kiera\exampleSite\config.toml .`
- Confirm
`Yes`
to override the old`config.toml`
- On Unix:
- (Optional) You can start your server to visually verify the theme is activated:
`hugo server -D`
and access`http://localhost:1313`
in your web browser. Once you've reviewed your blog, you can turn off the server by pressing`Ctrl+C`
in the command line. Your blog is empty, but we're getting someplace. It should look something like this:

opensource.com
You have just themed your blog! You can find hundreds of beautiful themes on the official [Hugo themes](https://themes.gohugo.io/) site.
## Minutes 10-20: Add content to your blog
Whereas a bowl is most useful when it is empty, this is not the case for a blog. In this step, you'll add content to your blog. Hugo and the Kiera theme simplify this process. To add your first post:
- Article archetypes are templates for your content.
- Add theme archetypes to your blog site:
- On Unix:
`cp themes/kiera/archetypes/* archetypes/`
- On Windows:
`copy themes\kiera\archetypes\* archetypes\`
- Confirm
`Yes`
to override the`default.md`
archetype
- On Unix:
- Create a new directory for your blog posts:
- On Unix:
`mkdir content/posts`
- On Windows:
`mkdir content\posts`
- On Unix:
- Use Hugo to generate your post:
- On Unix:
`hugo new posts/first-post.md`
- On Windows:
`hugo new posts\first-post.md`
- On Unix:
- Open the new post in a text editor of your choice:
- On Unix:
`gedit content/posts/first-post.md`
- On Windows:
`notepad content\posts\first-post.md`
- On Unix:
At this point, you can go wild. Notice that your post consists of two sections. The first one is separated by `+++`
. It contains metadata about your post, such as its title. In Hugo, this is called *front matter*. After the front matter, the article begins. Create the first post:
```
``````
+++
title = "First Post"
date = 2018-03-03T13:23:10+01:00
draft = false
tags = ["Getting started"]
categories = []
+++
Hello Hugo world! No more excuses for having no blog or documentation now!
```
All you need to do now is start the server: `hugo server -D`
. Open your browser and enter: `http://localhost:1313/`
.

opensource.com
## Minutes 20-30: Tweak your site
What we've done is great, but there are still a few niggles to iron out. For example, naming your site is simple:
- Stop your server by pressing
`Ctrl+C`
on the command line. - Open
`config.toml`
and edit settings such as the blog's title, copyright, name, your social network links, etc.
When you start your server again, you'll see your blog has a bit more personalization. One more basic thing is missing: menus. That's a quick fix as well. Back in `config.toml`
, insert the following at the bottom:
```
``````
[[menu.main]]
name = "Home" #Name in the navigation bar
weight = 10 #The larger the weight, the more on the right this item will be
url = "/" #URL address
[[menu.main]]
name = "Posts"
weight = 20
url = "/posts/"
```
This adds menus for Home and Posts. You still need an About page. Instead of referencing it from the `config.toml`
file, reference it from a markdown file:
- Create an About file:
`hugo new about.md`
. Notice that it's`about.md`
, not`posts/about.md`
. The About page is not a blog post, so you don't want it displayed in the Posts section. - Open the file in a text editor and enter the following:
```
``````
+++
title = "About"
date = 2018-03-03T13:50:49+01:00
menu = "main" #Display this page on the nav menu
weight = "30" #Right-most nav item
meta = "false" #Do not display tags or categories
+++
> Waves are the practice of the water. Shunryu Suzuki
```
When you start your Hugo server and open `http://localhost:1313/`
, you should see your new blog ready to be used. (Check out [my example](https://m-czernek.github.io/awesome-blog/) on my GitHub page.) If you'd like to change the active style of menu items to make the padding slightly nicer (like the GitHub live version), apply [this patch](https://github.com/avianto/hugo-kiera/pull/18/files) to your `themes/kiera/static/css/styles.css`
file.
|
10,049 | 公钥基础设施和密码学中的私钥的角色 | https://opensource.com/article/18/7/private-keys | 2018-09-26T09:09:11 | [
"PKI",
"公钥",
"证书",
"SSL"
] | /article-10049-1.html |
>
> 了解如何验证某人所声称的身份。
>
>
>

在[上一篇文章](/article-9792-1.html)中,我们概述了密码学并讨论了密码学的核心概念:<ruby> 保密性 <rt> confidentiality </rt></ruby> (让数据保密)、<ruby> 完整性 <rt> integrity </rt></ruby> (防止数据被篡改)和<ruby> 身份认证 <rt> authentication </rt></ruby> (确认数据源的<ruby> 身份 <rt> identity </rt></ruby>)。由于要在存在各种身份混乱的现实世界中完成身份认证,人们逐渐建立起一个复杂的<ruby> 技术生态体系 <rt> technological ecosystem </rt></ruby>,用于证明某人就是其声称的那个人。在本文中,我们将大致介绍这些体系是如何工作的。
### 快速回顾公钥密码学及数字签名
互联网世界中的身份认证依赖于公钥密码学,其中密钥分为两部分:拥有者需要保密的私钥和可以对外公开的公钥。经过公钥加密过的数据,只能用对应的私钥解密。举个例子,对于希望与[记者](https://theintercept.com/2014/10/28/smuggling-snowden-secrets/)建立联系的举报人来说,这个特性非常有用。但就本文介绍的内容而言,私钥更重要的用途是与一个消息一起创建一个<ruby> 数字签名 <rt> digital signature </rt></ruby>,用于提供完整性和身份认证。
在实际应用中,我们签名的并不是真实消息,而是经过<ruby> 密码学哈希函数 <rt> cryptographic hash function </rt></ruby>处理过的消息<ruby> 摘要 <rt> digest </rt></ruby>。要发送一个包含源代码的压缩文件,发送者会对该压缩文件的 256 比特长度的 [SHA-256](https://en.wikipedia.org/wiki/SHA-2) 摘要进行签名,而不是对文件本身进行签名,然后用明文发送该压缩包(和签名)。接收者会独立计算收到文件的 SHA-256 摘要,然后结合该摘要、收到的签名及发送者的公钥,使用签名验证算法进行验证。验证过程取决于加密算法,加密算法不同,验证过程也相应不同;而且,很微妙的是签名验证[漏洞](https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html)依然[层出不穷](https://www.imperialviolet.org/2014/09/26/pkcs1.html)。如果签名验证通过,说明文件在传输过程中没有被篡改而且来自于发送者,这是因为只有发送者拥有创建签名所需的私钥。
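上述“对摘要签名、再验证签名”的流程,可以用 openssl 命令行直观地演示一遍(这只是一个最小示例,文件名均为演示用途):

```shell
# 生成一对 RSA 密钥:私钥保密,公钥可以公开
openssl genpkey -algorithm RSA -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem

# 发送者:对消息的 SHA-256 摘要进行签名
echo "hello" > msg.txt
openssl dgst -sha256 -sign priv.pem -out msg.sig msg.txt

# 接收者:独立计算摘要并用公钥验证签名
openssl dgst -sha256 -verify pub.pem -signature msg.sig msg.txt   # 输出:Verified OK
```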
### 方案中缺失的环节
上述方案中缺失了一个重要的环节:我们从哪里获得发送者的公钥?发送者可以将公钥与消息一起发送,但除了发送者的自我宣称,我们无法核验其身份。假设你是一名银行柜员,一名顾客走过来向你说,“你好,我是 Jane Doe,我要取一笔钱”。当你要求其证明身份时,她指着衬衫上贴着的姓名标签说道,“看,Jane Doe!”。如果我是这个柜员,我会礼貌的拒绝她的请求。
如果你认识发送者,你们可以私下见面并彼此交换公钥。如果你并不认识发送者,你们可以私下见面,检查对方的证件,确认真实性后接受对方的公钥。为提高流程效率,你可以举办[聚会](https://en.wikipedia.org/wiki/Key_signing_party)并邀请一堆人,检查他们的证件,然后接受他们的公钥。此外,如果你认识并信任 Jane Doe(尽管她在银行的表现比较反常),Jane 可以参加聚会,收集大家的公钥然后交给你。事实上,Jane 可以使用她自己的私钥对这些公钥(及对应的身份信息)进行签名,进而你可以从一个[线上密钥库](https://en.wikipedia.org/wiki/Key_server_(cryptographic))获取公钥(及对应的身份信息)并信任已被 Jane 签名的那部分。如果一个人的公钥被很多你信任的人(即使你并不认识他们)签名,你也可能选择信任这个人。按照这种方式,你可以建立一个<ruby> <a href="https://en.wikipedia.org/wiki/Web_of_trust"> 信任网络 </a> <rt> Web of Trust </rt></ruby>。
但事情也变得更加复杂:我们需要建立一种标准的编码机制,可以将公钥和其对应的身份信息编码成一个<ruby> 数字捆绑 <rt> digital bundle </rt></ruby>,以便我们进一步进行签名。更准确的说,这类数字捆绑被称为<ruby> 证书 <rt> cerificate </rt></ruby>。我们还需要可以创建、使用和管理这些证书的工具链。满足诸如此类的各种需求的方案构成了<ruby> 公钥基础设施 <rt> public key infrastructure </rt></ruby>(PKI)。
### 比信任网络更进一步
你可以用人际关系网类比信任网络。如果人们之间广泛互信,可以很容易找到(两个人之间的)一条<ruby> 短信任链 <rt> short path of trust </rt></ruby>:就像一个社交圈。基于 [GPG](https://www.gnupg.org/gph/en/manual/x547.html) 加密的邮件依赖于信任网络,([理论上](https://blog.cryptographyengineering.com/2014/08/13/whats-matter-with-pgp/))只适用于与少量朋友、家庭或同事进行联系的情形。
(LCTT 译注:作者提到的“短信任链”应该是暗示“六度空间理论”,即任意两个陌生人之间所间隔的人一般不会超过 6 个。对 GPG 的唱衰,一方面是因为密钥管理的复杂性没有改善,另一方面 Yahoo 和 Google 都提出了更便利的端到端加密方案。)
在实际应用中,信任网络有一些“<ruby> <a href="https://lists.torproject.org/pipermail/tor-talk/2013-September/030235.html"> 硬伤 </a> <rt> significant problems </rt></ruby>”,主要是在可扩展性方面。当网络规模逐渐增大或者人们之间的连接较少时,信任网络就会慢慢失效。如果信任链逐渐变长,信任链中某人有意或无意误签证书的几率也会逐渐增大。如果信任链不存在,你不得不自己创建一条信任链,与其它组织建立联系,验证它们的密钥以符合你的要求。考虑下面的场景,你和你的朋友要访问一个从未使用过的在线商店。你首先需要核验网站所用的公钥属于其对应的公司而不是伪造者,进而建立安全通信信道,最后完成下订单操作。核验公钥的方法包括去实体店、打电话等,都比较麻烦。这样会导致在线购物变得不那么便利(或者说不那么安全,毕竟很多人会图省事,不去核验密钥)。
如果世界上有那么几个格外值得信任的人,他们专门负责核验和签发网站证书,情况会怎样呢?你可以只信任他们,那么浏览互联网也会变得更加容易。整体来看,这就是当今互联网的工作方式。那些“格外值得信任的人”就是被称为<ruby> 证书颁发机构 <rt> cerificate authoritie </rt></ruby>(CA)的公司。当网站希望获得公钥签名时,只需向 CA 提交<ruby> 证书签名请求 <rt> certificate signing request </rt></ruby>(CSR)。
CSR 类似于包括公钥和身份信息(在本例中,即服务器的主机名)的<ruby> 存根 <rt> stub </rt></ruby>证书,但 CA 并不会直接对 CSR 本身进行签名。CA 在签名之前会进行一些验证。对于一些证书类型(LCTT 译注:<ruby> 域名证实 <rt> Domain Validated </rt></ruby>(DV) 类型),CA 只验证申请者的确是 CSR 中列出主机名对应域名的控制者(例如通过邮件验证,让申请者完成指定的域名解析)。[对于另一些证书类型](https://en.wikipedia.org/wiki/Extended_Validation_Certificate) (LCTT 译注:链接中提到<ruby> 扩展证实 <rt> Extended Validated </rt></ruby>(EV)类型,其实还有 <ruby> OV <rt> Organization Validated </rt></ruby> 类型),CA 还会检查相关法律文书,例如公司营业执照等。一旦验证完成,CA(一般在申请者付费后)会从 CSR 中取出数据(即公钥和身份信息),使用 CA 自己的私钥进行签名,创建一个(签名)证书并发送给申请者。申请者将该证书部署在网站服务器上,当用户使用 HTTPS (或其它基于 [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) 加密的协议)与服务器通信时,该证书被分发给用户。
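生成 CSR 的过程同样可以用 openssl 演示(主机名 `www.example.com` 仅作演示;真实流程中,生成的 CSR 会提交给 CA 签名):

```shell
# 生成私钥,以及包含公钥和主机名信息的 CSR
openssl req -new -newkey rsa:2048 -nodes -keyout server.key \
  -subj "/CN=www.example.com" -out server.csr 2>/dev/null

# 查看 CSR 中的身份信息
openssl req -in server.csr -noout -subject
```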
当用户访问该网站时,浏览器获取该证书,接着检查证书中的主机名是否与当前正在连接的网站一致(下文会详细说明),核验 CA 签名有效性。如果其中一步验证不通过,浏览器会给出安全警告并切断与网站的连接。反之,如果验证通过,浏览器会使用证书中的公钥来核验该服务器发送的签名信息,确认该服务器持有该证书的私钥。有几种算法用于协商后续通信用到的<ruby> 共享密钥 <rt> shared secret key </rt></ruby>,其中一种也用到了服务器发送的签名信息。<ruby> 密钥交换 <rt> key exchange </rt></ruby>算法不在本文的讨论范围,可以参考这个[视频](https://www.youtube.com/watch?v=YEBfamv-_do),其中仔细说明了一种密钥交换算法。
### 建立信任
你可能会问,“如果 CA 使用其私钥对证书进行签名,也就意味着我们需要使用 CA 的公钥验证证书。那么 CA 的公钥从何而来,谁对其进行签名呢?” 答案是 CA 对自己签名!可以使用证书公钥对应的私钥,对证书本身进行签名!这类签名证书被称为是<ruby> 自签名的 <rt> self-signed </rt></ruby>;在 PKI 体系下,这意味着对你说“相信我”。(为了表达方便,人们通常说用证书进行了签名,虽然真正用于签名的私钥并不在证书中。)
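可以用 openssl 生成一个自签名证书,并观察它的特征:签发者(issuer)与主体(subject)完全相同(证书名称仅作演示):

```shell
# -x509 表示直接输出自签名证书,而非 CSR
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
  -subj "/CN=Demo Root CA" -days 1 -out ca.crt 2>/dev/null

# 自签名证书的 subject 与 issuer 相同
openssl x509 -in ca.crt -noout -subject -issuer
```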
通过遵守[浏览器](https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/)和[操作系统](https://technet.microsoft.com/en-us/library/cc751157.aspx)供应商建立的规则,CA 表明自己足够可靠并寻求加入到浏览器或操作系统预装的一组自签名证书中。这些证书被称为“<ruby> 信任锚 <rt> trust anchor </rt></ruby>”或 <ruby> CA 根证书 <rt> root CA certificate </rt></ruby>,被存储在根证书区,我们<ruby> 约定 <rt> implicitly </rt></ruby>信任该区域内的证书。
CA 也可以签发一种特殊的证书,该证书自身可以作为 CA。在这种情况下,它们可以生成一个证书链。要核验证书链,需要从“信任锚”(也就是 CA 根证书)开始,使用当前证书的公钥核验下一层证书的签名(或其它一些信息)。按照这个方式依次核验下一层证书,直到证书链底部。如果整个核验过程没有问题,信任链也建立完成。当向 CA 付费为网站签发证书时,实际购买的是将证书放置在证书链下的权利。CA 将卖出的证书标记为“不可签发子证书”,这样它们可以在适当的长度终止信任链(防止其继续向下扩展)。
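上述签发与核验链条可以用 `openssl` 在本地完整模拟一遍(下面的 “Demo Root CA” 和域名都是演示用的假设值):

```shell
# 1. 生成一个自签名的“根 CA”证书(对应信任锚)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=Demo Root CA" -days 1 2>/dev/null

# 2. 生成服务器私钥与 CSR
openssl req -new -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=www.example.com" 2>/dev/null

# 3. 用 CA 的私钥对 CSR 签名,得到服务器证书
openssl x509 -req -in srv.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out srv.crt -days 1 2>/dev/null

# 4. 以 ca.crt 为信任锚核验证书链,成功时输出 “srv.crt: OK”
openssl verify -CAfile ca.crt srv.crt
```

把第 1 步的根 CA 换成一个中间 CA 证书,再往上多签一层,就得到了正文描述的多级信任链。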
为何要使用长度超过 2 的信任链呢?毕竟网站的证书可以直接被 CA 根证书签名。在实际应用中,很多因素促使 CA 创建<ruby> 中间 CA 证书 <rt> intermediate CA certificate </rt></ruby>,最主要是为了方便。由于价值连城,CA 根证书对应的私钥通常被存放在特定的设备中,一种需要多人解锁的<ruby> 硬件安全模块 <rt> hardware security module </rt></ruby>(HSM),该模块完全离线并被保管在配备监控和报警设备的[地下室](https://arstechnica.com/information-technology/2012/11/inside-symantecs-ssl-certificate-vault/)中。
<ruby> CA/浏览器论坛 <rt> CAB Forum, CA/Browser Forum </rt></ruby>负责管理 CA,[要求](https://cabforum.org/baseline-requirements-documents/)任何与 CA 根证书(LCTT 译注:就像前文提到的那样,这里是指对应的私钥)相关的操作必须由人工完成。设想一下,如果每个证书请求都需要员工将请求内容拷贝到保密介质中、进入地下室、与同事一起解锁 HSM、(使用 CA 根证书对应的私钥)签名证书,最后将签名证书从保密介质中拷贝出来;那么每天为大量网站签发证书是相当繁重乏味的工作。因此,CA 创建内部使用的中间 CA,用于证书签发自动化。
如果想查看证书链,可以在 Firefox 中点击地址栏的锁型图标,接着打开页面信息,然后点击“安全”面板中的“查看证书”按钮。在本文写作时,[opensource.com](http://opensource.com) 使用的证书链如下:
```
DigiCert High Assurance EV Root CA
DigiCert SHA2 High Assurance Server CA
opensource.com
```
### 中间人
我之前提到,浏览器需要核验证书中的主机名与已经建立连接的主机名一致。为什么需要这一步呢?要回答这个问题,需要了解所谓的[<ruby> 中间人攻击 <rt> man-in-the-middle, MITM </rt></ruby>](http://www.shortestpathfirst.net/2010/11/18/man-in-the-middle-mitm-attacks-explained-arp-poisoining/)。有一类[网络攻击](http://www.shortestpathfirst.net/2010/11/18/man-in-the-middle-mitm-attacks-explained-arp-poisoining/)可以让攻击者将自己置身于客户端和服务端中间,冒充客户端与服务端连接,同时冒充服务端与客户端连接。如果网络流量是通过 HTTPS 传输的,加密的流量无法被窃听。此时,攻击者会创建一个代理,接收来自受害者的 HTTPS 连接,解密信息后构建一个新的 HTTPS 连接到原始目的地(即服务端)。为了建立假冒的 HTTPS 连接,代理必须返回一个攻击者具有对应私钥的证书。攻击者可以生成自签名证书,但受害者的浏览器并不会信任该证书,因为它并不是根证书库中的 CA 根证书签发的。换一个方法,攻击者使用一个受信任 CA 签发但主机名对应其自有域名的证书,结果会怎样呢?
再回到银行的那个例子,我们是银行柜员,一位男性顾客进入银行要求从 Jane Doe 的账户上取钱。当被要求提供身份证明时,他给出了 Joe Smith 的有效驾驶执照。如果这个交易可以完成,我们无疑会被银行开除。类似的,如果检测到证书中的主机名与连接对应的主机名不一致,浏览器会给出类似“连接不安全”的警告和查看更多内容的选项。在 Firefox 中,这类错误被标记为 `SSL_ERROR_BAD_CERT_DOMAIN`。
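浏览器的主机名核验这一步,可以借助 `openssl x509` 的 `-checkhost` 选项在本地模拟(证书与主机名均为演示用的假设值):

```shell
# 生成一张 CN=www.example.com 的自签名证书
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.crt \
  -subj "/CN=www.example.com" -days 1 2>/dev/null

# 主机名与证书一致:输出 “... does match certificate”
openssl x509 -in t.crt -noout -checkhost www.example.com

# 主机名与证书不一致:输出 “... does NOT match certificate”,
# 这正对应浏览器给出 SSL_ERROR_BAD_CERT_DOMAIN 警告的场景
openssl x509 -in t.crt -noout -checkhost evil.example.net
```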
我希望你阅读完本文起码记住这一点:如果看到这类警告,**不要无视它们**!它们出现意味着,或者该网站配置存在严重问题(不推荐访问),或者你已经是中间人攻击的潜在受害者。
### 总结
虽然本文只触及了 PKI 世界的一些皮毛,我希望我已经为你展示了便于后续探索的大致蓝图。密码学和 PKI 是美与复杂性的结合体。越深入研究,越能发现更多的美和复杂性,就像分形那样。
---
via: <https://opensource.com/article/18/7/private-keys>
作者:[Alex Wood](https://opensource.com/users/awood) 选题:[lujun9972](https://github.com/lujun9972) 译者:[pinewall](https://github.com/pinewall) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
---

## 如何在 Linux 上使用 tcpdump 命令捕获和分析数据包

(发布于 2018-09-26,发布链接:<https://linux.cn/article-10050-1.html>)

`tcpdump` 是一个有名的命令行**数据包分析**工具。我们可以使用 `tcpdump` 命令捕获实时 TCP/IP 数据包,这些数据包也可以保存到文件中。之后这些捕获的数据包可以通过 `tcpdump` 命令进行分析。`tcpdump` 命令在网络层面进行故障排除时变得非常方便。

`tcpdump` 在大多数 Linux 发行版中都可用。对于基于 Debian 的 Linux,可以使用 `apt` 命令安装它。
```
# apt install tcpdump -y
```
在基于 RPM 的 Linux 操作系统上,可以使用下面的 `yum` 命令安装 `tcpdump`。
```
# yum install tcpdump -y
```
当我们不加任何选项运行 `tcpdump` 命令时,它将捕获所有接口的数据包。要停止或取消 `tcpdump` 命令,请按 `ctrl+c`。在本教程中,我们将使用不同的实例来讨论如何捕获和分析数据包。
### 示例:1)从特定接口捕获数据包
当我们不加任何选项运行 `tcpdump` 命令时,它将捕获所有接口上的数据包;因此,要从特定接口捕获数据包,请使用选项 `-i`,后跟接口名称。
语法:
```
# tcpdump -i {接口名}
```
假设我想从接口 `enp0s3` 捕获数据包,即运行 `tcpdump -i enp0s3`。

输出将如下所示,
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
06:43:22.905890 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952160:21952540, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 380
06:43:22.906045 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952540:21952760, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
06:43:22.906150 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952760:21952980, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
06:43:22.906291 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [.], ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 0
06:43:22.906303 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [P.], seq 13537:13609, ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 72
06:43:22.906322 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952980:21953200, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
^C
109930 packets captured
110065 packets received by filter
133 packets dropped by kernel
[root@compute-0-1 ~]#
```
### 示例:2)从特定接口捕获特定数量数据包
假设我们想从特定接口(如 `enp0s3`)捕获 12 个数据包,这可以使用选项 `-c {数量} -i {接口名称}` 轻松实现。
```
root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3
```
上面的命令将生成如下所示的输出,
[](https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface.jpg)
### 示例:3)显示 tcpdump 的所有可用接口
使用 `-D` 选项显示 `tcpdump` 命令的所有可用接口,
```
[root@compute-0-1 ~]# tcpdump -D
1.enp0s3
2.enp0s8
3.ovs-system
4.br-int
5.br-tun
6.nflog (Linux netfilter log (NFLOG) interface)
7.nfqueue (Linux netfilter queue (NFQUEUE) interface)
8.usbmon1 (USB bus number 1)
9.usbmon2 (USB bus number 2)
10.qbra692e993-28
11.qvoa692e993-28
12.qvba692e993-28
13.tapa692e993-28
14.vxlan_sys_4789
15.any (Pseudo-device that captures on all interfaces)
16.lo [Loopback]
[root@compute-0-1 ~]#
```
我是在我的一个 OpenStack 计算节点上运行 `tcpdump` 命令的,这就是为什么在输出中你会看到许多接口,包括 tap 接口、网桥和 vxlan 接口。
### 示例:4)捕获带有可读时间戳的数据包(`-tttt` 选项)
默认情况下,在 `tcpdump` 命令输出中,不显示可读性好的时间戳,如果您想将可读性好的时间戳与每个捕获的数据包相关联,那么使用 `-tttt` 选项,示例如下所示,
```
[root@compute-0-1 ~]# tcpdump -c 8 -tttt -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
2018-08-25 23:23:36.954883 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1449206247:1449206435, ack 3062020950, win 291, options [nop,nop,TS val 86178422 ecr 21583714], length 188
2018-08-25 23:23:36.955046 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13585, options [nop,nop,TS val 21583717 ecr 86178422], length 0
2018-08-25 23:23:37.140097 IP controller0.example.com.amqp > compute-0-1.example.com.57818: Flags [P.], seq 814607956:814607964, ack 2387094506, win 252, options [nop,nop,TS val 86172228 ecr 86176695], length 8
2018-08-25 23:23:37.140175 IP compute-0-1.example.com.57818 > controller0.example.com.amqp: Flags [.], ack 8, win 237, options [nop,nop,TS val 86178607 ecr 86172228], length 0
2018-08-25 23:23:37.355238 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [P.], seq 1080415080:1080417400, ack 1690909362, win 237, options [nop,nop,TS val 86178822 ecr 86163054], length 2320
2018-08-25 23:23:37.357119 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 2320, win 1432, options [nop,nop,TS val 86172448 ecr 86178822], length 0
2018-08-25 23:23:37.357545 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [P.], seq 1:22, ack 2320, win 1432, options [nop,nop,TS val 86172449 ecr 86178822], length 21
2018-08-25 23:23:37.357572 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 22, win 237, options [nop,nop,TS val 86178825 ecr 86172449], length 0
8 packets captured
134 packets received by filter
69 packets dropped by kernel
[root@compute-0-1 ~]#
```
### 示例:5)捕获数据包并将其保存到文件(`-w` 选项)
使用 `tcpdump` 命令中的 `-w` 选项将捕获的 TCP/IP 数据包保存到一个文件中,以便我们可以在将来分析这些数据包以供进一步分析。
语法:
```
# tcpdump -w 文件名.pcap -i {接口名}
```
注意:文件扩展名必须为 `.pcap`。
假设我要把 `enp0s3` 接口捕获到的数据包保存到名为 `enp0s3-26082018.pcap` 的文件中。
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
```
上述命令将生成如下所示的输出,
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
^C841 packets captured
845 packets received by filter
0 packets dropped by kernel
[root@compute-0-1 ~]# ls
anaconda-ks.cfg enp0s3-26082018.pcap
[root@compute-0-1 ~]#
```
捕获并保存大小**大于 N 字节**的数据包。
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-2.pcap greater 1024
```
捕获并保存大小**小于 N 字节**的数据包。
```
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-3.pcap less 1024
```
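顺带一提,`-w` 保存的 `.pcap` 文件是一种固定的二进制格式:文件以 24 字节的全局文件头开始,头部前 4 个字节是魔数 `0xa1b2c3d4`(在小端机器上按字节序写出即 `d4 c3 b2 a1`)。下面手工构造一个只含文件头的空 pcap 文件来观察这一点(文件名 `empty.pcap` 为演示假设):

```shell
# 按 pcap 格式写出 24 字节的全局文件头:
# 魔数 + 版本 2.4 + 时区/时间精度(均为 0)+ 快照长度 65535 + 链路类型 1(以太网)
printf '\324\303\262\241\002\000\004\000\000\000\000\000\000\000\000\000\377\377\000\000\001\000\000\000' > empty.pcap

# 查看前 4 个字节,输出应包含 d4 c3 b2 a1
od -An -t x1 -N 4 empty.pcap
```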
### 示例:6)从保存的文件中读取数据包(`-r` 选项)
在上面的例子中,我们已经将捕获的数据包保存到文件中,我们可以使用选项 `-r` 从文件中读取这些数据包,例子如下所示,
```
[root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap
```
用可读性高的时间戳读取包内容,
```
[root@compute-0-1 ~]# tcpdump -tttt -r enp0s3-26082018.pcap
reading from file enp0s3-26082018.pcap, link-type EN10MB (Ethernet)
2018-08-25 22:03:17.249648 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1426167803:1426167927, ack 3061962134, win 291, options
[nop,nop,TS val 81358717 ecr 20378789], length 124
2018-08-25 22:03:17.249840 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 124, win 564, options [nop,nop,TS val 20378791 ecr 81358
717], length 0
2018-08-25 22:03:17.454559 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 1079416895, win 1432, options [nop,nop,TS v
al 81352560 ecr 81353913], length 0
2018-08-25 22:03:17.454642 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 8135892
2 ecr 81317504], length 0
2018-08-25 22:03:17.646945 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [.], seq 106760587:106762035, ack 688390730, win 237
, options [nop,nop,TS val 81359114 ecr 81350901], length 1448
2018-08-25 22:03:17.647043 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [P.], seq 1448:1956, ack 1, win 237, options [nop,no
p,TS val 81359114 ecr 81350901], length 508
2018-08-25 22:03:17.647502 IP controller0.example.com.amqp > compute-0-1.example.com.57788: Flags [.], ack 1956, win 1432, options [nop,nop,TS val 813
52753 ecr 81359114], length 0
.........................................................................................................................
```
### 示例:7)以 IP 地址显示特定接口上的数据包(`-n` 选项)

使用 `tcpdump` 命令中的 `-n` 选项,可以禁止将 IP 地址解析为主机名,让特定接口上捕获的数据包直接以 IP 地址显示,示例如下所示,
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3
```
上述命令输出如下,
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:22:28.537904 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433301395:1433301583, ack 3061976250, win 291, options [nop,nop,TS val 82510005 ecr 20666610], length 188
22:22:28.538173 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20666613 ecr 82510005], length 0
22:22:28.538573 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 364
22:22:28.538736 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
22:22:28.538874 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
22:22:28.539042 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
22:22:28.539178 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
22:22:28.539282 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
22:22:28.539479 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666614], length 340
22:22:28.539595 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1572, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
22:22:28.539760 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1572:1912, ack 1, win 291, options [nop,nop,TS val 82510007 ecr 20666614], length 340
.........................................................................
```
您还可以组合使用 `-c` 和 `-n` 选项,只捕获指定数量(N 个)的数据包,
```
[root@compute-0-1 ~]# tcpdump -c 25 -n -i enp0s3
```
### 示例:8)仅捕获特定接口上的 TCP 数据包
在 `tcpdump` 命令中,我们可以使用过滤表达式 `tcp`,只捕获 TCP 数据包,
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 tcp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:36:54.521053 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433336467:1433336655, ack 3061986618, win 291, options [nop,nop,TS val 83375988 ecr 20883106], length 188
22:36:54.521474 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20883109 ecr 83375988], length 0
22:36:54.522214 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 83375989 ecr 20883109], length 364
22:36:54.522508 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20883109 ecr 83375989], length 0
22:36:54.522867 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
22:36:54.523006 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20883109 ecr 83375990], length 0
22:36:54.523304 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
22:36:54.523461 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20883110 ecr 83375990], length 0
22:36:54.523604 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 83375991 ecr 20883110], length 340
...................................................................................................................................................
```
### 示例:9)从特定接口上的特定端口捕获数据包
使用 `tcpdump` 命令,我们可以从特定接口 `enp0s3` 上的特定端口(例如 22)捕获数据包。
语法:
```
# tcpdump -i {接口名} port {端口号}
```
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 port 22
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:54:45.032412 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1435010787:1435010975, ack 3061993834, win 291, options [nop,nop,TS val 84446499 ecr 21150734], length 188
22:54:45.032631 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 9131, options [nop,nop,TS val 21150737 ecr 84446499], length 0
22:54:55.037926 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 188:576, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21150737], length 388
22:54:55.038106 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 576, win 9154, options [nop,nop,TS val 21153238 ecr 84456505], length 0
22:54:55.038286 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 576:940, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21153238], length 364
22:54:55.038564 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 940, win 9177, options [nop,nop,TS val 21153238 ecr 84456505], length 0
22:54:55.038708 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 940:1304, ack 1, win 291, options [nop,nop,TS val 84456506 ecr 21153238], length 364
............................................................................................................................
```
### 示例:10)在特定接口上捕获来自特定来源 IP 的数据包
在 `tcpdump` 命令中,使用 `src` 关键字后跟 IP 地址,我们可以捕获来自特定来源 IP 的数据包,
语法:
```
# tcpdump -n -i {接口名} src {IP 地址}
```
例子如下,
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 src 169.144.0.10
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
23:03:45.912733 IP 169.144.0.10.amqp > 169.144.0.20.57800: Flags [.], ack 526623844, win 243, options [nop,nop,TS val 84981008 ecr 84982372], length 0
23:03:46.136757 IP 169.144.0.10.amqp > 169.144.0.20.57796: Flags [.], ack 2535995970, win 252, options [nop,nop,TS val 84981232 ecr 84982596], length 0
23:03:46.153398 IP 169.144.0.10.amqp > 169.144.0.20.57798: Flags [.], ack 3623063621, win 243, options [nop,nop,TS val 84981248 ecr 84982612], length 0
23:03:46.361160 IP 169.144.0.10.amqp > 169.144.0.20.57802: Flags [.], ack 2140263945, win 252, options [nop,nop,TS val 84981456 ecr 84982821], length 0
23:03:46.376926 IP 169.144.0.10.amqp > 169.144.0.20.57808: Flags [.], ack 175946224, win 252, options [nop,nop,TS val 84981472 ecr 84982836], length 0
23:03:46.505242 IP 169.144.0.10.amqp > 169.144.0.20.57810: Flags [.], ack 1016089556, win 252, options [nop,nop,TS val 84981600 ecr 84982965], length 0
23:03:46.616994 IP 169.144.0.10.amqp > 169.144.0.20.57812: Flags [.], ack 832263835, win 252, options [nop,nop,TS val 84981712 ecr 84983076], length 0
23:03:46.809344 IP 169.144.0.10.amqp > 169.144.0.20.57814: Flags [.], ack 2781799939, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
23:03:46.809485 IP 169.144.0.10.amqp > 169.144.0.20.57816: Flags [.], ack 1662816815, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
23:03:47.033301 IP 169.144.0.10.amqp > 169.144.0.20.57818: Flags [.], ack 2387094362, win 252, options [nop,nop,TS val 84982128 ecr 84983492], length 0
^C
10 packets captured
12 packets received by filter
0 packets dropped by kernel
```
### 示例:11)在特定接口上捕获来自特定目的 IP 的数据包
语法:
```
# tcpdump -n -i {接口名} dst {IP 地址}
```
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 dst 169.144.0.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
23:10:43.520967 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1439564171:1439564359, ack 3062005550, win 291, options [nop,nop,TS val 85404988 ecr 21390356], length 188
23:10:43.521441 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:408, ack 1, win 291, options [nop,nop,TS val 85404988 ecr 21390359], length 220
23:10:43.521719 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 408:604, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.521993 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 604:800, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.522157 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 800:996, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.522346 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 996:1192, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
.........................................................................................
```
### 示例:12)捕获两台主机之间的 TCP 数据包通信
假设我想捕获两台主机 169.144.0.1 和 169.144.0.20 之间的 TCP 数据包,示例如下所示,
```
[root@compute-0-1 ~]# tcpdump -w two-host-tcp-comm.pcap -i enp0s3 tcp and \(host 169.144.0.1 or host 169.144.0.20\)
```
使用 `tcpdump` 命令只捕获两台主机之间的 SSH 数据包流,
```
[root@compute-0-1 ~]# tcpdump -w ssh-comm-two-hosts.pcap -i enp0s3 src 169.144.0.1 and port 22 and dst 169.144.0.20 and port 22
```
### 示例:13)捕获两台主机之间(来回)的 UDP 网络数据包
语法:
```
# tcpdump -w {文件名}.pcap -s {快照长度} -i {接口名} udp and \(host {IP1} and host {IP2}\)
```
```
[root@compute-0-1 ~]# tcpdump -w two-host-comm.pcap -s 1000 -i enp0s3 udp and \(host 169.144.0.10 and host 169.144.0.20\)
```
### 示例:14)捕获十六进制和 ASCII 格式的数据包
使用 `tcpdump` 命令,我们可以以 ASCII 和十六进制格式捕获 TCP/IP 数据包,
要使用 `-A` 选项捕获 ASCII 格式的数据包,示例如下所示:
```
[root@compute-0-1 ~]# tcpdump -c 10 -A -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
00:37:10.520060 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452637331:1452637519, ack 3062125586, win 333, options [nop,nop,TS val 90591987 ecr 22687106], length 188
E...[root@compute-0-1 @...............V.|...T....MT......
.fR..Z-....b.:..Z5...{.'p....]."}...Z..9.?......."root@compute-0-1 <.....V..C.....{,...OKP.2.*...`..-sS..1S...........:.O[.....{G..%ze.Pn.T..N.... ....qB..5...n.....`...:=...[..0....k.....S.:..5!.9..G....!-..'..
00:37:10.520319 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13930, options [nop,nop,TS val 22687109 ecr 90591987], length 0
root@compute-0-1 @.|+..............T.V.}O..6j.d.....
.Z-..fR.
00:37:11.687543 IP controller0.example.com.amqp > compute-0-1.example.com.57800: Flags [.], ack 526624548, win 243, options [nop,nop,TS val 90586768 ecr 90588146], length 0
root@compute-0-1 @.!L...
.....(..g....c.$...........
.f>..fC.
00:37:11.687612 IP compute-0-1.example.com.57800 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 90593155 ecr 90551716], length 0
root@compute-0-1 @..........
...(.c.$g.......Se.....
.fW..e..
..................................................................................................................................................
```
要同时以十六进制和 ASCII 格式捕获数据包,请使用 `-XX` 选项。
```
[root@compute-0-1 ~]# tcpdump -c 10 -XX -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
00:39:15.124363 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452640859:1452641047, ack 3062126346, win 333, options [nop,nop,TS val 90716591 ecr 22718257], length 188
0x0000: 0a00 2700 0000 0800 27f4 f935 0800 4510 ..'.....'..5..E.
0x0010: 00f0 5bc6 4000 4006 8afc a990 0014 a990 ..[root@compute-0-1 @.........
0x0020: 0001 0016 99ee 5695 8a5b b684 570a 8018 ......V..[..W...
0x0030: 014d 5418 0000 0101 080a 0568 39af 015a .MT........h9..Z
0x0040: a731 adb7 58b6 1a0f 2006 df67 c9b6 4479 .1..X......g..Dy
0x0050: 19fd 2c3d 2042 3313 35b9 a160 fa87 d42c ..,=.B3.5..`...,
0x0060: 89a9 3d7d dfbf 980d 2596 4f2a 99ba c92a ..=}....%.O*...*
0x0070: 3e1e 7bf7 3af2 a5cc ee4f 10bc 7dfc 630d >.{.:....O..}.c.
0x0080: 898a 0e16 6825 56c7 b683 1de4 3526 ff04 ....h%V.....5&..
0x0090: 68d1 4f7d babd 27ba 84ae c5d3 750b 01bd h.O}..'.....u...
0x00a0: 9c43 e10a 33a6 8df2 a9f0 c052 c7ed 2ff5 .C..3......R../.
0x00b0: bfb1 ce84 edfc c141 6dad fa19 0702 62a7 .......Am.....b.
0x00c0: 306c db6b 2eea 824e eea5 acd7 f92e 6de3 0l.k...N......m.
0x00d0: 85d0 222d f8bf 9051 2c37 93c8 506d 5cb5 .."-...Q,7..Pm\.
0x00e0: 3b4a 2a80 d027 49f2 c996 d2d9 a9eb c1c4 ;J*..'I.........
0x00f0: 7719 c615 8486 d84c e42d 0ba3 698c w......L.-..i.
00:39:15.124648 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13971, options [nop,nop,TS val 22718260 ecr 90716591], length 0
0x0000: 0800 27f4 f935 0a00 2700 0000 0800 4510 ..'..5..'.....E.
0x0010: 0034 6b70 4000 4006 7c0e a990 0001 a990 root@compute-0-1 @.|.......
0x0020: 0014 99ee 0016 b684 570a 5695 8b17 8010 ........W.V.....
0x0030: 3693 7c0e 0000 0101 080a 015a a734 0568 6.|........Z.4.h
0x0040: 39af
.......................................................................
```
这就是本文的全部内容,我希望您能了解如何使用 `tcpdump` 命令捕获和分析 TCP/IP 数据包。请分享你的反馈和评论。
---
via: <https://www.linuxtechi.com/capture-analyze-packets-tcpdump-command-linux/>
作者:[Pradeep Kumar](http://www.linuxtechi.com/author/pradeep/)
选题:[lujun9972](https://github.com/lujun9972)
译者:[ypingcn](https://github.com/ypingcn)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | tcpdump is a well known command line **packet analyzer** tool. Using tcpdump command we can capture the live TCP/IP packets and these packets can also be saved to a file. Later on these captured packets can be analyzed via tcpdump command. tcpdump command becomes very handy when it comes to troubleshooting on network level.
tcpdump is available in most of the Linux distributions, for Debian based Linux, it be can be installed using apt command,
# apt install tcpdump -y
On RPM based Linux OS, tcpdump can be installed using below yum command
# yum install tcpdump -y
When we run the tcpdump command without any options then it will capture packets of all the interfaces. So to stop or cancel the tcpdump command, type “**ctrl+c**” . In this tutorial we will discuss how to capture and analyze packets using different practical examples,
#### Example:1) Capturing packets from a specific interface
When we run the tcpdump command without any options, it will capture packets on the all interfaces, so to capture the packets from a specific interface use the option ‘**-i**‘ followed by the interface name.
Syntax :
# tcpdump -i {interface-name}
Let’s assume, i want to capture packets from interface “enp0s3”
`[root@compute-0-1 ~]# tcpdump -i enp0s3`
Output would be something like below,
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes 06:43:22.905890 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952160:21952540, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 380 06:43:22.906045 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952540:21952760, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220 06:43:22.906150 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952760:21952980, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220 06:43:22.906291 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [.], ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 0 06:43:22.906303 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [P.], seq 13537:13609, ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 72 06:43:22.906322 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952980:21953200, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220 ^C 109930 packets captured 110065 packets received by filter 133 packets dropped by kernel [root@compute-0-1 ~]#
#### Example:2) Capturing specific number number of packet from a specific interface
Let’s assume we want to capture 12 packets from the specific interface like “enp0s3”, this can be easily achieved using the options “**-c {number} -i {interface-name}**”
`root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3`
Above command will generate the output something like below
#### Example:3) Display all the available Interfaces for tcpdump
Use ‘**-D**‘ option to display all the available interfaces for tcpdump command,
```
[root@compute-0-1 ~]# tcpdump -D
1.enp0s3
2.enp0s8
3.ovs-system
4.br-int
5.br-tun
6.nflog (Linux netfilter log (NFLOG) interface)
7.nfqueue (Linux netfilter queue (NFQUEUE) interface)
8.usbmon1 (USB bus number 1)
9.usbmon2 (USB bus number 2)
10.qbra692e993-28
11.qvoa692e993-28
12.qvba692e993-28
13.tapa692e993-28
14.vxlan_sys_4789
15.any (Pseudo-device that captures on all interfaces)
16.lo [Loopback]
[root@compute-0-1 ~]#
```
I am running the tcpdump command on one of my openstack compute node, that’s why in the output you have seen number interfaces, tab interface, bridges and vxlan interface.
#### Example:4) Capturing packets with human readable timestamp (-tttt option)
By default in tcpdump command output, there is no proper human readable timestamp, if you want to associate human readable timestamp to each captured packet then use ‘**-tttt**‘ option, example is shown below,
```
[root@compute-0-1 ~]# tcpdump -c 8 -tttt -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
2018-08-25 23:23:36.954883 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1449206247:1449206435, ack 3062020950, win 291, options [nop,nop,TS val 86178422 ecr 21583714], length 188
2018-08-25 23:23:36.955046 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13585, options [nop,nop,TS val 21583717 ecr 86178422], length 0
2018-08-25 23:23:37.140097 IP controller0.example.com.amqp > compute-0-1.example.com.57818: Flags [P.], seq 814607956:814607964, ack 2387094506, win 252, options [nop,nop,TS val 86172228 ecr 86176695], length 8
2018-08-25 23:23:37.140175 IP compute-0-1.example.com.57818 > controller0.example.com.amqp: Flags [.], ack 8, win 237, options [nop,nop,TS val 86178607 ecr 86172228], length 0
2018-08-25 23:23:37.355238 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [P.], seq 1080415080:1080417400, ack 1690909362, win 237, options [nop,nop,TS val 86178822 ecr 86163054], length 2320
2018-08-25 23:23:37.357119 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 2320, win 1432, options [nop,nop,TS val 86172448 ecr 86178822], length 0
2018-08-25 23:23:37.357545 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [P.], seq 1:22, ack 2320, win 1432, options [nop,nop,TS val 86172449 ecr 86178822], length 21
2018-08-25 23:23:37.357572 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 22, win 237, options [nop,nop,TS val 86178825 ecr 86172449], length 0
8 packets captured
134 packets received by filter
69 packets dropped by kernel
[root@compute-0-1 ~]#
```
#### Example:5) Capturing and saving packets to a file (-w option)
Use “**-w**” option in tcpdump command to save the capture TCP/IP packet to a file, so that we can analyze those packets in the future for further analysis.
Syntax :
# tcpdump -w file_name.pcap -i {interface-name}
Note: Extension of file must be **.pcap**
Let’s assume i want to save the captured packets of interface “**enp0s3**” to a file name **enp0s3-26082018.pcap**
`[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3`
Above command will generate the output something like below,
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3 tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes ^C841 packets captured 845 packets received by filter 0 packets dropped by kernel [root@compute-0-1 ~]# ls anaconda-ks.cfg enp0s3-26082018.pcap [root@compute-0-1 ~]#
Capturing and Saving the packets whose size **greater** than** N bytes**
`[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-2.pcap greater 1024`
Capturing and Saving the packets whose size **less** than **N bytes**
`[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-3.pcap less 1024`
#### Example:6) Reading packets from the saved file ( -r option)
In the above example we have saved the captured packets to a file, we can read those packets from the file using the option ‘**-r**‘, example is shown below,
`[root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap`
Reading the packets with human readable timestamp,
```
[root@compute-0-1 ~]# tcpdump -tttt -r enp0s3-26082018.pcap
reading from file enp0s3-26082018.pcap, link-type EN10MB (Ethernet)
2018-08-25 22:03:17.249648 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1426167803:1426167927, ack 3061962134, win 291, options
[nop,nop,TS val 81358717 ecr 20378789], length 124
2018-08-25 22:03:17.249840 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 124, win 564, options [nop,nop,TS val 20378791 ecr 81358
717], length 0
2018-08-25 22:03:17.454559 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 1079416895, win 1432, options [nop,nop,TS v
al 81352560 ecr 81353913], length 0
2018-08-25 22:03:17.454642 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 8135892
2 ecr 81317504], length 0
2018-08-25 22:03:17.646945 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [.], seq 106760587:106762035, ack 688390730, win 237
, options [nop,nop,TS val 81359114 ecr 81350901], length 1448
2018-08-25 22:03:17.647043 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [P.], seq 1448:1956, ack 1, win 237, options [nop,no
p,TS val 81359114 ecr 81350901], length 508
2018-08-25 22:03:17.647502 IP controller0.example.com.amqp > compute-0-1.example.com.57788: Flags [.], ack 1956, win 1432, options [nop,nop,TS val 813
52753 ecr 81359114], length 0
.........................................................................................................................
```
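Files written with `tcpdump -w` use the libpcap capture format, which is simple enough to read without tcpdump at all. Below is a minimal, hedged Python sketch (standard library only) that parses the classic little-endian pcap layout: a 24-byte global header followed by 16-byte per-record headers. The capture bytes here are fabricated in memory for illustration, not taken from a real interface:

```python
import struct

# Classic libpcap file layout (what `tcpdump -w` produces by default):
# 24-byte global header, then for each packet a 16-byte record header
# (ts_sec, ts_usec, captured length, original length) followed by the bytes.

def read_pcap(data):
    magic = struct.unpack_from("<I", data, 0)[0]
    endian = "<" if magic == 0xA1B2C3D4 else ">"
    packets = []
    offset = 24  # skip the global header
    while offset + 16 <= len(data):
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack_from(endian + "IIII", data, offset)
        offset += 16
        packets.append((ts_sec, ts_usec, data[offset:offset + incl_len]))
        offset += incl_len
    return packets

# Build a one-packet capture in memory so the sketch is self-contained.
global_hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 262144, 1)  # linktype 1 = Ethernet
payload = b"\x00" * 42  # placeholder Ethernet/IP bytes
record = struct.pack("<IIII", 1535300597, 249648, len(payload), len(payload))
pcap = global_hdr + record + payload

for ts_sec, ts_usec, pkt in read_pcap(pcap):
    print(f"{ts_sec}.{ts_usec:06d} captured {len(pkt)} bytes")
    # → 1535300597.249648 captured 42 bytes
```

Note that real captures may instead use the byte-swapped magic (files written on a big-endian host) or the newer pcapng container, which this sketch does not handle.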
Read more on: **How to Install and Use Wireshark on Debian 9 / Ubuntu 16.04**
#### Example:7) Capturing only IP address packets on a specific Interface (-n option)
Using the ‘**-n**‘ option in the tcpdump command, we can disable hostname resolution so that only numeric IP addresses are shown for packets captured on a specific interface, as shown below:
`[root@compute-0-1 ~]# tcpdump -n -i enp0s3`
The output of the above command would be something like below:
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:22:28.537904 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433301395:1433301583, ack 3061976250, win 291, options [nop,nop,TS val 82510005 ecr 20666610], length 188
22:22:28.538173 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20666613 ecr 82510005], length 0
22:22:28.538573 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 364
22:22:28.538736 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
22:22:28.538874 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
22:22:28.539042 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
22:22:28.539178 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
22:22:28.539282 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
22:22:28.539479 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666614], length 340
22:22:28.539595 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1572, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
22:22:28.539760 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1572:1912, ack 1, win 291, options [nop,nop,TS val 82510007 ecr 20666614], length 340
.........................................................................
```
You can also limit the capture to N packets by combining the -c and -n options in the tcpdump command:
`[root@compute-0-1 ~]# tcpdump -c 25 -n -i enp0s3`
#### Example:8) Capturing only TCP packets on a specific interface
With the tcpdump command we can capture only TCP packets by using the ‘**tcp**‘ filter expression:
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 tcp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:36:54.521053 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433336467:1433336655, ack 3061986618, win 291, options [nop,nop,TS val 83375988 ecr 20883106], length 188
22:36:54.521474 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20883109 ecr 83375988], length 0
22:36:54.522214 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 83375989 ecr 20883109], length 364
22:36:54.522508 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20883109 ecr 83375989], length 0
22:36:54.522867 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
22:36:54.523006 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20883109 ecr 83375990], length 0
22:36:54.523304 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
22:36:54.523461 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20883110 ecr 83375990], length 0
22:36:54.523604 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 83375991 ecr 20883110], length 340
...................................................................................................................................................
```
#### Example:9) Capturing packets from a specific port on a specific interface
Using the tcpdump command, we can capture packets on a specific port (e.g. 22) of a specific interface such as enp0s3.
Syntax:

`# tcpdump -i {interface-name} port {Port_Number}`
```
[root@compute-0-1 ~]# tcpdump -i enp0s3 port 22
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
22:54:45.032412 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1435010787:1435010975, ack 3061993834, win 291, options [nop,nop,TS val 84446499 ecr 21150734], length 188
22:54:45.032631 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 9131, options [nop,nop,TS val 21150737 ecr 84446499], length 0
22:54:55.037926 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 188:576, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21150737], length 388
22:54:55.038106 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 576, win 9154, options [nop,nop,TS val 21153238 ecr 84456505], length 0
22:54:55.038286 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 576:940, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21153238], length 364
22:54:55.038564 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 940, win 9177, options [nop,nop,TS val 21153238 ecr 84456505], length 0
22:54:55.038708 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 940:1304, ack 1, win 291, options [nop,nop,TS val 84456506 ecr 21153238], length 364
............................................................................................................................
[root@compute-0-1 ~]#
```
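Each flow line that tcpdump prints in the examples above has a regular shape (timestamp, `src > dst`, flags, options, length), so it can be post-processed with ordinary text tools. Here is a small, hedged Python sketch that pulls the endpoints out of one such line; the regex is an approximation of tcpdump's default IPv4/TCP output, not a complete grammar:

```python
import re

# Approximate pattern for tcpdump's default IPv4/TCP flow lines.
LINE = re.compile(
    r"^(?P<ts>\d[\d:.]*) IP (?P<src>\S+) > (?P<dst>\S+): "
    r"Flags \[(?P<flags>[^\]]+)\].*length (?P<length>\d+)"
)

def parse_flow(line):
    m = LINE.match(line)
    return m.groupdict() if m else None

sample = ("22:54:45.032412 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: "
          "Flags [P.], seq 1435010787:1435010975, ack 3061993834, win 291, length 188")
info = parse_flow(sample)
print(info["src"], "->", info["dst"], "len", info["length"])
# → compute-0-1.example.com.ssh -> 169.144.0.1.39406 len 188
```

This is only a convenience for quick scripting; for serious analysis a real dissector such as Wireshark/tshark is the right tool.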
#### Example:10) Capturing the packets from a Specific Source IP on a Specific Interface
Using the “**src**” keyword followed by an “**ip address**” in the tcpdump command, we can capture packets coming from a specific source IP.
Syntax:

`# tcpdump -n -i {interface-name} src {ip-address}`
Example is shown below,
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 src 169.144.0.10
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
23:03:45.912733 IP 169.144.0.10.amqp > 169.144.0.20.57800: Flags [.], ack 526623844, win 243, options [nop,nop,TS val 84981008 ecr 84982372], length 0
23:03:46.136757 IP 169.144.0.10.amqp > 169.144.0.20.57796: Flags [.], ack 2535995970, win 252, options [nop,nop,TS val 84981232 ecr 84982596], length 0
23:03:46.153398 IP 169.144.0.10.amqp > 169.144.0.20.57798: Flags [.], ack 3623063621, win 243, options [nop,nop,TS val 84981248 ecr 84982612], length 0
23:03:46.361160 IP 169.144.0.10.amqp > 169.144.0.20.57802: Flags [.], ack 2140263945, win 252, options [nop,nop,TS val 84981456 ecr 84982821], length 0
23:03:46.376926 IP 169.144.0.10.amqp > 169.144.0.20.57808: Flags [.], ack 175946224, win 252, options [nop,nop,TS val 84981472 ecr 84982836], length 0
23:03:46.505242 IP 169.144.0.10.amqp > 169.144.0.20.57810: Flags [.], ack 1016089556, win 252, options [nop,nop,TS val 84981600 ecr 84982965], length 0
23:03:46.616994 IP 169.144.0.10.amqp > 169.144.0.20.57812: Flags [.], ack 832263835, win 252, options [nop,nop,TS val 84981712 ecr 84983076], length 0
23:03:46.809344 IP 169.144.0.10.amqp > 169.144.0.20.57814: Flags [.], ack 2781799939, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
23:03:46.809485 IP 169.144.0.10.amqp > 169.144.0.20.57816: Flags [.], ack 1662816815, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
23:03:47.033301 IP 169.144.0.10.amqp > 169.144.0.20.57818: Flags [.], ack 2387094362, win 252, options [nop,nop,TS val 84982128 ecr 84983492], length 0
^C
10 packets captured
12 packets received by filter
0 packets dropped by kernel
[root@compute-0-1 ~]#
```
#### Example:11) Capturing packets from a specific destination IP on a specific Interface
Syntax:

`# tcpdump -n -i {interface-name} dst {IP-address}`
```
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 dst 169.144.0.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
23:10:43.520967 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1439564171:1439564359, ack 3062005550, win 291, options [nop,nop,TS val 85404988 ecr 21390356], length 188
23:10:43.521441 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:408, ack 1, win 291, options [nop,nop,TS val 85404988 ecr 21390359], length 220
23:10:43.521719 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 408:604, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.521993 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 604:800, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.522157 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 800:996, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
23:10:43.522346 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 996:1192, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
.........................................................................................
```
#### Example:12) Capturing TCP packet communication between two Hosts
Let’s assume I want to capture TCP packets between two hosts, 169.144.0.1 and 169.144.0.20. An example is shown below:
`[root@compute-0-1 ~]# tcpdump -w two-host-tcp-comm.pcap -i enp0s3 tcp and \(host 169.144.0.1 or host 169.144.0.20\)`
Capturing only the SSH packet flow between two hosts using the tcpdump command:
`[root@compute-0-1 ~]# tcpdump -w ssh-comm-two-hosts.pcap -i enp0s3 src 169.144.0.1 and port 22 and dst 169.144.0.20 and port 22`
#### Example:13) Capturing the udp network packets (to & fro) between two hosts
Syntax:

`# tcpdump -w {file-name} -s {snap-length} -i {interface-name} udp and \(host {ip-address-1} and host {ip-address-2}\)`
`[root@compute-0-1 ~]# tcpdump -w two-host-comm.pcap -s 1000 -i enp0s3 udp and \(host 169.144.0.10 and host 169.144.0.20\)`
#### Example:14) Capturing packets in HEX and ASCII Format
Using the tcpdump command, we can capture TCP/IP packets in both ASCII and HEX format.

To display the packets in ASCII format, use the **-A** option, as shown below:
```
[root@compute-0-1 ~]# tcpdump -c 10 -A -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
00:37:10.520060 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452637331:1452637519, ack 3062125586, win 333, options [nop,nop,TS val 90591987 ecr 22687106], length 188
E...[.@[[email protected]].|...T....MT...... .fR..Z-....b.:..Z5...{.'p....]."}...Z..9.?.......".@<.....V..C.....{,...OKP.2.*...`..-sS..1S...........:.O[.....{G..%ze.Pn.T..N.... ....qB..5...n.....`...:=...[..0....k.....S.:..5!.9..G....!-..'..
00:37:10.520319 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13930, options [nop,nop,TS val 22687109 ecr 90591987], length 0
E..4kS@.@.|+..............T.V.}O..6j.d..... .Z-..fR.
00:37:11.687543 IP controller0.example.com.amqp > compute-0-1.example.com.57800: Flags [.], ack 526624548, win 243, options [nop,nop,TS val 90586768 ecr 90588146], length 0
E..4.9@.@.!L... .....(..g....c.$........... .f>..fC.
00:37:11.687612 IP compute-0-1.example.com.57800 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 90593155 ecr 90551716], length 0
E..4..@.@.......... ...(.c.$g.......Se..... .fW..e..
..................................................................................................................................................
```
To capture packets in both HEX and ASCII format, use the **-XX** option:
```
[root@compute-0-1 ~]# tcpdump -c 10 -XX -i enp0s3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
00:39:15.124363 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452640859:1452641047, ack 3062126346, win 333, options [nop,nop,TS val 90716591 ecr 22718257], length 188
0x0000: 0a00 2700 0000 0800 27f4 f935 0800 4510 ..'.....'..5..E.
0x0010: 00f0 5bc6 4000 4006 8afc a990 0014 a990 ..[.@.@.........
0x0020: 0001 0016 99ee 5695 8a5b b684 570a 8018 ......V..[..W...
0x0030: 014d 5418 0000 0101 080a 0568 39af 015a .MT........h9..Z
0x0040: a731 adb7 58b6 1a0f 2006 df67 c9b6 4479 .1..X......g..Dy
0x0050: 19fd 2c3d 2042 3313 35b9 a160 fa87 d42c ..,=.B3.5..`...,
0x0060: 89a9 3d7d dfbf 980d 2596 4f2a 99ba c92a ..=}....%.O*...*
0x0070: 3e1e 7bf7 3af2 a5cc ee4f 10bc 7dfc 630d >.{.:....O..}.c.
0x0080: 898a 0e16 6825 56c7 b683 1de4 3526 ff04 ....h%V.....5&..
0x0090: 68d1 4f7d babd 27ba 84ae c5d3 750b 01bd h.O}..'.....u...
0x00a0: 9c43 e10a 33a6 8df2 a9f0 c052 c7ed 2ff5 .C..3......R../.
0x00b0: bfb1 ce84 edfc c141 6dad fa19 0702 62a7 .......Am.....b.
0x00c0: 306c db6b 2eea 824e eea5 acd7 f92e 6de3 0l.k...N......m.
0x00d0: 85d0 222d f8bf 9051 2c37 93c8 506d 5cb5 .."-...Q,7..Pm\.
0x00e0: 3b4a 2a80 d027 49f2 c996 d2d9 a9eb c1c4 ;J*..'I.........
0x00f0: 7719 c615 8486 d84c e42d 0ba3 698c w......L.-..i.
00:39:15.124648 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13971, options [nop,nop,TS val 22718260 ecr 90716591], length 0
0x0000: 0800 27f4 f935 0a00 2700 0000 0800 4510 ..'..5..'.....E.
0x0010: 0034 6b70 4000 4006 7c0e a990 0001 a990 .4kp@.@.|.......
0x0020: 0014 99ee 0016 b684 570a 5695 8b17 8010 ........W.V.....
0x0030: 3693 7c0e 0000 0101 080a 015a a734 0568 6.|........Z.4.h
0x0040: 39af
.......................................................................
```
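The `-XX` layout above (an offset column, hex grouped in 2-byte words, and a printable-ASCII gutter) is straightforward to reproduce for arbitrary bytes. Below is a hedged Python sketch that approximates tcpdump's hex+ASCII rendering; the exact column spacing is an approximation, not tcpdump's own renderer:

```python
def hexdump(data, width=16):
    """Render bytes in a tcpdump -XX style: offset, 2-byte hex words, ASCII."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        # Group hex output into 2-byte words, like tcpdump does.
        hexpart = " ".join(chunk[i:i + 2].hex() for i in range(0, len(chunk), 2))
        # Non-printable bytes become '.' in the ASCII gutter.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"0x{off:04x}:  {hexpart:<39} {text}")
    return "\n".join(lines)

print(hexdump(b"GET / HTTP/1.1\r\nHost: example.com\r\n"))
```

Running this on real packet bytes (for example the payloads extracted from a pcap file) gives output directly comparable to tcpdump's.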
That’s all from this article. I hope you now have an idea of how to capture and analyze TCP/IP packets using the tcpdump command. Please do share your feedback and comments.
**Shivraj**: Hi, you are teaching how to capture packets, not how to analyze them. So kindly teach us how to analyze a packet.
**Sum Yung Gai**: For analyzing a packet, I would suggest using something like Wireshark. There is plenty of information out there on how to do packet analysis. It will entail your reading several documents, though, including the RFCs on IP, ICMP, UDP, and TCP. Additionally, you would do very well to read the 802.3 Ethernet specifications, since virtually all LANs are Ethernet-based nowadays.
**Bashir**: Hi, thanks for the great article. I was wondering if tcpdump could be used to monitor network traffic over a period of time and only report on TCP/UDP traffic showing ports of only unique transactions, i.e., only show an SSH session between the localhost and remote host once even if there have been multiple sessions? I would like to see src/dst IP, protocol, and port only between a Linux server and remote hosts.
**Todd S**: Nice article, very informative. |
10,051 | 使用 top 命令了解 Fedora 的内存使用情况 | https://fedoramagazine.org/understand-fedora-memory-usage-top/ | 2018-09-26T10:15:01 | [
"top",
"内存"
] | https://linux.cn/article-10051-1.html | 
如果你使用过 `top` 命令来查看 Fedora 系统中的内存使用情况,你可能会惊讶,看起来消耗的数量比系统可用的内存更多。下面会详细介绍内存使用情况以及如何理解这些数据。
### 内存实际使用情况
操作系统对内存的使用方式并不是太通俗易懂。事实上,其背后有很多不为人知的巧妙技术在发挥着作用。通过这些方式,可以在无需用户干预的情况下,让操作系统更有效地使用内存。
大多数应用程序都不是系统自带的,但每个应用程序都依赖于安装在系统中的库中的一些函数集。在 Fedora 中,RPM 包管理系统能够确保在安装应用程序时也会安装所依赖的库。
当应用程序运行时,操作系统并不需要将它要用到的所有信息都加载到物理内存中。而是会为存放代码的存储空间构建一个映射,称为虚拟内存。操作系统只把需要的部分加载到内存中,当某一个部分不再需要后,这一部分内存就会被释放掉。
这意味着应用程序可以映射大量的虚拟内存,而使用较少的系统物理内存。特殊情况下,映射的虚拟内存甚至可以比系统实际可用的物理内存更多!而且在操作系统中这种情况也并不少见。
另外,不同的应用程序可能会对同一个库都有依赖。Fedora 中的 Linux 内核通常会在各个应用程序之间共享内存,而不需要为不同应用分别加载同一个库的多个副本。类似地,对于同一个应用程序的不同实例也是采用这种方式共享内存。
如果不首先了解这些细节,`top` 命令显示的数据可能会让人摸不着头脑。下面就举例说明如何正确查看内存使用量。
### 使用 `top` 命令查看内存使用量
如果你还没有使用过 `top` 命令,可以打开终端直接执行查看。使用 `Shift + M` 可以按照内存使用量来进行排序。下图是在 Fedora Workstation 中执行的结果,在你的机器上显示的结果可能会略有不同:

主要通过以下三列来查看内存使用情况:`VIRT`、`RES` 和 `SHR`。目前以 KB 为单位显示相关数值。
`VIRT` 列代表该进程映射的<ruby> 虚拟 <rt> virtual </rt></ruby>内存。如上所述,虚拟内存不是实际消耗的物理内存。例如, GNOME Shell 进程 `gnome-shell` 实际上没有消耗超过 3.1 GB 的物理内存,但它对很多更低或更高级的库都有依赖,系统必须对每个库都进行映射,以确保在有需要时可以加载这些库。
`RES` 列代表应用程序消耗了多少实际(<ruby> 驻留 <rt> resident </rt></ruby>)内存。对于 GNOME Shell 大约是 180788 KB。例子中的系统拥有大约 7704 MB 的物理内存,因此内存使用率显示为 2.3%。
但根据 `SHR` 列显示,其中至少有 88212 KB 是<ruby> 共享 <rt> shared </rt></ruby>内存,这部分内存可能是其它应用程序也在使用的库函数。这意味着 GNOME Shell 本身大约有 92 MB 内存不与其他进程共享。需要注意的是,上述例子中的其它程序也共享了很多内存。在某些应用程序中,共享内存在内存使用量中会占很大的比例。
值得一提的是,有时进程之间通过内存通信,这些内存也是共享的,但 `top` 这样的工具却不一定能检测到,所以以上的说明也不一定准确。
### 关于交换分区
系统还可以通过交换分区来存储数据(例如硬盘),但读写的速度相对较慢。当物理内存渐渐用满,操作系统就会查找内存中暂时不会使用的部分,将其写出到交换区域等待需要的时候使用。
因此,如果交换内存的使用量一直偏高,表明系统的物理内存已经供不应求了。有时候一个不正常的应用也有可能导致出现这种情况,但如果这种现象经常出现,就需要考虑提升物理内存或者限制某些程序的运行了。
感谢 [Stig Nygaard](https://www.flickr.com/photos/stignygaard/) 在 [Flickr](https://www.flickr.com/photos/stignygaard/3138001676/) 上提供的图片(CC BY 2.0)。
---
via: <https://fedoramagazine.org/understand-fedora-memory-usage-top/>
作者:[Paul W. Frields](https://fedoramagazine.org/author/pfrields/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Have you used the *top* utility in a terminal to see memory usage on your Fedora system? If so, you might be surprised to see some of the numbers there. It might look like a lot more memory is consumed than your system has available. This article will explain a little more about memory usage, and how to read these numbers.
## Memory usage in real terms
The way the operating system (OS) uses memory may not be self-evident. In fact, some ingenious, behind-the-scenes techniques are at play. They help your OS use memory more efficiently, without involving you.
Most applications are not self contained. Instead, each relies on sets of functions collected in libraries. These libraries are also installed on the system. In Fedora, the RPM packaging system ensures that when you install an app, any libraries on which it relies are installed, too.
When an app runs, the OS doesn’t necessarily load all the information it uses into real memory. Instead, it builds a map to the storage where that code is stored, called *virtual memory*. The OS then loads only the parts it needs. When it no longer needs portions of memory, it might release or swap them out as appropriate.
This means an app can map a very large amount of virtual memory, while using less real memory on the system at one time. It might even map more RAM than the system has available! In fact, across a whole OS that’s often the case.
In addition, related applications may rely on the same libraries. The Linux kernel in your Fedora system often shares memory between applications. It doesn’t need to load multiple copies of the same library for related apps. This works similarly for separate instances of the same app, too.
Without understanding these details, the output of the *top* application can be confusing. The following example will clarify this view into memory usage.
## Viewing memory usage in *top*
If you haven’t tried yet, open a terminal and run the *top* command to see some output. Hit **Shift+M** to see the list sorted by memory usage. Your display may look slightly different than this example from a running Fedora Workstation:
There are three columns showing memory usage to examine: *VIRT*, *RES*, and *SHR*. The measurements are currently shown in kilobytes (KB).
The VIRT column is the virtual memory mapped for this process. Recall from the earlier description that virtual memory is not actual RAM consumed. For example, the GNOME Shell process *gnome-shell* is not actually consuming over 3.1 gigabytes of actual RAM. However, it’s built on a number of lower and higher level libraries. The system must map each of those to ensure they can be loaded when necessary.
The RES column shows you how much actual *(resident)* memory is consumed by the app. In the case of GNOME Shell, that’s about 180788 KB. The example system has roughly 7704 MB of physical memory, which is why the memory usage shows up as 2.3%.
However, of that number, at least 88212 KB is *shared* memory, shown in the SHR column. This memory might be, for example, library functions that other apps also use. This means the GNOME Shell is using about 92 MB on its own not shared with other processes. Notice that other apps in the example share an even higher percentage of their resident memory. In some apps, the shared portion is the vast majority of the memory usage.
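The arithmetic behind those figures can be checked directly. A quick Python sketch using the values from the screenshot (all values in KB, as reported by top):

```python
# Values from the top screenshot discussed above (all in KB).
res_kb = 180788              # RES: resident memory of gnome-shell
shr_kb = 88212               # SHR: the shared portion of RES
total_ram_kb = 7704 * 1024   # roughly 7704 MB of physical memory

# Memory GNOME Shell uses on its own, not shared with other processes.
private_kb = res_kb - shr_kb
print(f"private = {private_kb} KB (~{private_kb // 1000} MB)")
# → private = 92576 KB (~92 MB)

print(f"RES as a share of RAM: {res_kb / total_ram_kb:.1%}")
# → RES as a share of RAM: 2.3%
```

The same subtraction works for any row of top's output, keeping in mind the caveat below about shared memory that top cannot see.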
There is a wrinkle here, which is that sometimes processes communicate with each other via memory. That memory is also shared, but can’t necessarily be detected by a utility like *top*. So yes — even the above clarifications still have some uncertainty!
## A note about swap
Your system has another facility it uses to store information, which is *swap*. Typically this is an area of slower storage (like a hard disk). If the physical memory on the system fills up as needs increase, the OS looks for portions of memory that haven’t been needed in a while. It writes them out to the swap area, where they sit until needed later.
Therefore, prolonged, high swap usage usually means a system is suffering from too little memory for its demands. Sometimes an errant application may be at fault. Or, if you see this often on your system, consider upgrading your machine’s memory, or restricting what you run.
*Photo courtesy of Stig Nygaard, via Flickr (CC BY 2.0).*
## Leslie Satenstein
Thank you for this very useful expository of Linux hosting Gnome and the impact to memory consumption.
Does Linux provide historical measures or stats about the I/O activity for swapping?
## Yury D
No, it doesn’t(apart from wait indicator—0.2 wa in the screenshot), but atop does.
## Diego Roberto dos Santos
Hi Leslie,
In short you need to enable sysstat to collect system activity information, then these data will be stored in disk under the “/var/log/sa” directory and later you will be able to review historical by querying this files using the sar command. For further examples, please see [1][2].
[1] https://github.com/sysstat/sysstat
[2] https://www.thegeekstuff.com/2011/03/sar-examples/
## Leslie Satenstein
Hola Diego,
Gracias. I do know a little more Spanish, as my wife taught me to listen and learn. It is her mother tongue.
I am definitely going to refer to and setup some testing with sysstat. My intention is to use sysstat for a month after a new installation of a Fedora Releases
## Martin
There is iotop for I/O and iftop for network traffic monitoring. Historical data is not retained by my knowledge.
## Kerel
Paul, what’s an easy way to see if your system relies a lot on swap memory?
## Yury D
I usually look at the how much swap space is used, activity of the kswapd process, and how much exercise the swap partitions get. atop is much better at display that last than top, in my opinion.
## alex
What about processes like apache/httpd that show up in *top* with all their forks, where every one of them is displayed as using memory/CPU?
How do you interpret that ?
## Einer
Leslie Satenstein,
Yes, you can do this a multitude of ways. SAR is the most commonly used.
Kerel,
Depends ….. but SAR can usually give you some idea or you’ll be looking at top for quite some time to catch/estimate the swap area usage.
Alex,
On multithreaded apps, like httpd, the memory reported by top for the thread is the same as the parent … all threads of the parent process share the parents space … so, the parent process is the one you would usually look at and use as a guide for usage. That having been said, there are cases when you will see a particular thread using more memory thant the others in the same process ……
Paul,
SWAP is not always a good indicator of RAM exhaustion. Remember that swap is also used for COW pages (Copy On Write) …… a crude example would be things like DB doing interprocess communications … even vi does this. The typical indication of RAM exhaustion is when you see a lot of I/O wait and so/bo when running vmstat 5 ….
top is a great utility to get a quick realtime look at what is going on but, you also need other tools to get the full picture 🙂
## Leslie Satenstein
Einer,
Thank you all for the feedback. Putting your suggestions together with Yury and with Diego,
I will workout a measuring plan.
## svsv sarma
Thank you. Throws a light on TOP utility and memory usage. Now I am happy to see that my note book has memory 4GB enough and to spare and SWAP is used rarely. This I can see with System Monitor/Resources also. However TOP takes a special study.
## Gudrun Khom
Werde ich da grad verapplet — Ich will dass das Programm funktioniert und mich auf Deutsch unterhalten …
## Paul W. Frields
If you want Fedora to work in German, use the *Region and Language* settings to add support for the German language. Note that when you installed, you were asked to select the language as the first step. For further help, please consult the help page here, since the Magazine is not a support forum: https://fedoraproject.org/wiki/Communicating_and_getting_help

## alex

Thank you *Einer* for the clarification, I did not know that 🙂
sar command and sar graph is a nice tool to gather memory info.
$ sudo sar -r 1 3
Refer : https://linoxide.com/linux-command/system-performance-monitoring-sar-command/
## Dan Marinescu
generally, when one presents a set of concepts, does not use undefined terms. apparently, no such case here. you are using resident memory without explaining what it is. |
10,052 | 开源与烹饪有什么相似之处? | https://opensource.com/article/18/9/open-source-cooking | 2018-09-26T14:57:00 | [
"开源"
] | https://linux.cn/article-10052-1.html | 
有什么好的方法,既可以宣传开源的精神又不用写代码呢?这里有个点子:“<ruby> 开源食堂 <rt> open source cooking </rt></ruby>”。在过去的 8 年间,这就是我们在慕尼黑做的事情。
开源食堂已经是我们常规的开源宣传活动了,因为我们发现开源与烹饪有很多共同点。
### 协作烹饪
[慕尼黑开源聚会](https://www.opensourcetreffen.de/)自 2009 年 7 月在 [Café Netzwerk](http://www.cafe-netzwerk.de/) 创办以来,已经组织了若干次活动,活动一般在星期五的晚上组织。该聚会为开源项目工作者或者开源爱好者们提供了相互认识的方式。我们的信条是:“<ruby> 每四周的星期五属于自由软件 <rt> Every fourth Friday for free software </rt></ruby>”。当然在一些周末,我们还会举办一些研讨会。那之后,我们很快加入了很多其他的活动,包括白香肠早餐、桑拿与烹饪活动。
事实上,第一次开源烹饪聚会举办的有些混乱,但是我们经过这 8 年来以及 15 次的活动,已经可以为 25-30 个与会者提供丰盛的美食了。
回头看看这些夜晚,我们愈发发现共同烹饪与开源社区协作之间,有很多相似之处。
### 烹饪步骤中的自由开源精神
这里是几个烹饪与开源精神相同的地方:
* 我们乐于合作且朝着一个共同的目标前进
* 我们成了一个社区
* 由于我们有相同的兴趣与爱好,我们可以更多的了解我们自身与他人,并且可以一同协作
* 我们也会犯错,但我们会从错误中学习,并为了共同的利益去分享关于错误的经验,从而让彼此避免再犯相同的错误
* 每个人都会贡献自己擅长的事情,因为每个人都有自己的一技之长
* 我们会动员其他人去做出贡献并加入到我们之中
* 虽说协作是关键,但难免会有点混乱
* 每个人都会从中收益
### 烹饪中的开源气息
同很多成功的开源聚会一样,开源烹饪也需要一些协作和组织结构。在每次活动之前,我们会组织所有的成员对菜单进行投票,而不单单是直接给每个人分一角披萨,我们希望真正的作出一道美味,迄今为止我们做过日本、墨西哥、匈牙利、印度等地区风味的美食,限于篇幅就不一一列举了。
就像在生活中,共同烹饪同样需要各个成员之间相互的尊重和理解,所以我们也会试着为素食主义者、食物过敏者、或者对某些事物有偏好的人提供针对性的事物。正式开始烹饪之前,在家预先进行些小规模的测试会非常有帮助(和乐趣!)
可扩展性也很重要,在杂货店采购必要的食材很容易就消耗掉 3 个小时。所以我们使用一些表格工具(自然是 LibreOffice Calc)来做一些所需要的食材以及相应的成本。
我们会同志愿者一起,对于每次晚餐我们都有一个“包维护者”,从而及时的制作出菜单并在问题产生的时候寻找一些独到的解决方法。
虽然不是所有人都是大厨,但是只要给与一些帮助,并比较合理的分配任务和责任,就很容易让每个人都参与其中。某种程度上来说,处理 18kg 的西红柿和 100 个鸡蛋都不会让你觉得是件难事,相信我!唯一的限制是一个烤炉只有四个灶,所以可能是时候对基础设施加大投入了。
发布有时间要求,当然要求也不那么严格,我们通常会在 21:30 和 01:30 之间的相当“灵活”时间内供应主菜,即便如此,这个时间也是硬性的发布规定。
最后,像很多开源项目一样,烹饪文档同样有提升的空间。类似洗碟子这样的扫尾工作同样也有可优化的地方。
### 未来的一些新功能点
我们预计的一些想法包括:
* 在其他的国家开展活动
* 购买和烹饪一个价值 700 欧元的大南瓜,并且
* 找家可以为我们采购提供折扣的商店
最后一点,也是开源软件的动机:永远记住,还有一些人们生活在阴影中,他们为没有同等的权限去访问资源而苦恼着。我们如何通过开源的精神去帮助他们呢?
一想到这点,我便期待这下一次的开源烹饪聚会。如果读了上面的东西让你觉得不够完美,并且想自己运作这样的活动,我们非常乐意你能够借鉴我们的想法,甚至抄袭一个。我们也乐意你能够参与到我们其中,甚至做一些演讲和问答。
---
via: <https://opensource.com/article/18/9/open-source-cooking>
作者:[Florian Effenberger](https://opensource.com/users/floeff) 选题:[lujun9972](https://github.com/lujun9972) 译者:[sd886393](https://github.com/sd886393) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | What’s a fun way to promote the principles of free software without actually coding? Here’s an idea: open source cooking. For the past eight years, this is what we’ve been doing in Munich.
The idea of *open source cooking* grew out of our regular open source meetups because we realized that cooking and free software have a lot in common.
## Cooking together
The [Munich Open Source Meetings](https://www.opensourcetreffen.de/) is a series of recurring Friday night events that was born in [Café Netzwerk](http://www.cafe-netzwerk.de/) in July 2009. The meetings help provide a way for open source project members and enthusiasts to get to know each other. Our motto is: “Every fourth Friday for free software.” In addition to adding some weekend workshops, we soon introduced other side events, including white sausage breakfast, sauna, and cooking.
The first official *Open Source Cooking* meetup was admittedly rather chaotic, but we’ve improved our routine over the past eight years and 15 events, and we’ve mastered the art of cooking delicious food for 25-30 people.
Looking back at all those evenings, similarities between cooking together and working together in open source communities have become more clear.
## FLOSS principles at play
Here are a few ways cooking together is like working together on open source projects:
- We enjoy collaborating and working toward a result we share.
- We’ve become a community.
- As we share a common interest and enthusiasm, we learn more about ourselves, each other, and what we’re working on together.
- Mistakes happen. We learn from them and share our knowledge to our mutual benefit, so hopefully we avoid repeating the same mistakes.
- Everyone contributes what they’re best at, as everyone has something they’re better at than someone else.
- We motivate others to contribute and join us.
- Coordination is key, but a bit chaotic.
- Everyone benefits from the results!
## Smells like open source
Like any successful open source-related meetup, open source cooking requires some coordination and structure. Ahead of the event, we run a *call for recipes* in which all participants can vote. Rather than throwing a pizza into a microwave, we want to create something delicious and tasty, and so far we’ve had Japanese, Mexican, Hungarian, and Indian food, just to name a few.
Like in real life, cooking together requires having respect and mutual understanding for each other, so we always try to have dishes for vegans, vegetarians, and people with allergies and food preferences. A little beta test at home can be helpful (and fun!) when preparing for the big release.
Scalability matters, and shopping for our “build requirements” at the grocery store easily can eat up three hours. We use a spreadsheet (LibreOffice Calc, naturally) for calculating ingredient requirements and costs.
For every dinner course we have a “package maintainer” working with volunteers to make the menu in time and to find unconventional solutions to problems that arise.
Not everyone is a cook by profession, but with a little bit of help and a good distribution of tasks and responsibilities, it’s rather easy to parallelize things — at some point, 18kg of tomatoes and 100 eggs really don’t worry you anymore, believe me! The only real scalability limit is the stove with its four hotplates, so maybe it’s time to invest in an infrastructure budget.
Time-based releasing, on the other hand, isn’t working as reliably as it should, as we usually serve the main dish at a rather “flexible” time between 21:30 and 01:30, but that’s not a release blocker, either.
And, as in many open source projects, cooking documentation has room for improvement. Cleanup tasks, such as washing the dishes, surely can be optimized further, too.
## Future flavor releases
Some of our future ideas include:
- cooking in a foreign country,
- finally buying and cooking that large 700 € pumpkin, and
- finding a grocery store that donates a percentage of our purchases to a good cause.
The last item is also an important aspect of the free software movement: Always remember there are people who are not living on the sunny side, who do not have the same access to resources, and who are otherwise struggling. How can the open nature of what we’re doing help them?
With all that in mind, I am looking forward to the next Open Source Cooking meetup. If reading about them makes you hungry and you’d like to run your own event, we’d love to see you adapt our idea or even fork it. And we’d love to have you join us in a meetup, and perhaps even do some mentoring and QA.
*Article originally appeared on blog.effenberger.org. Reprinted with permission.*
|
10,053 | 3 个开源日志聚合工具 | https://opensource.com/article/18/9/open-source-log-aggregation-tools | 2018-09-26T23:22:58 | [
"日志",
"聚合"
] | https://linux.cn/article-10053-1.html |
>
> 日志聚合系统可以帮助我们进行故障排除和其它任务。以下是三个主要工具介绍。
>
>
>

<ruby> 指标聚合 <rt> metrics aggregation </rt></ruby>与<ruby> 日志聚合 <rt> log aggregation </rt></ruby>有何不同?日志不能包括指标吗?日志聚合系统不能做与指标聚合系统相同的事情吗?
这些是我经常听到的问题。我还看到供应商把他们的日志聚合系统推销为解决所有可观察性问题的方案。日志聚合是一个有价值的工具,但它通常对时间序列数据的支持不够好。
时间序列的指标聚合系统中几个有价值的功能是专门为时间序列数据定制的<ruby> 固定间隔 <rt> regular interval </rt></ruby>和存储系统。固定间隔允许用户持续地得到真实一致的数学计算结果。如果日志聚合系统也以固定间隔收集指标数据,它有可能以同样的方式工作。但是,它的存储系统没有针对指标聚合系统中典型的查询类型进行优化。使用日志聚合工具中的存储系统处理这些查询将花费更多的资源和时间。
所以,我们知道日志聚合系统可能不适合时间序列数据,但是它有什么好处呢?日志聚合系统是收集事件数据的好地方。这些无规律的活动是非常重要的。最好的例子为 web 服务的访问日志,这些很重要,因为我们想知道什么正在访问我们的系统,什么时候访问的。另一个例子是应用程序错误记录 —— 因为它不是正常的操作记录,所以在故障排除过程中可能很有价值的。
日志记录的一些规则:
* **须**包含时间戳
* **须**格式化为 JSON
* **不**记录无关紧要的事件
* **须**记录所有应用程序的错误
* **可**记录警告错误
* **须**开启日志记录
* **须**以可读的形式记录信息
* **不**在生产环境中记录信息级(info)日志
* **不**记录任何无法阅读或反馈的内容
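上述规则可以用一个简短的示例来说明。下面是一个使用 Python 标准库 `logging` 模块输出带时间戳的 JSON 格式日志的最小示意(字段名称仅为示例,可按需调整):

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render every log record as one JSON object with an ISO-8601 timestamp."""
    def format(self, record):
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)   # INFO and above; DEBUG-level noise is dropped

logger.error("database connection failed")  # DO log all application errors
logger.debug("cache hit")                   # insignificant event: filtered out
```

这样每一行日志都是一条自包含、机器可读的 JSON 记录,便于日志聚合系统解析。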
### 云的成本
当研究日志聚合工具时,云服务可能看起来是一个有吸引力的选择。然而,这可能会带来巨大的成本。当跨数百或数千台主机和应用程序聚合时,日志数据是大量的。在基于云的系统中,数据的接收、存储和检索是昂贵的。
以一个真实的系统来参考,大约 500 个节点和几百个应用程序的集合每天产生 200GB 的日志数据。这个系统可能还有改进的空间,但是在许多 SaaS 产品中,即使将它减少一半,每月也要花费将近 10000 美元。而这通常仅保留 30 天,如果你想查看一年一年的趋势数据,就不可能了。
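这笔账可以粗略估算一下。下面的 Python 小示例按文中 200GB/天 的日志量做一个简单的成本推算;其中每 GB 的单价是假设值,并不对应任何具体厂商的报价:

```python
# Back-of-envelope estimate of monthly SaaS log-ingestion cost.
# PRICE_PER_GB is a hypothetical assumption, not any vendor's real price.
GB_PER_DAY = 200      # daily log volume from the article's example system
DAYS_PER_MONTH = 30
PRICE_PER_GB = 3.3    # assumed USD per ingested GB (hypothetical)

def monthly_cost(gb_per_day, price_per_gb=PRICE_PER_GB, days=DAYS_PER_MONTH):
    """Flat per-GB ingestion cost for one month."""
    return gb_per_day * days * price_per_gb

print(f"full volume: ${monthly_cost(GB_PER_DAY):,.0f}/month")
print(f"half volume: ${monthly_cost(GB_PER_DAY / 2):,.0f}/month")
```

即使把日志量减半(100GB/天),在这个假设单价下每月仍接近一万美元,与文中的数字一致。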
这并不是要劝阻大家使用这些基于云的系统,尤其是对于较小的组织,它们可能非常有价值。这里的目的是指出成本可能会很高,而当真正意识到这些成本时,可能会令人沮丧。本文的其余部分将集中讨论自托管的开源和商业解决方案。
### 工具选择
#### ELK
[ELK](https://www.elastic.co/webinars/introduction-elk-stack),即 Elasticsearch、Logstash 和 Kibana 简称,是最流行的开源日志聚合工具。它被 Netflix、Facebook、微软、LinkedIn 和思科使用。这三个组件都是由 [Elastic](https://www.elastic.co/) 开发和维护的。[Elasticsearch](https://www.elastic.co/products/elasticsearch) 本质上是一个 NoSQL 数据库,以 Lucene 搜索引擎实现的。[Logstash](https://www.elastic.co/products/logstash) 是一个日志管道系统,可以接收数据,转换数据,并将其加载到像 Elasticsearch 这样的应用中。[Kibana](https://www.elastic.co/products/kibana) 是 Elasticsearch 之上的可视化层。
几年前,引入了 Beats 。Beats 是数据采集器。它们简化了将数据运送到 Logstash 的过程。用户不需要了解每种日志的正确语法,而是可以安装一个 Beats 来正确导出 NGINX 日志或 Envoy 代理日志,以便在 Elasticsearch 中有效地使用它们。
安装生产环境级 ELK 套件时,可能会包括其他几个部分,如 [Kafka](http://kafka.apache.org/)、[Redis](https://redis.io/) 和 [NGINX](https://www.nginx.com/)。此外,用 Fluentd 替换 Logstash 也很常见,我们将在后面讨论。这个系统操作起来很复杂,这在早期导致了很多问题和抱怨。目前,这些问题基本上已经被修复,不过它仍然是一个复杂的系统,如果你的团队规模较小,可能就不想尝试它了。
也就是说,有其它可用的服务,所以你不必苦恼于此。可以使用 [Logz.io](https://logz.io/),但是如果你有很多数据,它的标价有点高。当然,你可能规模比较小,没有很多数据。如果你买不起 Logz.io,你可以看看 [AWS Elasticsearch Service](https://aws.amazon.com/elasticsearch-service/)(ES)。ES 是 Amazon Web Services (AWS) 提供的一项服务,它很容易就可以让 Elasticsearch 马上工作起来。它还拥有使用 Lambda 和 S3 将所有 AWS 日志记录到 ES 的工具。这是一个更便宜的选择,但是需要一些管理操作,并有一些功能限制。
ELK 套件的母公司 Elastic [提供](https://www.elastic.co/cloud) 一款更强大的产品,它使用<ruby> 开源核心 <rt> open core </rt></ruby>模式,为分析工具和报告提供了额外的选项。它也可以在谷歌云平台或 AWS 上托管。由于这种工具和托管平台的组合比大多数 SaaS 选项更便宜,同时仍能提供很多价值,这也许是最好的选择。该系统可以有效地取代或提供 [安全信息和事件管理](https://en.wikipedia.org/wiki/Security_information_and_event_management)(SIEM)系统的功能。
ELK 套件通过 Kibana 提供了很好的可视化工具,但是它缺少警报功能。Elastic 在付费的 X-Pack 插件中提供了警报功能,但是在开源系统没有内置任何功能。Yelp 已经开发了一种解决这个问题的方法,[ElastAlert](https://github.com/Yelp/elastalert),不过还有其他方式。这个额外的软件相当健壮,但是它增加了已经复杂的系统的复杂性。
#### Graylog
[Graylog](https://www.graylog.org/) 最近越来越受欢迎,但它是在 2010 年由 Lennart Koopmann 创建并开发的。两年后,一家公司以同样的名字诞生了。尽管它的使用者越来越多,但仍然远远落后于 ELK 套件。这也意味着它具有较少的社区开发特征,但是它可以使用与 ELK 套件相同的 Beats 。由于 Graylog Collector Sidecar 使用 [Go](https://opensource.com/tags/go) 编写,所以 Graylog 在 Go 社区赢得了赞誉。
Graylog 使用 Elasticsearch、[MongoDB](https://www.mongodb.com/) 和底层的 Graylog Server 。这使得它像 ELK 套件一样复杂,也许还要复杂一些。然而,Graylog 附带了内置于开源版本中的报警功能,以及其他一些值得注意的功能,如流、消息重写和地理定位。
流功能可以允许数据在被处理时被实时路由到特定的 Stream。使用此功能,用户可以在单个 Stream 中看到所有数据库错误,在另外的 Stream 中看到 web 服务器错误。当添加新项目或超过阈值时,甚至可以基于这些 Stream 提供警报。延迟可能是日志聚合系统中最大的问题之一,Stream 消除了 Graylog 中的这一问题。一旦日志进入,它就可以通过 Stream 路由到其他系统,而无需完全处理好。
消息重写功能使用开源规则引擎 [Drools](https://www.drools.org/) 。允许根据用户定义的规则文件评估所有传入的消息,从而可以删除消息(称为黑名单)、添加或删除字段或修改消息。
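这种基于规则的消息改写,其思路可以用一小段 Python 来示意(注意:这只是对该思想的通用示意,并非 Drools 或 Graylog 的真实 API):

```python
# Illustration of rule-based message rewriting: each rule inspects a
# message (a dict) and may drop it, or add/remove/modify fields.
def drop_debug(msg):
    """Blacklist rule: discard low-value debug messages."""
    return None if msg.get("level") == "DEBUG" else msg

def redact_password(msg):
    """Modify rule: remove a sensitive field if present."""
    msg.pop("password", None)
    return msg

def tag_source(msg):
    """Add rule: attach a static field to every surviving message."""
    msg["pipeline"] = "rewrite-demo"
    return msg

RULES = [drop_debug, redact_password, tag_source]

def apply_rules(msg):
    """Run each rule in order; stop early if the message is blacklisted."""
    for rule in RULES:
        msg = rule(msg)
        if msg is None:
            return None
    return msg
```

真实的 Drools 规则文件表达能力要强得多,但核心模型是一样的:每条入站消息依次通过用户定义的规则,规则可以丢弃它、增删字段或修改内容。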
Graylog 最酷的功能或许是它的地理定位功能,它支持在地图上绘制 IP 地址。这是一个相当常见的功能,在 Kibana 也可以这样使用,但是它增加了很多价值 —— 特别是如果你想将它用作 SIEM 系统。地理定位功能在系统的开源版本中提供。
如果你需要的话,Graylog 公司会提供对开源版本的收费支持。它还为其企业版提供了一个开源核心模式,提供存档、审计日志记录和其他支持。除此之外,提供支持或托管服务的选择不多,如果你不使用 Graylog 公司的服务,你很可能只能靠自己了。
#### Fluentd
[Fluentd](https://www.fluentd.org/) 是 [Treasure Data](https://www.treasuredata.com/) 开发的,[CNCF](https://www.cncf.io/) 已经将它作为一个孵化项目。它是用 C 和 Ruby 编写的,并被 [AWS](https://aws.amazon.com/blogs/aws/all-your-data-fluentd/) 和 [Google Cloud](https://cloud.google.com/logging/docs/agent/) 所推荐。Fluentd 已经成为许多系统中 Logstash 的常用替代品。它可以作为一个本地聚合器,收集所有节点日志并将其发送到中央存储系统。它本身并不是一个日志聚合系统。
它使用一个强大的插件系统,提供不同数据源和数据输出的快速和简单的集成功能。因为有超过 500 个插件可用,所以你的大多数用例都应该包括在内。如果没有,这听起来是一个为开源社区做出贡献的机会。
Fluentd 由于占用内存少(只有几十兆字节)和高吞吐量特性,是 Kubernetes 环境中的常见选择。在像 [Kubernetes](https://opensource.com/resources/what-is-kubernetes) 这样每个 pod 都带有一个 Fluentd 边车(sidecar)容器的环境中,内存消耗会随着每个新 pod 的创建而线性增加。在这种情况下,使用 Fluentd 将大大降低你的系统资源占用。对于那些用 Java 开发、原本设计为每个节点只运行一个实例(因而内存开销不是大问题)的工具来说,这正在成为一个常见的问题。
---
via: <https://opensource.com/article/18/9/open-source-log-aggregation-tools>
作者:[Dan Barker](https://opensource.com/users/barkerd427) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | How is metrics aggregation different from log aggregation? Can’t logs include metrics? Can’t log aggregation systems do the same things as metrics aggregation systems?
These are questions I hear often. I’ve also seen vendors pitching their log aggregation system as the solution to all observability problems. Log aggregation is a valuable tool, but it isn’t normally a good tool for time-series data.
A couple of valuable features in a time-series metrics aggregation system are the regular interval and the storage system customized specifically for time-series data. The regular interval allows a user to derive real mathematical results consistently. If a log aggregation system is collecting metrics in a regular interval, it can potentially work the same way. However, the storage system isn’t optimized for the types of queries that are typical in a metrics aggregation system. These queries will take more resources and time to process using storage systems found in log aggregation tools.
So, we know a log aggregation system is likely not suitable for time-series data, but what is it good for? A log aggregation system is a great place for collecting event data. These are irregular activities that are significant. An example might be access logs for a web service. These are significant because we want to know what is accessing our systems and when. Another example would be an application error condition—because it is not a normal operating condition, it might be valuable during troubleshooting.
A handful of rules for logging:
- DO include a timestamp
- DO format in JSON
- DON’T log insignificant events
- DO log all application errors
- MAYBE log warnings
- DO turn on logging
- DO write messages in a human-readable form
- DON’T log informational data in production
- DON’T log anything a human can’t read or react to
## Cloud costs
When investigating log aggregation tools, the cloud might seem like an attractive option. However, it can come with significant costs. Logs represent a lot of data when aggregated across hundreds or thousands of hosts and applications. The ingestion, storage, and retrieval of that data are expensive in cloud-based systems.
As a point of reference from a real system, a collection of around 500 nodes with a few hundred apps results in 200GB of log data per day. There’s probably room for improvement in that system, but even reducing it by half will cost nearly $10,000 per month in many SaaS offerings. This often includes retention of only 30 days, which isn’t very long if you want to look at trending data year-over-year.
This isn’t to discourage the use of these systems, as they can be very valuable—especially for smaller organizations. The purpose is to point out that there could be significant costs, and it can be discouraging when they are realized. The rest of this article will focus on open source and commercial solutions that are self-hosted.
## Tool options
### ELK
[ELK](https://www.elastic.co/webinars/introduction-elk-stack), short for Elasticsearch, Logstash, and Kibana, is the most popular open source log aggregation tool on the market. It’s used by Netflix, Facebook, Microsoft, LinkedIn, and Cisco. The three components are all developed and maintained by [Elastic](https://www.elastic.co/). [Elasticsearch](https://www.elastic.co/products/elasticsearch) is essentially a NoSQL, Lucene search engine implementation. [Logstash](https://www.elastic.co/products/logstash) is a log pipeline system that can ingest data, transform it, and load it into a store like Elasticsearch. [Kibana](https://www.elastic.co/products/kibana) is a visualization layer on top of Elasticsearch.
A few years ago, Beats were introduced. Beats are data collectors. They simplify the process of shipping data to Logstash. Instead of needing to understand the proper syntax of each type of log, a user can install a Beat that will export NGINX logs or Envoy proxy logs properly so they can be used effectively within Elasticsearch.
When installing a production-level ELK stack, a few other pieces might be included, like [Kafka](http://kafka.apache.org/), [Redis](https://redis.io/), and [NGINX](https://www.nginx.com/). Also, it is common to replace Logstash with Fluentd, which we’ll discuss later. This system can be complex to operate, which in its early days led to a lot of problems and complaints. These have largely been fixed, but it’s still a complex system, so you might not want to try it if you’re a smaller operation.
That said, there are services available so you don’t have to worry about that. [Logz.io](https://logz.io/) will run it for you, but its list pricing is a little steep if you have a lot of data. Of course, you’re probably smaller and may not have a lot of data. If you can’t afford Logz.io, you could look at something like [AWS Elasticsearch Service](https://aws.amazon.com/elasticsearch-service/) (ES). ES is a service Amazon Web Services (AWS) offers that makes it very easy to get Elasticsearch working quickly. It also has tooling to get all AWS logs into ES using Lambda and S3. This is a much cheaper option, but there is some management required and there are a few limitations.
Elastic, the parent company of the stack, [offers](https://www.elastic.co/cloud) a more robust product that uses the open core model, which provides additional options around analytics tools, and reporting. It can also be hosted on Google Cloud Platform or AWS. This might be the best option, as this combination of tools and hosting platforms offers a cheaper solution than most SaaS options and still provides a lot of value. This system could effectively replace or give you the capability of a [security information and event management](https://en.wikipedia.org/wiki/Security_information_and_event_management) (SIEM) system.
The ELK stack also offers great visualization tools through Kibana, but it lacks an alerting function. Elastic provides alerting functionality within the paid X-Pack add-on, but there is nothing built in for the open source system. Yelp has created a solution to this problem, called [ElastAlert](https://github.com/Yelp/elastalert), and there are probably others. This additional piece of software is fairly robust, but it increases the complexity of an already complex system.
### Graylog
[Graylog](https://www.graylog.org/) has recently risen in popularity, but it got its start when Lennart Koopmann created it back in 2010. A company was born with the same name two years later. Despite its increasing use, it still lags far behind the ELK stack. This also means it has fewer community-developed features, but it can use the same Beats that the ELK stack uses. Graylog has gained praise in the Go community with the introduction of the Graylog Collector Sidecar written in [Go](https://opensource.com/tags/go).
Graylog uses Elasticsearch, [MongoDB](https://www.mongodb.com/), and the Graylog Server under the hood. This makes it as complex to run as the ELK stack and maybe a little more. However, Graylog comes with alerting built into the open source version, as well as several other notable features like streaming, message rewriting, and geolocation.
The streaming feature allows for data to be routed to specific Streams in real time while they are being processed. With this feature, a user can see all database errors in a single Stream and web server errors in a different Stream. Alerts can even be based on these Streams as new items are added or when a threshold is exceeded. Latency is probably one of the biggest issues with log aggregation systems, and Streams eliminate that issue in Graylog. As soon as the log comes in, it can be routed to other systems through a Stream without being processed fully.
The message rewriting feature uses the open source rules engine [Drools](https://www.drools.org/). This allows all incoming messages to be evaluated against a user-defined rules file enabling a message to be dropped (called Blacklisting), a field to be added or removed, or the message to be modified.
The coolest feature might be Graylog’s geolocation capability, which supports plotting IP addresses on a map. This is a fairly common feature and is available in Kibana as well, but it adds a lot of value—especially if you want to use this as your SIEM system. The geolocation functionality is provided in the open source version of the system.
Graylog, the company, charges for support on the open source version if you want it. It also offers an open core model for its Enterprise version that offers archiving, audit logging, and additional support. There aren’t many other options for support or hosting, so you’ll likely be on your own if you don’t use Graylog (the company).
### Fluentd
[Fluentd](https://www.fluentd.org/) was developed at [Treasure Data](https://www.treasuredata.com/), and the [CNCF](https://www.cncf.io/) has adopted it as an Incubating project. It was written in C and Ruby and is recommended by [AWS](https://aws.amazon.com/blogs/aws/all-your-data-fluentd/) and [Google Cloud](https://cloud.google.com/logging/docs/agent/). Fluentd has become a common replacement for Logstash in many installations. It acts as a local aggregator to collect all node logs and send them off to central storage systems. It is not a log aggregation system.
It uses a robust plugin system to provide quick and easy integrations with different data sources and data outputs. Since there are over 500 plugins available, most of your use cases should be covered. If they aren’t, this sounds like an opportunity to contribute back to the open source community.
Fluentd is a common choice in Kubernetes environments due to its low memory requirements (just tens of megabytes) and its high throughput. In an environment like [Kubernetes](https://opensource.com/resources/what-is-kubernetes), where each pod has a Fluentd sidecar, memory consumption will increase linearly with each new pod created. Using Fluentd will drastically reduce your system utilization. This is becoming a common problem with tools developed in Java that are intended to run one per node where the memory overhead hasn’t been a major issue.
|
10,054 | Steam 让我们在 Linux 上玩 Windows 的游戏更加容易 | https://itsfoss.com/steam-play-proton/ | 2018-09-27T00:07:25 | [
"Steam",
"游戏"
] | https://linux.cn/article-10054-1.html | 
总所周知,[Linux 游戏](https://itsfoss.com/linux-gaming-guide/)库中的游戏只有 Windows 游戏库中的一部分,实际上,许多人甚至都不会考虑将操作系统[转换为 Linux](https://itsfoss.com/reasons-switch-linux-windows-xp/),原因很简单,因为他们喜欢的游戏,大多数都不能在这个平台上运行。
在撰写本文时,Steam 上已有超过 5000 种游戏可以在 Linux 上运行,而 Steam 上的游戏总数已经接近 27000 种了。现在 5000 种游戏可能看起来很多,但还没有达到 27000 种,确实没有。
虽然几乎所有的新的<ruby> 独立游戏 <rt> indie game </rt></ruby>都是在 Linux 中推出的,但我们仍然无法在这上面玩很多的 [3A 大作](https://itsfoss.com/triplea-game-review/)。对我而言,虽然这其中有很多游戏我都很希望能有机会玩,但这从来都不是一个非黑即白的问题。因为我主要是玩独立游戏和[复古游戏](https://itsfoss.com/play-retro-games-linux/),所以几乎所有我喜欢的游戏都可以在 Linux 系统上运行。
### 认识 Proton,Steam 的一个 WINE 复刻
现在,这个问题已经成为过去式了,因为本周 Valve [宣布](https://steamcommunity.com/games/221410)要对 Steam Play 进行一次更新,此次更新会将一个名为 Proton 的 Wine 复刻版本添加到 Linux 客户端中。是的,这个工具是开源的,Valve 已经在 [GitHub](https://github.com/ValveSoftware/Proton/) 上开源了源代码,但该功能仍然处于测试阶段,所以你必须使用测试版的 Steam 客户端才能使用这项功能。
#### 使用 proton ,可以在 Linux 系统上通过 Steam 运行更多 Windows 游戏
这对我们这些 Linux 用户来说,实际上意味着什么?简单来说,这意味着我们可以在 Linux 电脑上运行全部 27000 种游戏,而无需配置像 [PlayOnLinux](https://www.playonlinux.com/en/) 或 [Lutris](https://lutris.net/) 这样的东西。我要告诉你的是,配置这些东西有时候会非常让人头疼。
对此更复杂的回答是:这件事听起来好得不太真实,是有原因的。虽然在理论上,你可以用这种方式在 Linux 上玩所有的 Windows 平台上的游戏,但在该功能推出时,官方正式支持的只有一小部分游戏,包括《DOOM》、《最终幻想 VI》、《铁拳 7》、《星球大战:前线 2》等。
#### 你可以在 Linux 上玩所有的 Windows 游戏(理论上)
虽然目前该列表只有大约 30 个游戏,你可以点击“为所有游戏启用 Steam Play”复选框来强制使用 Steam 的 Proton 来安装和运行任意游戏。但你最好不要有太高的期待,它们的稳定性和性能表现不一定有你希望的那么好,所以请把期望值压低一点。

据[这份报告](https://spcr.netlify.com/),已经有超过一千个游戏可以在 Linux 上玩了。按[此指南](https://itsfoss.com/steam-play/)来了解如何启用 Steam Play 测试版本。
#### 体验 Proton,没有我想的那么烂
例如,我安装了一些难度适中的游戏,使用 Proton 来进行安装。其中一个是《上古卷轴 4:湮没》,在我玩这个游戏的两个小时里,它只崩溃了一次,而且几乎是紧跟在游戏教程的自动保存点之后。
我有一块英伟达 Gtx 1050 Ti 的显卡。所以我可以使用 1080P 的高配置来玩这个游戏。而且我没有遇到除了这次崩溃之外的任何问题。我唯一真正感到不爽的只有它的帧数没有原本的高。在 90% 的时间里,游戏的帧数都在 60 帧以上,但我知道它的帧数应该能更高。
我安装和运行的其他所有游戏都运行得很完美,虽然我还没有较长时间地玩过它们中的任何一个。我安装的游戏中包括《森林》、《丧尸围城 4》和《刺客信条 2》。(你觉得我这是喜欢恐怖游戏吗?)
#### 为什么 Steam(仍然)要下注在 Linux 上?
现在,一切都很好,这件事为什么会发生呢?为什么 Valve 要花费时间,金钱和资源来做这样的事?我倾向于认为,他们这样做是因为他们懂得 Linux 社区的价值,但是如果要我老实地说,我不相信我们和它有任何的关系。
如果要我打赌,我会说 Valve 开发 Proton 是因为他们还没有放弃 [Steam Machine](https://store.steampowered.com/sale/steam_machines)。因为 [Steam OS](https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/) 是基于 Linux 的发行版,投资这类东西最符合他们的经济利益:Steam OS 上可用的游戏越多,就会有越多的人愿意购买 Steam Machine。
可能我是错的,但是我敢打赌啊,我们会在不远的未来看到新一批的 Steam Machine。可能我们会在一年内看到它们,也有可能我们再等五年都见不到,谁知道呢!
无论哪种方式,我所知道的是,我终于能兴奋地从我的 Steam 游戏库里玩游戏了。这个游戏库是多年来我通过各种慈善包、促销码和不定时地买的游戏慢慢积累的,只不过是想试试让它在 Lutris 中运行。
#### 为 Linux 上越来越多的游戏而激动?
你怎么看?你对此感到激动吗?或者说,你担心由于现在几乎没有这个需求,愿意开发原生 Linux 游戏的开发者会变得更少?Valve 是喜欢 Linux 社区,还是说他们只是喜欢钱?请在下面的评论区告诉我们您的想法,并请继续关注更多类似的开源软件方面的文章。
---
via: <https://itsfoss.com/steam-play-proton/>
作者:[Phillip Prado](https://itsfoss.com/author/phillip/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[hopefully2333](https://github.com/hopefully2333) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 
It’s no secret that the [Linux gaming](https://itsfoss.com/linux-gaming-guide/) library offers only a fraction of what the Windows library offers. In fact, many people wouldn’t even consider [switching to Linux](https://itsfoss.com/reasons-switch-linux-windows-xp/) simply because most of the games they want to play aren’t available on the platform.
At the time of writing this article, Linux has just over 5,000 games available on Steam compared to the library’s almost 27,000 total games. Now, 5,000 games may be a lot, but it isn’t 27,000 games, that’s for sure.
And though almost every new indie game seems to launch with a Linux release, we are still left without a way to play many [Triple-A](https://itsfoss.com/triplea-game-review/) titles. For me, though there are many titles I would love the opportunity to play, this has never been a make-or-break problem since almost all of my favorite titles are available on Linux since I primarily play indie and [retro games](https://itsfoss.com/play-retro-games-linux/) anyway.
## Meet Proton: a WINE Fork by Steam
Now, that problem is a thing of the past since this week Valve [announced](https://steamcommunity.com/games/221410) a new update to Steam Play that adds a forked version of Wine to the Linux Steam clients called Proton. Yes, the tool is open-source, and Valve has made the source code available on [Github](https://github.com/ValveSoftware/Proton/). The feature is still in beta though, so you must opt into the beta Steam client in order to take advantage of this functionality.
### With proton, more Windows games are available for Linux on Steam
What does that actually mean for us Linux users? In short, it means that Linux computers can now play all 27,000 of those games without needing to configure something like [PlayOnLinux](https://www.playonlinux.com/en/) or [Lutris](https://lutris.net/) to do so! Which, let me tell you, can be quite the headache at times.
The more complicated answer to this is that it sounds too good to be true for a reason. Though, in theory, you can play literally every Windows game on Linux this way, there is only a short list of games that are officially supported at launch, including DOOM, Final Fantasy VI, Tekken 7, Star Wars: Battlefront 2, and several more.
### You can play all Windows games on Linux (in theory)
Though the list only has about 30 games thus far, you can force enable Steam to install and play any game through Proton by marking the “Enable Steam Play for all titles” checkbox. But don’t get your hopes too high. They do not guarantee the stability and performance you may be hoping for, so keep your expectations reasonable.

As per [this report](https://spcr.netlify.com/), there are over a thousand Windows games that are playable on Linux. Follow this tutorial to [learn how to enable Steam Play beta](https://itsfoss.com/steam-play/) right now.
### Experiencing Proton: Not as bad as I expected
For example, I installed a few moderately taxing games to put Proton through its paces. One of which was The Elder Scrolls IV: Oblivion, and in the two hours I played the game, it only crashed once, and it was almost immediately after an autosave point during the tutorial.
I have an Nvidia Gtx 1050 Ti, so I was able to play the game at 1080p with high settings, and I didn’t see a single problem outside of that one crash. The only negative feedback I really have is that the framerate was not nearly as high as it would have been if it was a native game. I got above 60 frames 90% of the time, but I admit it could have been better.
Every other game that I have installed and launched has also worked flawlessly, granted I haven’t played any of them for an extended amount of time yet. Some games I installed include The Forest, Dead Rising 4 and Assassin’s Creed II (can you tell I like horror games?).
### Why is Steam (still) betting on Linux?
Now, this is all fine and dandy, but why did this happen? Why would Valve spend the time, money, and resources needed to implement something like this? I like to think they did so because they value the Linux community, but if I am honest, I don’t believe we had anything to do with it.
If I had to put money on it, I would say Valve has developed Proton because they haven’t given up on [Steam machines](https://store.steampowered.com/sale/steam_machines) yet. And since [Steam OS](https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/) is running on Linux, it is in their best interest financially to invest in something like this. The more games available on Steam OS, the more people might be willing to buy a Steam Machine.
Maybe I am wrong, but I bet this means we will see a new wave of Steam machines coming in the not-so-distant future. Maybe we will see them in one year, or perhaps we won’t see them for another five, who knows!
Either way, all I know is that I am beyond excited to finally play the games from my Steam library that I have slowly accumulated over the years from all of the Humble Bundles, promo codes, and random times I bought a game on sale just in case I wanted to try to get it running in Lutris.
### Excited for more gaming on Linux?
What do you think? Are you excited about this, or are you afraid fewer developers will create native Linux games because there is almost no need to now? Does Valve love the Linux community, or do they love money? Let us know what you think in the comment section below, and check back in for more FOSS content like this. |
10,055 | 每位 Ubuntu 18.04 用户都应该知道的快捷键 | https://itsfoss.com/ubuntu-shortcuts/ | 2018-09-27T10:27:28 | [
"快捷键",
"Ubuntu"
] | https://linux.cn/article-10055-1.html | 了解快捷键能够提升您的生产力。这里有一些实用的 Ubuntu 快捷键助您像专业人士一样使用 Ubuntu。
您可以用键盘和鼠标组合来使用操作系统。
>
> 注意:本文中提到的键盘快捷键适用于 Ubuntu 18.04 GNOME 版。 通常,它们中的大多数(或者全部)也适用于其他的 Ubuntu 版本,但我不能够保证。
>
>
>

### 实用的 Ubuntu 快捷键
让我们来看一看 Ubuntu GNOME 必备的快捷键吧!通用的快捷键如 `Ctrl+C`(复制)、`Ctrl+V`(粘贴)或者 `Ctrl+S`(保存)不再赘述。
注意:Linux 中的 Super 键即键盘上带有 Windows 图标的键,本文中我使用了大写字母,但这不代表你需要按下 `shift` 键,比如,`T` 代表键盘上的 ‘t’ 键,而不代表 `Shift+t`。
#### 1、 Super 键:打开活动搜索界面
使用 `Super` 键可以打开活动菜单。如果你只能在 Ubuntu 上使用一个快捷键,那只能是 `Super` 键。
想要打开一个应用程序?按下 `Super` 键然后搜索应用程序。如果搜索的应用程序未安装,它会推荐来自应用中心的应用程序。
想要看看有哪些正在运行的程序?按下 `Super` 键,屏幕上就会显示所有正在运行的 GUI 应用程序。
想要使用工作区吗?只需按下 `Super` 键,您就可以在屏幕右侧看到工作区选项。
#### 2、 Ctrl+Alt+T:打开 Ubuntu 终端窗口

*使用 Ctrl+alt+T 来打开终端窗口*
想要打开一个新的终端,您只需使用快捷键 `Ctrl+Alt+T`。这是我在 Ubuntu 中最喜欢的键盘快捷键。甚至在我的许多 FOSS 教程中,当需要打开终端窗口时,我都会提到这个快捷键。
#### 3、 Super+L 或 Ctrl+Alt+L:锁屏
当您离开电脑时锁定屏幕,是最基本的安全习惯之一。您可以使用 `Super+L` 快捷键,而不是繁琐地点击屏幕右上角然后选择锁定屏幕选项。
有些系统也会使用 `Ctrl+Alt+L` 键锁定屏幕。
#### 4、 Super+D or Ctrl+Alt+D:显示桌面
按下 `Super+D` 可以最小化所有正在运行的应用程序窗口并显示桌面。
再次按 `Super+D` 将重新打开所有正在运行的应用程序窗口,像之前一样。
您也可以使用 `Ctrl+Alt+D` 来实现此目的。
#### 5、 Super+A:显示应用程序菜单
您可以通过单击屏幕左下角的 9 个点打开 Ubuntu 18.04 GNOME 中的应用程序菜单。 但是一个更快捷的方法是使用 `Super+A` 快捷键。
它将显示应用程序菜单,您可以在其中查看或搜索系统上已安装的应用程序。
您可以使用 `Esc` 键退出应用程序菜单界面。
#### 6、 Super+Tab 或 Alt+Tab:在运行中的应用程序间切换
如果您运行的应用程序不止一个,则可以使用 `Super+Tab` 或 `Alt+Tab` 快捷键在应用程序之间切换。
按住 `Super` 键同时按下 `Tab` 键,即可显示应用程序切换器。 按住 `Super` 的同时,继续按下 `Tab` 键在应用程序之间进行选择。 当光标在所需的应用程序上时,松开 `Super` 和 `Tab` 键。
默认情况下,应用程序切换器从左向右移动。 如果要从右向左移动,可使用 `Super+Shift+Tab` 快捷键。
在这里您也可以用 `Alt` 键代替 `Super` 键。
>
> 提示:如果有多个应用程序实例,您可以使用 Super+` 快捷键在这些实例之间切换。
>
>
>
#### 7、 Super+箭头:移动窗口位置
<https://player.vimeo.com/video/289091549>
这个快捷键也适用于 Windows 系统。 使用应用程序时,按下 `Super+左箭头`,应用程序将贴合屏幕的左边缘,占用屏幕的左半边。
同样,按下 `Super+右箭头`会使应用程序贴合右边缘。
按下 `Super+上箭头`将最大化应用程序窗口,`Super+下箭头`将使应用程序恢复到其正常的大小。
#### 8、 Super+M:切换到通知栏
GNOME 中有一个通知栏,您可以在其中查看系统和应用程序活动的通知,这里也有一个日历。

*通知栏*
使用 `Super+M` 快捷键,您可以打开此通知栏。 如果再次按这些键,将关闭打开的通知托盘。
使用 `Super+V` 也可实现相同的功能。
#### 9、 Super+空格:切换输入法(用于多语言设置)
如果您使用多种语言,可能您的系统上安装了多个输入法。 例如,我需要在 Ubuntu 上同时使用[印地语](https://itsfoss.com/type-indian-languages-ubuntu/)和英语,所以我安装了印地语(梵文)输入法以及默认的英语输入法。
如果您也使用多语言设置,则可以使用 `Super+空格` 快捷键快速更改输入法。
#### 10、 Alt+F2:运行控制台
这适用于高级用户。 如果要运行快速命令,而不是打开终端并在其中运行命令,则可以使用 `Alt+F2` 运行控制台。

*控制台*
当您使用只能在终端运行的应用程序时,这尤其有用。
#### 11、 Ctrl+Q:关闭应用程序窗口
如果您有正在运行的应用程序,可以使用 `Ctrl+Q` 快捷键关闭应用程序窗口。您也可以使用 `Ctrl+W` 来实现此目的。
`Alt+F4` 是关闭应用程序窗口更“通用”的快捷方式。
它不适用于一些应用程序,如 Ubuntu 中的默认终端。
#### 12、 Ctrl+Alt+箭头:切换工作区

*切换工作区*
如果您是使用工作区的重度用户,可以使用 `Ctrl+Alt+上箭头`和 `Ctrl+Alt+下箭头`在工作区之间切换。
#### 13、 Ctrl+Alt+Del:注销
不!在 Linux 中使用著名的快捷键 `Ctrl+Alt+Del` 并不会像在 Windows 中一样打开任务管理器(除非您使用自定义快捷键)。

*注销*
在普通的 GNOME 桌面环境中,您可以使用 `Ctrl+Alt+Del` 键打开关机菜单,但 Ubuntu 并不总是遵循此规范,因此当您在 Ubuntu 中使用 `Ctrl+Alt+Del` 键时,它会打开注销菜单。
### 在 Ubuntu 中使用自定义键盘快捷键
您不是只能使用默认的键盘快捷键,您可以根据需要创建自己的自定义键盘快捷键。
转到“设置->设备->键盘”,您将在这里看到系统的所有键盘快捷键。向下滚动到底部,您将看到“自定义快捷方式”选项。

您需要提供易于识别的快捷键名称、使用快捷键时运行的命令,以及您自定义的按键组合。
### Ubuntu 中你最喜欢的键盘快捷键是什么?
快捷键无穷无尽。如果需要,你可以看一看所有可能的 [GNOME 快捷键](https://wiki.gnome.org/Design/OS/KeyboardShortcuts),看其中有没有你需要用到的快捷键。
您可以学习使用您经常使用应用程序的快捷键,这是很有必要的。例如,我使用 Kazam 进行[屏幕录制](https://itsfoss.com/best-linux-screen-recorders/),键盘快捷键帮助我方便地暂停和开始录像。
您最喜欢、最离不开的 Ubuntu 快捷键是什么?
---
via: <https://itsfoss.com/ubuntu-shortcuts/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[XiatianSummer](https://github.com/XiatianSummer) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

You can use an operating system with the combination of keyboard and mouse but using the keyboard shortcuts saves your time.
Let’s have a look at some of the must-know keyboard shortcuts for Ubuntu GNOME. I have not included universal keyboard shortcuts like Ctrl+C (copy), Ctrl+V (paste) or Ctrl+S (save).
You can also watch a video of these Ubuntu shortcuts in action.
## 1. Super key: Opens Activities search

If you have to use just one keyboard shortcut on Ubuntu, this has to be the one. **Super key in Linux refers to the key with the Windows logo.**
You want to open an application? Press the super key and search for the application. If the application is not installed, it will even suggest applications from the software center.
You want to see the running applications? Press super key and it will show you all the running GUI applications.
You want to use workspaces? Simply press the super key and you can see the workspaces option on the right-hand side.
## 2. Ctrl+Alt+T: Ubuntu terminal shortcut

You want to [open a new terminal in Ubuntu](https://itsfoss.com/open-terminal-ubuntu/)? Ctrl+Alt+T is the shortcut to open terminal in Ubuntu. This is my favorite keyboard shortcut in Ubuntu. I even mention it in various tutorials on It’s FOSS when it involves opening a terminal.
## 3. Super+L or Ctrl+Alt+L: Locks the screen
Locking screen when you are not at your desk is one of the most basic security tips. Instead of going to the top right corner and then choosing the lock screen option, you can simply use the Super+L key combination.
Some systems also use Ctrl+Alt+L keys for locking the screen.
## 4. Super+D or Ctrl+Alt+D: Show desktop
Pressing Super+D minimizes all running application windows and shows the desktop.
Pressing Super+D again will open all the running applications windows as it was previously.
You may also use Ctrl+Alt+D for this purpose.
## 5. Super+A: Shows the application menu

You can open the application menu in Ubuntu GNOME by clicking on the 9 dots on the left bottom of the screen. However, a quicker way would be to use Super+A key combination.
It will show the application menu where you can see the installed applications on your systems and can also search for them.
You can use Esc key to move out of the application menu screen.
## 6. Super+Tab or Alt+Tab: Switch between running applications
If you have more than one applications running, you can switch between the applications using the Super+Tab or Alt+Tab key combinations.
Keep holding the super key and press tab, and you’ll see the application switcher appear. While holding the super key, keep on tapping the tab key to select between applications. When you are at the desired application, release both super and tab keys.
By default, the application switcher moves from left to right. If you want to move from right to left, use the Super+Shift+Tab key combination.
You can also use Alt key instead of Super here.
Tip: If there are multiple instances of an application, you can switch between those instances by using Super+` key combination.
## 7. Super+Arrow keys: Snap windows
This is available in Windows as well. While using an application, press Super and left arrow key and the application will go to the left edge of the screen, taking half of the screen. This [splitting screen feature of GNOME](https://itsfoss.com/ubuntu-split-screen/) is my favorite.
Similarly, pressing the Super and right arrow keys will move the application to the right edge.
Super and up arrow keys will maximize the application window and super and down arrow will bring the application back to its usual self.
## 8. Super+M: Toggle notification tray
GNOME has a notification tray where you can see notifications for various system and application activities. You also have the calendar here.

With Super+M key combination, you can open this notification area. If you press these keys again, an opened notification tray will be closed.
You can also use Super+V for toggling the notification tray.
## 9. Super+Space: Change input keyboard (for multilingual setup)
If you are multilingual, perhaps you have more than one keyboards installed on your system. For example, I use [Hindi on Ubuntu](https://itsfoss.com/type-indian-languages-ubuntu/) along with English and I have Hindi (Devanagari) keyboard installed along with the default English one.
If you also use a multilingual setup, you can quickly change the input keyboard with the Super+Space shortcut.
## 10. Alt+F2: Run console
This is for power users. If you want to run a quick command, instead of opening a terminal and running the command there, you can use Alt+F2 to run the console.

This is particularly helpful when you have to use applications that can only be run from the terminal.
## 11. Ctrl+Q: Close an application window
If you have an application running, you can close the application window using the Ctrl+Q key combination. You can also use Ctrl+W for this purpose.
Alt+F4 is more ‘universal’ shortcut for closing an application window.
It does not work on a few applications, such as the default terminal in Ubuntu.
## 12. Ctrl+Alt+arrow: Move between workspaces

If you are one of the power users who [use workspaces in Ubuntu](https://itsfoss.com/ubuntu-workspaces/), you can use the Ctrl+Alt+Up arrow and Ctrl+Alt+Down arrow keys to switch between the workspaces.
[Ubuntu Workspaces: Enabling, Creating, and Switching](https://itsfoss.com/ubuntu-workspaces/): Ubuntu workspaces let you dabble with multiple windows while keeping things organized. Here’s all you need to know.

## 13. Ctrl+Alt+Del: Log out
No! Like Windows, the famous combination of Ctrl+Alt+Del won’t bring [task manager in Ubuntu](https://itsfoss.com/task-manager-linux/) (unless you use custom keyboard shortcuts for it).

In the normal GNOME [desktop environment](https://itsfoss.com/what-is-desktop-environment/), you can bring the power off menu using the Ctrl+Alt+Del keys but Ubuntu doesn’t always follow the norms. With Ctrl+Alt+Del keys, you [logout from Ubuntu](https://itsfoss.com/ubuntu-logout/).
## Bonus tip: Use custom keyboard shortcuts in Ubuntu
You are not limited to the default keyboard shortcuts. You can create your own custom keyboard shortcuts as you like.
Go to Settings->Devices->Keyboard. You’ll see all the keyboard shortcuts here for your system. Scroll down to the bottom and you’ll see the Custom Shortcuts option.

You have to provide an easy-to-recognize name of the shortcut, the command that will be run when the key combinations are used and of course the keys you are going to use for the shortcut.
## What are your favorite keyboard shortcuts in Ubuntu?
There is no end to shortcuts. If you want, you can have a look at all the possible [GNOME shortcuts](https://wiki.gnome.org/Design/OS/KeyboardShortcuts?ref=itsfoss.com) here and see if there are some more shortcuts you would like to use.
If you use Linux terminal often, you should also check out these [Linux command tips to save your time](https://itsfoss.com/linux-command-tricks/).
You can, and you should also learn keyboard shortcuts for the applications you use most of the time. For example, I use Kazam for [screen recording](https://itsfoss.com/best-linux-screen-recorders/), and the keyboard shortcuts help me a lot in pausing and resuming the recording.
What are your favorite Ubuntu shortcuts that you cannot live without? |
10,056 | Linux DNS 查询剖析(第四部分) | https://zwischenzugs.com/2018/08/06/anatomy-of-a-linux-dns-lookup-part-iv/ | 2018-09-27T22:02:33 | [
"DNS",
"容器"
] | https://linux.cn/article-10056-1.html | 
在 [Linux DNS 查询剖析(第一部分)](/article-9943-1.html),[Linux DNS 查询剖析(第二部分)](/article-9949-1.html) 和 [Linux DNS 查询剖析(第三部分)](/article-9972-1.html) 中,我们已经介绍了以下内容:
* `nsswitch`
* `/etc/hosts`
* `/etc/resolv.conf`
* `ping` 与 `host` 查询方式的对比
* `systemd` 和对应的 `networking` 服务
* `ifup` 和 `ifdown`
* `dhclient`
* `resolvconf`
* `NetworkManager`
* `dnsmasq`
在第四部分中,我将介绍容器如何完成 DNS 查询。你想的没错,也不是那么简单。
### 1) Docker 和 DNS
在 [Linux DNS 查询剖析(第三部分)](/article-9972-1.html) 中,我们介绍了 `dnsmasq`,其工作方式如下:将 DNS 查询指向到 localhost 地址 `127.0.0.1`,同时启动一个进程监听 `53` 端口并处理查询请求。
在按上述方式配置 DNS 的主机上,如果运行了一个 Docker 容器,容器内的 `/etc/resolv.conf` 文件会是怎样的呢?
我们来动手试验一下吧。
按照默认 Docker 创建流程,可以看到如下的默认输出:
```
$ docker run ubuntu cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
search home
nameserver 8.8.8.8
nameserver 8.8.4.4
```
奇怪!
#### 地址 `8.8.8.8` 和 `8.8.4.4` 从何而来呢?
当我思考容器内的 `/etc/resolv.conf` 配置时,我的第一反应是继承主机的 `/etc/resolv.conf`。但只要稍微进一步分析,就会发现这样并不总是有效的。
如果在主机上配置了 `dnsmasq`,那么 `/etc/resolv.conf` 文件总会指向 `127.0.0.1` 这个<ruby> 回环地址 <rt> loopback address </rt></ruby>。如果这个地址被容器继承,容器会在其本身的<ruby> 网络上下文 <rt> networking context </rt></ruby>中使用;由于容器内并没有运行(在 `127.0.0.1` 地址的)DNS 服务器,因此 DNS 查询都会失败。
“有了!”你可能有了新主意:将主机的 IP 地址用作 DNS 服务器地址,其中这个 IP 地址可以从容器的<ruby> 默认路由 <rt> default route </rt></ruby>中获取:
```
root@79a95170e679:/# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
```
#### 使用主机 IP 地址真的可行吗?
从默认路由中,我们可以找到主机的 IP 地址 `172.17.0.1`,进而可以通过手动指定 DNS 服务器的方式进行测试(你也可以更新 `/etc/resolv.conf` 文件并使用 `ping` 进行测试;但我觉得这里很适合介绍新的 `dig` 工具及其 `@` 参数,后者用于指定需要查询的 DNS 服务器地址):
```
root@79a95170e679:/# dig @172.17.0.1 google.com | grep -A1 ANSWER.SECTION
;; ANSWER SECTION:
google.com. 112 IN A 172.217.23.14
```
但是还有一个问题,这种方式仅适用于主机配置了 `dnsmasq` 的情况;如果主机没有配置 `dnsmasq`,主机上并不存在用于查询的 DNS 服务器。
在这个问题上,Docker 的解决方案是忽略所有可能的复杂情况,即无论主机中使用什么 DNS 服务器,容器内都使用 Google 的 DNS 服务器 `8.8.8.8` 和 `8.8.4.4` 完成 DNS 查询。
我的经历:在 2013 年,我遇到了使用 Docker 以来的第一个问题,与 Docker 的这种 DNS 解决方案密切相关。我们公司的网络屏蔽了 `8.8.8.8` 和 `8.8.4.4`,导致容器无法解析域名。
这就是 Docker 容器的情况,但对于包括 Kubernetes 在内的容器 <ruby> 编排引擎 <rt> orchestrators </rt></ruby>,情况又有些不同。
### 2) Kubernetes 和 DNS
在 Kubernetes 中,最小部署单元是 pod;它是一组相互协作的容器,共享 IP 地址(和其它资源)。
Kubernetes 面临的一个额外的挑战是,将 Kubernetes 服务请求(例如,`myservice.kubernetes.io`)通过对应的<ruby> 解析器 <rt> resolver </rt></ruby>,转发到具体服务地址对应的<ruby> 内网地址 <rt> private network </rt></ruby>。这里提到的服务地址被称为归属于“<ruby> 集群域 <rt> cluster domain </rt></ruby>”。集群域可由管理员配置,根据配置可以是 `cluster.local` 或 `myorg.badger` 等。
在 Kubernetes 中,你可以为 pod 指定如下四种 pod 内 DNS 查询的方式。
**Default**
在这种(名称容易让人误解)的方式中,pod 与其所在的主机采用相同的 DNS 查询路径,与前面介绍的主机 DNS 查询一致。我们说这种方式的名称容易让人误解,因为该方式并不是默认选项!`ClusterFirst` 才是默认选项。
如果你希望覆盖 `/etc/resolv.conf` 中的条目,你可以添加到 `kubelet` 的配置中。
**ClusterFirst**
在 `ClusterFirst` 方式中,遇到 DNS 查询请求会做有选择的转发。根据配置的不同,有以下两种方式:
第一种方式配置相对古老但更简明,即采用一个规则:如果请求的域名不是集群域的子域,那么将其转发到 pod 所在的主机。
第二种方式相对新一些,你可以在内部 DNS 中配置选择性转发。
下面给出示例配置并从 [Kubernetes 文档](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods)中选取一张图说明流程:
```
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
stubDomains: |
{"acme.local": ["1.2.3.4"]}
upstreamNameservers: |
["8.8.8.8", "8.8.4.4"]
```
在 `stubDomains` 条目中,可以为特定域名指定特定的 DNS 服务器;而 `upstreamNameservers` 条目则给出,待查询域名不是集群域子域情况下用到的 DNS 服务器。
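为了直观理解这条选择性转发规则,下面给出一个极简的 Python 示意(并非 kube-dns 的真实实现;其中的集群域 `cluster.local` 和集群 DNS 地址 `10.96.0.10` 均为假设值,`stubDomains` 与上游服务器则沿用上文配置):

```python
# 示意代码:按照“集群域 -> stubDomains -> 上游服务器”的顺序选择 DNS 服务器
# (仅演示选择性转发的判断逻辑,不是 kube-dns 的实际实现)

CLUSTER_DOMAIN = "cluster.local"            # 假设的集群域
CLUSTER_DNS = ["10.96.0.10"]                # 假设的集群内 DNS 服务器
STUB_DOMAINS = {"acme.local": ["1.2.3.4"]}  # 与上文配置一致
UPSTREAM = ["8.8.8.8", "8.8.4.4"]           # 与上文配置一致


def pick_nameservers(qname):
    """返回应当处理 qname 这条查询的 DNS 服务器列表。"""
    qname = qname.rstrip(".")
    if qname == CLUSTER_DOMAIN or qname.endswith("." + CLUSTER_DOMAIN):
        return CLUSTER_DNS          # 集群域的子域:交给集群 DNS
    for domain, servers in STUB_DOMAINS.items():
        if qname == domain or qname.endswith("." + domain):
            return servers          # 命中 stubDomain:定向转发
    return UPSTREAM                 # 其余情况:转发到上游服务器


if __name__ == "__main__":
    for name in ("myservice.default.svc.cluster.local",
                 "widgets.acme.local", "google.com"):
        print(name, "->", pick_nameservers(name))
```

可以看到,只有既不属于集群域、也未命中任何 stubDomain 的查询才会走上游服务器。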
这是通过在一个 pod 中运行我们熟知的 `dnsmasq` 实现的。

剩下两种选项都比较小众:
**ClusterFirstWithHostNet**
适用于 pod 使用主机网络的情况,例如绕开 Docker 网络配置,直接使用与 pod 对应主机相同的网络。
**None**
`None` 意味着不改变 DNS,但强制要求你在 `pod` <ruby> 规范文件 <rt> specification </rt></ruby>的 `dnsConfig` 条目中指定 DNS 配置。
### CoreDNS 即将到来
除了上面提到的那些,一旦 `CoreDNS` 取代 Kubernetes 中的 `kube-dns`,情况还会发生变化。`CoreDNS` 相比 `kube-dns` 具有可配置性更高、效率更高等优势。
如果想了解更多,参考[这里](https://coredns.io/)。
如果你对 OpenShift 的网络感兴趣,我曾写过一篇[文章](https://zwischenzugs.com/2017/10/21/openshift-3-6-dns-in-pictures/)可供你参考。但文章中 OpenShift 的版本是 3.6,可能有些过时。
### 第四部分总结
第四部分到此结束,其中我们介绍了:
* Docker DNS 查询
* Kubernetes DNS 查询
* 选择性转发(子域不转发)
* kube-dns
---
via: <https://zwischenzugs.com/2018/08/06/anatomy-of-a-linux-dns-lookup-part-iv/>
作者:[zwischenzugs](https://zwischenzugs.com/) 译者:[pinewall](https://github.com/pinewall) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In [Anatomy of a Linux DNS Lookup – Part I](https://zwischenzugs.com/2018/06/08/anatomy-of-a-linux-dns-lookup-part-i/), [Part II](https://zwischenzugs.com/2018/06/18/anatomy-of-a-linux-dns-lookup-part-ii/), and [Part III](https://zwischenzugs.com/2018/07/06/anatomy-of-a-linux-dns-lookup-part-iii/) I covered:
- `nsswitch`
- `/etc/hosts`
- `/etc/resolv.conf`
- `ping` vs `host` style lookups
- `systemd` and its `networking` service
- `ifup` and `ifdown`
- `dhclient`
- `resolvconf`
- `NetworkManager`
- `dnsmasq`
In Part IV I’ll cover how containers do DNS. Yes, that’s not simple either…
**Other posts in the series:**
[Anatomy of a Linux DNS Lookup – Part I](https://zwischenzugs.com/2018/06/08/anatomy-of-a-linux-dns-lookup-part-i/)
[Anatomy of a Linux DNS Lookup – Part II](https://zwischenzugs.com/2018/06/18/anatomy-of-a-linux-dns-lookup-part-ii/)
[Anatomy of a Linux DNS Lookup – Part III](https://zwischenzugs.com/2018/07/06/anatomy-of-a-linux-dns-lookup-part-iii/)
[Anatomy of a Linux DNS Lookup – Part V – Two Debug Nightmares](https://zwischenzugs.com/2018/09/13/anatomy-of-a-linux-dns-lookup-part-v-two-debug-nightmares/)
# 1) Docker and DNS
In [part III](https://zwischenzugs.com/2018/07/06/anatomy-of-a-linux-dns-lookup-part-iii/) we looked at DNSMasq, and learned that it works by directing DNS queries to the localhost address `127.0.0.1`, and a process listening on port 53 there will accept the request.
So when you run up a Docker container, on a host set up like this, what do you expect to see in its `/etc/resolv.conf`?
Have a think, and try and guess what it will be.
Here’s the default output if you run a default Docker setup:
```
$ docker run ubuntu cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
search home
nameserver 8.8.8.8
nameserver 8.8.4.4
```
Hmmm.
#### Where did the addresses `8.8.8.8` and `8.8.4.4` come from?
When I pondered this question, my first thought was that the container would inherit the `/etc/resolv.conf` settings from the host. But a little thought shows that that won’t always work.
If you have DNSmasq set up on the host, the `/etc/resolv.conf` file will be pointed at the `127.0.0.1` loopback address. If this were passed through to the container, the container would look up DNS addresses from within its own networking context, and there’s no DNS server available within the container context, so the DNS lookups would fail.
‘A-ha!’ you might think: we can always use the host’s DNS server by using the *host’s* IP address, available from within the container as the default route:
```
root@79a95170e679:/# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
```
#### Use the host?
From that we can work out that the ‘host’ is on the IP address `172.17.0.1`, so we could try manually pointing DNS at that using dig (you could also update the `/etc/resolv.conf` and then run `ping`; this just seems like a good time to introduce `dig` and its `@` flag, which points the request at the IP address you specify):
```
root@79a95170e679:/# dig @172.17.0.1 google.com | grep -A1 ANSWER.SECTION
;; ANSWER SECTION:
google.com.             112     IN      A       172.217.23.14
```
However: that might work if you use DNSMasq, but if you don’t it won’t, as there’s no DNS server on the host to look up.
So Docker’s solution to this quandary is to bypass all that complexity and point your DNS lookups to Google’s DNS servers at `8.8.8.8` and `8.8.4.4`, ignoring whatever the host context is.
*Anecdote: This was the source of my first problem with Docker back in 2013. Our corporate network blocked access to those IP addresses, so my containers couldn’t resolve URLs.*
So that’s Docker containers, but container *orchestrators* such as Kubernetes can do different things again…
# 2) Kubernetes and DNS
The unit of container deployment in Kubernetes is a Pod. A pod is a set of co-located containers that (among other things) share the same IP address.
An extra challenge with Kubernetes is to forward requests for Kubernetes services (eg `myservice.kubernetes.io`) to the right resolver, on the private network allocated to those service addresses. These addresses are said to be on the ‘cluster domain’. This cluster domain is configurable by the administrator, so it might be `cluster.local` or `myorg.badger` depending on the configuration you set up.
In Kubernetes you have four options for configuring how DNS lookup works within your pod.
**Default**
This (misleadingly-named) option takes the same DNS resolution path as the host the pod runs on, as in the ‘naive’ DNS lookup described earlier. It’s misleadingly named because it’s not the default! ClusterFirst is.
If you want to override the `/etc/resolv.conf`
entries, you can in your config for the kubelet.
**ClusterFirst**
ClusterFirst does selective forwarding on the DNS request. This is achieved in one of two ways based on the configuration.
In the first, older and simpler setup, a rule was followed where if the cluster domain was not found in the request, then it was forwarded to the host.
In the second, newer approach, you can configure selective forwarding on an internal DNS
Here’s what the config looks like and a diagram lifted from the [Kubernetes docs](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods) which shows the flow:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```
The `stubDomains` entry defines specific DNS servers to use for specific domains. The upstream servers are the servers we defer to when nothing else has picked up the DNS request.
This is achieved with our old friend DNSMasq running in a pod.
The other two options are more niche:
**ClusterFirstWithHostNet**
This applies if you use host network for your pods, ie you bypass the Docker networking setup to use the same network as you would directly on the host the pod is running on.
**None**
None does nothing to DNS but forces you to specify the DNS settings in the `dnsConfig` field in the pod specification.
### CoreDNS Coming
And if that wasn’t enough, this is set to change again as CoreDNS comes to Kubernetes, replacing kube-dns. CoreDNS will offer a few benefits over kube-dns, being more configurable and more efficient.
Find out more [here](https://coredns.io/).
If you’re interested in OpenShift networking, I wrote a post on that [here](https://zwischenzugs.com/2017/10/21/openshift-3-6-dns-in-pictures/). But that was for 3.6 so is likely out of date now.
## End of Part IV
That’s part IV done. In it we covered.
- Docker DNS lookups
- Kubernetes DNS lookups
- Selective forwarding (stub domains)
- kube-dns
10,057 | 你没听说过的 Go 语言惊人优点 | https://medium.freecodecamp.org/here-are-some-amazing-advantages-of-go-that-you-dont-hear-much-about-1af99de3b23a | 2018-09-27T23:39:00 | [
"Go"
] | https://linux.cn/article-10057-1.html | 
在这篇文章中,我将讨论为什么你需要尝试一下 Go 语言,以及应该从哪里学起。
Go 语言可能是最近几年里你经常听人说起的编程语言。尽管它在 2009 年已经发布了,但它最近才开始流行起来。

*根据 Google 趋势,Go 语言非常流行。*
这篇文章不会讨论一些你经常看到的 Go 语言的主要特性。
相反,我想向您介绍一些相当小众但仍然很重要的功能。只有在您决定尝试 Go 语言后,您才会知道这些功能。
这些都是表面上没有体现出来的惊人特性,但它们可以为您节省数周或数月的工作量。而且这些特性还可以使软件开发更加愉快。
阅读本文不需要任何语言经验,所以不必担心你还不了解 Go 语言。如果你想了解更多,可以看看我在底部列出的一些额外的链接。
我们将讨论以下主题:
* GoDoc
* 静态代码分析
* 内置的测试和分析框架
* 竞争条件检测
* 学习曲线
* 反射
* Opinionatedness
* 文化
请注意,这个列表不遵循任何特定顺序来讨论。
### GoDoc
Go 语言非常重视代码中的文档,所以也很简洁。
[GoDoc](https://godoc.org/) 是一个静态代码分析工具,可以直接从代码中创建漂亮的文档页面。GoDoc 的一个显著特点是它不使用任何其他的语言,如 JavaDoc、PHPDoc 或 JSDoc 来注释代码中的结构,只需要用英语。
它使用从代码中获取的尽可能多的信息来概述、构造和格式化文档。它有多而全的功能,比如:交叉引用、代码示例,并直接链接到你的版本控制系统仓库。
而你需要做的只有添加一些像 `// MyFunc transforms Foo into Bar` 这样子的老牌注释,而这些注释也会反映在的文档中。你甚至可以添加一些通过网络界面或者在本地可以实际运行的 [代码示例](https://blog.golang.org/examples)。
GoDoc 是 Go 的唯一文档引擎,整个社区都在使用。这意味着用 Go 编写的每个库或应用程序都具有相同的文档格式。从长远来看,它可以帮你在浏览这些文档时节省大量时间。
例如,这是我最近一个小项目的 GoDoc 页面:[pullkee — GoDoc](https://godoc.org/github.com/kirillrogovoy/pullkee)。
### 静态代码分析
Go 严重依赖于静态代码分析。例如用于文档的 [godoc](https://godoc.org/),用于代码格式化的 [gofmt](https://golang.org/cmd/gofmt/),用于代码风格的 [golint](https://github.com/golang/lint),等等。
它们是如此之多,甚至有一个总揽了它们的项目 [gometalinter](https://github.com/alecthomas/gometalinter#supported-linters) ,将它们组合成了单一的实用程序。
这些工具通常作为独立的命令行应用程序实现,并可轻松与任何编码环境集成。
静态代码分析实际上并不是现代编程的新概念,但是 Go 将其带入了绝对的范畴。我无法估量它为我节省了多少时间。此外,它给你一种安全感,就像有人在你背后支持你一样。
创建自己的分析器非常简单,因为 Go 有专门的内置包来解析和加工 Go 源码。
你可以从这个链接中了解到更多相关内容: [GothamGo Kickoff Meetup: Alan Donovan 的 Go 静态分析工具](https://vimeo.com/114736889)。
### 内置的测试和分析框架
您是否曾尝试为一个从头开始的 JavaScript 项目选择测试框架?如果是这样,你或许会理解经历这种<ruby> 过度分析 <rt> analysis paralysis </rt></ruby>的痛苦。您可能也意识到您没有使用其中 80% 的框架。
一旦您需要进行一些可靠的分析,问题就会重复出现。
Go 附带内置测试工具,旨在简化和提高效率。它为您提供了最简单的 API,并做出最小的假设。您可以将它用于不同类型的测试、分析,甚至可以提供可执行代码示例。
它可以开箱即用地生成便于持续集成的输出,而且它的用法很简单,只需运行 `go test`。当然,它还支持高级功能,如并行运行测试,跳过标记代码,以及其他更多功能。
### 竞争条件检测
您可能已经听说了 Goroutine,它们在 Go 中用于实现并发代码执行。如果你未曾了解过,[这里](https://gobyexample.com/goroutines)有一个非常简短的解释。
无论具体技术如何,复杂应用中的并发编程都不容易,部分原因在于竞争条件的可能性。
简单地说,当几个并发操作以不可预测的顺序完成时,竞争条件就会发生。它可能会导致大量的错误,特别难以追查。如果你曾经花了一天时间调试集成测试,该测试仅在大约 80% 的执行中起作用?这可能是竞争条件引起的。
总而言之,在 Go 中非常重视并发编程,幸运的是,我们有一个强大的工具来捕捉这些竞争条件。它完全集成到 Go 的工具链中。
您可以在这里阅读更多相关信息并了解如何使用它:[介绍 Go 中的竞争条件检测 - Go Blog](https://blog.golang.org/race-detector)。
### 学习曲线
您可以在一个晚上学习**所有**的 Go 语言功能。我是认真的。当然,还有标准库,以及不同的,更具体领域的最佳实践。但是两个小时就足以让你自信地编写一个简单的 HTTP 服务器或命令行应用程序。
Go 语言拥有[出色的文档](https://golang.org/doc/),大部分高级主题已经在他们的博客上进行了介绍:[Go 编程语言博客](https://blog.golang.org/)。
比起 Java(以及 Java 家族的语言)、Javascript、Ruby、Python 甚至 PHP,你可以更轻松地把 Go 语言带到你的团队中。由于环境易于设置,您的团队在完成第一个生产代码之前需要进行的投资要小得多。
### 反射
代码反射本质上是一种隐藏在编译器下并访问有关语言结构的各种元信息的能力,例如变量或函数。
鉴于 Go 是一种静态类型语言,当涉及更松散类型的抽象编程时,它会受到许多各种限制。特别是与 Javascript 或 Python 等语言相比。
此外,Go [没有实现一个名为泛型的概念](https://golang.org/doc/faq#generics),这使得以抽象方式处理多种类型更具挑战性。然而,由于泛型带来的复杂程度,许多人认为不实现泛型对语言实际上是有益的。我完全同意。
根据 Go 的理念(这是一个单独的主题),您应该努力不要过度设计您的解决方案。这也适用于动态类型编程。尽可能坚持使用静态类型,并在确切知道要处理的类型时使用<ruby> 接口 <rt> interface </rt></ruby>。接口在 Go 中非常强大且无处不在。
但是,仍然存在一些情况,你无法知道你处理的数据类型。一个很好的例子是 JSON。您可以在应用程序中来回转换所有类型的数据。字符串、缓冲区、各种数字、嵌套结构等。
为了解决这个问题,您需要一个工具来检查运行时的数据并根据其类型和结构采取不同行为。<ruby> 反射 <rt> Reflect </rt></ruby>可以帮到你。Go 拥有一流的反射包,使您的代码能够像 Javascript 这样的语言一样动态。
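为了更直观地体会这种“在运行时检查数据并按类型分派处理”的思路,下面用 Python 写一个小示意(Python 本身是动态类型语言,这里只是演示同样的思想,并非 Go `reflect` 包的用法):

```python
# 示意代码:按运行时类型递归描述一份 JSON 风格的数据
def describe(value):
    if isinstance(value, bool):      # 注意:bool 是 int 的子类,须先判断
        return "boolean"
    if isinstance(value, (int, float)):
        return "number"
    if isinstance(value, str):
        return "string"
    if isinstance(value, list):
        return "[" + ", ".join(describe(v) for v in value) + "]"
    if isinstance(value, dict):
        items = (f"{k}: {describe(v)}" for k, v in sorted(value.items()))
        return "{" + ", ".join(items) + "}"
    return "null" if value is None else type(value).__name__


print(describe({"name": "go", "stars": 1, "tags": [True, "lang"]}))
```

Go 的 JSON 包内部正是依靠反射完成了类似的按类型分派,只不过是在静态类型语言的约束之下。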
一个重要的警告是知道你使用它所带来的代价 —— 并且只有知道在没有更简单的方法时才使用它。
你可以在这里阅读更多相关信息: [反射的法则 — Go 博客](https://blog.golang.org/laws-of-reflection).
您还可以在此处阅读 JSON 包源码中的一些实际代码: [src/encoding/json/encode.go — Source Code](https://golang.org/src/encoding/json/encode.go)
### Opinionatedness(专制独裁的 Go)
顺便问一下,有这样一个单词吗?
来自 Javascript 世界,我面临的最艰巨的困难之一是决定我需要使用哪些约定和工具。我应该如何设计代码?我应该使用什么测试库?我该怎么设计结构?我应该依赖哪些编程范例和方法?
这有时候基本上让我卡住了。我需要花时间思考这些事情而不是编写代码并满足用户。
首先,我应该注意到我完全知道这些惯例的来源,它总是来源于你或者你的团队。无论如何,即使是一群经验丰富的 Javascript 开发人员也很容易发现他们在实现相同的结果时,而大部分的经验却是在完全不同的工具和范例上。
这导致整个团队中出现过度分析,并且使得个体之间更难以相互协作。
嗯,Go 是不同的。即使您对如何构建和维护代码有很多强烈的意见,例如:如何命名,要遵循哪些结构模式,如何更好地实现并发。但你只有一个每个人都遵循的风格指南。你只有一个内置在基本工具链中的测试框架。
虽然这似乎过于严格,但它为您和您的团队节省了大量时间。当你写代码时,受一点限制实际上是一件好事。在构建新代码时,它为您提供了一种更直接的方法,并且可以更容易地调试现有代码。
因此,大多数 Go 项目在代码方面看起来非常相似。
### 文化
人们说,每当你学习一门新的口语时,你也会沉浸在说这种语言的人的某些文化中。因此,您学习的语言越多,您可能会有更多的变化。
编程语言也是如此。无论您将来如何应用新的编程语言,它总能给你带来新的编程视角或某些特别的技术。
无论是函数式编程,<ruby> 模式匹配 <rt> pattern matching </rt></ruby>还是<ruby> 原型继承 <rt> prototypal inheritance </rt></ruby>。一旦你学会了它们,你就可以随身携带这些编程思想,这扩展了你作为软件开发人员所拥有的问题解决工具集。它们也改变了你阅读高质量代码的方式。
而 Go 在这方面有一项了不起的财富。Go 文化的主要支柱是保持简单,脚踏实地的代码,而不会产生许多冗余的抽象概念,并将可维护性放在首位。大部分时间花费在代码的编写工作上,而不是在修补工具和环境或者选择不同的实现方式上,这也是 Go 文化的一部分。
Go 文化也可以总结为:“应当只用一种方法去做一件事”。
一点注意事项。当你需要构建相对复杂的抽象代码时,Go 通常会妨碍你。好吧,我会说这是简单的权衡。
如果你真的需要编写大量具有复杂关系的抽象代码,那么最好使用 Java 或 Python 等语言。然而,这种情况却很少。
在工作时始终使用最好的工具!
### 总结
你或许之前听说过 Go,或者它暂时在你圈子以外的地方。但无论怎样,在开始新项目或改进现有项目时,Go 可能是您或您团队的一个非常不错的选择。
这不是 Go 的所有惊人的优点的完整列表,只是一些被人低估的特性。
请尝试一下从 [Go 之旅](https://tour.golang.org/) 来开始学习 Go,这将是一个令人惊叹的开始。
如果您想了解有关 Go 的优点的更多信息,可以查看以下链接:
* [你为什么要学习 Go? - Keval Patel](https://medium.com/@kevalpatel2106/why-should-you-learn-go-f607681fad65)
* [告别Node.js - TJ Holowaychuk](https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e52b)
并在评论中分享您的阅读感悟!
即使您不是为了专门寻找新的编程语言语言,也值得花一两个小时来感受它。也许它对你来说可能会变得非常有用。
不断为您的工作寻找最好的工具!
*题图来自 <https://github.com/ashleymcnamara/gophers> 的图稿*
---
via: <https://medium.freecodecamp.org/here-are-some-amazing-advantages-of-go-that-you-dont-hear-much-about-1af99de3b23a>
作者:[Kirill Rogovoy](https://twitter.com/krogovoy) 译者:[imquanquan](https://github.com/imquanquan) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,058 | 如何在 Ubuntu 16.04 强制 APT 包管理器使用 IPv4 | https://www.ostechnix.com/how-to-force-apt-package-manager-to-use-ipv4-in-ubuntu-16-04/ | 2018-09-28T09:02:37 | [
"APT",
"IPv4",
"IPv6"
] | https://linux.cn/article-10058-1.html | 
**APT**, 是 **A**dvanced **P**ackage **T**ool 的缩写,是基于 Debian 的系统的默认包管理器。我们可以使用 APT 安装、更新、升级和删除应用程序。最近,我一直遇到一个奇怪的错误。每当我尝试更新我的 Ubuntu 16.04 时,我都会收到此错误 - **“0% [Connecting to in.archive.ubuntu.com (2001:67c:1560:8001::14)]”** ,同时更新流程会卡住很长时间。我的网络连接没问题,我可以 ping 通所有网站,包括 Ubuntu 官方网站。在搜索了一番谷歌后,我意识到 Ubuntu 镜像站点有时无法通过 IPv6 访问。在我强制将 APT 包管理器在更新系统时使用 IPv4 代替 IPv6 访问 Ubuntu 镜像站点后,此问题得以解决。如果你遇到过此错误,可以按照以下说明解决。
### 强制 APT 包管理器在 Ubuntu 16.04 中使用 IPv4
要在更新和升级 Ubuntu 16.04 LTS 系统时强制 APT 使用 IPv4 代替 IPv6,只需使用以下命令:
```
$ sudo apt-get -o Acquire::ForceIPv4=true update
$ sudo apt-get -o Acquire::ForceIPv4=true upgrade
```
瞧!这次更新很快就完成了。
你还可以使用以下命令在 `/etc/apt/apt.conf.d/99force-ipv4` 中添加以下行,以便将来对所有 `apt-get` 事务保持持久性:
```
$ echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4
```
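顺带一提,如果想确认某个镜像站域名在只允许 IPv4 的情况下能解析出哪些地址,可以用 Python 标准库快速检查一下。下面是一个示意脚本(与 APT 的配置无关,只是同样把地址族限制为 IPv4,便于对比验证):

```python
# 示意代码:只通过 IPv4(AF_INET)解析主机名
import socket


def ipv4_addresses(host):
    """仅用 IPv4 解析 host,返回去重排序后的地址列表。"""
    infos = socket.getaddrinfo(host, 80, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})


if __name__ == "__main__":
    # 这里以 localhost 为例;也可以换成任意 Ubuntu 镜像站域名试试
    print(ipv4_addresses("localhost"))
```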
**免责声明:**
我不知道最近是否有人遇到这个问题,但我今天在我的 Ubuntu 16.04 LTS 虚拟机中遇到了至少四、五次这样的错误,我按照上面的说法解决了这个问题。我不确定这是推荐的解决方案。请浏览 Ubuntu 论坛来确保此方法合法。由于我只是一个 VM,我只将它用于测试和学习目的,我不介意这种方法的真实性。请自行承担使用风险。
希望这有帮助。还有更多的好东西。敬请关注!
干杯!
---
via: <https://www.ostechnix.com/how-to-force-apt-package-manager-to-use-ipv4-in-ubuntu-16-04/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,059 | 让 Python 代码更易维护的七种武器 | https://opensource.com/article/18/7/7-python-libraries-more-maintainable-code | 2018-09-29T10:48:35 | [
"Python",
"代码"
] | https://linux.cn/article-10059-1.html |
>
> 检查你的代码的质量,通过这些外部库使其更易维护。
>
>
>

>
> 可读性很重要。
> — <ruby> <a href="https://www.python.org/dev/peps/pep-0020/"> Python 之禅 </a> <rt> The Zen of Python </rt></ruby>,Tim Peters
>
>
>
随着软件项目进入“维护模式”,对可读性和编码标准的要求很容易落空(甚至从一开始就没有建立过那些标准)。然而,在代码库中保持一致的代码风格和测试标准能够显著减轻维护的压力,也能确保新的开发者能够快速了解项目的情况,同时能更好地全程保持应用程序的质量。
使用外部库来检查代码的质量不失为保护项目未来可维护性的一个好方法。以下会推荐一些我们最喜爱的[检查代码](https://en.wikipedia.org/wiki/Lint_(software))(包括检查 PEP 8 和其它代码风格错误)的库,用它们来强制保持代码风格一致,并确保在项目成熟时有一个可接受的测试覆盖率。
### 检查你的代码风格
[PEP 8](https://www.python.org/dev/peps/pep-0008/) 是 Python 代码风格规范,它规定了类似行长度、缩进、多行表达式、变量命名约定等内容。尽管你的团队自身可能也会有稍微不同于 PEP 8 的代码风格规范,但任何代码风格规范的目标都是在代码库中强制实施一致的标准,使代码的可读性更强、更易于维护。下面三个库就可以用来帮助你美化代码。
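下面用一个小函数直观展示整改前后的差别(两个版本行为完全相同,仅代码风格不同;示例为本文自拟,并非出自 PEP 8 原文):

```python
# 整改前:命名不规范、运算符两侧缺少空格、用 == 与 None 比较、语句挤在一行
def CalcAvg( nums ):
    if nums==None or len(nums)==0: return 0
    return sum(nums)/len(nums)


# 整改后(符合 PEP 8):蛇形命名、规范空格、用 is None 判断、语句分行
def calc_avg(nums):
    if nums is None or len(nums) == 0:
        return 0
    return sum(nums) / len(nums)


assert CalcAvg([1, 2, 3]) == calc_avg([1, 2, 3])
```

后文介绍的 Pylint、Flake8 等工具能自动指出“整改前”版本中的这类问题。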
#### 1、 Pylint
[Pylint](https://www.pylint.org/) 是一个检查违反 PEP 8 规范和常见错误的库。它在一些流行的[编辑器和 IDE](https://pylint.readthedocs.io/en/latest/user_guide/ide-integration.html) 中都有集成,也可以单独从命令行运行。
执行 `pip install pylint` 安装 Pylint 。然后运行 `pylint [options] path/to/dir` 或者 `pylint [options] path/to/module.py` 就可以在命令行中使用 Pylint,它会向控制台输出代码中违反规范和出现错误的地方。
你还可以使用 `pylintrc` [配置文件](https://pylint.readthedocs.io/en/latest/user_guide/run.html#command-line-options)来自定义 Pylint 对哪些代码错误进行检查。
#### 2、 Flake8
[Flake8](http://flake8.pycqa.org/en/latest/) 是“将 PEP 8、Pyflakes(类似 Pylint)、McCabe(代码复杂性检查器)和第三方插件整合到一起,以检查 Python 代码风格和质量的一个 Python 工具”。
执行 `pip install flake8` 安装 flake8 ,然后执行 `flake8 [options] path/to/dir` 或者 `flake8 [options] path/to/module.py` 可以查看报出的错误和警告。
和 Pylint 类似,Flake8 允许通过[配置文件](http://flake8.pycqa.org/en/latest/user/configuration.html#configuration-locations)来自定义检查的内容。它有非常清晰的文档,包括一些有用的[提交钩子](http://flake8.pycqa.org/en/latest/user/using-hooks.html),可以将自动检查代码纳入到开发工作流程之中。
Flake8 也可以集成到一些流行的编辑器和 IDE 当中,但在文档中并没有详细说明。要将 Flake8 集成到喜欢的编辑器或 IDE 中,可以搜索插件(例如 [Sublime Text 的 Flake8 插件](https://github.com/SublimeLinter/SublimeLinter-flake8))。
#### 3、 Isort
[Isort](https://github.com/timothycrosley/isort) 这个库能将你在项目中导入的库按字母顺序排序,并将其[正确划分为不同部分](https://github.com/timothycrosley/isort#how-does-isort-work)(例如标准库、第三方库、自建的库等)。这样提高了代码的可读性,并且可以在导入的库较多的时候轻松找到各个库。
执行 `pip install isort` 安装 isort,然后执行 `isort path/to/module.py` 就可以运行了。[文档](https://github.com/timothycrosley/isort#using-isort)中还提供了更多的配置项,例如通过[配置](https://github.com/timothycrosley/isort#configuring-isort) `.isort.cfg` 文件来决定 isort 如何处理一个库的多行导入。
和 Flake8、Pylint 一样,isort 也提供了将其与流行的[编辑器和 IDE](https://github.com/timothycrosley/isort/wiki/isort-Plugins) 集成的插件。
### 分享你的代码风格
每次文件发生变动之后都用命令行手动检查代码是一件痛苦的事,你可能也不太喜欢通过运行 IDE 中某个插件来实现这个功能。同样地,你的同事可能会用不同的代码检查方式,也许他们的编辑器中也没有那种插件,甚至你自己可能也不会严格检查代码和按照警告来更正代码。总之,你分享出来的代码库将会逐渐地变得混乱且难以阅读。
一个很好的解决方案是使用一个库,自动将代码按照 PEP 8 规范进行格式化。我们推荐的三个库都有不同的自定义级别来控制如何格式化代码。其中有一些设置较为特殊,例如 Pylint 和 Flake8 ,你需要先行测试,看看是否有你无法忍受但又不能修改的默认配置。
#### 4、 Autopep8
[Autopep8](https://github.com/hhatto/autopep8) 可以自动格式化指定的模块中的代码,包括重新缩进行、修复缩进、删除多余的空格,并重构常见的比较错误(例如布尔值和 `None` 值)。你可以查看文档中完整的[更正列表](https://github.com/hhatto/autopep8#id4)。
运行 `pip install --upgrade autopep8` 安装 Autopep8。然后执行 `autopep8 --in-place --aggressive --aggressive <filename>` 就可以重新格式化你的代码。`aggressive` 选项的数量表示 Auotopep8 在代码风格控制上有多少控制权。在这里可以详细了解 [aggressive](https://github.com/hhatto/autopep8#id5) 选项。
#### 5、 Yapf
[Yapf](https://github.com/google/yapf) 是另一种有自己的[配置项](https://github.com/google/yapf#usage)列表的重新格式化代码的工具。它与 Autopep8 的不同之处在于它不仅会指出代码中违反 PEP 8 规范的地方,还会对没有违反 PEP 8 但代码风格不一致的地方重新格式化,旨在令代码的可读性更强。
执行 `pip install yapf` 安装 Yapf,然后执行 `yapf [options] path/to/dir` 或 `yapf [options] path/to/module.py` 可以对代码重新格式化。[定制选项](https://github.com/google/yapf#usage)的完整列表在这里。
#### 6、 Black
[Black](https://github.com/ambv/black) 在代码检查工具当中算是比较新的一个。它与 Autopep8 和 Yapf 类似,但限制较多,没有太多的自定义选项。这样的好处是你不需要去决定使用怎么样的代码风格,让 Black 来给你做决定就好。你可以在这里查阅 Black [有限的自定义选项](https://github.com/ambv/black#command-line-options)以及[如何在配置文件中对其进行设置](https://github.com/ambv/black#pyprojecttoml)。
Black 依赖于 Python 3.6+,但它可以格式化用 Python 2 编写的代码。执行 `pip install black` 安装 Black,然后执行 `black path/to/dir` 或 `black path/to/module.py` 就可以使用 Black 优化你的代码。
### 检查你的测试覆盖率
如果你正在进行编写测试,你需要确保提交到代码库的新代码都已经测试通过,并且不会降低测试覆盖率。虽然测试覆盖率不是衡量测试有效性和充分性的唯一指标,但它是确保项目遵循基本测试标准的一种方法。对于计算测试覆盖率,我们推荐使用 Coverage 这个库。
#### 7、 Coverage
[Coverage](https://coverage.readthedocs.io/en/latest/) 有数种显示测试覆盖率的方式,包括将结果输出到控制台或 HTML 页面,并指出哪些具体哪些地方没有被覆盖到。你可以通过[配置文件](https://coverage.readthedocs.io/en/latest/config.html)自定义 Coverage 检查的内容,让你更方便使用。
执行 `pip install coverage` 安装 Coverage。然后执行 `coverage [path/to/module.py] [args]` 可以运行程序并查看输出结果。如果要查看哪些代码行没有被覆盖,执行 `coverage report -m` 即可。
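为了说明“行覆盖率”统计的是什么,下面用标准库 `sys.settrace` 写一个极简示意(coverage.py 的原理与之类似,但功能完整得多;这段代码仅用于演示概念):

```python
# 示意代码:统计某个函数在一次调用中实际执行到的行号
import sys


def measure_lines(func, *args):
    """调用 func(*args),返回其函数体内被执行到的行号集合。"""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed


def grade(score):
    if score >= 60:
        return "pass"
    return "fail"        # 只有 score < 60 时才会覆盖到这一行


hit = measure_lines(grade, 90)
print(sorted(hit))       # grade(90) 不会执行到 return "fail" 那一行
```

只测试及格分数时,`return "fail"` 这一行永远不会被覆盖,覆盖率报告正是据此提醒你补充用例。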
### 持续集成工具
<ruby> 持续集成 <rt> Continuous integration </rt></ruby>(CI)是在合并和部署代码之前自动检查代码风格错误和测试覆盖率最小值的过程。很多免费或付费的工具都可以用于执行这项工作,具体的过程不在本文中赘述,但 CI 过程是令代码更易读和更易维护的重要步骤,关于这一部分可以参考 [Travis CI](https://travis-ci.org/) 和 [Jenkins](https://jenkins.io/)。
以上这些只是用于检查 Python 代码的各种工具中的其中几个。如果你有其它喜爱的工具,欢迎在评论中分享。
---
via: <https://opensource.com/article/18/7/7-python-libraries-more-maintainable-code>
作者:[Jeff Triplett](https://opensource.com/users/laceynwilliams) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Readability counts.
—[The Zen of Python], Tim Peters
It's easy to let readability and coding standards fall by the wayside when a software project moves into "maintenance mode." (It's also easy to never establish those standards in the first place.) But maintaining consistent style and testing standards across a codebase is an important part of decreasing the maintenance burden, ensuring that future developers are able to quickly grok what's happening in a new-to-them project and safeguarding the health of the app over time.
A great way to protect the future maintainability of a project is to use external libraries to check your code health for you. These are a few of our favorite libraries for [linting code](https://en.wikipedia.org/wiki/Lint_(software)) (checking for PEP 8 and other style errors), enforcing a consistent style, and ensuring acceptable test coverage as a project reaches maturity.
## Check your code style
[PEP 8](https://www.python.org/dev/peps/pep-0008/) is the Python code style guide, and it sets out rules for things like line length, indentation, multi-line expressions, and naming conventions. Your team might also have your own style rules that differ slightly from PEP 8. The goal of any code style guide is to enforce consistent standards across a codebase to make it more readable, and thus more maintainable. Here are three libraries to help prettify your code.
### 1. Pylint
[Pylint](https://www.pylint.org/) is a library that checks for PEP 8 style violations and common errors. It integrates well with several popular [editors and IDEs](https://pylint.readthedocs.io/en/latest/user_guide/ide-integration.html) and can also be run from the command line.
To install, run `pip install pylint`.
To use Pylint from the command line, run `pylint [options] path/to/dir` or `pylint [options] path/to/module.py`. Pylint will output warnings about style violations and other errors to the console.
You can customize what errors Pylint checks for with a [configuration file](https://pylint.readthedocs.io/en/latest/user_guide/run.html#command-line-options) called `pylintrc`.
### 2. Flake8
[Flake8](http://flake8.pycqa.org/en/latest/) is a "Python tool that glues together PEP8, Pyflakes (similar to Pylint), McCabe (code complexity checker), and third-party plugins to check the style and quality of some Python code."
To use Flake8, run `pip install flake8`. Then run `flake8 [options] path/to/dir` or `flake8 [options] path/to/module.py` to see its errors and warnings.
Like Pylint, Flake8 permits some customization for what it checks for with a [configuration file](http://flake8.pycqa.org/en/latest/user/configuration.html#configuration-locations). It has very clear docs, including some on useful [commit hooks](http://flake8.pycqa.org/en/latest/user/using-hooks.html) to automatically check your code as part of your development workflow.
Flake8 integrates with popular editors and IDEs, but those instructions generally aren't found in the docs. To integrate Flake8 with your favorite editor or IDE, search online for plugins (for example, [Flake8 plugin for Sublime Text](https://github.com/SublimeLinter/SublimeLinter-flake8)).
### 3. Isort
[Isort](https://github.com/timothycrosley/isort) is a library that sorts your imports alphabetically and breaks them up into [appropriate sections](https://github.com/timothycrosley/isort#how-does-isort-work) (e.g., standard library imports, third-party library imports, imports from your own project, etc.). This increases readability and makes it easier to locate imports if you have a lot of them in your module.
Install isort with `pip install isort`, and run it with `isort path/to/module.py`. More configuration options are in the [documentation](https://github.com/timothycrosley/isort#using-isort). For example, you can [configure](https://github.com/timothycrosley/isort#configuring-isort) how isort handles multi-line imports from one library in an `.isort.cfg` file.
Like Flake8 and Pylint, isort also provides plugins that integrate it with popular [editors and IDEs](https://github.com/timothycrosley/isort/wiki/isort-Plugins).
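As a quick illustration of the grouping isort aims for, here is an import block laid out in its default section order (the third-party and project names in the comments are hypothetical placeholders, commented out so the snippet stays runnable):

```python
# Standard library imports come first, alphabetized
import os
import sys
from collections import OrderedDict

# Third-party imports would form the next section, e.g.:
# import requests
# from flask import Flask

# Imports from your own project come last, e.g.:
# from myproject.utils import helper

print(os.name, sys.version_info[0], OrderedDict([("a", 1)]))
```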
## Outsource your code style
Remembering to run linters manually from the command line for each file you change is a pain, and you might not like how a particular plugin behaves with your IDE. Also, your colleagues might prefer different linters or might not have plugins for their favorite editors, or you might be less meticulous about always running the linter and correcting the warnings. Over time, the codebase you all share will get messy and harder to read.
A great solution is to use a library that automatically reformats your code into something that passes PEP 8 for you. The three libraries we recommend all have different levels of customization and different defaults for how they format code. Some of these are more opinionated than others, so like with Pylint and Flake8, you'll want to test these out to see which offers the customizations you can't live without… and the unchangeable defaults you can live *with*.
### 4. Autopep8
[Autopep8](https://github.com/hhatto/autopep8) automatically formats the code in the module you specify. It will re-indent lines, fix indentation, remove extraneous whitespace, and refactor common comparison mistakes (like with booleans and `None`). See a full [list of corrections](https://github.com/hhatto/autopep8#id4) in the docs.
To install, run `pip install --upgrade autopep8`. To reformat code in place, run `autopep8 --in-place --aggressive --aggressive <filename>`. The `--aggressive` flags (and the number of them) indicate how much control you want to give autopep8 over your code style. Read more about [aggressive](https://github.com/hhatto/autopep8#id5) options.
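For a sense of the comparison refactors mentioned above, this is the kind of rewrite involved (an illustrative before/after, not autopep8's literal output):

```python
# Before: equality comparison against the None singleton (PEP 8 violation E711)
def is_missing_old(value):
    return value == None

# After: identity comparison, the kind of fix autopep8's aggressive mode applies
def is_missing(value):
    return value is None

print(is_missing(None), is_missing(0))  # True False
```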
### 5. Yapf
[Yapf](https://github.com/google/yapf) is yet another option for reformatting code that comes with its own list of [configuration options](https://github.com/google/yapf#usage). It differs from autopep8 in that it doesn't just address PEP 8 violations. It also reformats code that doesn't violate PEP 8 specifically but isn't styled consistently or could be formatted better for readability.
To install, run `pip install yapf`. To reformat code, run `yapf [options] path/to/dir` or `yapf [options] path/to/module.py`. There is also a full list of [customization options](https://github.com/google/yapf#usage).
### 6. Black
[Black](https://github.com/ambv/black) is the new kid on the block for linters that reformat code in place. It's similar to autopep8 and Yapf, but way more opinionated. It has very few options for customization, which is kind of the point. The idea is that you shouldn't have to make decisions about code style; the only decision to make is to let Black decide for you. You can read about [limited customization options](https://github.com/ambv/black#command-line-options) and instructions on [storing them in a configuration file](https://github.com/ambv/black#pyprojecttoml).
Black requires Python 3.6+ but can format Python 2 code. To use, run `pip install black`. To prettify your code, run `black path/to/dir` or `black path/to/module.py`.
## Check your test coverage
You're writing tests, right? Then you will want to make sure new code committed to your codebase is tested and doesn't drop your overall amount of test coverage. While percentage of test coverage is not the only metric you should use to measure the effectiveness and sufficiency of your tests, it is one way to ensure basic testing standards are being followed in your project. For measuring test coverage, we have one recommendation: Coverage.
### 7. Coverage
[Coverage](https://coverage.readthedocs.io/en/latest/) has several options for the way it reports your test coverage to you, including outputting results to the console or to an HTML page and indicating which line numbers are missing test coverage. You can set up a [configuration file](https://coverage.readthedocs.io/en/latest/config.html) to customize what Coverage checks for and make it easier to run.
To install, run `pip install coverage`. To run a program and see its output, run `coverage run [path/to/module.py] [args]`. To see a report of which lines of code are missing coverage, run `coverage report -m`.
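Coverage's real implementation is far more sophisticated, but the core idea — record which lines execute under a trace hook, then compare runs — can be sketched in a few lines of stdlib Python. This is a toy illustration of the concept, not how you would use the library:

```python
import sys

def executed_lines(func, *args):
    """Run func(*args) and return the set of line offsets executed inside it."""
    hits = set()
    def tracer(frame, event, arg):
        # Only record 'line' events that occur inside the function under test
        if event == "line" and frame.f_code is func.__code__:
            hits.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits

def absolute(x):
    if x < 0:
        return -x
    return x

pos_only = executed_lines(absolute, 5)
full = pos_only | executed_lines(absolute, -5)
print(sorted(full - pos_only))  # the `return -x` line was never hit with x=5
```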
## Continuous integration tools
Continuous integration (CI) is a series of processes you can run to automatically check for linter errors and test coverage minimums before you merge and deploy code. There are lots of free or paid tools to automate this process, and a thorough walkthrough is beyond the scope of this article. But because setting up a CI process is an important step in removing blocks to more readable and maintainable code, you should investigate continuous integration tools in general; check out [Travis CI](https://travis-ci.org/) and [Jenkins](https://jenkins.io/) in particular.
These are only a handful of the libraries available to check your Python code. If you have a favorite that's not on this list, please share it in the comments.
|
10,060 | 让你提高效率的 Linux 技巧 | https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-you-can-love.html | 2018-09-29T11:45:47 | [
"命令行"
] | https://linux.cn/article-10060-1.html |
>
> 想要在 Linux 命令行工作中提高效率,你需要使用一些技巧。
>
>
>

巧妙的 Linux 命令行技巧能让你节省时间、避免出错,还能让你记住和复用各种复杂的命令,专注在需要做的事情本身,而不是你要怎么做。以下介绍一些好用的命令行技巧。
### 命令编辑
如果要对一个已输入的命令进行修改,可以使用 `^a`(`ctrl + a`)或 `^e`(`ctrl + e`)将光标快速移动到命令的开头或命令的末尾。
还可以使用 `^` 字符实现对上一个命令的文本替换并重新执行命令,例如 `^before^after^` 相当于把上一个命令中的 `before` 替换为 `after` 然后重新执行一次。
```
$ eho hello world <== 错误的命令
Command 'eho' not found, did you mean:
command 'echo' from deb coreutils
command 'who' from deb coreutils
Try: sudo apt install <deb name>
$ ^e^ec^ <== 替换
echo hello world
hello world
```
### 使用远程机器的名称登录到机器上
如果需要使用命令行登录其它机器,可以考虑添加别名。在别名中,可以填入需要登录的用户名(与本地系统上的用户名可能相同,也可能不同)以及远程机器的登录信息。例如使用 `server_name='ssh -v -l username IP-address'` 这样的别名命令:
```
$ alias butterfly='ssh -v -l jdoe 192.168.0.11'
```
也可以通过在 `/etc/hosts` 文件中添加记录或者在 DNS 服务器中加入解析记录来把 IP 地址替换成易记的机器名称。
执行 `alias` 命令可以列出机器上已有的别名。
```
$ alias
alias butterfly='ssh -v -l jdoe 192.168.0.11'
alias c='clear'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias list_repos='grep ^[^#] /etc/apt/sources.list /etc/apt/sources.list.d/*'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias show_dimensions='xdpyinfo | grep '\''dimensions:'\'''
```
只要将新的别名添加到 `~/.bashrc` 或类似的文件中,就可以让别名在每次登录后都能立即生效。
### 冻结、解冻终端界面
`^s`(`ctrl + s`)将通过执行流量控制命令 XOFF 来停止终端输出内容,这会对 PuTTY 会话和桌面终端窗口产生影响。如果误输入了这个命令,可以使用 `^q`(`ctrl + q`)让终端重新响应。所以只需要记住 `^q` 这个组合键就可以了,毕竟这种情况并不多见。
### 复用命令
Linux 提供了很多让用户复用命令的方法,其核心是通过历史缓冲区收集执行过的命令。复用命令的最简单方法是输入 `!` 然后接最近使用过的命令的开头字母;当然也可以按键盘上的向上箭头,直到看到要复用的命令,然后按回车键。还可以先使用 `history` 显示命令历史,然后输入 `!` 后面再接命令历史记录中需要复用的命令旁边的数字。
```
!! <== 复用上一条命令
!ec <== 复用上一条以 “ec” 开头的命令
!76 <== 复用命令历史中的 76 号命令
```
### 查看日志文件并动态显示更新内容
使用形如 `tail -f /var/log/syslog` 的命令可以查看指定的日志文件,并动态显示文件中增加的内容,需要监控向日志文件中追加内容的的事件时相当有用。这个命令会输出文件内容的末尾部分,并逐渐显示新增的内容。
```
$ tail -f /var/log/auth.log
Sep 17 09:41:01 fly CRON[8071]: pam_unix(cron:session): session closed for user smmsp
Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session opened for user root
Sep 17 09:45:01 fly CRON[8115]: pam_unix(cron:session): session closed for user root
Sep 17 09:47:00 fly sshd[8124]: Accepted password for shs from 192.168.0.22 port 47792
Sep 17 09:47:00 fly sshd[8124]: pam_unix(sshd:session): session opened for user shs by
Sep 17 09:47:00 fly systemd-logind[776]: New session 215 of user shs.
Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session opened for user root
Sep 17 09:55:01 fly CRON[8208]: pam_unix(cron:session): session closed for user root
<== 等待显示追加的内容
```
### 寻求帮助
对于大多数 Linux 命令,都可以通过在输入命令后加上选项 `--help` 来获得这个命令的作用、用法以及它的一些相关信息。除了 `man` 命令之外, `--help` 选项可以让你在不使用所有扩展选项的情况下获取到所需要的内容。
```
$ mkdir --help
Usage: mkdir [OPTION]... DIRECTORY...
Create the DIRECTORY(ies), if they do not already exist.
Mandatory arguments to long options are mandatory for short options too.
-m, --mode=MODE set file mode (as in chmod), not a=rwx - umask
-p, --parents no error if existing, make parent directories as needed
-v, --verbose print a message for each created directory
-Z set SELinux security context of each created directory
to the default type
--context[=CTX] like -Z, or if CTX is specified then set the SELinux
or SMACK security context to CTX
--help display this help and exit
--version output version information and exit
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Full documentation at: <http://www.gnu.org/software/coreutils/mkdir>
or available locally via: info '(coreutils) mkdir invocation'
```
### 谨慎删除文件
如果要谨慎使用 `rm` 命令,可以为它设置一个别名,在删除文件之前需要进行确认才能删除。有些系统管理员会默认使用这个别名,对于这种情况,你可能需要看看下一个技巧。
```
$ rm -i <== 请求确认
```
### 关闭别名
你可以使用 `unalias` 命令以交互方式禁用别名。它不会更改别名的配置,而仅仅是暂时禁用,直到下次登录或重新设置了这一个别名才会重新生效。
```
$ unalias rm
```
如果已经将 `rm -i` 默认设置为 `rm` 的别名,但你希望在删除文件之前不必进行确认,则可以将 `unalias` 命令放在一个启动文件(例如 `~/.bashrc`)中。
### 使用 sudo
如果你经常在只有 root 用户才能执行的命令前忘记使用 `sudo`,这里有两个方法可以解决。一是利用命令历史记录,可以使用 `sudo !!`(使用 `!!` 来运行最近的命令,并在前面添加 `sudo`)来重复执行,二是设置一些附加了所需 `sudo` 的命令别名。
```
$ alias update='sudo apt update'
```
### 更复杂的技巧
有时命令行技巧并不仅仅是一个别名。毕竟,别名能帮你做的只有替换命令以及增加一些命令参数,节省了输入的时间。但如果需要比别名更复杂功能,可以通过编写脚本、向 `.bashrc` 或其他启动文件添加函数来实现。例如,下面这个函数会在创建一个目录后进入到这个目录下。在设置完毕后,执行 `source .bashrc`,就可以使用 `md temp` 这样的命令来创建目录立即进入这个目录下。
```
md () { mkdir -p "$@" && cd "$1"; }
```
### 总结
使用 Linux 命令行是在 Linux 系统上工作最有效也最有趣的方法,但配合命令行技巧和巧妙的别名可以让你获得更好的体验。
---
via: <https://www.networkworld.com/article/3305811/linux/linux-tricks-that-even-you-can-love.html>
作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
10,061 | 如何使用 Steam Play 在 Linux 上玩仅限 Windows 的游戏 | https://itsfoss.com/steam-play/ | 2018-09-29T12:47:28 | [
"Steam",
"游戏"
] | https://linux.cn/article-10061-1.html |
>
> Steam 的新实验功能允许你在 Linux 上玩仅限 Windows 的游戏。以下是如何在 Steam 中使用此功能。
>
>
>
你已经听说过这个消息。游戏发行平台 [Steam 正在复刻一个 WINE 分支来允许你玩仅限于 Windows 上的游戏](/article-10054-1.html)。对于 Linux 用户来说,这绝对是一个好消息,因为我们总抱怨 Linux 的游戏数量不足。
这个新功能仍处于测试阶段,但你现在可以在 Linux 上试用它并在 Linux 上玩仅限 Windows 的游戏。让我们看看如何做到这一点。
### 使用 Steam Play 在 Linux 中玩仅限 Windows 的游戏

你需要先安装 Steam。Steam 适用于所有主要 Linux 发行版。我已经详细介绍了[在 Ubuntu 上安装 Steam](https://itsfoss.com/install-steam-ubuntu-linux/),如果你还没有安装 Steam,你可以参考那篇文章。
安装了 Steam 并且你已登录到 Steam 帐户,就可以了解如何在 Steam Linux 客户端中启用 Windows 游戏。
#### 步骤 1:进入帐户设置
运行 Steam 客户端。在左上角,单击 “Steam”,然后单击 “Settings”。

#### 步骤 2:选择加入测试计划
在“Settings”中,从左侧窗口中选择“Account”,然后单击 “Beta participation” 下的 “CHANGE” 按钮。

你应该在此处选择 “Steam Beta Update”。

在此处保存设置后,Steam 将重新启动并下载新的测试版更新。
#### 步骤 3:启用 Steam Play 测试版
下载好 Steam 新的测试版更新后,它将重新启动。到这里就差不多了。
再次进入“Settings”。你现在可以在左侧窗口看到新的 “Steam Play” 选项。单击它并选中复选框:
* Enable Steam Play for supported titles (你可以玩列入白名单的 Windows 游戏)
* Enable Steam Play for all titles (你可以尝试玩所有仅限 Windows 的游戏)

我不记得 Steam 是否会再次重启,但我想这无所谓。你现在应该可以在 Linux 上看到安装仅限 Windows 的游戏的选项了。
比如,我的 Steam 库中有《Age of Empires》,正常情况下这个在 Linux 中没有。但我在 Steam Play 测试版启用所有 Windows 游戏后,现在我可以选择在 Linux 上安装《Age of Empires》了。

*现在可以在 Linux 上安装仅限 Windows 的游戏*
### 有关 Steam Play 测试版功能的信息
在 Linux 上使用 Steam Play 测试版玩仅限 Windows 的游戏有一些事情你需要知道并且牢记。
* 目前,[只有 27 个 Steam Play 中的 Windows 游戏被列入白名单](https://steamcommunity.com/games/221410)。这些白名单游戏可以在 Linux 上无缝运行。
* 你可以使用 Steam Play 测试版尝试任何 Windows 游戏,但它可能不是总能运行。有些游戏有时会崩溃,而某些游戏可能根本无法运行。
* 在测试版中,你无法 Steam 商店中看到适用于 Linux 的 Windows 限定游戏。你必须自己尝试游戏或参考[这个社区维护的列表](https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/htmlview?sle=true#)以查看该 Windows 游戏的兼容性状态。你也可以通过填写[此表](https://docs.google.com/forms/d/e/1FAIpQLSeefaYQduMST_lg0IsYxZko8tHLKe2vtVZLFaPNycyhY4bidQ/viewform)来为列表做出贡献。
* 如果你在 Windows 中通过 Steam 下载了游戏,你可以[在 Linux 和 Windows 之间共享 Steam 游戏文件](/article-8027-1.html)来节省下载的数据。
我希望本教程能帮助你在 Linux 上运行仅限 Windows 的游戏。你期待在 Linux 上玩哪些游戏?
---
via: <https://itsfoss.com/steam-play/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

*Proton-based Steam Play allows you to play Windows-only games on Linux. Here’s how to use this feature in Steam right now.*
## What is Steam Play?
Game distribution platform Steam has implemented a fork of WINE; it’s called – “Steam Play”. With Steam Play, Linux users can play games available on Windows only. A compatibility tool "Proton" is used for Steam Play to make Windows games work on Linux.
## Play Windows-only games in Linux with Steam Play (Proton)
You need to install Steam first.
Steam is available for all major Linux distributions. I have written in detail about [installing Steam on Ubuntu](https://itsfoss.com/install-steam-ubuntu-linux/), and you may refer to that article if you don’t have Steam installed yet.
Once you have Steam installed and you have logged into your Steam account, it’s time to see how to enable Windows games in the Steam Linux client.

### Step 1: Go to Account Settings
Run Steam client. On the top left, click on Steam and then on Settings.

### Step 2: Enable Steam Play
Now, you’ll see an option **Steam Play** in the left side panel. Click on it and check the boxes:
- **Enable Steam Play for supported titles** (This is usually **checked by default** to let you run supported Windows games seamlessly)
- **Enable Steam Play for all titles** (With this option, you can try/experiment other games that may not be known to work)

You can also opt to **change the version of the compatibility layer (Proton)** if you need it. Once you are done selecting the options, hit “**OK**” and proceed to restart steam in order for the changes to take effect.

To get the latest compatible support, you may want to use "**Proton Experimental**". If you want to use a Proton version that is still in the testing phase, **Proton Next** should be the pick.
In either case, if the game supports a specific version as per your research, you can enable any of the available older versions as well.
Here’s how it will work:
For example, I have Age of Empires in my Steam library, which is not available on Linux typically. But after I enabled Steam Play for all Windows titles, it now gives me the option for installing Age of Empires on Linux.

## Things to know about Steam Play feature
There are a few things you should know and keep in mind about using Windows-only games on Linux with Steam Play:
- A large number of Windows-only games work on Linux using this feature. Some are AAA (triple A) titles, and some are indie games.
- You should not expect for all games to work seamlessly. Some might crash, and some might need a little troubleshooting to make things work with your hardware.
- You can always refer to [ProtonDB](https://www.protondb.com) or the [Steam Deck verified](https://www.steamdeck.com/en/verified) list to find games that you should try first.
- If you have games downloaded on Windows via Steam, you can save some download data by [sharing Steam game files between Linux and Windows](https://itsfoss.com/share-steam-files-linux-windows/).
In addition, you should refer to our [Linux gaming guide](https://itsfoss.com/linux-gaming-guide/) for more information.
## How Do You Identify Games That Work On Steam Play?

I’m sure that you don’t have a lot of free time to download games one by one and test them with Steam Play.
As mentioned earlier, you can visit [ProtonDB](https://www.protondb.com/) to check reports/stats contributed by gamers to see what games work and what do not.
Any game that has a rating of silver and above can be tried. However, it would make more sense to try **Platinum/Gold-rated** games first. The website also lists the games that are [Steam Deck verified](https://www.steamdeck.com/en/verified), which is also an excellent way to know what works on Linux.
You can use that as a reference to decide whether you should download/purchase a particular game.
*I hope this tutorial helped you in running Windows-only games on Linux. Which game(s) are you looking forward to playing on Linux?* |
10,062 | “用户组”在 Linux 上到底是怎么工作的? | https://jvns.ca/blog/2017/11/20/groups/ | 2018-09-29T13:23:00 | [
"用户组",
"group"
] | https://linux.cn/article-10062-1.html | 
嗨!就在上周,我还自认为对 Linux 上的用户和组的工作机制了如指掌。我认为它们的关系是这样的:
1. 每个进程都属于一个用户(比如用户 `julia`)
2. 当这个进程试图读取一个被某个组所拥有的文件时, Linux 会 a. 先检查用户`julia` 是否有权限访问文件。(LCTT 译注:此处应该是指检查文件的所有者是否就是 `julia`) b. 检查 `julia` 属于哪些组,并进一步检查在这些组里是否有某个组拥有这个文件或者有权限访问这个文件。
3. 如果上述 a、b 任一为真(或者“其它”位设为有权限访问),那么这个进程就有权限访问这个文件。
比如说,如果一个进程被用户 `julia` 拥有并且 `julia` 在`awesome` 组,那么这个进程就能访问下面这个文件。
```
r--r--r-- 1 root awesome 6872 Sep 24 11:09 file.txt
```
然而上述的机制我并没有考虑得非常清楚,如果你硬要我阐述清楚,我会说进程可能会在**运行时**去检查 `/etc/group` 文件里是否有某些组拥有当前的用户。
### 然而这并不是 Linux 里“组”的工作机制
我在上个星期的工作中发现了一件有趣的事,事实证明我前面的理解错了,我对组的工作机制的描述并不准确。特别是 Linux **并不会**在进程每次试图访问一个文件时就去检查这个进程的用户属于哪些组。
我在读了《[Linux 编程接口](http://man7.org/tlpi/)》这本书的第九章(“进程资格”)后才恍然大悟(这本书真是太棒了),这才是组真正的工作方式!我意识到之前我并没有真正理解用户和组是怎么工作的,我信心满满的尝试了下面的内容并且验证到底发生了什么,事实证明现在我的理解才是对的。
### 用户和组权限检查是怎么完成的
现在这些关键的知识在我看来非常简单! 这本书的第九章上来就告诉我如下事实:用户和组 ID 是**进程的属性**,它们是:
* 真实用户 ID 和组 ID;
* 有效用户 ID 和组 ID;
* 保存的 set-user-ID 和保存的 set-group-ID;
* 文件系统用户 ID 和组 ID(特定于 Linux);
* 补充的组 ID;
这说明 Linux **实际上**检查一个进程能否访问一个文件所做的组检查是这样的:
* 检查一个进程的组 ID 和补充组 ID(这些 ID 就在进程的属性里,**并不是**实时在 `/etc/group` 里查找这些 ID)
* 检查要访问的文件的访问属性里的组设置
* 确定进程对文件是否有权限访问(LCTT 译注:即文件的组是否是以上的组之一)
通常当访问控制的时候使用的是**有效**用户/组 ID,而不是**真实**用户/组 ID。技术上来说当访问一个文件时使用的是**文件系统**的 ID,它们通常和有效用户/组 ID 一样。(LCTT 译注:这句话针对 Linux 而言。)
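为了把上面的检查过程写得更具体一点,下面用 Python 的 `os` 模块粗略地模拟一下“组检查”这一步(这只是一个示意性的简化,只看组权限位,忽略了属主和其他用户的权限位):

```python
import os
import stat
import tempfile

def group_can_read(path):
    """粗略模拟内核的组检查:文件属组在进程的组 ID 集合中,且组可读位被置位。"""
    st = os.stat(path)
    # 关键点:组 ID 集合直接取自进程属性(getgroups/getegid),而不是去查 /etc/group
    process_groups = set(os.getgroups()) | {os.getegid()}
    return st.st_gid in process_groups and bool(st.st_mode & stat.S_IRGRP)

with tempfile.NamedTemporaryFile() as f:
    os.chmod(f.name, 0o640)  # rw-r-----:允许组读取
    print(group_can_read(f.name))  # 通常为 True:新建文件的属组就是进程自己的组
```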
### 将一个用户加入一个组并不会将一个已存在的进程(的用户)加入那个组
下面是一个有趣的例子:如果我创建了一个新的组:`panda` 组并且将我自己(`bork`)加入到这个组,然后运行 `groups` 来检查我是否在这个组里:结果是我(`bork`)竟然不在这个组?!
```
bork@kiwi~> sudo addgroup panda
Adding group `panda' (GID 1001) ...
Done.
bork@kiwi~> sudo adduser bork panda
Adding user `bork' to group `panda' ...
Adding user bork to group panda
Done.
bork@kiwi~> groups
bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd
```
`panda` 并不在上面的组里!为了再次确定我们的发现,让我们建一个文件,这个文件被 `panda` 组拥有,看看我能否访问它。
```
$ touch panda-file.txt
$ sudo chown root:panda panda-file.txt
$ sudo chmod 660 panda-file.txt
$ cat panda-file.txt
cat: panda-file.txt: Permission denied
```
好吧,确定了,我(`bork`)无法访问 `panda-file.txt`。这一点都不让人吃惊,我的命令解释器并没有将 `panda` 组作为补充组 ID,运行 `adduser bork panda` 并不会改变这一点。
### 那进程一开始是怎么得到用户的组的呢?
这真是个非常令人困惑的问题,对吗?如果进程会将组的信息预置到进程的属性里面,进程在初始化的时候怎么取到组的呢?很明显你无法给你自己指定更多的组(否则就会和 Linux 访问控制的初衷相违背了……)
有一点还是很清楚的:一个新的进程是怎么从我的命令行解释器(`bash`/`fish`)里被**执行**而得到它的组的。(新的)进程将拥有我的用户 ID(`bork`),并且进程属性里还有很多组 ID。从我的命令解释器里执行的所有进程是从这个命令解释器里 `fork()` 而来的,所以这个新进程得到了和命令解释器同样的组。
因此一定存在一个“第一个”进程来把你的组设置到进程属性里,而所有由此进程而衍生的进程将都设置这些组。而那个“第一个”进程就是你的<ruby> 登录程序 <rt> login shell </rt></ruby>,在我的笔记本电脑上,它是由 `login` 程序(`/bin/login`)实例化而来。登录程序以 root 身份运行,然后调用了一个 C 的库函数 —— `initgroups` 来设置你的进程的组(具体来说是通过读取 `/etc/group` 文件),因为登录程序是以 root 运行的,所以它能设置你的进程的组。
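可以用一个小实验来验证“子进程从父进程继承组”这一点:在 Python 里启动一个子进程,对比两边的补充组 ID(示例用 `sys.executable` 启动当前解释器本身,属于演示性质):

```python
import ast
import os
import subprocess
import sys

parent_groups = sorted(os.getgroups())
out = subprocess.run(
    [sys.executable, "-c", "import os; print(sorted(os.getgroups()))"],
    capture_output=True, text=True, check=True,
).stdout
child_groups = ast.literal_eval(out)

# 子进程的补充组 ID 与父进程完全一致 —— 它们是在 fork 时继承下来的
print(child_groups == parent_groups)  # True
```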
### 让我们再登录一次
好了!假如说我们正处于一个登录程序中,而我又想刷新我的进程的组设置,从我们前面所学到的进程是怎么初始化组 ID 的,我应该可以通过再次运行登录程序来刷新我的进程组并启动一个新的登录命令!
让我们试试下边的方法:
```
$ sudo login bork
$ groups
bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd panda
$ cat panda-file.txt # it works! I can access the file owned by `panda` now!
```
当然,成功了!现在由登录程序衍生的程序的用户是组 `panda` 的一部分了!太棒了!这并不会影响我其他的已经在运行的登录程序(及其子进程),如果我真的希望“所有的”进程都能对 `panda` 组有访问权限。我必须完全的重启我的登录会话,这意味着我必须退出我的窗口管理器然后再重新登录。(LCTT 译注:即更新进程树的树根进程,这里是窗口管理器进程。)
### newgrp 命令
在 Twitter 上有人告诉我如果只是想启动一个刷新了组信息的命令解释器的话,你可以使用 `newgrp`(LCTT 译注:不启动新的命令解释器),如下:
```
sudo addgroup panda
sudo adduser bork panda
newgrp panda # starts a new shell, and you don't have to be root to run it!
```
你也可以用 `sg panda bash` 来完成同样的效果,这个命令能启动一个`bash` 登录程序,而这个程序就有 `panda` 组。
### setuid 将设置有效用户 ID
其实我一直对一个进程以 `setuid root` 的权限运行究竟意味着什么理解得有点模糊。现在我知道了,事实上所发生的是:`setuid` 设置的是“有效用户 ID”!如果我(`julia`)运行了一个 `setuid root` 的进程(比如 `passwd`),那么进程的**真实**用户 ID 将为 `julia`,而**有效**用户 ID 将被设置为 `root`。
`passwd` 需要以 root 权限来运行,但是它能看到进程的真实用户 ID 是 `julia` ,是 `julia` 启动了这个进程,`passwd` 会阻止这个进程修改除了 `julia` 之外的用户密码。
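在 Python 里可以直接读取这两个 ID,感受一下它们的区别(普通运行时两者相同;只有通过 setuid 程序或 `seteuid()` 之类的调用才会不同):

```python
import os

print("真实用户 ID: ", os.getuid())
print("有效用户 ID: ", os.geteuid())

# 对于一个 setuid root 的程序(比如 passwd):
#   os.getuid()  -> 启动它的用户(例如 julia 的 uid)
#   os.geteuid() -> 0(root),内核做访问控制时用的就是这个
```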
### 就是这些了!
在《[Linux 编程接口](http://man7.org/tlpi/)》这本书里有很多 Linux 上一些功能的罕见使用方法以及 Linux 上所有的事物到底是怎么运行的详细解释,这里我就不一一展开了。那本书棒极了,我上面所说的都在该书的第九章,这章在 1300 页的书里只占了 17 页。
我最爱这本书的一点是我只用读 17 页关于用户和组是怎么工作的内容,而这区区 17 页就能做到内容完备、详实有用。我不用读完所有的 1300 页书就能得到有用的东西,太棒了!
---
via: <https://jvns.ca/blog/2017/11/20/groups/>
作者:[Julia Evans](https://jvns.ca/) 译者:[DavidChen](https://github.com/DavidChenLiang) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Hello! Last week, I thought I knew how users and groups worked on Linux. Here is what I thought:
- Every process belongs to a user (like `julia`)
- When a process tries to read a file owned by a group, Linux a) checks if the user `julia` can access the file, and b) checks which groups `julia` belongs to, and whether any of those groups owns & can access that file
- If either of those is true (or if the ‘any’ bits are set right) then the process can access the file
So, for example, if a process is owned by the `julia` user and `julia` is in the `awesome` group, then the process would be allowed to read this file.
```
r--r--r-- 1 root awesome 6872 Sep 24 11:09 file.txt
```
I had not thought carefully about this, but if pressed I would have said that it probably checks the `/etc/group` file at runtime to see what groups you’re in.
### that is not how groups work
I found out at work last week that, no, what I describe above is not how groups work. In particular
Linux does **not** check which groups a process’s user belongs to every time that process tries to
access a file.
Here is how groups actually work! I learned this by reading Chapter 9 (“Process Credentials”) of [The Linux Programming Interface](http://man7.org/tlpi/)
which is an incredible book. As soon as I realized that I did not understand how users and groups
worked, I opened up the table of contents with absolute confidence that it would tell me what’s up,
and I was right.
### how users and groups checks are done
The key new insight for me was pretty simple! The chapter starts out by saying that user and group IDs are **attributes of the
process**:
- real user ID and group ID;
- effective user ID and group ID;
- saved set-user-ID and saved set-group-ID;
- file-system user ID and group ID (Linux-specific); and
- supplementary group IDs.
This means that the way Linux **actually** does group checks to see a process can read a file is:
- look at the process’s group IDs & supplementary group IDs (from the attributes on the process, **not** by looking them up in `/etc/group`)
- look at the group on the file
- see if they match
Generally when doing access control checks it uses the **effective** user/group ID, not the real
user/group ID. Technically when accessing a file it actually uses the **file-system** ids but those
are usually the same as the effective uid/gid.
### Adding a user to a group doesn’t put existing processes in that group
Here’s another fun example that follows from this: if I create a new `panda`
group and add myself
(bork) to it, then run `groups`
to check my group memberships – I’m not in the panda group!
```
bork@kiwi~> sudo addgroup panda
Adding group `panda' (GID 1001) ...
Done.
bork@kiwi~> sudo adduser bork panda
Adding user `bork' to group `panda' ...
Adding user bork to group panda
Done.
bork@kiwi~> groups
bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd
```
no `panda`
in that list! To double check, let’s try making a file owned by the `panda`
group and see
if I can access it:
```
$ touch panda-file.txt
$ sudo chown root:panda panda-file.txt
$ sudo chmod 660 panda-file.txt
$ cat panda-file.txt
cat: panda-file.txt: Permission denied
```
Sure enough, I can’t access `panda-file.txt`
. No big surprise there. My shell didn’t have the `panda`
group as a supplementary GID before, and running `adduser bork panda`
didn’t do anything to change
that.
### how do you get your groups in the first place?
So this raises kind of a confusing question, right – if processes have groups baked into them, how do you get assigned your groups in the first place? Obviously you can’t assign yourself more groups (that would defeat the purpose of access control).
It’s relatively clear how processes I **execute** from my shell (bash/fish) get their groups – my
shell runs as me, and it has a bunch of group IDs on it. Processes I execute from my shell are
forked from the shell so they get the same groups as the shell had.
So there needs to be some “first” process that has your groups set on it, and all the other
processes you set inherit their groups from that. That process is called your **login shell** and
it’s run by the `login`
program (`/bin/login`
) on my laptop. `login`
runs as root and calls a C
function called `initgroups`
to set up your groups (by reading `/etc/group`
). It’s allowed to set up
your groups because it runs as root.
### let’s try logging in again!
So! Let’s say I am running in a shell, and I want to refresh my groups! From what we’ve learned
about how groups are initialized, I should be able to run `login`
to refresh my groups and start a
new login shell!
Let’s try it:
```
$ sudo login bork
$ groups
bork adm cdrom sudo dip plugdev lpadmin sambashare docker lxd panda
$ cat panda-file.txt # it works! I can access the file owned by `panda` now!
```
Sure enough, it works! Now the new shell that `login`
spawned is part of the `panda`
group! Awesome!
This won’t affect any other shells I already have running. If I really want the new `panda`
group
everywhere, I need to restart my login session completely, which means quitting my window manager
and logging in again.
### newgrp
Somebody on Twitter told me that if you want to start a new shell with a new group that you’ve been
added to, you can use `newgrp`
. Like this:
```
sudo addgroup panda
sudo adduser bork panda
newgrp panda # starts a new shell, and you don't have to be root to run it!
```
You can accomplish the same(ish) thing with `sg panda bash`
which will start a `bash`
shell that
runs with the `panda`
group.
### setuid sets the effective user ID
I’ve also always been a little vague about what it means for a process to run as “setuid root”. It
turns out that setuid sets the effective user ID! So if I (`julia`
) run a setuid root process (like `passwd`
), then the **real** user ID will be set to `julia`
, and the **effective** user ID will be set to `root`
.
`passwd`
needs to run as root, but it can look at its real user ID to see that `julia`
started the
process, and prevent `julia`
from editing any passwords except for `julia`
’s password.
### that’s all!
There are a bunch more details about all the edge cases and exactly how everything works in The Linux Programming Interface so I will not get into all the details here. That book is amazing. Everything I talked about in this post is from Chapter 9, which is a 17-page chapter inside a 1300-page book.
The thing I love most about that book is that reading 17 pages about how users and groups work is really approachable, self-contained, super useful, and I don’t have to tackle all 1300 pages of it at once to learn helpful things :) |
10,063 | 怎样解决 Ubuntu 中的 “sub process usr bin dpkg returned an error code 1” 错误 | https://itsfoss.com/dpkg-returned-an-error-code-1/ | 2018-09-29T19:00:56 | [
"安装"
] | https://linux.cn/article-10063-1.html |
>
> 如果你在 Ubuntu Linux 上安装软件时遇到 “sub process usr bin dpkg returned an error code 1”,请按照以下步骤进行修复。
>
>
>
Ubuntu 和其他基于 Debian 的发行版中的一个常见问题是已经损坏的包。你尝试更新系统或安装新软件包时会遇到类似 “Sub-process /usr/bin/dpkg returned an error code” 的错误。
这就是前几天发生在我身上的事。我试图在 Ubuntu 中安装一个电台程序时,它给我了这个错误:
```
Unpacking python-gst-1.0 (1.6.2-1build1) ...
Selecting previously unselected package radiotray.
Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ...
Unpacking radiotray (0.7.3-5ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up polar-bookshelf (1.0.0-beta56) ...
ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory
dpkg: error processing package polar-bookshelf (--configure):
subprocess installed post-installation script returned error exit status 1
Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ...
Setting up python-gst-1.0 (1.6.2-1build1) ...
Setting up radiotray (0.7.3-5ubuntu1) ...
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
这里最后三行非常重要。
```
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
它告诉我 polar-bookshelf 包引发了问题。这可能对你如何修复这个错误至关重要。
### 修复 Sub-process /usr/bin/dpkg returned an error code (1)

让我们尝试修复这个损坏的错误包。我将展示几种你可以逐一尝试的方法。最初的那些易于使用,几乎不用动脑子。
在试了这里讨论的每一种方法之后,你应该尝试运行 `sudo apt update`,接着尝试安装新的包或升级。
#### 方法 1:重新配包数据库
你可以尝试的第一种方法是重新配置包数据库。数据库可能在安装包时损坏了。重新配置通常可以解决问题。
```
sudo dpkg --configure -a
```
#### 方法 2:强制安装
如果是之前包安装过程被中断,你可以尝试强制安装。
```
sudo apt-get install -f
```
#### 方法 3:尝试删除有问题的包
如果这不是你的问题,你可以尝试手动删除包。但不要对 Linux 内核包(以 linux- 开头)执行此操作。
```
sudo apt remove <package-name>
```
#### 方法 4:删除有问题的包中的信息文件
这应该是你最后的选择。你可以尝试从 `/var/lib/dpkg/info` 中删除与相关软件包关联的文件。
**你需要了解一些基本的 Linux 命令,才能明白这里发生了什么,以及如何对应到你自己的问题上。**
就我而言,我在 polar-bookshelf 中遇到问题。所以我查找了与之关联的文件:
```
ls -l /var/lib/dpkg/info | grep -i polar-bookshelf
-rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list
-rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums
-rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst
-rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm
```
现在我需要做的就是删除这些文件:
```
sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp
```
使用 `sudo apt update`,接着你应该就能像往常一样安装软件了。
#### 哪种方法适合你(如果有效)?
我希望这篇快速文章可以帮助你修复 “E: Sub-process /usr/bin/dpkg returned an error code (1)” 的错误。
如果它对你有用,是哪种方法?你是否设法使用其他方法修复此错误?如果是,请分享一下以帮助其他人解决此问题。
---
via: <https://itsfoss.com/dpkg-returned-an-error-code-1/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

One of the common issue in Ubuntu and other Debian based distribution is the broken packages. You try to update the system or install a new package and you encounter an error like ‘Sub-process /usr/bin/dpkg returned an error code’.
That’s what happened to me the other day. I was trying to install a radio application in Ubuntu when it threw me this error:
```
Unpacking python-gst-1.0 (1.6.2-1build1) ...
Selecting previously unselected package radiotray.
Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ...
Unpacking radiotray (0.7.3-5ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up polar-bookshelf (1.0.0-beta56) ...
ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory
dpkg: error processing package polar-bookshelf (--configure):
subprocess installed post-installation script returned error exit status 1
Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ...
Setting up python-gst-1.0 (1.6.2-1build1) ...
Setting up radiotray (0.7.3-5ubuntu1) ...
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
The last three lines are of the utmost importance here.
```
Errors were encountered while processing:
polar-bookshelf
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
It tells me that the package polar-bookshelf is causing an issue. This might be crucial to how you fix this error here.
## Fixing Sub-process /usr/bin/dpkg returned an error code (1)

Let’s try to fix this broken error package. I’ll show several methods that you can try one by one. The initial ones are easy to use and simply no-brainers.
You should try to run sudo apt update and then try to install a new package or upgrade after trying each of the methods discussed here.
### Method 1: Reconfigure Package Database
The first method you can try is to reconfigure the package database. Probably the database got corrupted while installing a package. Reconfiguring often fixes the problem.
`sudo dpkg --configure -a`
### Method 2: Use force install
If a package installation was interrupted previously, you may try to do a force install.
`sudo apt-get install -f`
### Method 3: Try removing the troublesome package
If it’s not an issue for you, you may try to remove the package manually. Please don’t do it for Linux Kernels (packages starting with linux-).
`sudo apt remove package_name`
### Method 4: Remove post info files of the troublesome package
This should be your last resort. You can try removing the files associated to the package in question from /var/lib/dpkg/info.
**You need to know a little about basic Linux commands to figure out what’s happening and how can you use the same with your problem.**
In my case, I had an issue with polar-bookshelf. So I looked for the files associated with it:
```
ls -l /var/lib/dpkg/info | grep -i polar-bookshelf
-rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list
-rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums
-rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst
-rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm
```
Now all I needed to do was to remove these files:
`sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp`
Use the sudo apt update and then you should be able to install software as usual.
### Which method worked for you (if it worked)?


I hope this quick article helps you in fixing the ‘E: Sub-process /usr/bin/dpkg returned an error code (1)’ error.
If it did work for you, which method was it? Did you manage to fix this error with some other method? If yes, please share that to help others with this issue. |
10,064 | Bat:一种具有语法高亮和 Git 集成的 Cat 类命令 | https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/ | 2018-09-29T23:04:41 | [
"cat",
"bat"
] | https://linux.cn/article-10064-1.html | 
在类 UNIX 系统中,我们使用 `cat` 命令去打印和连接文件。使用 `cat` 命令,我们能将文件目录打印到到标准输出,合成几个文件为一个目标文件,还有追加几个文件到目标文件中。今天,我偶然发现一个具有相似作用的命令叫做 “Bat” ,它是 `cat` 命令的一个克隆版,具有一些例如语法高亮、 Git 集成和自动分页等非常酷的特性。在这个简略指南中,我们将讲述如何在 Linux 中安装和使用 `bat` 命令。
### 安装
Bat 可以在 Arch Linux 的默认软件源中获取。所以你可以使用 `pacman` 命令在任何基于 Arch 的系统上安装它。
```
$ sudo pacman -S bat
```
在 Debian、Ubuntu、Linux Mint 等系统中,从其[发布页面](https://github.com/sharkdp/bat/releases)下载 **.deb** 文件,然后用下面的命令来安装。
```
$ sudo apt install gdebi
$ sudo gdebi bat_0.5.0_amd64.deb
```
对于其他系统,你也许需要从源代码编译并安装。确保你已经安装了 Rust 1.26 或者更高版本。
然后运行以下命令来安装 Bat:
```
$ cargo install bat
```
或者,你可以使用 [Linuxbrew](https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/) 软件包管理器来安装它。
```
$ brew install bat
```
### bat 命令的使用
`bat` 命令的使用与 `cat` 命令的使用非常相似。
使用 `bat` 命令创建一个新的文件:
```
$ bat > file.txt
```
使用 `bat` 命令来查看文件内容,只需要:
```
$ bat file.txt
```
你能同时查看多个文件:
```
$ bat file1.txt file2.txt
```
将多个文件的内容合并至一个单独文件中:
```
$ bat file1.txt file2.txt file3.txt > document.txt
```
就像我之前提到的那样,除了浏览和编辑文件以外,`bat` 命令有一些非常酷的特性。
`bat` 命令支持大多数编程和标记语言的<ruby> 语法高亮 <rt> syntax highlighting </rt></ruby>。比如,下面这个例子。我将使用 `cat` 和 `bat` 命令来展示 `reverse.py` 的内容。

你注意到区别了吗? `cat` 命令以纯文本格式显示文件的内容,而 `bat` 命令显示了语法高亮和整齐的文本对齐格式。更好了不是吗?
如果你只想显示行号(而没有表格)使用 `-n` 标记。
```
$ bat -n reverse.py
```

另一个 `bat` 命令中值得注意的特性是它支持<ruby> 自动分页 <rt> automatic paging </rt></ruby>。 它的意思是当文件的输出对于屏幕来说太大的时候,`bat` 命令自动将自己的输出内容传输到 `less` 命令中,所以你可以一页一页的查看输出内容。
让我给你看一个例子,使用 `cat` 命令查看跨多个页面的文件的内容时,提示符会快速跳至文件的最后一页,你看不到内容的开头和中间部分。
看一下下面的输出:

正如你所看到的,`cat` 命令显示了文章的最后一页。
所以你也许需要把 `cat` 命令的输出通过管道传给 `less` 命令,从开头开始一页一页地查看内容。
```
$ cat reverse.py | less
```
现在你可以使用回车键去一页一页的查看输出。然而当你使用 `bat` 命令时这些都是不必要的。`bat` 命令将自动传输跨越多个页面的文件的输出。
```
$ bat reverse.py
```

现在按下回车键去往下一页。
`bat` 命令也支持 <ruby> Git 集成 <rt> Git integration </rt></ruby>,这样你就可以轻松查看/编辑 Git 存储库中的文件。它与 Git 集成后,可以显示文件相对于索引的修改(见左侧栏)。

### 定制 Bat
如果你不喜欢默认主题,你也可以修改它。Bat 同样有修改它的选项。
若要显示可用主题,只需运行:
```
$ bat --list-themes
1337
DarkNeon
Default
GitHub
Monokai Extended
Monokai Extended Bright
Monokai Extended Light
Monokai Extended Origin
TwoDark
```
要使用其他主题,例如 TwoDark,请运行:
```
$ bat --theme=TwoDark file.txt
```
如果你想永久改变主题,在你的 shells 启动文件中加入 `export BAT_THEME="TwoDark"`。
`bat` 还可以选择修改输出的外观。使用 `--style` 选项来修改输出外观。仅显示 Git 的更改和行号但不显示网格和文件头,请使用 `--style=numbers,changes`。
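把主题和样式这两项设置固定下来的一种常见做法,是在 shell 启动文件(例如 `~/.bashrc`)里加入类似下面的配置片段(主题名与样式组合仅为示例,可按喜好替换;假设系统中已安装 bat):

```shell
# 可放入 ~/.bashrc 的示例配置(取值仅作演示)
export BAT_THEME="TwoDark"                  # 永久指定配色主题
alias bat='bat --style=numbers,changes'     # 默认只显示行号和 Git 改动标记
```

修改后重新打开终端(或执行 `source ~/.bashrc`)即可生效。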
更多详细信息,请参阅 Bat 项目的 GitHub 库(链接在文末)。
好了,这就是目前的全部内容了。希望这篇文章会帮到你。更多精彩文章即将到来,敬请关注!
干杯!
---
via: <https://www.ostechnix.com/bat-a-cat-clone-with-syntax-highlighting-and-git-integration/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[z52527](https://github.com/z52527) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,065 | 如何在 Ubuntu 上安装 Cinnamon 桌面环境 | https://itsfoss.com/install-cinnamon-on-ubuntu/ | 2018-09-29T23:19:24 | [
"Cinnamon",
"桌面"
] | https://linux.cn/article-10065-1.html |
>
> 这篇教程将会为你展示如何在 Ubuntu 上安装 Cinnamon 桌面环境。
>
>
>
[Cinnamon](http://cinnamon.linuxmint.com/) 是 [Linux Mint](http://www.linuxmint.com/) 的默认桌面环境。不同于 Ubuntu 的 Unity 桌面环境,Cinnamon 是一个更加传统而优雅的桌面环境,其带有底部面板和应用菜单。由于 Cinnamon 桌面以及它类 Windows 的用户界面,许多桌面用户[相较于 Ubuntu 更喜欢 Linux Mint](https://itsfoss.com/linux-mint-vs-ubuntu/)。
现在你无需[安装 Linux Mint](https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/) 就能够体验到 Cinnamon了。在这篇教程,我将会展示给你如何在 Ubuntu 18.04,16.04 和 14.04 上安装 Cinnamon。
在 Ubuntu 上安装 Cinnamon 之前,有一些事情需要你注意。有时候,安装的额外桌面环境可能会与你当前的桌面环境有冲突。可能导致会话、应用程序或功能等的崩溃。这就是为什么你需要在做这个决定时谨慎一点的原因。
### 如何在 Ubuntu 上安装 Cinnamon 桌面环境

过去 Cinnamon 团队曾为 Ubuntu 提供过一系列官方 PPA,但现在都已经失效了。不过不用担心,还有一个非官方的 PPA,而且它运行得很完美。这个 PPA 里包含了最新的 Cinnamon 版本。
```
sudo add-apt-repository ppa:embrosyn/cinnamon
sudo apt update && sudo apt install cinnamon
```
下载的大小大概是 150 MB(如果我没记错的话)。其中包含了 Nemo(Cinnamon 的文件管理器,基于 Nautilus)和 Cinnamon 控制中心,它们会带来更加接近 Linux Mint 的使用体验。
### 在 Ubuntu 上使用 Cinnamon 桌面环境
Cinnamon 安装完成后,退出当前会话,在登录界面,点击用户名旁边的 Ubuntu 符号:

之后,它将会显示所有系统可用的桌面环境。选择 Cinnamon。

现在你应该已经登录到有着 Cinnamon 桌面环境的 Ubuntu 中了。你还可以通过同样的方式再回到 Unity 桌面。这里有一张以 Cinnamon 做为桌面环境的 Ubuntu 桌面截图。

看起来是不是像极了 Linux Mint。此外,我并没有发现任何有关 Cinnamon 和 Unity 的兼容性问题。在 Unity 和 Cinnamon 来回切换,它们也依旧工作的很完美。
#### 从 Ubuntu 卸载 Cinnamon
如果你想卸载 Cinnamon,可以使用 PPA Purge 来完成。首先安装 PPA Purge:
```
sudo apt-get install ppa-purge
```
安装完成之后,使用下面的命令去移除该 PPA:
```
sudo ppa-purge ppa:embrosyn/cinnamon
```
更多的信息,我建议你去阅读 [如何从 Linux 移除 PPA](https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/) 这篇文章。
我希望这篇文章能够帮助你在 Ubuntu 上安装 Cinnamon。也可以分享一下你使用 Cinnamon 的经验。
---
via: <https://itsfoss.com/install-cinnamon-on-ubuntu/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[dianbanjiu](https://github.com/dianbanjiu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

**This tutorial shows you how to install the Cinnamon desktop environment on Ubuntu.**
Cinnamon is the default desktop environment of [Linux Mint](https://www.linuxmint.com/?ref=itsfoss.com). Unlike the GNOME [desktop environment](https://itsfoss.com/what-is-desktop-environment/) in Ubuntu, Cinnamon is a more traditional but elegant-looking desktop environment with a bottom panel, app menu, etc. Many Windows migrants [prefer Linux Mint over Ubuntu](https://itsfoss.com/linux-mint-vs-ubuntu/) because of the Cinnamon desktop and its Windows-resembling user interface.
You don’t need to [install Linux Mint](https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/) just to try Cinnamon. In this tutorial, I’ll show you **how to install Cinnamon in Ubuntu**.
[Timeshift backup of system settings](https://itsfoss.com/backup-restore-linux-timeshift/)is advisable.
## How to Install Cinnamon on Ubuntu

## Installing Cinnamon on Ubuntu
At the time of updating this tutorial, Cinnamon desktop version 5.2.7 is available in the universe repository of Ubuntu 22.04 LTS.
So [make sure to enable the universe repository](https://itsfoss.com/ubuntu-repositories/) and then use this command to install Cinnamon on Ubuntu 22.04:
`sudo apt install cinnamon`
### Installing the latest Cinnamon daily builds from PPA (not recommended)
The Linux Mint team provides a [daily build unstable PPA](https://launchpad.net/~linuxmint-daily-build-team/+archive/ubuntu/daily-builds?ref=itsfoss.com) to install the latest Mint packages on Ubuntu. **Packages on this repository are unstable and for testing purposes, so you should know what you are doing.**
```
sudo add-apt-repository ppa:linuxmint-daily-build-team/daily-builds
sudo apt update
```
`sudo apt install cinnamon nemo-python`
Note that this version of Cinnamon desktop, only worked correctly for Ubuntu 22.04 LTS and it failed for my system with an “unmet dependency error” on Ubuntu 20.04 LTS.
## Using the Cinnamon desktop environment in Ubuntu
Once you have installed Cinnamon, log out of the current session. At the login screen, click on the Ubuntu symbol beside the username:

When you do this, it will give you all the desktop environments available for your system. No need to tell you that you have to choose Cinnamon:

Now, you should be logged in to Ubuntu with the Cinnamon desktop environment. Remember, you can do the same to switch back to GNOME. Here is a quick screenshot of what it looked like to run **Cinnamon in Ubuntu**:

Looks completely like Linux Mint, doesn’t it? I didn’t find any compatibility issues between Cinnamon and GNOME. I switched back and forth between GNOME and Cinnamon, and both worked perfectly.
Now that you have it installed, here are [some tips on customizing the Cinnamon desktop](https://itsfoss.com/customize-cinnamon-desktop/).
[7 Ways to Customize Cinnamon Desktop in Linux MintThe traditional Cinnamon desktop can be tweaked to look different and customized for your needs. Here’s how to do that.](https://itsfoss.com/customize-cinnamon-desktop/)

## Remove Cinnamon from Ubuntu
It is understandable that you might want to uninstall Cinnamon. First, change back to GNOME or whichever desktop environment you use.
Now, remove the Cinnamon package installed:
`sudo apt remove cinnamon`
If you used the PPA to install Cinnamon, you should also [delete the PPA](https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/) from your list of repositories:
`sudo add-apt-repository -r ppa:linuxmint-daily-build-team/daily-builds`
## Troubleshooting tip
If you are facing any issues like missing some Ubuntu apps after uninstalling the Cinnamon desktop, you can install the **Ubuntu-desktop **package:
`sudo apt install ubuntu-desktop`
## More desktop choices
You can also install many other desktop environments on your existing Ubuntu.
[How to Install MATE Desktop in Ubuntu LinuxScreenshot tutorial to show you how to install MATE desktop in Ubuntu and other versions and how to completely remove it.](https://itsfoss.com/install-mate-desktop-ubuntu/)

Budgie is a modern and elegant desktop choice.
[Installing Budgie Desktop on Ubuntu [Quick Guide]Brief: Learn how to install Budgie desktop on Ubuntu in this step-by-step tutorial. Among all the various Ubuntu versions, Ubuntu Budgie is the most underrated one. It looks elegant and it’s not heavy on resources. Read this Ubuntu Budgie review or simply watch this video to see what Ubunt…](https://itsfoss.com/install-budgie-ubuntu/)

KDE needs no introduction. If you did not opt for [Kubuntu or KDE Neon](https://itsfoss.com/kde-neon-vs-kubuntu/), you can always [install KDE on your existing Ubuntu](https://itsfoss.com/install-kde-on-ubuntu/).
[How to Install KDE Desktop Environment on UbuntuThis screenshot tutorial demonstrates the steps to install KDE Plasma desktop environment on Ubuntu Linux. In the world of Linux desktop environments, the ones that dominate are GNOME and KDE. There are several other desktop environments but these two are the leaders. Ubuntu used to have U…](https://itsfoss.com/install-kde-on-ubuntu/)

I hope this post helps you to **install Cinnamon in Ubuntu**. Do share your experience with Cinnamon. |
10,066 | 在 Linux 上操作目录 | https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux | 2018-09-29T23:53:05 | [
"目录"
] | https://linux.cn/article-10066-1.html | 
>
> 让我们继续学习一下 Linux 文件系统的树形结构,并展示一下如何在其中创建你的目录。
>
>
>
如果你不熟悉本系列(以及 Linux),[请查看我们的第一部分](/article-9798-1.html)。在那篇文章中,我们贯穿了 Linux 文件系统的树状结构(或者更确切地说是<ruby> 文件层次结构标准 <rt> File Hierarchy Standard </rt></ruby>,FHS)。我建议你仔细阅读,确保你理解自己能安全的做哪些操作。因为这一次,我将向你展示目录操作的魅力。
### 新建目录
在破坏之前,先让我们来创建。首先,打开一个终端窗口并使用命令 `mkdir` 创建一个新目录,如下所示:
```
mkdir <directoryname>
```
如果你只输入了目录名称,该目录将显示在您当前所在目录中。如果你刚刚打开一个终端,你当前位置为你的家目录。在这个例子中,我们展示了将要创建的目录与你当前所处位置的关系:
```
$ pwd # 告知你当前所在位置(参见第一部分)
/home/<username>
$ mkdir newdirectory # 创建 /home/<username>/newdirectory
```
(注:你不用输入 `#` 后面的文本。`#` 后面的文本为注释内容,用于解释发生了什么。它会被 shell 忽略,不会被执行)。
你可以在当前位置中已经存在的某个目录下创建新的目录,方法是在命令行中指定它:
```
mkdir Documents/Letters
```
这将在 `Documents` 目录中创建 `Letters` 目录。
你还可以在路径中使用 `..` 在当前目录的上一级目录中创建目录。假设你进入刚刚创建的 `Documents/Letters/` 目录,并且想要创建`Documents/Memos/` 目录。你可以这样做:
```
cd Documents/Letters # 进入到你刚刚创建的 Letters/ 目录
mkdir ../Memos
```
同样,以上所有内容都是相对于你当前的位置做的。这就是使用了相对路径。
你还可以使用目录的绝对路径:这意味着告诉 `mkdir` 命令将目录放在和根目录(`/`)有关的位置:
```
mkdir /home/<username>/Documents/Letters
```
在上面的命令中将 `<username>` 更改为你的用户名,这相当于从你的主目录执行 `mkdir Documents/Letters`,通过使用绝对路径你可以在目录树中的任何位置完成这项工作。
无论你使用相对路径还是绝对路径,只要命令成功执行,`mkdir` 将静默的创建新目录,而没有任何明显的反馈。只有当遇到某种问题时,`mkdir`才会在你敲下回车键后打印一些反馈。
与大多数其他命令行工具一样,`mkdir` 提供了几个有趣的选项。 `-p` 选项特别有用,因为它允许你嵌套创建目录,即使目录不存在也可以。例如,要在 `Documents/` 中创建一个目录存放写给妈妈的信,你可以这样做:
```
mkdir -p Documents/Letters/Family/Mom
```
`mkdir` 会创建 `Mom/` 之上的整个目录分支,并且也会创建 `Mom/` 目录,无论其上的目录在你敲入该命令时是否已经存在。
你也可以用空格来分隔目录名,来同时创建几个目录:
```
mkdir Letters Memos Reports
```
这将在当前目录下创建目录 `Letters`、`Memos` 和 `Reports`。
### 目录名中可怕的空格
……这带来了目录名称中关于空格的棘手问题。你能在目录名中使用空格吗?是的你可以。那么建议你使用空格吗?不,绝对不建议。空格使一切变得更加复杂,并且可能是危险的操作。
假设您要创建一个名为 `letters mom/` 的目录。如果你不知道如何更好处理,你可能会输入:
```
mkdir letters mom
```
但这是错误的!错误的!错误的!正如我们在上面介绍的,这将创建两个目录 `letters/` 和 `mom/`,而不是一个目录 `letters mom/`。
得承认这是一个小麻烦:你所要做的就是删除这两个目录并重新开始,这没什么大不了。
可是等等!删除目录可是个危险的操作。想象一下,你使用图形工具[Dolphin](https://userbase.kde.org/Dolphin) 或 [Nautilus](https://projects-old.gnome.org/nautilus/screenshots.html) 创建了目录 `letters mom/`。如果你突然决定从终端删除目录 `letters mom`,并且您在同一目录下有另一个名为 `letters` 的目录,并且该目录中包含重要的文档,结果你为了删除错误的目录尝试了以下操作:
```
rmdir letters mom
```
你将会有删除目录 `letters` 的风险。这里说“风险”,是因为幸运的是,`rmdir` 这条用于删除目录的指令有一个内置的安全措施:如果你试图删除一个非空目录,它会发出警告。
但是,下面这个:
```
rm -Rf letters mom
```
(注:这是删除目录及其内容的一种非常标准的方式)将完全删除 `letters/` 目录,甚至永远不会告诉你刚刚发生了什么。
`rm` 命令用于删除文件和目录。当你将它与选项 `-R`(递归删除)和 `-f`(强制删除)一起使用时,它会深入到目录及其子目录中,删除它们包含的所有文件,然后删除子目录本身,然后它将删除所有顶层目录中的文件,再然后是删除目录本身。
`rm -Rf` 是你必须非常小心处理的命令。
我的建议是,你可以使用下划线来代替空格,但如果你仍然坚持使用空格,有两种方法可以使它们起作用。您可以使用单引号或双引号,如下所示:
```
mkdir 'letters mom'
mkdir "letters dad"
```
或者,你可以转义空格。有些字符对 shell 有特殊意义。正如你所见,空格用于在命令行上分隔选项和参数。 “分离选项和参数”属于“特殊含义”范畴。当你想让 shell 忽略一个字符的特殊含义时,你需要转义,你可以在它前面放一个反斜杠(`\`)如:
```
mkdir letters\ mom
mkdir letter\ dad
```
还有其他特殊字符需要转义,如撇号或单引号(`'`)、双引号(`"`)和 & 符号(`&`):
```
mkdir mom\ \&\ dad\'s\ letters
```
我知道你在想什么:如果反斜杠有一个特殊的含义(即告诉 shell 它必须转义下一个字符),这也使它成为一个特殊的字符。然后,你将如何转义转义字符(`\`)?
事实证明,你转义任何其他特殊字符都是同样的方式:
```
mkdir special\\characters
```
这将生成一个名为 `special\characters/` 的目录。
感觉困惑?当然。这就是为什么你应该避免在目录名中使用特殊字符,包括空格。
以防误操作你可以参考下面这个记录特殊字符的列表。(LCTT 译注:此处原文链接丢失。)
### 总结
* 使用 `mkdir <directory name>` 创建新目录。
* 使用 `rmdir <directory name>` 删除目录(仅在目录为空时才有效)。
* 使用 `rm -Rf <directory name>` 来完全删除目录及其内容 —— 请务必谨慎使用。
* 使用相对路径创建相对于当前目录的目录: `mkdir newdir`。
* 使用绝对路径创建相对于根目录(`/`)的目录: `mkdir /home/<username>/newdir`。
* 使用 `..` 在当前目录的上级目录中创建目录: `mkdir ../newdir`。
* 你可以通过在命令行上使用空格分隔目录名来创建多个目录: `mkdir onedir twodir threedir`。
* 同时创建多个目录时,你可以混合使用相对路径和绝对路径: `mkdir onedir twodir /home/<username>/threedir`。
* 在目录名称中使用空格和特殊字符真的会让你很头疼,你最好不要那样做。
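上面总结的这些用法,可以用下面这个小脚本在一个临时目录里安全地试验一遍(目录名均为随意取的示例):

```shell
# 在临时目录中演示 mkdir 的几种用法,避免弄乱家目录
demo_dir=$(mktemp -d)
cd "$demo_dir"
mkdir -p Documents/Letters/Family/Mom   # -p:一次性创建整个目录分支
mkdir Letters Memos Reports             # 用空格分隔,同时创建多个目录
mkdir 'letters mom'                     # 用引号保护目录名中的空格
mkdir letters\ dad                      # 用反斜杠转义空格,效果相同
ls -d Documents/Letters/Family/Mom 'letters mom' 'letters dad'
```

试验完毕后,用 `rm -Rf "$demo_dir"` 删掉整个临时目录即可。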
有关更多信息,您可以查看 `mkdir`、`rmdir` 和 `rm` 的手册:
```
man mkdir
man rmdir
man rm
```
要退出手册页,请按键盘 `q` 键。
### 下次预告
在下一部分中,你将学习如何创建、修改和删除文件,以及你需要了解的有关权限和特权的所有信息!
通过 Linux 基金会和 edX 免费提供的[“Introduction to Linux”](https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux)课程了解有关Linux的更多信息。
---
via: <https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux>
作者:[Paul Brown](https://www.linux.com/users/bro66) 选题:[lujun9972](https://github.com/lujun9972) 译者:[way-ww](https://github.com/way-ww) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,067 | 如何让 Ping 的输出更简单易读 | https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/ | 2018-09-30T13:16:39 | [
"ping"
] | https://linux.cn/article-10067-1.html | 
众所周知,`ping` 命令可以用来检查目标主机是否可达。使用 `ping` 命令的时候,会发送一个 ICMP Echo 请求,通过目标主机的响应与否来确定目标主机的状态。如果你经常使用 `ping` 命令,你可以尝试一下 `prettyping`。Prettyping 只是将一个标准的 ping 工具增加了一层封装,在运行标准 `ping` 命令的同时添加了颜色和 unicode 字符解析输出,所以它的输出更漂亮紧凑、清晰易读。它是用 `bash` 和 `awk` 编写的自由开源工具,支持大部分类 Unix 操作系统,包括 GNU/Linux、FreeBSD 和 Mac OS X。Prettyping 除了美化 `ping` 命令的输出,还有很多值得注意的功能。
* 检测丢失的数据包并在输出中标记出来。
* 显示实时数据。每次收到响应后,都会更新统计数据,而对于普通 `ping` 命令,只会在执行结束后统计。
* 可以灵活处理“未知信息”(例如错误信息),而不搞乱输出结果。
* 能够避免输出重复的信息。
* 兼容常用的 `ping` 工具命令参数。
* 能够由普通用户执行。
* 可以将输出重定向到文件中。
* 不需要安装,只需要下载二进制文件,赋予可执行权限即可执行。
* 快速且轻巧。
* 输出结果清晰直观。
### 安装 Prettyping
如上所述,Prettyping 是一个绿色软件,不需要任何安装,只要使用以下命令下载 Prettyping 二进制文件:
```
$ curl -O https://raw.githubusercontent.com/denilsonsa/prettyping/master/prettyping
```
将二进制文件放置到 `$PATH`(例如 `/usr/local/bin`)中:
```
$ sudo mv prettyping /usr/local/bin
```
然后对其赋予可执行权限:
```
$ sudo chmod +x /usr/local/bin/prettyping
```
就可以使用了。
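如果你想把“下载、移动、授权”这三步合并成一步,可以参考下面这个示例函数(`install_prettyping` 这个函数名及其参数是本文为演示而虚构的;为了便于离线演示,它也接受本地文件路径作为来源,目标路径同样做成了参数,写入 `/usr/local/bin` 时仍需要相应权限):

```shell
# 示例函数:获取 prettyping 脚本、放到目标位置并赋予可执行权限
# (函数名与参数均为演示用的假设)
install_prettyping() {
    src="${1:-https://raw.githubusercontent.com/denilsonsa/prettyping/master/prettyping}"
    dest="${2:-/usr/local/bin/prettyping}"
    case "$src" in
        http://*|https://*) curl -sSfo "$dest" "$src" ;;  # 远程地址:下载
        *)                  cp "$src" "$dest" ;;          # 本地文件:直接复制
    esac
    chmod +x "$dest"
    echo "installed to $dest"
}
```

不带参数调用时,它执行的就是前面三条命令的组合。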
### 让 ping 的输出清晰易读
安装完成后,通过 `prettyping` 来 ping 任何主机或 IP 地址,就可以以图形方式查看输出。
```
$ prettyping ostechnix.com
```
输出效果大概会是这样:

如果你不带任何参数执行 `prettyping`,它就会一直运行直到被 `ctrl + c` 中断。
由于 Prettyping 只是一个对普通 `ping` 命令的封装,所以常用的 ping 参数也是有效的。例如使用 `-c 5` 来指定只 ping 一台主机 5 次:
```
$ prettyping -c 5 ostechnix.com
```
Prettyping 默认会使用彩色输出,如果你不喜欢彩色的输出,可以加上 `--nocolor` 参数:
```
$ prettyping --nocolor ostechnix.com
```
同样的,也可以用 `--nomulticolor` 参数禁用多颜色支持:
```
$ prettyping --nomulticolor ostechnix.com
```
使用 `--nounicode` 参数禁用 unicode 字符:

如果你的终端不支持 UTF-8,或者无法修复系统中的 unicode 字体,只需要加上 `--nounicode` 参数就能轻松解决。
Prettyping 支持将输出的内容重定向到文件中,例如执行以下这个命令会将 `prettyping ostechnix.com` 的输出重定向到 `ostechnix.txt` 中:
```
$ prettyping ostechnix.com | tee ostechnix.txt
```
Prettyping 还有很多选项帮助你完成各种任务,例如:
* 启用/禁用延时图例(默认启用)
* 强制按照终端的格式输出(默认自动)
* 在统计数据中统计最后的 n 次 ping(默认 60 次)
* 覆盖对终端尺寸的自动检测
* 指定 awk 解释器路径(默认:`awk`)
* 指定 ping 工具路径(默认:`ping`)
查看帮助文档可以了解更多:
```
$ prettyping --help
```
尽管 Prettyping 没有添加任何额外功能,但我个人喜欢它的这些优点:
* 实时统计 —— 可以随时查看所有实时统计信息,标准 `ping` 命令只会在命令执行结束后才显示统计信息。
* 紧凑的显示 —— 可以在终端看到更长的时间跨度。
* 检测丢失的数据包并显示出来。
如果你一直在寻找可视化显示 `ping` 命令输出的工具,那么 Prettyping 肯定会有所帮助。尝试一下,你不会失望的。
---
via: <https://www.ostechnix.com/prettyping-make-the-output-of-ping-command-prettier-and-easier-to-read/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,068 | 书评:《算法之美( Algorithms to Live By )》 | https://www.eyrie.org/~eagle/reviews/books/1-62779-037-3.html | 2018-10-01T11:27:50 | [
"算法"
] | https://linux.cn/article-10068-1.html | 
又一次为了工作图书俱乐部而读书。除了其它我亲自推荐的书,这是我至今最喜爱的书。
作为计算机科学基础之一的研究领域是算法:我们如何高效地用计算机程序解决问题?这基本上属于数学领域,但是这很少关于理想的或理论上的解决方案,而是更在于最高效地利用有限的资源获得一个充分(如果不能完美)的答案。其中许多问题要么是日常的生活问题,要么与人们密切相关。毕竟,计算机科学的目的是为了用计算机解决实际问题。《<ruby> 算法之美 <rt> Algorithms to Live By </rt></ruby>》提出的问题是:“我们可以反过来吗”——我们可以通过学习计算机科学解决问题的方式来帮助我们做出日常决定吗?
本书的十一个章节有很多有趣的内容,但也有一个有趣的主题:人类早已擅长这一点。很多章节以一个算法研究和对问题的数学分析作为开始,接着深入到探讨如何利用这些结果做出更好的决策,然后讨论关于人类真正会做出的决定的研究,之后,考虑到典型生活情境的限制,会发现人类早就在应用我们提出的最佳算法的特殊版本了。这往往会破坏本书的既定目标,值得庆幸的是,它决不会破坏对一般问题的有趣讨论,即计算机科学如何解决它们,以及我们对这些问题的数学和技术形态的了解。我认为这本书的自助效用比作者打算的少一些,但有很多可供思考的东西。
(也就是说,值得考虑这种一致性是否太少了,因为人类已经擅长这方面了,更因为我们的算法是根据人类直觉设计的。可能我们的最佳算法只是反映了人类的思想。在某些情况下,我们发现我们的方案和数学上的典范不一样,但是在另一些情况下,它们仍然是我们当下最好的猜想。)
这是那种章节列表是书评里重要部分的书。这里讨论的算法领域有最优停止、探索和利用决策(什么时候带着你发现的最好东西走,以及什么时候寻觅更好的东西),以及排序、缓存、调度、贝叶斯定理(一般还有预测)、创建模型时的过拟合、放松(解决容易的问题而不是你的实际问题)、随机算法、一系列网络算法,最后还有游戏理论。其中每一项都有有用的见解和发人深省的讨论——这些有时显得十分理论化的概念令人吃惊地很好地映射到了日常生活。这本书以一段关于“可计算的善意”的讨论结束:鼓励减少你自己和你交往的人所需的计算和复杂性惩罚。
如果你有计算机科学背景(就像我一样),其中许多都是熟悉的概念,而且你因为被普及了很多新东西或许会有疑惑。然而,请给这本书一个机会,类比法没你担忧的那么令人紧张。作者既小心又聪明地应用了这些原则。这本书令人惊喜地通过了一个重要的合理性检查:涉及到我知道或反复思考过的主题的章节很少有或没有明显的错误,而且能讲出有用和重要的事情。比如,调度的那一章节毫不令人吃惊地和时间管理有关,通过直接跳到时间管理问题的核心而胜过了半数的时间管理类书籍:如果你要做一个清单上的所有事情,你做这些事情的顺序很少要紧,所以最难的调度问题是决定不做哪些事情而不是做这些事情的顺序。
作者在贝叶斯定理这一章节中的观点完全赢得了我的心。本章的许多内容都是关于贝叶斯先验的,以及一个人对过去事件的了解为什么对分析未来的概率很重要。作者接着讨论了著名的棉花糖实验。即给了儿童一个棉花糖以后,儿童被研究者告知如果他们能够克制自己不吃这个棉花糖,等到研究者回来时,会给他们两个棉花糖。克制自己不吃棉花糖(在心理学文献中叫作“延迟满足”)被发现与未来几年更好的生活有关。这个实验多年来一直被引用和滥用于各种各样的宣传,关于选择未来的收益放弃即时的快乐从而拥有成功的生活,以及生活中的失败是因为无法延迟满足。更多的邪恶分析(当然)将这种能力与种族联系在一起,带有可想而知的种族主义结论。
棉花糖实验是我特别耿耿于怀的一个话题,百分之百能让我愤怒咆哮。
《算法之美》是我读过的唯一提到了棉花糖实验并应用了我认为更有说服力的分析的书。这不是一个关于儿童天赋的实验,这是一个关于他们的贝叶斯先验的实验。什么时候立即吃棉花糖而不是等待奖励是完全合理的?当他们过去的经历告诉他们成年人不可靠,不可信任,会在不可预测的时间内消失并且撒谎的时候。而且,更好的是,作者用我之前没有听说过的后续研究和观察支持了这一分析,观察到的内容是,一些孩子会等待一段时间然后“放弃”。如果他们下意识地使用具有较差先验的贝叶斯模型,这就完全合情合理。
这是一本很好的书。它可能在某些地方的尝试有点太勉强(数学上最优停止对于日常生活的适用性比我认为作者想要表现的更加偶然和牵强附会),如果你学过算法,其中一些内容会感到熟悉,但是它的行文思路清晰,简洁,而且编辑得非常好。这本书没有哪一部分对不起它所受到的欢迎,书中的讨论贯穿始终。如果你发现自己“已经知道了这一切”,你可能还会在接下来几页中遇到一个新的概念或一个简洁的解释。有时作者会做一些我从没想到但是回想起来正确的联系,比如将网络协议中的指数退避和司法系统中的选择惩罚联系起来。还有意识到我们的现代通信世界并不是一直联系的,它是不断缓冲的,我们中的许多人正深受缓冲膨胀这一独特现象的苦恼。
我认为你并不必须是计算机科学专业或者精通数学才能读这本书。如果你想深入,每章的结尾都有许多数学上的细节,但是正文总是易读而清晰,至少就我所知是这样(作为一个以计算机科学为专业并学到了很多数学知识的人,你至少可以有保留地相信我)。即使你已经钻研了多年的算法,这本书仍然可以提供很多东西。
这本书我读得越多越喜欢。如果你喜欢阅读这种对生活的分析,我当然是赞成的。
Rating: 9 out of 10
Reviewed: 2017-10-22
---
via: [https://www.eyrie.org/~eagle/reviews/books/1-62779-037-3.html](https://www.eyrie.org/%7Eeagle/reviews/books/1-62779-037-3.html)
作者:[Brian Christian;Tom Griffiths](https://www.eyrie.org) 译者:[GraveAccent](https://github.com/GraveAccent) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | <
|
Publisher: | Henry Holt and Company |
Copyright: | April 2016 |
ISBN: | 1-62779-037-3 |
Format: | Kindle |
Pages: | 255 |
This is an ebook, so metadata may be inaccurate or
missing. See [notes on ebooks](../notes.html#kindle) for more
information.
Another read for the work book club. This was my favorite to date, apart from the books I recommended myself.
One of the foundations of computer science as a field of study is research into algorithms: how do we solve problems efficiently using computer programs? This is a largely mathematical field, but it's often less about ideal or theoretical solutions and more about making the most efficient use of limited resources and arriving at an adequate, if not perfect, answer. Many of these problems are either day-to-day human problems or are closely related to them; after all, the purpose of computer science is to solve practical problems with computers. The question asked by Algorithms to Live By is "can we reverse this?": can we learn lessons from computer science's approach to problems that would help us make day-to-day decisions?
There's a lot of interesting material in the eleven chapters of this book, but there's also an amusing theme: humans are already very good at this. Many chapters start with an examination of algorithms and mathematical analysis of problems, dive into a discussion of how we can use those results to make better decisions, then talk about studies of the decisions humans actually make... and discover that humans are already applying ad hoc versions of the best algorithms we've come up with, given the constraints of typical life situations. It tends to undermine the stated goal of the book. Thankfully, it in no way undermines interesting discussion of general classes of problems, how computer science has tackled them, and what we've learned about the mathematical and technical shapes of those problems. There's a bit less self-help utility here than I think the authors had intended, but lots of food for thought.
(That said, it's worth considering whether this congruence is less because humans are already good at this and more because our algorithms are designed from human intuition. Maybe our best algorithms just reflect human thinking. In some cases we've checked our solutions against mathematical ideals, but in other cases they're still just our best guesses to date.)
This is the sort of a book where a chapter listing is an important part of the review. The areas of algorithms discussed here are optimal stopping, explore/exploit decisions (when to go with the best thing you've found and when to look for something better), sorting, caching, scheduling, Bayes's rule (and prediction in general), overfitting when building models, relaxation (solving an easier problem than your actual problem), randomized algorithms, a collection of networking algorithms, and finally game theory. Each of these has useful insights and thought-provoking discussion of how these sometimes-theoretical concepts map surprisingly well onto daily problems. The book concludes with a discussion of "computational kindness": an encouragement to reduce the required computation and complexity penalty for both yourself and the people you interact with.
If you have a computer science background (as I do), many of these will be
familiar concepts, and you might be dubious that a popularization would
tell you much that's new. Give this book a shot, though; the analogies
are less stretched than you might fear, and the authors are both careful
and smart about how they apply these principles. This book passes with
flying colors a key sanity check: the chapters on topics that I know well
or have thought about a lot make few or no obvious errors and say useful
and important things. For example, the scheduling chapter, which
unsurprisingly is about time management, surpasses more than half of the
time management literature by jumping straight to the heart of most time
management problems: if you're going to do everything on a list, it rarely
matters the order in which you do it, so the hardest scheduling problems
are about deciding what *not* to do rather than deciding order.
The point in the book where the authors won my heart completely was in the chapter on Bayes's rule. Much of the chapter is about Bayesian priors, and how one's knowledge of past events is a vital part of analysis of future probabilities. The authors then discuss the (in)famous marshmallow experiment, in which children are given one marshmallow and told that if they refrain from eating it until the researcher returns, they'll get two marshmallows. Refraining from eating the marshmallow (delayed gratification, in the psychological literature) was found to be associated with better life outcomes years down the road. This experiment has been used and abused for years for all sorts of propaganda about how trading immediate pleasure for future gains leads to a successful life, and how failure in life is because of inability to delay gratification. More evil analyses have (of course) tied that capability to ethnicity, with predictably racist results.
I have [kind of a thing](1-59184-679-X.html) about the marshmallow
experiment. It's a topic that reliably sends me off into angry rants.
Algorithms to Live By is the *only book* I have ever read to
mention the marshmallow experiment and then apply the analysis that I find
far more convincing. This is not a test of innate capability in the
children; it's a test of their Bayesian priors. When does it make perfect
sense to eat the marshmallow immediately instead of waiting for a reward?
When their past experience tells them that adults are unreliable, can't be
trusted, disappear for unpredictable lengths of time, and lie. And, even
better, the authors supported this analysis with both a follow-up study I
hadn't heard of before and with the observation that some children would
wait for some time and then "give in." This makes perfect sense if they
were subconsciously using a Bayesian model with poor priors.
This is a great book. It may try a bit too hard in places (applicability
of the math of optimal stopping to everyday life is more contingent and
strained than I think the authors want to admit), and some of this will be
familiar if you've studied algorithms. But the writing is clear,
succinct, and very well-edited. No part of the book outlives its welcome;
the discussion moves right along. If you find yourself going "I know all
this already," you'll still probably encounter a new concept or neat
explanation in a few more pages. And sometimes the authors make
connections that never would have occurred to me but feel right in
retrospect, such as relating exponential backoff in networking protocols
to choosing punishments in the criminal justice system. Or the
realization that our modern communication world is not constantly
connected, it's constantly *buffered*, and many of us are suffering
from the characteristic signs of buffer bloat.
I don't think you have to be a CS major, or know much about math, to read this book. There is a lot of mathematical details in the end notes if you want to dive in, but the main text is almost always readable and clear, at least so far as I could tell (as someone who was a CS major and has taken a lot of math, so a grain of salt may be indicated). And it still has a lot to offer even if you've studied algorithms for years.
The more I read of this book, the more I liked it. Definitely recommended if you like reading this sort of analysis of life.
Reviewed: 2017-10-22
<
| |
10,069 | 一些提高开源代码安全性的工具 | https://www.linux.com/blog/2018/5/free-resources-securing-your-open-source-code | 2018-10-01T12:11:20 | [
"安全",
"开源"
] | https://linux.cn/article-10069-1.html |
>
> 开源软件的迅速普及带来了对健全安全实践的需求。
>
>
>

虽然目前开源依然发展势头较好,并被广大的厂商所采用,然而最近由 Black Duck 和 Synopsys 发布的 [2018 开源安全与风险评估报告](https://www.blackducksoftware.com/open-source-security-risk-analysis-2018)指出了一些存在的风险,并重点阐述了对于健全安全措施的需求。这份报告的分析资料素材来自经过脱敏后的 1100 个商业代码库,这些代码所涉及:自动化、大数据、企业级软件、金融服务业、健康医疗、物联网、制造业等多个领域。
这份报告强调开源软件正在被大量的使用,扫描结果中有 96% 的应用都使用了开源组件。然而,报告还指出许多其中存在很多漏洞。具体在 [这里](https://www.prnewswire.com/news-releases/synopsys-report-finds-majority-of-software-plagued-by-known-vulnerabilities-and-license-conflicts-as-open-source-adoption-soars-300648367.html):
* 令人担心的是扫描的所有结果中,有 78% 的代码库存在至少一个开源的漏洞,平均每个代码库有 64 个漏洞。
* 在经过代码审计的代码库中,发现的漏洞有超过 54% 经验证属于高危漏洞。
* 17% 的代码库包含早已广为人知的漏洞,例如:Heartbleed、Logjam、Freak、Drown、Poodle。
Synopsys 旗下 Black Duck 的技术负责人 Tim Mackey 称,“这份报告清楚的阐述了:随着开源软件正在被企业广泛的使用,企业与组织也应当使用一些工具来检测可能出现在这些开源软件中的漏洞,以及管理其所使用的开源软件的方式是否符合相应的许可证规则。”
确实,随着影响力越来越大的安全威胁不断出现,对安全工具和实践的需求比以往任何时候都更迫切。大多数组织已经意识到,网络与系统管理员需要具备较强的安全技能和相应的安全认证。[在一篇文章中](https://www.linux.com/blog/sysadmin-ebook/2017/8/future-proof-your-sysadmin-career-locking-down-security),我们给出一些具有较大影响力的工具、认证和实践。
Linux 基金会已经在安全方面提供了许多关于安全的信息与教育资源。比如,Linux 社区提供了许多针对特定平台的免费资源,其中 [Linux 工作站安全检查清单](/article-6753-1.html) 其中提到了很多有用的基础信息。线上的一些发表刊物也可以提升用户针对某些平台对于漏洞的保护,如:[Fedora 安全指南](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html)、[Debian 安全手册](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html)。
目前被广泛使用的私有云平台 OpenStack 也加强了关于基于云的智能安全需求。根据 Linux 基金会发布的 [公有云指南](https://www.linux.com/publications/2016-guide-open-cloud):“据 Gartner 的调研结果,尽管公有云的服务商在安全审查和提升透明度方面做的都还不错,安全问题仍然是企业考虑向公有云转移的最重要的考量之一。”
无论是对于组织还是个人,千里之堤毁于蚁穴,这些“蚁穴”无论是来自路由器、防火墙、VPN 或虚拟机都可能导致灾难性的后果。以下是一些免费的工具可能对于检测这些漏洞提供帮助:
* [Wireshark](https://www.wireshark.org/),流量包分析工具
* [KeePass Password Safe](http://keepass.info/),自由开源的密码管理器
* [Malwarebytes](https://www.malwarebytes.com/),免费的反病毒和勒索软件工具
* [NMAP](http://searchsecurity.techtarget.co.uk/tip/Nmap-tutorial-Nmap-scan-examples-for-vulnerability-discovery),安全扫描器
* [NIKTO](https://cirt.net/Nikto2),开源的 web 服务器扫描器
* [Ansible](https://www.ansible.com/),自动化的配置运维工具,可以辅助做安全基线
* [Metasploit](https://www.metasploit.com/),渗透测试工具,可辅助理解攻击向量
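顺带一提,Nmap 这类扫描器所做的单端口探测,其核心思路可以用 bash 内置的 `/dev/tcp` 伪设备粗略演示。下面是一个示意性片段(并非上述任何工具的实际用法,其中的主机与端口号仅作演示):

```shell
# 示意:探测某主机的某个 TCP 端口是否开放
# 利用 bash 的 /dev/tcp 伪设备发起一次连接尝试
check_port() {
  (echo > "/dev/tcp/$1/$2") 2>/dev/null && echo "open" || echo "closed"
}

check_port 127.0.0.1 1    # 端口 1 在绝大多数系统上是关闭的
```

真正的安全扫描请使用 Nmap 等专业工具,上面的片段只是用来说明原理。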
这里有一些对上面工具讲解的视频。比如 [Metasploit 教学](http://www.computerweekly.com/tutorial/The-Metasploit-Framework-Tutorial-PDF-compendium-Your-ready-reckoner)、[Wireshark 教学](https://www.youtube.com/watch?v=TkCSr30UojM)。还有一些传授安全技能的免费电子书,比如:由 Ibrahim Haddad 博士和 Linux 基金会共同出版的[并购过程中的开源审计](https://www.linuxfoundation.org/resources/open-source-audits-merger-acquisition-transactions/),里面阐述了多条在技术平台合并过程中,因没有较好的进行开源审计,从而引发的安全问题。当然,书中也记录了如何在这一过程中进行代码合规检查、准备以及文档编写。
同时,我们 [之前提到的一个免费的电子书](https://www.linux.com/news/networking-security-storage-docker-containers-free-ebook-covers-essentials), 由来自 [The New Stack](http://thenewstack.io/ebookseries/) 编写的“Docker 与容器中的网络、安全和存储”,里面也提到了关于加强容器网络安全的最新技术,以及 Docker 本身可提供的关于提升其网络的安全与效率的最佳实践。这本电子书还记录了关于如何构建安全容器集群的最佳实践。
所有这些工具和资源,可以在很大的程度上预防安全问题,正如人们所说的未雨绸缪,考虑到一直存在的安全问题,现在就应该开始学习这些安全合规资料与工具。
想要了解更多的安全、合规以及开源项目问题,点击[这里](https://www.linuxfoundation.org/projects/security-compliance/)。
---
via: <https://www.linux.com/blog/2018/5/free-resources-securing-your-open-source-code>
作者:[Sam Dean](https://www.linux.com/users/sam-dean) 选题:[lujun9972](https://github.com/lujun9972) 译者:[sd886393](https://github.com/sd886393) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,070 | 在 Linux 下截屏并编辑的最佳工具 | https://itsfoss.com/take-screenshot-linux/ | 2018-10-01T21:15:22 | [
"截屏"
] | https://linux.cn/article-10070-1.html |
>
> 有几种获取屏幕截图并对其进行添加文字、箭头等编辑的方法,这里提及的的屏幕截图工具在 Ubuntu 和其它主流 Linux 发行版中都能够使用。
>
>
>

当我的主力操作系统从 Windows 转换到 Ubuntu 的时候,首要考虑的就是屏幕截图工具的可用性。尽管使用默认的键盘快捷键也可以获取屏幕截图,但如果使用屏幕截图工具,可以更方便地对屏幕截图进行编辑。
本文将会介绍在不使用第三方工具的情况下,如何通过系统自带的方法和工具获取屏幕截图,另外还会介绍一些可用于 Linux 的最佳截图工具。
### 方法 1:在 Linux 中截图的默认方式
你想要截取整个屏幕?屏幕中的某个区域?某个特定的窗口?
如果只需要获取一张屏幕截图,不对其进行编辑的话,那么键盘的默认快捷键就可以满足要求了。而且不仅仅是 Ubuntu ,绝大部分的 Linux 发行版和桌面环境都支持以下这些快捷键:
* `PrtSc` – 获取整个屏幕的截图并保存到 Pictures 目录。
* `Shift + PrtSc` – 获取屏幕的某个区域截图并保存到 Pictures 目录。
* `Alt + PrtSc` –获取当前窗口的截图并保存到 Pictures 目录。
* `Ctrl + PrtSc` – 获取整个屏幕的截图并存放到剪贴板。
* `Shift + Ctrl + PrtSc` – 获取屏幕的某个区域截图并存放到剪贴板。
* `Ctrl + Alt + PrtSc` – 获取当前窗口的截图并存放到剪贴板。
如上所述,在 Linux 中使用默认的快捷键获取屏幕截图是相当简单的。但如果要在不把屏幕截图导入到其它应用程序的情况下对屏幕截图进行编辑,还是使用屏幕截图工具比较方便。
### 方法 2:在 Linux 中使用 Flameshot 获取屏幕截图并编辑

功能概述:
* 注释 (高亮、标示、添加文本、框选)
* 图片模糊
* 图片裁剪
* 上传到 Imgur
* 用另一个应用打开截图
Flameshot 在去年发布到 [GitHub](https://github.com/lupoDharkael/flameshot),并成为一个引人注目的工具。
如果你需要的是一个能够用于标注、模糊、上传到 imgur 的新式截图工具,那么 Flameshot 是一个好的选择。
下面将会介绍如何安装 Flameshot 并根据你的偏好进行配置。
如果你用的是 Ubuntu,那么只需要在 Ubuntu 软件中心上搜索,就可以找到 Flameshot 进而完成安装了。要是你想使用终端来安装,可以执行以下命令:
```
sudo apt install flameshot
```
如果你在安装过程中遇到问题,可以按照[官方的安装说明](https://github.com/lupoDharkael/flameshot#installation)进行操作。安装完成后,你还需要进行配置。尽管可以通过搜索来随时启动 Flameshot,但如果想使用 `PrtSc` 键触发启动,则需要指定对应的键盘快捷键。以下是相关配置步骤:
* 进入系统设置中的“键盘设置”
* 页面中会列出所有现有的键盘快捷键,拉到底部就会看见一个 “+” 按钮
* 点击 “+” 按钮添加自定义快捷键并输入以下两个字段:
+ “名称”: 任意名称均可。
+ “命令”: `/usr/bin/flameshot gui`
* 最后将这个快捷操作绑定到 `PrtSc` 键上,可能会提示与系统的截图功能相冲突,但可以忽略掉这个警告。
配置之后,你的自定义快捷键页面大概会是以下这样:

*将键盘快捷键映射到 Flameshot*
### 方法 3:在 Linux 中使用 Shutter 获取屏幕截图并编辑

功能概述:
* 注释 (高亮、标示、添加文本、框选)
* 图片模糊
* 图片裁剪
* 上传到图片网站
[Shutter](http://shutter-project.org/) 是一个对所有主流 Linux 发行版都适用的屏幕截图工具。尽管最近已经不太更新了,但仍然是操作屏幕截图的一个优秀工具。
在使用过程中可能会遇到这个工具的一些缺陷。Shutter 在任何一款最新的 Linux 发行版上最常见的问题就是由于缺少了任务栏上的程序图标,导致默认禁用了编辑屏幕截图的功能。 对于这个缺陷,还是有解决方案的。你只需要跟随我们的教程[在 Shutter 中修复这个禁止编辑选项并将程序图标在任务栏上显示出来](https://itsfoss.com/shutter-edit-button-disabled/)。问题修复后,就可以使用 Shutter 来快速编辑屏幕截图了。
同样地,在软件中心搜索也可以找到进而安装 Shutter,也可以在基于 Ubuntu 的发行版中执行以下命令使用命令行安装:
```
sudo apt install shutter
```
类似 Flameshot,你可以通过搜索 Shutter 手动启动它,也可以按照相似的方式设置自定义快捷方式以 `PrtSc` 键唤起 Shutter。
如果要指定自定义键盘快捷键,只需要执行以下命令:
```
shutter -f
```
### 方法 4:在 Linux 中使用 GIMP 获取屏幕截图

功能概述:
* 高级图像编辑功能(缩放、添加滤镜、颜色校正、添加图层、裁剪等)
* 截取某一区域的屏幕截图
如果需要对屏幕截图进行一些预先编辑,GIMP 是一个不错的选择。
通过软件中心可以安装 GIMP。如果在安装时遇到问题,可以参考其[官方网站的安装说明](https://www.gimp.org/downloads/)。
要使用 GIMP 获取屏幕截图,需要先启动程序,然后通过 “File-> Create-> Screenshot” 导航。
打开 Screenshot 选项后,会看到几个控制点来控制屏幕截图范围。点击 “Snap” 截取屏幕截图,图像将自动显示在 GIMP 中可供编辑。
### 方法 5:在 Linux 中使用命令行工具获取屏幕截图
这一节内容仅适用于终端爱好者。如果你也喜欢使用终端,可以使用 “GNOME 截图工具”、“ImageMagick” 或 “Deepin Scrot”,大部分流行的 Linux 发行版中都自带这些工具。
要立即获取屏幕截图,可以执行以下命令:
#### GNOME 截图工具(可用于 GNOME 桌面)
```
gnome-screenshot
```
GNOME 截图工具是使用 GNOME 桌面的 Linux 发行版中都自带的一个默认工具。如果需要延时获取屏幕截图,可以执行以下命令(这里的 `5` 是需要延迟的秒数):
```
gnome-screenshot -d -5
```
#### ImageMagick
如果你的操作系统是 Ubuntu、Mint 或其它流行的 Linux 发行版,一般会自带 [ImageMagick](https://www.imagemagick.org/script/index.php) 这个工具。如果没有这个工具,也可以按照[官方安装说明](https://www.imagemagick.org/script/install-source.php)使用安装源来安装。你也可以在终端中执行这个命令:
```
sudo apt-get install imagemagick
```
安装完成后,执行下面的命令就可以获取到屏幕截图(截取整个屏幕):
```
import -window root image.png
```
这里的 “image.png” 就是屏幕截图文件保存的名称。
要获取屏幕一个区域的截图,可以执行以下命令:
```
import image.png
```
#### Deepin Scrot
Deepin Scrot 是基于终端的一个较新的截图工具。和前面两个工具类似,一般自带于 Linux 发行版中。如果需要自行安装,可以执行以下命令:
```
sudo apt-get install scrot
```
安装完成后,使用下面这些命令可以获取屏幕截图。
获取整个屏幕的截图:
```
scrot myimage.png
```
获取屏幕某一区域的截图:
```
scrot -s myimage.png
```
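如果你经常用 scrot 截图,可以用时间戳生成文件名,避免覆盖之前的截图。下面是一个示意性片段(实际的 `scrot` 调用需要图形环境,这里将其注释掉,只演示文件名的生成):

```shell
# 示意:用当前时间生成不会重复的截图文件名
filename="screenshot-$(date +%Y%m%d-%H%M%S).png"
echo "将保存为:$filename"
# scrot "$filename"    # 在图形环境下取消注释即可实际截图
```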
### 总结
以上是一些在 Linux 上的优秀截图工具。当然还有很多截图工具没有提及(例如用于 KDE 发行版的 [Spectacle](https://www.kde.org/applications/graphics/spectacle/)),但相比起来还是上面几个工具更为好用。
如果你有比文章中提到的更好的截图工具,欢迎讨论!
---
via: <https://itsfoss.com/take-screenshot-linux/>
作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

When I switched from Windows to Ubuntu as my primary OS, I was first worried about the **availability of screenshot tools**.
It is easy to utilize the default keyboard shortcuts to take screenshots, but with a standalone tool, I get to annotate/edit the image while taking the screenshot.
In this article, I will introduce you to the default methods/tools to take a screenshot while also covering the list of the best screenshot tools available for Linux.
## Method 1: The default way to take screenshots in Linux
Do you want to capture the image of your entire screen? A specific region? A specific window?
If you just want a simple screenshot without any annotations/fancy editing capabilities, the default keyboard shortcuts will do the trick. These are not specific to Ubuntu. Almost all Linux distributions and desktop environments support these keyboard shortcuts.
Let’s take a look at the list of keyboard shortcuts you can utilize:
**PrtSc** – *Save a screenshot of the entire screen to the “Pictures” directory.***Shift + PrtSc** – *Save a screenshot of a specific region to Pictures.***Alt + PrtSc** – * Save a screenshot of the current window to Pictures*.
**Ctrl + PrtSc**–
*Copy the screenshot of the entire screen to the clipboard.***Shift + Ctrl + PrtSc**–
*Copy the screenshot of a specific region to the clipboard.***Ctrl + Alt + PrtSc**–
*Copy the screenshot of the current window to the clipboard.*As you can see, [taking screenshots in Linux is absolutely simple with the default GNOME screenshot tool](https://itsfoss.com/using-gnome-screenshot-tool/). However, if you want to immediately annotate (or other editing features) without importing the screenshot to another application, you can use a dedicated screenshot tool.
## Method 2: Take and edit screenshots in Linux with Flameshot

*Feature Overview*
**Annotate (highlight, point, add text, box in)****Blur part of an image****Crop part of an image****Upload to Imgur****Open screenshot with another app**
[Flameshot](https://itsfoss.com/flameshot/) is a quite impressive screenshot tool which arrived on [GitHub](https://github.com/lupoDharkael/flameshot) last year.
If you have been searching for a screenshot tool that helps you annotate, blur, mark, and upload to imgur while being actively maintained unlike some outdated screenshot tools, Flameshot should be the one to have installed.
Fret not, we will guide you on how to install it and configure it as per your preferences.
To install it on Ubuntu, you just need to search for it on Ubuntu Software center and get it installed. In case you want to use the terminal, here’s the command for it:
`sudo apt install flameshot`
If you face any trouble installing, you can follow their [official installation instructions](https://github.com/lupoDharkael/flameshot#installation). After installation, you need to configure it. Well, you can always search for it and launch it, but if you want to trigger the Flameshot screenshot tool by using **PrtSc** key, you need to assign a custom keyboard shortcut.
Here’s how you can do that:
- Head to the system settings and navigate your way to the Keyboard settings.
- You will find all the keyboard shortcuts listed there, ignore them and scroll down to the bottom. Now, you will find a
**+**button. - Click the “+” button to add a custom shortcut. You need to enter the following in the fields you get:
**Name:***Anything You Want***Command:**/usr/bin/flameshot gui - Finally, set the shortcut to
**PrtSc**– which will warn you that the default screenshot functionality will be disabled – so proceed doing it.
For reference, your custom keyboard shortcut field should look like this after configuration:

## Method 3: Take and edit screenshots in Linux with Shutter

*Feature Overview:*
**Annotate (highlight, point, add text, box in)****Blur part of an image****Crop part of an image****Upload to image hosting sites**
[Shutter](http://shutter-project.org/) is a popular screenshot tool available for all major Linux distributions. Though it seems to be no more being actively developed, it is still an excellent choice for handling screenshots.
You might encounter certain bugs/errors. The most common problem with Shutter on any latest Linux distro releases is that the ability to edit the screenshots is disabled by default along with the missing applet indicator. But, fret not, we have a solution to that. You just need to follow our guide to[ fix the disabled edit option in Shutter and bring back the applet indicator](https://itsfoss.com/shutter-edit-button-disabled/).
After you’re done fixing the problem, you can utilize it to edit the screenshots in a jiffy.
To install shutter, you can browse the software center and get it from there. Alternatively, you can use the following command in the terminal to install Shutter in Ubuntu-based distributions:
`sudo apt install shutter`
As we saw with Flameshot, you can either use the app launcher to search for Shutter and manually launch the application, or you can follow the same instructions (with a different command) to set a custom shortcut to trigger Shutter when you press the** PrtSc** key.
If you are going to assign a custom keyboard shortcut, you need to use the following in the command field:
`shutter -f`
## Method 4: Use GIMP to take screenshots in Linux

*Feature Overview:*
**Advanced Image Editing Capabilities (Scaling, Adding filters, color correction, Add layers, Crop, and so on.)****Take a screenshot of the selected area**
If you use GIMP a lot and you probably want some advance edits on your screenshots, GIMP would be a good choice.
You should already have it installed, if not, you can always head to your software center to install it. If you have trouble installing, you can always refer to their [official website for installation instructions](https://www.gimp.org/downloads/).
To take a screenshot with GIMP, you need first to launch it, and then navigate your way through **File->Create->Screenshot**.
After you click on the screenshot option, you will be greeted with a couple of tweaks to control the screenshot. That’s just it. Click “**Snap**” to take the screenshot, and the image will automatically appear within GIMP, ready for you to edit.
## Method 5: Taking screenshots in Linux using command-line tools
This section is strictly for terminal lovers. If you like using the terminal, you can utilize the **GNOME screenshot** tool or **ImageMagick** or **Deepin Scrot**– which comes baked in on most of the popular Linux distributions.
*To take a screenshot instantly*, enter the following command:
### GNOME Screenshot (for GNOME desktop users)
`gnome-screenshot`
*To take a screenshot with a delay*, enter the following command (here, **5** is the number of seconds you want to delay):
GNOME screenshot is one of the default tools that exists in all distributions with GNOME desktop.
`gnome-screenshot -d -5`
### ImageMagick
[ImageMagick](https://www.imagemagick.org/script/index.php) should be already pre-installed on your system if you are using Ubuntu, Mint, or any other popular Linux distribution. In case, it isn’t there, you can always install it by following the [official installation instructions (from source)](https://www.imagemagick.org/script/install-source.php). In either case, you can enter the following in the terminal:
`sudo apt-get install imagemagick`
After you have it installed, you can type in the following commands to take a screenshot:
To take the screenshot of your entire screen:
`import -window root image.png`
Here, “* image.png*” is your desired name for the screenshot.
To take the screenshot of a specific area:
`import image.png`
### Deepin Scrot
Deepin Scrot is a slightly advanced terminal-based screenshot tool. Similar to the others, you should already have it installed. If not, get it installed through the terminal by typing:
`sudo apt-get install scrot`
After having it installed, follow the instructions below to take a screenshot:
*To take a screenshot of the entire screen:*
`scrot myimage.png`
*To take a screenshot of the selected area:*
`scrot -s myimage.png`
## Wrapping Up
So, these are some of the best screenshot tools available for Linux.
If you find a better screenshot tool than the ones mentioned in our article, feel free to let us know in the comments below.
Also, do tell us about your favorite screenshot tool! |
10,071 | openmediavault 入门:一个家庭 NAS 解决方案 | https://opensource.com/article/18/9/openmediavault | 2018-10-02T12:15:19 | [
"NAS"
] | https://linux.cn/article-10071-1.html |
>
> 这个网络附属文件服务提供了一系列可靠的功能,并且易于安装和配置。
>
>
>

面对许多可供选择的云存储方案,一些人可能会质疑一个家庭 NAS(<ruby> 网络附属存储 <rt> network-attached storage </rt></ruby>)服务器的价值。毕竟,当所有你的文件存储在云上,你就不需要为你自己云服务的维护、更新和安全担忧。
但是,这不完全对,是不是?你有一个家庭网络,所以你已经要负责维护网络的健康和安全。假定你已经维护一个家庭网络,那么[一个家庭 NAS](https://opensource.com/article/18/8/automate-backups-raspberry-pi)并不会增加额外负担。反而你能从少量的工作中得到许多的好处。
你可以为你家里所有的计算机进行备份(你也可以备份到其它地方)。构架一个存储电影、音乐和照片的媒体服务器,无需担心互联网连接是否连通。在家里的多台计算机上处理大型文件,不需要等待从互联网某个其它计算机传输这些文件过来。另外,可以让 NAS 与其他服务配合工作,如托管本地邮件或者家庭 Wiki。也许最重要的是,构架家庭 NAS,数据完全是你的,它始终处于在控制下,随时可访问。
接下来的问题是如何选择 NAS 方案。当然,你可以购买预先搭建好的商品,并在一天内搞定,但是这会有什么乐趣呢?实际上,尽管拥有一个能为你搞定一切的设备很棒,但是拥有一台可以自己动手修复和升级的机器更棒。这正是我近期遇到的情况,于是我选择安装和配置 [openmediavault](https://openmediavault.org)。
### 为什么选择 openmediavault?
市面上有不少开源的 NAS 解决方案,其中有些肯定比 openmediavault 流行。当我询问周围的人时,[FreeNAS](https://freenas.org) 这样的方案最常被推荐给我。那么为什么我不采纳他们的建议呢?毕竟,用它的人更多。[根据 FreeNAS 官网的一份对比数据](http://www.freenas.org/freenas-vs-openmediavault/),它包含更多的功能,并且提供更多支持选项。这当然都对,但是 openmediavault 也不差。它实际上基于 FreeNAS 的一个早期版本,虽然在下载量和功能数量上不及后者,但对于我的需求而言已经相当足够了。
另外一个因素是它让我感到很舒适。openmediavault 的底层操作系统是 [Debian](https://www.debian.org/),然而 FreeNAS 是 [FreeBSD](https://www.freebsd.org/)。由于我个人对 FreeBSD 不是很熟悉,因此如果我的 NAS 出现故障,必定难于在 FreeBSD 上修复故障。同样的,也会让我觉得难于优化或添加一些服务到这个机器上。当然,我可以学习 FreeBSD 以更熟悉它,但是我已经在家里构架了这个 NAS;我发现,如果完成它只需要较少的“学习机会”,那么构建 NAS 往往会更成功。
当然,每个人情况都不同,所以你要自己调研,然后作出最适合自己方案的决定。FreeNAS 对于许多人似乎都是不错的解决方案。openmediavault 正是适合我的解决方案。
### 安装与配置
在 [openmediavault 文档](https://openmediavault.readthedocs.io/en/latest/installation/index.html)里详细记录了安装步骤,所以我不在这里重述了。如果你曾经安装过任何一个 Linux 发行版,大部分安装步骤都是很类似的(虽然是在相对丑陋的 [Ncurses](https://invisible-island.net/ncurses/) 界面,而不像你或许在现代发行版里见到的)。我按照 [专用的驱动器](https://openmediavault.readthedocs.io/en/latest/installation/via_iso.html) 的说明来安装它。这些说明不但很好,而且相当精炼。当你搞定这些步骤,就安装好了一个基本的系统,但是你还需要做更多才能真正构建好 NAS 来存储各种文件。例如,专用驱动器方式需要在硬盘驱动器上安装 openmediavault,但那是指你的操作系统的驱动器,而不是和网络上其他计算机共享的驱动器。你需要自己把这些建立起来并且配置好。
你要做的第一件事是加载用来管理的网页界面,并修改默认密码。这个密码和之前你安装过程设置的 root 密码是不同的。这是网页界面的管理员账号,默认的账户和密码分别是 `admin` 和 `openmediavault`,当你登入后要马上修改。
#### 设置你的驱动器
一旦你安装好 openmediavault,你需要它为你做一些工作。逻辑上的第一个步骤是设置好你即将用来作为存储的驱动器。在这里,我假定你已经物理上安装好它们了,所以接下来你要做的就是让 openmediavault 识别和配置它们。第一步是确保这些磁盘是可见的。侧边栏菜单有很多选项,而且被精心的归类了。选择“Storage -> Disks”。一旦你点击该菜单,你应该能够看到所有你已经安装到该服务器的驱动,包括那个你已经用来安装 openmediavault 的驱动器。如果你没有在那里看到所有驱动器,点击“Scan”按钮去看是否能够挂载它们。通常,这不会是一个问题。
你可以独立的挂载和设置这些驱动器用于文件共享,但是对于一个文件服务器,你会想要一些冗余。你想要能够把很多驱动器当作一个单一卷,并能够在某一个驱动器出现故障时恢复你的数据,或者空间不足时安装新驱动器。这意味你将需要一个 [RAID](https://en.wikipedia.org/wiki/RAID)。你想要的什么特定类型的 RAID 的这个主题是一个大坑,值得另写一篇文章专门来讲述它(而且已经有很多关于该主题的文章了),但是简而言之是你将需要不止一个驱动器,最好的情况下,你所有的驱动都存储一样的容量。
openmediavault 支持所有标准的 RAID 级别,所以这里很简单。可以在“Storage -> RAID Management”里配置你的 RAID。配置是相当简单的:点击“Create”按钮,在你的 RAID 阵列里选择你想要的磁盘和你想要使用的 RAID 级别,并给这个阵列一个名字。openmediavault 为你处理剩下的工作。这里没有复杂的命令行,也不需要试图记住 `mdadm` 命令的一些选项参数。在我的例子,我有六个 2TB 驱动器,设置成了 RAID 10。
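顺便一提,RAID 10 会把一半的容量用于镜像,因此可用空间只有总容量的一半。下面这个简单的计算沿用文中的配置(6 块 2TB 盘),仅作示意:

```shell
# 示意:计算 RAID 10 的可用容量
drives=6       # 盘的数量
size_tb=2      # 单盘容量(TB)
usable=$(( drives / 2 * size_tb ))   # RAID 10:一半容量用于镜像
echo "RAID 10 可用容量:${usable} TB(总容量 $(( drives * size_tb )) TB)"
```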
当你的 RAID 构建好了,基本上你已经有一个地方可以存储东西了。你仅仅需要设置一个文件系统。正如你的桌面系统,一个硬盘驱动器在没有格式化的情况下是没什么用处的。所以下一个你要去的地方的是位于 openmediavault 控制面板里的“Storage -> File Systems”。和配置你的 RAID 一样,点击“Create”按钮,然后跟着提示操作。如果你在你的服务器上只有一个 RAID ,你应该可以看到一个像 `md0` 的东西。你也需要选择文件系统的类别。如果你不能确定,选择标准的 ext4 类型即可。
#### 定义你的共享
太好了!你有个地方可以存储文件了。现在你只需要让它在你的家庭网络中可见,这可以在 openmediavault 控制面板的 “Services” 部分进行配置。当谈到在网络上设置文件共享时,主要有两个选择:NFS 或者 SMB/CIFS。根据经验,如果你网络上的所有计算机都运行 Linux 系统,那么使用 NFS 会更好。然而,如果你的家庭网络是一个混合环境,包含 Linux、Windows、苹果系统和嵌入式设备的组合,那么 SMB/CIFS 可能是更合适的选择。
这些选项不是互斥的。实际上,你可以在服务器上运行这两个服务,同时拥有这些服务的好处。或者你可以混合起来,如果你有一个特定的设备做特定的任务。不管你的使用场景是怎样,配置这些服务是相当简单。点击你想要的服务,从它配置中激活它,和在网络中设定你想要的共享文件夹为可见。在基于 SMB/CIFS 共享的情况下,相对于 NFS 多了一些可用的配置,但是一般用默认配置就挺好的,接着可以在默认基础上修改配置。最酷的事情是它很容易配置,同时也很容易在需要的时候修改配置。
#### 用户配置
基本上已经完成了。你已经把驱动器配置成了 RAID,用文件系统格式化了这个 RAID,并在其上设定了共享文件夹。剩下的一件事情,是配置哪些人可以访问这些共享,以及他们拥有怎样的权限。这可以在 “Access Rights Management” 部分里设置。使用 “User” 和 “Group” 选项来设定可以连接到共享文件夹的用户,并设定这些用户对共享文件的访问权限。
一旦你完成用户配置,就几乎准备好了。你需要从不同客户端机器访问你的共享,但是这是另外一个可以单独写个文章的话题了。
玩得开心!
---
via: <https://opensource.com/article/18/9/openmediavault>
作者:[Jason van Gumster](https://opensource.com/users/mairin) 选题:[lujun9972](https://github.com/lujun9972) 译者:[jamelouis](https://github.com/jamelouis) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | With so many cloud storage options readily available, some folks might question the value of having a home NAS (network-attached storage) server. After all, with your files on the cloud, you don't have to worry about managing the maintenance, updates, and security of your own server.
But that's not entirely true, is it? You have a home network, so you've got to pay at least *some* attention to that network's health and security. Assuming you're already keeping on top of that, then [a home NAS](https://opensource.com/article/18/8/automate-backups-raspberry-pi) really isn't adding that much additional hassle. And there are all kinds of benefits to gain from that minor amount of work.
You can have a local backup of every computer in your house (you can also back up off-site). Have a media server that holds movies, music, and photos regardless of whether your internet connection flakes out. Work on large files on multiple computers in your home without waiting for them to traverse from some random computer somewhere else on the internet. Plus, you can have your NAS pull double duty with other services, like hosting local email or a household wiki. Perhaps most importantly, with a home NAS, your data is *your* data—under your control and always accessible.
The follow-on question is which NAS solution to choose. Sure, you could buy a pre-built solution and call it a day, but what fun is that? And practically speaking, although it's great to have an appliance that handles everything for you, it's often better to have a rig that you can fix and upgrade yourself. This is the situation I found myself in recently. I chose to install and set up [openmediavault](https://openmediavault.org).
## Why openmediavault?
There are a few open source NAS solutions out there, some arguably more popular than openmediavault. When I asked around, for instance, [FreeNAS](https://freenas.org) was recommended the most. So why didn't I go with that? After all, it is more widely used, includes more features, and offers more support options, [according to a comparison on the FreeNAS website](http://www.freenas.org/freenas-vs-openmediavault/). That's certainly all true. But openmediavault is no slouch. It's actually based on an earlier version of FreeNAS, and while its numbers are lower in terms of downloads and features, they're more than adequate for my needs.
Another factor was a simple matter of comfort. Openmediavault's underlying operating system is [Debian](https://www.debian.org/), whereas FreeNAS sits atop [FreeBSD](https://www.freebsd.org/). I'm personally not as familiar with FreeBSD, so that would make it more difficult for me to fix things if my NAS starts misbehaving. It also makes it more difficult for me to tweak things or add my own services to the machine if I want. Sure, I could learn FreeBSD and get more familiar with it, but I'm already home-building this NAS; I've found that projects tend to be more successful if you limit the number of "learning opportunities" you give yourself to complete them.
Every situation is different, of course, so do your research and decide what seems to be the best fit for you. FreeNAS looks like the right solution for a lot of people. Openmediavault was the right one for me.
## Installation and configuration
The installation process is pretty well covered in the [openmediavault documentation](https://openmediavault.readthedocs.io/en/latest/installation/index.html), so I won't rehash that here. If you've ever installed a Linux distribution, most of the steps should look familiar to you (though with a somewhat uglier [Ncurses](https://invisible-island.net/ncurses/) interface than you might see on modern distributions). I installed it using the [dedicated drive](https://openmediavault.readthedocs.io/en/latest/installation/via_iso.html) instructions. However, those instructions, while good, are rather spartan. When you're done, you have a base system installed, but there's more to do before you can actually use your NAS to store any files. For instance, the dedicated drive instructions install openmediavault on a hard drive, but that's the operating system drive, not the one with the shared space that's accessible to other computers on your network. You need to walk yourself through setting that up and configuring it.
The first thing you should do is load up the administrative web interface and change the default password. This password is different from the root password you set during the installation process. It's the administrative account for the web interface, and the default username and password are `admin`
and `openmediavault`
, respectively—definitely something you'll want to change immediately after logging in.
### Set up your drives
Once you've installed openmediavault, you need it to actually do stuff for you. The first logical step is to set up the drives that you're going to use for storage. I'm assuming that you've already got them physically installed, so all you have to do at this point is get openmediavault to recognize them and configure them. The first step is making sure those disks are visible. The sidebar menu has a lot of options, but it's very sensibly organized. Go to **Storage -> Disks**. Once you click that, you should see all of the drives you've installed on your server, including the one where you actually installed openmediavault. If you don't see all of your drives there, click the Scan button to see if it picks them up. Usually, it's not a problem.
You could mount these drives individually to set them up as your file share, but for a file server, you'll want some redundancy. You want to be able to treat multiple drives as a single volume and recover your data if a drive fails or add new drives when you start running out of space. That means you're going to want a [RAID](https://en.wikipedia.org/wiki/RAID). The topic of what specific type of RAID configuration you want is a deep rabbit hole that deserves an article all of its own (and many have been written), but suffice it to say that you'll need more than one drive, and in the best case, all of your drives store the same amount of data.
Openmediavault supports all standard RAID levels, so you're good to go there. Configure your RAID in **Storage -> RAID Management**. Configuration is absurdly simple: Click the Create button, choose the disks you want in your RAID array, the RAID level you want to use, and a name for the array. Openmediavault handles the rest for you. There's no messing around at the command line, trying to remember which flags to use with the `mdadm`
command. In my specific case, I have six 2-TB drives that I've set up as RAID 10.
With your RAID set up, you've *almost* got a place to store things. You just need to set up a file system. Just like your desktop computer, a hard drive doesn't do you any good until you format it. So the next place to go in openmediavault's control panel is **Storage -> File Systems**. Just like configuring your RAID, click the Create button and follow the prompts. In this case, you choose the device to format. If you have only the one RAID on your server, it should be something like `md0`
. You'll also need to choose the filesystem type. If you're not sure, just use the standard ext4 type.
### Define your shares
Sweet! You've got a place to store files. Now you just need to make it visible on your home network. Configure this from the **Services** section of the openmediavault control panel. When it comes to setting up a file share on a network, there are really two main choices: NFS or SMB/CIFS. As a rule of thumb, if all of the computers on your network are running Linux distributions, then you're probably better off using NFS. However, if your home network is a mixed environment with a combination of Linux, Windows, Mac OS, and embedded devices, then SMB/CIFS is probably the right choice.
These options aren't mutually exclusive. You could actually run both services on your server and get the best of both worlds. Or you could mix it up if you have specific devices dedicated to particular tasks. Whatever your usage scenario, configuring these services is dirt simple. Click on the service you want, enable it from its Settings, and define the shared folders you want visible on the network. In the case of SMB/CIFS shares, there are a few more settings available than with NFS, but most of the defaults are fine to start with. The cool thing is that since it's so easy to configure, it's also pretty easy to change on the fly.
### Configure users
You're almost done. You've configured your drives in a RAID. You've formatted that RAID with a file system. And you've defined shared folders on that formatted RAID. The only thing left is saying who can access those shares and how much. This is handled from the **Access Rights Management** section. Use the **User** and **Group** sections to define the users who connect to your shared folders and the permissions they have with the files in those folders.
Once you do that, you're pretty much good to go. You'll need to access your shares from your various client machines, but that's a topic for another article.
Have fun!
|
10,072 | 备份安装的包并在全新安装的 Ubuntu 上恢复它们 | https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-freshly-installed-ubuntu-system/ | 2018-10-02T12:24:28 | [
"软件包",
"安装"
] | https://linux.cn/article-10072-1.html | 
在多个 Ubuntu 系统上安装同一组软件包是一项耗时且无聊的任务。你不会想花时间在多个系统上反复安装相同的软件包。在类似架构的 Ubuntu 系统上安装软件包时,有许多方法可以使这项任务更容易。你可以方便地通过 [Aptik](https://www.ostechnix.com/how-to-migrate-system-settings-and-data-from-an-old-system-to-a-newly-installed-ubuntu-system/) 并点击几次鼠标将以前的 Ubuntu 系统的应用程序、设置和数据迁移到新安装的系统中。或者,你可以使用软件包管理器(例如 APT)获取[备份的已安装软件包的完整列表](https://www.ostechnix.com/create-list-installed-packages-install-later-list-centos-ubuntu/#comment-12598),然后在新安装的系统上安装它们。今天,我了解到还有另一个专用工具可以完成这项工作。来看一下 `apt-clone`,这是一个简单的工具,可以让你为 Debian/Ubuntu 系统创建一个已安装的软件包列表,这些软件包可以在新安装的系统或容器上或目录中恢复。
`apt-clone` 能在以下场景中帮到你:
* 在运行类似 Ubuntu(及衍生版)的多个系统上安装一致的应用程序。
* 经常在多个系统上安装相同的软件包。
* 备份已安装的应用程序的完整列表,并在需要时随时随地恢复它们。
在本简要指南中,我们将讨论如何在基于 Debian 的系统上安装和使用 `apt-clone`。我在 Ubuntu 18.04 LTS 上测试了这个程序,但它应该适用于所有基于 Debian 和 Ubuntu 的系统。
### 备份已安装的软件包并在新安装的 Ubuntu 上恢复它们
`apt-clone` 在默认仓库中有。要安装它,只需在终端输入以下命令:
```
$ sudo apt install apt-clone
```
安装后,只需创建已安装软件包的列表,并将其保存在你选择的任何位置。
```
$ mkdir ~/mypackages
$ sudo apt-clone clone ~/mypackages
```
上面的命令将我的 Ubuntu 中所有已安装的软件包保存在 `~/mypackages` 目录下名为 `apt-clone-state-ubuntuserver.tar.gz` 的文件中。
要查看备份文件的详细信息,请运行:
```
$ apt-clone info mypackages/apt-clone-state-ubuntuserver.tar.gz
Hostname: ubuntuserver
Arch: amd64
Distro: bionic
Meta:
Installed: 516 pkgs (33 automatic)
Date: Sat Sep 15 10:23:05 2018
```
如你所见,我的 Ubuntu 服务器总共有 516 个包。
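由于 apt-clone 的备份就是一个普通的 tar.gz 文件,你也可以直接用 `tar` 查看其内容。下面的演示先构造一个结构类似的示例压缩包再列出其中的文件(文件名与内部文件均为假设,实际内容以 apt-clone 生成的为准):

```shell
# 示意:apt-clone 备份是普通 tar.gz,可用 tar 直接查看
mkdir -p demo-clone
echo "package-list" > demo-clone/installed.pkgs
tar -czf apt-clone-demo.tar.gz demo-clone
tar -tzf apt-clone-demo.tar.gz    # 列出压缩包中的文件
```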
现在,将此文件复制到 USB 或外部驱动器上,并转至要安装同一套软件包的任何其他系统。或者,你也可以将备份文件传输到网络上的系统,并使用以下命令安装软件包:
```
$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz
```
请注意,此命令将覆盖你现有的 `/etc/apt/sources.list` 并将安装/删除软件包。警告过你了!此外,只需确保目标系统是相同的 CPU 架构和操作系统。例如,如果源系统是 18.04 LTS 64 位,那么目标系统必须也是相同的。
如果你不想在系统上恢复软件包,可以使用 `--destination /some/location` 选项将克隆复制到这个文件夹中。
```
$ sudo apt-clone restore apt-clone-state-ubuntuserver.tar.gz --destination ~/oldubuntu
```
在此例中,上面的命令将软件包恢复到 `~/oldubuntu` 中。
有关详细信息,请参阅帮助部分:
```
$ apt-clone -h
```
或者手册页:
```
$ man apt-clone
```
建议阅读:
* [Systemback - 将 Ubuntu 桌面版和服务器版恢复到以前的状态](https://www.ostechnix.com/systemback-restore-ubuntu-desktop-and-server-to-previous-state/)
* [Cronopete - Linux 下的苹果时间机器](https://www.ostechnix.com/cronopete-apples-time-machine-clone-linux/)
就是这些了。希望这个有用。还有更多好东西。敬请期待!
干杯!
---
via: <https://www.ostechnix.com/backup-installed-packages-and-restore-them-on-freshly-installed-ubuntu-system/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,073 | 如何在 Linux 中查看进程占用的端口号 | https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-using-in-linux/ | 2018-10-02T12:38:13 | [
"端口"
] | https://linux.cn/article-10073-1.html | 
对于 Linux 系统管理员来说,清楚某个服务是否正确地绑定或监听某个端口,是至关重要的。如果你需要处理端口相关的问题,这篇文章可能会对你有用。
端口是 Linux 系统上特定进程之间逻辑连接的标识,包括物理端口和软件端口。由于 Linux 操作系统是一个软件,因此本文只讨论软件端口。软件端口始终与主机的 IP 地址和相关的通信协议相关联,因此端口常用于区分应用程序。大部分涉及到网络的服务都必须打开一个套接字来监听传入的网络请求,而每个服务都使用一个独立的套接字。
**推荐阅读:**
* [在 Linux 上查看进程 ID 的 4 种方法](https://www.2daygeek.com/how-to-check-find-the-process-id-pid-ppid-of-a-running-program-in-linux/)
* [在 Linux 上终止进程的 3 种方法](https://www.2daygeek.com/kill-terminate-a-process-in-linux-using-kill-pkill-killall-command/)
套接字是和 IP 地址、软件端口和协议结合起来使用的,而端口号对传输控制协议(TCP)和用户数据报协议(UDP)协议都适用,TCP 和 UDP 都可以使用 0 到 65535 之间的端口号进行通信。
以下是端口分配类别:
* 0 - 1023: 知名端口和系统端口
* 1024 - 49151: 软件的注册端口
* 49152 - 65535: 动态端口或私有端口
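上面的三类划分可以用一个简单的 shell 函数来表达(以下代码仅为示意,函数名 `port_class` 是为演示而取的):

```shell
#!/bin/sh
# 按 IANA 的端口号区间,输出给定端口所属的类别
port_class() {
    if [ "$1" -le 1023 ]; then
        echo "知名端口/系统端口"
    elif [ "$1" -le 49151 ]; then
        echo "注册端口"
    else
        echo "动态/私有端口"
    fi
}

port_class 22      # 知名端口/系统端口
port_class 8080    # 注册端口
port_class 50000   # 动态/私有端口
```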
在 Linux 上的 `/etc/services` 文件可以查看到更多关于保留端口的信息。
```
# less /etc/services
# /etc/services:
# $Id: services,v 1.55 2013/04/14 ovasik Exp $
#
# Network services, Internet style
# IANA services version: last updated 2013-04-10
#
# Note that it is presently the policy of IANA to assign a single well-known
# port number for both TCP and UDP; hence, most entries here have two entries
# even if the protocol doesn't support UDP operations.
# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
# are included, only the more common ones.
#
# The latest IANA port assignments can be gotten from
# http://www.iana.org/assignments/port-numbers
# The Well Known Ports are those from 0 through 1023.
# The Registered Ports are those from 1024 through 49151
# The Dynamic and/or Private Ports are those from 49152 through 65535
#
# Each line describes one service, and is of the form:
#
# service-name port/protocol [aliases ...] [# comment]
tcpmux 1/tcp # TCP port service multiplexer
tcpmux 1/udp # TCP port service multiplexer
rje 5/tcp # Remote Job Entry
rje 5/udp # Remote Job Entry
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
systat 11/tcp users
systat 11/udp users
daytime 13/tcp
daytime 13/udp
qotd 17/tcp quote
qotd 17/udp quote
msp 18/tcp # message send protocol (historic)
msp 18/udp # message send protocol (historic)
chargen 19/tcp ttytst source
chargen 19/udp ttytst source
ftp-data 20/tcp
ftp-data 20/udp
# 21 is registered to ftp, but also used by fsp
ftp 21/tcp
ftp 21/udp fsp fspd
ssh 22/tcp # The Secure Shell (SSH) Protocol
ssh 22/udp # The Secure Shell (SSH) Protocol
telnet 23/tcp
telnet 23/udp
# 24 - private mail system
lmtp 24/tcp # LMTP Mail Delivery
lmtp 24/udp # LMTP Mail Delivery
```
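顺带一提,如果想用脚本从这种“服务名 端口/协议”格式的数据中查出某个服务对应的端口,可以借助 `awk`。下面用内置的几行示例数据代替真实的 `/etc/services` 文件来演示:

```shell
#!/bin/sh
# 在“服务名 端口/协议”格式的文本中查出 ssh 对应的端口
# (这里用 printf 构造示例数据;对真实文件可改为 awk '...' /etc/services)
printf 'ssh 22/tcp\nhttp 80/tcp\nhttps 443/tcp\n' |
    awk '$1 == "ssh" { print $2 }'
```

输出为 `22/tcp`。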
可以使用以下六种方法查看端口信息。
* `ss`:可以用于转储套接字统计信息。
* `netstat`:可以显示打开的套接字列表。
* `lsof`:可以列出打开的文件。
* `fuser`:可以列出那些打开了文件的进程的进程 ID。
* `nmap`:是网络检测工具和端口扫描程序。
* `systemctl`:是 systemd 系统的控制管理器和服务管理器。
以下我们将找出 `sshd` 守护进程所使用的端口号。
### 方法 1:使用 ss 命令
`ss` 一般用于转储套接字统计信息。它能够输出类似于 `netstat` 输出的信息,但它可以比其它工具显示更多的 TCP 信息和状态信息。
它还可以显示所有类型的套接字统计信息,包括 PACKET、TCP、UDP、DCCP、RAW、Unix 域等。
```
# ss -tnlp | grep ssh
LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4))
```
也可以使用端口号来检查。
```
# ss -tnlp | grep ":22"
LISTEN 0 128 *:22 *:* users:(("sshd",pid=997,fd=3))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=997,fd=4))
```
### 方法 2:使用 netstat 命令
`netstat` 能够显示网络连接、路由表、接口统计信息、伪装连接以及多播成员。
默认情况下,`netstat` 会列出打开的套接字。如果不指定任何地址族,则会显示所有已配置地址族的活动套接字。但 `netstat` 已经过时了,一般会使用 `ss` 来替代。
```
# netstat -tnlp | grep ssh
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 997/sshd
tcp6 0 0 :::22 :::* LISTEN 997/sshd
```
也可以使用端口号来检查。
```
# netstat -tnlp | grep ":22"
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1208/sshd
tcp6 0 0 :::22 :::* LISTEN 1208/sshd
```
### 方法 3:使用 lsof 命令
`lsof` 能够列出打开的文件,并列出系统上被进程打开的文件的相关信息。
```
# lsof -i -P | grep ssh
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 11584 root 3u IPv4 27625 0t0 TCP *:22 (LISTEN)
sshd 11584 root 4u IPv6 27627 0t0 TCP *:22 (LISTEN)
sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED)
```
也可以使用端口号来检查。
```
# lsof -i tcp:22
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1208 root 3u IPv4 20919 0t0 TCP *:ssh (LISTEN)
sshd 1208 root 4u IPv6 20921 0t0 TCP *:ssh (LISTEN)
sshd 11592 root 3u IPv4 27744 0t0 TCP vps.2daygeek.com:ssh->103.5.134.167:49902 (ESTABLISHED)
```
### 方法 4:使用 fuser 命令
`fuser` 工具会将本地系统上打开了文件的进程的进程 ID 显示在标准输出中。
```
# fuser -v 22/tcp
USER PID ACCESS COMMAND
22/tcp: root 1208 F.... sshd
root 12388 F.... sshd
root 49339 F.... sshd
```
### 方法 5:使用 nmap 命令
`nmap`(“Network Mapper”)是一款用于网络检测和安全审计的开源工具。它最初用于对大型网络进行快速扫描,但它对于单个主机的扫描也有很好的表现。
`nmap` 使用原始 IP 数据包来确定网络上可用的主机,这些主机的服务(包括应用程序名称和版本)、主机运行的操作系统(包括操作系统版本等信息)、正在使用的数据包过滤器或防火墙的类型,以及很多其它信息。
```
# nmap -sV -p 22 localhost
Starting Nmap 6.40 ( http://nmap.org ) at 2018-09-23 12:36 IST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000089s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.4 (protocol 2.0)
Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 0.44 seconds
```
### 方法 6:使用 systemctl 命令
`systemctl` 是 systemd 系统的控制管理器和服务管理器。它取代了旧的 SysV 初始化系统管理,目前大多数现代 Linux 操作系统都采用了 systemd。
**推荐阅读:**
* [chkservice – Linux 终端上的 systemd 单元管理工具](https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/)
* [如何查看 Linux 系统上正在运行的服务](https://www.2daygeek.com/how-to-check-all-running-services-in-linux/)
```
# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2018-09-23 02:08:56 EDT; 6h 11min ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 11584 (sshd)
CGroup: /system.slice/sshd.service
└─11584 /usr/sbin/sshd -D
Sep 23 02:08:56 vps.2daygeek.com systemd[1]: Starting OpenSSH server daemon...
Sep 23 02:08:56 vps.2daygeek.com sshd[11584]: Server listening on 0.0.0.0 port 22.
Sep 23 02:08:56 vps.2daygeek.com sshd[11584]: Server listening on :: port 22.
Sep 23 02:08:56 vps.2daygeek.com systemd[1]: Started OpenSSH server daemon.
Sep 23 02:09:15 vps.2daygeek.com sshd[11589]: Connection closed by 103.5.134.167 port 49899 [preauth]
Sep 23 02:09:41 vps.2daygeek.com sshd[11592]: Accepted password for root from 103.5.134.167 port 49902 ssh2
```
以上输出的内容显示了最近一次启动 `sshd` 服务时 `ssh` 服务的监听端口。但它不会将最新日志更新到输出中。
```
# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-09-06 07:40:59 IST; 2 weeks 3 days ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 1208 (sshd)
CGroup: /system.slice/sshd.service
├─ 1208 /usr/sbin/sshd -D
├─23951 sshd: [accepted]
└─23952 sshd: [net]
Sep 23 12:50:36 vps.2daygeek.com sshd[23909]: Invalid user pi from 95.210.113.142 port 51666
Sep 23 12:50:36 vps.2daygeek.com sshd[23909]: input_userauth_request: invalid user pi [preauth]
Sep 23 12:50:37 vps.2daygeek.com sshd[23911]: pam_unix(sshd:auth): check pass; user unknown
Sep 23 12:50:37 vps.2daygeek.com sshd[23911]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=95.210.113.142
Sep 23 12:50:37 vps.2daygeek.com sshd[23909]: pam_unix(sshd:auth): check pass; user unknown
Sep 23 12:50:37 vps.2daygeek.com sshd[23909]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=95.210.113.142
Sep 23 12:50:39 vps.2daygeek.com sshd[23911]: Failed password for invalid user pi from 95.210.113.142 port 51670 ssh2
Sep 23 12:50:39 vps.2daygeek.com sshd[23909]: Failed password for invalid user pi from 95.210.113.142 port 51666 ssh2
Sep 23 12:50:40 vps.2daygeek.com sshd[23911]: Connection closed by 95.210.113.142 port 51670 [preauth]
Sep 23 12:50:40 vps.2daygeek.com sshd[23909]: Connection closed by 95.210.113.142 port 51666 [preauth]
```
大部分情况下,以上的输出不会显示进程的实际端口号。这时更建议使用以下这个 `journalctl` 命令检查日志文件中的详细信息。
```
# journalctl | grep -i "openssh\|sshd"
Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[997]: Received signal 15; terminating.
Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Stopping OpenSSH server daemon...
Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Starting OpenSSH server daemon...
Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[11584]: Server listening on 0.0.0.0 port 22.
Sep 23 02:08:56 vps138235.vps.ovh.ca sshd[11584]: Server listening on :: port 22.
Sep 23 02:08:56 vps138235.vps.ovh.ca systemd[1]: Started OpenSSH server daemon.
```
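如果只想从这类日志行中提取出端口号本身,可以把 `journalctl` 的输出再交给 `sed` 处理。下面用一行示例日志来演示(日志内容仅为示意):

```shell
#!/bin/sh
# 从 sshd 的 “Server listening ... port N.” 日志行中提取端口号
log='Sep 23 02:08:56 host sshd[11584]: Server listening on 0.0.0.0 port 22.'
echo "$log" | sed -n 's/.* port \([0-9]*\)\..*/\1/p'   # 输出:22
```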
---
via: <https://www.2daygeek.com/how-to-find-out-which-port-number-a-process-is-using-in-linux/>
作者:[Prakash Subramanian](https://www.2daygeek.com/author/prakash/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
10,074 | 如何在 vi 中创建快捷键 | https://opensource.com/article/18/5/shortcuts-vi-text-editor | 2018-10-02T23:14:00 | [
"快捷键",
"Vi",
"Vim"
] | /article-10074-1.html |
>
> 那些常见编辑任务的快捷键可以使 Vi 编辑器更容易使用,更有效率。
>
>
>

学习使用 [vi 文本编辑器](http://ex-vi.sourceforge.net/) 确实得花点功夫,不过 vi 的老手们都知道,经过一小会儿的锻炼,就可以将基本的 vi 操作融汇贯通。我们都知道“肌肉记忆”,那么学习 vi 的过程可以称之为“手指记忆”。
当你抓住了基础的操作窍门之后,你就可以定制化地配置 vi 的快捷键,从而让其处理的功能更为强大、流畅。我希望下面描述的技术可以加速您的协作、编程和数据操作。
在开始之前,我想先感谢下 Chris Hermansen(是他雇佣我写了这篇文章)仔细地检查了我的另一篇关于使用 vi 增强版本 [Vim](https://www.vim.org/) 的文章。当然还有他那些我未采纳的建议。
首先,我们来说明下面几个惯例设定。我会使用符号 `<RET>` 来代表按下回车,`<SP>` 代表按下空格键,`CTRL-x` 表示一起按下 `Control` 键和 `x` 键(`x` 可以是需要的某个键)。
使用 `map` 命令来进行按键的映射。第一个例子是 `write` 命令,通常你之前保存使用这样的命令:
```
:w<RET>
```
虽然这里只有三个键,不过考虑到我用这个命令实在是太频繁了,我更想“一键”搞定它。在这里我选择逗号键,它不是标准的 vi 命令集的一部分。这样设置:
```
:map , :wCTRL-v<RET>
```
这里的 `CTRL-v` 事实上是对 `<RET>` 做了转义的操作,如果不加这个的话,默认 `<RET>` 会作为这条映射指令的结束信号,而非映射中的一个操作。 `CTRL-v` 后面所跟的操作会翻译为用户的实际操作,而非该按键平常的操作。
在上面的映射中,右边的部分会在屏幕中显示为 `:w^M`,其中 `^` 字符就是指代 `control`,完整的意思就是 `CTRL-m`,表示就是系统中一行的结尾。
目前来说,就很不错了。如果我编辑、保存了十二次文件,这个键位映射就可以省掉 2\*12 次按键,减去建立这个映射本身所花费的 11 次按键(`CTRL-v` 和 `:` 均计为一次按键),依然省了不少。不过,每次打开 vi 都要重新建立一遍这个映射,也实在太麻烦了。
幸运的是,这里可以将这些键位映射放到 vi 的启动配置文件中,让其在每次启动的时候自动读取:文件为 `.exrc`,对于 vim 是 `.vimrc`。只需要将这些文件放在你的用户根目录中即可,并在文件中每行写入一个键位映射,之后就会在每次启动 vi 生效直到你删除对应的配置。
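作为示意,下面的 shell 片段生成一个只包含缩写定义的最小 `.exrc` 文件并显示其内容(为了不覆盖你真实的配置,这里写入临时文件;含控制字符的 `map` 映射仍需按上文方法在编辑器内输入):

```shell
#!/bin/sh
# 生成一个最小的 .exrc 示例文件(只含无需控制字符的 ab 缩写)
f=$(mktemp)
cat > "$f" <<'EOF'
ab rh rhinoceros
ab hi hippopotamus
EOF
cat "$f"
rm -f "$f"
```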
在继续说明 `map` 其他用法以及其他的缩写机制之前,这里在列举几个我常用提高文本处理效率的 map 设置:
| 映射 | 显示为 |
| --- | --- |
| `:map X :xCTRL-v<RET>` | `:x^M` |
| `:map X ,:qCTRL-v<RET>` | `,:q^M` |
上面的 `map` 指令的意思是写入并关闭当前的编辑文件。其中 `:x` 是 vi 原本的命令;而第二个版本则说明,之前定义的 `,` 映射可以在新的键位映射中继续使用。
| 映射 | 显示为 |
| --- | --- |
| `:map v :e<SP>` | `:e` |
上面的指令意思是在 vi 编辑器内部切换文件,使用这个时候,只需要按 `v` 并跟着输入文件名,之后按 `<RET>` 键。
| 映射 | 显示为 |
| --- | --- |
| `:map CTRL-vCTRL-e :e<SP>#CTRL-v<RET>` | `:e #^M` |
`#` 在这里是 vi 中标准的符号,意思是最后使用的文件名。所以切换当前与上一个文件的方法就使用上面的映射。
| 映射 | 显示为 |
| --- | --- |
| `map CTRL-vCTRL-r :!spell %>err &CTRL-v<RET>` | `:!spell %>err&^M` |
(注意:在两个例子中出现的第一个 `CRTL-v` 在某些 vi 的版本中是不需要的)其中,`:!` 用来运行一个外部的(非 vi 内部的)命令。在这个拼写检查的例子中,`%` 是 vi 中的符号用来指代目前的文件, `>` 用来重定向拼写检查中的输出到 `err` 文件中,之后跟上 `&` 说明该命令是一个后台运行的任务,这样可以保证在拼写检查的同时还可以进行编辑文件的工作。这里我可以键入 `verr<RET>`(使用我之前定义的快捷键 `v` 跟上 `err`),进入 `spell` 输出结果的文件,之后再输入 `CTRL-e` 来回到刚才编辑的文件中。这样我就可以在拼写检查之后,使用 `CTRL-r` 来查看检查的错误,再通过 `CTRL-e` 返回刚才编辑的文件。
还有很多字符串输入的缩写,也是用各种 `map!` 命令实现的,比如:
```
:map! CTRL-o \fI
:map! CTRL-k \fP
```
这个映射允许你使用 `CTRL-o` 作为 `groff` 命令的缩写,从而让接下来书写的单词有斜体的效果,并使用 `CTRL-k` 进行恢复。
还有两个类似的映射:
```
:map! rh rhinoceros
:map! hi hippopotamus
```
上面的也可以使用 `ab` 命令来替换,就像下面这样(如果想这么用的话,需要首先按顺序运行: 1、 `unmap! rh`,2、`unmap! hi`):
```
:ab rh rhinoceros
:ab hi hippopotamus
```
在上面的 `map!` 命令中,缩写会立刻展开成对应的单词;而在 `ab` 命令中,单词会在输入了空格或标点之后才展开(不过在 Vim 和我的 vi 中,`ab` 的展开时机与 `map!` 类似)。
想要取消刚才设定的按键映射,可以对应的输入 `:unmap`、 `unmap!` 或 `:unab`。
在我使用的 vi 版本中,比较好用的候选映射按键包括 `g`、`K`、`q`、 `v`、 `V`、 `Z`,控制字符包括:`CTRL-a`、`CTRL-c`、 `CTRL-k`、`CTRL-n`、`CTRL-p`、`CTRL-x`;还有一些其他的字符如 `#`、 `*`,当然你也可以使用那些已经在 vi 中有过定义但不经常使用的字符,比如本文选择 `X` 和 `I`,其中 `X` 表示删除左边的字符,并立刻左移当前字符。
最后,下面的命令
```
:map<RET>
:map!<RET>
:ab
```
将会显示,目前所有的缩写和键位映射。
希望上面的技巧能够更好地更高效地帮助你使用 vi。
---
via: <https://opensource.com/article/18/5/shortcuts-vi-text-editor>
作者:[Dan Sonnenschein](https://opensource.com/users/dannyman)
选题:[lujun9972](https://github.com/lujun9972)
译者:[sd886393](https://github.com/sd886393)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
10,075 | Linux 防火墙:关于 iptables 和 firewalld 的那些事 | https://opensource.com/article/18/9/linux-iptables-firewalld | 2018-10-03T17:18:01 | [
"防火墙",
"iptables",
"firewalld"
] | https://linux.cn/article-10075-1.html |
>
> 以下是如何使用 iptables 和 firewalld 工具来管理 Linux 防火墙规则。
>
>
>

这篇文章摘自我的书《[Linux in Action](https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource)》,以及曼宁出版社尚未发布的我的第二个项目。
### 防火墙
防火墙是一组规则。当数据包进出受保护的网络区域时,进出内容(特别是关于其来源、目标和使用的协议等信息)会根据防火墙规则进行检测,以确定是否允许其通过。下面是一个简单的例子:

*防火墙可以根据协议或基于目标的规则过滤请求。*
一方面, [iptables](https://en.wikipedia.org/wiki/Iptables) 是 Linux 机器上管理防火墙规则的工具。
另一方面,[firewalld](https://firewalld.org/) 也是 Linux 机器上管理防火墙规则的工具。
你有什么问题吗?如果我告诉你还有另外一种工具,叫做 [nftables](https://wiki.nftables.org/wiki-nftables/index.php/Main_Page),这会不会糟蹋你的美好一天呢?
好吧,我承认整件事确实有点好笑,所以让我来解释一下。这一切都从 Netfilter 开始,它在 Linux 内核模块级别控制访问网络栈。几十年来,管理 Netfilter 钩子的主要命令行工具是 iptables 规则集。
因为调用这些规则所需的语法看起来有点晦涩难懂,所以各种用户友好的实现方式,如 [ufw](https://en.wikipedia.org/wiki/Uncomplicated_Firewall) 和 firewalld 被引入,作为更高级别的 Netfilter 解释器。然而,ufw 和 firewalld 主要是为解决单独的计算机所面临的各种问题而设计的。构建全方位的网络解决方案通常需要 iptables,或者从 2014 年起,它的替代品 nftables(通过 nft 命令行工具)。
iptables 没有消失,仍然被广泛使用着。事实上,在未来的许多年里,作为一名管理员,你应该还会不断遇到用 iptables 保护的网络。但 nftables 通过在经典的 Netfilter 工具集之上进行扩展,带来了一些重要的崭新功能。
从现在开始,我将通过示例展示 firewalld 和 iptables 如何解决简单的连接问题。
### 使用 firewalld 配置 HTTP 访问
正如你能从它的名字中猜到的,firewalld 是 [systemd](https://en.wikipedia.org/wiki/Systemd) 家族的一部分。firewalld 可以安装在 Debian/Ubuntu 机器上,不过,它默认安装在 RedHat 和 CentOS 上。如果您的计算机上运行着像 Apache 这样的 web 服务器,您可以通过浏览服务器的 web 根目录来确认防火墙是否正在工作。如果网站不可访问,那么 firewalld 正在工作。
你可以使用 `firewall-cmd` 工具从命令行管理 firewalld 设置。添加 `--state` 参数将返回当前防火墙的状态:
```
# firewall-cmd --state
running
```
默认情况下,firewalld 处于运行状态,并拒绝所有传入流量,但有几个例外,如 SSH。这意味着你的网站不会有太多的访问者,这无疑会为你节省大量的数据传输成本。然而,这不是你对 web 服务器的要求,你希望打开 HTTP 和 HTTPS 端口,按照惯例,这两个端口分别被指定为 80 和 443。firewalld 提供了两种方法来实现这个功能。其中一种是通过 `--add-port` 参数,该参数直接引用端口号及其使用的网络协议(在本例中为 TCP)。而 `--permanent` 参数则告诉 firewalld 在每次服务器启动时都加载此规则:
```
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --permanent --add-port=443/tcp
```
`--reload` 参数将这些规则应用于当前会话:
```
# firewall-cmd --reload
```
要查看当前防火墙上的设置,可以运行 `--list-services`:
```
# firewall-cmd --list-services
dhcpv6-client http https ssh
```
假设您已经如前所述添加了浏览器访问,那么 HTTP、HTTPS 和 SSH 端口现在都应该是和 `dhcpv6-client` 一样开放的 —— 它允许 Linux 从本地 DHCP 服务器请求 IPv6 IP 地址。
### 使用 iptables 配置锁定的客户信息亭
我相信你已经看到了信息亭——它们是放在机场、图书馆和商务场所的盒子里的平板电脑、触摸屏和 ATM 类电脑,邀请顾客和路人浏览内容。大多数信息亭的问题是,你通常不希望用户像在自己家一样,把他们当成自己的设备。它们通常不是用来浏览、观看 YouTube 视频或对五角大楼发起拒绝服务攻击的。因此,为了确保它们没有被滥用,你需要锁定它们。
一种方法是应用某种信息亭模式,无论是通过巧妙使用 Linux 显示管理器还是控制在浏览器级别。但是为了确保你已经堵塞了所有的漏洞,你可能还想通过防火墙添加一些硬性的网络控制。在下一节中,我将讲解如何使用iptables 来完成。
关于使用 iptables,有两件重要的事情需要记住:你给出的规则的顺序非常关键;iptables 规则本身在重新启动后将无法保留。下面我会逐一说明这两点。
### 信息亭项目
为了说明这一切,让我们想象一下,我们为一家名为 BigMart 的大型连锁商店工作。它们已经存在了几十年;事实上,我们想象中的祖父母可能是在那里购物并长大的。但是如今,BigMart 公司总部的人可能只是在数着亚马逊将他们永远赶下去的时间。
尽管如此,BigMart 的 IT 部门正在尽他们最大的努力提供解决方案,他们向你发放了一些具有 WiFi 功能的信息亭设备,让你把它们安放在整个商店的战略位置上。其想法是,让这些设备显示已登录到 BigMart.com 产品页面的 Web 浏览器,供顾客查找商品特性、所在过道和库存水平。信息亭还需要访问 bigmart-data.com,那里储存着许多图像和视频媒体信息。
除此之外,您还需要允许下载软件包更新。最后,您还希望只允许从本地工作站访问 SSH,并阻止其他人登录。下图说明了它将如何工作:

*信息亭业务流由 iptables 控制。*
### 脚本
以下是 Bash 脚本内容:
```
#!/bin/bash
iptables -A OUTPUT -p tcp -d bigmart.com -j ACCEPT
iptables -A OUTPUT -p tcp -d bigmart-data.com -j ACCEPT
iptables -A OUTPUT -p tcp -d ubuntu.com -j ACCEPT
iptables -A OUTPUT -p tcp -d ca.archive.ubuntu.com -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j DROP
iptables -A OUTPUT -p tcp --dport 443 -j DROP
iptables -A INPUT -p tcp -s 10.0.3.1 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP
```
我们从基本规则 `-A` 开始分析,它告诉 iptables 我们要添加一条规则。`OUTPUT` 意味着这条规则应该成为输出链的一部分。`-p` 表示该规则仅适用于使用 TCP 协议的数据包,而正如 `-d` 告诉我们的,其目的地址是 [bigmart.com](http://bigmart.com/)。`-j` 参数指定当数据包匹配该规则时要采取的动作,在这里是 `ACCEPT`。第一条规则表示允许(或接受)请求;而在往下的规则中,你能看到会被丢弃(或拒绝)的请求。
规则顺序是很重要的。iptables 会对一个请求自上而下地遍历每条规则,直到遇到匹配的规则为止。因此,向外发出的访问 bigmart.com 的浏览器请求会匹配第一条规则而被放行;而访问 youtube.com 之类的请求不匹配前四条规则,当它到达 `dport 80` 或 `dport 443` 规则时——取决于是 HTTP 还是 HTTPS 请求——就会被丢弃。一旦遇到匹配,iptables 就不再继续往下检查了。
另一方面,向 ubuntu.com 发出软件升级的系统请求,只要符合其适当的规则,就会通过。显然,我们在这里做的是,只允许向我们的 BigMart 或 Ubuntu 发送 HTTP 或 HTTPS 请求,而不允许向其他目的地发送。
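这种“自上而下、命中即停”的匹配逻辑,可以用一个纯 shell 的 `case` 语句来类比(下面的函数与真实的 iptables 毫无关系,仅用于演示规则顺序的影响;域名和规则编号与上文脚本对应):

```shell
#!/bin/sh
# 用 case 的“首个匹配分支生效”来类比 iptables 自上而下的规则遍历
check_packet() {
    case "$1:$2" in
        bigmart.com:*) echo "ACCEPT (规则 1)" ;;
        ubuntu.com:*)  echo "ACCEPT (规则 3)" ;;
        *:80|*:443)    echo "DROP (兜底的 web 丢弃规则)" ;;
        *)             echo "未匹配" ;;
    esac
}

check_packet bigmart.com 443   # 命中第一条即放行,不再继续检查
check_packet youtube.com 443   # 前面的规则都不匹配,最终被丢弃
```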
最后两条规则将处理 SSH 请求。因为它不使用端口 80 或 443 端口,而是使用 22 端口,所以之前的两个丢弃规则不会拒绝它。在这种情况下,来自我的工作站的登录请求将被接受,但是对其他任何地方的请求将被拒绝。这一点很重要:确保用于端口 22 规则的 IP 地址与您用来登录的机器的地址相匹配——如果不这样做,将立即被锁定。当然,这没什么大不了的,因为按照目前的配置方式,只需重启服务器,iptables 规则就会全部丢失。如果使用 LXC 容器作为服务器并从 LXC 主机登录,则使用主机 IP 地址连接容器,而不是其公共地址。
如果机器的 IP 发生变化,请记住更新这个规则;否则,你会被拒绝访问。
在家跟着练习(最好是在某种一次性的虚拟机上)?太好了。创建你自己的脚本。现在我可以保存脚本,使用 `chmod` 使其可执行,并以 `sudo` 的形式运行它。不要担心“bigmart-data.com 没找到”之类的错误 —— 当然没找到;它并不存在。
```
chmod +x scriptname.sh
sudo ./scriptname.sh
```
你可以使用 `cURL` 命令行测试防火墙。请求 ubuntu.com 奏效,但请求 [manning.com](http://manning.com/) 是失败的 。
```
curl ubuntu.com
curl manning.com
```
### 配置 iptables 以在系统启动时加载
现在,我如何让这些规则在每次信息亭启动时自动加载?第一步是将当前规则保存。使用 `iptables-save` 工具保存规则文件。这将在根目录中创建一个包含规则列表的文件。管道后面跟着 `tee` 命令,是将我的`sudo` 权限应用于字符串的第二部分:将文件实际保存到否则受限的根目录。
然后我可以告诉系统每次启动时运行一个相关的工具,叫做 `iptables-restore` 。我们在上一章节(LCTT 译注:指作者的书)中看到的常规 cron 任务并不适用,因为它们在设定的时间运行,但是我们不知道什么时候我们的计算机可能会决定崩溃和重启。
有许多方法来处理这个问题。这里有一个:
在我的 Linux 机器上,我将安装一个名为 [anacron](https://sourceforge.net/projects/anacron/) 的程序,该程序将在 `/etc/` 目录中为我们提供一个名为 `anacrontab` 的文件。我将编辑该文件并添加这个 `iptables-restore` 命令,告诉它每天(必要时)在引导完成 1 分钟后,把那个 .rules 文件的当前内容加载到 iptables 中。我会给该任务一个标识符(`iptables-restore`),然后添加命令本身;该行的四个字段依次是:周期(天)、引导后的延迟(分钟)、任务标识符和要执行的命令。(LCTT 译注:anacron 会补充执行由于机器没有运行而错过的周期性任务,因此即便到点时机器没有开机,也会在机器启动后尽快执行该任务。)如果你在家和我一起练习,你应该通过重启系统来测试一下。
```
sudo iptables-save | sudo tee /root/my.active.firewall.rules
sudo apt install anacron
sudo nano /etc/anacrontab
1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules
```
我希望这些实际例子已经说明了如何使用 iptables 和 firewalld 来管理基于 Linux 的防火墙上的连接问题。
---
via: <https://opensource.com/article/18/9/linux-iptables-firewalld>
作者:[David Clinton](https://opensource.com/users/remyd) 选题:[lujun9972](https://github.com/lujun9972) 译者:[heguangzhi](https://github.com/heguangzhi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | This article is excerpted from my book, [Linux in Action](https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource), and a second Manning project that’s yet to be released.
## The firewall
A firewall is a set of rules. When a data packet moves into or out of a protected network space, its contents (in particular, information about its origin, target, and the protocol it plans to use) are tested against the firewall rules to see if it should be allowed through. Here’s a simple example:

A firewall can filter requests based on protocol or target-based rules.
On the one hand, [iptables](https://en.wikipedia.org/wiki/Iptables) is a tool for managing firewall rules on a Linux machine.
On the other hand, [firewalld](https://firewalld.org/) is also a tool for managing firewall rules on a Linux machine.
You got a problem with that? And would it spoil your day if I told you that there was another tool out there, called [nftables](https://wiki.nftables.org/wiki-nftables/index.php/Main_Page)?
OK, I’ll admit that the whole thing does smell a bit funny, so let me explain. It all starts with Netfilter, which controls access to and from the network stack at the Linux kernel module level. For decades, the primary command-line tool for managing Netfilter hooks was the iptables ruleset.
Because the syntax needed to invoke those rules could come across as a bit arcane, various user-friendly implementations like [ufw](https://en.wikipedia.org/wiki/Uncomplicated_Firewall) and firewalld were introduced as higher-level Netfilter interpreters. Ufw and firewalld are, however, primarily designed to solve the kinds of problems faced by stand-alone computers. Building full-sized network solutions will often require the extra muscle of iptables or, since 2014, its replacement, nftables (through the nft command line tool).
iptables hasn’t gone anywhere and is still widely used. In fact, you should expect to run into iptables-protected networks in your work as an admin for many years to come. But nftables, by adding on to the classic Netfilter toolset, has brought some important new functionality.
From here on, I’ll show by example how firewalld and iptables solve simple connectivity problems.
## Configure HTTP access using firewalld
As you might have guessed from its name, firewalld is part of the [systemd](https://en.wikipedia.org/wiki/Systemd) family. Firewalld can be installed on Debian/Ubuntu machines, but it’s there by default on Red Hat and CentOS. If you’ve got a web server like Apache running on your machine, you can confirm that the firewall is working by browsing to your server’s web root. If the site is unreachable, then firewalld is doing its job.
You’ll use the `firewall-cmd`
tool to manage firewalld settings from the command line. Adding the `–state`
argument returns the current firewall status:
```
# firewall-cmd --state
running
```
By default, firewalld will be active and will reject all incoming traffic with a couple of exceptions, like SSH. That means your website won’t be getting too many visitors, which will certainly save you a lot of data transfer costs. As that’s probably not what you had in mind for your web server, though, you’ll want to open the HTTP and HTTPS ports that by convention are designated as 80 and 443, respectively. firewalld offers two ways to do that. One is through the `–add-port`
argument that references the port number directly along with the network protocol it’ll use (TCP in this case). The `–permanent`
argument tells firewalld to load this rule each time the server boots:
```
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --permanent --add-port=443/tcp
```
The `–reload`
argument will apply those rules to the current session:
`# firewall-cmd --reload`
Curious as to the current settings on your firewall? Run `–list-services`
:
```
# firewall-cmd --list-services
dhcpv6-client http https ssh
```
Assuming you’ve added browser access as described earlier, the HTTP, HTTPS, and SSH ports should now all be open—along with `dhcpv6-client`
, which allows Linux to request an IPv6 IP address from a local DHCP server.
## Configure a locked-down customer kiosk using iptables
I’m sure you’ve seen kiosks—they’re the tablets, touchscreens, and ATM-like PCs in a box that airports, libraries, and business leave lying around, inviting customers and passersby to browse content. The thing about most kiosks is that you don’t usually want users to make themselves at home and treat them like their own devices. They’re not generally meant for browsing, viewing YouTube videos, or launching denial-of-service attacks against the Pentagon. So to make sure they’re not misused, you need to lock them down.
One way is to apply some kind of kiosk mode, whether it’s through clever use of a Linux display manager or at the browser level. But to make sure you’ve got all the holes plugged, you’ll probably also want to add some hard network controls through a firewall. In the following section, I'll describe how I would do it using iptables.
There are two important things to remember about using iptables: The order you give your rules is critical, and by themselves, iptables rules won’t survive a reboot. I’ll address those here one at a time.
## The kiosk project
To illustrate all this, let’s imagine we work for a store that’s part of a larger chain called BigMart. They’ve been around for decades; in fact, our imaginary grandparents probably grew up shopping there. But these days, the guys at BigMart corporate headquarters are probably just counting the hours before Amazon drives them under for good.
Nevertheless, BigMart’s IT department is doing its best, and they’ve just sent you some WiFi-ready kiosk devices that you’re expected to install at strategic locations throughout your store. The idea is that they’ll display a web browser logged into the BigMart.com products pages, allowing them to look up merchandise features, aisle location, and stock levels. The kiosks will also need access to bigmart-data.com, where many of the images and video media are stored.
Besides those, you’ll want to permit updates and, whenever necessary, package downloads. Finally, you’ll want to permit inbound SSH access only from your local workstation, and block everyone else. The figure below illustrates how it will all work:

The kiosk traffic flow being controlled by iptables.
## The script
Here’s how that will all fit into a Bash script:
```
#!/bin/bash
iptables -A OUTPUT -p tcp -d bigmart.com -j ACCEPT
iptables -A OUTPUT -p tcp -d bigmart-data.com -j ACCEPT
iptables -A OUTPUT -p tcp -d ubuntu.com -j ACCEPT
iptables -A OUTPUT -p tcp -d ca.archive.ubuntu.com -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -j DROP
iptables -A OUTPUT -p tcp --dport 443 -j DROP
iptables -A INPUT -p tcp -s 10.0.3.1 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s 0.0.0.0/0 --dport 22 -j DROP
```
The basic anatomy of our rules starts with `-A`
, telling iptables that we want to add the following rule. `OUTPUT`
means that this rule should become part of the OUTPUT chain. `-p`
indicates that this rule will apply only to packets using the TCP protocol, where, as `-d`
tells us, the destination is [bigmart.com](http://bigmart.com/). The `-j`
flag points to `ACCEPT`
as the action to take when a packet matches the rule. In this first rule, that action is to permit, or accept, the request. But further down, you can see requests that will be dropped, or denied.
Remember that order matters. And that’s because iptables will run a request past each of its rules, but only until it gets a match. So an outgoing browser request for, say, [youtube.com](http://youtube.com/) will pass the first four rules, but when it gets to either the `–dport 80`
or `–dport 443`
rule—depending on whether it’s an HTTP or HTTPS request—it’ll be dropped. iptables won’t bother checking any further because that was a match.
On the other hand, a system request to ubuntu.com for a software upgrade will get through when it hits its appropriate rule. What we’re doing here, obviously, is permitting outgoing HTTP or HTTPS requests to only our BigMart or Ubuntu destinations and no others.
The final two rules will deal with incoming SSH requests. They won’t already have been denied by the two previous drop rules since they don’t use ports 80 or 443, but 22. In this case, login requests from my workstation will be accepted but requests for anywhere else will be dropped. This is important: Make sure the IP address you use for your port 22 rule matches the address of the machine you’re using to log in—if you don’t do that, you’ll be instantly locked out. It's no big deal, of course, because the way it’s currently configured, you could simply reboot the server and the iptables rules will all be dropped. If you’re using an LXC container as your server and logging on from your LXC host, then use the IP address your host uses to connect to the container, not its public address.
You’ll need to remember to update this rule if my machine’s IP ever changes; otherwise, you’ll be locked out.
Playing along at home (hopefully on a throwaway VM of some sort)? Great. Create your own script. Now I can save the script, use `chmod`
to make it executable, and run it as `sudo`
. Don’t worry about that `bigmart-data.com not found`
error—of course it’s not found; it doesn’t exist.
```
chmod +x scriptname.sh
sudo ./scriptname.sh
```
You can test your firewall from the command line using `cURL`
. Requesting ubuntu.com works, but [manning.com](http://manning.com/) fails.
```
curl ubuntu.com
curl manning.com
```
## Configuring iptables to load on system boot
Now, how do I get these rules to automatically load each time the kiosk boots? The first step is to save the current rules to a .rules file using the `iptables-save`
tool. That’ll create a file in the root directory containing a list of the rules. The pipe, followed by the tee command, is necessary to apply my `sudo`
authority to the second part of the string: the actual saving of a file to the otherwise restricted root directory.
I can then tell the system to run a related tool called `iptables-restore`
every time it boots. A regular cron job of the kind we saw in the previous module won’t help because they’re run at set times, but we have no idea when our computer might decide to crash and reboot.
There are lots of ways to handle this problem. Here’s one:
On my Linux machine, I’ll install a program called [anacron](https://sourceforge.net/projects/anacron/) that will give us a file in the /etc/ directory called anacrontab. I’ll edit the file and add this `iptables-restore`
command, telling it to load the current values of that .rules file into iptables each day (when necessary) one minute after a boot. I’ll give the job an identifier (`iptables-restore`
) and then add the command itself. Since you’re playing along with me at home, you should test all this out by rebooting your system.
```
sudo iptables-save | sudo tee /root/my.active.firewall.rules
sudo apt install anacron
sudo nano /etc/anacrontab
1 1 iptables-restore iptables-restore < /root/my.active.firewall.rules
```
I hope these practical examples have illustrated how to use iptables and firewalld for managing connectivity issues on Linux-based firewalls.
|
10,076 | 5 个给孩子的非常好的 Linux 游戏和教育软件 | https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/ | 2018-10-03T20:53:00 | [
"教育",
"儿童"
] | https://linux.cn/article-10076-1.html | 
Linux 是一个非常强大的操作系统,因此因特网上的大多数服务器都使用它。尽管它算不上是对用户友好的最佳操作系统,但它的多元化还是值得称赞的。对于 Linux 来说,每个人都能在它上面找到他们自己的所需。不论你是用它来写代码、还是用于教学或物联网(IoT),你总能找到一个适合你用的 Linux 发行版。为此,许多人认为 Linux 是未来计算的最佳操作系统。
未来是属于孩子们的,让孩子们了解 Linux 是他们掌控未来的最佳方式。这个操作系统上或许并没有像 FIFA 或 PES 那样声名赫赫的游戏;但是,它为孩子们提供了一些非常好的教育软件和游戏。这里有五款最好的 Linux 教育软件,可以让你的孩子领先一步。
**相关阅读**:[使用一个 Linux 发行版的新手指南](https://www.maketecheasier.com/beginner-guide-to-using-linux-distro/)
### 1、GCompris
如果你正在为你的孩子寻找一款最佳的教育软件,[GCompris](http://www.gcompris.net/downloads-en.html) 将是你的最好的开端。这款软件专门为 2 到 10 岁的孩子所设计。作为所有的 Linux 教育软件套装的巅峰之作,GCompris 为孩子们提供了大约 100 项活动。它囊括了你期望你的孩子学习的所有内容,从阅读材料到科学、地理、绘画、代数、测验等等。

GCompris 甚至有一项活动可以帮你的孩子学习计算机的相关知识。如果你的孩子还很小,你希望他去学习字母、颜色和形状,GCompris 也有这方面的相关内容。更重要的是,它也为孩子们准备了一些益智类游戏,比如国际象棋、井字棋、记忆游戏以及猜词游戏。GCompris 并不是仅能在 Linux 上运行的游戏,它也可以运行在 Windows 和 Android 上。
### 2、TuxMath
很多学生认为数学是门非常难的课程。你可以通过 [TuxMath](https://tuxmath.en.uptodown.com/ubuntu) 这样的 Linux 教育软件让你的孩子掌握数学技能,从而改变这种看法。TuxMath 是为孩子开发的顶级数学教育辅助游戏。在这个游戏中,你的角色是在如雨点般落下的数学题中,帮助 Linux 企鹅 Tux 保护它的星球。

在这些数学题落下来毁坏 Tux 的星球之前找到答案,就可以用你的激光帮助 Tux 拯救它的星球。题目的难度每过一关都会提升一点。这个游戏非常适合孩子,因为它可以让孩子们开动脑筋解决问题,不但有助于他们学好数学,也有助于开发智力。
### 3、Sugar on a Stick
[Sugar on a Stick](http://wiki.sugarlabs.org/go/Sugar_on_a_Stick/Downloads) 是献给孩子们的学习程序 —— 一个广受好评的全新教学法。这个程序为你的孩子提供一个成熟的教学平台,在那里,他们可以收获创造、探索、发现和思考方面的技能。和 GCompris 一样,Sugar on a Stick 为孩子们带来了包括游戏和谜题在内的大量学习资源。

关于 Sugar on a Stick 最大的一个好处是你可以将它配置在一个 U 盘上。你只要有一台 X86 的 PC,插入那个 U 盘,然后就可以从 U 盘引导这个发行版。Sugar on a Stick 是由 Sugar 实验室提供的一个项目 —— 这个实验室是一个由志愿者运作的非盈利组织。
### 4、KDE Edu Suite
[KDE Edu Suite](https://edu.kde.org/) 是一个用途与众不同的软件包。通过大量不同领域的应用程序,KDE 社区已经证明,它不仅能为成年人赋能,也关心年青一代如何适应周围的世界。它囊括了一系列供孩子们使用的应用程序,从科学到数学、地理等等。

KDE Edu 套件以孩子们成长所必需的知识为基础,既能够用作学校的教学软件,也能够作为孩子们的学习应用。它提供了大量可免费下载的软件包。KDE Edu 套件在主流的 GNU/Linux 发行版上都能安装。
### 5、Tux Paint

[Tux Paint](http://www.tuxpaint.org/) 是给孩子们的另一个非常好的 Linux 教育软件。这款屡获殊荣的绘画软件在世界各地被用于培养孩子们的绘画技能。它的界面简洁、易于使用,并配有有趣的音效,还有一个卡通吉祥物来鼓励和引导孩子们使用这个程序。Tux Paint 中有许多绘画工具,可以帮助孩子们放飞他们的创意。
### 总结
由于这些教育软件深受孩子们的欢迎,许多学校和幼儿园都使用这些程序进行辅助教学。典型的一个例子就是 [Edubuntu](http://edubuntu.org/),它是儿童教育领域中广受老师和家长们欢迎的一个基于 Ubuntu 的发行版。
Tux Paint 是另一个非常好的例子,这些年它越来越流行,被大量用于学校中教孩子们如何绘画。以上清单并不详尽,还有成百上千款对孩子有益的其它 Linux 教育软件和游戏。
如果你还知道给孩子们的其它非常好的 Linux 教育软件和游戏,在下面的评论区分享给我们吧。
---
via: <https://www.maketecheasier.com/5-best-linux-software-packages-for-kids/>
作者:[Kenneth Kimari](https://www.maketecheasier.com/author/kennkimari/)
选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 
Linux is a very powerful operating system, which is why it powers most of the servers on the Internet. While this OS may not have a reputation for popular games such as FIFA or PES, it offers the best educational software and games for kids. These are eight of the best Linux educational software to keep your kids ahead of the game.
**Good to know**: if you are getting a phone for your kids, these are [the best kids cell phones to get](https://www.maketecheasier.com/best-cell-phones-for-kids/).
## 1. GCompris
If you’re looking for the best educational software for kids, [GCompris](https://www.gcompris.net/downloads-en.html) should be your starting point. This software is specifically designed for kids education and is ideal for kids between 2 and 10 years old. As the pinnacle of all Linux educational software suites for children, GCompris offers about 100 activities for kids. It packs everything you want for your kids, from reading practice to science, geography, drawing, algebra, quizzes, and more.

GCompris even has activities for helping your kids learn computer peripherals. If your kids are young, and you want them to learn the alphabet, colors, and shapes, GCompris has programs for those, too. What’s more, it also comes with helpful games for kids, such as chess, tic-tac-toe, memory, and hangman. GCompris is not a Linux-only app and is also available for Windows and Android.
You can install GCompris in Ubuntu from the Software Center or through the Snap store:
sudo snap install gcompris
## 2. TuxMath
Most students consider math a tough subject. You can change that perception by acquainting your kids with mathematical skills through Linux software applications, such as [TuxMath](https://github.com/tux4kids/tuxmath). TuxMath is a top-rated educational Math tutorial game for kids. In this game, your role is to help Tux the penguin of Linux protect his planet from a rain of mathematical problems.

By finding the answer, you help Tux save the planet by destroying the asteroids with your laser before they make an impact. The difficulty of the math problems increases with each level you pass. This game is ideal for kids, as it can help them rack their brains for solutions. Besides making them good at math, it also helps improve their mental agility.
You can install TuxMath in Ubuntu from the Software Center or through the apt command:
sudo apt install tuxmath
**Tip**: check out some of the [best sites to find ebooks for kids](https://www.maketecheasier.com/best-sites-find-free-online-books-for-kids/).
## 3. Sugar on a Stick
[Sugar on a Stick](https://wiki.sugarlabs.org/go/Sugar_on_a_Stick) is a dedicated learning program for kids: a pedagogy that has gained a lot of traction. This program provides your kids with a full-fledged learning platform where they can gain skills in creating, exploring, discovering and also reflecting on ideas. Just like GCompris, Sugar on a Stick comes with a host of learning resources for kids, including games and puzzles.

The best thing about Sugar on a Stick is that you can set it up on a USB Drive. All you need is an X86-based PC, then plug in the USB and boot the distro from it. Sugar on a Stick is a project by Sugar Labs – a non-profit organization that is run by volunteers.
Aside from booting from a separate USB stick, you can also [install Sugar as a desktop environment](https://www.maketecheasier.com/sugar-linux-distro-for-kids/) on a running Linux distro. This can be incredibly helpful if you want to create a dedicated learning computer for your children at home.
## 4. Scratch
Scratch is a programming language for non-programmers and children. It aims to be an easy-to-use and highly accessible way to create simple programs and games on your computer. Unlike a regular programming language, Scratch uses interconnected puzzle-like blocks to represent functions, variables and structures.

This simplified approach means that a user does not need any prerequisite knowledge about programming to start creating programs. For example, it is incredibly easy to create an iterative for loop in Scratch by just combining two block types together.

While you can use Scratch as a web app, it is also available in most Linux distros as an offline application. You can install the program in Ubuntu from the Software Center or via the following command:
sudo apt install scratch
**Good to know: **Scratch is a great first programming language. Once you are proficient in it, you can take your next step by [learning shell scripting](https://www.maketecheasier.com/beginners-guide-scripting-linux/).
## 5. Tux Paint
[Tux Paint](https://www.tuxpaint.org/) is another great Linux educational software for kids. This award-winning drawing program is used in schools around the world to help children nurture the art of drawing. It comes with a clean, easy-to-use interface and fun sound effects that help children use the program. There is also an encouraging cartoon mascot that guides kids as they use the program. Tux Paint comes with a variety of drawing tools that help kids unleash their creativity.

You can install Tux Paint in Ubuntu from the Software Center or by running the following command:
sudo apt install tuxpaint
## 6. Tux Typing
[Tux Typing](https://www.tux4kids.com/tuxtyping.html) is a simple, yet intuitive touch typing tutor for Linux. Unlike regular typing programs, it also aims to be an accessible and fun way for children and non-computer users to learn how to type quickly and efficiently with a keyboard.

One of the main selling points of Tux Typing is that its developers structured it to behave like a mini game. For example, the program includes two simple games that introduce both proper finger placement and efficient keystroke typing. This abstraction, in turn, makes the program less daunting for non-technical users that want to improve their computer skills.

You can easily install Tux Typing in Ubuntu from the Software Center or by running the following command:
sudo snap install tuxtyping
**Alternative**: you can also check out [these games to improve your typing skills](https://www.maketecheasier.com/best-typing-games/).
## 7. Nootka
Simple and elegant. Nootka is a music notation program that aims to teach the basics of reading and writing sheet music. The program will display a specific note in the screen and ask the student to play that note. Nootka will use the machine’s microphone to detect whether the student played the note correctly.

Aside from that, Nootka also packs a number of great features out of the box. For example, it has a “tuner mode” that will listen and teach the student how to tune various instruments for playing. It also has a “listening mode,” where the student can train their ears to listen for particular notes.

You can obtain Nootka in an [AppImage container](https://www.maketecheasier.com/appimage-in-linux/) from the [developer’s SourceForge website](https://sourceforge.net/projects/nootka/). Run the following commands to use it in your desktop:
sudo apt install fuse
cd /home/$USER/Downloads
sudo chmod +x ./nootka-2.0.2-x86_64.AppImage
./nootka-2.0.2-x86_64.AppImage
**Tip: **you can automate this process further by [creating a .desktop shortcut](https://www.maketecheasier.com/create-desktop-file-linux/) for Nootka.
## 8. TupiTube Desk
TupiTube Desk is a powerful yet easy-to-learn 2D animation software for Linux, Windows and Mac OSX. Unlike professional animation tools, TupiTube Desk distinguishes itself by focusing on the fundamental tools that make up 2D animation. For example, the program has an extensive frame-by-frame feature as well as dynamic vector backgrounds.

Aside from that, TupiTube also supports in-app sharing. It is incredibly easy for a user to publish their work online. Because of these, TupiTube can be a great tool for students that want to get started with animation but do not know where to start.
You can install TupiTube in your Ubuntu machine by [obtaining its .deb file](https://tupitube.com/index.php?r=custom_pages%2Fview&id=30) on the developer’s download page. Once it is in your machine, you can install the program by running dpkg:
sudo dpkg -i /home/$USER/Downloads/tupitubedesk_0.2.18_amd64.deb

## Frequently Asked Questions
### Is Scratch a secure IDE for my computer?
Yes. Scratch is a completely safe program that you can install on any computer. For the most part, the functions that are available in Scratch do not modify any system file on your computer. Aside from that, the latest version of Scratch also includes the ability to run any user-created program inside a virtual machine.
### Are there benefits to using Sugar on a Stick instead of Sugar?
While Sugar and Sugar on a Stick are fundamentally the same desktop environment, the latter can be helpful if you do not want your child to access and modify any files in your machine, as Sugar on a Stick is a live image that will only contain the files that are present in its USB drive.
### TupiTube Desk failed to install on my machine.
This issue is most likely due to a missing dependency in your system. Unlike apt, dpkg is a completely manual package manager, so it cannot automatically resolve any missing dependencies on its own.
Knowing that, you can fix this issue by running apt: `sudo apt --fix-broken install`. This will check your current “install queue” and resolve any dependencies that dpkg encountered while attempting to install TupiTube.
Image credit: [Thomas Park via Unsplash](https://unsplash.com/photos/w9i7wMaM3EE). All alterations and screenshots by Ramces Red.
|
10,077 | 在 Linux 中安全且轻松地管理 Cron 定时任务 | https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/ | 2018-10-03T21:29:37 | [
"crontab",
"cron"
] | https://linux.cn/article-10077-1.html | 
在 Linux 中遇到计划任务的时候,你首先会想到的大概就是 Cron 定时任务了。Cron 定时任务能帮助你在类 Unix 操作系统中计划性地执行命令或者任务。也可以参考一下我们之前的一篇《[关于 Cron 定时任务的新手指导](https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/)》。对于有一定 Linux 经验的人来说,设置 Cron 定时任务不是什么难事,但对于新手来说就不一定了,他们在编辑 crontab 文件的时候不知不觉中犯的一些小错误,也有可能把整个 Cron 定时任务搞挂了。如果你想在处理 Cron 定时任务时多一重保障,可以尝试使用 **Crontab UI**,它是一个可以在类 Unix 操作系统上安全轻松管理 Cron 定时任务的 Web 页面工具。
Crontab UI 是使用 NodeJS 编写的自由开源软件。有了 Crontab UI,你在创建、删除和修改 Cron 定时任务的时候就不需要手工编辑 Crontab 文件了,只需要打开浏览器稍微操作一下,就能完成上面这些工作。你可以用 Crontab UI 轻松创建、编辑、暂停、删除、备份 Cron 定时任务,甚至还可以简单地做到导入、导出、部署其它机器上的 Cron 定时任务,它还支持错误日志、邮件发送和钩子。
### 安装 Crontab UI
只需要一条命令就可以安装好 Crontab UI,但前提是已经安装好 NPM。如果还没有安装 NPM,可以参考《[如何在 Linux 上安装 NodeJS](https://www.ostechnix.com/install-node-js-linux/)》这篇文章。
执行这一条命令来安装 Crontab UI。
```
$ npm install -g crontab-ui
```
就是这么简单,下面继续来看看在 Crontab UI 上如何管理 Cron 定时任务。
### 在 Linux 上安全轻松管理 Cron 定时任务
执行这一条命令启动 Crontab UI:
```
$ crontab-ui
```
你会看到这样的输出:
```
Node version: 10.8.0
Crontab UI is running at http://127.0.0.1:8000
```
首先在你的防火墙和路由器上放开 8000 端口,然后打开浏览器访问 `http://127.0.0.1:8000`。
注意,默认只有在本地才能访问到 Crontab UI 的控制台页面。但如果你想让 Crontab UI 使用系统的 IP 地址和自定义端口,也就是想让其它机器也访问到本地的 Crontab UI,你需要使用以下这个命令:
```
$ HOST=0.0.0.0 PORT=9000 crontab-ui
Node version: 10.8.0
Crontab UI is running at http://0.0.0.0:9000
```
Crontab UI 就能够通过 `http://IP-Address:9000` 这样的 URL 被远程机器访问到了。
Crontab UI 的控制台页面长这样:

从上面的截图就可以看到,Crontab UI 的界面非常简洁,所有选项的含义都能不言自明。
在终端输入 `Ctrl + C` 就可以关闭 Crontab UI。
#### 创建、编辑、运行、停止、删除 Cron 定时任务
点击 “New”,输入 Cron 定时任务的信息并点击 “Save” 保存,就可以创建一个新的 Cron 定时任务了。
1. 为 Cron 定时任务命名,这是可选的;
2. 你想要执行的完整命令;
3. 设定计划执行的时间。你可以按照启动、每时、每日、每周、每月、每年这些指标快速指定计划任务,也可以明确指定任务执行的具体时间。指定好计划时间后,“Jobs” 区域就会显示 Cron 定时任务的句式。
4. 选择是否为某个 Cron 定时任务记录错误日志。
这是我的一个 Cron 定时任务样例。

如你所见,我设置了一个每月清理 `pacman` 缓存的 Cron 定时任务。你也可以设置多个 Cron 定时任务,都能在控制台页面看到。

如果你需要更改 Cron 定时任务中的某些参数,只需要点击 “Edit” 按钮并按照你的需求更改对应的参数。点击 “Run” 按钮可以立即执行 Cron 定时任务,点击 “Stop” 则可以立即停止 Cron 定时任务。如果想要查看某个 Cron 定时任务的详细日志,可以点击 “Log” 按钮。对于不再需要的 Cron 定时任务,就可以按 “Delete” 按钮删除。
#### 备份 Cron 定时任务
点击控制台页面的 “Backup” 按钮并确认,就可以备份所有 Cron 定时任务。

备份之后,一旦 Crontab 文件出现了错误,就可以使用备份来恢复了。
#### 导入/导出其它机器上的 Cron 定时任务
Crontab UI 还有一个令人注目的功能,就是导入、导出、部署其它机器上的 Cron 定时任务。如果同一个网络里的多台机器都需要执行同样的 Cron 定时任务,只需要点击 “Export” 按钮并选择文件的保存路径,所有的 Cron 定时任务都会导出到 `crontab.db` 文件中。
以下是 `crontab.db` 文件的内容:
```
$ cat Downloads/crontab.db
{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"}
```
导出成文件以后,你就可以把这个 `crontab.db` 文件放置到其它机器上并导入成 Cron 定时任务,而不需要在每一台主机上手动设置 Cron 定时任务。总之,在一台机器上设置完,导出,再导入到其他机器,就完事了。
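顺带一提,导出的 `crontab.db` 是一个每行一个 JSON 对象的文本文件(如上面的输出所示)。如果你想在导入之前用脚本检查一下里面都有哪些任务,可以参考下面这个简单的 Python 示意(示例数据沿用了上文的输出,实际字段以你自己的导出文件为准):

```python
import json

# 沿用上文导出的示例数据,模拟一个 crontab.db 文件(每行一个 JSON 对象)
SAMPLE = ('{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman",'
          '"schedule":"@monthly","stopped":false}')

with open("crontab.db", "w", encoding="utf-8") as f:
    f.write(SAMPLE + "\n")

def load_jobs(db_path):
    """逐行解析 crontab.db,返回由任务字典组成的列表。"""
    jobs = []
    with open(db_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                jobs.append(json.loads(line))
    return jobs

for job in load_jobs("crontab.db"):
    print(f"{job['name']}: {job['schedule']}")  # → Remove Pacman Cache: @monthly
```

这样在批量部署之前,就能先确认导出的任务内容是否符合预期。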
#### 在 Crontab 文件获取/保存 Cron 定时任务
你可能在使用 Crontab UI 之前就已经使用 `crontab` 命令创建过 Cron 定时任务。如果是这样,你可以点击控制台页面上的 “Get from crontab” 按钮来获取已有的 Cron 定时任务。

同样地,你也可以使用 Crontab UI 来将新的 Cron 定时任务保存到 Crontab 文件中,只需要点击 “Save to crontab” 按钮就可以了。
管理 Cron 定时任务并没有想象中那么难,即使是新手使用 Crontab UI 也能轻松管理 Cron 定时任务。赶快开始尝试并发表一下你的看法吧。
---
via: <https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,078 | 如何在 Ubuntu Linux 中使用 RAR 文件 | https://itsfoss.com/use-rar-ubuntu-linux/ | 2018-10-03T21:39:07 | [
"rar",
"unrar",
"压缩",
"解压"
] | https://linux.cn/article-10078-1.html | [RAR](https://www.rarlab.com/rar_file.htm) 是一种非常好的归档文件格式。但相比之下 7-zip 能提供了更好的压缩率,并且默认情况下还可以在多个平台上轻松支持 Zip 文件。不过 RAR 仍然是最流行的归档格式之一。然而 [Ubuntu](https://www.ubuntu.com/) 自带的归档管理器却不支持提取 RAR 文件,也不允许创建 RAR 文件。
办法总比问题多。只要安装 `unrar` 这款由 [RARLAB](https://www.rarlab.com/) 提供的免费软件,就能在 Ubuntu 上支持提取 RAR 文件了。你也可以安装 `rar` 试用版来创建和管理 RAR 文件。

### 提取 RAR 文件
在未安装 `unrar` 的情况下,提取 RAR 文件会报出“未能提取”错误,就像下面这样(以 [Ubuntu 18.04](https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/) 为例):

如果要解决这个错误并提取 RAR 文件,请按照以下步骤安装 `unrar`:
打开终端并输入:
```
sudo apt-get install unrar
```
安装 `unrar` 后,直接输入 `unrar` 就可以看到它的用法以及如何使用这个工具处理 RAR 文件。
最常用到的功能是提取 RAR 文件。因此,可以**通过右键单击 RAR 文件并执行提取**,也可以借助此以下命令通过终端执行操作:
```
unrar x FileName.rar
```
结果类似以下这样:

如果压缩文件没放在家目录中,就必须使用 `cd` 命令移动到目标目录下。例如 RAR 文件如果在 `Music` 目录下,只需要使用 `cd Music` 就可以移动到相应的目录,然后提取 RAR 文件。
### 创建和管理 RAR 文件

`unrar` 不允许创建 RAR 文件。因此还需要安装 `rar` 命令行工具才能创建 RAR 文件。
要创建 RAR 文件,首先需要通过以下命令安装 `rar`:
```
sudo apt-get install rar
```
按照下面的命令语法创建 RAR 文件:
```
rar a ArchiveName File_1 File_2 Dir_1 Dir_2
```
按照这个格式输入命令时,它会将目录中的每个文件添加到 RAR 文件中。如果需要某一个特定的文件,就要指定文件确切的名称或路径。
默认情况下,RAR 文件会放置在**家目录**中。
以类似的方式,可以更新或管理 RAR 文件。同样是使用以下的命令语法:
```
rar u ArchiveName Filename
```
在终端输入 `rar` 就可以列出 RAR 工具的相关命令。
### 总结
现在你已经知道如何在 Ubuntu 上管理 RAR 文件了,你会更喜欢使用 7-zip、Zip 或 Tar.xz 吗?
欢迎在评论区中评论。
---
via: <https://itsfoss.com/use-rar-ubuntu-linux/>
作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | 

[RAR](https://www.rarlab.com/rar_file.htm?ref=itsfoss.com) is a quite good archive file format. It is one of the most popular archive formats.
By default, Ubuntu's archive manager does not support extracting RAR files nor does it let you create RAR files. It would rather show you this error:
*There is no command installed for RAR archive files. Do you want to search for a command to open this file?*

Sometimes extracting RAR files will show you "* Extraction not performed" error*.

Fret not. We have a solution for you. To enable the support to extract RAR files, you need to install **UNRAR** – which is a freeware by [RARLAB](https://www.rarlab.com/?ref=itsfoss.com). And, to create and manage RAR files, you need to install **RAR**.
## Install RAR support in Ubuntu
Launch the terminal and type in the following command to make sure you have [multiverse repository enabled in Ubuntu](https://itsfoss.com/ubuntu-repositories/):
`sudo add-apt-repository multiverse`
Update the package cache:
`sudo apt update`
Now, **if you just want to extract RAR files** in Ubuntu, simply install the unrar package like this:
`sudo apt install unrar`
If you also want to create RAR files, you should install this file.
`sudo apt install rar`
## Extracting RAR Files in Ubuntu Linux
After installing unrar, you may choose to type in “**unrar**” (without the inverted commas) to know more about its usage and how to use RAR files with the help of it.
The most common usage would obviously be extracting the RAR file you have. So, **you can either perform a right-click on the file and proceed to extract it** from there or you can do it via the terminal with the help of this command:
`unrar x FileName.rar`
You can see that in action here:

If the file isn’t present in the Home directory, then you have to navigate to the target folder by [using the cd command in Linux](https://linuxhandbook.com/cd-command-examples/?ref=itsfoss.com). For instance, if you have the archive in the Music directory, simply type in “**cd Music**” to navigate to the location and then extract the RAR file.
## Creating & Managing RAR files in Ubuntu Linux
UNRAR does not let you create RAR files. So, you need to install the RAR command-line tool to be able to create RAR archives.
To do that, you need to type in the following command:
`sudo apt-get install rar`
Here, we will help you create a RAR file. In order to do that, follow the command syntax below:
`rar a ArchiveName File_1 File_2 Dir_1 Dir_2`
When you type a command in this format, it will add every item inside the directory to the archive. In either case, if you want specific files, just mention the exact name/path.
By default, the RAR files reside in **HOME** directory.

In the same way, you can update/manage the RAR files. Just type in a command using the following syntax:
`rar u ArchiveName Filename`
To get the list of commands for the RAR tool, just type “**rar**” in the terminal.
## Wrapping Up
Now that you’ve known how to use RAR files on Ubuntu, will you prefer using it over 7-zip, Zip, or Tar.xz?
Let us know your thoughts in the comments below. |
10,079 | 用于 Linux 桌面的 4 个扫描工具 | https://opensource.com/article/18/9/linux-scanner-tools | 2018-10-04T21:01:38 | [
"扫描"
] | https://linux.cn/article-10079-1.html |
>
> 使用这些开源软件之一驱动你的扫描仪来实现无纸化办公。
>
>
>

尽管无纸化世界还没有到来,但越来越多的人通过扫描文件和照片来摆脱纸张的束缚。不过,仅仅拥有一台扫描仪还不足够。你还需要软件来驱动扫描仪。
然而问题是许多扫描仪制造商没有与他们的设备适配在一起的软件的 Linux 版本。不过在大多数情况下,即使没有也没多大关系。因为在 Linux 桌面上已经有很好的扫描软件了。它们能够与许多扫描仪配合很好的完成工作。
现在就让我们看看四个简单又灵活的开源 Linux 扫描工具。我已经使用过了下面这些工具(甚至[早在 2014 年](https://opensource.com/life/14/8/3-tools-scanners-linux-desktop)写过关于其中三个工具的文章)并且觉得它们非常有用。希望你也会这样认为。
### Simple Scan
这是我最喜欢的一个软件之一,[Simple Scan](https://gitlab.gnome.org/GNOME/simple-scan) 小巧、快捷、高效且易用。如果你以前见过它,那是因为 Simple Scan 是 GNOME 桌面上的默认扫描应用程序,也是许多 Linux 发行版的默认扫描程序。
你只需单击一下就能扫描文档或照片。扫描过某些内容后,你可以旋转或裁剪它并将其另存为图像(仅限 JPEG 或 PNG 格式)或 PDF 格式。也就是说 Simple Scan 可能会很慢,即使你用较低分辨率来扫描文档。最重要的是,Simple Scan 在扫描时会使用一组全局的默认值,例如 150dpi 用于文本,300dpi 用于照片。你需要进入 Simple Scan 的首选项才能更改这些设置。
如果你扫描的内容超过了几页,还可以在保存之前重新排序页面。如果有必要的话 —— 假如你正在提交已签名的表格 —— 你可以使用 Simple Scan 来发送电子邮件。
### Skanlite
从很多方面来看,[Skanlite](https://www.kde.org/applications/graphics/skanlite/) 是 Simple Scan 在 KDE 世界中的表兄弟。虽然 Skanlite 功能很少,但它可以出色的完成工作。
你可以自己配置这个软件的选项,包括自动保存扫描文件、设置扫描质量以及确定扫描保存位置。 Skanlite 可以保存为以下图像格式:JPEG、PNG、BMP、PPM、XBM 和 XPM。
其中一个很棒的功能是 Skanlite 能够将你扫描的部分内容保存到单独的文件中。当你想要从照片中删除某人或某物时,这就派上用场了。
### Gscan2pdf
这是我另一个最爱的老软件,[gscan2pdf](http://gscan2pdf.sourceforge.net/) 可能会显得很老旧了,但它仍然包含一些比这里提到的其他软件更好的功能。即便如此,gscan2pdf 仍然显得很轻便。
除了以各种图像格式(JPEG、PNG 和 TIFF)保存扫描外,gscan2pdf 还可以将它们保存为 PDF 或 [DjVu](http://en.wikipedia.org/wiki/DjVu) 文件。你可以在单击“扫描”按钮之前设置扫描的分辨率,无论是黑白、彩色还是纸张大小,每当你想要更改任何这些设置时,都可以进入 gscan2pdf 的首选项。你还可以旋转、裁剪和删除页面。
虽然这些都不是真正的杀手级功能,但它们会给你带来更多的灵活性。
### GIMP
你大概会知道 [GIMP](http://www.gimp.org/) 是一个图像编辑工具。但是你恐怕不知道可以用它来驱动你的扫描仪吧。
你需要安装 [XSane](https://en.wikipedia.org/wiki/Scanner_Access_Now_Easy#XSane) 扫描软件和 GIMP XSane 插件。这两个应该都可以从你的 Linux 发行版的包管理器中获得。安装之后,在 GIMP 中选择“文件 > 创建 > 扫描仪/相机”,单击你的扫描仪,然后单击“扫描”按钮即可进行扫描。
如果这不是你想要的,或者它不起作用,你可以将 GIMP 和一个叫作 [QuiteInsane](http://sourceforge.net/projects/quiteinsane/) 的插件结合起来。使用任一插件,都能使 GIMP 成为一个功能强大的扫描软件,它可以让你设置许多选项,如是否扫描彩色或黑白、扫描的分辨率,以及是否压缩结果等。你还可以使用 GIMP 的工具来修改或应用扫描后的效果。这使得它非常适合扫描照片和艺术品。
### 它们真的能够工作吗?
所有的这些软件在大多数时候都能够在各种硬件上运行良好。我将它们与我过去几年来拥有的多台多功能打印机一起使用 —— 无论是使用 USB 线连接还是通过无线连接。
你可能已经注意到我在前一段中写过“大多数时候运行良好”。这是因为我确实遇到过一个例外:一个便宜的 canon 多功能打印机。我使用的软件都没有检测到它。最后我不得不下载并安装 canon 的 Linux 扫描仪软件才使它工作。
你最喜欢的 Linux 开源扫描工具是什么?发表评论,分享你的选择。
---
via: <https://opensource.com/article/18/9/linux-scanner-tools>
作者:[Scott Nesbitt](https://opensource.com/users/scottnesbitt) 选题:[lujun9972](https://github.com/lujun9972) 译者:[way-ww](https://github.com/way-ww) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | While the paperless world isn't here quite yet, more and more people are getting rid of paper by scanning documents and photos. Having a scanner isn't enough to do the deed, though. You need software to drive that scanner.
But the catch is many scanner makers don't have Linux versions of the software they bundle with their devices. For the most part, that doesn't matter. Why? Because there are good scanning applications available for the Linux desktop. They work with a variety of scanners and do a good job.
Let's take a look at four simple but flexible open source Linux scanning tools. I've used each of these tools (and even wrote about three of them [back in 2014](https://opensource.com/life/14/8/3-tools-scanners-linux-desktop)) and found them very useful. You might, too.
## Simple Scan
One of my longtime favorites, [Simple Scan](https://gitlab.gnome.org/GNOME/simple-scan) is small, quick, efficient, and easy to use. If you've seen it before, that's because Simple Scan is the default scanner application on the GNOME desktop, as well as for a number of Linux distributions.
Scanning a document or photo takes one click. After scanning something, you can rotate or crop it and save it as an image (JPEG or PNG only) or as a PDF. That said, Simple Scan can be slow, even if you scan documents at lower resolutions. On top of that, Simple Scan uses a set of global defaults for scanning, like 150dpi for text and 300dpi for photos. You need to go into Simple Scan's preferences to change those settings.
If you've scanned something with more than a couple of pages, you can reorder the pages before you save. And if necessary—say you're submitting a signed form—you can email from within Simple Scan.
## Skanlite
In many ways, [Skanlite](https://www.kde.org/applications/graphics/skanlite/) is Simple Scan's cousin in the KDE world. Skanlite has few features, but it gets the job done nicely.
The software has options that you can configure, including automatically saving scanned files, setting the quality of the scan, and identifying where to save your scans. Skanlite can save to these image formats: JPEG, PNG, BMP, PPM, XBM, and XPM.
One nifty feature is the software's ability to save portions of what you've scanned to separate files. That comes in handy when, say, you want to excise someone or something from a photo.
## Gscan2pdf
Another old favorite, [gscan2pdf](http://gscan2pdf.sourceforge.net/) might be showing its age, but it still packs a few more features than some of the other applications mentioned here. Even so, gscan2pdf is still comparatively light.
In addition to saving scans in various image formats (JPEG, PNG, and TIFF), gscan2pdf also saves them as PDF or [DjVu](http://en.wikipedia.org/wiki/DjVu) files. You can set the scan's resolution, whether it's black and white or color, and paper size *before* you click the Scan button. That beats going into gscan2pdf's preferences every time you want to change any of those settings. You can also rotate, crop, and delete pages.
While none of those features are truly killer, they give you a bit more flexibility.
## GIMP
You probably know [GIMP](http://www.gimp.org/) as an image-editing tool. But did you know you can use it to drive your scanner?
You'll need to install the [XSane](https://en.wikipedia.org/wiki/Scanner_Access_Now_Easy#XSane) scanner software and the GIMP XSane plugin. Both of those should be available from your Linux distro's package manager. From there, select File > Create > Scanner/Camera. From there, click on your scanner and then the Scan button.
If that's not your cup of tea, or if it doesn't work, you can combine GIMP with a plugin called [QuiteInsane](http://sourceforge.net/projects/quiteinsane/). With either plugin, GIMP becomes a powerful scanning application that lets you set a number of options like whether to scan in color or black and white, the resolution of the scan, and whether or not to compress results. You can also use GIMP's tools to touch up or apply effects to your scans. This makes it great for scanning photos and art.
## Do they really just work?
All of this software works well for the most part and with a variety of hardware. I've used them with several multifunction printers that I've owned over the years—whether connecting using a USB cable or over wireless.
You might have noticed that I wrote "works well *for the most part*" in the previous paragraph. I did run into one exception: an inexpensive Canon multifunction printer. None of the software I used could detect it. I had to download and install Canon's Linux scanner software, which did work.
What's your favorite open source scanning tool for Linux? Share your pick by leaving a comment.
|
10,080 | 3 个用于数据科学的顶级 Python 库 | https://opensource.com/article/18/9/top-3-python-libraries-data-science | 2018-10-04T21:29:25 | [
"Python",
"数据科学",
"SciPy"
] | /article-10080-1.html |
>
> 使用这些库把 Python 变成一个科学数据分析和建模工具。
>
>
>

Python 的许多特性,比如开发效率、代码可读性、速度等使之成为了数据科学爱好者的首选编程语言。对于想要升级应用程序功能的数据科学家和机器学习专家来说,Python 通常是最好的选择(比如,Andrey Bulezyuk 使用 Python 语言创造了一个优秀的[机器学习应用程序](https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/))。
由于 Python 的广泛使用,因此它拥有大量的库,使得数据科学家能够很容易地完成复杂的任务,而且不会遇到许多编码困难。下面列出 3 个用于数据科学的顶级 Python 库。如果你想在数据科学这一领域开始你的职业生涯,就去了解一下它们吧。
### NumPy
[NumPy](http://www.numpy.org/)(数值 Python 的简称)是其中一个顶级数据科学库,它拥有许多有用的资源,从而帮助数据科学家把 Python 变成一个强大的科学分析和建模工具。NumPy 是在 BSD 许可证的许可下开源的,它是在科学计算中执行任务的基础 Python 库。SciPy 是一个更大的基于 Python 生态系统的开源工具,而 NumPy 是 SciPy 非常重要的一部分。
NumPy 为 Python 提供了大量数据结构,从而能够轻松地执行多维数组和矩阵运算。除了用于求解线性代数方程和其它数学计算之外,NumPy 还可以用做不同类型通用数据的多维容器。
此外,NumPy 还可以和其他编程语言无缝集成,比如 C/C++ 和 Fortran。NumPy 的多功能性使得它可以简单而快速地与大量数据库和工具结合。比如,让我们来看一下如何使用 NumPy(缩写成 `np`)来实现两个矩阵的乘法运算。
我们首先导入 NumPy 库(在这些例子中,我将使用 Jupyter notebook):
```
import numpy as np
```
接下来,使用 `eye()` 函数来生成指定维数的单位矩阵:
```
matrix_one = np.eye(3)
matrix_one
```
输出如下:
```
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
```
让我们生成另一个 3x3 矩阵。
我们使用 `arange([starting number], [stopping number])` 函数来生成一个数字序列。注意,函数中的第一个参数是序列的起始数字,而后一个数字本身并不包含在生成的结果中。
另外,使用 `reshape()` 函数把原始生成的矩阵的维度改成我们需要的维度。为了使两个矩阵“可乘”,它们需要有相同的维度。
```
matrix_two = np.arange(1,10).reshape(3,3)
matrix_two
```
输出如下:
```
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
```
接下来,使用 `dot()` 函数将两个矩阵相乘。
```
matrix_multiply = np.dot(matrix_one, matrix_two)
matrix_multiply
```
相乘后的输出如下:
```
array([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
```
太好了!
我们成功使用 NumPy 完成了两个矩阵的相乘,而不是使用<ruby> 普通冗长 <rt> vanilla </rt></ruby>的 Python 代码。
下面是这个例子的完整代码:
```
import numpy as np
#生成一个 3x3 单位矩阵
matrix_one = np.eye(3)
matrix_one
#生成另一个 3x3 矩阵以用来做乘法运算
matrix_two = np.arange(1,10).reshape(3,3)
matrix_two
#将两个矩阵相乘
matrix_multiply = np.dot(matrix_one, matrix_two)
matrix_multiply
```
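上文提到 NumPy 还可以用于求解线性代数方程。下面是一个简单的示意(方程组里的系数和常数只是演示用的假设数据):

```python
import numpy as np

# 求解线性方程组:
#   2x + y  = 5
#   x  + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

solution = np.linalg.solve(A, b)
print(solution)  # → [1. 3.],即 x = 1,y = 3
```

相比手工消元,`np.linalg.solve()` 一行就能得到精确解,这正是 NumPy 作为科学计算基础库的便利之处。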
### Pandas
[Pandas](http://pandas.pydata.org/) 是另一个可以提高你的 Python 数据科学技能的优秀库。就和 NumPy 一样,它属于 SciPy 开源软件家族,可以在 BSD 自由许可证许可下使用。
Pandas 提供了多能而强大的工具,用于管理数据结构和执行大量数据分析。该库能够很好的处理不完整、非结构化和无序的真实世界数据,并且提供了用于整形、聚合、分析和可视化数据集的工具
Pandas 中有三种类型的数据结构:
* Series:一维、相同数据类型的数组
* DataFrame:二维异型矩阵
* Panel:三维大小可变数组
例如,我们来看一下如何使用 Panda 库(缩写成 `pd`)来执行一些描述性统计计算。
首先导入该库:
```
import pandas as pd
```
然后,创建一个<ruby> 序列 <rt> series </rt></ruby>字典:
```
d = {'Name':pd.Series(['Alfrick','Michael','Wendy','Paul','Dusan','George','Andreas',
'Irene','Sagar','Simon','James','Rose']),
'Years of Experience':pd.Series([5,9,1,4,3,4,7,9,6,8,3,1]),
'Programming Language':pd.Series(['Python','JavaScript','PHP','C++','Java','Scala','React','Ruby','Angular','PHP','Python','JavaScript'])
}
```
接下来,再创建一个<ruby> 数据框 <rt> DataFrame </rt></ruby>:
```
df = pd.DataFrame(d)
```
输出是一个非常规整的表:
```
Name Programming Language Years of Experience
0 Alfrick Python 5
1 Michael JavaScript 9
2 Wendy PHP 1
3 Paul C++ 4
4 Dusan Java 3
5 George Scala 4
6 Andreas React 7
7 Irene Ruby 9
8 Sagar Angular 6
9 Simon PHP 8
10 James Python 3
11 Rose JavaScript 1
```
下面是这个例子的完整代码:
```
import pandas as pd
#创建一个序列字典
d = {'Name':pd.Series(['Alfrick','Michael','Wendy','Paul','Dusan','George','Andreas',
'Irene','Sagar','Simon','James','Rose']),
'Years of Experience':pd.Series([5,9,1,4,3,4,7,9,6,8,3,1]),
'Programming Language':pd.Series(['Python','JavaScript','PHP','C++','Java','Scala','React','Ruby','Angular','PHP','Python','JavaScript'])
}
#创建一个数据框
df = pd.DataFrame(d)
print(df)
```
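上面的例子只是创建了数据框,还没有真正做“描述性统计”。在此基础上稍作扩展,就可以对数值列进行统计计算(下面的代码沿用了上文数据框的结构,只截取了部分行作演示):

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Alfrick", "Michael", "Wendy", "Paul"],
    "Years of Experience": [5, 9, 1, 4],
    "Programming Language": ["Python", "JavaScript", "PHP", "C++"],
})

# describe() 一次性给出计数、均值、标准差、最小值、分位数和最大值
print(df["Years of Experience"].describe())

# 也可以单独计算某个统计量,比如平均工作年限
print(df["Years of Experience"].mean())  # → 4.75
```

这也正是 Pandas 的典型用法:先把数据整理成数据框,再在其上做聚合与分析。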
### Matplotlib
[Matplotlib](https://matplotlib.org/) 也是 Scipy 核心包的一部分,并且在 BSD 许可证下可用。它是一个非常流行的科学库,用于实现简单而强大的可视化。你可以使用这个 Python 数据科学框架来生成曲线图、柱状图、直方图以及各种不同形状的图表,并且不用担心需要写很多行的代码。例如,我们来看一下如何使用 Matplotlib 库来生成一个简单的柱状图。
首先导入该库:
```
from matplotlib import pyplot as plt
```
然后生成 x 轴和 y 轴的数值:
```
x = [2, 4, 6, 8, 10]
y = [10, 11, 6, 7, 4]
```
接下来,调用函数来绘制柱状图:
```
plt.bar(x,y)
```
最后,显示图表:
```
plt.show()
```
柱状图如下:

下面是这个例子的完整代码:
```
#导入 Matplotlib 库
from matplotlib import pyplot as plt
#和 import matplotlib.pyplot as plt 一样
#生成 x 轴的数值
x = [2, 4, 6, 8, 10]
#生成 y 轴的数值
y = [10, 11, 6, 7, 4]
#调用函数来绘制柱状图
plt.bar(x,y)
#显示图表
plt.show()
```
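如果你在没有图形界面的环境(比如远程服务器)里运行上面的代码,`plt.show()` 无法弹出窗口。这时可以改用 `savefig()` 把图表保存成图片文件,下面是一个示意(这里显式选择了 Agg 后端以便离屏渲染):

```python
import matplotlib
matplotlib.use("Agg")  # 离屏渲染后端,不需要显示器
from matplotlib import pyplot as plt

x = [2, 4, 6, 8, 10]
y = [10, 11, 6, 7, 4]

plt.bar(x, y)
plt.savefig("bar_chart.png")  # 将柱状图保存为 PNG 文件,而非弹窗显示
```

保存下来的图片可以直接嵌入报告或网页,这在自动化的数据分析流水线里尤其常用。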
### 总结
Python 编程语言非常擅长数据处理和准备,但是在科学数据分析和建模方面就没有那么优秀了。幸好有这些用于[数据科学](https://www.liveedu.tv/guides/data-science/)的顶级 Python 框架填补了这一空缺,从而你能够进行复杂的数学计算以及创建复杂模型,进而让数据变得更有意义。
你还知道其它的 Python 数据挖掘库吗?你的使用经验是什么样的?请在下面的评论中和我们分享。
---
via: <https://opensource.com/article/18/9/top-3-python-libraries-data-science>
作者:[Dr.Michael J.Garbade](https://opensource.com/users/drmjg) 选题:[lujun9972](https://github.com/lujun9972) 译者:[ucasFL](https://github.com/ucasFL) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
10,081 | 如何在 Ubuntu 18.04 上安装 Popcorn Time | https://itsfoss.com/popcorn-time-ubuntu-linux/ | 2018-10-05T07:37:00 | [
"视频"
] | https://linux.cn/article-10081-1.html |
>
> 简要:这篇教程展示给你如何在 Ubuntu 和其他 Linux 发行版上安装 Popcorn Time,也会讨论一些 Popcorn Time 的便捷操作。
>
>
>
[Popcorn Time](https://popcorntime.sh/) 是一个受 [Netflix](https://netflix.com/) 启发的开源的 [torrent](https://en.wikipedia.org/wiki/Torrent_file) 流媒体应用,可以在 Linux、Mac、Windows 上运行。
传统的 torrent,在你看影片之前必须等待它下载完成。
[Popcorn Time](https://en.wikipedia.org/wiki/Popcorn_Time) 有所不同。它的使用基于 torrent,但是允许你(几乎)立即开始观看影片。它跟你在 Youtube 或者 Netflix 等流媒体网页上看影片一样,无需等待它下载完成。

如果你不想在看在线电影时被突如其来的广告吓倒的话,Popcorn Time 是一个不错的选择。不过要记得,它的播放质量依赖于当前网络中可用的<ruby> 种子 <rt> seed </rt></ruby>数。
Popcorn Time 还提供了一个不错的用户界面,让你能够浏览可用的电影、电视剧和其他视频内容。如果你曾经[在 Linux 上使用过 Netflix](https://itsfoss.com/netflix-firefox-linux/),你会发现两者有一些相似之处。
有些国家严格打击盗版,所以使用 torrent 下载电影是违法行为。在类似美国、英国和西欧等一些国家,你或许曾经收到过法律声明。也就是说,是否使用取决于你。已经警告过你了。
Popcorn Time 一些主要的特点:
* 使用 Torrent 在线观看电影和电视剧
* 有一个时尚的用户界面让你浏览可用的电影和电视剧资源
* 调整流媒体的质量
* 标记为稍后观看
* 下载为离线观看
* 可以默认开启字幕,改变字母尺寸等
* 使用键盘快捷键浏览
### 如何在 Ubuntu 和其它 Linux 发行版上安装 Popcorn Time
这篇教程以 Ubuntu 18.04 为例,但是你可以使用类似的说明,在例如 Linux Mint、Debian、Manjaro、Deepin 等 Linux 发行版上安装。
Popcorn Time 在 Deepin Linux 的软件中心中也可用。Manjaro 和 Arch 用户也可以轻松地使用 AUR 来安装 Popcorn Time。
接下来我们看看该如何在 Linux 上安装 Popcorn Time。事实上,这个过程非常简单。只需要按照说明操作,复制粘贴我提到的这些命令即可。
#### 第一步:下载 Popcorn Time
你可以从它的官网上安装 Popcorn Time。下载链接在它的主页上。
* [下载 Popcorn Time](https://popcorntime.sh/)
#### 第二步:安装 Popcorn Time
下载完成之后,就该使用它了。下载下来的是一个 tar 文件,在这些文件里面包含有一个可执行文件。你可以把 tar 文件提取在任何位置,[Linux 常把附加软件安装在](http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html) [/opt 目录](http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html)。
在 `/opt` 下创建一个新的目录:
```
sudo mkdir /opt/popcorntime
```
现在进入你下载文件的文件夹中,比如我把 Popcorn Time 下载到了主目录的 Downloads 目录下。
```
cd ~/Downloads
```
提取下载好的 Popcorn Time 文件到新创建的 `/opt/popcorntime` 目录下:
```
sudo tar Jxf Popcorn-Time-* -C /opt/popcorntime
```
#### 第三步:让所有用户可以使用 Popcorn Time
如果你想让系统中所有的用户都无需 `sudo` 就可以运行 Popcorn Time,你需要在 `/usr/bin` 目录下创建一个[符号链接(软链接)](https://en.wikipedia.org/wiki/Symbolic_link)指向这个可执行文件。
```
sudo ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time
```
#### 第四步:为 Popcorn Time 创建桌面启动器
到目前为止,一切顺利,但是你也许想要在应用菜单里看到 Popcorn Time,又或是想把它添加到最喜欢的应用列表里等。
为此,你需要创建一个桌面入口。
打开一个终端窗口,在 `/usr/share/applications` 目录下创建一个名为 `popcorntime.desktop` 的文件。
你可以使用任何[基于命令行的文本编辑器](https://itsfoss.com/command-line-text-editors-linux/)。Ubuntu 默认安装了 [Nano](https://itsfoss.com/nano-3-release/),所以你可以直接使用这个。
```
sudo nano /usr/share/applications/popcorntime.desktop
```
在里面插入以下内容:
```
[Desktop Entry]
Version = 1.0
Type = Application
Terminal = false
Name = Popcorn-Time
Exec = /usr/bin/Popcorn-Time
Icon = /opt/popcorntime/popcorn.png
Categories = Application;
```
如果你使用的是 Nano 编辑器,按 `Ctrl+X` 退出,当询问是否保存时,输入 `Y`,然后按回车保存并退出。
就快要完成了。最后一件事就是为 Popcorn Time 设置一个正确的图标。你可以下载一个 Popcorn Time 图标到 `/opt/popcorntime` 目录下,并命名为 `popcorn.png`。
你可以使用以下命令:
```
sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png
```
这样就 OK 了。现在你可以搜索 Popcorn Time 然后点击启动它了。

*在菜单里搜索 Popcorn Time*
第一次启动时,你必须接受这些条款和条件。

*接受这些服务条款*
一旦你完成这些,你就可以享受你的电影和电视节目了。

好了,这就是所有你在 Ubuntu 或者其他 Linux 发行版上安装 Popcorn Time 所需要的了。你可以直接开始看你最喜欢的影视节目了。
### 高效使用 Popcorn Time 的七个小贴士
现在你已经安装好了 Popcorn Time 了,我接下来将要告诉你一些有用的 Popcorn Time 技巧。我保证它会增强你使用 Popcorn Time 的体验。
#### 1、 使用高级设置
始终启用高级设置,它会给你更多调整 Popcorn Time 的选项。点击右上角的齿轮图标,在随后的界面中勾选高级设置。

#### 2、 在 VLC 或者其他播放器里观看影片
你知道你可以选择自己喜欢的播放器而不是 Popcorn Time 默认的播放器观看一个视频吗?当然,这个播放器必须已经安装在你的系统上了。
现在你可能会问为什么要使用其他的播放器。我的回答是:其他播放器可以弥补 Popcorn Time 默认播放器上的一些不足。
例如,如果一个文件的声音非常小,你可以使用 VLC 将音频声音增强 400%,你还可以[使用 VLC 同步不连贯的字幕](https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/)。你可以在播放文件之前在不同的媒体播放器之间进行切换。

#### 3、 将影片标记为稍后观看
只是浏览电影和电视节目,但是却没有时间和精力去看?这不是问题。你可以添加这些影片到书签里面,稍后可以在 Favorites 标签里面访问这些影片。这可以让你创建一个你想要稍后观看的列表。

#### 4、 检查 torrent 的信息和种子信息
像我之前提到的,你在 Popcorn Time 的观看体验依赖于 torrent 的速度。好消息是 Popcorn Time 显示了 torrent 的信息,因此你可以知道流媒体的速度。
你可以在文件上看到一个绿色/黄色/红色的点。绿色意味着有足够的种子,文件很容易播放。黄色意味着有中等数量的种子,应该可以播放。红色意味着只有非常少可用的种子,播放的速度会很慢甚至无法观看。

#### 5、 添加自定义字幕
如果你需要字幕,而 Popcorn Time 中没有你想要的语言,你可以从外部网站下载自定义字幕。下载 .srt 文件后,就可以在 Popcorn Time 中使用它:

你可以[用 VLC 自动下载字幕](https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/)。
#### 6、 保存文件离线观看
用 Popcorn Time 播放内容时,它会下载并暂时存储这些内容。当你关闭 APP 时,缓存会被清理干净。你可以更改这个操作,使得下载的文件可以保存下来供你未来使用。
在高级设置里面,向下滚动一点。找到缓存目录,你可以把它更改到其他像是 Downloads 目录,这下你即便关闭了 Popcorn Time,这些文件依旧可以观看。

#### 7、 拖放外部 torrent 文件立即播放
我猜你不知道这个操作。如果你没有在 Popcorn Time 发现某些影片,从你最喜欢的 torrent 网站下载 torrent 文件,打开 Popcorn Time,然后拖放这个 torrent 文件到 Popcorn Time 里面。它将会立即播放文件,当然这个取决于种子。这次你不需要在观看前下载整个文件了。
当你拖放文件到 Popcorn Time 后,它将会给你对应的选项,选择它应该播放的。如果里面有字幕,它会自动播放,否则你需要添加外部字幕。

在 Popcorn Time 里面有很多的功能,但是我决定就此打住,剩下的就由你自己来探索吧。我希望你能发现更多 Popcorn Time 有用的功能和技巧。
我再提醒一遍,使用 Torrents 在很多国家是违法的。
---
via: <https://itsfoss.com/popcorn-time-ubuntu-linux/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[dianbanjiu](https://github.com/dianbanjiu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | **Brief: This tutorial shows you how to install Popcorn Time on Ubuntu and other Linux distributions. Some handy Popcorn Time tips have also been discussed.**
Popcorn Time is an open source [Netflix](https://netflix.com/) inspired [torrent](https://en.wikipedia.org/wiki/Torrent_file) streaming application for Linux, Mac
With the regular torrents, you have to wait for the download to finish before you could watch the videos.
Popcorn Time is different. It uses torrent underneath but allows you to start watching the videos (almost) immediately. It’s like you are watching videos on streaming websites like YouTube or Netflix. You don’t have to wait for the download to finish here.

If you want to watch movies online without those creepy ads, Popcorn Time is a good alternative. Keep in mind that the streaming quality depends on the number of available seeds.
Popcorn Time also provides a nice user interface where you can browse through available movies, tv-series and other contents. If you ever used [Netflix on Linux](https://itsfoss.com/netflix-firefox-linux/), you will find it’s somewhat a similar experience.
Using torrent to download movies is illegal in several countries where there are strict laws against piracy. In countries like the USA, UK and West European you may even get legal notices. That said, it’s up to you to decide if you want to use it or not. You have been warned.
(If you still want to take the risk and use Popcorn Time, you should use a VPN service like [Ivacy](https://shop.itsfoss.com/sales/ivacy-lifetime-subscription) that has been specifically designed for using Torrents and protecting your identity. Even then it’s not always easy to avoid the snooping authorities.)
Some of the main features of Popcorn Time are:
- Watch movies and TV Series online using Torrent
- A sleek user interface lets you browse the available movies and TV series
- Change streaming quality
- Bookmark content for watching later
- Download content for offline viewing
- Ability to enable subtitles by default, change the subtitles size etc
- Keyboard shortcuts to navigate through Popcorn Time
## How to install Popcorn Time on Ubuntu and other Linux Distributions
I am using Ubuntu 18.04 in this tutorial but you can use the same instructions for other Linux distributions such as Linux Mint, Debian etc.
Popcorn Time is available in the software center for Deepin Linux users. [Manjaro](https://manjaro.org/) and Arch users can easily install Popcorn Time using [AUR](https://itsfoss.com/best-aur-helpers/).
Let’s see how to install Popcorn time on Linux. It’s really easy actually. Simply follow the instructions and copy paste the commands I have mentioned.
### Step 0: Install dependencies
To avoid errors like “Popcorn-Time: [error while loading shared libraries](https://itsfoss.com/solve-open-shared-object-file-quick-tip/):
`sudo apt update && sudo apt install libcanberra-gtk-module libgconf-2-4 libatomic1`
### Step 1: Download Popcorn Time
Attention!
Please note that due to legal troubles, Popcorn Time website keeps on changing its web address. It’s possible that while you are reading, the project might have moved to some other URL. In that case, please search the internet and download the software from a reliable source.
You can download Popcorn Time from its official website. The download link is present on the homepage itself.
Note: Popcorn Time 4.4 is now available. If you were using Popcorn Time 3 previously, the older version stops working. I advise that you remove the files from /opt/popcorntime and redo this tutorial.
### Step 2: Install Popcorn Time
Once you have downloaded Popcorn Time, it’s time to use it. The downloaded file is a tar file that consists of an executable among other files. While you can extract this tar file anywhere, the [Linux convention is to install additional software in ](http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html)[opt directory.](http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/opt.html)
Create a new directory in /opt:
`sudo mkdir /opt/popcorntime`
Now go to the Downloads directory.
`cd ~/Downloads`
The new Popcorn Time 4.4 is available in zip format. [Unzip the file to the specified directory in Linux terminal](https://itsfoss.com/unzip-linux/) like this:
`sudo unzip Popcorn-Time-0.4.4-linux64.zip -d /opt/popcorntime/`
As a few readers have notified that Popcorn Time 4.4 is not working on their systems (it is fine on my system), I advise you to install the dependencies mentioned in step 0 and run **sudo apt-get -f install** afterwards.
### Step 3: Make Popcorn Time accessible for everyone
You would want every user on your system to be able to run Popcorn Time without sudo access, right? To do that, you need to [create a symbolic link](https://linuxhandbook.com/symbolic-link-linux/) to the executable in /
`sudo ln -sf /opt/popcorntime/Popcorn-Time /usr/bin/Popcorn-Time`
### Step 4: Create desktop launcher for Popcorn Time
So far so good. But you would also like to see Popcorn Time in the application menu, add it to your favorite application list etc.
For that, you need to create a desktop entry.
Open a terminal and create a new file named popcorntime.desktop in /usr/share/applications.
You can use any [command line based text editor](https://itsfoss.com/command-line-text-editors-linux/). Ubuntu has [Nano](https://itsfoss.com/nano-3-release/) installed by default so you can use that.
`sudo nano /usr/share/applications/popcorntime.desktop`
Insert the following lines here:
```
[Desktop Entry]
Version = 1.0
Type = Application
Terminal = false
Name = Popcorn Time
Exec = /usr/bin/Popcorn-Time
Icon = /opt/popcorntime/popcorn.png
Categories = Application;
```
If you used Nano editor, save it using shortcut Ctrl+X. When asked for saving, enter Y and then press enter again to save and exit.
We are almost there. One last thing to do here is to have the correct icon for Popcorn Time. For that
You can do that using the command below:
`sudo wget -O /opt/popcorntime/popcorn.png https://upload.wikimedia.org/wikipedia/commons/d/df/Pctlogo.png`
That’s it. Now you can search for Popcorn Time and click on it to launch it.

Troubleshoot: If you don’t find the Popcorn Time in system menu immediately, run Popcorn-Time in the terminal.
On the first launch, you’ll have to accept the terms and conditions.

Once you do that, you can enjoy the movies and TV shows.

Well, that’s all you needed to install Popcorn Time on Ubuntu or any other Linux distribution. You can start watching your favorite movies straightaway.
However, if you are interested, I would suggest reading these Popcorn Time tips to get more out of it.


## Removing Popcorn Time from your system
To remove Popcorn Time from your system, you have to undo everything you did to install it. If you are familiar with Linux command line, you should be able to do it easily. I’ll provide the commands anyway.
**Please be extra careful while entering the commands to delete the files. I have recommended using the -i option of rm command so that it asks for confirmation before removing the files.**
Remove the link in the bin directory:
`sudo rm -i /usr/bin/Popcorn-Time`
And then remove the directory where Popcorn Time files are stored. The command below makes you switch to the /opt directory first and then remove the directory popcorntime. This reduces the risk of removing wrong files.
Run these commands one by one:
```
cd /opt
sudo rm -ri popcorntime
```
You should also remove the desktop file you created for the system integration by running the commands below one by one:
```
cd /usr/share/applications
sudo rm -i popcorntime.desktop
```
This should help you delete Popcorn Time from your system if you had installed it using the exact same steps mentioned in the tutorial here.
## 7 Tips for using Popcorn Time effectively
Now that you have installed Popcorn Time, I am going to tell you some nifty Popcorn Time tricks. I assure you that it will enhance your experience with Popcorn Time multiple folds.
### 1. Use advanced settings
Always have the advanced settings enabled. It gives you more options to tweak Popcorn Time. Go to the top right corner and click on the gear symbol. Click on it and check advanced settings on the next screen.
### 2. Watch the movies in VLC or other players
Did you know that you can choose to watch a file in your preferred media player instead of the default Popcorn Time player? Of course, that media player should have been installed in the system.
Now you may ask why would one want to use another player. And my answer is because other players like VLC has hidden features which you might not find in the Popcorn Time player.
For example, if a file has very low volume, you can use VLC to enhance the audio by 400 percent. You can also [synchronize incoherent subtitles with VLC](https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/). You can switch between media players before you start to play a file:
### 3. Bookmark movies and watch it later
Just browsing through movies and TV series but don’t have time or mood to watch those? No issues. You can add the movies to the bookmark and can access these bookmarked videos from the Favorites tab. This enables you to create a list of movies you would watch later.
### 4. Check torrent health and seed information
As I had mentioned earlier, your viewing experience in Popcorn Times depends on torrent speed. Good thing is that Popcorn time shows the health of the torrent file so that you can be aware of the streaming speed.
You will see a green/yellow/red dot on the file. Green means there are plenty of seeds and the file will stream easily. Yellow means
### 5. Add custom subtitles
If you need subtitles and it is not available in your preferred language, you can add custom subtitles downloaded from external websites. Get the .srt files and use it inside Popcorn Time:
This is where VLC comes handy as you can [download subtitles automatically with VLC](https://itsfoss.com/download-subtitles-automatically-vlc-media-player-ubuntu/).
### 6. Save the files for offline viewing
When Popcorn Times stream a content, it downloads it and
In the advanced settings, scroll down a bit. Look for
### 7. Drag and drop external torrent files to play immediately
I bet you did not know about this one. If you don’t find a certain movie on Popcorn Time, download the torrent file from your favorite torrent website. Open Popcorn Time and just drag and drop the torrent file in Popcorn Time. It will start playing the file, depending upon seeds. This way, you don’t need to download the entire file before watching it.
When you drag and drop the torrent file in Popcorn Time, it will give you the option to choose which video file should it play. If there are subtitles in it, it will play automatically or else, you can add external subtitles.
There are plenty of other features in Popcorn Time. But I’ll stop with my list here and let you explore Popcorn Time on Ubuntu Linux. I hope you find these Popcorn Time tips and tricks useful.
I am repeating again. Using Torrents is illegal in many countries. If you do that, take precaution and use a VPN service. If you are looking for my recommendation, you can go for [Swiss-based privacy company ProtonVPN](https://proton.go2cloud.org/SH3p) (of [ProtonMail](https://itsfoss.com/protonmail/) fame). Singapore based [Ivacy](https://billing.ivacy.com/page/23628) is another good option. If you think these are expensive, you can look for [cheap VPN deals on It’s FOSS Shop](https://shop.itsfoss.com/search?utf8=%E2%9C%93&query=vpn).
*Note: This article contains affiliate links. Please read our affiliate policy.* |
10,082 | WinWorld:大型的废弃操作系统、软件、游戏的博物馆 | https://www.ostechnix.com/winworld-a-large-collection-of-defunct-oss-software-and-games/ | 2018-10-05T18:14:04 | [
"软件",
"WinWorld"
] | https://linux.cn/article-10082-1.html | 
有一天,我正在测试 Dosbox – 这是一个[在 Linux 平台上运行 MS-DOS 游戏与程序的软件](https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/)。当我在搜索一些常用的软件,例如 Turbo C++ 时,我意外留意到了一个叫做 [WinWorld](https://winworldpc.com/library/) 的网站。我查看了这个网站上的某些内容,并且着实被惊艳到了。WinWorld 收集了非常多经典的,但已经被它们的开发者所抛弃许久的操作系统、软件、应用、开发工具、游戏以及各式各样的工具。它是一个以保存和分享古老的、已经被废弃的或者预发布版本程序为目的的线上博物馆,由社区成员和志愿者运营。
WinWorld 于 2013 年开始运营。它的创始者声称是受 Yahoo Briefcase 的启发而构建了这个网站。这个网站最初的目标是保存并分享老旧软件。多年来,许多志愿者以不同方式提供了帮助,WinWorld 收集的老旧软件增长迅速。整个 WinWorld 仓库都是自由开源的,所有人都可以使用。
### WinWorld 保存了大量的废弃操作系统、软件、系统应用以及游戏
就像我刚才说的那样, WinWorld 存储了大量的被抛弃并且不再被开发的软件。
**Linux 与 Unix:**
这里我给出了完整的 UNIX 和 LINUX 操作系统的列表,以及它们各自的简要介绍、首次发行的年代。
* **A/UX** - 于 1988 年推出,移植到苹果的 68k Macintosh 平台的 Unix 系统。
* **AIX** - 于 1986 年推出,IBM 移植的 Unix 系统。
* **AT&T System V Unix** - 于 1983 年推出,最早的商业版 Unix 之一。
* **Banyan VINES** - 于 1984 年推出,专为 Unix 设计的网络操作系统。
* **Corel Linux** - 于 1999 年推出,商业 Linux 发行版。
* **DEC OSF/1** - 于 1991 年推出,由 DEC 公司开发的 Unix 版本。
* **Digital UNIX** - 由 DEC 于 1995 年推出,**OSF/1** 的重命名版本。
* **FreeBSD 1.0** - 于 1993 年推出,FreeBSD 的首个发行版。这个系统是基于 4.3BSD 开发的。
* **Gentus Linux** - 由 ABIT 于 2000 年推出,未遵守 GPL 协议的 Linux 发行版。
* **HP-UX** - 于 1992 年推出,UNIX 的变种系统。
* **IRIX** - 由硅谷图形公司(SGI)于 1988 年推出的操作系统。
* **Lindows** - 于 2002 年推出,与 Corel Linux 类似的商业操作系统。
* **Linux Kernel** - 0.01 版本于 90 年代早期推出,Linux 源代码的副本。
* **Mandrake Linux** - 于 1999 年推出。基于 Red Hat Linux 的 Linux 发行版,稍后被重新命名为 Mandriva。
* **NEWS-OS** - 由 Sony 于 1989 年推出,BSD 的变种。
* **NeXTStep** - 由史蒂夫·乔布斯创立的 NeXT 公司于 1987 年推出,基于 Unix 的操作系统。
* **PC/IX** - 于 1984 年推出,为 IBM 个人电脑服务的基于 Unix 的操作系统。
* **Red Hat Linux 5.0** - 由 Red Hat 推出,商业 Linux 发行版。
* **Sun Solaris** - 由 Sun Microsystem 于 1992 年推出,基于 Unix 的操作系统。
* **SunOS** - 由 Sun Microsystem 于 1982 年推出,衍生自 BSD 基于 Unix 的操作系统。
* **Tru64 UNIX** - 由 DEC 开发,旧称 OSF/1。
* **Ubuntu 4.10** - 基于 Debian 的知名操作系统。这是早期的 beta 预发布版本,比第一个 Ubuntu 正式发行版更早推出。
* **Ultrix** - 由 DEC 开发, UNIX 克隆。
* **UnixWare** - 由 Novell 推出, UNIX 变种。
* **Xandros Linux** - 首个版本于 2003 年推出。基于 Corel Linux 的专有 Linux 发行版。
* **Xenix** - 最初由微软于 1984 推出,UNIX 变种操作系统。
不仅仅是 Linux/Unix,你还能找到例如 DOS、Windows、Apple/Mac、OS 2、Novell netware 等其他的操作系统与 shell。
**DOS & CP/M:**
* 86-DOS
* Concurrent CPM-86 & Concurrent DOS
* CP/M 86 & CP/M-80
* DOS Plus
* DR-DOS
* GEM
* MP/M
* MS-DOS
* 多任务的 MS-DOS 4.00
* 多用户 DOS
* PC-DOS
* PC-MOS
* PTS-DOS
* Real/32
* Tandy Deskmate
* Wendin DOS
**Windows:**
* BackOffice Server
* Windows 1.0/2.x/3.0/3.1/95/98/2000/ME/NT 3.X/NT 4.0
* Windows Whistler
* WinFrame
**Apple/Mac:**
* Mac OS 7/8/9
* Mac OS X
* System Software (0-6)
**OS/2:**
* Citrix Multiuser
* OS/2 1.x
* OS/2 2.0
* OS/2 3.x
* OS/2 Warp 4
于此同时,WinWorld 也收集了大量的旧软件、系统应用、开发工具和游戏。你也可以一起看看它们。
说实话,这个网站列出的绝大部分东西,我甚至都不知道它们存在过。其中列出的某些工具发布于我出生之前。
如果您需要或者打算去测试一个经典的程序(例如游戏、软件、操作系统),并且在其他地方找不到它们,那么来 WinWorld 资源库看看,下载它们然后开始你的探险吧。祝您好运!

**免责声明:**
OSTechNix 并非隶属于 WinWorld。我们 OSTechNix 并不确保 WinWorld 站点存储数据的真实性与可靠性。而且在你所在的地区,或许从第三方站点下载软件是违法行为。本篇文章作者和 OSTechNix 都不会承担任何责任,使用此服务意味着您将自行承担风险。(LCTT 译注:本站和译者亦同样申明。)
本篇文章到此为止。希望这对您有用,更多的好文章即将发布,敬请期待!
谢谢各位的阅读!
---
via: <https://www.ostechnix.com/winworld-a-large-collection-of-defunct-oss-software-and-games/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[thecyanbird](https://github.com/thecyanbird) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,083 | ScreenCloud:一个增强的截屏程序 | https://itsfoss.com/screencloud-app/ | 2018-10-05T18:40:01 | [
"截屏"
] | https://linux.cn/article-10083-1.html | [ScreenCloud](https://screencloud.net)是一个很棒的小程序,你甚至不知道你需要它。桌面 Linux 的默认屏幕截图流程很好(`PrtScr` 按钮),我们甚至有一些[强大的截图工具](https://itsfoss.com/take-screenshot-linux/),如 [Shutter](http://shutter-project.org)。但是,ScreenCloud 有一个非常简单但非常方便的功能,让我爱上了它。在我们深入它之前,让我们先看一个背景故事。
我截取了很多截图,远超常人。收据、注册详细信息、开发工作、文章中程序的截图等等。我接下来要做的就是打开浏览器,浏览我最喜欢的云存储并将重要的内容转储到那里,以便我可以在手机上以及 PC 上的多个操作系统上访问它们。这也让我可以轻松与我的团队分享我正在使用的程序的截图。
我对这个标准的截图流程没有抱怨,打开浏览器并登录我的云,然后手动上传屏幕截图,直到我遇到 ScreenCloud。
### ScreenCloud
ScreenCloud 是跨平台的程序,它提供轻松的屏幕截图功能和灵活的[云备份选项](https://itsfoss.com/cloud-services-linux/)管理。这包括使用你自己的 [FTP 服务器](https://itsfoss.com/set-ftp-server-linux/)。

ScreenCloud 很顺滑,在细节上投入了大量的精力。它为你提供了非常容易记住的热键来捕获全屏、活动窗口或鼠标选择区域。

*ScreenCloud 的默认键盘快捷键*
截取屏幕截图后,你可以设置 ScreenCloud 如何处理图像或直接将其上传到你选择的云服务。它甚至支持 SFTP。截图上传后(通常在几秒钟内),图像链接就会被自动复制到剪贴板,这让你可以轻松共享。

你还可以使用 ScreenCloud 进行一些基本编辑。为此,你需要将 “Save to” 设置为 “Ask me”。此设置可以在应用指示器菜单中找到,而且通常是默认设置。这样设置之后,当你截取屏幕截图时,就会看到编辑文件的选项。在这里,你可以在屏幕截图中添加箭头、文本和数字。

*用 ScreenCloud 编辑截屏*
### 在 Linux 上安装 ScreenCloud
ScreenCloud 可在 [Snap 商店](https://snapcraft.io/)中找到。因此,你可以通过访问 [Snap 商店](https://snapcraft.io/screencloud)或运行以下命令,轻松地将其安装在 Ubuntu 和其他[启用 Snap](https://itsfoss.com/install-snap-linux/) 的发行版上。
```
sudo snap install screencloud
```
对于无法通过 Snap 安装程序的 Linux 发行版,你可以[在这里](https://screencloud.net)下载 AppImage。进入下载文件夹,右键单击并在那里打开终端。然后运行以下命令。
```
sudo chmod +x ScreenCloud-v1.4.0-x86_64.AppImage
```
然后,你可以通过双击下载的文件来启动程序。

### 总结
ScreenCloud 适合所有人吗?可能不会。它比默认的截屏方式更好吗?可能是。想想看,如果你正在对某些内容截屏,那么它很可能是重要的,或者是你想分享的内容。ScreenCloud 可以更轻松、更快速地备份或共享屏幕截图。所以,如果你想要这些功能,你应该试试 ScreenCloud。
欢迎在用下面的评论栏提出你的想法和意见。还有不要忘记与朋友分享这篇文章。干杯。
---
via: <https://itsfoss.com/screencloud-app/>
作者:[Aquil Roshan](https://itsfoss.com/author/aquil/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | [ScreenCloud](https://screencloud.net) is an amazing little app, that you don’t even know you need. The default screenshot procedure on desktop Linux is great (Prt Scr Button) and we even have some[ powerful screenshot utilities ](https://itsfoss.com/take-screenshot-linux/)like [Shutter](http://shutter-project.org). But ScreenCloud brings one more really simple yet really convenient feature that I just fell in love with. But before we get into it, let’s catch a little backstory.
I take a lot of screenshots. A lot more than average. Receipts, registration details, development work, screenshots of applications for articles, and lot more. The next thing I do is open a browser, browse to my favorite cloud storage and dump the important ones there so that I can access them on my phone and also across multiple operating systems on my PC. This also allows me to easily share screenshots of the apps that I’m working on with my team.
I had no complaints with this standard procedure of taking screenshots, opening a browser and logging into my cloud and then uploading the screenshots manually, until I came across ScreenCloud.
## ScreenCloud
ScreenCloud is cross-platform utility that provides easy screenshot capture and management along with flexible [cloud backup options](https://itsfoss.com/cloud-services-linux/), including your own [FTP server](https://itsfoss.com/set-ftp-server-linux/).

ScreenCloud is really streamlined with a lot of attention given to the smaller things. It provides you very easy to remember hotkeys to capture the full screen, the active window or capture an area selected with the mouse.

Once a screenshot is taken, you can either set ScreenCloud to ask what to do with the image or to directly upload it the cloud service of your choice. Even SFTP is supported. Once the screenshot it uploaded, generally within a couple of seconds, the link to the image is automatically copied to the clipboard, which you can easily share.

You can also do some basic editing with ScreenCloud. For this, you should have “Save to” set to “Ask me”. This setting is available from the application indicator and is usually set by default. With this, when you take a screenshot, you’ll see the option for editing the file. Here, you can add arrows, text and numbers to the screenshot.

Unfortunately, you will not get the ability to annotate as offered by [Flameshot](https://itsfoss.com/flameshot/), which happens to be a popular tool for screenshots on Linux. So, if you just want a simple screenshot tool that lets you upload to the cloud/FTP with basic selection feature, this should be a good pick for you.
You can explore more about it through their official website or the [GitHub page](https://github.com/olav-st/screencloud).
## Installing ScreenCloud on Linux
ScreenCloud is available in the [Snap store](https://snapcraft.io/screencloud). So you can easily install it on Ubuntu and other [Snap enabled](https://itsfoss.com/install-snap-linux/) distros by through the snap store or just type in the command below:
`sudo snap install screencloud`
In case you didn’t know, you can follow our guide on [using snaps on Linux](https://itsfoss.com/use-snap-packages-ubuntu-16-04/) to get help.
If you don’t want to use Snap packages, you can download the AppImage file from its [official website](https://screencloud.net/). We also have a guide for [using AppImage on Linux](https://screencloud.net/) if you’re curious.

## Wrapping up
Is ScreenCloud for everybody? Probably not. Is it better than the default screenshot? Probably yes. See if you’re taking a screenshot of something, then chances are, that it’s probably important or you intend to share it. ScreenCloud makes backing up or sharing that screenshot easier and considerably faster. So yeah, you should give ScreenCloud a try if you want these features.
Your thoughts and comments are always welcome, use the comments section below. And don’t forget to share this article with your friends if you found it useful. |
10,084 | 在 Linux 中使用 Wondershaper 限制网络带宽 | https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/ | 2018-10-05T18:53:13 | [
"带宽"
] | https://linux.cn/article-10084-1.html | 
以下内容将向你介绍如何轻松对网络带宽做出限制,并在类 Unix 操作系统中对网络流量进行优化。通过限制网络带宽,可以节省应用程序不必要的带宽消耗,包括软件包管理器(pacman、yum、apt)、web 浏览器、torrent 客户端、下载管理器等,并防止单个或多个用户滥用网络带宽。在本文当中,将会介绍 Wondershaper 这一个实用的命令行程序,这是我认为限制 Linux 系统 Internet 或本地网络带宽的最简单、最快捷的方式之一。
请注意,Wondershaper 只能限制本地网络接口的传入和传出流量,而不能限制路由器或调制解调器的接口。换句话说,Wondershaper 只会限制本地系统本身的网络带宽,而不会限制网络中的其它系统。因此 Wondershaper 主要用于限制本地系统中一个或多个网卡的带宽。
下面来看一下 Wondershaper 是如何优化网络流量的。
### 在 Linux 中使用 Wondershaper 限制网络带宽
`wondershaper` 是一个用于限制系统网卡网络带宽的简单脚本。它使用了 iproute 的 `tc` 命令,但大大简化了操作过程。
#### 安装 Wondershaper
使用 `git clone` 克隆 Wondershaper 的版本库就可以安装最新版本:
```
$ git clone https://github.com/magnific0/wondershaper.git
```
按照以下命令进入 `wondershaper` 目录并安装:
```
$ cd wondershaper
$ sudo make install
```
然后执行以下命令,可以让 `wondershaper` 在每次系统启动时都自动开始服务:
```
$ sudo systemctl enable wondershaper.service
$ sudo systemctl start wondershaper.service
```
如果你不强求安装最新版本,也可以使用软件包管理器(官方和非官方均可)来进行安装。
`wondershaper` 在 [Arch 用户软件仓库](https://aur.archlinux.org/packages/wondershaper-git/)(Arch User Repository,AUR)中可用,所以可以使用类似 [yay](https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/) 这些 AUR 辅助软件在基于 Arch 的系统中安装 `wondershaper` 。
```
$ yay -S wondershaper-git
```
对于 Debian、Ubuntu 和 Linux Mint 可以使用以下命令安装:
```
$ sudo apt-get install wondershaper
```
对于 Fedora 可以使用以下命令安装:
```
$ sudo dnf install wondershaper
```
对于 RHEL、CentOS,只需要启用 EPEL 仓库,就可以使用以下命令安装:
```
$ sudo yum install epel-release
$ sudo yum install wondershaper
```
在每次系统启动时都自动启动 `wondershaper` 服务。
```
$ sudo systemctl enable wondershaper.service
$ sudo systemctl start wondershaper.service
```
#### 用法
首先需要找到网络接口的名称,通过以下几个命令都可以查询到网卡的详细信息:
```
$ ip addr
$ route
$ ifconfig
```
在确定网卡名称以后,就可以按照以下的命令限制网络带宽:
```
$ sudo wondershaper -a <adapter> -d <rate> -u <rate>
```
例如,如果网卡名称是 `enp0s8`,并且需要把上行、下行速率分别限制为 1024 Kbps 和 512 Kbps,就可以执行以下命令:
```
$ sudo wondershaper -a enp0s8 -d 1024 -u 512
```
其中参数的含义是:
* `-a`:网卡名称
* `-d`:下行带宽
* `-u`:上行带宽
如果要对网卡解除网络带宽的限制,只需要执行:
```
$ sudo wondershaper -c -a enp0s8
```
或者:
```
$ sudo wondershaper -c enp0s8
```
如果系统中有多个网卡,为确保稳妥,需要按照上面的方法手动设置每个网卡的上行、下行速率。
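另外,由于 wondershaper 底层调用的是 iproute 的 `tc` 命令,如果想确认限速规则是否已经生效,可以直接用 `tc` 查看对应网卡上的队列规则。下面仅为示意,以 `enp0s8` 为例,实际输出会因系统而异:

```shell
# 查看网卡上当前的队列规则(qdisc),限速生效时可以看到相应的整形规则
tc qdisc show dev enp0s8

# 查看各分类(class)的统计信息,包括已发送的字节数等
tc -s class show dev enp0s8
```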
如果你是通过 `git clone` 克隆 GitHub 版本库的方式安装 Wondershaper,那么在 `/etc/conf.d/` 目录中会存在一个名为 `wondershaper.conf` 的配置文件,修改这个配置文件中的相应值(包括网卡名称、上行速率、下行速率),也可以设置上行或下行速率。
```
$ sudo nano /etc/conf.d/wondershaper.conf
[wondershaper]
# Adapter
#
IFACE="eth0"
# Download rate in Kbps
#
DSPEED="2048"
# Upload rate in Kbps
#
USPEED="512"
```
Wondershaper 使用前:

Wondershaper 使用后:

可以看到,使用 Wondershaper 限制网络带宽之后,下行速率与限制之前相比已经大幅下降。
执行以下命令可以查看更多相关信息。
```
$ wondershaper -h
```
也可以查看 Wondershaper 的用户手册:
```
$ man wondershaper
```
根据测试,Wondershaper 按照上面的方式可以有很好的效果。你可以试用一下,然后发表你的看法。
---
via: <https://www.ostechnix.com/how-to-limit-network-bandwidth-in-linux-using-wondershaper/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,085 | 简化 Django 开发的八个 Python 包 | https://opensource.com/article/18/9/django-packages | 2018-10-05T21:41:14 | [
"Django"
] | https://linux.cn/article-10085-1.html |
>
> 这个月的 Python 专栏将介绍一些 Django 包,它们有益于你的工作,以及你的个人或业余项目。
>
>
>

Django 开发者们,在这个月的 Python 专栏中,我们会介绍一些能帮助你们的软件包。这些软件包是我们最喜欢的 [Django](https://www.djangoproject.com/) 库,能够节省开发时间,减少样板代码,通常来说,这会让我们的生活更加轻松。我们为 Django 应用准备了六个包,为 Django 的 REST 框架准备了两个包。几乎所有我们的项目里,都用到了这些包,真的,不是说笑。
不过在继续阅读之前,请先看看我们关于[让 Django 管理后台更安全](https://opensource.com/article/18/1/10-tips-making-django-admin-more-secure)的几个提示,以及这篇关于 [5 个最受欢迎的开源 Django 包](https://opensource.com/business/15/12/5-favorite-open-source-django-packages) 的文章。
### 有用又省时的工具集合:django-extensions
[django-extensions](https://django-extensions.readthedocs.io/en/latest/) 这个 Django 包非常受欢迎,全是有用的工具,比如下面这些管理命令:
* `shell_plus` 打开 Django 的管理 shell,这个 shell 已经自动导入了所有的数据库模型。在测试复杂的数据关系时,就不需要再从几个不同的应用里做导入操作了。
* `clean_pyc` 删除项目目录下所有位置的 `.pyc` 文件。
* `create_template_tags` 在指定的应用下,创建模板标签的目录结构。
* `describe_form` 输出模型的表单定义,可以粘贴到 `forms.py` 文件中。(需要注意的是,这种方法创建的是普通 Django 表单,而不是模型表单。)
* `notes` 输出你项目里所有带 TODO、FIXME 等标记的注释。
Django-extensions 还包括几个有用的抽象基类,在定义模型时,它们能满足常见的模式。当你需要以下模型时,可以继承这些基类:
* `TimeStampedModel`:这个模型的基类包含了 `created` 字段和 `modified` 字段,还有一个 `save()` 方法,在适当的场景下,该方法自动更新 `created` 和 `modified` 字段的值。
* `ActivatorModel`:如果你的模型需要像 `status`、`activate_date` 和 `deactivate_date` 这样的字段,可以使用这个基类。它还自带了一个启用 `.active()` 和 `.inactive()` 查询集的 manager。
* `TitleDescriptionModel` 和 `TitleSlugDescriptionModel`:这两个基类都包括了 `title` 和 `description` 字段,后者还额外包括一个 `slug` 字段,它会根据 `title` 字段自动生成。
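举个例子,一个继承 `TimeStampedModel` 的模型定义大致如下(仅为示意片段,需要放在安装了 django-extensions 的 Django 项目里,`Article` 及其字段都是虚构的):

```python
# models.py(示意片段)
from django.db import models
from django_extensions.db.models import TimeStampedModel


class Article(TimeStampedModel):
    """继承 TimeStampedModel 后自动获得 created 和 modified 字段,
    save() 时这两个字段的值会被自动维护,无需手动处理。"""
    title = models.CharField(max_length=200)
    body = models.TextField()
```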
django-extensions 还有其他更多的功能,也许对你的项目有帮助,所以,去浏览一下它的[文档](https://django-extensions.readthedocs.io/)吧!
### 12 因子应用的配置:django-environ
在 Django 项目的配置方面,[django-environ](https://django-environ.readthedocs.io/en/latest/) 提供了符合 [12 因子应用](https://www.12factor.net/) 方法论的管理方法。它是另外一些库的集合,包括 [envparse](https://github.com/rconradharris/envparse) 和 [honcho](https://github.com/nickstenning/honcho) 等。安装了 django-environ 之后,在项目的根目录创建一个 `.env` 文件,用这个文件去定义那些随环境不同而不同的变量,或者需要保密的变量。(比如 API 密钥,是否启用调试,数据库的 URL 等)
然后,在项目的 `settings.py` 中引入 `environ`,并参考[官方文档的例子](https://django-environ.readthedocs.io/)设置好 `environ.PATH()` 和 `environ.Env()`。就可以通过 `env('VARIABLE_NAME')` 来获取 `.env` 文件中定义的变量值了。
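下面是一个 `settings.py` 的配置片段示意(假设 `.env` 中定义了 `DEBUG`、`SECRET_KEY` 和 `DATABASE_URL`,这些变量名只是示例,具体的路径处理请以官方文档为准):

```python
# settings.py(配置片段示意)
import environ

env = environ.Env(
    DEBUG=(bool, False),  # 为变量指定类型和默认值
)
environ.Env.read_env()  # 读取项目根目录下的 .env 文件

DEBUG = env("DEBUG")
SECRET_KEY = env("SECRET_KEY")  # 保密信息不写进代码仓库
DATABASES = {
    # DATABASE_URL 形如 postgres://user:pass@host:5432/dbname
    "default": env.db("DATABASE_URL"),
}
```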
### 创建出色的管理命令:django-click
[django-click](https://github.com/GaretJax/django-click) 是基于 [Click](http://click.pocoo.org/5/) 的(我们[之前推荐过](https://opensource.com/article/18/9/python-libraries-side-projects) Click,而且推荐过[两次](https://opensource.com/article/18/5/3-python-command-line-tools)),它对编写 Django 管理命令很有帮助。这个库没有多少文档,但是代码仓库中有个存放[测试命令](https://github.com/GaretJax/django-click/tree/master/djclick/test/testprj/testapp/management/commands)的目录,非常有参考价值。django-click 基本的 Hello World 命令是这样写的:
```
# app_name.management.commands.hello.py
import djclick as click
@click.command()
@click.argument('name')
def command(name):
click.secho(f'Hello, {name}')
```
在命令行下调用它,这样执行即可:
```
>> ./manage.py hello Lacey
Hello, Lacey
```
### 处理有限状态机:django-fsm
[django-fsm](https://github.com/viewflow/django-fsm) 给 Django 的模型添加了有限状态机的支持。如果你管理一个新闻网站,想用类似于“写作中”、“编辑中”、“已发布”来流转文章的状态,django-fsm 能帮你定义这些状态,还能管理状态变化的规则与限制。
Django-fsm 为模型提供了 FSMField 字段,用来定义模型实例的状态。用 django-fsm 的 `@transition` 修饰符,可以定义状态变化的方法,并处理状态变化的任何副作用。
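以上面提到的新闻网站为例,状态流转的模型定义大致可以这样写(仅为示意片段,模型名、状态名都是虚构的,需要放在实际的 Django 项目里):

```python
# models.py(django-fsm 示意片段)
from django.db import models
from django_fsm import FSMField, transition


class Article(models.Model):
    # FSMField 保存文章当前的状态,初始为“写作中”(draft)
    state = FSMField(default="draft")

    # 只允许从 draft 流转到 editing(编辑中)
    @transition(field=state, source="draft", target="editing")
    def submit_for_editing(self):
        pass  # 这里可以处理状态变化的副作用,比如给编辑发通知

    # 只有处于 editing 状态的文章才能发布
    @transition(field=state, source="editing", target="published")
    def publish(self):
        pass
```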
虽然 django-fsm 文档很轻量,不过 [Django 中的工作流(状态)](https://gist.github.com/Nagyman/9502133) 这篇 GitHub Gist 对有限状态机和 django-fsm 做了非常好的介绍。
### 联系人表单:django-contact-form
联系人表单可以说是网站的标配。但是不要自己去写全部的样板代码,用 [django-contact-form](https://django-contact-form.readthedocs.io/en/1.5/) 在几分钟内就可以搞定。它带有一个可选的能过滤垃圾邮件的表单类(也有不过滤的普通表单类)和一个 `ContactFormView` 基类,基类的方法可以覆盖或自定义修改。而且它还能引导你完成模板的创建,好让表单正常工作。
### 用户注册和认证:django-allauth
[django-allauth](https://django-allauth.readthedocs.io/en/latest/) 是一个 Django 应用,它为用户注册、登录/注销、密码重置,还有第三方用户认证(比如 GitHub 或 Twitter)提供了视图、表单和 URL,支持邮件地址作为用户名的认证方式,而且有大量的文档记录。第一次用的时候,它的配置可能会让人有点晕头转向;请仔细阅读[安装说明](https://django-allauth.readthedocs.io/en/latest/installation.html),在[自定义你的配置](https://django-allauth.readthedocs.io/en/latest/configuration.html)时要专注,确保启用某个功能的所有配置都用对了。
### 处理 Django REST 框架的用户认证:django-rest-auth
如果 Django 开发中涉及到对外提供 API,你很可能用到了 [Django REST Framework](http://www.django-rest-framework.org/)(DRF)。如果你在用 DRF,那么你应该试试 [django-rest-auth](https://django-rest-auth.readthedocs.io/),它提供了用户注册、登录/注销、密码重置和社交媒体认证的端点(是通过添加 django-allauth 的支持来实现的,这两个包协作得很好)。
### Django REST 框架的 API 可视化:django-rest-swagger
[Django REST Swagger](https://django-rest-swagger.readthedocs.io/en/latest/) 提供了一个功能丰富的用户界面,用来和 Django REST 框架的 API 交互。你只需要安装 Django REST Swagger,把它添加到 Django 项目的已安装应用中,然后在 `urls.py` 中添加 Swagger 的视图和 URL 模式就可以了,剩下的事情交给 API 的 docstring 处理。
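以 django-rest-swagger 2.x 的用法为例,`urls.py` 中的配置大致如下(URL 路径与标题均为示例):

```python
# urls.py 片段:需要已安装 django-rest-swagger,
# 并把 'rest_framework_swagger' 加入 INSTALLED_APPS
from django.conf.urls import url
from rest_framework_swagger.views import get_swagger_view

schema_view = get_swagger_view(title='我的项目 API')

urlpatterns = [
    url(r'^docs/$', schema_view),
    # ……这里还有项目其它的 URL 模式
]
```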

API 的用户界面按照 app 的维度展示了所有端点和可用方法,并列出了这些端点的可用操作,而且它提供了和 API 交互的功能(比如添加/删除/获取记录)。django-rest-swagger 从 API 视图中的 docstrings 生成每个端点的文档,通过这种方法,为你的项目创建了一份 API 文档,这对你,对前端开发人员和用户都很有用。
---
via: <https://opensource.com/article/18/9/django-packages>
作者:[Jeff Triplett](https://opensource.com/users/laceynwilliams) 选题:[lujun9972](https://github.com/lujun9972) 译者:[belitex](https://github.com/belitex) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Django developers, we're devoting this month's Python column to packages that will help you. These are our favorite [Django](https://www.djangoproject.com/) libraries for saving time, cutting down on boilerplate code, and generally simplifying our lives. We've got six packages for Django apps and two for Django's REST Framework, and we're not kidding when we say these packages show up in almost every project we work on.
But first, see our tips for making the [Django Admin more secure](https://opensource.com/article/18/1/10-tips-making-django-admin-more-secure) and an article on 5 favorite [open source Django packages](https://opensource.com/business/15/12/5-favorite-open-source-django-packages).
## A kitchen sink of useful time-savers: django-extensions
[Django-extensions](https://django-extensions.readthedocs.io/en/latest/) is a favorite Django package chock full of helpful tools like these management commands:
- **shell_plus** starts the Django shell with all your database models already loaded. No more importing from several different apps to test one complex relationship!
- **clean_pyc** removes all .pyc files from everywhere inside your project directory.
- **create_template_tags** creates a template tag directory structure inside the app you specify.
- **describe_form** displays a form definition for a model, which you can then copy/paste into forms.py. (Note that this produces a regular Django form, not a ModelForm.)
- **notes** displays all comments with stuff like TODO, FIXME, etc. throughout your project.
Django-extensions also includes useful abstract base classes to use for common patterns in your own models. Inherit from these base classes when you create your models to get their:
- **TimeStampedModel**: This base class includes the fields **created** and **modified** and a **save()** method that automatically updates these fields appropriately.
- **ActivatorModel**: If your model will need fields like **status**, **activate_date**, and **deactivate_date**, use this base class. It comes with a manager that enables **.active()** and **.inactive()** querysets.
- **TitleDescriptionModel** and **TitleSlugDescriptionModel**: These include the **title** and **description** fields, and the latter also includes a **slug** field. The **slug** field will automatically populate based on the **title** field.
Django-extensions has more features you may find useful in your projects, so take a tour through its [docs](https://django-extensions.readthedocs.io/)!
## 12-factor-app settings: django-environ
[Django-environ](https://django-environ.readthedocs.io/en/latest/) allows you to use [12-factor app](https://www.12factor.net/) methodology to manage your settings in your Django project. It collects other libraries, including [envparse](https://github.com/rconradharris/envparse) and [honcho](https://github.com/nickstenning/honcho). Once you install django-environ, create an .env file at your project's root. Define in that module any settings variables that may change between environments or should remain secret (like API keys, debug status, and database URLs).
Then, in your project's settings.py file, import **environ** and set up variables for **environ.PATH()** and **environ.Env()** according to the [example](https://django-environ.readthedocs.io/). Access settings variables defined in your .env file with **env('VARIABLE_NAME')**.
## Creating great management commands: django-click
[Django-click](https://github.com/GaretJax/django-click), based on [Click](http://click.pocoo.org/5/) (which we have recommended [before](https://opensource.com/article/18/9/python-libraries-side-projects)… [twice](https://opensource.com/article/18/5/3-python-command-line-tools)), helps you write Django management commands. This library doesn't have extensive documentation, but it does have a directory of [test commands](https://github.com/GaretJax/django-click/tree/master/djclick/test/testprj/testapp/management/commands) in its repository that are pretty useful. A basic Hello World command would look like this:
```
# app_name.management.commands.hello.py
import djclick as click
@click.command()
@click.argument('name')
def command(name):
click.secho(f'Hello, {name}')
```
Then in the command line, run:
```
>> ./manage.py hello Lacey
Hello, Lacey
```
## Handling finite state machines: django-fsm
[Django-fsm](https://github.com/viewflow/django-fsm) adds support for finite state machines to your Django models. If you run a news website and need articles to process through states like Writing, Editing, and Published, django-fsm can help you define those states and manage the rules and restrictions around moving from one state to another.
Django-fsm provides an FSMField to use for the model attribute that defines the model instance's state. Then you can use django-fsm's **@transition** decorator to define methods that move the model instance from one state to another and handle any side effects from that transition.
Although django-fsm is light on documentation, [Workflows (States) in Django](https://gist.github.com/Nagyman/9502133) is a gist that serves as an excellent introduction to both finite state machines and django-fsm.
## Contact forms: django-contact-form
A contact form is such a standard thing on a website. But don't write all that boilerplate code yourself—set yours up in minutes with [django-contact-form](https://django-contact-form.readthedocs.io/en/1.5/). It comes with an optional spam-filtering contact form class (and a regular, non-filtering class) and a **ContactFormView** base class with methods you can override or customize, and it walks you through the templates you will need to create to make your form work.
## Registering and authenticating users: django-allauth
[Django-allauth](https://django-allauth.readthedocs.io/en/latest/) is an app that provides views, forms, and URLs for registering users, logging them in and out, resetting their passwords, and authenticating users with outside sites like GitHub or Twitter. It supports email-as-username authentication and is extensively documented. It can be a little confusing to set up the first time you use it; follow the [installation instructions](https://django-allauth.readthedocs.io/en/latest/installation.html) carefully and read closely when you [customize your settings](https://django-allauth.readthedocs.io/en/latest/configuration.html) to make sure you're using all the settings you need to enable a specific feature.
## Handling user authentication with Django REST Framework: django-rest-auth
If your Django development includes writing APIs, you're probably using [Django REST Framework](http://www.django-rest-framework.org/) (DRF). If you're using DRF, you should check out [django-rest-auth](https://django-rest-auth.readthedocs.io/), a package that enables endpoints for user registration, login/logout, password reset, and social media authentication (by adding django-allauth, which works well with django-rest-auth).
## Visualizing a Django REST Framework API: django-rest-swagger
[Django REST Swagger](https://django-rest-swagger.readthedocs.io/en/latest/) provides a feature-rich user interface for interacting with your Django REST Framework API. Once you've installed Django REST Swagger and added it to installed apps, add the Swagger view and URL pattern to your urls.py file; the rest is taken care of in the docstrings of your APIs.

The UI for your API will include all your endpoints and available methods broken out by app. It will also list available operations for those endpoints and enable you to interact with the API (adding/deleting/fetching records, for example). It uses the docstrings in your API views to generate documentation for each endpoint, creating a set of API documentation for your project that's useful to you, your frontend developers, and your users.
|
10,086 | 如何在 Linux 中配置基于密钥认证的 SSH | https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/ | 2018-10-05T21:53:25 | [
"ssh",
"密钥",
"认证"
] | https://linux.cn/article-10086-1.html | 
### 什么是基于 SSH 密钥的认证?
众所周知,**Secure Shell**(又称 **SSH**)是一种加密网络协议,允许你跨越不安全的网络(例如互联网)安全地访问远程系统并与之通信。无论何时使用 SSH 在不安全的网络上发送数据,数据都会在源系统上自动加密,并在目的系统上解密。SSH 提供了四种认证方式:**基于密码认证**、**基于密钥认证**、**基于主机认证**和**键盘交互认证**。最常用的认证方式是基于密码认证和基于密钥认证。
在基于密码认证中,你需要的仅仅是远程系统上用户的密码。如果你知道远程用户的密码,就可以使用 `ssh user@remote-system-name` 访问对应的系统。另一方面,在基于密钥认证中,为了通过 SSH 通信,你需要生成 SSH 密钥对,并把 SSH 公钥上传到远程系统。每个 SSH 密钥对由私钥与公钥组成。私钥应该保存在客户端系统上,公钥应该上传到远程系统。你不应该将私钥透露给任何人。希望你现在已经对 SSH 及其认证方式有了基本的概念。
这篇教程,我们将讨论如何在 Linux 上配置基于密钥认证的 SSH。
### 在 Linux 上配置基于密钥认证的 SSH
为方便演示,我将使用 Arch Linux 为本地系统,Ubuntu 18.04 LTS 为远程系统。
本地系统详情:
* OS: Arch Linux Desktop
* IP address: 192.168.225.37/24
远程系统详情:
* OS: Ubuntu 18.04 LTS Server
* IP address: 192.168.225.22/24
### 本地系统配置
就像我之前所说,在基于密钥认证的方法中,想要通过 SSH 访问远程系统,需要将公钥上传到远程系统。公钥通常会被保存在远程系统的一个 `~/.ssh/authorized_keys` 文件中。
**注意事项**:不要使用 **root** 用户生成密钥对,这样只有 root 用户才可以使用。使用普通用户创建密钥对。
现在,让我们在本地系统上创建一个 SSH 密钥对。只需要在客户端系统上运行下面的命令。
```
$ ssh-keygen
```
上面的命令将会创建一个 2048 位的 RSA 密钥对。你需要输入两次密语(passphrase,即保护私钥的口令)。更重要的是,记住你的密语,后面将会用到它。
**样例输出**:
```
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sk/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sk/.ssh/id_rsa.
Your public key has been saved in /home/sk/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:wYOgvdkBgMFydTMCUI3qZaUxvjs+p2287Tn4uaZ5KyE [email protected]
The key's randomart image is:
+---[RSA 2048]----+
|+=+*= + |
|o.o=.* = |
|.oo * o + |
|. = + . o |
|. o + . S |
| . E . |
| + o |
| +.*o+o |
| .o*=OO+ |
+----[SHA256]-----+
```
如果你已经创建了密钥对,你将看到以下信息。输入 `y` 就会覆盖已存在的密钥。
```
/home/username/.ssh/id_rsa already exists.
Overwrite (y/n)?
```
请注意,**密语(passphrase)是可选的**。如果你设置了密语,那么每次通过 SSH 访问远程系统时都需要输入密语,除非你用 SSH 代理保存了它。如果你不想要密语(虽然不安全),简单地敲两次回车即可。不过,我建议你使用密语。从安全的角度来看,使用无密语的 SSH 密钥对不是什么好主意。这种方式应该限定在特殊的情况下使用,例如,由无人值守的服务访问远程系统(比如,用 `rsync` 做远程备份……)。
如果你已有的 `~/.ssh/id_rsa` 私钥没有设置密语,想要为它加上密语,可以使用下面的命令:
```
$ ssh-keygen -p -f ~/.ssh/id_rsa
```
**样例输出**:
```
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.
```
现在,我们已经在本地系统上创建了密钥对。接下来,使用下面的命令将 SSH 公钥拷贝到你的远程 SSH 服务端上。
```
$ ssh-copy-id [email protected]
```
在这里,我把本地(Arch Linux)系统上的公钥拷贝到了远程系统(Ubuntu 18.04 LTS)上。从技术上讲,上面的命令会把本地系统 `~/.ssh/id_rsa.pub` 文件中的内容拷贝到远程系统 `~/.ssh/authorized_keys` 中。明白了吗?非常棒。
输入 `yes` 来继续连接你的远程 SSH 服务端。接着,输入远程系统用户 `sk` 的密码。
```
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.
```
如果你之前已经拷贝过密钥,想用新的密钥覆盖它,可以使用 `-f` 选项:
```
$ ssh-copy-id -f [email protected]
```
我们现在已经成功地将本地系统的 SSH 公钥添加进了远程系统。接下来,让我们在远程系统上完全禁用基于密码的认证方式。既然已经配置好了密钥认证,就不再需要密码认证了。
### 在远程系统上禁用基于密码认证的 SSH
你需要以 root 用户身份或者通过 `sudo` 执行下面的命令。
禁用基于密码的认证,你需要在远程系统的终端里编辑 `/etc/ssh/sshd_config` 配置文件:
```
$ sudo vi /etc/ssh/sshd_config
```
找到下面这一行,去掉注释然后将值设为 `no`:
```
PasswordAuthentication no
```
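如果不想手动编辑,也可以用 `sed` 完成同样的修改。下面先在一个临时文件上演示这个替换(实际修改 `/etc/ssh/sshd_config` 前,请先做好备份):

```shell
# 在临时文件上演示:把被注释掉或已存在的 PasswordAuthentication 行统一改为 no
cfg=$(mktemp)
printf '#PasswordAuthentication yes\nPermitRootLogin no\n' > "$cfg"

sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"

# 确认修改结果:应该只剩一行 "PasswordAuthentication no"
grep '^PasswordAuthentication' "$cfg"
```

确认无误后,对真实配置文件执行同样的 `sed` 命令即可(需要 `sudo`)。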
重启 ssh 服务让它生效。
```
$ sudo systemctl restart sshd
```
### 从本地系统访问远程系统
在本地系统上使用命令 SSH 你的远程服务端:
```
$ ssh [email protected]
```
输入你为密钥设置的密语。
**样例输出**:
```
Enter passphrase for key '/home/sk/.ssh/id_rsa':
Last login: Mon Jul 9 09:59:51 2018 from 192.168.225.37
sk@ubuntuserver:~$
```
现在,你就能通过 SSH 登录你的远程系统了。如你所见,我们输入的是之前用 `ssh-keygen` 创建密钥时设置的密语,而不是远程账户的实际密码。
如果你试图从其它客户端系统通过 SSH 访问远程系统,你将会得到这条错误信息。比如,我试图从 CentOS 系统通过 SSH 访问 Ubuntu 系统:
**样例输出**:
```
The authenticity of host '192.168.225.22 (192.168.225.22)' can't be established.
ECDSA key fingerprint is 67:fc:69:b7:d4:4d:fd:6e:38:44:a8:2f:08:ed:f4:21.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.225.22' (ECDSA) to the list of known hosts.
Permission denied (publickey).
```
如你所见,除了 CentOS(LCTT 译注:根据上文,这里应该是 Arch)系统外,我不能通过其它任何系统 SSH 访问我的远程系统 Ubuntu 18.04。
### 为 SSH 服务端添加更多客户端系统的密钥
这点非常重要。就像我说过的那样,除非你配置过(在之前的例子中,是 Ubuntu),否则你不能通过 SSH 访问到远程系统。如果我想给更多的客户端系统授权,让它们也能访问远程 SSH 服务端,应该怎么做?很简单。你需要在所有的客户端系统上生成 SSH 密钥对,并把各自的 SSH 公钥手动拷贝到想要通过 SSH 访问的远程服务端上。
在客户端系统上创建 SSH 密钥对,运行:
```
$ ssh-keygen
```
输入两次密码。现在,ssh 密钥对已经生成了。你需要手动把公钥(不是私钥)拷贝到远程服务端上。
使用以下命令查看公钥:
```
$ cat ~/.ssh/id_rsa.pub
```
应该会输出类似下面的信息:
```
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3a9tIeK5rPx9p74/KjEVXa6/OODyRp0QLS/sLp8W6iTxFL+UgALZlupVNgFjvRR5luJ9dLHWwc+d4umavAWz708e6Na9ftEPQtC28rTFsHwmyLKvLkzcGkC5+A0NdbiDZLaK3K3wgq1jzYYKT5k+IaNS6vtrx5LDObcPNPEBDt4vTixQ7GZHrDUUk5586IKeFfwMCWguHveTN7ykmo2EyL2rV7TmYq+eY2ZqqcsoK0fzXMK7iifGXVmuqTkAmZLGZK8a3bPb6VZd7KFum3Ezbu4BXZGp7FVhnOMgau2kYeOH/ItKPzpCAn+dg3NAAziCCxnII9b4nSSGz3mMY4Y7 ostechnix@centosserver
```
拷贝上述全部内容(通过 USB 驱动器或者其它任何介质),然后在远程服务端的终端上,以目标用户的身份在其 `$HOME` 下创建名为 `.ssh` 的文件夹(通常不需要 root 权限):
```
$ mkdir -p ~/.ssh
```
现在,将前几步创建的客户端系统的公钥添加进文件中。
```
$ echo {Your_public_key_contents_here} >> ~/.ssh/authorized_keys
```
在远程系统上重启 ssh 服务。现在,你可以在新的客户端上 SSH 远程服务端了。
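顺带一提,如果 `.ssh` 目录或 `authorized_keys` 文件的权限过于宽松,sshd(在默认的 `StrictModes` 下)可能会拒绝使用其中的密钥。下面用一个临时目录模拟远程端的布局和推荐权限(示例公钥内容是虚构的):

```shell
# 模拟远程用户家目录下 ~/.ssh 的布局与权限
home=$(mktemp -d)
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"                     # .ssh 目录:仅属主可读写执行

echo "ssh-rsa AAAAB3Nza...fakekey user@client" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"     # authorized_keys:仅属主可读写

ls -ld "$home/.ssh"
ls -l "$home/.ssh/authorized_keys"
```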
如果觉得手动添加 ssh 公钥有些困难,在远程系统上暂时性启用密码认证,使用 `ssh-copy-id` 命令从本地系统上拷贝密钥,最后禁用密码认证。
**推荐阅读:**
* [SSLH – Share A Same Port For HTTPS And SSH](https://www.ostechnix.com/sslh-share-port-https-ssh/)
* [ScanSSH – Fast SSH Server And Open Proxy Scanner](https://www.ostechnix.com/scanssh-fast-ssh-server-open-proxy-scanner/)
好了,到此为止。基于密钥认证的 SSH 提供了一层防止暴力破解的额外保护。如你所见,配置密钥认证一点也不困难。这是一个非常好的方法让你的 Linux 服务端安全可靠。
不久我会带来另一篇有用的文章。请继续关注 OSTechNix。
干杯!
---
via: <https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[LuuMing](https://github.com/LuuMing) 校对:[pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,087 | Linux 下如何创建 M3U 播放列表 | https://itsfoss.com/create-m3u-playlist-linux/ | 2018-10-06T19:23:40 | [
"M3U",
"播放列表"
] | https://linux.cn/article-10087-1.html |
>
> 简介:关于如何在 Linux 终端中根据乱序文件创建 M3U 播放列表、实现按顺序播放的小建议。
>
>
>

我是外国电视连续剧的粉丝,这些连续剧不太容易从 DVD 或像 [Netflix](https://itsfoss.com/netflix-open-source-ai/) 这样的流媒体上获得。好在,您可以在 YouTube 上找到一些内容并[从 YouTube 下载](https://itsfoss.com/download-youtube-linux/)。
现在出现了一个问题:你的文件可能不是按顺序存储的。在 GNU/Linux 中,文件并不会按数字顺序自然排序,因此我必须创建 .m3u 播放列表,以便 [MPV 视频播放器](https://itsfoss.com/mpv-video-player/) 可以按顺序播放视频,而不是乱序播放。
同样的,有时候表示第几集的数字是在文件名中间或结尾的,像这样 “My Web Series S01E01.mkv”。这里的剧集信息位于文件名的中间,“S01E01”告诉我们人类这是第一集,后面还有其它剧集。
因此我要做的事情就是在视频目录中创建一个 .m3u 播放列表,并告诉 MPV 播放这个 .m3u 播放列表,MPV 自然会按顺序播放这些视频。
### 什么是 M3U 文件?
[M3U](https://en.wikipedia.org/wiki/M3U) 基本上就是个按特定顺序包含文件名的文本文件。当类似 MPV 或 VLC 这样的播放器打开 M3U 文件时,它会尝试按给定的顺序播放指定文件。
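举个例子,一个最简单的 .m3u 文件内容大致如下(文件名是虚构的):

```
My Web Series S01E01.mkv
My Web Series S01E02.mkv
My Web Series S01E03.mkv
```

播放器会从上到下依次播放这些条目;条目可以是相对于 .m3u 文件所在目录的相对路径,也可以是绝对路径。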
### 创建 M3U 来按顺序播放音频/视频文件
就我而言, 我使用了下面命令:
```
$/home/shirish/Videos/web-series-video/$ ls -1v |grep .mkv > /tmp/1.m3u && mv /tmp/1.m3u .
```
让我们拆分一下,看看每个部分表示什么意思:
`ls -1v` = 这就是用普通的 `ls` 列出目录中的内容。其中 `-1` 表示每行显示一个文件,而 `-v` 表示根据文本中的数字(版本号)进行自然排序。
`| grep .mkv` = 基本上就是告诉 `ls` 寻找那些以 `.mkv` 结尾的文件。它也可以是 `.mp4` 或其他任何你想要的媒体文件格式。
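可以用几个空文件直观感受 `-v` 的作用:不带 `-v` 时,`Ep 10` 会排在 `Ep 2` 前面;带上 `-v` 之后才是我们期望的自然顺序(下面的文件名是虚构的):

```shell
# 在临时目录里创建几个空的示例文件
tmp=$(mktemp -d)
cd "$tmp"
touch "Ep 1.mkv" "Ep 2.mkv" "Ep 10.mkv" "notes.txt"

echo '--- ls -1(按字典序)---'
ls -1 | grep .mkv

echo '--- ls -1v(按自然顺序)---'
ls -1v | grep .mkv
```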
通过在控制台上运行命令来进行试运行通常是个好主意:
```
ls -1v |grep .mkv
My Web Series S01E01 [Episode 1 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E02 [Episode 2 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E03 [Episode 3 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E04 [Episode 4 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E05 [Episode 5 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E06 [Episode 6 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E07 [Episode 7 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E08 [Episode 8 Name] Multi 480p WEBRip x264 - xRG.mkv
```
结果显示我要做的是正确的。现在下一步就是让输出以 `.m3u` 播放列表的格式输出。
```
ls -1v |grep .mkv > /tmp/web_playlist.m3u && mv /tmp/web_playlist.m3u .
```
这就在当前目录中创建了 .m3u 文件。这个 .m3u 播放列表只不过是一个纯文本文件,其内容与上面相同,只是扩展名为 .m3u 而已。你也可以手动编辑它,并按照想要的顺序添加确切的文件名。
之后你只需要这样做:
```
mpv web_playlist.m3u
```
一般来说,MPV 和播放列表的好处在于你不需要一次性全部看完。 您可以一次看任意长时间,然后在下一次查看其余部分。
我希望写一些有关 MPV 的文章,以及如何制作在媒体文件中嵌入字幕的 mkv 文件,但这是将来的事情了。
注意: 这是开源软件,不鼓励盗版。
---
via: <https://itsfoss.com/create-m3u-playlist-linux/>
作者:[Shirsh](https://itsfoss.com/author/shirish/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | **Brief: A quick tip on how to create M3U playlists in Linux terminal from unordered files to play them in a sequence.**
I am a fan of foreign tv series and it’s not always easy to get them on DVD or on streaming services like [Netflix](https://itsfoss.com/netflix-open-source-ai/). Thankfully, you can find some of them on YouTube and [download them from YouTube](https://itsfoss.com/download-youtube-linux/).
Now there comes a problem. Your files might not be sorted in a particular order. In GNU/Linux files are not naturally sort ordered by number sequencing so I had to make a .m3u playlist so [MPV video player](https://itsfoss.com/mpv-video-player/) would play the videos in sequence and not out of sequence.
Also sometimes the numbers are in the middle or the end like ‘My Web Series S01E01.mkv’ as an example. The episode information here is in the middle of the filename, the ‘S01E01’ which tells us, humans, which is the first episode and which needs to come in next.
So what I did was to generate an m3u playlist in the video directory and tell MPV to play the .m3u playlist and it would take care of playing them in the sequence.
### What is an M3U file?
[M3U](https://en.wikipedia.org/wiki/M3U) is basically a text file that contains filenames in a specific order. When a player like MPV or VLC opens an M3U file, it tries to play the specified files in the given sequence.
### Creating M3U to play audio/video files in a sequence
In my case, I used the following command:
`$/home/shirish/Videos/web-series-video/$ ls -1v |grep .mkv > /tmp/1.m3u && mv /tmp/1.m3u .`
Let’s break it down a bit and see each bit as to what it means –
**ls -1v** = This is using the plain ls or listing entries in the directory. The -1 means list one file per line. while -v natural sort of (version) numbers within text
**| grep .mkv** = It’s basically telling `ls` to look for files which are ending in .mkv. It could be .mp4 or any other media file format that you want.
It’s usually a good idea to do a dry run by running the command on the console:
```
ls -1v |grep .mkv
My Web Series S01E01 [Episode 1 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E02 [Episode 2 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E03 [Episode 3 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E04 [Episode 4 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E05 [Episode 5 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E06 [Episode 6 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E07 [Episode 7 Name] Multi 480p WEBRip x264 - xRG.mkv
My Web Series S01E08 [Episode 8 Name] Multi 480p WEBRip x264 - xRG.mkv
```
This tells me that what I’m trying to do is correct. Now just have to make that the output is in the form of a .m3u playlist which is the next part.
`ls -1v |grep .mkv > /tmp/web_playlist.m3u && mv /tmp/web_playlist.m3u .`
This makes the .m3u generate in the current directory. The .m3u playlist is nothing but a .txt file with the same contents as above with the .m3u extension. You can edit it manually as well and add the exact filenames in an order you desire.
After that you just have to do something like this:
`mpv web_playlist.m3u`
The great thing about MPV and the playlists, in general, is that you don’t have to binge-watch. You can see however much you want to do in one sitting and see the rest in the next session or the session after that.
I hope to do articles featuring MPV as well as how to make mkv files embedding subtitles in a media file but that’s in the future.
*Note: It’s FOSS doesn’t encourage piracy.* |
10,088 | 5 个很酷的音乐播放器 | https://fedoramagazine.org/5-cool-music-player-apps/ | 2018-10-06T21:01:31 | [
"音乐",
"播放器"
] | https://linux.cn/article-10088-1.html | 
你喜欢音乐吗?那么 Fedora 中可能有你正在寻找的东西。本文介绍在 Fedora 上运行的各种音乐播放器。无论你有庞大的音乐库,还是小一些的,抑或根本没有,你都可以用到音乐播放器。这里有四个图形程序和一个基于终端的音乐播放器,可以让你挑选。
### Quod Libet
Quod Libet 是一个完备的大型音频库管理器。如果你有一个庞大的音频库,你不想只是听,也想要管理,Quod Libet 可能是一个很好的选择。

Quod Libet 可以从磁盘上的多个位置导入音乐,并允许你编辑音频文件的标签 —— 因此一切都在你的控制之下。此外,它还有各种插件可用,从简单的均衡器到 [last.fm](https://last.fm) 同步。你也可以直接从 [Soundcloud](https://soundcloud.com/) 搜索和播放音乐。
Quod Libet 在 HiDPI 屏幕上工作得很好,它有 Fedora 的 RPM 包,如果你运行 [Silverblue](https://teamsilverblue.org/),它在 [Flathub](https://flathub.org/home) 中也有。使用 Gnome Software 或命令行安装它:
```
$ sudo dnf install quodlibet
```
### Audacious
如果你喜欢简单的音乐播放器,甚至可能看起来像传说中的 Winamp,Audacious 可能是你的不错选择。

Audacious 可能不会一次性管理你的所有音乐,但如果你喜欢把音乐按文件来组织,它能做得很好。你还可以导出和导入播放列表,而无需重新组织音乐文件本身。
此外,你可以让它看起来像 Winamp。要让它与上面的截图相同,请进入 “Settings/Appearance”,选择顶部的 “Winamp Classic Interface”,然后选择右下方的 “Refugee” 皮肤。就这么简单。
Audacious 在 Fedora 中作为 RPM 提供,可以使用 Gnome Software 或在终端运行以下命令安装:
```
$ sudo dnf install audacious
```
### Lollypop
Lollypop 是一个音乐播放器,它与 GNOME 集成良好。如果你喜欢 GNOME 的外观,并且想要一个集成良好的音乐播放器,Lollypop 可能适合你。

除了与 GNOME Shell 的良好视觉集成之外,它还可以很好地用于 HiDPI 屏幕,并支持暗色主题。
另外,Lollypop 有一个集成的封面下载器和一个所谓的派对模式(右上角的音符按钮),它可以自动选择和播放音乐。它还集成了 [last.fm](https://last.fm) 或 [libre.fm](https://libre.fm) 等在线服务。
它既有 Fedora 的 RPM 包,也有可用于 [Silverblue](https://teamsilverblue.org/) 工作站的 [Flathub](https://flathub.org/home) 版本,使用 Gnome Software 或终端进行安装:
```
$ sudo dnf install lollypop
```
### Gradio
如果你没有任何音乐但仍想听怎么办?或者你只是喜欢收音机?Gradio 就是为你准备的。

Gradio 是一个简单的收音机程序,它允许你搜索和播放网络电台。你可以按国家、语言或直接搜索找到它们。另外,它在视觉上与 GNOME Shell 集成良好,可以与 HiDPI 屏幕配合使用,并且提供暗色主题选项。
可以在 [Flathub](https://flathub.org/home) 中找到 Gradio,它同时可以运行在 Fedora Workstation 和 [Silverblue](https://teamsilverblue.org/) 中。使用 Gnome Software 安装它。
### sox
你喜欢使用终端在工作时听一些音乐吗?多亏有了 sox,你不必离开终端。

sox 是一个非常简单的基于终端的音乐播放器。你需要做的就是运行如下命令:
```
$ play file.mp3
```
接着 sox 就会为你播放。除了单独的音频文件外,sox 还支持 m3u 格式的播放列表。
此外,因为 sox 是基于终端的程序,你可以通过 ssh 运行它。你有一个带扬声器的家用服务器吗?或者你想从另一台电脑上播放音乐吗?尝试将它与 [tmux](https://fedoramagazine.org/use-tmux-more-powerful-terminal/) 一起使用,这样即使会话关闭也可以继续听。
sox 在 Fedora 中以 RPM 提供。运行下面的命令安装:
```
$ sudo dnf install sox
```
---
via: <https://fedoramagazine.org/5-cool-music-player-apps/>
作者:[Adam Šamalík](https://fedoramagazine.org/author/asamalik/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Do you like music? Then Fedora may have just what you’re looking for. This article introduces different music player apps that run on Fedora. You’re covered whether you have an extensive music library, a small one, or none at all. Here are four graphical application and one terminal-based music player that will have you jamming.
## Quod Libet
Quod Libet is a complete manager for your large audio library. If you have an extensive audio library that you would like not just listen to, but also manage, Quod Libet might a be a good choice for you.
Quod Libet can import music from multiple locations on your disk, and allows you to edit tags of the audio files — so everything is under your control. As a bonus, there are various plugins available for anything from a simple equalizer to a [last.fm](https://last.fm) sync. You can also search and play music directly from [Soundcloud](https://soundcloud.com/).
Quod Libet works great on HiDPI screens, and is available as an RPM in Fedora or on [Flathub](https://flathub.org/home) in case you run [Silverblue](https://teamsilverblue.org/). Install it using Gnome Software or the command line:
$ sudo dnf install quodlibet
## Audacious
If you like a simple music player that could even look like the legendary Winamp, Audacious might be a good choice for you.
Audacious probably won’t manage all your music at once, but it works great if you like to organize your music as files. You can also export and import playlists without reorganizing the music files themselves.
As a bonus, you can make it look like Winamp. To make it look the same as on the screenshot above, go to *Settings* / *Appearance*, select *Winamp Classic Interface* at the top, and choose the Refugee skin right below. And Bob’s your uncle!
Audacious is available as an RPM in Fedora, and can be installed using the Gnome Software app or the following command on the terminal:
$ sudo dnf install audacious
## Lollypop
Lollypop is a music player that provides great integration with GNOME. If you enjoy how GNOME looks, and would like a music player that’s nicely integrated, Lollypop could be for you.
Apart from nice visual integration with the GNOME Shell, it works nicely on HiDPI screens, and supports a dark theme.
As a bonus, Lollypop has an integrated cover art downloader, and a so-called Party Mode (the note button at the top-right corner) that selects and plays music automatically for you. It also integrates with online services such as [last.fm](https://last.fm) or [libre.fm](https://libre.fm).
Available as both an RPM in Fedora or a [Flathub](https://flathub.org/home) for your [Silverblue](https://teamsilverblue.org/) workstation, install it using the Gnome Software app or using the terminal:
$ sudo dnf install lollypop
## Gradio
What if you don’t own any music, but still like to listen to it? Or you just simply love radio? Then Gradio is here for you.
Gradio is a simple radio player that allows you to search and play internet radio stations. You can find them by country, language, or simply using search. As a bonus, it’s visually integrated into GNOME Shell, works great with HiDPI screens, and has an option for a dark theme.
Gradio is available on [Flathub](https://flathub.org/home) which works with both Fedora Workstation and [Silverblue](https://teamsilverblue.org/). Install it using the Gnome Software app.
## sox
Do you like using the terminal instead, and listening to some music while you work? You don’t have to leave the terminal thanks to *sox*.
sox is a very simple, terminal-based music player. All you need to do is to run a command such as:
$ play file.mp3
…and sox will play it for you. Apart from individual audio files, sox also supports playlists in the m3u format.
As a bonus, because sox is a terminal-based application, you can run it over ssh. Do you have a home server with speakers attached to it? Or do you want to play music from a different computer? Try using it together with [ tmux](https://fedoramagazine.org/use-tmux-more-powerful-terminal/), so you can keep listening even when the session closes.
sox is available in Fedora as an RPM. Install it by running:
$ sudo dnf install sox
Photo by [Malte Wingen](https://unsplash.com/photos/PDX_a_82obo?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/music?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText).
|
10,089 | 更好利用 tmux 会话的 4 个技巧 | https://fedoramagazine.org/4-tips-better-tmux-sessions/ | 2018-10-07T20:44:56 | [
"tmux"
] | https://linux.cn/article-10089-1.html | 
tmux 是一个终端多路复用工具,它可以让你系统上的终端支持多面板。你可以排列面板位置,在每个面板运行不同进程,这通常可以更好地利用你的屏幕。我们在 [这篇早期的文章](https://fedoramagazine.org/use-tmux-more-powerful-terminal/) 中向读者介绍过这一强力工具。如果你已经开始使用 tmux 了,那么这里有一些技巧可以帮你更好地使用它。
本文假设你当前的前缀键是 `Ctrl+b`。如果你已重新映射该前缀,只需在相应位置替换为你定义的前缀即可。
### 设置终端为自动使用 tmux
使用 tmux 的一个最大好处就是可以随意的从会话中断开和重连。这使得远程登录会话功能更加强大。你有没有遇到过丢失了与远程系统的连接,然后好希望能够恢复在远程系统上做过的那些工作的情况?tmux 能够解决这一问题。
然而,有时在远程系统上工作时,你可能会忘记开启会话。避免出现这一情况的一个方法就是每次通过交互式 shell 登录系统时都让 tmux 启动或附加上一个会话。
在你远程系统上的 `~/.bash_profile` 文件中加入下面内容:
```
if [ -z "$TMUX" ]; then
tmux attach -t default || tmux new -s default
fi
```
然后注销远程系统,并使用 SSH 重新登录。你会发现你处在一个名为 `default` 的 tmux 会话中了。如果退出该会话,则下次登录时还会重新生成此会话。但更重要的是,若你正常地从会话中分离,那么下次登录时你会发现之前的工作并没有丢失,这在连接中断时尤其有用。
你当然也可以将这段配置加入本地系统中。需要注意的是,大多数 GUI 界面的终端并不是登录 shell,因此并不会自动使用这个 `default` 会话。虽然你可以修改这一行为,但它可能会导致终端嵌套附加到 tmux 会话,从而使会话不太可用,因此进行此操作时请务必小心。
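如果担心在本地 GUI 终端中发生嵌套,一个常见做法是只在 SSH 登录时才自动附加会话。下面是一个假设性的写法(`SSH_CONNECTION` 由 OpenSSH 在登录时设置,本地终端中该变量为空):

```shell
# 仅在通过 SSH 登录且尚未处于 tmux 中时才附加会话
# (SSH_CONNECTION 由 OpenSSH 设置;本地 GUI 终端中为空,因此不会触发嵌套)
if [ -z "$TMUX" ] && [ -n "$SSH_CONNECTION" ]; then
    tmux attach -t default || tmux new -s default
fi
```

这样本地终端保持原样,而远程登录仍然享有自动恢复会话的便利。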
### 使用缩放功能使注意力专注于单个进程
虽然 tmux 的目的就是在单个会话中提供多窗口、多面板和多进程的能力,但有时候你需要专注。如果你正在与一个进程进行交互并且需要更多空间,或需要专注于某个任务,则可以使用缩放命令。该命令会将当前面板扩展,占据整个当前窗口的空间。
缩放在其他情况下也很有用。比如,想象你在图形桌面上运行一个终端窗口。面板会使得从 tmux 会话中拷贝和粘贴多行内容变得相对困难。但若你缩放了面板,就可以很容易地对多行数据进行拷贝/粘贴。
要对当前面板进行缩放,按下 `Ctrl+b, z`。需要恢复的话,按下相同按键组合来恢复面板。
### 绑定一些有用的命令
tmux 默认有大量的命令可用。但将一些更常用的操作绑定到容易记忆的快捷键会很有用。下面一些例子可以让会话变得更好用,你可以添加到 `~/.tmux.conf` 文件中:
```
bind r source-file ~/.tmux.conf \; display "Reloaded config"
```
该命令重新读取你配置文件中的命令和键绑定。添加该条绑定后,退出任意一个 tmux 会话然后重启一个会话。现在你做了任何更改后,只需要简单的按下 `Ctrl+b, r` 就能将修改的内容应用到现有的会话中了。
```
bind V split-window -h
bind H split-window
```
这些命令可以很方便地将当前窗口左右切分为两个并排的面板(按下 `Shift+V`,即沿垂直轴切分),或者上下切分(按下 `Shift+H`,即沿水平轴切分)。
若你想查看所有绑定的快捷键,按下 `Ctrl+B, ?` 可以看到一个列表。你首先看到的应该是复制模式下的快捷键绑定,表示的是当你在 tmux 中进行复制粘贴时对应的快捷键。你添加的那两个键绑定会在<ruby> 前缀模式 <rt> prefix mode </rt></ruby>中看到。请随意把玩吧!
### 使用 powerline 更清晰
[如前文所示](https://fedoramagazine.org/add-power-terminal-powerline/),powerline 工具是对 shell 的绝佳补充。而且它也兼容在 tmux 中使用。由于 tmux 接管了整个终端空间,powerline 窗口能提供的可不仅仅是更好的 shell 提示那么简单。
[](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png)
如果你还没有这么做,按照 [这篇文章](https://fedoramagazine.org/add-power-terminal-powerline/) 中的指示来安装该工具。然后[使用 sudo](https://fedoramagazine.org/howto-use-sudo/) 来安装附件:
```
sudo dnf install tmux-powerline
```
接着重启会话,就会在底部看到一个漂亮的新状态栏。根据终端的宽度,默认的状态栏会显示你当前会话 ID、打开的窗口、系统信息、日期和时间,以及主机名。若你进入了使用 git 进行版本控制的项目目录中还能看到分支名和用色彩标注的版本库状态。
当然,这个状态栏具有很好的可配置性。享受你新增强的 tmux 会话吧,玩的开心点。
---
via: <https://fedoramagazine.org/4-tips-better-tmux-sessions/>
作者:[Paul W. Frields](https://fedoramagazine.org/author/pfrields/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The tmux utility, a terminal multiplexer, lets you treat your terminal as a multi-paned window into your system. You can arrange the configuration, run different processes in each, and generally make better use of your screen. We introduced some readers to this powerful tool [in this earlier article](https://fedoramagazine.org/use-tmux-more-powerful-terminal/). Here are some tips that will help you get more out of tmux if you’re getting started.
This article assumes your current prefix key is **Ctrl+b**. If you’ve remapped that prefix, simply substitute your prefix in its place.
## Set your terminal to automatically use tmux
One of the biggest benefits of tmux is being able to disconnect and reconnect to sessions at will. This makes remote login sessions more powerful. Have you ever lost a connection and wished you could get back the work you were doing on the remote system? With tmux this problem is solved.
However, you may sometimes find yourself doing work on a remote system, and realize you didn't start a session. One way to avoid this is to have tmux start or attach every time you login to a system with an interactive shell.
Add this to your remote system’s *~/.bash_profile* file:
if [ -z "$TMUX" ]; then
    tmux attach -t default || tmux new -s default
fi
Then logout of the remote system, and log back in with SSH. You’ll find you’re in a tmux session named *default.* This session will be regenerated at next login if you exit it. But more importantly, if you detach from it as normal, your work is waiting for you next time you login — especially useful if your connection is interrupted.
Of course you can add this to your local system as well. Note that terminals inside most GUIs won’t use the default session automatically, because they aren’t login shells. While you can change that behavior, it may result in nesting that makes the session less usable, so proceed with caution.
## Use zoom to focus on a single process
While the point of tmux is to offer multiple windows, panes, and processes in a single session, sometimes you need to focus. If you’re in a process and need more space, or to focus on a single task, the zoom command works well. It expands the current pane to take up the entire current window space.
Zoom can be useful in other situations too. For instance, imagine you’re using a terminal window in a graphical desktop. Panes can make it harder to copy and paste multiple lines from inside your tmux session. If you zoom the pane, you can do a clean copy/paste of multiple lines of data with ease.
To zoom into the current pane, hit **Ctrl+b, z**. When you’re finished with the zoom function, hit the same key combo to unzoom the pane.
## Bind some useful commands
By default tmux has numerous commands available. But it’s helpful to have some of the more common operations bound to keys you can easily remember. Here are some examples you can add to your *~/.tmux.conf* file to make sessions more enjoyable:
bind r source-file ~/.tmux.conf \; display "Reloaded config"
This command rereads the commands and bindings in your config file. Once you add this binding, exit any tmux sessions and then restart one. Now after you make any other future changes, simply run **Ctrl+b, r** and the changes will be part of your existing session.
bind V split-window -h
bind H split-window
These commands make it easier to split the current window across a vertical axis (note that’s **Shift+V**) or across a horizontal axis (**Shift+H**).
If you want to see how all keys are bound, use **Ctrl+B, ?** to see a list. You may see keys bound in *copy-mode* first, for when you’re working with copy and paste inside tmux. The *prefix* mode bindings are where you’ll see ones you’ve added above. Feel free to experiment with your own!
## Use powerline for great justice
[As reported in a previous Fedora Magazine article](https://fedoramagazine.org/add-power-terminal-powerline/), the *powerline* utility is a fantastic addition to your shell. But it also has capabilities when used with tmux. Because tmux takes over the entire terminal space, the powerline window can provide more than just a better shell prompt.
If you haven’t already, follow the instructions in the [Magazine’s powerline article](https://fedoramagazine.org/add-power-terminal-powerline/) to install that utility. Then, install the addon [using sudo](https://fedoramagazine.org/howto-use-sudo/):
sudo dnf install tmux-powerline
Now restart your session, and you’ll see a spiffy new status line at the bottom. Depending on the terminal width, the default status line now shows your current session ID, open windows, system information, date and time, and hostname. If you change directory into a git-controlled project, you’ll see the branch and color-coded status as well.
Of course, this status bar is highly configurable as well. Enjoy your new supercharged tmux session, and have fun experimenting with it.
Photo by [Pamela Saunders](https://unsplash.com/photos/S-Med_LVdio?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/window?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText).
## dac.override
Tmux is great especially if you do not have access to a GUI, and these are great tips for making the experience better.
But it does to me also make a good point. By default traditional terminal multiplexers do not integrate as well as they could. There are some downsides to using it . For example nesting TMUX instances, but also its unawareness of SSH and a terminal emulator if applicable.
TermySequence Termy-server is a “TMUX” server implementation and TermySequence Qtermy is a QT client implementation that works alone and with Termy-server. Qtermy is a modern terminal emulator that leverages SSH as well as termy-server.
The downside of TermySequence compared to TMUX is that you need a GUI-client (Qtermy is a reference implementation of that). So it is not a solution for the command line. The benefit of TermySequence is that you get a more integrated and modern experience.
It is available on COPR:
https://copr.fedorainfracloud.org/coprs/ewalsh/termysequence/
Website:
https://termysequence.io/
## Pieter
Thanks Paul. Setting my terminal to automatically use tmux is a great tip. For splitting a window I use shift+| and shift+_ but I definitely like your H for the horizontal axis and V for the vertical axis. Should you ever consider a follow up article on tmux then please include how to setup copy and paste across vim instances in tmux as that seems to have many folks puzzled.
## Mace Moneta
Even better, use a ssh-mosh client to connect. It establishes connections via SSH, then uses encrypted UDP and maintains connections even across sleep, switching access points, or hopping between WiFi and LTE and back again. There are clients for ChromeOS:
https://chrome.google.com/webstore/detail/mosh/ooiklbnjmhbcgemelgfhaeaocllobloj
and Android as well:
https://play.google.com/store/apps/details?id=com.server.auditor.ssh.client
In combination with tmux or screen, it allows me to maintain continuous connections to our servers.
## Andreas
And if you haven’t discovered it yet.
There is tmate for pair-programming!
## Demetrius Veras
nice
## Tomas
You can also use tmux-top in your status to get some handy info about the host. It’s in Fedora repos.
https://github.com/TomasTomecek/tmux-top
## Chiqo
TIL the zoom feature, thank you.
My status bar now looks like:
set-window-option -g window-status-current-format “#[bg=white]#[fg=colour166]| #I.#{?window_zoomed_flag,#[bg=colour166]#[fg=white],}#P#[bg=white]#[fg=colour166]:#W |”
## Kyle R. Conway
Paul,
Had no idea you could use this is tmux. I’ve got this in my ~/.tmux.conf but I’m not seeing powerline function after installing tmux-powerline. Any ideas?
powerline_tmux_1.8.conf
tmux Version 1.8 introduces window-status-last-{attr,bg,fg}, which is
deprecated for versions 1.9+, thus only applicable to version 1.8.
set -qg window-status-last-fg “$_POWERLINE_ACTIVE_WINDOW_FG”
vim: ft=tmux
source “/usr/share/tmux/powerline.conf”
## Steve
Hello Kyle,
There is a link mentioned above for an excellent article on setting up powerline. I personally don’t use tmux, preferring Terminal simply because that’s the first tabbed terminal I chose to use. Also, I am using zsh for my shell and the configuration file powerline in my case is therefore powerline.zsh, and can be found in /usr/share/powerline/zsh/
## Stephan
Thanks for sharing. Really great. I just get to know the Zoom thing. Really Cool
## Pierpaolo
As an alternative to Powerline, you can just add a few lines to your
to match the look of the popular Agnoster theme for Zsh https://github.com/i5ar/tmux-colors-solarized/blob/master/tmuxcolors-dark.conf
## juanfgs
But then all our shell will belong to us (not to mention that it will move every single zig) |
10,090 | 在 React 条件渲染中使用三元表达式和 “&&” | https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternaries-and-logical-and-7807f53b6935 | 2018-10-07T21:18:00 | [
"React",
"三元表达式"
] | https://linux.cn/article-10090-1.html | 
React 组件可以通过多种方式决定渲染内容。你可以使用传统的 `if` 语句或 `switch` 语句。在本文中,我们将探讨一些替代方案。但要注意,如果你不小心,有些方案会带来自己的陷阱。
### 三元表达式 vs if/else
假设我们有一个组件被传进来一个 `name` 属性。 如果这个字符串非空,我们会显示一个问候语。否则,我们会告诉用户他们需要登录。
这是一个只实现了如上功能的无状态函数式组件(SFC)。
```
const MyComponent = ({ name }) => {
if (name) {
return (
<div className="hello">
Hello {name}
</div>
);
}
return (
<div className="hello">
Please sign in
</div>
);
};
```
这个很简单但是我们可以做得更好。这是使用<ruby> 三元运算符 <rt> conditional ternary operator </rt></ruby>编写的相同组件。
```
const MyComponent = ({ name }) => (
<div className="hello">
{name ? `Hello ${name}` : 'Please sign in'}
</div>
);
```
请注意这段代码与上面的例子相比是多么简洁。
有几点需要注意。因为我们使用了箭头函数的单语句形式,所以隐含了 `return` 语句。另外,使用三元运算符允许我们省略掉重复的 `<div className="hello">` 标记。
### 三元表达式 vs &&
正如您所看到的,三元表达式用于表达 `if`/`else` 条件式非常好。但是对于简单的 `if` 条件式怎么样呢?
让我们看另一个例子。如果 `isPro`(一个布尔值)为真,我们将显示一个奖杯表情符号。我们也要渲染星星的数量(如果不是 0)。我们可以这样写。
```
const MyComponent = ({ name, isPro, stars}) => (
<div className="hello">
<div>
Hello {name}
{isPro ? '♨' : null}
</div>
{stars ? (
<div>
Stars:{'☆'.repeat(stars)}
</div>
) : null}
</div>
);
```
请注意 `else` 条件返回 `null`。这是因为三元表达式要有“否则”条件。
对于简单的 `if` 条件式,我们可以使用更合适的东西:`&&` 运算符。这是使用 `&&` 编写的相同代码。
```
const MyComponent = ({ name, isPro, stars}) => (
<div className="hello">
<div>
Hello {name}
{isPro && '♨'}
</div>
{stars && (
<div>
Stars:{'☆'.repeat(stars)}
</div>
)}
</div>
);
```
没有太多区别,但是注意我们消除了每个三元表达式最后面的 `: null` (`else` 条件式)。一切都应该像以前一样渲染。
嘿!约翰(John)这一行是怎么回事?在本不应该渲染任何内容的地方,却渲染出了一个 `0`。这就是我上面提到的陷阱。下面解释原因:
[根据 MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators),一个逻辑运算符“和”(也就是 `&&`):
>
> `expr1 && expr2`
>
>
> 如果 `expr1` 可以被转换成 `false` ,返回 `expr1`;否则返回 `expr2`。 如此,当与布尔值一起使用时,如果两个操作数都是 `true`,`&&` 返回 `true` ;否则,返回 `false`。
>
>
>
好的,在你开始拔头发之前,让我为你解释它。
在我们这个例子里, `expr1` 是变量 `stars`,它的值是 `0`,因为 0 是假值,`0` 会被返回和渲染。看,这还不算太坏。
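可以在 Node.js 或浏览器控制台里直接验证 `&&` 的这一行为。下面是一个小例子,打印的是逻辑运算本身的返回值,与 React 无关:

```javascript
// && 返回第一个假值;若第一个操作数为真值,则返回第二个操作数
console.log(0 && 'stars');   // 0 —— 0 是假值,被原样返回,React 会把它渲染出来
console.log(5 && 'stars');   // 'stars'
console.log('' && 'stars');  // ''(空字符串同样是假值)
console.log(!!0 && 'stars'); // false —— React 不会渲染 false
```

这也正是正文中 `!!stars` 技巧起作用的原因:先把操作数转成真正的布尔值,假值情况下返回的就是 `false` 而不是 `0`。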
我会简单地这么写。
>
> 如果 `expr1` 是假值,返回 `expr1` ,否则返回 `expr2`。
>
>
>
所以,当对非布尔值使用 `&&` 时,我们必须让表达式在假值的情况下返回一个 React 不会渲染的东西,比如 `false` 这个值。
我们可以通过几种方式实现这一目标。让我们试试吧。
```
{!!stars && (
<div>
{'☆'.repeat(stars)}
</div>
)}
```
注意 `stars` 前的双感叹操作符(`!!`)(呃,其实没有双感叹操作符。我们只是用了感叹操作符两次)。
第一个感叹操作符会强迫 `stars` 的值变成布尔值并且进行一次“非”操作。如果 `stars` 是 `0` ,那么 `!stars` 会是 `true`。
然后我们执行第二个`非`操作,所以如果 `stars` 是 `0`,`!!stars` 会是 `false`。正好是我们想要的。
如果你不喜欢 `!!`,那么你也可以强制转换出一个布尔值,比如这样(这种方式我觉得有点冗长)。
```
{Boolean(stars) && (
```
或者只是用比较符产生一个布尔值(有些人会说这样甚至更加语义化)。
```
{stars > 0 && (
```
#### 关于字符串
空字符串与数字有一样的毛病。但是因为渲染后的空字符串是不可见的,所以这不是那种你很可能会去处理的难题,甚至可能不会注意到它。然而,如果你是完美主义者并且不希望 DOM 上有空字符串,你应采取我们上面对数字采取的预防措施。
### 其它解决方案
一种可能的将来可扩展到其他变量的解决方案,是创建一个单独的 `shouldRenderStars` 变量。然后你用 `&&` 处理布尔值。
```
const shouldRenderStars = stars > 0;
```
```
return (
<div>
{shouldRenderStars && (
<div>
{'☆'.repeat(stars)}
</div>
)}
</div>
);
```
之后,在将来,如果业务规则要求你还需要已登录,拥有一条狗以及喝淡啤酒,你可以改变 `shouldRenderStars` 的得出方式,而返回的内容保持不变。你还可以把这个逻辑放在其它可测试的地方,并且保持渲染明晰。
```
const shouldRenderStars =
  stars > 0 && loggedIn && pet === 'dog' && beerPref === 'light';
```
```
return (
<div>
{shouldRenderStars && (
<div>
{'☆'.repeat(stars)}
</div>
)}
</div>
);
```
### 结论
我认为你应该充分利用这种语言。对于 JavaScript,这意味着为 `if/else` 条件式使用三元表达式,以及为 `if` 条件式使用 `&&` 操作符。
我们可以回到每处都使用三元运算符的舒适区,但你现在消化了这些知识和力量,可以继续前进 `&&` 取得成功了。
---
作者简介:
美国运通工程博客的执行编辑 <http://aexp.io> 以及 @AmericanExpress 的工程总监。MyViews !== ThoseOfMyEmployer.
---
via: <https://medium.freecodecamp.org/conditional-rendering-in-react-using-ternaries-and-logical-and-7807f53b6935>
作者:[Donavon West](https://medium.freecodecamp.org/@donavon) 译者:[GraveAccent](https://github.com/GraveAccent) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,091 | 如何安装并使用 Wireshark | https://www.linuxtechi.com/install-use-wireshark-debian-9-ubuntu/ | 2018-10-07T22:46:14 | [
"Wireshark",
"数据包"
] | https://linux.cn/article-10091-1.html | [](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Debian-9-Ubuntu-16.04-17.10.jpg)
Wireshark 是自由开源的、跨平台的基于 GUI 的网络数据包分析器,可用于 Linux、Windows、MacOS、Solaris 等。它可以实时捕获网络数据包,并以人性化的格式呈现。Wireshark 允许我们监控网络数据包直到其微观层面。Wireshark 还有一个名为 `tshark` 的命令行实用程序,它与 Wireshark 执行相同的功能,但它是通过终端而不是 GUI。
Wireshark 可用于网络故障排除、分析、软件和通信协议开发以及用于教育目的。Wireshark 使用 `pcap` 库来捕获网络数据包。
Wireshark 具有许多功能:
* 支持数百项协议检查
* 能够实时捕获数据包并保存,以便以后进行离线分析
* 许多用于分析数据的过滤器
* 捕获的数据可以即时压缩和解压缩
* 支持各种文件格式的数据分析,输出也可以保存为 XML、CSV 和纯文本格式
* 数据可以从以太网、wifi、蓝牙、USB、帧中继、令牌环等多个接口中捕获
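文中提到的命令行工具 `tshark` 可以在没有图形界面的服务器上完成同样的抓包和分析工作。下面是一个假设性的示例,接口名 `enp0s3` 和文件名 `capture.pcap` 都需要换成你自己的(实时抓包通常还需要 root 权限或前文配置的 dumpcap 能力):

```shell
# 在 enp0s3 接口上抓取 10 个数据包,-f 指定 pcap 捕获过滤器,只抓 TCP 流量
tshark -i enp0s3 -c 10 -f "tcp"

# 读取之前保存的抓包文件,-Y 指定显示过滤器,只显示 HTTP 流量
tshark -r capture.pcap -Y "http"
```

这里 `-f` 使用的是抓包时的捕获过滤器语法,而 `-Y` 使用的是与 Wireshark 图形界面中相同的显示过滤器语法,两者并不通用。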
在本文中,我们将讨论如何在 Ubuntu/Debian 上安装 Wireshark,并将学习如何使用 Wireshark 捕获网络数据包。
#### 在 Ubuntu 16.04 / 17.10 上安装 Wireshark
Wireshark 在 Ubuntu 默认仓库中可用,只需使用以下命令即可安装。但有可能得不到最新版本的 wireshark。
```
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```
因此,要安装最新版本的 wireshark,我们必须启用或配置官方 wireshark 仓库。
使用下面的命令来配置仓库并安装最新版本的 wireshark 实用程序。
```
linuxtechi@nixworld:~$ sudo add-apt-repository ppa:wireshark-dev/stable
linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install wireshark -y
```
一旦安装了 wireshark,执行以下命令,以便非 root 用户也可以捕获接口的实时数据包。
```
linuxtechi@nixworld:~$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
```
#### 在 Debian 9 上安装 Wireshark
Wireshark 包及其依赖项已存在于 debian 9 的默认仓库中,因此要在 Debian 9 上安装最新且稳定版本的 Wireshark,请使用以下命令:
```
linuxtechi@nixhome:~$ sudo apt-get update
linuxtechi@nixhome:~$ sudo apt-get install wireshark -y
```
在安装过程中,它会提示我们为非超级用户配置 dumpcap,
选择 `yes` 并回车。
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Configure-Wireshark-Debian9.jpg)
安装完成后,执行以下命令,以便非 root 用户也可以捕获接口的实时数据包。
```
linuxtechi@nixhome:~$ sudo chmod +x /usr/bin/dumpcap
```
我们还可以使用最新的源代码包在 Ubuntu/Debian 和其它 Linux 发行版上安装 wireshark。
#### 在 Debian / Ubuntu 系统上使用源代码安装 Wireshark
首先下载最新的源代码包(写这篇文章时它的最新版本是 2.4.2),使用以下命令:
```
linuxtechi@nixhome:~$ wget https://1.as.dl.wireshark.org/src/wireshark-2.4.2.tar.xz
```
然后解压缩包,进入解压缩的目录:
```
linuxtechi@nixhome:~$ tar -xf wireshark-2.4.2.tar.xz -C /tmp
linuxtechi@nixhome:~$ cd /tmp/wireshark-2.4.2
```
现在我们使用以下命令编译代码:
```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ ./configure --enable-setcap-install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ make
```
最后安装已编译的软件包以便在系统上安装 Wireshark:
```
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo make install
linuxtechi@nixhome:/tmp/wireshark-2.4.2$ sudo ldconfig
```
在安装后,它将创建一个单独的 Wireshark 组,我们现在将我们的用户添加到组中,以便它可以与 Wireshark 一起使用,否则在启动 wireshark 时可能会出现 “permission denied(权限被拒绝)”错误。
要将用户添加到 wireshark 组,执行以下命令:
```
linuxtechi@nixhome:~$ sudo usermod -a -G wireshark linuxtechi
```
现在我们可以使用以下命令从 GUI 菜单或终端启动 wireshark:
```
linuxtechi@nixhome:~$ wireshark
```
#### 在 Debian 9 系统上使用 Wireshark
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-debian9.jpg)
点击 Wireshark 图标。
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-debian9.jpg)
#### 在 Ubuntu 16.04 / 17.10 上使用 Wireshark
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Access-wireshark-Ubuntu.jpg)
点击 Wireshark 图标。
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Wireshark-window-Ubuntu.jpg)
#### 捕获并分析数据包
一旦 wireshark 启动,我们就会看到 wireshark 窗口,上面有 Ubuntu 和 Debian 系统的示例。
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/wireshark-Linux-system.jpg)
所有这些都是我们可以捕获网络数据包的接口。根据你系统上的接口,此屏幕可能与你的不同。
我们选择 `enp0s3` 来捕获该接口的网络流量。选择接口后,来自我们网络上所有设备的网络数据包便开始不断填充进来(参考下面的屏幕截图):
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Capturing-Packet-from-enp0s3-Ubuntu-Wireshark.jpg)
第一次看到这个屏幕,我们可能会被这个屏幕上显示的数据所淹没,并且可能已经想过如何整理这些数据,但不用担心,Wireshark 的最佳功能之一就是它的过滤器。
我们可以根据 IP 地址、端口号,也可以使用来源和目标过滤器、数据包大小等对数据进行排序和过滤,也可以将两个或多个过滤器组合在一起以创建更全面的搜索。我们也可以在 “Apply a Display Filter(应用显示过滤器)”选项卡中编写过滤规则,也可以选择已创建的规则。要选择之前构建的过滤器,请单击 “Apply a Display Filter(应用显示过滤器)”选项卡旁边的旗帜图标。
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Filter-in-wireshark-Ubuntu.jpg)
我们还可以根据颜色编码过滤数据,默认情况下,浅紫色是 TCP 流量,浅蓝色是 UDP 流量,黑色标识有错误的数据包,看看这些编码是什么意思,点击 “View -> Coloring Rules”,我们也可以改变这些编码。
[](https://www.linuxtechi.com/wp-content/uploads/2017/11/Packet-Colouring-Wireshark.jpg)
在我们得到我们需要的结果之后,我们可以点击任何捕获的数据包以获得有关该数据包的更多详细信息,这将显示该网络数据包的所有数据。
Wireshark 是一个非常强大的工具,需要一些时间来习惯并对其进行命令操作,本教程将帮助你入门。请随时在下面的评论框中提出你的疑问或建议。
---
via: <https://www.linuxtechi.com/install-use-wireshark-debian-9-ubuntu/>
作者:[Pradeep Kumar](https://www.linuxtechi.com/author/pradeep/) 译者:[MjSeven](https://github.com/MjSeven) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,092 | DevOps 实践指南 | https://opensource.com/article/18/1/getting-devops | 2018-10-07T23:22:40 | [
"DevOps"
] | https://linux.cn/article-10092-1.html |
>
> 这些技巧或许对那些想要践行 DevOps 的系统运维和开发者能有所帮助。
>
>
>

在去年大概一年的时间里,我注意到对“Devops 实践”感兴趣的开发人员和系统管理员突然有了明显的增加。这样的变化也合理:现在开发者只要花很少的钱,调用一些 API,就能单枪匹马地在一整套分布式基础设施上运行自己的应用,在这个时代,开发和运维的紧密程度前所未有。我看过许多博客和文章介绍很酷的 DevOps 工具和相关思想,但是给那些希望践行 DevOps 的人以指导和建议的内容,我却很少看到。
这篇文章的目的就是描述一下如何去实践。我的想法基于 Reddit 上 [devops](https://www.reddit.com/r/devops/) 的一些访谈、聊天和深夜讨论,还有一些随机谈话,一般都发生在享受啤酒和美食的时候。如果你已经开始这样实践,我对你的反馈很感兴趣,请通过[我的博客](https://carlosonunez.wordpress.com/)或者 [Twitter](https://twitter.com/easiestnameever) 联系我,也可以直接在下面评论。我很乐意听到你们的想法和故事。
### 古代的 IT
了解历史是搞清楚未来的关键,DevOps 也不例外。想搞清楚 DevOps 运动的普及和流行,去了解一下上世纪 90 年代后期和 21 世纪前十年 IT 的情况会有帮助。这是我的经验。
我的第一份工作是在一家大型跨国金融服务公司做 Windows 系统管理员。当时给计算资源扩容需要给 Dell 打电话(或者像我们公司那样打给 CDW),并下一个价值数十万美元的订单,包含服务器、网络设备、电缆和软件,所有这些都要运到生产或线下的数据中心去。虽然 VMware 仍在尝试说服企业使用虚拟机运行他们的“性能敏感”型程序是更划算的,但是包括我们在内的很多公司都还是愿意使用他们的物理机运行应用。
在我们技术部门,有一个专门做数据中心工程和运营的团队,他们的工作包括价格谈判,让荒唐的月租能够降一点点,还包括保证我们的系统能够正常冷却(如果设备太多,这个事情的难度会呈指数增长)。如果这个团队足够幸运足够有钱,境外数据中心的工作人员对我们所有的服务器型号又都有足够的了解,就能避免在盘后交易中不小心搞错东西。那时候亚马逊 AWS 和 Rackspace 逐渐开始加速扩张,但还远远没到临界规模。
当时我们还有专门的团队来保证硬件上运行着的操作系统和软件能够按照预期工作。这些工程师负责设计可靠的架构以方便给系统打补丁、监控和报警,还要定义<ruby> 基础镜像 <rt> gold image </rt></ruby>的内容。这些大都是通过很多手工实验完成的,很多手工实验是为了编写一个<ruby> 运行说明书 <rt> runbook </rt></ruby>来描述要做的事情,并确保按照它执行后的结果确实在预期内。在我们这么大的组织里,这样做很重要,因为一线和二线的技术支持都是境外的,而他们的培训内容只覆盖到了这些运行说明而已。
(这是我职业生涯前三年的世界。我那时候的梦想是成为制定最高标准的人!)
软件发布则完全是另外一头怪兽。无可否认,我在这方面并没有积累太多经验。但是,从我收集的故事(和最近的经历)来看,当时大部分软件开发的日常大概是这样:
* 开发人员按照技术和功能需求来编写代码,这些需求来自于业务分析人员的会议,但是会议并没有邀请开发人员参加。
* 开发人员可以选择为他们的代码编写单元测试,以确保在代码里没有任何明显的疯狂行为,比如除以 0 但不抛出异常。
* 然后开发者会把他们的代码标记为 “Ready for QA”(准备好了接受测试),质量保障的成员会把这个版本的代码发布到他们自己的环境中,这个环境和生产环境可能相似,也可能不,甚至和开发环境相比也不一定相似。
* 故障会在几天或者几个星期内反馈到开发人员那里,这个时长取决于其它业务活动和优先事项。
虽然系统管理员和开发人员经常有不一致的意见,但是对“变更管理”却一致痛恨。变更管理由高度规范的(就我当时的雇主而言)和非常必要的规则和程序组成,用来管理一家公司应该什么时候做技术变更,以及如何做。很多公司都按照 [ITIL](https://en.wikipedia.org/wiki/ITIL) 来操作,简单的说,ITIL 问了很多和事情发生的原因、时间、地点和方式相关的问题,而且提供了一个过程,对产生最终答案的决定做审计跟踪。
你可能从我的简短历史课上了解到,当时 IT 的很多很多事情都是手工完成的。这导致了很多错误。错误又导致了很多财产损失。变更管理的工作就是尽量减少这些损失,它常常以这样的形式出现:不管变更的影响和规模大小,每两周才能发布部署一次。周五下午 4 点到周一早上 5 点 59 分这段时间,需要排队等候发布窗口。(讽刺的是,这种流程导致了更多错误,通常还是更严重的那种错误)
### DevOps 不是专家团
你可能在想 “Carlos 你在讲啥啊,什么时候才能说到 Ansible playbooks?”,我喜欢 Ansible,但是请稍等 —— 下面这些很重要。
你有没有过被分配到需要跟 DevOps 小组打交道的项目?你有没有依赖过“配置管理”或者“持续集成/持续交付”小组来保证业务流水线设置正确?你有没有在代码开发完的数周之后才参加发布部署的会议?
如果有过,那么你就是在重温历史,这个历史是由上面所有这些导致的。
出于本能,我们喜欢和像自己的人一起工作,这会导致[壁垒](https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo)的形成。很自然,这种人类特质也会在工作场所表现出来是不足为奇的。我甚至在曾经工作过的一个 250 人的创业公司里见到过这样的现象。刚开始的时候,开发人员都在聚在一起工作,彼此深度协作。随着代码变得复杂,开发相同功能的人自然就坐到了一起,解决他们自己的复杂问题。然后按功能划分的小组很快就正式形成了。
在我工作过的很多公司里,系统管理员和开发人员不仅像这样形成了天然的壁垒,而且彼此还有激烈的对抗。开发人员的环境出问题了或者他们的权限太小了,就会对系统管理员很恼火。系统管理员怪开发人员无时无刻地在用各种方式破坏他们的环境,怪开发人员申请的计算资源严重超过他们的需要。双方都不理解对方,更糟糕的是,双方都不愿意去理解对方。
大部分开发人员对操作系统,内核或计算机硬件都不感兴趣。同样,大部分系统管理员,即使是 Linux 的系统管理员,也都不愿意学习编写代码,他们在大学期间学过一些 C 语言,然后就痛恨它,并且永远都不想再碰 IDE。所以,开发人员把运行环境的问题甩给围墙外的系统管理员,系统管理员把这些问题和甩过来的其它上百个问题放在一起安排优先级。每个人都忙于怨恨对方。DevOps 的目的就是解决这种矛盾。
DevOps 不是一个团队,CI/CD 也不是 JIRA 系统的一个用户组。DevOps 是一种思考方式。根据这个运动来看,在理想的世界里,开发人员、系统管理员和业务相关人将作为一个团队工作。虽然他们可能不完全了解彼此的世界,可能没有足够的知识去了解彼此的积压任务,但他们在大多数情况下能有一致的看法。
把所有基础设施和业务逻辑都代码化,再串到一个发布部署流水线里,就像是运行在这之上的应用一样。这个理念的基础就是 DevOps。因为大家都理解彼此,所以人人都是赢家。聊天机器人和易用的监控工具、可视化工具的兴起,背后的基础也是 DevOps。
[Adam Jacob](https://twitter.com/adamhjk/status/572832185461428224) 说的最好:“DevOps 就是企业往软件导向型过渡时我们用来描述操作的词。”
### 要实践 DevOps 我需要知道些什么
我经常被问到这个问题,它的答案和同属于开放式的其它大部分问题一样:视情况而定。
现在“DevOps 工程师”在不同的公司有不同的含义。在软件开发人员比较多但是很少有人懂基础设施的小公司,他们很可能是在找有更多系统管理经验的人。而其他公司,通常是大公司或老公司,已经有一个稳固的系统管理团队了,他们在向类似于谷歌 [SRE](https://landing.google.com/sre/interview/ben-treynor.html) 的方向做优化,也就是“设计运维功能的软件工程师”。但是,这并不是金科玉律,就像其它技术类工作一样,这个决定很大程度上取决于他的招聘经理。
也就是说,我们一般是在找对深入学习以下内容感兴趣的工程师:
* 如何管理和设计安全、可扩展的云平台(通常是在 AWS 上,不过微软的 Azure、Google Cloud Platform,还有 DigitalOcean 和 Heroku 这样的 PaaS 提供商,也都很流行)。
* 如何用流行的 [CI/CD](https://en.wikipedia.org/wiki/CI/CD) 工具,比如 Jenkins、GoCD,还有基于云的 Travis CI 或者 CircleCI,来构造一条优化的发布部署流水线和发布部署策略。
* 如何在你的系统中使用基于时间序列的工具,比如 Kibana、Grafana、Splunk、Loggly 或者 Logstash 来监控、记录,并在变化的时候报警。
* 如何使用配置管理工具,例如 Chef、Puppet 或者 Ansible 做到“基础设施即代码”,以及如何使用像 Terraform 或 CloudFormation 的工具发布这些基础设施。
容器也变得越来越受欢迎。尽管有人对大规模使用 Docker 的现状[表示不满](https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/),但容器正迅速地成为一种很好的方式来实现在更少的操作系统上运行超高密度的服务和应用,同时提高它们的可靠性。(像 Kubernetes 或者 Mesos 这样的容器编排工具,能在宿主机故障的时候,几秒钟之内重新启动新的容器。)考虑到这些,掌握 Docker 或者 rkt 以及容器编排平台的知识会对你大有帮助。
如果你是希望做 DevOps 实践的系统管理员,你还需要知道如何写代码。Python 和 Ruby 是 DevOps 领域的流行语言,因为它们是可移植的(也就是说可以在任何操作系统上运行)、快速的,而且易读易学。它们还支撑着这个行业最流行的配置管理工具(Ansible 是使用 Python 写的,Chef 和 Puppet 是使用 Ruby 写的)以及云平台的 API 客户端(亚马逊 AWS、微软 Azure、Google Cloud Platform 的客户端通常会提供 Python 和 Ruby 语言的版本)。
如果你是开发人员,也希望做 DevOps 的实践,我强烈建议你去学习 Unix、Windows 操作系统以及网络基础知识。虽然云计算把很多系统管理的难题抽象化了,但是对应用的性能做调试的时候,如果你知道操作系统如何工作的就会有很大的帮助。下文包含了一些这个主题的图书。
如果你觉得这些东西听起来内容太多,没关系,大家都是这么想的。幸运的是,有很多小项目可以让你开始探索。其中一个项目是 Gary Stafford 的[选举服务](https://github.com/garystafford/voter-service),一个基于 Java 的简单投票平台。我们要求面试候选人通过一个流水线将该服务从 GitHub 部署到生产环境基础设施上。你可以把这个服务与 Rob Mile 写的了不起的 DevOps [入门教程](https://github.com/maxamg/cd-office-hours)结合起来学习。
还有一个熟悉这些工具的好方法,找一个流行的服务,然后只使用 AWS 和配置管理工具来搭建这个服务所需要的基础设施。第一次先手动搭建,了解清楚要做的事情,然后只用 CloudFormation(或者 Terraform)和 Ansible 重写刚才的手动操作。令人惊讶的是,这就是我们基础设施开发人员为客户所做的大部分日常工作,我们的客户认为这样的工作非常有意义!
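作为起步,可以先用 Ansible 的临时命令(ad-hoc)体验一下“基础设施即代码”的感觉。下面是一个假设性的最小示例,主机地址和用户名均为示意,需要你本机已安装 Ansible 并能通过 SSH 登录目标主机:

```shell
# 写一个最小的 inventory 文件(主机地址与用户名均为示意)
cat > inventory.ini <<'EOF'
[web]
web1.example.com ansible_user=deploy
EOF

# 若已安装 Ansible,则用 ping 模块验证到 web 组主机的连通性
if command -v ansible >/dev/null 2>&1; then
    ansible -i inventory.ini web -m ping
fi
```

在此基础上再把临时命令逐步沉淀为 playbook,就是上文所说“先手动、后代码化”过程的一个缩影。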
### 需要读的书
如果你在找 DevOps 的其它资源,下面这些理论和技术书籍值得一读。
#### 理论书籍
* Gene Kim 写的 《<ruby> <a href="https://itrevolution.com/book/the-phoenix-project/"> 凤凰项目 </a> <rt> The Phoenix Project </rt></ruby>》。这是一本很不错的书,内容涵盖了我上文解释过的历史(写的更生动形象),描述了一个运行在敏捷和 DevOps 之上的公司向精益前进的过程。
* Terrance Ryan 写的 《<ruby> <a href="https://pragprog.com/book/trevan/driving-technical-change"> 布道之道 </a> <rt> Driving Technical Change </rt></ruby>》。非常好的一小本书,讲了大多数技术型组织内的常见性格特点以及如何和他们打交道。这本书对我的帮助比我想象的更多。
* Tom DeMarco 和 Tim Lister 合著的 《<ruby> <a href="https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams"> 人件 </a> <rt> Peopleware </rt></ruby>》。管理工程师团队的经典图书,有一点过时,但仍然很有价值。
* Tom Limoncelli 写的 《<ruby> <a href="http://shop.oreilly.com/product/9780596007836.do"> 时间管理:给系统管理员 </a> <rt> Time Management for System Administrators </rt></ruby>》。这本书主要面向系统管理员,它对很多大型组织内的系统管理员生活做了深入的展示。如果你想了解更多系统管理员和开发人员之间的冲突,这本书可能解释了更多。
* Eric Ries 写的 《<ruby> <a href="http://theleanstartup.com/"> 精益创业 </a> <rt> The Lean Startup </rt></ruby>》。描述了 Eric 自己的 3D 虚拟形象公司,IMVU,发现了如何精益工作,快速失败和更快盈利。
* Jez Humble 和他的朋友写的 《<ruby> <a href="https://info.thoughtworks.com/lean-enterprise-book.html"> 精益企业 </a> <rt> Lean Enterprise </rt></ruby>》。这本书是对精益创业做的改编,以更适应企业,两本书都很棒,都很好地解释了 DevOps 背后的商业动机。
* Kief Morris 写的 《<ruby> <a href="http://infrastructure-as-code.com/book/"> 基础设施即代码 </a> <rt> Infrastructure As Code </rt></ruby>》。关于“基础设施即代码”的非常好的入门读物!很好的解释了为什么所有公司都有必要采纳这种做法。
* Betsy Beyer、Chris Jones、Jennifer Petoff 和 Niall Richard Murphy 合著的 《<ruby> <a href="https://landing.google.com/sre/book.html"> 站点可靠性工程师 </a> <rt> Site Reliability Engineering </rt></ruby>》。一本解释谷歌 SRE 实践的书,也因为是“DevOps 诞生之前的 DevOps”被人熟知。在如何处理运行时间、时延和保持工程师快乐方面提供了有意思的看法。
#### 技术书籍
如果你想找的是让你直接跟代码打交道的书,看这里就对了。
* W. Richard Stevens 的 《<ruby> <a href="https://en.wikipedia.org/wiki/TCP/IP_Illustrated"> TCP/IP 详解 </a> <rt> TCP/IP Illustrated </rt></ruby>》。这是一套经典的(也可以说是最全面的)讲解网络协议基础的巨著,重点介绍了 TCP/IP 协议族。如果你听说过 1、2、3、4 层网络,而且对深入学习它们感兴趣,那么你需要这本书。
* Evi Nemeth、Trent Hein 和 Ben Whaley 合著的 《<ruby> <a href="http://www.admin.com/"> UNIX/Linux 系统管理员手册 </a> <rt> UNIX and Linux System Administration Handbook </rt></ruby>》。一本很好的入门书,介绍 Linux/Unix 如何工作以及如何使用。
* Don Jones 和 Jeffrey Hicks 合著的 《<ruby> <a href="https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition"> Windows PowerShell 实战指南 </a> <rt> Learn Windows Powershell In A Month of Lunches </rt></ruby>》。如果你在 Windows 系统下做自动化任务,你需要学习怎么使用 Powershell。这本书能够帮助你。Don Jones 是这方面著名的 MVP。
* 几乎所有 [James Turnbull](https://jamesturnbull.net/) 写的东西,针对流行的 DevOps 工具,他发表了很好的技术入门读物。
不管是在那些把所有应用都直接部署在物理机上的公司,(现在很多公司仍然有充分的理由这样做)还是在那些把所有应用都做成 serverless 的先驱公司,DevOps 都很可能会持续下去。这部分工作很有趣,产出也很有影响力,而且最重要的是,它搭起桥梁衔接了技术和业务之间的缺口。DevOps 是一个值得期待的美好事物。
首次发表在 [Neurons Firing on a Keyboard](https://carlosonunez.wordpress.com/2017/03/02/getting-into-devops/)。使用 CC-BY-SA 协议。
---
via: <https://opensource.com/article/18/1/getting-devops>
作者:[Carlos Nunez](https://opensource.com/users/carlosonunez) 译者:[belitex](https://github.com/belitex) 校对:[pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | I've observed a sharp uptick of developers and systems administrators interested in "getting into DevOps" within the past year or so. This pattern makes sense: In an age in which a single developer can spin up a globally distributed infrastructure for an application with a few dollars and a few API calls, the gap between development and systems administration is closer than ever. Although I've seen plenty of blog posts and articles about cool DevOps tools and thoughts to think about, I've seen fewer content on pointers and suggestions for people looking to get into this work.
My goal with this article is to draw what that path looks like. My thoughts are based upon several interviews, chats, late-night discussions on [reddit.com/r/devops](https://www.reddit.com/r/devops/), and random conversations, likely over beer and delicious food. I'm also interested in hearing feedback from those who have made the jump; if you have, please reach out through [my blog](https://carlosonunez.wordpress.com/), [Twitter](https://twitter.com/easiestnameever), or in the comments below. I'd love to hear your thoughts and stories.
## Olde world IT
Understanding history is key to understanding the future, and DevOps is no exception. To understand the pervasiveness and popularity of the DevOps movement, understanding what IT was like in the late '90s and most of the '00s is helpful. This was my experience.
I started my career in late 2006 as a Windows systems administrator in a large, multi-national financial services firm. In those days, adding new compute involved calling Dell (or, in our case, CDW) and placing a multi-hundred-thousand-dollar order of servers, networking equipment, cables, and software, all destined for your on- and offsite datacenters. Although VMware was still convincing companies that using virtual machines was, indeed, a cost-effective way of hosting its "performance-sensitive" application, many companies, including mine, pledged allegiance to running applications on their physical hardware.
Our technology department had an entire group dedicated to datacenter engineering and operations, and its job was to negotiate our leasing rates down to some slightly less absurd monthly rate and ensure that our systems were being cooled properly (an exponentially difficult problem if you have enough equipment). If the group was lucky/wealthy enough, the offshore datacenter crew knew enough about all of our server models to not accidentally pull the wrong thing during after-hours trading. Amazon Web Services and Rackspace were slowly beginning to pick up steam, but were far from critical mass.
In those days, we also had teams dedicated to ensuring that the operating systems and software running on top of that hardware worked when they were supposed to. The engineers were responsible for designing reliable architectures for patching, monitoring, and alerting these systems as well as defining what the "gold image" looked like. Most of this work was done with a lot of manual experimentation, and the extent of most tests was writing a runbook describing what you did, and ensuring that what you did actually did what you expected it to do after following said runbook. This was important in a large organization like ours, since most of the level 1 and 2 support was offshore, and the extent of their training ended with those runbooks.
(This is the world that your author lived in for the first three years of his career. My dream back then was to be the one who made the gold standard!)
Software releases were another beast altogether. Admittedly, I didn't gain a lot of experience working on this side of the fence. However, from stories that I've gathered (and recent experience), much of the daily grind for software development during this time went something like this:
- Developers wrote code as specified by the technical and functional requirements laid out by business analysts from meetings they weren't invited to.
- Optionally, developers wrote unit tests for their code to ensure that it didn't do anything obviously crazy, like try to divide over zero without throwing an exception.
- When done, developers would mark their code as "Ready for QA." A quality assurance person would pick up the code and run it in their own environment, which might or might not be like production or even the environment the developer used to test their own code against.
- Failures would get sent back to the developers within "a few days or weeks" depending on other business activities and where priorities fell.
Although sysadmins and developers didn't often see eye to eye, the one thing they shared a common hatred for was "change management." This was a composition of highly regulated (and in the case of my employer at the time), highly necessary rules and procedures governing when and how technical changes happened in a company. Most companies followed [ITIL](https://en.wikipedia.org/wiki/ITIL) practices, which, in a nutshell, asked a lot of questions around why, when, where, and how things happened and provided a process for establishing an audit trail of the decisions that led up to those answers.
## DevOps isn't a Tiger Team
You might be thinking "What is Carlos going on about, and when is he going to talk about Ansible playbooks?" I love Ansible tons, but hang on; this is important.
Have you ever been assigned to a project where you had to interact with the "DevOps" team? Or did you have to rely on a "configuration management" or "CI/CD" team to ensure your pipeline was set up properly? Have you had to attend meetings about your release and what it pertains to—weeks after the work was marked "code complete"?
If so, then you're reliving history. All of that comes from all of the above.
[Silos form](https://www.psychologytoday.com/blog/time-out/201401/getting-out-your-silo) out of an instinctual draw to working with people like ourselves. Naturally, it's no surprise that this human trait also manifests in the workplace. I even saw this play out at a 250-person startup where I used to work. When I started, developers all worked in common pods and collaborated heavily with each other. As the codebase grew in complexity, developers who worked on common features naturally aligned with each other to try and tackle the complexity within their own feature. Soon afterwards, feature teams were officially formed.
Sysadmins and developers at many of the companies I worked at not only formed natural silos like this, but also fiercely competed with each other. Developers were mad at sysadmins when their environments were broken. Developers were mad at sysadmins when their environments were too locked down. Sysadmins were mad that developers were breaking their environments in arbitrary ways all of the time. Sysadmins were mad at developers for asking for way more computing power than they needed. Neither side understood each other, and worse yet, neither side wanted to.
Most developers were uninterested in the basics of operating systems, kernels, or, in some cases, computer hardware. As well, most sysadmins, even Linux sysadmins, took a 10-foot pole approach to learning how to code. They tried a bit of C in college, hated it and never wanted to touch an IDE again. Consequently, developers threw their environment problems over the wall to sysadmins, sysadmins prioritized them with the hundreds of other things that were thrown over the wall to them, and everyone busy-waited angrily while hating each other. The purpose of DevOps was to put an end to this.
DevOps isn't a team. CI/CD isn't a group in Jira. DevOps is a way of thinking. According to the movement, in an ideal world, developers, sysadmins, and business stakeholders would be working as one team. While they might not know everything about each other's worlds, not only do they all know enough to understand each other and their backlogs, but they can, for the most part, speak the same language.
This is the basis behind having all infrastructure and business logic be in code and subject to the same deployment pipelines as the software that sits on top of it. Everybody is winning because everyone understands each other. This is also the basis behind the rise of other tools like chatbots and easily accessible monitoring and graphing. [Adam Jacob said](https://twitter.com/adamhjk/status/572832185461428224) it best: "DevOps is the word we will use to describe the operational side of the transition to enterprises being software led."
## What do I need to know to get into DevOps?
I'm commonly asked this question, and the answer, like most open-ended questions like this, is: It depends.
At the moment, the "DevOps engineer" varies from company to company. Smaller companies that have plenty of software developers but fewer folks that understand infrastructure will likely look for people with more experience administrating systems. Other, usually larger and/or older companies that have a solid sysadmin organization will likely optimize for something closer to a [Google site reliability engineer](https://landing.google.com/sre/interview/ben-treynor.html), i.e. "a software engineer to design an operations function." This isn't written in stone, however, as, like any technology job, the decision largely depends on the hiring manager sponsoring it.
That said, we typically look for engineers who are interested in learning more about:
- How to administrate and architect secure and scalable cloud platforms (usually on AWS, but Azure, Google Cloud Platform, and PaaS providers like DigitalOcean and Heroku are popular too);
- How to build and optimize deployment pipelines and deployment strategies on popular [CI/CD](https://en.wikipedia.org/wiki/CI/CD) tools like Jenkins, Go continuous delivery, and cloud-based ones like Travis CI or CircleCI;
- How to monitor, log, and alert on changes in your system with timeseries-based tools like Kibana, Grafana, Splunk, Loggly, or Logstash; and
- How to maintain infrastructure as code with configuration management tools like Chef, Puppet, or Ansible, as well as deploy said infrastructure with tools like Terraform or CloudFormation.
Containers are becoming increasingly popular as well. Despite the [beef against the status quo](https://thehftguy.com/2016/11/01/docker-in-production-an-history-of-failure/) surrounding Docker at scale, containers are quickly becoming a great way of achieving an extremely high density of services and applications running on fewer systems while increasing their reliability. (Orchestration tools like Kubernetes or Mesos can spin up new containers in seconds if the host they're being served by fails.) Given this, having knowledge of Docker or rkt and an orchestration platform will go a long way.
If you're a systems administrator that's looking to get into DevOps, you will also need to know how to write code. Python and Ruby are popular languages for this purpose, as they are portable (i.e., can be used on any operating system), fast, and easy to read and learn. They also form the underpinnings of the industry's most popular configuration management tools (Python for Ansible, Ruby for Chef and Puppet) and cloud API clients (Python and Ruby are commonly used for AWS, Azure, and Google Cloud Platform clients).
If you're a developer looking to make this change, I highly recommend learning more about Unix, Windows, and networking fundamentals. Even though the cloud abstracts away many of the complications of administrating a system, debugging slow application performance is aided greatly by knowing how these things work. I've included a few books on this topic in the next section.
If this sounds overwhelming, you aren't alone. Fortunately, there are plenty of small projects to dip your feet into. One such toy project is Gary Stafford's Voter Service, a simple Java-based voting platform. We ask our candidates to take the service from GitHub to production infrastructure through a pipeline. One can combine that with Rob Mile's awesome DevOps Tutorial repository to learn about ways of doing this.
Another great way of becoming familiar with these tools is taking popular services and setting up an infrastructure for them using nothing but AWS and configuration management. Set it up manually first to get a good idea of what to do, then replicate what you just did using nothing but CloudFormation (or Terraform) and Ansible. Surprisingly, this is a large part of the work that we infrastructure devs do for our clients on a daily basis. Our clients find this work to be highly valuable!
## Books to read
If you're looking for other resources on DevOps, here are some theory and technical books that are worth a read.
### Theory books
- [The Phoenix Project](https://itrevolution.com/book/the-phoenix-project/) by Gene Kim. This is a great book that covers much of the history I explained earlier (with much more color) and describes the journey to a lean company running on agile and DevOps.
- [Driving Technical Change](https://pragprog.com/book/trevan/driving-technical-change) by Terrance Ryan. Awesome little book on common personalities within most technology organizations and how to deal with them. This helped me out more than I expected.
- [Peopleware](https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams) by Tom DeMarco and Tim Lister. A classic on managing engineering organizations. A bit dated, but still relevant.
- [Time Management for System Administrators](http://shop.oreilly.com/product/9780596007836.do) by Tom Limoncelli. While this is heavily geared towards sysadmins, it provides great insight into the life of a systems administrator at most large organizations. If you want to learn more about the war between sysadmins and developers, this book might explain more.
- [The Lean Startup](http://theleanstartup.com/) by Eric Ries. Describes how Eric's 3D avatar company, IMVU, discovered how to work lean, fail fast, and find profit faster.
- [Lean Enterprise](https://info.thoughtworks.com/lean-enterprise-book.html) by Jez Humble and friends. This book is an adaptation of *The Lean Startup* for the enterprise. Both are great reads and do a good job of explaining the business motivation behind DevOps.
- [Infrastructure As Code](http://infrastructure-as-code.com/book/) by Kief Morris. Awesome primer on, well, infrastructure as code! It does a great job of describing why it's essential for any business to adopt this for their infrastructure.
- [Site Reliability Engineering](https://landing.google.com/sre/book.html) by Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy. A book explaining how Google does SRE, also known as "DevOps before DevOps was a thing." Provides interesting opinions on how to handle uptime, latency, and keeping engineers happy.
### Technical books
If you're looking for books that'll take you straight to code, you've come to the right section.
- [TCP/IP Illustrated](https://en.wikipedia.org/wiki/TCP/IP_Illustrated) by the late W. Richard Stevens. This is the classic (and, arguably, complete) tome on the fundamental networking protocols, with special emphasis on TCP/IP. If you've heard of Layers 1, 2, 3, and 4 and are interested in learning more, you'll need this book.
- [UNIX and Linux System Administration Handbook](http://www.admin.com/) by Evi Nemeth, Trent Hein, and Ben Whaley. A great primer into how Linux and Unix work and how to navigate around them.
- [Learn Windows Powershell In A Month of Lunches](https://www.manning.com/books/learn-windows-powershell-in-a-month-of-lunches-third-edition) by Don Jones and Jeffrey Hicks. If you're doing anything automated with Windows, you will need to learn how to use Powershell. This is the book that will help you do that. Don Jones is a well-known MVP in this space.
- Practically anything by [James Turnbull](https://jamesturnbull.net/). He puts out great technical primers on popular DevOps-related tools.
From companies deploying everything to bare metal (there are plenty that still do, for good reasons) to trailblazers doing everything serverless, DevOps is likely here to stay for a while. The work is interesting, the results are impactful, and, most important, it helps bridge the gap between technology and business. It's a wonderful thing to see.
*Originally published at Neurons Firing on a Keyboard, CC-BY-SA.*
|
10,093 | 从过时的 Windows 机器迁移到 Linux | https://opensource.com/article/18/1/move-to-linux-old-windows | 2018-10-08T22:03:19 | [
"Windows"
] | https://linux.cn/article-10093-1.html |
>
> 这是一个当老旧的 Windows 机器退役时,决定迁移到 Linux 的故事。
>
>
>

我在 ONLYOFFICE 的市场部门工作的每一天,我都能看到 Linux 用户在网上讨论我们的办公软件。我们的产品在 Linux 用户中很受欢迎,这使得我对使用 Linux 作为日常工具的体验非常好奇。我的老旧的 Windows XP 机器在性能上非常差,因此我决定了解 Linux 系统(特别是 Ubuntu)并且决定去尝试使用它。我的两个同事也加入了我的计划。
### 为何选择 Linux ?
我们必须做出改变,首先,我们的老系统在性能方面不够用:我们经历过频繁的崩溃,每当运行超过两个应用时,机器就会负载过度,关闭机器时有一半的几率冻结等等。这很容易让我们从工作中分心,意味着我们没有我们应有的工作效率了。
升级到 Windows 的新版本也是一种选择,但这样可能会带来额外的开销,而且我们的软件本身也是要与 Microsoft 的办公软件竞争。因此我们在这方面也存在意识形态的问题。
其次,就像我之前提过的, ONLYOFFICE 产品在 Linux 社区内非常受欢迎。通过阅读 Linux 用户在使用我们的软件时的体验,我们也对加入他们很感兴趣。
在我们要求转换到 Linux 系统一周后,我们拿到了崭新的装好了 [Kubuntu](https://kubuntu.org/) 的机器。我们选择了 16.04 版本,因为这个版本支持 KDE Plasma 5.5 和包括 Dolphin 在内的很多 KDE 应用,同时也包括 LibreOffice 5.1 和 Firefox 45 。
### Linux 让人喜欢的地方
我相信 Linux 最大的优势是它的运行速度,比如,从按下机器的电源按钮到开始工作只需要几秒钟时间。从一开始,一切看起来都超乎寻常地快:总体的响应速度,图形界面,甚至包括系统更新的速度。
另一个使我惊奇的事情是跟 Windows 相比, Linux 几乎能让你配置任何东西,包括整个桌面的外观。在设置里面,我发现了如何修改各种栏目、按钮和字体的颜色和形状,也可以重新布置任意桌面组件的位置,组合桌面小工具(甚至包括漫画和颜色选择器)。我相信我还仅仅只是了解了基本的选项,之后还需要探索这个系统更多著名的定制化选项。
Linux 发行版通常是一个非常安全的环境。人们很少在 Linux 系统中使用防病毒的软件,因为很少有人会写病毒程序来攻击 Linux 系统。因此你可以拥有很好的系统速度,并且节省了时间和金钱。
总之, Linux 已经改变了我们的日常生活,用一系列的新选项和功能大大震惊了我们。仅仅通过短时间的使用,我们已经可以给它总结出以下特性:
* 操作很快很顺畅
* 高度可定制
* 对新手很友好
* 了解基本组件很有挑战性,但回报丰厚
* 安全可靠
* 对所有想改变工作场所的人来说都是一次绝佳的体验
你已经从 Windows 或 MacOS 系统换到 Kubuntu 或其他 Linux 变种了么?或者你是否正在考虑做出改变?请分享你想要采用 Linux 系统的原因,连同你对开源的印象一起写在评论中。
---
via: <https://opensource.com/article/18/1/move-to-linux-old-windows>
作者:[Michael Korotaev](https://opensource.com/users/michaelk) 译者:[bookug](https://github.com/bookug) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Every day, while working in the marketing department at ONLYOFFICE, I see Linux users discussing our office productivity software on the internet. Our products are popular among Linux users, which made me curious about using Linux as an everyday work tool. My old Windows XP-powered computer was an obstacle to performance, so I started reading about Linux systems (particularly Ubuntu) and decided to try it out as an experiment. Two of my colleagues joined me.
## Why Linux?
We needed to make a change, first, because our old systems were not enough in terms of performance: we experienced regular crashes, an overload every time more than two apps were active, a 50% chance of freezing when a machine was shut down, and so forth. This was rather distracting to our work, which meant we were considerably less efficient than we could be.
Upgrading to newer versions of Windows was an option, too, but that is an additional expense, plus our software competes against Microsoft's office suite. So that was an ideological question, too.
Second, as I mentioned earlier, ONLYOFFICE products are rather popular within the Linux community. By reading about Linux users' experience with our software, we became interested in joining them.
A week after we asked to change to Linux, we got our shiny new computer cases with [Kubuntu](https://kubuntu.org/) inside. We chose version 16.04, which features KDE Plasma 5.5 and many KDE apps including Dolphin, as well as LibreOffice 5.1 and Firefox 45.
## What we like about Linux
Linux's biggest advantage, I believe, is its speed; for instance, it takes just seconds from pushing the machine's On button to starting your work. Everything seemed amazingly rapid from the very beginning: the overall responsiveness, the graphics, and even system updates.
One other thing that surprised me compared to Windows is that Linux allows you to configure nearly everything, including the entire look of your desktop. In Settings, I found how to change the color and shape of bars, buttons, and fonts; relocate any desktop element; and build a composition of widgets, even including comics and Color Picker. I believe I've barely scratched the surface of the available options and have yet to explore most of the customization opportunities that this system is well known for.
Linux distributions are generally a very safe environment. People rarely use antivirus apps in Linux, simply because there are so few viruses written for it. You save system speed, time, and, sure enough, money.
In general, Linux has refreshed our everyday work lives, surprising us with a number of new options and opportunities. Even in the short time we've been using it, we'd characterize it as:
- Fast and smooth to operate
- Highly customizable
- Relatively newcomer-friendly
- Challenging with basic components, however very rewarding in return
- Safe and secure
- An exciting experience for everyone who seeks to refresh their workplace
Have you switched from Windows or MacOS to Kubuntu or another Linux variant? Or are you considering making the change? Please share your reasons for wanting to adopt Linux, as well as your impressions of going open source, in the comments.
|
10,094 | Linux 开发的五大必备工具 | https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development | 2018-10-08T22:47:05 | [
"开发工具"
] | https://linux.cn/article-10094-1.html |
>
> Linux 上的开发工具如此之多,以至于会担心找不到恰好适合你的。
>
>
>

Linux 已经成为工作、娱乐和个人生活等多个领域的支柱,人们已经越来越离不开它。在 Linux 的帮助下,技术的变革速度超出了人们的想象,Linux 开发的速度也以指数规模增长。因此,越来越多的开发者也不断地加入开源和学习 Linux 开发的潮流当中。在这个过程之中,合适的工具是必不可少的,可喜的是,随着 Linux 的发展,大量适用于 Linux 的开发工具也不断成熟。甚至可以说,这样的工具已经多得有点惊人。
为了选择更合适自己的开发工具,缩小选择范围是很必要的。但是这篇文章并不会要求你必须使用某个工具,而只是缩小到五个工具类别,然后对每个类别提供一个例子。然而,对于大多数类别,都会有不止一种选择。下面我们来看一下。
### 容器
放眼于现实,现在已经是容器的时代了。容器既极其容易部署,又可以方便地构建开发环境。如果你针对的是特定的平台的开发,将开发流程所需要的各种工具都创建到容器映像中是一种很好的方法,只要使用这一个容器映像,就能够快速启动大量运行所需服务的实例。
一个使用容器的最佳范例是使用 [Docker](https://www.docker.com/),使用容器(或 Docker)有这些好处:
* 开发环境保持一致
* 部署后即可运行
* 易于跨平台部署
* Docker 映像适用于多种开发环境和语言
* 部署单个容器或容器集群都并不繁琐
通过 [Docker Hub](https://hub.docker.com/),几乎可以找到适用于任何平台、任何开发环境、任何服务器、任何服务的映像,几乎可以满足任何一种需求。使用 Docker Hub 中的映像,就相当于免除了搭建开发环境的步骤,可以直接开始开发应用程序、服务器、API 或服务。
Docker 在所有 Linux 平台上都很容易安装,例如可以通过终端输入以下命令在 Ubuntu 上安装 Docker:
```
sudo apt-get install docker.io
```
Docker 安装完毕后,就可以从 Docker 仓库中拉取映像,然后开始开发和部署了(如下图)。

*图 1: Docker 镜像准备部署*
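除了拉取现成的映像,也可以自己编写 Dockerfile 来定义开发环境。下面是一个假设性的最小示意(基础镜像的版本号和软件包列表仅为示例,并非某个项目的实际配置):

```
# 基于 Ubuntu 基础镜像(版本号为示例)
FROM ubuntu:18.04

# 安装常用的编译与版本控制工具,并清理缓存以减小镜像体积
RUN apt-get update && apt-get install -y \
    build-essential \
    git \
 && rm -rf /var/lib/apt/lists/*

# 将工作目录设为 /app,默认进入交互式 shell
WORKDIR /app
CMD ["bash"]
```

将它保存为 `Dockerfile` 后,可以用 `docker build -t dev-env .` 构建映像,再用 `docker run -it dev-env` 启动一个交互式的开发容器。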
### 版本控制工具
如果你正在开发一个大型项目,又或者参与团队开发,版本控制工具是必不可少的,它可以用于记录代码变更、提交代码以及合并代码。如果没有这样的工具,项目几乎无法妥善管理。在 Linux 系统上,[Git](https://git-scm.com/) 和 [GitHub](https://github.com/) 的易用性和流行程度是其它版本控制工具无法比拟的。如果你对 Git 和 GitHub 还不太熟悉,可以简单理解为 Git 是在本地计算机上安装的版本控制系统,而 GitHub 则是用于上传和管理项目的远程存储库。 Git 可以安装在大多数的 Linux 发行版上。例如在基于 Debian 的系统上,只需要通过以下这一条简单的命令就可以安装:
```
sudo apt-get install git
```
安装完毕后,就可以使用 Git 来实施版本控制了(如下图)。

*图 2:Git 已经安装,可以用于很多重要任务*
Github 会要求用户创建一个帐户。用户可以免费使用 GitHub 来管理非商用项目,当然也可以使用 GitHub 的付费模式(更多相关信息,可以参阅[价格矩阵](https://github.com/pricing))。
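Git 的基本工作流程可以在本机安全地体验一下。下面是一个最小的演示脚本(在临时目录中进行,文件名与提交信息均为假设的示例;提交时内联指定身份,避免依赖全局配置):

```shell
# 在临时目录中初始化一个新仓库
repo=$(mktemp -d)
cd "$repo"
git init -q

# 创建一个文件并加入暂存区
echo "# demo" > README.md
git add README.md

# 用 -c 内联提供用户名和邮箱,完成第一次提交
git -c user.name="Demo" -c user.email="demo@example.com" commit -q -m "first commit"

# 统计提交数量,此时应为 1
commits=$(git log --oneline | wc -l)
echo "$commits"
```

熟悉了 `init`、`add`、`commit` 之后,再配合 `git remote add` 和 `git push` 就可以把仓库同步到 GitHub 上。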
### 文本编辑器
如果没有文本编辑器,在 Linux 上开发将会变得异常艰难。当然,文本编辑器之间孰优孰劣,具体还是要取决于开发者的需求。对于文本编辑器,有人可能会使用 vim、emacs 或 nano,也有人会使用带有 GUI 的编辑器。但由于重点在于开发,我们需要的是一种能够满足开发人员需求的工具。不过我首先要说,vim 对于开发人员来说确实是一个利器,但前提是要对 vim 非常熟悉,在这种前提下,vim 能够满足你的所有需求,甚至还能给你更好的体验。然而,对于一些开发者(尤其是刚开始接触 Linux 的新手)来说,这不仅难以帮助他们快速达成需求,甚至还会是一个需要逾越的障碍。考虑到这篇文章的目标是帮助 Linux 的新手(而不仅仅是为各种编辑器的死忠粉宣传他们拥护的编辑器),我更倾向于使用 GUI 编辑器。
就文本编辑器而论,选择 [Bluefish](http://bluefish.openoffice.nl/index.html) 一般不会有错。 Bluefish 可以从大部分软件库中安装,它支持项目管理、远程文件多线程操作、搜索和替换、递归打开文件、侧边栏、集成 make/lint/weblint/xmllint、无限制撤销/重做、在线拼写检查、自动恢复、全屏编辑、语法高亮(如下图)、多种语言等等。

*图 3:运行在 Ubuntu 18.04 上的 Bluefish*
### IDE
<ruby> 集成开发环境 <rt> Integrated Development Environment </rt></ruby>(IDE)是包含一整套全面的工具、可以实现一站式功能的开发环境。 开发者除了可以使用 IDE 编写代码,还可以编写文档和构建软件。在 Linux 上也有很多适用的 IDE,其中 [Geany](https://www.geany.org/) 就包含在标准软件库中,它对用户非常友好,功能也相当强大。 Geany 具有语法高亮、代码折叠、自动完成,构建代码片段、自动关闭 XML 和 HTML 标签、调用提示、支持多种文件类型、符号列表、代码导航、构建编译,简单的项目管理和内置的插件系统等强大功能。
Geany 也能在系统上轻松安装,例如执行以下命令在基于 Debian 的 Linux 发行版上安装 Geany:
```
sudo apt-get install geany
```
安装完毕后,就可以快速上手这个易用且强大的 IDE 了(如下图)。

*图 4:Geany 可以作为你的 IDE*
### 文本比较工具
有时候会需要比较两个文件的内容来找到它们之间的不同之处,它们可能是同一文件的两个不同副本(比如一个可以编译通过,而另一个不能)。这种情况下,你肯定不想凭借肉眼来找出差异,而是想要使用像 [Meld](http://meldmerge.org/) 这样的工具。 Meld 是针对开发者的文本比较和合并工具,可以使用 Meld 来发现两个文件之间的差异。虽然你可以使用命令行中的文本比较工具,但就效率而论,Meld 无疑更为优秀。
Meld 可以打开两个文件进行比较,并突出显示文件之间的差异之处。 Meld 还允许用户从两个文件的其中一方合并差异(下图显示了 Meld 同时打开两个文件)。

*图 5: 以简单差异的模式比较两个文件*
Meld 也可以通过大多数标准的软件库安装,在基于 Debian 的系统上,执行以下命令就可以安装:
```
sudo apt-get install meld
```
### 高效地工作
以上提到的五个工具除了帮助你完成工作,而且有助于提高效率。尽管适用于 Linux 开发者的工具有很多,但对于以上几个类别,你最好分别使用一个对应的工具。
---
via: <https://www.linux.com/learn/intro-to-linux/2018/8/5-essential-tools-linux-development>
作者:[Jack Wallen](https://www.linux.com/users/jlwallen) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,095 | 如何在 Linux 上使用网络配置工具 Netplan | https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux | 2018-10-09T12:14:35 | [
"网络"
] | https://linux.cn/article-10095-1.html |
>
> netplan 是一个命令行工具,用于在某些 Linux 发行版上配置网络。
>
>
>

多年以来 Linux 管理员和用户们以相同的方式配置他们的网络接口。例如,如果你是 Ubuntu 用户,你能够用桌面 GUI 配置网络连接,也可以在 `/etc/network/interfaces` 文件里配置。配置相当简单且可以奏效。在文件中配置看起来就像这样:
```
auto enp10s0
iface enp10s0 inet static
address 192.168.1.162
netmask 255.255.255.0
gateway 192.168.1.100
dns-nameservers 1.0.0.1,1.1.1.1
```
保存并关闭文件。使用命令重启网络:
```
sudo systemctl restart networking
```
或者,如果你使用不带 systemd 的发行版,你可以通过老办法来重启网络:
```
sudo /etc/init.d/networking restart
```
你的网络将会重新启动,新的配置将会生效。
这就是多年以来的做法。但是现在,在某些发行版上(例如 Ubuntu Linux 18.04),网络的配置与控制发生了很大的变化。不需要那个 `interfaces` 文件和 `/etc/init.d/networking` 脚本,我们现在转向使用 [Netplan](https://netplan.io/)。Netplan 是一个在某些 Linux 发行版上配置网络连接的命令行工具。Netplan 使用 YAML 描述文件来配置网络接口,然后,通过这些描述为任何给定的呈现工具生成必要的配置选项。
我将向你展示如何在 Linux 上使用 Netplan 配置静态 IP 地址和 DHCP 地址。我会在 Ubuntu Server 18.04 上演示。有句忠告,你创建的 .yaml 文件中的缩进必须保持一致,否则将会失败。你不用为每行使用特定的缩进间距,只需保持一致就行了。
### 新的配置文件
打开终端窗口(或者通过 SSH 登录进 Ubuntu 服务器)。你会在 `/etc/netplan` 文件夹下发现 Netplan 的新配置文件。使用 `cd /etc/netplan` 命令进入到那个文件夹下。一旦进到了那个文件夹,也许你就能够看到一个文件:
```
01-netcfg.yaml
```
你可以创建一个新的文件或者是编辑默认文件。如果你打算修改默认文件,我建议你先做一个备份:
```
sudo cp /etc/netplan/01-netcfg.yaml /etc/netplan/01-netcfg.yaml.bak
```
备份好后,就可以开始配置了。
### 网络设备名称
在你开始配置静态 IP 之前,你需要知道设备名称。要做到这一点,你可以使用命令 `ip a`,然后找出哪一个设备将会被用到(图 1)。

*图 1:使用 ip a 命令找出设备名称*
我将为 ens5 配置一个静态的 IP。
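如果不想通过 `ip a` 查找,也可以直接读取 `/sys/class/net` 目录来列出接口名称。下面是一个可以照搬的小片段(具体输出取决于你的机器,但回环接口 `lo` 总会存在):

```shell
# /sys/class/net 下的每个条目对应一个网络接口
devices=$(ls /sys/class/net)
echo "$devices"
```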
### 配置静态 IP 地址
使用命令打开原来的 .yaml 文件:
```
sudo nano /etc/netplan/01-netcfg.yaml
```
文件的布局看起来就像这样:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    DEVICE_NAME:
      dhcp4: yes/no
      addresses: [IP/NETMASK]
      gateway4: GATEWAY
      nameservers:
        addresses: [NAMESERVER, NAMESERVER]
```
其中:
* `DEVICE_NAME` 是需要配置设备的实际名称。
* `yes`/`no` 代表是否启用 dhcp4。
* `IP` 是设备的 IP 地址。
* `NETMASK` 是 IP 地址的掩码。
* `GATEWAY` 是网关的地址。
* `NAMESERVER` 是由逗号分开的 DNS 服务器列表。
这是一份 .yaml 文件的样例:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      dhcp4: no
      addresses: [192.168.1.230/24]
      gateway4: 192.168.1.254
      nameservers:
        addresses: [8.8.4.4,8.8.8.8]
```
编辑上面的文件以达到你想要的效果。保存并关闭文件。
注意,掩码已经不用再配置为 `255.255.255.0` 这种形式。取而代之的是,掩码已被添加进了 IP 地址中。
### 测试配置
在应用改变之前,让我们测试一下配置。为此,使用命令:
```
sudo netplan try
```
上面的命令会在应用配置之前验证其是否有效。如果成功,你就会看到配置被接受。换句话说,Netplan 会尝试将新的配置应用到运行的系统上。如果新的配置失败了,Netplan 会自动地恢复到之前使用的配置。成功后,新的配置就会被使用。
### 应用新的配置
如果你确信配置文件没有问题,你就可以跳过测试环节并且直接使用新的配置。它的命令是:
```
sudo netplan apply
```
此时,你可以使用 `ip a` 命令查看新的地址是否已正确生效。
### 配置 DHCP
虽然你可能不会配置 DHCP 服务,但通常还是知道比较好。例如,你也许不知道网络上当前可用的静态 IP 地址是多少。你可以为设备配置 DHCP,获取到 IP 地址,然后将那个地址重新配置为静态地址。
在 Netplan 上使用 DHCP,配置文件看起来就像这样:
```
network:
  version: 2
  renderer: networkd
  ethernets:
    ens5:
      addresses: []
      dhcp4: true
      optional: true
```
保存并退出。用下面命令来测试文件:
```
sudo netplan try
```
Netplan 应该会成功配置 DHCP 服务。这时你可以使用 `ip a` 命令得到动态分配的地址,然后重新配置为静态地址。或者,你也可以直接使用 DHCP 分配的地址(但鉴于这是一台服务器,你可能并不想这样做)。
也许你有不只一个的网络接口,你可以命名第二个 .yaml 文件为 `02-netcfg.yaml` 。Netplan 会按照数字顺序应用配置文件,因此 01 会在 02 之前使用。根据你的需要创建多个配置文件。
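配置文件按名称顺序应用这一点可以自己验证。下面的小演示在一个临时目录中模拟 Netplan 的配置目录(文件名为假设的示例),展示 `01` 开头的文件确实排在 `02` 之前:

```shell
# 模拟一个配置目录,故意先创建 02 再创建 01
d=$(mktemp -d)
touch "$d/02-extra.yaml" "$d/01-netcfg.yaml"

# ls 按名称排序输出,第一个应该是 01-netcfg.yaml
first=$(ls "$d" | head -n 1)
echo "$first"
```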
### 就是这些了
不管怎样,这就是关于使用 Netplan 的全部内容了。虽然相对于我们习惯的网络配置方式来说,它是一个相当大的改变,并不是所有人都马上用得惯,但这种配置方式值得一试……你会慢慢适应的。
在 Linux Foundation 和 edX 上通过 [“Introduction to Linux”](https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux) 课程学习更多关于 Linux 的内容。
---
via: <https://www.linux.com/learn/intro-to-linux/2018/9/how-use-netplan-network-configuration-tool-linux>
作者:[Jack Wallen](https://www.linux.com/users/jlwallen) 选题:[lujun9972](https://github.com/lujun9972) 译者:[LuuMing](https://github.com/LuuMing) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,096 | df 命令新手教程 | https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/ | 2018-10-10T13:00:51 | [
"df",
"磁盘"
] | https://linux.cn/article-10096-1.html | 
在本指南中,我们将学习如何使用 `df` 命令。`df` 命令是 “Disk Free”(磁盘空闲)的缩写,它报告文件系统磁盘空间的使用情况,显示一个 Linux 系统中文件系统上可用磁盘空间的数量。`df` 命令很容易与 `du` 命令混淆,但它们的用途不同:`df` 命令报告我们拥有多少磁盘空间(空闲磁盘空间),而 `du` 命令报告文件和目录占用了多少磁盘空间。希望这样解释你能更清楚。在继续之前,我们来看一些 `df` 命令的实例,以便于你更好地理解它。
### df 命令使用举例
#### 1、查看整个文件系统磁盘空间使用情况
无需任何参数来运行 `df` 命令,以显示整个文件系统磁盘空间使用情况。
```
$ df
```
示例输出:
```
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
/dev/sda2 478425016 428790352 25308980 95% /
tmpfs 4038880 34396 4004484 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11636 4027244 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 95054 55724 32162 64% /boot
tmpfs 807776 28 807748 1% /run/user/1000
```

正如你所见,输出结果分为六列。我们来看一下每一列的含义。
* `Filesystem` – Linux 系统中的文件系统
* `1K-blocks` – 文件系统的大小,用 1K 大小的块来表示。
* `Used` – 以 1K 大小的块所表示的已使用数量。
* `Available` – 以 1K 大小的块所表示的可用空间的数量。
* `Use%` – 文件系统中已使用的百分比。
* `Mounted on` – 已挂载的文件系统的挂载点。
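顺带一提,`Use%` 这一列的算法可以自己复算一遍:GNU `df` 用“已用 ÷(已用 + 可用)”并向上取整得到百分比(分母通常不等于 1K-blocks 总大小,因为文件系统一般会保留一部分块)。下面用 awk 按上面示例输出中 `/dev/sda2` 的数字验证:

```shell
# 数值取自上面的示例输出:Used 与 Available 两列
used=428790352
avail=25308980

# 按 used / (used + avail) 计算百分比并向上取整
pct=$(awk -v u="$used" -v a="$avail" 'BEGIN {
  p = u * 100 / (u + a)
  r = (p == int(p)) ? p : int(p) + 1
  printf "%d", r
}')
echo "${pct}%"
```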
#### 2、以人类友好格式显示文件系统硬盘空间使用情况
在上面的示例中你可能已经注意到了,它使用 1K 大小的块为单位来表示使用情况,如果你以人类友好格式来显示它们,可以使用 `-h` 标志。
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
dev 3.9G 0 3.9G 0% /dev
run 3.9G 1.1M 3.9G 1% /run
/dev/sda2 457G 409G 25G 95% /
tmpfs 3.9G 27M 3.9G 1% /dev/shm
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 12M 3.9G 1% /tmp
/dev/loop0 83M 83M 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93M 55M 32M 64% /boot
tmpfs 789M 28K 789M 1% /run/user/1000
```
现在,在 `Size` 列和 `Avail` 列,使用情况是以 GB 和 MB 为单位来显示的。
#### 3、仅以 MB 为单位来显示文件系统磁盘空间使用情况
如果仅以 MB 为单位来显示文件系统磁盘空间使用情况,使用 `-m` 标志。
```
$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
dev 3939 0 3939 0% /dev
run 3945 2 3944 1% /run
/dev/sda2 467212 418742 24716 95% /
tmpfs 3945 26 3920 1% /dev/shm
tmpfs 3945 0 3945 0% /sys/fs/cgroup
tmpfs 3945 12 3933 1% /tmp
/dev/loop0 83 83 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93 55 32 64% /boot
tmpfs 789 1 789 1% /run/user/1000
```
#### 4、列出节点而不是块的使用情况
如下所示,我们可以通过使用 `-i` 标记来列出节点而不是块的使用情况。
```
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
dev 1008304 439 1007865 1% /dev
run 1009720 649 1009071 1% /run
/dev/sda2 30392320 844035 29548285 3% /
tmpfs 1009720 86 1009634 1% /dev/shm
tmpfs 1009720 18 1009702 1% /sys/fs/cgroup
tmpfs 1009720 3008 1006712 1% /tmp
/dev/loop0 12829 12829 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 25688 390 25298 2% /boot
tmpfs 1009720 29 1009691 1% /run/user/1000
```
#### 5、显示文件系统类型
使用 `-T` 标志显示文件系统类型。
```
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
dev devtmpfs 4033216 0 4033216 0% /dev
run tmpfs 4038880 1120 4037760 1% /run
/dev/sda2 ext4 478425016 428790896 25308436 95% /
tmpfs tmpfs 4038880 31300 4007580 1% /dev/shm
tmpfs tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 squashfs 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 ext4 95054 55724 32162 64% /boot
tmpfs tmpfs 807776 28 807748 1% /run/user/1000
```
正如你所见,现在出现了显示文件系统类型的额外的列(从左数的第二列)。
#### 6、仅显示指定类型的文件系统
我们可以限制仅列出某些文件系统。比如,只列出 ext4 文件系统。我们使用 `-t` 标志。
```
$ df -t ext4
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 478425016 428790896 25308436 95% /
/dev/sda1 95054 55724 32162 64% /boot
```
看到了吗?这个命令仅显示了 ext4 文件系统的磁盘空间使用情况。
#### 7、不列出指定类型的文件系统
有时,我们可能需要从结果中去排除指定类型的文件系统。我们可以使用 `-x` 标记达到我们的目的。
```
$ df -x ext4
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
tmpfs 4038880 26116 4012764 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
tmpfs 807776 28 807748 1% /run/user/1000
```
上面的命令列出了除 ext4 类型以外的全部文件系统。
#### 8、显示一个目录的磁盘使用情况
去显示某个目录的硬盘空间使用情况以及它的挂载点,例如 `/home/sk/` 目录,可以使用如下的命令:
```
$ df -hT /home/sk/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 ext4 457G 409G 25G 95% /
```
这个命令显示文件系统类型、以人类友好格式显示已使用和可用磁盘空间、以及它的挂载点。如果你不想显示文件系统类型,只需要去掉 `-T` 标志即可。
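顺带一提,在脚本中解析 `df` 的输出时,POSIX 的 `-P` 选项可以保证每条记录占一行,便于用 awk 按列提取。下面的小例子提取根文件系统的挂载点:

```shell
# -P 输出 POSIX 格式;NR==2 跳过表头,$6 是挂载点列
mnt=$(df -P / | awk 'NR==2 {print $6}')
echo "$mnt"
```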
更详细的使用情况,请参阅 man 手册页。
```
$ man df
```
今天就到此这止!我希望对你有用。还有更多更好玩的东西即将奉上。请继续关注!
再见!
---
via: <https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,097 | 管理 Linux 系统中的用户 | https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html | 2018-10-10T13:12:00 | [
"用户"
] | https://linux.cn/article-10097-1.html | 
也许你的 Linux 用户并不是愤怒的公牛,但是当涉及管理他们的账户的时候,能让他们一直满意也是一种挑战。你需要监控他们的访问权限,跟进他们遇到问题时的解决方案,并且把他们在使用系统时出现的重要变动记录下来。这里有一些方法和工具可以让这个工作轻松一点。
### 配置账户
添加和删除账户是管理用户中比较简单的一项,但是这里面仍然有很多需要考虑的方面。无论你是用桌面工具或是命令行选项,这都是一个非常自动化的过程。你可以使用 `adduser jdoe` 命令添加一个新用户,同时会触发一系列的反应。在创建 John 这个账户时会自动使用下一个可用的 UID,并有很多自动生成的文件来完成这个工作。当你运行 `adduser` 后跟一个参数时(要创建的用户名),它会提示一些额外的信息,同时解释这是在干什么。
```
$ sudo adduser jdoe
Adding user 'jdoe' ...
Adding new group `jdoe' (1001) ...
Adding new user `jdoe' (1001) with group `jdoe' ...
Creating home directory `/home/jdoe' ...
Copying files from `/etc/skel' …
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for jdoe
Enter the new value, or press ENTER for the default
Full Name []: John Doe
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y
```
如你所见,`adduser` 会添加用户的信息(到 `/etc/passwd` 和 `/etc/shadow` 文件中),创建新的<ruby> 家目录 <rt> home directory </rt></ruby>,并用 `/etc/skel` 里设置的文件填充家目录,提示你分配初始密码和认证信息,然后确认这些信息都是正确的,如果你在最后的提示 “Is the information correct?” 处的回答是 “n”,它会回溯你之前所有的回答,允许修改任何你想要修改的地方。
创建好一个用户后,你可能会想确认一下它是否符合你的期望,但更好的做法是在添加第一个帐户**之前**,就确保这些“自动”的默认选择与你想要的一致。默认值有默认值的好处,但知道它们定义在哪里很有用,以便在需要时做出改动。例如,你可能不想让用户的家目录在 `/home` 里,不想让用户 UID 从 1000 开始,或是不想让家目录下的文件被系统中的**每个人**都可读。
`adduser` 的一些配置细节设置在 `/etc/adduser.conf` 文件里。这个文件包含的一些配置项决定了一个新的账户如何配置,以及它之后的样子。注意,注释和空白行将会在输出中被忽略,因此我们更关注配置项。
```
$ cat /etc/adduser.conf | grep -v "^#" | grep -v "^$"
DSHELL=/bin/bash
DHOME=/home
GROUPHOMES=no
LETTERHOMES=no
SKEL=/etc/skel
FIRST_SYSTEM_UID=100
LAST_SYSTEM_UID=999
FIRST_SYSTEM_GID=100
LAST_SYSTEM_GID=999
FIRST_UID=1000
LAST_UID=29999
FIRST_GID=1000
LAST_GID=29999
USERGROUPS=yes
USERS_GID=100
DIR_MODE=0755
SETGID_HOME=no
QUOTAUSER=""
SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)"
```
可以看到,我们有了一个默认的 shell(`DSHELL`),UID(`FIRST_UID`)的起始值,家目录(`DHOME`)的位置,以及启动文件(`SKEL`)的来源位置。这个文件也会指定分配给家目录的权限(`DIR_MODE`)。
其中 `DIR_MODE` 是最重要的设置,它决定了每个家目录的访问权限。默认的 0755 会把家目录的权限设置为 `rwxr-xr-x`:其他用户可以读取该用户的文件,但是不能修改和移除它们。如果你想要更多的限制,可以把这个设置改为 750(用户组外的任何人都不可访问)甚至是 700(除用户自己外的人都不可访问)。
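下面用一个临时目录(路径为演示用)直观对比这几个权限数值对应的 `rwx` 表示:

```shell
# 演示不同权限数值对应的 rwx 字符串(GNU coreutils 的 stat)
demo=/tmp/demo_home_dir
mkdir -p "$demo"

for mode in 755 750 700; do
    chmod "$mode" "$demo"
    stat -c "$mode -> %A" "$demo"
done
# 755 -> drwxr-xr-x
# 750 -> drwxr-x---
# 700 -> drwx------
```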
任何账号在创建之后都可以手动修改,例如编辑 `/etc/passwd` 或者修改家目录的权限;但在新服务器上开始添加用户之前就配置好 `/etc/adduser.conf`,可以确保一定的一致性,从长远来看能节省时间、避免一些麻烦。
`/etc/adduser.conf` 的修改将会在之后创建的用户上生效。如果你想以不同的方式设置某个特定账户,除了用户名之外,你还可以选择使用 `adduser` 命令提供账户配置选项。或许你想为某些账户分配不同的 shell,分配特殊的 UID,或完全禁用该账户登录。`adduser` 的帮助页将会为你显示一些配置个人账户的选择。
```
adduser [options] [--home DIR] [--shell SHELL] [--no-create-home]
[--uid ID] [--firstuid ID] [--lastuid ID] [--ingroup GROUP | --gid ID]
[--disabled-password] [--disabled-login] [--gecos GECOS]
[--add_extra_groups] [--encrypt-home] user
```
现在,每个 Linux 系统默认都会把每个用户放进其同名的组中。作为管理员,你可能会选择不同的做法。你也许会发现把用户放在一个共享组中更适合你的站点,这时你可以使用 `adduser` 的 `--gid` 选项指定一个特定的组。当然,用户可以同时是多个组的成员,因此也有一些选项用来管理主要组和次要组。
### 处理用户密码
一直以来,知道其他人的密码都不是一件好事,在设置账户时,管理员通常使用一个临时密码,然后在用户第一次登录时运行一条命令强制他修改密码。这里是一个例子:
```
$ sudo chage -d 0 jdoe
```
当用户第一次登录时,会看到类似下面的提示:
```
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for jdoe.
(current) UNIX password:
```
### 添加用户到副组
添加用户到副组中,你可能会用如下所示的 `usermod` 命令添加用户到组中并确认已经做出变动。
```
$ sudo usermod -a -G sudo jdoe
$ sudo grep sudo /etc/group
sudo:x:27:shs,jdoe
```
记住,某些组(如 sudo 或者 wheel 组)意味着特殊的权限,把用户加入这些组时一定要特别注意。
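除了查看 `/etc/group`,也可以用 `id` 直接列出某个用户所属的全部组(这里以当前用户为例,`jdoe` 只是示例用户名):

```shell
# 以组名形式列出当前用户所属的所有组
id -nG

# 查看指定用户所属的组(用户名为示例):
# id -nG jdoe
```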
### 移除用户,添加组等
Linux 系统也提供了移除账户,添加新的组,移除组等一些命令。例如,`deluser` 命令,将会从 `/etc/passwd` 和 `/etc/shadow` 中移除用户记录,但是会完整保留其家目录,除非你添加了 `--remove-home` 或者 `--remove-all-files` 选项。`addgroup` 命令会添加一个组,默认按目前组的次序分配下一个 id(在用户组范围内),除非你使用 `--gid` 选项指定 id。
```
$ sudo addgroup testgroup --gid=131
Adding group `testgroup' (GID 131) ...
Done.
```
### 管理特权账户
一些 Linux 系统中有一个 wheel 组,它给组中成员赋予了像 root 一样运行命令的权限。在这种情况下,`/etc/sudoers` 将会引用该组。在 Debian 系统中,这个组被叫做 sudo,但是原理是相同的,你在 `/etc/sudoers` 中可以看到像这样的信息:
```
%sudo ALL=(ALL:ALL) ALL
```
这行基本的配置意味着任何在 wheel 或者 sudo 组中的成员只要在他们运行的命令之前添加 `sudo`,就可以以 root 的权限去运行命令。
你可以向 sudoers 文件中添加更多有限的权限 —— 也许给特定用户几个能以 root 运行的命令。如果你是这样做的,你应该定期查看 `/etc/sudoers` 文件以评估用户拥有的权限,以及仍然需要提供的权限。
在下面显示的命令中,我们过滤了 `/etc/sudoers` 中有效的配置行。其中最有意思的是,它包含了能使用 `sudo` 运行命令的路径设置,以及两个允许通过 `sudo` 运行命令的组。像刚才提到的那样,单个用户可以通过包含在 sudoers 文件中来获得权限,但是更有实际意义的方法是通过组成员来定义各自的权限。
```
# cat /etc/sudoers | grep -v "^#" | grep -v "^$"
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL <== admin group
%sudo ALL=(ALL:ALL) ALL <== sudo group
```
### 登录检查
你可以通过以下命令查看用户的上一次登录:
```
# last jdoe
jdoe pts/18 192.168.0.11 Thu Sep 14 08:44 - 11:48 (00:04)
jdoe pts/18 192.168.0.11 Thu Sep 14 13:43 - 18:44 (00:00)
jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:00)
```
如果你想查看每一个用户上一次的登录情况,你可以通过一个像这样的循环来运行 `last` 命令:
```
$ for user in `ls /home`; do last $user | head -1; done
jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43 (00:03)
rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02 (00:00)
shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged in
```
此命令仅显示自当前 wtmp 文件建立以来登录过的用户,从未登录过的用户不会出现在输出中。下面这个更好的命令可以把这期间从未登录过的用户也明确列出来(用户名后为空白即表示从未登录):
```
$ for user in `ls /home`; do echo -n "$user"; last $user | head -1 | awk '{print substr($0,40)}'; done
dhayes
jdoe pts/18 192.168.0.11 Thu Sep 14 19:42 - 19:43
peanut pts/19 192.168.0.29 Mon Sep 11 09:15 - 17:11
rocket pts/18 192.168.0.11 Thu Sep 14 13:02 - 13:02
shs pts/17 192.168.0.11 Thu Sep 14 12:45 still logged
tsmith
```
这个命令要打很多字,但是可以通过一个脚本使它更加清晰易用。
```
#!/bin/bash
for user in `ls /home`
do
echo -n "$user ";last $user | head -1 | awk '{print substr($0,40)}'
done
```
有时这些信息可以提醒你用户角色的变动,表明他们可能不再需要相关帐户了。
### 与用户沟通
Linux 提供了许多和用户沟通的方法。你可以向 `/etc/motd` 文件中添加信息,当用户从终端登录到服务器时,将会显示这些信息。你也可以通过例如 `write`(通知单个用户)或者 `wall`(write 给所有已登录的用户)命令发送通知。
```
$ wall System will go down in one hour
Broadcast message from shs@stinkbug (pts/17) (Thu Sep 14 14:04:16 2017):
System will go down in one hour
```
重要的通知应该通过多个渠道传达,因为很难预测用户实际会注意到什么。message-of-the-day(motd)、`wall` 和电子邮件通知一起使用,可以吸引用户大部分的注意力。
### 注意日志文件
多注意日志文件也可以帮你理解用户的活动情况。尤其是 `/var/log/auth.log` 文件,它将会显示用户的登录和注销活动、组的创建记录等。`/var/log/messages` 或者 `/var/log/syslog` 文件将会告诉你更多有关系统活动的信息。
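例如,可以用 `grep` 从这类日志中筛选感兴趣的事件。下面的日志行是为演示而编造的,仅用来说明典型的格式:

```shell
# 构造两行 auth.log 风格的示例日志(内容为演示用)
cat > /tmp/demo_auth.log <<'EOF'
Sep 14 08:44:01 host sshd[123]: pam_unix(sshd:session): session opened for user jdoe
Sep 14 09:00:12 host useradd[456]: new user: name=rocket, UID=1002
EOF

# 筛选登录(会话打开)相关的记录
grep "session opened" /tmp/demo_auth.log
```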
### 追踪问题和需求
无论你是否在 Linux 系统上安装了事件跟踪系统,跟踪用户遇到的问题以及他们提出的需求都非常重要。如果需求的一部分久久不见回应,用户必然不会高兴。即使是记录在纸上也是有用的,或者最好有个电子表格,这可以让你注意到哪些问题仍然悬而未决,以及问题的根本原因是什么。确认问题和需求非常重要,记录还可以帮助你记住你必须采取的措施来解决几个月甚至几年后重新出现的问题。
### 总结
在繁忙的服务器上管理用户帐号,部分取决于配置良好的默认值,部分取决于监控用户活动和遇到的问题。如果用户觉得你对他们的顾虑有所回应并且知道在需要系统升级时会发生什么,他们可能会很高兴。
---
via: <https://www.networkworld.com/article/3225109/linux/managing-users-on-linux-systems.html>
作者:[Sandra Henry-Stocker](https://www.networkworld.com/author/Sandra-Henry_Stocker/) 译者:[dianbanjiu](https://github.com/dianbanjiu) 校对:[wxy](https://github.com/wxy)、[pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
10,098 | 10 个 Linux 中方便的 Bash 别名 | https://opensource.com/article/18/9/handy-bash-aliases | 2018-10-10T13:19:54 | [
"bash",
"别名"
] | https://linux.cn/article-10098-1.html |
>
> 对 Bash 长命令使用压缩的版本来更有效率。
>
>
>

你有多少次在命令行上输入一个长命令,并希望有一种方法可以保存它以供日后使用?这就是 Bash 别名派上用场的地方。它们允许你将长而神秘的命令压缩为易于记忆和使用的东西。需要一些例子来帮助你入门吗?没问题!
要使用你创建的 Bash 别名,你需要将其添加到 `.bash_profile` 中,该文件位于你的家目录中。请注意,此文件是隐藏的,并只能从命令行访问。编辑此文件的最简单方法是使用 Vi 或 Nano 之类的东西。
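下面是一个小示例(使用 `/tmp` 下的演示文件代替真正的 `.bash_profile`),展示“追加别名、重新加载、立即可用”的完整流程:

```shell
# 把一个别名追加到启动文件(这里用演示路径代替 ~/.bash_profile)
profile=/tmp/demo_bash_profile
echo "alias untar='tar -zxvf '" > "$profile"

# 重新加载启动文件(. 等价于 source)
. "$profile"

# 确认别名已经定义
alias untar
```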
### 10 个方便的 Bash 别名
1、 你有几次遇到需要解压 .tar 文件但无法记住所需的确切参数?别名可以帮助你!只需将以下内容添加到 `.bash_profile` 中,然后使用 `untar FileName` 解压缩任何 .tar 文件。
```
alias untar='tar -zxvf '
```
2、 想要下载的东西,但如果出现问题可以恢复吗?
```
alias wget='wget -c '
```
3、 是否需要为新的网络帐户生成随机的 20 个字符的密码?没问题。
```
alias getpass="openssl rand -base64 20"
```
4、 下载文件并需要测试校验和?我们也可做到。
```
alias sha='shasum -a 256 '
```
5、 普通的 `ping` 将永远持续下去。我们不希望这样。相反,让我们将其限制在五个 `ping`。
```
alias ping='ping -c 5'
```
6、 在任何你想要的文件夹中启动 Web 服务器。
```
alias www='python -m SimpleHTTPServer 8000'
```
7、 想知道你的网络有多快?只需下载 Speedtest-cli 并使用此别名即可。你可以使用 `speedtest-cli --list` 命令选择离你所在位置更近的服务器。
```
alias speed='speedtest-cli --server 2406 --simple'
```
8、 你有多少次需要知道你的外部 IP 地址,但是不知道如何获取?我也是。
```
alias ipe='curl ipinfo.io/ip'
```
9、 需要知道你的本地 IP 地址?
```
alias ipi='ipconfig getifaddr en0'
```
10、 最后,让我们清空屏幕。
```
alias c='clear'
```
如你所见,Bash 别名是一种在命令行上简化生活的超级简便方法。想了解更多信息?我建议你用 Google 快速搜索一下 “Bash aliases”,或者到 GitHub 上看看。
---
via: <https://opensource.com/article/18/9/handy-bash-aliases>
作者:[Patrick H.Mullins](https://opensource.com/users/pmullins) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | How many times have you repeatedly typed out a long command on the command line and wished there was a way to save it for later? This is where Bash aliases come in handy. They allow you to condense long, cryptic commands down to something easy to remember and use. Need some examples to get you started? No problem!
To use a Bash alias you've created, you need to add it to your .bash_profile file, which is located in your home folder. Note that this file is hidden and accessible only from the command line. The easiest way to work with this file is to use something like Vi or Nano.
## 10 handy Bash aliases
- How many times have you needed to unpack a .tar file and couldn't remember the exact arguments needed? Aliases to the rescue! Just add the following to your .bash_profile file and then use
**untar FileName**to unpack any .tar file.
`alias untar='tar -zxvf '`
- Want to download something but be able to resume if something goes wrong?
`alias wget='wget -c '`
- Need to generate a random, 20-character password for a new online account? No problem.
`alias getpass="openssl rand -base64 20"`
- Downloaded a file and need to test the checksum? We've got that covered too.
`alias sha='shasum -a 256 '`
- A normal ping will go on forever. We don't want that. Instead, let's limit that to just five pings.
`alias ping='ping -c 5'`
- Start a web server in any folder you'd like.
`alias www='python -m SimpleHTTPServer 8000'`
- Want to know how fast your network is? Just download Speedtest-cli and use this alias. You can choose a server closer to your location by using the
**speedtest-cli --list**command.
`alias speed='speedtest-cli --server 2406 --simple'`
- How many times have you needed to know your external IP address and had no idea how to get that info? Yeah, me too.
`alias ipe='curl ipinfo.io/ip'`
- Need to know your local IP address?
`alias ipi='ipconfig getifaddr en0'`
- Finally, let's clear the screen.
`alias c='clear'`
As you can see, Bash aliases are a super-easy way to simplify your life on the command line. Want more info? I recommend a quick Google search for "Bash aliases" or a trip to GitHub.
|
10,099 | 75 个最常用的 Linux 应用程序(2018 年) | https://www.fossmint.com/most-used-linux-applications/ | 2018-10-10T13:40:00 | [
"应用",
"Linux"
] | https://linux.cn/article-10099-1.html | 
对于许多应用程序来说,2018 年是非常好的一年,尤其是自由开源的应用程序。尽管各种 Linux 发行版都自带了很多默认的应用程序,但用户也可以自由地选择使用它们或者其它任何免费或付费替代方案。
下面汇总了[一系列的 Linux 应用程序](https://www.fossmint.com/awesome-linux-software/),这些应用程序都能够在 Linux 系统上安装,尽管还有很多其它选择。以下汇总中的任何应用程序都属于其类别中最常用的应用程序,如果你还没有用过,欢迎试用一下!
### 备份工具
#### Rsync
[Rsync](https://rsync.samba.org/) 是一个开源的、节约带宽的工具,它用于执行快速的增量文件传输,而且它也是一个免费工具。
```
$ rsync [OPTION...] SRC... [DEST]
```
想要了解更多示例和用法,可以参考《[10 个使用 Rsync 命令的实际例子](https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/)》。
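下面是一个最小的本地同步示例(目录路径为演示用),展示 rsync 的基本复制行为;再次运行同一条命令时,它只会传输有变化的文件:

```shell
# 准备源目录和目标目录
mkdir -p /tmp/rsync_demo/src /tmp/rsync_demo/dest
echo "hello" > /tmp/rsync_demo/src/file.txt

# 把 src/ 的内容同步到 dest/(-a 保留权限、时间戳等属性)
rsync -a /tmp/rsync_demo/src/ /tmp/rsync_demo/dest/
cat /tmp/rsync_demo/dest/file.txt
```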
#### Timeshift
[Timeshift](https://github.com/teejee2008/timeshift) 能够通过增量快照来保护用户的系统数据,而且可以按照日期恢复指定的快照,类似于 Mac OS 中的 Time Machine 功能和 Windows 中的系统还原功能。

### BT(BitTorrent) 客户端

#### Deluge
[Deluge](https://deluge-torrent.org/) 是一个漂亮的跨平台 BT 客户端,旨在优化 μTorrent 体验,并向用户免费提供服务。
使用以下命令在 Ubuntu 和 Debian 安装 Deluge。
```
$ sudo add-apt-repository ppa:deluge-team/ppa
$ sudo apt-get update
$ sudo apt-get install deluge
```
#### qBittorent
[qBittorent](https://www.qbittorrent.org/) 是一个开源的 BT 客户端,旨在提供类似 μTorrent 的免费替代方案。
使用以下命令在 Ubuntu 和 Debian 安装 qBittorent。
```
$ sudo add-apt-repository ppa:qbittorrent-team/qbittorrent-stable
$ sudo apt-get update
$ sudo apt-get install qbittorrent
```
#### Transmission
[Transmission](https://transmissionbt.com/) 是一个强大的 BT 客户端,它主要关注速度和易用性,一般在很多 Linux 发行版上都有预装。
使用以下命令在 Ubuntu 和 Debian 安装 Transmission。
```
$ sudo add-apt-repository ppa:transmissionbt/ppa
$ sudo apt-get update
$ sudo apt-get install transmission-gtk transmission-cli transmission-common transmission-daemon
```
### 云存储

#### Dropbox
[Dropbox](https://www.dropbox.com/) 团队在今年早些时候给他们的云服务换了一个名字,也为客户提供了更好的性能和集成了更多应用程序。Dropbox 会向用户免费提供 2 GB 存储空间。
使用以下命令在 Ubuntu 和 Debian 安装 Dropbox。
```
$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86" | tar xzf - [On 32-Bit]
$ cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf - [On 64-Bit]
$ ~/.dropbox-dist/dropboxd
```
#### Google Drive
[Google Drive](https://www.google.com/drive/) 是 Google 提供的云服务解决方案,这已经是一个广为人知的服务了。与 Dropbox 一样,可以通过它在所有联网的设备上同步文件。它免费提供了 15 GB 存储空间,包括Gmail、Google 图片、Google 地图等服务。
参考阅读:[5 个适用于 Linux 的 Google Drive 客户端](https://www.fossmint.com/best-google-drive-clients-for-linux/)
#### Mega
[Mega](https://mega.nz/) 也是一个出色的云存储解决方案,它的亮点除了高度的安全性之外,还有为用户免费提供高达 50 GB 的免费存储空间。它使用端到端加密,以确保用户的数据安全,所以如果忘记了恢复密钥,用户自己也无法访问到存储的数据。
参考阅读:[在 Ubuntu 下载 Mega 云存储客户端](https://mega.nz/sync!linux)
### 命令行编辑器

#### Vim
[Vim](https://www.vim.org/) 是 vi 文本编辑器的开源克隆版本,它的主要目的是可以高度定制化并能够处理任何类型的文本。
使用以下命令在 Ubuntu 和 Debian 安装 Vim。
```
$ sudo add-apt-repository ppa:jonathonf/vim
$ sudo apt update
$ sudo apt install vim
```
#### Emacs
[Emacs](https://www.gnu.org/s/emacs/) 是一个高度可配置的文本编辑器,最流行的一个分支 GNU Emacs 是用 Lisp 和 C 编写的,它的最大特点是可以自文档化、可扩展和可自定义。
使用以下命令在 Ubuntu 和 Debian 安装 Emacs。
```
$ sudo add-apt-repository ppa:kelleyk/emacs
$ sudo apt update
$ sudo apt install emacs25
```
#### Nano
[Nano](https://www.nano-editor.org/) 是一款功能丰富的命令行文本编辑器,比较适合高级用户。它可以通过多个终端进行不同功能的操作。
使用以下命令在 Ubuntu 和 Debian 安装 Nano。
```
$ sudo add-apt-repository ppa:n-muench/programs-ppa
$ sudo apt-get update
$ sudo apt-get install nano
```
### 下载器

#### Aria2
[Aria2](https://aria2.github.io/) 是一个开源的、轻量级的、多软件源和多协议的命令行下载器,它支持 Metalink、torrent、HTTP/HTTPS、SFTP 等多种协议。
使用以下命令在 Ubuntu 和 Debian 安装 Aria2。
```
$ sudo apt-get install aria2
```
#### uGet
[uGet](http://ugetdm.com/) 已经成为 Linux 各种发行版中排名第一的开源下载器,它可以处理任何下载任务,包括多连接、队列、类目等。
使用以下命令在 Ubuntu 和 Debian 安装 uGet。
```
$ sudo add-apt-repository ppa:plushuang-tw/uget-stable
$ sudo apt update
$ sudo apt install uget
```
#### XDM
[XDM](http://xdman.sourceforge.net/)(Xtreme Download Manager)是一个使用 Java 编写的开源下载软件。和其它下载器一样,它可以结合队列、种子、浏览器使用,而且还带有视频采集器和智能调度器。
使用以下命令在 Ubuntu 和 Debian 安装 XDM。
```
$ sudo add-apt-repository ppa:noobslab/apps
$ sudo apt-get update
$ sudo apt-get install xdman
```
### 电子邮件客户端

#### Thunderbird
[Thunderbird](https://www.thunderbird.net/) 是最受欢迎的电子邮件客户端之一。它的优点包括免费、开源、可定制、功能丰富,而且最重要的是安装过程也很简便。
使用以下命令在 Ubuntu 和 Debian 安装 Thunderbird。
```
$ sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa
$ sudo apt-get update
$ sudo apt-get install thunderbird
```
#### Geary
[Geary](https://github.com/GNOME/geary) 是一个基于 WebKitGTK+ 的开源电子邮件客户端。它是一个免费开源的功能丰富的软件,并被 GNOME 项目收录。
使用以下命令在 Ubuntu 和 Debian 安装 Geary。
```
$ sudo add-apt-repository ppa:geary-team/releases
$ sudo apt-get update
$ sudo apt-get install geary
```
#### Evolution
[Evolution](https://github.com/GNOME/evolution) 是一个免费开源的电子邮件客户端,可以用于电子邮件、会议日程、备忘录和联系人的管理。
使用以下命令在 Ubuntu 和 Debian 安装 Evolution。
```
$ sudo add-apt-repository ppa:gnome3-team/gnome3-staging
$ sudo apt-get update
$ sudo apt-get install evolution
```
### 财务软件

#### GnuCash
[GnuCash](https://www.gnucash.org/) 是一款免费的跨平台开源软件,它适用于个人和中小型企业的财务任务。
使用以下命令在 Ubuntu 和 Debian 安装 GnuCash。
```
$ sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu $(lsb_release -sc)-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list'
$ sudo apt-get update
$ sudo apt-get install gnucash
```
#### KMyMoney
[KMyMoney](https://kmymoney.org/) 是一个财务管理软件,它可以提供商用或个人理财所需的大部分主要功能。
使用以下命令在 Ubuntu 和 Debian 安装 KmyMoney。
```
$ sudo add-apt-repository ppa:claydoh/kmymoney2-kde4
$ sudo apt-get update
$ sudo apt-get install kmymoney
```
### IDE

#### Eclipse IDE
[Eclipse](https://www.eclipse.org/ide/) 是最广为使用的 Java IDE,它包括一个基本工作空间和一个用于自定义编程环境的强大的的插件配置系统。
关于 Eclipse IDE 的安装,可以参考 [如何在 Debian 和 Ubuntu 上安装 Eclipse IDE](https://www.tecmint.com/install-eclipse-oxygen-ide-in-ubuntu-debian/) 这一篇文章。
#### Netbeans IDE
[Netbeans](https://netbeans.org/) 是一个相当受用户欢迎的 IDE,它支持使用 Java、PHP、HTML 5、JavaScript、C/C++ 或其他语言编写移动应用,桌面软件和 web 应用。
关于 Netbeans IDE 的安装,可以参考 [如何在 Debian 和 Ubuntu 上安装 Netbeans IDE](https://www.tecmint.com/install-netbeans-ide-in-ubuntu-debian-linux-mint/) 这一篇文章。
#### Brackets
[Brackets](http://brackets.io/) 是由 Adobe 开发的高级文本编辑器,它带有可视化工具,支持预处理程序,以及用于 web 开发的以设计为中心的用户流程。对于熟悉它的用户,它可以发挥 IDE 的作用。
使用以下命令在 Ubuntu 和 Debian 安装 Brackets。
```
$ sudo add-apt-repository ppa:webupd8team/brackets
$ sudo apt-get update
$ sudo apt-get install brackets
```
#### Atom IDE
[Atom IDE](https://ide.atom.io/) 是一个加强版的 Atom 编辑器,它添加了大量扩展和库以提高性能和增加功能。总之,它是各方面都变得更强大了的 Atom 。
使用以下命令在 Ubuntu 和 Debian 安装 Atom。
```
$ sudo apt-get install snapd
$ sudo snap install atom --classic
```
#### Light Table
[Light Table](http://lighttable.com/) 号称下一代的 IDE,它提供了数据流量统计和协作编程等的强大功能。
使用以下命令在 Ubuntu 和 Debian 安装 Light Table。
```
$ sudo add-apt-repository ppa:dr-akulavich/lighttable
$ sudo apt-get update
$ sudo apt-get install lighttable-installer
```
#### Visual Studio Code
[Visual Studio Code](https://code.visualstudio.com/) 是由微软开发的代码编辑器,它包含了文本编辑器所需要的最先进的功能,包括语法高亮、自动完成、代码调试、性能统计和图表显示等功能。
参考阅读:[在Ubuntu 下载 Visual Studio Code](https://code.visualstudio.com/download)
### 即时通信工具

#### Pidgin
[Pidgin](https://www.pidgin.im/) 是一个开源的即时通信工具,它几乎支持所有聊天平台,还支持额外扩展功能。
使用以下命令在 Ubuntu 和 Debian 安装 Pidgin。
```
$ sudo add-apt-repository ppa:jonathonf/backports
$ sudo apt-get update
$ sudo apt-get install pidgin
```
#### Skype
[Skype](https://www.skype.com/) 也是一个广为人知的软件了,任何感兴趣的用户都可以在 Linux 上使用。
使用以下命令在 Ubuntu 和 Debian 安装 Skype。
```
$ sudo apt install snapd
$ sudo snap install skype --classic
```
#### Empathy
[Empathy](https://wiki.gnome.org/Apps/Empathy) 是一个支持多协议语音、视频聊天、文本和文件传输的即时通信工具。它还允许用户添加多个服务的帐户,并用其与所有服务的帐户进行交互。
使用以下命令在 Ubuntu 和 Debian 安装 Empathy。
```
$ sudo apt-get install empathy
```
### Linux 防病毒工具
#### ClamAV/ClamTk
[ClamAV](https://www.clamav.net/) 是一个开源的跨平台命令行防病毒工具,用于检测木马、病毒和其他恶意代码。而 [ClamTk](https://dave-theunsub.github.io/clamtk/) 则是它的前端 GUI。
使用以下命令在 Ubuntu 和 Debian 安装 ClamAV 和 ClamTk。
```
$ sudo apt-get install clamav
$ sudo apt-get install clamtk
```
### Linux 桌面环境
#### Cinnamon
[Cinnamon](https://github.com/linuxmint/cinnamon-desktop) 是 GNOME 3 的自由开源衍生产品,它遵循传统的 <ruby> 桌面比拟 <rt> desktop metaphor </rt></ruby> 约定。
使用以下命令在 Ubuntu 和 Debian 安装 Cinnamon。
```
$ sudo add-apt-repository ppa:embrosyn/cinnamon
$ sudo apt update
$ sudo apt install cinnamon-desktop-environment lightdm
```
#### Mate
[Mate](https://mate-desktop.org/) 桌面环境是 GNOME 2 的衍生和延续,目的是在 Linux 上通过使用传统的桌面比拟提供有一个吸引力的 UI。
使用以下命令在 Ubuntu 和 Debian 安装 Mate。
```
$ sudo apt install tasksel
$ sudo apt update
$ sudo tasksel install ubuntu-mate-desktop
```
#### GNOME
[GNOME](https://www.gnome.org/) 是由一些免费和开源应用程序组成的桌面环境,它可以运行在任何 Linux 发行版和大多数 BSD 衍生版本上。
使用以下命令在 Ubuntu 和 Debian 安装 Gnome。
```
$ sudo apt install tasksel
$ sudo apt update
$ sudo tasksel install ubuntu-desktop
```
#### KDE
[KDE](https://www.kde.org/plasma-desktop) 由 KDE 社区开发,它为用户提供图形解决方案以控制操作系统并执行不同的计算任务。
使用以下命令在 Ubuntu 和 Debian 安装 KDE。
```
$ sudo apt install tasksel
$ sudo apt update
$ sudo tasksel install kubuntu-desktop
```
### Linux 维护工具
#### GNOME Tweak Tool
[GNOME Tweak Tool](https://github.com/nzjrs/gnome-tweak-tool) 是用于自定义和调整 GNOME 3 和 GNOME Shell 设置的流行工具。
使用以下命令在 Ubuntu 和 Debian 安装 GNOME Tweak Tool。
```
$ sudo apt install gnome-tweak-tool
```
#### Stacer
[Stacer](https://github.com/oguzhaninan/Stacer) 是一款用于监控和优化 Linux 系统的免费开源应用程序。
使用以下命令在 Ubuntu 和 Debian 安装 Stacer。
```
$ sudo add-apt-repository ppa:oguzhaninan/stacer
$ sudo apt-get update
$ sudo apt-get install stacer
```
#### BleachBit
[BleachBit](https://www.bleachbit.org/) 是一个免费的磁盘空间清理器,它也可用作隐私管理器和系统优化器。
参考阅读:[在 Ubuntu 下载 BleachBit](https://www.bleachbit.org/download)
### Linux 终端工具
#### GNOME 终端
[GNOME 终端](https://github.com/GNOME/gnome-terminal) 是 GNOME 的默认终端模拟器。
使用以下命令在 Ubuntu 和 Debian 安装 Gnome 终端。
```
$ sudo apt-get install gnome-terminal
```
#### Konsole
[Konsole](https://konsole.kde.org/) 是 KDE 的一个终端模拟器。
使用以下命令在 Ubuntu 和 Debian 安装 Konsole。
```
$ sudo apt-get install konsole
```
#### Terminator
[Terminator](https://gnometerminator.blogspot.com/p/introduction.html) 是一个功能丰富的终端程序,它基于 GNOME 终端,并且专注于整理终端功能。
使用以下命令在 Ubuntu 和 Debian 安装 Terminator。
```
$ sudo apt-get install terminator
```
#### Guake
[Guake](http://guake-project.org/) 是 GNOME 桌面环境下一个轻量级的可下拉式终端。
使用以下命令在 Ubuntu 和 Debian 安装 Guake。
```
$ sudo apt-get install guake
```
### 多媒体编辑工具
#### Ardour
[Ardour](https://ardour.org/) 是一款漂亮的的<ruby> 数字音频工作站 <rt> Digital Audio Workstation </rt></ruby>,可以完成专业的录制、编辑和混音工作。
使用以下命令在 Ubuntu 和 Debian 安装 Ardour。
```
$ sudo add-apt-repository ppa:dobey/audiotools
$ sudo apt-get update
$ sudo apt-get install ardour
```
#### Audacity
[Audacity](https://www.audacityteam.org/) 是最著名的音频编辑软件之一,它是一款跨平台的开源多轨音频编辑器。
使用以下命令在 Ubuntu 和 Debian 安装 Audacity。
```
$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity
$ sudo apt-get update
$ sudo apt-get install audacity
```
#### GIMP
[GIMP](https://www.gimp.org/) 是 Photoshop 的开源替代品中最受欢迎的。这是因为它有多种可自定义的选项、第三方插件以及活跃的用户社区。
使用以下命令在 Ubuntu 和 Debian 安装 Gimp。
```
$ sudo add-apt-repository ppa:otto-kesselgulasch/gimp
$ sudo apt update
$ sudo apt install gimp
```
#### Krita
[Krita](https://krita.org/en/) 是一款开源的绘画程序,它具有美观的 UI 和可靠的性能,也可以用作图像处理工具。
使用以下命令在 Ubuntu 和 Debian 安装 Krita。
```
$ sudo add-apt-repository ppa:kritalime/ppa
$ sudo apt update
$ sudo apt install krita
```
#### Lightworks
[Lightworks](https://www.lwks.com/) 是一款功能强大、灵活美观的专业视频编辑工具。它拥有上百种配套的视觉效果功能,可以处理任何编辑任务,毕竟这个软件已经有长达 25 年的视频处理经验。
参考阅读:[在 Ubuntu 下载 Lightworks](https://www.lwks.com/index.php?option=com_lwks&view=download&Itemid=206)
#### OpenShot
[OpenShot](https://www.openshot.org/) 是一款屡获殊荣的免费开源视频编辑器,这主要得益于其出色的性能和强大的功能。
使用以下命令在 Ubuntu 和 Debian 安装 OpenShot。
```
$ sudo add-apt-repository ppa:openshot.developers/ppa
$ sudo apt update
$ sudo apt install openshot-qt
```
#### PiTiV
[Pitivi](http://www.pitivi.org/) 也是一个美观的视频编辑器,它有优美的代码库、优质的社区,还支持优秀的协作编辑功能。
使用以下命令在 Ubuntu 和 Debian 安装 PiTiV。
```
$ flatpak install --user https://flathub.org/repo/appstream/org.pitivi.Pitivi.flatpakref
$ flatpak install --user http://flatpak.pitivi.org/pitivi.flatpakref
$ flatpak run org.pitivi.Pitivi//stable
```
### 音乐播放器
#### Rhythmbox
[Rhythmbox](https://wiki.gnome.org/Apps/Rhythmbox) 支持海量种类的音乐,目前被认为是最可靠的音乐播放器,并由 Ubuntu 自带。
使用以下命令在 Ubuntu 和 Debian 安装 Rhythmbox。
```
$ sudo add-apt-repository ppa:fossfreedom/rhythmbox
$ sudo apt-get update
$ sudo apt-get install rhythmbox
```
#### Lollypop
[Lollypop](https://gnumdk.github.io/lollypop-web/) 是一款较为年轻的开源音乐播放器,它有很多高级选项,包括网络电台,滑动播放和派对模式。尽管功能繁多,它仍然尽量做到简单易管理。
使用以下命令在 Ubuntu 和 Debian 安装 Lollypop。
```
$ sudo add-apt-repository ppa:gnumdk/lollypop
$ sudo apt-get update
$ sudo apt-get install lollypop
```
#### Amarok
[Amarok](https://amarok.kde.org/en) 是一款功能强大的音乐播放器,它有一个直观的 UI 和大量的高级功能,而且允许用户根据自己的偏好去发现新音乐。
使用以下命令在 Ubuntu 和 Debian 安装 Amarok。
```
$ sudo apt-get update
$ sudo apt-get install amarok
```
#### Clementine
[Clementine](https://www.clementine-player.org/) 是一款 Amarok 风格的音乐播放器,因此和 Amarok 相似,也有直观的用户界面、先进的控制模块,以及让用户搜索和发现新音乐的功能。
使用以下命令在 Ubuntu 和 Debian 安装 Clementine。
```
$ sudo add-apt-repository ppa:me-davidsansome/clementine
$ sudo apt-get update
$ sudo apt-get install clementine
```
#### Cmus
[Cmus](https://cmus.github.io/) 可以说是最高效的的命令行界面音乐播放器了,它具有快速可靠的特点,也支持使用扩展。
使用以下命令在 Ubuntu 和 Debian 安装 Cmus。
```
$ sudo add-apt-repository ppa:jmuc/cmus
$ sudo apt-get update
$ sudo apt-get install cmus
```
### 办公软件
#### Calligra 套件
[Calligra 套件](https://www.calligra.org/tour/calligra-suite/)为用户提供了一套总共 8 个应用程序,涵盖办公、管理、图表等各个范畴。
使用以下命令在 Ubuntu 和 Debian 安装 Calligra 套件。
```
$ sudo apt-get install calligra
```
#### LibreOffice
[LibreOffice](https://www.libreoffice.org/) 是开源社区中开发过程最活跃的办公套件,它以可靠性著称,也可以通过扩展来添加功能。
使用以下命令在 Ubuntu 和 Debian 安装 LibreOffice。
```
$ sudo add-apt-repository ppa:libreoffice/ppa
$ sudo apt update
$ sudo apt install libreoffice
```
#### WPS Office
[WPS Office](https://www.wps.com/) 是一款漂亮的办公套件,它有一个很具现代感的 UI。
参考阅读:[在 Ubuntu 安装 WPS Office](http://wps-community.org/downloads)
### 屏幕截图工具
#### Shutter
[Shutter](http://shutter-project.org/) 允许用户截取桌面的屏幕截图,然后使用一些效果进行编辑,还支持上传和在线共享。
使用以下命令在 Ubuntu 和 Debian 安装 Shutter。
```
$ sudo add-apt-repository -y ppa:shutter/ppa
$ sudo apt update
$ sudo apt install shutter
```
#### Kazam
[Kazam](https://launchpad.net/kazam) 可以用于捕获屏幕截图,它的输出对于任何支持 VP8/WebM 和 PulseAudio 视频播放器都可用。
使用以下命令在 Ubuntu 和 Debian 安装 Kazam。
```
$ sudo add-apt-repository ppa:kazam-team/unstable-series
$ sudo apt update
$ sudo apt install kazam python3-cairo python3-xlib
```
#### Gnome Screenshot
[Gnome Screenshot](https://gitlab.gnome.org/GNOME/gnome-screenshot) 过去曾经和 Gnome 一起捆绑,但现在已经独立出来。它以易于共享的格式进行截屏。
使用以下命令在 Ubuntu 和 Debian 安装 Gnome Screenshot。
```
$ sudo apt-get update
$ sudo apt-get install gnome-screenshot
```
### 录屏工具
#### SimpleScreenRecorder
[SimpleScreenRecorder](http://www.maartenbaert.be/simplescreenrecorder/) 面世时已经是录屏工具中的佼佼者,现在已成为 Linux 各个发行版中最有效、最易用的录屏工具之一。
使用以下命令在 Ubuntu 和 Debian 安装 SimpleScreenRecorder。
```
$ sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder
$ sudo apt-get update
$ sudo apt-get install simplescreenrecorder
```
#### recordMyDesktop
[recordMyDesktop](http://recordmydesktop.sourceforge.net/about.php) 是一个开源的会话记录器,它也能记录桌面会话的音频。
使用以下命令在 Ubuntu 和 Debian 安装 recordMyDesktop。
```
$ sudo apt-get update
$ sudo apt-get install gtk-recordmydesktop
```
### 文本编辑器
#### Atom
[Atom](https://atom.io/) 是由 GitHub 开发和维护的可定制文本编辑器。它是开箱即用的,但也可以使用扩展和主题自定义 UI 来增强其功能。
使用以下命令在 Ubuntu 和 Debian 安装 Atom。
```
$ sudo apt-get install snapd
$ sudo snap install atom --classic
```
#### Sublime Text
[Sublime Text](https://www.sublimetext.com/) 已经成为目前最棒的文本编辑器。它可定制、轻量灵活(即使打开了大量数据文件和加入了大量扩展),最重要的是可以永久免费使用。
使用以下命令在 Ubuntu 和 Debian 安装 Sublime Text。
```
$ sudo apt-get install snapd
$ sudo snap install sublime-text
```
#### Geany
[Geany](https://www.geany.org/) 是一个内存友好的文本编辑器,它具有基本的IDE功能,可以显示加载时间、扩展库函数等。
使用以下命令在 Ubuntu 和 Debian 安装 Geany。
```
$ sudo apt-get update
$ sudo apt-get install geany
```
#### Gedit
[Gedit](https://wiki.gnome.org/Apps/Gedit) 以其简单著称,在很多 Linux 发行版都有预装,它具有文本编辑器都具有的优秀的功能。
使用以下命令在 Ubuntu 和 Debian 安装 Gedit。
```
$ sudo apt-get update
$ sudo apt-get install gedit
```
### 备忘录软件
#### Evernote
[Evernote](https://evernote.com/) 是一款云上的笔记程序,它带有待办列表和提醒功能,能够与不同类型的笔记完美配合。
Evernote 在 Linux 上没有官方提供的软件,但可以参考 [Linux 上的 6 个 Evernote 替代客户端](https://www.fossmint.com/evernote-alternatives-for-linux/) 这篇文章使用其它第三方工具。
#### Everdo
[Everdo](https://everdo.net/) 是一款美观,安全,易兼容的备忘软件,可以用于处理待办事项和其它笔记。如果你认为 Evernote 有所不足,相信 Everdo 会是一个好的替代。
参考阅读:[在 Ubuntu 下载 Everdo](https://everdo.net/linux/)
#### Taskwarrior
[Taskwarrior](https://taskwarrior.org/) 是一个用于管理个人任务的开源跨平台命令行应用,它的速度和无干扰的环境是它的两大特点。
使用以下命令在 Ubuntu 和 Debian 安装 Taskwarrior。
```
$ sudo apt-get update
$ sudo apt-get install taskwarrior
```
### 视频播放器
#### Banshee
[Banshee](http://banshee.fm/) 是一个开源的支持多格式的媒体播放器,于 2005 年开始开发并逐渐成长。
使用以下命令在 Ubuntu 和 Debian 安装 Banshee。
```
$ sudo add-apt-repository ppa:banshee-team/ppa
$ sudo apt-get update
$ sudo apt-get install banshee
```
#### VLC
[VLC](https://www.videolan.org/) 是我最喜欢的视频播放器,它几乎可以播放任何格式的音频和视频,它还可以播放网络电台、录制桌面会话以及在线播放电影。
使用以下命令在 Ubuntu 和 Debian 安装 VLC。
```
$ sudo add-apt-repository ppa:videolan/stable-daily
$ sudo apt-get update
$ sudo apt-get install vlc
```
#### Kodi
[Kodi](https://kodi.tv/) 是世界上最著名的媒体播放器之一,它有一个成熟的媒体中心,可以播放本地和远程的多媒体文件。
使用以下命令在 Ubuntu 和 Debian 安装 Kodi。
```
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:team-xbmc/ppa
$ sudo apt-get update
$ sudo apt-get install kodi
```
#### SMPlayer
[SMPlayer](https://www.smplayer.info/) 是 MPlayer 的 GUI 版本,所有流行的媒体格式它都能够处理,并且它还有从 YouTube 和 Chromcast 和下载字幕的功能。
使用以下命令在 Ubuntu 和 Debian 安装 SMPlayer。
```
$ sudo add-apt-repository ppa:rvm/smplayer
$ sudo apt-get update
$ sudo apt-get install smplayer
```
### 虚拟化工具
#### VirtualBox
[VirtualBox](https://www.virtualbox.org/wiki/VirtualBox) 是一个用于操作系统虚拟化的开源应用程序,在服务器、台式机和嵌入式系统上都可以运行。
使用以下命令在 Ubuntu 和 Debian 安装 VirtualBox。
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install virtualbox-5.2
$ virtualbox
```
#### VMWare
[VMware](https://www.vmware.com/) 是一个为客户提供平台虚拟化和云计算服务的数字工作区,也是第一个成功将 x86 架构虚拟化的厂商。它的产品之一 VMware Workstation 允许用户在同一台机器上同时运行多个操作系统。
参阅 [在 Ubuntu 上安装 VMWare Workstation Pro](https://www.tecmint.com/install-vmware-workstation-in-linux/) 可以了解 VMWare 的安装。
### 浏览器
#### Chrome
[Google Chrome](https://www.google.com/chrome/) 无疑是最受欢迎的浏览器。Chrome 以其速度、简洁、安全、美观而受人喜爱,它遵循了 Google 的界面设计风格,是 web 开发人员不可缺少的浏览器,同时它也是免费开源的。
使用以下命令在 Ubuntu 和 Debian 安装 Google Chrome。
```
$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
$ sudo apt-get update
$ sudo apt-get install google-chrome-stable
```
#### Firefox
[Firefox Quantum](https://www.mozilla.org/en-US/firefox/) 是一款漂亮、快速、完善并且可自定义的浏览器。它也是自由开源的,包含有开发人员所需要的工具,对于初学者也没有任何使用门槛。
使用以下命令在 Ubuntu 和 Debian 安装 Firefox Quantum。
```
$ sudo add-apt-repository ppa:mozillateam/firefox-next
$ sudo apt update && sudo apt upgrade
$ sudo apt install firefox
```
#### Vivaldi
[Vivaldi](https://vivaldi.com/) 是一个基于 Chrome 的自由开源项目,旨在通过添加扩展来使 Chrome 的功能更加完善。色彩丰富的界面,性能良好、灵活性强是它的几大特点。
参考阅读:[在 Ubuntu 下载 Vivaldi](https://vivaldi.com/)
以上就是我的推荐,你还有更好的软件向大家分享吗?欢迎评论。
---
via: <https://www.fossmint.com/most-used-linux-applications/>
作者:[Martins D. Okoi](https://www.fossmint.com/author/dillivine/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,100 | Clinews:从命令行阅读新闻和最新头条 | https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/ | 2018-10-11T14:51:02 | [
"新闻"
] | https://linux.cn/article-10100-1.html | 
不久前,我们写了一个名为 [InstantNews](https://www.ostechnix.com/get-news-instantly-commandline-linux/) 的命令行新闻客户端,它可以帮助你立即在命令行阅读新闻和最新头条新闻。今天,我偶然发现了一个名为 **Clinews** 的类似应用,其功能与之相同 —— 在终端阅读来自热门网站和博客的新闻和最新头条。你无需安装 GUI 应用或移动应用,就可以直接从终端阅读世界上正在发生的事情。它是使用 **NodeJS** 编写的自由开源程序。
### 安装 Clinews
由于 Clinews 是使用 NodeJS 编写的,因此你可以使用 NPM 包管理器安装。如果尚未安装 NodeJS,请按照以下链接中的说明进行安装。
安装 node 后,运行以下命令安装 Clinews:
```
$ npm i -g clinews
```
你也可以使用 **Yarn** 安装 Clinews:
```
$ yarn global add clinews
```
Yarn 本身可以使用 npm 安装:

```
$ npm i -g yarn
```
### 配置 News API
Clinews 从 [News API](https://newsapi.org/) 中检索所有新闻标题。News API 是一个简单易用的 API,它返回当前在一系列新闻源和博客上发布的头条的 JSON 元数据。它目前提供来自 70 个热门源的实时头条,包括 Ars Technica、BBC、Bloomberg、CNN、每日邮报、Engadget、ESPN、金融时报、谷歌新闻、Hacker News、IGN、Mashable、国家地理、Reddit r/all、路透社、Spiegel Online、Techcrunch、The Guardian、The Hindu、赫芬顿邮报、纽约时报、The Next Web、华尔街日报、今日美国[等等](https://newsapi.org/sources)。
首先,你需要 News API 的 API 密钥。进入 <https://newsapi.org/register> 并注册一个免费帐户来获取 API 密钥。
从 News API 获得 API 密钥后,编辑 `.bashrc`:
```
$ vi ~/.bashrc
```
在最后添加 newsapi API 密钥,如下所示:
```
export IN_API_KEY="Paste-API-key-here"
```
请注意,你需要将密钥粘贴在双引号内。保存并关闭文件。
运行以下命令以更新更改。
```
$ source ~/.bashrc
```
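可以用下面的小检查确认密钥已经出现在当前环境中(这里的密钥内容只是占位符,实际使用时替换为你在 newsapi.org 注册得到的密钥):

```shell
# 占位密钥,仅作演示
export IN_API_KEY="Paste-API-key-here"

# 确认变量非空
[ -n "$IN_API_KEY" ] && echo "IN_API_KEY is set"
```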
完成。现在继续并从新闻源获取最新的头条新闻。
### 在命令行阅读新闻和最新头条
要阅读特定新闻源的新闻和最新头条,例如 **The Hindu**,请运行:
```
$ news fetch the-hindu
```
这里,`the-hindu` 是新闻源的源 ID(如何获取各个源的 ID,见下文的 `news sources` 命令)。
上述命令将从 The Hindu 新闻站获取最新的 10 个头条,并将其显示在终端中。此外,它还显示新闻的简要描述、发布的日期和时间以及到源的实际链接。
**示例输出:**

要在浏览器中阅读新闻,请按住 Ctrl 键并单击 URL。它将在你的默认 Web 浏览器中打开。
要查看所有的新闻源,请运行:
```
$ news sources
```
**示例输出:**

正如你在上面的截图中看到的,Clinews 列出了所有新闻源,包括新闻源的名称、获取 ID、网站描述、网站 URL 以及它所在的国家/地区。在撰写本指南时,Clinews 目前支持 70 多个新闻源。
Clinews 还可以搜索符合搜索条件/术语的所有源的新闻报道。例如,要列出包含单词 “Tamilnadu” 的所有新闻报道,请使用以下命令:
```
$ news search "Tamilnadu"
```
此命令将会筛选所有新闻源中含有 “Tamilnadu” 的报道。
Clinews 还有一些其它选项可以帮助你:

* 限制你想看的新闻报道的数量,
* 对新闻报道排序(热门、最新),
* 按分类智能显示新闻报道(例如商业、娱乐、游戏、大众、音乐、政治、科学和自然、体育、技术)。
更多详细信息,请参阅帮助部分:
```
$ clinews -h
```
就是这些了。希望这篇对你有用。还有更多好东西。敬请关注!
干杯!
---
via: <https://www.ostechnix.com/clinews-read-news-and-latest-headlines-from-commandline/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,101 | 五种加速 Go 的特性 | https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast | 2018-10-11T15:06:40 | [
"Go",
"性能"
] | https://linux.cn/article-10101-1.html | *Anthony Starks 使用他出色的 Deck 演示工具重构了我原来的基于 Google Slides 的幻灯片。你可以在他的博客上查看他重构后的幻灯片,*
*[mindchunk.blogspot.com.au/2014/06/remixing-with-deck](http://mindchunk.blogspot.com.au/2014/06/remixing-with-deck.html)。*
我最近被邀请在 Gocon 发表演讲,这是一个每半年在日本东京举行的 Go 的精彩大会。[Gocon 2014](http://ymotongpoo.hatenablog.com/entry/2014/06/01/124350) 是一个完全由社区驱动的为期一天的活动,由培训和一整个下午的围绕着生产环境中的 Go 这个主题的演讲组成.(LCTT 译注:本文发表于 2014 年)
以下是我的讲义。原文的结构能让我缓慢而清晰的演讲,因此我已经编辑了它使其更可读。
我要感谢 [Bill Kennedy](http://www.goinggo.net/) 和 Minux Ma,特别是 [Josh Bleecher Snyder](https://twitter.com/offbymany),感谢他们在我准备这次演讲中的帮助。
---
大家下午好。
我叫 David.
我很高兴今天能来到 Gocon。我想参加这个会议已经两年了,我很感谢主办方能提供给我向你们演讲的机会。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-1.jpg)
我想以一个问题开始我的演讲。
为什么选择 Go?
当大家讨论学习或在生产环境中使用 Go 的原因时,答案不一而足,但因为以下三个原因的最多。
[](https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast/gocon-2014-2)
这就是 TOP3 的原因。
第一,并发。
Go 的 <ruby> 并发原语 <rt> Concurrency Primitives </rt></ruby> 对于来自 Nodejs,Ruby 或 Python 等单线程脚本语言的程序员,或者来自 C++ 或 Java 等重量级线程模型的语言都很有吸引力。
易于部署。
我们今天从经验丰富的 Gophers 那里听说过,他们非常欣赏部署 Go 应用的简单性。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-3.jpg)
然后是性能。
我相信人们选择 Go 的一个重要原因是它*快*。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-4.jpg)
在今天的演讲中,我想讨论五个有助于提高 Go 性能的特性。
我还将与大家分享 Go 如何实现这些特性的细节。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-5.jpg)
我要谈的第一个特性是 Go 对于值的高效处理和存储。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-6.jpg)
这是 Go 中一个值的例子。编译时,`gocon` 正好消耗四个字节的内存。
让我们将 Go 与其他一些语言进行比较
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-7.jpg)
由于 Python 表示变量的方式的开销,使用 Python 存储相同的值会消耗六倍的内存。
Python 使用额外的内存来跟踪类型信息,进行 <ruby> 引用计数 <rt> Reference Counting </rt></ruby> 等。
让我们看另一个例子:
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-8.jpg)
与 Go 类似,Java 消耗 4 个字节的内存来存储 `int` 型。
但是,要在像 `List` 或 `Map` 这样的集合中使用此值,编译器必须将其转换为 `Integer` 对象。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-9.jpg)
因此,Java 中的整数通常消耗 16 到 24 个字节的内存。
为什么这很重要? 内存便宜且充足,为什么这个开销很重要?
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-10.jpg)
这是一张显示 CPU 时钟速度与内存总线速度的图表。
请注意 CPU 时钟速度和内存总线速度之间的差距如何继续扩大。
两者之间的差异实际上是 CPU 花费多少时间等待内存。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-11.jpg)
自 1960 年代后期以来,CPU 设计师已经意识到了这个问题。
他们的解决方案是一个缓存,一个更小、更快的内存区域,介入 CPU 和主存之间。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-12.jpg)
这是一个 `Location` 类型,它保存物体在三维空间中的位置。它是用 Go 编写的,因此每个 `Location` 只消耗 24 个字节的存储空间。
我们可以使用这种类型来构造一个容纳 1000 个 `Location` 的数组类型,它只消耗 24000 字节的内存。
在数组内部,`Location` 结构体是顺序存储的,而不是随机存储的 1000 个 `Location` 结构体的指针。
这很重要,因为现在所有 1000 个 `Location` 结构体都按顺序放在缓存中,紧密排列在一起。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-13.jpg)
Go 允许您创建紧凑的数据结构,避免不必要的填充字节。
紧凑的数据结构能更好地利用缓存。
更好的缓存利用率可带来更好的性能。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-14.jpg)
函数调用不是无开销的。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-15.jpg)
调用函数时会发生三件事。
创建一个新的 <ruby> 栈帧 <rt> Stack Frame </rt></ruby>,并记录调用者的详细信息。
在函数调用期间可能被覆盖的任何寄存器都将保存到栈中。
处理器计算函数的地址并执行到该新地址的分支。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-16.jpg)
由于函数调用是非常常见的操作,因此 CPU 设计师一直在努力优化此过程,但他们无法消除开销。
函数调用的固有开销,或重于泰山,或轻于鸿毛,这取决于函数做了什么。
减少函数调用开销的解决方案是 <ruby> 内联 <rt> Inlining </rt></ruby>。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-17.jpg)
Go 编译器通过将函数体视为调用者的一部分来内联函数。
内联也有成本,它增加了二进制文件大小。
只有当函数调用的开销相对于函数所做的工作很大时,内联才有意义,因此只有简单的函数才适合内联。
复杂的函数通常不受调用它们的开销所支配,因此不会内联。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-18.jpg)
这个例子显示函数 `Double` 调用 `util.Max`。
为了减少调用 `util.Max` 的开销,编译器可以将 `util.Max` 内联到 `Double` 中,就象这样
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-19.jpg)
内联后不再调用 `util.Max`,但是 `Double` 的行为没有改变。
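幻灯片中的完整代码在本文里看不到,下面是按描述重构的一个示意(`Max` 和 `Double` 的具体实现是我的假设)。编译时可以用 `go build -gcflags=-m` 观察编译器的内联决策:

```go
package main

import "fmt"

// Max 足够简单,是内联的候选者(对应幻灯片中的 util.Max)。
func Max(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// Double 调用 Max。内联后,Max 的函数体被视为 Double 的一部分,
// 调用开销随之消失,但 Double 的行为不变。
func Double(a int) int {
	return Max(a*2, a)
}

func main() {
	fmt.Println(Double(3)) // 6
}
```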
内联并不是 Go 独有的。几乎每种编译或及时编译的语言都执行此优化。但是 Go 的内联是如何实现的?
Go 实现非常简单。编译包时,会标记任何适合内联的小函数,然后照常编译。
然后函数的源代码和编译后版本都会被存储。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-20.jpg)
此幻灯片显示了 `util.a` 的内容。源代码已经过一些转换,以便编译器更容易快速处理。
当编译器编译 `Double` 时,它看到 `util.Max` 可内联的,并且 `util.Max` 的源代码是可用的。
这样,它可以直接用原始函数的源代码进行替换,而不是插入对 `util.Max` 的编译版本的调用。
拥有该函数的源代码可以实现其他优化。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-21.jpg)
在这个例子中,尽管函数 `Test` 总是返回 `false`,但 `Expensive` 在不执行它的情况下无法知道结果。
当 `Test` 被内联时,我们得到这样的东西。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-22.jpg)
编译器现在知道 `Expensive` 的代码无法访问。
这不仅节省了调用 `Test` 的成本,还节省了编译或运行任何现在无法访问的 `Expensive` 代码。
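上面的例子大致可以重构成这样(具体实现是我的假设):`Test` 内联之后,编译器就能看出调用 `Expensive` 的分支永远不可达,从而把它整个删掉。

```go
package main

import "fmt"

// Test 总是返回 false,而且小到可以被内联。
func Test() bool {
	return false
}

// Expensive 代表一段开销很大的代码。
func Expensive() {
	fmt.Println("doing expensive work")
}

func Maybe() {
	if Test() { // Test 内联后,这个分支成为不可达代码
		Expensive()
	}
}

func main() {
	Maybe() // 不会打印任何东西
}
```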
Go 编译器可以跨文件甚至跨包自动内联函数。还包括从标准库调用的可内联函数的代码。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-23.jpg)
<ruby> 强制垃圾回收 <rt> Mandatory Garbage Collection </rt></ruby> 使 Go 成为一种更简单,更安全的语言。
这并不意味着垃圾回收会使 Go 变慢,或者垃圾回收是程序速度的瓶颈。
这意味着在堆上分配的内存是有代价的。每次 GC 运行时都会花费 CPU 时间,直到释放内存为止。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-24.jpg)
然而,有另一个地方分配内存,那就是栈。
与 C 不同(在 C 中,你必须自己选择:是通过 `malloc` 把值存储在堆上,还是通过在函数作用域内声明把它存储在栈上),Go 实现了一个名为 <ruby> 逃逸分析 <rt> Escape Analysis </rt></ruby> 的优化。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-25.jpg)
逃逸分析决定了对一个值的任何引用是否会从被声明的函数中逃逸。
如果没有引用逃逸,则该值可以安全地存储在栈中。
存储在栈中的值不需要分配或释放。
让我们看一些例子
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-26.jpg)
`Sum` 返回 1 到 100 的整数的和。这是一种相当不寻常的做法,但它说明了逃逸分析的工作原理。
因为切片 `numbers` 仅在 `Sum` 内引用,所以编译器将安排到栈上来存储的 100 个整数,而不是安排到堆上。
没有必要回收 `numbers`,它会在 `Sum` 返回时自动释放。
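按描述,`Sum` 大致如下(细节是我的重构)。用 `go build -gcflags=-m` 编译时,可以看到编译器报告这个切片没有逃逸:

```go
package main

import "fmt"

// Sum 把 1 到 100 的整数放进一个切片再求和。
// 切片 numbers 只在函数内部被引用,逃逸分析
// 会把这 100 个整数分配在栈上,函数返回时自动释放。
func Sum() int {
	numbers := make([]int, 100)
	for i := range numbers {
		numbers[i] = i + 1
	}
	sum := 0
	for _, n := range numbers {
		sum += n
	}
	return sum
}

func main() {
	fmt.Println(Sum()) // 5050
}
```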
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-27.jpg)
第二个例子也有点刻意。在 `CenterCursor` 中,我们创建一个新的 `Cursor` 对象并在 `c` 中存储指向它的指针。
然后我们将 `c` 传递给 `Center()` 函数,它将 `Cursor` 移动到屏幕的中心。
最后我们打印出那个 `Cursor` 的 X 和 Y 坐标。
即使 `c` 被 `new` 函数分配了空间,它也不会存储在堆上,因为没有引用 `c` 的变量逃逸 `CenterCursor` 函数。
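这个例子可以重构如下;`Cursor` 的字段和屏幕尺寸(假设为 640x480)都是我的假设:

```go
package main

import "fmt"

type Cursor struct {
	X, Y int
}

// Center 把光标移动到屏幕中心(这里假设屏幕为 640x480)。
func Center(c *Cursor) {
	c.X += 320
	c.Y += 240
}

// c 用 new 分配,但没有对它的引用逃逸出 CenterCursor,
// 因此它会被分配在栈上,而不是堆上。
func CenterCursor() {
	c := new(Cursor)
	Center(c)
	fmt.Println(c.X, c.Y) // 320 240
}

func main() {
	CenterCursor()
}
```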
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-28.jpg)
默认情况下,Go 的优化始终处于启用状态。可以使用 `-gcflags = -m` 开关查看编译器的逃逸分析和内联决策。
因为逃逸分析是在编译时执行的,而不是运行时,所以无论垃圾回收的效率如何,栈分配总是比堆分配快。
我将在本演讲的其余部分详细讨论栈。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-30.jpg)
Go 有 goroutine。 这是 Go 并发的基石。
我想退一步,探索 goroutine 的历史。
最初,计算机一次运行一个进程。在 60 年代,多进程或 <ruby> 分时 <rt> Time Sharing </rt></ruby> 的想法变得流行起来。
在分时系统中,操作系统必须通过保护当前进程的现场,然后恢复另一个进程的现场,不断地在这些进程之间切换 CPU 的注意力。
这称为 进程切换。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-29.jpg)
进程切换有三个主要开销。
首先,内核需要保护该进程的所有 CPU 寄存器的现场,然后恢复另一个进程的现场。
内核还需要将 CPU 的映射从虚拟内存刷新到物理内存,因为这些映射仅对当前进程有效。
最后是操作系统 <ruby> 上下文切换 <rt> Context Switch </rt></ruby> 的成本,以及 <ruby> 调度函数 <rt> Scheduler Function </rt></ruby> 选择占用 CPU 的下一个进程的开销。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-31.jpg)
现代处理器中有数量惊人的寄存器。我很难在一张幻灯片上排开它们,这可以让你知道保护和恢复它们需要多少时间。
由于进程切换可以在进程执行的任何时刻发生,因此操作系统需要存储所有寄存器的内容,因为它不知道当前正在使用哪些寄存器。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-32.jpg)
这导致了线程的出生,这些线程在概念上与进程相同,但共享相同的内存空间。
由于线程共享地址空间,因此它们比进程更轻,因此创建速度更快,切换速度更快。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-33.jpg)
Goroutine 升华了线程的思想。
Goroutine 是<ruby> 协作式调度 <rt> Cooperatively Scheduled </rt></ruby>的,而不是依靠内核来调度。
当对 Go <ruby> 运行时调度器 <rt> Runtime Scheduler </rt></ruby> 进行显式调用时,goroutine 之间的切换仅发生在明确定义的点上。
编译器知道正在使用的寄存器并自动保存它们。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-34.jpg)
虽然 goroutine 是协作式调度的,但这个调度工作由运行时替你处理。
Goroutine 可能让出给其它 goroutine 的时刻是:
* 阻塞式通道发送和接收。
* `go` 语句,虽然不能保证新的 goroutine 会被立即调度。
* 文件和网络操作等阻塞式系统调用。
* 在被垃圾回收循环停止后。
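其中第一种调度点(阻塞式的通道操作)可以用一个最小的例子来演示:主 goroutine 在接收处阻塞时,调度器会切换去运行新启动的 goroutine(示例是我写的演示,并非幻灯片内容):

```go
package main

import "fmt"

// ping 在无缓冲通道上等待另一个 goroutine 的发送。
// 接收操作 <-c 是一个阻塞点,也是一个调度点。
func ping() string {
	c := make(chan string)
	go func() {
		c <- "pong" // 发送同样会阻塞,直到有人接收
	}()
	return <-c
}

func main() {
	fmt.Println(ping()) // pong
}
```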
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-35.jpg)
这个例子说明了上一张幻灯片中描述的一些调度点。
箭头所示的线程从左侧的 `ReadFile` 函数开始。遇到 `os.Open`,它在等待文件操作完成时阻塞线程,因此调度器将线程切换到右侧的 goroutine。
继续执行直到从通道 `c` 中读,并且此时 `os.Open` 调用已完成,因此调度器将线程切换回左侧并继续执行 `file.Read` 函数,然后又被文件 IO 阻塞。
调度器将线程切换回右侧以进行另一个通道操作,该操作在左侧运行期间已解锁,但在通道发送时再次阻塞。
最后,当 `Read` 操作完成并且数据可用时,线程切换回左侧。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-36.jpg)
这张幻灯片显示了低级语言描述的 `runtime.Syscall` 函数,它是 `os` 包中所有函数的基础。
只要你的代码调用操作系统,就会通过此函数。
对 `entersyscall` 的调用通知运行时该线程即将阻塞。
这允许运行时启动一个新线程,该线程将在当前线程被阻塞时为其他 goroutine 提供服务。
这导致每 Go 进程的操作系统线程相对较少,Go 运行时负责将可运行的 Goroutine 分配给空闲的操作系统线程。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-37.jpg)
在上一节中,我讨论了 goroutine 如何减少管理许多(有时是数十万个并发执行线程)的开销。
Goroutine 的故事还有另一面,那就是栈管理,它引导我进入我的最后一个话题。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-39.jpg)
这是一个进程的内存布局图。我们感兴趣的关键是堆和栈的位置。
传统上,在进程的地址空间内,堆位于内存的底部,在程序代码段(text)之上,并向上增长。
栈位于虚拟地址空间的顶部,并向下增长。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-40.jpg)
因为堆和栈相互覆盖的结果会是灾难性的,操作系统通常会安排在栈和堆之间放置一个不可写内存区域,以确保如果它们发生碰撞,程序将中止。
这称为保护页,有效地限制了进程的栈大小,通常大约为几兆字节。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-41.jpg)
我们已经讨论过线程共享相同的地址空间,因此对于每个线程,它必须有自己的栈。
由于很难预测特定线程的栈需求,因此为每个线程的栈和保护页面保留了大量内存。
我们期望的是这些空间永远不会被用完,保护页也永远不会被触及。
缺点是随着程序中线程数的增加,可用地址空间的数量会减少。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-42.jpg)
我们已经看到 Go 运行时将大量的 goroutine 调度到少量线程上,但那些 goroutines 的栈需求呢?
Go 编译器不使用保护页,而是在每个函数调用时插入一个检查,以检查是否有足够的栈来运行该函数。如果没有,运行时可以分配更多的栈空间。
由于这种检查,goroutines 初始栈可以做得更小,这反过来允许 Go 程序员将 goroutines 视为廉价资源。
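可以用一个深递归来体会这一点:goroutine 的初始栈只有几 KB,但由于每次函数调用都会检查并按需扩展栈,下面这样的调用深度也能正常运行(示例是我写的演示,并非幻灯片内容):

```go
package main

import "fmt"

// depth 递归 n 层。10 万层的调用深度远超初始栈所能容纳的范围,
// 运行时会在栈检查失败时自动分配更大的栈。
func depth(n int) int {
	if n == 0 {
		return 0
	}
	return 1 + depth(n-1)
}

func main() {
	fmt.Println(depth(100000)) // 100000
}
```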
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-43.jpg)
这是一张显示了 Go 1.2 如何管理栈的幻灯片。
当 `G` 调用 `H` 时,没有足够的空间让 `H` 运行,所以运行时从堆中分配一个新的栈帧,然后在新的栈段上运行 `H`。当 `H` 返回时,栈区域返回到堆,然后返回到 `G`。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-44.jpg)
这种管理栈的方法通常很好用,但对于某些类型的代码,通常是递归代码,它可能导致程序的内部循环跨越这些栈边界之一。
例如,在程序的内部循环中,函数 `G` 可以在循环中多次调用 `H`,
每次都会导致栈拆分。 这被称为 <ruby> 热分裂 <rt> Hot Split </rt></ruby> 问题。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-45.jpg)
为了解决热分裂问题,Go 1.3 采用了一种新的栈管理方法。
如果 goroutine 的栈太小,则不会添加和删除其他栈段,而是分配新的更大的栈。
旧栈的内容被复制到新栈,然后 goroutine 使用新的更大的栈继续运行。
在第一次调用 `H` 之后,栈将足够大,对可用栈空间的检查将始终成功。
这解决了热分裂问题。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-46.jpg)
值,内联,逃逸分析,Goroutines 和分段/复制栈。
这些是我今天选择谈论的五个特性,但它们绝不是使 Go 成为一门快速语言的全部因素,正如人们选择学习 Go 的理由也绝不止前面列出的那三个一样。
这五个特性单独来看每一个都很强大,而且它们并不是孤立存在的。
例如,运行时将 goroutine 复用到线程上的方式在没有可扩展栈的情况下几乎没有效率。
内联通过将较小的函数组合成较大的函数来降低栈大小检查的成本。
逃逸分析通过自动将内存分配从堆移动到栈,减少了垃圾回收器的压力。
逃逸分析还提供了更好的 <ruby> 缓存局部性 <rt> Cache Locality </rt></ruby>。
如果没有可增长的栈,逃逸分析可能会对栈施加太大的压力。
[](https://dave.cheney.net/wp-content/uploads/2014/06/Gocon-2014-47.jpg)
* 感谢 Gocon 主办方允许我今天发言
* twitter / web / email details
* 感谢 @offbymany,@billkennedy\_go 和 Minux 在准备这个演讲的过程中所提供的帮助。
### 相关文章:
1. [听我在 OSCON 上关于 Go 性能的演讲](https://dave.cheney.net/2015/05/31/hear-me-speak-about-go-performance-at-oscon)
2. [为什么 Goroutine 的栈是无限大的?](https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite)
3. [Go 的运行时环境变量的旋风之旅](https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables)
4. [没有事件循环的性能](https://dave.cheney.net/2015/08/08/performance-without-the-event-loop)
---
作者简介:
David 是来自澳大利亚悉尼的程序员和作者。
自 2011 年 2 月起成为 Go 的 contributor,自 2012 年 4 月起成为 committer。
联系信息
* [[email protected]](mailto:[email protected])
* twitter: @davecheney
---
via: <https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast>
作者:[Dave Cheney](https://dave.cheney.net/) 译者:[houbaron](https://github.com/houbaron) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | *Anthony Starks has remixed my original Google Present based slides using his fantastic Deck presentation tool. You can check out his remix over on his blog, mindchunk.blogspot.com.au/2014/06/remixing-with-deck.*
I was recently invited to give a talk at Gocon, a fantastic Go conference held semi-annually in Tokyo, Japan. [Gocon 2014](http://ymotongpoo.hatenablog.com/entry/2014/06/01/124350) was an entirely community-run one day event combining training and an afternoon of presentations surrounding the theme of Go in production.
The following is the text of my presentation. The original text was structured to force me to speak slowly and clearly, so I have taken the liberty of editing it slightly to be more readable.
I want to thank [Bill Kennedy](http://www.goinggo.net/), Minux Ma, and especially [Josh Bleecher Snyder](https://twitter.com/offbymany), for their assistance in preparing this talk.
Good afternoon.
My name is David.
I am delighted to be here at Gocon today. I have wanted to come to this conference for two years and I am very grateful to the organisers for extending me the opportunity to present to you today.
I want to begin my talk with a question.
Why are people choosing to use Go ?
When people talk about their decision to learn Go, or use it in their product, they have a variety of answers, but there are always three that are at the top of their list
The first, Concurrency.
Go’s concurrency primitives are attractive to programmers who come from single threaded scripting languages like Nodejs, Ruby, or Python, or from languages like C++ or Java with their heavyweight threading model.
Ease of deployment.
We have heard today from experienced Gophers who appreciate the simplicity of deploying Go applications.
This leaves Performance.
I believe an important reason why people choose to use Go is because it is *fast*.
For my talk today I want to discuss five features that contribute to Go’s performance.
I will also share with you the details of how Go implements these features.
The first feature I want to talk about is Go’s efficient treatment and storage of values.
This is an example of a value in Go. When compiled, `gocon`
consumes exactly four bytes of memory.
Let’s compare Go with some other languages
Due to the overhead of the way Python represents variables, storing the same value using Python consumes six times more memory.
This extra memory is used by Python to track type information, do reference counting, etc
Let’s look at another example:
Similar to Go, the Java `int`
type consumes 4 bytes of memory to store this value.
However, to use this value in a collection like a `List`
or `Map`
, the compiler must convert it into an `Integer`
object.
So an integer in Java frequently looks more like this and consumes between 16 and 24 bytes of memory.
Why is this important ? Memory is cheap and plentiful, why should this overhead matter ?
This is a graph showing CPU clock speed vs memory bus speed.
Notice how the gap between CPU clock speed and memory bus speed continues to widen.
The difference between the two is effectively how much time the CPU spends waiting for memory.
Since the late 1960’s CPU designers have understood this problem.
Their solution is a cache, an area of smaller, faster memory which is inserted between the CPU and main memory.
This is a `Location`
type which holds the location of some object in three dimensional space. It is written in Go, so each `Location`
consumes exactly 24 bytes of storage.
We can use this type to construct an array type of 1,000 `Location`
s, which consumes exactly 24,000 bytes of memory.
Inside the array, the `Location`
structures are stored sequentially, rather than as pointers to 1,000 Location structures stored randomly.
This is important because now all 1,000 `Location`
structures are in the cache in sequence, packed tightly together.
Go lets you create compact data structures, avoiding unnecessary indirection.
Compact data structures utilise the cache better.
Better cache utilisation leads to better performance.
Function calls are not free.
Three things happen when a function is called.
A new stack frame is created, and the details of the caller recorded.
Any registers which may be overwritten during the function call are saved to the stack.
The processor computes the address of the function and executes a branch to that new address.
Because function calls are very common operations, CPU designers have worked hard to optimise this procedure, but they cannot eliminate the overhead.
Depending on what the function does, this overhead may be trivial or significant.
A solution to reducing function call overhead is an optimisation technique called Inlining.
The Go compiler inlines a function by treating the body of the function as if it were part of the caller.
Inlining has a cost; it increases binary size.
It only makes sense to inline when the overhead of calling a function is large relative to the work the function does, so only simple functions are candidates for inlining.
Complicated functions are usually not dominated by the overhead of calling them and are therefore not inlined.
This example shows the function `Double`
calling `util.Max`
.
To reduce the overhead of the call to `util.Max`
, the compiler can inline `util.Max`
into `Double`
, resulting in something like this
After inlining there is no longer a call to `util.Max`
, but the behaviour of `Double`
is unchanged.
Inlining isn’t exclusive to Go. Almost every compiled or JITed language performs this optimisation. But how does inlining in Go work?
The Go implementation is very simple. When a package is compiled, any small function that is suitable for inlining is marked and then compiled as usual.
Then both the source of the function and the compiled version are stored.
This slide shows the contents of util.a. The source has been transformed a little to make it easier for the compiler to process quickly.
When the compiler compiles Double it sees that `util.Max`
is inlinable, and the source of `util.Max`
is available.
Rather than insert a call to the compiled version of `util.Max`
, it can substitute the source of the original function.
Having the source of the function enables other optimizations.
In this example, although the function Test always returns false, Expensive cannot know that without executing it.
When `Test`
is inlined, we get something like this
The compiler now knows that the expensive code is unreachable.
Not only does this save the cost of calling Test, it saves compiling or running any of the expensive code that is now unreachable.
The Go compiler can automatically inline functions across files and even across packages. This includes code that calls inlinable functions from the standard library.
Mandatory garbage collection makes Go a simpler and safer language.
This does not imply that garbage collection makes Go slow, or that garbage collection is the ultimate arbiter of the speed of your program.
What it does mean is memory allocated on the heap comes at a cost. It is a debt that costs CPU time every time the GC runs until that memory is freed.
There is however another place to allocate memory, and that is the stack.
Unlike C, which forces you to choose if a value will be stored on the heap, via `malloc`
, or on the stack, by declaring it inside the scope of the function, Go implements an optimisation called *escape analysis*.
Escape analysis determines whether any references to a value escape the function in which the value is declared.
If no references escape, the value may be safely stored on the stack.
Values stored on the stack do not need to be allocated or freed.
Lets look at some examples
`Sum`
adds the numbers between 1 and 100 and returns the result. This is a rather unusual way to do this, but it illustrates how Escape Analysis works.
Because the numbers slice is only referenced inside `Sum`
, the compiler will arrange to store the 100 integers for that slice on the stack, rather than the heap.
There is no need to garbage collect `numbers`
, it is automatically freed when `Sum`
returns.
This second example is also a little contrived. In `CenterCursor`
we create a new `Cursor`
and store a pointer to it in c.
Then we pass `c`
to the `Center()`
function which moves the `Cursor`
to the center of the screen.
Then finally we print the X and Y locations of that `Cursor`
.
Even though `c`
was allocated with the `new`
function, it will not be stored on the heap, because no reference `c`
escapes the `CenterCursor`
function.
Go’s optimisations are always enabled by default. You can see the compiler’s escape analysis and inlining decisions with the `-gcflags=-m`
switch.
Because escape analysis is performed at compile time, not run time, stack allocation will always be faster than heap allocation, no matter how efficient your garbage collector is.
I will talk more about the stack in the remaining sections of this talk.
Go has goroutines. These are the foundations for concurrency in Go.
I want to step back for a moment and explore the history that leads us to goroutines.
In the beginning computers ran one process at a time. Then in the 60’s the idea of multiprocessing, or time sharing became popular.
In a time-sharing system the operating systems must constantly switch the attention of the CPU between these processes by recording the state of the current process, then restoring the state of another.
This is called *process switching*.
There are three main costs of a process switch.
First is the kernel needs to store the contents of all the CPU registers for that process, then restore the values for another process.
The kernel also needs to flush the CPU’s mappings from virtual memory to physical memory as these are only valid for the current process.
Finally there is the cost of the operating system context switch, and the overhead of the scheduler function to choose the next process to occupy the CPU.
There are a surprising number of registers in a modern processor. I have difficulty fitting them on one slide, which should give you a clue how much time it takes to save and restore them.
Because a process switch can occur at any point in a process’ execution, the operating system needs to store the contents of all of these registers because it does not know which are currently in use.
This lead to the development of threads, which are conceptually the same as processes, but share the same memory space.
As threads share address space, they are lighter than processes so are faster to create and faster to switch between.
Goroutines take the idea of threads a step further.
Goroutines are cooperatively scheduled, rather than relying on the kernel to manage their time sharing.
The switch between goroutines only happens at well defined points, when an explicit call is made to the Go runtime scheduler.
The compiler knows the registers which are in use and saves them automatically.
While goroutines are cooperatively scheduled, this scheduling is handled for you by the runtime.
Places where Goroutines may yield to others are:
- Channel send and receive operations, if those operations would block.
- The Go statement, although there is no guarantee that new goroutine will be scheduled immediately.
- Blocking syscalls like file and network operations.
- After being stopped for a garbage collection cycle.
This an example to illustrate some of the scheduling points described in the previous slide.
The thread, depicted by the arrow, starts on the left in the `ReadFile`
function. It encounters `os.Open`
, which blocks the thread while waiting for the file operation to complete, so the scheduler switches the thread to the goroutine on the right hand side.
Execution continues until the read from the `c`
chan blocks, and by this time the `os.Open`
call has completed so the scheduler switches the thread back the left hand side and continues to the `file.Read`
function, which again blocks on file IO.
The scheduler switches the thread back to the right hand side for another channel operation, which has unblocked during the time the left hand side was running, but it blocks again on the channel send.
Finally the thread switches back to the left hand side as the `Read`
operation has completed and data is available.
This slide shows the low level `runtime.Syscall`
function which is the base for all functions in the os package.
Any time your code results in a call to the operating system, it will go through this function.
The call to `entersyscall`
informs the runtime that this thread is about to block.
This allows the runtime to spin up a new thread which will service other goroutines while this current thread blocked.
This results in relatively few operating system threads per Go process, with the Go runtime taking care of assigning a runnable Goroutine to a free operating system thread.
In the previous section I discussed how goroutines reduce the overhead of managing many, sometimes hundreds of thousands of concurrent threads of execution.
There is another side to the goroutine story, and that is stack management, which leads me to my final topic.
This is a diagram of the memory layout of a process. The key thing we are interested is the location of the heap and the stack.
Traditionally inside the address space of a process, the heap is at the bottom of memory, just above the program (text) and grows upwards.
The stack is located at the top of the virtual address space, and grows downwards.
Because the heap and stack overwriting each other would be catastrophic, the operating system usually arranges to place an area of unwritable memory between the stack and the heap to ensure that if they did collide, the program will abort.
This is called a guard page, and effectively limits the stack size of a process, usually in the order of several megabytes.
We’ve discussed that threads share the same address space, so for each thread, it must have its own stack.
Because it is hard to predict the stack requirements of a particular thread, a large amount of memory is reserved for each thread’s stack along with a guard page.
The hope is that this is more than will ever be needed and the guard page will never be hit.
The downside is that as the number of threads in your program increases, the amount of available address space is reduced.
We’ve seen that the Go runtime schedules a large number of goroutines onto a small number of threads, but what about the stack requirements of those goroutines ?
Instead of using guard pages, the Go compiler inserts a check as part of every function call to check if there is sufficient stack for the function to run. If there is not, the runtime can allocate more stack space.
Because of this check, a goroutines initial stack can be made much smaller, which in turn permits Go programmers to treat goroutines as cheap resources.
This is a slide that shows how stacks are managed in Go 1.2.
When `G`
calls to `H`
there is not enough space for `H`
to run, so the runtime allocates a new stack frame from the heap, then runs `H`
on that new stack segment. When `H`
returns, the stack area is returned to the heap before returning to `G`
.
This method of managing the stack works well in general, but for certain types of code, usually recursive code, it can cause the inner loop of your program to straddle one of these stack boundaries.
For example, in the inner loop of your program, function `G`
may call `H`
many times in a loop,
Each time this will cause a stack split. This is known as the hot split problem.
To solve hot splits, Go 1.3 has adopted a new stack management method.
Instead of adding and removing additional stack segments, if the stack of a goroutine is too small, a new, larger, stack will be allocated.
The old stack’s contents are copied to the new stack, then the goroutine continues with its new larger stack.
After the first call to `H`
the stack will be large enough that the check for available stack space will always succeed.
This resolves the hot split problem.
Values, Inlining, Escape Analysis, Goroutines, and segmented/copying stacks.
These are the five features that I chose to speak about today, but they are by no means the only things that makes Go a fast programming language, just as there more that three reasons that people cite as their reason to learn Go.
As powerful as these five features are individually, they do not exist in isolation.
For example, the way the runtime multiplexes goroutines onto threads would not be nearly as efficient without growable stacks.
Inlining reduces the cost of the stack size check by combining smaller functions into larger ones.
Escape analysis reduces the pressure on the garbage collector by automatically moving allocations from the heap to the stack.
Escape analysis also provides better cache locality.
Without growable stacks, escape analysis might place too much pressure on the stack.
* Thank you to the Gocon organisers for permitting me to speak today
* twitter / web / email details
* thanks to @offbymany, @billkennedy_go, and Minux for their assistance in preparing this talk. |
10,102 | 一款免费且安全的在线 PDF 转换软件 | https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/ | 2018-10-11T15:18:00 | [
"PDF"
] | https://linux.cn/article-10102-1.html | 
我们总在寻找更好用、更高效的解决方案,来让我们的生活更加方便。比方说,在处理 PDF 文档时,你肯定会想拥有一款工具,它能够在任何情形下都快速可靠。在这里,我们想向你推荐 [**EasyPDF**](https://easypdf.com/) —— 一款可以胜任所有场合的在线 PDF 软件。通过大量的测试,我们可以保证:这款工具能够让你的 PDF 文档管理更加容易。
不过,关于 EasyPDF 有一些十分重要的事情,你必须知道。
* EasyPDF 是免费的、匿名的在线 PDF 转换软件。
* 能够将 PDF 文档转换成 Word、Excel、PowerPoint、AutoCAD、JPG、GIF 和文本等格式的文档。
* 能够从 Word、Excel、PowerPoint 等其他格式的文件创建 PDF 文件。
* 能够进行 PDF 文档的合并、分割和压缩。
* 能够识别扫描的 PDF 和图片中的内容。
* 可以从你的设备或者云存储(Google Drive 和 DropBox)中上传文档。
* 可以在 Windows、Linux、Mac 和智能手机上通过浏览器来操作。
* 支持多种语言。
### EasyPDF的用户界面

EasyPDF 最吸引你眼球的就是平滑的用户界面,营造一种整洁的环境,这会让使用者感觉更加舒服。由于网站完全没有一点广告,EasyPDF 的整体使用体验相比以前会好很多。
每种不同类型的转换都有它们专门的菜单,只需要简单地向其中添加文件,你并不需要知道太多知识来进行操作。
许多类似网站没有做好相关的优化,使得在手机上的使用体验并不太友好。然而,EasyPDF 突破了这一个瓶颈。在智能手机上,EasyPDF 几乎可以秒开,并且可以顺畅地操作。你也可以通过 Chrome 的“三点菜单”把 EasyPDF 添加到手机的主屏幕上。

### 特性
除了好看的界面,EasyPDF 还非常易于使用。为了使用它,你 **不需要注册一个账号** 或者**留下一个邮箱**,它是完全匿名的。另外, EasyPDF 也不会对要转换的文件进行数量或者大小的限制,完全不需要安装!酷极了,不是吗?
首先,你需要选择一种想要进行的格式转换,比如,将 PDF 转换成 Word。然后,选择你想要转换的 PDF 文件。你可以通过两种方式来上传文件:直接拖拉或者从设备上的文件夹进行选择。还可以选择从[Google Drive](https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/) 或 [Dropbox](https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/)来上传文件。
选择要进行格式转换的文件后,点击 Convert 按钮开始转换过程。转换过程会在一分钟内完成,你并不需要等待太长时间。如果你还要对其他文件进行格式转换,在接着转换前,不要忘了将前面已经转换完成的文件下载保存。不然的话,你将会丢失前面的文件。

要进行其他类型的格式转换,直接返回到主页。
目前支持的几种格式转换类型如下:
* **PDF to Word** – 将 PDF 文档 转换成 Word 文档
* **PDF 转换成 PowerPoint** – 将 PDF 文档 转换成 PowerPoint 演示讲稿
* **PDF 转换成 Excel** – 将 PDF 文档 转换成 Excel 文档
* **PDF 创建** – 从其他类型的文件(如文本、doc、odt)创建 PDF 文档
* **Word 转换成 PDF** – 将 Word 文档 转换成 PDF 文档
* **JPG 转换成 PDF** – 将 JPG images 转换成 PDF 文档
* **PDF 转换成 AutoCAD** – 将 PDF 文档 转换成 .dwg 格式(DWG 是 CAD 文件的原生的格式)
* **PDF 转换成 Text** – 将 PDF 文档 转换成 Text 文档
* **PDF 分割** – 把 PDF 文件分割成多个部分
* **PDF 合并** – 把多个 PDF 文件合并成一个文件
* **PDF 压缩** – 将 PDF 文档进行压缩
* **PDF 转换成 JPG** – 将 PDF 文档 转换成 JPG 图片
* **PDF 转换成 PNG** – 将 PDF 文档 转换成 PNG 图片
* **PDF 转换成 GIF** – 将 PDF 文档 转换成 GIF 文件
* **在线文字内容识别** – 将扫描的纸质文档转换成能够进行编辑的文件(如,Word、Excel、文本)
想试一试吗?好极了!点击下面的链接,然后开始格式转换吧!
[](https://easypdf.com/)
### 总结
EasyPDF 名符其实,能够让 PDF 管理更加容易。就我测试过的 EasyPDF 服务而言,它提供了**完全免费**的简单易用的转换功能。它十分快速、安全和可靠。你会对它的服务质量感到非常满意,因为它不用支付任何费用,也不用留下像邮箱这样的个人信息。值得一试,也许你会找到你自己更喜欢的 PDF 工具。
好吧,我就说这些。更多的好东西还在后面,请继续关注!
加油!
---
via: <https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[zhousiyu325](https://github.com/zhousiyu325) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,103 | 我应该使用哪些稳定版内核? | http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/ | 2018-10-11T20:43:00 | [
"Linux",
"内核"
] | https://linux.cn/article-10103-1.html |
>
> 本文作者 Greg Kroah-Hartman 是 Linux 稳定版内核的维护负责人。
>
>
>

很多人都问我这样的问题,在他们的产品/设备/笔记本/服务器等上面应该使用什么样的稳定版内核。一直以来,尤其是那些现在已经延长支持时间的内核,都是由我和其他人提供支持,因此,给出这个问题的答案并不是件容易的事情。在这篇文章我将尝试去给出我在这个问题上的看法。当然,你可以任意选用任何一个你想去使用的内核版本,这里只是我的建议。
和以前一样,在这里给出的这些看法只代表我个人的意见。
### 可选择的内核有哪些
下面列出了我建议你应该去使用的内核的列表,从最好的到最差的都有。我在下面将详细介绍,但是如果你只想得到一个结论,它就是你想要的:
建议你使用的内核的分级,从最佳的方案到最差的方案如下:
* 你最喜欢的 Linux 发行版支持的内核
* 最新的稳定版
* 最新的 LTS (长期支持)版本
* 仍然处于维护状态的老的 LTS 版本
绝对不要去使用的内核:
* 不再维护的内核版本
为上面的列表给出具体的数字:今天是 2018 年 8 月 24 日,kernel.org 页面上显示的是这样:

因此,基于上面的列表,那它应该是:
* 4.18.5 是最新的稳定版
* 4.14.67 是最新的 LTS 版本
* 4.9.124、4.4.152、以及 3.16.57 是仍然处于维护状态的老的 LTS 版本
* 4.17.19 和 3.18.119 是过去 60 天内有过发布的 “生命周期终止” 的内核版本,它们仍然保留在 kernel.org 站点上,是为了仍然想去使用它们的那些人。
非常容易,对吗?
Ok,现在我给出这样选择的一些理由:
### Linux 发行版内核
对于大多数 Linux 用户来说,最好的方案就是使用你喜欢的 Linux 发行版的内核。就我本人而言,我比较喜欢基于社区的、内核不断滚动升级的用最新内核的 Linux 发行版,并且它也是由开发者社区来支持的。这种类型的发行版有 Fedora、openSUSE、Arch、Gentoo、CoreOS,以及其它的。
所有这些发行版都使用了上游的最新的稳定版内核,并且确保定期打了需要的 bug 修复补丁。当它拥有了最新的修复之后([记住所有的修复都是安全修复](http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/)),这就是你可以使用的最安全、最好的内核之一。
有些社区的 Linux 发行版需要很长的时间才发行一个新内核版本,但是最终发行的版本和所支持的内核都是非常好的。这些也都非常好用,Debian 和 Ubuntu 就是这样的例子。
如果我没有在这里列出你所喜欢的发行版,并不是意味着它们的内核不够好。查看这些发行版的网站,确保它们的内核包是不断应用最新的安全补丁进行升级过的,那么它就应该是很好的。
许多人好像喜欢旧式、“传统” 模式的发行版,使用 RHEL、SLES、CentOS 或者 “LTS” Ubuntu 发行版。这些发行版挑选一个特定的内核版本,然后使用好几年,甚至几十年。他们反向移植了最新的 bug 修复,有时也有一些内核的新特性,所有的只是追求堂吉诃德式的保持版本号不变而已,尽管他们已经在那个旧的内核版本上做了成千上万的变更。这项工作是一项真正吃力不讨好的工作,分配到这些任务的开发人员做了一些精彩的工作才能实现这些目标。所以如果你希望永远不看到你的内核版本号发生过变化,那么就使用这些发行版。他们通常会为使用而付出一些钱,当发生错误时能够从这些公司得到一些支持,那就是值得的。
所以,你能使用的最好的内核是你可以求助于别人,而别人可以为你提供支持的内核。使用那些支持,你通常都已经为它支付过费用了(对于企业发行版),而这些公司也知道他们职责是什么。
但是,如果你不希望去依赖别人,而是希望你自己管理你的内核,或者你有发行版不支持的硬件,那么你应该去使用最新的稳定版:
### 最新的稳定版
最新的稳定版内核是 Linux 内核开发者社区宣布为“稳定版”的最新的一个内核。大约每三个月,社区会发行一个新的稳定版内核,它包含了最新的硬件支持、最新的性能改进,以及内核各部分最新的 bug 修复。在接下来的三个月里,将要进入下一个内核版本的 bug 修复会被反向移植到这个稳定版内核中,因此,使用这个内核的用户可以确保尽快得到这些修复。
最新的稳定版内核通常也是主流社区发行版所使用的内核,因此你可以确保它是经过测试和拥有大量用户使用的内核。另外,内核社区(全部开发者超过 4000 人)也将帮助这个发行版提供对用户的支持,因为这是他们做的最新的一个内核。
三个月之后,将发行一个新的稳定版内核,你应该更新到它以确保你的内核始终是最新的,因为在新版本发布几周之后,对旧的稳定版内核的支持通常就会停止。
如果你在上一个 LTS (长期支持)版本发布之后购买了最新的硬件,为了能够支持最新的硬件,你几乎是绝对需要去运行这个最新的稳定版内核。对于台式机或新的服务器,最新的稳定版内核通常是推荐运行的内核。
### 最新的 LTS 版本
如果你的硬件需要依赖供应商<ruby> 树外 <rt> out-of-tree </rt></ruby>的补丁才能正常运行(如今几乎所有的嵌入式设备都是如此),那么对你来说,最好的内核版本是最新的 LTS 版本。这个版本拥有所有进入稳定版内核的最新 bug 修复,以及大量的用户测试和使用。
请注意,这个最新的 LTS 版本没有新特性,并且也几乎不会增加对新硬件的支持,因此,如果你需要使用一个新设备,那你的最佳选择就是最新的稳定版内核,而不是最新的 LTS 版内核。
另外,对于这个 LTS 版本的用户来说,他也不用担心每三个月一次的“重大”升级。因此,他们将一直坚持使用这个 LTS 版本,并每年升级一次,这是一个很好的实践。
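顺着这个“每年升级一次”的做法,可以用下面这个小草图快速检查本机内核是否已经落后于最新的 LTS 版本。注意:其中的 4.14.67 是本文写作时的最新 LTS,属于演示用的硬编码值;`sort -V` 的版本号排序来自 GNU coreutils:

```shell
# 演示草图:判断本机内核是否落后于给定的 LTS 版本。
behind_lts() {  # 用法:behind_lts <本机版本> <LTS 版本>
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
running=$(uname -r | cut -d- -f1)   # 去掉发行版后缀,如 4.14.50-generic -> 4.14.50
if behind_lts "$running" "4.14.67"; then
  echo "本机内核 $running 落后于最新 LTS 4.14.67,建议升级"
else
  echo "本机内核 $running 不低于 4.14.67"
fi
```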
使用这个 LTS 版本的不利方面是,你无法得到新内核中的性能提升,除非升级到下一个 LTS 版内核,而那可能是一年以后的事了。对某些工作负载来说,这个差距可能很可观,所以要对此特别留意。
另外,如果你使用的这个内核版本出了问题,你向开发者报告时,他们要你做的第一件事就是确认:“最新的稳定版内核中是否也存在这个问题?”所以你要意识到,它的支持不会像最新的稳定版内核那样容易获得。
现在,如果你坚持使用一个有大量的补丁集的内核,并且不希望升级到每年一次的新 LTS 版内核上,那么,或许你应该去使用老的 LTS 版内核:
### 老的 LTS 版本
传统上,这些版本都由社区提供 2 年时间的支持,有时候当一个重要的 Linux 发行版(像 Debian 或 SLES)依赖它时,这个支持时间会更长。然而在过去一年里,感谢 Google、Linaro、Linaro 成员公司、[kernelci.org](https://kernelci.org/)、以及其它公司在测试和基础设施上的大量投入,使得这些老的 LTS 版内核得到更长时间的支持。
最新的 LTS 版本以及它们将被支持多长时间,这是 2018 年 8 月 24 日显示在 [kernel.org/category/releases.html](https://www.kernel.org/category/releases.html) 上的信息:

Google 和其它公司希望这些内核使用的时间更长的原因是,现在几乎所有 SoC 芯片的开发模型都很疯狂(也有人说是糟糕的)。这些设备在芯片发布前几年就启动了开发周期,而那些代码从来不会合并到上游,最终结果是新发布的芯片基于一个两年前的老内核。这些 SoC 的代码树通常新增超过 200 万行代码,这使得它们成为我开始称之为“类 Linux 内核”的东西。
如果在 2 年后,这个 LTS 版本停止支持,那么来自社区的支持将立即停止,并且没有人对它再进行 bug 修复。这导致了在全球各地数以百万计的非常不安全的设备仍然在使用中,这对任何生态系统来说都不是什么好事情。
由于这种依赖,这些公司现在要求新设备随其所用版本系列的每次发布,不断更新到最新的 LTS 版本(例如每个 4.9.y 版本)。其中一个例子就是新 Android 设备的内核要求:“Android O” 版本(以及现在的 “Android P” 版本)规定了允许使用的最低内核版本,并且 Android 安全更新也可能开始更频繁地要求设备使用这些 “.y” 版本。
我注意到一些生产商现在已经在做这些事情。Sony 是其中一个非常好的例子,在他们的大多数新手机上,通过他们每季度的安全更新版本,将设备更新到最新的 4.4.y 发行版上。另一个很好的例子是一家小型公司 Essential,据我所知,他们持续跟踪 4.4.y 版本的速度比其它公司都快。
当使用这种老的内核时有个重大警告。反向移植到这种内核中的安全修复不如最新的 LTS 版本多,因为使用这些老 LTS 内核的设备,其传统的使用场景要简化得多。这些内核不能用于任何存在<ruby> 不可信用户 <rt> untrusted user </rt></ruby>或虚拟机的“通用计算”场景,因为针对老版本做近期 Spectre 这类修复的能力被大大削弱了,有些分支上甚至完全没有这类修复。
因此,仅在你能够完全控制的设备,或者限定在一个非常强大的安全模型(像 Android 一样强制使用 SELinux 和应用程序隔离)时使用老的 LTS 版本。绝对不要在有不可信用户/程序,或虚拟机的服务器上使用这些老的 LTS 版内核。
此外,社区对这些老 LTS 版内核的支持,即使还有的话,也比对正常的 LTS 版内核少得多。如果你使用这些内核,那你真的是一个人在战斗,你需要有能力独自支持这些内核,或者依赖你的 SoC 供应商为你提供支持(需要注意的是,几乎没有供应商会提供这种支持,因此要特别当心……)。
### 不再维护的内核发行版
更让人惊讶的是,许多公司只是随便选一个内核发行版,然后将它封装进产品里,不假思索地装进数十万台设备中出货。其中一个糟糕的例子是 Lego Mindstorm 系统,不知道出于什么原因,它们的设备上装的是一个随意选取的 -rc 内核发行版。-rc 发行版是开发中的版本,Linux 内核开发者认为它根本不适合任何人使用,更不用说数百万的用户了。
当然,如果你愿意,你可以随意地使用它,但是需要注意的是,可能真的就只有你一个人在使用它。社区不会为你提供支持,因为他们不可能关注所有内核版本的特定问题,因此如果出现错误,你只能独自去解决它。对于一些公司和系统来说,这么做可能还行,但是如果没有为此有所规划,那么要当心因此而产生的“隐性”成本。
### 总结
基于以上原因,下面是一个针对不同类型设备的简短列表,这些设备我推荐适用的内核如下:
* 笔记本 / 台式机:最新的稳定版内核
* 服务器:最新的稳定版内核或最新的 LTS 版内核
* 嵌入式设备:最新的 LTS 版内核或老的 LTS 版内核(如果使用的安全模型非常强大和严格)
至于我,在我的机器上运行什么样的内核?我的笔记本运行的是最新的开发版内核(即 Linus 的开发树)再加上我正在做修改的内核,我的服务器上运行的是最新的稳定版内核。因此,尽管我负责 LTS 发行版的支持工作,但我自己并不使用 LTS 版内核,除了在测试系统上。我依赖于开发版和最新的稳定版内核,以确保我的机器运行的是目前我们所知道的最快的也是最安全的内核版本。
---
via: <http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/>
作者:[Greg Kroah-Hartman](http://kroah.com) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | I get a lot of questions about people asking me about what stable kernel should they be using for their product/device/laptop/server/etc. all the time. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn’t always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use what ever kernel version you want, but here’s what I recommend.
As always, the opinions written here are my own, I speak for no one but myself.
## What kernel to pick
Here’s my short list of what kernel you should use, ranked from best to worst options. I’ll go into the details of all of these below, but if you just want the summary of all of this, here it is:
Hierarchy of what kernel to use, from best solution to worst:
- Supported kernel from your favorite Linux distribution
- Latest stable release
- Latest LTS release
- Older LTS release that is still being maintained
What kernel to never use:
- Unmaintained kernel release
To give numbers to the above, today, as of August 24, 2018, the front page of kernel.org looks like this:
So, based on the above list that would mean that:
- 4.18.5 is the latest stable release
- 4.14.67 is the latest LTS release
- 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
- 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.
Quite easy, right?
Ok, now for some justification for all of this:
## Distribution kernels
The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community based Linux distributions that constantly roll along with the latest updated kernel and it is supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.
All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is one of the most solid and best kernels that you can use when it comes to having the latest fixes ([remember all fixes are security fixes](http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/)) in it.
There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.
Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.
Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This work is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.
So again, the best kernel you can use is one that someone else supports, and you can turn to for help. Use that support, usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.
But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the Latest stable release:
## Latest stable release
This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next 3 months, bugfixes that go into the next kernel release to be made are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.
This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.
After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.
If you have new hardware that is purchased after the last LTS release came out, you almost are guaranteed to have to run this kernel in order to have it supported. So for desktops or new servers, this is usually the recommended kernel to be running.
## Latest LTS release
If your hardware relies on a vendors out-of-tree patch in order to make it work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that goes into the stable releases where applicable, and lots of users test and use it.
Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.
Also this release is common for users that do not like to worry about “major” upgrades happening on them every 3 months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.
The downsides of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.
Also, if you have problems with this kernel release, the first thing that any developer whom you report the issue to is going to ask you to do is, “does the latest stable release have this problem?” So you will need to be aware that support might not be as easy to get as with the latest stable releases.
Now if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:
## Older LTS release
These releases have traditionally been supported by the community for 2 years, sometimes longer when a major distribution relies on this (like Debian or SLES). However, in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, [kernelci.org](https://kernelci.org/), and others, these kernels are starting to be supported for much longer.
Here are the latest LTS releases and how long they will be supported for, as shown at [kernel.org/category/releases.html](https://www.kernel.org/category/releases.html) on August 24, 2018:
The reason that Google and other companies want to have these kernels live longer is due to the crazy (some will say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released, however that code is never merged upstream, resulting in a brand new chip being released based on a 2 year old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels.
If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem.
Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release version (i.e. every 4.9.y release that happens). An example of this is the Android kernel requirements for new devices shipping for the “O” and now “P” releases specified the minimum kernel version allowed, and Android security releases might start to require those “.y” releases to happen more frequently on devices.
I will note that some manufacturers are already doing this today. Sony is one great example of this, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential which has been tracking the 4.4.y releases faster than anyone that I know of.
There is one huge caveat when using a kernel like this. The number of security fixes that get backported are not as great as with the latest LTS release, because the traditional model of the devices that use these older LTS kernels is a much more reduced user model. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to do some of the recent Spectre-type fixes for older releases is greatly reduced, if present at all in some branches.
So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.
Also, support from the community for these older LTS releases is greatly reduced even from the normal LTS releases, if available at all. If you use these kernels, you really are on your own, and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…)
## Unmaintained kernel release
Surprisingly, many companies do just grab a random kernel release, slap it into their product and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems that shipped a random -rc release of a kernel in their device for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.
You are of course free to do this if you want, but note that you really are on your own here. The community can not support you as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. Which for some companies and systems, could be just fine, but be aware of the “hidden” cost this might cause if you do not plan for this up front.
## Summary
So, here’s a short list of different types of devices, and what I would recommend for their kernels:
- Laptop / Desktop: Latest stable release
- Server: Latest stable release or latest LTS release
- Embedded device: Latest LTS release or older LTS release if the security model used is very strong and tight.
And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e. Linus’s development tree) plus whatever kernel changes I am currently working on and my servers run the latest stable release. So despite being in charge of the LTS releases, I don’t run them myself, except in testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time. |
10,104 | 树莓派自建 NAS 云盘之——树莓派搭建网络存储盘 | https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi | 2018-10-12T01:48:11 | [
"NAS",
"树莓派"
] | https://linux.cn/article-10104-1.html |
>
> 跟随这些逐步指导构建你自己的基于树莓派的 NAS 系统。
>
>
>

我将在接下来的这三篇文章中讲述如何搭建一个简便、实用的 NAS 云盘系统。我在这个中心化的存储系统中存储数据,并且让它每晚都会自动的备份增量数据。本系列文章将利用 NFS 文件系统将磁盘挂载到同一网络下的不同设备上,使用 [Nextcloud](https://nextcloud.com/) 来离线访问数据、分享数据。
本文主要讲述将数据盘挂载到远程设备上的软硬件步骤。本系列第二篇文章将讨论数据备份策略、如何添加定时备份数据任务。最后一篇文章中我们将会安装 Nextcloud 软件,用户通过 Nextcloud 提供的 web 界面可以方便的离线或在线访问数据。本系列教程最终搭建的 NAS 云盘支持多用户操作、文件共享等功能,所以你可以通过它方便的分享数据,比如说你可以发送一个加密链接,跟朋友分享你的照片等等。
最终的系统架构如下图所示:

### 硬件
首先需要准备硬件。本文所列方案只是其中一种示例,你也可以按不同的硬件方案进行采购。
最主要的就是[树莓派 3](https://www.raspberrypi.org/products/raspberry-pi-3-model-b/),它带有四核 CPU、1G RAM,以及(比较)快速的网络接口。数据将存储在两个 USB 磁盘驱动器上(这里使用 1TB 磁盘);其中一个磁盘用于每天数据存储,另一个用于数据备份。请务必使用有源 USB 磁盘驱动器或者带附加电源的 USB 集线器,因为树莓派无法为两个 USB 磁盘驱动器供电。
### 软件
在该社区中最活跃的操作系统当属 [Raspbian](https://www.raspbian.org/),便于定制个性化项目。已经有很多 [操作指南](https://www.raspberrypi.org/documentation/installation/installing-images/) 讲述如何在树莓派中安装 Raspbian 系统,所以这里不再赘述。在撰写本文时,最新的官方支持版本是 [Raspbian Stretch](https://www.raspberrypi.org/blog/raspbian-stretch/),它对我来说很好使用。
到此,我将假设你已经配置好了基本的 Raspbian 系统并且可以通过 `ssh` 访问到你的树莓派。
### 准备 USB 磁盘驱动器
为了更好地读写数据,我建议使用 ext4 文件系统格式化这两块磁盘。首先,你必须找出连接到树莓派的是哪些磁盘。磁盘设备位于 `/dev/sd<x>`。使用命令 `fdisk -l`,你可以找到刚刚连接的两块 USB 磁盘驱动器。请注意,一旦执行下面的步骤,USB 磁盘驱动器上的所有数据将会被清除,请做好备份。
```
pi@raspberrypi:~ $ sudo fdisk -l
<...>
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe8900690
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6aa4f598
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 2048 1953521663 1953519616 931.5G 83 Linux
```
由于这些设备是连接到树莓派的仅有的两块 1TB 磁盘,我们可以很容易地辨别出 `/dev/sda` 和 `/dev/sdb` 就是那两个 USB 磁盘驱动器。上面每块磁盘输出末尾的分区表,展示的是执行完以下步骤(创建分区表并格式化磁盘)之后它应有的样子。请为每块 USB 磁盘驱动器重复以下步骤(假设你的设备也是 `/dev/sda` 和 `/dev/sdb`,第二次操作时只要把命令中的 `sda` 替换为 `sdb` 即可)。
首先,删除磁盘分区表,创建一个新的并且只包含一个分区的新分区表。在 `fdisk` 中,你可以使用交互单字母命令来告诉程序你想要执行的操作。只需要在提示符 `Command(m for help):` 后输入相应的字母即可(可以使用 `m` 命令获得更多详细信息):
```
pi@raspberrypi:~ $ sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): o
Created a new DOS disklabel with disk identifier 0x9c310964.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1953525167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167):
Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
Command (m for help): p
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9c310964
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux
Command (m for help): w
The partition table has been altered.
Syncing disks.
```
现在,我们将用 ext4 文件系统格式化新创建的分区 `/dev/sda1`:
```
pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
<...>
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
```
重复以上步骤后,让我们根据用途来对它们建立标签:
```
pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
```
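上面的三步(交互式分区、格式化、打标签)也可以合并成非交互式的形式。下面是一个演示草图,它只打印将要执行的命令而不真正执行,便于先人工核对设备名;其中 `mkfs.ext4 -L` 在格式化时直接设置标签,等价于随后的 `e2label`,而 `sfdisk` 从标准输入读取分区表描述(`type=83` 即单个 Linux 主分区):

```shell
# 演示草图:生成为每块盘分区、格式化并打标签的命令。
# 只打印、不执行;确认设备无误后再手工运行打印出的命令。
plan_disk() {  # 用法:plan_disk <磁盘设备> <标签>
  printf 'echo type=83 | sudo sfdisk %s\n' "$1"
  printf 'sudo mkfs.ext4 -L %s %s1\n' "$2" "$1"
}
plan_disk /dev/sda data
plan_disk /dev/sdb backup
```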
现在,让我们挂载这些磁盘来存储数据。以我运营该系统超过一年的经验来看,当树莓派启动时(例如在断电后),USB 磁盘驱动器并不总是能被挂载,因此我建议使用 autofs 在需要的时候进行挂载。
首先,安装 autofs 并创建挂载点:
```
pi@raspberrypi:~ $ sudo apt install autofs
pi@raspberrypi:~ $ sudo mkdir /nas
```
然后,在 `/etc/auto.master` 中添加下面这行来挂载设备:
```
/nas /etc/auto.usb
```
如果 `/etc/auto.usb` 不存在,则创建它并写入以下内容,然后重启 autofs 服务:
```
data -fstype=ext4,rw :/dev/disk/by-label/data
backup -fstype=ext4,rw :/dev/disk/by-label/backup
pi@raspberrypi3:~ $ sudo service autofs restart
```
现在你应该可以分别访问 `/nas/data` 以及 `/nas/backup` 磁盘了。显然,到此还不会令人太兴奋,因为你只是擦除了磁盘中的数据。不过,你可以执行以下命令来确认设备是否已经挂载成功:
```
pi@raspberrypi3:~ $ cd /nas/data
pi@raspberrypi3:/nas/data $ cd /nas/backup
pi@raspberrypi3:/nas/backup $ mount
<...>
/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
<...>
/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
```
首先进入对应目录以确保 autofs 能够挂载设备。autofs 会跟踪文件系统的访问记录,并随时挂载所需要的设备。然后 `mount` 命令会显示这两个 USB 磁盘驱动器已经挂载到我们想要的位置了。
设置 autofs 的过程容易出错,如果第一次尝试失败,请不要沮丧。你可以上网搜索有关教程。
### 挂载网络存储
现在你已经设置好了基本的网络存储,我们希望将它挂载到远程 Linux 机器上。这里使用网络文件系统(NFS),首先在树莓派上安装 NFS 服务器:
```
pi@raspberrypi:~ $ sudo apt install nfs-kernel-server
```
然后,需要告诉 NFS 服务器公开 `/nas/data` 目录,这是从树莓派外部可以访问的唯一设备(另一个用于备份)。编辑 `/etc/exports` 添加如下内容以允许所有可以访问 NAS 云盘的设备挂载存储:
```
/nas/data *(rw,sync,no_subtree_check)
```
更多有关将挂载限制到单个设备等内容,请参阅 `man exports`。经过上面的配置,任何人只要能访问 NFS 所需的端口(`111` 和 `2049`),就可以挂载数据。我使用了上面的配置,并通过路由器防火墙设置,从外部只开放家庭网络的 22 和 443 端口。这样,只有家庭网络中的设备才能访问 NFS 服务器。
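顺带一提,如果不想用 `*` 把挂载权限开放给任何主机,可以在 `/etc/exports` 中把它换成你的家庭网段。下面是一个假设性的配置片段,其中 `192.168.1.0/24` 是演示用的网段,请按你的实际网络替换:

```
/nas/data 192.168.1.0/24(rw,sync,no_subtree_check)
```

修改之后运行 `sudo exportfs -ra`,让 NFS 服务器重新加载导出表。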
如果要在 Linux 计算机挂载存储,运行以下命令:
```
you@desktop:~ $ sudo mkdir /nas/data
you@desktop:~ $ sudo mount -t nfs <raspberry-pi-hostname-or-ip>:/nas/data /nas/data
```
同样,我建议使用 autofs 来挂载该网络设备。如果需要其他帮助,请参看 [如何使用 Autofs 来挂载 NFS 共享](https://opensource.com/article/18/6/using-autofs-mount-nfs-shares)。
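客户端的 autofs 配置与上文服务器端的做法类似。下面是一个假设性的配置片段,其中主机名 `raspberrypi` 请替换为你的树莓派主机名或 IP:

```
# 在客户端的 /etc/auto.master 中追加:
/nas /etc/auto.nfs

# 新建 /etc/auto.nfs,内容为:
data -fstype=nfs,rw raspberrypi:/nas/data
```

然后执行 `sudo service autofs restart`,之后访问 `/nas/data` 时即会按需挂载。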
现在你可以在远程设备上通过 NFS 系统访问位于你树莓派 NAS 云盘上的数据了。在后面一篇文章中,我将介绍如何使用 `rsync` 自动将数据备份到第二个 USB 磁盘驱动器。你将会学到如何使用 `rsync` 创建增量备份,在进行日常备份的同时还能节省设备空间。
---
via: <https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi>
作者:[Manuel Dewald](https://opensource.com/users/ntlx) 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In this three-part series, I'll explain how to set up a simple, useful NAS (network attached storage) system. I use this kind of setup to store my files on a central system, creating incremental backups automatically every night. To mount the disk on devices that are located in the same network, NFS is installed. To access files offline and share them with friends, I use [Nextcloud](https://nextcloud.com/).
This article will cover the basic setup of software and hardware to mount the data disk on a remote device. In the second article, I will discuss a backup strategy and set up a cron job to create daily backups. In the third and last article, we will install Nextcloud, a tool for easy file access to devices synced offline as well as online using a web interface. It supports multiple users and public file-sharing so you can share pictures with friends, for example, by sending a password-protected link.
The target architecture of our system looks like this:

## Hardware
Let's get started with the hardware you need. You might come up with a different shopping list, so consider this one an example.
The computing power is delivered by a [Raspberry Pi 3](https://www.raspberrypi.org/products/raspberry-pi-3-model-b/), which comes with a quad-core CPU, a gigabyte of RAM, and (somewhat) fast ethernet. Data will be stored on two USB hard drives (I use 1-TB disks); one is used for the everyday traffic, the other is used to store backups. Be sure to use either active USB hard drives or a USB hub with an additional power supply, as the Raspberry Pi will not be able to power two USB drives.
## Software
The operating system with the highest visibility in the community is [Raspbian](https://www.raspbian.org/), which is excellent for custom projects. There are plenty of [guides](https://www.raspberrypi.org/documentation/installation/installing-images/) that explain how to install Raspbian on a Raspberry Pi, so I won't go into details here. The latest official supported version at the time of this writing is [Raspbian Stretch](https://www.raspberrypi.org/blog/raspbian-stretch/), which worked fine for me.
At this point, I will assume you have configured your basic Raspbian and are able to connect to the Raspberry Pi by `ssh`.
## Prepare the USB drives
To achieve good performance reading from and writing to the USB hard drives, I recommend formatting them with ext4. To do so, you must first find out which disks are attached to the Raspberry Pi. You can find the disk devices in `/dev/sd<x>`. Using the command `fdisk -l`, you can find out which two USB drives you just attached. *Please note that all data on the USB drives will be lost as soon as you follow these steps.*
```
pi@raspberrypi:~ $ sudo fdisk -l
<...>
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe8900690
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6aa4f598
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 2048 1953521663 1953519616 931.5G 83 Linux
```
As those devices are the only 1TB disks attached to the Raspberry Pi, we can easily see that `/dev/sda` and `/dev/sdb` are the two USB drives. The partition table at the end of each disk shows how it should look after the following steps, which create the partition table and format the disks. To do this, repeat the following steps for each of the two devices by replacing `sda` with `sdb` the second time (assuming your devices are also listed as `/dev/sda` and `/dev/sdb` in `fdisk`).
First, delete the partition table of the disk and create a new one containing only one partition. In `fdisk`, you can use interactive one-letter commands to tell the program what to do. Simply insert them after the prompt `Command (m for help):` as follows (you can also use the `m` command anytime to get more information):
```
pi@raspberrypi:~ $ sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): o
Created a new DOS disklabel with disk identifier 0x9c310964.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1953525167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953525167, default 1953525167):
Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
Command (m for help): p
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9c310964
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1953525167 1953523120 931.5G 83 Linux
Command (m for help): w
The partition table has been altered.
Syncing disks.
```
Now we will format the newly created partition `/dev/sda1` using the ext4 filesystem:
```
pi@raspberrypi:~ $ sudo mkfs.ext4 /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks: done
<...>
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
```
After repeating the above steps, let's label the new partitions according to their usage in your system:
```
pi@raspberrypi:~ $ sudo e2label /dev/sda1 data
pi@raspberrypi:~ $ sudo e2label /dev/sdb1 backup
```
Now let's get those disks mounted to store some data. My experience, based on running this setup for over a year now, is that USB drives are not always available to get mounted when the Raspberry Pi boots up (for example, after a power outage), so I recommend using autofs to mount them when needed.
First install autofs and create the mount point for the storage:
```
pi@raspberrypi:~ $ sudo apt install autofs
pi@raspberrypi:~ $ sudo mkdir /nas
```
Then mount the devices by adding the following line to `/etc/auto.master`:
`/nas /etc/auto.usb`
Create the file `/etc/auto.usb` if not existing with the following content, and restart the autofs service:
```
data -fstype=ext4,rw :/dev/disk/by-label/data
backup -fstype=ext4,rw :/dev/disk/by-label/backup
```
`pi@raspberrypi3:~ $ sudo service autofs restart`
Now you should be able to access the disks at `/nas/data` and `/nas/backup`, respectively. Clearly, the content will not be too thrilling, as you just erased all the data from the disks. Nevertheless, you should be able to verify the devices are mounted by executing the following commands:
```
pi@raspberrypi3:~ $ cd /nas/data
pi@raspberrypi3:/nas/data $ cd /nas/backup
pi@raspberrypi3:/nas/backup $ mount
<...>
/etc/auto.usb on /nas type autofs (rw,relatime,fd=6,pgrp=463,timeout=300,minproto=5,maxproto=5,indirect)
<...>
/dev/sda1 on /nas/data type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /nas/backup type ext4 (rw,relatime,data=ordered)
```
First move into the directories to make sure autofs mounts the devices. Autofs tracks access to the filesystems and mounts the needed devices on the go. Then the `mount` command shows that the two devices are actually mounted where we wanted them.
Setting up autofs is a bit fault-prone, so do not get frustrated if mounting doesn't work on the first try. Give it another chance, search for more detailed resources (there is plenty of documentation online), or leave a comment.
## Mount network storage
Now that you have set up the basic network storage, we want it to be mounted on a remote Linux machine. We will use the network file system (NFS) for this. First, install the NFS server on the Raspberry Pi:
`pi@raspberrypi:~ $ sudo apt install nfs-kernel-server`
Next we need to tell the NFS server to expose the `/nas/data` directory, which will be the only device accessible from outside the Raspberry Pi (the other one will be used for backups only). To export the directory, edit the file `/etc/exports` and add the following line to allow all devices with access to the NAS to mount your storage:
`/nas/data *(rw,sync,no_subtree_check)`
For more information about restricting the mount to single devices and so on, refer to `man exports`. In the configuration above, anyone will be able to mount your data as long as they have access to the ports needed by NFS: `111` and `2049`. I use the configuration above and allow access to my home network only for ports 22 and 443 using the router's firewall. That way, only devices in the home network can reach the NFS server.
To mount the storage on a Linux computer, run the commands:
```
you@desktop:~ $ sudo mkdir /nas/data
you@desktop:~ $ sudo mount -t nfs <raspberry-pi-hostname-or-ip>:/nas/data /nas/data
```
Again, I recommend using autofs to mount this network device. For extra help, check out [How to use autofs to mount NFS shares](https://opensource.com/article/18/6/using-autofs-mount-nfs-shares).
Now you are able to access files stored on your own RaspberryPi-powered NAS from remote devices using the NFS mount. In the next part of this series, I will cover how to automatically back up your data to the second hard drive using `rsync`. To save space on the device while still doing daily backups, you will learn how to create incremental backups with `rsync`.
10,105 | 在 Ubuntu 18.04 LTS 无头服务器上安装 Oracle VirtualBox | https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/ | 2018-10-12T01:59:20 | [
"VirtualBox"
] | https://linux.cn/article-10105-1.html | 
本教程将一步一步地指导你在 Ubuntu 18.04 LTS 无头服务器上安装 **Oracle VirtualBox**,同时介绍如何使用 **phpVirtualBox** 管理安装在无头服务器上的 **VirtualBox** 实例。**phpVirtualBox** 是 VirtualBox 的一个基于 Web 的前端工具。本教程同样适用于 Debian 和其它 Ubuntu 衍生版本,如 Linux Mint。现在,我们开始。
### 前提条件
在安装 Oracle VirtualBox 之前,我们的 Ubuntu 18.04 LTS 服务器上需要满足如下的前提条件。
首先,逐个运行如下的命令来更新 Ubuntu 服务器。
```
$ sudo apt update
$ sudo apt upgrade
$ sudo apt dist-upgrade
```
接下来,安装如下的必需的包:
```
$ sudo apt install build-essential dkms unzip wget
```
安装完成所有的更新和必需的包之后,重启动 Ubuntu 服务器。
```
$ sudo reboot
```
### 在 Ubuntu 18.04 LTS 服务器上安装 VirtualBox
添加 Oracle VirtualBox 官方仓库。为此你需要去编辑 `/etc/apt/sources.list` 文件:
```
$ sudo nano /etc/apt/sources.list
```
添加下列的行。
在这里,我将使用 Ubuntu 18.04 LTS,因此我添加下列的仓库。
```
deb http://download.virtualbox.org/virtualbox/debian bionic contrib
```

用你的 Ubuntu 发行版的代号替换关键字 ‘bionic’,比如 ‘xenial’、‘vivid’、‘utopic’、‘trusty’、‘raring’、‘quantal’、‘precise’、‘lucid’、‘jessie’、‘wheezy’ 或 ‘squeeze’。
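作为示意,也可以让 shell 帮你拼出这一行(其中 `codename` 的取值只是示例,请替换为你系统的实际代号,例如 `lsb_release -cs` 命令的输出):

```shell
codename="bionic"   # 示例代号;请替换为你的发行版代号(如 lsb_release -cs 的输出)
echo "deb http://download.virtualbox.org/virtualbox/debian ${codename} contrib"
```

把打印出的这一行追加到 `/etc/apt/sources.list` 即可。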
然后,运行下列的命令去添加 Oracle 公钥:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
```
对于 VirtualBox 的老版本,添加如下的公钥:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
```
接下来,使用如下的命令去更新软件源:
```
$ sudo apt update
```
最后,使用如下的命令去安装最新版本的 Oracle VirtualBox:
```
$ sudo apt install virtualbox-5.2
```
### 添加用户到 VirtualBox 组
我们需要去创建并添加我们的系统用户到 `vboxusers` 组中。你也可以单独创建用户,然后将它分配到 `vboxusers` 组中,也可以使用已有的用户。我不想去创建新用户,因此,我添加已存在的用户到这个组中。请注意,如果你为 virtualbox 使用一个单独的用户,那么你必须注销当前用户,并使用那个特定的用户去登入,来完成剩余的步骤。
我使用的是我的用户名 `sk`,因此,我运行如下的命令将它添加到 `vboxusers` 组中。
```
$ sudo usermod -aG vboxusers sk
```
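添加之后,可以用下面的小脚本确认分组是否生效(这里以 `id -un` 取到的当前用户为例;若检查的是其他用户,如 `sk`,请自行替换):

```shell
user=$(id -un)    # 当前用户;可替换为你添加到组中的那个用户名
if id -nG "$user" | tr ' ' '\n' | grep -qx vboxusers; then
    echo "$user 已在 vboxusers 组中"
else
    echo "$user 尚未在 vboxusers 组中(注意:新加入的组要注销并重新登录后才会生效)"
fi
```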
现在,运行如下的命令去检查 virtualbox 内核模块是否已加载。
```
$ sudo systemctl status vboxdrv
```

正如你在上面的截屏中所看到的,vboxdrv 模块已加载,并且是已运行的状态!
对于老的 Ubuntu 版本,运行:
```
$ sudo /etc/init.d/vboxdrv status
```
如果 virtualbox 模块没有启动,运行如下的命令去启动它。
```
$ sudo /etc/init.d/vboxdrv setup
```
很好!我们已经成功安装了 VirtualBox 并启动了 virtualbox 模块。现在,我们继续来安装 Oracle VirtualBox 的扩展包。
### 安装 VirtualBox 扩展包
VirtualBox 扩展包为 VirtualBox 访客系统提供了如下的功能。
* 虚拟的 USB 2.0 (EHCI) 驱动
* VirtualBox 远程桌面协议(VRDP)支持
* 宿主机网络摄像头直通
* Intel PXE 引导 ROM
* 对 Linux 宿主机上的 PCI 直通提供支持
从[这里](https://www.virtualbox.org/wiki/Downloads)为 VirtualBox 5.2.x 下载最新版的扩展包。
```
$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
使用如下的命令去安装扩展包:
```
$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
恭喜!我们已经成功地在 Ubuntu 18.04 LTS 服务器上安装了 Oracle VirtualBox 的扩展包。现在已经可以去部署虚拟机了。参考 [virtualbox 官方指南](http://www.virtualbox.org/manual/ch08.html),在命令行中开始创建和管理虚拟机。
然而,并不是每个人都擅长使用命令行。有些人可能希望在图形界面中去创建和使用虚拟机。不用担心!下面我们为你带来非常好用的 **phpVirtualBox** 工具!
### 关于 phpVirtualBox
**phpVirtualBox** 是一个免费的、基于 web 的 Oracle VirtualBox 前端。它是使用 PHP 开发的。用 phpVirtualBox,我们可以通过 web 浏览器从网络上的任意一个系统上,很轻松地创建、删除、管理和执行虚拟机。
### 在 Ubuntu 18.04 LTS 上安装 phpVirtualBox
由于它是基于 web 的工具,我们需要安装 Apache web 服务器、PHP 和一些 php 模块。
为此,运行如下命令:
```
$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml
```
然后,从 [下载页面](https://github.com/phpvirtualbox/phpvirtualbox/releases) 上下载 phpVirtualBox 5.2.x 版。请注意,由于我们已经安装了 VirtualBox 5.2 版,因此,同样的我们必须去安装 phpVirtualBox 的 5.2 版本。
运行如下的命令去下载它:
```
$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip
```
使用如下命令解压下载的安装包:
```
$ unzip 5.2-0.zip
```
这个命令将解压 5.2.0.zip 文件的内容到一个名为 `phpvirtualbox-5.2-0` 的文件夹中。现在,复制或移动这个文件夹的内容到你的 apache web 服务器的根文件夹中。
```
$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox
```
给 phpvirtualbox 文件夹分配适当的权限。
```
$ sudo chmod 777 /var/www/html/phpvirtualbox/
```
接下来,我们开始配置 phpVirtualBox。
像下面这样复制示例配置文件。
```
$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
```
编辑 phpVirtualBox 的 `config.php` 文件:
```
$ sudo nano /var/www/html/phpvirtualbox/config.php
```
找到下列行,并且用你的系统用户名和密码去替换它(就是前面的“添加用户到 VirtualBox 组中”节中使用的用户名)。
在我的案例中,我的 Ubuntu 系统用户名是 `sk` ,它的密码是 `ubuntu`。
```
var $username = 'sk';
var $password = 'ubuntu';
```

保存并关闭这个文件。
接下来,创建一个名为 `/etc/default/virtualbox` 的新文件:
```
$ sudo nano /etc/default/virtualbox
```
添加下列行。用你自己的系统用户替换 `sk`。
```
VBOXWEB_USER=sk
```
最后,重引导你的系统或重启下列服务去完成整个配置工作。
```
$ sudo systemctl restart vboxweb-service
$ sudo systemctl restart vboxdrv
$ sudo systemctl restart apache2
```
### 调整防火墙允许连接 Apache web 服务器
如果你在 Ubuntu 18.04 LTS 上启用了 UFW,那么在默认情况下,apache web 服务器是不能被任何远程系统访问的。你必须通过下列的步骤让 http 和 https 流量允许通过 UFW。
首先,我们使用如下的命令来查看在策略中已经安装了哪些应用:
```
$ sudo ufw app list
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH
```
正如你所见,Apache 和 OpenSSH 应该已经在 UFW 的策略文件中安装了。
如果你在策略中看到的是 `Apache Full`,说明它允许流量到达 80 和 443 端口:
```
$ sudo ufw app info "Apache Full"
Profile: Apache Full
Title: Web Server (HTTP,HTTPS)
Description: Apache v2 is the next generation of the omnipresent Apache web
server.
Ports:
80,443/tcp
```
现在,运行如下的命令去启用这个策略中的 HTTP 和 HTTPS 的入站流量:
```
$ sudo ufw allow in "Apache Full"
Rules updated
Rules updated (v6)
```
如果你只想允许 http(80)流量,而不允许 https 流量,运行如下的命令:

```
$ sudo ufw allow in "Apache"
```
### 访问 phpVirtualBox 的 Web 控制台
现在,用任意一台远程系统的 web 浏览器来访问。
在地址栏中,输入:`http://IP-address-of-virtualbox-headless-server/phpvirtualbox`。
在我的案例中,我导航到这个链接 – `http://192.168.225.22/phpvirtualbox`。
你将看到如下的屏幕输出。输入 phpVirtualBox 管理员用户凭据。
phpVirtualBox 的默认管理员用户名和密码是 `admin` / `admin`。

恭喜!你现在已经进入了 phpVirtualBox 管理面板了。

现在,你可以从 phpvirtualbox 的管理面板上,开始去创建你的 VM 了。正如我在前面提到的,你可以从同一网络上的任意一台系统上访问 phpVirtualBox 了,而所需要的仅仅是一个 web 浏览器和 phpVirtualBox 的用户名和密码。
如果在你的宿主机系统(不是访客机)的 BIOS 中没有启用虚拟化支持,phpVirtualBox 将只允许你去创建 32 位的访客系统。要安装 64 位的访客系统,你必须在你的宿主机的 BIOS 中启用虚拟化支持。在你的宿主机的 BIOS 中你可以找到一些类似于 “virtualization” 或 “hypervisor” 字眼的选项,然后确保它是启用的。
本文到此结束了,希望能帮到你。如果你找到了更有用的指南,共享出来吧。
还有一大波更好玩的东西即将到来,请继续关注!
---
via: <https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,106 | 三周内构建 JavaScript 全栈 web 应用 | https://medium.com/ladies-storm-hackathons/how-we-built-our-first-full-stack-javascript-web-app-in-three-weeks-8a4668dbd67c | 2018-10-12T10:30:00 | [
"JavaScript",
"开发"
] | https://linux.cn/article-10106-1.html | 
*应用 Align 中,用户主页的控制面板*
### 从构思到部署应用程序的简单分步指南
我在 Grace Hopper Program 为期三个月的编码训练营即将结束,实际上这篇文章的标题有些纰漏 —— 现在我已经构建了 三个 全栈应用:[从零开始的电子商店](https://github.com/limitless-leggings/limitless-leggings)、我个人的 [私人黑客马拉松项目](https://www.youtube.com/watch?v=qyLoInHNjoc),还有这个“三周的结业项目”。这个项目是迄今为止强度最大的 —— 我和另外两名队友共同花费三周的时光 —— 而它也是我在训练营中最引以为豪的成就。这是我目前所构建和涉及的第一款稳定且复杂的应用。
如大多数开发者所知,即使你“知道怎么编写代码”,但真正要制作第一款全栈的应用却是非常困难的。JavaScript 生态系统出奇的大:有包管理器、模块、构建工具、转译器、数据库、库文件,还要对上述所有东西进行选择,难怪如此多的编程新手除了 Codecademy 的教程外,做不了任何东西。这就是为什么我想通过这篇分步教程,带你了解我们团队做出的种种决策,跟着我们的脚步,构建出一款可用的应用。
---
首先,简单的说两句。Align 是一个 web 应用,它使用直观的时间线界面帮助用户管理时间、设定长期目标。我们的技术栈有:用于后端服务的 Firebase 和用于前端的 React。我和我的队友在这个[短视频](https://youtu.be/YacM6uYP2Jo)中解释的更详细。
从第 1 天(我们组建团队的那天)开始,直到最终应用的完成,我们是如何做的?这里是我们采取的步骤纲要:
---
### 第 1 步:构思
第一步是弄清楚我们到底要构建什么东西。过去我在 IBM 当咨询师的时候,曾与企业管理者们一同主持构思研讨会。从那之后,我一直建议小组使用经典的便利贴头脑风暴策略,在会议中我们能够提出尽可能多的想法 —— 即使是“愚蠢的想法” —— 这样每个人的大脑都保持思考,没有人因顾虑而不敢发表意见。

在产生了好几十个应用想法之后,我们把这些想法分类记录下来,以便更好地理解我们大家都感兴趣的主题。在我们小组中,可以清晰地看出大家的想法集中在这些主题上:自我提升、目标设定、怀旧,以及个人发展。我们最后从中确定了具体的想法:做一个用于设置和管理长期目标的控制面板,带有保存记忆的元素,并可以随时间将数据可视化。
从此,我们创作出了一系列用户故事(从一个终端用户的视角,对我们想要拥有的功能进行描述),阐明我们到底想要应用实现什么功能。
### 第 2 步:UX/UI 示意图
接下来,在一块白板上,我们画出了想象中应用的基本视图。结合了用户故事,以便理解在应用基本框架中这些视图将会如何工作。



这些骨架确保我们意见统一,提供了可预见的蓝图,让我们向着计划的方向努力。
### 第 3 步:选好数据结构和数据库类型
到了设计数据结构的时候了。基于我们的示意图和用户故事,我们在 Google doc 中制作了一个清单,它包含我们将会需要的模型和每个模型应该包含的属性。我们知道需要“目标(goal)”模型、“用户(user)”模型、“里程碑(milestone)”模型、“记录(checkin)”模型,还有最后的“资源(resource)”模型和“上传(upload)”模型。

*最初的数据模型结构*
在正式确定好这些模型后,我们需要选择某种*类型*的数据库:“关系型的”还是“非关系型的”(也就是“SQL”还是“NoSQL”)。基于表的 SQL 数据库需要预定义的模式,而基于文档的 NoSQL 数据库则采用动态模式来描述非结构化数据。
对于我们这个情况,用 SQL 型还是 No-SQL 型的数据库没多大影响,由于下列原因,我们最终选择了 Google 的 NoSQL 云数据库 Firebase:
1. 它能够把用户上传的图片保存在云端并存储起来
2. 它包含 WebSocket 功能,能够实时更新
3. 它能够处理用户验证,并且提供简单的 OAuth 功能。
我们确定了数据库后,就要理解数据模型之间的关系了。由于 Firebase 是 NoSQL 类型,我们无法创建联合表或者设置像 “记录 (Checkins)属于目标(Goals)” 的从属关系。因此我们需要弄清楚 JSON 树是什么样的,对象是怎样嵌套的(或者不是嵌套的关系)。最终,我们构建了像这样的模型:

*我们最终为目标(Goal)对象确定的 Firebase 数据格式。注意里程碑(Milestones)和记录(Checkins)对象嵌套在 Goals 中。*
(注意:出于性能考虑,Firebase 更倾向于扁平的、规范化的数据结构,但对于我们这种情况,需要在数据中进行嵌套,因为我们不会从数据库中获取目标(Goal)却不获取相应的子对象里程碑(Milestones)和记录(Checkins)。)
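为了更直观,下面给出一个假想的 JSON 草图来说明这种嵌套(其中的键名和取值都是虚构的示例,并非项目的真实数据):

```
{
  "goals": {
    "goal-1": {
      "title": "Learn guitar",
      "milestones": {
        "milestone-1": { "title": "Play first song", "date": "2017-06-01" }
      },
      "checkins": {
        "checkin-1": { "note": "Practiced 30 minutes", "date": "2017-05-20" }
      }
    }
  }
}
```

读取某个目标时,其下的里程碑和记录会随之一并取出,这正是上文选择嵌套结构的原因。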
### 第 4 步:设置好 Github 和敏捷开发工作流
我们知道,从一开始就保持井然有序、执行敏捷开发对我们有极大好处。我们设置好 Github 上的仓库,我们无法直接将代码合并到主(master)分支,这迫使我们互相审阅代码。

我们还在 [Waffle.io](http://www.waffle.io/) 网站上创建了敏捷开发的面板,它是免费的,很容易集成到 Github。我们在 Waffle 面板上罗列出所有用户故事以及需要我们去修复的 bug。之后当我们开始编码时,我们每个人会为自己正在研究的每一个用户故事创建一个 git 分支,在完成工作后合并这一条条的分支。
我们还开始保持晨会的习惯,讨论前一天的工作和每一个人遇到的阻碍。会议常常决定了当天的流程 —— 哪些人要结对编程,哪些人要独自处理问题。
我认为这种类型的工作流程非常好,因为它让我们能够清楚地找到自己的定位,不用顾虑人际矛盾地高效执行工作。
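上面的分支流程可以用命令草拟如下(仓库、分支名和提交信息都是假设的示例;实际项目中合并通常通过 Github 的 Pull Request 完成):

```shell
cd "$(mktemp -d)"                               # 在临时目录里演示
git init -q demo && cd demo
git config user.email "dev@example.com" && git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"
base=$(git symbolic-ref --short HEAD)           # 记录主分支名
git checkout -q -b feature/goal-form            # 为某个用户故事创建分支
git commit -q --allow-empty -m "feat: goal form"
git checkout -q "$base"                         # 审阅通过后切回主分支并合并
git merge -q --no-edit feature/goal-form
git log --oneline -n 1
```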
### 第 5 步: 选择、下载样板文件
由于 JavaScript 的生态系统过于复杂,我们不打算从最底层开始构建应用。把宝贵的时间花在连通 Webpack 构建脚本和加载器,把符号链接指向项目工程这些事情上感觉很没必要。我的团队选择了 [Firebones](https://github.com/FullstackAcademy/firebones) 框架,因为它恰好适用于我们这个情况,当然还有很多可供选择的开源框架。
### 第 6 步:编写后端 API 路由(或者 Firebase 监听器)
如果我们没有用基于云的数据库,这时就应该开始编写执行数据库查询的后端高速路由了。但是由于我们用的是 Firebase,它本身就是云端的,可以用不同的方式进行代码交互,因此我们只需要设置好一个可用的数据库监听器。
为了确保监听器在工作,我们用代码做出了用于创建目标(Goal)的基本用户表格,实际上当我们完成表格时,就看到数据库执行可更新。数据库就成功连接了!
### 第 7 步:构建 “概念证明”
接下来是为应用创建 “概念证明”,也可以说是实现起来最复杂的基本功能的原型,证明我们的应用 可以 实现。对我们而言,这意味着要找个前端库来实现时间线的渲染,成功连接到 Firebase,显示数据库中的一些种子数据。

*Victory.JS 绘制的简单时间线*
我们找到了基于 D3 构建的响应式库 Victory.JS,花了一天时间阅读文档,用 VictoryLine 和 VictoryScatter 组件实现了非常基础的示例,能够可视化地显示数据库中的数据。实际上,这很有用!我们可以开始构建了。
### 第 8 步:用代码实现功能
最后,是时候构建出应用中那些令人期待的功能了。取决于你要构建的应用,这一重要步骤会有些明显差异。我们对照线框图,开始逐条实现 Waffle 面板上的各个用户故事。常常需要同时接触前端和后端代码(比如,创建一个前端表格同时要连接到数据库)。我们实现了包含以下这些大大小小的功能:
* 能够创建新目标、里程碑和记录
* 能够删除目标,里程碑和记录
* 能够更改时间线的名称,颜色和详细内容
* 能够缩放时间线
* 能够为资源添加链接
* 能够上传视频
* 在达到相关目标的里程碑和记录时弹出资源和视频
* 集成富文本编辑器
* 用户注册、验证、OAuth 验证
* 弹出查看时间线选项
* 加载画面
有各种原因,这一步花了我们很多时间 —— 这一阶段是产生最多优质代码的阶段,每当我们实现了一个功能,就会有更多的事情要完善。
### 第 9 步: 选择并实现设计方案
当我们做出了具备所需功能的 MVP(最简可行产品)后,就可以开始清理,对它进行美化了。像表单、菜单和登录栏等组件,我的团队用的是 Material-UI,不需要很多深层次的设计知识,它也能确保每个组件看上去都圆润光滑、风格一致。

*这是我制作的最喜爱功能之一了。它美得令人心旷神怡。*
我们花了一点时间来选择颜色方案和编写 CSS ,这让我们在编程中休息了一段美妙的时间。期间我们还设计了 logo 图标,还上传了网站图标。
### 第 10 步: 找出并减少 bug
我们一开始就应该使用测试驱动开发的模式,但时间有限,我们那点时间只够用来实现功能。这意味着最后的两天时间我们花在了模拟我们能够想到的每一种用户流,并从应用中找出 bug。

这一步是最不具系统性的,但是我们发现了一堆够我们忙乎的 bug,其中一个是在某些情况下加载动画不会结束的 bug,还有一个是资源组件会完全停止运行的 bug。修复 bug 是件令人恼火的事情,但当软件可以运行时,又特别令人满足。
### 第 11 步:应用上线
最后一步是上线应用,这样才可以让用户使用它!由于我们使用 Firebase 存储数据,因此我们使用了 Firebase Hosting,它很直观也很简单。如果你要选择其它的数据库,你可以使用 Heroku 或者 DigitalOcean。一般来讲,可以在主机网站中查看使用说明。
我们还在 Namecheap.com 上购买了一个便宜的域名,这让我们的应用更加完善,很容易被找到。

---
好了,这就是全部的过程 —— 我们都是这款实用的全栈应用的合作开发者。如果要继续讲,那么第 12 步将会是对用户进行 A/B 测试,这样我们才能更好地理解:实际用户与这款应用交互的方式和他们想在 V2 版本中看到的新功能。
但是,现在我们感到非常开心,不仅是因为成品,还因为我们从这个过程中获得了难以估量的知识和理解。点击 [这里](https://align.fun/) 查看 Align 应用!

*Align 团队:Sara Kladky(左),Melanie Mohn(中),还有我自己。*
---
via: <https://medium.com/ladies-storm-hackathons/how-we-built-our-first-full-stack-javascript-web-app-in-three-weeks-8a4668dbd67c?imm_mid=0f581a&cmp=em-web-na-na-newsltr_20170816>
作者:[Sophia Ciocca](https://medium.com/@sophiaciocca?source=post_header_lockup) 译者:[BriFuture](https://github.com/BriFuture) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # How we built our first full-stack JavaScript web app in three weeks
## A simple step-by-step guide to go from idea to deployed app
My three months of coding bootcamp at the Grace Hopper Program have come to a close, and the title of this article is actually not quite true — I’ve now built *three* full-stack apps: [an e-commerce store from scratch](https://github.com/limitless-leggings/limitless-leggings), a [personal hackathon project](https://www.youtube.com/watch?v=qyLoInHNjoc) of my choice, and finally, a three-week capstone project. That capstone project was by far the most intensive— a three week journey with two teammates — and it is my proudest achievement from bootcamp. It is the first robust, complex app I have ever fully built and designed.
As most developers know, even when you “know how to code”, **it can be really overwhelming to embark on the creation of your first full-stack app.** The JavaScript ecosystem is incredibly vast: with package managers, modules, build tools, transpilers, databases, libraries, and decisions to be made about all of them, it’s no wonder that so many budding coders never build anything beyond Codecademy tutorials. That’s why I want to walk you through a step-by-step guide of the decisions and steps my team took to create our live app, Align.
First, some context. **Align is a web app that uses an intuitive timeline interface to help users set long-term goals and manage them over time.** Our stack includes Firebase for back-end services and React on the front end. My teammates and I explain more in this short video:
So how did we go from Day 1, when we were assigned our teams, to the final live app? Here’s a rundown of the steps we took:
# Step 1: Ideate
The first step was to figure out what exactly we wanted to build. In my past life as a consultant at IBM, I led ideation workshops with corporate leaders. Pulling from that, I suggested to my group the **classic post-it brainstorming strategy**, in which we all scribble out as many ideas as we can — even ‘stupid ones’ — so that people’s brains keep moving and no one avoids voicing ideas out of fear.
After generating a few dozen app ideas, we sorted them into categories to gain a better understanding of what themes we were collectively excited about. In our group, we saw a clear trend towards ideas surrounding self-improvement, goal-setting, nostalgia, and personal development. From that, we eventually honed in on a specific idea: a personal dashboard for setting and managing long-term goals, with elements of memory-keeping and data visualization over time.
From there, we created a set of **user stories **— descriptions of features we wanted to have, from an end-user perspective — to elucidate what exactly we wanted our app to do.
# Step 2: Wireframe UX/UI
Next, on a white board, we drew out the basic views we envisioned in our app. We incorporated our set of user stories to understand how these views would work in a skeletal app framework.
These sketches ensured we were all on the same page, and provided a visual blueprint going forward of what exactly we were all working towards.
# Step 3: Choose a data structure and type of database
It was now time to design our data structure. Based on our wireframes and user stories, we created a list in a Google doc of the models we would need and what attributes each should include. We knew we needed a ‘goal’ model, a ‘user’ model, a ‘milestone’ model, and a ‘checkin’ model, as well as eventually a ‘resource’ model, and an ‘upload’ model.
After informally sketching the models out, we needed to choose a *type* of database: ‘relational’ vs. ‘non-relational’ (a.k.a. ‘SQL’ vs. ‘NoSQL’). **Whereas SQL databases are table-based and need predefined schema, NoSQL databases are document-based and have dynamic schema for unstructured data.**
For our use case, it didn’t matter much whether we used a SQL or a No-SQL database, so we ultimately chose Google’s cloud NoSQL database **Firebase** for other reasons:
- It could hold user image uploads in its cloud storage
- It included WebSocket integration for real-time updating
- It could handle our user authentication and offer easy OAuth integration
Once we chose a database, it was time to understand the **relations **between our data models. Since Firebase is NoSQL, we couldn’t create join tables or set up formal relations like *“Checkins belongTo Goals”*. Instead, we needed to figure out what the JSON tree would look like, and how the objects would be nested (or not). Ultimately, we structured our model like this:
*(Note: Firebase prefers shallow, normalized data structures for efficiency, but for our use case, it made most sense to nest it, since we would never be pulling a Goal from the database without its child Milestones and Checkins.)*
# Step 4: Set up Github and an agile workflow
We knew from the start that staying organized and practicing agile development would serve us well. We set up a **Github repo**, on which **we** **prevented merging to master** to force ourselves to review each other’s code.
We also created an agile board on [Waffle.io](http://www.waffle.io), which is free and has easy integration with Github. On the Waffle board, we listed our user stories as well as bugs we knew we needed to fix. Later, when we started coding, we would each create git branches for the user story we were currently working on, moving it from swim lane to swim lane as we made progress.
We also began holding **“stand-up” meetings** each morning to discuss the previous day’s progress and any blockers each of us were encountering. This meeting often decided the day’s flow — who would be pair programming, and who would work on an issue solo.
I highly recommend some sort of structured workflow like this, as it allowed us to clearly define our priorities and make efficient progress without any interpersonal conflict.
# Step 5: Choose & download a boilerplate
Because the JavaScript ecosystem is so complicated, we opted not to build our app from absolute ground zero. It felt unnecessary to spend valuable time wiring up our Webpack build scripts and loaders, and our symlink that pointed to our project directory. My team chose the [Firebones](https://github.com/FullstackAcademy/firebones) skeleton because it fit our use case, but there are many open-source skeleton options available to choose from.
# Step 6: Write back-end API routes (or Firebase listeners)
If we weren’t using a cloud-based database, this would have been the time to start writing our back-end **Express routes** to make requests to our database. But since we were using Firebase, which is already in the cloud and has a different way of communicating with code, we just worked to set up our first successful **database listener**.
To ensure our listener was working, we coded out a basic user form for creating a Goal, and saw that, indeed, when we filled out the form, our database was live-updating. We were connected!
# Step 7: Build a “Proof Of Concept”
Our next step was to create a “proof of concept” for our app, or a **prototype of the most difficult fundamental features to implement**, demonstrating that our app *could* eventually *exist*. For us, this meant finding a front-end library to satisfactorily render timelines, and connecting it to Firebase successfully to display some seed data in our database.
We found **Victory.JS**, a React library built on D3, and spent a day reading the documentation and putting together a very basic example of a *VictoryLine* component and a *VictoryScatter* component to visually display data from the database. Indeed, it worked! We were ready to build.
# Step 8: Code out the features
Finally, it was time to build out all the exciting functionality of our app. This is a giant step that will obviously vary widely depending on the app you're personally building. We looked at our wireframes and started **coding out the individual user stories** in our Waffle. This often included touching both front-end and back-end code (for example, creating a front-end form and also connecting it to the database). Our features ranged from major to minor, and included things like:
- ability to create new goals, milestones, and checkins
- ability to delete goals, milestones, and checkins
- ability to change a timeline’s name, color, and details
- ability to zoom in on timelines
- ability to add links to resources
- ability to upload media
- ability to bubble up resources and media from milestones and checkins to their associated goals
- rich text editor integration
- user signup / authentication / OAuth
- popover to view timeline options
- loading screens
For obvious reasons, this step took up the bulk of our time — this phase is where most of the meaty code happened, and each time we finished a feature, there were always more to build out!
# Step 9: Choose and code the design scheme
Once we had an MVP of the functionality we desired in our app, it was time to clean it up and make it pretty. My team used **Material-UI **for components like form fields, menus, and login tabs, which ensured everything looked sleek, polished, and coherent without much in-depth design knowledge.
We spent a while choosing a color scheme and editing the CSS, which provided us a nice break from in-the-trenches coding. We also designed a** logo** and uploaded a **favicon**.
# Step 10: Find and squash bugs
While we should have been using **test-driven development **from the beginning, time constraints left us with precious little time for anything but features. This meant that we spent the final two days **simulating every user flow we could think of and hunting our app for bugs.**
This process was not the most systematic, but we found plenty of bugs to keep us busy, including a bug in which the loading screen would last indefinitely in certain situations, and one in which the resource component had stopped working entirely. Fixing bugs can be annoying, but when it finally works, it’s extremely satisfying.
# Step 11: Deploy the live app
The final step was to deploy our app so it would be available live! Because we were using Firebase to store our data, we deployed to **Firebase Hosting**, which was intuitive and simple. If your back end uses a different database, you can use **Heroku** or **DigitalOcean**. Generally, deployment directions are readily available on the hosting site.
We also bought a cheap **domain name** on Namecheap.com to make our app more polished and easy to find.
And that was it — we were suddenly the co-creators of a real live full-stack app that someone could use! If we had a longer runway, Step 12 would have been to run A/B testing on users, so we could better understand how actual users interact with our app and what they’d like to see in a V2.
For now, however, we’re happy with the final product, and with the immeasurable knowledge and understanding we gained throughout this process. Check out Align [here](https://align.fun)!
— — *If you enjoyed this piece, I’d love it if you hit the green heart 💚 so others might stumble upon it. Feel free to check out the **source code** for Align, and **follow me on Github**, as well as my badass team members, **Sara Kladky** and **Melanie Mohn**.* |
10,107 | 解决 Arch Linux 中出现的 “error:failed to commit transaction (conflicting file | https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/ | 2018-10-12T11:10:15 | [
"错误",
"更新"
] | https://linux.cn/article-10107-1.html | 
自从我上次更新 Arch Linux 桌面以来,已经有一个月了。今天我试着更新我的 Arch Linux 系统,然后遇到一个错误 “error:failed to commit transaction (conflicting files) stfl:/usr/lib/libstfl.so.0 exists in filesystem”。看起来是 pacman 无法更新一个已经存在于文件系统上的库(/usr/lib/libstfl.so.0)。如果你也遇到了同样的问题,下面是一个快速解决方案。
### 解决 Arch Linux 中出现的 “error:failed to commit transaction (conflicting files)”
有三种方法。
1、 在升级时忽略导致问题的 stfl 库,并尝试再次更新系统。请参阅此指南以了解 [如何在更新时忽略软件包](https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/)。
2、 使用如下命令覆盖安装这个包:
```
$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0
```
3、 手动删掉 stfl 库,然后再次升级系统。请确保目标库不被其他任何重要的软件包所依赖;可以到 archlinux.org 上搜索,确认是否存在这种依赖关系。
```
$ sudo rm /usr/lib/libstfl.so.0
```
现在,尝试更新系统:
```
$ sudo pacman -Syu
```
我选择第三种方法,直接删除该文件然后升级 Arch Linux 系统。很有效!
希望本文对你有所帮助。还有更多好东西。敬请期待!
干杯!
---
via: <https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[lujun9972](https://github.com/lujun9972) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,108 | 使用 Chrome 扩展将 YouTube 播放器控件添加到 Linux 桌面 | https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.html | 2018-10-12T22:41:00 | [
"视频播放"
] | https://linux.cn/article-10108-1.html | 一个我怀念的 Unity 功能(虽然只使用了一小段时间)是在 Web 浏览器中访问 YouTube 等网站时在 Ubuntu 声音指示器中自动出现播放器控件,因此你可以直接从顶部栏暂停或停止视频,以及浏览视频/歌曲信息和预览。
这个 Unity 功能已经消失很久了,但我正在为 Gnome Shell 寻找类似的东西,然后我遇到了 [browser-mpris2](https://github.com/otommod/browser-mpris2),这是一个为 Google Chrome/Chromium 实现 MPRIS v2 接口的扩展,目前只支持 YouTube,我想可能会有一些读者会喜欢这个。
该扩展还适用于 Opera 和 Vivaldi 等基于 Chromium 的 Web 浏览器。
browser-mpris2 也支持 Firefox,但因为通过 `about:debugging` 加载扩展是临时的,而这是 browser-mpris2 所需要的,因此本文不包括 Firefox 的指导。开发人员[打算](https://github.com/otommod/browser-mpris2/issues/11)将来将扩展提交到 Firefox 插件网站上。
使用此 Chrome 扩展,你可以在支持 MPRIS2 的 applet 中获得 YouTube 媒体播放器控件(播放、暂停、停止和进度调整)。例如,如果你使用 Gnome Shell,你可以将 YouTube 媒体播放器控件作为常驻通知显示,或者使用 Media Player Indicator 之类的扩展来实现此目的。在 Cinnamon 及使用 Cinnamon 桌面的 Linux Mint 中,它出现在声音 Applet 中。
我无法在 Unity 上用它,我不知道为什么。我没有在不同桌面环境(KDE、Xfce、MATE 等)中使用其他支持 MPRIS2 的 applet 尝试此扩展。如果你尝试过,请告诉我们它是否适用于你的桌面环境/支持 MPRIS2 的 applet。
以下是在使用 Gnome Shell 的 Ubuntu 18.04 并装有 Chromium 浏览器的系统上,[媒体播放器指示器](https://extensions.gnome.org/extension/55/media-player-indicator/) 的截图,其中显示了有关当前正在播放的 YouTube 视频的信息及其控件(播放/暂停、停止和进度调整):
[](https://extensions.gnome.org/extension/55/media-player-indicator/)
在 Linux Mint 19 Cinnamon 中使用其默认声音 applet 和 Chromium 浏览器的截图:

### 如何为 Google Chrome/Chromium 安装 browser-mpris2
1、 如果你还没有安装 Git,就先安装它。
在 Debian/Ubuntu/Linux Mint 中,使用此命令安装 git:
```
sudo apt install git
```
2、 下载并安装 [browser-mpris2](https://github.com/otommod/browser-mpris2) 所需文件。
下面的命令克隆了 browser-mpris2 的 Git 仓库并将 chrome-mpris2 安装到 `/usr/local/bin/`(在一个你可以保存 browser-mpris2 文件夹的地方运行 `git clone ...` 命令,由于它会被 Chrome/Chromium 使用,你不能删除它):
```
git clone https://github.com/otommod/browser-mpris2
sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/
```
3、 在基于 Chrome/Chromium 的 Web 浏览器中加载此扩展。

打开 Google Chrome、Chromium、Opera 或 Vivaldi 浏览器,进入 Extensions 页面(在 URL 栏中输入 `chrome://extensions`),在屏幕右上角切换到“开发者模式”。然后选择 “Load Unpacked” 并选择 chrome-mpris2 目录(确保没有选择子文件夹)。
复制扩展 ID 并保存它,因为你以后需要它(它类似于这样:`emngjajgcmeiligomkgpngljimglhhii`,但它会与你的不一样,因此确保使用你计算机中的 ID!)。
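顺带一提,据我所知,Chrome 扩展 ID 通常是 32 个介于 a 和 p 之间的小写字母。下面的小脚本可以对复制到的 ID 做个粗略的格式检查(其中 `ext_id` 用的是上文的示例值,请替换成你自己的):

```shell
ext_id="emngjajgcmeiligomkgpngljimglhhii"   # 示例 ID;请替换为你复制到的实际扩展 ID
if echo "$ext_id" | grep -Eq '^[a-p]{32}$'; then
    echo "扩展 ID 格式看起来正确"
else
    echo "扩展 ID 格式可疑,请重新复制"
fi
```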
4、 运行 `install-chrome.py`(在 `browser-mpris2/native` 文件夹中),指定扩展 id 和 chrome-mpris2 路径。
在终端中使用此命令(将 `REPLACE-THIS-WITH-EXTENSION-ID` 替换为上一步中 `chrome://extensions` 下显示的 browser-mpris2 扩展 ID)安装此扩展:
```
browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2
```
你只需要运行此命令一次,无需将其添加到启动或其他类似的地方。你在 Google Chrome 或 Chromium 浏览器中播放的任何 YouTube 视频都应显示在你正在使用的任何 MPRISv2 applet 中。你无需重启 Web 浏览器。
---
via: <https://www.linuxuprising.com/2018/08/add-youtube-player-controls-to-your.html>
作者:[Logix](https://plus.google.com/118280394805678839070) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # Add YouTube Player Controls To Your Linux Desktop With browser-mpris2 (Chrome Extension)
A Unity feature that I miss (it only actually worked for a short while though) is automatically getting player controls in the Ubuntu Sound Indicator when visiting a website like YouTube in a web browser, so you could pause or stop the video directly from the top bar, as well as see the video / song information and a preview.
This Unity feature is long dead, but I was searching for something similar for Gnome Shell and I came across
**[browser-mpris2](https://github.com/otommod/browser-mpris2)**, an extension that implements a MPRIS v2 interface for Google Chrome / Chromium, which currently only supports YouTube, and I thought there might be some Linux Uprising readers who'll like this. **The extension also works with Chromium-based web browsers like Opera and Vivaldi.**
**browser-mpris2 also supports Firefox**, but since loading extensions via about:debugging is temporary, and this is needed for browser-mpris2, this article doesn't include Firefox instructions. The developer [intends](https://github.com/otommod/browser-mpris2/issues/11) to submit the extension to the Firefox addons website in the future. **Using this Chrome extension you get YouTube media player controls (play, pause, stop and seeking) in MPRIS2-capable applets**. For example, if you use Gnome Shell, you get YouTube media player controls as a permanent notification or, you can use an extension like Media Player Indicator for this. In Cinnamon / Linux Mint with Cinnamon, it shows up in the Sound Applet. **It didn't work for me on Unity**, I'm not sure why. I didn't try this extension with other MPRIS2-capable applets available in various desktop environments (KDE, Xfce, MATE, etc.). If you give it a try, let us know if it works with your desktop environment / MPRIS2 enabled applet.
[Media Player Indicator](https://extensions.gnome.org/extension/55/media-player-indicator/)displaying information about the currently playing YouTube video, along with its controls (play/pause, stop and seeking), on Ubuntu 18.04 with Gnome Shell and Chromium browser:And in Linux Mint 19 Cinnamon with its default sound applet and Chromium browser:
**Related:**[Fix Media Player Indicator Extension Misaligned Controls / Text On Gnome Shell 3.28](https://www.linuxuprising.com/2018/05/fix-media-player-indicator-extension.html)## How to install browser-mpris2 for Google Chrome / Chromium
**1. Install Git if you haven't already.**In Debian / Ubuntu / Linux Mint, use this command to install git:
`sudo apt install git`
**2. Download and install the**[browser-mpris2](https://github.com/otommod/browser-mpris2)required files.The commands below clone the browser-mpris2 Git repository and install the chrome-mpris2 file to
`/usr/local/bin/`
(run the "git clone..." command in a folder where you can continue to keep the browser-mpris2 folder because you can't remove it, as it will be used by Chrome / Chromium):```
git clone https://github.com/otommod/browser-mpris2
sudo install browser-mpris2/native/chrome-mpris2 /usr/local/bin/
```
**3. Load the extension in Chrome / Chromium-based web browsers.**Open Google Chrome, Chromium, Opera or Vivaldi web browsers, go to the Extensions page (enter
`chrome://extensions`
in the URL bar), enable `Developer mode`
using the toggle available in the top right-hand side of the screen, then select `Load Unpacked`
and select the chrome-mpris2 directory (make sure to not select a subfolder).Copy the extension ID and save it because you'll need it later (it's something like:
`emngjajgcmeiligomkgpngljimglhhii`
but it's different for you so make sure to use the ID from your computer!) .**4. Run**`install-chrome.py`
(from the `browser-mpris2/native`
folder), specifying the extension id and chrome-mpris2 path.Use this command in a terminal (replace
`REPLACE-THIS-WITH-EXTENSION-ID`
with the browser-mpris2 extension ID displayed under `chrome://extensions`
from the previous step) to install this extension:`browser-mpris2/native/install-chrome.py REPLACE-THIS-WITH-EXTENSION-ID /usr/local/bin/chrome-mpris2`
You only need to run this command once, there's no need to add it to startup or anything like that. Any YouTube video you play in Google Chrome or Chromium browsers should show up in whatever MPRISv2 applet you're using. There's no need to restart the web browser.
|
10,109 | 如何提交你的第一个 Linux 内核补丁 | https://opensource.com/article/18/8/first-linux-kernel-patch | 2018-10-12T23:23:11 | [
"内核",
"补丁",
"贡献"
] | https://linux.cn/article-10109-1.html |
>
> 学习如何做出你的首个 Linux 内核贡献,以及在开始之前你应该知道什么。
>
>
>

Linux 内核是最大且变动最快的开源项目之一,它由大约 53,600 个文件和近 2,000 万行代码组成。在全世界范围内超过 15,600 位程序员为它贡献代码,Linux 内核项目的维护者使用了如下的协作模型。

本文中,为了便于在 Linux 内核中提交你的第一个贡献,我将为你提供一个必需的快速检查列表,以告诉你在提交补丁时,应该去查看和了解的内容。对于你贡献的第一个补丁的提交流程方面的更多内容,请阅读 [KernelNewbies 的第一个内核补丁教程](https://kernelnewbies.org/FirstKernelPatch)。
### 为内核作贡献
**第 1 步:准备你的系统。**
本文开始之前,假设你的系统已经具备了如下的工具:
* 文本编辑器
* Email 客户端
* 版本控制系统(例如:git)
**第 2 步:下载 Linux 内核代码仓库。**
```
git clone -b staging-testing git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
```
复制你的当前配置:
```
cp /boot/config-`uname -r`* .config
```
**第 3 步:构建/安装你的内核。**
```
make -jX
sudo make modules_install install
```
**第 4 步:创建一个分支并切换到该分支。**
```
git checkout -b first-patch
```
**第 5 步:更新你的内核并指向到最新的代码。**
```
git fetch origin
git rebase origin/staging-testing
```
**第 6 步:在最新的代码库上产生一个变更。**
使用 `make` 命令重新编译,确保你的变更没有错误。
**第 7 步:提交你的变更并创建一个补丁。**
```
git add <file>
git commit -s -v
git format-patch -o /tmp/ HEAD^
```

主题是由冒号分隔的文件名(路径)组成,跟着是使用祈使语态来描述补丁做了什么。空行之后是补丁的描述和强制的 `signed off` 标记,最后是你的补丁的 diff 信息。
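例如,一个符合上述格式的提交信息大致如下(其中的驱动路径、描述和署名均为虚构的演示内容,仅用于说明格式):

```
staging: android: fix a typo in a comment

Fix a spelling mistake reported by checkpatch.pl.

Signed-off-by: Your Name <[email protected]>
```

`git format-patch` 会在这段信息之后自动附上补丁的 diff。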
下面是另外一个简单补丁的示例:

接下来,[从命令行使用邮件](https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients)(在本例子中使用的是 Mutt)发送这个补丁:
```
mutt -H /tmp/0001-<whatever your filename is>
```
使用 [get\_maintainer.pl 脚本](https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl),去了解你的补丁应该发送给哪位维护者的列表。
### 提交你的第一个补丁之前,你应该知道的事情
* [Greg Kroah-Hartman](https://twitter.com/gregkh) 的 [staging tree](https://www.kernel.org/doc/html/v4.15/process/2.Process.html) 是提交你的 [第一个补丁](https://kernelnewbies.org/FirstKernelPatch) 的最好的地方,因为他更容易接受新贡献者的补丁。在你熟悉了补丁发送流程以后,你就可以去发送复杂度更高的子系统专用的补丁。
* 你也可以从纠正代码中的编码风格开始。想学习更多关于这方面的内容,请阅读 [Linux 内核编码风格文档](https://www.kernel.org/doc/html/v4.10/process/coding-style.html)。
* [checkpatch.pl](https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl) 脚本可以帮你检测编码风格方面的错误。例如,运行如下的命令:`perl scripts/checkpatch.pl -f drivers/staging/android/* | less`
* 你可以去补全开发者留下的 TODO 注释中未完成的内容:`find drivers/staging -name TODO`
* [Coccinelle](http://coccinelle.lip6.fr/) 是一个模式匹配的有用工具。
* 阅读 [归档的内核邮件](https://lkml.org/)。
* 为找到灵感,你可以去遍历 [linux.git 日志](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/)去查看以前的作者的提交内容。
* 注意:与补丁审核者通信时不要采用顶部回复(top-posting)!下面就是一个这样的例子:
**错误的方式:**
```
Chris,
Yes let’s schedule the meeting tomorrow, on the second floor.
> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1. Do you want to schedule the meeting tomorrow?
> 2. On which floor in the office?
> 3. What time is suitable to you?
```
(注意那最后一个问题,在回复中无意中落下了。)
**正确的方式:**
```
Chris,
See my answers below...
> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1. Do you want to schedule the meeting tomorrow?
Yes tomorrow is fine.
> 2. On which floor in the office?
Let's keep it on the second floor.
> 3. What time is suitable to you?
09:00 am would be alright.
```
(所有问题全部回复,并且这种方式还保存了阅读的时间。)
* [Eudyptula challenge](http://eudyptula-challenge.org/) 是学习内核基础知识的非常好的方式。
想学习更多内容,阅读 [KernelNewbies 的第一个内核补丁教程](https://kernelnewbies.org/FirstKernelPatch)。之后如果你还有任何问题,可以在 [kernelnewbies 邮件列表](https://kernelnewbies.org/MailingList) 或者 [#kernelnewbies IRC channel](https://kernelnewbies.org/IRC) 中提问。
---
via: <https://opensource.com/article/18/8/first-linux-kernel-patch>
作者:[Sayli Karnik](https://opensource.com/users/sayli) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | One of the biggest—and the fastest moving—open source projects, the Linux kernel, is composed of about 53,600 files and nearly 20-million lines of code. With more than 15,600 programmers contributing to the project worldwide, the Linux kernel follows a maintainer model for collaboration.
In this article, I'll provide a quick checklist of steps involved with making your first kernel contribution, and look at what you should know before submitting a patch. For a more in-depth look at the submission process for contributing your first patch, read the [KernelNewbies First Kernel Patch tutorial](https://kernelnewbies.org/FirstKernelPatch).
## Contributing to the kernel
### Step 1: Prepare your system.
Steps in this article assume you have the following tools on your system:
- [Text editor](https://opensource.com/sitewide-search?search_api_views_fulltext=%22text%20editor%22)
- [Email client](https://opensource.com/business/18/1/desktop-email-clients)
- Version control system (e.g., [git](https://opensource.com/tags/git))
### Step 2: Download the Linux kernel code repository:
```
git clone -b staging-testing git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
```
Copy your current config:

```
cp /boot/config-`uname -r`* .config
```
### Step 3: Build/install your kernel.
```
make -jX
sudo make modules_install install
```
### Step 4: Make a branch and switch to it.
`git checkout -b first-patch`
### Step 5: Update your kernel to point to the latest code base.
```
git fetch origin
git rebase origin/staging-testing
```
### Step 6: Make a change to the code base.
Recompile using `make`
command to ensure that your change does not produce errors.
### Step 7: Commit your changes and create a patch.
```
git add <file>
git commit -s -v
git format-patch -o /tmp/ HEAD^
```
The subject consists of the path to the file name separated by colons, followed by what the patch does in the imperative tense. After a blank line comes the description of the patch and the mandatory signed off tag and, lastly, a diff of your patch.
Here is another example of a simple patch:
Next, send the patch [using email from the command line](https://opensource.com/life/15/8/top-4-open-source-command-line-email-clients) (in this case, Mutt):
`mutt -H /tmp/0001-<whatever your filename is>`
To know the list of maintainers to whom to send the patch, use the [get_maintainer.pl script](https://github.com/torvalds/linux/blob/master/scripts/get_maintainer.pl).
## What to know before submitting your first patch
- [Greg Kroah-Hartman](https://twitter.com/gregkh)'s [staging tree](https://www.kernel.org/doc/html/v4.15/process/2.Process.html) is a good place to submit your [first patch](https://kernelnewbies.org/FirstKernelPatch) as he accepts easy patches from new contributors. When you get familiar with the patch-sending process, you could send subsystem-specific patches with increased complexity.
- You also could start with correcting coding style issues in the code. To learn more, read the [Linux kernel coding style documentation](https://www.kernel.org/doc/html/v4.10/process/coding-style.html).
- The script [checkpatch.pl](https://github.com/torvalds/linux/blob/master/scripts/checkpatch.pl) detects coding style errors for you. For example, run: `perl scripts/checkpatch.pl -f drivers/staging/android/* | less`
- You could complete TODOs left incomplete by developers: `find drivers/staging -name TODO`
- [Coccinelle](http://coccinelle.lip6.fr/) is a helpful tool for pattern matching.
- Read the [kernel mailing archives](https://lkml.org/).
- Go through the [linux.git log](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/) to see commits by previous authors for inspiration.
- Note: Do not top-post to communicate with the reviewer of your patch! Here's an example:
**Wrong way:**

Chris,

*Yes let’s schedule the meeting tomorrow, on the second floor.*

> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1. Do you want to schedule the meeting tomorrow?
> 2. On which floor in the office?
> 3. What time is suitable to you?

(Notice that the last question was unintentionally left unanswered in the reply.)

**Correct way:**

Chris,

See my answers below...

> On Fri, Apr 26, 2013 at 9:25 AM, Chris wrote:
> Hey John, I had some questions:
> 1. Do you want to schedule the meeting tomorrow?

*Yes tomorrow is fine.*

> 2. On which floor in the office?

*Let's keep it on the second floor.*

> 3. What time is suitable to you?

*09:00 am would be alright.*

(All questions were answered, and this way saves reading time.)

- The [Eudyptula challenge](http://eudyptula-challenge.org/) is a great way to learn kernel basics.
To learn more, read the [KernelNewbies First Kernel Patch tutorial](https://kernelnewbies.org/FirstKernelPatch). After that, if you still have any questions, ask on the [kernelnewbies mailing list](https://kernelnewbies.org/MailingList) or in the [#kernelnewbies IRC channel](https://kernelnewbies.org/IRC).
|
10,110 | 如何在 Ubuntu 上安装 pip | https://itsfoss.com/install-pip-ubuntu/ | 2018-10-13T11:00:00 | [
"pip",
"Python"
] | https://linux.cn/article-10110-1.html | **`pip` 是一个命令行工具,允许你安装 Python 编写的软件包。 学习如何在 Ubuntu 上安装 `pip` 以及如何使用它来安装 Python 应用程序。**
有许多方法可以[在 Ubuntu 上安装软件](https://itsfoss.com/how-to-add-remove-programs-in-ubuntu/)。 你可以从软件中心安装应用程序,也可以从下载的 DEB 文件、PPA(LCTT 译注:PPA 即 Personal Package Archives,个人软件包集)、[Snap 软件包](https://itsfoss.com/use-snap-packages-ubuntu-16-04/),也可以使用 [Flatpak](https://itsfoss.com/flatpak-guide/)、使用 [AppImage](https://itsfoss.com/use-appimage-linux/),甚至用旧的源代码安装方式。
还有一种方法可以在 [Ubuntu](https://www.ubuntu.com/) 中安装软件包。 它被称为 `pip`,你可以使用它来安装基于 Python 的应用程序。
### 什么是 pip
[pip](https://en.wikipedia.org/wiki/pip_(package_manager)) 代表 “pip Installs Packages”。 [pip](https://pypi.org/project/pip/) 是一个基于命令行的包管理系统。 用于安装和管理 [Python 语言](https://www.python.org/)编写的软件。
你可以使用 `pip` 来安装 Python 包索引([PyPI](https://pypi.org/))中列出的包。
作为软件开发人员,你可以使用 `pip` 为你自己的 Python 项目安装各种 Python 模块和包。
作为最终用户,你可能需要使用 `pip` 来安装一些 Python 开发的并且可以使用 `pip` 轻松安装的应用程序。 一个这样的例子是 [Stress Terminal](https://itsfoss.com/stress-terminal-ui/) 应用程序,你可以使用 `pip` 轻松安装。
让我们看看如何在 Ubuntu 和其他基于 Ubuntu 的发行版上安装 `pip`。
### 如何在 Ubuntu 上安装 pip

默认情况下,`pip` 未安装在 Ubuntu 上。 你必须首先安装它才能使用。 在 Ubuntu 上安装 `pip` 非常简单。 我马上展示给你。
Ubuntu 18.04 默认安装了 Python 2 和 Python 3。 因此,你应该为两个 Python 版本安装 `pip`。
`pip`,默认情况下是指 Python 2。`pip3` 代表 Python 3 中的 pip。
注意:我在本教程中使用的是 Ubuntu 18.04。 但是这里的教程应该适用于其他版本,如 Ubuntu 16.04、18.10 等。你也可以在基于 Ubuntu 的其他 Linux 发行版上使用相同的命令,如 Linux Mint、Linux Lite、Xubuntu、Kubuntu 等。
#### 为 Python 2 安装 pip
首先,确保已经安装了 Python 2。 在 Ubuntu 上,可以使用以下命令进行验证。
```
python2 --version
```
如果没有错误并且显示了 Python 版本的有效输出,则说明安装了 Python 2。 所以现在你可以使用这个命令为 Python 2 安装 `pip`:
```
sudo apt install python-pip
```
这将安装 `pip` 和它的许多其他依赖项。 安装完成后,请确认你已正确安装了 `pip`。
```
pip --version
```
它应该显示一个版本号,如下所示:
```
pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)
```
这意味着你已经成功在 Ubuntu 上安装了 `pip`。
#### 为 Python 3 安装 pip
你必须确保在 Ubuntu 上安装了 Python 3。 可以使用以下命令检查一下:
```
python3 --version
```
如果显示了像 Python 3.6.6 这样的数字,则说明 Python 3 在你的 Linux 系统上安装好了。
现在,你可以使用以下命令安装 `pip3`:
```
sudo apt install python3-pip
```
你应该使用以下命令验证 `pip3` 是否已正确安装:
```
pip3 --version
```
它应该显示一个这样的数字:
```
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
```
这意味着 `pip3` 已成功安装在你的系统上。
### 如何使用 pip 命令
现在你已经安装了 `pip`,让我们快速看一些基本的 `pip` 命令。 这些命令将帮助你使用 `pip` 命令来搜索、安装和删除 Python 包。
要从 Python 包索引 PyPI 中搜索包,可以使用以下 `pip` 命令:
```
pip search <search_string>
```
例如,如果你搜索“stress”这个词,将会显示名称或描述中包含字符串“stress”的所有包。
```
pip search stress
stress (1.0.0) - A trivial utility for consuming system resources.
s-tui (0.8.2) - Stress Terminal UI stress test and monitoring tool
stressypy (0.0.12) - A simple program for calling stress and/or stress-ng from python
fuzzing (0.3.2) - Tools for stress testing applications.
stressant (0.4.1) - Simple stress-test tool
stressberry (0.1.7) - Stress tests for the Raspberry Pi
mobbage (0.2) - A HTTP stress test and benchmark tool
stresser (0.2.1) - A large-scale stress testing framework.
cyanide (1.3.0) - Celery stress testing and integration test support.
pysle (1.5.7) - An interface to ISLEX, a pronunciation dictionary with stress markings.
ggf (0.3.2) - global geometric factors and corresponding stresses of the optical stretcher
pathod (0.17) - A pathological HTTP/S daemon for testing and stressing clients.
MatPy (1.0) - A toolbox for intelligent material design, and automatic yield stress determination
netblow (0.1.2) - Vendor agnostic network testing framework to stress network failures
russtress (0.1.3) - Package that helps you to put lexical stress in russian text
switchy (0.1.0a1) - A fast FreeSWITCH control library purpose-built on traffic theory and stress testing.
nx4_selenium_test (0.1) - Provides a Python class and apps which monitor and/or stress-test the NoMachine NX4 web interface
physical_dualism (1.0.0) - Python library that approximates the natural frequency from stress via physical dualism, and vice versa.
fsm_effective_stress (1.0.0) - Python library that uses the rheological-dynamical analogy (RDA) to compute damage and effective buckling stress in prismatic shell structures.
processpathway (0.3.11) - A nifty little toolkit to create stress-free, frustrationless image processing pathways from your webcam for computer vision experiments. Or observing your cat.
```
如果要使用 `pip` 安装应用程序,可以按以下方式使用它:
```
pip install <package_name>
```
`pip` 不支持使用 tab 键补全包名,因此包名称需要准确指定。 它将下载所有必需的文件并安装该软件包。
如果要删除通过 `pip` 安装的 Python 包,可以使用 `pip` 中的 `uninstall` 选项。
```
pip uninstall <installed_package_name>
```
你可以在上面的命令中使用 `pip3` 代替 `pip`。
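顺便一提,如果直接运行 `pip`/`pip3` 时提示找不到命令(例如 PATH 配置问题),也可以通过 Python 解释器的 `-m` 参数来调用 pip,效果是一样的:

```shell
python3 -m pip --version
```

它通常会输出与 `pip3 --version` 相同的版本信息。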
我希望这个快速提示可以帮助你在 Ubuntu 上安装 `pip`。 如果你有任何问题或建议,请在下面的评论部分告诉我。
---
via: <https://itsfoss.com/install-pip-ubuntu/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK |
**Summary**: To install PIP on Ubuntu, you should make sure to enable the universe repository and then install the `python3-pip` package like this:

```
sudo add-apt-repository universe
sudo apt install python3-pip
```
There are numerous ways to [install software on Ubuntu](https://itsfoss.com/remove-install-software-ubuntu/). You can install applications from the software center, from downloaded deb files, from PPAs, from [Snap packages](https://itsfoss.com/use-snap-packages-ubuntu-16-04/), [using Flatpak](https://itsfoss.com/flatpak-guide/), using [AppImage](https://itsfoss.com/use-appimage-linux/) and even from the good old source code.
Here’s another way to install packages on [Ubuntu](https://www.ubuntu.com/?ref=itsfoss.com). It’s called PIP and you can use it to install Python-based applications.
## What is pip?
[Pip](https://en.wikipedia.org/wiki/Pip_(package_manager)?ref=itsfoss.com) stands for “Pip Installs Packages”. [Pip](https://pypi.org/project/pip/?ref=itsfoss.com) is a command-line based package management system. It’s used to install and manage software written in the [Python language](https://www.python.org/?ref=itsfoss.com). You can use pip to install packages listed in the Python Package Index ([PyPI](https://pypi.org/?ref=itsfoss.com)).
**As a software developer**, you can use pip to install various Python modules and packages for your own Python projects.
**As an end user**, you may need pip for installing some applications that are developed using Python and can be installed easily using pip. One such example is the [Stress Terminal](https://itsfoss.com/stress-terminal-ui/) application, which you can easily install with pip.
Let’s see how you can install pip on Ubuntu and other Ubuntu-based distributions.
## How to install pip on Ubuntu, Linux Mint and other Ubuntu-based distributions
First, make sure that Python 3 is installed on Ubuntu. To check that, use this command:
`python3 --version`
If it shows you a number like Python 3.x.y, Python 3 is installed on your Linux system.
Now you can install pip3 using the command below:
`sudo apt install python3-pip`
You should verify that pip3 has been installed correctly using this command:
`pip3 --version`
It should show you a number like this:
`pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)`
This means that pip3 is successfully installed on your system.
The `pip` command defaults to pip3 in Ubuntu 20.04 and above.

## Installing Python packages [Recommended Way]
Recently, a change has been done on distributions like Ubuntu 23.04 and upcoming Debian 12, regarding the [installation of python packages](https://itsfoss.com/externally-managed-environment/).
From now on, you should install Python packages either from native repositories, install in a virtual environment or use `pipx`
.
[Externally Managed Environment Error With Pip in UbuntuSeeing an “externally managed environment” error while using Pip in Ubuntu 23.04? Here’s what you can do about this error.](https://itsfoss.com/externally-managed-environment/)
This was implemented to avoid the conflict between OS package managers and Python-specific package management tools like pip. These conflicts include both Python-level API incompatibilities and conflicts over file ownership.
## Using pip commands
Now that you’ve installed pip, let’s quickly see some basic pip commands. These commands will help you use pip commands for searching, installing and removing Python packages.
### Install a package with pip
There are two ways to install a package with PIP. You either install it for the currently logged-in user, or you install system-wide.
If you use `--user`
option, it installs the package for the logged-in user, i.e., you, without needing sudo access. The installed python software is available only for you. Other users on your system (if any) cannot use it.
`pip3 install --user python_package_name`
If you remove the `--user`
option, the package will be installed system-wide, and it will be available for all the users on your system. You’ll need sudo access in this case.
`sudo pip3 install python_package_name`
PIP doesn’t support tab completion by default. So you need to know the exact package name that you want to install. How do you get that? I show that to you in the next section.
### Search for packages in PyPI
To search for packages in the Python Package Index, you can go to their official package search website.
For example, if you search on ‘stress’, it will show all the packages that have the string ‘stress’ in their name or description.
Pip had provided a command line search option, which was disabled due to excessive web traffic issues. So, if you try to use `pip search package-name`
, you will come across an error, as shown in the screenshot below.
`pip search` error

So, use the PyPI website instead, as mentioned above.
### Upgrade Packages installed via pip
To [upgrade packages installed via pip](https://itsfoss.com/upgrade-pip-packages/), use the command below:
`pip3 install --upgrade <package-name>`
### Remove packages installed via pip
If you want to remove a Python package installed via pip, you can use the remove option.
`pip3 uninstall <installed_package_name>`
## Uninstall Pip from Ubuntu
To remove pip from Ubuntu, open a terminal and run:
```
sudo apt remove python3-pip
sudo apt autoremove
```
## Pipx is better! Start using it instead of Pip
Actually, if you want to use Pip for installing Python-based GUI applications, you should use Pipx. It complies with the new Python guidelines.
[Using Pipx](https://itsfoss.com/install-pipx-ubuntu/) is similar to Pip so it should feel familiar.
[Install and Use pipx in Ubuntu & Other LinuxPipx addresses the shortcomings of the popular pip tool. Learn to install and use Pipx in Linux.](https://itsfoss.com/install-pipx-ubuntu/)
I hope you like this tutorial on installing and using Pip on Ubuntu and hopefully on other distributions, too. Let me know if you have questions or suggestions. |
10,111 | ndm:NPM 的桌面 GUI 程序 | https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/ | 2018-10-13T13:42:13 | [
"npm",
"Node.js"
] | https://linux.cn/article-10111-1.html | 
NPM 是 **N**ode **P**ackage **M**anager (node 包管理器)的缩写,它是用于安装 NodeJS 软件包或模块的命令行软件包管理器。我们发布过一个指南描述了如何[使用 NPM 管理 NodeJS 包](https://www.ostechnix.com/manage-nodejs-packages-using-npm/)。你可能已经注意到,使用 Npm 管理 NodeJS 包或模块并不是什么大问题。但是,如果你不习惯用 CLI 的方式,这有一个名为 **NDM** 的桌面 GUI 程序,它可用于管理 NodeJS 程序/模块。 NDM,代表 **N**PM **D**esktop **M**anager (npm 桌面管理器),是 NPM 的自由开源图形前端,它允许我们通过简单图形桌面安装、更新、删除 NodeJS 包。
在这个简短的教程中,我们将了解 Linux 中的 Ndm。
### 安装 NDM
NDM 在 AUR 中可用,因此你可以在 Arch Linux 及其衍生版(如 Antergos 和 Manjaro Linux)上使用任何 AUR 助手程序安装。
使用 [Pacaur](https://www.ostechnix.com/install-pacaur-arch-linux/):
```
$ pacaur -S ndm
```
使用 [Packer](https://www.ostechnix.com/install-packer-arch-linux-2/):
```
$ packer -S ndm
```
使用 [Trizen](https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/):
```
$ trizen -S ndm
```
使用 [Yay](https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/):
```
$ yay -S ndm
```
使用 [Yaourt](https://www.ostechnix.com/install-yaourt-arch-linux/):
```
$ yaourt -S ndm
```
在基于 RHEL 的系统(如 CentOS)上,运行以下命令以安装 NDM。
```
$ echo -e "[fury]\nname=ndm repository\nbaseurl=https://repo.fury.io/720kb/\nenabled=1\ngpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && sudo yum install ndm
```
在 Debian、Ubuntu、Linux Mint:
```
$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm
```
也可以使用 **Linuxbrew** 安装 NDM。首先,按照以下链接中的说明安装 Linuxbrew。
安装 Linuxbrew 后,可以使用以下命令安装 NDM:
```
$ brew update
$ brew install ndm
```
在其他 Linux 发行版上,进入 [NDM 发布页面](https://github.com/720kb/ndm/releases),下载最新版本,自行编译和安装。
### NDM 使用
从菜单或使用应用启动器启动 NDM。这就是 NDM 的默认界面。

在这里你可以本地或全局安装 NodeJS 包/模块。
#### 本地安装 NodeJS 包
要在本地安装软件包,首先通过单击主屏幕上的 “Add projects” 按钮选择项目目录,然后选择要保留项目文件的目录。例如,我选择了一个名为 “demo” 的目录作为我的项目目录。
单击项目目录(即 demo),然后单击 “Add packages” 按钮。

输入要安装的软件包名称,然后单击 “Install” 按钮。

安装后,软件包将列在项目目录下。只需单击该目录即可在本地查看已安装软件包的列表。

同样,你可以创建单独的项目目录并在其中安装 NodeJS 模块。要查看项目中已安装模块的列表,请单击项目目录,右侧将显示软件包。
#### 全局安装 NodeJS 包
要全局安装 NodeJS 包,请单击主界面左侧的 “Globals” 按钮。然后,单击 “Add packages” 按钮,输入包的名称并单击 “Install” 按钮。
#### 管理包
单击任何已安装的包,你将在顶部看到各种选项,例如:
1. 版本(查看已安装的版本),
2. 最新(安装最新版本),
3. 更新(更新当前选定的包),
4. 卸载(删除所选包)等。

NDM 还有两个选项:“Update npm” 用于将 node 包管理器更新到最新可用版本;“Doctor” 则会运行一组检查,以确保你的 npm 安装具备管理包/模块所需的功能。
### 总结
NDM 使安装、更新、删除 NodeJS 包的过程更加容易!你无需记住执行这些任务的命令。NDM 让我们在简单的图形界面中点击几下鼠标即可完成所有操作。对于那些懒得输入命令的人来说,NDM 是管理 NodeJS 包的完美伴侣。
干杯!
---
via: <https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,112 | 树莓派自建 NAS 云盘之——数据自动备份 | https://opensource.com/article/18/8/automate-backups-raspberry-pi | 2018-10-13T14:10:10 | [
"备份",
"NAS"
] | https://linux.cn/article-10112-1.html |
>
> 把你的树莓派变成数据的安全之所。
>
>
>

在《树莓派自建 NAS 云盘》系列的 [第一篇](/article-10104-1.html) 文章中,我们讨论了建立 NAS 的一些基本步骤,添加了两块 1TB 的存储硬盘驱动(一个用于数据存储,一个用于数据备份),并且通过网络文件系统(NFS)将数据存储盘挂载到远程终端上。本文是此系列的第二篇文章,我们将探讨数据自动备份。数据自动备份保证了数据的安全,为硬件损坏后的数据恢复提供便利以及减少了文件误操作带来的不必要的麻烦。

### 备份策略
我们就从为小型 NAS 构想一个备份策略着手开始吧。我建议每天在固定的时间点有计划地备份数据,并把备份时间安排在不会干扰正常使用 NAS 的时候,例如避开有人正在访问 NAS 并写入文件的时间段。举个例子,你可以每天凌晨 2 点进行数据备份。
另外,你还得决定每天的备份需要被保留的时间长短,因为如果没有时间限制,存储空间很快就会被用完。一般每天的备份保留一周便可以,如果数据出了问题,你便可以很方便的从备份中恢复出来原数据。但是如果需要恢复数据到更久之前怎么办?可以将每周一的备份文件保留一个月、每个月的备份保留更长时间。让我们把每月的备份保留一年时间,每一年的备份保留更长时间、例如五年。
这样,五年内在备份盘上产生大量备份:
* 每周 7 个日备份
* 每月 4 个周备份
* 每年 12 个月备份
* 每五年 5 个年备份
你应该还记得,我们搭建的备份盘和数据盘大小相同(每个 1 TB)。如何将不止 10 个 1TB 数据的备份从数据盘存放到只有 1TB 大小的备份盘呢?如果你创建的是完整备份,这显然不可能。因此,你需要创建增量备份,它是每一份备份都基于上一份备份数据而创建的。增量备份方式不会每隔一天就成倍的去占用存储空间,它每天只会增加一点占用空间。
以下是我的情况:我的 NAS 自 2016 年 8 月开始运行,备份盘上有 20 个备份。目前,我在数据盘上存储了 406GB 的文件。我的备份盘用了 726GB。当然,备份盘空间使用率在很大程度上取决于数据的更改频率,但正如你所看到的,增量备份不会占用 20 个完整备份所需的空间。然而,随着时间的推移,1TB 空间也可能不足以进行备份。一旦数据增长接近 1TB 限制(或任何备份盘容量),应该选择更大的备份盘空间并将数据移动转移过去。
### 利用 rsync 进行数据备份
利用 `rsync` 命令行工具可以生成完整备份。
```
pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01
```
这段命令将挂载在 `/nas/data/` 目录下的数据盘中的数据进行了完整的复制备份。备份文件保存在 `/nas/backup/2018-08-01` 目录下。`-a` 参数是以归档模式进行备份,这将会备份所有的元数据,例如文件的修改日期、权限、拥有者以及软连接文件。
现在,你已经在 8 月 1 日创建了完整的初始备份,你将在 8 月 2 日创建第一个增量备份。
```
pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02
```
上面这行代码又创建了一个关于 `/nas/data` 目录中数据的备份。备份路径是 `/nas/backup/2018-08-02`。这里的参数 `--link-dest` 指定了一个备份文件所在的路径。这样,这次备份会与 `/nas/backup/2018-08-01` 的备份进行比对,只备份已经修改过的文件,未做修改的文件将不会被复制,而是创建一个到上一个备份文件中它们的硬链接。
使用备份文件中的硬链接文件时,你一般不会注意到硬链接和初始拷贝之间的差别。它们表现的完全一样,如果删除其中一个硬链接或者文件,其他的依旧存在。你可以把它们看做是同一个文件的两个不同入口。下面就是一个例子:

左侧框是在进行了第二次备份后的原数据状态。中间的方块是昨天的备份。昨天的备份中只有图片 `file1.jpg` 并没有 `file2.txt` 。右侧的框反映了今天的增量备份。增量备份命令创建昨天不存在的 `file2.txt`。由于 `file1.jpg` 自昨天以来没有被修改,所以今天创建了一个硬链接,它不会额外占用磁盘上的空间。
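硬链接的这一特性可以直接在命令行中验证。下面是一个与真实备份无关的最小演示(在 /tmp 下创建文件及其硬链接,并用 `stat` 查看链接计数和 inode 编号):

```shell
mkdir -p /tmp/hardlink-demo
echo "hello" > /tmp/hardlink-demo/file1.txt
# 创建硬链接(-f:若链接已存在则覆盖,便于重复执行)
ln -f /tmp/hardlink-demo/file1.txt /tmp/hardlink-demo/file1-link.txt
stat -c '%h %i' /tmp/hardlink-demo/file1.txt       # 链接计数变为 2
stat -c '%h %i' /tmp/hardlink-demo/file1-link.txt  # 输出与上一行完全相同
```

两条 `stat` 输出的 inode 编号相同,说明它们是同一份数据的两个入口;删除其中一个,另一个仍然存在。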
### 自动化备份
你肯定也不想每天凌晨去输入命令进行数据备份吧。你可以创建一个任务定时去调用下面的脚本让它自动化备份。
```
#!/bin/bash
TODAY=$(date +%Y-%m-%d)
DATADIR=/nas/data/
BACKUPDIR=/nas/backup/
SCRIPTDIR=/nas/data/backup_scripts
LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1)
TODAYPATH=${BACKUPDIR}/${TODAY}
if [[ ! -e ${TODAYPATH} ]]; then
mkdir -p ${TODAYPATH}
fi
rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@
${SCRIPTDIR}/deleteOldBackups.sh
```
第一段代码指定了数据路径、备份路径、脚本路径以及昨天和今天的备份路径。第二段代码调用 `rsync` 命令。最后一段代码执行 `deleteOldBackups.sh` 脚本,它会清除一些过期的没有必要的备份数据。如果不想频繁的调用 `deleteOldBackups.sh`,你也可以手动去执行它。
下面是今天讨论的备份策略的一个简单完整的示例脚本。
```
#!/bin/bash
BACKUPDIR=/nas/backup/
function listYearlyBackups() {
for i in 0 1 2 3 4 5
do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1
done
}
function listMonthlyBackups() {
for i in 0 1 2 3 4 5 6 7 8 9 10 11 12
do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1
done
}
function listWeeklyBackups() {
for i in 0 1 2 3 4
do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")"
done
}
function listDailyBackups() {
for i in 0 1 2 3 4 5 6
do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")"
done
}
function getAllBackups() {
listYearlyBackups
listMonthlyBackups
listWeeklyBackups
listDailyBackups
}
function listUniqueBackups() {
getAllBackups | sort -u
}
function listBackupsToDelete() {
ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")"
}
cd ${BACKUPDIR}
listBackupsToDelete | while read file_to_delete; do
rm -rf ${file_to_delete}
done
```
这段脚本会首先根据你的备份策略列出所有需要保存的备份文件,然后它会删除那些再也不需要了的备份目录。
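脚本中 `listBackupsToDelete` 的关键技巧,是用 `grep -v` 反向过滤掉所有需要保留的目录名。下面用一组虚构的日期单独演示这一过滤逻辑(不会触碰任何真实备份):

```shell
keep="2018-08-01 2018-08-02"                          # 假设:需要保留的备份目录名
all="$(printf '2018-07-30\n2018-08-01\n2018-08-02')"  # 假设:备份盘上现有的目录列表
# 把保留列表拼成 "2018-08-01\|2018-08-02" 形式的正则,再用 grep -v 反选
echo "$all" | grep -v -e "$(echo -n $keep | sed 's/ /\\|/g')"
# 输出:2018-07-30,即应当被删除的目录
```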
下面创建一个定时任务去执行上面这段代码。以 root 用户权限打开 `crontab -e`,输入以下这段命令,它将会创建一个每天凌晨 2 点去执行 `/nas/data/backup_scripts/daily.sh` 的定时任务。
```
0 2 * * * /nas/data/backup_scripts/daily.sh
```
有关创建定时任务请参考 [cron 创建定时任务](https://opensource.com/article/17/11/how-use-cron-linux)。
你也可用下面的方法来加强你的备份策略,以防止备份数据的误删除或者被破坏:

* 当没有备份任务时,卸载你的备份盘或者将它挂载为只读盘;
* 利用远程服务器作为你的备份盘,这样就可以通过互联网同步数据。
本文中备份策略示例是备份一些我觉得有价值的数据,你也可以根据个人需求去修改这些策略。
我将会在 《树莓派自建 NAS 云盘》 系列的第三篇文章中讨论 [Nextcloud](https://nextcloud.com/)。Nextcloud 提供了更方便的方式去访问 NAS 云盘上的数据并且它还提供了离线操作,你还可以在客户端中同步你的数据。
---
via: <https://opensource.com/article/18/8/automate-backups-raspberry-pi>
作者:[Manuel Dewald](https://opensource.com/users/ntlx) 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In the [first part](https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi) of this three-part series using a Raspberry Pi for network-attached storage (NAS), we covered the fundamentals of the NAS setup, attached two 1TB hard drives (one for data and one for backups), and mounted the data drive on a remote device via the network filesystem (NFS). In part two, we will look at automating backups. Automated backups allow you to continually secure your data and recover from a hardware defect or accidental file removal.
## Backup strategy
Let's get started by coming up with with a backup strategy for our small NAS. I recommend creating daily backups of your data and scheduling them for a time they won't interfere with other NAS activities, including when you need to access or store your files. For example, you could trigger the backup activities each day at 2am.
You also need to decide how long you'll keep each backup, since you would quickly run out of storage if you kept each daily backup indefinitely. Keeping your daily backups for one week allows you to travel back into your recent history if you realize something went wrong over the previous seven days. But what if you need something from further in the past? Keeping each Monday backup for a month and one monthly backup for a longer period of time should be sufficient. Let's keep the monthly backups for a year and one backup every year for long-distance time travels, e.g., for the last five years.
This results in a bunch of backups on your backup drive over a five-year period:
- 7 daily backups
- 4 (approx.) weekly backups
- 12 monthly backups
- 5 annual backups
You may recall that your backup drive and your data drive are of equal size (1TB each). How will more than 10 backups of 1TB from your data drive fit onto a 1TB backup disk? If you create full backups, they won't. Instead, you will create incremental backups, reusing the data from the last backup if it didn't change and creating replicas of new or changed files. That way, the backup doesn't double every night, but only grows a little bit depending on the changes that happen to your data over a day.
Here is my situation: My NAS has been running since August 2016, and 20 backups are on the backup drive. Currently, I store 406GB of files on the data drive. The backups take up 726GB on my backup drive. Of course, this depends heavily on your data's change frequency, but as you can see, the incremental backups don't consume as much space as 20 full backups would. Nevertheless, over time the 1TB disk will probably become insufficient for your backups. Once your data grows close to the 1TB limit (or whatever your backup drive capacity), you should choose a bigger backup drive and move your data there.
## Creating backups with rsync
To create a full backup, you can use the rsync command line tool. Here is an example command to create the initial full backup.
`pi@raspberrypi:~ $ rsync -a /nas/data/ /nas/backup/2018-08-01`
This command creates a full replica of all data stored on the data drive, mounted on `/nas/data`
, on the backup drive. There, it will create the folder `2018-08-01`
and create the backup inside it. The `-a`
flag starts rsync in *archive-mode*, which means it preserves all kinds of metadata, like modification dates, permissions, and owners, and copies soft links as soft links.
Now that you have created your full, initial backup as of August 1, on August 2, you will create your first daily incremental backup.
`pi@raspberrypi:~ $ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02`
This command tells rsync to again create a backup of `/nas/data`
. The target directory this time is `/nas/backup/2018-08-02`
. The script also specified the `--link-dest`
option and passed the location of the last backup as an argument. With this option specified, rsync looks at the folder `/nas/backup/2018-08-01`
and checks what data files changed compared to that folder's content. Unchanged files will not be copied, rather they will be hard-linked to their counterparts in yesterday's backup folder.
When using a hard-linked file from a backup, you won't notice any difference between the initial copy and the link. They behave exactly the same, and if you delete either the link or the initial file, the other will still exist. You can imagine them as two equal entry points to the same file. Here is an example:
The left box reflects the state shortly after the second backup. The box in the middle is yesterday's replica. The `file2.txt`
didn't exist yesterday, but the image `file1.jpg`
did and was copied to the backup drive. The box on the right reflects today's incremental backup. The incremental backup command created `file2.txt`
, which didn't exist yesterday. Since `file1.jpg`
didn't change since yesterday, today a hard link is created so it doesn't take much additional space on the disk.
## Automate your backups
You probably don't want to execute your daily backup command by hand at 2am each day. Instead, you can automate your backup by using a script like the following, which you may want to start with a cron job.
```
#!/bin/bash
TODAY=$(date +%Y-%m-%d)
DATADIR=/nas/data/
BACKUPDIR=/nas/backup/
SCRIPTDIR=/nas/data/backup_scripts
LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1)
TODAYPATH=${BACKUPDIR}/${TODAY}
if [[ ! -e ${TODAYPATH} ]]; then
mkdir -p ${TODAYPATH}
fi
rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH} $@
${SCRIPTDIR}/deleteOldBackups.sh
```
The first block calculates the last backup's folder name to use for links and the name of today's backup folder. The second block has the rsync command (as described above). The last block executes a `deleteOldBackups.sh` script. It will clean up the old, unnecessary backups based on the backup strategy outlined above. You could also execute the cleanup script independently from the backup script if you want it to run less frequently.
The following script is an example implementation of the backup strategy in this how-to article.
```
#!/bin/bash
BACKUPDIR=/nas/backup/
function listYearlyBackups() {
for i in 0 1 2 3 4 5
do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1
done
}
function listMonthlyBackups() {
for i in 0 1 2 3 4 5 6 7 8 9 10 11 12
do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1
done
}
function listWeeklyBackups() {
for i in 0 1 2 3 4
do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")"
done
}
function listDailyBackups() {
for i in 0 1 2 3 4 5 6
do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")"
done
}
function getAllBackups() {
listYearlyBackups
listMonthlyBackups
listWeeklyBackups
listDailyBackups
}
function listUniqueBackups() {
getAllBackups | sort -u
}
function listBackupsToDelete() {
ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) |sed "s/ /\\\|/g")"
}
cd ${BACKUPDIR}
listBackupsToDelete | while read file_to_delete; do
rm -rf ${file_to_delete}
done
```
This script will first list all the backups to keep (according to our backup strategy), then it will delete all the backup folders that are not necessary anymore.
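The retention rules that script encodes (keep the last 7 daily backups, the last 5 weekly Monday backups, and the first backup of each of the last 13 months and 6 years) can also be expressed compactly in Python. The sketch below is only an illustration of the same logic, not a replacement for the shell script:

```python
from datetime import date, timedelta

def backups_to_keep(existing, today):
    """Return the subset of backup folder names (YYYY-MM-DD strings) to keep.

    Mirrors the shell script above: first backup of each of the last 6 years
    and 13 months, plus the last 5 Mondays and the last 7 days.
    """
    keep = set()

    # Yearly: first backup found in each of the current and previous 5 years.
    for i in range(6):
        candidates = sorted(n for n in existing
                            if n.startswith(f"{today.year - i:04d}-"))
        if candidates:
            keep.add(candidates[0])

    # Monthly: first backup found in each of the last 13 months.
    year, month = today.year, today.month
    for _ in range(13):
        candidates = sorted(n for n in existing
                            if n.startswith(f"{year:04d}-{month:02d}-"))
        if candidates:
            keep.add(candidates[0])
        month -= 1
        if month == 0:
            year, month = year - 1, 12

    # Weekly: the backups made on each of the last 5 Mondays.
    last_monday = today - timedelta(days=today.weekday())
    for i in range(5):
        keep.add((last_monday - timedelta(weeks=i)).isoformat())

    # Daily: the backups of the last 7 days, including today.
    for i in range(7):
        keep.add((today - timedelta(days=i)).isoformat())

    return {n for n in keep if n in existing}

# Example: 30 consecutive daily backups -- old dailies fall out of the keep set.
existing = {(date(2018, 8, 31) - timedelta(days=i)).isoformat() for i in range(30)}
kept = backups_to_keep(existing, date(2018, 8, 31))
print(sorted(existing - kept))  # the folders the cleanup would delete
```

Writing the rules this way makes them easy to unit-test before trusting them to delete anything.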
To execute the scripts every night to create daily backups, schedule the backup script by running `crontab -e` as the root user. (You need to run it as root to make sure it has permission to read all the files on the data drive, no matter who created them.) Add a line like the following, which starts the script every night at 2am.
`0 2 * * * /nas/data/backup_scripts/daily.sh`
For more information, read about [scheduling tasks with cron](https://opensource.com/article/17/11/how-use-cron-linux).
There are additional things you can do to fortify your backups against accidental removal or damage, including the following:
- Unmount your backup drive or mount it as read-only when no backups are running
- Attach the backup drive to a remote server and sync the files over the internet
This example backup strategy enables you to back up your valuable data to make sure it won't get lost. You can also easily adjust this technique for your personal needs and preferences.
In part three of this series, we will talk about [Nextcloud](https://nextcloud.com/), a convenient way to store and access data on your NAS system that also provides offline access as it synchronizes your data to the client devices.
|
10,113 | Sysget:给主流的包管理器加个前端 | https://www.ostechnix.com/sysget-a-front-end-for-popular-package-managers/ | 2018-10-14T00:42:00 | [
"包管理器"
] | https://linux.cn/article-10113-1.html | 
你是一个喜欢每隔几天尝试 Linux 操作系统的新发行版的发行版收割机吗?如果是这样,我有一些东西对你有用。 尝试 Sysget,这是一个类 Unix 操作系统中的流行软件包管理器的前端。 你不需要学习每个包管理器来执行基本的操作,例如安装、更新、升级和删除包。 你只需要对每个运行在类 Unix 操作系统上的包管理器记住一种语法即可。 Sysget 是包管理器的包装脚本,它是用 C++ 编写的。 源代码可在 GitHub 上免费获得。
使用 Sysget,你可以执行各种基本的包管理操作,包括:
* 安装包,
* 更新包,
* 升级包,
* 搜索包,
* 删除包,
* 删除弃用包,
* 更新数据库,
* 升级系统,
* 清除包管理器缓存。
**给 Linux 学习者的一个重要提示:**
Sysget 不会取代软件包管理器,绝对不适合所有人。如果你是经常切换到新 Linux 操作系统的新手,Sysget 可能会有所帮助。当在不同的 Linux 发行版中使用不同的软件包管理器时,就必须学习安装、更新、升级、搜索和删除软件包的新命令,这时 Sysget 就是帮助<ruby> 发行版收割机 <rt> distro hopper </rt></ruby>(或新 Linux 用户)的包装脚本。
如果你是 Linux 管理员或想要学习 Linux 深层的爱好者,你应该坚持使用你的发行版的软件包管理器并学习如何使用它。
### 安装 Sysget
安装 Sysget 很简单。 转到[发布页面](https://github.com/emilengler/sysget/releases)并下载最新的 Sysget 二进制文件并按如下所示进行安装。 在编写本指南时,Sysget 最新版本为1.2。
```
$ sudo wget -O /usr/local/bin/sysget https://github.com/emilengler/sysget/releases/download/v1.2/sysget
$ sudo mkdir -p /usr/local/share/sysget
$ sudo chmod a+x /usr/local/bin/sysget
```
### 用法
Sysget 命令与 APT 包管理器大致相同,因此它应该适合新手使用。
当你第一次运行 Sysget 时,系统会要求你选择要使用的包管理器。 由于我在 Ubuntu,我选择了 apt-get。

你必须根据正在运行的发行版选择正确的包管理器。 例如,如果你使用的是 Arch Linux,请选择 pacman。 对于 CentOS,请选择 yum。 对于 FreeBSD,请选择 pkg。 当前支持的包管理器列表是:
1. apt-get (Debian)
2. xbps (Void)
3. dnf (Fedora)
4. yum (Enterprise Linux/Legacy Fedora)
5. zypper (OpenSUSE)
6. eopkg (Solus)
7. pacman (Arch)
8. emerge (Gentoo)
9. pkg (FreeBSD)
10. chromebrew (ChromeOS)
11. homebrew (Mac OS)
12. nix (Nix OS)
13. snap (Independent)
14. npm (Javascript, Global)
如果你分配了错误的包管理器,则可以使用以下命令设置新的包管理器:
```
$ sudo sysget set yum
Package manager changed to yum
```
只需确保你选择了本机的包管理器。
现在,你可以像使用本机包管理器一样执行包管理操作。
要安装软件包,例如 Emacs,只需运行:
```
$ sudo sysget install emacs
```
上面的命令将调用本机包管理器(在我的例子中是 “apt-get”)并安装给定的包。

同样,要删除包,只需运行:
```
$ sudo sysget remove emacs
```

更新软件仓库(数据库):
```
$ sudo sysget update
```
搜索特定包:
```
$ sudo sysget search emacs
```
升级单个包:
```
$ sudo sysget upgrade emacs
```
升级所有包:
```
$ sudo sysget upgrade
```
移除废弃的包:
```
$ sudo sysget autoremove
```
清理包管理器的缓存:
```
$ sudo sysget clean
```
有关更多详细信息,请参阅帮助部分:
```
$ sysget help
Help of sysget
sysget [OPTION] [ARGUMENT]
search [query] search for a package in the resporitories
install [package] install a package from the repos
remove [package] removes a package
autoremove removes not needed packages (orphans)
update update the database
upgrade do a system upgrade
upgrade [package] upgrade a specific package
clean clean the download cache
set [NEW MANAGER] set a new package manager
```
请记住,不同 Linux 发行版中的所有包管理器的 Sysget 语法都是相同的。 你不需要记住每个包管理器的命令。
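像 Sysget 这样的包装脚本,其核心思路只是把一套统一的子命令映射到各个原生包管理器的对应命令上。下面是一个极简的 Python 示意(这是假设性的概念演示,并非 Sysget 的真实实现,映射表也只列出了部分包管理器和操作):

```python
# 一个极简的包管理器前端示意:把统一的子命令翻译成原生命令行。
# 注意:这只是概念演示,映射表并不完整,也未覆盖 Sysget 支持的全部操作。
COMMAND_MAP = {
    "apt-get": {"install": ["apt-get", "install"],
                "remove":  ["apt-get", "remove"],
                "update":  ["apt-get", "update"]},
    "dnf":     {"install": ["dnf", "install"],
                "remove":  ["dnf", "remove"],
                "update":  ["dnf", "check-update"]},
    "pacman":  {"install": ["pacman", "-S"],
                "remove":  ["pacman", "-R"],
                "update":  ["pacman", "-Sy"]},
}

def translate(manager, action, packages=()):
    """把统一语法 (action, packages) 翻译成原生包管理器的命令行参数列表。"""
    if manager not in COMMAND_MAP:
        raise ValueError(f"不支持的包管理器: {manager}")
    if action not in COMMAND_MAP[manager]:
        raise ValueError(f"不支持的操作: {action}")
    return COMMAND_MAP[manager][action] + list(packages)

# 同样的 "install emacs",在不同发行版上展开为不同的原生命令:
for pm in ("apt-get", "dnf", "pacman"):
    print(pm, "->", " ".join(translate(pm, "install", ["emacs"])))
```

真实的 Sysget 用 C++ 实现并会实际调用这些原生命令,但“查表翻译”这一思路是一样的。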
同样,我必须告诉你 Sysget 不是包管理器的替代品。 它只是类 Unix 系统中流行的包管理器的包装器,它只执行基本的包管理操作。
Sysget 对于不想去学习不同包管理器的新命令的新手和发行版收割机用户可能有些用处。 如果你有兴趣,试一试,看看它是否有帮助。
而且,这就是本次所有的内容了。 更多干货即将到来。 敬请关注!
祝快乐!
---
via: <https://www.ostechnix.com/sysget-a-front-end-for-popular-package-managers/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,114 | Linux 系统上交换空间的介绍 | https://opensource.com/article/18/9/swap-space-linux-systems | 2018-10-14T11:36:34 | [
"交换",
"swap"
] | https://linux.cn/article-10114-1.html |
>
> 学习如何修改你的系统上的交换空间的容量,以及你到底需要多大的交换空间。
>
>
>

当今无论什么操作系统<ruby> 交换 <rt> Swap </rt></ruby>空间是非常常见的。Linux 使用交换空间来增加主机可用的虚拟内存。它可以在常规文件或逻辑卷上使用一个或多个专用交换分区或交换文件。
典型计算机中有两种基本类型的内存。第一种类型,随机存取存储器 (RAM),用于存储计算机使用的数据和程序。只有程序和数据存储在 RAM 中,计算机才能使用它们。随机存储器是易失性存储器;也就是说,如果计算机关闭了,存储在 RAM 中的数据就会丢失。
硬盘是用于长期存储数据和程序的磁性介质。该磁介质可以很好的保存数据;即使计算机断电,存储在磁盘上的数据也会保留下来。CPU(中央处理器)不能直接访问硬盘上的程序和数据;它们必须首先复制到 RAM 中,RAM 是 CPU 访问代码指令和操作数据的地方。在引导过程中,计算机将特定的操作系统程序(如内核、init 或 systemd)以及硬盘上的数据复制到 RAM 中,在 RAM 中,计算机的处理器 CPU 可以直接访问这些数据。
### 交换空间
交换空间是现代 Linux 系统中的第二种内存类型。交换空间的主要功能是当全部的 RAM 被占用并且需要更多内存时,用磁盘空间代替 RAM 内存。
例如,假设你有一个 8GB RAM 的计算机。如果你启动的程序没有填满 RAM,一切都好,不需要交换。假设你在处理电子表格,当添加更多的行时,你电子表格会增长,加上所有正在运行的程序,将会占用全部的 RAM 。如果这时没有可用的交换空间,你将不得不停止处理电子表格,直到关闭一些其他程序来释放一些 RAM 。
内核使用一个内存管理程序来检测最近没有使用的内存块(内存页)。内存管理程序将这些相对不经常使用的内存页交换到硬盘上专门指定用于“分页”或交换的特殊分区。这会释放 RAM,为输入电子表格更多数据腾出了空间。那些换出到硬盘的内存页面被内核的内存管理代码跟踪,如果需要,可以被分页回 RAM。
Linux 计算机中的内存总量是 RAM 加上交换空间,这个总量被称为虚拟内存。
### Linux 交换分区类型
Linux 提供了两种类型的交换空间。默认情况下,大多数 Linux 在安装时都会创建一个交换分区,但是也可以使用一个特殊配置的文件作为交换文件。交换分区顾名思义就是一个标准磁盘分区,由 `mkswap` 命令指定交换空间。
如果没有可用磁盘空间来创建新的交换分区,或者卷组中没有空间为交换空间创建逻辑卷,则可以使用交换文件。这只是一个创建好并预分配指定大小的常规文件。然后运行 `mkswap` 命令将其配置为交换空间。除非绝对必要,否则我不建议使用文件来做交换空间。(LCTT 译注:Ubuntu 近来的版本采用了交换文件而非交换空间,所以我对于这种说法保留看法)
### 频繁交换
当总虚拟内存(RAM 和交换空间)变得快满时,可能会发生频繁交换。系统花了太多时间在交换空间和 RAM 之间做内存块的页面切换,以至于几乎没有时间用于实际工作。这种情况的典型症状是:系统变得缓慢或完全无反应,硬盘指示灯几乎持续亮起。
如果你还能设法执行像 `free` 这样显示 CPU 负载和内存使用情况的命令,你会发现 CPU 负载非常高,可能达到系统中 CPU 内核数量的 30 到 40 倍。另一个症状是 RAM 和交换空间几乎都被完全分配了。
事后查看 SAR(系统活动报告)数据也可以看到这些症状。我在我使用的每个系统上都安装了 SAR,并将这些数据用于事后分析。
### 交换空间的正确大小是多少?
许多年前,硬盘上分配给交换空间大小的经验法则是计算机上 RAM 的两倍(当然,这是大多数计算机的 RAM 以 KB 或 MB 为单位的时候)。因此,如果一台计算机有 64KB 的 RAM,就应该分配 128KB 的交换分区。该规则考虑到了当时 RAM 通常非常小这一事实,而且分配超过 2 倍 RAM 的交换空间并不能提高性能。如果交换空间超过 RAM 的两倍,大多数系统花在频繁交换上的时间反而会多于实际执行有用工作的时间。
RAM 现在已经很便宜了,如今大多数计算机的 RAM 都达到了几十亿字节。我的大多数新电脑至少有 8GB 内存,一台有 32GB 内存,我的主工作站有 64GB 内存。我的旧电脑有 4 到 8GB 的内存。
当操作具有大量 RAM 的计算机时,交换空间的限制性能系数远低于 2 倍。[Fedora 28 在线安装指南](https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/) 给出了目前关于交换空间分配的主流看法。下面引用了该文档中的相关讨论和推荐值表格。
下表根据系统中的 RAM 大小以及是否有足够的内存让系统休眠,提供了交换分区的推荐大小。建议的交换分区大小是在安装过程中自动建立的。但是,为了满足系统休眠,您需要在自定义分区阶段编辑交换空间。
表 1: Fedora 28 文档中推荐的系统交换空间
| **系统内存大小** | **推荐的交换空间** | **推荐的交换空间大小(支持休眠模式)** |
| --- | --- | --- |
| 小于 2 GB | 2 倍 RAM | 3 倍 RAM |
| 2 GB - 8 GB | 等于 RAM 大小 | 2 倍 RAM |
| 8 GB - 64 GB | 0.5 倍 RAM | 1.5 倍 RAM |
| 大于 64 GB | 工作量相关 | 不建议休眠模式 |
在上面列出的各个范围的边界处(例如,系统 RAM 为 2GB、8GB 或 64GB 时),请根据所选的交换空间大小和是否需要支持休眠酌情处理。如果你的系统资源允许,增加交换空间可能会带来更好的性能。
当然,大多数 Linux 管理员对多大的交换空间量有自己的想法。下面的表2 包含了基于我在多种环境中的个人经历所做出的建议。这些可能不适合你,但是和表 1 一样,它们可能对你有所帮助。
表 2: 作者推荐的系统交换空间
| RAM 大小 | 推荐的交换空间 |
| --- | --- |
| ≤ 2GB | 2X RAM |
| 2GB – 8GB | = RAM |
| >8GB | 8GB |
这两个表的共同点是:随着 RAM 数量的增加,超过某一点之后增加更多交换空间只会导致在交换空间被填满之前就发生频繁交换。按照这些建议,如果虚拟内存仍然不足,应该尽可能添加更多 RAM,而不是增加更多交换空间。与所有影响系统性能的建议一样,请采用最适合你具体环境的做法。根据 Linux 环境中的实际情况进行测试和调整是需要时间和精力的。
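作为参考,表 2 中作者的推荐规则可以直接写成一个小函数。下面的 Python 片段只是对该表的直译,便于在脚本中复用;边界取值遵循表中的写法(≤ 2GB 取 2 倍 RAM,2GB 到 8GB 之间取等量,大于 8GB 固定为 8GB):

```python
def recommended_swap_gb(ram_gb):
    """根据本文表 2 的建议,返回推荐的交换空间大小(GB)。"""
    if ram_gb <= 2:
        return 2 * ram_gb      # <= 2GB:2 倍 RAM
    if ram_gb <= 8:
        return ram_gb          # 2GB - 8GB:与 RAM 等量
    return 8                   # > 8GB:固定 8GB

for ram in (1, 2, 4, 8, 16, 64):
    print(f"RAM {ram:>2} GB -> 推荐交换空间 {recommended_swap_gb(ram)} GB")
```

如果你更认同表 1(Fedora 文档)的规则,也可以按同样的方式改写这个函数。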
### 向非 LVM 磁盘环境添加更多交换空间
对于已安装 Linux 的主机,随着对交换空间的需求不断变化,有时有必要修改系统定义的交换空间的大小。此过程可用于任何需要增加交换空间的一般情况。它假设有足够的可用磁盘空间,并且磁盘分区为“原始的” EXT4 分区和交换分区,而不是使用逻辑卷管理(LVM)。
基本步骤很简单:
1. 关闭现有的交换空间。
2. 创建所需大小的新交换分区。
3. 重读分区表。
4. 将分区配置为交换空间。
5. 添加新分区到 `/etc/fstab`。
6. 打开交换空间。
应该不需要重新启动机器。
为了安全起见,在关闭交换空间前,至少你应该确保没有应用程序在运行,也没有交换空间在使用。`free` 或 `top` 命令可以告诉你交换空间是否在使用中。为了更安全,您可以恢复到运行级别 1 或单用户模式。
使用关闭所有交换空间的命令关闭交换分区:
```
swapoff -a
```
现在查看硬盘上的现有分区。
```
fdisk -l
```
这将显示每个驱动器上的分区表。按编号标识当前的交换分区。
使用以下命令在交互模式下启动 `fdisk`:
```
fdisk /dev/<device name>
```
例如:
```
fdisk /dev/sda
```
此时,`fdisk` 是交互方式的,只在指定的磁盘驱动器上进行操作。
使用 `fdisk` 的 `p` 子命令验证磁盘上是否有足够的可用空间来创建新的交换分区。硬盘上的空间以 512 字节的块以及起始和结束柱面编号的形式显示,因此您可能需要做一些计算来确定分配分区之间和末尾的可用空间。
使用 `n` 子命令创建新的交换分区。`fdisk` 会问你开始柱面。默认情况下,它选择编号最低的可用柱面。如果你想改变这一点,输入开始柱面的编号。
`fdisk` 命令允许你以多种格式输入分区的大小,包括最后一个柱面号或字节、KB 或 MB 的大小。例如,键入 4000M ,这将在新分区上提供大约 4GB 的空间,然后按回车键。
使用 `p` 子命令来验证分区是否按照指定的方式创建的。请注意,除非使用结束柱面编号,否则分区可能与你指定的不完全相同。`fdisk` 命令只能在整个柱面上增量的分配磁盘空间,因此你的分区可能比你指定的稍小或稍大。如果分区不是您想要的,你可以删除它并重新创建它。
现在需要指定新分区是交换分区。子命令 `t` 允许你指定分区的类型。所以输入 `t`,指定分区号,当它要求输入十六进制分区类型时,输入 `82`(这是 Linux 交换分区类型),然后按回车键。
当你对创建的分区感到满意时,使用 `w` 子命令将新的分区表写入磁盘。`fdisk` 程序将退出,并在完成修改后的分区表的编写后返回命令提示符。当 `fdisk` 完成写入新分区表时,会收到以下消息:
```
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
```
此时,你可以使用 `partprobe` 命令强制内核重新读取分区表,这样就不需要重新启动机器。
```
partprobe
```
使用命令 `fdisk -l` 列出分区,新交换分区应该在列出的分区中。确保新的分区类型是 “Linux swap”。
需要修改 `/etc/fstab` 文件以指向新的交换分区。现有的行可能如下所示:
```
LABEL=SWAP-sdaX swap swap defaults 0 0
```
其中 `X` 是分区号。根据新交换分区的位置,添加以下内容:
```
/dev/sdaY swap swap defaults 0 0
```
请确保使用正确的分区号。现在,可以执行创建交换分区的最后一步。使用 `mkswap` 命令将分区定义为交换分区。
```
mkswap /dev/sdaY
```
最后一步是使用以下命令启用交换空间:
```
swapon -a
```
你的新交换分区现在与以前存在的交换分区一起在线。您可以使用 `free` 或`top` 命令来验证这一点。
#### 在 LVM 磁盘环境中添加交换空间
如果你的磁盘使用 LVM,更改交换空间将相当容易。同样,这假设当前交换卷所在的卷组中有可用空间。默认情况下,LVM 环境中的 Fedora Linux 在安装过程中会将交换分区创建为逻辑卷,因此你只需增加交换卷的大小即可。
以下是在 LVM 环境中增加交换空间大小的步骤:
1. 关闭所有交换空间。
2. 增加指定用于交换空间的逻辑卷的大小。
3. 为交换空间调整大小的卷配置。
4. 启用交换空间。
首先,让我们使用 `lvs` 命令(列出逻辑卷)来验证交换空间是否存在以及交换空间是否是逻辑卷。
```
[root@studentvm1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home fedora_studentvm1 -wi-ao---- 2.00g
pool00 fedora_studentvm1 twi-aotz-- 2.00g 8.17 2.93
root fedora_studentvm1 Vwi-aotz-- 2.00g pool00 8.17
swap fedora_studentvm1 -wi-ao---- 8.00g
tmp fedora_studentvm1 -wi-ao---- 5.00g
usr fedora_studentvm1 -wi-ao---- 15.00g
var fedora_studentvm1 -wi-ao---- 10.00g
[root@studentvm1 ~]#
```
你可以看到当前的交换空间大小为 8GB。在这种情况下,我们希望将这个交换卷增加 2GB。首先,停止现有的交换空间。如果交换空间正在使用中,你可能需要先终止正在运行的程序。
```
swapoff -a
```
现在增加逻辑卷的大小。
```
[root@studentvm1 ~]# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap
Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
Logical volume fedora_studentvm1/swap successfully resized.
[root@studentvm1 ~]#
```
运行 `mkswap` 命令将整个 10GB 分区变成交换空间。
```
[root@studentvm1 ~]# mkswap /dev/mapper/fedora_studentvm1-swap
mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10 GiB (10737414144 bytes)
no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a
[root@studentvm1 ~]#
```
重新启用交换空间。
```
[root@studentvm1 ~]# swapon -a
[root@studentvm1 ~]#
```
现在,使用 `lsblk` 命令验证新交换空间是否存在。同样,不需要重新启动机器。
```
[root@studentvm1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 60G 0 disk
|-sda1 8:1 0 1G 0 part /boot
`-sda2 8:2 0 59G 0 part
|-fedora_studentvm1-pool00_tmeta 253:0 0 4M 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-pool00_tdata 253:1 0 2G 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-swap 253:4 0 10G 0 lvm [SWAP]
|-fedora_studentvm1-usr 253:5 0 15G 0 lvm /usr
|-fedora_studentvm1-home 253:7 0 2G 0 lvm /home
|-fedora_studentvm1-var 253:8 0 10G 0 lvm /var
`-fedora_studentvm1-tmp 253:9 0 5G 0 lvm /tmp
sr0 11:0 1 1024M 0 rom
[root@studentvm1 ~]#
```
您也可以使用 `swapon -s` 命令或 `top`、`free` 或其他几个命令来验证这一点。
```
[root@studentvm1 ~]# free
total used free shared buff/cache available
Mem: 4038808 382404 2754072 4152 902332 3404184
Swap: 10485756 0 10485756
[root@studentvm1 ~]#
```
请注意,不同的命令以不同的形式显示或要求输入设备文件。在 `/dev` 目录中访问特定设备有多种方式。在我的文章 [在 Linux 中管理设备](/article-8099-1.html) 中有更多关于 `/dev` 目录及其内容说明。
---
via: <https://opensource.com/article/18/9/swap-space-linux-systems>
作者:[David Both](https://opensource.com/users/dboth)
选题:[lujun9972](https://github.com/lujun9972)
译者:[heguangzhi](https://github.com/heguangzhi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | There are two basic types of memory in a typical computer. The first type, random access memory (RAM), is used to store data and programs while they are being actively used by the computer. Programs and data cannot be used by the computer unless they are stored in RAM. RAM is volatile memory; that is, the data stored in RAM is lost if the computer is turned off.
Hard drives are magnetic media used for long-term storage of data and programs. Magnetic media is nonvolatile; the data stored on a disk remains even when power is removed from the computer. The CPU (central processing unit) cannot directly access the programs and data on the hard drive; it must be copied into RAM first, and that is where the CPU can access its programming instructions and the data to be operated on by those instructions. During the boot process, a computer copies specific operating system programs, such as the kernel and init or systemd, and data from the hard drive into RAM, where it is accessed directly by the computer’s processor, the CPU.
The second type of memory in modern Linux systems is swap space.
## Swap space
The primary function of swap space is to substitute disk space for RAM memory when real RAM fills up and more space is needed.
For example, assume you have a computer system with 8GB of RAM. If you start up programs that don’t fill that RAM, everything is fine and no swapping is required. But suppose the spreadsheet you are working on grows when you add more rows, and that, plus everything else that's running, now fills all of RAM. Without swap space available, you would have to stop working on the spreadsheet until you could free up some of your limited RAM by closing down some other programs.
The kernel uses a memory management program that detects blocks, aka pages, of memory in which the contents have not been used recently. The memory management program swaps enough of these relatively infrequently used pages of memory out to a special partition on the hard drive specifically designated for “paging,” or swapping. This frees up RAM and makes room for more data to be entered into your spreadsheet. Those pages of memory swapped out to the hard drive are tracked by the kernel’s memory management code and can be paged back into RAM if they are needed.
The total amount of memory in a Linux computer is the RAM plus swap space and is referred to as *virtual memory*.
## Types of Linux swap
Linux provides for two types of swap space. By default, most Linux installations create a swap partition, but it is also possible to use a specially configured file as a swap file. A swap partition is just what its name implies—a standard disk partition that is designated as swap space by the `mkswap`
command.
A swap file can be used if there is no free disk space in which to create a new swap partition or space in a volume group where a logical volume can be created for swap space. This is just a regular file that is created and preallocated to a specified size. Then the `mkswap`
command is run to configure it as swap space. I don’t recommend using a file for swap space unless absolutely necessary.
## Thrashing
Thrashing can occur when total virtual memory, both RAM and swap space, become nearly full. The system spends so much time paging blocks of memory between swap space and RAM and back that little time is left for real work. The typical symptoms of this are obvious: The system becomes slow or completely unresponsive, and the hard drive activity light is on almost constantly.
If you can manage to issue a command like `free`
that shows CPU load and memory usage, you will see that the CPU load is very high, perhaps as much as 30 to 40 times the number of CPU cores in the system. Another symptom is that both RAM and swap space are almost completely allocated.
After the fact, looking at SAR (system activity report) data can also show these symptoms. I install SAR on every system I work on and use it for post-repair forensic analysis.
## What is the right amount of swap space?
Many years ago, the rule of thumb for the amount of swap space that should be allocated on the hard drive was 2X the amount of RAM installed in the computer (of course, that was when most computers' RAM was measured in KB or MB). So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. This rule took into account the facts that RAM sizes were typically quite small at that time and that allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than actually performing useful work.
RAM has become an inexpensive commodity and most computers these days have amounts of RAM that extend into tens of gigabytes. Most of my newer computers have at least 8GB of RAM, one has 32GB, and my main workstation has 64GB. My older computers have from 4 to 8 GB of RAM.
When dealing with computers having huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. The Fedora 28 online Installation Guide, which can be found online at [Fedora Installation Guide](https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/), defines current thinking about swap space allocation. I have included below some discussion and the table of recommendations from that document.
The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage.
*Table 1: Recommended system swap space in Fedora documentation*
| Amount of RAM in the system | Recommended swap space | Recommended swap space if allowing for hibernation |
| --- | --- | --- |
| less than 2 GB | 2 times the amount of RAM | 3 times the amount of RAM |
| 2 GB - 8 GB | Equal to the amount of RAM | 2 times the amount of RAM |
| 8 GB - 64 GB | 0.5 times the amount of RAM | 1.5 times the amount of RAM |
| more than 64 GB | workload dependent | hibernation not recommended |
At the border between each range listed above (for example, a system with 2 GB, 8 GB, or 64 GB of system RAM), use discretion with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance.
Of course, most Linux administrators have their own ideas about the appropriate amount of swap space—as well as pretty much everything else. Table 2, below, contains my recommendations based on my personal experiences in multiple environments. These may not work for you, but as with Table 1, they may help you get started.
*Table 2: Recommended system swap space per the author*
| Amount of RAM | Recommended swap space |
| --- | --- |
| ≤ 2GB | 2X RAM |
| 2GB – 8GB | = RAM |
| >8GB | 8GB |
One consideration in both tables is that as the amount of RAM increases, beyond a certain point adding more swap space simply leads to thrashing well before the swap space even comes close to being filled. If you have too little virtual memory while following these recommendations, you should add more RAM, if possible, rather than more swap space. As with all recommendations that affect system performance, use what works best for your specific environment. This will take time and effort to experiment and make changes based on the conditions in your Linux environment.
### Adding more swap space to a non-LVM disk environment
Due to changing requirements for swap space on hosts with Linux already installed, it may become necessary to modify the amount of swap space defined for the system. This procedure can be used for any general case where the amount of swap space needs to be increased. It assumes sufficient available disk space is available. This procedure also assumes that the disks are partitioned in “raw” EXT4 and swap partitions and do not use logical volume management (LVM).
The basic steps to take are simple:
- Turn off the existing swap space.
- Create a new swap partition of the desired size.
- Reread the partition table.
- Configure the partition as swap space.
- Add the new partition to /etc/fstab.
- Turn on swap.
A reboot should not be necessary.
For safety's sake, before turning off swap, at the very least you should ensure that no applications are running and that no swap space is in use. The `free`
or `top`
commands can tell you whether swap space is in use. To be even safer, you could revert to run level 1 or single-user mode.
Turn off the swap partition with the command which turns off all swap space:
`$ swapoff -a`
Now display the existing partitions on the hard drive.
`$ fdisk -l`
This displays the current partition tables on each drive. Identify the current swap partition by number.
Start `fdisk`
in interactive mode with the command:
`$ fdisk /dev/<device name>`
For example:
`$ fdisk /dev/sda`
At this point, `fdisk`
is interactive and operates only on the specified disk drive.
Use the fdisk `p`
sub-command to verify that there is enough free space on the disk to create the new swap partition. The space on the hard drive is shown in terms of 512-byte blocks and starting and ending cylinder numbers, so you may have to do some math to determine the available space between and at the end of allocated partitions.
Use the `n`
sub-command to create a new swap partition. fdisk will ask you the starting cylinder. By default, it chooses the lowest-numbered available cylinder. If you wish to change that, type in the number of the starting cylinder.
The `fdisk`
command now allows you to enter the size of the partitions in a number of formats, including the last cylinder number or the size in bytes, KB or MB. Type in 4000M, which will give about 4GB of space on the new partition (for example), and press Enter.
Use the `p`
sub-command to verify that the partition was created as you specified it. Note that the partition will probably not be exactly what you specified unless you used the ending cylinder number. The `fdisk`
command can only allocate disk space in increments on whole cylinders, so your partition may be a little smaller or larger than you specified. If the partition is not what you want, you can delete it and create it again.
Now it is necessary to specify that the new partition is to be a swap partition. The sub-command `t`
allows you to specify the type of partition. So enter `t`
, specify the partition number, and when it asks for the hex code partition type, type 82, which is the Linux swap partition type, and press Enter.
When you are satisfied with the partition you have created, use the `w`
sub-command to write the new partition table to the disk. The `fdisk`
program will exit and return you to the command prompt after it completes writing the revised partition table. You will probably receive the following message as `fdisk`
completes writing the new partition table:
```
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
```
At this point, you use the `partprobe`
command to force the kernel to re-read the partition table so that it is not necessary to perform a reboot.
`$ partprobe`
Now use the command `fdisk -l`
to list the partitions and the new swap partition should be among those listed. Be sure that the new partition type is “Linux swap”.
It will be necessary to modify the /etc/fstab file to point to the new swap partition. The existing line may look like this:
`LABEL=SWAP-sdaX swap swap defaults 0 0`
where `X`
is the partition number. Add a new line that looks similar this, depending upon the location of your new swap partition:
`/dev/sdaY swap swap defaults 0 0`
Be sure to use the correct partition number. Now you can perform the final step in creating the swap partition. Use the `mkswap`
command to define the partition as a swap partition.
`$ mkswap /dev/sdaY`
The final step is to turn swap on using the command:
`$ swapon -a`
Your new swap partition is now online along with the previously existing swap partition. You can use the `free`
or `top`
commands to verify this.
### Adding swap to an LVM disk environment
If your disk setup uses LVM, changing swap space will be fairly easy. Again, this assumes that space is available in the volume group in which the current swap volume is located. By default, the installation procedures for Fedora Linux in an LVM environment create the swap partition as a logical volume. This makes it easy because you can simply increase the size of the swap volume.
Here are the steps required to increase the amount of swap space in an LVM environment:
- Turn off all swap.
- Increase the size of the logical volume designated for swap.
- Configure the resized volume as swap space.
- Turn on swap.
First, verify that swap exists and is a logical volume using the `lvs`
command (list logical volume).
```
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home fedora_studentvm1 -wi-ao---- 2.00g
pool00 fedora_studentvm1 twi-aotz-- 2.00g 8.17 2.93
root fedora_studentvm1 Vwi-aotz-- 2.00g pool00 8.17
swap fedora_studentvm1 -wi-ao---- 8.00g
tmp fedora_studentvm1 -wi-ao---- 5.00g
usr fedora_studentvm1 -wi-ao---- 15.00g
var fedora_studentvm1 -wi-ao---- 10.00g
```
You can see that the current swap size is 8GB. In this case, we want to add 2GB to this swap volume. First, stop existing swap. You may have to terminate running programs if swap space is in use.
`$ swapoff -a`
Now increase the size of the logical volume.
```
# lvextend -L +2G /dev/mapper/fedora_studentvm1-swap
Size of logical volume fedora_studentvm1/swap changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
Logical volume fedora_studentvm1/swap successfully resized.
```
Run the `mkswap`
command to make this entire 10GB partition into swap space.
```
# mkswap /dev/mapper/fedora_studentvm1-swap
mkswap: /dev/mapper/fedora_studentvm1-swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10 GiB (10737414144 bytes)
no label, UUID=3cc2bee0-e746-4b66-aa2d-1ea15ef1574a
```
Turn swap back on.
`# swapon -a`
Now verify the new swap space is present with the list block devices command. Again, a reboot is not required.
```
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 60G 0 disk
|-sda1 8:1 0 1G 0 part /boot
`-sda2 8:2 0 59G 0 part
|-fedora_studentvm1-pool00_tmeta 253:0 0 4M 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-pool00_tdata 253:1 0 2G 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-swap 253:4 0 10G 0 lvm [SWAP]
|-fedora_studentvm1-usr 253:5 0 15G 0 lvm /usr
|-fedora_studentvm1-home 253:7 0 2G 0 lvm /home
|-fedora_studentvm1-var 253:8 0 10G 0 lvm /var
`-fedora_studentvm1-tmp 253:9 0 5G 0 lvm /tmp
sr0
```
You can also use the `swapon -s`
command, or `top`
, `free`
, or any of several other commands to verify this.
```
# free
total used free shared buff/cache available
Mem: 4038808 382404 2754072 4152 902332 3404184
Swap: 10485756 0 10485756
```
Note that the different commands display or require as input the device special file in different forms. There are a number of ways in which specific devices are accessed in the /dev directory. My article, [Managing Devices in Linux](https://opensource.com/article/16/11/managing-devices-linux), includes more information about the /dev directory and its contents.
*This article was originally published in September 2018 and has been updated with additional information by the editor.*
|
10,115 | 如何将 Scikit-learn Python 库用于数据科学项目 | https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects | 2018-10-14T21:22:28 | [
"数据科学",
"Python"
] | https://linux.cn/article-10115-1.html |
>
> 灵活多样的 Python 库为数据分析和数据挖掘提供了强力的机器学习工具。
>
>
>

Scikit-learn Python 库最初于 2007 年发布,通常用于解决各种方面的机器学习和数据科学问题。这个多种功能的库提供了整洁、一致、高效的 API 和全面的在线文档。
### 什么是 Scikit-learn?
[Scikit-learn](http://scikit-learn.org/stable/index.html) 是一个开源 Python 库,拥有强大的数据分析和数据挖掘工具。 在 BSD 许可下可用,并建立在以下机器学习库上:
* `NumPy`,一个用于操作多维数组和矩阵的库。它还具有广泛的数学函数汇集,可用于执行各种计算。
* `SciPy`,一个由各种库组成的生态系统,用于完成技术计算任务。
* `Matplotlib`,一个用于绘制各种图表和图形的库。
Scikit-learn 提供了广泛的内置算法,可以充分用于数据科学项目。
以下是使用 Scikit-learn 库的主要方法。
#### 1、分类
[分类](https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/)工具识别与提供的数据相关联的类别。例如,它们可用于将电子邮件分类为垃圾邮件或非垃圾邮件。
Scikit-learn 中的分类算法包括:
* <ruby> 支持向量机 <rt> Support vector machines </rt></ruby>(SVM)
* <ruby> 最邻近 <rt> Nearest neighbors </rt></ruby>
* <ruby> 随机森林 <rt> Random forest </rt></ruby>
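为了说明“分类”到底在做什么,下面用纯 Python 写一个最简单的<ruby>最邻近<rt>nearest neighbor</rt></ruby>分类器示意(这只是概念演示,不涉及 Scikit-learn 本身;实际项目中应直接使用 `sklearn.neighbors` 等现成实现):

```python
import math

def nearest_neighbor_predict(train_points, train_labels, new_point):
    """返回训练集中距离 new_point 最近的样本的标签(1-NN 分类)。"""
    distances = [math.dist(p, new_point) for p in train_points]
    return train_labels[distances.index(min(distances))]

# 两类二维样本:0 类聚在原点附近,1 类聚在 (5, 5) 附近
points = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = [0, 0, 0, 1, 1, 1]

print(nearest_neighbor_predict(points, labels, (0.5, 0.5)))  # -> 0
print(nearest_neighbor_predict(points, labels, (5.5, 5.5)))  # -> 1
```

Scikit-learn 中的分类器遵循同样的“用已标注样本预测新样本类别”的模式,只是算法和工程实现要完善得多。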
#### 2、回归
回归涉及到创建一个模型去试图理解输入和输出数据之间的关系。例如,回归工具可用于理解股票价格的行为。
回归算法包括:
* <ruby> 支持向量机 <rt> Support vector machines </rt></ruby>(SVM)
* <ruby> 岭回归 <rt> Ridge regression </rt></ruby>
* Lasso(LCTT 译注:Lasso 即 least absolute shrinkage and selection operator,又译为最小绝对值收敛和选择算子、套索算法)
#### 3、聚类
Scikit-learn 聚类工具用于自动将具有相同特征的数据分组。例如,可以根据客户所在的地区对客户数据进行细分。
聚类算法包括:
* K-means
* <ruby> 谱聚类 <rt> Spectral clustering </rt></ruby>
* Mean-shift
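同样,“聚类”的核心思想也可以用几行纯 Python 演示。下面是一个极简的 K-means 示意(固定迭代次数、朴素初始化、省略了收敛判断等细节,仅用于说明概念;实际使用请选 `sklearn.cluster.KMeans`):

```python
def kmeans_1d(values, k=2, iterations=10):
    """对一维数据做最简单的 K-means,返回各簇的中心。"""
    # 用前 k 个不同的值作为初始中心(演示用的朴素初始化)
    centers = sorted(set(values))[:k]
    for _ in range(iterations):
        # 分配步骤:每个值归入最近的中心
        clusters = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        # 更新步骤:中心移动到各簇的均值
        centers = [sum(vs) / len(vs) for vs in clusters.values() if vs]
    return centers

# 两组明显分开的数据:1 附近和 10 附近
data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
print(sorted(kmeans_1d(data)))  # 两个簇中心应分别接近 1 和 10
```

可以看到,聚类不需要任何标签:分组完全由数据本身的分布决定,这也是它与分类的根本区别。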
#### 4、降维
降维降低了用于分析的随机变量的数量。例如,为了提高可视化效率,可能不会考虑外围数据。
降维算法包括:
* <ruby> 主成分分析 <rt> Principal component analysis </rt></ruby>(PCA)
* <ruby> 功能选择 <rt> Feature selection </rt></ruby>
* <ruby> 非负矩阵分解 <rt> Non-negative matrix factorization </rt></ruby>
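功能选择中最简单的一种做法是去掉方差极低(几乎恒定)的特征,Scikit-learn 中对应 `VarianceThreshold`。下面用纯 Python 示意这一思路(数据为假设的样本):

```python
def variances(data):
    """计算每一列(特征)的方差;方差极低的特征往往可以在降维时去掉。"""
    n = len(data)
    cols = list(zip(*data))  # 按列转置
    result = []
    for col in cols:
        mean = sum(col) / n
        result.append(sum((v - mean) ** 2 for v in col) / n)
    return result

# 假设的数据:第 2 列(下标 1)恒定,几乎不携带信息
data = [
    (5.1, 1.0, 1.4),
    (4.9, 1.0, 1.3),
    (6.2, 1.0, 4.5),
    (5.9, 1.0, 4.2),
]
v = variances(data)
print([i for i, x in enumerate(v) if x > 1e-6])  # 保留的特征下标:[0, 2]
```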
#### 5、模型选择
模型选择算法提供了用于比较、验证和选择要在数据科学项目中使用的最佳参数和模型的工具。
通过参数调整能够增强精度的模型选择模块包括:
* <ruby> 网格搜索 <rt> Grid search </rt></ruby>
* <ruby> 交叉验证 <rt> Cross-validation </rt></ruby>
* <ruby> 指标 <rt> Metrics </rt></ruby>
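以交叉验证为例,k 折划分的原理可以用纯 Python 示意如下(实际应使用 Scikit-learn 的 `KFold` 或 `cross_val_score`):

```python
def kfold_indices(n, k):
    """生成 k 折交叉验证的 (训练集下标, 验证集下标) 划分。"""
    # 前 n % k 个折多分一个样本,保证所有样本都被用到
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n) if i not in val_set]
        splits.append((train, val))
        start += size
    return splits

for train, val in kfold_indices(6, 3):
    print(train, val)
# [2, 3, 4, 5] [0, 1]
# [0, 1, 4, 5] [2, 3]
# [0, 1, 2, 3] [4, 5]
```

每一折都轮流充当一次验证集,其余样本作为训练集,从而更充分地评估模型。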
#### 6、预处理
Scikit-learn 预处理工具在数据分析期间的特征提取和规范化中非常重要。 例如,您可以使用这些工具转换输入数据(如文本)并在分析中应用其特征。
预处理模块包括:
* 预处理
* 特征提取
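以规范化为例,标准化(变换为均值 0、标准差 1)的计算思路如下(纯 Python 示意,对应 Scikit-learn 中 `StandardScaler` 的做法):

```python
def standardize(col):
    """把一列数据变换为均值 0、标准差 1。"""
    n = len(col)
    mean = sum(col) / n
    std = (sum((v - mean) ** 2 for v in col) / n) ** 0.5
    return [(v - mean) / std for v in col]

scaled = standardize([2.0, 4.0, 6.0, 8.0])
print(scaled)  # 均值为 0、标准差为 1 的新数列
```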
### Scikit-learn 库示例
让我们用一个简单的例子来说明如何在数据科学项目中使用 Scikit-learn 库。
我们将使用[鸢尾花花卉数据集](https://en.wikipedia.org/wiki/Iris_flower_data_set),该数据集包含在 Scikit-learn 库中。 鸢尾花数据集包含有关三种花种的 150 个细节,三种花种分别为:
* Setosa:标记为 0
* Versicolor:标记为 1
* Virginica:标记为 2
数据集包括每种花种的以下特征(以厘米为单位):
* 萼片长度
* 萼片宽度
* 花瓣长度
* 花瓣宽度
#### 第 1 步:导入库
由于鸢尾花花卉数据集包含在 Scikit-learn 数据科学库中,我们可以将其加载到我们的工作区中,如下所示:
```
from sklearn import datasets
iris = datasets.load_iris()
```
这些命令从 `sklearn` 导入数据集 `datasets` 模块,然后使用 `datasets` 中的 `load_iris()` 方法将数据包含在工作空间中。
#### 第 2 步:获取数据集特征
数据集 `datasets` 模块包含几种方法,使您更容易熟悉处理数据。
在 Scikit-learn 中,数据集指的是类似字典的对象,其中包含有关数据的所有详细信息。数据通过 `.data` 键存储,其值是一个由数组组成的列表。
例如,我们可以利用 `iris.data` 输出有关鸢尾花花卉数据集的信息。
```
print(iris.data)
```
这是输出(结果已被截断):
```
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]
[5.4 3.9 1.7 0.4]
[4.6 3.4 1.4 0.3]
[5. 3.4 1.5 0.2]
[4.4 2.9 1.4 0.2]
[4.9 3.1 1.5 0.1]
[5.4 3.7 1.5 0.2]
[4.8 3.4 1.6 0.2]
[4.8 3. 1.4 0.1]
[4.3 3. 1.1 0.1]
[5.8 4. 1.2 0.2]
[5.7 4.4 1.5 0.4]
[5.4 3.9 1.3 0.4]
[5.1 3.5 1.4 0.3]
```
我们还使用 `iris.target` 向我们提供有关花朵不同标签的信息。
```
print(iris.target)
```
这是输出:
```
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
```
如果我们使用 `iris.target_names`,我们将输出数据集中找到的标签名称的数组。
```
print(iris.target_names)
```
以下是运行 Python 代码后的结果:
```
['setosa' 'versicolor' 'virginica']
```
#### 第 3 步:可视化数据集
我们可以使用[箱形图](https://en.wikipedia.org/wiki/Box_plot)来生成鸢尾花数据集的视觉描绘。箱形图通过四分位数展示了数据在各个维度上的分布情况。
以下是如何实现这一目标:
```
import seaborn as sns
box_data = iris.data # 表示数据数组的变量
box_target = iris.target # 表示标签数组的变量
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
让我们看看结果:

在横轴上:
* 0 是萼片长度
* 1 是萼片宽度
* 2 是花瓣长度
* 3 是花瓣宽度
垂直轴的尺寸以厘米为单位。
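上面箱形图所依赖的四分位数也可以手工验证。下面是一个纯 Python 的计算示意(四分位数有多种定义,这里采用按中位数切分数据的常见做法,数据为假设的数列):

```python
def quartiles(values):
    """计算箱形图用到的 Q1、中位数、Q3(按中位数切分数据的定义)。"""
    s = sorted(values)

    def median(a):
        m = len(a) // 2
        return a[m] if len(a) % 2 else (a[m - 1] + a[m]) / 2

    half = len(s) // 2
    lower = s[:half]                              # 下半部分
    upper = s[half + 1:] if len(s) % 2 else s[half:]  # 上半部分(奇数时去掉中位数)
    return median(lower), median(s), median(upper)

print(quartiles([1, 2, 3, 4, 5, 6, 7, 8]))  # (2.5, 4.5, 6.5)
```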
### 总结
以下是这个简单的 Scikit-learn 数据科学教程的完整代码。
```
from sklearn import datasets
iris = datasets.load_iris()
print(iris.data)
print(iris.target)
print(iris.target_names)
import seaborn as sns
box_data = iris.data # 表示数据数组的变量
box_target = iris.target # 表示标签数组的变量
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
Scikit-learn 是一个多功能的 Python 库,可用于高效完成数据科学项目。
如果您想了解更多信息,请查看 [LiveEdu](https://www.liveedu.tv/guides/data-science/) 上的教程,例如 Andrey Bulezyuk 关于使用 Scikit-learn 库创建[机器学习应用程序](https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/)的视频。
有什么评价或者疑问吗? 欢迎在下面分享。
---
via: <https://opensource.com/article/18/9/how-use-scikit-learn-data-science-projects>
作者:[Dr.Michael J.Garbade](https://opensource.com/users/drmjg) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The Scikit-learn Python library, initially released in 2007, is commonly used in solving machine learning and data science problems—from the beginning to the end. The versatile library offers an uncluttered, consistent, and efficient API and thorough online documentation.
## What is Scikit-learn?
[Scikit-learn](http://scikit-learn.org/stable/index.html) is an open source Python library that has powerful tools for data analysis and data mining. It's available under the BSD license and is built on the following machine learning libraries:
**NumPy**, a library for manipulating multi-dimensional arrays and matrices. It also has an extensive compilation of mathematical functions for performing various calculations.**SciPy**, an ecosystem consisting of various libraries for completing technical computing tasks.**Matplotlib**, a library for plotting various charts and graphs.
Scikit-learn offers an extensive range of built-in algorithms that make the most of data science projects.
Here are the main ways the Scikit-learn library is used.
### 1. Classification
The [classification](https://blog.liveedu.tv/regression-versus-classification-machine-learning-whats-the-difference/) tools identify the category associated with provided data. For example, they can be used to categorize email messages as either spam or not.
Classification algorithms in Scikit-learn include:
- Support vector machines (SVMs)
- Nearest neighbors
- Random forest
### 2. Regression
Regression involves creating a model that tries to comprehend the relationship between input and output data. For example, regression tools can be used to understand the behavior of stock prices.
Regression algorithms include:
- SVMs
- Ridge regression
- Lasso
### 3. Clustering
The Scikit-learn clustering tools are used to automatically group data with the same characteristics into sets. For example, customer data can be segmented based on their localities.
Clustering algorithms include:
- K-means
- Spectral clustering
- Mean-shift
### 4. Dimensionality reduction
Dimensionality reduction lowers the number of random variables for analysis. For example, to increase the efficiency of visualizations, outlying data may not be considered.
Dimensionality reduction algorithms include:
- Principal component analysis (PCA)
- Feature selection
- Non-negative matrix factorization
### 5. Model selection
Model selection algorithms offer tools to compare, validate, and select the best parameters and models to use in your data science projects.
Model selection modules that can deliver enhanced accuracy through parameter tuning include:
- Grid search
- Cross-validation
- Metrics
### 6. Preprocessing
The Scikit-learn preprocessing tools are important in feature extraction and normalization during data analysis. For example, you can use these tools to transform input data—such as text—and apply their features in your analysis.
Preprocessing modules include:
- Preprocessing
- Feature extraction
## A Scikit-learn library example
Let's use a simple example to illustrate how you can use the Scikit-learn library in your data science projects.
We'll use the [Iris flower dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which is incorporated in the Scikit-learn library. The Iris flower dataset contains 150 details about three flower species:
- Setosa—labeled 0
- Versicolor—labeled 1
- Virginica—labeled 2
The dataset includes the following characteristics of each flower species (in centimeters):
- Sepal length
- Sepal width
- Petal length
- Petal width
### Step 1: Importing the library
Since the Iris dataset is included in the Scikit-learn data science library, we can load it into our workspace as follows:
```
from sklearn import datasets
iris = datasets.load_iris()
```
These commands import the **datasets** module from **sklearn**, then use the **load_iris()** method from **datasets** to include the data in the workspace.
### Step 2: Getting dataset characteristics
The **datasets** module contains several methods that make it easier to get acquainted with handling data.
In Scikit-learn, a dataset refers to a dictionary-like object that has all the details about the data. The data is stored using the **.data** key, which is an array list.
For instance, we can utilize **iris.data** to output information about the Iris flower dataset.
`print(iris.data)`
Here is the output (the results have been truncated):
```
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]
[5.4 3.9 1.7 0.4]
[4.6 3.4 1.4 0.3]
[5. 3.4 1.5 0.2]
[4.4 2.9 1.4 0.2]
[4.9 3.1 1.5 0.1]
[5.4 3.7 1.5 0.2]
[4.8 3.4 1.6 0.2]
[4.8 3. 1.4 0.1]
[4.3 3. 1.1 0.1]
[5.8 4. 1.2 0.2]
[5.7 4.4 1.5 0.4]
[5.4 3.9 1.3 0.4]
[5.1 3.5 1.4 0.3]
```
Let's also use **iris.target** to give us information about the different labels of the flowers.
`print(iris.target)`
Here is the output:
```
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
```
If we use **iris.target_names**, we'll output an array of the names of the labels found in the dataset.
`print(iris.target_names)`
Here is the result after running the Python code:
`['setosa' 'versicolor' 'virginica']`
### Step 3: Visualizing the dataset
We can use the [box plot](https://en.wikipedia.org/wiki/Box_plot) to produce a visual depiction of the Iris flower dataset. The box plot illustrates how the data is distributed over the plane through their quartiles.
Here's how to achieve this:
```
import seaborn as sns
box_data = iris.data #variable representing the data array
box_target = iris.target #variable representing the labels array
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
Let's see the result:

On the horizontal axis:
- 0 is sepal length
- 1 is sepal width
- 2 is petal length
- 3 is petal width
The vertical axis is dimensions in centimeters.
## Wrapping up
Here is the entire code for this simple Scikit-learn data science tutorial.
```
from sklearn import datasets
iris = datasets.load_iris()
print(iris.data)
print(iris.target)
print(iris.target_names)
import seaborn as sns
box_data = iris.data #variable representing the data array
box_target = iris.target #variable representing the labels array
sns.boxplot(data = box_data,width=0.5,fliersize=5)
sns.set(rc={'figure.figsize':(2,15)})
```
Scikit-learn is a versatile Python library you can use to efficiently complete data science projects.
If you want to learn more, check out the tutorials on [LiveEdu](https://www.liveedu.tv/guides/data-science/), such as Andrey Bulezyuk's video on using the Scikit-learn library to create a [machine learning application](https://www.liveedu.tv/andreybu/REaxr-machine-learning-model-python-sklearn-kera/oPGdP-machine-learning-model-python-sklearn-kera/).
Do you have any questions or comments? Feel free to share them below.
|
10,116 | 如何在 Linux 中列出可用的软件包组 | https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/ | 2018-10-14T22:22:10 | [
"软件包",
"安装"
] | https://linux.cn/article-10116-1.html | 
我们知道,如果想要在 Linux 中安装软件包,可以使用软件包管理器来进行安装。由于系统管理员需要频繁用到软件包管理器,所以它是 Linux 当中的一个重要工具。
但是如果想一次性安装一个软件包组,在 Linux 中有可能吗?又如何通过命令去实现呢?
在 Linux 中确实可以用软件包管理器来达到这样的目的。很多软件包管理器都有这样的选项来实现这个功能,但就我所知,`apt` 或 `apt-get` 软件包管理器却并没有这个选项。因此对基于 Debian 的系统,需要使用的命令是 `tasksel`,而不是 `apt` 或 `apt-get` 这样的官方软件包管理器。
在 Linux 中安装软件包组有很多好处。以 LAMP 为例,如果逐个安装,需要装上多个软件包;而如果通过软件包组来安装,一条命令就可以装好整组软件。
当你的团队需要安装 LAMP,但不知道其中具体包含哪些软件包,这个时候软件包组就派上用场了。软件包组是 Linux 系统上一个很方便的工具,它能让你轻松地完成一组软件包的安装。
软件包组是服务于某个共同用途的一组软件包,例如系统工具、声音和视频支持等。安装软件包组时会一并获取一系列依赖包,从而大大节省时间。
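在脚本中处理这类软件包组信息时,往往需要解析包管理器的文本输出。下面是一个纯 Python 的解析示意(假设输出格式与下文 `yum grouplist` 的示例一致;实际工作中更稳妥的做法是直接使用包管理器提供的接口):

```python
def parse_grouplist(text):
    """解析类似 `yum grouplist` 的输出,返回 {小节名: [组名, ...]}。"""
    sections, current = {}, None
    for line in text.splitlines():
        if line.endswith("Groups:"):              # 小节标题,如 "Installed Groups:"
            current = line.strip().rstrip(":")
            sections[current] = []
        elif current and line.startswith("   "):  # 缩进行是组名
            sections[current].append(line.strip())
    return sections

sample = """Installed Groups:
   Base
   Networking Tools
Available Groups:
   Backup Client
   Desktop
"""
print(parse_grouplist(sample))
```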
**推荐阅读:**
* [如何在 Linux 上按照大小列出已安装的软件包](https://www.2daygeek.com/how-to-list-installed-packages-by-size-largest-on-linux/)
* [如何在 Linux 上查看/列出可用的软件包更新](https://www.2daygeek.com/how-to-view-list-the-available-packages-updates-in-linux/)
* [如何在 Linux 上查看软件包的安装/更新/升级/移除/卸载时间](https://www.2daygeek.com/how-to-view-a-particular-package-installed-updated-upgraded-removed-erased-date-on-linux/)
* [如何在 Linux 上查看一个软件包的详细信息](https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-in-linux/)
* [如何查看一个软件包是否在你的 Linux 发行版上可用](https://www.2daygeek.com/how-to-search-if-a-package-is-available-on-your-linux-distribution-or-not/)
* [萌新指导:一个可视化的 Linux 包管理工具](https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/)
* [老手必会:命令行软件包管理器的用法](https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/)
### 如何在 CentOS/RHEL 系统上列出可用的软件包组
RHEL 和 CentOS 系统使用的是 RPM 软件包,因此可以使用 `yum` 软件包管理器来获取相关的软件包信息。
`yum` 是 “Yellowdog Updater, Modified” 的缩写,它是一个用于基于 RPM 系统(例如 RHEL 和 CentOS)的,开源的命令行软件包管理工具。它是从发行版仓库或其它第三方库中获取、安装、删除、查询和管理 RPM 包的主要工具。
**推荐阅读:** [使用 yum 命令在 RHEL/CentOS 系统上管理软件包](https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/)
```
# yum grouplist
Loaded plugins: fastestmirror, security
Setting up Group Process
Loading mirror speeds from cached hostfile
* epel: epel.mirror.constant.com
Installed Groups:
Base
E-mail server
Graphical Administration Tools
Hardware monitoring utilities
Legacy UNIX compatibility
Milkymist
Networking Tools
Performance Tools
Perl Support
Security Tools
Available Groups:
Additional Development
Backup Client
Backup Server
CIFS file server
Client management tools
Compatibility libraries
Console internet tools
Debugging Tools
Desktop
.
.
Available Language Groups:
Afrikaans Support [af]
Albanian Support [sq]
Amazigh Support [ber]
Arabic Support [ar]
Armenian Support [hy]
Assamese Support [as]
Azerbaijani Support [az]
.
.
Done
```
如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Performance Tools” 组相关联的软件包。
```
# yum groupinfo "Performance Tools"
Loaded plugins: fastestmirror, security
Setting up Group Process
Loading mirror speeds from cached hostfile
* epel: ewr.edge.kernel.org
Group: Performance Tools
Description: Tools for diagnosing system and application-level performance problems.
Mandatory Packages:
blktrace
sysstat
Default Packages:
dstat
iotop
latencytop
latencytop-tui
oprofile
perf
powertop
seekwatcher
Optional Packages:
oprofile-jit
papi
sdparm
sg3_utils
tiobench
tuned
tuned-utils
```
### 如何在 Fedora 系统上列出可用的软件包组
Fedora 系统使用的是 DNF 软件包管理器,因此可以通过 DNF 软件包管理器来获取相关的信息。
DNF 的含义是 “Dandified yum”。DNF 软件包管理器是 YUM 软件包管理器的一个分支,它使用 hawkey/libsolv 库作为后端。Aleš Kozumplík 从 Fedora 18 开始着手开发 DNF,到 Fedora 22 时它被正式加入系统。
`dnf` 命令可以在 Fedora 22 及更高版本上安装、更新、搜索和删除软件包,它可以自动解决软件包的依赖关系并将其顺利安装,不会产生问题。
YUM 被 DNF 取代,是因为 YUM 中存在一些长期未能解决的问题。至于 Aleš Kozumplík 为什么不直接给 YUM 打补丁,是因为这些问题在技术上很难通过补丁解决,而且 YUM 团队也不会立即接受这些更改,此外还有其它一些重要的顾虑。另外,YUM 的代码量约有 5.6 万行,而 DNF 只有约 2.9 万行。因此,与其沿着 YUM 的方向继续开发,另起一个分支是更好的选择。
**推荐阅读:** [在 Fedora 系统上使用 DNF 命令管理软件包](https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/)
```
# dnf grouplist
Last metadata expiration check: 0:00:00 ago on Sun 09 Sep 2018 07:10:36 PM IST.
Available Environment Groups:
Fedora Custom Operating System
Minimal Install
Fedora Server Edition
Fedora Workstation
Fedora Cloud Server
KDE Plasma Workspaces
Xfce Desktop
LXDE Desktop
Hawaii Desktop
LXQt Desktop
Cinnamon Desktop
MATE Desktop
Sugar Desktop Environment
Development and Creative Workstation
Web Server
Infrastructure Server
Basic Desktop
Installed Groups:
C Development Tools and Libraries
Development Tools
Available Groups:
3D Printing
Administration Tools
Ansible node
Audio Production
Authoring and Publishing
Books and Guides
Cloud Infrastructure
Cloud Management Tools
Container Management
D Development Tools and Libraries
.
.
RPM Development Tools
Security Lab
Text-based Internet
Window Managers
GNOME Desktop Environment
Graphical Internet
KDE (K Desktop Environment)
Fonts
Games and Entertainment
Hardware Support
Sound and Video
System Tools
```
如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “Editor” 组相关联的软件包。
```
# dnf groupinfo Editors
Last metadata expiration check: 0:04:57 ago on Sun 09 Sep 2018 07:10:36 PM IST.
Group: Editors
Description: Sometimes called text editors, these are programs that allow you to create and edit text files. This includes Emacs and Vi.
Optional Packages:
code-editor
cssed
emacs
emacs-auctex
emacs-bbdb
emacs-ess
emacs-vm
geany
gobby
jed
joe
leafpad
nedit
poedit
psgml
vim-X11
vim-enhanced
xemacs
xemacs-packages-base
xemacs-packages-extra
xemacs-xft
xmlcopyeditor
zile
```
### 如何在 openSUSE 系统上列出可用的软件包组
openSUSE 系统使用的是 zypper 软件包管理器,因此可以通过 zypper 软件包管理器来获取相关的信息。
Zypper 是 suse 和 openSUSE 发行版的命令行包管理器。它可以用于安装、更新、搜索和删除软件包,还有管理存储库,执行各种查询等功能。 Zypper 命令行界面用到了 ZYpp 系统管理库(libzypp)。
**推荐阅读:** [在 openSUSE 和 suse 系统使用 zypper 命令管理软件包](https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/)
```
# zypper patterns
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
S | Name | Version | Repository | Dependency
---|----------------------|---------------|-----------------------|-----------
| 64bit | 20150918-25.1 | Main Repository (OSS) |
| apparmor | 20150918-25.1 | Main Repository (OSS) |
i | apparmor | 20150918-25.1 | @System |
| base | 20150918-25.1 | Main Repository (OSS) |
i+ | base | 20150918-25.1 | @System |
| books | 20150918-25.1 | Main Repository (OSS) |
| console | 20150918-25.1 | Main Repository (OSS) |
| devel_C_C++ | 20150918-25.1 | Main Repository (OSS) |
i | enhanced_base | 20150918-25.1 | @System |
| enlightenment | 20150918-25.1 | Main Repository (OSS) |
| file_server | 20150918-25.1 | Main Repository (OSS) |
| fonts | 20150918-25.1 | Main Repository (OSS) |
i | fonts | 20150918-25.1 | @System |
| games | 20150918-25.1 | Main Repository (OSS) |
i | games | 20150918-25.1 | @System |
| gnome | 20150918-25.1 | Main Repository (OSS) |
| gnome_basis | 20150918-25.1 | Main Repository (OSS) |
i | imaging | 20150918-25.1 | @System |
| kde | 20150918-25.1 | Main Repository (OSS) |
i+ | kde | 20150918-25.1 | @System |
| kde_plasma | 20150918-25.1 | Main Repository (OSS) |
i | kde_plasma | 20150918-25.1 | @System |
| lamp_server | 20150918-25.1 | Main Repository (OSS) |
| laptop | 20150918-25.1 | Main Repository (OSS) |
i+ | laptop | 20150918-25.1 | @System |
| lxde | 20150918-25.1 | Main Repository (OSS) |
| lxqt | 20150918-25.1 | Main Repository (OSS) |
i | multimedia | 20150918-25.1 | @System |
| network_admin | 20150918-25.1 | Main Repository (OSS) |
| non_oss | 20150918-25.1 | Main Repository (OSS) |
i | non_oss | 20150918-25.1 | @System |
| office | 20150918-25.1 | Main Repository (OSS) |
i | office | 20150918-25.1 | @System |
| print_server | 20150918-25.1 | Main Repository (OSS) |
| remote_desktop | 20150918-25.1 | Main Repository (OSS) |
| x11 | 20150918-25.1 | Main Repository (OSS) |
i+ | x11 | 20150918-25.1 | @System |
| x86 | 20150918-25.1 | Main Repository (OSS) |
| xen_server | 20150918-25.1 | Main Repository (OSS) |
| xfce | 20150918-25.1 | Main Repository (OSS) |
| xfce_basis | 20150918-25.1 | Main Repository (OSS) |
| yast2_basis | 20150918-25.1 | Main Repository (OSS) |
i | yast2_basis | 20150918-25.1 | @System |
| yast2_install_wf | 20150918-25.1 | Main Repository (OSS) |
```
如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “file\_server” 组相关联的软件包。另外 `zypper` 还允许用户使用不同的选项执行相同的操作。
```
# zypper info file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
如果需要列出相关联的软件包,可以执行以下这个命令。
```
# zypper pattern-info file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
如果需要列出相关联的软件包,也可以执行以下这个命令。
```
# zypper info pattern file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
如果需要列出相关联的软件包,也可以执行以下这个命令。
```
# zypper info -t pattern file_server
Loading repository data...
Warning: Repository 'Update Repository (Non-Oss)' appears to be outdated. Consider using a different mirror or server.
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for pattern file_server:
------------------------------------
Repository : Main Repository (OSS)
Name : file_server
Version : 20150918-25.1
Arch : x86_64
Vendor : openSUSE
Installed : No
Visible to User : Yes
Summary : File Server
Description :
File services to host files so that they may be accessed or retrieved by other computers on the same network. This includes the FTP, SMB, and NFS protocols.
Contents :
S | Name | Type | Dependency
---|-------------------------------|---------|------------
i+ | patterns-openSUSE-base | package | Required
| patterns-openSUSE-file_server | package | Required
| nfs-kernel-server | package | Recommended
i | nfsidmap | package | Recommended
i | samba | package | Recommended
i | samba-client | package | Recommended
i | samba-winbind | package | Recommended
| tftp | package | Recommended
| vsftpd | package | Recommended
| yast2-ftp-server | package | Recommended
| yast2-nfs-server | package | Recommended
i | yast2-samba-server | package | Recommended
| yast2-tftp-server | package | Recommended
```
### 如何在 Debian/Ubuntu 系统上列出可用的软件包组
由于 APT 或 APT-GET 软件包管理器没有为基于 Debian/Ubuntu 的系统提供这样的选项,因此需要使用 `tasksel` 命令来获取相关信息。
[tasksel](https://wiki.debian.org/tasksel) 是 Debian/Ubuntu 系统上一个很方便的工具,只需要很少的操作就可以用它来安装好一组软件包。各个安装任务以 `.desc` 文件的形式定义在 `/usr/share/tasksel` 目录下。
默认情况下,`tasksel` 工具是作为 Debian 系统的一部分安装的,但桌面版 Ubuntu 则没有自带 `tasksel`,这个功能类似软件包管理器中的元包(meta-packages)。
`tasksel` 工具带有一个基于 zenity 的简单用户界面,例如命令行中的弹出图形对话框。
**推荐阅读:** [使用 tasksel 在 Debian/Ubuntu 系统上快速安装软件包组](https://www.2daygeek.com/tasksel-install-group-of-software-in-a-single-click-or-single-command-on-debian-ubuntu/)
```
# tasksel --list-task
u kubuntu-live Kubuntu live CD
u lubuntu-live-gtk Lubuntu live CD (GTK part)
u ubuntu-budgie-live Ubuntu Budgie live CD
u ubuntu-live Ubuntu live CD
u ubuntu-mate-live Ubuntu MATE Live CD
u ubuntustudio-dvd-live Ubuntu Studio live DVD
u vanilla-gnome-live Ubuntu GNOME live CD
u xubuntu-live Xubuntu live CD
u cloud-image Ubuntu Cloud Image (instance)
u dns-server DNS server
u kubuntu-desktop Kubuntu desktop
u kubuntu-full Kubuntu full
u lamp-server LAMP server
u lubuntu-core Lubuntu minimal installation
u lubuntu-desktop Lubuntu Desktop
u lubuntu-gtk-core Lubuntu minimal installation (GTK part)
u lubuntu-gtk-desktop Lubuntu Desktop (GTK part)
u lubuntu-qt-core Lubuntu minimal installation (Qt part)
u lubuntu-qt-desktop Lubuntu Qt Desktop (Qt part)
u mail-server Mail server
u postgresql-server PostgreSQL database
i print-server Print server
u samba-server Samba file server
u tomcat-server Tomcat Java server
u ubuntu-budgie-desktop Ubuntu Budgie desktop
i ubuntu-desktop Ubuntu desktop
u ubuntu-mate-core Ubuntu MATE minimal
u ubuntu-mate-desktop Ubuntu MATE desktop
i ubuntu-usb Ubuntu desktop USB
u ubuntustudio-audio Audio recording and editing suite
u ubuntustudio-desktop Ubuntu Studio desktop
u ubuntustudio-desktop-core Ubuntu Studio minimal DE installation
u ubuntustudio-fonts Large selection of font packages
u ubuntustudio-graphics 2D/3D creation and editing suite
u ubuntustudio-photography Photograph touchup and editing suite
u ubuntustudio-publishing Publishing applications
u ubuntustudio-video Video creation and editing suite
u vanilla-gnome-desktop Vanilla GNOME desktop
u xubuntu-core Xubuntu minimal installation
u xubuntu-desktop Xubuntu desktop
u openssh-server OpenSSH server
u server Basic Ubuntu server
```
如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “lamp-server” 组相关联的软件包。
```
# tasksel --task-desc "lamp-server"
Selects a ready-made Linux/Apache/MySQL/PHP server.
```
### 如何在基于 Arch Linux 的系统上列出可用的软件包组
基于 Arch Linux 的系统使用的是 pacman 软件包管理器,因此可以通过 pacman 软件包管理器来获取相关的信息。
pacman 是 “package manager” 的缩写。`pacman` 可以用于安装、构建、删除和管理 Arch Linux 软件包。`pacman` 使用 libalpm(Arch Linux Package Management 库,ALPM)作为后端来执行所有操作。
**推荐阅读:** [使用 pacman 在基于 Arch Linux 的系统上管理软件包](https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/)
```
# pacman -Sg
base-devel
base
multilib-devel
gnome-extra
kde-applications
kdepim
kdeutils
kdeedu
kf5
kdemultimedia
gnome
plasma
kdegames
kdesdk
kdebase
xfce4
fprint
kdegraphics
kdenetwork
kdeadmin
kf5-aids
kdewebdev
.
.
dlang-ldc
libretro
ring
lxqt
non-daw
non
alsa
qtcurve
realtime
sugar-fructose
tesseract-data
vim-plugins
```
如果需要列出相关联的软件包,可以执行以下这个命令。下面的例子是列出和 “gnome” 组相关联的软件包。
```
# pacman -Sg gnome
gnome baobab
gnome cheese
gnome eog
gnome epiphany
gnome evince
gnome file-roller
gnome gdm
gnome gedit
gnome gnome-backgrounds
gnome gnome-calculator
gnome gnome-calendar
gnome gnome-characters
gnome gnome-clocks
gnome gnome-color-manager
gnome gnome-contacts
gnome gnome-control-center
gnome gnome-dictionary
gnome gnome-disk-utility
gnome gnome-documents
gnome gnome-font-viewer
.
.
gnome sushi
gnome totem
gnome tracker
gnome tracker-miners
gnome vino
gnome xdg-user-dirs-gtk
gnome yelp
gnome gnome-boxes
gnome gnome-software
gnome simple-scan
```
也可以执行以下这个命令实现同样的效果。
```
# pacman -S gnome
:: There are 64 members in group gnome:
:: Repository extra
1) baobab 2) cheese 3) eog 4) epiphany 5) evince 6) file-roller 7) gdm 8) gedit 9) gnome-backgrounds 10) gnome-calculator 11) gnome-calendar 12) gnome-characters 13) gnome-clocks
14) gnome-color-manager 15) gnome-contacts 16) gnome-control-center 17) gnome-dictionary 18) gnome-disk-utility 19) gnome-documents 20) gnome-font-viewer 21) gnome-getting-started-docs
22) gnome-keyring 23) gnome-logs 24) gnome-maps 25) gnome-menus 26) gnome-music 27) gnome-photos 28) gnome-screenshot 29) gnome-session 30) gnome-settings-daemon 31) gnome-shell
32) gnome-shell-extensions 33) gnome-system-monitor 34) gnome-terminal 35) gnome-themes-extra 36) gnome-todo 37) gnome-user-docs 38) gnome-user-share 39) gnome-video-effects 40) grilo-plugins
41) gvfs 42) gvfs-afc 43) gvfs-goa 44) gvfs-google 45) gvfs-gphoto2 46) gvfs-mtp 47) gvfs-nfs 48) gvfs-smb 49) mousetweaks 50) mutter 51) nautilus 52) networkmanager 53) orca 54) rygel
55) sushi 56) totem 57) tracker 58) tracker-miners 59) vino 60) xdg-user-dirs-gtk 61) yelp
:: Repository community
62) gnome-boxes 63) gnome-software 64) simple-scan
Enter a selection (default=all): ^C
Interrupt signal received
```
可以执行以下命令检查相关软件包的数量。
```
# pacman -Sg gnome | wc -l
64
```
---
via: <https://www.2daygeek.com/how-to-list-an-available-package-groups-in-linux/>
作者:[Prakash Subramanian](https://www.2daygeek.com/author/prakash/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
10,117 | Hegemon:使用 Rust 编写的模块化系统监视程序 | https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/ | 2018-10-15T12:07:57 | [
"top"
] | https://linux.cn/article-10117-1.html | 
在类 Unix 系统中监视运行进程时,最常用的程序是 `top` 和它的增强版 `htop`。我个人最喜欢的是 `htop`。但是,开发人员不时会发布这些程序的替代品。`top` 和 `htop` 工具的一个替代品是 `Hegemon`。它是使用 Rust 语言编写的模块化系统监视程序。
关于 Hegemon 的功能,我们可以列出以下这些:
* Hegemon 会监控 CPU、内存和交换页的使用情况。
* 它监控系统的温度和风扇速度。
* 更新间隔时间可以调整。默认值为 3 秒。
* 我们可以通过扩展数据流来展示更详细的图表和其他信息。
* 单元测试。
* 干净的界面。
* 自由开源。
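Hegemon 监控的内存等数据在 Linux 上通常来自 `/proc` 文件系统。下面用纯 Python 演示内存使用率的计算思路(仅作示意,Hegemon 本身用 Rust 实现;文本字段与数值均为假设的样例):

```python
def mem_usage(meminfo_text):
    """根据 /proc/meminfo 风格的文本计算内存使用率(百分比)。"""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key] = int(rest.split()[0])  # 数值以 kB 为单位
    used = fields["MemTotal"] - fields["MemAvailable"]
    return 100.0 * used / fields["MemTotal"]

sample = """MemTotal:       8000000 kB
MemFree:        2000000 kB
MemAvailable:   4000000 kB
"""
print(round(mem_usage(sample), 1))  # 50.0
```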
### 安装 Hegemon
确保已安装 Rust 1.26 或更高版本。要在 Linux 发行版中安装 Rust,请参阅以下指南:
* [在 Linux 中安装 Rust 编程语言](https://www.ostechnix.com/install-rust-programming-language-in-linux/)
另外要安装 [libsensors](https://github.com/lm-sensors/lm-sensors) 库。它在大多数 Linux 发行版的默认仓库中都有。例如,你可以使用以下命令将其安装在基于 RPM 的系统(如 Fedora)中:
```
$ sudo dnf install lm_sensors-devel
```
在像 Ubuntu、Linux Mint 这样的基于 Debian 的系统上,可以使用这个命令安装它:
```
$ sudo apt-get install libsensors4-dev
```
在安装 Rust 和 libsensors 后,使用命令安装 Hegemon:
```
$ cargo install hegemon
```
安装 hegemon 后,使用以下命令开始监视 Linux 系统中正在运行的进程:
```
$ hegemon
```
以下是 Arch Linux 桌面的示例输出。

要退出,请按 `Q`。
请注意,hegemon 仍处于早期开发阶段,并不能完全取代 `top` 命令。它可能存在 bug 和功能缺失。如果你遇到任何 bug,请在项目的 GitHub 页面中报告它们。开发人员计划在即将推出的版本中引入更多功能。所以,请关注这个项目。
就是这些了。希望这篇文章有用。还有更多的好东西。敬请关注!
干杯!
---
via: <https://www.ostechnix.com/hegemon-a-modular-system-monitor-application-written-in-rust/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,118 | cloc:计算不同编程语言源代码的行数 | https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/ | 2018-10-15T12:18:12 | [
"代码"
] | https://linux.cn/article-10118-1.html | 
作为一个开发人员,你可能需要不时地向你的领导或者同事分享你目前的工作与代码开发进展,抑或你的领导想对代码进行全方位的分析。这时,你就需要用到一些代码统计的工具,我知道其中一个是 [**Ohcount**](https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/)。今天,我遇到了另一个程序,**cloc**。你可以用 cloc 很容易地统计多种语言的源代码行数。它还可以计算空行数、代码行数、实际代码的行数,并通过整齐的表格进行结果输出。cloc 是自由开源的跨平台程序,使用 **Perl** 进行开发。
### 特点
cloc 有很多优势:
* 安装方便而且易用,不需要额外的依赖项
* 可移植
* 支持多种的结果格式导出,包括:纯文本、SQL、JSON、XML、YAML、CSV
* 可以计算 git 的提交数
* 可递归计算文件夹内的代码行数
* 可计算压缩后的文件,如:tar、zip、Java 的 .ear 等类型
* 开源,跨平台
### 安装
cloc 的安装包在大多数的类 Unix 操作系统的默认软件库内,所以你只需要使用默认的包管理器安装即可。
Arch Linux:
```
$ sudo pacman -S cloc
```
Debian/Ubuntu:
```
$ sudo apt-get install cloc
```
CentOS/Red Hat/Scientific Linux:
```
$ sudo yum install cloc
```
Fedora:
```
$ sudo dnf install cloc
```
FreeBSD:
```
$ sudo pkg install cloc
```
当然你也可以使用第三方的包管理器,比如 [**NPM**](https://www.ostechnix.com/install-node-js-linux/)。
```
$ npm install -g cloc
```
### 统计多种语言代码数据的使用举例
首先来几个简单的例子,比如下面这个在我目前工作目录中的 C 代码。
```
$ cat hello.c
#include <stdio.h>
int main()
{
// printf() displays the string inside quotation
printf("Hello, World!");
return 0;
}
```
想要计算行数,只需要简单运行:
```
$ cloc hello.c
```
输出:

第一列是被分析文件的编程语言,上面我们可以看到这个文件是用 C 语言编写的。
第二列显示的是该种语言有多少文件,图中说明只有一个。
第三列显示空行的数量,图中显示是 0 行。
第四列显示注释的行数。
第五列显示该文件中实际的代码总行数。
这是一个只有 6 行代码的源文件,我们看到统计的还算准确,那么如果用来统计一个行数较多的源文件呢?
```
$ cloc file.tar.gz
```
输出:

像上面这样的文件,如果要手动统计出准确的代码行数会非常困难,但 cloc 只需要几秒,而且以易读的表格格式显示结果。你还可以在最后查看每个部分的总计,这在分析程序的源代码时非常方便。
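作为对照,也可以用标准的 `grep` 粗略统计前面那个 hello.c:去掉空行和以 `//` 开头的注释行后计数。注意这只是演示思路,无法替代 cloc 对多种语言、多种注释风格的精确解析:

```
# 把本文前面的 hello.c 原样写入文件
cat > hello.c <<'EOF'
#include <stdio.h>
int main()
{
// printf() displays the string inside quotation
printf("Hello, World!");
return 0;
}
EOF

# 统计既非空行、也非 // 注释行的行数
grep -cvE '^[[:space:]]*(//|$)' hello.c
```

上面的计数结果应为 6,与 cloc 对该文件统计出的代码行数一致。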
除了源代码文件,cloc 还能递归计算各个目录及其子目录下的文件、压缩包、甚至 git commit 数目等。
文件夹中使用的例子:
```
$ cloc dir/
```

子文件夹中使用的例子:
```
$ cloc dir/cloc/tests
```

计算一个压缩包中源代码的行数:
```
$ cloc archive.zip
```

你还可以计算一个 git 项目,也可以像下面这样针对某次提交时的状态统计:
```
$ git clone https://github.com/AlDanial/cloc.git
$ cd cloc
$ cloc 157d706
```

cloc 可以自动识别一些语言,使用下面的命令查看 cloc 支持的语言:
```
$ cloc --show-lang
```
更新信息请查阅 cloc 的使用帮助。
```
$ cloc --help
```
开始使用吧!
---
via: <https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[littleji](https://github.com/littleji) 校对:[pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,119 | 如何在 Linux 下锁住键盘和鼠标而不锁屏 | https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/ | 2018-10-15T23:00:16 | [
"锁定"
] | https://linux.cn/article-10119-1.html | 
我四岁的侄女是个好奇的孩子,她非常喜爱“阿凡达”电影,当阿凡达电影在播放时,她是如此的专注,好似眼睛粘在了屏幕上。但问题是当她观看电影时,她经常会碰到键盘上的某个键或者移动了鼠标,又或者是点击了鼠标的按钮。有时她非常意外地按了键盘上的某个键,从而将电影关闭或者暂停了。所以我就想找个方法来将键盘和鼠标都锁住,但屏幕不会被锁住。幸运的是,我在 Ubuntu 论坛上找到了一个完美的解决方法。假如在你正看着屏幕上的某些重要的事情时,你不想让你的小猫或者小狗在你的键盘上行走,或者让你的孩子在键盘上瞎搞一气,那我建议你试试 **xtrlock** 这个工具。它很简单但非常实用,你可以锁定屏幕的显示直到用户在键盘上输入自己设定的密码(LCTT 译注:就是用户自己的密码,例如用来打开屏保的那个密码,不需要单独设定)。在这篇简单的教程中,我将为你展示如何在 Linux 下锁住键盘和鼠标,而不锁掉屏幕。这个技巧几乎可以在所有的 Linux 操作系统中生效。
### 安装 xtrlock
xtrlock 软件包在大多数 Linux 操作系统的默认软件仓库中都可以获取到。所以你可以使用你安装的发行版的包管理器来安装它。
在 **Arch Linux** 及其衍生发行版中,运行下面的命令来安装它:
```
$ sudo pacman -S xtrlock
```
在 **Fedora** 上使用:
```
$ sudo dnf install xtrlock
```
在 **RHEL、CentOS** 上使用:
```
$ sudo yum install xtrlock
```
在 **SUSE/openSUSE** 上使用:
```
$ sudo zypper install xtrlock
```
在 **Debian、Ubuntu、Linux Mint** 上使用:
```
$ sudo apt-get install xtrlock
```
### 使用 xtrlock 锁住键盘和鼠标但不锁屏
安装好 xtrlock 后,你需要根据你的选择来创建一个快捷键,通过这个快捷键来锁住键盘和鼠标。
(LCTT 译注:译者在自己的系统(Arch + Deepin)中发现这里的到下面创建快捷键的部分可以不必做,依然生效。)
在 `/usr/local/bin` 目录下创建一个名为 `lockkbmouse` 的新文件:
```
$ sudo vi /usr/local/bin/lockkbmouse
```
然后将下面的命令添加到这个文件中:
```
#!/bin/bash
sleep 1 && xtrlock
```
保存并关闭这个文件。
然后使用下面的命令来使得它可以被执行:
```
$ sudo chmod a+x /usr/local/bin/lockkbmouse
```
接着,我们就需要创建快捷键了。
#### 创建快捷键
**在 Arch Linux MATE 桌面中**
依次点击 “System -> Preferences -> Hardware -> Keyboard Shortcuts”
然后点击 “Add” 来创建快捷键。

首先键入你的这个快捷键的名称,然后将下面的命令填入命令框中,最后点击 “Apply” 按钮。
```
bash -c "sleep 1 && xtrlock"
```

为了能够给这个快捷键赋予快捷方式,需要选中它或者双击它然后输入你选定的快捷键组合,例如我使用 `Alt+k` 这组快捷键。

如果要清除这个快捷键组合,按住 `BACKSPACE` 键就可以了。完成后,关闭键盘设定窗口。
**在 Ubuntu GNOME 桌面中**
依次进入 “System Settings -> Devices -> Keyboard”,然后点击 “+” 这个符号。
键入你快捷键的名称并将下面的命令加到命令框里面,然后点击 “Add” 按钮。
```
bash -c "sleep 1 && xtrlock"
```

接下来为这个新建的快捷键赋予快捷方式。我们只需要选择或者双击 “Set shortcut” 这个按钮就可以了。

然后你将看到下面的一屏。

输入你选定的快捷键组合,例如我使用 `Alt+k`。

如果要清除这个快捷键组合,则可以按 `BACKSPACE` 这个键。这样快捷键便设定好了,完成这个后,关闭键盘设定窗口。
从现在起,每当你输入刚才设定的快捷键(在我们的示例中是 `ALT+K`),鼠标的指针便会变成一个挂锁的模样。现在,键盘和鼠标便被锁定了,这时你便可以自在地观看你的电影或者做其他你想做的事儿。即便是你的孩子或者宠物碰了键盘上的某些键或者点击了鼠标,这些操作都不会起作用。
因为 `xtrlock` 已经在工作了。

你看到了那个小的锁按钮了吗?它意味着键盘和鼠标已经被锁定了。即便你移动这个锁按钮,也不会发生任何事情。后台的任务在一直执行,直到你将屏幕解除,然后手动停掉运行中的任务。
### 将键盘和鼠标解锁
要将键盘和鼠标解锁,只需要输入你的密码然后敲击回车键就可以了,在输入的过程中你将看不到密码。只需要输入然后敲回车键就可以了。在你输入了正确的密码后,鼠标和键盘就可以再工作了。假如你输入了一个错误的密码,你将听到警告声。按 `ESC` 来清除输入的错误密码,然后重新输入正确的密码。要去掉未完全输入完的密码中的一个字符,只需要按 `BACKSPACE` 或者 `DELETE` 键就可以了。
### 要是我被永久地锁住了怎么办?
以防你被永久地锁定了屏幕,切换至一个 TTY(例如 `CTRL+ALT+F2`)然后运行:
```
$ sudo killall xtrlock
```
或者你还可以使用 `chvt` 命令来在 TTY 和 X 会话之间切换。
例如,如果要切换到 TTY1,则运行:
```
$ sudo chvt 1
```
要切换回 X 会话,则键入:
```
$ sudo chvt 7
```
不同的发行版使用了不同的快捷键组合来在不同的 TTY 间切换。请参考你安装的对应发行版的官方网站了解更多详情。
如果想知道更多 xtrlock 的信息,请参考 man 页:
```
$ man xtrlock
```
那么这就是全部了。希望这个指南可以帮到你。假如你发现这个指南很有用,请花点时间将这个指南共享到你的朋友圈并支持我们(OSTechNix)。
**资源:**
* [**Ubuntu 论坛**](https://ubuntuforums.org/showthread.php?t=993800)
---
via: <https://www.ostechnix.com/lock-keyboard-mouse-not-screen-linux/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[FSSlc](https://github.com/FSSlc) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,120 | 如何在救援(单用户模式)/紧急模式下启动 Ubuntu 18.04/Debian 9 服务器 | https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/ | 2018-10-16T10:53:32 | [
"启动",
"救援"
] | https://linux.cn/article-10120-1.html | 
将 Linux 服务器引导到单用户模式或<ruby> 救援模式 <rt> rescue mode </rt></ruby>是 Linux 管理员在关键时刻恢复服务器时通常使用的重要故障排除方法之一。在 Ubuntu 18.04 和 Debian 9 中,单用户模式被称为救援模式。
除了救援模式外,Linux 服务器可以在<ruby> 紧急模式 <rt> emergency mode </rt></ruby>下启动,它们之间的主要区别在于,紧急模式加载了带有只读根文件系统文件系统的最小环境,没有启用任何网络或其他服务。但救援模式尝试挂载所有本地文件系统并尝试启动一些重要的服务,包括网络。
在本文中,我们将讨论如何在救援模式和紧急模式下启动 Ubuntu 18.04 LTS/Debian 9 服务器。
#### 在单用户/救援模式下启动 Ubuntu 18.04 LTS 服务器:
重启服务器并进入启动加载程序 (Grub) 屏幕并选择 “Ubuntu”,启动加载器页面如下所示,

按下 `e`,然后移动到以 `linux` 开头的行尾,并添加 `systemd.unit=rescue.target`。如果存在单词 `$vt_handoff` 就删除它。

现在按 `Ctrl-x` 或 `F10` 启动,

现在按回车键,然后你将得到所有文件系统都以读写模式挂载的 shell 并进行故障排除。完成故障排除后,可以使用 `reboot` 命令重新启动服务器。
#### 在紧急模式下启动 Ubuntu 18.04 LTS 服务器
重启服务器并进入启动加载程序页面并选择 “Ubuntu”,然后按 `e` 并移动到以 `linux` 开头的行尾,并添加 `systemd.unit=emergency.target`。

现在按 `Ctrl-x` 或 `F10` 以紧急模式启动,你将获得一个 shell 并从那里进行故障排除。正如我们已经讨论过的那样,在紧急模式下,文件系统将以只读模式挂载,并且在这种模式下也不会有网络,

使用以下命令将根文件系统挂载到读写模式,
```
# mount -o remount,rw /
```
同样,你可以在读写模式下重新挂载其余文件系统。
#### 将 Debian 9 引导到救援和紧急模式
重启 Debian 9.x 服务器并进入 GRUB 页面选择 “Debian GNU/Linux”。

按下 `e` 并移动到以 `linux` 开头的行尾并添加 `systemd.unit=rescue.target` 以在救援模式下启动系统;要在紧急模式下启动,则添加 `systemd.unit=emergency.target`。
#### 救援模式:

现在按 `Ctrl-x` 或 `F10` 以救援模式启动

按下回车键以获取 shell,然后从这里开始故障排除。
#### 紧急模式:

现在按下 `ctrl-x` 或 `F10` 以紧急模式启动系统

按下回车获取 shell 并使用 `mount -o remount,rw /` 命令以读写模式挂载根文件系统。
**注意:**如果已经在 Ubuntu 18.04 和 Debian 9 Server 中设置了 root 密码,那么你必须输入 root 密码才能在救援和紧急模式下获得 shell
就是这些了,如果您喜欢这篇文章,请分享你的反馈和评论。
---
via: <https://www.linuxtechi.com/boot-ubuntu-18-04-debian-9-rescue-emergency-mode/>
作者:[Pradeep Kumar](http://www.linuxtechi.com/author/pradeep/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
10,121 | 在 Ubuntu 18.04 LTS 上使用 KVM 配置无头虚拟化服务器 | https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/ | 2018-10-16T11:19:10 | [
"KVM",
"虚拟化"
] | https://linux.cn/article-10121-1.html | 
我们已经讲解了 [在 Ubuntu 18.04 无头服务器上配置 Oracle VirtualBox](https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/) 。在本教程中,我们将讨论如何使用 **KVM** 去配置无头虚拟化服务器,以及如何从一个远程客户端去管理访客系统。正如你所知道的,KVM(**K**ernel-based **v**irtual **m**achine)是开源的,是 Linux 上的全虚拟化。使用 KVM,我们可以在几分钟之内,很轻松地将任意 Linux 服务器转换到一个完全的虚拟化环境中,以及部署不同种类的虚拟机,比如 GNU/Linux、\*BSD、Windows 等等。
### 使用 KVM 配置无头虚拟化服务器
我在 Ubuntu 18.04 LTS 服务器上测试了本指南,但是它在其它的 Linux 发行版上也可以使用,比如,Debian、CentOS、RHEL 以及 Scientific Linux。这个方法完全适合那些希望在没有任何图形环境的 Linux 服务器上,去配置一个简单的虚拟化环境的人。
基于本指南的目的,我将使用两个系统。
**KVM 虚拟化服务器:**
* **宿主机操作系统** – 最小化安装的 Ubuntu 18.04 LTS(没有 GUI)
* **宿主机操作系统的 IP 地址**:192.168.225.22/24
* **访客操作系统**(它将运行在 Ubuntu 18.04 的宿主机上):Ubuntu 16.04 LTS server
**远程桌面客户端:**
* **操作系统** – Arch Linux
### 安装 KVM
首先,我们先检查一下我们的系统是否支持硬件虚拟化。为此,需要在终端中运行如下的命令:
```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```
假如结果是 `0`,说明系统不支持硬件虚拟化,或者在 BIOS 中禁用了虚拟化。进入你的系统 BIOS 并检查虚拟化选项,然后启用它。
假如结果是 `1` 或者 **更大的数**,说明系统将支持硬件虚拟化。然而,在你运行上面的命令之前,你需要始终保持 BIOS 中的虚拟化选项是启用的。
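如果想把这个计数结果解释成更可读的信息,可以写一个小函数(`check_virt` 这个名字是本文虚构的,仅作演示),从标准输入读取 `/proc/cpuinfo` 的内容:

```
check_virt() {
  # grep -Ec 统计含 vmx(Intel)或 svm(AMD)标志的行数;无匹配时打印 0
  n=$(grep -Ec '(vmx|svm)' || true)
  if [ "${n:-0}" -gt 0 ]; then
    echo "硬件虚拟化:支持(匹配 $n 行)"
  else
    echo "硬件虚拟化:不支持或已在 BIOS 中禁用"
  fi
}

# 用法:check_virt < /proc/cpuinfo
```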
或者,你也可以使用如下的命令去验证它。但是为了使用这个命令你需要先安装 KVM。
```
$ kvm-ok
```
示例输出:
```
INFO: /dev/kvm exists
KVM acceleration can be used
```
如果输出的是如下这样的错误,你仍然可以在 KVM 中运行访客虚拟机,但是它的性能将非常差。
```
INFO: Your CPU does not support KVM extensions
INFO: For more detailed results, you should run this as root
HINT: sudo /usr/sbin/kvm-ok
```
当然,还有其它的方法来检查你的 CPU 是否支持虚拟化。更多信息参考接下来的指南。
* [如何知道 CPU 是否支持虚拟技术(VT)](https://www.ostechnix.com/how-to-find-if-a-cpu-supports-virtualization-technology-vt/)
接下来,安装 KVM 和在 Linux 中配置虚拟化环境所需要的其它包。
在 Ubuntu 和其它基于 DEB 的系统上,运行如下命令:
```
$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
```
KVM 安装完成后,启动 libvirtd 服务(如果它没有启动的话):
```
$ sudo systemctl enable libvirtd
$ sudo systemctl start libvirtd
```
### 创建虚拟机
所有的虚拟机文件和其它的相关文件都保存在 `/var/lib/libvirt/` 下。ISO 镜像的默认路径是 `/var/lib/libvirt/boot/`。
首先,我们先检查一下是否有虚拟机。查看可用的虚拟机列表,运行如下的命令:
```
$ sudo virsh list --all
```
示例输出:
```
Id Name State
----------------------------------------------------
```

正如上面的截屏,现在没有可用的虚拟机。
现在,我们来创建一个。
例如,我们来创建一个有 512 MB 内存、1 个 CPU 核心、8 GB 硬盘的 Ubuntu 16.04 虚拟机。
```
$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc
```
请确保在路径 `/var/lib/libvirt/boot/` 中有一个 Ubuntu 16.04 的 ISO 镜像文件,或者在上面命令中给定的其它路径中有相应的镜像文件。
示例输出:
```
WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer.
WARNING No console to launch for the guest, defaulting to --wait -1
Starting install...
Creating domain... | 0 B 00:00:01
Domain installation still in progress. Waiting for installation to complete.
Domain has shutdown. Continuing.
Domain creation completed.
Restarting guest.
```

我们来分别讲解以上的命令和看到的每个选项的作用。
* `--name`:这个选项定义虚拟机名字。在我们的案例中,这个虚拟机的名字是 `Ubuntu-16.04`。
* `--ram=512`:给虚拟机分配 512MB 内存。
* `--vcpus=1`:指明虚拟机中 CPU 核心的数量。
* `--cpu host`:通过暴露宿主机 CPU 的配置给访客系统来优化 CPU 属性。
* `--hvm`:要求完整的硬件虚拟化。
* `--disk path`:虚拟机硬盘的位置和大小。在我们的示例中,我分配了 8GB 的硬盘。
* `--cdrom`:安装 ISO 镜像的位置。请注意你必须在这个位置真的有一个 ISO 镜像。
* `--graphics vnc`:允许 VNC 从远程客户端访问虚拟机。
### 使用 VNC 客户端访问虚拟机
现在,我们在远程桌面系统上使用 SSH 登入到 Ubuntu 服务器上(虚拟化服务器),如下所示。
```
$ ssh [email protected]
```
在这里,`sk` 是我的 Ubuntu 服务器的用户名,而 `192.168.225.22` 是它的 IP 地址。
运行如下的命令找出 VNC 的端口号。我们从一个远程系统上访问虚拟机需要它。
```
$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc
```
示例输出:
```
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
```

记下那个端口号 `5900`。安装任意的 VNC 客户端应用程序。在本指南中,我们将使用 TigerVnc。TigerVNC 是 Arch Linux 默认仓库中可用的客户端。在 Arch 上安装它,运行如下命令:
```
$ sudo pacman -S tigervnc
```
在安装有 VNC 客户端的远程客户端系统上输入如下的 SSH 端口转发命令。
```
$ ssh [email protected] -L 5900:127.0.0.1:5900
```
再强调一次,`192.168.225.22` 是我的 Ubuntu 服务器(虚拟化服务器)的 IP 地址。
然后,从你的 Arch Linux(客户端)打开 VNC 客户端。
在 VNC 服务器框中输入 `localhost:5900`,然后点击 “Connect” 按钮。

然后就像你在物理机上安装系统一样的方法开始安装 Ubuntu 虚拟机。


同样的,你可以根据你的服务器的硬件情况配置多个虚拟机。
或者,你可以使用 `virt-viewer` 实用程序在访客机器中安装操作系统。`virt-viewer` 在大多数 Linux 发行版的默认仓库中都可以找到。安装完 `virt-viewer` 之后,运行下列的命令去建立到虚拟机的访问连接。
```
$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04
```
### 管理虚拟机
使用管理用户接口 `virsh` 从命令行去管理虚拟机是非常有趣的。命令非常容易记。我们来看一些例子。
查看运行的虚拟机,运行如下命令:
```
$ sudo virsh list
```
或者,
```
$ sudo virsh list --all
```
示例输出:
```
Id Name State
----------------------------------------------------
2 Ubuntu-16.04 running
```

启动一个虚拟机,运行如下命令:
```
$ sudo virsh start Ubuntu-16.04
```
或者,也可以使用虚拟机 id 去启动它。

正如在上面的截图所看到的,Ubuntu 16.04 虚拟机的 Id 是 2。因此,启动它时,你也可以像下面一样只指定它的 ID。
```
$ sudo virsh start 2
```
重启动一个虚拟机,运行如下命令:

```
$ sudo virsh reboot Ubuntu-16.04
```
示例输出:
```
Domain Ubuntu-16.04 is being rebooted
```

暂停一个运行中的虚拟机,运行如下命令:
```
$ sudo virsh suspend Ubuntu-16.04
```
示例输出:
```
Domain Ubuntu-16.04 suspended
```
让一个暂停的虚拟机重新运行,运行如下命令:
```
$ sudo virsh resume Ubuntu-16.04
```
示例输出:
```
Domain Ubuntu-16.04 resumed
```
关闭一个虚拟机,运行如下命令:
```
$ sudo virsh shutdown Ubuntu-16.04
```
示例输出:
```
Domain Ubuntu-16.04 is being shutdown
```
完全移除一个虚拟机,运行如下的命令:
```
$ sudo virsh undefine Ubuntu-16.04
$ sudo virsh destroy Ubuntu-16.04
```
示例输出:
```
Domain Ubuntu-16.04 destroyed
```

关于它的更多选项,建议你去查看 man 手册页:
```
$ man virsh
```
今天就到这里吧。开始在你的新的虚拟化环境中玩吧。对于研究和开发者、以及测试目的,KVM 虚拟化将是很好的选择,但它能做的远不止这些。如果你有充足的硬件资源,你可以将它用于大型的生产环境中。如果你还有其它好玩的发现,不要忘记在下面的评论区留下你的高见。
谢谢!
---
via: <https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[qhwdw](https://github.com/qhwdw) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,122 | Linux 拥有了新的行为准则,但是许多人都对此表示不满 | https://itsfoss.com/linux-code-of-conduct/ | 2018-10-16T12:26:00 | [
"Linus",
"CoC"
] | https://linux.cn/article-10122-1.html |
>
> Linux 内核有了新的<ruby> 行为准则 <rt> Code of Conduct </rt></ruby>(CoC)。但在这条行为准则被签署以及发布仅仅 30 分钟之后,Linus Torvalds 就暂时离开了 Linux 内核的开发工作。因为新行为准则的作者那富有争议的过去,现在这件事成为了热点话题。许多人都对这新的行为准则表示不满。
>
>
>
如果你还不了解这件事,请参阅 [Linus Torvalds 对于自己之前的不良态度致歉并开始休假,以改善自己的行为态度](/article-10022-1.html)
### Linux 内核开发遵守的新行为准则
Linux 内核开发者并不是以前没有需要遵守的行为准则,但是之前的[<ruby> 冲突准则 <rt> code of conflict </rt></ruby>](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e)现在被替换成了以“给内核开发社区营造更加热情,更方便他人参与的氛围”为目的的行为准则。
>
> “为营造一个开放并且热情的社区环境,我们,贡献者与维护者,许诺让每一个参与进我们项目和社区的人享受一个没有骚扰的体验。无关于他们的年纪、体型、身体残疾、种族、性别、性别认知与表达、社会经验、教育水平、社会或者经济地位、国籍、外表、人种、信仰、性认同和性取向。”
>
>
>
你可以在这里阅读整篇行为准则:[Linux 行为准则](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8a104f8b5867c682d994ffa7a74093c54469c11f)。
### Linus Torvalds 是被迫道歉并且休假的吗?

这个新的行为准则由 Linus Torvalds 和 Greg Kroah-Hartman (仅次于 Torvalds 的二把手)签发。来自 Intel 的 Dan Williams 和来自 Facebook 的 Chris Mason 也是该准则的签署者之一。
如果我正确地解读了时间线,在签署这个行为准则的半小时之后,Torvalds [发送了一封邮件,对自己之前的不良态度致歉](https://lkml.org/lkml/2018/9/16/167)。他同时宣布会进行休假,以改善自己的行为态度。
不过有些人开始阅读这封邮件的话外之音,并对如下文字报以特别关注:
>
> **在这周,许多社区成员批评了我之前种种不解人意的行为。我以前在邮件里进行的,对他人轻率的批评是非专业以及不必要的**。这种情况在我将事情放在私人渠道处理的时候尤为严重。我理解这件事情的严重性,这是不好的行为,我对此感到十分抱歉。
>
>
>
他是否是因为新的行为准则被强迫做出道歉,并且决定休假,可以通过这几行来判断。这也可以让我们采取一些措施,避免 Torvalds 被新的行为准则伤害。
### 有关贡献者盟约作者 Coraline Ada Ehmke 的争议
Linux 的行为准则基于[<ruby> 贡献者盟约 <rt> Contributor Convenant </rt></ruby>1.4 版本](https://www.contributor-covenant.org/version/1/4/code-of-conduct.html)。贡献者盟约[被上百个开源项目所接纳](https://www.contributor-covenant.org/adopters),包括 Eclipse、Angular、Ruby、Kubernetes 等项目。
贡献者盟约由 [Coraline Ada Ehmke](https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke) 创作,她是一个软件工程师,开源支持者,以及 [LGBT](https://en.wikipedia.org/wiki/LGBT) 活动家。她对于促进开源世界的多样性做了显著的贡献。
Coraline 对于精英主义的反对立场同样十分鲜明。[<ruby> 精英主义 <rt> meritocracy </rt></ruby>](https://en.wikipedia.org/wiki/Meritocracy)这个词语源自拉丁文,本意为系统内的进步取决于“精英”,例如智力水平、取得的证书以及教育程度。但[类似 Coraline 的活动家们认为](https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy)唯才是用是个糟糕的体系,因为它只是通过人的智力产出来度量一个人,而并不重视他们的人性。
[](https://pbs.twimg.com/media/DnTTfi7XoAAdk08.jpg)
*图片来源:推特用户@nickmon1112*
[Linus Torvalds 不止一次地说到,他在意的只是代码而并非写代码的人](https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/)。所以很明显,这忤逆了 Coraline 有关唯才是用体系的观点。
具体来说,Coraline 那被人关注饱受争议的过去,是一个关于 [Opal 项目](https://opalrb.com/)贡献者的事件。那是一个发生[在推特上的讨论](https://twitter.com/krainboltgreene/status/611569515315507200),Elia,来自意大利的 Opal 项目核心开发者说“(那些变性人)不接受现实才是问题所在。”
Coraline 并没有参加讨论,也不是 Opal 项目的贡献者。不过作为 LGBT 活动家,她以 Elia 发表“冒犯变性人群体的发言”为由,[要求他退出 Opal 项目](https://github.com/opal/opal/issues/941)。 Coraline 和她的支持者——他们给这个项目做过贡献,通过在 GitHub 仓库平台上冗长且激烈的争论,试图将 Elia——此项目的核心开发者移出项目。
虽然 Elia 并没有离开这个项目,不过 Opal 项目的维护者同意实行一个行为准则。这个行为准则就是 Coraline 不停向维护者们宣扬的,她那著名的贡献者盟约。
不过故事到这里并没有结束。贡献者盟约稍后被更改,[加入了一些针对 Elia 的新条款](https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11)。这些新条款将行为准则的管束范围扩展到公共领域。不过这些更改稍后[被维护者们标记为恶意篡改](https://github.com/opal/opal/pull/948#issuecomment-113486020)。最后 Opal 项目摆脱了贡献者盟约,并用自己的行为准则取而代之。
这个例子非常好的说明了,某些被冒犯的少数人群——哪怕他们并没有给这个项目做过一点贡献,是怎样试图去驱逐这个项目的核心开发者的。
### 人们对于 Linux 新的行为准则的以及 Torvalds 道歉的反映。
Linux 行为准则以及 Torvalds 的道歉一发布,社交媒体与论坛上就开始盛传种种谣言与[推测](https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/)。虽然很多人对新的行为准则感到满意,但仍有些人认为这是 [SJW 尝试渗透 Linux 社区](https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/)的阴谋。(LCTT 译注:SJW——Social Justice Warrior 所谓“为社会正义而战的人”。)
Caroline 发布的一个富有嘲讽意味的推特让争论愈发激烈。
>
> 我迫不及待期待看到大批的人离开 Linux 社区的场景了。现在它已经被 SJW 的成员渗透了。哈哈哈哈。
> [pic.twitter.com/eFeY6r4ENv](https://t.co/eFeY6r4ENv)
>
>
> — Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018](https://twitter.com/CoralineAda/status/1041441155874009093?ref_src=twsrc%5Etfw)
>
>
>
随着对于 Linux 行为准则的争论持续发酵,Carolaine 公开宣称贡献者盟约是一份政治文件。这并不能被那些试图将政治因素排除在开源项目之外的人所接收。
>
> 有些人说贡献者盟约是一份政治文件,他们说的没错。
>
>
> — Coraline Ada Ehmke (@CoralineAda) [9 月 16 日, 2018](https://twitter.com/CoralineAda/status/1041465346656530432?ref_src=twsrc%5Etfw)
>
>
>
Nick Monroe,一位自由记者,宣称 Linux 行为准则远没有表面上看上去那么简单。为了证明自己的观点,他挖掘出了 Coraline 的过去。如果您愿意,可以阅读以下材料。
>
> 好啦,你们已经看到过几千次了。这是一个行为准则。
>
>
> 它包含了社会认同的正义行为。<https://t.co/KuQqeriYeJ>
>
>
> 不过它或许没有看上去来的那么简单。[pic.twitter.com/8NUL2K1gu2](https://t.co/8NUL2K1gu2)
>
>
> — Nick Monroe (@nickmon1112) [9 月 17 日, 2018](https://twitter.com/nickmon1112/status/1041668315947708416?ref_src=twsrc%5Etfw)
>
>
>
Nick 并不是唯一一个反对 Linux 新的行为准则的人。[SJW](https://www.urbandictionary.com/define.php?term=SJW) 的参与引发了更多的阴谋论猜测。
>
> 我猜今天关于 Linux 的大新闻就是现在,Linux 内核被一个 “<ruby> 后精英政治 <rt> post meritocracy </rt></ruby>” 世界观下的行为准则给掌控了。
>
>
> 这个行为准则的宗旨看起来不错。不过在实际操作中,它们通常被当作 SJW 分子攻击他们不喜之人的工具。况且,很多人都被 SJW 分子所厌恶。
>
>
> — Mark Kern (@Grummz) [September 17, 2018](https://twitter.com/Grummz/status/1041524170331287552?ref_src=twsrc%5Etfw)
>
>
>
虽然很多人对于 Torvalds 的道歉感到欣慰,仍有一些人在责备 Torvalds 的态度。
>
> 我是不是唯一一个认为 Linus Torvalds 这十几年来的态度恰好就是 Linux 和开源“社区”特有的那种,居高临下,粗鲁,鄙视一切新人的行为作风?反正作为一个新用户,我从来没有在 Linux 社区里感受到自己是受欢迎的。
>
>
> — Jonathan Frappier (@jfrappier) [9 月 17 日, 2018](https://twitter.com/jfrappier/status/1041486055038492674?ref_src=twsrc%5Etfw)
>
>
>
还有些人并不能接受 Torvalds 的道歉。
>
> 哦快看啊,一个喜欢辱骂他人的开源软件维护者,在十几年的恶行之后,终于承认了他的行为**可能**是不正确的。
>
>
> 我关注的那些人因为这件事都惊讶到平地摔,并且决定给他(Linus Torvalds)寄饼干来庆祝。
>
>
> — Kelly Ellis (@justkelly\_ok) [9 月 17 日, 2018](https://twitter.com/justkelly_ok/status/1041522269002985473?ref_src=twsrc%5Etfw)
>
>
>
Torvalds 的道歉引起了广泛关注 ;)
>
> 我现在要在我的个人档案里写上”我不知是否该原谅 Linus Torvalds“ 吗?
>
>
> — Verónica. (@maria\_fibonacci) [9 月 17 日, 2018](https://twitter.com/maria_fibonacci/status/1041538148121997313?ref_src=twsrc%5Etfw)
>
>
>
不继续开玩笑了。有关 Linus 道歉的关注是由 Sharp 挑起的。她因为“恶劣的社区环境”于 2015 年[退出了 Linux 内核的开发](https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html)。(LCTT 译注:Sarah Sharp 现在改名为“Sage Sharp”,并要求别人称其为“them”而不是“she”或“he”。)
>
> 现在我们要面对的问题是,这个成就了 Linus,给予他肆意辱骂特权的社区能否迎来改变。不仅仅是 Linus 个人,Linux 内核开发社区也急需改变。<https://t.co/EG5KO43416>
>
>
> — Sage Sharp (@sagesharp) [9 月 17 日, 2018](https://twitter.com/_sagesharp_/status/1041480963287539712?ref_src=twsrc%5Etfw)
>
>
>
### 你对于 Linux 行为准则怎么看?
如果你问我的观点,我认为目前社区的确是需要一个行为准则。它能指导人们尊重他人,不因为他人的种族、宗教信仰、国籍、政治观点(左派或者右派)而歧视,营造出一个积极向上的社区氛围。
对于这个事件,你怎么看?你认为这个行为准则能够帮助 Linux 内核的开发,或者说因为 SJW 成员们的加入,情况会变得更糟?
在 FOSS 里我们没有行为准则,不过我们都会持着文明友好的态度讨论问题。
---
via: <https://itsfoss.com/linux-code-of-conduct/>
作者:[Abhishek Prakash](https://itsfoss.com/author/abhishek/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[thecyanbird](https://github.com/thecyanbird) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | **The Linux kernel has a new code of conduct (CoC). Linus Torvalds took a break from Linux kernel development just 30 minutes after signing this code of conduct. And since the writer of this code of conduct has had a controversial past, it has now become a point of heated discussion. With all the politics involved, not many people are happy with this new CoC.**
In case you don’t know already, [Linux creator Linus Torvalds has apologized for his past behavior and has taken a temporary break from Linux kernel development to improve his behavior](https://itsfoss.com/torvalds-takes-a-break-from-linux/).
## The new code of conduct for Linux kernel development
Linux kernel developers now have a code of conduct. It’s not like they didn’t have a code before, but the previous [code of conflict](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/CodeOfConflict?id=ddbd2b7ad99a418c60397901a0f3c997d030c65e) is now replaced by this new code of conduct to “help make the kernel community a welcoming environment to participate in.”
“In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.”
You can read the entire code of conduct on this commit page.
**Recommended Read:**
## Was Linus Torvalds forced to apologize and take a break?

The code of conduct was signed off by Linus Torvalds and Greg Kroah-Hartman (kind of the second-in-command after Torvalds). Dan Williams of Intel and Chris Mason from Facebook were some of the other signatories.
If I have read through the timeline correctly, half an hour after signing this code of conduct Torvalds sent an [email apologizing for his past behavior](https://lkml.org/lkml/2018/9/16/167). He also announced that he would take a temporary break to improve his behavior.
But at this point some people started reading between the lines, paying special attention to this line from his email:
This week people in our community confronted me about my lifetime ofnot understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.
This particular line could be read as if he was coerced into apologizing and taking a break because of the new code of conduct. Though it could also be a precautionary measure to prevent Torvalds from violating the newly created code of conduct.
**Recommended Read:**
## The controversy around Contributor Covenant creator Coraline Ada Ehmke
The Linux code of conduct is based on the [Contributor Covenant, version 1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct.html). The Contributor Covenant has been adopted by hundreds of open-source projects. Eclipse, Angular, Ruby and Kubernetes are some of the [many adopters of the Contributor Covenant](https://www.contributor-covenant.org/adopters).
The Contributor Covenant was created by [Coraline Ada Ehmke](https://en.wikipedia.org/wiki/Coraline_Ada_Ehmke), a software developer, open source advocate, and [LGBT](https://en.wikipedia.org/wiki/LGBT) activist. She has been instrumental in promoting diversity in the open source world.
Coraline has also been vocal about her stance against [meritocracy](https://en.wikipedia.org/wiki/Meritocracy). The Latin word meritocracy originally refers to a “system under which advancement within the system turns on ‘merits’, like intelligence, credentials, and education.” Activists like [Coraline believe](https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy) that meritocracy is a negative system where the worth of an individual is measured not by their humanity, but solely by their intellectual output.
Remember that [Linus Torvalds has repeatedly said that he cares about the code, not the person who writes it](https://arstechnica.com/information-technology/2015/01/linus-torvalds-on-why-he-isnt-nice-i-dont-care-about-you/).
Coraline was previously involved in an incident with a contributor to the [Opal project](https://opalrb.com/). In a [discussion taking place on Twitter](https://twitter.com/krainboltgreene/status/611569515315507200), Elia, a core contributor to the Opal project from Italy, said “(trans people) not accepting reality is the problem here”.
Coraline wasn’t originally part of this discussion nor was she a contributor to the Opal project. But as an LGBT activist, she got involved, [requesting that Elia be removed from the Opal Project](https://github.com/opal/opal/issues/941) for his “views against trans people”. A lengthy and heated discussion took place on Opal’s GitHub repository. Coraline and her supporters, who had never contributed to Opal, tried to coerce the moderators into removing Elia, a core contributor to the project.
While Elia wasn’t removed from the project, the Opal project maintainers agreed to put a code of conduct in place. And this code of conduct was the very same Contributor Covenant, which Coraline had pitched to the maintainers herself.
But the story didn’t end here. The Contributor Covenant was then modified and a [new clause added in order to get to Elia](https://github.com/opal/opal/pull/948/commits/817321e27eccfffb3841f663815c17eecb8ef061#diff-a1ee87dafebc22cbd96979f1b2b7e837R11). The new clause widened the scope of the code of conduct to apply in public spaces. This malicious change was [spotted by the maintainers](https://github.com/opal/opal/pull/948#issuecomment-113486020) and they edited the clause. Opal eventually got rid of the Contributor Covenant and put its own guidelines in place instead.
This is a classic case where a few offended people, who had never contributed a single line of code to the project, tried to oust a core contributor.
## Reactions to the Linux Code of Conduct and Torvalds’ apology
As soon as the Linux code of conduct and Torvalds’ apology went public, social media and forums were rife with rumors and [speculations](https://www.reddit.com/r/linux/comments/9go8cp/linus_torvalds_daughter_has_signed_the/). While many people appreciated this new development, there were some who saw a conspiracy by [SJWs infiltrating Linux](https://snew.github.io/r/linux/comments/9ghrrj/linuxs_new_coc_is_a_piece_of_shit/).
A sarcastic tweet from Coraline only fueled the fire.
In the wake of the Linux CoC controversy, Coraline openly said that the Contributor Covenant code of conduct is a political document. This did not go down well with those who want to keep political stuff out of open source projects.
Nick Monroe, a freelance journalist, dug up Coraline’s past in an effort to prove his claim that there is more to the Linux CoC than meets the eye. You can read the entire thread if you want.
Nick wasn’t the only one to disapprove of the new Linux CoC. The [SJW](https://www.urbandictionary.com/define.php?term=SJW) involvement led to more skepticism.
While there were many who appreciated Torvalds’ apology, a few blamed Torvalds’ attitude:
And some were simply not amused by his apology:
The entire Torvalds apology episode has raised a genuine concern ;)
Jokes aside, genuine concern *was* raised by Sharp, who had [quit Linux Kernel development](https://www.networkworld.com/article/2988850/opensource-subnet/linux-kernel-dev-sarah-sharp-quits-citing-brutal-communications-style.html) in 2015 due to the “toxic community”.
## What do you think of the Linux Code of Conduct?
If you ask my opinion, I do think that a Code of Conduct is a requirement these days. It guides people in behaving in a respectable way and helps create a positive environment for all kinds of people irrespective of their race, ethnicity, religion, nationality or political views (both left and right).
What are your views on the entire episode? Do you think the CoC will help Linux kernel development? Or will it deteriorate with the involvement of anti-meritocracy SJWs?
We don’t have a code of conduct at It’s FOSS but let’s keep the discussion civil :) |
10,123 | 如何在 Linux 中找到并删除重复文件 | https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/ | 2018-10-16T17:07:03 | [
"删除",
"重复"
] | https://linux.cn/article-10123-1.html | 
在编辑或修改配置文件或旧文件前,我经常会把它们备份到硬盘的某个地方,因此我如果意外地改错了这些文件,我可以从备份中恢复它们。但问题是如果我忘记清理备份文件,一段时间之后,我的磁盘会被这些大量重复文件填满 —— 我觉得要么是懒得清理这些旧文件,要么是担心可能会删掉重要文件。如果你们像我一样,在类 Unix 操作系统中,大量多版本的相同文件放在不同的备份目录,你可以使用下面的工具找到并删除重复文件。
**提醒一句:**
在删除重复文件的时请尽量小心。如果你不小心,也许会导致[意外丢失数据](https://www.ostechnix.com/prevent-files-folders-accidental-deletion-modification-linux/)。我建议你在使用这些工具的时候要特别注意。
### 在 Linux 中找到并删除重复文件
出于本指南的目的,我将讨论下面的三个工具:
1. Rdfind
2. Fdupes
3. FSlint
这三个工具是自由开源的,且运行在大多数类 Unix 系统中。
#### 1. Rdfind
**Rdfind** 意即 **r**edundant **d**ata **find**(冗余数据查找),是一个通过访问目录和子目录来找出重复文件的自由开源的工具。它是基于文件内容而不是文件名来比较。Rdfind 使用**排序**算法来区分原始文件和重复文件。如果你有两个或者更多的相同文件,Rdfind 会很智能的找到原始文件并认定剩下的文件为重复文件。一旦找到副本文件,它会向你报告。你可以决定是删除还是使用[硬链接或者符号(软)链接](https://www.ostechnix.com/explaining-soft-link-and-hard-link-in-linux-with-examples/)代替它们。
**安装 Rdfind**
Rdfind 存在于 [AUR](https://aur.archlinux.org/packages/rdfind/) 中。因此,在基于 Arch 的系统中,你可以像下面一样使用任一 AUR 助手(例如 [Yay](https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/))安装它。
```
$ yay -S rdfind
```
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install rdfind
```
在 Fedora 上:
```
$ sudo dnf install rdfind
```
在 RHEL、CentOS 上:
```
$ sudo yum install epel-release
$ sudo yum install rdfind
```
**用法**
一旦安装完成,仅带上目录路径运行 Rdfind 命令就可以扫描重复文件。
```
$ rdfind ~/Downloads
```

正如你看到上面的截屏,Rdfind 命令将扫描 `~/Downloads` 目录,并将结果存储到当前工作目录下一个名为 `results.txt` 的文件中。你可以在 `results.txt` 文件中看到可能是重复文件的名字。
```
$ cat results.txt
# Automatically generated
# duptype id depth size device inode priority name
DUPTYPE_FIRST_OCCURRENCE 1469 8 9 2050 15864884 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test5.regex
DUPTYPE_WITHIN_SAME_TREE -1469 8 9 2050 15864886 1 /home/sk/Downloads/tor-browser_en-US/Browser/TorBrowser/Tor/PluggableTransports/fte/tests/dfas/test6.regex
[...]
DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
# end of file
```
通过检查 `results.txt` 文件,你可以很容易的找到那些重复文件。如果愿意你可以手动的删除它们。
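下面以上文示例输出中的几行为样本,演示如何用 `awk` 从 `results.txt` 中只筛选出被判定为重复(即非首次出现)的文件路径。注意:路径中含空格时 `$NF` 会被截断,这里仅演示思路:

```
# 按照上文展示的列格式构造一个小样本(最后一列是文件路径)
cat > results.txt <<'EOF'
# Automatically generated
# duptype id depth size device inode priority name
DUPTYPE_FIRST_OCCURRENCE 13 0 403635 2050 15740257 1 /home/sk/Downloads/Hyperledger(1).pdf
DUPTYPE_WITHIN_SAME_TREE -13 0 403635 2050 15741071 1 /home/sk/Downloads/Hyperledger.pdf
EOF

# 第一列为 DUPTYPE_WITHIN_SAME_TREE 的行即重复文件
awk '$1 == "DUPTYPE_WITHIN_SAME_TREE" { print $NF }' results.txt
```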
此外,你可在不修改其他事情情况下使用 `-dryrun` 选项找出所有重复文件,并在终端上输出汇总信息。
```
$ rdfind -dryrun true ~/Downloads
```
一旦找到重复文件,你可以使用硬链接或符号链接代替他们。
使用硬链接代替所有重复文件,运行:
```
$ rdfind -makehardlinks true ~/Downloads
```
使用符号链接/软链接代替所有重复文件,运行:
```
$ rdfind -makesymlinks true ~/Downloads
```
目录中有一些空文件,也许你想忽略他们,你可以像下面一样使用 `-ignoreempty` 选项:
```
$ rdfind -ignoreempty true ~/Downloads
```
如果你不再想要这些旧文件,删除重复文件,而不是使用硬链接或软链接代替它们。
删除重复文件,就运行:
```
$ rdfind -deleteduplicates true ~/Downloads
```
如果你不想忽略空文件,并且将它们和所有重复文件一起删除,运行:
```
$ rdfind -deleteduplicates true -ignoreempty false ~/Downloads
```
更多细节,参照帮助部分:
```
$ rdfind --help
```
手册页:
```
$ man rdfind
```
#### 2. Fdupes
**Fdupes** 是另一个在指定目录以及子目录中识别和移除重复文件的命令行工具。这是一个使用 C 语言编写的自由开源工具。Fdupes 通过对比文件大小、部分 MD5 签名、全部 MD5 签名,最后执行逐个字节对比校验来识别重复文件。
与 Rdfind 工具类似,Fdupes 附带非常少的选项来执行操作,如:
* 在目录和子目录中递归的搜索重复文件
* 从计算中排除空文件和隐藏文件
* 显示重复文件大小
* 出现重复文件时立即删除
* 使用不同的拥有者/组或权限位来排除重复文件
* 更多
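上面提到的“先比文件大小、再比 MD5 校验和”的思路,可以用标准的 GNU 工具粗略地模拟出来。以下只是演示这一算法思想,并非 Fdupes 的实现;示例中的 `demo` 目录和文件都是虚构的,且未处理含空格的路径:

```
# 构造演示数据:a.txt 和 b.txt 内容相同,c.txt 不同
mkdir -p demo
printf 'same\n' > demo/a.txt
printf 'same\n' > demo/b.txt
printf 'diff!\n' > demo/c.txt

# 第一步:找出出现不止一次的文件大小
dup_sizes=$(find demo -type f -printf '%s\n' | sort -n | uniq -d)

# 第二步:只对这些大小的文件计算 MD5,再按校验和分组列出重复项
for s in $dup_sizes; do
  find demo -type f -size "${s}c" -exec md5sum {} +
done | sort | uniq -w32 -D
```

输出中会成组列出内容相同的文件(这里是 `demo/a.txt` 和 `demo/b.txt`),而大小或内容不同的文件不会出现。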
**安装 Fdupes**
Fdupes 存在于大多数 Linux 发行版的默认仓库中。
在 Arch Linux 和它的变种如 Antergos、Manjaro Linux 上,如下使用 Pacman 安装它。
```
$ sudo pacman -S fdupes
```
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install fdupes
```
在 Fedora 上:
```
$ sudo dnf install fdupes
```
在 RHEL、CentOS 上:
```
$ sudo yum install epel-release
$ sudo yum install fdupes
```
**用法**
Fdupes 用法非常简单。仅运行下面的命令就可以在目录中找到重复文件,如:`~/Downloads`。
```
$ fdupes ~/Downloads
```
我系统中的样例输出:
```
/home/sk/Downloads/Hyperledger.pdf
/home/sk/Downloads/Hyperledger(1).pdf
```
你可以看到,在 `/home/sk/Downloads/` 目录下有一个重复文件。它仅显示了父级目录中的重复文件。如何显示子目录中的重复文件?像下面一样,使用 `-r` 选项。
```
$ fdupes -r ~/Downloads
```
现在你将看到 `/home/sk/Downloads/` 目录以及子目录中的重复文件。
Fdupes 也可用来从多个目录中迅速查找重复文件。
```
$ fdupes ~/Downloads ~/Documents/ostechnix
```
你甚至可以搜索多个目录,递归搜索其中一个目录,如下:
```
$ fdupes ~/Downloads -r ~/Documents/ostechnix
```
上面的命令将搜索 `~/Downloads` 目录,`~/Documents/ostechnix` 目录和它的子目录中的重复文件。
有时,你可能想要知道一个目录中重复文件的大小。你可以使用 `-S` 选项,如下:
```
$ fdupes -S ~/Downloads
403635 bytes each:
/home/sk/Downloads/Hyperledger.pdf
/home/sk/Downloads/Hyperledger(1).pdf
```
类似的,为了显示父目录和子目录中重复文件的大小,使用 `-Sr` 选项。
我们可以在计算时分别使用 `-n` 和 `-A` 选项排除空白文件以及排除隐藏文件。
```
$ fdupes -n ~/Downloads
$ fdupes -A ~/Downloads
```
在搜索指定目录的重复文件时,第一个命令将排除零长度文件,后面的命令将排除隐藏文件。
汇总重复文件信息,使用 `-m` 选项。
```
$ fdupes -m ~/Downloads
1 duplicate files (in 1 sets), occupying 403.6 kilobytes
```
删除所有重复文件,使用 `-d` 选项。
```
$ fdupes -d ~/Downloads
```
样例输出:
```
[1] /home/sk/Downloads/Hyperledger Fabric Installation.pdf
[2] /home/sk/Downloads/Hyperledger Fabric Installation(1).pdf
Set 1 of 1, preserve files [1 - 2, all]:
```
这个命令将提示你保留还是删除所有其他重复文件。输入任一号码保留相应的文件,并删除剩下的文件。当使用这个选项的时候需要更加注意。如果不小心,你可能会删除原文件。
如果你想要每次保留每个重复文件集合的第一个文件,且无提示的删除其他文件,使用 `-dN` 选项(不推荐)。
```
$ fdupes -dN ~/Downloads
```
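`-dN` 的“保留每组第一个文件、删除其余副本”的逻辑本身很简单,下面用一个假设性的 Python 小函数示意(它只计算该删除哪些文件,并不真正删除,路径均为示例):

```python
def files_to_delete(duplicate_sets, keep_index=0):
    """对每个重复文件组,保留 keep_index 指向的文件,返回其余待删文件列表。"""
    doomed = []
    for group in duplicate_sets:
        doomed.extend(p for i, p in enumerate(group) if i != keep_index)
    return doomed

sets = [
    ["/downloads/Hyperledger.pdf", "/downloads/Hyperledger(1).pdf"],
    ["/docs/a.txt", "/docs/a-copy.txt", "/docs/a-copy2.txt"],
]
print(files_to_delete(sets))
```

先把“待删列表”打印出来人工复查,再执行真正的删除,比直接使用无提示的 `-dN` 更安全。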
当遇到重复文件时删除它们,使用 `-I` 标志。
```
$ fdupes -I ~/Downloads
```
关于 Fdupes 的更多细节,查看帮助部分和 man 页面。
```
$ fdupes --help
$ man fdupes
```
#### 3. FSlint
**FSlint** 是另外一个查找重复文件的工具,有时我用它去掉 Linux 系统中不需要的重复文件并释放磁盘空间。不像另外两个工具,FSlint 有 GUI 和 CLI 两种模式。因此对于新手来说它更友好。FSlint 不仅仅找出重复文件,也找出坏符号链接、坏名字文件、临时文件、坏的用户 ID、空目录和非精简的二进制文件等等。
**安装 FSlint**
FSlint 存在于 [AUR](https://aur.archlinux.org/packages/fslint/),因此你可以使用任一 AUR 助手安装它。
```
$ yay -S fslint
```
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install fslint
```
在 Fedora 上:
```
$ sudo dnf install fslint
```
在 RHEL,CentOS 上:
```
$ sudo yum install epel-release
$ sudo yum install fslint
```
一旦安装完成,从菜单或者应用程序启动器启动它。
FSlint GUI 展示如下:

如你所见,FSlint 界面友好、一目了然。在 “Search path” 栏,添加你要扫描的目录路径,点击左下角 “Find” 按钮查找重复文件。验证递归选项可以在目录和子目录中递归的搜索重复文件。FSlint 将快速的扫描给定的目录并列出重复文件。

从列表中选择那些要清理的重复文件,也可以选择 “Save”、“Delete”、“Merge” 和 “Symlink” 操作他们。
在 “Advanced search parameters” 栏,你可以在搜索重复文件的时候指定排除的路径。

**FSlint 命令行选项**
FSlint 提供下面的 CLI 工具集在你的文件系统中查找重复文件。
* `findup` — 查找重复文件
* `findnl` — 查找名称规范(有问题的文件名)
* `findu8` — 查找非法的 utf8 编码的文件名
* `findbl` — 查找坏链接(有问题的符号链接)
* `findsn` — 查找同名文件(可能有冲突的文件名)
* `finded` — 查找空目录
* `findid` — 查找死用户的文件
* `findns` — 查找非精简的可执行文件
* `findrs` — 查找文件名中多余的空白
* `findtf` — 查找临时文件
* `findul` — 查找可能未使用的库
* `zipdir` — 回收 ext2 目录项下浪费的空间
所有这些工具位于 `/usr/share/fslint/fslint/fslint` 下面。
例如,在给定的目录中查找重复文件,运行:
```
$ /usr/share/fslint/fslint/findup ~/Downloads/
```
类似的,找出空目录命令是:
```
$ /usr/share/fslint/fslint/finded ~/Downloads/
```
获取每个工具更多细节,例如:`findup`,运行:
```
$ /usr/share/fslint/fslint/findup --help
```
关于 FSlint 的更多细节,参照帮助部分和 man 页。
```
$ /usr/share/fslint/fslint/fslint --help
$ man fslint
```
### 总结
现在你知道了在 Linux 中查找和删除不需要的重复文件的三个工具。这三个工具中,我经常使用 Rdfind。这并不意味着另外两个工具效率不高,只是到目前为止我个人更喜欢 Rdfind。好了,到你了。你最喜欢哪一个工具呢?为什么?在下面的评论区留言让我们知道吧。
就到这里吧。希望这篇文章对你有帮助。更多的好东西就要来了,敬请期待。
谢谢!
---
via: <https://www.ostechnix.com/how-to-find-and-delete-duplicate-files-in-linux/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[pygmalion666](https://github.com/pygmalion666) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,124 | 什么是行为驱动的 Python? | https://opensource.com/article/18/5/behavior-driven-python | 2018-10-16T18:27:45 | [
"BDD",
"测试"
] | https://linux.cn/article-10124-1.html |
>
> 使用 Python behave 框架的行为驱动开发模式可以帮助你的团队更好的协作和测试自动化。
>
>
>

您是否听说过<ruby> <a href="https://automationpanda.com/bdd/"> 行为驱动开发 </a> <rt> behavior-driven development </rt></ruby>(BDD),并好奇这是个什么东西?也许你发现了团队成员在谈论“嫩瓜”(LCTT 译注:“<ruby> 嫩瓜 <rt> gherkin </rt></ruby>” 是一种简单的英语文本语言,工具 cucumber 通过解释它来执行测试脚本,见下文),而你却不知所云。或许你是一个 <ruby> Python 人 <rt> Pythonista </rt></ruby>,正在寻找更好的方法来测试你的代码。 无论在什么情况下,了解 BDD 都可以帮助您和您的团队实现更好的协作和测试自动化,而 Python 的 [behave](https://behave.readthedocs.io/en/latest/) 框架是一个很好的起点。
### 什么是 BDD?
在软件中,*行为*是指在明确定义的输入、动作和结果场景中功能是如何运转的。 产品可以表现出无数的行为,例如:
* 在网站上提交表单
* 搜索想要的结果
* 保存文档
* 进行 REST API 调用
* 运行命令行界面命令
根据产品的行为定义产品的功能可以更容易地描述产品,开发产品并对其进行测试。 这是BDD的核心:使行为成为软件开发的焦点。 在开发早期使用[示例规范](https://en.wikipedia.org/wiki/Specification_by_example)的语言来定义行为。 最常见的行为规范语言之一是[Gherkin](https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/),来自 [Cucumber](https://cucumber.io/) 项目中的 Given-When-Then 场景格式。 行为规范基本上是对行为如何工作的简单语言描述,具有一致性和聚焦点的一些正式结构。 通过将步骤文本“粘合”到代码实现,测试框架可以轻松地自动化这些行为规范。
下面是用 Gherkin 编写的行为规范的示例:
```
Scenario: Basic DuckDuckGo Search
Given the DuckDuckGo home page is displayed
When the user searches for "panda"
Then results are shown for "panda"
```
快速浏览一下,行为是直观易懂的。 除少数关键字外,该语言为自由格式。 场景简洁而有意义。 一个真实的例子说明了这种行为。 步骤以声明的方式表明应该发生什么——而不会陷入如何如何的细节中。
[BDD 的主要优点](https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/)是良好的协作和自动化。 每个人都可以为行为开发做出贡献,而不仅仅是程序员。从流程开始就定义并理解预期的行为。测试可以与它们涵盖的功能一起自动化。每个测试都包含一个单一的、独特的行为,以避免重复。最后,现有的步骤可以通过新的行为规范重用,从而产生雪球效果。
### Python 的 behave 框架
behave 是 Python 中最流行的 BDD 框架之一。 它与其他基于 Gherkin 的 Cucumber 框架非常相似,尽管没有得到官方的 Cucumber 定名。 behave 有两个主要层:
1. 用 Gherkin 的 `.feature` 文件编写的行为规范
2. 用 Python 模块编写的步骤定义和钩子,用于实现 Gherkin 步骤
如上例所示,Gherkin 场景有三部分格式:
1. 鉴于(Given)一些初始状态
2. 每当(When)行为发生时
3. 然后(Then)验证结果
当 behave 运行测试时,每个步骤由装饰器“粘合”到 Python 函数。
### 安装
作为先决条件,请确保在你的计算机上安装了 Python 和 `pip`。我强烈建议使用 Python 3。(我还建议使用 [pipenv](https://docs.pipenv.org/),但以下示例命令使用更基本的 `pip`。)
behave 框架只需要一个包:
```
pip install behave
```
其他包也可能有用,例如:
```
pip install requests # 用于调用 REST API
pip install selenium # 用于 web 浏览器交互
```
GitHub 上的 [behavior-driven-Python](https://github.com/AndyLPK247/behavior-driven-python) 项目包含本文中使用的示例。
### Gherkin 特点
behave 框架使用的 Gherkin 语法实际上是符合官方的 Cucumber Gherkin 标准的。`.feature` 文件包含功能(`Feature`)部分,其中又包含带有 Given-When-Then 步骤的场景(`Scenario`)部分。以下是一个例子:
```
Feature: Cucumber Basket
As a gardener,
I want to carry many cucumbers in a basket,
So that I don’t drop them all.
@cucumber-basket
Scenario: Add and remove cucumbers
Given the basket is empty
When "4" cucumbers are added to the basket
And "6" more cucumbers are added to the basket
But "3" cucumbers are removed from the basket
Then the basket contains "7" cucumbers
```
这里有一些重要的事情需要注意:
* `Feature` 和 `Scenario` 部分都有[简短的描述性标题](https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/)。
* 紧跟在 `Feature` 标题后面的行是会被 behave 框架忽略掉的注释。将功能描述放在那里是一种很好的做法。
* `Scenario` 和 `Feature` 可以有标签(注意 `@cucumber-basket` 标记)用于钩子和过滤(如下所述)。
* 步骤都遵循[严格的 Given-When-Then 顺序](https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/)。
* 使用 `And` 和 `But` 可以为任何类型添加附加步骤。
* 可以使用输入对步骤进行参数化——注意双引号里的值。
通过使用场景大纲(`Scenario Outline`),场景也可以写为具有多个输入组合的模板:
```
Feature: Cucumber Basket
@cucumber-basket
Scenario Outline: Add cucumbers
Given the basket has "<initial>" cucumbers
When "<more>" cucumbers are added to the basket
Then the basket contains "<total>" cucumbers
Examples: Cucumber Counts
| initial | more | total |
| 0 | 1 | 1 |
| 1 | 2 | 3 |
| 5 | 4 | 9 |
```
场景大纲总是有一个示例(`Examples`)表,其中第一行给出列标题,后续每一行给出一个输入组合。 只要列标题出现在由尖括号括起的步骤中,行值就会被替换。 在上面的示例中,场景将运行三次,因为有三行输入组合。 场景大纲是避免重复场景的好方法。
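场景大纲的本质就是用同一段步骤逻辑跑多组数据。下面这个独立的 Python 草图(不依赖 behave,其中的篮子类是一个假设的最小实现)演示了与上面 Examples 表等价的数据驱动测试:

```python
class CucumberBasket:
    """一个假设的篮子实现,仅用于演示。"""
    def __init__(self, initial_count=0):
        self.count = initial_count

    def add(self, some):
        self.count += some

examples = [  # 对应 Examples 表:initial | more | total
    (0, 1, 1),
    (1, 2, 3),
    (5, 4, 9),
]

for initial, more, total in examples:
    basket = CucumberBasket(initial_count=initial)  # Given
    basket.add(more)                                # When
    assert basket.count == total                    # Then

print("3 个输入组合全部通过")
```

behave 在运行场景大纲时做的事情与这个循环类似:为每一行数据生成并执行一次完整场景。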
Gherkin 语言还有其他元素,但这些是主要的机制。 想了解更多信息,请阅读 Automation Panda 这个网站的文章 [Gherkin by Example](https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/) 和 [Writing Good Gherkin](https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/)。
### Python 机制
每个 Gherkin 步骤必须“粘合”到步骤定义——即提供了实现的 Python 函数。 每个函数都有一个带有匹配字符串的步骤类型装饰器。它还接收共享的上下文和任何步骤参数。功能文件必须放在名为 `features/` 的目录中,而步骤定义模块必须放在名为 `features/steps/` 的目录中。 任何功能文件都可以使用任何模块中的步骤定义——它们不需要具有相同的名称。 下面是一个示例 Python 模块,其中包含 cucumber basket 功能的步骤定义。
```
from behave import *
from cucumbers.basket import CucumberBasket
@given('the basket has "{initial:d}" cucumbers')
def step_impl(context, initial):
context.basket = CucumberBasket(initial_count=initial)
@when('"{some:d}" cucumbers are added to the basket')
def step_impl(context, some):
context.basket.add(some)
@then('the basket contains "{total:d}" cucumbers')
def step_impl(context, total):
assert context.basket.count == total
```
可以使用三个[步骤匹配器](http://behave.readthedocs.io/en/latest/api.html#step-parameters):`parse`、`cfparse` 和 `re`。默认的,也是最简单的匹配器是 `parse`,如上例所示。注意如何解析参数化值并将其作为输入参数传递给函数。一个常见的最佳实践是在步骤中给参数加双引号。
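`parse` 匹配器的占位符(如 `{initial:d}`)会把匹配到的文本转换成对应类型再传给步骤函数。下面是一个不依赖第三方库的简化草图,用标准库 `re` 模拟这种“匹配并转换”的行为(仅演示原理,behave 实际使用 parse 库,支持的类型也更多):

```python
import re

def match_step(pattern, text):
    """把形如 {name:d} 的占位符翻译成正则分组,并把匹配值转换为 int。"""
    regex = re.sub(r'\{(\w+):d\}', r'(?P<\g<1>>\\d+)', pattern)
    m = re.fullmatch(regex, text)
    if m is None:
        return None  # 步骤文本与模式不匹配
    return {k: int(v) for k, v in m.groupdict().items()}

args = match_step('the basket has "{initial:d}" cucumbers',
                  'the basket has "5" cucumbers')
print(args)  # {'initial': 5}
```

这也解释了为什么给步骤参数加双引号是个好习惯:引号清晰地界定了占位符要捕获的文本范围。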
每个步骤定义函数还接收一个[上下文](http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes)变量,该变量保存当前正在运行的场景的数据,例如 `feature`、`scenario` 和 `tags` 字段。也可以添加自定义字段,用于在步骤之间共享数据。始终使用上下文来共享数据——永远不要使用全局变量!
behave 框架还支持[钩子](http://behave.readthedocs.io/en/latest/api.html#environment-file-functions)来处理 Gherkin 步骤之外的自动化问题。钩子是一个将在步骤、场景、功能或整个测试套件之前或之后运行的功能。钩子让人联想到[面向方面的编程](https://en.wikipedia.org/wiki/Aspect-oriented_programming)。它们应放在 `features/` 目录下的特殊 `environment.py` 文件中。钩子函数也可以检查当前场景的标签,因此可以有选择地应用逻辑。下面的示例显示了如何使用钩子为标记为 `@web` 的任何场景生成和销毁一个 Selenium WebDriver 实例。
```
from selenium import webdriver
def before_scenario(context, scenario):
if 'web' in context.tags:
context.browser = webdriver.Firefox()
context.browser.implicitly_wait(10)
def after_scenario(context, scenario):
if 'web' in context.tags:
context.browser.quit()
```
注意:也可以使用 [fixtures](http://behave.readthedocs.io/en/latest/api.html#fixtures) 进行构建和清理。
要了解一个 behave 项目应该是什么样子,这里是示例项目的目录结构:

任何 Python 包和自定义模块都可以与 behave 框架一起使用。 使用良好的设计模式构建可扩展的测试自动化解决方案。步骤定义代码应简明扼要。
### 运行测试
要从命令行运行测试,请切换到项目的根目录并运行 `behave` 命令。使用 `--help` 选项查看所有可用选项。
以下是一些常见用例:
```
# run all tests
behave
# run the scenarios in a feature file
behave features/web.feature
# run all tests that have the @duckduckgo tag
behave --tags @duckduckgo
# run all tests that do not have the @unit tag
behave --tags ~@unit
# run all tests that have @basket and either @add or @remove
behave --tags @basket --tags @add,@remove
```
为方便起见,选项可以保存在 [config](http://behave.readthedocs.io/en/latest/behave.html#configuration-files) 文件中。
### 其他选择
behave 不是 Python 中唯一的 BDD 测试框架。其他好的框架包括:
* pytest-bdd,是 pytest 的插件,和 behave 一样,它使用 Gherkin 功能文件和步骤定义模块,但它也利用了 pytest 的所有功能和插件。例如,它可以使用 pytest-xdist 并行运行 Gherkin 场景。 BDD 和非 BDD 测试也可以与相同的过滤器一起执行。pytest-bdd 还提供更灵活的目录布局。
* radish 是一个 “Gherkin 增强版”框架——它将场景循环和前提条件添加到标准的 Gherkin 语言中,这使得它对程序员更友好。它还像 behave 一样提供了丰富的命令行选项。
* lettuce 是一种较旧的 BDD 框架,与 behave 非常相似,在框架机制方面存在细微差别。然而,GitHub 上该项目最近几乎没有什么活动(截至 2018 年 5 月)。
任何这些框架都是不错的选择。
另外,请记住,Python 测试框架可用于任何黑盒测试,即使对于非 Python 产品也是如此! BDD 框架非常适合 Web 和服务测试,因为它们的测试是声明性的,而 Python 是一种[很好的测试自动化语言](https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/)。
本文基于作者的 [PyCon Cleveland 2018](https://us.pycon.org/2018/) 演讲“[行为驱动的Python](https://us.pycon.org/2018/schedule/presentation/87/)”。
---
via: <https://opensource.com/article/18/5/behavior-driven-python>
作者:[Andrew Knight](https://opensource.com/users/andylpk247) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Have you heard about [behavior-driven development](https://automationpanda.com/bdd/) (BDD) and wondered what all the buzz is about? Maybe you've caught team members talking in "gherkin" and felt left out of the conversation. Or perhaps you're a Pythonista looking for a better way to test your code. Whatever the circumstance, learning about BDD can help you and your team achieve better collaboration and test automation, and Python's
framework is a great place to start.[behave](https://behave.readthedocs.io/en/latest/)
## What is BDD?
In software, a *behavior* is how a feature operates within a well-defined scenario of inputs, actions, and outcomes. Products can exhibit countless behaviors, such as:
- Submitting forms on a website
- Searching for desired results
- Saving a document
- Making REST API calls
- Running command-line interface commands
Defining a product's features based on its behaviors makes it easier to describe them, develop them, and test them. This is the heart of BDD: making behaviors the focal point of software development. Behaviors are defined early in development using a [specification by example](https://en.wikipedia.org/wiki/Specification_by_example) language. One of the most common behavior spec languages is [Gherkin](https://automationpanda.com/2017/01/26/bdd-101-the-gherkin-language/), the Given-When-Then scenario format from the [Cucumber](https://cucumber.io/) project. Behavior specs are basically plain-language descriptions of how a behavior works, with a little bit of formal structure for consistency and focus. Test frameworks can easily automate these behavior specs by "gluing" step texts to code implementations.
Below is an example of a behavior spec written in Gherkin:
```
Scenario: Basic DuckDuckGo Search
Given the DuckDuckGo home page is displayed
When the user searches for "panda"
Then results are shown for "panda"
```
At a quick glance, the behavior is intuitive to understand. Except for a few keywords, the language is freeform. The scenario is concise yet meaningful. A real-world example illustrates the behavior. Steps declaratively indicate *what* should happen—without getting bogged down in the details of *how*.
The [main benefits of BDD](https://automationpanda.com/2017/02/13/12-awesome-benefits-of-bdd/) are good collaboration and automation. Everyone can contribute to behavior development, not just programmers. Expected behaviors are defined and understood from the beginning of the process. Tests can be automated together with the features they cover. Each test covers a singular, unique behavior in order to avoid duplication. And, finally, existing steps can be reused by new behavior specs, creating a snowball effect.
## Python's behave framework
`behave` is one of the most popular BDD frameworks in Python. It is very similar to other Gherkin-based Cucumber frameworks despite not holding the official Cucumber designation. `behave` has two primary layers:
- Behavior specs written in Gherkin `.feature` files
- Step definitions and hooks written in Python modules that implement Gherkin steps
As shown in the example above, Gherkin scenarios use a three-part format:
- Given some initial state
- When an action is taken
- Then verify the outcome
Each step is "glued" by decorator to a Python function when `behave` runs tests.
## Installation
As a prerequisite, make sure you have Python and `pip` installed on your machine. I strongly recommend using Python 3. (I also recommend using [pipenv](https://docs.pipenv.org/), but the following example commands use the more basic `pip`.)

Only one package is required for `behave`:
`pip install behave`
Other packages may also be useful, such as:
```
pip install requests # for REST API calls
pip install selenium # for Web browser interactions
```
The [behavior-driven-Python](https://github.com/AndyLPK247/behavior-driven-python) project on GitHub contains the examples used in this article.
## Gherkin features
The Gherkin syntax that `behave` uses is practically compliant with the official Cucumber Gherkin standard. A `.feature` file has Feature sections, which in turn have Scenario sections with Given-When-Then steps. Below is an example:
```
Feature: Cucumber Basket
As a gardener,
I want to carry many cucumbers in a basket,
So that I don’t drop them all.
@cucumber-basket
Scenario: Add and remove cucumbers
Given the basket is empty
When "4" cucumbers are added to the basket
And "6" more cucumbers are added to the basket
But "3" cucumbers are removed from the basket
Then the basket contains "7" cucumbers
```
There are a few important things to note here:
- Both the Feature and Scenario sections have [short, descriptive titles](https://automationpanda.com/2018/01/31/good-gherkin-scenario-titles/).
- The lines immediately following the Feature title are comments ignored by `behave`. It is a good practice to put the user story there.
- Scenarios and Features can have tags (notice the `@cucumber-basket` mark) for hooks and filtering (explained below).
- Steps follow a [strict Given-When-Then order](https://automationpanda.com/2018/02/03/are-gherkin-scenarios-with-multiple-when-then-pairs-okay/).
- Additional steps can be added for any type using `And` and `But`.
- Steps can be parametrized with inputs—notice the values in double quotes.
Scenarios can also be written as templates with multiple input combinations by using a Scenario Outline:
```
Feature: Cucumber Basket
@cucumber-basket
Scenario Outline: Add cucumbers
Given the basket has “<initial>” cucumbers
When "<more>" cucumbers are added to the basket
Then the basket contains "<total>" cucumbers
Examples: Cucumber Counts
| initial | more | total |
| 0 | 1 | 1 |
| 1 | 2 | 3 |
| 5 | 4 | 9 |
```
Scenario Outlines always have an Examples table, in which the first row gives column titles and each subsequent row gives an input combo. The row values are substituted wherever a column title appears in a step surrounded by angle brackets. In the example above, the scenario will be run three times because there are three rows of input combos. Scenario Outlines are a great way to avoid duplicate scenarios.
There are other elements of the Gherkin language, but these are the main mechanics. To learn more, read the Automation Panda articles [Gherkin by Example](https://automationpanda.com/2017/01/27/bdd-101-gherkin-by-example/) and [Writing Good Gherkin](https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/).
## Python mechanics
Every Gherkin step must be "glued" to a step definition, a Python function that provides the implementation. Each function has a step type decorator with the matching string. It also receives a shared context and any step parameters. Feature files must be placed in a directory named `features/`, while step definition modules must be placed in a directory named `features/steps/`. Any feature file can use step definitions from any module—they do not need to have the same names. Below is an example Python module with step definitions for the cucumber basket features.
```
from behave import *
from cucumbers.basket import CucumberBasket
@given('the basket has "{initial:d}" cucumbers')
def step_impl(context, initial):
context.basket = CucumberBasket(initial_count=initial)
@when('"{some:d}" cucumbers are added to the basket')
def step_impl(context, some):
context.basket.add(some)
@then('the basket contains "{total:d}" cucumbers')
def step_impl(context, total):
assert context.basket.count == total
```
Three [step matchers](http://behave.readthedocs.io/en/latest/api.html#step-parameters) are available: `parse`, `cfparse`, and `re`. The default and simplest matcher is `parse`, which is shown in the example above. Notice how parametrized values are parsed and passed into the functions as input arguments. A common best practice is to put double quotes around parameters in steps.
Each step definition function also receives a [context](http://behave.readthedocs.io/en/latest/api.html#detecting-that-user-code-overwrites-behave-context-attributes) variable that holds data specific to the current scenario being run, such as `feature`, `scenario`, and `tags` fields. Custom fields may be added, too, to share data between steps. Always use context to share data—never use global variables!
`behave` also supports [hooks](http://behave.readthedocs.io/en/latest/api.html#environment-file-functions) to handle automation concerns outside of Gherkin steps. A hook is a function that will be run before or after a step, scenario, feature, or whole test suite. Hooks are reminiscent of [aspect-oriented programming](https://en.wikipedia.org/wiki/Aspect-oriented_programming). They should be placed in a special `environment.py` file under the `features/` directory. Hook functions can check the current scenario's tags, as well, so logic can be selectively applied. The example below shows how to use hooks to set up and tear down a Selenium WebDriver instance for any scenario tagged as `@web`.
```
from selenium import webdriver
def before_scenario(context, scenario):
if 'web' in context.tags:
context.browser = webdriver.Firefox()
context.browser.implicitly_wait(10)
def after_scenario(context, scenario):
if 'web' in context.tags:
context.browser.quit()
```
Note: Setup and cleanup can also be done with [fixtures](http://behave.readthedocs.io/en/latest/api.html#fixtures) in `behave`.

To offer an idea of what a `behave` project should look like, here's the example project's directory structure:

Any Python packages and custom modules can be used with `behave`. Use good design patterns to build a scalable test automation solution. Step definition code should be concise.
## Running tests
To run tests from the command line, change to the project's root directory and run the `behave` command. Use the `--help` option to see all available options.
Below are a few common use cases:
```
# run all tests
behave
# run the scenarios in a feature file
behave features/web.feature
# run all tests that have the @duckduckgo tag
behave --tags @duckduckgo
# run all tests that do not have the @unit tag
behave --tags ~@unit
# run all tests that have @basket and either @add or @remove
behave --tags @basket --tags @add,@remove
```
For convenience, options may be saved in [config](http://behave.readthedocs.io/en/latest/behave.html#configuration-files) files.
## Other options
`behave` is not the only BDD test framework in Python. Other good frameworks include:

- [pytest-bdd](https://github.com/pytest-dev/pytest-bdd), a plugin for [pytest](https://docs.pytest.org/). Like `behave`, it uses Gherkin feature files and step definition modules, but it also leverages all the features and plugins of `pytest`. For example, it can run Gherkin scenarios in parallel using [pytest-xdist](https://github.com/pytest-dev/pytest-xdist). BDD and non-BDD tests can also be executed together with the same filters. `pytest-bdd` also offers a more flexible directory layout.
- [radish](http://radish-bdd.io/) is a "Gherkin-plus" framework—it adds Scenario Loops and Preconditions to the standard Gherkin language, which makes it more friendly to programmers. It also offers rich command line options like `behave`.
- [lettuce](http://lettuce.it/) is an older BDD framework very similar to `behave`, with minor differences in framework mechanics. However, GitHub shows little recent activity in the project (as of May 2018).
Any of these frameworks would be good choices.
Also, remember that Python test frameworks can be used for any black box testing, even for non-Python products! BDD frameworks are great for web and service testing because their tests are declarative, and Python is a [great language for test automation](https://automationpanda.com/2017/01/21/the-best-programming-language-for-test-automation/).
This article is based on the author's [PyCon Cleveland 2018](https://us.pycon.org/2018/) talk, [Behavior-Driven Python](https://us.pycon.org/2018/schedule/presentation/87/).
## Comments are closed. |
10,125 | Minikube 入门:笔记本上的 Kubernetes | https://opensource.com/article/18/10/getting-started-minikube | 2018-10-17T21:44:07 | [
"Minikube",
"K8S",
"Kubernetes"
] | /article-10125-1.html |
>
> 运行 Minikube 的分步指南。
>
>
>

在 [Hello Minikube](https://kubernetes.io/docs/tutorials/hello-minikube) 教程页面上 Minikube 被宣传为基于 Docker 运行 Kubernetes 的一种简单方法。 虽然该文档非常有用,但它主要是为 MacOS 编写的。 你可以深入挖掘在 Windows 或某个 Linux 发行版上的使用说明,但它们不是很清楚。 许多文档都是针对 Debian / Ubuntu 用户的,比如[安装 Minikube 的驱动程序](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md)。
这篇指南旨在使得在基于 RHEL/Fedora/CentOS 的操作系统上更容易安装 Minikube。
### 先决条件
1. 你已经[安装了 Docker](https://docs.docker.com/install)。
2. 你的计算机是一个基于 RHEL / CentOS / Fedora 的工作站。
3. 你已经[安装了正常运行的 KVM2 虚拟机管理程序](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver)。
4. 你有一个可以工作的 docker-machine-driver-kvm2。 以下命令将安装该驱动程序:
```
curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
chmod +x docker-machine-driver-kvm2 \
&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
&& rm docker-machine-driver-kvm2
```
### 下载、安装和启动 Minikube
1、为你即将下载的两个文件创建一个目录,两个文件分别是:[minikube](https://github.com/kubernetes/minikube/releases) 和 [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl)。
2、打开终端窗口并运行以下命令来安装 minikube。
```
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
```
请注意,minikube 版本(例如,minikube-linux-amd64)可能因计算机的规格而有所不同。
3、`chmod` 加执行权限。
```
chmod +x minikube
```
4、将文件移动到 `/usr/local/bin` 路径下,以便你能将其作为命令运行。
```
mv minikube /usr/local/bin
```
5、使用以下命令安装 `kubectl`(类似于 minikube 的安装过程)。
```
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
```
使用 `curl` 命令确定最新版本的 Kubernetes。
6、`chmod` 给 `kubectl` 加执行权限。
```
chmod +x kubectl
```
7、将 `kubectl` 移动到 `/usr/local/bin` 路径下作为命令运行。
```
mv kubectl /usr/local/bin
```
8、 运行 `minikube start` 命令。 为此,你需要有虚拟机管理程序。 我使用过 KVM2,你也可以使用 Virtualbox。 确保是以普通用户而不是 root 身份运行以下命令,以便为用户而不是 root 存储配置。
```
minikube start --vm-driver=kvm2
```
这可能需要一段时间,等一会。
9、 Minikube 应该下载并启动。 使用以下命令确保成功。
```
cat ~/.kube/config
```
10、 执行以下命令将 Minikube 设置为当前的上下文环境。上下文环境决定了 `kubectl` 与哪个集群交互。你可以在 `~/.kube/config` 文件中查看所有可用的上下文环境。
```
kubectl config use-context minikube
```
11、再次查看 `config` 文件以检查 Minikube 是否存在上下文环境。
```
cat ~/.kube/config
```
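kubeconfig 是一个 YAML 文件,其中 `contexts` 列表记录了所有可用的上下文环境。下面是一个假设性的 Python 草图,用标准库从一段 kubeconfig 样例文本中粗略提取上下文名称(仅演示文件结构;真实场景建议使用 PyYAML 等完整的 YAML 解析器):

```python
import re

SAMPLE_KUBECONFIG = """\
apiVersion: v1
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
- context:
    cluster: prod
    user: admin
  name: prod-cluster
current-context: minikube
"""

def context_names(config_text):
    """粗略提取 contexts 列表中的 name 字段(不是完整的 YAML 解析)。"""
    names = []
    in_contexts = False
    for line in config_text.splitlines():
        if not line.startswith((" ", "-")):          # 顶层键
            in_contexts = (line.rstrip() == "contexts:")
            continue
        if in_contexts:
            m = re.match(r"[-\s]+name:\s*(\S+)", line)
            if m:
                names.append(m.group(1))
    return names

print(context_names(SAMPLE_KUBECONFIG))  # ['minikube', 'prod-cluster']
```

`kubectl config use-context` 修改的就是这里的 `current-context` 字段。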
12、最后,运行以下命令打开浏览器查看 Kubernetes 仪表板。
```
minikube dashboard
```
现在 Minikube 已启动并运行,请阅读[通过 Minikube 在本地运行 Kubernetes](https://kubernetes.io/docs/setup/minikube) 这篇官网教程开始使用它。
---
via: <https://opensource.com/article/18/10/getting-started-minikube>
作者:[Bryant Son](https://opensource.com/users/brson) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Flowsnow](https://github.com/Flowsnow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
10,126 | 树莓派自建 NAS 云盘之——云盘构建 | https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi | 2018-10-17T23:19:00 | [
"NAS"
] | https://linux.cn/article-10126-1.html |
>
> 用自行托管的树莓派 NAS 云盘来保护数据的安全!
>
>
>

在前面两篇文章中,我们讨论了用树莓派搭建一个 NAS 云盘所需要的一些 [软硬件环境及其操作步骤](/article-10104-1.html?utm_source=index&utm_medium=more)。我们还制定了适当的 [备份策略](/article-10112-1.html) 来保护 NAS 上的数据。本文中,我们将介绍如何利用 [Nextcloud](https://nextcloud.com/) 方便快捷地存储、获取以及分享你的数据。

### 必要的准备工作
想要方便地使用 Nextcloud,需要一些必要的准备工作。首先,你需要一个指向 Nextcloud 的域名。方便起见,本文将使用 **nextcloud.pi-nas.com**。如果你是在家庭网络里运行,你需要为该域名配置 DNS 服务(动态域名解析服务),并在路由器中开启 80 端口和 443 端口转发功能(如果需要使用 https,则需要开启 443 端口转发;如果只用 http,80 端口足以)。
你可以使用 [ddclient](https://sourceforge.net/p/ddclient/wiki/Home/) 在树莓派中自动更新 DNS。
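动态 DNS 更新的原理是:定期向 DNS 服务商的更新接口上报当前的公网 IP。下面是一个假设性的 Python 草图,演示如何构造这类更新请求的 URL(其中的服务商地址 `dyndns.example.com` 和参数名均为虚构,仅示意原理;实际使用还是推荐 ddclient):

```python
from urllib.parse import urlencode

def build_update_url(hostname, ip, base="https://dyndns.example.com/update"):
    """构造一个(虚构的)动态 DNS 更新请求 URL。"""
    return base + "?" + urlencode({"hostname": hostname, "myip": ip})

url = build_update_url("nextcloud.pi-nas.com", "203.0.113.7")
print(url)
```

真正发请求时还需要附带服务商分配的认证凭据,这正是 ddclient 配置文件所管理的内容。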
### 安装 Nextcloud
为了在树莓派(参考 [第一篇](/article-10104-1.html?utm_source=index&utm_medium=more) 中步骤设置)中运行 Nextcloud,首先用 `apt` 命令安装以下依赖软件包。
```
sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl
```
其次,下载 Nextcloud。在树莓派中利用 `wget` 下载其 [最新的版本](https://nextcloud.com/install/#instructions-server)。在 [第一篇](/article-10104-1.html?utm_source=index&utm_medium=more) 文章中,我们将两个磁盘驱动器连接到树莓派,一个用于存储当前数据,另一个用于备份。这里在数据存储盘上安装 Nextcloud,以确保每晚自动备份数据。
```
sudo mkdir -p /nas/data/nextcloud
sudo chown pi /nas/data/nextcloud
cd /nas/data/
wget https://download.nextcloud.com/server/releases/nextcloud-14.0.0.zip -O /nas/data/nextcloud.zip
unzip nextcloud.zip
sudo ln -s /nas/data/nextcloud /var/www/nextcloud
sudo chown -R www-data:www-data /nas/data/nextcloud
```
截止到写作本文时,Nextcloud 最新版更新到如上述代码中所示的 14.0.0 版本。Nextcloud 正在快速的迭代更新中,所以你可以在你的树莓派中安装更新一点的版本。
### 配置数据库
如上所述,Nextcloud 安装完毕。之前安装依赖软件包时就已经安装了 MySQL 数据库,用来存储 Nextcloud 的一些重要数据(例如,那些你创建的可以访问 Nextcloud 的用户的信息)。如果你更愿意使用 Postgres 数据库,则上面的依赖软件包需要做一些调整。
以 root 权限启动 MySQL:
```
sudo mysql
```
这将会打开 SQL 提示符界面,在那里可以插入如下指令——使用数据库连接密码替换其中的占位符——为 Nextcloud 创建一个数据库。
```
CREATE USER nextcloud IDENTIFIED BY '<这里插入密码>';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO nextcloud;
```
按 `Ctrl+D` 或输入 `quit` 退出 SQL 提示符界面。
### Web 服务器配置
Nextcloud 可以配置以适配于 Nginx 服务器或者其他 Web 服务器运行的环境。但本文中,我决定在我的树莓派 NAS 中运行 Apache 服务器(如果你有其他效果更好的服务器选择方案,不妨也跟我分享一下)。
首先为你的 Nextcloud 域名创建一个虚拟主机,创建配置文件 `/etc/apache2/sites-available/001-nextcloud.conf`,在其中输入下面的参数内容。修改其中 `ServerName` 为你的域名。
```
<VirtualHost *:80>
ServerName nextcloud.pi-nas.com
ServerAdmin [email protected]
DocumentRoot /var/www/nextcloud/
<Directory /var/www/nextcloud/>
AllowOverride None
</Directory>
</VirtualHost>
```
使用下面的命令来启动该虚拟主机。
```
a2ensite 001-nextcloud
sudo systemctl reload apache2
```
现在,你应该可以在浏览器中输入域名访问到 web 服务器了。这里我推荐使用 HTTPS 协议而不是 HTTP 协议来访问 Nextcloud。一个简单而且免费的方法就是利用 [Certbot](https://certbot.eff.org/) 获取 [Let’s Encrypt](https://letsencrypt.org/) 证书,然后设置定时任务自动续期,这样就避免了自签名证书的麻烦。参考[如何在树莓派中安装 Certbot](https://certbot.eff.org/lets-encrypt/debianother-apache)。在配置 Certbot 的时候,你甚至可以配置将 HTTP 自动跳转到 HTTPS,例如访问 `http://nextcloud.pi-nas.com` 自动跳转到 `https://nextcloud.pi-nas.com`。注意,如果你的树莓派 NAS 运行在家庭路由器后面,别忘了设置路由器的 443 端口和 80 端口转发。
### 配置 Nextcloud
最后一步,通过浏览器访问 Nextcloud 来配置它。在浏览器中输入域名地址,填入上文中的数据库设置信息。这里,你可以创建 Nextcloud 管理员用户。默认情况下,数据保存目录在 Nextcloud 目录下,所以你也无需修改我们在 [第二篇](/article-10112-1.html) 一文中设置的备份策略。
然后,页面会跳转到 Nextcloud 登陆界面,用刚才创建的管理员用户登陆。在设置页面中会有基础操作教程和安全安装教程(这里是访问 `https://nextcloud.pi-nas.com/settings/admin`)。
恭喜你,到此为止,你已经成功在树莓派中安装了属于你自己的 Nextcloud 云盘。去 Nextcloud 主页[下载 Nextcloud 客户端](https://nextcloud.com/install/#install-clients),客户端可以同步数据并支持离线访问。你甚至可以从移动端上传照片等资源,然后在桌面电脑上访问它们。
---
via: <https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi>
作者:[Manuel Dewald](https://opensource.com/users/ntlx) 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In the first two parts of this series, we discussed the [hardware and software fundamentals](https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi) for building network-attached storage (NAS) on a Raspberry Pi. We also put a proper [backup strategy](https://opensource.com/article/18/8/automate-backups-raspberry-pi) in place to secure the data on the NAS. In this third part, we will talk about a convenient way to store, access, and share your data with [Nextcloud](https://nextcloud.com/).

## Prerequisites
To use Nextcloud conveniently, you have to meet a few prerequisites. First, you should have a domain you can use for the Nextcloud instance. For the sake of simplicity in this how-to, we'll use **nextcloud.pi-nas.com**. This domain should be directed to your Raspberry Pi. If you want to run it on your home network, you probably need to set up dynamic DNS for this domain and enable port forwarding of ports 80 and 443 (if you go for an SSL setup, which is highly recommended; otherwise port 80 should be sufficient) from your router to the Raspberry Pi.
You can automate dynamic DNS updates from the Raspberry Pi using [ddclient](https://sourceforge.net/p/ddclient/wiki/Home/).
## Install Nextcloud
To run Nextcloud on your Raspberry Pi (using the setup described in the [first part](https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi) of this series), install the following packages as dependencies to Nextcloud using **apt**.
`sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl`
The next step is to download Nextcloud. [Get the latest release's URL](https://nextcloud.com/install/#instructions-server) and copy it to download via **wget** on the Raspberry Pi. In the first article in this series, we attached two disk drives to the Raspberry Pi, one for current data and one for backups. Install Nextcloud on the data drive to make sure data is backed up automatically every night.
```
sudo mkdir -p /nas/data/nextcloud
sudo chown pi /nas/data/nextcloud
cd /nas/data/
wget https://download.nextcloud.com/server/releases/nextcloud-14.0.0.zip -O /nas/data/nextcloud.zip
unzip nextcloud.zip
sudo ln -s /nas/data/nextcloud /var/www/nextcloud
sudo chown -R www-data:www-data /nas/data/nextcloud
```
When I wrote this, the latest release (as you see in the code above) was 14. Nextcloud is under heavy development, so you may find a newer version when installing your copy of Nextcloud onto your Raspberry Pi.
## Database setup
When we installed Nextcloud above, we also installed MySQL as a dependency to use it for all the metadata Nextcloud generates (for example, the users you create to access Nextcloud). If you would rather use a Postgres database, you'll need to adjust some of the modules installed above.
To access the MySQL database as root, start the MySQL client as root:
`sudo mysql`
This will open a SQL prompt where you can insert the following commands—substituting the placeholder with the password you want to use for the database connection—to create a database for Nextcloud.
```
CREATE USER nextcloud IDENTIFIED BY '<insert-password-here>';
CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO nextcloud;
```
You can exit the SQL prompt by pressing **Ctrl+D** or entering **quit**.
## Web server configuration
Nextcloud can be configured to run using Nginx or other web servers, but for this how-to, I decided to go with the Apache web server on my Raspberry Pi NAS. (Feel free to try out another alternative and let me know if you think it performs better.)
To set it up, configure a virtual host for the domain you created for your Nextcloud instance **nextcloud.pi-nas.com**. To create a virtual host, create the file **/etc/apache2/sites-available/001-nextcloud.conf** with content similar to the following. Make sure to adjust the ServerName to your domain and paths, if you didn't use the ones suggested earlier in this series.
```
<VirtualHost *:80>
ServerName nextcloud.pi-nas.com
ServerAdmin [email protected]
DocumentRoot /var/www/nextcloud/
<Directory /var/www/nextcloud/>
AllowOverride None
</Directory>
</VirtualHost>
```
To enable this virtual host, run the following two commands.
```
sudo a2ensite 001-nextcloud
sudo systemctl reload apache2
```
With this configuration, you should now be able to reach the web server with your domain via the web browser. To secure your data, I recommend using HTTPS instead of HTTP to access Nextcloud. A very easy (and free) way is to obtain a [Let's Encrypt](https://letsencrypt.org/) certificate with [Certbot](https://certbot.eff.org/) and have a cron job automatically refresh it. That way you don't have to mess around with self-signed or expiring certificates. Follow Certbot's simple how-to [instructions to install it on your Raspberry Pi](https://certbot.eff.org/lets-encrypt/debianother-apache). During Certbot configuration, you can even decide to automatically forward HTTP to HTTPS, so visitors to **http://nextcloud.pi-nas.com** will be redirected to **https://nextcloud.pi-nas.com**. Please note, if your Raspberry Pi is running behind your home router, you must have port forwarding enabled for ports 443 and 80 to obtain Let's Encrypt certificates.
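As a sketch of the renewal cron job mentioned above (assuming a system-wide certbot install; recent certbot packages often ship a systemd timer that does this automatically, in which case no cron entry is needed):

```
# /etc/cron.d/certbot-renew (illustrative)
0 3 * * 1 root certbot renew --quiet --post-hook "systemctl reload apache2"
```

The `--post-hook` reloads Apache so a renewed certificate is picked up without a manual restart.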
## Configure Nextcloud
The final step is to visit your fresh Nextcloud instance in a web browser to finish the configuration. To do so, open your domain in a browser and insert the database details from above. You can also set up your first Nextcloud user here, the one you can use for admin tasks. By default, the data directory should be inside the Nextcloud folder, so you don't need to change anything for the backup mechanisms from the [second part of this series](https://opensource.com/article/18/8/automate-backups-raspberry-pi) to pick up the data stored by users in Nextcloud.
Afterward, you will be directed to your Nextcloud and can log in with the admin user you created previously. To see a list of recommended steps to ensure a performant and secure Nextcloud installation, visit the Basic Settings tab in the Settings page (in our example: [https://nextcloud.pi-nas.com/](https://nextcloud.pi-nas.com/)).
Congratulations! You've set up your own Nextcloud powered by a Raspberry Pi. Go ahead and [download a Nextcloud client](https://nextcloud.com/install/#install-clients) from the Nextcloud page to sync data with your client devices and access it offline. Mobile clients even provide features like instant upload of pictures you take, so they'll automatically sync to your desktop PC without wondering how to get them there.
|
10,127 | 三个开源的分布式追踪工具 | https://opensource.com/article/18/9/distributed-tracing-tools | 2018-10-18T00:04:38 | [
"分布式跟踪"
] | https://linux.cn/article-10127-1.html |
>
> 这几个工具对复杂软件系统中的实时事件做了可视化,能帮助你快速发现性能问题。
>
>
>

分布式追踪系统能够从头到尾地追踪跨越了多个应用、服务、数据库以及像代理这样的中间件的分布式软件的请求。它能帮助你更深入地理解系统中到底发生了什么。追踪系统以图形化的方式,展示出每个已知步骤以及某个请求在每个步骤上的耗时。
用户可以通过这些展示来判断系统的哪个环节有延迟或阻塞,当请求失败时,运维和开发人员可以看到准确的问题源头,而不需要去测试整个系统,比如用二叉查找树的方法去定位问题。在开发迭代的过程中,追踪系统还能够展示出可能引起性能变化的环节。通过异常行为的警告自动地感知到性能的退化,总是比客户告诉你要好。
这种追踪是怎么工作的呢?给每个请求分配一个特殊 ID,这个 ID 通常会插入到请求头部中。它唯一标识了对应的事务。一般把事务叫做<ruby> 踪迹 <rt> trace </rt></ruby>,“踪迹”是整个事务的抽象概念。每一个“踪迹”由<ruby> 单元 <rt> span </rt></ruby>组成,“单元”代表着一次请求中真正执行的操作,比如一次服务调用,一次数据库请求等。每一个“单元”也有自己唯一的 ID。“单元”之下也可以创建子“单元”,子“单元”可以有多个父“单元”。
当一次事务(或者说踪迹)运行过之后,就可以在追踪系统的表示层上搜索了。有几个工具可以用作表示层,我们下文会讨论,不过,我们先看下面的图,它是我在 [Istio walkthrough](https://www.youtube.com/watch?v=T8BbeqZ0Rls) 视频教程中提到的 [Jaeger](https://www.jaegertracing.io/) 界面,展示了单个踪迹中的多个单元。很明显,这个图能让你一目了然地对事务有更深的了解。

这个演示使用了 Istio 内置的 OpenTracing 实现,所以我甚至不需要修改自己的应用代码就可以获得追踪数据。我也用到了 Jaeger,它是兼容 OpenTracing 的。
那么 OpenTracing 到底是什么呢?我们来看看。
### OpenTracing API
[OpenTracing](http://opentracing.io/) 是源自 [Zipkin](https://zipkin.io/) 的规范,以提供跨平台兼容性。它提供了对厂商中立的 API,用来向应用程序添加追踪功能并将追踪数据发送到分布式的追踪系统。按照 OpenTracing 规范编写的库,可以被任何兼容 OpenTracing 的系统使用。采用这个开放标准的开源工具有 Zipkin、Jaeger 和 Appdash 等。甚至像 [Datadog](https://www.datadoghq.com/) 和 [Instana](https://www.instana.com/) 这种付费工具也在采用。因为现在 OpenTracing 已经无处不在,这样的趋势有望继续发展下去。
### OpenCensus
OpenTracing 已经说过了,可 [OpenCensus](https://opencensus.io/) 又是什么呢?它在搜索结果中老是出现。它是一个和 OpenTracing 完全不同或者互补的竞争标准吗?
这个问题的答案取决于你的提问对象。我先尽我所能地解释一下它们的不同(按照我的理解):OpenCensus 更加全面或者说它包罗万象。OpenTracing 专注于建立开放的 API 和规范,而不是为每一种开发语言和追踪系统都提供开放的实现。OpenCensus 不仅提供规范,还提供开发语言的实现,和连接协议,而且它不仅只做追踪,还引入了额外的度量指标,这些一般不在分布式追踪系统的职责范围。
使用 OpenCensus,我们能够在运行着应用程序的主机上查看追踪数据,但它也有个可插拔的导出器系统,用于导出数据到中心聚合器。目前 OpenCensus 团队提供的导出器包括 Zipkin、Prometheus、Jaeger、Stackdriver、Datadog 和 SignalFx,不过任何人都可以创建一个导出器。
依我看这两者有很多重叠的部分,没有哪个一定比另外一个好,但是重要的是,要知道它们做什么事情和不做什么事情。OpenTracing 主要是一个规范,具体的实现和独断的设计由其他人来做。OpenCensus 更加独断地为本地组件提供了全面的解决方案,但是仍然需要其他系统做远程的聚合。
### 可选工具
#### Zipkin
Zipkin 是最早出现的这类工具之一。 谷歌在 2010 年发表了介绍其内部追踪系统 Dapper 的[论文](https://research.google.com/archive/papers/dapper-2010-1.pdf),Twitter 以此为基础开发了 Zipkin。Zipkin 的开发语言是 Java,用 Cassandra 或 ElasticSearch 作为可扩展的存储后端,这些选择能满足大部分公司的需求。Zipkin 支持的最低 Java 版本是 Java 6,它也使用了 [Thrift](https://thrift.apache.org/) 的二进制通信协议,Thrift 在 Twitter 的系统中很流行,现在作为 Apache 项目在托管。
这个系统包括上报器(客户端)、数据收集器、查询服务和一个 web 界面。Zipkin 只传输一个带事务上下文的踪迹 ID 来告知接收者追踪的进行,所以说在生产环境中是安全的。每一个客户端收集到的数据,会异步地传输到数据收集器。收集器把这些单元的数据存到数据库,web 界面负责用可消费的格式展示这些数据给用户。客户端传输数据到收集器有三种方式:HTTP、Kafka 和 Scribe。
[Zipkin 社区](https://zipkin.io/pages/community.html) 还提供了 [Brave](https://github.com/openzipkin/brave),一个跟 Zipkin 兼容的 Java 客户端的实现。由于 Brave 没有任何依赖,所以它不会拖累你的项目,也不会使用跟你们公司标准不兼容的库来搞乱你的项目。除 Brave 之外,还有很多其他的 Zipkin 客户端实现,因为 Zipkin 和 OpenTracing 标准是兼容的,所以这些实现也能用到其他的分布式追踪系统中。流行的 Spring 框架中一个叫 [Spring Cloud Sleuth](https://cloud.spring.io/spring-cloud-sleuth/) 的分布式追踪组件,它和 Zipkin 是兼容的。
#### Jaeger
[Jaeger](https://www.jaegertracing.io/) 来自 Uber,是一个比较新的项目,[CNCF](https://www.cncf.io/)(云原生计算基金会)已经把 Jaeger 托管为孵化项目。Jaeger 使用 Golang 开发,因此你不用担心在服务器上安装依赖的问题,也不用担心开发语言的解释器或虚拟机的开销。和 Zipkin 类似,Jaeger 也支持用 Cassandra 和 ElasticSearch 做可扩展的存储后端。Jaeger 也完全兼容 OpenTracing 标准。
Jaeger 的架构跟 Zipkin 很像,有客户端(上报器)、数据收集器、查询服务和一个 web 界面,不过它还有一个在各个服务器上运行着的代理,负责在服务器本地做数据聚合。代理通过一个 UDP 连接接收数据,然后分批处理,发送到数据收集器。收集器接收到的数据是 [Thrift](https://en.wikipedia.org/wiki/Apache_Thrift) 协议的格式,它把数据存到 Cassandra 或者 ElasticSearch 中。查询服务能直接访问数据库,并给 web 界面提供所需的信息。
默认情况下,Jaeger 客户端不会采集所有的追踪数据,只抽样了 0.1% 的( 1000 个采 1 个)追踪数据。对大多数系统来说,保留所有的追踪数据并传输的话就太多了。不过,通过配置代理可以调整这个值,客户端会从代理获取自己的配置。这个抽样并不是完全随机的,并且正在变得越来越好。Jaeger 使用概率抽样,试图对是否应该对新踪迹进行抽样进行有根据的猜测。 [自适应采样已经在路线图当中](https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling),它将通过添加额外的、能够帮助做决策的上下文来改进采样算法。
#### Appdash
[Appdash](https://github.com/sourcegraph/appdash) 也是一个用 Golang 写的分布式追踪系统,和 Jaeger 一样。Appdash 是 [Sourcegraph](https://about.sourcegraph.com/) 公司基于谷歌的 Dapper 和 Twitter 的 Zipkin 开发的。同样的,它也支持 OpenTracing 标准,不过这是后来添加的功能,依赖了一个与默认组件不同的组件,因此增加了风险和复杂度。
从高层次来看,Appdash 的架构主要有三个部分:客户端、本地收集器和远程收集器。因为没有很多文档,所以这个架构描述是基于对系统的测试以及查看源码。写代码时需要把 Appdash 的客户端添加进来。Appdash 提供了 Python、Golang 和 Ruby 的实现,不过 OpenTracing 库可以与 Appdash 的 OpenTracing 实现一起使用。 客户端收集单元数据,并将它们发送到本地收集器。然后,本地收集器将数据发送到中心的 Appdash 服务器,这个服务器上运行着自己的本地收集器,它的本地收集器是其他所有节点的远程收集器。
---
via: <https://opensource.com/article/18/9/distributed-tracing-tools>
作者:[Dan Barker](https://opensource.com/users/barkerd427) 选题:[lujun9972](https://github.com/lujun9972) 译者:[belitex](https://github.com/belitex) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Distributed tracing systems enable users to track a request through a software system that is distributed across multiple applications, services, and databases as well as intermediaries like proxies. This allows for a deeper understanding of what is happening within the software system. These systems produce graphical representations that show how much time the request took on each step and list each known step.
A user reviewing this content can determine where the system is experiencing latencies or blockages. Instead of testing the system like a binary search tree when requests start failing, operators and developers can see exactly where the issues begin. This can also reveal where performance changes might be occurring from deployment to deployment. It’s always better to catch regressions automatically by alerting to the anomalous behavior than to have your customers tell you.
How does this tracing thing work? Well, each request gets a special ID that’s usually injected into the headers. This ID uniquely identifies that transaction. This transaction is normally called a trace. The trace is the overall abstract idea of the entire transaction. Each trace is made up of spans. These spans are the actual work being performed, like a service call or a database request. Each span also has a unique ID. Spans can create subsequent spans called child spans, and child spans can have multiple parents.
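To make the trace/span vocabulary concrete, here is a tiny, self-contained Python sketch of that data model. It is purely illustrative and not the API of any real tracer; the operation names and the ID format are arbitrary choices of this sketch:

```python
import time
import uuid

class Span:
    """One unit of work inside a trace: a service call, a DB query, etc."""
    def __init__(self, operation, trace_id, parent_id=None):
        self.span_id = uuid.uuid4().hex[:16]  # every span gets its own unique ID
        self.trace_id = trace_id              # shared by all spans of one transaction
        self.parent_id = parent_id            # links a child span to its parent
        self.operation = operation
        self.start = time.time()
        self.duration = None

    def finish(self):
        self.duration = time.time() - self.start

def start_trace(operation):
    """The root span mints the trace ID that would travel in request headers."""
    return Span(operation, trace_id=uuid.uuid4().hex)

def start_child(parent, operation):
    return Span(operation, trace_id=parent.trace_id, parent_id=parent.span_id)

root = start_trace("GET /checkout")
db = start_child(root, "SELECT cart items")
db.finish()
root.finish()

# One trace ID ties the spans together; each span still has a unique span ID.
assert db.trace_id == root.trace_id and db.span_id != root.span_id
```

A real client library additionally serializes the trace and span IDs into request headers so the next service can attach its own spans to the same trace.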
Once a transaction (or trace) has run its course, it can be searched in a presentation layer. There are several tools in this space that we’ll discuss later, but the picture below shows [Jaeger](https://www.jaegertracing.io/) from my [Istio walkthrough](https://www.youtube.com/watch?v=T8BbeqZ0Rls). It shows multiple spans of a single trace. The power of this is immediately clear as you can better understand the transaction’s story at a glance.

This demo uses Istio’s built-in OpenTracing implementation, so I can get tracing without even modifying my application. It also uses Jaeger, which is OpenTracing-compatible.
So what is OpenTracing? Let’s find out.
## OpenTracing API
[OpenTracing](http://opentracing.io/) is a spec that grew out of [Zipkin](https://zipkin.io/) to provide cross-platform compatibility. It offers a vendor-neutral API for adding tracing to applications and delivering that data into distributed tracing systems. A library written for the OpenTracing spec can be used with any system that is OpenTracing-compliant. Zipkin, Jaeger, and Appdash are examples of open source tools that have adopted the open standard, but even proprietary tools like [Datadog](https://www.datadoghq.com/) and [Instana](https://www.instana.com/) are adopting it. This is expected to continue as OpenTracing reaches ubiquitous status.
## OpenCensus
Okay, we have OpenTracing, but what is this [OpenCensus](https://opencensus.io/) thing that keeps popping up in my searches? Is it a competing standard, something completely different, or something complementary?
The answer depends on who you ask. I will do my best to explain the difference (as I understand it): OpenCensus takes a more holistic or all-inclusive approach. OpenTracing is focused on establishing an open API and spec and not on open implementations for each language and tracing system. OpenCensus provides not only the specification but also the language implementations and wire protocol. It also goes beyond tracing by including additional metrics that are normally outside the scope of distributed tracing systems.
OpenCensus allows viewing data on the host where the application is running, but it also has a pluggable exporter system for exporting data to central aggregators. The current exporters produced by the OpenCensus team include Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx, but anyone can create an exporter.
From my perspective, there’s a lot of overlap. One isn’t necessarily better than the other, but it’s important to know what each does and doesn’t do. OpenTracing is primarily a spec, with others doing the implementation and opinionation. OpenCensus provides a holistic approach for the local component with more opinionation but still requires other systems for remote aggregation.
## Tool options
### Zipkin
Zipkin was one of the first systems of this kind. It was developed by Twitter based on the [Google Dapper paper](https://research.google.com/archive/papers/dapper-2010-1.pdf) about the internal system Google uses. Zipkin was written using Java, and it can use Cassandra or ElasticSearch as a scalable backend. Most companies should be satisfied with one of those options. The lowest supported Java version is Java 6. It also uses the [Thrift](https://thrift.apache.org/) binary communication protocol, which is popular in the Twitter stack and is hosted as an Apache project.
The system consists of reporters (clients), collectors, a query service, and a web UI. Zipkin is meant to be safe in production by transmitting only a trace ID within the context of a transaction to inform receivers that a trace is in process. The data collected in each reporter is then transmitted asynchronously to the collectors. The collectors store these spans in the database, and the web UI presents this data to the end user in a consumable format. The delivery of data to the collectors can occur in three different methods: HTTP, Kafka, and Scribe.
The [Zipkin community](https://zipkin.io/pages/community.html) has also created [Brave](https://github.com/openzipkin/brave), a Java client implementation compatible with Zipkin. It has no dependencies, so it won’t drag your projects down or clutter them with libraries that are incompatible with your corporate standards. There are many other implementations, and Zipkin is compatible with the OpenTracing standard, so these implementations should also work with other distributed tracing systems. The popular Spring framework has a component called [Spring Cloud Sleuth](https://cloud.spring.io/spring-cloud-sleuth/) that is compatible with Zipkin.
### Jaeger
[Jaeger](https://www.jaegertracing.io/) is a newer project from Uber Technologies that the [CNCF](https://www.cncf.io/) has since adopted as an Incubating project. It is written in Golang, so you don’t have to worry about having dependencies installed on the host or any overhead of interpreters or language virtual machines. Similar to Zipkin, Jaeger also supports Cassandra and ElasticSearch as scalable storage backends. Jaeger is also fully compatible with the OpenTracing standard.
Jaeger’s architecture is similar to Zipkin, with clients (reporters), collectors, a query service, and a web UI, but it also has an agent on each host that locally aggregates the data. The agent receives data over a UDP connection, which it batches and sends to a collector. The collector receives that data in the form of the [Thrift](https://en.wikipedia.org/wiki/Apache_Thrift) protocol and stores that data in Cassandra or ElasticSearch. The query service can access the data store directly and provide that information to the web UI.
By default, a user won’t get all the traces from the Jaeger clients. The system samples 0.1% (1 in 1,000) of traces that pass through each client. Keeping and transmitting all traces would be a bit overwhelming to most systems. However, this can be increased or decreased by configuring the agents, which the client consults with for its configuration. This sampling isn’t completely random, though, and it’s getting better. Jaeger uses probabilistic sampling, which tries to make an educated guess at whether a new trace should be sampled or not. [Adaptive sampling is on its roadmap](https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling), which will improve the sampling algorithm by adding additional context for making decisions.
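The head-based probabilistic decision can be sketched in a few lines of Python. This is a toy illustration of the idea only, not Jaeger's actual implementation (which, among other things, keeps the decision consistent across all spans of a trace and supports the remote and adaptive configuration mentioned above):

```python
import random

SAMPLE_RATE = 0.001  # 1 in 1,000, matching the default described above

def should_sample(rate=SAMPLE_RATE):
    """Head-based decision, made once when the root span is created;
    every child span inherits the result."""
    return random.random() < rate

# Simulate 100,000 incoming requests and count the sampled traces.
random.seed(42)  # seeded only so the sketch is reproducible
kept = sum(should_sample() for _ in range(100000))
print(kept)  # roughly 100 traces kept out of 100,000
```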
### Appdash
[Appdash](https://github.com/sourcegraph/appdash) is a distributed tracing system written in Golang, like Jaeger. It was created by [Sourcegraph](https://about.sourcegraph.com/) based on Google’s Dapper and Twitter’s Zipkin. Similar to Jaeger and Zipkin, Appdash supports the OpenTracing standard; this was a later addition and requires a component that is different from the default component. This adds risk and complexity.
At a high level, Appdash’s architecture consists mostly of three components: a client, a local collector, and a remote collector. There’s not a lot of documentation, so this description comes from testing the system and reviewing the code. The client in Appdash gets added to your code. Appdash provides Python, Golang, and Ruby implementations, but OpenTracing libraries can be used with Appdash’s OpenTracing implementation. The client collects the spans and sends them to the local collector. The local collector then sends the data to a centralized Appdash server running its own local collector, which is the remote collector for all other nodes in the system.
|
10,128 | 使用 Python 为你的油箱加油 | https://opensource.com/article/18/10/python-gas-pump | 2018-10-18T00:37:00 | [
"Python"
] | https://linux.cn/article-10128-1.html |
>
> 我来介绍一下我是如何使用 Python 来节省成本的。
>
>
>

我最近在开一辆烧 93 号汽油的车子。根据汽车制造商的说法,它只需要加 91 号汽油就可以了。然而,在美国只能买到 87 号、89 号、93 号汽油。而我家附近的汽油的物价水平是每增加一号,每加仑就要多付 30 美分,因此如果加 93 号汽油,每加仑就要多花 60 美分。为什么不能节省一些钱呢?
一开始很简单,只需要先加满 93 号汽油,然后在油量表显示油箱半满的时候,用 89 号汽油加满,就得到一整箱 91 号汽油了。但接下来就麻烦了,剩下半箱 91 号汽油加上半箱 93 号汽油,只会变成一箱 92 号汽油,再接下来呢?如果继续算下去,只会越来越混乱。这个时候 Python 就派上用场了。
我的方案是,可以根据汽油的实时状态,不断向油箱中加入 93 号汽油或者 89 号汽油,而最终目标是使油箱内汽油的号数不低于 91。我需要做的是只是通过一些算法来判断新旧汽油混合之后的号数。使用多项式方程或许也可以解决这个问题,但如果使用 Python,好像只需要进行循环就可以了。
```
#!/usr/bin/env python
# octane.py
o = 93.0
newgas = 93.0 # 这个变量记录上一次加入的汽油号数
i = 1
while i < 21: # 20 次迭代 (加油次数)
    if newgas == 89.0: # 如果上一次加的是 89 号汽油,改加 93 号汽油
        newgas = 93.0
        o = newgas/2 + o/2 # 当油箱半满的时候就加油
    else: # 如果上一次加的是 93 号汽油,则改加 89 号汽油
        newgas = 89.0
        o = newgas/2 + o/2 # 当油箱半满的时候就加油
    print str(i) + ': '+ str(o)
    i += 1
```
在代码中,我首先将变量 `o`(油箱中的当前混合汽油号数)和变量 `newgas`(上一次加入的汽油号数)的初始值都设为 93,然后循环 20 次,也就是分别加入 89 号汽油和 93 号汽油一共 20 次,以保持混合汽油号数稳定。
```
1: 91.0
2: 92.0
3: 90.5
4: 91.75
5: 90.375
6: 91.6875
7: 90.34375
8: 91.671875
9: 90.3359375
10: 91.66796875
11: 90.333984375
12: 91.6669921875
13: 90.3334960938
14: 91.6667480469
15: 90.3333740234
16: 91.6666870117
17: 90.3333435059
18: 91.6666717529
19: 90.3333358765
20: 91.6666679382
```
从以上数据来看,只需要 10 到 15 次循环,汽油号数就比较稳定了,也相当接近 91 号汽油的目标。这种交替混合直到稳定的现象看起来很有趣,每次交替加入同等量的不同号数汽油,都会趋于稳定。实际上,即使加入的 89 号汽油和 93 号汽油的量不同,也会趋于稳定。
因此,我尝试了不同的比例,我认为加入的 93 号汽油需要比 89 号汽油更多一点。在尽量少补充新汽油的情况下,我最终计算到的结果是 89 号汽油要在油箱大约 7/12 满的时候加进去,而 93 号汽油则要在油箱 ¼ 满的时候才加进去。
我的循环将会更改成这样:
```
if newgas == 89.0:
    newgas = 93.0
    o = 3*newgas/4 + o/4
else:
    newgas = 89.0
    o = 5*newgas/12 + 7*o/12
```
以下是从第十次加油开始的混合汽油号数:
```
10: 92.5122272978
11: 91.0487992571
12: 92.5121998143
13: 91.048783225
14: 92.5121958062
15: 91.048780887
```
如你所见,这个调整会令混合汽油号数始终略高于 91。当然,我的油量表并没有 1/12 的刻度,但是 7/12 略小于 5/8,我可以近似地计算。
一个更简单地方案是每次都首先加满 93 号汽油,然后在油箱半满时加入 89 号汽油直到耗尽,这可能会是我的常规方案。就我个人而言,这种方法并不太好,有时甚至会产生一些麻烦。但对于长途旅行来说,这种方案会相对简便一些。有时我也会因为油价突然下跌而购买一些汽油,所以,这个方案是我可以考虑的一系列选项之一。
当然最重要的是:开车不写码,写码不开车!
---
via: <https://opensource.com/article/18/10/python-gas-pump>
作者:[Greg Pittman](https://opensource.com/users/greg-p) 选题:[lujun9972](https://github.com/lujun9972) 译者:[HankChow](https://github.com/HankChow) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | I recently began driving a car that had traditionally used premium gas (93 octane). According to the maker, though, it requires only 91 octane. The thing is, in the US, you can buy only 87, 89, or 93 octane. Where I live, gas prices jump 30 cents per gallon from one grade to the next, so premium costs 60 cents more than regular. So why not try to save some money?
It’s easy enough to wait until the gas gauge shows that the tank is half full and then fill it with 89 octane, and there you have 91 octane. But it gets tricky to know what to do next—half a tank of 91 octane plus half a tank of 93 ends up being 92, and where do you go from there? You can make continuing calculations, but they get increasingly messy. This is where Python came into the picture.
I wanted to come up with a simple scheme in which I could fill the tank at some level with 93 octane, then at the same or some other level with 89 octane, with the primary goal to never get below 91 octane with the final mixture. What I needed to do was create some recurring calculation that uses the previous octane value for the preceding fill-up. I suppose there would be some polynomial equation that would solve this, but in Python, this sounds like a loop.
```
#!/usr/bin/env python
# octane.py
o = 93.0
newgas = 93.0 # this represents the octane of the last fillup
i = 1
while i < 21: # 20 iterations (trips to the pump)
    if newgas == 89.0: # if the last fillup was with 89 octane
                       # switch to 93
        newgas = 93.0
        o = newgas/2 + o/2 # fill when gauge is 1/2 full
    else: # if it wasn't 89 octane, switch to that
        newgas = 89.0
        o = newgas/2 + o/2 # fill when gauge says 1/2 full
    print str(i) + ': '+ str(o)
    i += 1
```
As you can see, I am initializing the variable *o* (the current octane mixture in the tank) and the variable *newgas* (what I last filled the tank with) at the same value of 93. The loop then will repeat 20 times, for 20 fill-ups, switching from 89 octane and 93 octane for every other trip to the station.
```
1: 91.0
2: 92.0
3: 90.5
4: 91.75
5: 90.375
6: 91.6875
7: 90.34375
8: 91.671875
9: 90.3359375
10: 91.66796875
11: 90.333984375
12: 91.6669921875
13: 90.3334960938
14: 91.6667480469
15: 90.3333740234
16: 91.6666870117
17: 90.3333435059
18: 91.6666717529
19: 90.3333358765
20: 91.6666679382
```
This shows that I probably need only 10 or 15 loops to see stabilization. It also shows that soon enough, I undershoot my 91 octane target. It’s also interesting to see this stabilization of the alternating mixture values, and it turns out this happens with any scheme where you choose the same amounts each time. In fact, it is true even if the amount of the fill-up is different for 89 and 93 octane.
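The stabilization visible in these numbers is a fixed point of the alternating update, and it can be verified directly (this small derivation is mine, not part of the original article). If a is the octane right after a 93 fill-up and b the octane right after an 89 fill-up, then a = 93/2 + b/2 and b = 89/2 + a/2, which solve to a = 275/3 ≈ 91.6667 and b = 271/3 ≈ 90.3333, exactly the two values the loop alternates between:

```python
# Fixed point of the alternating half-tank scheme:
#   a = 93/2 + b/2   (octane right after a 93 fill-up)
#   b = 89/2 + a/2   (octane right after an 89 fill-up)
a = 275.0 / 3  # 91.666...
b = 271.0 / 3  # 90.333...
assert abs(a - (93.0 / 2 + b / 2)) < 1e-9
assert abs(b - (89.0 / 2 + a / 2)) < 1e-9

# Iterating the loop converges to the same pair from any starting octane,
# because each double step contracts the error by a factor of 4.
o = 93.0
for _ in range(50):
    o = 89.0 / 2 + o / 2  # 89 fill-up
    o = 93.0 / 2 + o / 2  # 93 fill-up
print(round(o, 4))  # -> 91.6667
```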
So at this point, I began playing with fractions, reasoning that I would probably need a bigger 93 octane fill-up than the 89 fill-up. I also didn’t want to make frequent trips to the gas station. What I ended up with (which seemed pretty good to me) was to wait until the tank was about 7⁄12 full and fill it with 89 octane, then wait until it was ¼ full and fill it with 93 octane.
Here is what the changes in the loop look like:
```
if newgas == 89.0:
    newgas = 93.0
    o = 3*newgas/4 + o/4
else:
    newgas = 89.0
    o = 5*newgas/12 + 7*o/12
```
Here are the numbers, starting with the tenth fill-up:
```
10: 92.5122272978
11: 91.0487992571
12: 92.5121998143
13: 91.048783225
14: 92.5121958062
15: 91.048780887
```
As you can see, this keeps the final octane very slightly above 91 all the time. Of course, my gas gauge isn’t marked in twelfths, but 7⁄12 is slightly less than 5⁄8, and I can handle that.
An alternative simple solution might have been run the tank to empty and fill with 93 octane, then next time only half-fill it for 89—and perhaps this will be my default plan. Personally, I’m not a fan of running the tank all the way down since this isn’t always convenient. On the other hand, it could easily work on a long trip. And sometimes I buy gas because of a sudden drop in prices. So in the end, this scheme is one of a series of options that I can consider.
The most important thing for Python users: Don’t code while driving!
|
10,129 | 如何在家中使用 SSH 和 SFTP 协议 | https://opensource.com/article/18/10/ssh-sftp-home-network | 2018-10-18T20:50:02 | [
"SSH",
"SFTP"
] | https://linux.cn/article-10129-1.html |
>
> 通过 SSH 和 SFTP 协议,我们能够访问其他设备,有效而且安全的传输文件等等。
>
>
>

几年前,我决定配置另外一台电脑,以便我能在工作时访问它来传输我所需要的文件。要做到这一点,最基本的一步是要求你的网络提供商(ISP)提供一个固定的地址。
有一个不必要但很重要的步骤,就是保证你的这个可以访问的系统是安全的。在我的这种情况下,我计划只在工作场所访问它,所以我能够限定访问的 IP 地址。即使如此,你依然要尽多的采用安全措施。一旦你建立起来这个系统,全世界的人们马上就能尝试访问你的系统。这是非常令人惊奇及恐慌的。你能通过日志文件来发现这一点。我推测有探测机器人在尽其所能的搜索那些没有安全措施的系统。
在我设置好系统不久后,我觉得这种访问没什么大用,为此,我将它关闭了以便不再为它操心。尽管如此,只要架设了它,在家庭网络中使用 SSH 和 SFTP 还是有点用的。
当然,有一个必备条件,这个另外的电脑必须已经开机了,至于电脑是否登录与否无所谓的。你也需要知道其 IP 地址。有两个方法能够知道,一个是通过浏览器访问你的路由器,一般情况下你的地址格式类似于 192.168.1.254 这样。通过一些搜索,很容易找出当前是开机的并且接在 eth0 或者 wifi 上的系统。如何识别你所要找到的电脑可能是个挑战。
更容易找到这个电脑的方式是,打开 shell,输入 :
```
ifconfig
```
命令会输出一些信息,你所需要的信息在 `inet` 后面,看起来和 192.168.1.234 类似。当你发现这个后,回到你要访问这台主机的客户端电脑,在命令行中输入 :
```
ssh [email protected]
```
如果要让上面的命令能够正常执行,`gregp` 必须是该主机系统中正确的用户名。你会被询问其密码。如果你键入的密码和用户名都是正确的,你将通过 shell 环境连接上了这台电脑。我坦诚,对于 SSH 我并不是经常使用的。我偶尔使用它,我能够运行 `dnf` 来更新我所常使用电脑之外的其它电脑。通常,我用 SFTP :
```
sftp [email protected]
```
我更需要用简单的方法来把一个文件传输到另一个电脑。相对于闪存棒和额外的设备,它更加方便,耗时更少。
一旦连接建立成功,SFTP 有两个基本的命令,`get`,从主机接收文件 ;`put`,向主机发送文件。在连接之前,我经常在客户端移动到我想接收或者传输的文件夹下。在连接之后,你将处于一个顶层目录里,比如 `home/gregp`。一旦连接成功,你可以像在客户端一样的使用 `cd`,改变你在主机上的工作路径。你也许需要用 `ls` 来确认你的位置。
如果你想改变你的客户端的工作目录。用 `lcd` 命令( 即 local change directory 的意思)。同样的,用 `lls` 来显示客户端工作目录的内容。
如果主机上没有你想要的目录名,你该怎么办?用 `mkdir` 在主机上创建一个新的目录。或者你可以将整个目录的文件全拷贝到主机 :
```
put -r thisDir/
```
这将在主机上创建该目录并复制它的全部文件和子目录到主机上。这种传输是非常快速的,能达到硬件的上限。不像在互联网传输一样遇到网络瓶颈。要查看你能在 SFTP 会话中能够使用的命令列表:
```
man sftp
```
我也能够在我的电脑上的 Windows 虚拟机内用 SFTP,这是配置一个虚拟机而不是一个双系统的另外一个优势。这让我能够在系统的 Linux 部分移入或者移出文件。而我只需要在 Windows 中使用一个客户端就行。
你能够使用 SSH 或 SFTP 访问通过网线或者 WIFI 连接到你路由器的任何设备。这里,我使用了一个叫做 [SSHDroid](https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid) 的应用,能够在被动模式下运行 SSH。换句话来说,你能够用你的电脑访问作为主机的 Android 设备。近来我还发现了另外一个应用,[Admin Hands](https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US),不管你的客户端是平板还是手机,都能使用 SSH 或者 SFTP 操作。这个应用对于备份和手机分享照片是极好的。
---
via: <https://opensource.com/article/18/10/ssh-sftp-home-network>
作者:[Geg Pittman](https://opensource.com/users/greg-p) 选题:[lujun9972](https://github.com/lujun9972) 译者:[singledo](https://github.com/singledo) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Years ago, I decided to set up an extra computer (I always have extra computers) so that I could access it from work to transfer files I might need. To do this, the basic first step is to have your ISP assign a fixed IP address.
The not-so-basic but much more important next step is to set up your accessible system safely. In this particular case, I was planning to access it only from work, so I could restrict access to that IP address. Even so, you want to use all possible security features. What is amazing—and scary—is that as soon as you set this up, people from all over the world will *immediately* attempt to access your system. You can discover this by checking the logs. I presume there are bots constantly searching for open doors wherever they can find them.
Not long after I set up my computer, I decided my access was more a toy than a need, so I turned it off and gave myself one less thing to worry about. Nonetheless, there is another use for SSH and SFTP inside your home network, and it is more or less already set up for you.
One requirement, of course, is that the other computer in your home must be turned on, although it doesn’t matter whether someone is logged on or not. You also need to know its IP address. There are two ways to find this out. One is to get access to the router, which you can do through a browser. Typically, its address is something like **192.168.1.254**. With some searching, it should be easy enough to find out what is currently on and hooked up to the system by eth0 or WiFi. What can be challenging is recognizing the computer you’re interested in.
I find it easier to go to the computer in question, bring up a shell, and type:
`ifconfig`
This spits out a lot of information, but the bit you want is right after `inet`
and might look something like **192.168.1.234**. After you find that, go back to the client computer you want to access this host, and on the command line, type:
`ssh [email protected]`
For this to work, **gregp** must be a valid user on that system. You will then be asked for his password, and if you enter it correctly, you will be connected to that other computer in a shell environment. I confess that I don’t use SSH in this way very often. I have used it at times so I can run `dnf`
to upgrade some other computer than the one I’m sitting at. Usually, I use SFTP:
`sftp [email protected]`
because I have a greater need for an easy method of transferring files from one computer to another. It’s certainly more convenient and less time-consuming than using a USB stick or an external drive.
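If you connect to the same machine often, an entry in the client's `~/.ssh/config` file saves retyping the user and address. The alias name below is arbitrary; the user and IP are just the example values from above:

```
# ~/.ssh/config on the client machine
Host homebox
    HostName 192.168.1.234
    User gregp
```

After that, `ssh homebox` and `sftp homebox` both work without spelling out the address.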
Once you’re connected, the two basic commands for SFTP are `get`, to receive files from the host; and `put`, to send files to the host. I usually migrate to the directory on my client where I either want to save files I will get from the host or send to the host before I connect. When you connect, you will be in the top-level directory—in this example, **home/gregp**. Once connected, you can then use `cd`
just as you would in your client, except now you’re changing your working directory on the host. You may need to use `ls`
to make sure you know where you are.
If you need to change the working directory on your client, use the command `lcd`
(as in **local change directory**). Similarly, use `lls`
to show the working directory contents on your client system.
What if the host doesn’t have a directory with the name you would like? Use `mkdir`
to make a new directory on it. Or you might copy a whole directory of files to the host with this:
`put -r ThisDir/`
which creates the directory and then copies all of its files and subdirectories to the host. These transfers are extremely fast, as fast as your hardware allows, and have none of the bottlenecks you might encounter on the internet. To see a list of commands you can use in an SFTP session, check:
`man sftp`
I have also been able to put SFTP to use on a Windows VM on my computer, yet another advantage of setting up a VM rather than a dual-boot system. This lets me move files to or from the Linux part of the system. So far I have only done this using a client in Windows.
You can also use SSH and SFTP to access any devices connected to your router by wire or WiFi. For a while, I used an app called [SSHDroid](https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid), which runs SSH in a passive mode. In other words, you use your computer to access the Android device that is the host. Recently I found another app, [Admin Hands](https://play.google.com/store/apps/details?id=com.arpaplus.adminhands&hl=en_US), where the tablet or phone is the client and can be used for either SSH or SFTP operations. This app is great for backing up or sharing photos from your phone.
|
10,130 | 如何创建和维护你自己的 man 手册 | https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/ | 2018-10-19T06:26:35 | [
"man"
] | https://linux.cn/article-10130-1.html | 
我们已经讨论了一些 [man 手册的替代方案](https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/)。 这些替代方案主要用于学习简洁的 Linux 命令示例,而无需通过全面而过于详细的手册页。 如果你正在寻找一种快速而简单的方法来轻松快速地学习 Linux 命令,那么这些替代方案值得尝试。 现在,你可能正在考虑 —— 如何为 Linux 命令创建自己的 man 式的帮助页面? 这时 “Um” 就派上用场了。 Um 是一个命令行实用程序,可以用于轻松创建和维护包含你到目前为止所了解的所有命令的 man 页面。
通过创建自己的手册页,你可以在手册页中避免大量不必要的细节,并且只包含你需要记住的内容。 如果你想创建自己的一套 man 式的页面,“Um” 也能为你提供帮助。 在这个简短的教程中,我们将学习如何安装 “Um” 命令以及如何创建自己的 man 手册页。
### 安装 Um
Um 适用于 Linux 和 Mac OS。 目前,它只能在 Linux 系统中使用 Linuxbrew 软件包管理器来进行安装。 如果你尚未安装 Linuxbrew,请参考以下链接:
* [Linuxbrew:一个用于 Linux 和 MacOS 的通用包管理器](https://www.ostechnix.com/linuxbrew-common-package-manager-linux-mac-os-x/)
安装 Linuxbrew 后,运行以下命令安装 Um 实用程序。
```
$ brew install sinclairtarget/wst/um
```
如果你看到类似下面的输出,那么恭喜你!Um 已经安装好,可以使用了。
```
[...]
==> Installing sinclairtarget/wst/um
==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
-=#=# # #
==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
######################################################################## 100.0%
==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
==> Caveats
Bash completion has been installed to:
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
==> Summary
[] /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
==> Caveats
==> openssl
A CA file has been bootstrapped using certificates from the SystemRoots
keychain. To add additional certificates (e.g. the certificates added in
the System keychain), place .pem files in
/home/linuxbrew/.linuxbrew/etc/openssl/certs
and run
/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
==> ruby
Emacs Lisp files have been installed to:
/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
==> um
Bash completion has been installed to:
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
```
在制作你的 man 手册页之前,你需要为 Um 启用 bash 补全。
要开启 bash 补全,首先你需要打开 `~/.bash_profile` 文件:
```
$ nano ~/.bash_profile
```
并在其中添加以下内容:
```
if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
. $(brew --prefix)/etc/bash_completion.d/um-completion.sh
fi
```
保存并关闭文件。运行以下命令以更新更改。
```
$ source ~/.bash_profile
```
准备工作全部完成。让我们继续创建我们的第一个 man 手册页。
### 创建并维护自己的 man 手册
如果你想为 `dpkg` 命令创建自己的 man 手册,请运行:
```
$ um edit dpkg
```
上面的命令将在默认编辑器中打开 markdown 模板:

我的默认编辑器是 Vi,因此上面的命令会在 Vi 编辑器中打开它。现在,开始在此模板中添加有关 `dpkg` 命令的所有内容。
下面是一个示例:

正如你在上图的输出中看到的,我为 `dpkg` 命令添加了概要,描述和两个参数选项。 你可以在 man 手册中添加你所需要的所有部分。不过你也要确保为每个部分提供了适当且易于理解的标题。 完成后,保存并退出文件(如果使用 Vi 编辑器,请按 `ESC` 键并键入`:wq`)。
最后,使用以下命令查看新创建的 man 手册页:
```
$ um dpkg
```

如你所见,`dpkg` 的 man 手册页看起来与官方手册页完全相同。 如果要在手册页中编辑和/或添加更多详细信息,请再次运行相同的命令并添加更多详细信息。
```
$ um edit dpkg
```
要使用 Um 查看新创建的 man 手册页列表,请运行:
```
$ um list
```
所有手册页都保存在主目录下名为 `.um` 的目录中。
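顺便一提,这些页面本质上就是普通的 Markdown 文件:从后文 `um config` 的输出可以看到,它们保存在 `pages_directory` 指定的目录中,扩展名由 `pages_ext`(默认 `.md`)决定。下面手工创建一个这样的页面文件来演示其存储方式;其中“按话题(默认 `shell`)分子目录存放”属于假设,具体以你机器上的实际目录结构为准:

```shell
# 手工模拟 Um 的页面存储(路径与按话题分目录的结构均为示例假设)。
mkdir -p ~/.um/pages/shell
cat > ~/.um/pages/shell/dpkg.md <<'EOF'
# dpkg
Debian 软件包管理工具,用于安装、删除和查询 .deb 软件包。
EOF

# 查看刚创建的页面文件
cat ~/.um/pages/shell/dpkg.md
```

如果 Um 已经安装,并且它确实按这种结构读取页面,那么此后运行 `um dpkg` 应该就能看到这份内容;这也意味着你可以直接用 Git 或网盘同步整个页面目录。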
如果你不再需要某个页面,只需像下面这样将其删除:
```
$ um rm dpkg
```
要查看帮助部分和所有可用的常规选项,请运行:
```
$ um --help
usage: um <page name>
um <sub-command> [ARGS...]
The first form is equivalent to `um read <page name>`.
Subcommands:
um (l)ist List the available pages for the current topic.
um (r)ead <page name> Read the given page under the current topic.
um (e)dit <page name> Create or edit the given page under the current topic.
um rm <page name> Remove the given page.
um (t)opic [topic] Get or set the current topic.
um topics List all topics.
um (c)onfig [config key] Display configuration environment.
um (h)elp [sub-command] Display this help message, or the help message for a sub-command.
```
### 配置 Um
要查看当前配置,请运行:
```
$ um config
Options prefixed by '*' are set in /home/sk/.um/umconfig.
editor = vi
pager = less
pages_directory = /home/sk/.um/pages
default_topic = shell
pages_ext = .md
```
在此文件中,你可以根据需要编辑和更改 `pager`、`editor`、`default_topic`、`pages_directory` 和 `pages_ext` 选项的值。 比如说,如果你想在 [Dropbox](https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/) 文件夹中保存新创建的 Um 页面,只需更改 `~/.um/umconfig` 文件中 `pages_directory` 的值并将其更改为 Dropbox 文件夹即可。
```
pages_directory = /Users/myusername/Dropbox/um
```
这就是全部内容,希望这些能对你有用,更多好的内容敬请关注!
干杯!
---
via: <https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/>
作者:[SK](https://www.ostechnix.com/author/sk/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[way-ww](https://github.com/way-ww) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
10,131 | Linux vs Mac:Linux 比 Mac 好的 7 个原因 | https://itsfoss.com/linux-vs-mac/ | 2018-10-19T22:39:00 | [
"Linux",
"Mac"
] | https://linux.cn/article-10131-1.html | 最近我们谈论了一些[为什么 Linux 比 Windows 好](https://itsfoss.com/linux-better-than-windows/)的原因。毫无疑问,Linux 是个非常优秀的平台。但是它和其它操作系统一样也会有缺点。对于某些专门的领域,像是游戏,Windows 当然更好。而对于视频编辑等任务,Mac 系统可能更为方便。这一切都取决于你的偏好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。
如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac。
### Linux 比 Mac 好的 7 个原因

Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令、bash 和其它 shell,相比于 Windows,它们所支持的应用和游戏比较少。但也就是这点比较相似。
平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。
那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。
#### 1、价格

假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。
那在这种情况下,你觉得花费几百美金买个系统完成这项工作,或者花费更多直接买个 MacBook 更好?当然,最终的决定权还是在你。
买个装好 Mac 系统的电脑?还是买个便宜的电脑,然后自己装上免费的 Linux 系统?这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 对我来说都非常好用;至于音视频制作,我更倾向于使用 Final Cut Pro(专业的视频编辑软件)和 Logic Pro X(专业的音乐制作软件)(LCTT 译注:这两款软件都是苹果公司推出的)。
#### 2、硬件支持

Linux 支持多种平台。无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了,你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅的运行](https://itsfoss.com/lightweight-linux-beginners/)。
而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备配套的。
这有一些[在非苹果系统上安装 Mac OS 的教程](https://hackintosh.com/)。这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。
总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。
#### 3、安全性

很多人都说 iOS 和 Mac 是非常安全的平台。的确,或许相比于 Windows,它确实比较安全,可并不一定有 Linux 安全。
我不是在危言耸听。Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增](https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html)。我认识一些不太懂技术的用户使用着很慢的 Mac 电脑并且为此深受折磨。一项快速调查显示[浏览器恶意劫持软件](https://www.imore.com/how-to-remove-browser-hijack)是罪魁祸首。
从来没有绝对安全的操作系统,Linux 也不例外。Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。
这可能也是一个你应该选择 Linux 而不是 Mac 的原因。
#### 4、可定制性与灵活性

如果你有不喜欢的东西,自己定制或者修改它都行。
举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境](https://www.gnome.org/),你可以换成 [KDE Plasma](https://www.kde.org/plasma-desktop)。你也可以尝试一些 [Gnome 扩展](https://itsfoss.com/best-gnome-extensions/)丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。
除此之外,你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)来打造适合你的系统。这个在 MacOS 上可以做吗?
另外你可以根据需要从一系列的 Linux 发行版进行选择。比如说,如果你喜欢 MacOS 上的工作方式,[Elementary OS](https://elementary.io/) 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里有一个[轻量级 Linux 发行版列表](https://itsfoss.com/lightweight-linux-beginners/)。相比较而言,MacOS 缺乏这种灵活性。
#### 5、使用 Linux 有助于你的职业生涯(针对 IT 行业和科学领域的学生)

对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。
但是当你开始使用 Linux 并且探索如何使用的时候,你将会积累非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行操作文件系统以及安装应用程序。你甚至不会意识到,自己已经掌握了 IT 公司新员工入职培训才会教的技能。
除此之外,Linux 在就业市场上还有很大的发展空间。Linux 相关的技术有很多(云计算、Kubernetes、系统管理等),你可以学习、考取专业技能证书并获得一份相关的高薪工作。要学习这些,你必须使用 Linux。
#### 6、可靠

想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。
但是它为什么可靠呢,相比于 MacOS,它的可靠体现在什么方面呢?
答案很简单 —— 给用户更多的控制权,同时提供更好的安全性。在 MacOS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux,你可以做任何你想做的事情 —— 这可能会导致(对某些人来说)糟糕的用户体验 —— 但它确实使其更可靠。
#### 7、开源

开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。上面讨论的大多数观点都是开源软件的直接优势。
简单解释一下,如果是开源软件,你可以自己查看或者修改源代码。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 MacOS 的源代码。
形象点说,Mac 驱动的系统可以让你得到一辆车,但缺点是你不能打开引擎盖看里面是什么。那可太差劲了!
如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章](https://opensource.com/life/15/12/why-open-source)。
### 总结
现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢?
请在下方评论让我们知道你的想法。
提示:这里的图片是以“企鹅俱乐部”为原型的。
---
via: <https://itsfoss.com/linux-vs-mac/>
作者:[Ankush Das](https://itsfoss.com/author/ankush/) 选题:[lujun9972](https://github.com/lujun9972) 译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) 校对:[pityonline](https://github.com/pityonline)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Recently, we highlighted a few points about [why Linux is better than Windows](https://itsfoss.com/linux-better-than-windows/). Unquestionably, Linux is a superior platform. But, like other operating systems, it has its drawbacks as well.
For a very particular set of tasks (*such as Gaming*), Windows OS might prove to be better. And, likewise, for another set of tasks (*such as video editing*), a Mac-powered system might come in handy.
It all trickles down to your preference and what you would like to do with your system. So, in this article, we will highlight a number of reasons why Linux is better than Mac.
If you’re already using a Mac or planning to get one, we recommend you to thoroughly analyze the reasons and decide whether you want to switch to Linux or stick with your Mac.
## 7 Reasons Why Linux is Better Than Mac

Both [Linux and macOS are Unix-like OS](https://itsfoss.com/mac-linux-difference/) and give access to Unix commands, BASH and other shells. Both of them have fewer applications and games than Windows. But the similarity ends here.
Graphic designers and video editors swear by macOS whereas Linux is a favorite among developers, sysadmins and DevOps engineers.
So the question is should you use Linux over Mac? If yes, why? Let me give you some practical and some ideological reasons why Linux is better than Mac.
### 1. Price

Let’s say you use your system to browse files, watch movies, download photos, write documents, prepare reports or do other similar stuff, and you want a secure operating system.
In that case, you could choose to spend a couple of hundred bucks for a system to get things done. Or do you think spending more for a MacBook is a good idea? Well, you are the judge.
So, it really depends on what you prefer. Whether you want to spend on a Mac-powered system or get a budget laptop/PC and install any Linux distro for free. Personally, I’ll be happy with a Linux system except for editing videos and music production. In that case, Final Cut Pro (for video editing) and Logic Pro X (for music production) will be my preference.
### 2. Hardware Choices

Linux is free. You can install it on computers with any configuration. No matter how powerful/old your system is, Linux will work. [Even if you have an 8-year old PC laying around, you can have Linux installed and expect it to run smoothly by selecting the right distro](https://itsfoss.com/lightweight-linux-beginners/).
But, Mac is as an Apple-exclusive. If you want to assemble a PC or get a budget laptop (with DOS) and expect to install Mac OS, it’s almost impossible. Mac comes baked in with the system Apple manufactures.
There are [ways to install macOS on non Apple devices](https://hackintosh.com/). However, the kind of expertise and troubles it requires, it makes you question whether it’s worth the effort.
You will have a wide range of hardware choices when you go with Linux but a minimal set of configurations when it comes to Mac OS.
### 3. Security

A lot of people are all praises for iOS and Mac for being a secure platform. Well, yes, it is secure in a way (maybe more secure than Windows OS), but probably not as secure as Linux.
I am not bluffing. There are malware and adware targeting macOS and the [number is growing every day](https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html). I have seen not-so-techie users struggling with their slow mac. A quick investigation revealed that a [browser hijacking malware](https://www.imore.com/how-to-remove-browser-hijack) was the culprit.
*There are no 100% secure operating systems and Linux is not an exception.* There are vulnerabilities in the Linux world as well but they are duly patched by the timely updates provided by Linux distributions.
Thankfully, we don’t have auto-running viruses or browser hijacking malwares in Linux world so far. And that’s one more reason why you should use Linux instead of a Mac.
### 4. Customization & Flexibility

You don’t like something? Customize it or remove it. End of the story.
For example, if you do not like the [Gnome desktop environment](https://www.gnome.org/) on Ubuntu 18.04.1, you might as well change it to [KDE Plasma](https://www.kde.org/plasma-desktop). You can also try some of the [Gnome extensions](https://itsfoss.com/best-gnome-extensions/) to enhance your desktop experience. You won’t find this level of freedom and customization on Mac OS.
Besides, you can even modify the source code of your OS to add/remove something (which requires necessary technical knowledge) and create your own custom OS. Can you do that with Mac OS?
Moreover, you get an array of Linux distributions to choose from as per your needs. For instance, if you need to mimic the workflow on Mac OS, [Elementary OS](https://elementary.io/) would help. Do you want to have a lightweight Linux distribution installed on your old PC? We’ve got you covered in our list of [lightweight Linux distros](https://itsfoss.com/lightweight-linux-beginners/). Mac OS lacks this kind of flexibility.
### 5. Using Linux helps your professional career [For IT/Tech students]

This is kind of controversial and applicable to students and job seekers in the IT field. Using Linux doesn’t make you a super-intelligent being, nor does it guarantee you any IT related job.
However, as you start using Linux and exploring it, you gain experience. As a techie, sooner or later you dive into the terminal, learning your way to move around the file system, installing applications via command line. You won’t even realize that you have learned the skills that newcomers in IT companies get trained on.
In addition to that, Linux has enormous scope in the job market. There are so many Linux related technologies (Cloud, Kubernetes, Sysadmin etc.) you can learn, earn certifications and get a nice paying job. And to learn these, you have to use Linux.
### 6. Reliability

Ever wondered why Linux is the best OS to run on any server? Because it is more reliable!
But, why is that? Why is Linux more reliable than Mac OS?
The answer is simple – more control to the user while providing better security. Mac OS does not provide you with the full control of its platform. It does that to make things easier for you and to improve your user experience. With Linux, you can do whatever you want, which may result in a (bad) user experience for some, but it makes the system more reliable.
### 7. Open Source

Open Source is something not everyone cares about. But to me, the most important aspect of Linux being a superior choice is its Open Source nature. And, most of the points discussed above are the direct advantages of an Open Source software.
To briefly explain, you get to see/modify the source code yourself if it is an open source software. But, for Mac, Apple gets an exclusive control. Even if you have the required technical knowledge, you will not be able to independently take a look at the source code of Mac OS.
In other words, a Mac-powered system enables you to get a car for yourself but, as a downside, you cannot open the hood to see what’s inside. That’s bad!
If you want to dig deeper into the benefits of open source software, you should go through [Ben Balter’s article](https://opensource.com/life/15/12/why-open-source) on OpenSource.com.
## Wrapping Up
Now that you’ve known why Linux is better than Mac OS. What do you think about it? Are these reasons enough for you to choose Linux over Mac OS? If not, then what do you prefer and why?
Let us know your thoughts in the comments below.
*Note: The artwork here is based on Club Penguins.* |