# 如何以 LaTeX 创建文档

发布日期:2017-06-26 · 标签:LaTeX、TexStudio · 链接:/article-8640-1.html
> 学习以 LaTeX 文本标记语言排版文档
LaTeX(读作 `lay-tech`)是一种使用纯文本创建文档的方法,它使用与 HTML/CSS 或 Markdown 类似的标记标签来设置样式。LaTeX 最常用于学术界(如学术期刊)的文档创建。在 LaTeX 中,作者不像在 Microsoft Word、LibreOffice Writer 或 Apple Pages 等文字处理程序中那样直接对文档进行排版,而是用纯文本编写带标记的代码,这些代码必须经过编译才能生成 PDF 文档。
### 起步
要想使用 LaTeX 来书写文档,首先你必须要安装一个 LaTeX 编辑器。我用的是一款自由开源软件(FOSS),它在学术界也大受欢迎,叫做 [TeXstudio](http://www.texstudio.org/),可以运行在 Windows、Unix/Linux、BSD 和 Mac OS X 上。同时你还需要安装一个 **TeX** 排版系统的发行版。因为我都是在 macOS 上书写文档,所以我使用的发行版是 [MacTeX 或 BasicTeX](https://www.tug.org/mactex/morepackages.html)。Windows 用户可以使用 [MiKTeX](https://miktex.org/download),Linux 用户则可以在软件仓库中找到它。
当你安装好了 TeXstudio 和某个 TeX 发行版之后,就可以开始对你的文档进行排版了。
### 创建你的第一个文档
在这个简短的教程里,我们会创建一个简单的文章,包括一个大标题、一个子标题和两个段落。
在启动 TeXstudio 后,保存一份新的文档。(我将其保存为 `helloworld.tex`,因为我正在编写本教程的 “Hello, World!” 文档,这是编程的一个传统。)接下来,你需要在你的 `.tex` 文件顶部添加一些样板代码,用于指定文档的类型和页面大小。这与 HTML5 文件中使用的样板代码类似。
我的代码(如下方)将会把页面大小设置为 A4,文本大小设置为 12pt 。 你可以直接把这些代码放入 TexStudio,并指定你自己的页面大小、字体大小、名称、标题和其他详细信息进行编辑:
```
\documentclass[a4paper,12pt]{article}
\begin{document}
\title{Hello World! My first LaTeX document}
\author{Aaron Cocker}
\date{\today}
\maketitle
content will go here
\end{document}
```
接下来,点击那个大的绿色箭头来编译该文档。就是下方截图中的中间的那个按钮。

如果这期间发生了什么错误,它将显示在底部的对话框里。
在你编译了这个文档之后,你可以看到它就像一个 PDF 一样显示在程序的 WYSIWYG (所见即所得)预览区域中。记住一旦你修改了代码就必须重新编译,就像我们在 C++ 中编程一样。
通过点击 **Tools > Commands > View PDF** 可以来预览你的文档,如下截图所示。

PDF 的输出将会显示在右侧,就像这样:

现在你可以添加一个段落。首先先通过 `\section{}` 命令来写一个子标题。在命令的大括号中输入你的子标题;我写的是 `Introduction`。
```
\section{Introduction}
```
现在你已经给你的段落标记了一个子标题,是时候来写一个段落了。在这个例子中,我使用了 Lipsum 的 [lorem ipsum 生成器](http://www.lipsum.com/feed/html)。要创建一个段落,请使用 `\paragraph{}` 命令,并把你的文本写在 `\paragraph{}` 大括号的后面(而不是大括号里面),整体放在 `\maketitle` 和 `\end{document}` 之间。
以下就是我创建的段落的代码:
```
\section{Introduction}
\paragraph{}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras lorem nisi, tincidunt tempus sem nec, elementum feugiat ipsum. Nulla in diam libero. Nunc tristique ex a nibh egestas sollicitudin.
\paragraph{}
Mauris efficitur vitae ex id egestas. Vestibulum ligula felis, pulvinar a posuere id, luctus vitae leo. Sed ac imperdiet orci, non elementum leo. Nullam molestie congue placerat. Phasellus tempor et libero maximus commodo.
```
现在你的文档就已经完成了,你可以将其通过 **Save As** 选项导出并保存为一个 PDF 文档(和大多数程序一样)。
这是一个我已经完成的文档及其相应的代码:

本教程所有的代码如下所示:
```
\documentclass[a4paper,12pt]{article}
\begin{document}
\title{Hello World! My first LaTeX document}
\author{Aaron Cocker}
\date{\today}
\maketitle
\section{Introduction}
\paragraph{}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras lorem nisi, tincidunt tempus sem nec, elementum feugiat ipsum. Nulla in diam libero. Nunc tristique ex a nibh egestas sollicitudin.
\paragraph{}
Mauris efficitur vitae ex id egestas. Vestibulum ligula felis, pulvinar a posuere id, luctus vitae leo. Sed ac imperdiet orci, non elementum leo. Nullam molestie congue placerat. Phasellus tempor et libero maximus commodo.
\end{document}
```
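如果你更喜欢命令行,也可以不经过 TeXstudio,直接用 TeX 发行版自带的 `pdflatex` 命令来编译。下面是一个示意脚本,文件名 `helloworld.tex` 沿用上文;`pdflatex` 是否可用取决于你安装的 TeX 发行版:

```shell
#!/bin/sh
# 示意:在命令行中创建并编译本文的 Hello World 文档
# 假设:当前目录可写;pdflatex 来自你安装的 TeX 发行版
cat > helloworld.tex <<'EOF'
\documentclass[a4paper,12pt]{article}
\begin{document}
\title{Hello World! My first LaTeX document}
\author{Aaron Cocker}
\date{\today}
\maketitle
\section{Introduction}
\paragraph{}
content will go here
\end{document}
EOF

# 如果系统中有 pdflatex,则编译生成 helloworld.pdf
if command -v pdflatex >/dev/null 2>&1; then
    pdflatex -interaction=nonstopmode helloworld.tex
else
    echo "未找到 pdflatex,请先安装某个 TeX 发行版"
fi
```

编译成功后,当前目录下会出现 `helloworld.pdf`,内容与在 TeXstudio 预览区看到的一致。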
### 更多
在数以千计的 LaTeX 优秀资源中,有许多由大学制作的指南,可以通过 Google 搜索找到。[普林斯顿大学](https://www.cs.princeton.edu/courses/archive/spr10/cos433/Latex/latex-guide.pdf) 提供了一个很好的扩展教程;想要更深入地了解,可以阅读斯坦福大学的 Donald Knuth(TeX 的作者)所著的 [The TeXbook](http://www.ctex.org/documents/shredder/src/texbook.pdf),它是关于 TeX(LaTeX 所基于的排版系统)的最权威的书。
(题图 : opensource.com)
---
作者简介:
Aaron Cocker - 一名在英国上大学的计算机学士。我是一个有抱负的数据科学家。我最喜欢的语言是 Python。 你可以随时通过邮箱联系我 : [[email protected]](mailto:[email protected]) 或者访问我的个人网站 : <https://aaroncocker.org.uk>
---
via: <https://opensource.com/article/17/6/introduction-latex>
作者:[Aaron Cocker](https://opensource.com/users/aaroncocker) 译者:[chenxinlong](https://github.com/chenxinlong) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# 使用 Ubuntu Cleaner 为 Ubuntu/LinuxMint 释放空间

发布日期:2017-06-28 · 标签:Ubuntu · 链接:https://linux.cn/article-8642-1.html
我们中的大部分人都会经常忘记清理 Linux 系统中的垃圾文件,这会导致我们的系统空间不足。
一般情况下,我们不得不按照标准的步骤来释放 Linux 发行版中的空间(删除发行版缓存、系统日志、应用程序缓存和垃圾文件),但如果每次都手动执行相同的过程,那将会费时费力。
在 Linux 中,有一些应用程序可以让这个任务变得更容易。今天我们将教你如何使用 Ubuntu Cleaner,它衍生自 Ubuntu Tweak 中的 Janitor 模块。
[Ubuntu Cleaner](http://ubuntu-cleaner.blogspot.in/) 是一个可以简化你清理 Ubuntu 系统的工具。如我们所知道,Ubuntu Tweak 是帮助我们调整 Ubuntu 及其衍生发行版的最佳实用程序之一。但由于它的主要开发人员没有时间维护它,因此已被弃用。
建议阅读:
* [Stacer - Linux 系统优化和监控工具](http://www.2daygeek.com/stacer-linux-system-optimizer-and-monitoring-tool/)
* [BleachBit - 在 Linux 中清理系统的快速而最佳方法](http://www.2daygeek.com/bleachbit-system-cleaner-on-ubuntu-debian-fedora-opensuse-arch-linux-mint/)
因为许多用户在最新版本中仍使用 Ubuntu Tweak 这个工具(因为他们不想离开这个工具),所以 Ubuntu Cleaner 的开发人员从 Ubuntu Tweak 工具中复刻了 janitor 模块,并将这个有用的功能带回 Ubuntu 社区,并命名为 Ubuntu Cleaner。它也成为了 Ubuntu 多年来最受欢迎的实用程序之一。
建议阅读:
* [uCareSystem - 用于 Ubuntu / LinuxMint 的一体化系统更新和维护工具](http://www.2daygeek.com/ucaresystem-system-update-and-maintenance-tool-for-ubuntu-linuxmint/)
我猜所有那些怀念 Ubuntu Tweak 的人都会因为有 Ubuntu Cleaner 而感到高兴,因为它是从 janitor 模块衍生出来的。
Ubuntu Cleaner 将删除 Ubuntu 及其衍生发行版中的以下垃圾文件:
* 应用缓存 (浏览器缓存)
* 缩略图缓存
* Apt 缓存
* 旧的内核
* 包的配置文件
* 不需要的包
### 如何安装 Ubuntu Cleaner
开发者提供了官方 PPA,我们可以通过它轻松地将 Ubuntu Cleaner 安装到 Ubuntu 及其衍生发行版上。Ubuntu Cleaner 目前支持 Ubuntu 14.04 LTS 和 Ubuntu 16.04 LTS。
```
$ sudo add-apt-repository ppa:gerardpuig/ppa
$ sudo apt update
$ sudo apt install ubuntu-cleaner
```
### 如何使用 Ubuntu Cleaner
从主菜单启动 Ubuntu Cleaner ,你可以看到得以下默认界面。

勾选你要清理的文件前面的 “复选框”。 最后点击 “清理” 按钮从系统中删除垃圾文件。

现在我们已经成功清除了系统中的垃圾。

---
via: <http://www.2daygeek.com/ubuntu-cleaner-system-cleaner-ubuntu-tweak-alternative-janitor/#>
作者:[2DAYGEEK](http://www.2daygeek.com/author/2daygeek/) 译者:[chenxinlong](https://github.com/chenxinlong) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# 极客漫画:Linus Torvalds 的家

发布日期:2017-06-27 · 标签:漫画 · 链接:https://linux.cn/article-8643-1.html
只能通过 22 端口(SSH)进入,没窗户,没天窗,没排风扇……
---
via: <http://turnoff.us/geek/linus-torvalds-house/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# 极客漫画:Web 服务器中的生活

发布日期:2017-06-30 · 标签:漫画、Web、服务器 · 链接:https://linux.cn/article-8644-1.html
Web 服务器总是忙忙碌碌的,从不下班,这似乎比运维工程师还要辛苦。
每一个线程都在忙着,然而也有不太一样的,比如那个被数据库操作拖在那里的,就只能发呆;而那个被糟糕的代码搞得堆栈溢出的,看起来已经要崩溃了。
处理完请求之后,Web 服务器会给出生成的页面和 Cookie(饼干),如果下次带着这些饼干的编号来,那就可以很快地找到你要的饼干——这就是用饼干保存的会话。
这就是 Tomcat Web 服务器里面的生活。
---
via: <http://turnoff.us/geek/life-in-a-web-server/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# 开发一个 Linux 调试器(二):断点

发布日期:2017-06-27 · 标签:调试器、断点 · 链接:https://linux.cn/article-8645-1.html
在该系列的第一部分,我们写了一个小的进程启动器,作为我们调试器的基础。在这篇博客中,我们会学习在 x86 Linux 上断点是如何工作的,以及如何给我们工具添加设置断点的能力。
### 系列文章索引
随着后面文章的发布,这些链接会逐渐生效。
1. [准备环境](/article-8626-1.html)
2. [断点](https://github.com/TartanLlama/minidbg/tree/tut_break)
3. 寄存器和内存
4. Elves 和 dwarves
5. 源码和信号
6. 源码层逐步执行
7. 源码层断点
8. 调用栈
9. 读取变量
10. 之后的步骤
### 断点是如何形成的?
有两种类型的断点:硬件和软件。硬件断点通常涉及到设置与体系结构相关的寄存器来为你产生断点,而软件断点则涉及到修改正在执行的代码。在这篇文章中我们只会关注软件断点,因为它们比较简单,而且可以设置任意多断点。在 x86 机器上任一时刻你最多只能有 4 个硬件断点,但是它们能让你在读取或者写入给定地址时触发,而不是只有当代码执行到那里的时候。
我前面说软件断点是通过修改正在执行的代码实现的,那么问题就来了:
* 我们如何修改代码?
* 为了设置断点我们要做什么修改?
* 如何告知调试器?
第一个问题的答案显然是 `ptrace`。我们之前已经用它为我们的程序设置跟踪并继续程序的执行,但我们也可以用它来读或者写内存。
当执行到断点时,我们的更改要让处理器暂停并给程序发送信号。在 x86 机器上这是通过 `int 3` 重写该地址上的指令实现的。x86 机器有个<ruby> 中断向量表 <rp> ( </rp> <rt> interrupt vector table </rt> <rp> ) </rp></ruby>,操作系统能用它来为多种事件注册处理程序,例如页故障、保护故障和无效操作码。它就像是注册错误处理回调函数,但是是在硬件层面的。当处理器执行 `int 3` 指令时,控制权就被传递给断点中断处理器,对于 Linux 来说,就是给进程发送 `SIGTRAP` 信号。你可以在下图中看到这个过程,我们用 `0xcc` 覆盖了 `mov` 指令的第一个字节,它正是 `int 3` 的指令代码。

谜题的最后一个部分是调试器如何被告知中断的。如果你回顾前面的文章,我们可以用 `waitpid` 来监听被发送给被调试的程序的信号。这里我们也可以这样做:设置断点、继续执行程序、调用 `waitpid` 并等待直到发生 `SIGTRAP`。然后就可以通过打印已运行到的源码位置、或改变有图形用户界面的调试器中关注的代码行,将这个断点传达给用户。
### 实现软件断点
我们会实现一个 `breakpoint` 类来表示某个位置的断点,我们可以根据需要启用或者停用该断点。
```
class breakpoint {
public:
    breakpoint(pid_t pid, std::intptr_t addr)
        : m_pid{pid}, m_addr{addr}, m_enabled{false}, m_saved_data{}
    {}

    void enable();
    void disable();

    auto is_enabled() const -> bool { return m_enabled; }
    auto get_address() const -> std::intptr_t { return m_addr; }

private:
    pid_t m_pid;
    std::intptr_t m_addr;
    bool m_enabled;
    uint64_t m_saved_data; // data which used to be at the breakpoint address
};
```
这里的大部分代码都是跟踪状态;真正神奇的地方是 `enable` 和 `disable` 函数。
正如我们上面学到的,我们要用 `int 3` 指令 - 编码为 `0xcc` - 替换当前指定地址的指令。我们还要保存该地址之前的值,以便后面恢复该代码;我们不想忘了执行用户(原来)的代码。
```
void breakpoint::enable() {
    m_saved_data = ptrace(PTRACE_PEEKDATA, m_pid, m_addr, nullptr);
    uint64_t int3 = 0xcc;
    uint64_t data_with_int3 = ((m_saved_data & ~0xff) | int3); // set bottom byte to 0xcc
    ptrace(PTRACE_POKEDATA, m_pid, m_addr, data_with_int3);
    m_enabled = true;
}
```
`PTRACE_PEEKDATA` 请求告知 `ptrace` 如何读取被跟踪进程的内存。我们给它一个进程 ID 和一个地址,然后它返回给我们该地址当前的 64 位内容。 `(m_saved_data & ~0xff)` 把这个数据的低位字节置零,然后我们用它和我们的 `int 3` 指令按位或(`OR`)来设置断点。最后我们通过 `PTRACE_POKEDATA` 用我们的新数据覆盖那部分内存来设置断点。
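`(m_saved_data & ~0xff) | int3` 这行位运算可以单独验证一下:它把 8 字节数据的最低字节替换为 `0xcc`,其余字节保持不变。下面用 shell 算术做一个小演示,其中的指令数据是假设的示例值:

```shell
# 假设的 8 字节原始指令数据(最低字节 0x55,即 push %rbp 的指令码)
saved=0x0000be35e5894855
int3=0xcc
# 先清零最低字节,再按位或上 0xcc(与 breakpoint::enable() 中的写法一致)
patched=$(( (saved & ~0xff) | int3 ))
printf '0x%016x\n' "$patched"   # 最低字节变为 cc,其余字节不变
```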
`disable` 的实现比较简单,我们只需要恢复用 `0xcc` 所覆盖的原始数据。
```
void breakpoint::disable() {
    ptrace(PTRACE_POKEDATA, m_pid, m_addr, m_saved_data);
    m_enabled = false;
}
```
### 在调试器中增加断点
为了支持通过用户界面设置断点,我们要在 debugger 类修改三个地方:
1. 给 `debugger` 添加断点存储数据结构
2. 添加 `set_breakpoint_at_address` 函数
3. 给我们的 `handle_command` 函数添加 `break` 命令
我会将我的断点保存到 `std::unordered_map<std::intptr_t, breakpoint>` 结构,以便能简单快速地判断一个给定的地址是否有断点,如果有的话,取回该 breakpoint 对象。
```
class debugger {
    //...
    void set_breakpoint_at_address(std::intptr_t addr);
    //...
private:
    //...
    std::unordered_map<std::intptr_t, breakpoint> m_breakpoints;
};
```
在 `set_breakpoint_at_address` 函数中我们会新建一个 breakpoint 对象,启用它,把它添加到数据结构里,并给用户打印一条信息。如果你喜欢的话,你可以重构所有的输出信息,从而你可以将调试器作为库或者命令行工具使用,为了简便,我把它们都整合到了一起。
```
void debugger::set_breakpoint_at_address(std::intptr_t addr) {
    std::cout << "Set breakpoint at address 0x" << std::hex << addr << std::endl;
    breakpoint bp {m_pid, addr};
    bp.enable();
    m_breakpoints[addr] = bp;
}
```
现在我们会在我们的命令处理程序中增加对我们新函数的调用。
```
void debugger::handle_command(const std::string& line) {
    auto args = split(line, ' ');
    auto command = args[0];

    if (is_prefix(command, "cont")) {
        continue_execution();
    }
    else if (is_prefix(command, "break")) {
        std::string addr {args[1], 2}; // naively assume that the user has written 0xADDRESS
        set_breakpoint_at_address(std::stol(addr, 0, 16));
    }
    else {
        std::cerr << "Unknown command\n";
    }
}
```
我删除了字符串中的前两个字符并对结果调用 `std::stol`,你也可以让该解析更健壮一些。`std::stol` 可以将字符串按照所给基数转化为整数。
### 从断点继续执行
如果你尝试这样做,你可能会发现,如果你从断点处继续执行,不会发生任何事情。这是因为断点仍然在内存中,因此一直被重复命中。简单的解决办法就是停用这个断点、运行到下一步、再次启用这个断点、然后继续执行。不过我们还需要更改程序计数器,指回到断点前面,这部分内容会留到下一篇关于操作寄存器的文章中介绍。
### 测试它
当然,如果你不知道要在哪个地址设置,那么在某些地址设置断点并非很有用。后面我们会学习如何在函数名或者代码行设置断点,但现在我们可以通过手动实现。
测试你调试器的简单方法是写一个 hello world 程序,让这个程序输出到 `std::cerr`(为了避免缓冲),并在调用输出操作符的地方设置断点。如果你继续执行被调试的程序,执行很可能会停止而不会输出任何东西。然后你可以重启调试器并在调用之后设置一个断点,现在你应该看到成功地输出了消息。
查找地址的一个方法是使用 `objdump`。如果你打开一个终端并执行 `objdump -d <your program>`,然后你应该看到你的程序的反汇编代码。你就可以找到 `main` 函数并定位到你想设置断点的 `call` 指令。例如,我编译了一个 hello world 程序,反汇编它,然后得到了如下的 `main` 的反汇编代码:
```
0000000000400936 <main>:
400936: 55 push %rbp
400937: 48 89 e5 mov %rsp,%rbp
40093a: be 35 0a 40 00 mov $0x400a35,%esi
40093f: bf 60 10 60 00 mov $0x601060,%edi
400944: e8 d7 fe ff ff callq 400820 <_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc@plt>
400949: b8 00 00 00 00 mov $0x0,%eax
40094e: 5d pop %rbp
40094f: c3 retq
```
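如果不想肉眼在反汇编里翻找地址,也可以把 `objdump` 的输出保存成文件,再用 `awk` 把 `callq` 指令所在的地址筛出来,作为断点地址的候选。下面是示意写法,`main.asm` 这个文件名是假设的,演示数据直接取自上文的反汇编片段:

```shell
# 真实使用时:objdump -d <your program> > main.asm
# 这里直接写入上文 main 函数的片段做演示
cat > main.asm <<'EOF'
0000000000400936 <main>:
  400936: 55                   push   %rbp
  400944: e8 d7 fe ff ff       callq  400820 <_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc@plt>
  40094f: c3                   retq
EOF

# 找出 callq 指令所在的地址,即可直接交给调试器的 break 命令
awk '/callq/ { sub(":", "", $1); print "0x" $1 }' main.asm
```

对上面的片段,这条管道会打印 `0x400944`,也就是正文中“要没有输出”时应设断点的地址。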
正如你看到的,若想在没有输出时停住,我们要在 `0x400944`(即 `callq` 调用处)设置断点;若想在看到输出后停住,则要在 `0x400949` 设置断点。
### 总结
现在你应该有了一个可以启动程序、允许在内存地址上设置断点的调试器。后面我们会添加读写内存和寄存器的功能。再次说明,如果你有任何问题请在评论框中告诉我。
你可以在[这里](https://github.com/TartanLlama/minidbg/tree/tut_break) 找到该项目的代码。
---
via: <http://blog.tartanllama.xyz/c++/2017/03/24/writing-a-linux-debugger-breakpoints/>
作者:[Simon Brand](http://blog.tartanllama.xyz/) 译者:[ictlyh](https://github.com/ictlyh) 校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
# 六大标志性的开源形象概览

发布日期:2017-06-28 · 标签:开源、品牌、Logo · 链接:/article-8647-1.html
品牌是营销的重要组成部分。完成了品牌的塑造并形成一定的影响力之后,一个简单的 Logo(比如耐克的旋风标志)就会成为这个品牌的强大广告。如果你常常在美国各州之间穿梭,你将会看到各种代表品牌的标志符号,如麦当劳的<ruby> 金色拱门 <rp> ( </rp> <rt> golden arches </rt> <rp> ) </rp></ruby>。即便是没有任何文字或图像的简单色彩组合也是可以用来作为一个品牌的,比如美国弗吉尼亚理工大学的栗色和橙色,这种独特的色彩结合是很难被认错的。
所以,现在的问题是:品牌对于开源社区是否真的那么重要呢?
对于我和其他很多的人来说,是的,非常重要。开源软件要与付费软件进行竞争,那么它必须要将自己定义为切实可行的替代品。并且,它也必须要让人容易记住以及形成一定程度的影响力。如果某个开源软件项目以一种设计难看的 Logo、糟糕的口号、前后矛盾的信息来表现自己的话,那它就很难引起大众的注意、难以记住和得到广泛使用。
现有很多项目这方面做得很好,我们可以从中寻找灵感和指导方法。以下是我最喜欢的六个开源品牌。
### 六大开源品牌
#### Linux

这个可爱的 Linux 企鹅叫做 Tux,人们通常将其称为吉祥物,而非 Logo。
Tux 是 Larry Ewing 在 1996 年使用 GIMP 0.54 创建出来的。按 Jeff Ayers 讲述的[故事](https://en.wikipedia.org/wiki/Tux):自从 Linus Torvalds 1993 年在澳大利亚的某个动物园被一只企鹅咬了一口之后,他就特别的钟爱它们。Torvalds 当时正在为 Linux 寻找一个有趣的形象,他觉得一个饱食后正在休息的胖企鹅是一个不错的选择。Tux 同时也出现在视频游戏和麦片广告中,它甚至还有一个叫做 Gown 的异性同伴。正如 Mac 用户熟知那个被咬了一口的苹果、Windows 用户熟知那个飘动的窗口那样,作为 Linux 用户,你肯定也非常熟悉 Tux。
#### Mozilla

[Mozilla](https://www.mozilla.org/en-US/) 基金会是一个非营利组织和 [自由软件社区](https://en.wikipedia.org/wiki/Mozilla)。
近期,它完成了[品牌重建行动](https://blog.mozilla.org/opendesign/arrival/),其创意团队负责人 Tim Murray 这样写道:“该项目的核心就是应让人们更好地理解 Mozilla 自身的目的和商标的需求而生。我们的品牌标识,包括 Logo、口号及其设计,是我们用以传递我们自身的信仰和所做的工作的重要信号。”
以真正的开源方式,Mozilla 邀请所有的人来为项目贡献自己的力量。“数千封电子邮件、数百场会议、几十种理念,以及之后的三轮讨论,我们把自己的想法都分享了出来。”但是,他们仍然遵循指导方针进行,还需要你参与到贡献中来。
#### Firefox

[Firefox](https://en.wikipedia.org/wiki/Firefox) 是 Mozilla 开发的一款旗舰级软件产品,是一个非常受欢迎的 [web 浏览器](https://en.wikipedia.org/wiki/Web_browser)。
Firefox 中的 "fox" 实际上是一只小熊猫(亦称“<ruby> 红熊猫 <rp> ( </rp> <rt> Red Panda </rt> <rp> ) </rp></ruby>”、“火狐”),这是一种中国本土的像猫一样的真实动物。故事是这样的:Firefox 原本有个 "Phoenix" 的别称,表明它是由 Netscape 浏览器发展而来的。但在经历了 Phoenix 科技的商标起诉之后,它更名为 Mozilla Firebird。随后,Firebird RDBMS(关系数据库)项目表示这个名称与其项目产生了混淆,于是其名称最终在 2004 年 2 月变更为 Mozilla Firefox。
平面设计师 Steve Garrity 对 Firefox 和 Phoenix 早期的 Logo 作出了批评,在“[品牌化 Mozilla:走向 Mozilla 2.0](http://actsofvolition.com/steven/mozillabranding/)”一文中详细阐述了各种缺陷。所以,Mozilla 邀请 Garrity 来领导更好的品牌化工作。新的形象是由 silverorange 开发出来的,但最终的渲染却是 Jon Hicks 完成的,他同时也为 Camino、MailChimp 和 Opera 进行过品牌化工作。
早在 2013 年,[Jeopardy!](http://www.complex.com/pop-culture/2013/09/firefox-jeopardy-answer) 上边关于询问 Firefox 使用哪个动物做 Logo 的帖子则成了最后的线索。三位回答者都不知道答案就是一个小熊猫,而是回答了 “su”、 “raccoon” 和 “Excel”。
#### GIMP

GIMP 的 Logo 是由 Tuomas Kuosmanen 在 1997 年 09 月 25 日创建的 [Wilber the GIMP](https://www.gimp.org/about/ancient_history.html)。
GIMP 是<ruby> GNU 图像处理程序 <rp> ( </rp> <rt> GNU Image Manipulation Program </rt> <rp> ) </rp></ruby>的缩写,主要用于相片修整和图像处理。Wilber 现在已经有了一些配饰,比如 Simon Budig 设计的一顶安全帽、Raphaël Quintet 设计的巫师帽。根据 GIMP 的“[链接到我们](https://www.gimp.org/about/linking.html)” 页面,它高度鼓励人们使用 Wilber,你甚至可以在源代码中的 `/docs/Wilber_Construction_Kit.xcf.gz` 获得 Wilber 的构建素材。
那么,Wilber 到底是那一种生物呢?很显然,这是一个值得热烈讨论的问题。在 [gimper.net](https://gimper.net/threads/what-is-wilber.793/) 上的一个论坛众说纷纭:一种产于北美大草原的<ruby> 小狼 <rp> ( </rp> <rt> coyote </rt> <rp> ) </rp></ruby>、熊猫、狗,或者<ruby> “高飞” <rp> ( </rp> <rt> Goofy </rt> <rp> ) </rp></ruby>的一种衍生形象,仅举几例。而 [GimpChat.com](http://gimpchat.com/viewtopic.php?f=4&t=10265) 上一位叫做 TheWarrior 的用户直接发邮件给 Wilber 的创造者 Kuosmanen,被告知说 “Wilber 是一种独立物种的动物 —— 就叫 ‘GIMP’。什么是 GIMP,这是个玩笑,因为人们一直在问,说它是一只狗、狐狸或者其他什么的就太没意思了。我设计的这一个形象的时候,在我脑袋中并没有特定哪种动物原型。”
#### PostgreSQL

正如你所见和熟悉的一样,使用动物头像来做 Logo 非常普遍。
一只名为 [Slonik](http://www.vertabelo.com/blog/notes-from-the-lab/the-history-of-slonik-the-postgresql-elephant-logo) 的大象就是 [PostgreSQL](https://wiki.postgresql.org/wiki/Logo) 的 Logo 的一部分,这是一个开源的关系型数据库管理系统(RDBMS)。Patrycja Dybka 在 Vertabelo 上写过博文,解释了这一名称是由俄语单词的大象(slony)演化而来的。Oleg Bartunov 也说过,这个 Logo 是在一个[邮件讨论](http://www.pgsql.ru/db/mw/msg.html?mid=1238939)中初步形成的。在讨论里,费城圣约瑟夫大学的 David Yang 建议使用大象:“……但如果你想要一个动物头像的 Logo,那么使用某种大象如何?毕竟就像阿加莎·克里斯蒂(侦探小说家 Agatha Christie)说的那样,*大象让人印象深刻*。”
#### VLC 媒体播放器

该 Logo 不再是动物主题了,而是交通锥筒。
VLC 是一款无处不在的媒体播放器,它神奇地出现在很多人的桌面电脑上,让很多人体验到了开源,即使不知道它是开源的。VLC 是由总部在法国的 VideoLAN 组织所支持的 VideoLAN 项目的一款产品。VideoLAN 源自 1996 年在法国中央理工大学的一个学生项目。根据维基百科的描述,这个交通锥标图标参考了由法国中央理工大学的网络学生协会收集自巴黎街道上的交通锥筒。最初的手绘 Logo 在 2006 年由 Richard Oistad 重新进行了渲染。
一些有趣的花絮:
* Seamus Islwyn 的帖子 “[VLC 中的交通锥表达了哪些含义?](http://www.ehow.com/info_10029162_traffic-cone-mean-vlc.html)” 告诉我们:在十二月的时候,VLC 锥筒会戴着一顶圣诞帽,但在 12 月 31 日它就会消失不见,恢复原样的锥筒。
* 有人说,VLC 的意思是 “<ruby> 非常大的锥筒 <rp> ( </rp> <rt> Very Large Cone </rt> <rp> ) </rp></ruby>” 或者选用它仅仅是为了和法国那些交通锥筒相关联而已。
* “官方” 的故事背景是否准确?在 VLC 的 jean-baptiste Kempf 和用户在 [VideoLAN 论坛](https://forum.videolan.org/viewtopic.php?f=5&t=92513) 上的交流似乎表明,交通锥筒收集说法以及漏斗、建筑区、扩音器和其他一些说法,可能是不正确的。
我们是否完全解答了 VLC 的交通锥筒起源的问题了呢?我个人觉得:那就是 “星期六夜现场” 的尖头外星人。他们就是来自法国的,记得吗?确切地说是来自 Remulak 星球。
**我很期待看到你关于自己喜欢、讨厌以及为它所代表的品牌而倍感激动的那些开源 Logo 的评论。**
(题图:opensource.com)
---
作者简介:
Jeff Macharyas - 他有多年的出版和印刷工作经验,他曾担任过快速印刷、美国观察家、USO 巡逻、今天校园和其他出版物的艺术总监以及项目经理、编辑和发行经理。杰夫持有佛罗里达州立大学的通信信息、罗格斯大学的社会媒体营销研究生证书以及 Utica 学院的网络安全与计算机取证硕士学位。
---
译者简介:
[GHLandy](http://ghlandy.com) —— 另一种生活中,有属于你也适合你的舞台。或许有你狠心放弃的专业,或者不必上妆,不用摆出另外一副面孔。—— 摘自林特特《以自己喜欢的方式过一生》
---
via: <https://opensource.com/article/17/2/six-open-source-brands>
作者:[Jeff Macharyas](https://opensource.com/users/jeffmacharyas) 译者:[GHLandy](https://github.com/GHLandy) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# pass:一款简单的基于 Linux 命令行的密码管理工具

发布日期:2017-06-29 · 标签:密码、pass、密码管理器 · 链接:https://linux.cn/article-8650-1.html
现如今要记住类似 email、银行、社交媒体、在线支付、ftp 等等这么多的密码相信对每一个人来说都是一个巨大的挑战。
由于需求和使用,密码管理器现如今变得非常的流行。在 Linux 中我们可以有很多选择,包括基于 GUI 和基于 CLI 两种。今天我们要讲的是一款基于 CLI 的密码管理器叫做 pass 。
[pass](https://www.passwordstore.org/) 是 Linux 上的一个简单的命令行密码管理器,它将密码存储在 `gpg` 加密后的文件里,这些加密文件按目录结构良好地组织存放。
所有密码都存在于 `~/.password-store` 中,它提供了添加、编辑、生成和检索密码等简单命令。
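`~/.password-store` 本质上就是一棵普通的目录树:目录即分类,每条密码是一个 gpg 加密的 `.gpg` 文件。可以用几条基础命令模拟一下这种布局(仅为示意,目录和文件名是假设的,并非真实的加密文件):

```shell
# 模拟 pass 的存储布局:目录即分类,每条密码一个 .gpg 文件
mkdir -p demo-store/eMail demo-store/Social
touch demo-store/eMail/2daygeek@gmail.com.gpg
touch demo-store/Social/Facebook_2daygeek.gpg
find demo-store -name '*.gpg' | sort
```

后文 `pass show` 打印的树状结构,正是对这样一棵目录树的展示。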
* 建议阅读:[KeePass - 存储/安全密码的最佳密码管理工具](http://www.2daygeek.com/keepass-best-linux-password-manager-arch-linux-mint-ubuntu-debian-fedora-opensuse/)
它是一个非常简短和简单的 shell 脚本。 它能够临时将密码放在剪贴板上,并使用 `git` 跟踪密码的修改。
这是一个很小的 shell 脚本,它还使用了少量的默认工具比如 `gnupg`、`tree` 和 `git`,同时还有活跃的社区为它提供 GUI 和扩展。
### 如何在 Linux 中安装 Pass
Pass 可从大多数主要 Linux 发行版的仓库中获得,所以你可以使用发行版的包管理器来安装它。
对于基于 Debian 的系统,你可以使用 [apt-get](http://www.2daygeek.com/apt-get-apt-cache-command-examples/) 或 [apt](http://www.2daygeek.com/apt-command-examples/) 包管理器命令来安装 pass。
```
$ sudo apt-get install pass
```
对于基于 RHEL/CentOS 的操作系统, 使用 [yum](http://www.2daygeek.com/yum-command-examples/) 包管理器命令来安装它。
```
$ sudo yum install pass
```
Fedora 系统可用 [dnf](http://www.2daygeek.com/dnf-command-examples/) 包管理器命令来安装。
```
$ sudo dnf install pass
```
openSUSE 系统可以用 [zypper](http://www.2daygeek.com/zypper-command-examples/) 包管理器命令来安装。
```
$ sudo zypper in password-store
```
对于基于 Arch Linux 的操作系统用 [pacman](http://www.2daygeek.com/pacman-command-examples/) 包管理器来安装它。
```
$ pacman -S pass
```
### 如何生成 GPG 密钥对
确保你拥有你个人的 GPG 密钥对。如果没有的话,你可以通过在终端中输入以下的命令并安装指导来创建你的 GPG 密钥对。
```
$ gpg --gen-key
```
运行以上命令生成 GPG 密钥对时,会有一系列的问题询问,请按照提示谨慎输入答案,其中有一些使用默认值即可。
### 初始化密码存储
如果你已经有了 GPG 密钥对,请通过运行以下命令初始化本地密码存储,你可以使用 email-id 或 gpg-id 初始化。
```
$ pass init [email protected]
mkdir: created directory '/home/magi/.password-store/'
Password store initialized for [email protected]
```
上述命令将在 `~/.password-store` 目录下创建一个密码存储区。
`pass` 命令提供了简单的语法来管理密码。 我们一个个来看,如何添加、编辑、生成和检索密码。
通过下面的命令检查目录结构树。
```
$ pass
or
$ pass ls
or
$ pass show
Password Store
```
我没有看到任何树型结构,所以我们将根据我们的需求来创建一个。
### 插入一个新的密码信息
我们将通过运行以下命令来保存 gmail 的 id 及其密码。
```
$ pass insert eMail/[email protected]
mkdir: created directory '/home/magi/.password-store/eMail'
Enter password for eMail/[email protected]:
Retype password for eMail/[email protected]:
```
执行重复操作,直到所有的密码插入完成。 比如保存 Facebook 密码。
```
$ pass insert Social/Facebook_2daygeek
mkdir: created directory '/home/magi/.password-store/Social'
Enter password for Social/Facebook_2daygeek:
Retype password for Social/Facebook_2daygeek:
```
我们可以列出存储中的所有现有的密码。
```
$ pass show
Password Store
├── 2g
├── Bank
├── eMail
│ ├── [email protected]
│ └── [email protected]
├── eMail
├── Social
│ ├── Facebook_2daygeek
│ └── Gplus_2daygeek
├── Social
└── Sudha
└── [email protected]
```
### 显示已有密码
运行以下命令从密码存储中检索密码信息,它会询问你输入密码以解锁。

```
$ pass eMail/[email protected]
*******
```
### 在剪贴板中复制密码
要直接将密码直接复制到剪贴板上,而不是在终端上输入,请使用以下更安全的命令,它会在 45 秒后自动清除密码。
```
$ pass -c eMail/[email protected]
Copied eMail/[email protected] to clipboard. Will clear in 45 seconds.
```
### 生成一个新密码
如果你想生成一个难以猜测的强密码来替换原有的弱密码,可以使用其内置的 `pwgen` 功能来实现。
```
$ pass generate eMail/[email protected] 15
An entry already exists for eMail/[email protected]. Overwrite it? [y/N] y
The generated password for eMail/[email protected] is:
y!NZ<%T)5Iwym_S
```
生成没有符号的密码。
```
$ pass generate eMail/[email protected] 15 -n
An entry already exists for eMail/[email protected]. Overwrite it? [y/N] y
The generated password for eMail/[email protected] is:
TP9ACLyzUZUwBwO
```
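`pass generate` 内部借助了 `pwgen`。如果想体会类似的随机密码生成思路,也可以直接用 `/dev/urandom` 自己生成一个只含字母数字的随机密码(示意写法,与 pass 的内部实现无关):

```shell
# 生成一个 15 位、仅含字母数字的随机密码(效果类似 pass generate ... 15 -n)
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 15
echo
```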
### 编辑现有的密码
使用编辑器插入新密码或编辑现有密码。 当你运行下面的命令时,将会在包含密码的文本编辑器中打开文件 `/dev/shm/pass.wUyGth1Hv0rnh/[email protected]`。 只需在其中添加新密码,然后保存并退出即可。
```
$ pass edit eMail/[email protected]
File: /dev/shm/pass.wUyGth1Hv0rnh/[email protected]
TP9ACLyzUZUwBwO
```
### 移除密码
删除现有密码。它将从 `~/.password-store` 中删除对应的 `.gpg` 文件。
```
$ pass rm eMail/[email protected]
Are you sure you would like to delete eMail/[email protected]? [y/N] y
removed '/home/magi/.password-store/eMail/[email protected]'
```
### 多选项功能
要保存 URL、用户名、密码、PIN 码等详细信息,可以使用以下格式。首先确保把密码放在第一行,因为在使用剪贴板选项时,会将第一行作为密码复制,后续行则作为附加信息。
```
$ pass insert eMail/[email protected] -m
Enter contents of eMail/[email protected] and press Ctrl+D when finished:
H3$%hbhYT
URL : http://www.2daygeek.com
Info : Linux Tips & Tricks
Ftp User : 2g
```
(题图:Pixabay, CC0)
---
via: <http://www.2daygeek.com/pass-command-line-password-manager-linux/>
作者:[2DAYGEEK](http://www.2daygeek.com/author/2daygeek/) 译者:[chenxinlong](https://github.com/chenxinlong) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# Powerline:Vim 和 Bash 中的一个强大状态栏插件

发布日期:2017-06-29 · 标签:命令行、powerline、状态行 · 链接:https://linux.cn/article-8651-1.html
[Powerline](https://github.com/powerline/powerline) 是 vim、zsh、bash、tmux、IPython、Awesome、bar、fish、lemonbar、pdb、rc、shell、tcsh、wm、i3 和 Qtile 中的一个状态栏插件。它为这些程序提供状态栏,并使它们更加美观。
它用 Python 写成,可扩展且功能丰富,非常轻便,除了一个 Python 解释器之外不需要任何第三方依赖。
它拥有稳定且经过完整测试的代码库,在 Python 2.6+ 和 Python 3 中均工作良好。
该状态栏最初只在 vim 中可用,随后项目发展为可以给许多 Linux 程序(如 zsh、bash、tmux、IPython、Awesome、i3 和 Qtile)提供状态栏。
其配置以及配色方案用 JSON 写成。它是一种标准简易的文件格式,可以让用户配置 Powerline 支持的程序。
它快速且轻量级,并支持以守护进程方式运行,以提供更好的性能。
### 安装预先要求
确保你的系统有下面预先要求的包。如果没有,在安装 powerline 之前先安装它们。
对于 Debian 用户,使用 [APT 包管理器](http://www.2daygeek.com/apt-command-examples/)或者[Apt-Get 包管理器](http://www.2daygeek.com/apt-get-apt-cache-command-examples/)安装需要的包。
```
$ sudo apt-get install python-pip git
```
对于 openSUSE 用户,使用 [Zypper 包管理器](http://www.2daygeek.com/zypper-command-examples/)安装需要的包。
```
$ sudo zypper install python-pip git
```
对于 Fedora 用户,使用 [dnf 包管理器](http://www.2daygeek.com/dnf-command-examples/)安装需要的包。
```
$ sudo dnf install python-pip git
```
对于 Arch Linux 用户,使用 [pacman 包管理器](http://www.2daygeek.com/pacman-command-examples/)安装需要的包。
```
$ sudo pacman -S python-pip git
```
对于 CentOS/RHEL 用户,使用 [yum 包管理器](http://www.2daygeek.com/yum-command-examples/)安装需要的包。
```
$ sudo yum install python-pip git
```
### 如何在 Linux 中安装 Powerline
在本篇中,我们将向你展示如何安装 Powerline,以及如何在基于 Debian 和 RHEL 的系统上的 Bash、tmux 和 Vim 中使用它。
```
$ sudo pip install git+git://github.com/Lokaltog/powerline
```
找出 powerline 安装位置以便配置程序。
```
$ pip show powerline-status
Name: powerline-status
Version: 2.6.dev9999+git.517f38c566456d65a2170f9bc310e6b4f8112282
Summary: The ultimate statusline/prompt utility.
Home-page: https://github.com/powerline/powerline
Author: Kim Silkebaekken
Author-email: [email protected]
License: MIT
Location: /usr/lib/python2.7/site-packages
Requires:
```
### 在 Bash Shell 中添加/启用 Powerline
添加下面的行到 `.bashrc` 中,它会默认在基础 shell 中启用 powerline。
```
if [ -f `which powerline-daemon` ]; then
powerline-daemon -q
POWERLINE_BASH_CONTINUATION=1
POWERLINE_BASH_SELECT=1
. /usr/local/lib/python2.7/site-packages/powerline/bindings/bash/powerline.sh
fi
```
重新加载 `.bashrc` 文件使得 powerline 在当前窗口中立即生效。
```
$ source ~/.bashrc
```
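注意:`powerline.sh` 的实际路径取决于 pip 的安装前缀(上文 `pip show` 给出的 Location 是 `/usr/lib/python2.7/site-packages`,而脚本里写的是 `/usr/local/lib/...`,二者可能不一致)。可以用 `find` 自动定位,避免把路径写错。下面先构造一个假目录来演示用法:

```shell
# 演示:在安装前缀下查找 powerline.sh 的真实位置
# 实际使用时把 demo-lib 换成 /usr/lib /usr/local/lib 等目录
mkdir -p demo-lib/python2.7/site-packages/powerline/bindings/bash
touch demo-lib/python2.7/site-packages/powerline/bindings/bash/powerline.sh
find demo-lib -path '*powerline/bindings/bash/powerline.sh'
```

把 `find` 打印出的路径填进 `.bashrc`(以及后文的 `.tmux.conf`、`.vimrc`)即可。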

### 在 tmux 中添加/启用 Powerline
tmux 是最好的终端复用程序之一,它提供多窗口以及状态栏,但相比 powerline,它自带的状态栏看上去不那么好。添加下面的行到 `.tmux.conf` 中,它会默认在 tmux 中启用 powerline。如果你没有找到 `.tmux.conf` 文件,那么创建一个新的。
```
# vi ~/.tmux.conf
source "/usr/local/lib/python2.7/site-packages/powerline/bindings/tmux/powerline.conf"
```

### 在 Vim 中添加/启用 Powerline
vim 是管理员最爱的文本编辑器之一。添加下面的行到 `.vimrc` 中,启用 powerline 使 vim 更加强大。注意,在 vim 7.x 中,你可能不会在系统中发现 .vimrc 文件,因此不必担心,创建一个新的文件,并添加下面行。
```
# vi ~/.vimrc
set rtp+=/usr/local/lib/python2.7/site-packages/powerline/bindings/vim/
set laststatus=2
set t_Co=256
```


---
via: <http://www.2daygeek.com/powerline-adds-powerful-statusline-to-vim-bash-tumx-in-ubuntu-fedora-debian-arch-linux-mint/>
作者:[2DAYGEEK](http://www.2daygeek.com/author/2daygeek/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# 在 Linux 中使用 shell 脚本自动创建/移除并挂载交换文件

发布日期:2017-06-30 · 标签:交换分区、swap · 链接:https://linux.cn/article-8654-1.html
几天前我们写了一篇关于在 Linux 中创建交换文件的 3 种方法的文章,它们都是常见的方法,但需要人工操作。
今天我发现了一个 [Gary Stafford](https://programmaticponderings.com/2013/12/19/scripting-linux-swap-space/) 写的 shell 小脚本(两个 shell 脚本,一个用于创建交换文件,另外一个用于移除交换文件),它可以帮助我们在 Linux 中创建/移除并且自动挂载交换文件。
默认情况下,这个脚本创建并挂载 512MB 的交换文件(本文下面的示例脚本中将大小改成了 1024MB)。如果你想要不同的交换空间大小和文件名,需要相应地修改脚本。修改脚本不是一件困难的事,因为这是一个容易上手而且很小的脚本。
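修改脚本里的 `swapsize` 并不难;如果想更灵活一点,也可以把大小做成命令行参数(示意写法,默认值 512,单位 MB):

```shell
#!/bin/sh
# 示意:用第一个位置参数指定交换文件大小(MB),缺省 512
swapsize=${1:-512}
echo "将创建 ${swapsize}M 的交换文件"
# 随后即可把 ${swapsize}M 传给 fallocate,其余步骤与原脚本相同
```

这样执行 `sudo ./create_swap.sh 2048` 就能创建 2048MB 的交换文件,而不必每次改脚本。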
**推荐阅读:** [Linux 中 3 种简易创建或扩展交换空间的方法](http://www.2daygeek.com/add-extend-increase-swap-space-memory-file-partition-linux/)
### 如何检查当前交换文件大小
使用 [free](http://www.2daygeek.com/free-command-to-check-memory-usage-statistics-in-linux/) 和 `swapon` 命令检查已有的交换空间。
```
$ free -h
total used free shared buff/cache available
Mem: 2.0G 1.3G 139M 45M 483M 426M
Swap: 2.0G 655M 1.4G
$ swapon --show
NAME TYPE SIZE USED PRIO
/dev/sda5 partition 2G 655.2M -1
```
上面的输出显示我当前的交换空间是 `2GB`。
### 创建交换文件
创建 `create_swap.sh` 文件并添加下面的内容来自动化交换空间的创建和挂载。
```
$ nano create_swap.sh
#!/bin/sh

# size of swapfile in megabytes
swapsize=1024

# does the swap file already exist?
grep -q "swapfile" /etc/fstab

# if not then create it
if [ $? -ne 0 ]; then
    echo 'swapfile not found. Adding swapfile.'
    fallocate -l ${swapsize}M /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo '/swapfile none swap defaults 0 0' >> /etc/fstab
else
    echo 'swapfile found. No changes made.'
fi

echo '--------------------------------------------'
echo 'Check whether the swap space created or not?'
echo '--------------------------------------------'
swapon --show
```
给文件添加执行权限。
```
$ sudo chmod +x create_swap.sh
```
运行文件来创建和挂载交换文件。
```
$ sudo ./create_swap.sh
swapfile not found. Adding swapfile.
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=d9004261-396a-4321-a45f-9923e3e1328c
--------------------------------------------
Check whether the swap space created or not?
--------------------------------------------
NAME TYPE SIZE USED PRIO
/dev/sda5 partition 2G 954.1M -1
/swapfile file 1024M 0B -2
```
你可以看到新的 1024M 的 `swapfile`。重启系统以使用新的交换文件。
### 移除交换文件
如果不再需要交换文件,接着创建 `remove_swap.sh` 文件并添加下面的内容来移除交换文件以及它的 `/etc/fstab` 挂载点。
```
$ nano remove_swap.sh
#!/bin/sh

# does the swap file exist?
grep -q "swapfile" /etc/fstab

# if it does then remove it
if [ $? -eq 0 ]; then
    echo 'swapfile found. Removing swapfile.'
    sed -i '/swapfile/d' /etc/fstab
    echo "3" > /proc/sys/vm/drop_caches
    swapoff -a
    rm -f /swapfile
else
    echo 'No swapfile found. No changes made.'
fi

echo '--------------------------------------------'
echo 'Check whether the swap space removed or not?'
echo '--------------------------------------------'
swapon --show
```
并给文件添加可执行权限。
```
$ sudo chmod +x remove_swap.sh
```
运行脚本来移除并卸载交换文件。
```
$ sudo ./remove_swap.sh
swapfile found. Removing swapfile.
swapoff: /dev/sda5: swapoff failed: Cannot allocate memory
--------------------------------------------
Check whether the swap space removed or not?
--------------------------------------------
NAME TYPE SIZE USED PRIO
/dev/sda5 partition 2G 951.8M -1
```
---
via: <http://www.2daygeek.com/shell-script-create-add-extend-swap-space-linux/>
作者:[2DAYGEEK](http://www.2daygeek.com/author/2daygeek/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
# 用 NMAP 探测操作系统

发布日期:2017-07-01 · 标签:扫描、NMAP · 链接:https://linux.cn/article-8656-1.html
有时,能够知道一个网络里的机器的操作系统(OS)是有一定好处的。当你知道一台机器的操作系统后,因为你可以在网上搜索专门针对该系统的安全漏洞,所以入侵系统也会更加容易。当然,安全漏洞通常都会很快被修补,但安全漏洞存在时你需要知道。
对你自己的网络进行扫描以便发现操作系统类型可以帮助你了解黑客将如何侵入你的网络。
### 操作系统探测数据库
NMAP 带有一个数据库,它在你安装 NMAP 的时候就会被安装。这个数据库用于操作系统的探测,但是它不会自动更新。
这个数据库位于 `/usr/share/nmap/nmap-os-db`。进行更新的最简单方式是首先找到数据库的版本号,用文本编辑器打开这个文件,版本号通常位于第二行。我的数据库的第二行是 `# $Id: nmap-os-db 35407 2015-11-10 04:26:26Z dmiller $`,即这个文件的数据库版本是 35407。
要在网上查找一个可更新版本,可以浏览 <https://svn.nmap.org/nmap> ,如图 1 所示:

*图 1*
你可以从图中看到版本号为 36736,与我的系统上的版本号相比,这个版本号似乎是一个更新的版本。为了对更新的操作系统进行准确的操作系统探测,当然需要对这个数据库进行更新。
保留较旧的数据库版本也是一个不错的主意。我当前和版本是 35407,我将在终端执行下面的命令:
```
sudo mv /usr/share/nmap/nmap-os-db /usr/share/nmap/nmap-os-db-35407
```
这个数据库被以包含版本号的方式重命名了,下一步就是从网站上下载新版本的数据库,在终端执行下面命令:
```
cd /usr/share/nmap
sudo su
wget https://svn.nmap.org/nmap/nmap-os-db
```
新的数据库即将开始被下载,但下载下来的文件里并不带版本号,你应该像图 1 中那样把版本号(36736)加上:使用文本编辑器打开这个数据库,在第二行加上版本号。这样当官方数据库再次更新时,你就可以通过比较版本号来判断是否需要更新。
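查版本号也不必手动打开文件:版本号固定出现在第二行的 `$Id:` 字段里,可以用 `sed` 和 `grep` 一条管道取出来。下面是示意写法,先把本文提到的第二行写进一个演示文件;真实文件是 `/usr/share/nmap/nmap-os-db`:

```shell
# 构造一个演示文件,第二行取自上文提到的 $Id: 行
printf '%s\n' \
  '# Nmap OS db' \
  '# $Id: nmap-os-db 35407 2015-11-10 04:26:26Z dmiller $' > demo-os-db

# 取第二行中第一个至少 4 位的数字串,即数据库版本号
sed -n '2p' demo-os-db | grep -o '[0-9]\{4,\}' | head -1
```

对这个演示文件,管道会打印 `35407`;把 `demo-os-db` 换成真实路径即可查询本机数据库的版本。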
### 系统探测过程
在我们开始使用实际的命令并执行系统探测之前,我们应该详细介绍扫描过程中将会发生的事情。
会执行五种不同的测试,每种测试由一个或者多个数据包组成,目标系统对每个数据包作出的响应有助于确定操作系统的类型。
五种不同的测试是:
1. <ruby> 序列生成 <rp> ( </rp> <rt> Sequence Generation </rt> <rp> ) </rp></ruby>
2. ICMP 回显
3. TCP <ruby> 显式拥塞通知 <rp> ( </rp> <rt> Explicit Congestion Notification </rt> <rp> ) </rp></ruby>
4. TCP
5. UDP
现在让我们分别看看他们各自在做什么。
**序列生成**
序列生成测试由六个数据包组成,这六个包是每隔 100 毫秒分开发送的,且都是 TCP SYN 包。
每个 TCP SYN 包的结果将有助于 NMAP 确定操作系统的类型。
**ICMP 回显**
两个有着不同设置的 ICMP 请求包被送到目标系统,由此产生的反应将有助于实现验证操作系统类型。
**TCP <ruby> 显式拥塞通知 <rp> ( </rp> <rt> Explicit Congestion Notification </rt> <rp> ) </rp></ruby>**
当大量数据包经过路由器时会导致其负载变大,这称之为拥塞。发生拥塞时,系统会放慢发送速度以缓解拥堵,以便路由器不会发生丢包。
这个包仅为了得到目标系统的响应而发送。因为不同的操作系统以不同的方式处理这个包,所以返回的特定值可以用来判断操作系统。
**TCP**
在这个测试中会发送六个数据包。
一些带有特定设置的数据包会被发送到打开的或关闭的端口,其结果也会因操作系统的不同而不同。
所有 TCP 包都是以如下不同的标志被发送:
1. 无标志
2. SYN、FIN、URG 和 PSH
3. ACK
4. SYN
5. ACK
6. FIN、PSH 和 URG
**UDP**
这个测试由一个被发送给一个关闭的端口的数据包组成。
如果目标系统上的这个端口是关闭的,而且返回一条 ICMP 端口不可达的信息,那么就说明没有防火墙。
### NMAP 操作系统检测命令
现在我们开始实际动手进行系统探测。如果你已经读过一些我写的关于 NMAP 的文章,就会知道最好跳过 PING 操作,为了跳过 PING,我们使用参数 `-Pn`。为了看到详细的信息,你应该使用 `-v` 参数以动态显示。为了获取操作系统的信息,需要使用 `-O` 参数。
为了使命令顺利运行和执行 TCP SYN 扫描,你需要以管理员的身份来执行这个命令。在我的例子中,我将只在一个系统上而不是整个网络上进行扫描,使用的命令是:
```
sudo nmap -v -Pn -O 192.168.0.63
```
扫描的结果如图 2 所示,显示有七个开放的端口。

*图 2*
开放的端口是:
1. 21/tcp ftp
2. 22/tcp ssh
3. 111/tcp rpcbind
4. 139/tcp netbios-ssn
5. 445/tcp microsoft-ds
6. 2049/tcp nfs
7. 54045/tcp unknown
系统的 MAC 地址为:`00:1E:4F:9F:DF:7F`。
后面部分显示系统类型为:
```
Device type: general purpose
Running: Linux 3.X|4.X
OS CPE: cpe:/o:linux:linux_kernel:3 cpe:/o:linux:linux_kernel:4
OS details: Linux 3.2 - 4.6
Uptime guess: 0.324 days (since Sun Apr 23 08:43:32 2017)
```
系统当前运行的是 Ubuntu Server 16.04,Linux 内核为 4.8,运行时间猜测得比较准确。
我在另一个系统上又进行了一次测试,结果如图 3 所示:

*图 3*
这次扫描中开放的端口与上次不同。所猜的系统为 ‘Microsoft Windows 2000|XP’,实际上是 Windows XP sp3。
### 端口嗅探结果
让我们来看看图 2 所示的第一次扫描中在后台发生了什么。
首先 NMAP 执行一次 TCP 隐秘扫描,在本次系统探测实例中以一个如图 4 所示的数据包 2032 开始。

*图 4*
<ruby> 序列生成 <rp> ( </rp> <rt> Sequence Generation </rt> <rp> ) </rp></ruby>开始于数据包 2032/2033,第六个数据包是 2047/2048。注意每个都被发送两次,且每隔 100ms 发送下一个数据包。
发送的 ICMP 数据包是 2050 - 2053,2 个数据包重复一遍,实际上是 4 个数据包。
2056-2057 是 TCP 显式拥塞通知数据包。
对 TCP 的 6 个测试是 2059、2060、2061、2063、2065、2067。
最后的 UDP 测试是 2073。
这些就是用于确定目标系统上操作系统的类型的测试。
我希望这将有助于你理解如何更新 NMAP 系统探测数据库和执行对系统的扫描。请注意,你也可以扫描与你不在同一个网络中的系统,所以可以对互联网上的任何系统进行扫描。
---
via: <https://www.linux.org/threads/nmap-os-detection.4564/>
作者:[Jarret B](https://www.linux.org/members/jarret-b.29858/) 译者:[zhousiyu325](https://github.com/zhousiyu325) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
8,657 | uCareSystem:Ubuntu/Linux Mint 的一体化系统更新和维护工具 | http://www.2daygeek.com/ucaresystem-system-update-and-maintenance-tool-for-ubuntu-linuxmint/ | 2017-07-02T10:18:00 | [
"uCareSystem"
] | https://linux.cn/article-8657-1.html | 
[uCareSystem Core](https://github.com/cerebrux/uCareSystem) 是一种能够自动执行基本的系统维护活动的轻型实用程序,另一方面它可以通过多种方式减少系统管理员的任务,节省大量时间。它没有任何 GUI,并提供纯粹的命令行界面来执行活动。
Ubuntu 中有几种实用程序来执行系统维护活动。每种工具有它们相应的独特功能和设计。你可以添加一个 cron 任务来自动化这些任务。
uCareSystem Core 会自动刷新发行版仓库、更新可用包列表、卸载包(过期包、孤儿包和旧的 Linux 内核)以及清理取回的包来节省系统磁盘空间。
* 建议阅读:[Stacer - Linux 系统优化器及监控工具](http://www.2daygeek.com/stacer-linux-system-optimizer-and-monitoring-tool/)
* 建议阅读:[BleachBit – 快速及最佳的方式清理你的 Linux 系统](http://www.2daygeek.com/bleachbit-system-cleaner-on-ubuntu-debian-fedora-opensuse-arch-linux-mint/)
* 建议阅读:[用 Ubuntu Cleaner 在 Ubuntu/LinuxMint 中释放一些空间](/article-8642-1.html)
### uCareSystem Core 功能
* 更新包列表(它将刷新包索引)
* 下载及安装更新
* 更新包及系统库到最新版本
* 移除不需要的、过期的和孤儿包。
* 移除旧内核(它为了安全保留当前和之前一个内核)
* 移除不需要的配置文件
* 清理已下载的临时包
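作为参考,上面这些维护工作大致相当于手工运行一组 apt 命令。下面是一个示意性的 Python 片段,列出这一近似的命令序列(仅作演示,并非该工具的实际实现;`maintenance_commands` 是虚构的函数名):

```
def maintenance_commands():
    """返回与 uCareSystem Core 的功能大致等价的 apt 维护命令序列(近似)。"""
    return [
        "sudo apt-get update",                 # 刷新包索引
        "sudo apt-get -y dist-upgrade",        # 下载并安装更新
        "sudo apt-get -y autoremove --purge",  # 移除不需要的包及其配置文件
        "sudo apt-get clean",                  # 清理已下载的临时包
    ]

for cmd in maintenance_commands():
    print(cmd)
```

当然,uCareSystem Core 还会额外处理旧内核的移除等工作,这正是它比手工操作方便的地方。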
### 在 Ubuntu/LinuxMint 中安装 uCareSystem Core
因为开发者提供了自己的 PPA,因此我们可以轻易地通过 PPA 在 Ubuntu/LinuxMint 中安装 uCareSystem Core。
```
$ sudo add-apt-repository ppa:utappia/stable
$ sudo apt update
$ sudo apt install ucaresystem-core
```
我们已经成功安装了 `uCareSystem Core` 包。在执行 uCareSystem Core 命令之前,为了了解它能节省多少磁盘空间,先使用 `df -h` 命令检查当前的磁盘利用率。
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 979M 0 979M 0% /dev
tmpfs 200M 6.4M 194M 4% /run
/dev/sda1 38G 19G 17G 54% /
tmpfs 999M 216K 999M 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 999M 0 999M 0% /sys/fs/cgroup
tmpfs 200M 112K 200M 1% /run/user/1000
```
只需在终端中运行 `ucaresystem-core` 命令,它会自动执行直到结束,不需要人工交互。
```
$ sudo ucaresystem-core
_______________________________________________________
uCareSystem Core v3.0
~ '' ~
Welcome to all-in-one System Update and maintenance
assistant app.
This simple script will automatically
refresh your packagelist, download and
install updates (if there are any), remove any old
kernels, obsolete packages and configuration files
to free up disk space, without any need of user
interference.
_______________________________________________________
uCareSystem Core will start in 5 seconds...
#########################
Started
#########################
Ign:1 https://wire-app.wire.com/linux/debian stable InRelease
Hit:2 https://wire-app.wire.com/linux/debian stable Release
Hit:4 https://deb.nodesource.com/node_6.x yakkety InRelease
Hit:5 https://repo.skype.com/deb stable InRelease
Hit:6 http://in.archive.ubuntu.com/ubuntu yakkety InRelease
Hit:7 http://archive.canonical.com/ubuntu yakkety InRelease
.
.
.
Removing linux-image-extra-4.8.0-34-generic (4.8.0-34.36) ...
Purging configuration files for linux-image-extra-4.8.0-34-generic (4.8.0-34.36) ...
Removing linux-image-extra-4.8.0-32-generic (4.8.0-32.34) ...
Purging configuration files for linux-image-extra-4.8.0-32-generic (4.8.0-32.34) ...
#####################################
Finished removing unused config files
#####################################
Reading package lists... Done
Building dependency tree
Reading state information... Done
Del tilix 1.5.6-1~webupd8~yakkety1 [449 kB]
Del tilix-common 1.5.6-1~webupd8~yakkety1 [174 kB]
Del libfreetype6 2.6.3-3ubuntu1.2 [336 kB]
Del terminix 1.5.6-1~webupd8~yakkety1 [13.7 kB]
######################################
Cleaned downloaded temporary packages
######################################
#########################
Done
#########################
```
我可以看见它如预期那样工作。同样也可以发现在 `/` 分区大概节省了 2GB 空间。
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 979M 0 979M 0% /dev
tmpfs 200M 6.4M 194M 4% /run
/dev/sda1 38G 18G 19G 49% /
tmpfs 999M 216K 999M 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 999M 0 999M 0% /sys/fs/cgroup
tmpfs 200M 112K 200M 1% /run/user/1000
```
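如果想用脚本对比运行前后 `/` 分区的可用空间,可以解析 `df -h` 的输出。下面是一个示意性的 Python 片段(假设容量以 G 为单位,如上面输出中的 `17G` 和 `19G`;`avail_gb` 是为演示虚构的函数):

```
def avail_gb(df_line):
    """从一行 `df -h` 输出中提取 Avail 列的值(假定单位为 G)。"""
    fields = df_line.split()
    return int(fields[3].rstrip("G"))  # 第 4 列是 Avail

before = "/dev/sda1       38G   19G   17G  54% /"
after  = "/dev/sda1       38G   18G   19G  49% /"
print(f"节省了大约 {avail_gb(after) - avail_gb(before)}GB 空间")
```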
---
via: <http://www.2daygeek.com/ucaresystem-system-update-and-maintenance-tool-for-ubuntu-linuxmint/>
作者:[2DAYGEEK](http://www.2daygeek.com/author/2daygeek/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,658 | Boot Repair Tool: 可以修复与启动相关的大部分问题 | http://www.linuxandubuntu.com/home/boot-repair-tool-repair-the-most-boot-related-problems | 2017-07-01T19:38:59 | [
"引导",
"GRUB"
] | https://linux.cn/article-8658-1.html | 
我们都碰到过启动相关的问题,并且大部分时候都是简单的 **GRUB** 上的问题。 有时候很多人会觉得、输入一段很长的命令或在论坛中搜索来找到解决方法太麻烦了。今天我要告诉你如何使用一个简单而轻巧的软件来解决大部分的启动相关的问题。这个工具就是著名的 **Boot Repair Tool** 。好了,话不多说,让我们开始吧。
### 如何在 Linux 中安装和使用启动修复工具
你需要一个你所安装的操作系统的现场版 USB 棒或 DVD,这是安装并使用这个应用的前提条件。引导到[现场版操作系统](http://www.linuxandubuntu.com/home/category/distros),打开终端并输入以下命令:
```
sudo add-apt-repository -y ppa:yannubuntu/boot-repair
sudo apt-get update
sudo apt-get install -y boot-repair && boot-repair
```

在安装结束以后,你可以从应用菜单或或其它你在系统上启动应用的地方启动你的修复工具。

你可以在菜单栏中看到 Boot Repair。

启动它,它就会开始进行一些扫描操作,我们只要等它自己结束就好了。

现在你会看到这个界面,这是基于之前扫描给出的建议修复。底部还有一个高级选项,你可以在其中进行各方面的设置。我建议没有经验的用户使用推荐的修复方式,因为它很简单,在大多数情况下都适用。

选择推荐的修复后,它将开始修复。等待它进一步处理。

你现在会看到一个指令界面。 现在是轮到我们操作的时候了。打开终端,逐个复制并粘贴其中高亮的命令到终端中。

命令完成后,你会看到一个上面提及的要求确认的界面。 使用箭头键或 Tab 键选择“Yes”,然后按回车键。 现在在 **启动修复工具** 界面中点击 “forward”。

现在你会看到这个界面。复制在那里提到的命令,并将其粘贴到终端中,然后按回车让其执行。需要一段时间,所以请耐心等待,它将下载 GRUB、内核或任何修复你的引导所需的内容。

现在你可能会看到一些选项用于配置安装 GRUB 的位置。 选择“yes”,然后按回车,你会看到上面的界面。使用空格键选择选项和按 TAB 以浏览选项。 选择并安装 GRUB 后,可以关闭终端。 现在在启动修复工具屏幕中选择 “forward” 选项。

现在它会做一些扫描操作,并且会询问你一些需要确认的一些选项。 每个选项都选择是即可。

它会显示一个成功的确认消息。 如果没有,并且显示失败的消息,则将生成链接。 转到该链接获取更多帮助。
成功后,重启你的电脑。当你重新启动时,你会看到 GRUB。现在你的电脑已成功修复。一切顺利。
### 引导修复的高级技巧
当我的电脑出现双引导启动画面时,我发现在修复时,它无法识别 [安装在另一个分区上的 Windows 7](http://www.linuxandubuntu.com/home/how-to-dual-boot-windows-7-and-ubuntu)。 这里有一个简单的提示来帮你解决这个问题。
打开终端并安装 os-prober。 它很简单,可以在软件中心或通过终端找到。
os-prober 可以帮助您识别安装在 PC 上的其他操作系统。

os-prober 安装完成后,在终端输入 `os-prober` 运行它。如果失败了,那么试着用 root 账号运行它。之后运行 `update-grub` 命令。这就是从 GRUB 中启动 Windows 所需要做的全部工作。

### 总结
以上就是全部的内容。现在你已经成功地修复了你的 PC。
---
via: <http://www.linuxandubuntu.com/home/boot-repair-tool-repair-the-most-boot-related-problems>
作者:[linuxandubuntu](http://www.linuxandubuntu.com/home/boot-repair-tool-repair-the-most-boot-related-problems) 译者:[chenxinlong](https://github.com/chenxinlong) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,659 | 安卓编年史(26):Android Wear | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/27/ | 2017-07-02T11:35:11 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8659-1.html | 
### Android Wear

2014 年 6 月安卓装备上了新元素:智能手表。谷歌在 2014 的 Google I/O 上发布了“[Android Wear](http://arstechnica.com/gadgets/2014/06/android-wear-review/)”,意图在你的手腕上装备一台小电脑。为[一块 1.6 英寸的屏幕](http://arstechnica.com/gadgets/2014/06/reviewing-android-wears-first-watches-sometimes-promising-often-frustrating/)进行设计意味着需要从头构思整个界面,所以谷歌精简了安卓 4.4 KitKat,并创建出了一个小巧的智能手表操作系统。Android Wear 设备不是独立的计算机。它们依赖于运行着配套 Android Wear 应用的安卓智能手机进行连接,认证,以及应用数据获取。

Android Wear 智能手表主要是一个通知机器。有了安卓 4.3 及以上版本内建的新 API,任何手机收到的通知都能同时显示在手表上——无需任何应用支持。通知操作按钮也带到了手表上,让用户可以从手表上与通知进行交互。在手表上清除一条通知会同时在手机上将其清除,用户无需拿出另一台设备就可以管理通知消息。每个手表还带有一个语音命令系统和一个麦克风,让用户可以抬腕唤醒手表,说“OK Google”,并发出一条命令。你还可以通过语音回复信息。手表上甚至还有一个装有原生手表应用的应用抽屉。

主屏幕上显示的自然是时间了,它还允许用户切换无数不同的表盘风格。通知界面采用了卡片风格设计。一个垂直滚动的通知列表会在手表上堆积,包括一些显示天气或交通信息的 Google Now 卡片。向左滑动能够清除一条通知,向右滑动会一次打开一个操作按钮。在主屏幕点击会打开语音命令系统,在哪里你可以激活设置或应用抽屉。除了这些以外最初的 Android Wear 的主屏幕没有多少内容。

2014 年 Android Wear 设备仅仅发售了 720000 部,从那之后我们就没有从软件或硬件上看到多少发展。时至今日,智能手表的销售[一年不如一年](http://www.businesswire.com/news/home/20161024005145/en/Smartwatch-Market-Declines-51.6-Quarter-Platforms-Vendors),甚至在 [Apple Watch](http://arstechnica.com/apple/2015/05/review-the-absolutely-optional-apple-watch-and-watch-os-1-0/) 发布后,也没人能够真正确定他们想要用这种腕上小电脑做什么。显然这种情况会一直持续到 2017 年 Android Wear 2.0 的发布。从 Moto 360 给市场带来圆形设备以来,我们还没从硬件厂商那里看到什么新玩意。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/27/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,660 | 安卓编年史(27):Android 5.0 Lollipop——有史以来最重要的安卓版本 | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/27/ | 2017-07-03T09:36:00 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8660-1.html | 
### Android 5.0 Lollipop——有史以来最重要的安卓版本

2014 年 11 月,谷歌发布了 [安卓 5.0 棒棒糖](http://arstechnica.com/gadgets/2014/11/android-5-0-lollipop-thoroughly-reviewed/)。很多系统更新都被称作“有史以来最大的”,但那些陈词滥调在安卓 5.0 棒棒糖的身上成真了。首先,它改变了安卓发布的方式。谷歌与这个版本的安卓一同开始了“开发者预览”计划,可以以 beta 的形式提前几个月看到新版本。由于版本代号和版本号现在被用作市场营销工具,最终的名称在 beta 期间始终保密,仅以字母指代。在 2014 年的 Google I/O 大会,谷歌宣布了“Android L 开发者预览”。
给开发者(以及全世界)四个月的时间接触学习这个版本肯定是有必要的。安卓 L 包含了大范围的变动,它们在系统中都是首次亮相,直到今天还能感受到这些变动。棒棒糖引入了 Material Design 设计理念,作为修改安卓每个界面的指南。一个新的运行时环境“ART”,它代表着驱动安卓应用引擎的完全革新。谷歌的“OK Google”语音命令系统升级到能在任意界面生效,在一些设备上它甚至可以在设备锁定的时候工作。多用户系统从平板来到了手机上。棒棒糖还铺设了 Android for Work 的基础,这是一个关注企业与个人双重应用的特性。
#### Material Design 给了安卓(以及谷歌的所有产品)一个统一形象
当 Matias Duarte 踏上 I/O 大会的舞台并宣布 Material Design 时,它公布的不仅仅是一个安卓的统一设计蓝图,还是谷歌的所有产品以及第三方应用生态的统一设计蓝图。这个想法是安卓、iOS、Chrome OS,以及 Web 版的谷歌产品,都应该有一致的界面风格、字体,以及行为。它们在不需要在不同尺寸的屏幕上有相同的布局,Material Design 提供了有一致行为的构建元素,它们能够基于屏幕尺寸自适应布局。
Duarte 和他的团队在蜂巢中尝试了“电子”风格审美,以及在果冻豆中的“卡片”主题风格,但 Material Design 最终成了被选中的设计系统,应用到谷歌的所有产品线。Material Design 已经超出了一个用户界面设计指南的概念,成为了谷歌作为一个公司的一致性代表。
Material Design 主要的比喻是“纸和墨”。所有的用户界面是“纸”的片层,漂浮于最底端平面之上。阴影用来提供界面的层次结构——用户界面的每个层在 Z 轴上拥有一个位置,并在其之下的元素上投射一个阴影。这是在安卓 4.1 的 Google Now 中使用的“卡片”风格更清晰的进化版本。“墨”是用来指代谷歌推荐开发者在界面的重要项目上使用的泼色。这个概念也引用了真实世界的物体概念,与 Windows 8 和 iOS 7 带来的反拟物化“不惜一切代价扁平化”的趋势相违。
动画也是一个重点,思想是任何东西都不应该“弹出”到屏幕上。组件应该滑进,缩小,或放大。“纸”表面和真实世界中的纸还有点不同,它们可以缩小,扩展,合并以及增大。为了让动画系统协同图像资源工作,阴影不像之前版本中那样整合进用户界面组件——实际上谷歌创造了一个实时的 3D 阴影系统,这样阴影就能够在这些动画和转换中正确渲染。

为了将 Material Design 带到谷歌的其它地方以及应用生态系统,谷歌创建并持续维护着[一个统一的设计指南](https://design.google.com/resources/),描述了一切设计的规范。那里有应该与不应该做的,基线、基线网格、色板、堆叠图解、字体、库、布局建议以及更多内容。谷歌甚至开始定期举办[针对设计的会议](https://design.google.com/events/),来听取设计者的声音以及给他们教学,还成立了 [Material Design 奖项](https://design.google.com/articles/material-design-awards/)。在发布 Material Design 后不久,Duarte 离开了安卓团队并成为了谷歌的 Material Design 总监,创建了一个完全专注于设计的部门。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/27/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,662 | 为树莓派 3 构建 64 位内核 | https://devsidestory.com/build-a-64-bit-kernel-for-your-raspberry-pi-3/ | 2017-07-03T08:53:00 | [
"树莓派",
"64位"
] | https://linux.cn/article-8662-1.html |
>
> 编辑:在写完这个这篇文章之后,我在树莓派 3 上基于 Debian 开始打造 64 位的系统。你可以[在这里找到](https://github.com/bamarni/pi64)。
>
>
>
**树莓派 3** 配有 Broadcom BCM2837 64 位 ARMv8 四核 Cortex A53 处理器,它是一个 **64 位 CPU**。如果你有一块,运行以下命令可能会让你感到惊讶:
```
uname -a
Linux raspberrypi 4.4.34-v7+ #930 SMP Wed Nov 23 15:20:41 GMT 2016 armv7l GNU/Linux
```
是的,这是一个 **32 位内核**。这是因为树莓派基金会还没有为官方的树莓派系统 Raspbian 提供 64 位版本。然而你可以构建一个,多亏了 [Electron752](https://github.com/Electron752) 提供的许多补丁。
### 构建内核
树莓派基金会维护着[它们自己的 Linux 内核分支](https://github.com/raspberrypi/linux),它为它们的设备特别裁剪过,同时定期地从上游合并。
我们将会遵照[这个页面](https://www.raspberrypi.org/documentation/linux/kernel/building.md)的指导来**构建一个 64 位内核**。
我们不能使用“本地构建”的方法,因为它需要一块 64 位的树莓派,这个我们明显还没有。因此我们需要**交叉编译**它,**Ubuntu** 是推荐的系统。我个人没有 Ubuntu,因此我在一个有 2 个 CPU 的 Ubuntu 16.04 Digital Ocean 实例上构建,这应该花费我 $0.03。如果你也想这么做,你可以通过[这个链接](https://m.do.co/c/8ef9c5832a9c)得到 $10 的免费额度。或者,你也可以使用 VirtualBox 中的 Ubuntu 虚拟机。
首先,我们需要一些**构建工具**以及 **aarch64 交叉编译器**:
```
apt-get update
apt-get install -y bc build-essential gcc-aarch64-linux-gnu git unzip
```
接着我们可以下载 **Linux 内核源码**:
```
git clone --depth=1 -b rpi-4.8.y https://github.com/raspberrypi/linux.git
```
进入到创建的 git 目录。另外你可以为你的内核添加额外的版本标签,可以通过编辑 `Makefile` 的开始几行完成:
```
VERSION = 4
PATCHLEVEL = 8
SUBLEVEL = 13
EXTRAVERSION = +bilal
```
为了**构建它**,运行下面的命令:
```
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bcmrpi3_defconfig
make -j 3 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
```
第一个命令应该很快完成。第二个则完全不同,我没有精确计时,但对我来说大概花了半个小时。请根据你的 CPU 数(nproc * 1.5)调整 `-j` 标志。
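如果想根据 CPU 核心数自动算出 `-j` 的值,可以用下面这个示意性的 Python 片段(`make_jobs` 是为演示虚构的函数,按上述经验法则取整):

```
import os

def make_jobs(nproc=None):
    """按照 nproc * 1.5 的经验法则计算 make 的 -j 并行数,至少为 1。"""
    if nproc is None:
        nproc = os.cpu_count() or 1
    return max(1, int(nproc * 1.5))

print(make_jobs(2))  # 在 2 CPU 的机器上输出:3
```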
### 选择一个 Linux 发行版
在内核编译的时候,我们可以开始准备它的 Linux 发行版了。在本教程中为了简单我使用 **Raspbian**,即使这是一个只有 32 位的发行版。
>
> 如果你想要一直用 64 位系统,你应该选一个有 aarch64 支持的发行版,Debian 有一个健壮的 [ARM64 移植版](https://wiki.debian.org/Arm64Port)。得到它基本有三种方式:
>
>
> * 下载一个预构建的根文件系统,这很可能会如页面中提到的那样给你一个过期的版本。
> * 如果你熟悉 debootstrap,可以用它构建你自己的根文件系统(这会比较棘手,因为它需要一些手工调整;它最初的目的是在已经运行的主机上进行 chroot,而不是为其他机器构建根文件系统)
> * 我建议使用 multistrap,这里有一个很好的教程:<http://free-electrons.com/blog/embdebian-with-multistrap/>
>
>
>
回到 Raspbian,我们现在可以下载官方系统,并开始准备了。
打开一个新的 shell 会话并运行下面的命令:
```
wget -O raspbian.zip https://downloads.raspberrypi.org/raspbian_lite_latest
unzip raspbian.zip
```
我们用下面的命令审查:
```
fdisk -l 2016-11-25-raspbian-jessie-lite.img
Disk 2016-11-25-raspbian-jessie-lite.img: 1.3 GiB, 1390411776 bytes, 2715648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x244b8248
Device Boot Start End Sectors Size Id Type
2016-11-25-raspbian-jessie-lite.img1 8192 137215 129024 63M c W95 FAT32 (LBA)
2016-11-25-raspbian-jessie-lite.img2 137216 2715647 2578432 1.2G 83 Linux
```
我们可以看到它有**两个分区**。第一个是**启动分区**,它主要包含了 bootloader、Linux 内核以及少量配置文件。第二个是**根分区**。
我们可以在我们的文件系统上**挂载这些分区**,从**根分区**开始:
```
mount -o loop,offset=70254592 2016-11-25-raspbian-jessie-lite.img /mnt
```
`offset` 取决于扇区大小(512):70254592 = 512 * 137216
接着是**启动分区**:
```
mount -o loop,offset=4194304,sizelimit=66060288 2016-11-25-raspbian-jessie-lite.img /mnt/boot
```
`offset`:4194304 = 512 * 8192,`sizelimit`:66060288 = 512 * 129024。
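这些偏移量都可以直接由 `fdisk -l` 输出中的扇区号算出。下面是一个示意性的 Python 片段,对应上面两个分区的计算(`mount_offsets` 是为演示虚构的函数):

```
SECTOR_SIZE = 512  # fdisk 输出中的扇区大小

def mount_offsets(start_sector, sector_count):
    """根据分区起始扇区和扇区数计算 mount 用的 offset 和 sizelimit(字节)。"""
    return start_sector * SECTOR_SIZE, sector_count * SECTOR_SIZE

# 根分区:起始扇区 137216,共 2578432 个扇区
print(mount_offsets(137216, 2578432)[0])  # offset:70254592
# 启动分区:起始扇区 8192,共 129024 个扇区
print(mount_offsets(8192, 129024))        # (4194304, 66060288)
```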
树莓派系统现在应该可以在 `/mnt` 中看到了。我们基本要完成了。
### 打包内核
内核编译完成后,最后一步包括**复制 Linux 内核**以及**设备树**到启动分区中:
```
cp arch/arm64/boot/Image /mnt/boot/kernel8.img
cp arch/arm64/boot/dts/broadcom/bcm2710-rpi-3-b.dtb /mnt/boot/
```
调整 `config.txt` :
```
echo "kernel=kernel8.img" >> /mnt/boot/config.txt
```
安装**内核模块** :
```
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=/mnt modules_install
umount /mnt/boot
umount /mnt
```
就是这样了,用于树莓派 3 的 **ARM64 Linux 内核** 诞生了!
现在你可以压缩镜像,通过 scp 下载下来,并按照标准的步骤放到你的 SD 卡中。
最后你会得到:
```
uname -a
Linux raspberrypi 4.8.13+bilal-v8+ #1 SMP Wed Dec 14 14:09:38 UTC 2016 aarch64 GNU/Linux
```
---
via: <https://devsidestory.com/build-a-64-bit-kernel-for-your-raspberry-pi-3/>
作者:[Bilal Amarni](http://devsidestory.com/about-me) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # Build a 64-bit kernel for your Raspberry Pi 3
EDIT : After writing this blog post I’ve started a 64-bit OS for the Raspberry Pi 3, based on Debian. You can find it[here].
The **Raspberry Pi 3** ships with a Broadcom BCM2837 64bit ARMv8 quad core Cortex A53 processor,
which is a **64-bit CPU**. If you own one of these, running the following command might surprise you :
```
$ uname -a
Linux raspberrypi 4.4.34-v7+ #930 SMP Wed Nov 23 15:20:41 GMT 2016 armv7l GNU/Linux
```
Yes, this is a **32-bit kernel**. The reason for this is that the Raspberry Pi foundation doesn’t yet provides a 64-bit
version of Raspbian, the official OS for Raspberry Pi. It is however possible to build one, thanks to the various
patches sent by [Electron752](https://github.com/Electron752).
# Build the Kernel
The Raspberry Pi foundation maintains [their own fork](https://github.com/raspberrypi/linux) of the Linux Kernel which
is especially tailored for their devices, while upstream gets merged regularly.
We’re going to adapt instructions from [that page](https://www.raspberrypi.org/documentation/linux/kernel/building.md)
to **build a 64-bit Kernel**.
We cannot use the “Local building” method as it’d require a 64-bit Raspberry Pi, which we obviously don’t have yet.
So we have to **cross-compile** it, **Ubuntu** is the recommended OS for this. I personally don’t have Ubuntu so
I’ll make my build on a 2 CPUs Ubuntu 16.04 Digital Ocean droplet, which should cost me $0.03. If you also want
to proceed like this, you can get $100 free credits through [this link](https://m.do.co/c/8ef9c5832a9c).
Alternatively, you could use a Ubuntu VM through Virtualbox for instance.
First, we’d need a few **build tools** and the **aarch64 cross-compiler** :
```
$ apt-get update
$ apt-get install -y bc build-essential gcc-aarch64-linux-gnu git unzip
```
Then we can download the **Linux Kernel sources** :
`$ git clone –depth=1 -b rpi-4.8.y https://github.com/raspberrypi/linux.git`
Enter now inside the created git directory. Optionally, you can add an extra version tag for your kernel. This is done by editing the beginning of the Makefile :
```
VERSION = 4
PATCHLEVEL = 8
SUBLEVEL = 13
EXTRAVERSION = +bilal
```
In order to **build it**, run the following commands :
```
$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bcmrpi3_defconfig
$ make -j 3 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
```
The first one should be pretty fast. For the second one it’s a whole different story, I haven’t timed it exactly but it was around 30 minutes for me. Make sure to adapt the -j flag depending on your number of CPUs (nproc * 1.5).
# Choose a Linux distribution
While the Kernel is being built, we can start preparing a Linux distribution for it. I’ll be using **Raspbian**
for simplicity in this tutorial, even though this is a 32-bit only distribution.
If you want to go 64-bit all the way you should pick up a distribution available in aarch64, Debian has a robust-[ARM64Port]. To grab it there are basically 3 options :download a pre-built root filesystem, this would most likely give you an outdated one as mentioned in that page-build your own with debootstrap if you’re familiar with it (otherwise it can be tricky as it requires some manual tweaks, the original purpose of it is to chroot from an already running host, not build a root filesystem for another machine).-the one I’d recommend, using multistrap, there seems to be a nice tutorial on this page :[http://free-electrons.com/blog/embdebian-with-multistrap/]
Back to Raspbian, we can now download the official OS and start preparing it.
Open a new shell session and run the following commands :
```
$ wget -O raspbian.zip https://downloads.raspberrypi.org/raspbian_lite_latest
$ unzip raspbian.zip
```
We can inspect it with the following command :
```
$ fdisk -l 2016-11-25-raspbian-jessie-lite.img
Disk 2016-11-25-raspbian-jessie-lite.img: 1.3 GiB, 1390411776 bytes, 2715648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x244b8248
Device Boot Start End Sectors Size Id Type
2016-11-25-raspbian-jessie-lite.img1 8192 137215 129024 63M c W95 FAT32 (LBA)
2016-11-25-raspbian-jessie-lite.img2 137216 2715647 2578432 1.2G 83 Linux
```
We can see it has **two partitions**. The first one is the **boot partition**, it mainly contains the bootloader,
the Linux Kernel and a few config files. The second one is the **root partition**.
We can **mount those partitions** on our filesystem, starting with the **root partition** :
`$ mount -o loop,offset=70254592 2016-11-25-raspbian-jessie-lite.img /mnt`
The offset depends on the sector size, which is 512 : 70254592 = 512 * 137216
Then the **boot partition** :
`$ mount -o loop,offset=4194304,sizelimit=66060288 2016-11-25-raspbian-jessie-lite.img /mnt/boot`
offset= 4194304 = 512 * 8192,sizelimit= 66060288 = 512 * 129024
The Raspbian OS can now be seen under **/mnt**. We’re almost there.
EDIT : I’ve later on found out about
[kpartx], which lets you mount image partitions without the hassle of dealing with offsets ([see this link]).
# Wrapping it up
Once the Kernel build is finished, the last steps involve **copying the Linux Kernel**
and the **device tree** to the boot partition :
```
$ cp arch/arm64/boot/Image /mnt/boot/kernel8.img
$ cp arch/arm64/boot/dts/broadcom/bcm2710-rpi-3-b.dtb /mnt/boot/
```
Tweaking **config.txt** :
`$ echo “kernel=kernel8.img” >> /mnt/boot/config.txt`
Installing **Kernel modules** :
```
$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu INSTALL_MOD_PATH=/mnt modules_install
$ umount /mnt/boot
$ umount /mnt
```
And… that’s it, a freshly baked **ARM64 Linux Kernel** for our Raspberry Pi 3!
You can now **compress the image**, **download it** (through [scp](https://linux.die.net/man/1/scp) for instance)
and follow the standard instructions to **put it on your SD card**.
Eventually you’ll get :
```
> uname -a
Linux raspberrypi 4.8.13+bilal-v8+ #1 SMP Wed Dec 14 14:09:38 UTC 2016 aarch64 GNU/Linux
``` |
8,663 | 开发一个 Linux 调试器(三):寄存器和内存 | https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/ | 2017-07-04T08:08:00 | [
"调试",
"调试器"
] | https://linux.cn/article-8663-1.html | 
上一篇博文中我们给调试器添加了一个简单的地址断点。这次,我们将添加读写寄存器和内存的功能,这将使我们能够使用我们的程序计数器、观察状态和改变程序的行为。
### 系列文章索引
随着后面文章的发布,这些链接会逐渐生效。
1. [准备环境](/article-8626-1.html)
2. [断点](/article-8645-1.html)
3. [寄存器和内存](https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/)
4. [Elves 和 dwarves](https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/)
5. [源码和信号](https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/)
6. [源码级逐步执行](https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/)
7. 源码级断点
8. 调用栈展开
9. 读取变量
10. 下一步
### 注册我们的寄存器
在我们真正读取任何寄存器之前,我们需要告诉调试器一些关于我们的目标平台的信息,这里是 x86\_64 平台。除了多组通用和专用目的寄存器,x86\_64 还提供浮点和向量寄存器。为了简化,我将跳过后两种寄存器,但是你如果喜欢的话也可以选择支持它们。x86\_64 也允许你像访问 32、16 或者 8 位寄存器那样访问一些 64 位寄存器,但我只会介绍 64 位寄存器。由于这些简化,对于每个寄存器我们只需要它的名称、它的 DWARF 寄存器编号以及 `ptrace` 返回结构体中的存储地址。我使用范围枚举引用这些寄存器,然后我列出了一个全局寄存器描述符数组,其中元素顺序和 `ptrace` 中寄存器结构体相同。
```
enum class reg {
rax, rbx, rcx, rdx,
rdi, rsi, rbp, rsp,
r8, r9, r10, r11,
r12, r13, r14, r15,
rip, rflags, cs,
orig_rax, fs_base,
gs_base,
fs, gs, ss, ds, es
};
constexpr std::size_t n_registers = 27;
struct reg_descriptor {
reg r;
int dwarf_r;
std::string name;
};
const std::array<reg_descriptor, n_registers> g_register_descriptors {{
{ reg::r15, 15, "r15" },
{ reg::r14, 14, "r14" },
{ reg::r13, 13, "r13" },
{ reg::r12, 12, "r12" },
{ reg::rbp, 6, "rbp" },
{ reg::rbx, 3, "rbx" },
{ reg::r11, 11, "r11" },
{ reg::r10, 10, "r10" },
{ reg::r9, 9, "r9" },
{ reg::r8, 8, "r8" },
{ reg::rax, 0, "rax" },
{ reg::rcx, 2, "rcx" },
{ reg::rdx, 1, "rdx" },
{ reg::rsi, 4, "rsi" },
{ reg::rdi, 5, "rdi" },
{ reg::orig_rax, -1, "orig_rax" },
{ reg::rip, -1, "rip" },
{ reg::cs, 51, "cs" },
{ reg::rflags, 49, "eflags" },
{ reg::rsp, 7, "rsp" },
{ reg::ss, 52, "ss" },
{ reg::fs_base, 58, "fs_base" },
{ reg::gs_base, 59, "gs_base" },
{ reg::ds, 53, "ds" },
{ reg::es, 50, "es" },
{ reg::fs, 54, "fs" },
{ reg::gs, 55, "gs" },
}};
```
如果你想自己看看的话,你通常可以在 `/usr/include/sys/user.h` 找到寄存器数据结构,另外 DWARF 寄存器编号取自 [System V x86\_64 ABI](https://www.uclibc.org/docs/psABI-x86_64.pdf)。
现在我们可以编写一堆函数来和寄存器交互。我们希望可以读取寄存器、写入数据、根据 DWARF 寄存器编号获取值,以及通过名称查找寄存器,反之类似。让我们先从实现 `get_register_value` 开始:
```
uint64_t get_register_value(pid_t pid, reg r) {
user_regs_struct regs;
ptrace(PTRACE_GETREGS, pid, nullptr, &regs);
//...
}
```
`ptrace` 使得我们可以轻易获得我们想要的数据。我们只需要构造一个 `user_regs_struct` 实例并把它和 `PTRACE_GETREGS` 请求传递给 `ptrace`。
现在根据要请求的寄存器,我们要读取 `regs`。我们可以写一个很大的 switch 语句,但由于我们 `g_register_descriptors` 表的布局顺序和 `user_regs_struct` 相同,我们只需要搜索寄存器描述符的索引,然后作为 `uint64_t` 数组访问 `user_regs_struct` 就行。(你也可以重新排序 `reg` 枚举变量,然后使用索引把它们转换为底层类型,但第一次我就使用这种方式编写,它能正常工作,我也就懒得改它了。)
```
auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
[r](auto&& rd) { return rd.r == r; });
return *(reinterpret_cast<uint64_t*>(&regs) + (it - begin(g_register_descriptors)));
```
到 `uint64_t` 的转换是安全的,因为 `user_regs_struct` 是一个标准布局类型,但我认为指针算术技术上是<ruby> 未定义的行为 <rt> undefined behavior </rt></ruby>。当前没有编译器会对此产生警告,我也懒得修改,但是如果你想保持最严格的正确性,那就写一个大的 switch 语句。
`set_register_value` 非常类似,我们只是写入该位置并在最后写回寄存器:
```
void set_register_value(pid_t pid, reg r, uint64_t value) {
user_regs_struct regs;
ptrace(PTRACE_GETREGS, pid, nullptr, &regs);
auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
[r](auto&& rd) { return rd.r == r; });
*(reinterpret_cast<uint64_t*>(&regs) + (it - begin(g_register_descriptors))) = value;
ptrace(PTRACE_SETREGS, pid, nullptr, &regs);
}
```
下一步是通过 DWARF 寄存器编号查找。这次我会真正检查一个错误条件以防我们得到一些奇怪的 DWARF 信息。
```
uint64_t get_register_value_from_dwarf_register (pid_t pid, unsigned regnum) {
auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
[regnum](auto&& rd) { return rd.dwarf_r == regnum; });
if (it == end(g_register_descriptors)) {
throw std::out_of_range{"Unknown dwarf register"};
}
return get_register_value(pid, it->r);
}
```
就快完成啦,现在我们已经有了寄存器名称查找:
```
std::string get_register_name(reg r) {
auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
[r](auto&& rd) { return rd.r == r; });
return it->name;
}
reg get_register_from_name(const std::string& name) {
auto it = std::find_if(begin(g_register_descriptors), end(g_register_descriptors),
[name](auto&& rd) { return rd.name == name; });
return it->r;
}
```
最后我们会添加一个简单的帮助函数用于导出所有寄存器的内容:
```
void debugger::dump_registers() {
for (const auto& rd : g_register_descriptors) {
std::cout << rd.name << " 0x"
<< std::setfill('0') << std::setw(16) << std::hex << get_register_value(m_pid, rd.r) << std::endl;
}
}
```
正如你看到的,iostreams 有非常精确的接口用于美观地输出十六进制数据(啊哈哈哈哈哈哈)。如果你喜欢你也可以通过 I/O 操纵器来摆脱这种混乱。
这些已经足够支持我们在调试器接下来的部分轻松地处理寄存器,所以我们现在可以把这些添加到我们的用户界面。
### 显示我们的寄存器
这里我们要做的就是给 `handle_command` 函数添加一个命令。通过下面的代码,用户可以输入 `register read rax`、 `register write rax 0x42` 以及类似的语句。
```
else if (is_prefix(command, "register")) {
if (is_prefix(args[1], "dump")) {
dump_registers();
}
else if (is_prefix(args[1], "read")) {
std::cout << get_register_value(m_pid, get_register_from_name(args[2])) << std::endl;
}
else if (is_prefix(args[1], "write")) {
std::string val {args[3], 2}; //assume 0xVAL
set_register_value(m_pid, get_register_from_name(args[2]), std::stol(val, 0, 16));
}
}
```
### 接下来做什么?
设置断点的时候我们已经读取和写入内存,因此我们只需要添加一些函数用于隐藏 `ptrace` 调用。
```
uint64_t debugger::read_memory(uint64_t address) {
return ptrace(PTRACE_PEEKDATA, m_pid, address, nullptr);
}
void debugger::write_memory(uint64_t address, uint64_t value) {
ptrace(PTRACE_POKEDATA, m_pid, address, value);
}
```
你可能想要添加支持一次读取或者写入多个字节,你可以在每次希望读取另一个字节时通过递增地址来实现。如果你需要的话,你也可以使用 [`process_vm_readv` 和 `process_vm_writev`](http://man7.org/linux/man-pages/man2/process_vm_readv.2.html) 或 `/proc/<pid>/mem` 代替 `ptrace`。
现在我们会给我们的用户界面添加命令:
```
else if(is_prefix(command, "memory")) {
std::string addr {args[2], 2}; //assume 0xADDRESS
if (is_prefix(args[1], "read")) {
std::cout << std::hex << read_memory(std::stol(addr, 0, 16)) << std::endl;
}
if (is_prefix(args[1], "write")) {
std::string val {args[3], 2}; //assume 0xVAL
write_memory(std::stol(addr, 0, 16), std::stol(val, 0, 16));
}
}
```
### 给 `continue_execution` 打补丁
在我们测试我们的更改之前,我们现在可以实现一个更健全的 `continue_execution` 版本。由于我们可以获取程序计数器,我们可以检查我们的断点映射来判断我们是否处于一个断点。如果是的话,我们可以停用断点并在继续之前跳过它。
为了清晰和简洁起见,首先我们要添加一些帮助函数:
```
uint64_t debugger::get_pc() {
return get_register_value(m_pid, reg::rip);
}
void debugger::set_pc(uint64_t pc) {
set_register_value(m_pid, reg::rip, pc);
}
```
然后我们可以编写函数来跳过断点:
```
void debugger::step_over_breakpoint() {
// - 1 because execution will go past the breakpoint
auto possible_breakpoint_location = get_pc() - 1;
if (m_breakpoints.count(possible_breakpoint_location)) {
auto& bp = m_breakpoints[possible_breakpoint_location];
if (bp.is_enabled()) {
auto previous_instruction_address = possible_breakpoint_location;
set_pc(previous_instruction_address);
bp.disable();
ptrace(PTRACE_SINGLESTEP, m_pid, nullptr, nullptr);
wait_for_signal();
bp.enable();
}
}
}
```
首先我们检查当前程序计数器的值处是否设置了一个断点。如果有,我们先把执行位置退回到断点之前,停用断点,单步执行原来的指令,再重新启用断点。
`wait_for_signal` 封装了我们常用的 `waitpid` 模式:
```
void debugger::wait_for_signal() {
int wait_status;
auto options = 0;
waitpid(m_pid, &wait_status, options);
}
```
最后我们像下面这样重写 `continue_execution`:
```
void debugger::continue_execution() {
step_over_breakpoint();
ptrace(PTRACE_CONT, m_pid, nullptr, nullptr);
wait_for_signal();
}
```
### 测试效果
现在我们可以读取和修改寄存器了,我们可以对我们的 hello world 程序做一些有意思的更改。类似第一次测试,再次尝试在 `call` 指令处设置断点然后从那里继续执行。你可以看到输出了 `Hello world`。现在是有趣的部分,在输出调用后设一个断点、继续、将 `call` 参数设置代码的地址写入程序计数器(`rip`)并继续。由于程序计数器操纵,你应该再次看到输出了 `Hello world`。为了以防你不确定在哪里设置断点,下面是我上一篇博文中的 `objdump` 输出:
```
0000000000400936 <main>:
400936: 55 push rbp
400937: 48 89 e5 mov rbp,rsp
40093a: be 35 0a 40 00 mov esi,0x400a35
40093f: bf 60 10 60 00 mov edi,0x601060
400944: e8 d7 fe ff ff call 400820 <_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc@plt>
400949: b8 00 00 00 00 mov eax,0x0
40094e: 5d pop rbp
40094f: c3 ret
```
你要将程序计数器移回 `0x40093a` 以便正确设置 `esi` 和 `edi` 寄存器。
在下一篇博客中,我们会第一次接触到 DWARF 信息并给我们的调试器添加一系列逐步调试的功能。之后,我们会有一个功能工具,它能逐步执行代码、在想要的地方设置断点、修改数据以及其它。一如以往,如果你有任何问题请留下你的评论!
你可以在[这里](https://github.com/TartanLlama/minidbg/tree/tut_registers)找到这篇博文的代码。
---
via: <https://blog.tartanllama.xyz/c++/2017/03/31/writing-a-linux-debugger-registers/>
作者:[TartanLlama](https://www.twitter.com/TartanLlama) 译者:[ictlyh](https://github.com/ictlyh) 校对:[jasminepeng](https://github.com/jasminepeng)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Redirecting…
Click here if you are not redirected. |
8,664 | arm64 服务器中的 Debian armhf 虚拟机 | https://www.collabora.com/news-and-blog/blog/2017/06/20/debian-armhf-vm-on-arm64-server/ | 2017-07-04T09:39:00 | [
"arm64",
"armhf"
] | https://linux.cn/article-8664-1.html | 
在 Collabora 公司,我们所做的许多工作之一就是为客户构建包括 32 位和 64 位 ARM 系统在内的各种架构的 [Debian](https://debian.org/) 衍生版。就像 Debian 做的那样,我们的 [OBS](http://openbuildservice.org/) 系统建立在原生系统而不是仿真器上。
幸运的是随着几年前 ARM 服务器系统的出现,为这些系统原生构建不再像以前那么痛苦了。对于 32 位的 ARM,我们一直依赖 [Calxeda](https://en.wikipedia.org/wiki/Calxeda) 刀片服务器,然而不幸的是 Calxeda 在不久前淘汰,硬件开始显露其年龄(尽管幸运的是 Debian Stretch 还支持它,因此至少软件还是新的)。
在 64 位 ARM 方面,我们运行在基于 Gigabyte MP30-AR1 的服务器上,该服务器可以运行 32 位的 ARM 代码(与之相反,比如基于 ThunderX 的服务器只能运行 64 位代码)。像这样在它们之上运行 armhf 虚拟机作为<ruby> 从构建服务器 <rp> ( </rp> <rt> build slaves </rt> <rp> ) </rp></ruby>似乎是一个很好的选择,但是设置起来比看上去要复杂一些。
第一个陷阱是 Debian 中没有标准的 bootloader 或者 boot 固件来启动 qemu 仿真的 “virt” 设备(我不想使用真实机器的仿真)。这也意味着在启动时客户机内没有任何东西会加载内核,也不会从客户机网络引导,这意味着需要直接的内核引导。
第二个陷阱是当前的 Debian Stretch 的 armhf 内核并不支持 qemu 虚拟机所提供的通用 PCI 主机控制器,这意味着客户机中不会出现存储器和网络。希望这会被尽快解决([Debian bug 864726](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864726)),并出现在 Stretch 更新中,那在之前需要使用 bug 报告中附带的补丁的自定义内核,但在这篇文章中我不想进一步说。
乐观地假设我们已经有了一个可用的内核,那么剩下的挑战就是妥善地管理直接内核加载。或者更具体地说:如何确保主机启动的是客户机里通过标准 apt 工具安装的内核,而不必在客户机/主机之间来回复制内核文件,这本质上归结为把客户机的 `/boot` 暴露给主机。我们选择的方案是使用 qemu 的 9pfs 支持来从主机共享一个文件夹,并将其用作客户机的 `/boot`。对于 9p 文件夹,似乎需要 “mapped” 安全模式,因为 “none” 模式会让 dpkg 出问题([Debian bug 864718](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864718))。
由于我们使用 libvirt 作为我们的虚拟机管理器,剩下的事情就是如何将它们组合到一起。
第一步是安装系统,基本和平常一样。可以直接引导进入由普通的 Stretch armhf netboot 安装程序提供的 `vmlinuz` 和 `initrd.gz`(下载到如 `/tmp` 中)。 设置整体来说很直接,会有一些小的调整:
* `/srv/armhf-vm-boot` 设置为 9p 共享文件夹(这个应该存在,并且由 libvirt-qemu 拥有),这之后会被用于共享 `/boot`。
* 内核参数中在 `root=` 后面加上 VM 中的 root 分区,根据你的情况调整。
* 镜像文件使用 virtio 总线,这似乎不是默认的。
除了这些调整,最后的示例命令与 virt-install 手册页中的相似。
```
virt-install --name armhf-vm --arch armv7l --memory 512 \
       --disk /srv/armhf-vm.img,bus=virtio \
       --filesystem /srv/armhf-vm-boot,virtio-boot,mode=mapped \
       --boot=kernel=/tmp/vmlinuz,initrd=/tmp/initrd.gz,kernel_args="console=ttyAMA0 root=/dev/vda1"
```
按照通常的方式运行安装。到最后安装程序可能会提示它不知道如何安装引导程序,这个没什么问题。只要在结束安装和重启之前,切换到 shell 并以某种方式将目标系统中的 `/boot/vmlinuz` 和 `/boot/initrd.img` 复制到主机中(比如 chroot 进入 `/target` 并在已安装的系统中使用 scp)。 这是必需的,因为安装程序不支持 9p,但是要启动系统,需要 initramfs 以及能够挂载根文件系统的模块,这由已安装的 initramfs 提供。这些完成后,安装就可以完成了。
接下来,引导已安装的系统。调整 libvirt 配置(比如使用 virsh 编辑并调整 xml)来使用从安装程序复制过来的内核以及 initramfs,而不只是使用它提供的。再次启动虚拟机,它应该就能愉快地进入安装的 Debian 系统中了。
要在客户机这一侧完成,`/boot` 应该移动到共享的 9pfs 中,`/boot` 的新 fstab 条目看上去应该这样:
```
virtio-boot /boot 9p trans=virtio,version=9p2000.L,x-systemd.automount 0 0
```
有了这一步,剩下的就只是把 `/boot` 中的文件挪到新的文件系统里,客户机这边就完成了(确保 `vmlinuz`/`initrd.img` 保持为符号链接)。内核可以照常升级,并且对主机可见。
这时对于主机端,有另外一个问题需要跨过,由于客户机使用 9p 映射安全模式,客户机的符号链接对主机而言将是普通的包含链接目标的文件。为了解决这个问题,我们在客户机启动前使用 libvirt qemu 的 hook 支持来设置合适的符号链接。作为一个例子,下面是我们最终使用的脚本(`/etc/libvirt/hooks/qemu`):
```
vm=$1
action=$2
bootdir=/srv/${vm}-boot
if [ ${action} != "prepare" ] ; then
exit 0
fi
if [ ! -d ${bootdir} ] ; then
exit 0
fi
ln -sf $(basename $(cat ${bootdir}/vmlinuz)) ${bootdir}/virtio-vmlinuz
ln -sf $(basename $(cat ${bootdir}/initrd.img)) ${bootdir}/virtio-initrd.img
```
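下面是一个可以在任意目录里安全试验的小演示(与真实的 libvirt 环境无关,文件名是随意取的示例),用来说明上面 hook 里为什么要先 `cat` 再 `basename`:在 9p 的 mapped 安全模式下,客户机里的符号链接在主机上表现为一个内容是链接目标名的普通文件。

```shell
#!/bin/sh
# 在临时目录里模拟 hook 的行为(文件名仅为示例)
bootdir=$(mktemp -d)
# mapped 模式下,客户机的符号链接在主机上是普通文件,内容为链接目标:
echo "vmlinuz-4.9.0-3-armmp"    > "$bootdir/vmlinuz"
echo "initrd.img-4.9.0-3-armmp" > "$bootdir/initrd.img"

# 与 /etc/libvirt/hooks/qemu 中相同的两行:
ln -sf "$(basename "$(cat "$bootdir/vmlinuz")")"    "$bootdir/virtio-vmlinuz"
ln -sf "$(basename "$(cat "$bootdir/initrd.img")")" "$bootdir/virtio-initrd.img"

readlink "$bootdir/virtio-vmlinuz"    # → vmlinuz-4.9.0-3-armmp
```

可以看到,`virtio-vmlinuz` 始终指向客户机当前声明的内核文件名,这就是 libvirt 配置里可以固定引用它的原因。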
有了这个,我们可以简单地定义 libvirt 使用 `/srv/${vm}-boot/virtio-{vmlinuz,initrd.img}` 作为机器的内核 / `initramfs`,并且当 VM 启动时,它会自动获取客户机安装的最新内核 / `initramfs`。
只剩下最后一个边缘情况了:在虚拟机内部执行重启时,libvirt 会把重启交给 qemu 自己处理,而不是重新启动 qemu。不幸的是,这意味着这样的重启不会加载新内核。目前我们通过配置 libvirt 在客户机重启时直接停止虚拟机来解决这个问题。由于我们通常只在(安全)升级内核时才重启 VM,这虽然有点繁琐,但避免了重启后加载的是旧的内核/`initramfs` 而不是预期的新版本。
---
via: <https://www.collabora.com/news-and-blog/blog/2017/06/20/debian-armhf-vm-on-arm64-server/>
作者:[Sjoerd Simons](https://www.collabora.com/news-and-blog/blog/2017/06/20/debian-armhf-vm-on-arm64-server/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Sjoerd Simons
June 20, 2017
At Collabora one of the many things we do is build [Debian](https://debian.org) derivatives/overlays for customers on a variety of architectures including 32 bit and 64 bit ARM systems. And just as Debian does, our [OBS](http://openbuildservice.org/) system builds on native systems rather than emulators.
Luckily with the advent of ARM server systems some years ago building natively for those systems has been a lot less painful than it used to be. For 32 bit ARM we've been relying on [Calxeda](https://en.wikipedia.org/wiki/Calxeda) blade servers, however Calxeda unfortunately tanked ages ago and the hardware is starting to show its age (though luckily Debian Stretch does support it properly, so at least the software is still fresh).
On the 64 bit ARM side, we're running on Gigabyte MP30-AR1 based servers which can run 32 bit arm code (As opposed to e.g. ThunderX based servers which can only run 64 bit code). As such running armhf VMs on them to act as build slaves seems a good choice, but setting that up is a bit more involved than it might appear.
The first pitfall is that there is no standard bootloader or boot firmware available in Debian to boot on the "virt" machine emulated by qemu (I didn't want to use an emulation of a real machine). That also means there is nothing to pick the kernel inside the guest at boot nor something which can e.g. have the guest network boot, which means direct kernel booting needs to be used.
The second pitfall was that the current Debian Stretch armhf kernel isn't built with support for the generic PCI host controller which the qemu virtual machine exposes, which means no storage and no network shows up in the guest. Hopefully that will get solved soonish ([Debian bug 864726](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864726)) and can be in a Stretch update, until then a custom kernel package is required using the patch attach to the bug report is required but I won't go into that any further in this post.
So on the happy assumption that we have a kernel that works, the challenge left is to nicely manage direct kernel loading. Or more specifically, how to ensure the hosts boots the kernel the guest has installed via the standard apt tools without having to copy kernels around between guest/host, which essentially comes down to exposing /boot from the guest to the host. The solution we picked is to use qemu's 9pfs support to share a folder from the host and use that as /boot of the guest. For the 9p folder the "mapped" security mode seems needed as the "none" mode seems to get confused by dpkg ([Debian bug 864718](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864718)).
As we're using libvirt as our virtual machine manager the remainder of how to glue it all together will be mostly specific to that.
First step is to install the system, mostly as normal. One can directly boot into the vmlinuz and initrd.gz provided by normal Stretch armhf netboot installer (downloaded into e.g /tmp). The setup overall is straight-forward with a few small tweaks:
Apart from those tweaks the resulting example command is similar to the one that can be found in the virt-install man-page:
virt-install --name armhf-vm --arch armv7l --memory 512 \
  --disk /srv/armhf-vm.img,bus=virtio \
  --filesystem /srv/armhf-vm-boot,virtio-boot,mode=mapped \
  --boot=kernel=/tmp/vmlinuz,initrd=/tmp/initrd.gz,kernel_args="console=ttyAMA0,root=/dev/vda1"
Run through the install as you'd normally would. Towards the end the installer will likely complain it can't figure out how to install a bootloader, which is fine. Just before ending the install/reboot, switch to the shell and copy the /boot/vmlinuz and /boot/initrd.img from the target system to the host in some fashion (e.g. chroot into /target and use scp from the installed system). This is required as the installer doesn't support 9p, but to boot the system an initramfs will be needed with the modules needed to mount the root fs, which is provided by the installed initramfs :). Once that's all moved around, the installer can be finished.
Next, booting the installed system. For that adjust the libvirt config (e.g. using virsh edit and tuning the xml) to use the kernel and initramfs copied from the installer rather then the installer ones. Spool up the VM again and it should happily boot into a freshly installed Debian system.
To finalize on the guest side /boot should be moved onto the shared 9pfs, the fstab entry for the new /boot should look something like:
virtio-boot /boot 9p trans=virtio,version=9p2000.L,x-systemd.automount 0 0
With that setup, it's just a matter of shuffling the files in /boot around to the new filesystem and the guest is done (make sure vmlinuz/initrd.img stay symlinks). Kernel upgrades will work as normal and visible to the host.
Now on the host side there is one extra hoop to jump through, as the guest uses the 9p mapped security model symlinks in the guest will be normal files on the host containing the symlink target. To resolve that one, we've used libvirt's qemu hook support to setup a proper symlink before the guest is started. Below is the script we ended up using as an example (/etc/libvirt/hooks/qemu):
vm=$1
action=$2
bootdir=/srv/${vm}-boot
if [ ${action} != "prepare" ] ; then
  exit 0
fi
if [ ! -d ${bootdir} ] ; then
  exit 0
fi
ln -sf $(basename $(cat ${bootdir}/vmlinuz)) ${bootdir}/virtio-vmlinuz
ln -sf $(basename $(cat ${bootdir}/initrd.img)) ${bootdir}/virtio-initrd.img
With that in place, we can simply point the libvirt definition to use /srv/${vm}-boot/virtio-{vmlinuz,initrd.img} as the kernel/initramfs for the machine and it'll automatically get the latest kernel/initramfs as installed by the guest when the VM is started.
Just one final rough edge remains, when doing reboot from the VM libvirt leaves qemu to handle that rather than restarting qemu. This unfortunately means a reboot won't pick up a new kernel if any, for now we've solved this by configuring libvirt to stop the VM on reboot instead. As we typically only reboot VMs on kernel (security) upgrades, while a bit tedious, this avoid rebooting with an older kernel/initramfs than intended
## Comments (1)
Vincent Sanders:
Dec 12, 2018 at 05:29 PM
Thanks for this sjored, saved me a load of time today. Couple of things:
Stretch netboot does not work but the buster kernel/initrd at http://ftp.debian.org/debian/dists/buster/main/installer-armhf/current/images/netboot/ does
the boot parameter should be
--boot=kernel=/tmp/vmlinuz,initrd=/tmp/initrd.gz,kernel_args="console=ttyAMA0 root=/dev/vda1"
i.e. with a space between the console and root lines else the reboot failed
|
8,665 | 极客漫画: 一场 Java 惊魂之旅 | http://turnoff.us/geek/a-java-nightmare/ | 2017-07-04T10:00:00 | [
"Java",
"漫画"
] | https://linux.cn/article-8665-1.html | 
周末带着儿子去了一个不一样的迪尼斯乐园——Java 大世界。
公园的门口,有两个 Java 吉祥物 Duke,只是左边那个好像是戴着发带的女 Duke。看见没有,公园大门最顶部的标志是著名的咖啡杯——基本上是公众所熟知的 Java 语言的形象了,相对来说,Duke 的知名度不如咖啡杯。公园门口的标语上写着“堆”满了乐趣(“堆”,即 heap,是一种 Java 等语言用于操作数据的内存结构)。
驶过道路上的标牌,上面分别写着:
* “麦格王国”,麦格——mageek,可能影射的是 [Majava.A 安全漏洞](https://www.f-secure.com/v-descs/exploit_java_majava_a.shtml),这是一个攻击 JRE 漏洞的恶意文件,此处嘲讽 Java 的安全缺陷。
* “热点中心”,热点——Hotspot,是 Java 一个较新的虚拟机。
* “极演播室”,极——JEE,即 Java EE,是 J2EE 的一个新名称,面向企业的一种应用框架/标准。
进入公园看到的大型广告牌上写着:只需要写出来——即可运行。这里隐喻 Java 的跨平台特性。
远处的街道上,有大大的 Duke 充气人偶,而孩子手中的气球上画着咖啡杯,这真是一个 Java 的世界啊。
根据指示牌,有通往:
* 小小新世界,“hello world” 是各种编程语言教学中通常学生们接触到的第一个程序例子。
* 拼图馆,拼图——Jigsaw,是 OpenJDK 项目下的一个子项目,旨在为 Java SE 平台设计实现一个标准的模块系统,并应用到该平台和 JDK 中。
* 汤姆猫小岛,汤姆猫——Tomcat,Tomcat 服务器是一个自由开源的 Web 应用服务器,用于 JSP 程序。
孩子兴奋极了,要去那个“神秘阀门”去玩。我觉得也可以去汤姆猫小岛看看,此处用 servlets 指代了服务项目。而 servlet 是一种 Java 应用。
正在这时,广播发出警告,“异常抛出”——Exception Thrown 是 Java 等语言用于处理异常情况的机制。得赶紧疏散,而可怜的娃还不明白发生了什么。跑吧!
结果爸爸被触手怪抓去了——触手怪的下面写着“内存溢出”,一定是因为这个才导致触手怪出现的!
眼看电锯就要切到脑袋上了——啊,吓死我了,原来是又一个 Java 的噩梦啊!
---
via: <http://turnoff.us/geek/a-java-nightmare/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,666 | 极客漫画:Bash on Windows | http://turnoff.us/geek/bash-on-windows/ | 2017-07-06T09:52:00 | [
"微软",
"漫画"
] | https://linux.cn/article-8666-1.html | 
微软发布了 Bash on Windows,旨在吸引开发者使用 Windows 平台。其高级程序经理 Rich Turner [呼吁](/article-7998-1.html) Linux 开发人员放弃 Linux ,转到 Windows 10 上来:“Windows 中的 Linux 子系统将给开发者们提供所有他们在 Linux 开发中必要的工具,而不用丧失 Windows 10 的优势。”
这是诱惑么?这是诱惑么?这是诱惑么?
---
via: <http://turnoff.us/geek/bash-on-windows/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,668 | Linux 上如何安装并切换最新版本的 Python 3.6 | https://mintguide.org/other/794-python-3-6-install-latest-version-into-linux-mint.html | 2017-07-05T13:06:00 | [
"Python"
] | https://linux.cn/article-8668-1.html | 
**Python** 是 Linux 中最流行的编程语言之一,很多工具和库都是用它编写的。除此之外,Python 在开发者中间很流行,因为它非常简单,实际上手也很容易。如果你安装了 Linux 系统,正在学习 **Python** 并想要使用最新的版本的话,那么这篇文章就是为你而写的。现在我已经安装好了 Linux Mint 18,默认安装的版本是 2.7 和 3.5。你可以用这个命令检查:
```
$ python -V
$ python2 -V
$ python3 -V
```
安装最新的 Python 3.6 到 Linux 中:
```
$ sudo add-apt-repository ppa:jonathonf/python-3.6
$ sudo apt update
$ sudo apt install python3.6
```
检查已安装的 Python 3.6 版本
```
$ python3.6 -V
```
**请注意**,旧版本仍然保留着,它仍然可以通过 `python3` 使用,新的版本则通过命令 `python3.6` 使用。如果你想让所有程序默认使用这个版本而不是 3.5,可以用一个叫 `update-alternatives` 的工具。但是如果你这时尝试列出可用的候选项,会得到错误:

这是正常的,你首先需要自己注册这些候选项,因为包的维护者并没有替你设置:
```
$ sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.5 1
$ sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 2
```
现在再次查看:
```
$ update-alternatives --list python3
```

现在我们选择需要的版本并按需切换。对于设置使用配置命令:
```
$ sudo update-alternatives --config python3
```

在提示符中,你需要指定默认使用的编号。
>
> 选择版本时要小心,不要去动 python(python2),只切换我说的 python3。许多系统工具是用 Python 2.7 编写的,如果你用错误的解释器版本去运行它们,它们可能就无法工作。
>
>
>
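另外一个规避风险的做法(这是补充建议,不是原文内容):不要依赖全局默认,在自己的脚本里显式调用需要的解释器版本。下面的演示用 `python3`(示例文件路径是随意取的),安装好新版本后把它换成 `python3.6` 即可:

```shell
#!/bin/sh
# 显式指定解释器,系统工具则继续使用它们期望的 python2/python3
cat > /tmp/which-python.py <<'EOF'
import sys
print("%d.%d" % sys.version_info[:2])
EOF

python3 /tmp/which-python.py      # 打印当前 python3 指向的版本,如 3.5 或 3.6
# python3.6 /tmp/which-python.py  # 安装 3.6 之后,可以这样显式用新版本运行同一个脚本
```

这样即使日后撤销了 `update-alternatives` 的切换,你自己的脚本也不会受影响。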
愿原力与你同在,好运!!!
---
via: <https://mintguide.org/other/794-python-3-6-install-latest-version-into-linux-mint.html>
作者: [Shekin](https://mintguide.org/user/Shekin/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
8,669 | Chromebook 如何双启动:Ubuntu 17.04 GNOME 和 Chrome OS | http://news.softpedia.com/news/how-to-install-ubuntu-17-04-with-gnome-on-your-chromebook-alongside-chrome-os-516624.shtml | 2017-07-05T13:24:55 | [
"Chromebook",
"双启动"
] | https://linux.cn/article-8669-1.html |
>
> 本教程使用著名的 Crouton 安装器
>
>
>

在去年我拿到我的 Acer Chromebook 11 (C740) 时,我写了一篇教程教你们如何[如何移除 Google Chrome OS 并根据你的选择安装一个 GNU/Linux 发行版](http://news.softpedia.com/news/here-s-how-to-install-any-linux-operating-system-on-your-chromebook-506212.shtml),但是很快我觉得没意思了。
因此几个月之后,我使用了 Google 在网站上提供的恢复镜像重新安装了 Chrome OS,我写入了 USB 并从 Chromebook 启动。最近,我又感到无聊了,因此我决定使用 Crouton 在我的 Acer Chromebook 11 (C740) 上安装 Ubuntu。
为什么?因为在一次会议中来的一位朋友带了他的笔记本,一台 Dell Chromebook 13,在上面他运行了 Ubuntu Linux 还有 Chrome OS。看他用快捷键在两个操作系统之间切换很酷,这让我也想这么做。
现在有很多教程解释如何安装不同的发行版 Ubuntu、Debian 或者 Kali Linux(这些是当前 Crouton 安装器支持的 GNU/Linux 发行版),但是我想要运行最新的 Ubuntu,当前是 Ubuntu 17.04 (Zesty Zapus),它有 GNOME 3.24 桌面环境。
### 如何启用开发者模式并下载 Crouton
当我询问我的朋友他在他的 Chromebook 上运行的是什么 Ubuntu 时,回答是 Ubuntu 14.04 LTS (Trusty Tahr),我不得不承认这让我有点失望。我回家后立刻拿出我的 Chromebook 并尝试看看我是否能运行带有桌面环境的 Ubuntu 17.04。
我做的第一件事情是将我的 Chromebook 变成开发者模式。为此,你需要关闭你的 Chromebook 但不关闭翻盖,接着同时按住 `ESC`、`Refresh` 和 `Power` 键几秒直到进入恢复模式,这会擦除 Chromebook 上的所有数据。
进入开发者模式会花费你几分钟,所以耐心点。当准备完成后,你需要登录你的 Google 账户,并设置各种东西,比如壁纸或者头像之类。现在你进入开发者模式了,在你的 Chromebook 中访问这篇教程并[下载 Crouton](https://goo.gl/fd3zc),它会保存在下载文件夹中。
### 如何使用 Crouton 安装带有 GNOME 3.24 的 Ubuntu 17.04
现在打开 Google Chrome 并按下 `CTRL+ALT+T` 打开 Chrome OS 的终端模拟器,它叫做 crosh。在命令提示符中,输入 `shell` 命令,按下回车进入 Linux shell。让我们看看 Crouton 能为我们做什么。
这有两个命令(下面列出的),你可以运行它们查看 Crouton 支持的 GNU/Linux 发行版和桌面环境,并且我可以告诉你这可以安装 Debian 7 “Wheezy”、Debian 8 “Jessie”、Debian 9 “Stretch” 和 Debian Sid、Kali Linux 滚动版还有 Ubuntu 12.04 LTS、Ubuntu 14.04 LTS 和 Ubuntu 16.04 LTS 等等。
```
sh -e ~/Downloads/crouton -r list   # 会列出支持的发行版
sh -e ~/Downloads/crouton -t list   # 会列出支持的桌面
```
Crouton 也会列出一系列 Debian、Kali 和 Ubuntu 的旧发行版,但它们在上游被中止支持了(这些的名字后面都被标记了感叹号),并且因为安全风险你不应该安装它们,还有两个尚未支持的 Ubuntu 版本,Ubuntu 16.10 和 Ubuntu 17.04。
Crouton 开发者说这些“不支持”的 Ubuntu 版本用一些方法可能也可以使用,但是我试了一下并使用下面的命令安装了带有 GNOME 3.24 桌面环境(没有额外的应用)的 Ubuntu 17.04 (Zesty Zapus)。我使用 `-e` 参数来加密安装。
```
sh -e ~/Downloads/crouton -e -r zesty -t gnome
```
将所有的都下载下来并安装在 Crouton 在你的 Chromebook 中创建的 chroot 环境中会花费一些时间,因此再说一次,请耐心。当一切完成后,你会知道,并且你能通过在 shell 中运行下面的命令启动 Ubuntu 17.04。
```
sudo startgnome
```
瞧!我在我的旧 Acer Chromebook 11 (C740) 上运行着带有 GNOME 3.24 桌面环境的 Ubuntu 17.04 (Zesty Zapus),这笔记本 Google 还尚未支持 Android 程序。最棒的部分是我能够使用 `CTRL+ALT+Shift`+`Back`/`Forward` 键盘快捷键快速在 Chrome OS 和 Ubuntu 17.04 之间切换。

作为这篇笔记的结尾,我想提醒你注意,由于 Chromebook 现在始终处于开发人员模式,所以当电池电量耗尽、打开或关闭设备时,你会一直看到一个警告,显示 “OS verification is OFF - Press SPACE to re-enable”,当你看到它时,请按 `CTRL+D`。玩得开心!


---
via: <http://news.softpedia.com/news/how-to-install-ubuntu-17-04-with-gnome-on-your-chromebook-alongside-chrome-os-516624.shtml>
作者:[Marius Nestor](http://news.softpedia.com/editors/browse/marius-nestor) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,671 | 如何关闭一个不活动的或者空闲的 SSH 会话 | http://www.2daygeek.com/kill-inactive-idle-ssh-sessions/ | 2017-07-06T09:13:00 | [
"SSH"
] | https://linux.cn/article-8671-1.html | 
让我们来假设一下,当你通过 ssh 在服务器上工作时,由于网络、电源或者是本地 PC 重启等原因会导致你的会话连接断开。
你可能会再次登录服务器继续工作也可能不会,但是你始终会留下之前没有关闭的 ssh 会话。
如何关闭一个不活动的 ssh 会话?首先使用 `w` 命令来识别出不活动或者是空闲的 ssh 会话,接着使用 `pstree` 命令来获取空闲会话的 PID,最后就是使用 `kill` 命令来关闭会话了。
* 建议阅读:[Mosh(Mobile Shell)- 最好的SSH 远程连接替代选项](/article-6262-1.html)
### 如何识别不活动的或者是空闲的 SSH 会话
登录系统通过 `w` 命令来查看当前有多少用户登录着。如果你识别出了自己的会话连接就可以记下其它不活动或者是空闲的 ssh 会话去关闭。
在我当前的例子中,能看见两个用户登录着,其中一个是我当前在执行 `w` 命令的 ssh 会话另一个就是之前的空闲会话了。
```
# w
10:36:39 up 26 days, 20:29, 2 users, load average: 0.00, 0.02, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 219.91.219.14 10:34 28.00s 0.00s 0.00s -bash
root pts/2 219.91.219.14 10:36 0.00s 0.00s 0.00s w
```
### 如何获取 SSH 会话的 PID
为了关闭空闲的 ssh 会话,我们需要空闲会话进程的父进程的 PID。我们可以执行 `pstree` 命令来查看包括了所有进程的树状图,以便获取父进程的 pid。
你会获得与下方示例中相似的输出。`pstree` 命令的输出会比这个多得多,为了更好的理解我删去了许多不相关的内容。
```
# pstree -p
init(1)-+-abrtd(2131)
|-acpid(1958)
|-httpd(32413)-+-httpd(32442)
|
|-mingetty(2198)
|-mysqld_safe(24298)---mysqld(24376)-+-{mysqld}(24378)
|
|-php(32456)-+-php(32457)
|
|-sshd(2023)-+-sshd(10132)---bash(10136)
| `-sshd(10199)---bash(10208)---pstree(10226)
|-udevd(774)-+-udevd(2191)
`-udevd(27282)
```
从上方的输出中,你可以看到 `sshd` 进程与分支的树形图。`sshd` 的主进程是 `sshd(2023)`,另两个分支分别为 `sshd(10132)` 和 `sshd(10199)`。
跟我在文章开始讲的相同,其中一个是我新的会话连接 `sshd(10199)` 它展示了我正在执行的 `pstree` 命令,因此空闲会话是另一个进程为 `sshd(10132)`。
* 建议阅读:[如何通过标准的网页浏览器来接入 Secure Shell (SSH) 服务器](http://www.2daygeek.com/shellinabox-web-based-ssh-terminal-to-access-remote-linux-servers/)
* 建议阅读:[PSSH - 在多台 Linux 服务器上并行的执行命令](http://www.2daygeek.com/pssh-parallel-ssh-run-execute-commands-on-multiple-linux-servers/)
### 如何关闭空闲 SSH 会话
我们已经获得了有关空闲会话的所有信息。那么,就让我们来使用 `kill` 命令来关闭空闲会话。请确认你将下方的 PID 替换成了你服务器上的空闲会话 PID。
```
# kill -9 10132
```
(LCTT 译注:这里介绍另一个工具 `pkill`,使用 `pkill -t pts/0 -kill` 就可以关闭会话, debian 8 下可用,有些版本似乎需要更改 `-kill` 的位置)
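上面 `w`、`pstree`、`kill` 的手工流程也可以用一个小脚本半自动化。下面是一个假设性的辅助函数(非本文原有内容),从 `w -h` 风格的输出中筛选出空闲超过阈值(分钟)的会话 TTY;列的位置以本文示例中的输出(USER TTY FROM LOGIN@ IDLE …)为准,不同发行版 `w` 的列可能不同,使用前请先核对:

```shell
#!/bin/sh
# idle_ttys <阈值分钟数>:从标准输入(w -h 的输出)中打印空闲超过阈值的 TTY
idle_ttys() {
    awk -v t="$1" '
        {
            # IDLE 列($5)形如 28.00s、5:02(分:秒)或 2days,这里只做粗略解析
            idle = $5; mins = 0
            if (idle ~ /days/)    { sub(/days/, "", idle); mins = idle * 1440 }
            else if (idle ~ /s$/) { mins = 0 }
            else if (idle ~ /:/)  { split(idle, a, ":"); mins = a[1] }
            if (mins + 0 >= t) print $2
        }'
}

# 真实用法:w -h | idle_ttys 30   # 先列出空闲 30 分钟以上的 TTY,人工确认后再逐个处理
printf 'root pts/0 1.2.3.4 10:34 5:02 0.00s 0.00s -bash\n' | idle_ttys 5   # → pts/0
```

拿到 TTY 之后,仍然建议像正文那样用 `pstree -p` 确认对应的 `sshd` 进程后再 `kill`,避免误杀自己的会话。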
### 再次查看空闲会话是否已经被关闭
再次使用 `w` 命令来查看空闲会话是否已经被关闭。没错,只有那个我自己的当前会话还在,因此那个空闲会话已经被关闭了。
```
# w
10:40:18 up 26 days, 20:33, 1 user, load average: 0.11, 0.04, 0.01
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/2 219.91.219.14 10:36 0.00s 0.00s 0.00s w
```
* 建议阅读:[rtop - 一个好用的通过 SSH 来监控远程服务器的工具](/article-8199-1.html)
* 建议阅读:[DSH - 同时在多台 Linux 服务器上执行命令](http://www.2daygeek.com/dsh-run-execute-shell-commands-on-multiple-linux-servers-at-once/)
### 再次使用 pstree 命令检查
再次使用 `pstree` 命令确认。是的,只有那个我自己的 ssh 会话还在。
```
# pstree -p
init(1)-+-abrtd(2131)
|-acpid(1958)
|
|-httpd(32413)-+-httpd(32442)
|
|-mingetty(2198)
|-mysqld_safe(24298)---mysqld(24376)-+-{mysqld}(24378)
|
|-php(32456)-+-php(32457)
|
|-sshd(2023)---sshd(10199)---bash(10208)---pstree(10431)
|-udevd(774)-+-udevd(2191)
`-udevd(27282)
```
---
via: <http://www.2daygeek.com/kill-inactive-idle-ssh-sessions/>
作者:[Magesh Maruthamuthu](http://www.2daygeek.com/author/magesh/) 译者:[wcnnbdk1](https://github.com/wcnnbdk1) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,672 | FreeDOS: 已经积极开发了 23 年的 DOS | https://opensource.com/article/17/6/freedos-still-cool-today | 2017-07-06T09:37:10 | [
"FreeDOS",
"DOS"
] | https://linux.cn/article-8672-1.html |
>
> 23 年后,FreeDOS 仍在积极开发,并将 MS-DOS 的生命延长了几年
>
>
>

在 90 年代早期,我是一位 DOS 的 “资深用户”。我用 DOS 做任何事,甚至写我自己的工具来扩展 DOS 命令行。当然,我们有 Microsoft Windows,但是如果你还记得当时的计算机,Windows 3.1 并不是那么好,我更喜欢在 DOS 中工作。
你可能会理解,在 1994 年,当微软(在科技杂志的采访中)宣布下一个版本的 Windows 将会取消 MS-DOS 时,我感到有些困惑和不安。我想:“如果 Windows 3.2 或 4.0 看起来和 Windows 3.1 差不多,我可不想用它。” 我四处寻找其他选择,并认定:如果 DOS 还要继续存在下去,就需要有人创建一个在 MS-DOS 消失之后人人都能使用的 DOS。
因此在 1994 年 6 月 29 日,我在 Usenet 讨论组写了一个信息,宣布了新的 “free DOS” 项目:
>
> 几个月前,我发表了一篇关于启动公共领域版本的 DOS 的文章。当时对此的普遍支持是很强烈的,并且很多人同意这个声明,“开始编写!”
>
>
> 所以,我要……
>
>
> 宣布首次尝试制作 PD-DOS。我写了一份描述这样一个项目的目标和工作纲要的“清单”,以及一份“任务清单”,展示了需要编写些什么。我会在这里发贴,让我们进行讨论。
>
>
>
今天,已经是 23 年之后了,[FreeDOS](http://www.freedos.org/) 仍在变得更强大!
我们继续发布有新功能和特性的版本。我们的 FreeDOS 1.2 发布于 2016 年 12 月 25 日,这证明很多人喜欢使用和在 FreeDOS 上工作。当我回顾我们的历史,有一个你应该知道的关于 FreeDOS 的很酷的事实列表:
### 可以立即开始的软件开发
在原始的 DOS 中,很难做任何编程。DOS 提供了一个简单的 BASIC 解释器,一些用户可以用 DEBUG 做些精巧的事情,但是你不能在 DOS 中做真正编程的事情。在 FreeDOS 中,有许多不同的工具做软件开发:编译器、汇编器、调试器、解释器和编写脚本。在你安装 FreeDOS 之后,你可以立即用 C、汇编、Pascal、Perl 和几种其他语言编写代码。
### 浏览 web
DOS 是一个老式系统,并且原本不支持开箱即用的网络。通常,你必须安装硬件的设备驱动程序才能连接到网络,一般是像 IPX 这样的简单网络。只有很少的系统会支持 TCP/IP。
在 FreeDOS 中,我们不仅包含了 TCP/IP 网络栈,我们还包含了让你浏览 web 的工具和程序。使用 Dillo 作为图形 web 浏览体验,或则使用 Lynx 以纯文本形式浏览 web。如果你只想要抓取 HTML 代码并自己操作,使用 Wget 或者 Curl。
### 玩很棒的 DOS 游戏
我们知道很多的人安装 FreeDOS 来玩经典的 DOS 游戏,运行古老的商业程序或者做嵌入式开发。使用 FreeDOS 的许多人只是拿来玩游戏,那对我们来说是件很酷的事,因为一个游戏很老并不意味着它很无趣。DOS 有许多很棒的游戏!安装你最喜欢的经典游戏,你会玩得很开心。
因为有如此多的人使用 FreeDOS 来玩游戏,我们现在包含了不同的 DOS 游戏。FreeDOS 1.2 包含了第一人称射击游戏像 FreeDOOM、街机射击像 Kiloblaster、飞行模拟器像 Vertigo 等等。我们目标是为每人提供一些东西。
### FreeDOS 现在已经比 MS-DOS 活的更久了
微软在 1981 年 8 月发布了 MS-DOS 1.0。13 年之后,微软在 1995 年 8 月发布 Windows 95 后抛弃了 MS-DOS,虽然 MS-DOS 直到 2000 年 9 月之前一直存在。总的来说,MS-DOS 是一件已经存在了 19 年的东西。
我们在 1994 年 6 月宣布了 FreeDOS,并且在同年的 9 月发布了第一个 Alpha 版本。因此 FreeDOS 已经大约有 23 年了,比微软的 MS-DOS 还多了几年。确实我们在 FreeDOS 上努力的时间已经比 MS-DOS 更长了。而 FreeDOS 还将继续保持强大。
FreeDOS 与其他 DOS 另外一个重要的不同是*它仍在开发中*。我们有一个积极的开发者社区,并一直在寻找新人来帮助。请加入社区并帮助构造新的 FreeDOS 版本吧。
(题图: FreeDOS)
---
作者简介:
Jim Hall - 我是 FreeDOS 项目的创始人和协调者。我还担任 GNOME 基金董事会董事。在工作中,我是明尼苏达州拉姆齐县的首席信息官。在业余时间,我致力于开源软件的可用性,并通过 Outreachy(之前针对妇女的 GNOME Outreach 项目)指导 GNOME 中的可用性测试。
---
via: <https://opensource.com/article/17/6/freedos-still-cool-today>
作者:[Jim Hall](https://opensource.com/users/jim-hall) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In the early 1990s, I was a DOS "power user." I used DOS for everything and even wrote my own tools to extend the DOS command line. Sure, we had Microsoft Windows, but if you remember what computing looked like at the time, Windows 3.1 was not that great. I preferred working in DOS.
You might understand that I was a little confused and upset in 1994 when Microsoft announced (via interviews in tech magazines) that the next version of Windows would do away with MS-DOS. I thought, "If Windows 3.2 or 4.0 looks anything like Windows 3.1, I want nothing to do with that." I looked around for options and decided that if DOS was going to continue, someone would have to create a DOS that everyone could use when MS-DOS went away.
So it was on June 29, 1994, that I wrote a message to a Usenet discussion group, announcing a new "free DOS" project:
A few months ago, I posted articles relating to starting a public domain version of DOS. The general support for this at the time was strong, and many people agreed with the statement, "start writing!"
So, I have...
Announcing the first effort to produce a PD-DOS. I have written up a "manifest" describing the goals of such a project and an outline of the work, as well as a "task list" that shows exactly what needs to be written. I'll post those here, and let discussion follow.
Today, it's 23 years later, and [FreeDOS](http://www.freedos.org/) is still going strong!
We continue to make new releases with new functionality and features. Our FreeDOS 1.2 distribution, released on December 25, 2016, is a testament to how many people enjoy using and working on FreeDOS. As I look back over our history, there's a short list of cool facts about FreeDOS you should know:
### Software development right out of the box
With the original DOS, it was difficult to do any coding. DOS provided a simple BASIC interpreter, and some people could do neat things with DEBUG, but you couldn’t really do much programming in DOS. With FreeDOS, there are a bunch of different tools to do software development: compilers, assemblers, debuggers, interpreters, and scripting. After you install FreeDOS, you can immediately start writing code in C, Assembly, Pascal, Perl, and several other languages.
### Browse the web
DOS is an old system and the original didn't support networking out of the box. Typically, you had to install device drivers for your hardware to connect to a network, which was usually a simple network like IPX. Few systems supported TCP/IP.
With FreeDOS, not only do we include a TCP/IP networking stack, we include tools and programs that let you browse the web. Use Dillo for a graphical web browser experience, or Lynx to view the web as formatted plain text. If you just want to grab the HTML code and manipulate it yourself, use Wget or Curl.
### Play great DOS games
We know that many people install FreeDOS today to play the classic DOS games, run legacy business applications, or do embedded development. A lot of those people use FreeDOS just to play games, and that’s cool with us. Just because a game is old doesn’t mean it's boring. DOS had a lot of great games! Install your favorite classic game and have fun.
Because so many people use FreeDOS to play games, we now include a variety of DOS games. The FreeDOS 1.2 distribution includes first-person shooters like FreeDOOM, arcade shooters like Kiloblaster, flight simulators like Vertigo, and others. We aimed to include something for everyone.
### FreeDOS has now been around longer than MS-DOS
Microsoft released MS-DOS 1.0 in August 1981. And thirteen years later, Microsoft effectively deprecated MS-DOS with the release of Windows 95 in August 1995, although MS-DOS was still around in some form until September 2000. In total, MS-DOS was a thing for nineteen years.
We announced FreeDOS in June 1994 and made our first Alpha release in September that same year. So FreeDOS has been around for 23 years, edging out Microsoft’s MS-DOS by a few years. Truly, we've been working on FreeDOS for longer than MS-DOS was a thing. FreeDOS has staying power.
Another important way that FreeDOS is different from other versions of DOS is that *it is still being developed*. We have an active developer community and are always looking for new people to help out. Join the community and help build the next version of FreeDOS.
|
8,673 | Samba 系列(十一):如何配置并集成 iRedMail 服务到 Samba4 AD DC 中 | https://www.tecmint.com/integrate-iredmail-to-samba4-ad-dc-on-centos-7/ | 2017-07-06T11:11:33 | [
"Samba",
"iRedMail",
"邮件"
] | https://linux.cn/article-8673-1.html | 
在本教程中,将学习如何修改提供邮件服务的 iRedMail 主要守护进程,相应地,[Postfix 用于邮件传输,Dovecot 将邮件传送到帐户邮箱](https://www.tecmint.com/setup-postfix-mail-server-and-dovecot-with-mariadb-in-centos/),以便将它们集成到 [Samba4 AD 域控制器](/article-8065-1.html)中。
将 iRedMail 集成到 Samba4 AD DC 中,你将得到以下好处:通过 Samba AD DC 得到用户身份验证、管理和状态,在 AD 组和 Roundcube 中的全局 LDAP 地址簿的帮助下创建邮件列表。
### 要求
1. [在 CentOS 7 中为 Samba4 AD 集成安装 iRedMail](/article-8567-1.html)
### 第一步:准备 iRedMail 系统用于 Samba4 AD 集成
1、 在第一步中,你需要[为你的机器分配一个静态的 IP 地址](/article-3977-1.html)以防你使用的是由 DHCP 服务器提供的动态 IP 地址。
运行 [ifconfig 命令](https://www.tecmint.com/ifconfig-command-examples/)列出你的机器网络接口名,并对正确的网卡发出 [nmtui-edit](/article-5410-1.html) 命令,使用自定义 IP 设置编辑正确的网络接口。
root 权限运行 nmtui-edit 命令。
```
# ifconfig
# nmtui-edit eno16777736
```

*找出网络接口名*
2、 在打开要编辑的网络接口后,添加正确的静态 IP 设置,确保添加了 Samba4 AD DC 的 DNS 服务器 IP 地址以及你的域的名字,以便从机器查询 realm。使用以下截图作为指导。

*配置网络设置*
3、 在你完成配置网络接口后,重启网络进程使更改生效,并对域名以及 samba 4 域控制器的 FQDN 使用 ping 命令测试。
```
# systemctl restart network.service
# cat /etc/resolv.conf # 验证 DNS 解析器配置是否对域解析使用了正确的 DNS 服务器 IP
# ping -c2 tecmint.lan # ping 域名
# ping -c2 adc1 # ping 第一个 AD DC
# ping -c2 adc2 # Ping 第二个 AD DC
```

*验证网络 DNS 配置*
4、 接下来,用下面的命令安装 `ntpdate` 包,与域控制器同步时间,并请求 samba4 机器的 NTP 服务器:
```
# yum install ntpdate
 
# ntpdate -qu tecmint.lan # 查询域内的 NTP 服务器
# ntpdate tecmint.lan # 与域同步时间
```

*与 Samba NTP 服务器同步时间*
5、 你或许想要本地时间自动与 samba AD 时间服务器同步。为了实现这个设置,通过运行 [crontab -e 命令](https://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/)并追加下面的行添加一条计划任务。
```
0 */1 * * * /usr/sbin/ntpdate tecmint.lan > /var/log/ntpdate.lan 2>&1
```

*自动与 Samba NTP 同步时间*
### 第二步:为 iRedMail 集成准备 Samba4 AD DC
6、 现在,如[这篇](/article-8097-1.html)教程所述进入一台[安装了 RSAT 工具的 Windows 机器](/article-8097-1.html)管理 Samba4 AD。
打开 DNS 管理器,转到你的域转发查找区并添加新的 A 记录、MX记录还有 PTR 记录指向你的 iRedMail 系统的 IP 地址。使用以下截图作为指导。
添加一条 A 记录(相应地用 iRedMail 机器的名字和 IP 替换)。

*为 iRedMail 创建 DNS A 记录*
添加 MX 记录(将子域留空,优先级为 10)。

*为 iRedMail 创建 DNS MX 记录*
在反向查找区域(相应地替换 iRedMail 服务器的 IP 地址)添加 PTR 记录。如果你尚未为域控制器配置反向区域,请阅读以下教程:[从 Windows 管理 Samba4 DNS 组策略](/article-8258-1.html)

*为 iRedMail 创建 DNS PTR 记录*
7、添加了使邮件服务器正常运行的基本 DNS 记录后,请进入 iRedMail 机器,安装 bind-utils 软件包,并按如下建议查询新添加的邮件记录。
Samba4 AD DC DNS 应该会响应之前添加的 DNS 记录。
```
# yum install bind-utils
# host tecmint.lan
# host mail.tecmint.lan
# host 192.168.1.245
```

*安装 Bind 并查询邮件记录*
在一台 Windows 机器上,打开命令行窗口并使用 [nslookup 命令](https://www.tecmint.com/8-linux-nslookup-commands-to-troubleshoot-dns-domain-name-server/)查询上面的邮件服务器记录。
8、 作为最后一个先决要求,在 Samba4 AD DC 中创建一个具有最小权限的新用户帐户,并使用名称 vmail,为此用户选择一个强密码,并确保该用户的密码永不过期。
vmail 帐户将被 iRedMail 服务用来查询 Samba4 AD DC LDAP 数据库并拉取电子邮件帐户。
要创建 vmail 账户,如截图所示,使用加入了已安装 RSAT 工具域的 Windows 机器上的 ADUC 图形化工具,或者按照先前主题中那样用 [samba-tool 命令行](/article-8070-1.html)直接在域控制器中运行。
在本指导中,我们会使用上面提到的第一种方法。

*AD 用户和计算机*

*为 iRedMail 创建新的用户*

*为用户设置强密码*
9、 在 iRedMail 系统中,用下面的命令测试 vmail 用户能够查询 Samba4 AD DC LDAP 数据库。返回的结果应该是你的域的对象总数, 如下截图所示。
```
# ldapsearch -x -h tecmint.lan -D 'vmail@tecmint.lan' -W -b 'cn=users,dc=tecmint,dc=lan'
```
注意:相应地替换域名以及 Samba4 AD 的 LDAP dn (`cn=users,dc=tecmint,dc=lan`)。

*查询 Samba4 AD DC LDAP*
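如果想进一步只验证某个账户,可以在过滤器里按账户名查询。下面是一个假设性的小封装(并非本文原有脚本),它只做字符串拼接,便于在后续命令中复用;`sAMAccountName` 是 AD 的标准账户名属性:

```shell
#!/bin/sh
# ad_user_filter <账户名>:拼出按账户名查询 AD 用户的 LDAP 过滤器(纯字符串处理,可离线测试)
ad_user_filter() {
    printf '(&(objectClass=person)(sAMAccountName=%s))' "$1"
}

ad_user_filter vmail; echo    # → (&(objectClass=person)(sAMAccountName=vmail))

# 实际查询时(需要能连上域控,域名与 DN 按你的环境替换):
# ldapsearch -x -h tecmint.lan -D 'vmail@tecmint.lan' -W \
#     -b 'cn=users,dc=tecmint,dc=lan' "$(ad_user_filter vmail)" userPrincipalName
```

返回结果应只包含 vmail 这一个对象,而不是上面那样的整个域对象列表。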
### 第三步:将 iRedMail 服务集成到 Samba4 AD DC 中
10、 现在是时候修改 iRedMail 服务(Postfix、Dovecot 和 Roundcube)以便为邮箱帐户查询 Samba4 域控制器。
第一个要修改的服务是 MTA 代理,Postfix。执行以下命令禁用一系列的 MTA 设置,添加你的域名到 Postfix 本地域以及邮箱域中,并使用 Dovecot 代理发送已接收的邮件到用户邮箱中。
```
# postconf -e virtual_alias_maps=' '
# postconf -e sender_bcc_maps=' '
# postconf -e recipient_bcc_maps= ' '
# postconf -e relay_domains=' '
# postconf -e relay_recipient_maps=' '
# postconf -e sender_dependent_relayhost_maps=' '
# postconf -e smtpd_sasl_local_domain='tecmint.lan' #用你自己的域替换
# postconf -e virtual_mailbox_domains='tecmint.lan' #用你自己的域替换
# postconf -e transport_maps='hash:/etc/postfix/transport'
# postconf -e smtpd_sender_login_maps='proxy:ldap:/etc/postfix/ad_sender_login_maps.cf' # 检查 SMTP 发送者
# postconf -e virtual_mailbox_maps='proxy:ldap:/etc/postfix/ad_virtual_mailbox_maps.cf' # 检查本地邮件帐户
# postconf -e virtual_alias_maps='proxy:ldap:/etc/postfix/ad_virtual_group_maps.cf' # 检查本地邮件列表
# cp /etc/postfix/transport /etc/postfix/transport.backup # 备份 transport 配置
# echo "tecmint.lan dovecot" > /etc/postfix/transport # 添加带 dovecot transport 的域
# cat /etc/postfix/transport # 验证 transport 文件
# postmap hash:/etc/postfix/transport
```
11、 接下来,用你最喜欢的文本编辑器创建 Postfix 的 `/etc/postfix/ad_sender_login_maps.cf` 配置文件,并添加下面的配置。
```
server_host = tecmint.lan
server_port = 389
version = 3
bind = yes
start_tls = no
bind_dn = [email protected]
bind_pw = ad_vmail_account_password
search_base = dc=tecmint,dc=lan
scope = sub
query_filter = (&(userPrincipalName=%s)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
result_attribute= userPrincipalName
debuglevel = 0
```
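顺带一提,上面 `query_filter` 中的 `userAccountControl:1.2.840.113556.1.4.803:=2` 使用的是 LDAP 的按位与匹配规则,外层的 `!(...)` 用于排除 `userAccountControl` 属性中置了 `ACCOUNTDISABLE`(0x2)标志位的(即已禁用的)帐户。其判断逻辑等价于下面这个 Python 小示例(`is_disabled` 为演示用的假设函数):

```python
ACCOUNTDISABLE = 0x2  # userAccountControl 中表示“帐户已禁用”的标志位

def is_disabled(user_account_control: int) -> bool:
    """等价于 LDAP 过滤器 (userAccountControl:1.2.840.113556.1.4.803:=2)。"""
    return bool(user_account_control & ACCOUNTDISABLE)

print(is_disabled(0x200))  # False:普通的启用帐户(NORMAL_ACCOUNT)
print(is_disabled(0x202))  # True:被禁用的普通帐户
```

也就是说,过滤器只会返回未被禁用的用户对象。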
12、 使用下面的配置创建 `/etc/postfix/ad_virtual_mailbox_maps.cf`。
```
server_host = tecmint.lan
server_port = 389
version = 3
bind = yes
start_tls = no
bind_dn = [email protected]
bind_pw = ad_vmail_account_password
search_base = dc=tecmint,dc=lan
scope = sub
query_filter = (&(objectclass=person)(userPrincipalName=%s))
result_attribute= userPrincipalName
result_format = %d/%u/Maildir/
debuglevel = 0
```
13、 使用下面的配置创建 `/etc/postfix/ad_virtual_group_maps.cf`。
```
server_host = tecmint.lan
server_port = 389
version = 3
bind = yes
start_tls = no
bind_dn = [email protected]
bind_pw = ad_vmail_account_password
search_base = dc=tecmint,dc=lan
scope = sub
query_filter = (&(objectClass=group)(mail=%s))
special_result_attribute = member
leaf_result_attribute = mail
result_attribute= userPrincipalName
debuglevel = 0
```
替换上面三个配置文件中的 `server_host`、`bind_dn`、`bind_pw` 和 `search_base` 以反应你自己域的设置。
14、 接下来,打开 Postfix 主配置文件,找到 iRedAPD 的 `check_policy_service` 和 `smtpd_end_of_data_restrictions` 这两行,在行前添加 `#` 将其注释掉以禁用。
```
# nano /etc/postfix/main.cf
```
注释下面的行:
```
#check_policy_service inet:127.0.0.1:7777
#smtpd_end_of_data_restrictions = check_policy_service inet:127.0.0.1:7777
```
15、 现在,通过执行一系列查询,验证 Postfix 是否使用现有的域用户和域组绑定到 Samba AD,如以下示例所示。
结果应与下面的截图类似。
```
# postmap -q [email protected] ldap:/etc/postfix/ad_virtual_mailbox_maps.cf
# postmap -q [email protected] ldap:/etc/postfix/ad_sender_login_maps.cf
# postmap -q [email protected] ldap:/etc/postfix/ad_virtual_group_maps.cf
```

*验证 Postfix 绑定到了 Samba AD*
相应替换 AD 用户及组帐户。同样,确保你使用的 AD 组已被分配了一些成员。
16、 在下一步中修改 Dovecot 配置文件以查询 Samba4 AD DC。打开 `/etc/dovecot/dovecot-ldap.conf` 文件并添加下面的行。
```
hosts = tecmint.lan:389
ldap_version = 3
auth_bind = yes
dn = [email protected]
dnpass = ad_vmail_password
base = dc=tecmint,dc=lan
scope = subtree
deref = never
user_filter = (&(userPrincipalName=%u)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
pass_filter = (&(userPrincipalName=%u)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
pass_attrs = userPassword=password
default_pass_scheme = CRYPT
user_attrs = =home=/var/vmail/vmail1/%Ld/%Ln/Maildir/,=mail=maildir:/var/vmail/vmail1/%Ld/%Ln/Maildir/
```
Samba4 AD 帐户的邮箱将会存储在 `/var/vmail/vmail1/your_domain.tld/your_domain_user/Maildir/` 中。
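配置中 `user_attrs` 里的 `%Ld` 和 `%Ln` 是 Dovecot 的变量展开,分别代表小写的域名部分和小写的用户名部分。下面这个 Python 小示例演示了该路径是如何展开的(`maildir_path` 是为演示而假设的函数名):

```python
def maildir_path(user_principal_name: str, root: str = "/var/vmail/vmail1") -> str:
    """按 Dovecot 的 %Ld/%Ln 规则展开 Maildir 路径(仅作演示)。"""
    # %Ln = 小写的用户名部分,%Ld = 小写的域名部分
    local, domain = user_principal_name.lower().split("@", 1)
    return f"{root}/{domain}/{local}/Maildir/"

print(maildir_path("vmail@tecmint.lan"))
# /var/vmail/vmail1/tecmint.lan/vmail/Maildir/
```

据此就能预判任何一个 AD 帐户的邮箱落在磁盘上的哪个位置。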
17、 确保 dovecot 的主配置文件中启用了 pop3 和 imap 协议。打开 `/etc/dovecot/dovecot.conf` 验证是否启用了 `quota` 和 `acl` 邮件插件,并检查这些值是否存在。

*在 Dovecot 中启用 POP3 和 IMAP*
18、 可选地,如果要将全局硬配额设置为每个域用户的最大不超过 500 MB 存储,请在 `/etc/dovecot/dovecot.conf` 文件中添加以下行。
```
quota_rule = *:storage=500M
```
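Dovecot 配额中的大小单位按 1024 进制换算(这是 Dovecot 的惯例),`500M` 即 500 × 1024 × 1024 字节。下面的 Python 小示例演示了这种换算(`quota_bytes` 为演示用的假设函数):

```python
UNITS = {"B": 1, "K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "T": 1024 ** 4}

def quota_bytes(spec: str) -> int:
    """把 '500M' 这类 Dovecot 配额大小串换算成字节数(按 1024 进制,仅作演示)。"""
    suffix = spec[-1].upper()
    if suffix in UNITS:
        return int(spec[:-1]) * UNITS[suffix]
    return int(spec)  # 没有单位后缀时按字节处理

print(quota_bytes("500M"))  # 524288000
```

由此可见,每个域用户的硬配额上限约为 0.5 GB。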
19、 最后,为了使目前这些更改生效,用 root 权限执行下面的命令重启并验证 Postfix 和 Dovecot 守护进程的状态。
```
# systemctl restart postfix dovecot
# systemctl status postfix dovecot
```
20、 为了使用 IMAP 协议从命令行测试邮件服务器配置,请使用 telnet 或 [netcat 命令](https://www.tecmint.com/check-remote-port-in-linux/),如下所示。
```
# nc localhost 143
a1 LOGIN ad_user@your_domain.tld ad_user_password
a2 LIST "" "*"
a3 LOGOUT
```
[](https://www.tecmint.com/wp-content/uploads/2017/05/Test-iRedMail-Configuration.png)
*测试 iRedMail 配置*
如果你可以使用 Samba4 用户帐户从命令行执行 IMAP 登录,那么 iRedMail 服务器似乎已经准备好发送和接收 AD 帐户的邮件。
在下一个教程中将讨论如何将 Roundcube webmail 与 Samba4 AD DC 集成,并启用全局 LDAP 地址簿,自定义 Roudcube,从浏览器访问 Roundcube Web 界面,并禁用某些不需要的 iRedMail 服务。
---
作者简介:
我是一个电脑上瘾的家伙,开源和基于 linux 的系统软件的粉丝,在 Linux 发行版桌面、服务器和 bash 脚本方面拥有大约4年的经验。
---
via: <https://www.tecmint.com/integrate-iredmail-to-samba4-ad-dc-on-centos-7/>
作者:[Matei Cezar](https://www.tecmint.com/author/cezarmatei/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In this tutorial we will learn how to modify the main iRedMail daemons which provide mail services, namely [Postfix, used for mail transfer, and Dovecot](https://www.tecmint.com/setup-postfix-mail-server-and-dovecot-with-mariadb-in-centos/), which delivers mail to account mailboxes, in order to integrate them both into a [Samba4 Active Directory Domain Controller](https://www.tecmint.com/install-samba4-active-directory-ubuntu/).
By integrating iRedMail into a Samba4 AD DC you will benefit from the following features: user authentication, management, and status via Samba AD DC, the ability to create mail lists with the help of AD groups, and a Global LDAP Address Book in Roundcube.
#### Requirements
### Step 1: Prepare iRedMail System for Sama4 AD Integration
**1.** On the first step, you need to [assign a static IP address for your machine](https://www.tecmint.com/set-add-static-ip-address-in-linux/) in case you’re using a dynamic IP address provided by a DHCP server.
Run [ifconfig command](https://www.tecmint.com/ifconfig-command-examples/) to list your machine network interfaces names and edit the proper network interface with your custom IP settings by issuing [nmtui-edit](https://www.tecmint.com/configure-network-connections-using-nmcli-tool-in-linux/) command against the correct NIC.
Run **nmtui-edit** command with root privileges.
```
# ifconfig
# nmtui-edit eno16777736
```

**2.** Once the network interface is opened for editing, add the proper static IP settings, make sure you add the DNS servers IP addresses of your Samba4 AD DC and the name of your domain in order to query the realm from your machine. Use the below screenshot as a guide.

**3.** After you finish configuring the network interface, restart the network daemon to apply changes and issue a series of ping commands against the domain name and samba4 domain controllers FQDNs.
```
# systemctl restart network.service
# cat /etc/resolv.conf   # verify DNS resolver configuration if the correct DNS servers IPs are queried for domain resolution
# ping -c2 tecmint.lan   # Ping domain name
# ping -c2 adc1          # Ping first AD DC
# ping -c2 adc2          # Ping second AD DC
```

**4.** Next, sync time with samba domain controller by installing the **ntpdate** package and query Samba4 machine NTP server by issuing the below commands:
```
# yum install ntpdate
# ntpdate -qu tecmint.lan   # query domain NTP servers
# ntpdate tecmint.lan       # Sync time with the domain
```

**5.** You might want the local time to be automatically synchronized with samba AD time server. In order to achieve this setting, add a scheduled job to run every hour by issuing [crontab -e command](https://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/) and append the following line:
```
0 */1 * * * /usr/sbin/ntpdate tecmint.lan > /var/log/ntpdate.lan 2>&1
```

### Step 2: Prepare Samba4 AD DC for iRedMail Integration
**6.** Now, move to a [Windows machine with RSAT tools installed](https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/) to manage Samba4 Active Directory as described in this tutorial [here](https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/).
Open **DNS Manager**, go to your domain **Forward Lookup Zones** and add a new **A** record, an **MX** record and a **PTR** record to point to your iRedMail system IP address. Use the below screenshots as a guide.
Add **A** record (replace the name and the IP Address of iRedMail machine accordingly).

Add **MX** record (leave child domain blank and add a 10 priority for this mail server).

Add **PTR** record by expanding to **Reverse Lookup Zones** (replace IP address of iRedMail server accordingly). In case you haven’t configured a reverse zone for your domain controller so far, read the following tutorial:

**7.** After you’ve added the basic DNS records which make a mail server to function properly, move to the iRedMail machine, install **bind-utils** package and query the newly added mail records as suggested on the below excerpt.
Samba4 AD DC DNS server should respond with the DNS records added in the previous step.
```
# yum install bind-utils
# host tecmint.lan
# host mail.tecmint.lan
# host 192.168.1.245
```

From a Windows machine, open a **Command Prompt** window and issue [nslookup command](https://www.tecmint.com/8-linux-nslookup-commands-to-troubleshoot-dns-domain-name-server/) against the above mail server records.
**8.** As a final pre-requirement, create a new user account with minimal privileges in Samba4 AD DC with the name **vmail**, choose a strong password for this user and make sure the password for this user never expires.
The vmail user account will be used by iRedMail services to query Samba4 AD DC LDAP database and pull the email accounts.
To create the vmail account, use ADUC graphical tool from a Windows machine joined to the realm with RSAT tools installed as illustrated on the below screenshots or use samba-tool command line directly from a domain controller as explained on the following topic.
In this guide, we’ll use the first method mentioned above.



**9.** From iRedMail system, test the vmail user ability to query Samba4 AD DC LDAP database by issuing the below command. The returned result should be a total number of objects entries for your domain as illustrated on the below screenshots.
```
# ldapsearch -x -h tecmint.lan -D '[email protected]' -W -b 'cn=users,dc=tecmint,dc=lan'
```
**Note**: Replace the domain name and the LDAP base dn in Samba4 AD (‘**cn=users,dc=tecmint,dc=lan**‘) accordingly.

### Step 3: Integrate iRedMail Services to Samba4 AD DC
**10.** Now it’s time to tamper with iRedMail services (Postfix, Dovecot and Roundcube) in order to query Samba4 Domain Controller for mail accounts.
The first service to be modified will be the MTA agent, Postfix. Issue the following commands to disable a series of MTA settings, add your domain name to Postfix local domain and mailbox domains and use Dovecot agent to deliver received mails locally to user mailboxes.
```
# postconf -e virtual_alias_maps=' '
# postconf -e sender_bcc_maps=' '
# postconf -e recipient_bcc_maps=' '
# postconf -e relay_domains=' '
# postconf -e relay_recipient_maps=' '
# postconf -e sender_dependent_relayhost_maps=' '
# postconf -e smtpd_sasl_local_domain='tecmint.lan'   # Replace with your own domain
# postconf -e virtual_mailbox_domains='tecmint.lan'   # Replace with your own domain
# postconf -e transport_maps='hash:/etc/postfix/transport'
# postconf -e smtpd_sender_login_maps='proxy:ldap:/etc/postfix/ad_sender_login_maps.cf'   # Check SMTP senders
# postconf -e virtual_mailbox_maps='proxy:ldap:/etc/postfix/ad_virtual_mailbox_maps.cf'   # Check local mail accounts
# postconf -e virtual_alias_maps='proxy:ldap:/etc/postfix/ad_virtual_group_maps.cf'       # Check local mail lists
# cp /etc/postfix/transport /etc/postfix/transport.backup   # Backup transport conf file
# echo "tecmint.lan dovecot" > /etc/postfix/transport       # Add your domain with dovecot transport
# cat /etc/postfix/transport                                # Verify transport file
# postmap hash:/etc/postfix/transport
```
**11.** Next, create the Postfix `/etc/postfix/ad_sender_login_maps.cf` configuration file with your favorite text editor and add the below configuration.

```
server_host = tecmint.lan
server_port = 389
version = 3
bind = yes
start_tls = no
bind_dn = [email protected]
bind_pw = ad_vmail_account_password
search_base = dc=tecmint,dc=lan
scope = sub
query_filter = (&(userPrincipalName=%s)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
result_attribute = userPrincipalName
debuglevel = 0
```
**12.** Create `/etc/postfix/ad_virtual_mailbox_maps.cf` with the following configuration.

```
server_host = tecmint.lan
server_port = 389
version = 3
bind = yes
start_tls = no
bind_dn = [email protected]
bind_pw = ad_vmail_account_password
search_base = dc=tecmint,dc=lan
scope = sub
query_filter = (&(objectclass=person)(userPrincipalName=%s))
result_attribute = userPrincipalName
result_format = %d/%u/Maildir/
debuglevel = 0
```
**13.** Create `/etc/postfix/ad_virtual_group_maps.cf` with the below configuration.

```
server_host = tecmint.lan
server_port = 389
version = 3
bind = yes
start_tls = no
bind_dn = [email protected]
bind_pw = ad_vmail_account_password
search_base = dc=tecmint,dc=lan
scope = sub
query_filter = (&(objectClass=group)(mail=%s))
special_result_attribute = member
leaf_result_attribute = mail
result_attribute = userPrincipalName
debuglevel = 0
```
On all three configuration files replace the values from **server_host**, **bind_dn**, **bind_pw** and **search_base** to reflect your own domain custom settings.
**14.** Next, open the Postfix main configuration file, then search for and disable the iRedAPD **check_policy_service** and **smtpd_end_of_data_restrictions** settings by adding a comment `#` in front of the following lines.
```
# nano /etc/postfix/main.cf
```
Comment the following lines:
```
#check_policy_service inet:127.0.0.1:7777
#smtpd_end_of_data_restrictions = check_policy_service inet:127.0.0.1:7777
```
**15.** Now, verify Postfix binding to Samba AD using an existing domain user and a domain group by issuing a series of queries as presented in the following examples.
The result should be similar to the one illustrated on the below screenshot.
```
# postmap -q [email protected] ldap:/etc/postfix/ad_virtual_mailbox_maps.cf
# postmap -q [email protected] ldap:/etc/postfix/ad_sender_login_maps.cf
# postmap -q [email protected] ldap:/etc/postfix/ad_virtual_group_maps.cf
```

Replace AD user and group accounts accordingly. Also, make sure the AD group you’re using has some AD user members assigned to it.
**16.** On the next step modify the Dovecot configuration file in order to query Samba4 AD DC. Open file `/etc/dovecot/dovecot-ldap.conf` for editing and add the following lines.

```
hosts = tecmint.lan:389
ldap_version = 3
auth_bind = yes
dn = [email protected]
dnpass = ad_vmail_password
base = dc=tecmint,dc=lan
scope = subtree
deref = never
user_filter = (&(userPrincipalName=%u)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
pass_filter = (&(userPrincipalName=%u)(objectClass=person)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
pass_attrs = userPassword=password
default_pass_scheme = CRYPT
user_attrs = =home=/var/vmail/vmail1/%Ld/%Ln/Maildir/,=mail=maildir:/var/vmail/vmail1/%Ld/%Ln/Maildir/
```
The mailbox of a Samba4 AD account will be stored in **/var/vmail/vmail1/your_domain.tld/your_domain_user/Maildir/** location on the Linux system.
**17.** Make sure pop3 and imap protocols are enabled in the Dovecot main configuration file. Verify that the quota and acl mail plugins are also enabled by opening file `/etc/dovecot/dovecot.conf` and checking if these values are present.

**18.** Optionally, if you want to set a global hard quota to not exceed the maximum of 500 MB of storage for each domain user, add the following line in **/etc/dovecot/dovecot.conf** file.
```
quota_rule = *:storage=500M
```
**19.** Finally, in order to apply all changes made so far, restart and verify the status of Postfix and Dovecot daemons by issuing the below commands with root privileges.
```
# systemctl restart postfix dovecot
# systemctl status postfix dovecot
```
**20.** In order to test mail server configuration from the command line using IMAP protocol use **telnet** or [netcat command](https://www.tecmint.com/check-remote-port-in-linux/) as presented in the below example.
```
# nc localhost 143
a1 LOGIN ad_user@your_domain.tld ad_user_password
a2 LIST "" "*"
a3 LOGOUT
```

If you can perform an IMAP login from the command line with a Samba4 user account then iRedMail server seems ready to send and receive mail for Active Directory accounts.
In the next tutorial we will discuss how to integrate Roundcube webmail with Samba4 AD DC and enable the Global LDAP Address Book, customize Roundcube, access the Roundcube web interface from a browser, and disable some unneeded iRedMail services.
Reply |
8,675 | Ring :一个专注隐私,开源的 Skype 替代品 | https://mintguide.org/internet/795-ring-is-a-privacy-focused-open-source-skype-alternative.html | 2017-07-07T11:30:42 | [
"Skype"
] | https://linux.cn/article-8675-1.html | 
**如果你对 Linux 上的 Skype 进度不满意,或者对即将[换代的、旧的(但是优秀的)Qt Skype 客户端](/article-7606-1.html)感到不快,这有一个 GNU 替代品叫 Ring。**
GNU Ring 是一个跨平台、关注隐私的交流程序,它很快得到了 FOSS 以及安全圈的注意。
就像一个 **Skype 的开源替代品**,不用背负微软 VoIP 客户端的那些东西,Ring 不仅能够做大多数 Skype 能够做的,还能以安全、让人放心的私密方式做到:
* 语音电话
* 电话会议
* 视频电话
* 即时通信
所有这些功能也可以跨设备运行。不管你的同伴使用的是 Windows 版 Ring 还是 Android 版 Ring 都不重要。只要他们在 Ring 上,你就能联系他们!
Ring 通过分布式对等网络工作。这意味着它不用依赖于一个大型集中式服务器以存储你所有详细信息、通话记录和帐户信息。相反,该服务使用分布式哈希表建立通信。
据 [Ring 网站](https://ring.cx/) 介绍,它采用端到端加密和认证(原文如此),使用 X.509 证书管理身份,并基于 RSA/AES/DTLS/SRTP 技术。
### 下载 Linux 版 Ring
你可以从项目网站或者按照下面的安装指导在 Ubuntu 上添加 Ring 的官方仓库来[下载 Linux 版 Ring](https://ring.cx/en/download/gnu-linux)。这里有两个版本的客户端:KDE 版本和 GNOME 版本。
如果你运行的 Ubuntu 带有 Unity 或者 GNOME Shell,就选择 GNOME 版本。
如果你对安装感到疑惑,项目网站上有一个[手把手的教程](https://ring.cx/en/tutorials/gnu-linux#RingID)。
官方网站上还有直接的 Windows、macOS 和 Android 版链接。
### 在 Ubuntu 上设置 Ring
当你安装完 Linux 版客户端后,你就能够使用了:在开始打电话前,你只需注册一个 Ring ID。
Ring 的注册不同于 Skype、Telegram 和 WhatsApp 那样。你**不必**绑定一个手机号或者邮箱。
取而代之的是 Ring 会要求你输入一个用户名(你可以任意输入),接着会分配你一个冗长的身份号码。你需要给其他 Ring 用户发送这个身份号码,那么他们可以给你打电话。
要**在 Ring 中打第一通电话**,你需要主程序的搜索栏输入联系人的 Ring ID,接着点击电话按钮拨打电话。
其他事情应该相对直接。你可以在其他设备上安装 Ring 并用你的帐户验证,这样你就不会错过任何一个电话(如果你在 Android 设备上安装,你可以扫描二维码来快速设置)。
Ring 的详细信息及配置存储在你的家目录下配置文件夹内的 `dring.yml` 文件中。
### Ring 怎么样?
这篇文章“垃圾”的部分是告诉你 Ring 怎么样:我不知道因为我不认识任何使用 Ring 的人。因此我没有机会真正使用它。
我尝试 Google 来查找 Ring 的评测,但是这是一个噩梦,因为程序的名字太普遍了(虽然我现在非常了解 Ring 视频门铃)。
***如果你已经使用 Ring 或者你能够劝说至少一个你认识的人使用,那么记得回来分享一下关于连接质量和性能。***
---
via: <https://mintguide.org/internet/795-ring-is-a-privacy-focused-open-source-skype-alternative.html>
作者:[fabiomorales9999](https://mintguide.org/user/fabiomorales9999/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
8,676 | 为什么可以在任何地方工作的开发者们要聚集到世界上最昂贵的城市? | https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/ | 2017-07-07T15:29:00 | [
"开发者"
] | https://linux.cn/article-8676-1.html | 
政治家和经济学家都在[哀嚎](https://mobile.twitter.com/Noahpinion/status/846054187288866)某几个阿尔法地区——旧金山、洛杉矶、纽约、波士顿、多伦多、伦敦、巴黎——在吸引了所有最好的工作的同时变得令人却步的昂贵,降低了经济流动性而增大了贫富差异。为什么那些最好的工作不能搬到其它地区呢?
当然,很多工作都不能搬走。工作在纽约或者伦敦(当然,在英国脱欧毁灭伦敦的银行体系之前)普通的金融从业人员如果告诉他们的老板他们想要以后在清迈工作,将会在办公室里受到嘲笑而且不再受欢迎。
但是这对(大部分)软件领域不适用。大部分 web/app 开发者如果这样要求的话可能会被拒绝;但是它们至少不会被嘲笑或者被炒。优秀开发者往往供不应求,而且我们处在 Skype 和 Slack 的时代,软件开发完全可以不依赖物质世界的交互。
(这一切对作家来说更加正确,真的;事实上我是在波纳佩发表的这篇文章。但是作家并不像软件开发者一样具有足够的影响力。)
有些人会告诉你远程协助的团队天生比本地团队效率和生产力低下一些,或者那些“不经意的灵感碰撞”是如此重要以致于每一位员工每天都必须强制到一个地方来人为的制造这样的碰撞。这些人错了,有这种问题的团队只是讨论次数不够多——数量级不过几次、十几次或者几十次,而不是成百上千——也不够专注。
我知道:在 [HappyFunCorp](http://happyfuncorp.com/) 时,我们广泛的与远程团队工作,而且长期雇佣远程开发者,而结果难以置信的好。我在我旧金山的家中与斯德哥尔摩、圣保罗、上海、布鲁克林、新德里的开发者交流和合作的一天,完全没有任何不寻常。
在这一点上,不管它是不是个好主意,但我有点跑题了。供求关系指出那些拥有足够技能的开发者可以成为被称为“数字流浪者”的人,如果他们愿意的话。但是许多可以做到的人却不愿意。最近,我在雷克雅维克的一栋通过 Airbnb 共享的房子和一伙不断变化的临时远程工作团队度过了一段时间,我按照东海岸的时间来跟上他们的工作,也把早上和周末的时光都花费在探索冰岛了——但是最后几乎所有人都回到了湾区生活。
从经济层面来说,当然,这太疯狂了。搬到东南亚工作光在房租一项上每月就会为我们省下几千美金。 为什么那些可以在哥斯达黎加挣着旧金山的工资,或者在柏林赚着纽约水平薪资的人们,选择不这样做?为什么那些据说冷静固执的工程师在财政方面如此荒谬?
当然这里有社交和文化原因。清迈很不错,但没有大都会博物馆或者蒸汽朋克化装舞会,也没有 15 分钟脚程可以到的 50 家美食餐厅。柏林也很可爱,但没法提供风筝冲浪或者山脉远足和加州气候。当然也无法保证拥有无数和你一样分享同样价值观和母语的人们。
但是我觉得原因除了这些还有很多。我相信相比贫富之间的差异,还有一个更基础的经济分水岭存在。我认为我们在目睹世界上可以实现超凡成就的“<ruby> 极端斯坦 <rp> ( </rp> <rt> Extremistan </rt> <rp> ) </rp></ruby>”城市和无法成就伟大但可以工作和赚钱的“<ruby> 平均斯坦 <rp> ( </rp> <rt> Mediocristan </rt> <rp> ) </rp></ruby>”城市之间正在生成巨大的裂缝。(这些名词是从伟大的纳西姆·塔勒布那里偷来的)
(LCTT 译者注:平均斯坦与极端斯坦的概念是美国学者纳西姆·塔勒布[首先提出来的](http://blog.sina.com.cn/s/blog_5ba3d8610100q3b1.html)。他发现在我们所处的世界上,有些事物表现出相当的平均性,大部分个体都靠近均值,离均值越远则个体数量越稀少,与均值的偏离达到一定程度的个体数量将趋近于零。有些事物则表现出相当的极端性,均值这个概念在这个领域没有太多的意义,剧烈偏离均值的个体大量存在,而且偏离程度大得惊人。他把前者称为平均斯坦,把后者称为极端斯坦。)
艺术行业有着悠久历史的“极端斯坦”城市。这也解释了为什么有抱负的作家纷纷搬到纽约城,而那些已经在国际上大获成功的导演和演员仍然在不断被吸引到洛杉矶,如同飞蛾扑火。现在,这对技术行业同样适用。即使你不曾想试着(帮助)构造一些非凡的事物 —— 如今创业神话如此恢弘,很难想象有工程师完全没有梦想过它 —— *伟大事物发生的地方*正在以美好的前景如梦如幻的吸引着人们。
但是关于这一切有趣的是,理论上讲,它会改变;因为 —— 直到最近 —— 分布式的、分散管理的团队实际上可以获得超凡的成就。 情况对这些团队很不利,因为风投的目光趋于短浅。但是没有任何法律指出独角兽公司只能诞生在加州和某些屈指可数的次级地域上;而且似乎,不管结果好坏,极端斯坦正在扩散。如果这样的扩散最终可以矛盾地导致米申地区的房租变*便宜*那就太棒了!
---
via: <https://techcrunch.com/2017/04/02/why-do-developers-who-could-work-anywhere-flock-to-the-worlds-most-expensive-cities/>
作者:[Jon Evans](https://techcrunch.com/author/jon-evans/) 译者:[xiaow6](https://github.com/xiaow6) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Politicians and economists [lament](https://mobile.twitter.com/Noahpinion/status/846054187288866) that certain alpha regions — SF, LA, NYC, Boston, Toronto, London, Paris — attract all the best jobs while becoming repellently expensive, reducing economic mobility and contributing to further bifurcation between haves and have-nots. But why don’t the best jobs move elsewhere?
Of course, many of them can’t. The average financier in NYC or London (until Brexit annihilates London’s banking industry, of course…) would be laughed out of the office, and not invited back, if they told their boss they wanted to henceforth work from Chiang Mai.
But this isn’t true of (much of) the software field. The average web/app developer might have such a request declined; but they would not be laughed at, or fired. The demand for good developers greatly outstrips supply, and in this era of Skype and Slack, there’s nothing about software development that requires meatspace interactions.
(This is even more true of writers, of course; I did in fact post this piece from Pohnpei. But writers don’t have anything like the leverage of software developers.)
Some people will tell you that remote teams are inherently less effective and productive than localized ones, or that “serendipitous collisions” are so important that every employee must be forced to the same physical location every day so that these collisions can be manufactured. These people are wrong, as long as the team in question is small — on the order of handfuls, dozens or scores, rather than hundreds or thousands — and flexible.
I should know: at [HappyFunCorp](http://happyfuncorp.com/), we work extensively with remote teams, and actively recruit remote developers, and it works out fantastically well. A day in which I interact and collaborate with developers in Stockholm, São Paulo, Shanghai, Brooklyn and New Delhi, from my own home base in San Francisco, is not at all unusual.
At this point, whether it’s a good idea is almost irrelevant, though. Supply and demand is such that any sufficiently skilled developer could become a so-called digital nomad if they really wanted to. But many who could, do not. I recently spent some time in Reykjavik at a house Airbnb-ed for the month by an ever-shifting crew of temporary remote workers, keeping East Coast time to keep up with their jobs, while spending mornings and weekends exploring Iceland — but almost all of us then returned to live in the Bay Area.
Economically, of course, this is insane. Moving to and working from Southeast Asia would save us thousands of dollars a month in rent alone. So why do people who could live in Costa Rica on a San Francisco salary, or in Berlin while charging NYC rates, choose not to do so? Why are allegedly hardheaded engineers so financially irrational?
Of course there are social and cultural reasons. Chiang Mai is very nice, but doesn’t have the Met, or steampunk masquerade parties or 50 foodie restaurants within a 15-minute walk. Berlin is lovely, but doesn’t offer kite surfing, or Sierra hiking or California weather. Neither promises an effectively limitless population of people with whom you share values and a first language.
And yet I think there’s much more to it than this. I believe there’s a more fundamental economic divide opening than the one between haves and have-nots. I think we are witnessing a growing rift between the world’s Extremistan cities, in which truly extraordinary things can be achieved, and its Mediocristan towns, in which you can work and make money and be happy but never achieve greatness. (Labels stolen from the great Nassim Taleb.)
The arts have long had Extremistan cities. That’s why aspiring writers move to New York City, and even directors and actors who found international success are still drawn to L.A. like moths to a klieg light. Now it is true of tech, too. Even if you don’t even want to try to (help) build something extraordinary — and the startup myth is so powerful today that it’s a very rare engineer indeed who hasn’t at least dreamed about it — the prospect of being *where great things happen* is intoxicatingly enticing.
But the interesting thing about this is that it could, in theory, change; because — as of quite recently — distributed, decentralized teams can, in fact, achieve extraordinary things. The cards are arguably stacked against them, because VCs tend to be quite myopic. But no law dictates that unicorns may only be born in California and a handful of other secondary territories; and it seems likely that, for better or worse, Extremistan is spreading. It would be pleasantly paradoxical if that expansion ultimately leads to *lower* rents in the Mission. |
8,677 | GitHub 对软件开发业造成的冲击 | https://medium.com/@sitapati/the-impact-github-is-having-on-your-software-career-right-now-6ce536ec0b50 | 2017-07-08T12:57:00 | [
"GitHub",
"软件开发者"
] | https://linux.cn/article-8677-1.html | 
在未来的 12 到 24 个月内(也就是说,在 2018 年,或者是 2019 年),人们雇佣软件开发者的方式将会发生彻底的改变。
2004 至 2014 期间,我曾经就职于红帽,这是世界上最大的开源软件公司。还记得 2004 年七月的一天,我第一次来到这家公司,我的老板 Marty Messer 就跟我说,“所有你在这里所做的工作都会被开源,在未来,你将不需要任何的简历,因为所有的人都可以 Google 到你。”
供职于红帽的一个独特的好处是,在这种开源的工作期间,我们有机会建立自己的个人品牌和树立自己的声誉。我们可以通过邮件列表和 bug 追踪器与其它的软件工程师进行沟通,而且提交到 mercurial、subversion 和 CVS 仓库的源代码都会被开源,并且可以通过 google 找到。
(写本文时)马上就到 2017 年了,我们将生活在一个处处充满开源的世界。
以下两点会让你对这个新时代有一个真正意义上的了解:
1. 微软在过去的一段很长的时间里都在坚持闭源,甚至是排斥开源。但是现在也从心底里开始拥抱开源了。它们成立了 .NET 基金会(红帽也是其中的一个成员),并且也加入了 Linux 基金会。 .NET 项目现在是以一个开源项目的形式在开发着。
2. GitHub 已经成为了一个独特的社交网络,并将问题追踪器和分布式源码版本控制融入其中。
对于那些从闭源走过来的软件开发者来说,他们可能还不知道发生了什么。对于他们来说 ,开源就意味着“将业余时间的所有工作成果都免费开放”。
对于我们这些在过去十年创造了一家十亿美元的开源软件公司的人来说,参与开源以后就没有什么空闲的时间可言了。当然,为开源事业献身的好处也是很明显的,所得到的名誉是你自己的,并不隶属于某个公司。GitHub 是一个社交网络,在这个地方,你可以创建你的提交、你可以在你所专长的领域为一些全球性的组织做出贡献,你临时做的一些工作并不附属于所任职的公司。
聪明的人会利用这种工作环境。他们会贡献他们的补丁、<ruby> 工单 <rp> ( </rp> <rt> issue </rt> <rp> ) </rp></ruby>、评论给他们平时在工作中使用的语言和框架,比如 TypeScript、 .NET 和 Redux 。
他们还会倡导并想方设法让自己尽可能多的工作在开放环境中完成,哪怕最终公开的只是私有仓库的贡献图。
GitHub 对平等居功至伟。比如说,你也许很难在澳大利亚得到一份来自印度的工作,但是,在 GitHub 上,却没有什么可以阻止你在印度跟澳大利亚的工作伙伴一起工作。
在过去十年里,想从红帽获得一个工作机会的方式很简单。你只需要在一些某些小的方面,与红帽的软件工程师在开源的项目上协作,然后当他们觉得你在某些方面做出了很多有价值的贡献,而且成为一个很好的工作伙伴时,那么你就可以申请一个红帽的工作机会了,或许他们会邀请你。
现在,在不同的技术领域,开源给了我们所有人同样的机会,随着开源在世界的各处都流行开来,这样的事情将会在不同的地方盛行开来。
在[最近一个访谈](http://www.theregister.co.uk/2017/02/15/think_different_shut_up_and_work_harder_says_linus_torvalds/)中,Linux 和 git 的发明者 Linus Torvalds(在 GitHub 上有 49K 粉丝,0 关注),这么说,
>
> “你提交了很多小补丁,而在某个时候项目的维护者开始信任你,在那一刻,你跟一般人不同的是,你不仅仅是提交了一些补丁,而是真正成为了这个组织里被信任的一部分。”
>
>
>
实际上你的名声存在于那个你被信任的网络。我们都知道,当你离开一家公司以后,你的人脉和名声可能会削弱,有的甚至会丢失。就好像,你在一个小村庄里生活了很长的一段时间,这里所有的人都会知道你。然而,当你离开这个村庄,来到一个新的地方,这个地方可能没人听说过你,更糟糕的是,没有人认识任何知道你的人。
你已经失去了一度和二度连接关系,甚至有可能会失去这三度连接关系(LCTT 译注:指六度连接理论)。除非你通过在会议或其他大型活动中演讲来建立自己的品牌,否则你通过在公司内部提交代码建立起来的信任早晚都会过去的,但是,如果你在 GitHub 上完成你的工作,这些东西依然全部都在,对这个信任网络的连接仍然可见。
首先会发生的事情就是,一些弱势群体可能会利用这个,比如学生、应届毕业生和移民者,他们可以借此“去往”澳大利亚。
这将会改变目前的现状。以前的一些开发人员可能会有过人际网络突然中断的情况,开源的一个原则是精英——最好的创意、最多的提交、最多的测试,和最快的落实执行,等等。
它并不完美,当然也没有什么事情是完美的,不能和伙伴一起工作,在人们对你的印象方面也会大打折扣。红帽公司会开除那些不能和团队和谐相处的人,而在 GitHub 工作的这些员工,他们主要是和其它的贡献者之间的交流。
GitHub 不仅仅是一个代码仓库或是一个原始提交成员的列表,因为有些人总是用稻草人论点描述它。它是一个社交网络。我会这样说:
>
> GitHub 有多少代码并不重要,重要的是有多少关于你代码的讨论。
>
>
>
GitHub 可以说是伴你而走的名声,并且在以后的 12 到 24 个月中,很多开发者使用它,而另外的一些依然并不使用,这将会形成一个很明显的差异。就像有电子邮件和没有电子邮件的区别(现在每个人都有电子邮件了),或者是有移动电话和没有移动电话的区别(现在每个人都有移动电话了),最终,绝大多数的人都会为开源工作,这将会是与别人的竞争中的一个差异化的优势。
但是现在,开发者的职业生涯已经被 GitHub 打乱了。
(题图: GitHub)
---
作者简介:
Josh Wulf - 我是 Just Digital People 的传奇招聘者,前红帽员工,CoderDojo 导师, Magikcraft.io 创始人之一,The JDP Internship 出品人——这是世界第一的软件开发真人秀,世界上最好的科技播客主持人,也是一位父亲。一直致力于昆士兰州的“硅经济”。
---
via: <https://medium.com/@sitapati/the-impact-github-is-having-on-your-software-career-right-now-6ce536ec0b50>
作者:[Josh Wulf](https://opensource.com/users/sitapati) 译者:[SysTick](https://github.com/SysTick) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # The Impact GitHub is Having on Your Software Career, Right Now…
**Update 2019**: *This article got **183 comments on Reddit**, and **237 comments on Hacker News**. Many commenters said it couldn’t be done — they worked in closed source fintech shops, on BitBucket, and reasons. So I quit my job as a recruiter, got a job at a closed source fintech on BitBucket, and got it done within 15 months — **read about it here**.*
Over the next 12–24 months — in other words, between 2018 and 2019 — how software developers are hired is going to change radically.
I spent 2004–2014 working at Red Hat, the world’s largest open source software engineering company. On my very first day there, in July 2004, my boss Marty Messer said to me: “*All the work you do here will be in the open. In the future you won’t have a CV — people will just Google you.*”
This was one of the unique characteristics of working at Red Hat at the time. We had the opportunity to create our own personal brands and reputation in the open. Communication with other software engineers through mailing lists and bug trackers, and source code commits to mercurial, subversion, and cvs repositories were all open and indexed by Google.
Fast-forward to 2017, and here we are living in world that is being eaten by open source software.
There are two factors that give you a real sense of the times:
1. Microsoft — long the poster child for closed-source proprietary software and a crusader against open source — has embraced open source software whole-heartedly, forming the .NET Foundation (which has Red Hat as a member) and joining the Linux Foundation. .NET is now developed in the open as an open source project.
2. GitHub has become a singular social network that ties together issue tracking and distributed source control.
For software developers coming from a primarily closed source background, it’s not really clear yet what just happened. To them, open source equals “working for free in your spare time”.
For those of us who spent the past decade making a billion dollar open source software company however, there is nothing free or spare time about working in the open. And the benefits and consequences of working in the open are clear: your reputation is yours and is portable between companies. GitHub is a social network where your social capital, created by your commits and contribution to the global conversation in whatever technology you are working, is yours — not tied to the company you happen to be working at temporarily.
Smart people will take advantage of this — they’ll contribute patches, issues, and comments upstream to the languages and frameworks that they use on the daily in their job — TypeScript, .NET, Redux.
They’ll also advocate for and creatively arrange for as much of their work as possible to be done in the open — even if it is just their contribution graph to private repositories.
GitHub is a great equaliser. You may not be able to get a job in Australia from India, but there is nothing stopping you from working with Australians on GitHub from India.
The way to get a job at Red Hat last decade was obvious. You just started collaborating with Red Hat engineers on some piece of technology that they were working on in the open, then when it was clear that you were making a valuable contribution and were a great person to work with, you would apply for a job. Or they would hit you up.
Now that same pathway is open for everyone, into just about any technology. As the world is eaten by open source, the same dynamic is now prevalent everywhere.
In [a recent interview](http://www.theregister.co.uk/2017/02/15/think_different_shut_up_and_work_harder_says_linus_torvalds/) Linus Torvalds (49k followers, following 0 on GitHub), the inventor of Linux and git, put it like this:
“You shoot off a lot of small patches until the point where the maintainers trust you, and at that point you become more than just a guy who sends patches, you become part of the network of trust”
Your reputation is your location in a network of trust. When you change companies, that weakens and some of it is lost. If you live in a small town and have been there for a long time, then people all over town know you. If you move countries -however- that goes. You end up somewhere where no-one knows you — and worse, no-one knows anyone who knows you.
You’ve lost your first and second, and probably even third-degree connections. Unless you’ve built a brand by speaking at conferences or some other big ticket thing, the trust you built up by working with others and committing code to a corporate internal repository is gone.
However, if that work has been on GitHub, it’s not gone. It’s visible. It’s connected to a network of trust that is visible.
One of the first things that will happen is that the disadvantaged will start to take advantage of this. Students, new grads, immigrants. They’ll use this to move to Australia.
And that will change the landscape. Previously privileged developers will suddenly find their network disrupted. One of the principles of open source is meritocracy — the best idea wins, the most commits wins, the most passing tests wins, the best implementation wins, etc.
It’s not perfect (nothing is). And it doesn’t do away with or discount being a good person to work with. At Red Hat we fired some rock star engineers who just didn’t play well with others — and that stuff does show up in GitHub, mostly in the interactions with other contributors.
GitHub is not simply a code repository and list of raw commit numbers, as some people paint it in strawman arguments. It is a social network. I put it like this:
It’s not your code on GitHub that counts — it’s what other people say on GitHub about your code that counts.
That’s your portable reputation. And over the next 12–24 months, as some developers develop that and others don’t, it’s going to be a stark differentiator. It’s like having email versus not having email (now everyone has email) or having a cellphone versus not having a cellphone (now everyone has a cellphone). Eventually a vast majority will be working in the open, and it will again be a level playing field differentiated on other factors.
But right now, the developer career space is being disrupted by GitHub.
**About me:** I’m a Legendary Recruiter at [Just Digital People](http://www.justdigitalpeople.com.au/); a Red Hat alumnus; a [CoderDojo](http://coderdojobrisbane.com.au/) mentor; a founder of [Magikcraft.io](http://www.magikcraft.io/); the producer of [The JDP Internship — The World’s #1 Software Development Reality Show](http://www.tinyurl.com/thejdpinternship); the host of [The Best Tech Podcast in the World](http://www.thebesttechpodcastintheworld.com/); and a father. All inside a commitment to empowering Queensland’s Silicon Economy. |
8,678 | Samba 系列(十二):如何在 Samba4 AD 中集成 iRedMail Roundcube | https://www.tecmint.com/integrate-iredmail-roundcube-with-samba4-ad-dc/ | 2017-07-08T20:51:00 | [
"邮件",
"Samba",
"Roundcube"
] | https://linux.cn/article-8678-1.html | 
[Roundcube](https://www.tecmint.com/install-and-configure-roundcube-webmail-for-postfix-mail-server/) 是 Linux 中最常用的 Webmail 用户代理之一,它为终端用户提供了一个现代化的 Web 界面,可以与所有邮件服务进行交互,以便阅读、撰写和发送电子邮件。Roundcube 支持各种邮件协议,包括安全的邮件协议,如 IMAPS、POP3S 或 submission。
在本文中,我们将讨论如何在 iRedMail 中使用 IMAPS 以及 submission 安全端口配置 Roundcube,以检索和发送 Samba4 AD 帐户的电子邮件、如何从浏览器访问 iRedMail Roundcube Web 界面,并添加网址别名、如何启用 Samba4 AD 集成全局 LDAP 地址簿以及如何禁用某些不需要的 iRedMail 服务。
### 要求
1. [如何在 CentOS 7 上安装 iRedMail,用于Samba4 AD集成](/article-8567-1.html)
2. [在 CentOS 7 上配置 iRedMail,用于 Samba4 AD 集成](/article-8673-1.html)
### 第一步:在 Samba4 AD DC 中声明域帐户的电子邮件地址
1、 为了发送和接收 Samba4 AD DC 域账户的邮件,您需要编辑每个用户帐户,如下所示,通过从[安装了 RSAT 工具的 Windows 机器](/article-8097-1.html)并且已经加入 Samba4 AD 中打开 ADUC 工具显式地在邮箱字段填写正确的地址。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Active-Directory-User-and-Computers.jpg)
*添加邮箱帐户来加入 Samba4 AD DC*
2、 相似地,要使用邮件列表,你需要在 ADUC 中创建组,为每个组添加相应的电子邮件地址,并分配合适的用户帐户作为每个组的成员。
这步创建了一个邮件列表,所有 Samba4 AD 组成员的邮箱都会收到给这个 AD 组邮箱地址的邮件。使用下面的截图作为指导为 Samba4 组声明电子邮件字段,并为组添加域成员。
确保所有的域账户成员都添加到了声明了邮件地址的组中。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Create-Group-Admin-for-Samba4-AD-DC.png)
*为 Samba4 AD DC 创建组管理员*
[](https://www.tecmint.com/wp-content/uploads/2017/05/Add-Users-to-Group.png)
*将用户添加到组*
在本例中,发送给 [[email protected]](mailto:[email protected]) 的所有邮件地址将被该组的每个成员邮箱接收,它是 “Domain Admins” 组声明的电子邮件地址。
3、 你可以声明 Samba4 AD 帐户的电子邮件地址的另一种方法是直接从其中一个 AD DC 控制台使用 samba-tool 命令行创建一个用户或组,并使用 `--mail-address` 标志指定邮件地址。
使用下面其中一个命令创建一个指定邮件地址的用户:
```
# samba-tool user add [email protected] --surname=your_surname --given-name=your_given_name your_ad_user
```
创建一个指定邮件地址的组:
```
# samba-tool group add [email protected] your_ad_group
```
将成员添加到组中:
```
# samba-tool group addmembers your_group user1,user2,userX
```
使用下面的语法列出 samba-tool 中有关用户或者组的命令字段:
```
# samba-tool user add -h
# samba-tool group add -h
```
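把上述步骤串起来,可以先把要执行的 samba-tool 命令生成到一个脚本里,核对无误后再在 AD DC 上运行。下面是一个假想的 shell 示例(其中 `sales`、`tecmint.lan`、`user1`~`user3` 等名称均为示意值,请替换为你自己的环境;`--random-password` 用于让系统生成随机初始密码):

```shell
# 示意脚本:批量创建带邮箱的域用户,并把他们加入一个带邮箱的组(邮件列表)。
# 这里只是把命令写入文件供人工核对,确认无误后再在 AD DC 上以管理员身份执行。
group="sales"; domain="tecmint.lan"
{
  for u in user1 user2 user3; do
    echo "samba-tool user add $u --mail-address=${u}@${domain} --random-password"
  done
  echo "samba-tool group add $group --mail-address=${group}@${domain}"
  echo "samba-tool group addmembers $group user1,user2,user3"
} > /tmp/provision_mail_accounts.sh
cat /tmp/provision_mail_accounts.sh
```

核对生成的脚本没有问题后,在 AD DC 上执行 `sh /tmp/provision_mail_accounts.sh` 即可。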
### 第二步:安全 Roundcube Webmail
4、 开始修改 Roundcube 配置文件之前,首先使用 [netstat 命令](https://www.tecmint.com/20-netstat-commands-for-linux-network-management/)管道输出给 egrep 过滤器来列出 [Dovecot 和 Postfix](https://www.tecmint.com/configure-postfix-and-dovecot-with-virtual-domain-users-in-linux/) 监听的套接字,并确保安全端口(IMAPS 是 993,submission 是 587 )是活跃的并且已启用。
```
# netstat -tulpn| egrep 'dovecot|master'
```
5、 要强制邮件的接收和发送在使用安全的 IMAP 和 SMTP 端口的 Roundcube 以及 iRedMail 服务之间,打开位于 `/var/www/roundcubemail/config/config.inc.php` 的 Roundcube 配置文件并确保你修改过了下面的行,本例中是 `localhost`,如下片段所示:
```
// For IMAPS
$config['default_host'] = 'ssl://127.0.0.1';
$config['default_port'] = 993;
$config['imap_auth_type'] = 'LOGIN';
// For SMTP
$config['smtp_server'] = 'tls://127.0.0.1';
$config['smtp_port'] = 587;
$config['smtp_user'] = '%u';
$config['smtp_pass'] = '%p';
$config['smtp_auth_type'] = 'LOGIN';
```
如果 Roundcube 安装在独立于邮件服务(IMAP、POP3 或 SMTP 守护进程)所在主机的远程主机上,强烈建议使用这样的配置。
6、 接下来,不要关闭配置文件,搜索并做如下小的修改以便 Roundcube 能够通过 HTTPS 协议访问、隐藏版本号以及自动为登录 Web 界面的帐户追加域名。
```
$config['force_https'] = true;
$config['useragent'] = 'Your Webmail'; // Hide version number
$config['username_domain'] = 'domain.tld'
```
7、 同样,禁用下面的插件:managesieve 和 password,通过在以 `$config[‘plugins’]` 开头的行前添加注释 `//`。
一旦登录并验证了域,用户将从连接到 Samba4 AD DC 的 Windows 或 Linux 机器上更改密码。系统管理员将全局管理域帐户的所有筛选规则。
```
// $config['plugins'] = array('managesieve', 'password');
```
8、 最后,保存并关闭配置文件,并打开浏览器访问 Roundcube Webmail,通过 HTTPS 协议进入 iRedMail IP 地址或者 FQDN/mail 位置。
由于浏览器使用的是自签名证书,所以你首次访问 Roundcube 会在浏览器上看到一个警告。接受证书并用 Samba AD 帐户凭证登录。
```
https://iredmail-FQDN/mail
```
[](https://www.tecmint.com/wp-content/uploads/2017/05/Roundcube-Webmail-Login.png)
*Roundcube Webmail 登录*
### 第三步:在 Roundcube 中启用 Samba AD 联系人
9、 要配置 Samba AD 全局 LDAP 地址簿以在 Roundcube 联系人中显示,再次打开 Roundcube 配置文件,并做如下修改:
到达文件的底部,并辨认以 `# Global LDAP Address Book with AD` 开头的部分,删除到文件底部的所有内容,并替换成如下代码段:
```
# Global LDAP Address Book with AD.
#
$config['ldap_public']["global_ldap_abook"] = array(
'name' => 'tecmint.lan',
'hosts' => array("tecmint.lan"),
'port' => 389,
'use_tls' => false,
'ldap_version' => '3',
'network_timeout' => 10,
'user_specific' => false,
'base_dn' => "dc=tecmint,dc=lan",
'bind_dn' => "[email protected]",
'bind_pass' => "your_password",
'writable' => false,
'search_fields' => array('mail', 'cn', 'sAMAccountName', 'displayname', 'sn', 'givenName'),
'fieldmap' => array(
'name' => 'cn',
'surname' => 'sn',
'firstname' => 'givenName',
'title' => 'title',
'email' => 'mail:*',
'phone:work' => 'telephoneNumber',
'phone:mobile' => 'mobile',
'department' => 'departmentNumber',
'notes' => 'description',
),
'sort' => 'cn',
'scope' => 'sub',
'filter' => '(&(mail=*)(|(&(objectClass=user)(!(objectClass=computer)))(objectClass=group)))',
'fuzzy_search' => true,
'vlv' => false,
'sizelimit' => '0',
'timelimit' => '0',
'referrals' => false,
);
```
在这段代码中替换相应的 `name`、`hosts`、`base_dn`、`bind_dn` 和 `bind_pass` 的值。
10、 做了所需修改后,保存并关闭文件,登录 Roundcube webmail 界面,并进入地址簿菜单。
点击你为全局地址簿设置的名称,所有域帐户(用户和组)及其指定的电子邮件地址都会显示在联系人列表中。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Roundcube-User-Contact-List.png)
*Roundcube 用户联系人列表*
### 第四步:为 Roundcube Webmail 界面添加一个别名
11、 要从 <https://webmail.domain.tld> 访问 Roundcube 而不是从 iRedMail 默认提供的旧地址,你需要进行以下更改。
在已安装 RSAT 工具并已加入域的 Windows 机器上打开 DNS 管理器,并如下所示,为 iRedMail 的 FQDN 添加一条名为 webmail 的新 CNAME 记录。
[](https://www.tecmint.com/wp-content/uploads/2017/05/DNS-Webmail-Properties.jpg)
*DNS Webmail 属性*
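如果不方便使用 Windows 端的 DNS 管理器,也可以直接在 AD DC 上用 `samba-tool dns add` 添加同样的 CNAME 记录。下面是一个假想的示例(DNS 服务器名、区域、目标主机名均为示意值,`-U` 指定的域管理员帐户请按实际情况替换);为了便于核对参数顺序,这里先把命令打印出来:

```shell
# 示意:打印出将要在 AD DC 上执行的 samba-tool dns 命令,确认无误后再运行
# 位置参数依次为:DNS 服务器、区域、记录名、记录类型、记录值
cmd='samba-tool dns add tecmint.lan tecmint.lan webmail CNAME mail.tecmint.lan -U administrator'
echo "$cmd" > /tmp/dns_cname.cmd
cat /tmp/dns_cname.cmd
```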
12、 接下来,在 iRedMail 机器中,打开位于 `/etc/httpd/conf.d/ssl.conf` 的 Apache Web 服务器的 SSL 配置文件,将 `DocumentRoot` 指向 `/var/www/roundcubemail/`。
修改 `/etc/httpd/conf.d/ssl.conf` 片段:
```
DocumentRoot "/var/www/roundcubemail/"
```
重启 Apache 使更改生效。
```
# systemctl restart httpd
```
13、 现在打开下面的地址,Roundcube 界面应该会显示出来。接受自签名证书错误以进入登录页面。用你自己的域名替换例子中的 domain.tld。
```
https://webmail.domain.tld
```
### 第五步:禁用 iRedMail 未使用的服务
14、 由于 iRedMail 守护进程配置为查询 Samba4 AD DC LDAP 服务器的帐户信息和其他资源,因此可以通过使用以下命令来安全地停止和禁用 iRedMail 机器上的某些本地服务,例如 LDAP 数据库服务器和 iredpad 服务。
```
# systemctl stop slapd iredpad
# systemctl disable slapd iredpad
```
15、 另外,如下图所示,通过在 crontab 文件中的每行之前添加注释 `#`,禁用 iRedMail 执行的某些计划任务,例如 LDAP 数据库备份和 iRedPad 跟踪记录。
```
# crontab -e
```
[](https://www.tecmint.com/wp-content/uploads/2017/05/Disable-iRedMail-Tasks.png)
*禁用 iRedMail 任务*
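如果要注释的计划任务较多,也可以用 sed 批量处理。下面在一个示意的 crontab 副本上演示(文件内容和脚本路径都是假设的,与你机器上的实际条目可能不同;请先在副本上确认效果,再对真正的 crontab 操作):

```shell
# 构造一个示意的 crontab 副本
cat > /tmp/cron.demo <<'EOF'
1 4 * * * bash /var/vmail/backup/backup_openldap.sh
*/3 * * * * python /opt/iredpad/tools/cleanup_iredpad.py
EOF
# 把包含 openldap 或 iredpad 且尚未注释的行统一加上 '#'
sed -i -E 's/^([^#].*(openldap|iredpad).*)/#\1/' /tmp/cron.demo
cat /tmp/cron.demo
```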
### 第六步:在 Postfix 中使用邮件别名
16、 要将所有本地生成的邮件(发往 postmaster 并随后重定向到 root 帐户)重定向到特定的 Samba4 AD 帐户,请打开位于 `/etc/postfix/aliases` 中的 Postfix 别名配置文件,并修改 `root` 行,如下所示:
```
root: [email protected]
```
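修改之后,可以顺手验证一下别名的最终去向。下面的小片段在一个示意的 aliases 副本上模拟 Postfix 的查找逻辑(其中 `tecmint_user@tecmint.lan` 只是示意地址;真实环境中仍需用 `newaliases` 处理 `/etc/postfix/aliases`):

```shell
# 构造一个示意的 aliases 文件,并查出 root 最终被重定向到哪个帐户
cat > /tmp/aliases.demo <<'EOF'
postmaster: root
root: tecmint_user@tecmint.lan
EOF
awk -F': *' '$1 == "root" { print $2 }' /tmp/aliases.demo > /tmp/root_alias.txt
cat /tmp/root_alias.txt
```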
17、 执行 `newaliases` 命令来应用别名配置文件,以便 Postfix 能以它自己的格式读取,然后执行以下命令,测试邮件是否被发送到了正确的域电子邮件帐户。
```
# echo "Test mail" | mail -s "This is root's email" root
```
18、 邮件发送完毕后,请使用你为邮件重定向设置的域帐户登录 Roundcube webmail,并验证先前发送的邮件应该在你的帐户收件箱中。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Verify-User-Mail.png)
*验证用户邮件*
就是这样了!现在你已经有了一个完全工作的与 Samba4 AD 集成的邮件服务器了。域帐户可以用它们的内部或者其他外部域发送和接收邮件了。
本教程中使用的配置可以成功将 iRedMail 服务器集成到 Windows Server 2012 R2 或 2016 AD 中。
---
作者简介:
我是一个电脑上瘾的家伙,开源和基于 linux 的系统软件的粉丝,在 Linux 发行版桌面、服务器和 bash 脚本方面拥有大约4年的经验。
---
via: <https://www.tecmint.com/integrate-iredmail-roundcube-with-samba4-ad-dc/>
作者:[Matei Cezar](https://www.tecmint.com/author/cezarmatei/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | [Roundcube](https://www.tecmint.com/install-and-configure-roundcube-webmail-for-postfix-mail-server/), one of the most used webmail user agents in Linux, offers a modern web interface for end users to interact with all mail services in order to read, compose and send e-mails. Roundcube supports a variety of mail protocols, including the secured ones, such as IMAPS, POP3S or submission.
In this topic we’ll discuss how to configure Roundcube in iRedMail with IMAPS and submission secured ports to retrieve and send emails for Samba4 AD accounts, how to access iRedMail Roundcube web interface from a browser and add a web address alias, how to enable Samba4 AD integration for Global LDAP Address Book and how to disable some unneeded iRedMail services.
#### Requirements
- [How to Install iRedMail on CentOS 7 for Samba4 AD Integration](https://www.tecmint.com/install-iredmail-on-centos-7-for-samba4-ad-integration/)
- [Configure iRedMail on CentOS 7 for Samba4 AD Integration](https://www.tecmint.com/integrate-iredmail-to-samba4-ad-dc-on-centos-7/)
### Step 1: Declare E-mail Address for Domain Accounts in Samba4 AD DC
**1.** In order to send and receive mail for **Samba4 AD DC** domain accounts, you need to edit each user account and explicitly set the email field with the proper e-mail address by opening the ADUC tool from a [Windows machine with RSAT tools installed](https://www.tecmint.com/manage-samba4-ad-from-windows-via-rsat/) and joined to Samba4 AD, as illustrated in the below image.

**2.** Similarly, to use mail lists, you need to create groups in ADUC, add the corresponding e-mail address for each group and assign the proper user accounts as members of the group.
With this setup created as a mail list, all members' mailboxes of a Samba4 AD group will receive mail destined for the AD group e-mail address. Use the below screenshots as a guide to declare the e-mail field for a Samba4 group account and add domain users as members of the group.
Make sure all accounts members added to a group have their e-mail address declared.


In this example, all mails sent to the **[email protected]** e-mail address declared for the '**Domain Admins**' group will be received by each member mailbox of this group.
**3.** An alternative method that you can use to declare the e-mail address for a Samba4 AD account is by creating a user or a group with the samba-tool command line directly from one of the AD DC consoles and specifying the e-mail address with the `--mail-address` flag.
Use one of the following command syntax to create a user with e-mail address specified:
# samba-tool user add [email protected] --surname=your_surname --given-name=your_given_name your_ad_user
Create a group with e-mail address specified:
# samba-tool group add [email protected] your_ad_group
To add members to a group:
# samba-tool group addmembers your_group user1,user2,userX
To list all available samba-tool command fields for a user or a group use the following syntax:
# samba-tool user add -h
# samba-tool group add -h
### Step 2: Secure Roundcube Webmail
**4.** Before modifying Roundcube configuration file, first, use [netstat command](https://www.tecmint.com/20-netstat-commands-for-linux-network-management/) piped through egrep filter to list the sockets that [Dovecot and Postfix](https://www.tecmint.com/configure-postfix-and-dovecot-with-virtual-domain-users-in-linux/) listen to and assure that the properly secured ports (993 for IMAPS and 587 for submission) are active and enabled.
# netstat -tulpn| egrep 'dovecot|master'
**5.** To enforce mail reception and transfer between Roundcube and iRedMail services on secured IMAP and SMTP ports, open Roundcube configuration file located in **/var/www/roundcubemail/config/config.inc.php** and make sure you change the following lines, for localhost in this case, as shown in the below excerpt:
// For IMAPS
$config['default_host'] = 'ssl://127.0.0.1';
$config['default_port'] = 993;
$config['imap_auth_type'] = 'LOGIN';
// For SMTP
$config['smtp_server'] = 'tls://127.0.0.1';
$config['smtp_port'] = 587;
$config['smtp_user'] = '%u';
$config['smtp_pass'] = '%p';
$config['smtp_auth_type'] = 'LOGIN';
This setup is highly recommended in case Roundcube is installed on a different host than the one that provides the mail services (IMAP, POP3 or SMTP daemons).
**6.** Next, don’t close the configuration file, search and make the following small changes in order for Roundcube to be visited only via HTTPS protocol, to hide the version number and to automatically append the domain name for accounts who login in the web interface.
$config['force_https'] = true;
$config['useragent'] = 'Your Webmail'; // Hide version number
$config['username_domain'] = 'domain.tld'
**7.** Also, disable the following plugins: **managesieve** and **password**, by adding a comment (`//`) in front of the line that starts with **$config['plugins']**.
Users will change their password from a Windows or Linux machine joined to Samba4 AD DC once they login and authenticate to the domain. A sysadmin will globally manage all sieve rules for domain accounts.
// $config['plugins'] = array('managesieve', 'password');
**8.** Finally, save and close the configuration file and visit Roundcube Webmail by opening a browser and navigate to iRedMail IP address or FQDN/mail location via HTTPS protocol.
The first time when you visit Roundcube an alert should appear on the browser due to the Self-Signed Certificate the web server uses. Accept the certificate and login with a Samba AD account credentials.
https://iredmail-FQDN/mail

### Step 3: Enable Samba AD Contacts in Roundcube
**9.** To configure the Samba AD Global LDAP Address Book to appear in Roundcube Contacts, open the Roundcube configuration file again for editing and make the following changes:
Navigate to the bottom of the file and identify the section that begins with ‘**# Global LDAP Address Book with AD**’, delete all its content until the end of the file and replace it with the following code block:
# Global LDAP Address Book with AD.
#
$config['ldap_public']["global_ldap_abook"] = array(
'name' => 'tecmint.lan',
'hosts' => array("tecmint.lan"),
'port' => 389,
'use_tls' => false,
'ldap_version' => '3',
'network_timeout' => 10,
'user_specific' => false,
'base_dn' => "dc=tecmint,dc=lan",
'bind_dn' => "[email protected]",
'bind_pass' => "your_password",
'writable' => false,
'search_fields' => array('mail', 'cn', 'sAMAccountName', 'displayname', 'sn', 'givenName'),
'fieldmap' => array(
'name' => 'cn',
'surname' => 'sn',
'firstname' => 'givenName',
'title' => 'title',
'email' => 'mail:*',
'phone:work' => 'telephoneNumber',
'phone:mobile' => 'mobile',
'department' => 'departmentNumber',
'notes' => 'description',
),
'sort' => 'cn',
'scope' => 'sub',
'filter' => '(&(mail=*)(|(&(objectClass=user)(!(objectClass=computer)))(objectClass=group)))',
'fuzzy_search' => true,
'vlv' => false,
'sizelimit' => '0',
'timelimit' => '0',
'referrals' => false,
);
On this block of code replace **name**, **hosts**, **base_dn**, **bind_dn** and **bind_pass** values accordingly.
**10.** After you’ve made all the required changes, save and close the file, login to Roundcube webmail interface and go to Address Book menu.
Click on the name you chose for your **Global Address Book** and a contact list of all domain accounts (users and groups) with their specified e-mail addresses should be visible.

### Step 4: Add an Alias for Roundcube Webmail Interface
**11.** To visit Roundcube at a web address with the following form **https://webmail.domain.tld** instead of the old address provided by default by iRedMail you need to make the following changes.
From a joined Windows machine with RSAT tools installed, open DNS Manager and add a new CNAME record for iRedMail FQDN, named webmail, as illustrated in the following image.

**12.** Next, on iRedMail machine, open Apache web server SSL configuration file located in **/etc/httpd/conf.d/ssl.conf** and change DocumentRoot directive to point to **/var/www/roundcubemail/** system path.
file **/etc/httpd/conf.d/ssl.conf** excerpt:
DocumentRoot "/var/www/roundcubemail/"
Restart Apache daemon to apply changes.
# systemctl restart httpd
**13.** Now, point the browser to the following address and the Roundcube interface should appear. Accept the Self-Signed Certificate error to continue to the login page. Replace domain.tld from this example with your own domain name.
https://webmail.domain.tld
### Step 5: Disable iRedMail Unused Services
**14.** Since iRedMail daemons are configured to query Samba4 AD DC LDAP server for account information and other resources, you can safely stop and disable some local services on iRedMail machine, such as LDAP database server and iredpad service by issuing the following commands.
# systemctl stop slapd iredpad
# systemctl disable slapd iredpad
**15.** Also, disable some scheduled tasks performed by iRedMail, such as LDAP database backup and iRedPad tracking records by adding a comment **(#)** in front of each line from crontab file as illustrated on the below screenshot.
# crontab -e

### Step 6: Use Mail Alias in Postfix
**16.** To redirect all locally generated mail (destined for postmaster and subsequently redirected to root account) to a specific Samba4 AD account, open Postfix aliases configuration file located in **/etc/postfix/aliases** and modify root line as follows:
root: [email protected]
**17.** Apply the aliases configuration file so that Postfix can read it in its own format by executing the newaliases command, and test if the mail gets sent to the proper domain e-mail account by issuing the following command.
# echo "Test mail" | mail -s "This is root's email" root
**18.** After the mail has been sent, login to Roundcube webmail with the domain account you’ve setup for mail redirection and verify the previously sent mail should be received in your account Inbox.

That's all! Now, you have a fully working mail server integrated with Samba4 Active Directory. Domain accounts can send and receive mail for their internal domain or for other external domains.
The configurations used in this tutorial can be successfully applied to integrate an iRedMail server to a Windows Server 2012 R2 or 2016 Active Directory.
|
8,679 | OpenBSD 将在每次重启后都使用和之前不同的内核 | https://www.bleepingcomputer.com/news/security/openbsd-will-get-unique-kernels-on-each-reboot-do-you-hear-that-linux-windows/ | 2017-07-09T06:54:39 | [
"内核",
"安全"
] | https://linux.cn/article-8679-1.html | 
在 OpenBSD 的测试快照中加入了一个新的功能,每次当 OpenBSD 用户重启或升级计算机时都会创建一个独特的内核。
该功能被称之为 KARL(<ruby> 内核地址随机化链接 <rp> ( </rp> <rt> Kernel Address Randomized Link </rt> <rp> ) </rp></ruby>),即以随机的顺序重新链接其内部的内核文件,从而每次生成一个独特的内核二进制文件。
当前的稳定版中,OpenBSD 内核使用预先定义好的顺序来链接和加载内核二进制文件中的内部文件,这导致所有用户的内核都是一样的。
### KARL 与 ASLR 不同
KARL 由 Theo de Raadt 开发。KARL 会在安装、升级和重启时生成一个新的内核二进制文件。当用户启动、升级和重启机器时,最新生成的内核会替换已有的内核二进制,而操作系统会生成一个新的内核,其将用于下次启动、升级和重启,周而复始。
不要将 KARL 和 ASLR(<ruby> 地址空间布局随机化 <rp> ( </rp> <rt> Address Space Layout Randomization </rt> <rp> ) </rp></ruby>)相混淆,ASLR 是一种用于随机化应用代码执行的内存地址的技术,以防止知道应用或内核运行的特定区域而被针对性利用。
de Raadt 说,“它仍然装载在 KVA(<ruby> 内核虚拟地址空间 <rp> ( </rp> <rt> Kernel Virtual Address Space </rt> <rp> ) </rp></ruby>)中的同样位置,这不是内核的 ASLR!”
相反,KARL 以随机的内部结构生成内核二进制,这样漏洞利用程序就不能泄露或攻击内核内部函数、指针或对象。技术性的解释参见下面内容。
一个独特的内核是这样链接的,启动汇编代码仍放在原处,接着是随机大小的空隙,然后是随机重组的其它 .o 文件。这样的结果就是函数和变量之间的距离是全新的。一个指针的信息泄露将不会暴露其它指针或对象。这或许会减少可变体系架构的组件,因为指令流的多态性被嵌套偏移的改变所破坏。
“因此,每次的新内核都是独特的。”de Raadt 说。
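为了直观理解这种随机重链接,下面用一小段 shell 模拟其思路(纯属示意:目标文件名是随便取的,也并非 OpenBSD 实际的构建脚本):

```shell
# 示意:把内核目标文件按随机顺序重排,模拟 KARL 每次生成不同链接顺序的做法
objs="kern_exec.o kern_fork.o vfs_bio.o uvm_map.o tty.o"
echo "$objs" | tr ' ' '\n' | shuf > /tmp/karl_order.txt
# 真正的 KARL 会把启动代码(locore)固定放在最前,其余 .o 按随机顺序交给 ld(1):
echo "ld -o /bsd.new locore.o $(tr '\n' ' ' < /tmp/karl_order.txt)"
```

每次运行,链接顺序都不同,因而(在真实的重链接中)生成的内核二进制也各不相同,函数与变量之间的相对距离随之改变。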
### 该功能是最近两个月开发的
该功能的开发始于五月份,[首次讨论](https://marc.info/?l=openbsd-tech&m=149732026405941&w=2)出现于六月中旬的 OpenBSD 技术邮件列表中。KARL [最近出现于](http://marc.info/?l=openbsd-tech&m=149887978201230) OpenBSD 6.1 的快照版本中。
“当今的情况是许多人从 OpenBSD 安装内核二进制,然后这个相同的内核二进制将运行六个月以上。当然,如果你重复地引导这个相同的内核二进制,其内存布局也是一样的。这就是现在我们所提交的代码解决的问题。”de Raadt 说,“然而, -current 快照包含了一些我正在和 Robert Peichaer 开发的将来的变化。那个变化将可以使你每次重启都启动到一个新链接的内核上。”
### KARL 是一个独有的功能
一个销售针对隐私的硬件产品的初创企业 Technoethical 的创始人 Tiberiu C. Turbureanu 对 Bleeping Computer 说,这是一个 OpenBSD 独有的功能。
“在 Linux 中没有实现它,”Turbureanu 说,“看起来很棒的想法”,估计有可能该功能会移植到 Linux 内核中。
不过,Linux 刚刚增加了对 KASLR(<ruby> 内核地址空间布局随机化 <rp> ( </rp> <rt> Kernel Address Space Layout Randomization </rt> <rp> ) </rp></ruby>)的支持,该功能是将 ASLR 移植到了内核本身,它会将内核加载到随机的内存地址。
该功能在上周发布的 Linux 4.12 中[默认启用](https://www.phoronix.com/scan.php?page=news_item&px=KASLR-Default-Linux-4.12)。这两者的不同是 KARL 是装载不同的内核到同一个位置,而 KASLR 则是装载相同的二进制到随机的位置。目标相同,做法不同。
在 Windows 中没有支持 KARL,但是微软使用 KASLR 已经很多年了。反病毒软件厂商 Emsisoft 的 CTO Fabian Wosar 完全赞成在 Windows 内核中加入 KARL。
“OpenBSD 的这个思路比(当前的 Windows 内核防护)走得更远,因为这样每个人都会拥有一个独一无二的内核二进制,”Wosar 在与 Bleeping Computer 的私下交流中说道。
“即便是你知道了(随机的)内核起始点,你也不能用它来找到要定位的特定函数,函数相对于内核起始点的位置是随系统不同而不同的,”Wosar 补充说。
如果 Windows 和 Linux 等其它操作系统平台也拥有 KARL,将会极大地改善其用户的安全性。
| 200 | OK | A new feature added in test snapshots for OpenBSD releases will create a unique kernel every time an OpenBSD user reboots or upgrades his computer.
This feature is named KARL — Kernel Address Randomized Link — and works by relinking internal kernel files in a random order so that it generates a unique kernel binary blob every time.
Currently, for stable releases, the OpenBSD kernel uses a predefined order to link and load internal files inside the kernel binary, resulting in the same kernel for all users.
## KARL is different from ASLR
Developed by Theo de Raadt, KARL will work by generating a new kernel binary at install, upgrade, and boot time. If the user boots up, upgrades, or reboots his machine, the most recently generated kernel will replace the existing kernel binary, and the OS will generate a new kernel binary that will be used on the next boot/upgrade/reboot, constantly rotating kernels on reboots or upgrades.
KARL should not be confused with ASLR — Address Space Layout Randomization — a technique that randomizes the memory address where application code is executed, so exploits can't target a specific area of memory where an application or the kernel is known to run.
"It still loads at the same location in KVA [Kernel Virtual Address Space]. This is not kernel ASLR!," said de Raadt.
Instead, KARL generates kernel binaries with random internal structures, so exploits cannot leak or attack internal kernel functions, pointers, or objects. A technical explanation is available below.
"As a result, every new kernel is unique," de Raadt says.
## Feature developed in the last two months
Work on this feature started in May and was [first discussed](https://marc.info/?l=openbsd-tech&m=149732026405941&w=2) in mid-June on the OpenBSD technical mailing list. KARL has [recently landed](http://marc.info/?l=openbsd-tech&m=149887978201230) in snapshot versions of OpenBSD 6.1.
"The situation today is that many people install a kernel binary from OpenBSD, and then run that same kernel binary for 6 months or more. Of course, if you booted that same kernel binary repeatedly, the layout would be the same. That is where we are today, for commited code," de Raadt says.
"However, snapshots of -current contain a futher change, which I worked on with Robert Peichaer. That change is scaffolding to ensure you boot a newly-linked kernel upon every reboot.
## KARL is a unique feature
Speaking to Bleeping Computer, Tiberiu C. Turbureanu, founder of [Technoethical](https://tehnoetic.com), a startup that sells [privacy-focused hardware products](https://www.bleepingcomputer.com/news/hardware/there-are-only-25-devices-that-respect-your-privacy/), says this feature appears to be unique to OpenBSD.
"It's not implemented in Linux," Turbureanu said. "This looks like a great idea," the expert added, regarding the possibility of having this feature ported to the Linux kernel.
Instead, the Linux project has just added support for Kernel Address Space Layout Randomization (KASLR), a feature that ports ASLR to the kernel itself, loading the kernel at a randomized memory address.
This feature was [turned on by default](https://www.phoronix.com/scan.php?page=news_item&px=KASLR-Default-Linux-4.12) in Linux 4.12, released last week. The difference between the two is that KARL loads a different kernel binary in the same place, while KASLR loads the same binary in random locations. Same goal, different paths.
As for Windows, KARL is not supported, but Microsoft has used KASLR for many years. Fabian Wosar, Chief Technical Officer for antivirus maker [Emsisoft](https://www.emsisoft.com/) is all on board with adding KARL to the Windows kernel.
"OpenBSD's idea would go even further [than current Windows kernel protections] as everyone would have a unique kernel binary as well," Wosar said in a private conversation with Bleeping Computer.
"So even if you had the address where the kernel starts (which is randomised), you couldn't use it to figure out where certain functions are located, as the location of the functions relative to the kernel start will be different from system to system as well," Wosar added.
Having KARL on other OS platforms would greatly improve the security of both Windows and Linux users.
|
8,680 | 幼犬式免费:免费软件中的无形消费 | https://opensource.com/article/17/2/hidden-costs-free-software | 2017-07-09T10:34:00 | [
"自由软件",
"免费软件"
] | https://linux.cn/article-8680-1.html | 
>
> 编者按:本文中使用的“Free”一词,通常在开源世界会指“自由”,对于此词的辨析,有个著名的例句是,“<ruby> 如自由一词的自由 <rp> ( </rp> <rt> free as in freedom </rt> <rp> ) </rp></ruby>”,而非“<ruby> 如免费啤酒般的免费 <rp> ( </rp> <rt> free as in beer </rt> <rp> ) </rp></ruby>”,但是在本文中,作者使用的“Free”应该是只使用了其“免费”的词义,并分别在“freedom”、“beer”和“puppy”三个场景下进行了辨析。
>
>
>
我们习惯于软件被描述为“<ruby> 自由式免费 <rt> free as in freedom </rt></ruby>”和“<ruby> 啤酒式免费 <rt> free as in beer </rt></ruby>”。但还有另一种不常被提起的“免费”——“<ruby> 幼犬式免费 <rt> free as in puppy </rt></ruby>”。这个概念来自于当别人送你一只免费的小狗,但那只小狗不是真的免费。日常照顾它需要花费大量精力与金钱。商业术语是“总体拥有成本”,即 TCO ,这适用于所有场景,不仅仅是开源软件和小狗。
既然免费小狗问题适用于所有事物,那么它对开源软件有什么特别重要的意义呢?有几点解释。首先,如果你已经在为软件付费,那么你就已经有了软件是有成本的预期,起初免费、后来需要花钱的软件反倒像是很无理的要求。其次,如果这发生在一个组织首次采用开源项目的时候,就会阻碍该组织以后再采用开源项目。最后,还有违反直觉的一点:表明开源软件是有成本的,反而可能让它更容易“卖出去”。如果它真的毫无成本,未免好得令人难以置信。
接下来的部分是软件消费渐显的共同之处。这绝不是一个详尽的列表。
### 起始消费
开始使用软件之前,你必须首先拥有这个软件。
* **软件:** 因为它是开源的不一定意味着它是*免费的*。
* **硬件:** 考虑到软件的需求。如果你没有使用软件所需的硬件(可能是服务器硬件或者客户端硬件),你得买。
* **培训:** 很少有软件完全直白如话的。在于你是选择培训还是自己去弄清楚。
* **实战:** 把所有零部件放在一起只是开始,现在,是时候把所有东西拼在一起了。
+ **安装和配置:** 至少这将花费一些员工的工作时间。如果这是一个大项目,你可能需要花钱请一个系统整合服务商或者其他供应商来做这件事。
+ **数据导入:** 如果要取代现成的系统,就存在数据搬家的问题。理想情况下,所有数据都遵循同一标准,那就没什么问题。然而在很多情况下,需要写一些脚本来提取和重载数据。
+ **其他系统的接口:** 说到写脚本,这个软件能和你使用的其他软件(例如,目录服务或者工资软件)很好联系起来吗?
+ **定制:** 如果原本的软件不能满足你所有的需求,那它可能需要定制。你可以做到,但是仍需要努力或者是一些原材料。
* **商业变化:** 新软件很可能会改变你的组织做某些事情的方式(但愿是向好的方向)。然而这种转变不是免费的。例如,在员工适应新软件期间,生产效率刚开始可能会下降。
### 经营成本
安装软件是简单部分,现在你得使用它。
* **更多培训:** 什么,你认为我们已经做好了? 过段时间,你的组织会加入新人,他们需要学习如何使用这个软件,或者说是添加了额外功能的新版本软件发布了。
* **维护:**
+ **订阅费:** 有些软件通过收取订阅费来提供更新。
+ **补丁:**取决于软件的自身,打补丁需要费很多功夫。包括测试和部署。
+ **开发:**你自己做所有定制吗?现在你必须维护下去了。
* **支持:** 当它坏了得有人来修,无论是供应商还是自己的团队,确实需要一笔花费。
* **良好的做法:** 这个不是必需品,但如果你在使用开源软件,无论如何给一些回馈是非常好的。比如代码贡献、在邮件列表提供支持、赞助年度会议等等。
* **商业利益:** 好吧,这不是一项花费,而且它可以抵消一部分成本。使用这个软件对你的组织意味着什么呢?如果它使得量产产品减少了 25% 的浪费,这就是价值。再举一个例子,它也许能帮助你的非营利组织将重复捐赠提高 30%。
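上面这些类别可以用一个简单的总体拥有成本(TCO)估算把数字串起来。下面是一个极简的 Python 示意,其中的金额纯属虚构的占位数字,`tco` 函数也只是为说明思路而假设的:

```python
# 玩具级 TCO 估算:一次性成本 + 按年累计的维护成本 - 按年累计的收益。
# 所有数字均为虚构示例。
costs = {
    "training": 2000,       # 培训
    "integration": 5000,    # 安装、集成与数据导入
    "maintenance_per_year": 1500,
}
benefits_per_year = {"waste_reduction": 4000}  # 例如减少浪费带来的价值

def tco(years):
    one_time = costs["training"] + costs["integration"]
    recurring = costs["maintenance_per_year"] * years
    offset = sum(benefits_per_year.values()) * years
    return one_time + recurring - offset

print(tco(3))  # → -500,即三年后收益已反超总成本
```

把你所在环境的真实数字代入,就能快速看出收支平衡点在哪一年。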
这样的例子数不胜数,要想出所有的花费确实需要相当的想象力。算出正确的数值需要一些经验和大量合理的猜测,但是把整个过程走一遍会让事情更加清晰。就好像一只小狗,如果预先知道自己将会付出什么,这可能会是一段值得的经历。
(题图:opensource.com)
---
作者简介:
Ben Cotton - Ben Cotton 是一个培训过的气象学家和一个职业的高效计算机工程师。 Ben 在 Cycle Computing 做技术传教士。他是 Fedora 用户和贡献者,合作创办当地的一个开源集会,是一名开源倡议者和软件自由机构的支持者。他的推特 (@FunnelFiasco)
---
via: <https://opensource.com/article/17/2/hidden-costs-free-software>
作者:[Ben Cotton](https://opensource.com/users/bcotton) 译者:[XYenChi](https://github.com/XYenChi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | We're used to hearing of software being described as "free as in freedom" and "free as in beer." But there's another kind of "free" that doesn't get talked about as much: "free as in puppy." This concept is based around the idea that when someone gives you a free puppy, that puppy isn't really free. There's a lot of work and expenses that go into its daily care. The business term is "total cost of ownership," or TCO, and it applies to anything, not just open source software and puppies.
So if the free puppy problem applies to everything, how is it important to open source software specifically? There are a few ways. First, if you're already paying for software, then you've set the expectation that it has costs. Software that's free up front but costs money later seems like a major imposition. Secondly, if it happens on an organization's first open source adoption project, it can put the organization off of adopting open source software in the future. Lastly and counterintuitively, showing that open source software has a cost may make it an easier "sell." If it's truly no cost, it seems too good to be true.
The following sections represent common areas for software costs to sneak in. This is by no means a comprehensive list.
## Setup costs
To begin using software, you must first have the software.
**Software:**Just because it's open source doesn't necessarily mean it's*gratis*.**Hardware:**Consider the requirements of the software. If you don't have the hardware that (this could be server hardware or client hardware) you need to use the software, you'll need to buy it.**Training:**Software is rarely completely intuitive. The choice is to get training or figure it out on your own.**Implementation**Getting all of the pieces in the same room is only the start. Now, it's time to put the puzzle together.
**Installation and configuration:**At a minimum this will take some staff time. If it's a big project, you may need to pay a systems integrator or some other vendor to do this.**Data import:**If you're replacing an existing system, there is data to move into a new home. In a happy world where everything complies with the same standard, this is not a problem. In many cases, though, it may be necessary to write some scripts to extract and reload data.**Interfaces with other systems:**Speaking of writing scripts, does this software tie in nicely with other software you use (for example, your directory service or your payroll software)?**Customization:**If the software doesn't meet all of your needs out of the box, it may need to be customized. You can do that, but it still requires effort and maybe some materials.
**Business changes:**This new software will probably change how your organization does something—hopefully for the better. However, the shift isn't free. For example, productivity may dip initially as staff get used to the new software.
## Operational costs
Getting the software installed is the easy part. Now you have to use it.
**More training:**What, did you think we were done with this? Over time, new people will probably join your organization and they will also need to learn how to use the software, or a new release will come out that adds additional functionality.**Maintenance:**
**Subscription:**Some software provides updates via a paid subscription.**Patches:**Depending on the nature of the software, there may be some effort in applying patches. This includes both testing and deployment.**Development:**Did you make any customizations yourself? Now you have to maintain those forever.
**Support:**Someone has to fix it when it goes wrong, and whether that's a vendor or your own team, there's a real cost.**Good citizenship:**This one isn't a requirement, but if you're using open source software, it would be nice if you gave back somehow. This might be code contributions, providing support on the mailing list, sponsoring the annual conference, etc.**Business benefits:**Okay, so this isn't a cost, but it can offset some of the costs. What does using this software mean for your organization? If it enables you to manufacture widgets with 25% less waste, then that's valuable. To provide another example, maybe it helps you increase repeat contributions to your nonprofit by 30%.
Even with a list like this, it takes a lot of imagination to come up with all of the costs. Getting the values right requires some experience and a lot of good guessing, but just going through the process helps make it more clear. Much like with a puppy, if you know what you're getting yourself into up front, it can be a rewarding experience.
|
8,682 | 安卓编年史(28):Android 5.0 Lollipop——有史以来最重要的安卓版本(2) | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/28/ | 2017-07-10T09:28:00 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8682-1.html | 
#### ART——为未来提供了一个平台的安卓运行时
安卓里没有多少组件的血统能追溯到 1.0 时代,但在 2014 年, Dalvik 这个驱动安卓应用的运行时是它们中的一员。Dalvik 最初是为单核、低端性能设备设计的,而且存储和内存占用的优先级要高于性能表现。在过去的几年里,谷歌给 Dalvik 扩充了越来越多的更新,比如 JIT 支持、并发垃圾回收,以及多进程支持。但是随着多核手机的出现,它们比 T-Mobile G1 快上很多倍,而这些功能升级扩充只能帮安卓到这里了。
解决方案就是用 ART 这个安卓运行时替换 Dalvik,这是一个完全为现代智能手机硬件重写的应用引擎。ART 更强调性能表现和用户界面流畅度。ART 带来了一个从 JIT(Just-in-time,即时)编译到 AOT(Ahead-of-time,提前)编译的转变。JIT 会在每次应用运行的时候即时编译,节省存储空间,因为编译后的代码从不写入存储,但它消耗更多的 CPU 和内存资源。AOT 会将编译后的代码保存到存储,让应用启动的时候更快并减少内存使用。ART 会在设备上将编译代码作为安装的一部分进行,而不分发预编译的代码,这样编译器可以进行一些针对特定设备的优化。ART 还带来了 64 位支持,扩大了内存寻址范围,由 64 位指令集带来更佳的性能表现(特别是在媒体和加密应用上)。
而最好的部分是这个变化将这些性能优化和 64 位支持带给了每个 java 安卓应用。ART 为每个 java 应用生成代码,因此任何对 ART 的改进都自动应用到了这些应用。同时 ART 也是在未来的升级计划下写就,所以它能够和安卓一同进化。
#### 一个系统层级的界面刷新

Material Design 带来了一个几乎对安卓所有界面的完全翻新。首先,整个核心系统界面改变了。安卓得到了一个全新的按钮集合,看起来有点像是 PlayStation 的手柄:三角形,圆形以及正方形按钮,分别代表后退,主屏幕,和最近应用。得益于全新的图标集,状态栏也是焕然一新。

“最近应用”获得了大翻新。从一个小略缩图纵向列表变成了一个巨大的,几乎全屏的略缩图串联列表。它还获得了一个新名字(也没那么守旧),“<ruby> 概览 <rp> ( </rp> <rt> Overview </rt> <rp> ) </rp></ruby>”。这明显是受到了前面版本的 Chrome 标签页切换器效果的启发。

顺带一说,在这个安卓版本里 Chrome 的标签页切换器效果消失了。作为一种将 Web 应用与本地应用同等对待的尝试,Chrome 标签合并到了概览列表。是的:最近“应用”列表现在显示的是最近打开的应用,加上最近打开的网站。在棒棒糖中,最近应用列表还采取了一种“以文档为中心”的方法,意味着应用可以在最近应用列表中显示多个项目。比如你在 Google Docs 中打开了两个文档,它们都会显示在最近应用中,让你可以在它们之间轻松切换,而不用到应用的文件列表去来回切换。

通知面板是全新的。谷歌给通知面板带来了“卡片”主题,将每个项目归整到它自己的矩形中。单个通知条目从黑色背景变成了白色背景,有了更佳的排版和圆形图标。这些新通知来到了锁屏上,将它从一个最没用的中间屏变成了很有用的屏幕,用于展示“这里是你不在的时候发生的事情”。

全屏的通知,比如来电以及闹钟,都被抛弃了,取而代之的是在屏幕顶部弹出一个“抬头(HUD)”通知。抬头通知也对“高优先级”应用可用,最初这是为即时消息设计的。但是否是高优先级的通知这取决于开发者的决定,在开发者意识到这可以让他们的通知更显眼之后,所有人都开始使用它。之后版本的安卓通过给用户提供“高优先级”的设置解决了这个问题。

谷歌还给棒棒糖添加了一个单独的,但很相似的“优先”通知系统。“优先”通知是一个介于完全静音和“提醒一切消息”之间的模式,允许用户将特定的联系人和应用标记为重要。优先模式只会为这些重要的人发出提醒。在界面上来看,它采用了音量控制附加通知优先级控制以及设置中心添加一项优先通知新设置的形式。当你处在优先模式的时候,状态栏会有一颗星形标识。

快速设置获得了一系列的大改善。控制项现在是一块在通知*上面*的面板,所以它可以通过“两次下拉”手势来打开它。第一次下拉会打开通知面板,第二次下拉手势会缩小通知面板并打开快速设置。快速设置的布局变了,抛弃了平铺排列,转为一个单独面板上的一系列浮动按钮。顶部是十分方便的亮度调节条,之后是连接,自动旋转,手电筒,GPS,以及 Chromecast 的按钮。
快速设置现在还有了实际的内嵌面板。它可以在主界面显示无线网络接入点,蓝牙设备,以及移动数据使用量。

Material Design 革新给了几乎每个应用一个新图标,并带来了一个更明亮,白色背景的应用抽屉。默认应用阵容也有了很大的变化。和这些新应用问声好吧:通讯录,谷歌文档,Fit,信息,照片,Play 报亭,以及谷歌幻灯片。和这些死去的应用说再见吧:相册,G+ 照片,People,Play 杂志,电子邮件,以及 Quickoffice。

这些新应用中很多来自 Google Drive,从一个单独的大应用分割成每个产品一个应用。现在我们有了云端硬盘,文档,表格,以及幻灯片,都来自于云端硬盘团队。云端硬盘同时也要对 Quickoffice 的死亡负责,云端硬盘团队令它元气大伤。在“谷歌从来没法做好决定”分类下:通讯录从“People”改回了“Contacts”,短信应用在运营商的要求下叫回了 “Messenger”。(那些运营商*不*喜欢谷歌环聊插手短信的职能。)我们有项真正的新服务:谷歌健身,一个健康追踪应用,可以在安卓手机和安卓手表上工作。Play 杂志也有了新的设计,添加了网站内容,所以它改名叫“Play 报亭”。
还有更多的谷歌专有应用接管 AOSP 的例子。
* “G+ 照片”变成了“谷歌照片”,并取代了 AOSP 的相册成为默认照片应用,而相册也就随之消亡了。改名成“谷歌照片”是为照片应用[退出 Google+](http://arstechnica.com/gadgets/2015/05/google-photos-leaves-google-launches-as-a-standalone-service/)并成为独立服务做准备。谷歌照片的发布在棒棒糖发布之后六个月——暂时应用只像是 Google+ 应用换了个图标和界面设计。
* Gmail 从电子邮件应用接管了 POP3,IMAP 以及 Exchange 邮件的任务。尽管死掉的电子邮件应用还有个图标,但那是假的——它仅仅只显示一条信息,告诉用户从 Gmail 应用设置电子邮件账户。
* “People”到“Contacts”的变化实际上是变为谷歌通讯录,又是一个取代 AOSP 对应应用的例子。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/28/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo/) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,683 | 在物联网中使用脚本语言面临的挑战与对策 | https://www.linux.com/news/event/elcna/2017/2/using-scripting-languages-iot-challenges-and-approaches | 2017-07-11T10:23:00 | [
"IoT",
"脚本语言"
] | https://linux.cn/article-8683-1.html | 
>
> 在即将到来的嵌入式 Linux 会议 + OpenIoT 峰会中,Paul Sokolovsky 将会讨论在嵌入式开发中使用脚本语言的一些挑战。
>
>
>
脚本语言(又称作<ruby> 超高级语言 <rt> Very High-Level Languages </rt></ruby>或 VHLLs ),例如 Python、 PHP 以及 JavaScript 常用在桌面、服务器和网页开发中。它们强大的内置功能能够让你花费少量的时间和精力来开发小型却有用的应用,Paul Sokolovsky,Linaro 公司物联网工程师如是说。然而,目前在物联网中使用超高级语言深度开发嵌入式应用相对来说有些别扭。
在即将到来的[嵌入式 Linux 会议](http://events.linuxfoundation.org/events/embedded-linux-conference) + [OpenIoT 峰会](https://events.linuxfoundation.org/events/openiot-summit/program/schedule)中,Sokolovsky 会讨论在嵌入式开发中使用 VHLLs 的挑战,并基于 MicroPython 的例子与 JerryScript + Zephyr.js 项目比较不同的对策。我们与 Sokolovsky 进行了一番交谈来获得更多信息。
**Linux.com:您可以给我们的读者一些 VHLLs 的背景知识吗?**
Paul Sokolovsky: 超高级语言成为计算机科学和信息技术风景中的一部分已经几十年了。也许第一个流行的脚本语言是 Unix shell(sh),尽管由于较小的特征集,它很少被认为是一种超高级语言,而是一种特定领域语言。所以第一个真正破纪录的 VHLLs 是 Perl(1987)和 Tcl(1988),很快紧跟着出现了 Python(1991),Ruby(1995),PHP(1995),JavaScript(1995)以及许多其它语言。
不同 VHLLs 之间的区别特性包括:它们的解析本能(从使用者的角度来看,也许是因为其中复杂的编译器作祟),内置可用的强大的数据类型如任意大小的列表和映射,可观的标准库,以及允许用户访问甚至更大的第三方库的外部模块系统。所有的这些特性都与相对容易使用的感觉(更少的输入,不需要构建等)和简单的学习曲线相耦合。
**Linux.com: 使用这些语言做开发有哪些优势?**
Sokolovsky: 优势的根源来自于以上描述的这些特性。一个新手可以非常轻松的开始使用脚本语言并且快速的学习它。很多 VHLLs 提供了一个强大的交互模式,所以你不需要去读那些厚厚的使用手册来开始使用脚本语言,而是直接去探索和体验它们。强大的内置功能允许你去开发小而有用的应用(脚本),而仅仅使用很少的时间和精力(这就是“脚本语言”名字的来源)。如果要转向开发大型应用,广泛的第三方库和可以轻而易举使用的模块系统使得开发变得流畅和高产。
**Linux.com: 在嵌入式平台上使用脚本开发和在其他平台开发有什么区别?**
Sokolovsky: 鉴于之前我们讨论过的 VHLLs 振奋人心的能力,有一个创意——为什么我们不能享受使用 VHLLs 为嵌入式设备做开发而具有所有(或者至少一部分)优势呢?这里我提到的“嵌入式设备”不仅仅是拥有 8-32 MB RAM 的小型 Linux 系统,还有运行在微控制器(MCU)上有几千字节内存的深度嵌入式系统。少量(有些时候几乎没有)的相关资源肯定使这个创意的实现变得更加复杂。 另一个问题是设备访问和交互。嵌入式设备通常没有显示屏和键盘,但是幸运的是解决这个问题的答案已经存在几十年了,这里要感谢 Unix,它提供了使用串口(UART)来搭建一个终端连接的方法。当然,在主机端,有些用户喜欢使用图形集成开发环境(IDE)来隐藏串口通信细节。
所以,由于嵌入式设备所有的这些不同特性,这个创意就是提供一个尽可能熟悉的工作环境。但熟悉只是其中一方面,另一方面,为了适应甚至最小的设备,工作环境需要尽可能的缩小。要想解决这些矛盾需要嵌入式 VHLLs 的操作可以高度配置,来适应不同的项目和硬件的需求。
**Linux.com:只有在物联网中使用这些语言才会遇到的挑战有哪些?比如说你如何处理内存限制?**
Sokolovsky: 当然,解释程序本身几乎不怎么消耗硬件资源。但是在当今世界,最珍贵的资源是人类的时间。不管你是一个研发工程师、一个仅仅有几个小时的周末创客、一个被 bug 和安全问题淹没的支持工程师,或者一个计划开发新产品的产品经理——你手头上大概都没有什么多余时间。因此需要将 VHLLs 的生产力提供到嵌入式工程师手上。
当前的工艺水平使得这些需求变得可行。公正地讲,即使是普通的微控制器(MCU)也有 16-32 KB RAM、128-256 KB ROM。这仅仅足够搭载一个核心解释程序、一个标准库类型的规范子集、一些硬件驱动,以及一个很小但是依旧有用的应用程序。假如你的硬件配置稍微越过了中间线,其能力就会快速增长,这实际上得益于一个从 1970 年代就闻名的技巧:相比原始机器代码,使用自定义的字节码和 p-code 能够让你获得更大的代码/特性密度。
在这条道路上有很多挑战,RAM 不够用是主要的一个。我是在一个 16 GB RAM 的笔记本上写下的这些话(但不断切换的话依然会很卡),而刚才提到的 16KB 比它小一百万倍!不过,通过小心的选择算法和编程技巧,在这样小的 RAM 下仍有可能通过脚本语言来执行简单程序,而相当复杂的程序可能需要 128-256K。
有很多的技术挑战需要解决(它们已经被成功的解决了),这里没有足够的篇幅来涵盖它们。不过,我在 OpenIoT 峰会上的演讲会涵盖使用两种嵌入式脚本语言的经验和成就:MicroPython(Python3 的子集)和 Zephyr.js(JavaScript/Node.js 的子集),都运行在 Linux 基金会的 Zephyr 实时操作系统上,它被寄希望于在 IoT 工业界取得和 Linux 在移动互联网和服务器方面一样的成就。(相关 PPT 会为无法参加 OpenIoT 会议的朋友在会议后放出。)
**Linux.com: 你能给我们一些 VHLLs 最适用的应用的例子吗?以及一些它们不适用的例子?**
Sokolovsky:以上是很多关于 VHLLs 的光明前景,公正的来说:在嵌入式开发中,这里有很多一厢情愿的幻想(或者说希望其是能够自我实现的预言)。在嵌入式开发中 VHLLs 现在可以提供的是:快速成型,以及教育/创客市场上所必须的易学性和易用性。有一些先行者在其它领域使用 VHLLs,但是就目前来看,它需要在基础构造和工具开发上投入更多。重要的是,这样的投入应遵循开源原则并分享,否则会逐渐损害到 VHLLs 能够节省使用者时间和精力的优势。
谨记这些,嵌入式 VHLLs 是发育完全(“逐渐变得完整”)的语言,能够适应各种类型的应用,但是要受制于硬件。例如,假如一个微控制器的规格低于之前提到的阈值,如一个老旧的 8 位微控制器,那么只有同样古老而优秀的 C 语言能够为你所用。另外一个限制是当你真的想要充分利用硬件时—— C 语言或者汇编程序才是正确的选择。但是,这里有一个惊喜——嵌入式 VHLLs 的开发者也想到了这一点,例如 MicroPython 允许你将 Python 和汇编程序在同一个应用中结合起来。
嵌入式 VHLLs 的优势是其可配置性和可(重复)编程性,外加灵活的连接性支持。这恰恰是 IoT 和智能设备最需要的,很多 IoT 应用使用起来也不需要太复杂。考虑一下,例如,一个可以贴在任何地方用来完成各种任务的智能按钮。但是,如果你需要调整双击的时间时怎么办?使用脚本语言,你可以做到。也许你完全不会考虑三连击,但是现在在某些情况下四连击都可能是有用的。使用脚本语言,你可以轻易修改它。
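文中“调整双击时间”的例子可以用几行 Python 来示意(所用语法属于 MicroPython 也支持的子集)。下面的 `count_clicks` 函数和时间窗口数值都是为演示而假设的,并非某个真实固件:

```python
# 多击检测示意:时间窗口只是一个变量,支持三连击、四连击只需改参数。
DOUBLE_CLICK_WINDOW = 0.4  # 秒,可随意调整

def count_clicks(timestamps, window=DOUBLE_CLICK_WINDOW):
    """把按键时间戳分组为多击事件,返回每组的点击次数。"""
    events, run = [], 1
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev <= window:
            run += 1
        else:
            events.append(run)
            run = 1
    events.append(run)
    return events

# 两次快速按键,停顿,再三次快速按键
print(count_clicks([0.0, 0.3, 2.0, 2.2, 2.4]))  # → [2, 3]
```

用固件语言(如 C)实现同样的改动往往要重新编译烧写,而在脚本环境里只是改一个变量。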
(题图:来自 Pixabay,基于 CC0 协议)
---
via: <https://www.linux.com/news/event/elcna/2017/2/using-scripting-languages-iot-challenges-and-approaches>
作者:[AMBER ANKERHOLZ](https://www.linux.com/users/aankerholz) 译者:[xiaow6](https://github.com/xiaow6) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,684 | 如何将树莓派变成电子书服务器 | https://opensource.com/article/17/6/raspberrypi-ebook-server | 2017-07-10T16:55:31 | [
"电子书",
"Calibre",
"树莓派"
] | https://linux.cn/article-8684-1.html |
>
> Calibre 电子书管理软件可以轻松地在树莓派 3 上设置电子书服务器,即使在连接较慢区域也是如此。
>
>
>

最近 [Calibre 3.0 发布了](https://the-digital-reader.com/2017/06/19/calibre-3-0-released/),它让用户能够在浏览器中阅读电子书!注意 Raspbian 的仓库还没有更新它(截至写作时)。
电子书是教师、图书馆员和其他人与学生共享书籍、课堂资料或其他文件的好方法,只需要你有可靠的带宽接入即可。但是,即使你的连接速度较慢或无法连接,还有一个简单的解决方案:使用在树莓派 3 上运行的开源 Calibre 电子书管理软件创建电子书服务器。这是我所做的,你也可以。
首先我下载了最新的 [Raspbian Pixel 镜像](https://www.raspberrypi.org/downloads/raspbian/),并安装在一个新的 8GB microSD 卡上。然后我插入 microSD,连接了键盘、鼠标并用一根 HDMI 线连接到一台旧的 LCD 电视,然后启动了 Pi。在我的显示器上[调整了 Pixel 环境分辨率](https://www.raspberrypi.org/forums/viewtopic.php?t=5851)并连接到本地网络之后,我准备开始了。我打开一个终端,并输入 `sudo apt-get update` 以获取操作系统的最新更新。

接下来,我在终端中输入 `sudo apt-get install calibre` 来安装 [Calibre](https://calibre-ebook.com/)。

我从命令行启动了 Calibre(注意它也可以从 GUI 启动)。Calibre 的界面非常直观。第一次启动时,你会看到 **Welcome to Calibre** 的向导。我将默认 “Calibre Library” 更改为 “CalibreLibrary”(一个词),因为这启动内容服务器时更容易。
在选择完我的 Calibre 内容位置后,我准备好开始下载书了。

我从菜单中选择了 **Get Books** 选项,在这很容易输入我的搜索字词,并选择我感兴趣的电子书提供者。我正在寻找[非 DRM](https://en.wikipedia.org/wiki/Digital_rights_management) 的材料,所以我选择 [Project Gutenberg](https://www.gutenberg.org/) 作为我的源。(Caliber 的免责声明指出,电子书交易是在你和个人内容提供商之间。)我在作者字段中输入 “Mark Twain”,并得到10个结果。

我选择了 *Adventures of Huckleberry Finn* 这本书。在下一页面上,我可以选择 **MOBI** 和 **EPUB** 这两种电子书格式。我选择了 EPUB,这本书下载得很快。

你也可以从其他内容提供商向库中添加图书,而不是在 Calibre 的列表中添加图书。例如,老师可以通过该内容服务器与学生分享电子书格式的开放教育资源。要加载内容,请使用界面最左侧的 “Add Books” 选项。
根据你图书库的大小,你也许需要增加 microSD 卡的大小。

将内容添加到电子书服务器后,即可与网络中的其他人共享内容。通过在终端中输入 `ifconfig` 获取你的树莓派 IP 地址。我正在使用无线网络,所以我在下面的例子中使用了 **wlan0** 中的结果。点击界面的最右侧并展开菜单。然后点击 “Connect and Share” 并启动服务器。

我下一步是通过我的电脑客户端连接到树莓派,访问我添加的电子书。我在客户端上打开一个浏览器并输入树莓派的地址,后面加上 **:8080** 端口。在我这里是 **<http://192.168.1.10:8080>** (根据你 Pi 的地址来适配)。
你会在浏览器中看到主页:

我已经测试,并能用 iPhone、Linux、MacOS 计算机轻易连接到服务器。
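如果想先在客户端上确认内容服务器是否可达,可以用一小段 Python 做个探测。下面的 `server_reachable` 函数是假设的辅助示例(主机地址请换成你树莓派的实际 IP),并非 Calibre 自带的功能:

```python
# 简单的 TCP 可达性探测:能建立到 8080 端口的连接即认为服务器在线。
import socket

def server_reachable(host, port=8080, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 例如:server_reachable("192.168.1.10") 对应文中的树莓派地址
```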
你可以在这个主页上探索各个选项,或者点击 **All Books** 显示服务器上的所有内容。

从这里,你可以下载书到你的设备并离线阅读了。
你还没有设置一台电子书服务器么?或者你考虑自己设置一台么?在评论中分享你的建议或者问题。
---
作者简介:
Don Watkins - 教育家、教育技术专家、企业家、开源倡导者。教育心理学硕士、教育领导硕士、Linux 系统管理员、CCNA、使用 Virtual Box 虚拟化。关注我 @Don\_Watkins。
---
via: <https://opensource.com/article/17/6/raspberrypi-ebook-server>
作者:[Don Watkins](https://opensource.com/users/don-watkins) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Recently [Calibre 3.0 was released](https://the-digital-reader.com/2017/06/19/calibre-3-0-released/) which enables users to read books in the browser! Note that Raspbian's repositories have not yet been updated yet (as of this writing).
eBooks are a great way for teachers, librarians, and others to share books, classroom materials, or other documents with students—provided you have ready and reliable access to broadband. But even if you have low or no connectivity, there's an easy solution: Create an eBook server with the open source Calibre eBook management software running on a Raspberry Pi 3. Here's how I did it—and you can, too.
First I downloaded the latest [Raspbian Pixel image](https://www.raspberrypi.org/downloads/raspbian/) and installed it on a new 8GB microSD card. Then I inserted the microSD; connected a keyboard, mouse, and an old LCD TV with an HDMI cable; and booted the Pi. After [adjusting the resolution](https://www.raspberrypi.org/forums/viewtopic.php?t=5851) of the Pixel environment on my monitor and connecting to the local network, I was ready to begin. I opened a terminal and entered **sudo apt-get update** to get the latest updates for the operating system.

opensource.com
Next, I installed the [Calibre](https://calibre-ebook.com/) software by entering **sudo apt-get install calibre** in a terminal.

opensource.com
I launched Calibre from the command line (note that it can be launched from the GUI also). The Calibre interface is very intuitive. The first time you launch, you see the **Welcome to Calibre** wizard. I changed the default Calibre Library to **CalibreLibrary** (one word), because it's easier when launching the content server.
After choosing the location for my Calibre content, I was ready to begin downloading books.

opensource.com
I selected the **Get Books** option from the menu, and it was very easy to enter my search terms and select the eBook provider I was interested in. I was looking for [non-DRM](https://en.wikipedia.org/wiki/Digital_rights_management) material, so I chose [Project Gutenberg](https://www.gutenberg.org/) as my source. (Calibre's disclaimer notes that eBook transactions are between you and the individual content providers.) I entered "Mark Twain" in the author field and got 10 results.

opensource.com
I selected *Adventures of Huckleberry Finn*. On the next screen, I could choose between the **MOBI** and **EPUB** eBook formats. I chose EPUB, and the book downloaded very quickly.

opensource.com
You can also add books to the library from other content providers, not in Calibre's list. For example, a teacher could share open educational resources in eBook format with students through this content server. To load the content, use the "Add Books" menu option at the far left of the interface
Depending on the size of your library, you may also need to increase the size of your microSD card.
After you have added content to your eBook server, you are ready to share it with the rest of your network. Get the IP address of your Raspberry Pi by entering **ifconfig** into the terminal. I was using the wireless network, so I used the result for **wlan0** in the example below. Navigate to the far right of the interface and expand the menu. Then, navigate to "Connect and Share" and start the server.

opensource.com
My next step was connecting my client computer to the Raspberry Pi to access the eBooks I'd added. I opened a browser on my client device and navigated to the Raspberry Pi's IP address with the port **:8080** appended. In my case, that was ** http://192.168.1.10:8080** (adapt that format to your Pi's IP address).
You'll see this home page in your browser:

opensource.com
I tested and easily connected to the server with an iPhone and Linux and MacOS computers.
You can explore the options on this home page, or click on **All Books** to display all the content on your eBook server.

opensource.com
From here, you can download the books to your device and read them offline.
Have you ever set up an eBook server? Or are you thinking about setting up one yourself? Share your advice or questions in the comments.
|
8,685 | Linux 的 EXT4 文件系统的历史、特性以及最佳实践 | https://opensource.com/article/17/5/introduction-ext4-filesystem | 2017-07-11T15:40:08 | [
"EXT4",
"文件系统",
"EXT"
] | https://linux.cn/article-8685-1.html |
>
> 让我们大概地从 EXT4 的历史、特性以及最佳实践这几个方面来学习它和之前的几代 EXT 文件系统有何不同。
>
>
>

在之前关于 Linux 文件系统的文章里,我写过一篇 [Linux 文件系统介绍](https://opensource.com/life/16/10/introduction-linux-filesystems) 和一些更高级的概念例如 [一切都是文件](https://opensource.com/life/15/9/everything-is-a-file)。现在我想要更深入地了解 EXT 文件系统的特性的详细内容,但是首先让我们来回答一个问题,“什么样才算是一个文件系统 ?” 一个文件系统应该涵盖以下所有特点:
1. **数据存储:** 对于任何一个文件系统来说,一个最主要的功能就是能够被当作一个结构化的容器来存储和获取数据。
2. **命名空间:** 命名空间是一个提供了用于命名与组织数据的命名规则和数据结构的方法学。
3. **安全模型:** 一个用于定义访问权限的策略。
4. **API:** 操作这个系统的对象的系统功能调用,这些对象诸如目录和文件。
5. **实现:** 能够实现以上几点的软件。
本文内容的讨论主要集中于上述几点中的第一项,并探索为一个 EXT 文件系统的数据存储提供逻辑框架的元数据结构。
### EXT 文件系统历史
虽然 EXT 文件系统是为 Linux 编写的,但其真正起源于 Minix 操作系统和 Minix 文件系统,而 Minix 最早发布于 1987,早于 Linux 5 年。如果我们从 EXT 文件系统大家族的 Minix 起源来观察其历史与技术发展那么理解 EXT4 文件系统就会简单得多。
### Minix
当 Linus Torvalds 在写最初的 Linux 内核的时候,他需要一个文件系统但是他又不想自己写一个。于是他简单地把 [Minix 文件系统](https://en.wikipedia.org/wiki/MINIX_file_system) 加了进去,这个 Minix 文件系统是由 [Andrew S. Tanenbaum](https://en.wikipedia.org/wiki/Andrew_S._Tanenbaum) 写的,同时它也是 Tanenbaum 的 Minix 操作系统的一部分。[Minix](https://en.wikipedia.org/wiki/MINIX) 是一个类 Unix 风格的操作系统,最初编写它的原因是用于教育用途。Minix 的代码是自由可用的,并有适当的许可协议,所以 Torvalds 可以把它用在 Linux 的最初版本里。
Minix 有以下这些结构,其中的大部分位于生成文件系统的分区中:
* [**引导扇区**](https://en.wikipedia.org/wiki/Boot_sector) 是其所安装的硬盘上的第一个扇区。这个引导块包含了一个非常小的引导记录和一个分区表。
* 每一个分区的第一个块都是一个包含了元数据的<ruby> 超级块 <rp> ( </rp> <rt> superblock </rt> <rp> ) </rp></ruby> ,这些元数据定义了其他的文件系统结构并将其定位于物理硬盘的具体分区上。
* 一个 **inode 位图块** 决定了哪些 inode 是在使用中的,哪一些是未使用的。
* **inode** 在硬盘上有它们自己的空间。每一个 inode 都包含了一个文件的信息,包括其所处的数据块的位置,也就是该文件所处的区域。
* 一个 **区位图** 用于保持追踪数据区域的使用和未使用情况。
* 一个 **数据区**, 这里是数据存储的地方。
对上述两种位图类型来说,一个<ruby> 位 <rp> ( </rp> <rt> bit </rt> <rp> ) </rp></ruby>表示一个指定的数据区或者一个指定的 inode。 如果这个位是 0 ,则表示这个数据区或者这个 inode 是未使用的;如果是 1 ,则表示正在使用中。
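位图的思路可以用一小段 Python 来示意。下面的 `Bitmap` 类是为说明概念而虚构的玩具实现,并非 Minix 的实际代码:

```python
# 每个 inode 或数据区对应一个位:0 表示空闲,1 表示使用中。
class Bitmap:
    def __init__(self, n):
        self.bits = bytearray((n + 7) // 8)  # 每字节记录 8 个对象的状态

    def mark_used(self, i):
        self.bits[i // 8] |= 1 << (i % 8)

    def mark_free(self, i):
        self.bits[i // 8] &= ~(1 << (i % 8))

    def in_use(self, i):
        return bool(self.bits[i // 8] >> (i % 8) & 1)

bm = Bitmap(64)          # 管理 64 个 inode(或数据区)
bm.mark_used(10)
print(bm.in_use(10), bm.in_use(11))  # → True False
```

用一个位而不是一个字节或整数来记录状态,正是这类早期文件系统在几 KB 内存里也能管理大量块的原因。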
那么,[inode](https://en.wikipedia.org/wiki/Inode) 又是什么呢 ? 就是 index-node(索引节点)的简写。 inode 是位于磁盘上的一个 256 字节的块,用于存储和该 inode 对应的文件的相关数据。这些数据包含了文件的大小、文件的所有者和所属组的用户 ID、文件模式(即访问权限)以及三个时间戳用于指定:该文件最后的访问时间、该文件的最后修改时间和该 inode 中的数据的最后修改时间。
同时,这个 inode 还包含了位置数据,指向了其所对应的文件数据在硬盘中的位置。在 Minix 和 EXT 1-3 文件系统中,这是一个数据区和块的列表。Minix 文件系统的 inode 支持 9 个数据块,包括 7 个直接数据块和 2 个间接数据块。如果你想要更深入的了解,这里有一个优秀的 PDF 详细地描述了 [Minix 文件系统结构](http://ohm.hgesser.de/sp-ss2012/Intro-MinixFS.pdf) 。同时你也可以在维基百科上对 [inode 指针结构](https://en.wikipedia.org/wiki/Inode_pointer_structure) 做一个快速了解。
### EXT
原生的 [EXT 文件系统](https://en.wikipedia.org/wiki/Extended_file_system) (意即<ruby> 扩展的 <rp> ( </rp> <rt> extended </rt> <rp> ) </rp></ruby>) 是由 [Rémy Card](https://en.wikipedia.org/wiki/R%C3%A9my_Card) 编写并于 1992 年与 Linux 一同发行。主要是为了克服 Minix 文件系统中的一些文件大小限制的问题。其中,最主要的结构变化就是文件系统中的元数据。它基于 Unix 文件系统 (UFS),其也被称为伯克利快速文件系统(FFS)。我发现只有很少一部分关于 EXT 文件系统的发行信息是可以被确证的,显然这是因为其存在着严重的问题,并且它很快地被 EXT2 文件系统取代了。
### EXT2
[EXT2 文件系统](https://en.wikipedia.org/wiki/Ext2) 就相当地成功,它在 Linux 发行版中存活了多年。它是我在 1997 年开始使用 Red Hat Linux 5.0 时接触的第一个文件系统。实际上,EXT2 文件系统有着和 EXT 文件系统基本相同的元数据结构。然而 EXT2 更高瞻远瞩,因为其元数据结构之间留有很多供将来使用的磁盘空间。
和 Minix 类似,EXT2 也有一个[引导扇区](https://en.wikipedia.org/wiki/Boot_sector) ,它是其所安装的硬盘上的第一个扇区。它包含了非常小的引导记录和一个分区表。接着引导扇区之后是一些保留的空间,它填充了引导记录和硬盘驱动器上的第一个分区(通常位于下一个柱面)之间的空间。[GRUB2](https://opensource.com/article/17/2/linux-boot-and-startup) - 也可能是 GRUB1 - 将此空间用于其部分引导代码。
每个 EXT2 分区中的空间被分为<ruby> 柱面组 <rp> ( </rp> <rt> cylinder group </rt> <rp> ) </rp></ruby>,它允许更精细地管理数据空间。 根据我的经验,每一组大小通常约为 8MB。 下面的图 1 显示了一个柱面组的基本结构。 柱面中的数据分配单元是块,通常大小为 4K。

*图 1: EXT 文件系统中的柱面组的结构*
柱面组中的第一个块是一个<ruby> 超级块 <rp> ( </rp> <rt> superblock </rt> <rp> ) </rp></ruby>,它包含了元数据,定义了其它文件系统的结构并将其定位于物理硬盘的具体分区上。分区中有一些柱面组还会有备用超级块,但并不是所有的柱面组都有。我们可以使用例如 `dd` 等磁盘工具来拷贝备用超级块的内容到主超级块上,以达到修复损坏的超级块的目的。虽然这种情况不会经常发生,但是在几年前我的一个超级块损坏了,我就是用这种方法来修复的。幸好,我很有先见之明地使用了 `dumpe2fs` 命令来备份了我的系统上的分区描述符信息。
以下是 `dumpe2fs` 命令的一部分输出。这部分输出主要是超级块上包含的一些元数据,同时也是文件系统上的前两个柱面组的数据。
```
# dumpe2fs /dev/sda1
Filesystem volume name: boot
Last mounted on: /boot
Filesystem UUID: 79fc5ed8-5bbc-4dfe-8359-b7b36be6eed3
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122160
Block count: 488192
Reserved block count: 24409
Free blocks: 376512
Free inodes: 121690
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 238
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8144
Inode blocks per group: 509
Flex block group size: 16
Filesystem created: Tue Feb 7 09:33:34 2017
Last mount time: Sat Apr 29 21:42:01 2017
Last write time: Sat Apr 29 21:42:01 2017
Mount count: 25
Maximum mount count: -1
Last checked: Tue Feb 7 09:33:34 2017
Check interval: 0 (<none>)
Lifetime writes: 594 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: c780bac9-d4bf-4f35-b695-0fe35e8d2d60
Journal backup: inode blocks
Journal features: journal_64bit
Journal size: 32M
Journal length: 8192
Journal sequence: 0x00000213
Journal start: 0
Group 0: (Blocks 0-32767)
Primary superblock at 0, Group descriptors at 1-1
Reserved GDT blocks at 2-239
Block bitmap at 240 (+240)
Inode bitmap at 255 (+255)
Inode table at 270-778 (+270)
24839 free blocks, 7676 free inodes, 16 directories
Free blocks: 7929-32767
Free inodes: 440, 470-8144
Group 1: (Blocks 32768-65535)
Backup superblock at 32768, Group descriptors at 32769-32769
Reserved GDT blocks at 32770-33007
Block bitmap at 241 (bg #0 + 241)
Inode bitmap at 256 (bg #0 + 256)
Inode table at 779-1287 (bg #0 + 779)
8668 free blocks, 8142 free inodes, 2 directories
Free blocks: 33008-33283, 33332-33791, 33974-33975, 34023-34092, 34094-34104, 34526-34687, 34706-34723, 34817-35374, 35421-35844, 35935-36355, 36357-36863, 38912-39935, 39940-40570, 42620-42623, 42655, 42674-42687, 42721-42751, 42798-42815, 42847, 42875-42879, 42918-42943, 42975, 43000-43007, 43519, 43559-44031, 44042-44543, 44545-45055, 45116-45567, 45601-45631, 45658-45663, 45689-45695, 45736-45759, 45802-45823, 45857-45887, 45919, 45950-45951, 45972-45983, 46014-46015, 46057-46079, 46112-46591, 46921-47103, 49152-49395, 50027-50355, 52237-52255, 52285-52287, 52323-52351, 52383, 52450-52479, 52518-52543, 52584-52607, 52652-52671, 52734-52735, 52743-53247
Free inodes: 8147-16288
Group 2: (Blocks 65536-98303)
Block bitmap at 242 (bg #0 + 242)
Inode bitmap at 257 (bg #0 + 257)
Inode table at 1288-1796 (bg #0 + 1288)
6326 free blocks, 8144 free inodes, 0 directories
Free blocks: 67042-67583, 72201-72994, 80185-80349, 81191-81919, 90112-94207
Free inodes: 16289-24432
Group 3: (Blocks 98304-131071)
<截断>
```
每一个柱面组都有自己的 inode 位图,用于判定该柱面组中的哪些 inode 是使用中的而哪些又是未被使用的。每一个柱面组的 inode 都有它们自己的空间。每一个 inode 都包含了一个文件的相关信息,包括属于该文件的数据块的位置。而块位图纪录了文件系统中的使用中和非使用中的数据块。请注意,在上面的输出中有大量关于文件系统的数据。在非常大的文件系统上,柱面组的数据可以多达数百页的长度。柱面组的元数据包括组中所有空闲数据块的列表。
EXT 文件系统实现了数据分配策略以确保产生最少的文件碎片。减少文件碎片可以提高文件系统的性能。这些策略会在下面的 EXT4 中描述到。
我所遇到的 EXT2 文件系统最大的问题是:在系统<ruby> 崩溃 <rp> ( </rp> <rt> crash </rt> <rp> ) </rp></ruby>后,`fsck` (文件系统检查)程序要花很长时间来定位并纠正文件系统中的所有不一致之处,有时需要数个小时才能修复完成。有一次,我的一台电脑在崩溃后重新启动时花费了超过 28 个小时来恢复磁盘,而且那还是在磁盘容量只有几百兆字节的情况下。
### EXT3
[EXT3 文件系统](https://en.wikipedia.org/wiki/Ext3)是应一个目标而生的,就是克服 `fsck` 程序需要完全恢复在文件更新操作期间发生的不正确关机而损坏的磁盘结构所需的大量时间。它对 EXT 文件系统的唯一新增功能就是 [日志](https://en.wikipedia.org/wiki/Journaling_file_system),它将提前记录将对文件系统执行的更改。 EXT3 的磁盘结构的其余部分与 EXT2 中的相同。
除了同先前的版本一样直接写入数据到磁盘的数据区域外,EXT3 上的日志会将文件数据随同元数据写入到磁盘上的一个指定数据区域。一旦这些(日志)数据安全地到达硬盘,它就可以几乎零丢失率地被合并或被追加到目标文件上。当这些数据被提交到磁盘上的数据区域上,这些日志就会随即更新,这样在日志中的所有数据提交之前,系统发生故障时文件系统将保持一致状态。在下次启动时,将检查文件系统的不一致性,然后将仍保留在日志中的数据提交到磁盘的数据区,以完成对目标文件的更新。
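上面描述的“先写日志、再提交、崩溃后重放”的流程可以用一个极简的 Python 模型来示意。下面的代码只是概念演示(用字典模拟磁盘数据区、用列表模拟日志),与 EXT3 的实际磁盘格式无关:

```python
# 预写日志的玩具模型:改动先记入日志,提交成功后才落到“数据区”。
disk, journal = {}, []

def journaled_write(key, value, crash_before_commit=False):
    journal.append((key, value))   # 1. 把改动意图写入日志
    if crash_before_commit:
        return                     # 模拟崩溃:数据区未被改动,保持一致
    disk[key] = value              # 2. 提交到数据区
    journal.pop()                  # 3. 改动完成,清掉日志项

def replay():
    # 下次启动时:把日志里残留的改动补齐,让文件系统回到一致状态
    while journal:
        key, value = journal.pop(0)
        disk[key] = value

journaled_write("a", 1)
journaled_write("b", 2, crash_before_commit=True)  # 写到一半“断电”
replay()                                           # 恢复时重放日志
print(disk)  # → {'a': 1, 'b': 2}
```

关键在于:无论崩溃发生在哪一步,数据区要么完全没改、要么可以靠日志补齐,绝不会停留在“半更新”状态,这正是恢复时间从数小时降到数分钟的原因。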
日志功能确实降低了数据写入性能,但是有三个可用于日志的选项,允许用户在性能和数据完整性、安全性之间进行选择。 我个人更偏向于选择安全性,因为我的环境不需要大量的磁盘写入活动。
日志功能将失败后检查硬盘驱动器所需的时间从几小时(甚至几天)减少到了几分钟。 多年来,我遇到了很多导致我的系统崩溃的问题。要详细说的话恐怕还得再写一篇文章,但这里需要说明的是大多数是我自己造成的,就比如不小心踢掉电源插头。 幸运的是,EXT 日志文件系统将启动恢复时间缩短到两三分钟。此外,自从我开始使用带日志记录的 EXT3,我从来没有遇到丢失数据的问题。
EXT3 的日志功能可以关闭,然后其功能就等同于 EXT2 文件系统了。 该日志本身仍然是存在的,只是状态为空且未使用。 只需在 `mount` 命令中使用文件系统类型参数来重新挂载即可指定为 EXT2。 你可以从命令行执行此操作,但是具体还是取决于你正在使用的文件系统,不过你也可以更改 `/etc/fstab` 文件中的类型说明符,然后重新启动。 我强烈建议不要将 EXT3 文件系统挂载为 EXT2 ,因为这会有丢失数据和增加恢复时间的潜在可能性。
EXT2 文件系统可以使用如下命令来通过日志升级到 EXT3 。
```
tune2fs -j /dev/sda1
```
`/dev/sda1` 表示驱动器和分区的标识符。同时要注意修改 `/etc/fstab` 中的文件系统类型标识符并重新挂载分区,或者重启系统以确保修改生效。
### EXT4
[EXT4 文件系统](https://en.wikipedia.org/wiki/Ext4)主要提高了性能、可靠性和容量。为了提高可靠性,它新增了元数据和日志校验和。同时为了满足各种关键任务要求,文件系统新增了纳秒级别的时间戳,并在时间戳字段中添加了两个高位来延缓时间戳的 [2038 年问题](https://en.wikipedia.org/wiki/Year_2038_problem) ,这样 EXT4 文件系统至少可用到 2446 年。
在 EXT4 中,数据分配从固定块改为<ruby> 扩展盘区 <rp> ( </rp> <rt> extent </rt> <rp> ) </rp></ruby>方式,扩展盘区由硬盘驱动器上的开始和结束位置来描述。这使得可以在单个 inode 指针条目中描述非常长的物理上连续的文件,这可以显著减少描述大文件中所有数据的位置所需的指针数。其它在 EXT4 中已经实施的分配策略可以进一步减少碎片化。
EXT4 通过将新创建的文件散布在磁盘上,使其不会像早期的 PC 文件系统一样全部聚集在磁盘起始位置,从而减少了碎片。文件分配算法尝试在柱面组中尽可能均匀地散布文件,并且当文件(由于太大)需要分段存储时,使不连续的文件扩展盘区尽可能靠近同一文件中的其他部分,以尽可能减少磁头寻道和电机旋转等待时间。当创建新文件或扩展现有文件时,使用其它策略来预先分配额外的磁盘空间。这有助于确保扩展文件时不会自动导致其分段。新文件不会紧挨这现有文件立即分配空间,这也可以防止现有文件的碎片化。
除了磁盘上数据的实际位置外,EXT4 使用诸如延迟分配的功能策略,以允许文件系统在分配空间之前收集到所有正在写入磁盘的数据,这可以提高数据空间连续的可能性。
较旧的 EXT 文件系统(如 EXT2 和 EXT3)可以作为 EXT4 进行 `mount` ,以使其性能获得较小的提升。但不幸的是,这需要关闭 EXT4 的一些重要的新功能,所以我建议不要这样做。
自 Fedora 14 以来,EXT4 一直是 Fedora 的默认文件系统。我们可以使用 Fedora 文档中描述的 [流程](https://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/ext4converting.html) 将 EXT3 文件系统升级到 EXT4,但是由于仍然存留的之前的 EXT3 元数据结构,它的性能仍将受到影响。从 EXT3 升级到 EXT4 的最佳方法是备份目标文件系统分区上的所有数据,使用 `mkfs` 命令将空 EXT4 文件系统写入分区,然后从备份中恢复所有数据。
### Inode
之前介绍过的 inode 是 EXT 文件系统中的元数据的关键组件。 图 2 显示了 inode 和存储在硬盘驱动器上的数据之间的关系。 该图是单个文件的目录和 inode,在这种情况下,可能会产生高度碎片化。 EXT 文件系统可以主动地减少碎片,所以不太可能会看到有这么多间接数据块或扩展盘区的文件。 实际上,你在下面将会看到,EXT 文件系统中的碎片非常低,所以大多数 inode 只使用一个或两个直接数据指针,而不使用间接指针。

*图 2 :inode 存储有关每个文件的信息,并使 EXT 文件系统能够查找属于它的所有数据。*
inode 不包含文件的名称。通过目录项访问文件,目录项本身就是文件的名称,并包含指向 inode 的指针。该指针的值是 inode 号。文件系统中的每个 inode 都具有唯一的 ID 号,但同一台计算机上的其它文件系统(甚至是相同的硬盘驱动器)中的 inode 可以具有相同的 inode 号。这对 [硬链接](https://en.wikipedia.org/wiki/Hard_link) 存在影响,但是这个讨论超出了本文的范围。
inode 包含有关该文件的元数据,包括其类型和权限以及其大小。 inode 还包含 15 个指针的空位,用于描述柱面组数据部分中数据块或扩展盘区的位置和长度。12 个指针提供对数据扩展盘区的直接访问,应该足以满足大多数文件的需求。然而,对于具有明显分段的文件,需要以间接<ruby> 节点 <rp> ( </rp> <rt> node </rt> <rp> ) </rp></ruby>的形式提供一些额外的容量——从技术上讲,这些不是真正的“inode”,所以为了方便起见我在这里使用这个术语“<ruby> 节点 <rp> ( </rp> <rt> node </rt> <rp> ) </rp></ruby>”。
间接节点是文件系统中的正常数据块,它仅用于描述数据而不用于存储元数据,因此可以支持超过 15 个条目。例如,4K 的块大小可以容纳 1024 个 4 字节的指针,允许单个文件有 **12(直接)+ 1024(间接)= 1036** 个扩展盘区。还支持双重和三重间接节点,但我们大多数人不太可能遇到需要那么多扩展盘区的文件。
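指针数量背后的算术可以快速验证一下。下面的 Python 片段按 EXT2/3 使用的 4 字节块号和 4 KiB 块大小来推算(数值为通用推算,并非对某个具体分区的测量):

```python
# 一个间接块能容纳的块指针数,以及由此推出的扩展盘区/寻址上限。
block_size = 4096          # 4 KiB 块
ptr_size = 4               # 32 位块号
direct = 12                # inode 内的直接指针

per_indirect = block_size // ptr_size       # 每个间接块 1024 个指针
print(direct + per_indirect)                # → 1036(一级间接)

# 加上双重和三重间接后,可寻址的数据总量:
total_blocks = direct + per_indirect + per_indirect**2 + per_indirect**3
print(total_blocks * block_size // 2**40)   # → 4,即约 4 TiB 的寻址上限
```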
### 数据碎片
对于许多较旧的 PC 文件系统,如 FAT(及其所有变体)和 NTFS,碎片一直是导致磁盘性能下降的重大问题。 碎片整理本身就成为一个行业,有各种品牌的整理软件,其效果范围从非常有效到仅仅是微乎其微。
Linux 的扩展文件系统使用数据分配策略,有助于最小化硬盘驱动器上的文件碎片,并在发生碎片时减少碎片的影响。 你可以使用 EXT 文件系统上的 `fsck` 命令检查整个文件系统的碎片。 以下示例检查我的主工作站的家目录,只有 1.5% 的碎片。 确保使用 `-n` 参数,因为它会防止 `fsck` 对扫描的文件系统采取任何操作。
```
fsck -fn /dev/mapper/vg_01-home
```
我曾经进行过一些理论计算,以确定磁盘碎片整理是否会产生任何明显的性能提升。 我做了一些假设条件,我使用的磁盘性能数据来自一个新的 300GB 的西部数字硬盘驱动器,具有 2.0ms 的轨到轨寻道时间。 此示例中的文件数是我在计算的当天的文件系统中存在的实际数。 我假设每天有相当大量的碎片化文件(约 20%)会被用到。
| **全部文件** | **271,794** |
| --- | --- |
| 碎片率 % | 5.00% |
| 不连续数 | 13,590 |
| | |
| % 每天用到的碎片化文件 | 20% (假设) |
| 额外寻道次数 | 2,718 |
| 平均寻道时间 | 10.90 ms |
| 每天全部的额外寻道时间 | 29.63 sec |
| | 0.49 min |
| | |
| 轨到轨寻道时间 | 2.00 ms |
| 每天全部的额外寻道时间 | 5.44 sec |
| | 0.091 min |
*表 1: 碎片对磁盘性能的理论影响*
我对每天的全部的额外寻道时间进行了两次计算,一次是轨到轨寻道时间,这是由于 EXT 文件分配策略而导致大多数文件最可能的情况,一个是平均寻道时间,我假设这是一个合理的最坏情况。
从表 1 可以看出,对绝大多数应用程序而言,碎片化甚至对性能适中的硬盘驱动器上的现代 EXT 文件系统的影响是微乎其微的。您可以将您的环境中的数字插入到您自己的类似电子表格中,以了解你对性能影响的期望。这种类型的计算不一定能够代表实际的性能,但它可以提供一些对碎片化及其对系统的理论影响的洞察。
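表 1 的计算过程可以用下面这个简短的 awk 脚本复现。脚本中的数值取自表 1,把变量换成你自己环境中的数字即可得到类似的估算:

```shell
# 复现表 1 的计算;将变量替换为你自己环境中的数值即可
awk 'BEGIN {
    total_files = 271794    # 全部文件数
    frag_rate   = 0.05      # 碎片率
    avg_seek    = 10.90     # 平均寻道时间 (ms)
    t2t_seek    = 2.00      # 轨到轨寻道时间 (ms)

    disc  = int(total_files * frag_rate + 0.5)  # 不连续数
    seeks = disc * 20 / 100                     # 假设每天用到 20% 的碎片化文件

    printf "不连续数: %d\n", disc
    printf "额外寻道次数: %d\n", seeks
    printf "按平均寻道时间: 每天额外 %.2f 秒\n", seeks * avg_seek / 1000
    printf "按轨到轨寻道时间: 每天额外 %.2f 秒\n", seeks * t2t_seek / 1000
}'
```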
我的大部分分区的碎片率都在 1.5% 左右或 1.6%,我有一个分区有 3.3% 的碎片,但是这是一个大约 128GB 文件系统,具有不到 100 个非常大的 ISO 映像文件;多年来,我扩展过该分区几次,因为它已经太满了。
这并不是说某些应用环境就不需要更低碎片化程度的保证。有经验、有知识的管理员可以小心地调整 EXT 文件系统的参数,以适应特定类型的工作负载。这项工作可以在创建文件系统时完成,也可以稍后使用 `tune2fs` 命令完成。每一次调整的结果都应该进行测试、精心记录和分析,以确保目标环境的最佳性能。在最坏的情况下,如果性能不能提高到期望的水平,则其它文件系统类型可能更适合特定的工作负载。还要记住,在单个主机系统上混用文件系统类型以匹配各个文件系统上的不同负载是很常见的。
由于大多数 EXT 文件系统的碎片数量较少,因此无需进行碎片整理。无论如何,目前还没有用于 EXT 文件系统的安全的碎片整理工具。有几个工具可以让你检查单个文件的碎片程度,或者文件系统中剩余可用空间的碎片程度。有一个工具 `e4defrag`,它可以在剩余可用空间允许的范围内对文件、目录或文件系统进行碎片整理。顾名思义,它只适用于 EXT4 文件系统中的文件,并且它还有一些其它的限制。
如果有必要在 EXT 文件系统上执行完整的碎片整理,则只有一种方法能够可靠地工作。你必须把要整理的文件系统中的所有文件都移走,并确保在它们被安全地复制到其它位置之后再删除原文件。如果可能,你可以趁机增加文件系统的大小,以帮助减少将来的碎片。然后把文件复制回目标文件系统。但即使这样,也不能保证所有文件都被完全去碎片化。
### 总结
EXT 文件系统在许多 Linux 发行版上作为默认文件系统已经超过二十年了。它们以最少的维护代价提供了稳定性、大容量、可靠性和性能。我尝试过一些其它的文件系统,但最终都还是回归到 EXT。我工作过的每一个使用 Linux 的地方都使用了 EXT 文件系统,并且我发现它们适用于所有主流负载。毫无疑问,大部分 Linux 系统都应该使用 EXT4 文件系统,除非有令人信服的理由使用其它文件系统。
---
作者简介:
David Both - David Both 是一名 Linux 与开源的贡献者,目前居住在北卡罗来纳州的罗利。他从事 IT 行业 40 余年,并在就职 IBM 期间从事了 20 余年的 OS/2 培训。在 IBM 就职期间,他在 1981 年为最早的 IBM PC 编写了一个培训课程。他曾为红帽教授 RHCE 课程,也曾在 MCI Worldcom、思科和北卡罗来纳州政府工作。他使用 Linux 和开源软件工作了近 20 年。
---
via: <https://opensource.com/article/17/5/introduction-ext4-filesystem>
作者:[David Both](https://opensource.com/users/dboth) 译者:[chenxinlong](https://github.com/chenxinlong) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In previous articles about Linux filesystems, I wrote [an introduction to Linux filesystems](https://opensource.com/life/16/10/introduction-linux-filesystems) and about some higher-level concepts such as [everything is a file](https://opensource.com/life/15/9/everything-is-a-file). I want to go into more detail about the specifics of the EXT filesystems, but first, let's answer the question, "What is a filesystem?" A filesystem is all of the following:
**Data storage:**The primary function of any filesystem is to be a structured place to store and retrieve data.**Namespace:**A naming and organizational methodology that provides rules for naming and structuring data.**Security model:**A scheme for defining access rights.**API:**System function calls to manipulate filesystem objects like directories and files.**Implementation:**The software to implement the above.
This article concentrates on the first item in the list and explores the metadata structures that provide the logical framework for data storage in an EXT filesystem.
## EXT filesystem history
Although written for Linux, the EXT filesystem has its roots in the Minix operating system and the Minix filesystem, which predate Linux by about five years, being first released in 1987. Understanding the EXT4 filesystem is much easier if we look at the history and technical evolution of the EXT filesystem family from its Minix roots.
## Minix
When writing the original Linux kernel, Linus Torvalds needed a filesystem but didn't want to write one then. So he simply included the [Minix filesystem](https://en.wikipedia.org/wiki/MINIX_file_system), which had been written by [Andrew S. Tanenbaum](https://en.wikipedia.org/wiki/Andrew_S._Tanenbaum) and was a part of Tanenbaum's Minix operating system. [Minix](https://en.wikipedia.org/wiki/MINIX) was a Unix-like operating system written for educational purposes. Its code was freely available and appropriately licensed to allow Torvalds to include it in his first version of Linux.
Minix has the following structures, most of which are located in the partition where the filesystem is generated:
- A **boot sector** in the first sector of the hard drive on which it is installed. The boot block includes a very small boot record and a partition table.
- The first block in each partition is a **superblock** that contains the metadata that defines the other filesystem structures and locates them on the physical disk assigned to the partition.
- An **inode bitmap block**, which determines which inodes are used and which are free.
- The **inodes**, which have their own space on the disk. Each inode contains information about one file, including the locations of the data blocks, i.e., zones belonging to the file.
- A **zone bitmap** to keep track of the used and free data zones.
- A **data zone**, in which the data is actually stored.
For both types of bitmaps, one bit represents one specific data zone or one specific inode. If the bit is zero, the zone or inode is free and available for use, but if the bit is one, the data zone or inode is in use.
What is an [inode](https://en.wikipedia.org/wiki/Inode)? Short for index-node, an inode is a 256-byte block on the disk and stores data about the file. This includes the file's size; the user IDs of the file's user and group owners; the file mode (i.e., the access permissions); and three timestamps specifying the time and date that: the file was last accessed, last modified, and the data in the inode was last modified.
The inode also contains data that points to the location of the file's data on the hard drive. In Minix and the EXT1-3 filesystems, this is a list of data zones or blocks. The Minix filesystem inodes supported nine data blocks, seven direct and two indirect. If you'd like to learn more, there is an excellent PDF with a detailed description of the [Minix filesystem structure](http://ohm.hgesser.de/sp-ss2012/Intro-MinixFS.pdf) and a quick overview of the [inode pointer structure](https://en.wikipedia.org/wiki/Inode_pointer_structure) on Wikipedia.
## EXT
The original [EXT filesystem](https://en.wikipedia.org/wiki/Extended_file_system) (Extended) was written by [Rémy Card](https://en.wikipedia.org/wiki/Rémy_Card) and released with Linux in 1992 to overcome some size limitations of the Minix filesystem. The primary structural changes were to the metadata of the filesystem, which was based on the Unix filesystem (UFS), which is also known as the Berkeley Fast File System (FFS). I found very little published information about the EXT filesystem that can be verified, apparently because it had significant problems and was quickly superseded by the EXT2 filesystem.
## EXT2
The [EXT2 filesystem](https://en.wikipedia.org/wiki/Ext2) was quite successful. It was used in Linux distributions for many years, and it was the first filesystem I encountered when I started using Red Hat Linux 5.0 back in about 1997. The EXT2 filesystem has essentially the same metadata structures as the EXT filesystem, however EXT2 is more forward-looking, in that a lot of disk space is left between the metadata structures for future use.
Like Minix, EXT2 has a [boot sector](https://en.wikipedia.org/wiki/Boot_sector) in the first sector of the hard drive on which it is installed, which includes a very small boot record and a partition table. Then there is some reserved space after the boot sector, which spans the space between the boot record and the first partition on the hard drive that is usually on the next cylinder boundary. [GRUB2](https://opensource.com/article/17/2/linux-boot-and-startup)—and possibly GRUB1—uses this space for part of its boot code.
The space in each EXT2 partition is divided into cylinder groups that allow for more granular management of the data space. In my experience, the group size usually amounts to about 8MB. Figure 1, below, shows the basic structure of a cylinder group. The data allocation unit in a cylinder is the block, which is usually 4K in size.
Figure 1: The structure of a cylinder group in the EXT filesystems
The first block in the cylinder group is a superblock, which contains the metadata that defines the other filesystem structures and locates them on the physical disk. Some of the additional groups in the partition will have backup superblocks, but not all. A damaged superblock can be replaced by using a disk utility such as **dd** to copy the contents of a backup superblock to the primary superblock. It does not happen often, but once, many years ago, I had a damaged superblock, and I was able to restore its contents using one of the backup superblocks. Fortunately, I had been foresighted and used the **dumpe2fs** command to dump the descriptor information of the partitions on my system.
Following is the partial output from the **dumpe2fs** command. It shows the metadata contained in the superblock, as well as data about each of the first two cylinder groups in the filesystem.
```
# dumpe2fs /dev/sda1
Filesystem volume name: boot
Last mounted on: /boot
Filesystem UUID: 79fc5ed8-5bbc-4dfe-8359-b7b36be6eed3
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122160
Block count: 488192
Reserved block count: 24409
Free blocks: 376512
Free inodes: 121690
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 238
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8144
Inode blocks per group: 509
Flex block group size: 16
Filesystem created: Tue Feb 7 09:33:34 2017
Last mount time: Sat Apr 29 21:42:01 2017
Last write time: Sat Apr 29 21:42:01 2017
Mount count: 25
Maximum mount count: -1
Last checked: Tue Feb 7 09:33:34 2017
Check interval: 0 (<none>)
Lifetime writes: 594 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: c780bac9-d4bf-4f35-b695-0fe35e8d2d60
Journal backup: inode blocks
Journal features: journal_64bit
Journal size: 32M
Journal length: 8192
Journal sequence: 0x00000213
Journal start: 0
Group 0: (Blocks 0-32767)
Primary superblock at 0, Group descriptors at 1-1
Reserved GDT blocks at 2-239
Block bitmap at 240 (+240)
Inode bitmap at 255 (+255)
Inode table at 270-778 (+270)
24839 free blocks, 7676 free inodes, 16 directories
Free blocks: 7929-32767
Free inodes: 440, 470-8144
Group 1: (Blocks 32768-65535)
Backup superblock at 32768, Group descriptors at 32769-32769
Reserved GDT blocks at 32770-33007
Block bitmap at 241 (bg #0 + 241)
Inode bitmap at 256 (bg #0 + 256)
Inode table at 779-1287 (bg #0 + 779)
8668 free blocks, 8142 free inodes, 2 directories
Free blocks: 33008-33283, 33332-33791, 33974-33975, 34023-34092, 34094-34104, 34526-34687, 34706-34723, 34817-35374, 35421-35844, 35935-36355, 36357-36863, 38912-39935, 39940-40570, 42620-42623, 42655, 42674-42687, 42721-42751, 42798-42815, 42847, 42875-42879, 42918-42943, 42975, 43000-43007, 43519, 43559-44031, 44042-44543, 44545-45055, 45116-45567, 45601-45631, 45658-45663, 45689-45695, 45736-45759, 45802-45823, 45857-45887, 45919, 45950-45951, 45972-45983, 46014-46015, 46057-46079, 46112-46591, 46921-47103, 49152-49395, 50027-50355, 52237-52255, 52285-52287, 52323-52351, 52383, 52450-52479, 52518-52543, 52584-52607, 52652-52671, 52734-52735, 52743-53247
Free inodes: 8147-16288
Group 2: (Blocks 65536-98303)
Block bitmap at 242 (bg #0 + 242)
Inode bitmap at 257 (bg #0 + 257)
Inode table at 1288-1796 (bg #0 + 1288)
6326 free blocks, 8144 free inodes, 0 directories
Free blocks: 67042-67583, 72201-72994, 80185-80349, 81191-81919, 90112-94207
Free inodes: 16289-24432
Group 3: (Blocks 98304-131071)
<snip>
```
Each cylinder group has its own inode bitmap that is used to determine which inodes are used and which are free within that group. The inodes have their own space in each group. Each inode contains information about one file, including the locations of the data blocks belonging to the file. The block bitmap keeps track of the used and free data blocks within the filesystem. Notice that there is a great deal of data about the filesystem in the output shown above. On very large filesystems the group data can run to hundreds of pages in length. The group metadata includes a listing of all of the free data blocks in the group.
The EXT filesystem implemented data-allocation strategies that ensured minimal file fragmentation. Reducing fragmentation improved filesystem performance. Those strategies are described below, in the section on EXT4.
The biggest problem with the EXT2 filesystem, which I encountered on some occasions, was that it could take many hours to recover after a crash because the **fsck** (file system check) program took a very long time to locate and correct any inconsistencies in the filesystem. It once took over 28 hours on one of my computers to fully recover a disk upon reboot after a crash—and that was when disks were measured in the low hundreds of megabytes in size.
## EXT3
The [EXT3 filesystem](https://en.wikipedia.org/wiki/Ext3) had the singular objective of overcoming the massive amounts of time that the **fsck** program required to fully recover a disk structure damaged by an improper shutdown that occurred during a file-update operation. The only addition to the EXT filesystem was the [journal](https://en.wikipedia.org/wiki/Journaling_file_system), which records in advance the changes that will be performed to the filesystem. The rest of the disk structure is the same as it was in EXT2.
Instead of writing data to the disk's data areas directly, as in previous versions, the journal in EXT3 writes file data, along with its metadata, to a specified area on the disk. Once the data is safely on the hard drive, it can be merged in or appended to the target file with almost zero chance of losing data. As this data is committed to the data area of the disk, the journal is updated so that the filesystem will remain in a consistent state in the event of a system failure before all the data in the journal is committed. On the next boot, the filesystem will be checked for inconsistencies, and data remaining in the journal will then be committed to the data areas of the disk to complete the updates to the target file.
Journaling does reduce data-write performance, however there are three options available for the journal that allow the user to choose between performance and data integrity and safety. My personal preference is on the side of safety because my environments do not require heavy disk-write activity.
The journaling function reduces the time required to check the hard drive for inconsistencies after a failure from hours (or even days) to mere minutes, at the most. I have had many issues over the years that have crashed my systems. The details could fill another article, but suffice it to say that most were self-inflicted, like kicking out a power plug. Fortunately, the EXT journaling filesystems have reduced that bootup recovery time to two or three minutes. In addition, I have never had a problem with lost data since I started using EXT3 with journaling.
The journaling feature of EXT3 can be turned off and it then functions as an EXT2 filesystem. The journal itself still exists, empty and unused. Simply remount the partition with the mount command using the type parameter to specify EXT2. You may be able to do this from the command line, depending upon which filesystem you are working with, but you can change the type specifier in the **/etc/fstab** file and then reboot. I strongly recommend against mounting an EXT3 filesystem as EXT2 because of the additional potential for lost data and extended recovery times.
An existing EXT2 filesystem can be upgraded to EXT3 with the addition of a journal using the following command.
```
tune2fs -j /dev/sda1
```
Where **/dev/sda1** is the drive and partition identifier. Be sure to change the file type specifier in **/etc/fstab** and remount the partition or reboot the system to have the change take effect.
## EXT4
The [EXT4 filesystem](https://en.wikipedia.org/wiki/Ext4) primarily improves performance, reliability, and capacity. To improve reliability, metadata and journal checksums were added. To meet various mission-critical requirements, the filesystem timestamps were improved with the addition of intervals down to nanoseconds. The addition of two high-order bits in the timestamp field defers the [Year 2038 problem](https://en.wikipedia.org/wiki/Year_2038_problem) until 2446—for EXT4 filesystems, at least.
In EXT4, data allocation was changed from fixed blocks to extents. An extent is described by its starting and ending place on the hard drive. This makes it possible to describe very long, physically contiguous files in a single inode pointer entry, which can significantly reduce the number of pointers required to describe the location of all the data in larger files. Other allocation strategies have been implemented in EXT4 to further reduce fragmentation.
EXT4 reduces fragmentation by scattering newly created files across the disk so that they are not bunched up in one location at the beginning of the disk, as many early PC filesystems did. The file-allocation algorithms attempt to spread the files as evenly as possible among the cylinder groups and, when fragmentation is necessary, to keep the discontinuous file extents as close as possible to others in the same file to minimize head seek and rotational latency as much as possible. Additional strategies are used to pre-allocate extra disk space when a new file is created or when an existing file is extended. This helps to ensure that extending the file will not automatically result in its becoming fragmented. New files are never allocated immediately after existing files, which also prevents fragmentation of the existing files.
Aside from the actual location of the data on the disk, EXT4 uses functional strategies, such as delayed allocation, to allow the filesystem to collect all the data being written to the disk before allocating space to it. This can improve the likelihood that the data space will be contiguous.
Older EXT filesystems, such as EXT2 and EXT3, can be mounted as EXT4 to make some minor performance gains. Unfortunately, this requires turning off some of the important new features of EXT4, so I recommend against this.
EXT4 has been the default filesystem for Fedora since Fedora 14. An EXT3 filesystem can be upgraded to EXT4 using the [procedure ](https://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/ext4converting.html)described in the Fedora documentation, however its performance will still suffer due to residual EXT3 metadata structures. The best method for upgrading to EXT4 from EXT3 is to back up all the data on the target filesystem partition, use the **mkfs** command to write an empty EXT4 filesystem to the partition, and then restore all the data from the backup.
## Inode
The inode, described previously, is a key component of the metadata in EXT filesystems. Figure 2 shows the relationship between the inode and the data stored on the hard drive. This diagram is the directory and inode for a single file which, in this case, may be highly fragmented. The EXT filesystems work actively to reduce fragmentation, so it is very unlikely you will ever see a file with this many indirect data blocks or extents. In fact, as you will see below, fragmentation is extremely low in EXT filesystems, so most inodes will use only one or two direct data pointers and none of the indirect pointers.
Figure 2: The inode stores information about each file and enables the EXT filesystem to locate all data belonging to it.
The inode does not contain the name of the file. Access to a file is via the directory entry, which itself is the name of the file and contains a pointer to the inode. The value of that pointer is the inode number. Each inode in a filesystem has a unique ID number, but inodes in other filesystems on the same computer (and even the same hard drive) can have the same inode number. This has implications for [links](https://en.wikipedia.org/wiki/Hard_link), and this discussion is beyond the scope of this article.
The inode contains the metadata about the file, including its type and permissions as well as its size. The inode also contains space for 15 pointers that describe the location and length of data blocks or extents in the data portion of the cylinder group. Twelve of the pointers provide direct access to the data extents and should be sufficient to handle most files. However, for files that have significant fragmentation, it becomes necessary to have some additional capabilities in the form of indirect nodes. Technically these are not really inodes, so I use the term "node" here for convenience.
An indirect node is a normal data block in the filesystem that is used only for describing data and not for storage of metadata, thus more than 15 entries can be supported. For example, a block size of 4K can support 512 4-byte indirect nodes, allowing **12 (direct) + 512 (indirect) = 524** extents for a single file. Double and triple indirect node support is also supported, but most of us are unlikely to encounter files requiring that many extents.
## Data fragmentation
For many older PC filesystems, such as FAT (and all its variants) and NTFS, fragmentation has been a significant problem resulting in degraded disk performance. Defragmentation became an industry in itself with different brands of defragmentation software that ranged from very effective to only marginally so.
Linux's extended filesystems use data-allocation strategies that help to minimize fragmentation of files on the hard drive and reduce the effects of fragmentation when it does occur. You can use the **fsck** command on EXT filesystems to check the total filesystem fragmentation. The following example checks the home directory of my main workstation, which was only 1.5% fragmented. Be sure to use the **-n** parameter, because it prevents **fsck** from taking any action on the scanned filesystem.
```
fsck -fn /dev/mapper/vg_01-home
```
I once performed some theoretical calculations to determine whether disk defragmentation might result in any noticeable performance improvement. While I did make some assumptions, the disk performance data I used were from a new 300GB, Western Digital hard drive with a 2.0ms track-to-track seek time. The number of files in this example was the actual number that existed in the filesystem on the day I did the calculation. I did assume that a fairly large amount of the fragmented files (20%) would be touched each day.
| **Total files** | **271,794** |
| --- | --- |
| % fragmentation | 5.00% |
| Discontinuities | 13,590 |
| | |
| % fragmented files touched per day | 20% (assume) |
| Number of additional seeks | 2,718 |
| Average seek time | 10.90 ms |
| Total additional seek time per day | 29.63 sec |
| | 0.49 min |
| | |
| Track-to-track seek time | 2.00 ms |
| Total additional seek time per day | 5.44 sec |
| | 0.091 min |
Table 1: The theoretical effects of fragmentation on disk performance
I have done two calculations for the total additional seek time per day, one based on the track-to-track seek time, which is the more likely scenario for most files due to the EXT file allocation strategies, and one for the average seek time, which I assumed would make a fair worst-case scenario.
As you can see from Table 1, the impact of fragmentation on a modern EXT filesystem with a hard drive of even modest performance would be minimal and negligible for the vast majority of applications. You can plug the numbers from your environment into your own similar spreadsheet to see what you might expect in the way of performance impact. This type of calculation most likely will not represent actual performance, but it can provide a bit of insight into fragmentation and its theoretical impact on a system.
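As a rough sketch, the arithmetic behind Table 1 can be reproduced with a short awk script. The values below are taken from Table 1; substitute the numbers from your own environment to get a similar estimate:

```shell
# Reproduce the Table 1 calculation; substitute your own numbers as needed
awk 'BEGIN {
    total_files = 271794    # total files
    frag_rate   = 0.05      # % fragmentation
    avg_seek    = 10.90     # average seek time (ms)
    t2t_seek    = 2.00      # track-to-track seek time (ms)

    disc  = int(total_files * frag_rate + 0.5)  # discontinuities
    seeks = disc * 20 / 100                     # 20% of fragmented files touched per day

    printf "Discontinuities: %d\n", disc
    printf "Additional seeks per day: %d\n", seeks
    printf "Extra seek time (average seek): %.2f sec/day\n", seeks * avg_seek / 1000
    printf "Extra seek time (track-to-track): %.2f sec/day\n", seeks * t2t_seek / 1000
}'
```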
Most of my partitions are around 1.5% or 1.6% fragmented; I do have one that is 3.3% fragmented but that is a large, 128GB filesystem with fewer than 100 very large ISO image files; I've had to expand the partition several times over the years as it got too full.
That is not to say that some application environments don't require greater assurance of even less fragmentation. The EXT filesystem can be tuned with care by a knowledgeable admin who can adjust the parameters to compensate for specific workload types. This can be done when the filesystem is created or later using the **tune2fs** command. The results of each tuning change should be tested, meticulously recorded, and analyzed to ensure optimum performance for the target environment. In the worst case, where performance cannot be improved to desired levels, other filesystem types are available that may be more suitable for a particular workload. And remember that it is common to mix filesystem types on a single host system to match the load placed on each filesystem.
Due to the low amount of fragmentation on most EXT filesystems, it is not necessary to defragment. In any event, there is no safe defragmentation tool for EXT filesystems. There are a few tools that allow you to check the fragmentation of an individual file or the fragmentation of the remaining free space in a filesystem. There is one tool, **e4defrag**, which will defragment a file, directory, or filesystem as much as the remaining free space will allow. As its name implies, it only works on files in an EXT4 filesystem, and it does have some limitations.
If it becomes necessary to perform a complete defragmentation on an EXT filesystem, there is only one method that will work reliably. You must move all the files from the filesystem to be defragmented, ensuring that they are deleted after being safely copied to another location. If possible, you could then increase the size of the filesystem to help reduce future fragmentation. Then copy the files back onto the target filesystem. Even this does not guarantee that all the files will be completely defragmented.
## Conclusions
The EXT filesystems have been the default for many Linux distributions for more than 20 years. They offer stability, high capacity, reliability, and performance while requiring minimal maintenance. I have tried other filesystems but always return to EXT. Every place I have worked with Linux has used the EXT filesystems and found them suitable for all the mainstream loads used on them. Without a doubt, the EXT4 filesystem should be used for most Linux systems unless there is a compelling reason to use another filesystem.
|
8,686 | 游戏版 Linux :Ubuntu GamePack | https://www.linux.org/threads/ubuntu-gamepack.4559/ | 2017-07-11T18:20:00 | [
"游戏",
"GamePack"
] | https://linux.cn/article-8686-1.html | 
很多 Linux 爱好者喜欢用他们的 Linux 系统玩游戏,看起来似乎并不需要一个可以玩游戏的操作系统。UALinux 是一家推广使用 GNU/Linux 的乌克兰公司。UALinux 开发了一个 Ubuntu 版本填补了这一空白,并把这个基于 Ubuntu 16.04 的操作系统(OS)命名为 Ubuntu GamePack。
### 内容
这个系统的游戏内容现在已经相当丰富,该公司宣称可以访问超过 22,381 款游戏。
这个 GamePack 包括 Lutris 和 Steam 两部分,允许您访问发行版厂商提供的特定游戏服务。
对于基于 Windows 的游戏,可以用 PlayOnLinux,WINE 和 CrossOver 转换到 Linux 上运行。
对于 DOS 游戏,您可以在 DosBox 中运行游戏,这是一个 Linux 的 DOS 模拟器。
也安装了 Sparky APTus Gamer ,可以访问众多主机游戏模拟器。 模拟器包括:
* AdvanceMENU - AdvanceMAME、 AdvanceMESS、 MAME、 MESS、 xmame、 Raine 以及其他的模拟器的前端
* Atari800 - Atari 8 位系统、XE 游戏系统和 Atari 5200 超级系统的模拟器
* DeSmuME - 任天堂 DS 模拟器
* Desura - 支持 Windows、Linux 和 OS X 系统的数字化分发平台 - 在线安装器
* DOSBox - 支持 BeOS、Linux、Mac X、OS2 和 Windows 的 DOS 模拟器
* DOSEMU - 支持 Linux 的 DOS 模拟器
* ePSXe - 增强的 PSX 模拟器
* FCEUX - 任天堂娱乐系统(NES)、红白机(Famicom)和红白机磁盘系统(FDS)模拟器(仿真器)
* FS-UAE - 跨平台的 Amiga 模拟器
* GNOME Video Arcade - 简化的 MAME 前端
* Hatari - 支持 Linux 和其他系统的 Atari ST、STE、TT 和 Falcon 模拟器(仿真器)
* Higan - 任天堂 SNES、NES、Gameboy、Gameboy Color 和 Gameboy Advance 的模拟器
* Kega Fusion - 世嘉 SG/SC/SF、Master System(主系统)、Game Gear、Genesis/Megadrive、SVP、Pico、SegaCD/MegaCD 模拟器
* MAME - 忠实重现了许多街机效果的硬件模拟器
* Mednafen - Atari Lynx、GameBoy、NES、SNES、PC-FX、世嘉、索尼游戏站等系统的模拟器
* MESS - 各种主机和计算机游戏的模拟器
* Nestopia - 任天堂娱乐系统/红白机模拟器
* PCSX - 索尼游戏站模拟器
* PlayOnLinux - Wine 前端
* PPSSPP - PPSSPP 是支持 Windows、MacOS、Linux 和 Android 的开源 PSP 仿真器
* Steam - Steam 软件分发服务的启动器 - 在线安装程序
* Stella -用于 SDL 和 X Window 系统的 Atari 2600 仿真器
* VisualBoyAdvance - 全功能 Game Boy Advance 的模拟器
* Virtual Jaguar - 用于 Atari 那臭名昭著的 Jaguar 主机游戏的跨平台模拟器
* Wine - Windows 二进制在 Linux 中运行
* Winetricks - 一个以 POSIX shell 脚本编写的 WINE 软件包管理器,能够很容易地安装一些 Windows 软件
* Yabause - 世嘉土星 32 位游戏机模拟器
* ZSNES - 超级任天堂娱乐系统模拟器
GamePack 还包括被一些游戏所必须的 Oracle java 和 Adobe Flash。
如果这是一个你感兴趣的操作系统,请继续阅读,看看如何下载它。
### 下载
下载此操作系统镜像的主要地方是 UALinux 。其下载链接是: <https://ualinux.com/en/download/category/25-ubuntu-gamepack>。由于此链接来自国外,所以下载速度很慢。另一种选择是利用种子文件下载此操作系统。如果你没有种子下载程序,你可以下载“Transmission”。有了种子下载程序后,你可以通过 [https://zooqle.com/ubuntu-gamepack-16-04-i386-amd64-январь-2017-pc-vkn99.html](https://zooqle.com/ubuntu-gamepack-16-04-i386-amd64-%D1%8F%D0%BD%D0%B2%D0%B0%D1%80%D1%8C-2017-pc-vkn99.html)下载。这个种子文件下载可以下载 64 位和 32 位 的 ISO 镜像文件。
(下载的)文件大小取决于您需要的架构。64 位操作系统 ISO 镜像文件大小是 2.27 GB,而 32 位的操作系统 ISO 镜像文件大小是 2.13 GB。 如果下载了你所用的 ISO 镜像文件,你可以利 ISO 文件创建一个可启动的 DVD 安装 GamePack ,或者你可以使用 “USB Image Writer”把 ISO 写入到优盘,并利用此优盘安装系统。 硬件需求和 Ubuntu 16.04 保持一致:
* 2 GHz 双核处理器或者更高
* 2 GB 系统内存
* 25 GB 的磁盘空间
* 用于安装介质的 DVD 驱动器或者 USB 端口
* 在线游戏系统(如 Steam)需要互联网接入。
不用说,对于游戏玩家来说,肯定希望拥有比这些“最低配置”要求更高的系统配置。更大的内存是一个稳妥的选择,还应该配备一块显存更大的、像样的显卡。
您如果有了硬件系统和系统的特定 32位 或者 64 位 ISO 文件,那么接下来就可以安装操作系统了。
### 安装过程
当你用安装介质的 ISO 镜像文件启动了系统,您就可以准备进行下一步了。
从 Ubuntu Gamepack 介质启动,你会看到一个类似图 1 的屏幕。

*图 1*
一旦加载完毕,安装程序就可以继续安装了。图 2 显示下一屏,可以定制语言,接下来是安装或者体验 Gamepack。如果你愿意,你可以点击 “Try Ubuntu” 在不改变硬盘内容的情况下把它加载到内存中来试试它。

*图 2*
接下来继续选择 ‘Install Ubuntu’ 进行安装了。
下一个屏幕,如图 3 所示,你可以在安装 Ubuntu 时指定是否下载 Ubuntu 的任何更新。您还可以选择安装第三方的软件,如:图形、WiFi、Flash、MP3 和其他更新。当定制好你的系统后,就可以点击 “Continue”。


*图 3*
接下来,您必须指定驱动器将如何配置使用,如图 4 所示。如果您计划使用整个驱动器,那么可以更容易地设置,选择此驱动器即可,然后单击“Install Now”。

*图 4*
接下来在图 5 中,可以根据提示确认所选择的配置。如果同意这些更改,请单击 “Continue”。


*图 5*
接下来,如图 6 所示,你将按照提示选择时区,选择完毕后点击“Continue”。

*图 6*
接下来,如图 7 所示一个窗口,需要您设置默认的键盘布局。选择适合您的正确的布局后并按“Continue”。

*图 7*
最后一个配置屏幕是为您设置一个用户帐户,如图 8 所示。键入您的姓名、计算机名、用户名和密码,并选择登录系统时是否需要输入密码。您还可以为该用户设置加密主目录。

*图 8*
安装将按指定来设置驱动器。安装文件将从引导媒体复制到硬盘驱动器,如图 9 所示。所有内容复制到硬盘并设置好,您将被提示移除引导介质并允许重新启动系统。

*图 9*
重新启动后,如果您设置了需要用户登录,就会看到类似于图 10 的屏幕。输入之前指定的用户密码,登录到 Ubuntu GamePack。

*图 10*
当你登录到 Ubuntu GamePack 之后,你应该先执行可能需要的软件更新。打开一个终端并输入以下命令:
```
sudo apt-get update && sudo apt-get upgrade
```
所有尚未安装的更新都应该装上,以使 GamePack 系统保持最新。
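顺便一提,在 Ubuntu 上,如果某个更新(例如新内核)需要重启才能生效,系统会创建 `/var/run/reboot-required` 文件。可以用下面的小脚本检查一下(仅作示意):

```shell
# 检查 Ubuntu 是否提示需要重新启动
if [ -f /var/run/reboot-required ]; then
    echo "更新要求重新启动系统"
    # 如果存在,列出涉及的软件包
    cat /var/run/reboot-required.pkgs 2>/dev/null
else
    echo "当前没有待重启的更新"
fi
```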
现在,只需浏览菜单,找到你想玩的游戏,打开相应的模拟器或者像 Steam 这样的游戏服务即可。
希望你喜欢 Gamepack 并且玩得高兴!
---
via: <https://www.linux.org/threads/ubuntu-gamepack.4559/>
作者:[Jarret B](https://www.linux.org/members/jarret-b.29858/) 译者:[stevenzdg988](https://github.com/stevenzdg988) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
8,687 | 如何修补和保护 Linux 内核堆栈冲突漏洞 CVE-2017-1000364 | https://www.cyberciti.biz/faq/howto-patch-linux-kernel-stack-clash-vulnerability-cve-2017-1000364/ | 2017-07-12T07:34:00 | [
"安全漏洞",
"内核",
"堆栈"
] | https://linux.cn/article-8687-1.html | 在 Linux 内核中发现了一个名为 “Stack Clash” 的严重安全问题,攻击者能够利用它来破坏内存数据并执行任意代码。攻击者可以利用这个及另一个漏洞来执行任意代码并获得管理帐户(root)权限。
在 Linux 中该如何解决这个问题?
[](https://www.cyberciti.biz/media/new/faq/2017/06/the-stack-clash-on-linux-openbsd-netbsd-freebsd-solaris.jpeg)
Qualys 研究实验室在 GNU C Library(CVE-2017-1000366)的动态链接器中发现了许多问题,它们通过与 Linux 内核内的堆栈冲突来允许本地特权升级。这个 bug 影响到了 i386 和 amd64 上的 Linux、OpenBSD、NetBSD、FreeBSD 和 Solaris。攻击者可以利用它来破坏内存数据并执行任意代码。
### 什么是 CVE-2017-1000364 bug?
[来自 RHN](https://access.redhat.com/security/cve/cve-2017-1000364):
>
> 在用户空间二进制文件的堆栈中分配内存的方式发现了一个缺陷。如果堆(或不同的内存区域)和堆栈内存区域彼此相邻,则攻击者可以使用此缺陷跳过堆栈保护区域,从而导致进程堆栈或相邻内存区域的受控内存损坏,从而增加其系统权限。有一个在内核中减轻这个漏洞的方法,将堆栈保护区域大小从一页增加到 1 MiB,从而使成功利用这个功能变得困难。
>
>
>
[据原研究文章](https://blog.qualys.com/securitylabs/2017/06/19/the-stack-clash):
>
> 计算机上运行的每个程序都使用一个称为堆栈的特殊内存区域。这个内存区域是特别的,因为当程序需要更多的堆栈内存时,它会自动增长。但是,如果它增长太多,并且与另一个内存区域太接近,程序可能会将堆栈与其他内存区域混淆。攻击者可以利用这种混乱来覆盖其他内存区域的堆栈,或者反过来。
>
>
>
### 受到影响的 Linux 发行版
1. Red Hat Enterprise Linux Server 5.x
2. Red Hat Enterprise Linux Server 6.x
3. Red Hat Enterprise Linux Server 7.x
4. CentOS Linux Server 5.x
5. CentOS Linux Server 6.x
6. CentOS Linux Server 7.x
7. Oracle Enterprise Linux Server 5.x
8. Oracle Enterprise Linux Server 6.x
9. Oracle Enterprise Linux Server 7.x
10. Ubuntu 17.10
11. Ubuntu 17.04
12. Ubuntu 16.10
13. Ubuntu 16.04 LTS
14. Ubuntu 12.04 ESM (Precise Pangolin)
15. Debian 9 stretch
16. Debian 8 jessie
17. Debian 7 wheezy
18. Debian unstable
19. SUSE Linux Enterprise Desktop 12 SP2
20. SUSE Linux Enterprise High Availability 12 SP2
21. SUSE Linux Enterprise Live Patching 12
22. SUSE Linux Enterprise Module for Public Cloud 12
23. SUSE Linux Enterprise Build System Kit 12 SP2
24. SUSE Openstack Cloud Magnum Orchestration 7
25. SUSE Linux Enterprise Server 11 SP3-LTSS
26. SUSE Linux Enterprise Server 11 SP4
27. SUSE Linux Enterprise Server 12 SP1-LTSS
28. SUSE Linux Enterprise Server 12 SP2
29. SUSE Linux Enterprise Server for Raspberry Pi 12 SP2
### 我需要重启我的电脑么?
是的。因为大多数服务都依赖于 GNU C Library 的动态链接器,而且内核自身也需要在内存中重新加载,所以需要重启。
### 我该如何在 Linux 中修复 CVE-2017-1000364?
根据你的 Linux 发行版来输入命令。你需要重启电脑。在应用补丁之前,记下你当前内核的版本:
```
$ uname -a
$ uname -mrs
```
示例输出:
```
Linux 4.4.0-78-generic x86_64
```
### Debian 或者 Ubuntu Linux
输入下面的 [apt 命令](https://www.cyberciti.biz/faq/ubuntu-lts-debian-linux-apt-command-examples/) / [apt-get 命令](https://www.cyberciti.biz/tips/linux-debian-package-management-cheat-sheet.html)来应用更新:
```
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
```
示例输出:
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
libc-bin libc-dev-bin libc-l10n libc6 libc6-dev libc6-i386 linux-compiler-gcc-6-x86 linux-headers-4.9.0-3-amd64 linux-headers-4.9.0-3-common linux-image-4.9.0-3-amd64
linux-kbuild-4.9 linux-libc-dev locales multiarch-support
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/62.0 MB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc6-i386_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-i386 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc6-dev_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-dev:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc-dev-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-dev-bin (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../linux-libc-dev_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../libc6_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc6:amd64 (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-bin (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc-bin (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../multiarch-support_2.24-11+deb9u1_amd64.deb ...
Unpacking multiarch-support (2.24-11+deb9u1) over (2.24-11) ...
Setting up multiarch-support (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../0-libc-l10n_2.24-11+deb9u1_all.deb ...
Unpacking libc-l10n (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../1-locales_2.24-11+deb9u1_all.deb ...
Unpacking locales (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../2-linux-compiler-gcc-6-x86_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../3-linux-headers-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../4-linux-headers-4.9.0-3-common_4.9.30-2+deb9u1_all.deb ...
Unpacking linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../5-linux-kbuild-4.9_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../6-linux-image-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Setting up linux-libc-dev:amd64 (4.9.30-2+deb9u1) ...
Setting up linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) ...
Setting up libc6-i386 (2.24-11+deb9u1) ...
Setting up linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) ...
Setting up linux-kbuild-4.9 (4.9.30-2+deb9u1) ...
Setting up libc-l10n (2.24-11+deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up libc-dev-bin (2.24-11+deb9u1) ...
Setting up linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-4.9.0-3-amd64
cryptsetup: WARNING: failed to detect canonical device of /dev/md0
cryptsetup: WARNING: could not determine root device from /etc/fstab
W: initramfs-tools configuration sets RESUME=UUID=054b217a-306b-4c18-b0bf-0ed85af6c6e1
W: but no matching swap device is available.
I: The initramfs will attempt to resume from /dev/md1p1
I: (UUID=bf72f3d4-3be4-4f68-8aae-4edfe5431670)
I: Set the RESUME variable to override this.
/etc/kernel/postinst.d/zz-update-grub:
Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /boot/vmlinuz-4.9.0-3-amd64
Found kernel: /boot/vmlinuz-3.16.0-4-amd64
Updating /boot/grub/menu.lst ... done
Setting up libc6-dev:amd64 (2.24-11+deb9u1) ...
Setting up locales (2.24-11+deb9u1) ...
Generating locales (this might take a while)...
en_IN.UTF-8... done
Generation complete.
Setting up linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...
```
使用 [reboot 命令](https://www.cyberciti.biz/faq/linux-reboot-command/)重启桌面/服务器:
```
$ sudo reboot
```
### Oracle/RHEL/CentOS/Scientific Linux
输入下面的 [yum 命令](https://www.cyberciti.biz/faq/rhel-centos-fedora-linux-yum-command-howto/):
```
$ sudo yum update
$ sudo reboot
```
### Fedora Linux
输入下面的 dnf 命令:
```
$ sudo dnf update
$ sudo reboot
```
### Suse Enterprise Linux 或者 Opensuse Linux
输入下面的 zypper 命令:
```
$ sudo zypper patch
$ sudo reboot
```
### SUSE OpenStack Cloud 6
```
$ sudo zypper in -t patch SUSE-OpenStack-Cloud-6-2017-996=1
$ sudo reboot
```
### SUSE Linux Enterprise Server for SAP 12-SP1
```
$ sudo zypper in -t patch SUSE-SLE-SAP-12-SP1-2017-996=1
$ sudo reboot
```
### SUSE Linux Enterprise Server 12-SP1-LTSS
```
$ sudo zypper in -t patch SUSE-SLE-SERVER-12-SP1-2017-996=1
$ sudo reboot
```
### SUSE Linux Enterprise Module for Public Cloud 12
```
$ sudo zypper in -t patch SUSE-SLE-Module-Public-Cloud-12-2017-996=1
$ sudo reboot
```
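如果要照看多台不同发行版的机器,也可以把上面各节的更新命令按发行版做一个简单映射。下面是一个示意函数(`update_cmd_for` 是本文虚构的名字,仅覆盖上文提到的几类系统):

```shell
# 示意:根据发行版名称返回对应的更新命令(update_cmd_for 为虚构的函数名)
update_cmd_for() {
    case "$1" in
        debian|ubuntu)      echo "apt-get update && apt-get dist-upgrade" ;;
        rhel|centos|oracle) echo "yum update" ;;
        fedora)             echo "dnf update" ;;
        suse|opensuse)      echo "zypper patch" ;;
        *)                  echo "unsupported"; return 1 ;;
    esac
}
```

实际使用时可以在巡检脚本中调用它,例如 `update_cmd_for fedora` 会输出 `dnf update`;打完补丁之后同样需要重启。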
### 验证
你需要确认你的版本号在 [reboot 命令](https://www.cyberciti.biz/faq/linux-reboot-command/)之后改变了。
```
$ uname -a
$ uname -r
$ uname -mrs
```
示例输出:
```
Linux 4.4.0-81-generic x86_64
```
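如果要在多台机器上做这种检查,也可以把“打补丁前记下的版本”与“重启后的版本”做个简单对比。下面是一个示意函数(`check_kernel_changed` 为本文虚构的名字,版本号只是占位示例):

```shell
# 示意:对比打补丁前后记录的内核版本字符串(check_kernel_changed 为虚构的函数名)
check_kernel_changed() {
    old="$1"   # 打补丁之前记录的 uname -r 输出
    new="$2"   # 重启之后的 uname -r 输出
    if [ "$old" = "$new" ]; then
        echo "内核版本未变化,补丁可能未生效"
        return 1
    fi
    echo "内核已从 $old 更新到 $new"
}

# 例如:check_kernel_changed "4.4.0-78-generic" "$(uname -r)"
```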
### 给 OpenBSD 用户的注意事项
见[此页](https://ftp.openbsd.org/pub/OpenBSD/patches/6.1/common/008_exec_subr.patch.sig)获取更多信息。
### 给 Oracle Solaris 的注意事项
[见此页](https://ftp.openbsd.org/pub/OpenBSD/patches/6.1/common/008_exec_subr.patch.sig)获取更多信息。
### 参考
* [堆栈冲突](https://blog.qualys.com/securitylabs/2017/06/19/the-stack-clash)
---
作者简介:
Vivek Gite
作者是 nixCraft 的创始人,是一名经验丰富的系统管理员和培训师,精通 Linux 操作系统和 Unix shell 脚本。他曾与全球各行各业的客户合作,涉及 IT、教育、国防和空间研究以及非营利部门。在 [Twitter](https://twitter.com/nixcraft)、[Facebook](https://facebook.com/nixcraft)、[Google +](https://plus.google.com/+CybercitiBiz) 上关注他。
---
via: <https://www.cyberciti.biz/faq/howto-patch-linux-kernel-stack-clash-vulnerability-cve-2017-1000364/>
作者:[Vivek Gite](https://plus.google.com/+CybercitiBiz) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
8,689 | Fedora 26 正式发布! | https://fedoramagazine.org/fedora-26-is-here/ | 2017-07-13T09:24:03 | [
"Fedora",
"发行版"
] | https://linux.cn/article-8689-1.html | 
大家好,我很高兴地宣布,从即刻起 Fedora 26 正式可用了。你可以从下面了解到具体信息,也可以马上开始下载:
* [下载 Fedora 26 Workstation](https://getfedora.org/workstation/)
* [下载 Fedora 26 Server](https://getfedora.org/server/)
* [下载 Fedora 26 Atomic Host](https://getfedora.org/atomic/) (包括 Amazon EC2 的一键启动链接)
如果你已经在使用 Fedora 了,你可以从命令行或 GNOME “软件” 升级,[升级建议在这里](https://fedoramagazine.org/upgrading-fedora-25-fedora-26/)。我们在升级方面做了许多工作,使它又快又容易。大多数情况下,这个过程只需要半个小时左右,你的系统就能毫无麻烦地恢复正常工作。
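命令行升级的大致步骤可以参考下面的示意(这里只是把常见步骤打印出来,并不真正执行;`print_upgrade_steps` 为本文虚构的名字,具体操作请以上面链接的升级建议为准):

```shell
# 示意:打印从命令行升级 Fedora 的常见步骤(print_upgrade_steps 为虚构的函数名)
print_upgrade_steps() {
    releasever="$1"   # 目标版本号,例如 26
    printf '%s\n' \
        "sudo dnf upgrade --refresh" \
        "sudo dnf install dnf-plugin-system-upgrade" \
        "sudo dnf system-upgrade download --releasever=${releasever}" \
        "sudo dnf system-upgrade reboot"
}
```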
### Fedora 26 的新特性
当然了,首先,我们对各个集成的上游软件做了上千个改进,这包括像 GCC 7、Golang 1.8、Python 3.6 等开发工具。我们也给 Fedora 安装器 Anaconda 添加了新的分区工具,现有的流程对非专业用户来说很棒,但是新的工具将得到爱好者和系统管理员们的喜爱,因为他们喜欢如搭积木般地构建其存储方案。 Fedora 26 也有许多隐藏在底层的改进,比如更好的缓存用户和组信息,对调试信息的处理更好。而且 DNF 软件包管理器也升级为新的主要版本(2.5),带来了[许多新功能](http://dnf.readthedocs.io/en/latest/release_notes.html)。真的,这次有许多新东西,你可以在[发布公告](https://docs.fedoraproject.org/en-US/Fedora/26/html/Release_Notes/index.html)里面了解更多。
### 如此之多的选择……
Fedora Workstation 构建于 GNOME 之上(现在的版本是 [3.24](https://help.gnome.org/misc/release-notes/3.24/))。如果你喜欢其他的流行桌面 ,如 KDE、Xfce、 Cinnamon 等等,你可以看看 [Fedora Spins](https://spins.fedoraproject.org/)。此外,也有一些用于特殊用途的版本,比如天文、设计、安全或机器人等方面,请参见 [Fedora Labs](https://labs.fedoraproject.org/)。STEM 教师们也能够利用新的 [Python Classroom](https://labs.fedoraproject.org/python-classroom/) 轻而易举地构建一个带有 Vagrant 和 Docker 容器的教学环境,无论是以现场版 USB 镜像还是传统的安装方式都行。
如果你想要在 EC2、OpenStack 或其它云平台中搭建一个 Fedora 环境,这里有 [Fedora Cloud Base](https://cloud.fedoraproject.org/) 。此外,我们也提供了网络安装器、其它架构(比如 Power 和 aarch64)、种子链接等等,你可以在 [Fedora 备选下载](https://alt.fedoraproject.org/)里面找到。当然,这些也不能漏掉:如果你想要将 Fedora 安装到树莓派或其他 ARM 设备上,可以从 [Fedora ARM](https://arm.fedoraproject.org/) 获取镜像文件。
呼!Fedora 带来了这么多东西,我希望其中有每个人需要的东西;但如果你找不到你想要的,你可以[加入 Fedora 项目](https://fedoraproject.org/wiki/Join),和我们一起创造它。我们的使命是以 Freedom、Friendship、Features、First 为基础,构建一个让贡献者和其他开发者能够解决各种用户问题的平台。如果你想解决的问题还没有被解决,Fedora 可以帮你实现。
### 即将到来
同时,在 Fedora 的幕后还有许多有趣的事情在进行。敬请关注本周稍晚时候的 Fedora Boltron,它预览了一种用演进速度各不相同的构建块来组装 Fedora Server 的新方式。(如果我的开发栈是架在稳定基础之上的滚动发行版会怎么样?或者,我能不能既享受基础平台升级的好处,又让我的 Web 服务器和数据库保持在某个已知版本?)我们也在开发一个专注于 Fedora Atomic 和自动化测试的大型[持续集成](https://fedoraproject.org/wiki/CI)项目,这样开发者就可以快速开发而不用担心破坏其它东西。
### 感谢整个 Fedora 社区!
总的来说,我相信这是又一次的、有史以来的、最好的 Fedora 版本。这是因为每年有数以千计的 Fedora 贡献者们的奉献精神、辛勤工作和爱心。这真是一个令人惊叹的社区项目,来自于一群令人惊叹的人们。这一次,特别感谢从质量保证到发布项目的每一个人,是他们在周末和假日的工作才将 Fedora 26 呈现于你的眼前。
哦,当然,在人类世界里面,每一个最好的版本都不会是完美无瑕的,总会有些边边角角的地方和后来才发现的问题,如果你遇到了一些奇奇怪怪的问题,请看看 [Fedora 26 常见问题](https://fedoraproject.org/wiki/Common_F26_bugs)。如果你遇到了问题,可以帮我们做得更好。但是在大多数情况下,请乐享这个最新版本吧!
*— Matthew Miller, Fedora 项目负责人*
| 200 | OK | *[This message comes from the desk of the Fedora Project Leader directly. Happy release day! — Ed.] *
Hi everyone! I’m incredibly proud to announce the immediate availability of Fedora 26. Read more below, or just jump to download from:
[Get Fedora 26 Workstation](https://getfedora.org/workstation/)[Get Fedora 26 Server](https://getfedora.org/server/)[Get Fedora 26 Atomic Host](https://getfedora.org/atomic/)*← includes click-to-launch link for Amazon EC2*
If you’re already using Fedora, you can upgrade from the command line or using GNOME Software — [upgrade instructions here](https://fedoramagazine.org/upgrading-fedora-25-fedora-26/). We’ve put a lot of work into making upgrades easy and fast. In most cases, this will take half an hour or so, bringing you right back to a working system with no hassle.
## What’s new in Fedora 26?
First, of course, we have thousands of improvements from the various upstream software we integrate, including new development tools like GCC 7, Golang 1.8, and Python 3.6. We’ve added a new partitioning tool to Anaconda (the Fedora installer) — the existing workflow is great for non-experts, but this option will be appreciated by enthusiasts and sysadmins who like to build up their storage scheme from basic building blocks. F26 also has many under-the-hood improvements, like better caching of user and group info and better handling of debug information. And the DNF package manager is at a new major version (2.5), bringing [many new features](http://dnf.readthedocs.io/en/latest/release_notes.html). Really, there’s new stuff everywhere — read more in the [release notes](https://docs.fedoraproject.org/en-US/Fedora/26/html/Release_Notes/index.html).
## So many Fedora options…
Fedora Workstation is built on GNOME (now [version 3.24](https://help.gnome.org/misc/release-notes/3.24/)). If you’re interested in other popular desktop environments like KDE, Xfce, Cinnamon, and more, check out [Fedora Spins](https://spins.fedoraproject.org/). Or, for versions of Fedora tailored to special use cases like Astronomy, Design, Security, or Robotics, see [Fedora Labs](https://labs.fedoraproject.org/). STEM teachers, take advantage of the new [Python Classroom](https://labs.fedoraproject.org/python-classroom/), which makes it a breeze to set up an instructional environment with Vagrant, Docker containers, a Live USB image, or traditional installation.
If you want a Fedora environment to build on in EC2, OpenStack, and other cloud environments, there’s the [Fedora Cloud Base](https://cloud.fedoraproject.org/). Plus, we’ve got network installers, other architectures (like Power and aarch64), BitTorrent links, and more at [Fedora Alternative Downloads](https://alt.fedoraproject.org/). And, not to be forgotten: if you’re looking to put Fedora on a Raspberry Pi or other ARM device, get images from the [Fedora ARM](https://arm.fedoraproject.org/) page.
Whew! Fedora makes a lot of stuff! I hope there’s something for everyone in all of that, but if you don’t find what *you* want, you can [Join the Fedora Project](https://fedoraproject.org/wiki/Join) and work with us to create it. Our mission is to build a platform which enables contributors and other developers to solve all kinds of user problems, on our foundations of Freedom, Friendship, Features, and First. If the problem you want to solve isn’t addressed, Fedora can help you fix that.
## Coming soon
Meanwhile, we have many interesting things going on in Fedora behind the scenes. Stay tuned later this week for Fedora Boltron, a preview of a new way to put together Fedora Server from building blocks which move at different speeds. (*What if my dev stack was a rolling release on a stable base? Or, could I get the benefits from base platform updates while keeping my web server and database at known versions?*) We’re also working on a big [continuous integration](https://fedoraproject.org/wiki/CI) project focused on Fedora Atomic, automating testing so developers can work rapidly without breaking things for others.
## Thanks to the whole Fedora community!
Altogether, I’m confident that this is the best Fedora release ever — yet again. That’s because of the dedication, hard work, and love from thousands of Fedora contributors every year. This is truly an amazing community project from an amazing group of people. This time around, thanks are particularly due to everyone from quality assurance and release engineering who worked over the weekend and holidays to get Fedora 26 to you today.
Oh, and one more thing… in the human world, even the best release ever can’t be perfect. There are always corner cases and late-breaking issues. Check out[ Common F26 Bugs](https://fedoraproject.org/wiki/Common_F26_bugs) if you run into something strange. If you find a problem, help us make things better. But mostly, enjoy this awesome new release.
*— Matthew Miller, Fedora Project Leader*
## Willian
Awesome work from everyone! I’m using Fedora 26 right now, updated from the Beta version. Everything is smooth!
## Rico
I’ll be updating soon! Awesome work!
## Matías de la Fuente
Thanks! I’m upgrading now. Happy to see the new version working.
## Alexis
Good job, guys! I’ve been using F26 for a week and it’s great!
## Yogesh Sharma
Very stable release.
This time I tried Gnome “Software” for upgrade process and it was painless, quick and smooth.
Great Job!!!
## Vivek Gite
Awesome and congratulations for the release.
## Jeff Sandys
Help. I am getting “403 Forbidden” when attempting to download LXDE 32bit.
https://download.fedoraproject.org/pub/fedora-secondary/releases/26/Spins/i386/iso/Fedora-LXDE-Live-i386-26-1.5.iso
## Paul W. Frields
@Jeff: The URL you posted here simply sends you to a mirror location, so the problem you’re seeing is with a mirror we don’t control. I’ve reported it but you may want to try another one. https://admin.fedoraproject.org/mirrormanager/mirrors/Fedora/26
## Andy Mender
Well done, Fedora team!
## Wayne
Will 26 work in a VMWare VM at full 4k resolution? 25 doesn’t so I ‘ve stayed on 24 for now.
## Paul W. Frields
@Wayne: Why not download the Live version and find out? 😉
## Wayne
Yeah, I s’pose I can do that. 🙂
## Wayne
Well we now have full 4K resolution in a VMWare VM! Excellent. It’s a bit flaky though, Blueman app crashes immediately in both the VM and on the machine that has Fedora 26 installed natively. Visual Studio Code won’t start (on either) and Chrome freezes on the VM. Still, the bug reports are being sent so I’m sure these will all get fixed in time. Good job!
## Paweł
Upgrading from F25, via GNOME Software broke my IBM Notes client – not opening anymore, hanging on loading after entering password. Doing a clean install right now, and will check again.
Overall F26 seems to be faster and more responsive on my humble Lenovo “Thinkpad” e320 (I don’t consider anything with that bad build quality, to be a true Thinkpad). Scrolling websites up and down on Wayland is soooo smooth, I really like it (but there are some hiccups here and there).
Night Light is great, I think it even should be enabled by default, with a short info on how to change it, when user logs in for the first time, after installing.
Thanks for Your hard work, with this another great release 🙂
## Paweł
Nope, no dice.
IBM Notes 9.0.1 is broken on Fedora 26 (with/without FP6). Tried both ways – upgrade from F25 and fresh install on F26 🙁
## Eduardo Chachagua
I have the same problem with IBM Notes 9.0.1.
Does it work or not yet?
Thank you
regards
## Paweł
It still does not work for me on F26.
Also tried installing on openSUSE Tumbleweed but I have the same issue (hangs after password input with loading box visible).
I’m not sure what exactly causes this problem yet. Probably some never packages that are dependency of Notes or system component change.
If You are interested, below are terminal-startup logs from running Notes from terminal on F25 and on F26:
F25 (works): https://paste.fedoraproject.org/paste/I71MIQGhAQDoDBzN6aeWkQ
F26 (dosn’t work): https://paste.fedoraproject.org/paste/1G~p8vktrEzhMl2fJ-00Ag
## Yamada
https://developer.ibm.com/javasdk/downloads/sdk8/
Download this: ibm-java-sdk-8.0-4.7-i386-archive.bin
## DaveK
Thank you! That is the fix I needed.
## Paweł
Why? What does this do? Does not resolve my issue – Notes still won’t start.
## Stefan
Dear Pawel,
did you find a solution? I am experiencing the exact same problem.
I tried also both ways (upgrade and fresh install).
Thanks
## Whizz
Basically, it’s the JRE. I h avefound that MATLAB crashes when JRE 1.8 is used. Try with old version (ver 1.7) and make sure IBM Notes use that JRE instead of the default one.
## Magic
How did you do that?
I installed Java 1.7 and linked java in /opt/ibm/notes/jvm to my local installation and I wasnt successfull :/
## Yamada
https://developer.ibm.com/javasdk/downloads/sdk8/
Use this java: ibm-java-sdk-8.0-4.7-i386-archive.bin
## MaxMeranda
Excellent article of sheer awesome!
Going to wait a couple of weeks for an upgrade of my (5) machines here (I’ll do a fresh install). Have been looking forward to this!
## Fábio
Hello Fedora Team!
Excellent my Fedora 26 Workstation!
## VK Joseph
Transition from F25 to F26 was smooth and excellent. Thanks the fedora team.
## hos7ein
yeah,this is 😉
## Ali Yousefi Sabzevar
Congratulations!
Currently, I’m downloading F26 Workstation. I’m pretty sure it’s as stable as previous Fedora versions.
Thank you Fedora team and Red Hat engineers.
## Bbn
Good work, will upgrade mine soon
## Anatoly
Is any bootloader editor in f 26? I installed it yesterday. I can’t say I glad to do this. Before I install f 25 but after get to know that f 26 release appeared and decided to install it. Some difficulties were but i don’t now. I am not finish to set personal settings yet. It take 2-3 days. So I can tell my opinion.
## Simon Morgan
Using Fedora 26 Plasma Spin. This is the best Plasma release Fedora ever had. Runs even better than Kubuntu and Plasma on Arch. Finally! Thank you for all the hard work
## Pieter J.van Horssen
Thanks for the hard work put in this release. Great job done.
## eyes_only
Thanks for your work. Awesome release!
## Cristian
It is Awesome, I installed on Saturday and works perfect. And yesterday I have upgraded another pc from 25 to 26, without a problem.
I love Fedora.
## Pedro Coelho
Hi F26 team.
dnf system-upgrade makes gnome to hang after login .
I changed to gnome classic and works.
In boot I used the parameter :
systemd.unit= multi-user.target to check journalctl and there was no error.
Regards
## luca247
upgrade went fine via the software center and the system is fast and stable…unfortunately seems something broken in mesa as everytime i try to play some videos in mp4 or mkv format the screen freezes for seconds and then al colours gets messed up forcing me to reboot…terminal tells me some radeon problems (i have an old ati radeon 4570)…hope it can be fixed…
## Anatoly
At last a can say some certain opinion about fedora 26. In common it’s better. But have some disadvantages as to some Soft. It doesn’t matter to own OS. For example I found some API don’t work on root. There are dolphin, kate, kwrite, okteta and may be some others. If you found get to know me about it. And in f 25 it work. At list by using kdesu… So newer install f 26 from the begin. Or you get bad OS… First install f 25 do some blocks for upgrade and update so as to version of this soft not increased in any case. And only after that you may upgrade to f 26. In addition to is good to block IBus certainly or you input will be conflict with native Linux one and some architecture blocks so as not to install or update wrong arch packages. And may be some else. Dependencies may pull big undesired packages so you must pay match attention so as not to install undesired soft. Return to discussed… Disadvantage as i consider is that the is no easy way to switch to console from graphic and reverse. There are some disadvantages else. But they not special for fedora. They present in all modern Linux OS. That is bad tendentious that next version reduce functionalities and possibilities of tunnings some soft. I don’t know why? I noticed some aspect of this approach. This problems artificially created. May be more reasons to allow user what need him then forbid some functionality. Corporative politic is the worst thing. They don’t ask user, and do as they think. For example this design for admins.
## Sebastiaan Franken
Switching to a TTY is still done with Control + Alt + F{2,3,4,5,6,7,etc}. If you want to make this semi-permanent: “systemctl isolate multi-user.target” to go from a console to the GUI and “systemctl isolate graphical.target” to go from CLI to GUI
## Sergey
Fedora 26 It is the best upgrade, never mind who am i and how long i am participate in or not using Fedora. In my subjectivity mind it is 2 more efficient then the previous.
## Emanoel
Never a release upgrade process was so smooth
## Valenteriano
Hi, does anybody know if it works besides macOS on a MacBook Pro 2011?
It is a MacBook Pro 8,2 with AMD Radeon HD 6750M mac edition. This means it has only 512 MB VRAM. Other Notebooks have the “normal” version with 1024 MB VRAM.
This means, when I install Fedora 25, Fedora seems to assume, the card has 1024 MB as it sees it is this Radeon card. But after installation right after first boot, the login display is scrambled and not usable. Well, I can switch to terminal, but I do not know what to do, so I can not get the card working properly. Therefore I have to use Fedora 24. But it has some problems, as it does not support the hybrid graphics card system fully, yet (several dead cursors on the screen …). As installer I am able to start the left “EFI” on boot screen. The right “EFI” boots with a scrambled display. So I am afraid installing it on the MBP, if after installation the desktop is scrambled again as in Fedora 25. And I need a properly working graphics card as I need it to develop in c++ a system with OpenGL.
Has anybody a tipp if I can get it working on this MBP and what I have to do?
Thank you in forward!
## Michael Z
Well done! Boots faster than 25 and overall performance is snappier. I don’t have any hard numbers but just looking at utilization it appears to be using less resources. Thanks!
## Sanjay
Great work!
## StefanF.
Well I’ve a little problem to finish the upgrade succesfully.
After ‘system-upgrade’ get I the message:
Error: Transaction check error:
file /usr/lib64/gstreamer-1.0/libgsttranscode.so conflicts between attempted installs of gstreamer1-plugins-entrans-1.0.3-2.fc26.x86_64 and gst-transcoder-1.12.0-1.fc26.x86_64
Any ideas?
Thanks
Stefan
## Mimo Crocodile
Why clamd is auto runned?
## Paul W. Frields
@Mimo: The spec file for clamav says that in an upgrade, the systemd daemon settings will be reloaded, so it’s possible that your setting is enabled. Please consult community help forums for more information, as the Magazine isn’t one: https://fedoraproject.org/wiki/Communicating_and_getting_help
## karthik
Just installed 26 on with virtualbox. After installing guest addons, there is a LOT of screen flicker on mouse-over in the ‘activities’ menu. Noticed that 3D acceleration was disabled, so I enabled that, but now I can not login (stuck in a loop).
I’ve not noticed this behavior on F25, any ideas?
## Triston Broadway
I recently netinstalled F26 on a flash drive. It is not registering any devices. The laptop keyboard and trackpad both don’t work, and any external mouse or keyboard I use doesn’t seem to work.
## Angel Yocupicio
It is great release. I prefer to use Mate.
## Coen Fierst
Many thanks for this operating system! Free, non commercial, stable and good looking (oops).
## Costa A.
F26’s default wallpaper has set the bar too high.
Let’s see what F27 has come up with.
## Dave Huh
Upgraded flawlessly on ASUS laptop and DIY PCEngines Dual Band Wireless Router. Used CLI approach, no issues.
## Z
I have update 3 PC. Fedora 26 works like a charm!
## Karlir-Johanarnt Kristjanson
The only shockingly big blooper from the developer team this time was the OMISSION (ZZZZzzzz) of NetworkManager-PPP. The mobile broadband option should work out of the box. I had to search it up on the net, and luckily I identified the problem and downloaded the omitted file.
Otherwise it feels like a solid distro release. A nice piece of art.
Cheers!
Karlir
## Adeel
I have upgraded from fedora 25 and the process is smooth. Awesome work.
## Richard Gillian
The GUI-based ‘Upgrade to Fedora 26’ has failed for my F25 system. The installation completed but when I rebooted to the new F26 it failed to start anything. None of my previously installed F25 versions would start either. I have to switch off the power supply to end the start-up.
I have also tried the F26 iso image. That works but when I select ‘Install’ it cannot identify a suitable location to install (and nor can I).
Is there anything I can try, to recover? Or should I create a new partition and install there? Although that might work, it would leave me with no confidence in the upgrade process.
Should I report this in some other forum?
## Rkadrano
Hi all!
I work all days with Fedora 25, virtualBox and Other software. 2 years ago I changed the OS of my laptop (I am SysAdmin and I work all days with REd Hat Enterprise).
Today I tried to instal Server edition 26 and I can’t run graphics mode after install gnome or cinnamon desktops, i tryed all things like modify run levels, systemctl set-default, change run level, change Xwayland to Xorg manually… all things… and f26 Server only show me console prompt.
Ok, maybe I need to tray workstation… great, it works awesome in VBox, but VBox addons can’t be run. After create the KERN_DIR (I needed create it with mkdir, because can’t created it after export it) my surprise is VBox is not running error…
I think F26 has the last things to be the one, but for the moment is not a possibility to work at 100%.
Regard dudes, and sorry for my english.
## martinp
Just bought an asus mini vivo and installed fedora 26. 15 minutes later everything was working perfectly, wifi, dual monitor setup, wireless keyboard and mouse, sound thru HDMI. 15 more minutes and I had my software stack up and running (DropBox, R, Rstudio, VS code, latex).
My last experience with Linux was 10 years ago and it was a nightmare. This… this is too easy. Thanks a lot to all devs! Outstanding work.
## bijumon
Upgraded! reinstalled nvidia akmod packages, wine broke steam which got fixed but mostly uneventful upgrade, thanks to fedmag article https://fedoramag.wpengine.com/upgrading-fedora-25-fedora-26/
p.s looking foreword to ‘atomic workstation’ upgrade 😉 |
8,690 | 在 MacBook Air 上安装 Fedora 26 | https://www.linux.org/threads/installing-fedora-26-beta-on-a-macbook-air.12464/ | 2017-07-13T10:16:00 | [
"MacBook",
"Fedora"
] | https://linux.cn/article-8690-1.html | 
(写本文时)距离 Fedora 26 测试版发布已有几天,我认为是时候把它安装在我的 13 寸 MacBook Air 上了。
(LCTT 译注:[就在刚刚,Fedora 26 正式版发布了](/article-8689-1.html)。)
我这个 MacBook Air 的型号为 A1466 EMC 2925,拥有 8GB 内存、2.2GHz i7 处理器和 512GB SSD,外观与 2015 款相似。
首先我下载了 beta 版镜像,你能够从 [GetFedora](https://getfedora.org/en/workstation/prerelease/) 网站获取。一旦你下载完成,就可将它安装在 USB 闪存驱动器上。在 Linux 上,你可以用 `dd` 命令方便的完成这个操作。
将 USB 驱动器插入到你的电脑上,并使用 `tail` 命令读取 `/var/log/syslog` 或 `/var/log/messages` 文件的最后几行。你也可以使用 `df -h` 命令查看存储设备从而找到正确的 /dev/sdX。
在下面的例子中,我们假设 USB 闪存驱动器为 `/dev/sdc`:
```
dd if=/home/rob/Downloads/Fedora-Workstation-Live-x86_64-26_Beta-1.4.iso of=/dev/sdc bs=8M status=progress oflag=direct
```
这将花费一点点时间……让我们耐心等待。
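另外,在写入之前最好先校验下载的 ISO 是否完整。下面是一个用 SHA256 做校验的示意函数(`verify_iso` 为本文虚构的名字,期望的校验值需要从官方发布的 CHECKSUM 文件中获取):

```shell
# 示意:用 SHA256 校验下载的 ISO 镜像(verify_iso 为虚构的函数名)
verify_iso() {
    iso_file="$1"       # ISO 镜像文件路径
    expected_sum="$2"   # 官方 CHECKSUM 文件中给出的 SHA256 值
    actual_sum="$(sha256sum "$iso_file" | awk '{print $1}')"
    if [ "$actual_sum" = "$expected_sum" ]; then
        echo "校验通过: $iso_file"
    else
        echo "校验失败: $iso_file" >&2
        return 1
    fi
}
```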
接下来,我关掉 MacBook,等待 5 秒后将它重新启动。在按下电源键后,我按住 “option” 键来呼出启动选项。我的选择如下图:

点击 “fedora” 下面的箭头进入安装过程。
在进入安装过程后我注意到我没有 wifi 网络。幸运的是我有个雷电口转以太网的转接器,因为这个笔记本实际上没有以太网接口。我寄希望于谷歌搜索,并于 [此处](https://gist.github.com/jamespamplin/7a803fd5be61d4f93e0c5dcdea3f99ee) 找到了一些很棒的指导。
设置 wifi 前先更新内核:
```
sudo dnf update kernel
```
(然后重启)
安装 rpmfusion 仓库:
```
su -c 'dnf install -y http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'
```
安装 akmods 和 kernel-devel 包:
```
sudo dnf install -y akmods "kernel-devel-uname-r == $(uname -r)"
```
从 rpmfusion 仓库安装 broadcom-wl 包:
```
sudo dnf install -y broadcom-wl
```
重新构建内核模块:
```
sudo akmods
```
然后重启连接你的 wifi!
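如果重启后 wifi 仍然不可用,可以先用 `lsmod | grep wl` 确认 wl 模块是否已经加载。下面把这个检查封装成一个便于写进脚本的示意函数(`module_loaded` 为本文虚构的名字,它在传入的 lsmod 输出文本中查找模块):

```shell
# 示意:在 lsmod 的输出文本中查找指定的内核模块(module_loaded 为虚构的函数名)
module_loaded() {
    name="$1"        # 模块名,例如 wl
    lsmod_text="$2"  # lsmod 命令的输出文本
    printf '%s\n' "$lsmod_text" | grep -q "^${name}[[:space:]]"
}

# 实际使用时:module_loaded wl "$(lsmod)" && echo "wl 模块已加载"
```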
到目前为止一切顺利,这让我印象非常深刻!所有我关心的功能键都能正常工作,比如屏幕亮度、键盘背光和音量。
接下来,等 7 月份发布非测试版时,我将马上使用 dnf 升级!(LCTT 译注:[Fedora 26 正式版已经发布。](/article-8689-1.html))
感谢你,Fedora!
(题图:deviantart.net)
---
via: <https://www.linux.org/threads/installing-fedora-26-beta-on-a-macbook-air.12464/>
作者:[Rob](https://www.linux.org/members/rob.1/) 译者:[cycoe](https://github.com/cycoe) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 403 | Forbidden | null |
8,693 | 8 种在你没有时间的时候为开源做贡献的方式 | https://opensource.com/article/17/6/find-time-contribute | 2017-07-14T16:32:33 | [
"贡献",
"开源项目"
] | https://linux.cn/article-8693-1.html |
>
> 在忙碌的生活中抽出时间回馈给你关心的项目。
>
>
>

人们不给开源做贡献(或不能做更多贡献)的[最常见的原因](http://naramore.net/blog/why-people-don-t-contribute-to-os-projects-and-what-we-can-do-about-it)之一是缺乏时间。人艰不拆,有这么多优先的事情在争夺你有限的注意力。那么,如何才能在忙碌的生活中为你关心的开源项目抽出时间呢?
为了充分披露,我需要提醒你,我延误了把这篇文章给编辑的时间,因为我抽不出时间写它,所以是否接受我的建议,请自行承担风险。
### 找出你所关心的
贡献的第一步是弄清楚你到底要为什么腾出时间。你有自己想要投入的项目吗?有没有一个你在用、并且想要帮助的具体项目?或者你只是想做点*什么*?弄清楚你要做的事情,将帮助你决定它在你生活的优先级中处于什么位置。
### 找出其他的方法贡献
编写新功能可能需要数小时的设计、编码和测试。对于那种只能工作几分钟就得离开、之后再从中断的地方继续的情形,这并不容易。如果你无法获得超过 30 分钟的不受打扰的时间,那么在尝试完成一个大任务时,你或许会感到沮丧。
但还有其它或许可以满足你需求的贡献方式,让你可以利用零碎的时间。其中一些可以在智能手机上快速完成,这意味着你可以把原本在通勤路上消磨掉的时间用于开源贡献。以下是一些可以在小块时间里完成的事情:
* **Bug 分类:** 所有的 bug 报告都有必要的信息来诊断和解决它们么?它们是否妥善提交(给出正确的范围,正确的严重程度等)了么?
* **邮件列表支持:** 用户或其他贡献者在邮件列表中提出了问题?也许你可以帮忙。
* **文档修补:** 文档经常(但不总是)可以比代码用更小块的时间来处理。也许有几个地方你可以补充一下,或者也许是时候浏览一下文档并确保它们仍然准确了。
* **营销:** 在社交媒体上谈论你的项目或者社区。写一篇快速入门博文。在新闻聚合里投票和评论。
### 与你的老板交谈
你可能会认为在上班时间里你不能在开源项目上工作,但是你*问过*么?特别是如果这个项目以某种方式与你的日常工作相关,那你或许可以和你的老板谈谈,争取在工作时间做出贡献。请注意,这可能存在一些知识产权问题(例如,谁拥有你在工作时间内贡献的代码的权利),因此先做一下研究,并以书面形式确认条件。
### 设置最后期限
我所学到的最佳时间管理建议可以归纳为两个规则:
1. 如果要完成,它必须有一个截止日期
2. 可以更改最后期限
这篇文章有一个最后期限。它没有特别的时间敏感性,但最后期限意味着我定义了什么时候想完成它,并给编辑一个什么时候可以提交的感觉。是的,如上所述,我错过了最后期限。但你知道发生了什么事么?我设定了一个新的期限(第二次就成了!)。
如果有些事*确实*时间敏感,那就把最后期限定得提前一些,这样在你需要推迟一两次时还留有余地。
### 将它放到你的日程上
如果你使用日历安排你的生活,那用它安排一些时间来开展你的开源项目,可能是完成此项工作的唯一方法。你计划多少时间取决于你自己,但即使你每周只用一小时作为开源时间,这仍会给你每周一小时的开源时间。
这有一个秘密:有时候,如果你需要时间去做别的事情,或者什么都不想做,那么可以自己取消它。
### 开拓未使用的时间
你在通勤中感到无聊吗?你晚上难以入睡么?也许你可以利用这些时间来做贡献。当然,我认为“每周满负荷工作 169 个小时”的生活方式非常可怕。话虽如此,有些夜晚你就是睡不着。与其躺在床上看你 Twitter 上世界另一边的朋友在做什么(我就是这么干的),不如着手做那个你一直想做的贡献。但是不要养成放弃睡眠的习惯。
### 停止
有时,贡献最好的方式是一点不贡献。你是一个忙碌的人,不管你是多么的棒,你不能避开你的生理和心理的需要,它们会找上你。花点时间来休息,这也许可以提高你的生产力,使你的工作更快,突然间你就有时间去做那些你一直想做的开源贡献了。
### 说“不”
我不擅长这个,所以我做的并不好。但是没有人能做到任何想做的事情。有时候,你可以做的最好的事情是停止贡献,就像以前一样,或者没有贡献(参见上文)。
几年前,我领导了 Fedora 文档团队。团队的传统是,在每次发布结束时,领导会主动提出下台。我已经这样做过一两次,没有人想要接替我,所以我继续保持着这个角色。但是在我的第二或第三次发布之后,我明确表示,我不会继续担任团队领导了。我还是很喜欢这份工作,但我有一份全职工作,还在半工半读地念研究生,而且我的妻子怀上了我们的第一个孩子。我没有办法持续投入所需要的精力,所以我退出了领导岗位。我继续做出贡献,但转而承担要求较低的职责。
如果你正在努力抽出时间来满足你的义务(自我强加的或者不是),那么也许现在是重新考虑你的角色了。这对于你自己创建的或者已经大量投资的项目来说很困难,但有时你不得不这么做——为了你自己好以及项目本身。
### 其他还有什么?
你是如何找到时间做出贡献的?请在评论中告诉我们。
(题图: opensource.com)
---
作者简介:
Ben Cotton - Ben Cotton 是一个培训过的气象学家和一个职业的高效计算机工程师。 Ben 在 Cycle Computing 做技术传教士。他是 Fedora 用户和贡献者,合作创办当地的一个开源集会,是一名开源倡议者和软件自由机构的支持者。他的推特 (@FunnelFiasco)
---
via: <https://opensource.com/article/17/6/find-time-contribute>
作者:[Ben Cotton](https://opensource.com/users/bcotton) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | One of the [most common reasons](http://naramore.net/blog/why-people-don-t-contribute-to-os-projects-and-what-we-can-do-about-it) people give for not contributing (or not contributing more) to open source is a lack of time. I get it; life is challenging, and there are so many priorities vying for your limited attention. So how can you find the time in your busy life to contribute to the open source projects you care about?
In the interest of full disclosure, I should warn you that I was late getting this article to the editors because I couldn't find the time to work on it. Take my advice at your own risk.
## Figure out what you care about
The first step in contributing is to figure out what exactly you're making time for. Do you have a project of your own that you want to work on? Is there a specific project that you use that you want to help with? Do you just want to do *something*? Figuring out what you're making time for will help you decide where in your life's priorities it belongs.
## Find alternate ways to contribute
Writing a new feature can take many hours of design, coding, and testing. It's not always easy to work on that for a few minutes, step away, and then pick up where you left off. If you never get more than 30 minutes of uninterrupted effort, you might find yourself pretty frustrated if you try to take on a big task.
But there are other ways to contribute that might satisfy your need to give back within the time you have available. Some of them can be done in quick spurts from a smartphone, which means you can take the time you used to spend avoiding people on your commute and put it toward your open source contributions. Here's a list of some things that can be done in small chunks:
- **Bug triage:** Do all of the bug reports have the information necessary to diagnose and resolve them? Are they properly filed (to the right area, with the right severity, etc.)?
- **Mailing list support:** Are users or other contributors asking questions on the mailing list? Maybe you can help.
- **Documentation patches:** Documentation can often (but not always) be worked on in smaller chunks than code. Maybe there are a few places you can fill in. Or maybe it's time to run through the docs and make sure they're still accurate.
- **Marketing:** Talk about your project or community on social media. Write a quick blog post. Vote and comment on news aggregators.
## Talk to your boss
You might think you can't work on open source projects during the workday, but have you *asked*? Particularly if the project somehow relates to your day job, you might be able to sell your boss on letting you make contributions while at work. Note that there may be some intellectual property issues (e.g., who owns the rights to the code you contribute during working hours), so do your research first and get the conditions in writing.
## Set deadlines
The best time management advice I ever received can be summarized with two rules:
- If it's going to get done, it has to have a deadline
- It's okay to change a deadline
This article had a deadline. There's no particular time sensitivity to it, but a deadline meant that I defined when I wanted to get it done and gave the editors a sense of when it might be submitted. And yes, as I said above, I missed the deadline. You know what happened? I set a new deadline (second time's a charm!).
If something *is* time-sensitive, set the deadline well ahead to give yourself some space if you need to push it back a time or two.
## Put it on your calendar
If you use a calendar to schedule the rest of your life, scheduling some time to work on your open source project may be the only way to get it done. How much time you schedule is up to you, but even if you block off only one hour once a week as open source time, that still gives you an hour a week of open source time.
And here's a secret: It's okay to cancel on yourself sometimes if you need the time to do something else—or do nothing at all.
## Reclaim unused time
Are you bored on your commute? Are you having trouble falling asleep at night? Maybe you can use that time to contribute. Now I happen to think the "work 169 hours a week at full tilt" lifestyle is a terrible, terrible thing. That said, some nights you just can't fall asleep. Instead of lying in bed, seeing what your Twitter friends on the other side of the world are up to (which I do), maybe you can work on that contribution you've been meaning to get around to. Just don't make a habit out of forgoing sleep.
## Stop
Sometimes the best way to contribute is to not contribute for a little bit. You're a busy person and no matter how awesome you are, you can't avoid your physiological and psychological needs. They will catch up to you. Taking a little time to refresh yourself might improve your productivity enough that you get stuff done faster, and suddenly there's time for you to make those open source contributions you've been meaning to get around to.
## Say "no"
I am bad at this. So very bad. But none of us can do everything we'd like to do. Sometimes the best thing you can do to contribute is to stop contributing in the same way you have been—or not contribute at all (see above).
Several years ago, I led the Fedora documentation team. The team's tradition was that the leader would offer to step aside at the end of each release. I had done that a time or two and nobody stepped up, so I remained in the role. But after my second or third release, I made it clear that I would not be continuing as the team lead. I still enjoyed the work, but I was working full time, in graduate school half time, and my wife was pregnant with our first child. There was no way I could consistently give the level of effort that was required, so I stepped aside. I continued to contribute, but in a less-demanding capacity.
If you're getting to the point where you struggle to find the time to meet your obligations (self-imposed or not), then perhaps it's time to reconsider your role. This can be especially hard with a project that you founded or have made a considerable investment in. But sometimes you have to—for your own good and that of the project.
## What else?
How do you find time to make your contributions? Let us know in the comments.
|
8,694 | 如何开始学习编程? | https://opensource.com/article/17/4/how-get-started-learning-program | 2017-07-15T09:12:00 | [
"编程",
"学习"
] | https://linux.cn/article-8694-1.html |
>
> 编程初学者可能都思考过这个问题,“我该怎么学编程?”这里我们提供些相关的参考指导来帮助你找到最适合自己学习情况和学习需要的方法。
>
>
>

最近有很多关于学习编程的讨论。不仅是因为符合招聘要求的人才数量[远远满足不了](http://www.techrepublic.com/article/report-40-of-employers-worldwide-face-talent-shortages-driven-by-it/)软件开发行业公开及待填补的职位缺口,编程还是[工资最高](http://web.archive.org/web/20170328065655/http://www.businessinsider.com/highest-paying-jobs-in-america-2017-3/#-25)和[工作满足感最强](https://stackoverflow.com/insights/survey/2017/#career-satisfaction)的职业之一。也难怪越来越多的人都想进入这个行业。
但是你要怎么做才能正确地入行呢?“**我应该怎么学习编程?**”是初学者常见的一个问题。尽管我没有这些问题的全部答案,但是我希望这篇文章能够给你提供相关指导来帮助你找到最适合你的需求和自身情况发展的解决办法。
### 你的学习方式是什么?
在你开始学习编程之前,你需要考虑的不仅仅是你的方向选择,还要更多的考虑下你自己。古罗马人有句谚语,<ruby> <a href="https://en.wikipedia.org/wiki/Know_thyself"> γνῶθι σεαυτόν </a> <rp> ( </rp> <rt> gnothi seauton </rt> <rp> ) </rp></ruby>,意思是“认识你自己”。投入到一个大型的编程学习过程中难度不小。足够的自我认识是非常有必要的,这能够确保你做出的选择通向成功的机会非常大。你需要思考并诚实地回答接下来的这些问题:
* **你最喜欢什么样的学习方式?**你怎样学习效果最好?是通过阅读?听讲座?还是主要靠动手实践?你需要选择对你最有效的方法。不要仅仅因为某种学习方法流行,或者有其他人说过这种方法对他们有用,就选择它。
* **你的需要和要求是什么?**你为什么想学习如何编程?是因为你想换一份工作吗?如果是这样的话,你需要在多长时间内完成这个转变呢?你要牢记,这些是*需要的*,不是*想要的*。你可能*想要*下周就换份新工作,但是*需要*在接下来的一年内找到一份工作来供养你正在成长的家庭。当你在人生的道路上面临方向的抉择时,这样的时间安排特别重要。
* **你能获取的参考资料有哪些?**当然,重返大学并获得一份计算机科学专业的学位证书可能也不错,但是你必须对你自己实事求是面对现实。你的生活必须和你学习相适应。你能承受花费几个月的时间和不菲的费用去参加集训吗?你是否生活在一个可以提供学习机会的地方,比如提供技术性的聚会或者大学课程?你能获取到的参考资料会对你的学习过程产生巨大的影响。在打算学编程换工作前先调查好这些。
### 选择一门编程语言
当你打算开始你的编程学习之路和考虑你的选择的时候,请记住不管其他人说什么,选择哪门编程语言来开始你的编程学习*关系不大*。是的,是有些编程语言比其他的更流行。比如,根据一份调查研究,目前 JavaScript,Java,PHP, 和 Python 处于 [最受欢迎最流行的编程](https://stackoverflow.com/insights/survey/2017/#most-popular-technologies) 中的前排。但是现在正流行的编程语言有可能过几年就过时了,所以不用太纠结编程语言的选择。像那些方法,类,函数,条件,控制流程和其他的编程的概念思想等等,不管你选的哪门编程语言,它们的底层原理基本是一致的。只有语法和社区的最佳实践会变。因此你能够用 [Perl](https://learn.perl.org/tutorials/) 学习编程,也可以用 [Swift](http://shop.oreilly.com/product/0636920045946.do) 或者 [Rust](https://doc.rust-lang.org/book/)。作为一个程序员,你会在你的职业生涯里用很多不同的编程语言来工作。不要认为你被困在了编程语言的选择上。
### 试水
除非你已经涉足过这个行业,或者确信你愿意用余生来编程,否则我建议你在一头扎进去之前先用脚趾试试水温,判断这水适不适合你。这种工作不是每个人都能做的。在把全部希望都压在学习编程上之前,你可以先花少量的时间和金钱学习一小部分内容,看看自己是否会享受这种每周起码花 40 个小时写代码的生活。如果你不喜欢这种工作,你就不太可能坚持完成学习项目。即便你勉强完成了,之后的编程工作也会让你无比痛苦。人生苦短,就不要花人生三分之一的时间做你不喜欢的事了。
谢天谢地,软件开发不仅仅需要编程。熟悉编程概念和理解软件是怎么和他们结合在一起的是非常有用的,但是你不需要成为一个程序员也能在软件开发行业中找到一份报酬不菲的工作。在软件开发过程中,另外的重要角色有技术文档撰写人、项目经理、产品经理、测试人员、设计人员、用户体验设计者、运维/系统管理员和数据科学家等。软件成功的启动需要很多角色之间相互配合。不要觉得学习了编程就要求你成为一个程序员。你需要探索你的选择并确定哪个选择才是最适合你的。
### 参考的学习资料
你对学习参考资料的选择有哪些?可能正如你已经发现的那样,可供选择的参考资料非常多,尽管并非所有的资料在你所在的地区都能获得。
* **训练营**:最近这几年像 [App Academy](https://www.appacademy.io/) 和 [Bloc](https://www.bloc.io/) 这样的训练营越来越流行。训练营通常收费 $10K 或者更多,他们宣称在几周内就能够把一个学生培训成一个称职的程序员。在参加编程集训营前,你需要研究下你将要学习的项目能确保正如它所承诺的那样,在学生学完毕业后能够找到一个高薪的长期供职的职位。一方面花费了数目不小的钱财,而且时间也花费了不少——通常这些都是典型的全日制课程并且要求学生在接下来的连续几周里把其它的事先放在一边专心课程学习。然而时间金钱这两项不菲的消耗通常会使很多未来的程序员无法参加训练营。
* **社区学院/职业培训中心**:社区学院常常被那些考察学习编程途径的人所忽视,这实在可惜。你在社区学院或者职业培训中心接受的教育,可以和其他学习方式一样有效,而费用只是其中的一小部分。
* **国家/地方的培训项目**:许多地区都认识到在他们的地区增加技术投资的经济效益,并且已经制定了培训计划来培养受过良好教育和准备好的劳动力。培训项目的案例包括了 [Code Oregon](http://codeoregon.org/) 和 [Minneapolis TechHire](http://www.minneapolismn.gov/cped/metp/TechHire#start)。检查下你的州、省或自治区是否提供这样的项目。
* **在线训练**:许多公司和组织都提供在线技术培训项目。比如,[Linux 基金会](https://training.linuxfoundation.org/)致力于通过开源技术使人们获得成功。其他的像 [O'Reilly Media](http://shop.oreilly.com/category/learning-path.do)、[Lynda.com](https://www.lynda.com/) 和 [Coursera](https://www.coursera.org/) 在软件开发涉及的许多方面提供培训。[Codecademy](https://www.codecademy.com/) 提供对编程概念的在线介绍。每个项目的成本会有所不同,但大多数项目会允许你在你的日程安排中学习。
* **MOOC**:在过去的几年里,大规模开放在线课程(MOOC)的发展势头很猛。[哈佛](https://www.edx.org/school/harvardx)、[斯坦福](http://online.stanford.edu/courses)、[MIT](https://ocw.mit.edu/index.htm) 等世界一流大学一直在录制它们的课程,并免费提供在线学习。课程的自我指导性质可能并不适合所有人,但可获得的材料使其成为一个有价值的学习选择。
* **专业书籍**:许多人喜欢用书自学。这是相当经济的,在初步学习阶段后提供了现成的参考资料。尽管你可以通过像 [Safari](https://www.safaribooksonline.com/) 和 [Amazon](https://amazon.com/) 这样的在线服务订购和访问图书,但是也不要忘了检查你本地的公共图书馆。
### 网络支持
无论你选择哪一种学习资源,有网络支持都将获得更大的成功。与他人分享你的经历和挑战可以帮助你保持动力,同时为你提供一个放心的地方,去问那些你可能还没有足够自信到其他地方去问的问题。许多城镇都有当地的用户组,聚在一起讨论和学习软件技术。通常你可以在 [Meetup.com](https://www.meetup.com/) 找到它们。专门的兴趣小组,比如 [Women Who Code](https://www.womenwhocode.com/) 和 [Code2040](http://www.code2040.org/),在大多数城市地区经常举行会议和黑客马拉松活动,这是在你学习的时候结识他人并建立支持网络的好方式。一些软件会议还会举办“黑客日”,在那里你可以遇到有经验的软件开发人员,他们能够帮助你解决困扰你的问题。例如,每年的 [PyCon](https://us.pycon.org/) 会议都会留出几天时间让人们聚集在一起协作。一些项目,比如 [BeeWare](http://pybee.org/),利用这些冲刺日(sprint)来帮助新程序员学习并为项目做贡献。
你的支持网络不一定来自正式的聚会。一个小的学习小组同样能有效地保持你的学习积极性,而且组建起来很容易,在你最喜欢的社交网络上发个邀请就行。如果你所在的地区没有足够大的软件开发者社区来支撑多个聚会和用户组,这一点尤其有用。
### 开始学习编程的几个步骤
简单来说,既然你决定学习编程,可以遵循以下几个步骤,给自己一个尽可能大的成功机会:
1. 将你的需求/要求和可用资源列出清单
2. 搜寻你所在地区可用的选择
3. 排除不符合你的需求和资源的选择
4. 选择最符合你的需求、资源和学习方式的方案
5. 找到一个支持网络
务必牢记:你的学习过程永远不会结束。高速发展的软件产业,会导致新技术和新进展几乎每天都会出现。一旦你学会了编程,你就必须花时间去学习适应这些新的进步。你不能依靠你的工作来为你提供这种培训。只有你自己负责自己的职业发展,所以如果你想保持最新的技术和工作能力,你必须紧跟行业最新的技术。
祝你好运!
---
作者简介:
VM (Vicky) Brasseur - VM (aka Vicky) 是一个技术人员,也是项目、工作进程、产品和企业的经理。在她的长达 18 年的科技行业里,她曾是一名分析师、程序员、产品经理、软件工程经理和软件工程总监。目前,她是惠普企业上游开源开发团队的高级工程经理。她的博客是 anonymoushash.vmbrasseur.com,推特是 @vmbrasseur。
---
via: <https://opensource.com/article/17/4/how-get-started-learning-program>
作者:[VM (Vicky) Brasseur](https://opensource.com/users/vmbrasseur) 译者:[WangYueScream](https://github.com/WangYueScream) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | There's a lot of buzz lately about learning to program. Not only is there a [shortage of people](http://www.techrepublic.com/article/report-40-of-employers-worldwide-face-talent-shortages-driven-by-it/) compared with the open and pending positions in software development, programming is also a career with one of the [highest salaries](http://web.archive.org/web/20170328065655/http://www.businessinsider.com/highest-paying-jobs-in-america-2017-3/#-25) and [highest job satisfaction rates](https://stackoverflow.com/insights/survey/2017/#career-satisfaction). No wonder so many people are looking to break into the industry!
But how, exactly, do you do that? "**How can I learn to program?**" is a common question. Although I don't have all the answers, hopefully this article will provide guidance to help you find the approach that best suits your needs and situation.
## What's your learning style?
Before you start your learning process, consider not only your options, but also yourself. The ancient Greeks had a saying, [γνῶθι σεαυτόν](https://en.wikipedia.org/wiki/Know_thyself) (gnothi seauton), meaning "know thyself". Undertaking a large learning program is difficult. Self awareness is necessary to make sure you are making the choices that will lead to the highest chance of success. Be honest with yourself when you answer the following questions:
- **What is your preferred learning style?** How do you learn best? Is it by reading? Hearing a lecture? Mostly hands-on experimentation? Choose the style that is most effective for you. Don't choose a style because it's popular or because someone else said it worked for them.
- **What are your needs and requirements?** Why are you looking into learning how to program? Is it because you wish to change jobs? If so, how quickly do you need to do that? Keep in mind, these are *needs*, not *wants*. You may *want* a new job next week, but *need* one within a year to help support your growing family. This sort of timing will matter when selecting a path.
- **What are your available resources?** Sure, going back to college and earning a computer science degree might be nice, but you must be realistic with yourself. Your life must accommodate your learning. Can you afford—both in time and money—to set aside several months to participate in a bootcamp? Do you even live in an area that provides learning opportunities, such as meetups or college courses? The resources available to you will have a large impact on how you proceed in your learning. Research these before diving in.
## Picking a language
As you start your path and consider your options, remember that despite what many will say, the choice of which programming language you use to start learning simply does *not* matter. Yes, some languages are more popular than others. For instance, right now JavaScript, Java, PHP, and Python are among the [most popular languages](https://stackoverflow.com/insights/survey/2017/#most-popular-technologies) according to one study. But what is popular today may be passé next year, so don't get too hung up on choice of language. The underlying principles of methods, classes, functions, conditionals, control flow, and other programming concepts will remain more or less the same regardless of the language you use. Only the grammar and community best practices will change. Therefore you can learn to program just as well in [Perl](https://learn.perl.org/tutorials/) as you can in [Swift](http://shop.oreilly.com/product/0636920045946.do) or [Rust](https://doc.rust-lang.org/book/). As a programmer, you will work with and in many different languages over the course of your career. Don't feel you're "stuck" with the first one you learn.
## Test the waters
Unless you already have dabbled a bit and know for sure that programming is something you'd like to spend the rest of your life doing, I advise you to dip a toe into the waters before diving in headfirst. This work is not for everyone. Before going all-in on a learning program, take a little time to try out one of the smaller, cheaper options to get a sense of whether you'll enjoy the work enough to spend 40 hours a week doing it. If you don't enjoy this work, it's unlikely you'll even finish the program. If you do finish your learning program despite that, you may be miserable in your subsequent job. Life is too short to spend a third of it doing something you don't enjoy.
Thankfully, there is a lot more to software development than simply programming. It's incredibly helpful to be familiar with programming concepts and to understand how software comes together, but you don't need to be a programmer to get a well-paying job in software development. Additional vital roles in the process are technical writer, project manager, product manager, quality assurance, designer, user experience, ops/sysadmin, and data scientist, among others. Many different roles and people are required to launch software successfully. Don't feel that learning to program requires you to become a programmer. Explore your options and choose what's best for you.
## Learning resources
What are your options for learning resources? As you've probably already discovered, those options are many and varied, although not all of them may be available in your area.
- **Bootcamps**: Bootcamps such as [App Academy](https://www.appacademy.io) and [Bloc](https://www.bloc.io) have become popular in recent years. Often charging a fee of $10K USD or more, bootcamps advertise that they can train a student to become an employable programmer in a matter of weeks. Before enrolling in a coding bootcamp, research the program to make sure it delivers on its promises and is able to place its students in well-paying, long-term positions after graduation. The money is one cost, whereas the time is another—these typically are full-time programs that require the student to set aside any other obligations for several weeks in a row. These two costs often put bootcamps outside the budget of many prospective programmers.
- **Community college/vocational training center**: Community colleges often are overlooked by people investigating their options for learning to program, and that's a shame. The education you can receive at a community college or vocational training center can be as effective as other options, at a fraction of the cost.
- **State/local training programs**: Many regions recognize the economic benefits of boosting technology investments in their area and have developed training programs to create well-educated and -prepared workforces. Training program examples include [Code Oregon](http://codeoregon.org) and [Minneapolis TechHire](http://www.minneapolismn.gov/cped/metp/TechHire#start). Check to see whether your state, province, or municipality offers such a program.
- **Online training**: Many companies and organizations offer online technology training programs. Some, such as [Linux Foundation](https://training.linuxfoundation.org), are dedicated to training people to be successful with open source technologies. Others, like [O'Reilly Media](http://shop.oreilly.com/category/learning-path.do), [Lynda.com](https://www.lynda.com), and [Coursera](https://www.coursera.org) provide training in many aspects of software development. [Codecademy](https://www.codecademy.com) provides an online introduction to programming concepts. The costs of each program will vary, but most of them will allow you to learn on your schedule.
- **MOOCs**: MOOCs—Massive Open Online Courses—have really picked up steam in the past few years. World-class universities, such as [Harvard](https://www.edx.org/school/harvardx), [Stanford](http://online.stanford.edu/courses), [MIT](https://ocw.mit.edu/index.htm), and others have been recording their courses and making them available online for free. The self-directed nature of the courses may not be a good fit for everyone, but the material available makes this a valuable learning option.
- **Books**: Many people love self-directed learning using books. It's quite economical and provides ready reference material after the initial learning phase. Although you can order and access books through online services like [Safari](https://www.safaribooksonline.com) and [Amazon](https://amazon.com), don't forget to check your local public library as well.
## Support network
Whichever learning resources you choose, the process will be more successful with a support network. Sharing your experiences and challenges with others can help keep you motivated, while providing a safe place to ask questions that you might not feel confident enough to ask elsewhere yet. Many towns have local user groups that gather to discuss and learn about software technologies. Often you can find these listed at [Meetup.com](https://www.meetup.com). Special interest groups, such as [Women Who Code](https://www.womenwhocode.com) and [Code2040](http://www.code2040.org), frequently hold meetings and hackathons in most urban areas and are a great way to meet and build a support network while you're learning. Some software conferences host "hack days" where you can meet experienced software developers and get help with concepts on which you're stuck. For instance, every year [PyCon](https://us.pycon.org/) features several days of the conference for people to gather and work together. Some projects, such as [BeeWare](http://pybee.org), use these sprint days to assist new programmers to learn and contribute to the project.
Your support network doesn't have to come from a formal meetup. A small study group can be as effective at keeping you motivated to stay with your learning program and can be as easy to form as posting an invitation on your favorite social network. This is particularly useful if you live in an area that doesn't currently have a large community of software developers to support several meetups and user groups.
## Steps for getting started
In summary, to give yourself the best chance of success should you decide to learn to program, follow these steps:
- Gather your list of requirements/needs and resources
- Research the options available to you in your area
- Discard the options that do not meet your requirements and resources
- Select the option(s) that best suit your requirements, resources, and learning style
- Find a support network
Remember, though: Your learning process will never be complete. The software industry moves quickly, with new technologies and advances popping up nearly every day. Once you learn to program, you must commit to spending time to learn about these new advances. You cannot rely on your job to provide you this training. Only you are responsible for your own career development, so if you wish to stay up-to-date and employable, you must stay abreast of the latest technologies in the industry.
Good luck!
|
8,697 | 安卓编年史(29):Android 5.0 Lollipop——有史以来最重要的安卓版本(3) | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/ | 2017-07-16T09:21:53 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8697-1.html | 
毫不夸张地说,谷歌搜索在棒棒糖中无处不在。“持续开启语音识别”这项特性让用户可以在任何界面随时说出“OK Google”,即使是在息屏状态也没有问题。谷歌应用依然是谷歌的首要主屏,这项特性是自奇巧时代引入的。现在搜索栏也会显示在新的最近应用界面。
Google Now 依然是最左侧的主屏,但现在 Material Design 对它进行了大翻新,给了它一个色彩大胆的头部以及重新设计的排版。

Play 商店遵从了和其它棒棒糖应用相似的轨迹。它在视觉上焕然一新:大胆的色彩、新的排版,还有一个全新的布局。通常这里不会有什么新增功能,就只是给一切换件新马甲。

Play 商店的导航面板现在真的可以用于导航了,每个分类有各自的入口。棒棒糖也不再在操作栏放“更多”按钮了,取而代之的是一个独立的操作按钮(通常是搜索),并且去掉了导航栏中多余的选项。这给了用户一个单独的地方来查找项目,而不用在两个菜单中寻找搜索的地方。
棒棒糖还给了应用让状态栏透明的能力。这让操作栏的颜色可以渗透到状态栏,让它只比周围的界面暗一点点。一些界面甚至在顶部使用了全幅英雄图片,同时显示到了状态栏上。
[](https://cdn.arstechnica.net/wp-content/uploads/2016/10/2-1.jpg)
谷歌日历完全重写了,获得了很多新设计,也失去了很多特性。你不再能够双指缩放来调整时间视图,月份视图也从手机上消失了,周视图从七天退化成了五天的视图。在用户抱怨之后,谷歌将会花费接下来几个版本的时间来重新添加回这里面的一些特性。“谷歌日历”还加强了“谷歌”部分,去除了直接在应用内添加第三方账户的能力。非谷歌账户现在需要从 Gmail 来添加。
尽管如此,它看起来还是很棒。在一些视图上,月份开头带有头图,就像是真实的纸质日历。带有地点的事件会附带显示来自那个地点的照片。举个例子,我的“去往旧金山”会显示金门大桥。谷歌日历还会从 Gmail 获取事件并在你的日历中显示。

其它应用都可以套用基本相同的描述:功能上没有太多新鲜的,但新设计换掉了奇巧中的灰色以大胆,明亮的色彩。环聊获得了收取 Google Voice 信息的能力,时钟应用的背景颜色会随着每天时间的变化而改变。
#### 任务调度器鞭策应用生态成型
谷歌决定在棒棒糖中实施“伏特计划(Project Volta)”,关注电量使用问题。谷歌从“电池史学家(Battery Historian)”开始,为自己和开发者创建了更多的电池追踪工具。这个 python 脚本获取所有的安卓电量日志数据,并转换成可读,交互式的图表。在这个新诊断工具的帮助下,谷歌将后台任务标记为主要的耗电大户。
在 2014 年的 I/O 大会上,这家公司注意到启用飞行模式并关闭屏幕可以让安卓手机待机将近一个月。但是,如果用户全部启用并使用设备,它们没法坚持一整天。结论就是如果你能让一切都停止活动,你的电池表现就能好得多。
因此,谷歌创建了一个新 API,称作“JobScheduler(任务调度器)”,这是个新的针对安卓后台任务的警察。在任务调度器出现之前,每个单独的应用为它自己的后台进程负责,这意味着每个应用会独立唤醒处理器和调制解调器,检查连通性、组织数据库、下载更新以及上传日志。所有东西都有它自己独立的定时器,所以你的手机会一直被唤醒。有了任务调度器,后台任务从无组织的混乱,转变为统一的批处理,有有序的后台进程处理窗口。
任务调度器可以让应用指定它们的任务所需的条件(连通性、Wi-Fi、接入电源等等),并在这些条件满足的时候得到通知。这就像是推送邮件和每五分钟检查一次邮件的区别……只不过这次针对的是任务条件。谷歌还开始为后台任务推行一种“懒”实现:如果某件事情可以推迟到设备处于 Wi-Fi、接入电源以及待机状态时再做,那它就应该等到那时候执行。你现在可以看到这一策略的成果:把安卓手机连上 Wi-Fi 并接入电源,只有在*这种条件下*它才会开始下载应用更新。你通常不需要立即下载应用更新,最好等到用户拥有不限量的电源和网络时再进行。
#### 开机设置获得面向未来的新设计

开机设置经过了大翻新,它不止是为了跟上 Material Design 指南,还是“面向未来”的,这样不管未来谷歌采用什么新的登录和验证方案,它都能够适应。记住,写“安卓编年史”的部分原因就是一些旧版安卓已经不再能工作了。这些年来,谷歌已经为用户升级了加密更强的验证方案以及两步验证,但添加这些新的登录要求破坏了旧客户端的兼容性。很多安卓特性要求访问谷歌云设施,所以你没法登录的话,像安卓 1.0 的 Gmail 这样的应用就没法工作了。

在棒棒糖中,开机设置工作的前几个界面和之前的很像。你可以看到“欢迎使用安卓界面”以及一些设置数据和 Wi-Fi 连接的选项。但在这个界面之后就有了变化。一旦棒棒糖连接到了互联网,它会连接到谷歌的服务器来“检查更新”。这并不是检查系统或应用的更新,是在检查即将执行的设置工作的更新。安卓下载了最新版本的设置,*然后*它会要求你登录你的谷歌账户。
在今天分别登录棒棒糖和奇巧时,这个好处很明显。得益于可升级的设置流程,“2014 年”的棒棒糖系统可以适应 2016 年的改进,比如谷歌新的“[触碰登录](http://arstechnica.com/gadgets/2016/06/googles-new-two-factor-authentication-system-tap-yes-to-log-in/)”双重认证。奇巧在这里就卡住了,但幸运的是它有个“浏览器登录”可以解决双重认证的问题。

棒棒糖的开机设置走向了一个极端:把你的谷歌账户名和密码分别放在单独的页面上。[谷歌讨厌密码](https://www.theguardian.com/technology/2016/may/24/google-passwords-android),并提供了一些[实验性的方式](https://www.theguardian.com/technology/2016/may/24/google-passwords-android)来在登录谷歌时不使用密码。如果你的账户设置为不使用密码,棒棒糖可以跳过密码页面。如果你设置了双重认证,设置流程就会进入“输入双因素码”的环节。每个登录部分都在单独的页面上,所以设置流程是模块化的,页面可以按要求添加或移除。

开机设置还给了用户对应用还原的控制权。安卓在这之前也提供了一些数据还原功能,但那让人摸不着头脑,因为它只会在没有任何用户输入的情况下自行挑选你的一台设备并开始恢复。现在,开机设置流程中的一个新界面让用户可以看到云端的设备配置集合,并从中选择合适的那个。你还可以选择要从那个备份中还原哪些应用。备份的内容包括应用、你的主屏布局,以及一些小设置(如 Wi-Fi 热点),但它并不是完整的应用数据备份。
#### 设置

设置从暗色主题切换到了亮色。除了新外观,它还有方便的搜索功能。每个界面用户都能访问放大镜,让他们可以更容易地找到难找的选项。

这里有一些和伏特计划有关的设置。“网络限制”允许用户将一个 Wi-Fi 连接标记为按流量计费的,让任务调度器在处理后台任务时避免使用它。同样作为伏特计划的一部分,新增了一个“节电模式”。它会限制后台任务并压低 CPU 性能,给你更长的续航但更慢的设备。

多用户支持已经在安卓平板上出现了一段时间,但棒棒糖终于将它带到了安卓手机上。设置界面添加了一个新的“用户”页面,让你添加额外的账户或设置一个“访客”账户。访客账户是临时的——它们可以一键轻松删除。访客账户不会像正常账户那样尝试下载关联到账户的每个应用,因为它注定不久后就会被删除。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/29/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,698 | 安卓编年史(30):Android TV 和 Android Auto | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/30/ | 2017-07-17T10:39:00 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8698-1.html | 
### Android TV

2014 年 11 月,谷歌公布了安卓 TV,安卓继续着它征服一切带屏幕设备的征程。这家公司的一个部门之前在蜂巢时代曾尝试用谷歌 TV 掌控客厅,但这次是完全来自安卓团队的新点子。安卓 TV 使用安卓 5.0 棒棒糖,并给了它一个为室内最大屏幕设计的 Material Design 界面。首发硬件上,谷歌选择了华硕来代工“Nexus Player”,这是一个性能不足但够用的机顶盒。
安卓 TV 专注于三样东西:视频,音乐,以及游戏。你可以用一个小遥控器控制电视,它只有四向方向键以及四个按钮,后退、主页、麦克风以及播放/暂停。至于游戏,华硕克隆了一个 Xbox 360 手柄,给了玩家一堆按键和一对摇杆。

安卓 TV 的界面很简单。大幅的媒体略缩图占据了屏幕,可以横向滚动,这些媒体中有 Youtube、Google Play、Netflix、Hulu 以及其它来源。这些略缩图实际上是来自不同媒体源的“推荐”项目,它不是简单地将一个应用的内容填满屏幕。在那下面你可以直接访问应用和设置。

语音界面很赞。你可以告诉安卓 TV 播放任意你想要的内容,而不用通过图形界面去寻找。你还能在内容里获得更聪明的搜索结果,比如“显示和 Harrison Ford 有关的电影”。每个应用都可以给索引服务提供内容,而不是简单的应用集合。所有的这些应用都在 Play 商店有一个 TV 版本。开发者对安卓 TV 的特别支持还包括谷歌 cast 协议,允许用户从他们的手机和平板向电视投射视频和音乐。
### Android 5.1 Lollipop
安卓 5.1 在 2015 年 3 月发布,这是安卓最小的更新。它的目的主要是[修复 Nexus 6 上的加密性能问题](http://arstechnica.com/gadgets/2015/03/a-look-at-android-5-1-speed-security-tweaks/),还添加了设备保护和一些小的界面调整。


设备保护是唯一的新增界面,采用的是在开机设置显示新警告的形式。这个特性在设备被偷了之后“保护你的设备不被再次利用”。一旦设置了锁屏,设备保护就开始介入,并且会在擦除设备的时候被触发。如果你以机主正常的方式擦除设备——解锁手机并从设置选择“重置”——什么都不会发生。但如果你通过开发者工具擦除,设备会在下次开机设置的时候要求你“验证之前同步的谷歌账户”。


这个想法是基于:开发者会知道之前登录的谷歌帐号凭证,但小偷不知道,于是小偷会卡在设置这一步。在现实中,这引发了[一场猫鼠游戏](http://www.androidpolice.com/2016/08/11/rootjunky-discovers-frp-bypass-method-newer-samsung-phones/):人们寻找漏洞来绕过设备保护,而谷歌则获知这些漏洞并加以修补。OEM 的定制也引入了一些有趣的、可以绕过设备保护的漏洞。

还有很多特别微小的界面改动,我们没法一一列在上面的图中。除了上面的图片描述之外没什么可说的了。
### Android Auto


同样是在 2015 年 3 月,谷歌发布了“安卓 Auto”,一个基于安卓界面的全新车载娱乐信息系统。安卓 Auto 是谷歌面对苹果的 CarPlay 交出的答卷,它们很多地方都很相似。安卓 Auto 不完全是个操作系统——它是一个运行在你手机上的“投射”界面,使用车载显示屏作为一块外置显示器。运行安卓 Auto 意味着拥有一款兼容的汽车,在手机上(安卓 5.0 及以上版本)安装了安卓 Auto 应用,并用 USB 线将手机连接到汽车。

安卓 Auto 将谷歌的 Material Design 界面带到了你已有的车载系统上,给这个通常在软件设计上苦苦挣扎的平台带来了顶级的软件设计。安卓 Auto 是对安卓界面的完全重新设计,以遵循世界各地针对车载系统的无数规定。它没有通常充满应用图标的“主屏”,安卓的导航栏也变为了一个常驻的应用启动器(几乎像是个标签页式的界面)。

算下来,特性实际上只有四部分。导航栏从左到右是:谷歌地图,一个拨号/联系人界面,一个混合了 Google Now 和通知面板的“主屏”部分,还有一个音乐页面。最后一个按钮是“OEM”页面,让你可以退出安卓 Auto,返回到自带的车载系统(这里也是放置汽车制造商定制特性的地方)。安卓 Auto 还带有谷歌的语音命令系统,以一个麦克风按钮的形式显示在屏幕右上角。


安卓 Auto 的应用不多。它只允许两类应用:音乐应用和信息应用。车载信息娱乐系统的相关规定意味着自定义界面不是个好选择。信息应用没有界面,只是接入语音系统;音乐应用也不能对界面做太多改动,只能调整谷歌默认“音乐应用”模板的颜色和图标。真正重要的,是在不让驾驶员过多分心的前提下送达音乐和消息;一般的应用在这里派不上用场。
安卓 Auto 在它的最初发布之后就没看到多少更新了,但已经逐渐有很多汽车制造商支持了。到了 2017 年,将会有[超过 100](http://www.usatoday.com/story/money/cars/2016/10/11/android-auto-comes-more-than-100-car-models-2017/91884366/) 款支持的汽车型号。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/30/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo/) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,699 | 三种在 Linux 上创建或扩展交换分区的简单方法 | http://www.2daygeek.com/add-extend-increase-swap-space-memory-file-partition-linux/ | 2017-07-16T10:18:00 | [
"内存",
"交换分区"
] | https://linux.cn/article-8699-1.html | 
用户可以在任何 Linux 操作系统的安装过程中或者是其它必要的时候创建交换空间。如果你在安装 Linux 的时候忘记了创建或是你想要再增加交换分区的空间,你随时都可以再创建或增加。
有时候在你安装后要升级 RAM 的时候需要增加一点交换分区的空间,比如你要将你的系统的 RAM 从 1GB 升级到 2GB,那么你就不得不将你的交换分区空间也升级一下(从 2GB 到 4GB),这是因为它使用的容量是物理 RAM 的双倍容量。(LCTT 译注:其实这里是个误区,交换分区不一定非得是双倍的物理内存容量,只是惯例如此。事实上,如果你的物理内存足够的话,你完全可以不用交换分区——在这里的情形下,或许你增加了物理内存,就没必要增加交换分区大小了。)
交换空间是当物理内存(RAM 随机存取存储器)的用量已满时,被保留用作虚拟内存的磁盘上的空间。 如果系统在 RAM 满载时需要更多的内存资源,内存中的非活动页面将被移动到交换空间,这样可以帮助系统运行应用程序更多的时间,但不应该把它当做 RAM 的扩展。
建议你创建一个专用的交换分区,但是如果你没有可用的分区,那么可以使用交换文件,或交换分区和交换文件的组合。 交换空间通常建议用户至少 4 GB,用户也可以根据自己的要求和环境创建交换空间。
我发现大部分 VM 和 云服务器都没有交换分区,所以在这种情况下,我们可以使用以下三种方法创建,扩展或增加交换空间。
### 如何检测当前交换分区大小
通过 [free](http://www.2daygeek.com/free-command-to-check-memory-usage-statistics-in-linux/) & `swapon` 命令来检测当前的交换分区空间的大小。
```
$ free -h
total used free shared buff/cache available
Mem: 2.0G 1.3G 139M 45M 483M 426M
Swap: 2.0G 655M 1.4G
$ swapon --show
NAME TYPE SIZE USED PRIO
/dev/sda5 partition 2G 655.2M -1
```
上面的输出显示了当前的交换分区空间是 `2GB` 。
### 方法 1 : 通过 fallocate 命令创建交换文件
`fallocate` 程序是立即创建预分配大小的文件的最佳方法。
下面这个命令会创建一个 1GB 大小的 `/swapfile`。
```
$ sudo fallocate -l 1G /swapfile
```
检查一下创建的文件的大小是否正确。
```
$ ls -lh /swapfile
-rw-r--r-- 1 root root 1.0G Jun 7 09:49 /swapfile
```
将该文件的权限设置为 `600` 这样只有 root 用户可以访问这个文件。
```
$ sudo chmod 600 /swapfile
```
通过运行以下的命令来将此文件转换为交换文件。
```
$ sudo mkswap /swapfile
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=cda50e0e-41f3-49c7-af61-b8cb4a33a464
```
通过运行以下的命令来使交换文件生效。
```
$ sudo swapon /swapfile
```
将新创建的交换文件添加到 `fstab` 文件中,这样交换分区空间的修改即使在重启后也可以生效。
```
$ vi /etc/fstab
/swapfile swap swap defaults 0 0
```
检查一下新创建的交换文件。
```
$ swapon --show
NAME TYPE SIZE USED PRIO
/dev/sda5 partition 2G 657.8M -1
/swapfile file 1024M 0B -2
```
现在我可以看到一个新的 1GB 的 `/swapfile` 文件了。重启系统以使新的交换文件生效。
### 方法 2 : 通过 dd 命令来创建交换文件
`dd` 命令是另一个实用程序,可以帮助我们立即创建预分配大小的文件。
以下 dd 命令将创建 1GB 的 `/swapfile1`。
```
$ sudo dd if=/dev/zero of=/swapfile1 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 16.6154 s, 64.6 MB/s
```
**详解:**
* `if=/dev/zero` 是输入文件,`/dev/zero` 是类 Unix 操作系统中的一个特殊文件,它提供从它读取的尽可能多的空字符(ASCII NUL,0x00)。
* `of=/swapfile1` 设置输出文件。
* `bs=1G` 一次性读写的大小为 1GB
* `count=1` 仅复制一个输入块
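顺带一提,`bs=1G` 会让 `dd` 一次性分配 1GB 的缓冲区;如果机器内存紧张,可以换用较小的块分多次写入,总量不变。下面用一个 16MiB 的小文件在 `/tmp` 下演示这种写法(仅为示意;真正创建交换文件时应改为 `of=/swapfile1 bs=1M count=1024`,并以 root 运行):

```shell
# 以 1MiB 为块写 16 次,得到 16MiB 的演示文件(内存占用远小于一次写 1G)
dd if=/dev/zero of=/tmp/swapdemo bs=1M count=16 2>/dev/null
# 确认文件大小:应为 16 * 1024 * 1024 = 16777216 字节
stat -c %s /tmp/swapdemo
```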
检查一下创建的文件的大小是否正确。
```
$ ls -lh /swapfile1
-rw-r--r-- 1 root root 1.0G Jun 7 09:58 /swapfile1
```
将该文件的权限设置为 `600` 这样只有 root 用户可以访问这个文件。
```
$ sudo chmod 600 /swapfile1
```
通过运行以下的命令来将此文件转换为交换文件。
```
$ sudo mkswap /swapfile1
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=96def6d7-b2da-4954-aa72-aa32316ec993
```
通过运行以下的命令来使交换文件生效。
```
$ sudo swapon /swapfile1
```
将新创建的交换文件添加到 `fstab` 文件中,这样交换分区空间的修改即使在重启后也可以生效。
```
$ vi /etc/fstab
/swapfile1 swap swap defaults 0 0
```
检查新创建的交换文件。
```
$ swapon --show
NAME TYPE SIZE USED PRIO
/dev/sda5 partition 2G 1.3G -1
/swapfile file 1024M 0B -2
/swapfile1 file 1024M 0B -3
```
现在我可以看到一个新的 1GB 的 `/swapfile1` 了。重启系统以使新的交换文件生效。
### 方法 3 : 通过硬盘分区来创建交换文件
我们也推荐通过硬盘分区的方式来创建交换分区。
如果你已经在你的另一个硬盘上通过 `fdisk` 命令创建了一个新的分区,假设我们已经创建了一个叫做 `/dev/sda4` 的分区。
使用 `mkswap` 命令来将这个分区转换成交换分区。
```
$ sudo mkswap /dev/sda4
```
通过运行以下命令来使交换分区生效。
```
$ sudo swapon /dev/sda4
```
把新增的交换分区添加到 `fstab` 文件中,这样即使是重启了系统,交换分区的修改也能生效。
```
$ vi /etc/fstab
/dev/sda4 swap swap defaults 0 0
```
检查新创建的交换分区。
```
$ swapon --show
NAME TYPE SIZE USED PRIO
/dev/sda5 partition 2G 1.3G -1
/swapfile file 1024M 0B -2
/swapfile1 file 1024M 0B -3
/dev/sda4 partition 1G 0B -4
```
我可以看到新的交换分区 1GB 的 `/dev/sda4`。重启系统就可以使用新的交换分区了。
(题图:Pixabay,CC0)
---
via: <http://www.2daygeek.com/add-extend-increase-swap-space-memory-file-partition-linux/>
作者:[2DAYGEEK](http://www.2daygeek.com/author/2daygeek/) 译者:[chenxinlong](https://github.com/chenxinlong) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,700 | Samba 系列(十三):如何在 Samba4 AD 中使用 iRedMail 配置 Thunderbird | https://www.tecmint.com/configure-thunderbird-with-iredmail-for-samba4-ad-ldap/ | 2017-07-16T10:28:35 | [
"Samba"
] | https://linux.cn/article-8700-1.html | 
本教程将指导你如何使用 iRedMail 服务器配置 Mozilla Thunderbird 客户端,以便通过 IMAPS 和 SMTP 提交协议发送和接收邮件,如何使用 Samba AD LDAP 服务器设置联系人数据库以及如何配置其他相关的邮件功能,例如通过 LDAP 数据库离线副本启用 Thunderbird 联系人。
安装和配置 Mozilla Thunderbird 客户端的过程适用于安装在 Windows 或 Linux 操作系统上的 Thunderbird 客户端。
#### 要求
1. [如何在 CentOS 7 上安装 iRedMail 集成到 Samba4 AD](/article-8567-1.html)
2. [如何配置和集成 iRedMail 服务到 Samba4 AD DC](/article-8673-1.html)
3. [将 iRedMail Roundcube 与 Samba4 AD DC 集成](/article-8678-1.html)
### 第一步:为 iRedMail 服务器配置 Thunderbird
1、 在安装完成 Thunderbird 邮件客户端之后,点击启动器或者快捷方式打开程序,在首屏的“系统集成”窗口中勾选 E-mail,然后点击“跳过集成”按钮继续。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Thunderbird-System-Integration.png)
*Thunderbird 系统集成*
2、 在欢迎界面点击跳过并使用我已存在的邮件按钮添加你的名字、你的 Samba 帐户邮件地址以及密码,检查记住密码区域并点击继续按钮启动你的邮箱帐户设置。
在 Thunderbird 客户端尝试识别由 iRedMail 服务器提供的正确的IMAP设置后,点击手动配置按钮手动设置 Thunderbird。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Thunderbird-Mail-Account-Setup.png)
*Thunderbird 邮箱帐户设置*
3、 邮件帐户设置窗口展开后,通过填入正确的 iRedMail 服务器 FQDN 来手动编辑 IMAP 和 SMTP 设置,为两个邮件服务填上安全端口(IMAPS 为 993,SMTP 提交为 587),并为每个端口选择合适的 SSL 通信通道和验证方式,然后点击“完成”结束设置。可以参考下面的图片。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Thunderbird-iRedMail-Settings.png)
*Thunderbird iRedMail 设置*
4、 由于你的 iRedMail 服务器使用自签名证书,屏幕上应会显示一个新的“安全异常”窗口。点击永久存储此异常并按确认安全异常按钮添加此安全性异常,Thunderbird 客户端应该就被成功配置了。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Thunderbird-Security-Exception.png)
*Thunderbird 安全异常*
你会看到你的域帐号收到的所有邮件,并且能够向你的域或其他域的帐号发送邮件,也能从它们那里接收邮件。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Domain-Mails-Inbox.png)
*域邮箱收件箱*
### 第二步:使用 Samba AD LDAP 设置 Thunderbird 联系人数据库
5、 为了让 Thunderbird 客户端查询 Samba AD LDAP 数据库中的联系人,点击“设置”菜单,在左边面板右键单击您的帐户,如下图片所示找到 “Composition & Addressing → Addressing → Use a different LDAP server → Edit Directories”
[](https://www.tecmint.com/wp-content/uploads/2017/05/Thunderbird-Samba-AD-LDAP-Settings.png)
*Thunderbird Samba AD LDAP 设置*
[](https://www.tecmint.com/wp-content/uploads/2017/05/Thunderbird-Composition-Addressing-Settings.png)
*Thunderbird Composition & Addressing 设置*
6、 LDAP 目录服务器窗口应该带开了,点击添加按钮并将下面的内容填写到目录服务器属性窗口中:
在 “常规” 选项卡上添加此对象的描述性名称,添加你的域的名称或 Samba 域控制器的 FQDN,你的域的基本 DN 形式是 “dc=你的域,dc=tld”,LDAP 端口号 389,vmail 绑定 DN 帐户用于以 [vmail@your\_domain.tld](mailto:vmail@your_domain.tld) 的形式查询 Samba AD LDAP 数据库。
使用下面的截图作为指导:
[](https://www.tecmint.com/wp-content/uploads/2017/05/Directory-Server-Properties.png)
*目录服务器属性*
7、 在下一步中,从目录服务器属性进入高级选项卡,并在搜索过滤栏添加下面的内容:
```
(&(mail=*)(|(&(objectClass=user)(!(objectClass=computer)))(objectClass=group)))
```
[](https://www.tecmint.com/wp-content/uploads/2017/05/Add-Search-Filter.png)
*添加搜索过滤*
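顺带一提,在把过滤器填进 Thunderbird 之前,也可以先在命令行里验证它是否能查出预期的条目(示意;`ldapsearch` 来自 OpenLDAP 客户端工具,其中的域名、域控和绑定帐号都是占位的假设值,应换成你自己的):

```shell
# 把上面的搜索过滤器存成变量,便于复用
FILTER='(&(mail=*)(|(&(objectClass=user)(!(objectClass=computer)))(objectClass=group)))'
echo "$FILTER"
# 实际查询(需要能访问域控):
# ldapsearch -x -H ldap://dc1.example.com:389 -D 'vmail@example.com' -W \
#   -b 'dc=example,dc=com' "$FILTER" cn mail
```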
让其他的设置保持默认,并点击 OK 按钮来应用更改,再次点击 OK 按钮关闭 LDAP 目录服务器窗口,在账户设置界面点击 OK 关闭窗口。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Select-LDAP-Directory-Server.png)
*选择 LDAP 目录服务器*
8、 要测试 Thunderbird 是否能够向 Samba AD LDAP 数据库请求联系人,点击上方的地址簿图标,选择之前创建的 LDAP 数据库名。
添加绑定 DN 帐户密码来查询 AD LDAP 服务器,勾选使用密码管理器记住密码,然后点击确定按钮保存更改并关闭窗口。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Thunderbird-Samba-AD-LDAP-Testing.png)
*Thunderbird Samba AD LDAP 测试*
9、 使用上面的搜索框搜索 Samba AD 联系人,并提供一个域名帐户名。注意没有在 AD E-mail 字段声明的邮件地址的 Samba AD 帐户不会在 Thunderbird 地址簿搜索中列出。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Search-Samba-AD-Mail-Contacts.png)
*搜索 Samba AD 邮件联系人*
10、 要在编写电子邮件时搜索联系人,请单击视图→联系人侧边栏或按 F9 键打开 “联系人” 面板。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Search-Mail-Contact-in-Thunderbird.png)
*在 Thunderbird 中搜索联系人*
11、 选择合适的地址簿,你应该能够搜索并添加收件人的电子邮件地址。发送第一封邮件时,会出现一个新的安全警报窗口。点击确认安全例外,邮件应该就能发送到收件人地址中了。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Send-Mail-in-Thunderbird.jpg)
*在 Thunderbird 发送邮件*
12、 如果你想通过仅针对特定 AD 组织单位的 Samba LDAP 数据库搜索联系人,请从左边面板编辑你的目录服务器名称的地址簿,点击属性并添加自定义的 Samba AD OU,如下所示。
```
ou=your_specific_ou,dc=your_domain,dc=tld
```
[](https://www.tecmint.com/wp-content/uploads/2017/05/Search-Contacts-in-Samba-LDAP-Database.png)
*Samba LDAP 数据库中搜索联系人*
### 第三步:设置 LDAP 离线副本
13、 要为 Thunderbird 配置 Samba AD LDAP 离线副本,请点击“地址簿”按钮,选择你的 LDAP 通讯录,打开“目录服务器属性” -> “常规” 选项卡,将端口号更改为 3268。
接着切换到离线选项卡并点击“现在下载”按钮开始在本地复制 Samba AD LDAP 数据库。
[](https://www.tecmint.com/wp-content/uploads/2017/05/Setup-LDAP-Offline-Replica-in-Thunderbird.png)
*在 Thunderbird 设置 LDAP 离线副本*
[](https://www.tecmint.com/wp-content/uploads/2017/05/Download-Samba-LDAP-Database-Offline.png)
*为离线下载 LDAP 数据库*
当联系人同步完成后,你会收到“复制成功”(Replication succeeded)的通知。点击 OK 并关闭所有窗口。即使在无法访问 Samba 域控制器的情况下,你仍然可以以离线方式搜索 LDAP 联系人。
---
作者简介:
我是一个电脑上瘾的家伙,开源和基于 linux 的系统软件的粉丝,在 Linux 发行版桌面、服务器和 bash 脚本方面拥有大约4年的经验。
---
via: <https://www.tecmint.com/configure-thunderbird-with-iredmail-for-samba4-ad-ldap/>
作者:[Matei Cezar](https://www.tecmint.com/author/cezarmatei/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | This tutorial will guide you on how to configure Mozilla Thunderbird client with an iRedMail server in order to send and receive mail via IMAPS and SMTP submission protocols, how to setup contacts database with Samba AD LDAP server and how to configure other related mail features, such as enabling Thunderbird contacts via LDAP database offline replica.
The process of installing and configuring Mozilla Thunderbird client described here is valid for Thunderbird clients installed on Windows or Linux operating systems.
#### Requirements
- [How to Configure and Integrate iRedMail Services to Samba4 AD DC](https://www.tecmint.com/integrate-iredmail-to-samba4-ad-dc-on-centos-7/)
- [Integrate iRedMail Roundcube with Samba4 AD DC](https://www.tecmint.com/integrate-iredmail-roundcube-with-samba4-ad-dc/)
### Step 1: Configure Thunderbird for iRedMail Server
**1.** After installing Thunderbird mail client, hit on the launcher or shortcut to open the program and on the first screen check E-mail **System Integration** and click on **Skip Integration** button to continue.

**2.** On the welcome screen hit on **Skip this and use my existing mail** button and add your name, your Samba account e-mail address and password, check **Remember password** field and hit on **Continue** button to start your mail account setup.
After Thunderbird client tries to identify the correct IMAP settings provided by iRedMail server hit on **Manual** config button to manually setup Thunderbird.

**3.** After the Mail Account Setup window expands, manually edit IMAP and SMTP settings by adding your proper iRedMail server FQDN, add secured ports for both mail services (993 for IMAPS and 587 for submission), select the proper SSL communication channel for each port and authentication and hit **Done** to complete the setup. Use the below image as a guide.

**4.** A new Security Exception window should appear on your screen due to the Self-Signed Certificates your iRedMail server enforces. Check on **Permanently store this exception** and hit on **Confirm Security Exception** button to add this security exception and the Thunderbird client should be successfully configured.

You will see all received mail for your domain account and you should be able to send or receive mail to and from your domain or other domain accounts.

### Step 2: Setup Thunderbird Contacts Database with Samba AD LDAP
**5.** In order for Thunderbird clients to query Samba AD LDAP database for contacts, hit on **Settings** menu by right clicking on your account from the left plane and navigate to **Composition & Addressing → Addressing → Use a different LDAP server → Edit Directories** button as illustrated on the below images.


**6.** The **LDAP Directory Servers** windows should open by now. Hit on **Add** button and fill **Directory Server Properties** windows with the following content:
On **General** tab add descriptive name for this object, add the name of your domain or the FQDN of a Samba domain controller, the base DN of your domain in the form **dc=your_domain,dc=tld**, LDAP port number 389 and the vmail Bind DN account used to query the Samba AD LDAP database in the form **vmail@your_domain.tld**.
Use the below screenshot as a guide.

**7.** On the next step, move to **Advanced** tab from **Directory Server Properties**, and add the following content in Search filter filed:
(&(mail=*)(|(&(objectClass=user)(!(objectClass=computer)))(objectClass=group)))

Leave the rest of the settings as default and hit on **OK** button to apply changes and again on **OK** button to close LDAP Directory Servers window and **OK** button again on **Account Settings** to close the window.

**8.** To test if Thunderbird client can query Samba AD LDAP database for contacts, hit on the upper **Address Book** icon, select the name of the LDAP database created earlier.
Add the password for the Bind DN account configured to interrogate the AD LDAP server (**vmail@your_domain.tld**), check **Use Password Manager** to remember the password and hit **OK** button to reflect changes and close the window.

**9.** Search for a Samba AD contact by using the upper search filed and suppling a domain account name. Be aware that Samba AD accounts with no e-mail address declared in their AD E-mail field will not be listed in Thunderbird Address Book searches.

**10.** To search for a contact while composing an e-mail, click on **View → Contacts Sidebar** or press **F9** key to open Contacts panel.

**11.** Select the proper Address Book and you should be able to search and add an e-mail address for your recipient. When sending the first mail, a new security alert window should appear. Hit on **Confirm Security Exception** and the mail should be sent to your recipient e-mail address.

**12.** In case you want to search contacts through Samba LDAP database only for a specific AD Organizational Unit, edit the Address Book for your Directory Server name from the left plane, hit on **Properties** and add the custom Samba AD OU as illustrated on the below example.
ou=your_specific_ou,dc=your_domain,dc=tld

### Step 3: Setup LDAP Offline Replica
**13.** To configure Samba AD LDAP offline replica for Thunderbird hit on **Address Book** button, select your **LDAP Address Book**, open **Directory Server Properties** -> **General** tab and change the port number to 3268.
Then switch to **Offline** tab and hit on **Download Now** button to start replicate Samba AD LDAP database locally.


When the process of synchronizing contacts finishes you will be informed with the message Replication succeeded. Hit **OK** and close all windows. In case Samba domain controller cannot be reached you can still search for LDAP contacts by working in offline mode.
|
8,701 | 如何为安卓开发搭建一个持续集成(CI)服务器 | https://medium.com/@pamartineza/how-to-set-up-a-continuous-integration-server-for-android-development-ubuntu-jenkins-sonarqube-43c1ed6b08d3#.x6jhcpg98 | 2017-07-17T09:09:00 | [
"Android",
"CI",
"开发"
] | https://linux.cn/article-8701-1.html | 我最近买了新 MacBook Pro 作为我的主要的安卓开发机,我的老式的 MacBookPro(13 寸,2011 年后期发布,16GB 内存, 500G 的固态硬盘,内核是 i5,主频 2.4GHz,64 位),我也没卖,我清理了它,并把他变成了一个 MacOS 和Ubuntu 双引导的持续集成(CI)服务器。

写这篇文章我主要想总结一下安装步骤,好给自己以后作参考,当然,这篇文章也是给同行看的,只要他们感兴趣。好了,现在开始:
1. 配置一个新的 Ubuntu ,以便运行 Android SDK。
2. 安装 Jenkins CI 服务来拉取、编译、运行测试托管在 Github 的多模块 Android 项目。
3. 安装 Docker 并在容器中运行 MySQL 服务器和 SonarQube。来运行由 Jenkins 触发的静态代码分析。
4. Android app 配置需求。
### 第一步-安装 Ubuntu:
我将使用 Ubuntu 作为持续集成服务器的操作系统,因为 Ubuntu 有一个强大的社区,它可以解决你遇到的任何问题,而且我个人推荐总是使用 LTS 版本,当前是 16.04 LTS。已经有很多教程教大家在各种硬件上怎么安装了,我就不废话了,贴个下载链接就行了。
* [下载 Ubuntu Desktop 16.04 LTS](https://www.ubuntu.com/download/desktop)
有人可能很奇怪:用什么桌面版,服务器版多好。额,这个嘛,萝卜青菜,各有所爱。我倒不在乎 UI 占用的那点运算资源。相反,用那一点资源换来生产力的提升我觉得挺值的。
### 第二步-远程管理:
#### SSH 服务器
Ubuntu 桌面版默认安装并没有 ssh 服务器,所以你想远程通过命令行管理的话就只好自己安装。
```
$ sudo apt-get install openssh-server
```
#### NoMachine 远程桌面
可能你的持续集成服务器没有挨着你,而是在你的路由器后面,或者其它屋子,甚至还可能远离你数里。我试过各种远程桌面方案,不得不说,在我看来 NoMachine 在这方面表现得最好,它只需要你的 ssh 凭证就可以工作了(显然你要先把它安装在 CI 和你的机器中)。
* [NoMachine - 任何人都能用的免费的远程访问工具](https://www.nomachine.com/download)
### 第三步-配置环境:
这里我打算安装 Java8,Git,和 Android SDK,Jenkins 需要它们来拉取、编译和运行 android 项目。
#### SDKMAN!
这个超级厉害的命令行工具让你可以安装各种流行的 SDK(比如说,Gradle、Groovy、Grails、Kotlin、 Scala……),并可以以容易方便的方式列出它们和在各个并行版本中切换。
* [SDKMAN! - SDK 管理器](http://sdkman.io/)
它们最近又增加了对 JAVA8 的支持,所以我使用它来安装 Java,而不是用流行的 webupd8 仓库。所以在你开始安装前,务必要想清楚要不要安装 SDKMAN!,话说回来,最好还是装上,因为我们以后应该会用到。
安装 SDKMAN! 很容易,执行以下命令即可:
```
$ curl -s "https://get.sdkman.io" | bash
```
#### Oracle JAVA8
因为我们已经安装了 SDKMAN! ,所以安装 JAVA8 就相当简单了:
```
$ sdk install java
```
或者使用 webupd8 这个仓库:
* [在 Ubuntu 或 Linux Mint 上通过 PPA 仓库安装 Oracle Java 8 [JDK8]](http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html)
#### Git:
安装 Git 的命令也非常直观,就不废话了。
```
$ sudo apt install git
```
#### Android SDK
在下面这篇文章的底部
* [下载 Android Studio 和 SDK Tools | Android Studio](https://developer.android.com/studio/index.html)
你可以找到 “Get just the command line tools” 等字样,复制这个链接。比如:
```
https://dl.google.com/android/repository/tools_r25.2.3-linux.zip
```
下载,然后解压到 `/opt/android-sdk-linux` 下:
```
$ cd /opt
$ sudo wget https://dl.google.com/android/repository/tools_r25.2.3-linux.zip
$ sudo unzip tools_r25.2.3-linux.zip -d android-sdk-linux
```
我们使用 root 用户创建了该目录,所以我们需要重新授权来使我们的主要用户对它可读可写。
```
$ sudo chown -R YOUR_USERNAME:YOUR_USERNAME android-sdk-linux/
```
然后,在 `~/.bashrc` 文件下设置 SDK 的环境变量
```
$ cd
$ nano .bashrc
```
在文件底部写入这些行(注意,但要在 SDKMAN! 配置文件前):
```
export ANDROID_HOME="/opt/android-sdk-linux"
export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH"
```
关闭此终端,再打开一个新的终端看看环境变量是否正确生效:
```
$ echo $ANDROID_HOME
/opt/android-sdk-linux
```
然后我们启动图形界面的 Android SDK 管理器,并安装你所需的平台和依赖:
```
$ android
```

*运行 Android SDK Manager 的图形交互界面*
### 第四步-Jenkins 服务器
这里,我要讲讲怎么安装、配置该服务器,并创建 Jenkin 任务来拉取、构建和测试 Android 项目,并怎样获取控制台输出。
#### 安装 Jenkins
你可以在下面的链接找到 Jenkins 服务器相关信息:
* [Jenkins](https://jenkins.io/)
我们有许多办法运行 Jenkins,比如说运行 .war 文件,作为 Linux 服务,作为 Docker 容器等等。
我起初是想把它当做 Docker 容器运行,但是后来我意识到正确地配置代码文件夹、android-sdk 文件夹的可见性,和插到运行的 Android 测试机上的物理设备的 USB 可见性简直是一场噩梦。
少操点心,我最终决定以服务的方式,增加 Stable 仓库的 key 来通过 apt 安装和更新。
```
$ wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
```
编辑 `source.list`,写入这一行:
```
$ sudo nano /etc/apt/sources.list
```
```
#Jenkin Stable
deb https://pkg.jenkins.io/debian-stable binary/
```
然后安装:
```
sudo apt-get update
sudo apt-get install jenkins
```
在你的用户组里面增加 `jenkins` ,允许其读写 Android SDK 文件夹。
```
$ sudo usermod -a -G 你的用户组 jenkins
```
Jenkins 服务在开机引导时就会被启动,并可通过 http://localhost:8080 访问:
安装完毕会有一些安全预警信息,跟着引导程序走,你的 Jenkins 就会运行了。

*启用安装成功的 Jenkins 服务器。*
#### Jenkins 配置
启用成功后,会有提示程序提示你安装插件,单击 “Select plugins to Install” 就可以开始浏览所有插件,然后选择你要安装的插件就 OK 了 。
* [JUnit 插件](https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Plugin)
* [JaCoCo 插件](https://wiki.jenkins-ci.org/display/JENKINS/JaCoCo+Plugin)
* [EnvInject 插件](https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin)
* [GitHub 插件](https://wiki.jenkins-ci.org/display/JENKINS/GitHub+Plugin)

*安装 Jenkins 插件*
创建管理员用户,并完成安装。
要完成设置,还需要配置环境变量 `ANDROID_HOME` 和 `JAVA_HOME`。
点击 Manage Jenkins,接着 Configure System。
滚动页面至底部,在全局属性模块中找到环境变量,并增加 `ANDROID_HOME` 和 `JAVA_HOME` 变量。

*给所有 Jenkins 任务增加全局变量*
#### 创建 Jenkins 任务
一个 Jenkins 任务定义了一系列不间断的操作。如果你跟随本篇引导的话,那么你可以使用我已经在 GitHub 上为你准备了一个 Android 练习项目,你可以使用这个来试试手。它只是一个多模块的 app,带有单元测试、Android 测试,包括 JaCoCo、SonarQube 插件。
* [pamartineza/helloJenkins](https://github.com/pamartineza/helloJenkins)
首先创建一个新的 Freestyle 任务项目,取名为 `Hello_Android`。不要在名字中使用空格,这样可以避免与 SonarQube 不兼容的问题。

*创建一个 Freestyle Jenkins 任务*
接下来就是配置了,我给每一部分都做了截屏。
**概况**
这部分比较直观,你可以在这里变更任务的名字、增加简介。如果你使用 GitHub 项目,你还可以写下项目的 URL(不带 `.git` 后缀,这里要的是项目页面的 URL,而不是仓库地址)。

*项目 Url 配置*
**源代码管理**
这时候我们就要选择 Git 作为我们的版本控制系统(VCS),并且增加仓库的 URL(这次要包含 `.git` 后缀),然后选择要拉取的分支。因为这是一个公开的 GitHub 仓库,我们就不需要提交凭证了,否则的话就要设置账号和密码。
相比于使用你自己那个拥有完全权限的帐号,我更倾向于为你的私有库创建一个新的只读用户,专门配给 Jenkins 任务使用。
另外,如果你已经使用了双因子认证,Jenkins 就无法拉取代码,所以为 Jenkins 专门创建一个用户可以直接从私有库中拉取代码。

*配置仓库*
**构建触发器**
你可以手动开始构建,也可以远程地、周期性地、或者在另一个任务构建完成之后开始构建,只要这些改变可以被检测到。
最好的情况肯定是一旦你更改了某些地方,就会立刻触发构建事件,Github 为此提供了一个名叫 webhooks 的系统。
* [Webhooks | GitHub 开发者指南](https://developer.github.com/webhooks/)
这样,我们就可以配置来发送这些事件到 CI 服务器,然后触发构建。显然,我们的 CI 服务器必须要联网,并且可以与 GitHub 服务器通信。
你的 CI 服务器也许为了安全只限于内网使用,那么剩下的办法就只有定期轮询仓库了。我就是只有工作时才打开 CI,并把它设置为每十五分钟轮询一次。轮询计划可以通过 CRON 语法设置,如果你不熟悉,请点击右侧的帮助按钮获取带有例子的丰富文档。
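比如,每十五分钟轮询一次对应这样的计划(Jenkins 的 cron 语法):

```
H/15 * * * *
```

其中 `H` 会被 Jenkins 按任务名散列成一个固定值,用来错开多个任务的轮询时刻,避免大家都挤在整点。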

*仓库轮询配置*
**构建环境**
这里我推荐设置构建超时来避免 Jenkins 过度占用内存和 CPU,毕竟有时候会有意外发生。当然,你还可以注入环境变量和密码等等。

*构建超时*
**构建**
现在是见证魔法的时刻了,增加一个 Build 步骤,引入 Gradle 脚本,选择 Gradle Wrapper (默认情况下,Android 项目带有 Gradle Wrapper,不要忘记把它检入到 Git ),然后定义你要执行哪些任务:
1. clean:清除之前构建的所有历史输出,这样可以确保没有东西缓存,从头构建。
2. assembleDebug: 生成调试 .apk 文件。
3. test:在所有模块上执行 JUnit 测试。
4. connectedDebugAndroidTest:在连接到 CI 的实体机上执行安卓测试单元(也可以使用安装了安卓模拟器的 Jenkins 插件,但是它不支持所有型号,而且相当麻烦)。
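这四个任务串起来,相当于在项目根目录下手动执行下面这条命令(示意;前提是 Gradle Wrapper 已检入仓库,最后一个任务还需要有已连接的安卓设备):

```
./gradlew clean assembleDebug test connectedDebugAndroidTest
```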

*配置 Gradle*
**构建后操作**
我们将要增加“发布 JUnit 测试报告”,这一步由 JUnit 插件提供,其搜集由 JUnit 测试结果生成的 XML 文件,它会生成漂亮的图表来按时间展示测试结果。
在我们的 app 模块中,测试运行结果的路径是: `app/build/test-results/debug/*.xml`。
在多模块项目中,其它的“纯” Java 模块中测试结果在这里:`*/build/test-results/*.xml`。

还要增加“记录 JaCoCo 覆盖率报告”,它要创建一张显示代码覆盖率的图表。

#### 运行 Jenkins 任务
只要有任何改变提交到仓库,我们的测试任务将每十五分钟执行一次,但是如果你不想等的话,或者你只是想验证一下配置的改变,你也可以手动运行。单击“现在构建”按钮,当前的构建将出现在构建历史中,点击它可以查看细节。

*手动执行任务*
最有趣的部分是控制台输出,你可以看到 Jenkins 是如何拉取代码并执行我们之前定义的 Gradle 项目,例如 clean。

*控制台输出的开始部分*
如果一切都正常的话,控制台将会有如下输出 (任何仓库连接问题,单元测试或 Android 测试的失败都将导致构建失败)。

*哈哈哈哈,构建成功,测试结果符合预期*
### 第五步-SonarQube
这部分我会讲讲如何安装、配置 SonarQube ,并配以使用 Docker 作为容器的 MySQL 数据库。
* [Continuous Code Quality | SonarQube](https://www.sonarqube.org/)
SonarQube 是个代码静态分析工具,它可以帮助开发者写出干净的代码、检测错误和学习最佳体验。它还可以跟踪代码覆盖、测试结果、功能需求等等。SonarQube 检测到的问题可以使用插件十分容易的导入到 Android Studion/IntelliJ 中去。
* [JetBrains Plugin Repository :: SonarQube Community Plugin](https://plugins.jetbrains.com/idea/plugin/7238-sonarqube-community-plugin)
#### 安装 Docker
安装 Docker 十分容易,按照下面的教程即可:
* [在 Ubuntu 上安装 Docker](https://docs.docker.com/engine/installation/linux/ubuntulinux/)
#### 生成容器
**MySQL**
我们先搭建一个 MySQL5.7.17 服务器容器,命名为 `mysqlserver`,它将在开机引导时启动,带有一个在你的家目录下的本地卷,带有密码,服务暴露在 localhost:3306 上(把命令中的 `YOUR_USER` 和 `YOUR_MYSQL_PASSWORD` 替换为你自己账号密码)。
```
$ docker run --name mysqlserver --restart=always -v /home/YOUR_USER/mysqlVolume:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=YOUR_MYSQL_PASSWORD -p 3306:3306 -d mysql:5.7.17
```
**phpMyAdmin**
想要优雅简单地管理 MySQL服务器,我强烈推荐 phpMyAdmin。你只要建立个容器,命名为 `phpmyadmin`,然后链接到我们的 `mysqlserver` 容器,它会在开机引导时启动,它暴露在 localhost:9090。使用最新的版本。
```
$ docker run --name phpmyadmin --restart=always --link mysqlserver:db -p 9090:80 -d phpmyadmin/phpmyadmin
```
你可以用你的 mysql 密码 `YOUR_MYSQL_PASSWORD`,以 root 身份登录 localhost:9090 的 phpMyAdmin 界面,并创建一个名为 sonar 的数据库,使用 `utf8_general_ci` 排序规则。此外,再创建一个名为 sonar 的新用户,密码为 `YOUR_SONAR_PASSWORD`,并授予它 sonar 数据库的权限。
**SonarQube**
接下来我们创建 SonarQube 容器,命名为 `sonarqube`,它会在机器引导时启动,自动链接到我们的数据库,服务暴露在 localhost:9000,使用 5.6.4 版本。
```
$ docker run --name sonarqube --restart=always --link mysqlserver:db -p 9000:9000 -p 9092:9092 -e "SONARQUBE_JDBC_URL=jdbc:mysql://db:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance" -e "SONARQUBE_JDBC_USER=sonar" -e "SONARQUBE_JDBC_PASSWORD=YOUR_SONAR_PASSWORD" -d sonarqube:5.6.4
```
#### 配置 SonarQube
如果一切都正常,你将在 localhost:9000 看到如下页面:

好了,让我们来配置必要的插件和基本的配置文件:
1. 在页面的右上角可以登录(默认的管理员账号和密码是 admin/admin)。
2. 进入到 Administration,然后点击 System,接下来是 Update Center,最后是 Updates Only。
* 如果需要的话,更新 Java 插件。
3. 现在启用,并安装以下插件
* Android (提供 Android lint 规则)
* Checkstyle
* Findbugs
* XML
4. 返回顶部,点击重启按钮完成整个安装。
#### SonarQube 配置文件
我们刚刚安装的插件可以定义配置文件,可以用一套规则去衡量项目的代码质量。
同一时间一个项目只能使用一个配置文件。但是我们可以定义父配置文件并继承规则,所以要对我们的项目执行所有的规则,我们可以创建定制的配置文件并链状串联所有配置文件。
就这么干,点击 Quality Profiles ,跳转到 Create ,然后命名,比如 CustomAndroidProfile。
将 Android Lint 作为父级,然后选择 Android Lint 配置,增加 FindBugs Security Minimal 作为上一级,继续此步骤,直到你完成父级继承方案,并将 CustomAndroidProfile 设置为默认。

*继承链*
#### 运行 Sonarqube 分析器
现在我们的 SonarQube 已经正式配置完毕,接下来需要在 Jenkins 任务中添加一个 Gradle 任务 `sonarqube`,并把它放在最后执行。

再次运行 Jenkins 任务,一旦运行完毕,我们可以在 localhost:9000 看到我们的 SonarQube 控制面板。

*分析结果的显示*
点击项目名称我们可以进入到不同的显示界面,最重要的可能就是问题界面了。
在下一屏,我将展示一个主要问题,它是一个空构造器方法。就我个人而言,使用 SonarQube 最大的好处就是当我点击“...”时可以在屏幕底部显示解释。这是一个学习编程十分有用的技能。

### 第六步 附加:配置其他 Android 应用
想要配置其他 Android 应用以得到覆盖率和 SonarQube 的结果,只要安装 JaCoCo 和 SonarQube 插件就可以了。你也可以在我的示例中得到更多信息:
* [pamartineza/helloJenkins](https://github.com/pamartineza/helloJenkins)
你也可以看看我在云上测试的文章:
* [使用 Jenkins CI 服务器在云设备上运行 Android 测试](https://pamartinezandres.com/running-android-tests-on-cloud-devices-using-a-jenkins-ci-server-firebase-test-lab-amazon-device-b67cb4b16c40)
### 最后
啊,你终于走到头了,希望你觉得本文有点用处。你要是发现了任何错误,有任何疑问,别迟疑,赶紧评论。我拼了老命也要帮你。哦,忘了提醒,好东西要和朋友分享。
---
作者简介:

Entrepreneur & CEO at GreenLionSoft · Android Lead @MadridMBC & @Shoptimix · Android, OpenSource and OpenData promoter · Runner · Traveller
---
via: <https://medium.com/@pamartineza/how-to-set-up-a-continuous-integration-server-for-android-development-ubuntu-jenkins-sonarqube-43c1ed6b08d3#.x6jhcpg98>
作者:[Pablo A. Martínez](https://medium.com/@pamartineza) 译者:[Taylor1024](https://github.com/Taylor1024) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,703 | 深入实时 Linux | https://www.linux.com/news/event/open-source-summit-na/2017/2/inside-real-time-linux | 2017-07-18T12:52:21 | [
"RTL",
"实时"
] | https://linux.cn/article-8703-1.html | 
>
> 实时 Linux 在过去十年中已经走了很长的路。Linutronix 的 Jan Altenberg 提供了对该主题做了概述,并在 ELC Europe 的视频中提供了新的 RTL 性能基准。
>
>
>
实时 Linux(RTL)是一种启用 PREEMPT\_RT 的主线 Linux,在过去十年中已经走了很长的路。大约 80% 的确定性的 [PREEMPT\_RT](https://www.linux.com/blog/intro-real-time-linux-embedded-developers) 补丁现在可用于主线内核本身。然而,Linux 上单内核 RTL 的最强大的替代品——双内核 Xenomai——继续声称在减少延迟上有巨大的优势。在去年 10 月的 [欧洲嵌入式 Linux 会议](http://events.linuxfoundation.org/events/archive/2016/embedded-linux-conference-europe)的演讲中,Jan Altenberg 反驳了这些声明,同时对实时主题做了论述。
德国嵌入式开发公司 [Linutronix](https://linutronix.de/) 的 Altenberg 并不否认 Xenomai 和 RTAI 等双核方法提供较低的延迟。然而,他揭示了新的 Linutronix 基准,旨在表明差异不如所声称的那样大,特别是在实际使用中。争议较少的是,他认为 RTL 更易于开发和维护。
在我们深入永恒的 Xenomai 与 RTL 的辩论之前,请注意,2015 年 10 月,[开源自动化开发实验室](http://archive.linuxgizmos.com/celebrating-the-open-source-automation-development-labs-first-birthday/)(OSADL)将 RTL 项目的[控制权](http://linuxgizmos.com/real-time-linux-shacks-up-with-the-linux-foundation/)转移给了管理 Linux.com 的 Linux 基金会。此外,Linutronix 是 RTL 项目的主要贡献者,并承担了 x86 的维护者。
RTL 的进步是过去十年中 Linux 从实时操作系统(RTOS)中[获得市场占有率](https://www.linux.com/news/embedded-linux-keeps-growing-amid-iot-disruption-says-study)的几个原因之一。实时操作系统在微控制器上比应用处理器上更频繁出现,并且在缺乏高级用户级操作系统(如 Linux)的单用途设备上实现实时很简单。
Altenberg 通过清除关于实时确定性的内核方案的一些常见的误解开始他的演讲。Altenberg 告诉他的 ELCE 观众:“实时不是快速执行,这基本上是决定论和定时保证。实时为你提供保证某些内容将在给定的时间内执行。你不想要尽可能快,但是要尽快指定。”
在给定的执行时间内的迟缓的反应会导致严重后果,特别是当它可能导致人们受到伤害时,开发人员往往会使用实时的方式。这就是为什么实时性仍然在很大程度上受到工厂自动化行业的推动,并且越来越多地出现在汽车、火车和飞机上。然而,并不总是生死攸关的情况 - 金融服务公司使用 RTL 进行高频交易。
Altenberg 说:“实时需求包括确定性的定时行为、抢占、优先级继承和优先级上限。最重要的要求是高优先级任务总是需要能够抢占低优先级的任务。”
Altenberg 强烈建议不要使用术语“软实时”来描述轻量级实时解决方案。“你可以是确定性的或者不是,但两者之间什么也没有。”
### 双内核实时
像 Xenomai 和 RTAI 这样的双内核方案部署了一个与单独的 Linux 内核并行运行的微内核,而像 RTL 这样的单内核方案使得 Linux 本身能够实时运行。Altenberg 说:“使用双内核,当优先级实时程序不在微内核上运行时,Linux 可以获得一些运行时间。 “问题是人们需要维护微内核并在新的硬件上支持它。这需要巨大的努力,并且它的开发社区不是很大。另外,由于 Linux 不直接在硬件上运行,所以你需要一个硬件抽象层(HAL)。有两件事要维护,你通常会落后主流内核一步。”
Altenberg说:“RTL 的挑战以及花了这么久才出现的原因是,要使 Linux 实时,你基本要修改每个内核文件。” 然而,大部分工作已经完成并合并到主线,开发人员并不需要维护一个微内核或 HAL。
Altenberg 继续解释了 RTAI 和 Xenomai 之间的差异。“使用 RTAI,你将编写一个由微内核调度的内核模块。这就像内核开发 - 真的很难深入,很难调试。”
RTAI 的开发可能会更加复杂,因为工业用户通常希望包括封闭的源代码以及 GPL 内核代码。 Altenberg 说:“你必须决定哪些部分可以进入用户态,哪些部分可以通过实时的方式进入内核。”
与 RTL 相比,RTAI 支持的硬件平台更少,尤其是 x86 之外的平台。Xenomai 已经取代 RTAI 成为主流的双内核方案,其操作系统支持比 RTAI 更广。更重要的是,Altenberg 说:“它为在用户空间中实现实时提供了合适的解决方案。为此,他们实现了‘皮肤(skin)’的概念——一个模拟不同 RTOS(比如 POSIX)的 API 的仿真层。这使你可以复用一些 RTOS 中已有代码的子集。”
然而,使用 Xenomai,你仍然需要维护一个单独的微内核和 HAL。有限的开发工具是另一个问题。Altenberg 说:“与 RTAI 一样,你不能使用标准的 C 库。你需要专门的工具和库。即使对于 POSIX 来说,你也必须链接到 POSIX 层,这更复杂。”他补充说,无论哪种双内核平台,都很难把微内核扩展到超过 8 到 16 个 CPU,达到金融服务中所用的大型服务器集群的规模。
### 睡眠自旋锁
主要的单内核解决方案是基于 PREEMPT.RT 的 RTL,它主要由 Thomas Gleixner 和 IngoMolnár 在十多年前开发。PREEMPT.RT 重新生成内核的 “spinlock” 锁定原语,以最大化 Linux 内核中的可抢占部分。(PREEMPT.RT 最初称为睡眠自旋锁补丁)
PREEMPT.RT 不是在硬中断环境中运行中断处理程序,而是在内核线程中运行它们。Altenberg 说:“当一个中断到达时,你不会运行中断处理程序代码。你只是唤醒相应的内核线程,由它来运行处理程序。这有两个优点:内核线程本身是可以被打断(抢占)的,并且它会显示在进程列表中,有自己的 PID。所以你可以把低优先级分配给不重要的中断,把高优先级分配给重要的用户态任务。”
由于大约 80% 的 PREEMPT.RT 已经在主线上,任何 Linux 开发人员都可以使用面向 PREEMPT.RT 的内核组件,如定时器、中断处理程序、跟踪基础架构和优先级继承。Altenberg说:“当他们制作实时 Linux 时,一切都变得可以抢占,所以我们发现了很多竞争条件和锁定问题。我们修复了这些,并把它们推送到主线,以提高 Linux 的稳定性。”
因为 RTL 主要在 Linux 主线上,Altenberg 说:“PREEMPT.RT 被广泛接受,拥有庞大的社区。如果你编写一个实时应用程序,你不需要知道很多关于 PREEMPT.RT 的知识。你不需要任何特殊的库或 API,只需要标准的 C 库、Linux 驱动程序和 POSIX 程序。”
你仍然需要运行一个补丁来使用 PREEMPT.RT,它每隔两个 Linux 版本更新一次。然而,在两年内,剩下的 20% 的 PREEMPT.RT 应该会进入 Linux,所以你就“不再需要补丁”了。
最后,Altenberg 透露了他的 Xenomai 对 RTL 延迟测试的结果。Altenberg说:“有很多论文声称 Xenomai 和 RTAI 的延迟比 PREEMPT.RT 更小。但是我认为大部分时候是 PREEMPT.RT 配置不好的问题。所以我们带来了一个 Xenomai 专家和一个 PREEMPT.RT 专家,让他们配置自己的平台。”
Altenberg 称,虽然 Xenomai 在大多数测试中表现更好,并且有更少的性能抖动,但是差异不像一些 Xenomai 拥护者声称的那样,有高达 300% 至 400% 的延迟优势。当测试在用户空间任务中执行时——Altenberg 认为这是最现实也最重要的测试场景——Xenomai 和 RTL/PREEMPT.RT 的最坏反应时间都在 90 到 95 微秒。
当他们在双核 Cortex-A9 系统中隔离出单个 CPU 来处理中断时(Altenberg 表示这种做法相当普遍),PREEMPT.RT 表现得更好,大约 80 微秒。(有关详细信息,请查看视频大约第 33 分钟处。)
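如果想在自己的硬件上复现这类延迟测量,常用的工具是 rt-tests 套件中的 cyclictest(示意命令;需要先安装 rt-tests,并以 root 运行,参数细节见其手册):

```
cyclictest --smp -p 98 -m -l 100000
# --smp:每个 CPU 启动一个测量线程;-p 98:使用 SCHED_FIFO 优先级 98
# -m:锁定内存以避免缺页造成的抖动;-l 100000:采样十万次后退出
```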
Altenberg 承认与 OSADL 的两到三年测试相比,他的 12 小时测试是最低标准,而且它不是一个“数学证明”。无论如何,考虑到 RTL 更简单的开发流程,它都值得一试。他总结说:“在我看来,将完整功能的 Linux 系统与微内核进行比较是不公平的。”
---
via: <https://www.linux.com/news/event/open-source-summit-na/2017/2/inside-real-time-linux>
作者:[ERIC BROWN](https://www.linux.com/users/ericstephenbrown) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,704 | LinkArchiver:自动提交链接给互联网档案(Internet Archive) | https://opensource.com/article/17/7/linkarchiver-automatically-submits-links-internet-archive | 2017-07-18T18:32:36 | [
"互联网档案馆",
"机器人",
"Twitter"
] | /article-8704-1.html |
>
> 在 Twitter 上分享的链接可以永久保存,用户不用担心。
>
>
>

互联网是永恒的,但有时候并不是。“链接腐烂”,即随着时间的流逝,页面被移动或者站点下线,指向网站的有效链接就会失效,对于尝试在线做研究的人来说,这是一个实实在在的问题。<ruby> <a href="https://archive.org/"> 互联网档案馆 </a> <rp> ( </rp> <rt> Internet Archive </rt> <rp> ) </rp></ruby>通过在它的“<ruby> 时光机 <rp> ( </rp> <rt> Wayback Machine </rt> <rp> ) </rp></ruby>”中保存提交的内容来帮助解决这个问题。
当然,困难的是让人们记得提交档案链接。
这就是 Parker Higgins 的新 Twitter 机器人所切入的地方。[@LinkArchiver](https://twitter.com/linkarchiver) 会自动把关注它的帐户所发布的链接提交给<ruby> <a href="https://archive.org/"> 互联网档案馆 </a> <rp> ( </rp> <rt> Internet Archive </rt> <rp> ) </rp></ruby>。如果一个 Twitter 用户关注了 [@LinkArchiver](https://twitter.com/linkarchiver),它会回关;即使用户之后取消关注机器人,它也会继续添加链接。这意味着在 Twitter 上共享的链接可以永久保存,用户不用担心。
这种完全无需用户留意的被动特性对 Higgins 非常有吸引力。他对 Opensource.com 说:“我对整个装置的被动程度非常在意。如果你依靠人们去选择什么是*重要的*来存档,你会错过很多最重要的东西。只要抓取所发布的每个链接的副本,这个机器人应该有助于确保我们不会错过上下文。”
在最初开发机器人之后,Higgins 联系了<ruby> <a href="https://archive.org/"> 互联网档案馆 </a> <rp> ( </rp> <rt> Internet Archive </rt> <rp> ) </rp></ruby>。他对自动化可能造成问题的担忧很快被消除了。尽管他为 API 请求设置了一个自定义的用户代理字符串以便对方识别,但对方表示:相对于他们处理的流量,“这点流量实际上只是个舍入误差。”可扩展性的问题反而在 Twitter 一侧:其服务限制了帐户的关注数量和新增关注的速率,这限制了单个 LinkArchiver 实例的能力。
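机器人要做的核心动作本身并不复杂:把推文里的链接拼到互联网档案馆公开的 “Save Page Now” 入口后面并发起请求。下面是一个最小的示意(其中的链接是占位值;实际的速率限制和返回格式以档案馆的说明为准):

```shell
LINK="https://example.com/article"
SAVE_URL="https://web.archive.org/save/$LINK"
echo "$SAVE_URL"
# 实际提交时(示意):curl -s "$SAVE_URL" > /dev/null
```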
幸运的是,LinkArchiver 以 AGPLv3 授权在 [GitHub](https://github.com/thisisparker/linkarchiver) 上发布。
有了一台小服务器和一个 Twitter 账号,任何人都可以运行这个机器人。Higgins 设想人们运行关注特定兴趣或社交圈子的 LinkArchiver 实例。“我想到的一件事是,你可以关闭回关行为,只关注特定的群体或兴趣。例如,机器人可以关注一群朋友或同学,或主要媒体,或每一位美国参议员和众议员,并存档他们发表的推文。”
这不是 Higgins 第一次写 Twitter 机器人:[@securethenews](https://twitter.com/securethenews)、[@pomological](https://twitter.com/pomological) 以及受欢迎的 [@choochoobot](https://twitter.com/choochoobot) 是他之前的作品。这些机器人都是只写的。 LinkArchiver 是他开发的第一个互动机器人,这需要学习几种新技能。这是 Higgins 参与 [Recurse Center](https://www.recurse.com/) 的一部分,这是为程序员进行的为期 12 周的活动。
Higgins 欢迎大家提交拉取请求,也欢迎大家搭建其他的 LinkArchiver 机器人实例。
(题图:Beatrice Murch 拍摄的 Inernet Archive 总部; CC BY ([on Flickr](https://www.flickr.com/photos/blmurch/5079018246/)))
---
作者简介:
Ben Cotton - Ben Cotton 是一个受训过的气象学家和一名高性能计算机工程师。Ben 在 Cycle Computing 做技术传教士。他是 Fedora 用户和贡献者,合作创办当地的一个开源集会,是一名开源倡议者和软件自由机构的支持者。他的推特 (@FunnelFiasco)
---
via: <https://opensource.com/article/17/7/linkarchiver-automatically-submits-links-internet-archive>
作者:[Ben Cotton](https://opensource.com/users/bcotton) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,705 | 使用 Apex 和 Compose MongoDB 开发 serverless | https://www.compose.com/articles/go-serverless-with-apex-and-composes-mongodb/ | 2017-07-18T19:48:08 | [
"Go",
"Lambda",
"FaaS",
"MongoDB"
] | /article-8705-1.html | 
Apex 是一个把开发和部署 AWS Lambda 函数的整个过程打包起来的工具。它提供了一个本地命令行工具来创建安全上下文、部署函数,甚至追踪云端日志。由于 AWS Lambda 服务将函数看成独立的单元,Apex 提供了一个框架层,把一系列函数组织成一个项目。另外,它还把该服务支持的语言从 Java、Javascript 和 Python 拓展到了 Go。
两年前 Express (基本上是 NodeJS 事实标准上的网络框架层)的作者,[离开](https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e52b#.dc9vkeybx)了 Node 社区,而将其注意力转向 Go (谷歌创造的后端服务语言),以及 Lambda(由 AWS 提供的函数即服务)。尽管一个开发者的行为无法引领一股潮流,但是来看看他正在做的名叫 [Apex](http://apex.run/) 项目会很有趣,因为它可能预示着未来很大一部分网络开发的改变。
### 什么是 Lambda?
如今,人们如果不能使用自己的硬件,他们会选择付费使用一些云端的虚拟服务器。在云上,他们会部署一个完整的协议栈如 Node、Express,和一个自定义应用。或者如果他们更进一步使用了诸如 Heroku 或者 Bluemix 之类的新玩意,也可能在某些已经预配置好 Node 的容器中仅仅通过部署应用代码来部署他们完整的应用。
在这个抽象的阶梯上的下一步是单独部署函数到云端而不是一个完整的应用。这些函数之后可以被一大堆外部事件触发。例如,AWS 的 API 网关服务可以将代理 HTTP 请求作为触发函数的事件,而函数即服务(FaaS)的供应方根据要求执行匹配的函数。
### Apex 起步
Apex 是一个将 AWS 命令行接口封装起来的命令行工具。因此,开始使用 Apex 的第一步就是确保你已经安装和配置了从 AWS 获取的命令行工具(详情请查看 [AWS CLI Getting Started](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) 或者 [Apex documentation](http://apex.run/))。
接下来,安装 Apex:
```
curl https://raw.githubusercontent.com/apex/apex/master/install.sh | sh
```
然后为你的新项目创建一个目录并运行:
```
apex init
```

这一步会配置好一些必需的安全策略,并且会把项目名作为前缀加在函数名上,因为 Lambda 使用的是扁平化的命名空间。同时它还会创建一些配置文件,以及一个放着默认的 “Hello World” 风格 Javascript 函数的 functions 目录。

Apex/Lambda 有一个非常友好的特性:创建函数非常直观。创建一个以你的函数名命名的新目录,然后在其中编写代码即可。如果想要使用 Go 语言,你可以创建一个叫 `simpleGo` 的目录,然后在其中写一个小型的 `main` 函数:
```
// serverless/functions/simpleGo/main.go
package main
import (
"encoding/json"
"github.com/apex/go-apex"
"log"
)
type helloEvent struct {
Hello string `json:"hello"`
}
func main() {
apex.HandleFunc(func(event json.RawMessage, ctx *apex.Context) (interface{}, error) {
var h helloEvent
if err := json.Unmarshal(event, &h); err != nil {
return nil, err
}
log.Print("event.hello:", h.Hello)
return h, nil
})
}
```
Node 是 Lambda 所支持的运行环境,Apex 使用 NodeJS shim 来调用由上述程序产生的二进制文件。它将 `event` 传入二进制文件的 STDIN,将从二进制返回的 STDOUT 作为 `value`。通过 STDERR 来显示日志。`apex.HandleFunc` 用来为你管理所有的管道。事实上在 Unix 惯例里这是一个非常简单的解决方案。你甚至可以通过在本地命令行执行 `go run main.go` 来测试它。
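顺着这个思路,“事件从 STDIN 进、返回值从 STDOUT 出、日志走 STDERR”的约定可以用一小段 shell 直接模拟出来(纯属示意,并非 Apex 的真实 shim 代码):

```shell
# 模拟 shim 约定:事件 JSON 从 STDIN 读入,
# 返回值写到 STDOUT,日志写到 STDERR
handler() {
  read -r event               # 从 STDIN 读入事件 JSON
  echo "log: got event" >&2   # 日志走 STDERR,不会混进返回值
  printf '%s\n' "$event"      # 返回值走 STDOUT
}

result=$(printf '%s\n' '{"hello":"world"}' | handler 2>/dev/null)
echo "$result"                # 输出:{"hello":"world"}
```

这也解释了为什么文中说可以直接在本地配合管道运行 `go run main.go` 来测试函数。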

通过 Apex 向云端部署稍显琐碎:

注意,它会为你的函数加上命名空间前缀、进行版本控制,甚至可以为 `staging` 和 `production` 等多个开发环境分别配置 `env`。
通过 `apex invoke` 在云端执行也比较琐碎:

当然我们也可以追踪一些日志:

这些是从 AWS CloudWatch 返回的结果。它们都可以在 AWS 的 UI 中查看,但是当你在另一个终端进行部署时,这样参照起来要更快。
### 窥探内部的秘密
来看看它内部到底部署了什么很有指导意义。Apex 将 shim 和运行函数所需的所有东西打包在一起。另外,它还会提前做好入口点与安全策略之类的配置:

Lambda 服务实际上接受一个包含所有依赖的 zip 压缩包,它会被部署到服务器来执行指定的函数。我们可以使用 `apex build <functionName>` 在本地创建一个压缩包用来在以后解压以探索。

这里 `_apex_index.js` 中的 `handle` 函数是原始的入口。它会配置好一些环境变量,然后进入 `index.js`。
而 `index.js` 会派生出一个运行 Go 二进制文件 `main` 的子进程,并把所有环节联结在一起。
### 使用 `mgo` 继续深入
`mgo` 是 Go 语言的 MongoDB 驱动。使用 Apex 来创建一个函数来连接到 Compose 的 MongoDB 就如同我们已经学习过的 `simpleGo` 函数一样直观。这里我们会通过增加一个 `mgoGo` 目录和另一个 `main.go` 来创建一个新函数。
```
// serverless/functions/mgoGo/main.go
package main
import (
"crypto/tls"
"encoding/json"
"github.com/apex/go-apex"
"gopkg.in/mgo.v2"
"log"
"net"
)
type person struct {
Name string `json:"name"`
Email string `json:"email"`
}
func main() {
apex.HandleFunc(func(event json.RawMessage, ctx *apex.Context) (interface{}, error) {
tlsConfig := &tls.Config{}
tlsConfig.InsecureSkipVerify = true
//connect URL:
// "mongodb://<username>:<password>@<hostname>:<port>,<hostname>:<port>/<db-name>
dialInfo, err := mgo.ParseURL("mongodb://apex:[email protected]:15188, aws-us-west-2-portal.1.dblayer.com:15188/signups")
dialInfo.DialServer = func(addr *mgo.ServerAddr) (net.Conn, error) {
conn, err := tls.Dial("tcp", addr.String(), tlsConfig)
return conn, err
}
session, err := mgo.DialWithInfo(dialInfo)
if err != nil {
log.Fatal("uh oh. bad Dial.")
panic(err)
}
defer session.Close()
log.Print("Connected!")
var p person
if err := json.Unmarshal(event, &p); err != nil {
log.Fatal(err)
}
c := session.DB("signups").C("people")
err = c.Insert(&p)
if err != nil {
log.Fatal(err)
}
log.Print("Created: ", p.Name," - ", p.Email)
return p, nil
})
}
```
发布部署之后,我们可以用一个类型正确的事件来模拟调用一次 API:

最终结果是一条记录被 `insert` 到了 [Compose 之上的 MongoDB](https://www.compose.com/articles/composes-new-primetime-mongodb/) 中。

### 还有更多……
尽管目前我们已经涉及了 Apex 的方方面面,但是仍然有很多值得探索的东西。它还与 [Terraform](https://www.terraform.io/) 进行了整合。如果你愿意,你可以发布一个包括 Javascript、Java、Python 以及 Go 的多语言项目。你也可以为开发、演示以及生产配置多套环境。你可以通过调节内存大小和超时时间等运行时资源来控制成本。而且你可以把函数挂接到 API 网关上来提供一个 HTTP API,或者使用类似 SNS(简单通知服务)的服务来为云端的函数创建处理管道。
和大多数事物一样,Apex 和 Lambda 并不是在所有场景下都完美。但是,在你的工具箱中增加一个完全不需要你来管理底层基础设施的工具,绝对没有坏处。
---
作者简介:
Hays Hutton 喜欢写代码并写一些与其相关的东西。喜欢这篇文章?请前往[Hays Hutton’s author page](https://www.compose.com/articles/author/hays-hutton/) 继续阅读其他文章。
---
via: <https://www.compose.com/articles/go-serverless-with-apex-and-composes-mongodb/>
作者:[Hays Hutton](https://www.compose.com/articles/author/hays-hutton/) 译者:[xiaow6](https://github.com/xiaow6) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='www.compose.com', port=443): Max retries exceeded with url: /articles/go-serverless-with-apex-and-composes-mongodb/ (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7b83275b67a0>: Failed to resolve 'www.compose.com' ([Errno -2] Name or service not known)")) | null |
8,706 | 不需要编码:树莓派上的 Node-RED | https://opensource.com/article/17/7/nodered-raspberrypi-hardware | 2017-07-19T10:33:00 | [
"树莓派",
"Node-RED"
] | https://linux.cn/article-8706-1.html |
>
> 查看本教程,看看使用 Node-RED 的拖放界面设置硬件流程是多么容易。
>
>
>

Node-RED 是一个编程工具,可让你使用基于浏览器的编辑器快速连接硬件设备。它具有大量的节点,可以以拖放的方式构建流程,这大大减少了开发时间。[Node-RED](https://nodered.org/) 随树莓派的 Raspbian Jessie 一同安装,你也可以单独下载 Node-RED。
为了向你展示它如何工作,我们将使用 Node-RED 构建一个简单的工具,与连接到树莓派的蜂窝调制解调器通信。使用蜂窝调制解调器,你可以通过蜂窝网络从你的树莓派发送/接收数据。你可以使用蜂窝网络提供商通常提供的 3G/4G USB 加密狗,也可以将开发板与 3G 或 4G 无线调制解调器连接。
无论你是连接 USB 加密狗还是开发板,树莓派的连接接口都是通过 USB 端口的。在本教程中,我将一块 [SIM900](http://m2msupport.net/m2msupport/simcom-sim900-gprs-2g-module/) 开发板通过一根 USB 转串行电缆连接到树莓派。

第一步是检查 SIM900 开发板是否连接到树莓派上。

USB 转串行适配器在这里被显示为连接到树莓派的 USB 设备之一。
接下来,检查 SIM900 连接的 USB 端口号。

在最后一行,你可以看到 SIM900 板(通过 USB 转串行转换器连接)连接到了树莓派上的 **ttyUSB0**。现在我们准备开始使用 Node-RED。
在树莓派上启动 Node-RED。

下载[示例流图](http://m2msupport.net/m2msupport/wp-content/themes/admired/Node-RED/modem_commands)并将其导入到 Node-RED 中。请注意,流文件是该图形 UI 的 JSON 表示形式。
在 Node-RED 中,导入的流图应该看上去像这样:

注入节点设置好了查询调制解调器所需的 [AT 命令](http://m2msupport.net/m2msupport/software-and-at-commands-for-m2m-modules/)。**添加换行** 功能节点会在注入节点传来的 AT 命令后面附加 **\r\n**。**添加换行** 的输出随后连接到 **串行输出** 节点,由它将数据写入串行端口。来自调制解调器的 AT 命令响应通过 **串行输入** 节点读取,该节点会把响应输出到 **调试** 窗口。请确认在 **串行输入** 和 **串行输出** 节点中都配置好了串行端口号和端口速度。
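如果想确认“添加换行”这一步到底往串口写了哪些字节,可以在终端里用 `printf` 加 `od` 模拟一下(示意;`/dev/ttyUSB0` 沿用文中的设备名,真正发送需要接好设备):

```shell
# “添加换行”节点所做的事:在 AT 命令末尾补上 CR+LF(\r\n)
printf 'AT\r\n' | od -c     # 输出中可以看到 A、T、\r、\n 四个字节
# 真正写入调制解调器时(需要实际设备,示意):
# printf 'AT\r\n' > /dev/ttyUSB0
```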
Node-RED 是一种易于使用的编程工具,可用于快速集成和测试硬件设备。从本教程可以看出,使用 Node-RED 连接和测试树莓派上的蜂窝调制解调器完全不需要编写代码。
有关 Node-RED 和其他可以使用的方式的更多信息,请访问[项目网站](https://nodered.org/)。
(题图: Thomas Hawk 的 [Flickr](https://www.flickr.com/photos/thomashawk/3048157616/in/photolist-5DmB4E-BzrZ4-5aUXCN-nvBWYa-qbkwAq-fEFeDm-fuZxgC-dufA8D-oi8Npd-b6FiBp-7ChGA3-aSn7xK-7NXMyh-a9bQQr-5NG9W7-agCY7E-4QD9zm-7HLTtj-4uCiHy-bYUUtG). [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/). Opensource.com 修改)
---
作者简介:
Surya G - 我的兴趣是为物联网项目尤其是使用蜂窝调制解调器的项目做软件开发。
---
via: <https://opensource.com/article/17/7/nodered-raspberrypi-hardware>
作者:[Surya G](https://opensource.com/users/gssm2m) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Node-RED is a programming tool that lets you quickly connect hardware devices using a browser-based editor. It comes with a wide range of nodes that can be used to build flows in a drag-and-drop manner, significantly reducing your development time. [Node-RED](https://nodered.org/) is installed with Raspian Jessie for Raspberry Pi, and there is also an option to download Node-RED separately.
To show you how it works, we'll build a simple tool using Node-RED to communicate with a cellular modem connected to a Raspberry Pi. With cellular modems, you can send/receive data from your Raspberry Pi over a cellular network. You can use one of the 3G/4G USB dongles commonly available through cellular network providers, or you can connect a development board with a 3G or 4G wireless modem.
Whether you're connecting with a USB dongle or a development board, the connection interface to the Raspberry Pi is through a USB port. In this tutorial, I'm connecting a [SIM900](http://m2msupport.net/m2msupport/simcom-sim900-gprs-2g-module/) development board to Raspberry Pi through a USB-to-serial converter cable.

opensource.com
The first step is to check that the SIM900 development board is connected to the Raspberry Pi.

opensource.com
The USB-to-serial adapter shows up here as one of the USB devices connected to the Raspberry Pi.
Next, check the USB port number the SIM900 board is connected to.

opensource.com
In the last line above, you can see that the SIM900 board (connected through the USB-to-serial converter) is connected to **ttyUSB0** on the Raspberry Pi. Now we're ready to start using Node-RED.
Launch Node-RED on the Raspberry Pi.

opensource.com
Download this [sample flow](http://m2msupport.net/m2msupport/wp-content/themes/admired/Node-RED/modem_commands) and import it into Node-RED. Note that the flow file is a JSON representation of the graphical UI.
The imported flow should look like this in Node-RED:

opensource.com
Injection nodes set up [AT commands](http://m2msupport.net/m2msupport/software-and-at-commands-for-m2m-modules/) required to query the modem. The **Add Newline** function node appends **\r\n** to the AT commands passed from the injection nodes. Output from **Add Newline** is then wired to the **Serial Out** node, which writes data to the serial port. The AT command response from the modem is read through the **Serial In** node, which outputs the response to the **Debug **window. Make sure the serial port number and port speed are configured in both the **Serial In** and **Serial Out** nodes.
Node-RED is an easy-to-use programming tool that can be used to quickly integrate and test hardware devices. As you can see from this tutorial, connecting and testing a cellular mode with Raspberry Pi using Node-RED required no coding at all.
For more information about Node-RED and other ways it can be used, visit [the project's website](https://nodered.org/).
|
8,707 | 与开放社区讨论法律事宜的 7 种方式 | https://opensource.com/open-organization/17/3/legal-matters-community | 2017-07-19T17:12:51 | [
"开源",
"法律"
] | /article-8707-1.html |
>
> 你的组织的律师准备好与开源社区打交道了么?不要让他们犯这些错。
>
>
>

我注意到有相当多的人尝试与[开源推进联盟的许可证评估社区](https://opensource.org/approval)以及 [Apache 软件基金会的法律事务委员会](https://www.apache.org/legal/)建立沟通,当轮到*你*与开放社区进行法律讨论时,我想提供一些成功的提示和技巧。
### 不要代理人
首先,也是最重要的是,要确保进行对话的人员既是*有资格的*,也是*有授权的*。不要用代理人,这只会让社区沮丧,他们很快会发现你的代表就像二手车推销员一样,凡事都得回后屋请示才能做主。显然,法律讨论将涉及公司内的一个团队,可能包括产品管理、工程和内部法务。但代表们需要能够自己掌控谈话内容,而不是总在引用幕后某个匿名人物的话。
### 多边主义
开源社区就安全合作所需的确定性达成了来之不易的共识。这种共识体现在其治理当中,尤其是在他们使用的开源许可证中。所以当你提出一个新的提案时,这并不像一桩普通的商业交易。它不是双边谈判,而更像是一份多方参与的和平条约:各方都以牺牲部分行动自由为代价,来换取一个最佳的妥协。在这个讨论中,你只是众多参与方之一,你需要解释为什么你的提案对所有人都有益。在多方之间达成修订本质上是缓慢的,所以不要设定最后期限。无论你做什么,都不要建议对开源许可证本身进行更改!
### 首先学习
现有的共识和流程的存在是有原因的。你应该了解每个要素背后的原因,最好连同其形成的历史一起了解,然后再提出修改。这样,你就能把你的提案放在社区发展的脉络中来表述,也能避免被人拿社区历史来给你上课(那样既浪费社区资源,又会降低你成功的机会)。回看邮件列表,并向开发人员询问相关的历史和来龙去脉。
### 透明
开源开发人员使用一个迭代、增量修改的过程。即使需要大的变化,它几乎总是用一系列更小、更好的解释或不言而喻的正确变化来实现的,这样每个人都可以跟进并支持。你提出的更改也是如此。不要弄出新的贡献者协议或者修改过的许可证,并期望每个人都相信你是专家、一切都是对的。你需要提供一根“红线”(相当于法律文件的差异),记录每个变化,并提供一个承认任何社区影响并为其辩护的理由。如果你*只是*为了你自己的利益需要一个东西,那就承认它,而不是希望没有人会注意到。
### 谦逊
你是一个炙手可热的律师,而你认为只有程序员才使用邮件列表。在你看来,他们显然缺乏讨论的经验,所以你安排了一个你认为和他们对等的代理人来把一切简单化,或者提议与社区选定的律师进行一对一的讨论。很抱歉,这些做法全都是错的。由于社区的政策是多方协商一致形成的,他们很可能清楚自己为什么定下现在的这些决定。邮件列表上的一些人具有优秀的领域知识,可能比你的还要好。而且提出一对一这件事是终极的羞辱,就像在问是否有一个成年人可以跟你说话。
### 不要秘密渠道
你有可能认识某个领导机构里的人:也许你认识某家参与公司负责法务的副总裁,也许你认识社区的总法律顾问。虽然在某些情况下,询问一些如何推进流程的提示或许可以接受,但如果试图通过私下渠道的讨论或协商来影响甚至决定结果,后果会很糟糕。你最终可能会被邀请进行一对一的讨论,但你不应该主动要求或期待它。
### 成为一个成员
如果你一切都做得正确,那么社区就有可能尊重你。坚持这些。作为一名冷静、机智的贡献者建立你的声誉。当人们犯你犯过的错误(或者已避免的)时,帮助他们。作为邮件列表社区的值得信赖的参与者,你是项目和雇主的真正资产。继续贡献,一些项目最终会在它们的治理中为你提供一个角色。
*这个文章的早期版本[最初发表](https://meshedinsights.com/2017/02/28/engaging-communities-on-legal-matters-7-tips/)在 Meshed Insights 中。*
(题图: opensource.com)
---
作者简介:
Simon Phipps - 计算机行业和开源老手 Simon Phipps 上线了 Public Software,一个欧洲的开源项目托管,Document Foundation 的志愿者总监。他的帖子由 Patreon 赞助者赞助 - 如果你想要看更多,成为其中一个!
---
via: <https://opensource.com/open-organization/17/3/legal-matters-community>
作者:[Simon Phipps](https://opensource.com/users/simonphipps) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,708 | 成为一名软件开发者你应该学习哪种语言? | https://www.linuxcareer.com/do-you-have-what-it-takes-to-be-a-software-developer | 2017-07-19T21:08:09 | [
"编程语言"
] | https://linux.cn/article-8708-1.html | 
应用程序的领域在不断发展。底层的 Linux 做了很多工作,而且还在继续,但是在过去几年里,应用程序领域开始增长。在这种情况下,开发人员使用哪种语言构建这些应用程序?简而言之,要看情况,我知道这个说法没啥稀奇的。但是,通过我们拥有的数据,我们可以确定哪些语言领先。
站在山顶的语言是 Java。它已经出现在开源软件领域 15 年以上,但它并不是一直在顶峰。在早期,我们并没有看到多少对 Java 开发者的需求,但现在情况已经改变了。它是目前应用领域的权威领导者。虽然这个数字在过去六个季度没有明显增长,但其整体数量却令人印象深刻。平均而言,关注于开源软件的公司发布的职位中有超过 1/3 的职位要求 Java 技能。这对几年前没有在榜单上出现的语言而言是一个非凡的成就。而且,由于它在 Android 中的大量使用,未来这个数字进一步增加也并不奇怪。
在应用程序领域中使用的另一种语言是 C++。虽然它的数量不能与 Java 竞争,但它仍然在这个领域占据了很大的市场份额。而且每 3 个招聘中有一个要求 Java,C++ 则是每 4 个中有一个要求它。与 Java 非常类似,其数量在过去六个季度中保持相对稳定。C++ 一直被大量使用,即使 Java 已经取代它,它仍然是一种高度相关的语言。

进入到网络应用领域,多年来一直在城头变幻大王旗。在早期,大多数 Web 程序毫无疑问地选择使用 PHP 开发。正如之前关于脚本语言的文章所讨论的,这几年来情况已经发生了变化。在过去几年中,PHP 的使用似乎有所衰退。仅在过去一年半的时间里,就已经急剧下降了 30% 以上。这是一个令人震惊的数字,只有时间才能告诉我们这个趋势是否会持续。
最初打破 PHP 领导地位的是 Ruby on Rails。多年来,我看到公司们和开发者们进行了这一转型。Ruby on Rails 经历了一段时间,在这个时期它是这个领域的首选语言。然而,从我们收集的数字来看,它的光泽似乎已经失去了一点。虽然没有像 PHP 这样的衰退,但其数量一直保持相对平稳,它曾经有过的增长似乎停滞不前。

目前在网络应用程序领域的王者似乎是 Javascript。它获得了最大的总数。虽然它的数量保持平坦,这很像 Ruby on Rails,但它已经吸引了更多的观众。平均来说,过去六个季度,公司在分析的 10,000 份工作清单中有 1,500 份需要 Javascript 技能。这比 PHP 或 Ruby on Rails 多了 70%。
随着 PHP 的衰落以及 Ruby on Rails 和 Javascript 停滞不前,是谁在 Web 程序领域保持增长呢?这个群体的突出者似乎是 Golang。它在 2007 年由 Google 内的几位开发人员创建,似乎这种语言开始获得更广泛的受众群体。虽然与我们讨论的其它三者的总数相比不多,但看起来这一年半以来增长了 50%。如果这种趋势继续下去,那将是非常有趣的。在我看来,我预计我们会继续看到 Golang 挤占其它三者的份额。

如往常一样,我们会监测这些语言的各种前进方向,以观察市场趋势。而且,榜单的任何新进入者都会被密切关注。这是一个令人兴奋和动态的发展领域。一个会提供随时间不断变化的结果。
---
via: <https://www.linuxcareer.com/do-you-have-what-it-takes-to-be-a-software-developer>
作者:[Brent Marinaccio](https://www.linuxcareer.com/do-you-have-what-it-takes-to-be-a-software-developer) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | [The application space is the place to be. A lot of work has been done in the low-level Linux arena, and it continues, but the growth over the last few years has been in the application space. With that being the case, which language are developers utilizing to build these apps? In short, it depends, which I know does not come as a huge surprise. But, with the data that we have, we are able to determine which languages are leading the way.]
The language that finds itself on the top of the mountain is Java. Being around open source software for over 15 years, this was not always the case. Early on, we did not see a lot of interest in Java developers, but boy has that changed. It is the definitive leader in the application space currently. While the numbers have not grown in the last six quarters, the sheer overall number is impressive. On average, companies are asking for Java skills in over 1 in 3 job postings focused on FLOSS. Quite a feat for a language that did not register on the radar years ago. And, based on its heavy use with Android, it would not be a surprise to see this number increase in the future.
Another language that is used prominently in the application space is C++. While its numbers can't quite compete with that of Java, it still commands a large marketshare in this arena. Whereas Java is asked for in 1 of 3 postings, C++ is required in 1 of 4. Much like that of Java, its numbers have remained relatively stable over the last six quarters. C++ has always been heavily utilized, and even though Java has superseded it, it remains a highly relevant language.
Moving toward the web application space, there has been a changing of the guard over the years. Early on, the clear choice was to develop most web applications utilizing PHP. As was discussed in the previous article on scripting, this has changed over the years. There appears to have been some deterioration in the usage of PHP in the last couple of years. In the last year and a half alone, there has been a precipitous decline of over 30%. That is an alarming number, and only time will tell if the trend continues.
Claiming some of PHP's thunder initially was that of Ruby on Rails. For a number of years, I watched companies and developers make that transition. Ruby on Rails went through a period of time where it was “the” language of choice in this space. However, from the numbers we have gathered, it appears that its luster has lost a little of its edge. While it is not experiencing any kind of decline like that of PHP, its numbers have been remained relatively flat, so the growth that it once experienced appears to have stagnated.
With PHP in decline and Ruby on Rails and Javascript stagnate, is anyone in the web application space growing? The outlier in this group seems to be Golang. Created by a couple of developers inside Google in 2007, it appears that this language is starting to gain a wider audience. While the overall numbers pale in comparison to the other three discussed, it has seen a 50% increase in the last year and a half. It will be very interesting to watch if this trend continues. In my opinion, I expect that we will continue to see gains in Golang at the expense of the other three. |
8,709 | 如何在 Linux 中恢复仍在活动进程中的已删除文件 | http://www.linuxnov.com/recover-deleted-files-still-running-active-processes-linux/ | 2017-07-20T09:38:10 | [
"删除",
"恢复"
] | https://linux.cn/article-8709-1.html |
>
> 使用终端恢复你 Linux 系统上仍在运行进程的已删除文件的快速指南。
>
>
>

许多情况下,删除的文件都可以恢复,比如该文件正被某个活动的进程打开,并且正被单个或多个用户使用时。在 Linux 系统中,每个正在运行的进程都会获得一个 ID,称为进程标识符(PID),它们都存放在 `/proc` 目录中。这正是我们恢复那些仍被运行中的进程(具有 PID)打开着的已删除文件所需要的东西。下面就来介绍我们如何做到这一点。
假设你打开了一个压缩文件,之后你删除了这个文件。为了演示目的,压缩文件称为 “opengapps.zip”,这将是之后我们将打开和删除的文件。
### 计算原始文件的 MD5 哈希
删除之前,我们将计算该文件的 MD5。这样我们可以将原来的 MD5 哈希值与恢复文件的 MD5 哈希进行比较。这个过程将保证我们恢复的压缩文件的完整性是一样的,它没有被破坏。
```
md5sum opengapps.zip >> md5-opengapps.txt
```
要显示文本文件的内容。
```
cat md5-opengapps.txt
86489b68b40d144f0e00a0ea8407f7c0 opengapps.zip
```
检查压缩文件的 MD5 哈希值之后。我们将压缩文件保持打开(LCTT 译注:此处是使用 file-roller 这个图形界面的解压程序保持对该压缩文件的打开,其内置在 GNOME 环境中;在桌面环境中,使用桌面工具打开一个压缩包也能起到同样的作用。又及,本文举例不是很恰当,如果是删除了某个服务进程的已经打开的配置文件,那么这种恢复就很有意义),并将其删除。之后,我们将从文件的恢复过程开始,步骤如下:
```
rm opengapps.zip
```
### 删除文件的恢复过程
正如我们前面提到的,运行的进程在 `/proc` 目录中。我们可以使用以下命令搜索该目录中需要的进程:
由于我们已经知道文件名包括 .zip 扩展名,因此我们可以使用 .zip 扩展名进行搜索。它将限制输出结果并显示所需的进程。
```
ps -axu | grep .zip
m 13119 0.8 1.0 121788 30788 ? Sl 06:17 0:00 file-roller /home/m/Downloads/Compressed/opengapps.zip
m 13164 0.0 0.0 5108 832 pts/20 S+ 06:18 0:00 grep --color=auto .zip
```
然后我们将进入到包含 PID `13119` 的目录并打开 `fd` 子目录。
```
cd /proc/13119/fd
```
`fd`(文件描述符)目录包含多个文件,其中就有我们需要恢复的文件。这些文件都是指向实际文件的链接(从下面的列表可以看到,它们是符号链接)。`fd` 目录中的所有文件都以数字命名,分别链接到对应的文件。因此,要确定其中哪一个链接到了原始文件,我们将用详细列表选项列出 `fd` 目录。
```
ls -l
total 0
lr-x------ 1 m m 64 Jul 14 06:17 0 -> /dev/null
lrwx------ 1 m m 64 Jul 14 06:17 1 -> socket:[26161]
lrwx------ 1 m m 64 Jul 14 06:17 10 -> anon_inode:[eventfd]
lr-x------ 1 m m 64 Jul 14 06:17 11 -> anon_inode:inotify
lrwx------ 1 m m 64 Jul 14 06:17 12 -> socket:[5752671]
lr-x------ 1 m m 64 Jul 14 06:17 13 -> /home/m/Downloads/Compressed/opengapps.zip (deleted)
lrwx------ 1 m m 64 Jul 14 06:17 2 -> socket:[26161]
lrwx------ 1 m m 64 Jul 14 06:17 3 -> anon_inode:[eventfd]
lrwx------ 1 m m 64 Jul 14 06:17 4 -> anon_inode:[eventfd]
lrwx------ 1 m m 64 Jul 14 06:17 5 -> socket:[5751361]
lrwx------ 1 m m 64 Jul 14 06:17 6 -> anon_inode:[eventfd]
lrwx------ 1 m m 64 Jul 14 06:17 7 -> anon_inode:[eventfd]
lrwx------ 1 m m 64 Jul 14 06:17 8 -> socket:[5751363]
lrwx------ 1 m m 64 Jul 14 06:17 9 -> socket:[5751365]
```
正如你在终端输出中看到的,原始文件 “opengapps.zip” 已被删除,但 PID 为 `13119` 的进程中编号为 `13` 的文件描述符仍然链接着它。不过,我们仍然可以通过把这个链接指向的文件复制到安全的地方来恢复它。
```
cp 13 /home/m/Downloads/Compressed
```
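顺便一提,如果事先并不知道是哪个进程占着文件,也可以直接在整个 `/proc` 里搜索所有“已删除但仍被打开”的文件(示意;`-lname` 和 `-printf` 依赖 GNU find,查看其他用户的进程需要 root 权限):

```shell
# 列出所有仍被某个进程打开着的已删除文件;
# 这类符号链接的目标会带有 “ (deleted)” 后缀,正好可以用 -lname 匹配
find /proc/[0-9]*/fd -lname '*(deleted)' -printf '%p -> %l\n' 2>/dev/null
```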
文件复制完成后,我们回到存放恢复文件的目录,并使用以下命令重命名它。
```
mv 13 opengapps-recovered.zip
```
### 计算恢复文件的 MD5 哈希
由于我们已经恢复了该文件。让我们检查该文件的完整性,这只是为了确保文件没有损坏,并且和原来一样。早先我们保存了原始文件的 MD5 哈希值。
```
md5sum opengapps-recovered.zip >> md5-opengapps.txt
```
该命令将检查文件的 MD5 哈希值,并在文件中追加新恢复文件的 MD5 哈希值,以轻松比较两个 MD5 哈希值。
可以显示文本文件的内容来比较原始文件和恢复文件的 MD5 哈希值。
```
cat md5-opengapps.txt
86489b68b40d144f0e00a0ea8407f7c0 opengapps.zip
86489b68b40d144f0e00a0ea8407f7c0 opengapps-recovered.zip
```
恢复文件的 MD5 哈希是一样的。所以,我们成功地恢复了我们以前删除的文件,并且恢复后文件完整性一致,并且工作正常。
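这一步比对也可以交给 `md5sum -c` 自动完成(示意:这里用一个临时的演示文件代替真实的压缩包):

```shell
# 用 md5sum -c 自动校验恢复出的文件与原始哈希是否一致
tmp=$(mktemp -d) && cd "$tmp"
echo demo > opengapps.zip                     # 演示用文件,代替真实压缩包
md5sum opengapps.zip > md5-opengapps.txt
cp opengapps.zip opengapps-recovered.zip      # 假装这是恢复出来的文件
sed 's/opengapps.zip/opengapps-recovered.zip/' md5-opengapps.txt | LC_ALL=C md5sum -c -
# 一致时会输出类似:opengapps-recovered.zip: OK
```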
[](http://www.linuxnov.com/wp-content/uploads/2017/07/Recovering-a-deleted-file-using-terminal-LinuxNov.png)
**注意:** 在某些情况下,有些文件无法通过 `ps -axu` 命令看到。这时可以检查正在运行的程序,并尝试从中恢复文件。
假设我们有一个使用 Totem 媒体播放器播放中的以 .avi 为扩展名的视频。你需要做的就是检查 Totem 的 PID,并按照本示例中提到的相同说明进行操作。
要查找正在运行的程序的 PID,请使用以下命令,后面跟程序的名称。
```
pidof 程序名
```
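把全文的步骤串起来,可以用下面这个自包含的小实验完整演示一遍(文件名均为演示用;`tail -f` 只是用来让文件保持被进程打开的状态):

```shell
# 1. 建一个文件,并让一个进程保持打开它
dir=$(mktemp -d)
echo "hello recovery" > "$dir/demo.txt"
tail -f "$dir/demo.txt" >/dev/null 2>&1 &
pid=$!
sleep 1

# 2. “误删”这个文件
rm "$dir/demo.txt"

# 3. 在 /proc/<PID>/fd 里找到仍指向它的文件描述符
fd=
for f in /proc/$pid/fd/*; do
  case "$(readlink "$f" 2>/dev/null)" in
    *demo.txt*) fd=$f ;;
  esac
done

# 4. 把内容复制出来,完成恢复
cp "$fd" "$dir/recovered.txt"
kill $pid 2>/dev/null
cat "$dir/recovered.txt"      # 输出:hello recovery
```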
---
via: <http://www.linuxnov.com/recover-deleted-files-still-running-active-processes-linux/>
作者:[mhnassif](http://www.linuxnov.com/author/mhnassif/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
8,710 | 物联网是 Linux 的未来么? | http://www.datamation.com/open-source/is-iot-the-future-of-linux.html | 2017-07-20T13:40:49 | [
"物联网",
"IoT"
] | https://linux.cn/article-8710-1.html |
>
> Linux 无疑将在物联网中扮演一个关键角色,但是其光彩将与其它的一些分享。
>
>
>
随着 [Canonical 重新将重心放在](https://insights.ubuntu.com/2017/04/05/growing-ubuntu-for-cloud-and-iot-rather-than-phone-and-convergence/)盈利和新技术上,我们中的一些人开始思考:Linux 的未来将走向何方?IoT(物联网)是否就是 Linux 的未来?本文旨在回答这两个问题。

### Mycroft 运行于 Linux
大多数非技术圈的人并不知道,实际上有好几个不同的 IoT 项目,它们并不是我们在网上见惯了广告的那种过度商业化产品,其中最成功的就是 [Mycroft](https://mycroft.ai/) 项目。
使得 Mycroft 有趣的部分原因是你不需要在专门的硬件上得到它。这意味着你可以将其下载到 Raspberry Pi 或甚至您自己的 PC 上。这是物联网领域更常见的来自其它厂商的商业化替代品中所没有出现的自由元素。Mycroft 项目的另一个有趣的事实是,它最初是众筹的,所以从一开始它就是真正的社区驱动的项目。
那么它的技能(skill)——这个用来描述它能力的术语——怎么样?目前,我听到一些褒贬不一的评论。通过查看 Github 上列出的技能,其整个列表似乎相当令人印象深刻。更深层次挖掘的话,很容易看出,它的许多技能比使用专有的 IoT 设备要好。
值得注意的是,为物联网设备开发的官方技能与社区成员开发的功能之间存在明显的区别。Mycroft 的官方技能列表其实相当薄弱。平心而论,虽然 Linux 能运行在物联网设备上这件事很酷,但令我难以置信的是,在 [Mycroft Github](https://github.com/MycroftAI/mycroft-skills) 页面上居然没有一个官方的邮件检查技能。好吧,在社区技能部分有一个 Gmail 技能,但它带着一个问号,因为它显然还没有被验证过是否可用。
### Google Home - 这是一个包含在谜语中的 Linux 谜题
那么 Google 的物联网产品 Google Home 呢?它当然运行在 Linux 上,对吧?是的,广义上说是这样……事实证明,Google Home [基于 Chromecast](https://www.theverge.com/circuitbreaker/2016/5/31/11822032/google-home-chromecast-android)。那 Chromecast 呢?它基于 Google TV。我们还在 Linux 的范畴里么?不完全是。
显然,Chromecast 基本上运行的是 [Android 的精简版](https://www.extremetech.com/computing/162463-chromecast-hacked-its-based-on-google-tv-and-android-not-chrome-os)。而且我们大多数人都知道,Android 确实使用了 Linux 内核的定制版本。
在这一点上,我觉得我们需要问自己 - Google 是我们可以想出的最好的 Linux 物联网代表吗?我认为不是,因为我觉得他们会愿意做出隐私妥协,而这是我们在一个纯粹的 Linux 物联网环境中所不愿见的。 但这只是我个人的信仰。
假设我们愿意接受 Google Home 这种隐私方面的可疑而带来的好处,也假设有在底层有一些可辨识出来的 Linux 成分,那么与 Mycroft 的纯粹的开源体验相比如何呢?
目前,谷歌正在着手改变这个局面。首先,如果你愿意,你可以把 Google Home 的“大脑”(称为 Google Assistant)安装到树莓派上。这可以通过 [Google Assistant SDK](https://developers.google.com/assistant/sdk/) 实现。
如你猜的那样,这个 SDK 可以在 Linux 上安装。安装完 portaudio、各种库和用 pip 安装 google-assistant-sdk 之后,你可以开始用树莓派进行通话了,就像 Google Home 设备一样。
回到实际 Google Home 设备本身,你可能会想知道它的可用技能?开箱即用,它提供与 Google Play 音乐、Pandora、Spotify 和 iHeart Radio 以及其他流式音乐服务的音乐播放。Google Home 不仅拥有比 Mycroft 更多的“交流”技能,它还可以与像 Netflix 这样的服务和诸如 Philips、Nest 和 [IFTTT](https://ifttt.com/google_assistant) 等各种智能家居任务的家庭品牌一同工作。我有提到它还可以安排 Google 日历或者订购披萨么?
相比之下,Mycroft 对想要自己创造技能的 DIY 开发者来说更合适,而 Google Home 则是现在就能用起来,而不用等到某一天。
### Amazon Echo 可以运行于 Linux
我首先要承认的是我不知道 Amazon Echo 本身是否运行在 Linux 的某些元素上。也就是说,我知道你可以将 Echo 背后的大脑安装到 Linux 驱动的树莓派上!当[第一次发布派上的版本时](https://www.raspberrypi.org/blog/amazon-echo-homebrew-version/),有点让人失望的是,你不得不按一个按钮来激活 Echo 的聆听模式。
如今,派上的 Echo 已经支持用可编程的“热词”来激活它。这意味着你可以运行一个安装了 Linux 的派,其操作方式与官方 Amazon Echo 相似。然后,如果你买了 Echo Dot,你就可以省掉这些额外的工作,不必像装 Mycroft 那样在树莓派上折腾那些极客的步骤。
就像 Mycroft 和 Google Home 一样,Amazon Echo 可以在派上使用很重要,因为它使任何人都可以使用物联网技术 - 而不仅仅是那些选择官方硬件的人。而且由于亚马逊已经有更多的时间来开发这项技术,因此,可以说 Echo 是超前于可编程技能竞争以及整体进度的。
所以即使 Google Home 在问题回答上做的更好,但是 Echo 支持更多的第三方物联网设备,有些人认为它比 Google Home 的声音更自然。就个人而言,我认为两台设备的声音听起来都不错。但这只是我的意见。
### 物联网是 Linux 最好的
假如我们用一点时间来继续看看这些与 Linux 兼容的物联网设备或者像 Mycroft 这样真正使用 Linux 的社区伙伴的项目,有一点是可以肯定的,Linux 仍然是等式的一部分。
我认为不使用像 Linux 这样的自由/开放源代码平台是愚蠢的。毕竟,这些设备往往会连接到其他物联网自动化组件。这意味着安全性是一个真正的考虑。在 Linux 下运行物联网意味着我们可以有一个社区确保安全,而不是希望制造商为我们做到这一点。
需要一个例子说明为什么这很重要吗?看看那些不运行开源固件的路由器,当制造商停止支持该设备时会发生什么 - 安全风险开始出现。
物联网是 Linux 的未来吗?在我看来,我认为是……但不是全部。我认为对许多人来说,这是前进的道路。但是最后,我认为在 Linux 之上将会有许多专有的“东西”,只有像 Mycroft 这样纯粹的项目才能保持 Linux。
那么你怎么看?你认为像 Mycroft 这样的开源项目现在与 Google 和 Amazon 的产品在正常竞争么?反之,你觉得还有其他基于 Linux 的产品更适合这项工作么?无论是什么,点击评论,让我们来谈谈。
---
via: <http://www.datamation.com/open-source/is-iot-the-future-of-linux.html>
作者:[Matt Hartley](http://www.datamation.com/author/Matt-Hartley-3080.html) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,711 | Linux Bash 提示符的一些骚操作 | https://opensource.com/article/17/7/bash-prompt-tips-and-tricks | 2017-07-21T08:40:00 | [
"终端",
"命令行提示符"
] | https://linux.cn/article-8711-1.html |
>
> 一些能让你自定义 Bash 提示符的黑科技
>
>
>

当你在 Linux 环境下打开一个 Shell 终端时,会看到命令行中出现了类似下面的一个 Bash 提示符:
```
[user@$host ~]$
```
你知道命令行提示符其实是可以自己设置添加许多非常有用的信息的吗?在这篇文章中我就会教你如何自定义自己的 Bash 命令行提示符,想看的话就接着看吧~
### 如何设置 Bash 提示符
Bash 提示符是通过环境变量 `PS1` (<ruby> 提示符字符串 1 <rt> Prompt String 1 </rt></ruby>) 来设置的,它用于交互式 shell 提示符。当然如果你需要更多的输入才能完成一个 Bash 命令时,`PS2` 环境变量就是用来设置多行提示符的:
```
[dneary@dhcp-41-137 ~]$ export PS1="[Linux Rulez]$ "
[Linux Rulez] export PS2="... "
[Linux Rulez] if true; then
... echo "Success!"
... fi
Success!
```
### 在哪里设置 PS1 的值?
`PS1` 就是一个普通的环境变量,系统默认值设置在 `/etc/bashrc` 中,在我的系统中,默认提示符通过以下命令来设置的:
```
[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "
```
它判断 `PS1` 是否是系统的默认值 `\s-\v$` ,如果是的话则将值设置为 `[\u@\h \W]\$`。(LCTT 译注:注意命令中用 `\` 做了转义。)
但如果你想要自定义提示符,不应该修改 `/etc/bashrc` ,而是应该在你的主目录下将自定义命令加到 `.bashrc` 文件中。
### 上面提到的 `\u`、`\h`、`\W`、`\s` 和 `\v` 是什么意思?
在 `man bash` 中的 PROMPTING 章节中,你能够找到所有 `PS1` 和 `PS2` 相关的特殊字符的描述,以下是一些比较常用的:
* `\u`:用户名
* `\h`:短主机名
* `\W`:当前你所在的目录的名称(basename),`~` 表示你的主目录
* `\s`:Shell 名字(bash 或者 sh,取决于你的 Shell 的名字是什么)
* `\v`:Shell 的版本号
### 还有哪些特殊的字符串可以用在提示符当中
除了上面这些,还有很多有用的字符串可以用在提示符当中:
* `\d`:将日期扩展成 “Tue Jun 27” 这种格式
* `\D{fmt}`:允许自定义日期格式——可通过 `man strftime` 来获得更多信息
* `\D{%c}`:获得本地化的日期和时间
* `\n`:换行(参考下面的多行提示符)
* `\w`:显示当前工作目录的完整路径
* `\H`:当前工作机器的完整主机名
除了以上这些,你还可以在 Bash 的 man 页面的 PROMPTING 部分找到更多的特殊字符和它的用处。
### 多行提示符
如果你的提示符过长(比如说你想包括 `\H`、`\w` 或完整的日期时间时),想将提示符切成两行,可以使用 `\n` 将提示符切断成两行显示。比如下面的多行例子会在第一行显示日期、时间和当前工作目录,第二行显示用户名和主机名:
```
PS1="\D{%c} \w\n[\u@\H]$ "
```
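顺带一个小技巧:在 bash 4.4 及之后的版本里,可以用 `${变量@P}` 这种参数展开先预览某个提示符字符串的渲染效果,而不必真的开一个新 shell(示意):

```shell
# 预览 PS1 字符串的展开效果(需要 bash >= 4.4 的 ${var@P} 展开)
bash -c 'p="[\u@\h \W]\$ "; echo "${p@P}"'
# 输出形如:[dneary@dhcp-41-137 ~]$
```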
### 还能再好玩点吗?
人们偶尔也想将提示符变成彩色的。虽然我觉得彩色提示符让人分心、易怒,但是也许你很喜欢。如果我们想将日期变成红色的,目录变成青蓝色,用户名搞一个黄色背景,你可以这样做:
```
PS1="\[\e[31m\]\D{%c}\[\e[0m\]
\[\e[36m\]\w\[\e[0m\]\n[\[\e[1;43m\]\u\[\e[0m\]@\H]$ "
```
* `\[..\]` :表示一些非打印字符
* `\e[..` :转义字符,后面跟着的特定转义序列在终端中表示颜色或者其他含义
* `31m` :表示红色字体(`41m` 表示红色背景)
* `36m` :表示青蓝色字体
* `1;43m` :表示黄色背景(`1;33m` 表示黄色字体)
* `\[\e[0m\]` :放在最后,将颜色恢复成终端的默认颜色
你可以在 [Bash prompt HOWTO](http://tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html) 这里找到更多的颜色代码,甚至可以让字符反相和闪烁!我不知道为什么地球人会有这种想法,但是你可以这么干!
所以你最喜欢的自定义提示符是什么样子的呢?有没有让你抓狂的自定义提示符呢?请在评论里告诉我吧~
(照片来源:[ajmexico](https://www.flickr.com/photos/15587432@N02/3281139507/). 修改自 [Jason Baker](https://opensource.com/users/jason-baker). [CC BY-SA 2.0](https://creativecommons.org/licenses/by/2.0/).)
---
作者简介:
Dave Neary - Dave Neary 是红帽开源和标准化团队成员,帮助开源项目对红帽的成功至关重要。自从在 1999 年为 GIMP 提交了第一个补丁以来,他一直带着各种不同的帽子,在开源的世界徜徉。
---
via: <https://opensource.com/article/17/7/bash-prompt-tips-and-tricks>
作者:[Dave Neary](https://opensource.com/users/dneary) 译者:[吴霄/toyijiu](https://github.com/toyijiu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Anyone who has started a terminal in Linux is familiar with the default Bash prompt:
```
[user@$host ~]$
```
But did you know is that this is completely customizable and can contain some very useful information? Here are a few hidden treasures you can use to customize your Bash prompt.
## How is the Bash prompt set?
The Bash prompt is set by the environment variable **PS1** (Prompt String 1), which is used for interactive shell prompts. There is also a **PS2** variable, which is used when more input is required to complete a Bash command.
```
[dneary@dhcp-41-137 ~]$ export PS1="[Linux Rulez]$ "
[Linux Rulez] export PS2="... "
[Linux Rulez] if true; then
... echo "Success!"
... fi
Success!
```
## Where is the value of PS1 set?
PS1 is a regular environment variable.
The system default value is set in **/etc/bashrc**. On my system, the default prompt is set with this line:
```
[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "
```
This tests whether the value of PS1 is **\s-\v$** (the system default value), and if it is, it sets PS1 to the value **[\u@\h \W]\\$**.
If you want to see a custom prompt, however, you should not be editing **/etc/bashrc**. You should instead add it to **.bashrc** in your **Home** directory.
## What do \u, \h, \W, \s, and \v mean?
In the **PROMPTING** section of **man bash**, you can find a description of all the special characters in **PS1** and **PS2**. The following are the default options:
- **\u**: Username
- **\h**: Short hostname
- **\W**: Basename of the current working directory (**~** for home, the end of the current directory elsewhere)
- **\s**: Shell name (**bash** or **sh**, depending on how the shell is called)
- **\v**: The shell's version
## What other special strings can I use in the prompts?
There are a number of special strings that can be useful.
- **\d**: Expands to the date in the format "Tue Jun 27"
- **\D{fmt}**: Allows custom date formats—see **man strftime** for the available options
- **\D{%c}**: Gives the date and time in the current locale
- **\n**: Include a new line (see multi-line prompts below)
- **\w**: The full path of the current working directory
- **\H**: The full hostname for the current machine
- **\!**: History number—you can run any previous command with its history number by using the shell history event designator **!** followed by the number for the specific command you are interested in. (Using Linux history is yet another tutorial...)
There are many other special characters—you can see the full list in the **PROMPTING** section of the **Bash man page**.
## Multi-line prompts
If you use longer prompts (say if you include **\H** or **\w** or a full **date-time**), you may want to break things over two lines. Here is an example of a multi-line prompt, with the date, time, and current working directory on one line, and **username @hostname** on the second line:
```
PS1="\D{%c} \w\n[\u@\H]$ "
```
## Are there any other interesting things I can do?
One thing people occasionally do is create colorful prompts. While I find them annoying and distracting, you may like them. For example, to change the date-time above to display in red text, the directory in cyan, and your username on a yellow background, you could try this:
```
PS1="\[\e[31m\]\D{%c}\[\e[0m\]
\[\e[36m\]\w\[\e[0m\]\n[\[\e[1;43m\]\u\[\e[0m\]@\H]$ "
```
To dissect this:
- **\[..\]** declares some non-printed characters
- **\e[..** is an escape character. What follows is a special escape sequence to change the color (or other characteristic) in the terminal
- **31m** is red text (**41m** would be a red background)
- **36m** is cyan text
- **1;43m** declares a yellow background (**1;33m** would be yellow text)
- **\[\e[0m\]** at the end resets the colors to the terminal defaults
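If you want to experiment safely, you can build the prompt string in a plain variable first and only assign it to **PS1** once it looks right. Here is a minimal sketch; the variable names (**RED**, **MY_PS1**, and so on) are just placeholders made up for this example:

```shell
# Build a colored prompt string without touching the live prompt yet.
RED='\[\e[31m\]'
CYAN='\[\e[36m\]'
RESET='\[\e[0m\]'

# Red date-time, cyan working directory, then username@host on a new line.
MY_PS1="${RED}\D{%c}${RESET} ${CYAN}\w${RESET}\n[\u@\H]$ "

# Inspect the raw string first; when satisfied, apply it with: PS1="$MY_PS1"
printf '%s\n' "$MY_PS1"
```

Because the color codes live in small shell variables, swapping out the whole color scheme later is a one-line change.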
You can find more colors and tips in the [Bash prompt HOWTO](http://tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html). You can even make text inverted or blinking! Why on earth anyone would want to do this, I don't know. But you can!
What are your favorite Bash prompt customizations? And which ones have you seen that drive you crazy? Let me know in the comments.
|
8,712 | 安卓编年史(31):安卓 6.0 棉花糖 | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/ | 2017-07-21T10:06:47 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8712-1.html | 
### 安卓 6.0 棉花糖
2015 年 10 月,谷歌给世界带来了安卓 6.0 棉花糖。配合这个版本的发布,谷歌委托生产了两部新的 Nexus 设备:[华为 Nexus 6P 和 LG Nexus 5X](http://arstechnica.com/gadgets/2015/10/nexus-5x-and-nexus-6p-review-the-true-flagships-of-the-android-ecosystem/)。除了常规的性能升级,新手机还带有一套关键硬件:为棉花糖的新指纹 API 准备的指纹识别器。棉花糖还引入了一个疯狂的全新搜索特性,被称作“Google Now on Tap”,用户控制的应用权限,一个全新的数据备份系统,以及许多其它的改良。

#### 新谷歌应用
棉花糖是[谷歌大标志重新设计](http://arstechnica.com/gadgets/2015/09/google-gets-a-new-logo/)后的第一个安卓版本。系统也随之升级,主要是一个新的谷歌应用,给搜索小部件,搜索页面以及应用图标添加了一个多彩的标志。

谷歌将应用抽屉从页面导航的横向布局还原回了单页竖直滚动表的形式。早期版本的安卓用的都是竖直滚动表的形式,直到谷歌在蜂巢中改成了横向页面系统。滚动单页面让人更容易从很多应用中找到目标。一项“快速滚动”的特性同样好用,它可以让你拖动滚动条来激活字母索引。新的应用抽屉布局也用到了小部件抽屉上。考虑到旧系统中小部件轻松就超过了 15 页,这是个大改进。

棉花糖应用抽屉顶部的“建议应用”栏也让查找应用变得更快。该栏显示的内容一直在变化,试图在你需要的时候为你提供你需要的应用。它使用了算法来统计应用使用,经常一起打开的应用以及每天的打开次数。
#### Google Now on Tap——一个没有完美实现的特性

棉花糖的头等新特性之一就是“Google Now on Tap”。有了 Now on Tap,你可以在安卓的任意界面长按 home 键,安卓会将整个屏幕发送给谷歌进行处理。谷歌会试着分析页面上的内容,并从屏幕底部弹出显示一个特殊的搜索结果列表。

Now on Tap 产生的结果不是通常的 10 个蓝色链接——即便那必定有一个通往谷歌搜索的链接。Now on Tap 还可以深度连接到其它使用了谷歌的应用索引功能的应用。他们的想法是你可以在 Youtube 音乐视频那里唤出 Now on tap,然后获得一个到谷歌 Play 或亚马逊“购买”页面的链接。在演员新闻文章处唤出 Now on Tap 可以链接到 IMDb 应用中该演员的页面上。

谷歌没有让这成为私有特性,而是给安卓创建了一个全新的“Assistant API(助理 API)”。用户可以挑选一个“助理应用”,它可以在长按 home 键的时候获取很多信息。助理应用会获取所有由当前应用加载的数据——不仅是直接从屏幕获取到的——连同所有这些图片还有任何开发者想要包含的元数据。这个 API 驱动了谷歌 Now on Tap,如果愿意的话,它还允许第三方打造 Now on Tap 的竞争对手。
谷歌在棉花糖的发布会上炒作了 Now on Tap,但实际上,这项特性不是很实用。谷歌搜索的价值在于你可以问它准确的问题——你输入你想要的内容,它搜索整个互联网寻找答案或网页。Now on Tap 让事情变得无限困难,因为它甚至不知道你要问的是什么。你带着特定意图打开了 Now on Tap,但你发送给谷歌的查询是很不准确的“屏幕上的所有内容”。谷歌需要猜测你查询的内容然后试着基于它给出有用的结果或操作。
在这背后,谷歌可能在疯狂处理整个页面文字和图片来强行获得你想要的结果。但往往 Now on Tap 给出的结果像是页面每个合适的名词的搜索结果列表。从结果列表中筛选多个查询就像是陷入必应的“[搜索结果过载](https://www.youtube.com/watch?v=9yfMVbaehOE)”广告里那样的情形。查询目标的缺失让 Now on Tap 感觉像是让谷歌给你读心,而它永远做不到。谷歌最终给文本选中菜单打了补丁,添加了一个“助理”按钮,给 Now on Tap 提供一些它极度需要的搜索目标。
不得不说 Now on Tap 是个失败的产物。Now on Tap 的快捷方式——长按 home 键——基本上让它成为了一个隐藏,难以发现的特性,很容易就被遗忘了。我们推测这个特性的使用率非常低。即便用户发现了 Now on Tap,它经常没法读取你的想法,在几次尝试之后,大多数用户可能会选择放弃。
随着 2016 年 Google Pixels 的发布,谷歌似乎承认了失败。它把 Now on Tap 改名成了“屏幕搜索”,并把它降级为谷歌助理的辅助功能。谷歌助理——谷歌的新语音命令系统——接管了 Now on Tap 的 home 键手势,并将它关联到了语音系统激活后的二次手势。谷歌似乎还从 Now on Tap 差劲的可发现性中学到了教训:谷歌为助理在 home 键上添加了一组带动画的彩点,帮助用户发现并记住这个特性。
#### 权限

安卓 6.0 终于引入了应用权限系统,让用户可以细粒度地控制应用可以访问的数据。


应用在安装的时候不再给你一份长长的权限列表。在棉花糖中,应用安装根本不询问任何权限。当应用需要一个权限的时候——比如访问你的位置、摄像头、麦克风,或联系人列表的时候——它们会在需要用到的时候询问。在你使用应用的时候,如果需要新权限时会弹出一个“允许或拒绝”的对话框。一些应用的设置流程这么处理:在启动的时候询问获取一些关键权限,其它的等到需要用到的时候再弹出提示。这样更好地与用户沟通了需要权限是为了做什么——应用需要摄像头权限,因为你刚刚点击了摄像头按钮。


除了及时的“允许或拒绝”对话框,棉花糖还添加了一个权限设置界面。这个复选框大列表让数据敏感用户可以浏览应用拥有的权限。他们不仅可以通过应用来查询,也可以通过权限来查询。举个例子,你可以查看所有拥有访问麦克风权限的应用。

谷歌试验应用权限已经有一段时间了,这些设置界面基本就是隐藏的“[App Ops](http://www.androidpolice.com/2013/07/25/app-ops-android-4-3s-hidden-app-permission-manager-control-permissions-for-individual-apps/)”系统的重生,它是在安卓 4.3 中不小心引入并很快被移除的权限管理系统。

尽管谷歌在之前版本就试验过了,棉花糖的权限系统最大的不同是它代表了一个向权限系统的有序过渡。安卓 4.3 的 App Ops 从没有计划暴露给用户,所以开发者不了解它。在 4.3 中拒绝一个应用需要的某个权限,经常会导致奇怪的错误信息或者彻底的崩溃。棉花糖的系统对开发者来说则是可选择加入(opt-in)的:新的权限系统只适用于针对棉花糖 SDK 开发的应用,谷歌将它作为开发者已经为权限处理做好准备的信号。权限系统还允许在一项功能由于权限被拒绝而无法正常工作时与用户进行沟通。应用会被告知它们的权限请求被拒绝,它们可以指导用户在需要该功能的时候去打开该权限访问。
#### 指纹 API


在棉花糖出现之前,少数厂商推出了他们自己的指纹解决方案以作为对[苹果的 Touch ID](http://arstechnica.com/apple/2014/09/ios-8-thoroughly-reviewed/10/#h3) 的回应。但在棉花糖中,谷歌终于带来了生态级别的指纹识别 API。新系统包含了指纹注册界面,指纹验证锁屏以及允许应用将内容保护在一个指纹扫描或锁屏验证之后的 API。

Play 商店是最先支持该 API 的应用之一。你可以使用你的指纹来购买应用,而不用输入你的密码。Nexus 5X 和 6P 是最先支持指纹 API 的手机,手机背面带有指纹读取硬件。

指纹 API 推出后不久,就出现了安卓生态中罕见的合作场面:所有带有指纹识别的手机都使用谷歌的 API,并且大多数银行和购物应用都很好地支持了它。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/31/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,714 | Libral:一个提供资源和服务统一管理 API 的系统管理库 | https://opensource.com/article/17/5/intro-libral-systems-management-library-linux | 2017-07-22T12:32:42 | [
"Libral",
"Puppet",
"Ansible"
] | /article-8714-1.html |
>
> Libral 为系统资源和服务提供了一个统一的管理 API ,其可以作为脚本管理任务和构建配置管理系统的坚实基础。
>
>
>

作为继承了 Unix 的传统的 Linux 操作系统,其并没有一个综合的系统管理 API 接口,相反,管理操作是通过多种特定用途的工具和 API 来实现的,其每一个都有自己约定和特有的风格。这就使得编写一个哪怕是简单的系统管理任务的脚本也很困难、很脆弱。
举个例子来说,改变 “app” 用户的登录 shell 要运行 `usermod -s /sbin/nologin app`。这个命令通常没有问题,只是当系统上没有 “app” 用户时就不行了。为了解决这个例外错误,具有创新精神的脚本编写者也许要这样写:
```
grep -q app /etc/passwd \
&& usermod -s /sbin/nologin app \
|| useradd ... -s /sbin/nologin app
```
这样,当 “app” 用户存在于系统中时,会执行更改登录 shell 的操作;而当此用户不存在时,就会创建此用户。不幸的是,这种编写系统管理任务脚本的方式是不适合的:对于每一种资源来说都会有一套不同的工具,而且每个都有其不同的使用惯例;不一致性和经常出现的不完备的错误报告将会使错误的处理变得困难;再者也会因为工具自身的特性引发的故障而导致执行失败。
实际上,以上所举的例子也是不正确的:`grep` 并不是用来查找 “app” 用户的,它只能在文件 `/etc/passwd` 的一些行中简单的查找是否有字符串 “app”,在大多数情况下它或许可以工作,但是也可能会在最关键的时刻出错。
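作为对比,下面是一个小小的示意脚本(其中 `user_exists` 这个函数名是本文为演示而虚构的),它用 `getent` 来准确判断用户是否存在:`getent` 会查询 NSS 配置的所有用户来源(例如 LDAP),而不是仅仅在 `/etc/passwd` 里搜索字符串:

```shell
# 用 getent 准确判断用户是否存在,而不是在 /etc/passwd 里 grep 字符串
user_exists() {
    getent passwd "$1" >/dev/null
}

if user_exists root; then
    echo "用户 root 存在"
fi

if ! user_exists no_such_user_12345; then
    echo "用户 no_such_user_12345 不存在"
fi
```

不过,即便判断准确了,要把这样的检查和 `useradd`、`usermod` 逐一拼装成脚本仍然繁琐且容易出错,这正是 Libral 想用统一的 API 解决的问题。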
很显然,那些执行简单任务的脚本管理工具,很难成为大型管理系统的基础。认识到这一点,现有的配置管理系统,比如 Puppet、Chef 及 Ansible,围绕基本的操作系统资源的管理竭尽全力的建立其内部 API 就是明智之举。这些资源抽象是内部 API,其与所需的相应工具密切相关。但这不仅导致大量的重复性工作,也为尝试一个新创新的管理工具设置了强大的障碍。
在创建虚拟机或者容器镜像这一领域,这种障碍就变得非常明显:比如在创建镜像的过程中,就要么需要回答关于它们的简单问题,要么需要对其进行简单的更改才行。但是工具全都需要特别处理,那些所遇到的问题和更改需要用脚本逐个处理。因此,镜像构建要么依靠特定的脚本,要么需要使用(和安装)一个相当强大的配置管理系统。
[Libral](https://github.com/puppetlabs/libral) 将为管理工具和任务提供一个可靠的保证,通过对系统资源提供一个公用的管理 API,并使其可以通过命令行工具 `ralsh` 使用,它允许用户按照相同的方法查询和修改系统资源,并有可预见的错误报告。对以上的举例来说,可以通过命令 `ralsh -aq user app` 检查 “app” 用户是否存在;通过 `ralsh -aq package foo` 检查 “foo” 软件包是否已经安装;一般情况下,可以通过命令 `ralsh -aq TYPE NAME` 检查 ”NAME“ 是否是 ”TYPE“ 类型的资源。类似的,要创建或更改已存在的用户,可以运行:
```
ralsh user app home=/srv/app shell=/sbin/nologin
```
以及,要在文件 `/etc/hosts` 创建和修改条目,可以运行命令:
```
ralsh host myhost.example.com ip=10.0.0.1 \
host_aliases=myhost,apphost
```
以这种方式运行,“ralsh” 的使用者事实上完全隔离在那两个命令内部的不同运行工作机制之外:第一个命令需要适当的调用命令 `useradd` 或者 `usermod`,而第二个需要在 `/etc/hosts` 文件中进行编辑。而对于该用户来说,他们似乎都采取同样的模型:“确保资源处于我所需要的状态。”
### 怎样获取和使用 Libral 呢?
Libral 可以在[这个 git 仓库](https://github.com/puppetlabs/libral)找到并下载。其核心是由 C++ 编写的,构建它的说明可以[在该仓库中](https://github.com/puppetlabs/libral#building-and-installation)找到,不过只是在你想要为 Libral 的 C++ 核心做贡献的时候才需要看它。Libral 的网站上包含了一个 [预构建的 tarball](http://download.augeas.net/libral/ralsh-latest.tgz),可以用在任何使用 “glibc 2.12” 或者更高版本的 Linux 机器上。可以使用该 “tarball” 的内容进一步探究 ralsh ,以及开发新的提供者(provider),它使得 Libral 具备了管理新类型资源的能力。
在下载解压 tarball 后,在目录 `ral/bin` 下能找到 `ralsh` 命令。运行不需要任何参数的 `ralsh` 命令就会列举出来 Libral 的所有资源类型。利用 `--help` 选项可以打印输出关于 `ralsh` 的更多说明。
### 与配置管理系统的关系
知名的配置管理系统,如 Puppet、Chef 及 Ansible,解决了一些 Libral 所解决的同样的问题。将 Libral 与它们的区分开的主要是它们所做工作和 Libral 不同。配置管理系统被构建来处理跨大量节点管理各种事物的多样性和复杂性。而另一方面 Libral 旨在提供一个定义明确的低级别的系统管理 API ,独立于任何特定的工具,可用于各种各样的编程语言。
通过消除大型配置管理系统中包含的应用程序逻辑,Libral 从前面介绍里提及的简单的脚本任务,到作为构建复杂的管理应用的构建块,它在如何使用方面是非常灵活的。专注与这些基础层面也使其保持很小,目前不到 2.5 MB,这对于资源严重受限的环境,如容器和小型设备来说是一个重要考虑因素。
### Libral API
在过去的十年里,Libral API 是在实现配置管理系统的经验下指导设计的,虽然它并没有直接绑定到它们其中任何一个应用上,但它考虑到了这些问题,并规避了它们的缺点。
在 API 设计中四个重要的原则:
* 期望的状态
* 双向性
* 轻量级抽象
* 易于扩展
基于期望状态的管理 API,即用户表达的是操作执行后希望系统处于什么状态,而不是怎样进入这个状态,这一点如今已没有什么争议。双向性使得用同一套 API 进行读和写成为可能;更重要的是,同一种资源抽象既可以用来读取现有状态,也可以用来把系统强制修改成期望的状态。轻量级的抽象确保 API 容易学习、可以快速上手;过去对管理 API 的一些尝试过度加重了使用者学习建模框架的负担,这是它们接受度不高的一个重要原因。
最后,它必须易于扩展 Libral 的管理功能,这样用户可以教给 Libral 如何管理新类型的资源。这很重要,因为人们也许要管理的资源可能很多(而且 Libral 需要在适当时间进行管理),再者,因为即使是完全成熟的 Libral 也总是存在达不到用户自定义的管理需求。
目前与 Libral API 进行交互的主要方式是通过 `ralsh` 命令行工具。它也提供了底层的 C++ API ,不过其仍处在不断的演变当中,主要的还是为简单的脚本任务做准备。该项目也提供了为 CRuby 提供语言绑定,其它语言也在陆续跟进。
未来 Libral 还将提供一个提供远程 API 的守护进程,它可以做为管理系统的基础服务,而不需要在管理节点上安装额外的代理。这一点,加上对 Libral 管理功能的定制能力,可以严格控制系统的哪些方面可以管理,哪些方面要避免干扰。
举个例子来说,一个仅限于管理用户和服务的 Libral 配置不会干扰到节点上安装的软件包。当前任何现有的配置管理系统都无法以这种方式限制可管理的内容;尤其是,要求对受控节点进行任意的 SSH 访问,还会把系统暴露在不必要的意外和恶意干扰之下。
Libral API 的基础是由两个非常简单的操作构成的:“get” 用来检索当前资源的状态,“set” 用来设置当前资源的状态。理想化地实现是这样的,通过以下步骤:
```
provider.get(names) -> List[resource]
provider.set(List[update]) -> List[change]
```
“provider” 是要知道怎样管理的一种资源的对象,就像用户、服务、软件包等等,Libral API 提供了一种查找特定资源的<ruby> 管理器 <rp> ( </rp> <rt> provider </rt> <rp> ) </rp></ruby>的方法。
“get” 操作能够接收资源名称列表(如用户名),然后产生一个资源列表,其本质来说是利用散列的方式列出每种资源的属性。这个列表必须包含所提供名称的资源,但是可以包含更多内容,因此一个简单的 “get” 的实现可以忽略名称并列出所有它知道的资源。
“set” 操作被用来设置所要求的状态,并接受一个更新列表。每个更新可以包含 “update.is”,其表示当前状态的资源,“update.should” 表示被资源所期望的状态。调用 “set” 方法将会让更新列表中所提到的资源成为 “update.should” 中指示的状态,并列出对每个资源所做的更改。
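为了说明 “get” 和 “set” 的语义,下面用 Python 写一个纯属示意的内存版玩具管理器。类名和字段名都是本文虚构的,并不是 Libral 的真实 API:

```python
# 一个内存中的玩具管理器:按上文的 get/set 语义管理“用户”资源
class ToyUserProvider:
    def __init__(self):
        self.users = {"root": {"name": "root", "shell": "/bin/bash"}}

    def get(self, names):
        # 返回所点名资源的当前状态;不存在的资源标记为 absent
        return [self.users.get(n, {"name": n, "ensure": "absent"})
                for n in names]

    def set(self, updates):
        # 每个 update 包含 is(当前状态)与 should(期望状态)
        changes = []
        for up in updates:
            name = up["should"]["name"]
            before = self.users.get(name, {})
            self.users[name] = dict(up["should"])
            diff = {k: v for k, v in up["should"].items()
                    if before.get(k) != v}
            if diff:
                changes.append({"name": name, "changed": diff})
        return changes

p = ToyUserProvider()
print(p.get(["root"])[0]["shell"])
print(p.set([{"is": p.get(["root"])[0],
              "should": {"name": "root", "shell": "/sbin/nologin"}}]))
```

真实的管理器会把这两个操作映射到 `useradd`、编辑文件等实际动作上,但调用方看到的语义是一样的:表达期望状态,返回实际发生的变更。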
在 `ralsh` 下,利用 `ralsh user root` 能够重新获得 “root” 用户的当前状态;默认情况下,这个命令会产生一个用户可读的输出,就像 Puppet 中一样,但是 `ralsh` 支持 `--json` 选项,可以生成脚本可以使用的 JSON 输出。用户可读的输出是:
```
# ralsh user root
user::useradd { 'root':
ensure => 'present',
comment => 'root',
gid => '0',
groups => ['root'],
home => '/root',
shell => '/bin/bash',
uid => '0',
}
```
类似的,用户也可以用下面的形式修改:
```
# ralsh user root comment='The superuser'
user::useradd { 'root':
ensure => 'present',
comment => 'The superuser',
gid => '0',
groups => ['root'],
home => '/root',
shell => '/bin/bash',
uid => '0',
}
comment(root->The superuser)
```
`ralsh` 的输出列出了 "root" 用户的新状态和被改变的 `comment` 属性,以及修改了什么内容(在这种情形下单指 `comment` 属性)。第二次运行相同的命令会产生同样的输出,但是不会提示修改,因为没有需要修改的内容。
### 编写<ruby> 管理器 <rp> ( </rp> <rt> provider </rt> <rp> ) </rp></ruby>
为 `ralsh` 编写新的<ruby> 管理器 <rp> ( </rp> <rt> provider </rt> <rp> ) </rp></ruby>是很容易的,也花费不了多少工夫,但是这一步骤是至关重要的。正因为如此,`ralsh` 提供了大量的调用约定,使得可以根据管理器所能提供的能力而调整其实现复杂性成为可能。管理器可以使用遵循特定调用约定的外部脚本,也可以以 C++ 实现并内置到 Libral 里面。到目前为止,有三种调用约定:
* [simple](https://github.com/puppetlabs/libral/blob/master/examples/providers/python.prov) 调用约定是编写 shell 脚本用为管理器。
* [JSON](https://github.com/puppetlabs/libral/blob/master/doc/invoke-json.md) 调用约定意味着可以利用 Ruby 或者 Python 脚本语言编写管理器。
* [内部 C++ API](https://github.com/puppetlabs/libral/blob/master/doc/invoke-native.md) 可以被用来实现原生的管理器。
强烈建议使用 "simple" 或者 "JSON" 调用约定开始开发管理器。GitHub 上的 [simple.prov](https://github.com/puppetlabs/libral/blob/master/examples/providers/simple.prov) 文件包含了一个简单的 shell 管理器框架,应该很容易替换为你自己的管理器。[python.prov](https://github.com/puppetlabs/libral/blob/master/examples/providers/python.prov) 文件包含了利用 Python 编写的 JSON 管理器框架。
利用高级脚本语言编写的管理器存在一个问题是,对于这些语言,在当前运行 Libral 的系统上需要包含所有的支持库在内运行环境。在某些情况下,这不是一个障碍;举例子来说,基于 “yum” 的包管理的管理器需要 Python 被安装在当前的系统上,因为 “yum” 就是用 Python 开发的。
然而在很多时候,无法预期除 Bourne shell(Bash)之外的脚本语言会安装在所有的受管理系统上。通常,管理器的编写者实际需要的是一个更加强大的脚本编写环境。然而事与愿违,捆绑一个完整的 Ruby 或 Python 解释器会使 Libral 的体积超出资源严重受限环境的承受范围。另一方面,虽然 Lua 或 JavaScript 是常见的可嵌入脚本语言,但它们并不适合这种环境,因为大多数管理器的编写者不熟悉它们,而且通常需要做大量的工作才能让它们满足系统管理的实际需求。
Libral 绑定了一个 [mruby](http://mruby.org/) 版本,一个小的、嵌入在 Ruby 的版本,提供给管理器的编写者一个稳定的基础以及功能强大的可实现的程序设计语言。mruby 是 Ruby 语言的一个完整实现,尽管其减少了大量的标准库支持。与 Libral 绑定的 mruby 包含了 Ruby 用于脚本编辑管理任务的大多数的重要标准库,其将基于管理器编写者的需求随着时间的推移得到加强。Libral 的 mruby 绑定了 API 适配器使编写管理器更适合 JSON 约定,比如它包含了简单的工具(如编译修改结构体文件的 [Augeas](http://augeas.net/))来解决解析和输出 JSON 的约定。[mruby.prov](https://github.com/puppetlabs/libral/blob/master/examples/providers/mruby.prov) 文件包含了利用 mruby 编写的 JSON 管理器框架实例。
### 下一步工作
Libral 下一步最关键的工作是让它更容易获得:[预编译的 tarball](http://download.augeas.net/libral/ralsh-latest.tgz) 对于起步和深入开发管理器来说是一个不错的方法,但是 Libral 也需要被打包进主流的发行版,让用户可以直接安装。同样的,Libral 的强大与否取决于它附带的管理器集合,需要扩展管理器以覆盖一组核心的管理功能。Libral 的网站上有[一个 todo 列表](https://github.com/puppetlabs/libral#todo-list),列出了最迫切需要的管理器。
现在有多种方法能够提高 Libral 在不同用途下的可用性:从编写更多编程语言(例如 Python 或者 Go)的绑定,到使 `ralsh` 更容易在 shell 脚本中使用,即在现有的人类可读输出和 JSON 输出之外,轻松地在 shell 脚本中格式化输出。通过增加上面讨论过的远程 API,也能改进 Libral 在大规模管理中的使用;而要让 Libral 通过像 SSH 这样的传输工具部署,则需要更好地支持批量安装:为各种架构提供预编译的 tarball,并提供能根据所发现的目标系统架构选择正确 tarball 的脚本。
Libral 和它的 API、它的性能能够不断地进化发展;一个有趣的可能性是为 API 增加提醒能力,这样做可以向系统报告资源在超出它的范围发生的变化。Libral 面临的挑战是保持小型化、轻量级和良好定义的工具,来对抗不断增加的用例和管理性能——我希望每一个读者都能成为这个挑战之旅的一部分。
如果这让你很好奇,我很想听听你的想法,可以用拉取请求的方式,也可以是增强请求方式,亦或者报告你对 ralsh 体验经验。
(题图:[Internet Archive Book Images](https://www.flickr.com/photos/internetarchivebookimages/14803082483/in/photolist-oy6EG4-pZR3NZ-i6r3NW-e1tJSX-boBtf7-oeYc7U-o6jFKK-9jNtc3-idt2G9-i7NG1m-ouKjXe-owqviF-92xFBg-ow9e4s-gVVXJN-i1K8Pw-4jybMo-i1rsBr-ouo58Y-ouPRzz-8cGJHK-85Evdk-cru4Ly-rcDWiP-gnaC5B-pAFsuf-hRFPcZ-odvBMz-hRCE7b-mZN3Kt-odHU5a-73dpPp-hUaaAi-owvUMK-otbp7Q-ouySkB-hYAgmJ-owo4UZ-giHgqu-giHpNc-idd9uQ-osAhcf-7vxk63-7vwN65-fQejmk-pTcLgA-otZcmj-fj1aSX-hRzHQk-oyeZfR),修改:Opensource.com. CC BY-SA 4.0)
---
作者简介:
David Lutterkort - 戴维是一个 Puppet 的软件工程师,他曾经参与的项目有 Direct Puppet 和最好的供给工具 Razor。他是 Puppet 最早的编著者之一,也是配置编辑工具 Augeas 的主要作者。
---
via: <https://opensource.com/article/17/5/intro-libral-systems-management-library-linux>
作者:[David Lutterkort](https://opensource.com/users/david-lutterkort) 译者:[stevenzdg988](https://github.com/stevenzdg988) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,715 | Nylas Mail: 一个 Linux 的免费邮件客户端 | http://www.linuxandubuntu.com/home/nylas-mail-an-amazing-free-email-client-for-linux | 2017-07-23T09:31:00 | [
"邮件"
] | https://linux.cn/article-8715-1.html | [](http://www.linuxandubuntu.com/home/nylas-mail-an-amazing-free-email-client-for-linux)
有一个经常被提及的问题是 Ubuntu 是否还应该提供默认的电子邮件客户端。就个人而言,我已经很长时间没有使用 [Thunderbird](http://www.linuxandubuntu.com/home/thunderbird-release-with-several-bug-fixes) 了。我相信这不是一个第一次被问到的问题,但我相信这是一个把它解决掉的很好机会。这是因为日常用户倾向于使用基于网络的客户端,例如 Gmail 或 Outlook 来满足其邮件需求。而对于 Linux 上的经验丰富的用户而言,还有很多可供选择的选项。[Geary](http://www.linuxandubuntu.com/home/geany-a-lightweight-ide-or-code-editor-for-programmers)、Empathy、Evolution 和 Thunderbird 本身已经为很多用户提供了很好的服务,但是我发现了值得一试的东西:它被称为 Nylas Mail。
它以前被称为 [Nylas N1](http://www.linuxandubuntu.com/home/nylas-n1-a-premium-email-client-for-linux),Nylas Mail 于今年初在 1 月份推出,同时还发布了一个免费版本;**Nylas Mail** Basic 以前是一个付费版本。此外,在 1 月份,客户端仅适用于 Mac,但现在可用于 Linux 和 Windows 用户。
(在我写此文时,它还处于活跃开发状态,但是现在已经停止开发了。)
### 为什么使用 Nylas?
很多人因为种种原因选择了 Nylas Mail。让我们来看看一些常见的原因。
**简单** - Nylas Mail 客户端管理优雅简单。用 electron 构建,应用非常漂亮,易于使用。其设计还确保了在 Nylas 中设置电子邮件非常简单直接。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/nylas-mail-an-awesome-email-client-for-linux_orig.jpg)
**兼容性** - Nylas Mail 与所有电子邮件提供商兼容。它与 Gmail、Yahoo、Exchange 和 IMAP 帐户兼容,因此你可以在任何地方收到邮件。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/nylas-compatible-with-gmail-facebook-imap_orig.jpg)
**功能强大** - Nylas 拥有大量功能。它有一个全屏模式、离线支持、多布局格式、多帐户、统一的收件箱、提醒、打盹、签名和稍后发送功能。其中一些功能随 Nylas Mail Basic 一起提供。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/nylas-email-client-powerful-features_orig.jpg)
**混合后端** - 以前,Nylas 会将邮件的一个副本同步到使用 Nylas 云的服务器中,这让许多人感到不安。幸运的是,在最新版本中,Nylas 采用了混合后端,可直接连接到 Gmail 或 Outlook 等电子邮件提供商。云同步虽然仍然可用,但仅在使用高级订阅功能(如<ruby> 打盹 <rt> snoozing </rt></ruby>和跟踪)时才会用到。缺点是二者只能取其一:想要这些专业功能,你就得接受云同步;不想用云同步,就会错过这些功能。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/nylas-email-hybrid-backend_orig.jpg)
**开源和免费版** - Nylas 作为开源项目。这意味着你可以自己编写代码并自行构建。你甚至可以设置自己的服务器以回避问题。
[](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/nylas-open-source-and-free-email-client_orig.jpg)
**跨平台** - Nylas 是在 Linux、Windows 和 Mac OS X 上提供的跨平台应用程序。因此,无论你喜欢哪种桌面操作系统,你都可以放心,因为 Nylas 都已经覆盖了。而且用法是相同的。
### 还要做些什么?
到目前为止,Nylas 邮件客户端都很不错,但也有一些值得抱怨的地方。首先是 2016 年推出的付费选项。引入免费版本算是对这个问题的一种回应,但事实上,一些功能只能以每月 9 美元的价格订阅使用,这让包括我在内的许多人不快。此外,没有多少人喜欢把自己邮件的副本保存在别人的服务器上。当然,你可以搭建自己的服务器,但还是有些麻烦。最后,对于一个在后台运行的应用程序来说,它占用的内存相当多。我希望这不是因为它主要是用 Electron 写的,我相信随着它的更新和改进,它会变得更好。
### 总结
**Nylas Mail** 在特性和功能方面对我来说非常棒,我相信你一定要用一下。作为一个电子邮件客户端,它是非常有效的,我真的很喜欢 Nylas Mail,并且一定会一直使用它。也许你应该也会。向开发人员的所做的工作致谢。你有其他的程序想让我们看下么?在下面的评论中指出,并分享你的想法和意见。
---
via: <http://www.linuxandubuntu.com/home/nylas-mail-an-amazing-free-email-client-for-linux>
作者:[linuxandubuntu](http://www.linuxandubuntu.com/home/nylas-mail-an-amazing-free-email-client-for-linux) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,716 | 拯救者 Linux:我是如何给我的团队引入 Linux 的 | https://opensource.com/article/17/7/how-introduced-organization-linux | 2017-07-24T10:03:00 | [
"Linux"
] | https://linux.cn/article-8716-1.html |
>
> 在项目早期就遇到公开的失败后,一个著名大学的 IT 团队决定将他们的 web 注册系统部署到 Linux上,此举几乎将服务器的最大用户访问量提高了 3 倍
>
>
>

1998 年,我在明尼苏达大学为一个新的 web 团队管理服务器管理组。明尼苏达大学是一所非常大的大学,所有校区合计拥有超过 60000 名在校学生。当时学校用一个上了年纪的大型机系统来管理学生档案,这种系统已经过时了,所以需要做出改变。
这个大型机系统不兼容 Y2K(LCTT 译注:保存年份时只用两位数,导致记录 2000 年时计算机会记录为 1900 年,详见 [What Does Y2K Compliant Mean?](https://stackoverflow.com/questions/18200744/what-does-y2k-compliant-mean)),所以我们准备搭建一个由仁科(PeopleSoft)软件公司交付的新学生档案管理系统。这个新系统对明尼苏达大学来说有很多作用,不仅能够管理学生的档案,还能提供其他的一些功能。然而它却缺少了一项关键特性:你不能在浏览器上通过 web 注册你的课程。
按照今天的标准来看,这是一个重大的疏忽,但是在上世纪九十年代,互联网还是一个新生概念。亚马逊才建立不久,ebay 刚创业一年,google 呱呱坠地,Wikipedia 还没有影儿。所以 1998 年仁科公司没有支持 web 在线注册课程这个功能也就不足为奇了。但是明尼苏达大学作为 Gopher 网络的发源地,并且给之前的大型机系统开发了一套 web 功能接口,我们觉得 web 在线注册功能对于这个新的学生档案管理系统是至关重要的。
我们在这个 web 团队的任务就是去实现此管理系统的 web 在线注册功能。
幸运的是,我们并不是孤军奋战。我们联系了 IBM ,在第二年一起开始来搭建这个新的 web 在线注册系统。IBM 负责提供硬件和软件环境来运行这个 web 系统:3 个运行最新的 AIX 系统(类 UNIX 操作系统)、IBM Java 和 IBM WebSphere 平台的 SP 电脑节点,并用一个 IBM 的负载均衡器来实现 3 个节点的负载分流。

在经过一年多的开发和测试后,我们的系统终于上线了!但不幸的是失败却接踵而至。
### 负载过大
在开发过程中,我们无法准确地模拟测试真实场景下许多学生同时登录的场景。原因不是没有测试环境,明尼苏达大学有定制的 web 负载测试软件包,而且 IBM 有自己的工具做补充,但是这个 web 系统在当时对我们来说实在是太陌生了,我们没有意识到这些测试工具是不能满足要求的。
通过数月的测试,我们将此 web 系统的预期负载量调整到 240 个并发用户。但不幸的是,我们实际的使用量却是预期的两倍左右,在第一天系统上线时,超过 400 名学生马上同时登录进系统,由于负载远远超出预期值,3 台 web 服务器直接宕机了。由于持续的高负载,服务器一直崩溃,只能不断地重启。一台刚重启完,另一台又宕机重启了,这种场景居然持续了一个月。
由于不能有效地通过 web 注册,学生只能通过原来的方法来注册:来到登记员的办公室,拿着笔注册,然后再出门。当地报纸也幸灾乐祸地嘲讽道:"电脑软件的失败强迫学生只能面对面地注册!"

面对失败这个事实,我们尽自己全力在下一个开发周期中来提高软件性能,在之后 6 个月的时间里,我们疯狂地想去增强这套系统的负载能力。尽管增加了更多的代码,调整了多次配置,还是不能支持更多的用户。尽力了,然而面对的还是失败。
就如所料的,在下一个迭代周期后,迎接我们的还是失败。服务器由于负载问题一次又一次地宕机。这一次新闻标题已经变成了:“web 注册系统就是垃圾”。
在开始下一个为期 6 个月的迭代前,我们已经绝望了。没有人知道服务器不停宕机的原因,我们已经预期这个问题现在是无解的。我们是要采取一些措施来搞定这个问题,但是怎么做呢?以下是我们讨论得出的方法:
### 是否需要切换新的平台?
IBM 当时引入了 Linux,给它的 Java 和 WebSphere 平台做了二次开发。所有产品都获得了红帽公司的 RHEL 认证,并且有几个产品已经在我们的桌面系统上运行了。我们意识到了现在在 Linux 上已经有了完整的生态系统来运行我们的 web 管理系统,但是它能表现的比 AIX 更好吗?
在搭建好一个测试服务器并进行基本的负载测试后,我们惊奇的发现一台 Linux 服务器能够轻松地支持之前 3 台 AIX 服务器所不能支持的负载量,在相同的 web 代码、IBM Java 和 WebSphere 平台下,单台 Linux 服务器能够支持超过 200 个用户。
我们将这个消息告诉了登记员和 CIO,他们同意将 web 注册系统切换到 Linux 平台上。虽然这是我们第一次在明尼苏达大学跑 Linux,但是失败已成习惯,反而无所畏惧了。AIX 只会失败,Linux 却是我们唯一的希望。
我们马上基于 Linux 来进行开发。另一个组的同事也提供了几台 Intel 服务器来给我们使用,我们给服务器装上红帽系统和相关的 IBM 组件,然后在新系统上进行了持续性的负载测试,令人欣喜的是,Linux 服务器没有出现任何问题。
经过两个月高强度的开发测试,我们的新系统终于上线了,而且是巨大的成功!在巨大的负载下,web 注册系统在 Linux 的表现都堪称完美。同时在线峰值甚至超过了 600 名用户。Linux 拯救了明尼苏达大学的 web 注册系统~
### 成功的经验
当我回首这个项目时,我发现你可以用以下几个点来向你的团队介绍 Linux:
1、 **解决问题, 不要自欺欺人**
当我们提议在企业中使用 Linux 时,并不是因为我们认为 Linux 很酷才使用它。当然,我们是 Linux 的爱好者并且已经在自己的环境中运行过它,但是我们在公司是为了解决问题的。能用 Linux 是因为我们的登记员和出资人同意 Linux 是解决问题的一个方法,而不仅仅是因为 Linux很酷我们想用它。
2、 **尽可能小的去做改变**
我们的成功是建立在 IBM 已经基于 Linux 做出了它的 Java 和 WebSphere 产品的基础上的。这能让我们在将 web 系统从 AIX 切换到 Linux 上不用做过多的修改适配。两者比起来只有硬件和操作系统改变了,其他系统相关的组件都保持了一致,这些都是保证平台切换成功的基石。


3、 **诚实对待风险和回报**
我们的问题很明显:web 注册系统在前两个迭代周期中都失败了,而且很可能再次失败。当我们将自己的想法(AIX 切换到 Linux)告诉出资方后,我们对其中的风险和回报是心知肚明的。如果我们什么都不做,就只有失败,如果我们尝试切换到 Linux 平台,我们可能会成功,而且从最初的测试结果分析,成功的概率是高于失败的。
而且就算在 Linux 平台下项目还是失败了,我们也可以迅速地切换回 AIX 服务器。有了这些细致的分析和措施,终于使登记员能够安心让我们试试 Linux。
4、 **言简意赅地交流**
在项目平台切换的过程中,我们做了一个整体计划。我们在一张白纸上明确地写下了我们计划做什么,为什么要这么做。这种方式的成功关键就在于计划的简短性。领导们不喜欢像看小说一样来看技术性的主意,他们不想纠缠在技术细节中。所以我们有意地在执行层面上进行计划安排,在框架层面上进行描述。
当我们在进行平台切换时,我们会定期的告诉出资人当前进展。当新系统成功完成后,我们每天都会提交更新,报告已经有多少学生成功通过此系统完成注册和遇到的问题。
尽管这个项目已经过去了接近 20 年,但是其中的经验教训在今天仍然适用。尽管 Linux 在其中起了举足轻重的作用,但是最重要的还是我们成功地将所有人的目标引导到解决共同的问题上。我认为这种经验也可以运用到很多你所面对的事情当中。
---
作者简介:
Jim Hall -我是 FreeDOS 项目的发起者和协调人,我也在 GNOME 理事会中担任董事。工作上我是明尼苏达州拉姆西县的首席信息官,空闲时间里我为开源软件的可用性做出相关的贡献,并通过 [Outreachy](https://en.wikipedia.org/wiki/Outreachy)(为女性提供帮助的一项GNOME外展服务)来指导可用性测试。
via: <https://opensource.com/article/17/7/how-introduced-organization-linux>
作者:[Jim Hall](https://opensource.com/users/jim-hall) 译者:[吴霄/toyijiu](https://github.com/toyijiu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | In 1998, I managed the server administration group for the new web team at the University of Minnesota. The U of M is a very large institution, with over 60,000 students across all system campuses. Until then, the university managed its student records on an aging mainframe system. But that was all about to change.
The mainframe was not Y2K compliant, so we were working to set up a new student records system delivered by PeopleSoft. The new system was a big deal to the university in many ways, not only for modernizing our records system but also for offering new features. Yet it lacked one key feature: You couldn't register for classes from your web browser.
That may seem like a major oversight by today's standards, but in the late 1990s, the World Wide Web was still pretty new. Amazon was only a few years old. eBay had just reached its first birthday. Google had recently gone live. Wikipedia didn't exist yet. In context, it's not that surprising that in 1998 PeopleSoft didn't support registering for courses via the web. But as a pioneering university that originated the Gopher network and created a functional web interface to the previous mainframe system, we believed web registration was a critical feature for the new student records system.
Our job on the web team was to build that missing web registration frontend to PeopleSoft.
Fortunately, we didn't have to do it alone. We contracted with IBM, and over the next year, we worked together to build the new web registration system. IBM donated hardware and software to run the new web system: Three SP computer nodes running the latest versions of AIX, IBM Java, and IBM WebSphere, with a separate IBM load balancer dividing traffic between the three nodes.
After more than a year of development and testing, we finally went live! Unfortunately, it was an immediate failure.
## Too much load
Throughout development, we were unable to realistically simulate the load of many students accessing the new system at once. But it was not from lack of trying. The university had a custom web load test software package, and IBM supplemented it with its own tool. But the web was still pretty new, and we didn't realize the web load testing tools just weren't up to the job yet.
After months of load testing with both tools, we had tuned the new web registration system for an expected load of 240 concurrent users.
Unfortunately, our actual usage was almost twice that. On day one, as soon as the system came online, over 400 students simultaneously signed into the new web registration system. Overwhelmed by the unexpected load, the three web servers crashed. We found ourselves constantly restarting the web servers as the high web traffic continued to crush them. As soon as we restarted one web server, the next would go down. And so on, for the entire month-long registration period.
Without a reliable way to register for classes on the web, students had to sign up for classes the old-fashioned way: by going to the registrar's office. Lines to register went down the hallway and out the door. It wasn't long before the bad news hit the local news, with headlines such as "Computer failure forces students to register in person."
Having faced a very public defeat, we did our best to improve things for the next registration cycle, only six months away. We worked frantically to increase the capacity of the web system. Despite many code fixes and configuration tweaks, we were unable to boost the system to sufficiently support more users. Try as we might, we faced certain failure at the next registration cycle.
And as feared, the web system again failed dramatically at our next registration. The servers crashed again and again under the immense load. This time, the news headlines included such gems as: "Web registration system is worthless."
With another six months before our next go-live, we felt trapped. No one could figure out why the system was constantly crashing under load. We knew it would fail again at the next registration period. We had to do something, anything, to improve the system. But what to do? Every option was on the table.
## What if we changed platforms?
IBM had recently embraced Linux, releasing Linux versions of its Java and WebSphere products. All products were certified for RHEL (Red Hat Enterprise Linux), which several of us were already running on our desktop systems. We realized we now had the ecosystem to run the web registration system on Linux, as a supported platform. But would it perform any better on Linux than AIX?
After setting up a test server and running initial load tests, we were stunned to see one Linux server easily supporting what three AIX servers could not. A single Linux server running the same web registration code with the same IBM Java and IBM WebSphere sustained over 200 users.
We shared our findings with the registrar and the CIO, who approved our plan to migrate the web registration system to Linux. It was our first time running Linux in the University of Minnesota enterprise, but we had nothing to lose. The AIX system would fail again, anyway. Linux was a long shot, but it was our only hope.
We immediately ramped up new Linux servers for production. Colleagues in another team diverted several Intel servers to our effort, where we installed RHEL and the IBM components. We performed countless series of load tests on the new system, looking for weak points, only to find the Linux servers running smoothly.
After a restless two months, we finally went live. And it was a resounding success! The web registration system performed flawlessly on Linux, despite heavy usage. At our peak during that registration period, the Linux servers managed over 600 concurrent users, with barely a blip. Linux had rescued web registration at the University of Minnesota.
## Lessons for success
As I look back on that massive rescue operation, I find several themes you can use to introduce Linux in your own organization:
**Solve a problem, don't stroke an ego.**
When we proposed running Linux in the enterprise, we weren't doing it because we thought Linux was cool. Sure, we were Linux fans and we already ran Linux on the desktop and at home, but we were there to solve a problem. Our registrar and other stakeholders appreciated that Linux was a solution to a problem, not just something we wanted to do because Linux was cool.
**Change as little as possible.**
Our success hinged on the fact that IBM had finally released versions of its Java and WebSphere products for Linux. This allowed us to minimize changes to the system as we migrated from AIX to Linux. Comparing the AIX configuration to the Linux configuration, only the hardware and operating system changed. Every other component on the system remained the same. It was this "known" quantity that instilled confidence in making the change.
**Be honest about the risks and benefits.**
Our problem was obvious: Web registration had failed in our previous two registration cycles and would likely fail again. When we presented our idea to our stakeholders, proposing that we replace the AIX web servers with Linux, we were open about the expected risks and benefits. The bottom line was if we changed nothing, we would fail. If we tried Linux, we might fail or we might not. We shared our findings from our initial load tests, which demonstrated that Linux was more likely to succeed than fail.
But even if Linux failed, we could easily put the old AIX servers back into production. That "fallback" preparation reassured the registrar that we had appropriately measured the benefits and the risks and were prepared in case things went wrong.
**Communicate broadly.**
In making our pitch to migrate to Linux, we cast a wide net. We wrote an executive white paper that clearly communicated what we planned to do and why we thought it would work. The key to this white paper's success was its brevity. Executives do not want to read a "novel" about a technical idea, nor do they want to get mired in the technical details. We intentionally wrote the white paper for the executive level, describing our proposal in broad strokes.
As we replaced the system with Linux, we provided regular updates to inform our stakeholders about our progress to build the new Linux system. After we finally went live on the Linux web registration system, we posted daily updates, reporting how many students had registered for classes on the new system, and if we saw any problems.
Even though it's been nearly two decades since our early failure with AIX and very successful experiment with Linux, all of these lessons still apply. Sure, Linux did the heavy lifting here, but our overall success was due to bringing people together in the spirit of solving a common problem. And that's a lesson that I think you can apply to pretty much any situation you face.
|
8,719 | 开发一个 Linux 调试器(四):Elves 和 dwarves | https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/ | 2017-07-24T14:41:03 | [
"调试器"
] | https://linux.cn/article-8719-1.html | 
到目前为止,你已经偶尔听到了关于 dwarves、调试信息、一种无需解析就可以理解源码方式。今天我们会详细介绍源码级的调试信息,作为本指南后面部分使用它的准备。
### 系列文章索引
随着后面文章的发布,这些链接会逐渐生效。
1. [准备环境](/article-8626-1.html)
2. [断点](/article-8645-1.html)
3. [寄存器和内存](/article-8663-1.html)
4. [Elves 和 dwarves](https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/)
5. [源码和信号](https://blog.tartanllama.xyz/c++/2017/04/24/writing-a-linux-debugger-source-signal/)
6. [源码级逐步执行](https://blog.tartanllama.xyz/c++/2017/05/06/writing-a-linux-debugger-dwarf-step/)
7. 源码级断点
8. 调用栈展开
9. 读取变量
10. 下一步
### ELF 和 DWARF 简介
ELF 和 DWARF 可能是两个你没有听说过,但可能大部分时间都在使用的组件。ELF([Executable and Linkable Format](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format "Executable and Linkable Format"),可执行和可链接格式)是 Linux 系统中使用最广泛的目标文件格式;它指定了一种存储二进制文件的所有不同部分的方式,例如代码、静态数据、调试信息以及字符串。它还告诉加载器如何加载二进制文件并准备执行,其中包括说明二进制文件不同部分在内存中应该放置的地点,哪些位需要根据其它组件的位置固定(*重分配*)以及其它。在这些博文中我不会用太多篇幅介绍 ELF,但是如果你感兴趣的话,你可以查看[这个很好的信息图](https://github.com/corkami/pics/raw/master/binary/elf101/elf101-64.pdf)或[该标准](http://www.skyfree.org/linux/references/ELF_Format.pdf)。
[DWARF](https://en.wikipedia.org/wiki/DWARF "DWARF WIKI") 是通常和 ELF 一起使用的调试信息格式。它不一定要绑定到 ELF,但它们两者是一起发展的,一起工作得很好。这种格式允许编译器告诉调试器最初的源代码如何和被执行的二进制文件相关联。这些信息分散在不同的 ELF 节中,每个节都承载自己的一份信息。下面是各个节的定义,信息取自这份稍有过时但非常重要的 [DWARF 调试格式简介](http://www.dwarfstd.org/doc/Debugging%20using%20DWARF-2012.pdf):
* `.debug_abbrev` `.debug_info` 部分使用的缩略语
* `.debug_aranges` 内存地址和编译的映射
* `.debug_frame` 调用帧信息
* `.debug_info` 包括 <ruby> DWARF 信息条目 <rp> ( </rp> <rt> DWARF Information Entries </rt> <rp> ) </rp></ruby>(DIEs)的核心 DWARF 数据
* `.debug_line` 行号程序
* `.debug_loc` 位置描述
* `.debug_macinfo` 宏描述
* `.debug_pubnames` 全局对象和函数查找表
* `.debug_pubtypes` 全局类型查找表
* `.debug_ranges` DIEs 的引用地址范围
* `.debug_str` `.debug_info` 使用的字符串列表
* `.debug_types` 类型描述
我们最关心的是 `.debug_line` 和 `.debug_info` 部分,让我们来看一个简单程序的 DWARF 信息。
```
int main() {
long a = 3;
long b = 2;
long c = a + b;
a = 4;
}
```
### DWARF 行表
如果你用 `-g` 选项编译这个程序,然后将结果传递给 `dwarfdump` 执行,在行号部分你应该可以看到类似这样的东西:
```
.debug_line: line number info for a single cu
Source lines (from CU-DIE at .debug_info offset 0x0000000b):
NS new statement, BB new basic block, ET end of text sequence
PE prologue end, EB epilogue begin
IS=val ISA number, DI=val discriminator value
<pc> [lno,col] NS BB ET PE EB IS= DI= uri: "filepath"
0x00400670 [ 1, 0] NS uri: "/home/simon/play/MiniDbg/examples/variable.cpp"
0x00400676 [ 2,10] NS PE
0x0040067e [ 3,10] NS
0x00400686 [ 4,14] NS
0x0040068a [ 4,16]
0x0040068e [ 4,10]
0x00400692 [ 5, 7] NS
0x0040069a [ 6, 1] NS
0x0040069c [ 6, 1] NS ET
```
前面几行是一些如何理解 dump 的信息 - 主要的行号数据从以 `0x00400670` 开头的行开始。实际上这是一个代码内存地址到文件中行列号的映射。`NS` 表示地址标记一个新语句的开始,这通常用于设置断点或逐步执行。`PE` 表示函数序言(LCTT 译注:在汇编语言中,[function prologue](https://en.wikipedia.org/wiki/Function_prologue "function prologue") 是程序开始的几行代码,用于准备函数中用到的栈和寄存器)的结束,这对于设置函数断点非常有帮助。`ET` 表示转换单元的结束。信息实际上并不像这样编码;真正的编码是一种非常节省空间的程序,可以通过执行它来建立这些行信息。
那么,假设我们想在 `variable.cpp` 的第 4 行设置断点,我们该怎么做呢?我们查找和该文件对应的条目,然后查找对应的行条目,查找对应的地址,在那里设置一个断点。在我们的例子中,条目是:
```
0x00400686 [ 4,14] NS
```
假设我们想在地址 `0x00400686` 处设置断点。如果你想尝试的话你可以在已经编写好的调试器上手动实现。
反过来也是如此。如果我们已经有了一个内存地址 - 例如说,一个程序计数器值 - 想找到它在源码中的位置,我们只需要从行表信息中查找最接近的映射地址并从中抓取行号。
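基于上面的行表条目,可以用一小段 Python 来示意这两个方向的查找(纯属概念演示:假设行表已被解析为按地址升序排列的元组列表,数据抄自上文的 dwarfdump 输出;真实调试器应使用 libdwarf 或 libelfin 这样的库来解析):

```python
# 行表:按地址升序的 (地址, 行号, 是否新语句) 元组,数据取自上文的 dwarfdump 输出
LINE_TABLE = [
    (0x00400670, 1, True),
    (0x00400676, 2, True),
    (0x0040067e, 3, True),
    (0x00400686, 4, True),
    (0x0040068a, 4, False),
    (0x0040068e, 4, False),
    (0x00400692, 5, True),
    (0x0040069a, 6, True),
    (0x0040069c, 6, True),   # ET:转换单元结束
]

def addr_for_line(line):
    """在给定源码行设置断点:返回该行第一个标记为新语句(NS)的地址。"""
    for addr, ln, is_stmt in LINE_TABLE:
        if ln == line and is_stmt:
            return addr
    return None

def line_for_addr(pc):
    """反向查找:给定程序计数器,返回最接近的映射地址对应的行号。"""
    best = None
    for addr, ln, _ in LINE_TABLE:
        if addr <= pc:
            best = ln
        else:
            break
    return best
```

例如 `addr_for_line(4)` 返回 `0x00400686`,正是上文手动查到的断点地址;而 `line_for_addr(0x00400690)` 会落回第 4 行。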
### DWARF 调试信息
`.debug_info` 部分是 DWARF 的核心。它给我们关于我们程序中存在的类型、函数、变量、希望和梦想的信息。这部分的基本单元是 DWARF 信息条目(DWARF Information Entry),我们亲切地称之为 DIEs。一个 DIE 包括能告诉你正在展现什么样的源码级实体的标签,后面跟着一系列该实体的属性。这是我上面展示的简单事例程序的 `.debug_info` 部分:
```
.debug_info
COMPILE_UNIT<header overall offset = 0x00000000>:
< 0><0x0000000b> DW_TAG_compile_unit
DW_AT_producer clang version 3.9.1 (tags/RELEASE_391/final)
DW_AT_language DW_LANG_C_plus_plus
DW_AT_name /super/secret/path/MiniDbg/examples/variable.cpp
DW_AT_stmt_list 0x00000000
DW_AT_comp_dir /super/secret/path/MiniDbg/build
DW_AT_low_pc 0x00400670
DW_AT_high_pc 0x0040069c
LOCAL_SYMBOLS:
< 1><0x0000002e> DW_TAG_subprogram
DW_AT_low_pc 0x00400670
DW_AT_high_pc 0x0040069c
DW_AT_frame_base DW_OP_reg6
DW_AT_name main
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
DW_AT_decl_line 0x00000001
DW_AT_type <0x00000077>
DW_AT_external yes(1)
< 2><0x0000004c> DW_TAG_variable
DW_AT_location DW_OP_fbreg -8
DW_AT_name a
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
DW_AT_decl_line 0x00000002
DW_AT_type <0x0000007e>
< 2><0x0000005a> DW_TAG_variable
DW_AT_location DW_OP_fbreg -16
DW_AT_name b
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
DW_AT_decl_line 0x00000003
DW_AT_type <0x0000007e>
< 2><0x00000068> DW_TAG_variable
DW_AT_location DW_OP_fbreg -24
DW_AT_name c
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
DW_AT_decl_line 0x00000004
DW_AT_type <0x0000007e>
< 1><0x00000077> DW_TAG_base_type
DW_AT_name int
DW_AT_encoding DW_ATE_signed
DW_AT_byte_size 0x00000004
< 1><0x0000007e> DW_TAG_base_type
DW_AT_name long int
DW_AT_encoding DW_ATE_signed
DW_AT_byte_size 0x00000008
```
第一个 DIE 表示一个编译单元(CU),实际上是一个包括了所有 `#includes` 和类似语句的源文件。下面是带含义注释的属性:
```
DW_AT_producer clang version 3.9.1 (tags/RELEASE_391/final) <-- 产生该二进制文件的编译器
DW_AT_language DW_LANG_C_plus_plus <-- 原编程语言
DW_AT_name /super/secret/path/MiniDbg/examples/variable.cpp <-- 该 CU 表示的文件名称
DW_AT_stmt_list 0x00000000 <-- 跟踪该 CU 的行表偏移
DW_AT_comp_dir /super/secret/path/MiniDbg/build <-- 编译目录
DW_AT_low_pc 0x00400670 <-- 该 CU 的代码起始
DW_AT_high_pc 0x0040069c <-- 该 CU 的代码结尾
```
其它的 DIEs 遵循类似的模式,你也很可能推测出不同属性的含义。
现在我们可以根据新学到的 DWARF 知识尝试和解决一些实际问题。
### 当前我在哪个函数?
假设我们有一个程序计数器值然后想找到当前我们在哪一个函数。一个解决该问题的简单算法:
```
for each compile unit:
if the pc is between DW_AT_low_pc and DW_AT_high_pc:
for each function in the compile unit:
if the pc is between DW_AT_low_pc and DW_AT_high_pc:
return function information
```
这对于很多目的都有效,但如果有成员函数或者内联(inline),就会变得更加复杂。假如有内联,一旦我们找到其范围包括我们的程序计数器(PC)的函数,我们需要递归遍历该 DIE 的所有孩子检查有没有内联函数能更好地匹配。在我的代码中,我不会为该调试器处理内联,但如果你想要的话你可以添加该功能。
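上面的伪代码可以直接翻译成 Python(同样不处理内联;这里假设调试信息已被解析为简单的字典结构,字段名仿照 DWARF 属性,仅作示意):

```python
# 编译单元及其函数的最小表示,数据对应上文示例程序的 DIE
COMPILE_UNITS = [
    {
        "low_pc": 0x00400670, "high_pc": 0x0040069c,
        "functions": [
            {"name": "main", "low_pc": 0x00400670, "high_pc": 0x0040069c},
        ],
    },
]

def function_for_pc(pc, cus=COMPILE_UNITS):
    """返回包含给定程序计数器的函数信息,找不到则返回 None。"""
    for cu in cus:
        if cu["low_pc"] <= pc < cu["high_pc"]:
            for fn in cu["functions"]:
                if fn["low_pc"] <= pc < fn["high_pc"]:
                    return fn
    return None
```

如果要支持内联,只需在找到匹配函数后递归地对其子 DIE 重复同样的范围检查。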
### 如何在一个函数上设置断点?
再次说明,这取决于你是否想要支持成员函数、命名空间以及类似的东西。对于简单的函数你只需要迭代遍历不同编译单元中的函数直到你找到一个合适的名字。如果你的编译器能够填充 `.debug_pubnames` 部分,你可以更高效地做到这点。
一旦找到了函数,你可以在 `DW_AT_low_pc` 给定的内存地址设置一个断点。不过那会在函数序言处中断,但更合适的是在用户代码处中断。由于行表信息可以指定序言的结束的内存地址,你只需要在行表中查找 `DW_AT_low_pc` 的值,然后一直读取直到被标记为序言结束的条目。一些编译器不会输出这些信息,因此另一种方式是在该函数第二行条目指定的地址处设置断点。
假如我们想在我们示例程序中的 `main` 函数设置断点。我们查找名为 `main` 的函数,获取到它的 DIE:
```
< 1><0x0000002e> DW_TAG_subprogram
DW_AT_low_pc 0x00400670
DW_AT_high_pc 0x0040069c
DW_AT_frame_base DW_OP_reg6
DW_AT_name main
DW_AT_decl_file 0x00000001 /super/secret/path/MiniDbg/examples/variable.cpp
DW_AT_decl_line 0x00000001
DW_AT_type <0x00000077>
DW_AT_external yes(1)
```
这告诉我们函数从 `0x00400670` 开始。如果我们在行表中查找这个,我们可以获得条目:
```
0x00400670 [ 1, 0] NS uri: "/super/secret/path/MiniDbg/examples/variable.cpp"
```
我们希望跳过序言,因此我们再读取一个条目:
```
0x00400676 [ 2,10] NS PE
```
Clang 在这个条目中包括了序言结束标记,因此我们知道在这里停止,然后在地址 `0x00400676` 处设一个断点。
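把“跳过序言”这一步写成代码,大致是下面这个样子(示意性质:假设行表条目带有序言结束标记,数据仿照上文 Clang 的输出;一些编译器不输出该标记,所以保留了“取第二个条目”的回退策略):

```python
# 行表条目:地址、行号、是否带有 PE(序言结束)标记
ENTRIES = [
    {"addr": 0x00400670, "line": 1, "prologue_end": False},
    {"addr": 0x00400676, "line": 2, "prologue_end": True},   # NS PE
    {"addr": 0x0040067e, "line": 3, "prologue_end": False},
]

def breakpoint_addr(low_pc, entries=ENTRIES):
    """从函数的 DW_AT_low_pc 出发,返回适合设置断点的用户代码地址。"""
    it = iter(entries)
    # 先定位到 DW_AT_low_pc 对应的条目
    for e in it:
        if e["addr"] == low_pc:
            break
    # 策略一:继续读取,直到遇到序言结束标记
    for e in it:
        if e["prologue_end"]:
            return e["addr"]
    # 策略二(回退):使用该函数第二个行表条目的地址
    return entries[1]["addr"]
```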
### 我如何读取一个变量的内容?
读取变量可能非常复杂。它们是难以捉摸的东西,可能在整个函数中移动、保存在寄存器中、被放置于内存、被优化掉、隐藏在角落里,等等。幸运的是我们的简单示例是真的很简单。如果我们想读取变量 `a` 的内容,我们需要看它的 `DW_AT_location` 属性:
```
DW_AT_location DW_OP_fbreg -8
```
这告诉我们内容被保存在以栈帧基(base of the stack frame)偏移为 `-8` 的地方。为了找到栈帧基,我们查找所在函数的 `DW_AT_frame_base` 属性。
```
DW_AT_frame_base DW_OP_reg6
```
从 [System V x86\_64 ABI](https://www.uclibc.org/docs/psABI-x86_64.pdf) 我们可以知道 `reg6` 在 x86 中是帧指针寄存器。现在我们读取帧指针的内容,从中减去 `8`,就找到了我们的变量。如果我们想知道它具体是什么,我们还需要看它的类型:
```
< 2><0x0000004c> DW_TAG_variable
DW_AT_name a
DW_AT_type <0x0000007e>
```
如果我们在调试信息中查找该类型,我们得到下面的 DIE:
```
< 1><0x0000007e> DW_TAG_base_type
DW_AT_name long int
DW_AT_encoding DW_ATE_signed
DW_AT_byte_size 0x00000008
```
这告诉我们该类型是 8 字节(64 位)有符号整型,因此我们可以继续并把这些字节解析为 `int64_t` 并向用户显示。
当然,类型可能比那要复杂得多,因为它们要能够表示类似 C++ 的类型,但是这能给你它们如何工作的基本认识。
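把上面读取变量的几个步骤串起来,核心逻辑大致如下(示意性质:`read_memory` 在真实调试器里应是 `ptrace` 之类的读内存原语,这里用一个字典模拟进程内存;地址和数值均为虚构):

```python
import struct

# 模拟的进程内存:在“帧基 - 8”处放入小端序的 int64 值 3
FRAME_BASE = 0x7FFFFFFFE000
MEMORY = {FRAME_BASE - 8 + i: b for i, b in enumerate(struct.pack("<q", 3))}

def read_memory(addr, size):
    """读内存原语的替身,真实实现应通过 ptrace 等接口读取被调试进程。"""
    return bytes(MEMORY[addr + i] for i in range(size))

def read_variable(frame_base, offset, byte_size=8):
    """按 DW_OP_fbreg 的语义:帧基加偏移取地址,再按类型大小解析字节。"""
    raw = read_memory(frame_base + offset, byte_size)
    # DW_ATE_signed、8 字节 → 解析为 int64_t
    return struct.unpack("<q", raw)[0]
```

对照上文:`DW_AT_location DW_OP_fbreg -8` 对应 `read_variable(frame_base, -8)`,类型 DIE 中的 `DW_AT_byte_size 0x08` 决定了读取 8 个字节并按有符号整数解析。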
再次回到帧基(frame base),Clang 可以通过帧指针寄存器跟踪帧基。最近版本的 GCC 倾向于使用 `DW_OP_call_frame_cfa`,它包括解析 `.eh_frame` ELF 部分,那是一个我不会去写的另外一篇完全不同的文章。如果你告诉 GCC 使用 DWARF 2 而不是最近的版本,它会倾向于输出位置列表,这更便于阅读:
```
DW_AT_frame_base <loclist at offset 0x00000000 with 4 entries follows>
low-off : 0x00000000 addr 0x00400696 high-off 0x00000001 addr 0x00400697>DW_OP_breg7+8
low-off : 0x00000001 addr 0x00400697 high-off 0x00000004 addr 0x0040069a>DW_OP_breg7+16
low-off : 0x00000004 addr 0x0040069a high-off 0x00000031 addr 0x004006c7>DW_OP_breg6+16
low-off : 0x00000031 addr 0x004006c7 high-off 0x00000032 addr 0x004006c8>DW_OP_breg7+8
```
位置列表取决于程序计数器所处的位置给出不同的位置。这个例子告诉我们如果程序计数器是在 `DW_AT_low_pc` 偏移量为 `0x0` 的位置,那么帧基就在和寄存器 7 中保存的值偏移量为 8 的位置,如果它是在 `0x1` 和 `0x4` 之间,那么帧基就在和相同位置偏移量为 16 的位置,以此类推。
### 休息一会
这里有很多的信息需要你的大脑消化,但好消息是在后面的几篇文章中我们会用一个库替我们完成这些艰难的工作。理解概念仍然很有帮助,尤其是当出现错误或者你想支持一些你使用的 DWARF 库所没有实现的 DWARF 概念时。
如果你想了解更多关于 DWARF 的内容,那么你可以从[这里](http://dwarfstd.org/Download.php)获取其标准。在写这篇博客时,刚刚发布了 DWARF 5,但更普遍支持 DWARF 4。
---
via: <https://blog.tartanllama.xyz/c++/2017/04/05/writing-a-linux-debugger-elf-dwarf/>
作者:[Simon Brand](https://www.twitter.com/TartanLlama) 译者:[ictlyh](https://github.com/ictlyh) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Redirecting…
Click here if you are not redirected. |
8,720 | Linus Torvalds 说:谈论技术创新是愚蠢的,闭上嘴把事情做好 | http://www.theregister.co.uk/2017/02/15/think_different_shut_up_and_work_harder_says_linus_torvalds/ | 2017-07-24T19:02:00 | [
"创新"
] | https://linux.cn/article-8720-1.html |
>
> 来自 Linux 内核首领的最佳生活提示。
>
>
>

**OSLS 报道:** Linus Torvalds 认为,技术行业的创新庆祝活动是沾沾自喜,自我陶醉和自私自利的。
他所使用的艺术化术语更为直率:“行业的创新如此之多都是胡说。” 他说:“人人创新——不要做这种‘不同思考’,这是无意义的,它们有百分之九十九只是工作而已。”
周三在加利福尼亚州召开的[开源领袖峰会(OSLS)](https://www.theregister.co.uk/2017/02/14/the_government_is_coming_for_your_code/)中,Linux 基金会执行总监 Jim Zemlin 采访了 Linus,讨论了他如何管理 Linux 内核的开发和他对工作的态度。
Torvalds 说:“所有的炒作都不是真正的工作,真正的工作是在细节之中。”
Torvalds 表示赞成这样一个观点,即成功的项目是 99% 的汗水,百分之一的创新。
作为[开源 Linux 内核](https://www.kernel.org/)的创造者和仁慈独裁者,不用提还是 Git 分布式版本控制系统的发明者,Torvalds 已经证明他的方法产生了结果。Linux 对技术行业的影响已经不用再夸大了。Linux 是服务器的主要操作系统,几乎所有的高性能计算都运行在 Linux 上,而大多数移动设备和嵌入式设备都依赖于 Linux。
Linux 内核可能是 PC 时代最成功的技术协作项目。根据 Zemlin 的说法,内核贡献者自 2005 年以来总共增加了 13500 多个,其中每天大约增加 10000 行代码,移除 8000 行代码,修改 1500 到 1800 行代码,而且这一直在继续 —— 虽然不是一直以目前的速度 —— 但这已经超过了二十五年。
Torvalds 说:“我们已经这样做了 25 年,而且我们遇到的一个常见问题是人们彼此需要磨合。所以组织代码、组织代码流、[以及]组织我们的维护关系构成了我们的历史,最终那些痛点,我说的是代码争议,基本上消失了。”
Torvalds 解释说,该项目的结构使人们能够独立工作。他说:“我们已经能够真正模块化代码和开发模式,所以我们可以并行做很多事情。”
Torvalds 说,技术起着明显的作用,但流程至少是同样重要的。
Torvalds 说:“这是一个社会化项目。这是技术层面的东西,而技术是让人们能够就问题达成一致的东西,因为……它通常有非常明确的对和错。”
但是现在 Torvalds 并没有像 20 年前一样对每一个变化进行审查,而是依靠贡献者的社交网络。他说:“这是社交网络和信任,并且我们有一个非常强大的网络,这就是为什么我们可以有一千人参与到每个发布当中。”
对信任的重视解释了参与内核开发的困难,因为人们不可以登录、提交代码然后消失不见。Torvalds 说:“你要提交很多小的补丁直到维护者信任你才行,在这一点上,你不仅仅是一个提交补丁的人,而是成为信任网络的一部分。”
十年前,Torvalds 表示,他告诉其他内核贡献者,他希望有一个八周的发布时间表,而不是可能拖延几年的发布周期。内核开发人员设法将其发布周期减少到大约两个半月。从那时起,开发工作就一直很平稳。
Torvalds 说:“说我们的流程有多么好很无聊。对于我来说,所有真正紧张的时刻都是关于流程的,它们和代码无关,当代码不工作时,这实际上是令人高兴的……流程问题是很痛苦的。你永远不会想有流程问题……尤其是当人们开始彼此生气时。”
---
via: <http://www.theregister.co.uk/2017/02/15/think_different_shut_up_and_work_harder_says_linus_torvalds/>
作者:[Thomas Claburn](http://www.theregister.co.uk/Author/3190) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,721 | 6 个学习 OpenStack 的新指南和教程 | https://opensource.com/article/17/6/openstack-guides-and-tutorials | 2017-07-25T08:11:00 | [
"OpenStack"
] | /article-8721-1.html |
>
> 想了解更多关于 OpenStack 的内容?这些免费资源可能只是你所需要的。
>
>
>

云基础设施是一项非常抢手的技能。如果你正在为你的云基础架构需求寻找开源解决方案,那么 [OpenStack](https://opensource.com/resources/what-is-openstack) 就是其中之一。
OpenStack 是一个巨大的项目集合,为云服务的几乎每一个部分都提供了解决方案和集成。虽然这个巨大范围使得它成为一个强大的工具,但这也意味着可能很难跟上并了解整个项目,了解如何使用它们、如何自定义它们以及如何向其提供代码。
幸运的是,有很多选择可以帮助你。除了[官方项目文档](http://docs.openstack.org/)、纸质书籍和认证培训计划外,还有大量社区创造的优秀资源。我们每个月可以在 Opensource.com 上查看它在博客和其他网站上最近发布的指南和教程,这会给你启发。我们来看看这次我们发现了什么。
* 首先在本月的这一批中,我们有一篇来自 Antony Messerli 的指南,介绍如何通过 Ansible [设置 OpenStack 云](https://www.reversengineered.com/2016/05/09/setting-up-an-openstack-cloud-using-ansible/)。Messerli 将引导我们完成实验室环境中的配置以及在集群上运行 OpenStack 所需的 playbook,还有添加镜像、设置网络等的基础知识。如果你正在考虑使用 Ansible 安装 OpenStack 小型本地测试环境,这是一篇很好的文章。
* 接下来,你有没有想过 Neutron 网络如何在 OpenStack 中的工作的?应用程序中发生的事情如何对应于底层代码?Arie Bregman 在[这篇文章](http://abregman.com/2017/05/29/openstack-neutron-service-code-deep-dive/)中提供了一段 OpenStack Neutron 代码。你需要熟悉一般的网络原理,至少有一点 OpenStack 代码基础才能跟上。
* Gerrit是 OpenStack 使用的开源代码审查项目,用于管理上传的修补程序,并允许在将更改合并到主 OpenStack 代码库之前进行反馈和测试。对于那些习惯于不同的代码审查系统(或根本没有的),Gerrit 可能会有点混乱,尽管它具有很好的仪表板功能,因此你只能看到对你很重要的信息。Dougal Matthews 在[这篇文章](http://www.dougalmatthews.com/2017/May/19/how-i-gerrit/)中带我们看了他的 Gerrit 仪表板设置,这可能会帮助你设置自己的。
* 上个月在波士顿举办的 OpenStack 峰会的[视频](https://www.openstack.org/videos/)已经发布了,无论你是否参加过上个月的活动,这都包含了技术和非技术专题的宝库。不知道从哪里开始?这有个来自 Julio Villarreal Pelegrino 关于如何规划、构建、运行一个成功的 OpenStack 云计算的[演讲](http://www.juliosblog.com/dont-fail-at-scale-how-to-plan-for-build-and-operate-a-successful-openstack-cloud-video-openstack-summit2017/)。
* 任何云管理员都应该担心安全问题。但你从哪里开始?Naveen Joy 发布了一个很好的十个安全问题的[清单](https://blogs.cisco.com/cloud/securing-openstack-networking),用于加固你的 OpenStack 网络;你可以在上个月的同一主题演讲中查看[这个视频](https://www.openstack.org/videos/boston-2017/securing-openstack-networking)。
* OpenStack 中的内部消息服务在一个公共库中进行管理,该库存在于一个称为 Oslo 的项目中,自然它被称为 Oslo.Messenging。要了解这个库的基础知识,它在这个分为[两个](https://pigdogweb.wordpress.com/2017/05/22/intro-to-oslo-messaging/)[部分](https://pigdogweb.wordpress.com/2017/06/02/oslo-messaging-the-cloud-is-calling/)的博客中提到。
想要了解更多?可以从这三年来社区所提供的内容中找到我们完整的 [OpenStack 指南、如何做和教程](https://opensource.com/resources/openstack-tutorials),以帮助你学习成为一名高效的 OpenStack 开发人员或管理员。
有很棒教程、指导或者如何做需要我们分享的么?在下面的评论中分享。
---
作者简介:
Jason Baker - Jason 热衷于使用技术使世界更加开放,从软件开发到阳光政府行动。Linux 桌面爱好者、地图/地理空间爱好者、树莓派工匠、数据分析和可视化极客、偶尔的码农、云本土主义者。在 Twitter 上关注他。
via: <https://opensource.com/article/17/6/openstack-guides-and-tutorials>
作者:[Jason Baker](https://opensource.com/users/jason-baker) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,722 | 教你如何比谷歌搜索更快速有效地利用 man | https://opensource.com/article/17/7/using-man-pages | 2017-07-26T08:53:00 | [
"手册",
"帮助",
"man"
] | /article-8722-1.html |
>
> Linux 的帮助手册其实拥有很多有用的信息,而且比你想象中更容易使用
>
>
>

我们通常通过 google 来查询 Linux 中的命令说明,但是其实还有一个更好的办法:那就是通过 Linux 自带的 **man 帮助页**来查询命令详尽完整的使用说明。
man 页面的历史本身比 Linux 还长,可以追溯到 Unix 早期那个年代。 通过[这个 Wikipedia](https://en.wikipedia.org/wiki/Man_page) 可以知道,Dennis Ritchie 和 Ken Thompson 在 1971 年写了第一个 man 帮助页,那个年代使用的还是像烤箱一样大的计算机,个人电脑还未出世。man 帮助页也有它自己的一套设计精炼的语法,和 Unix 与 Linux 一样,man 帮助页也不是一成不变的,它就像 Linux 内核一样不停地发展更新。
Man 帮助页通过数字标识符来分成不同类型的内容:
1. 一般用户命令
2. 系统调用命令
3. 库函数
4. 特殊的文件和驱动程序
5. 文件格式
6. 游戏和屏保
7. 杂项
8. 系统管理命令和守护进程
尽管如此,用户一般也不需要知道他们想查询的命令是属于哪一个类型的。
这些文件格式化的方式在当今许多用户看来有点古怪。因为最开始他们是用 **troff** 的方式,通过 PostScript 打印机来打印,所以包含了头部和布局方面的格式化信息。在 Linux 中,取而代之使用了一种叫做 [groff](https://en.wikipedia.org/wiki/Groff_(software)) 的方法。
在我的 Fedora 系统中,man 帮助页相关的文件存储在 `/usr/share/man` 下的子目录中(比如 `man1` 存储第一部分的命令),还有进一步的子目录用于存储 man 帮助页的翻译。
如果你在 Shell 中查找 `man` 命令的 man 帮助页,你首先看到的将是 gzip 工具压缩的 `man.1.gz` 文件。想要查询 man 帮助页,需要输入类似如下命令:
```
man man
```
这个例子会显示 `man` 命令的 man 帮助页,这将先解压 man 帮助页文件,然后解释格式化指令并用 `less` 显示结果,所以导航操作和在 `less` 中一样。
所有的 man 帮助页都应该显示这些子段落:**Name**、**Synopsis**、**Description**、**Examples**、**See Also**。有些还会添加一些额外的子段落,比如 **Options**、**Exit Status**、**Environment**、**Bugs**、**Files**、**Author**、**Reporting Bugs**、**History**、**Copyright**。
### 详细说明一个 man 帮助页
为了更详细地介绍一个典型的 man 帮助页,就用 ls 命令的帮助页来分析吧,在 **Name** 分段下,我们可以看到如下内容:
```
ls - list directory contents
```
它会简要地告诉我 `ls` 这条命令的作用.
在 `Synopsis` 分段下,我们可以看到如下的内容:
```
ls [OPTION]... [FILE]…
```
任何在中括号中的元素都是可选的。你可以只输入 `ls` 命令,后面不接任何参数。参数后面的省略号表示你可以添加任意多个彼此兼容的参数,以及许多文件名。对于 `[FILE]` 参数,你可以指定具体的目录名,或者可以使用通配符 `*`,比如这个例子,它会显示 `Documents` 文件夹下的 `.txt` 文件:
```
ls Documents/*.txt
```
在 **Description** 分段下, 我们可以看到关于这条命令更加详细的信息,还有关于这条命令各个参数作用的详细介绍的列表,比如说 `ls` 命令第一个选项 `-a` 参数,它的作用是显示包括隐藏文件/目录在内的所有文件:
```
-a, --all
```
如果我们想用这些参数,要么用它们的别名,比如 `-a`,要么用它们的全名,比如 `--all`(两条中划线)。然而并不是所有参数都有全名和别名(比如 `--author` 只有一种),而且两者的名字并不总是相互关联的(`-F` 和 `--classify`)。当你想用多个参数时,要么以空格隔开,要么共用一个连字符 `-`,在连字符后连续输入你需要的参数(不要添加空格)。比如下面两个等价的例子:
```
ls -a -d -l
```
```
ls -adl
```
但是 `tar` 这个命令有些例外,由于一些历史遗留原因,当参数使用别名时可以不用添加连字符 `-`,因此以下两种命令都是合法的:
```
tar -cvf filearchive.tar thisdirectory/
tar cvf filearchive.tar thisdirectory/
```
**ls** 的 **Description** 分段后是 **Author**、**Reporting Bugs**、**Copyright**、 **See Also** 等分段。
**See Also** 分段会提供一些相关的 man 帮助页,没事的话可以看看。毕竟除了命令外还有许多其他类型的 man 帮助页。
有一些命令不是系统命令,而是 Bash 特有的,比如 `alias` 和 `cd`。这些 Bash 特有的命令可以在 **BASH\_BUILTINS** man 帮助页中查看,和上面的比起来它们的描述更加精炼,不过内容都是类似的。
其实通过 man 帮助页让你可以获得大量有用的信息,特别是当你想用一个已经很久没用过的命令,需要复习下这条命令的作用时。这个时候 man 帮助页饱受非议的简洁性反而对你来说是更好的。
---
作者简介:
Greg Pittman - Greg 是住在肯塔基州路易斯维尔的一位退休神经学家,但是却对计算机和编程保持着长久的兴趣,从二十世纪六十年代就开始捣腾 Fortran IV 了。随着 Linux 和开源软件的到来,更加激起了他去学习的兴趣并投身于这项事业中,并成为 Scribus 组织的一员。
---
via: <https://opensource.com/article/17/7/using-man-pages>
作者:[Greg Pittman](https://opensource.com/users/greg-p) 译者:[吴霄/toyijiu](https://github.com/toyijiu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,723 | 那些乌央乌央的、普普通通的 Ubuntu 用户们 | http://fossforce.com/2017/01/many-humble-ubuntu-users/ | 2017-07-25T17:07:03 | [
"Ubuntu"
] | https://linux.cn/article-8723-1.html |
>
> “更好的捕鼠器“ 并不是一个合格生物学家的专用谚语。就像 Ubuntu,它只需要好好工作,而不会让我惊着。
>
>
>
(LCTT 译注,“做个更好的捕鼠器”是个国外的谚语,意指不断有人发明更好的捕鼠器,然而其实没什么大的意义。)

### Roblimo 的藏身处
我从来就不是一个计算机极客。事实上,我在网上第一次小有名气是为 Time/Life 写一个每周专栏的时候,这个专栏的名字是“这台老电脑”。它介绍的是一台古老的设备能够做什么——通常是在上面安装 Linux——在那之后,我又在Andover.net 上开设了一个相同的专栏,名字叫做“廉价计算”,是关于如何在这个世界省钱的——而其它那些大部分在线计算机专栏都是在让你花钱花到你没饭吃(LCTT 译者注:作者的意思是“我算是一股清流了”)。
据我所知的那些大部分的 Linux 的早期使用者,都痴迷于他们的电脑,还有那些让他们电脑有用的软件。他们乐于仔细研读源代码并且做出一点小小的修改。最关键的是,他们大多是学计算机的,或者是从事 IT 行业的。他们的计算机和计算机网络迷住了他们,就像他们本该那样似的。
我过去是(现在也是)一个作者,不是一个搞计算机的。于我而言,计算机一直都只是一个工具而已。我希望它们能够老老实实的听话,我让它们做啥就做啥,最多可以出一点点小毛病。我喜欢图形化界面,因为我记不住那么长的命令行来管理我的电脑和网络。当然,我能找到它们,也能输入它们,但是我可不想费劲。
在 Linux 这个圈子里,有那么一段时间用户很少。“你什么意思?你只是想要你的电脑去写点文章,或者最多添加点 HTML 么?”开发者和管理员这样询问道,好像所有除了编写代码之外的奉献者们都不如他们所贡献的。
但是尽管面临这些讥笑和嘲讽,我在一次又一次的宣讲和谈话中提出一个像这样的主题:“不要光挠自己的痒痒,也帮你女朋友解决一下啊!还有你的同事,以及那些你最喜欢的饭店的厨子们,还有你的医生。难道你不希望你的医生专注于医治病人,而是在 `apt get` 以及 `grep` 里心烦意乱?”
所以,是的,因为我希望能够更简单的使用 Linux,我是一个[早期的曼德拉草用户](https://linux.slashdot.org/story/00/11/02/2324224/mandrake-72-in-wal-mart-a-good-idea),而现在,我是一个快乐的 Ubuntu 使用者。
为什么是 Ubuntu?你一定是在逗我?为什么不是呢?它是 Linux 发行版中的丰田凯美瑞(或者本田思域)!平凡而卓越。它是如此的流行,你可以在 IRC、LinuxQuestion 社区、 Ubuntu 自己的论坛,以及许许多多的地方得到帮助。
当然,使用 Debian 或者 Fedora 也是很酷的,还有 Mint 也是开箱即用又华丽万分。但是我依然感兴趣于写一些文章和添加一点点 HTML,以及在浏览器中随时看点什么、在 Google Docs 为一两个公司客气写点什么,随时接收我的邮件,用一张过去或者现在的图片做这做那……基本上我就在电脑上干点这些。
当这一切都正常的时候,我的桌面是什么样子就没有意义了。我又看不见它!它整个被应用窗口覆盖了!然后我使用了两个显示器,不仅仅是一个。我有……让我数数,17 个标签页打开在两个 Chrome 窗口中。并且 GIMP 在运行中,[Bluefish](http://bluefish.openoffice.nl/index.html),这个我刚才在使用的东西,用来书写这篇文章。
所以对我而言,Ubuntu 是我的阻力最小的道路。Mint 或许比较可爱,但是当你使用它的时候去掉它的那些边边角角的修饰,它不就是 Ubuntu 吗?所以如果我是用一个个相同的程序,并且不能看见桌面环境,所以谁还在意它长什么样子?
有些研究表明 Mint 正在变得更加流行,而有些人说 Debian 更甚。但是在他们的研究中,Ubuntu 永远都是居于最上位的,尽管这一年又一年过去了。
所以称我为大多数人吧!说我乏味吧!称我为乌央乌央的、普普通通的 Ubuntu 用户吧——至少从现在开始……
---
作者简介:

Robin "Roblimo" Miller 是一个自由职业作者,并且是开源技术集团的前主编,该公司旗下有 SourceForge、 freshmeat、Linux.com、NewForge、ThinkGeek 以及 Slashdot,并且他直到最近依然在 Slashdot 担任视频编辑。虽然他现在基本上退休了,但他仍然兼任 Grid Dynamics 的社论顾问,而且(显然)在为 FOSS Force 在写文章。
---
via: <http://fossforce.com/2017/01/many-humble-ubuntu-users/>
作者:[Robin "Roblimo" Miller](http://www.roblimo.com/) 译者:[svtter](https://github.com/svtter) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,724 | 开放组织公开追踪问题的好处 | https://opensource.com/open-organization/17/2/tracking-issues-publicly | 2017-07-26T09:00:00 | [
"问题追踪器"
] | /article-8724-1.html |
>
> 在开放组织中,在线追踪问题可以将客户转变成伙伴。
>
>
>

一个公开的问题追踪器是开放组织的重要沟通工具,和在公开渠道中开展工作相比,没有比这种方式更[透明和包容](https://opensource.com/open-organization/resources/open-org-definition)的方式了。因此,让我们来探讨一下在开放组织中使用问题跟踪器的一些最佳实践。
在开始之前,我们先来定义“<ruby> 问题追踪器 <rt> issue tracker </rt></ruby>”的含义。简单地说,问题跟踪器是**一个共享的待办事项列表**。想象一下一张胡乱写的待办事项列表:购买面包、邮件打包、归还图书馆书籍。当你在城里开车时,把列表上的每一项划掉的感觉很好。现在,将它扩展到你需要在组织中完成的工作,并增加合适的软件协作能力。这样你就有一个问题追踪器了!
无论你使用的是 GitHub 还是其他的,如 Bitbucket、GitLab 或 Trello,问题跟踪器都是与你的伙伴进行协调的正确工具。这对将外部人员*变成*伙伴是至关重要的,这是开放组织的特征之一。那么这是如何工作的?我很高兴你问了这个问题。
### 使用问题追踪器的最佳实践
使用公共问题跟踪器将外部人员转换为伙伴的最佳实践是基于过去五年来我们在 [Gratipay](https://gratipay.com/) 的经验。我们帮助公司和其他组织为开源买单,我们喜欢使用我们的问题跟踪器与我们的社区合作。这就是我们所学到的。
**0、 隐私优先。** 在这篇文章中谈论公共问题追踪器的隐私看上去是一个奇怪的开始。但是我们必须记住,[开放本身并不是目的](https://opensource.com/open-organization/16/9/openness-means-to-what-end),任何真诚和真正的开放是建立在安全和认可的坚实基础之上的。不要公开发布客户或其他第三方私下给你的信息,除非你明确提出了要求并且得到了他们的明确同意。采纳一个隐私政策并培训你的人员。[这篇 Gratipay 的政策](http://inside.gratipay.com/howto/seek-consent)可供参考。好的!现在这部分清楚之后,让我们继续。
**1、 默认采用公开决策。** 如果你私下做决策,你就会失去运行一个开放组织的好处,比如表达多样化的想法、招募有活力的人才、实现更大的责任感。即使你的全职员工是最初使用你的公共问题跟踪器的唯一员工,那也要这么做。 避免将你的公共问题追踪器视为二等公民。如果你在办公室进行对话,请在公共问题跟踪器上发布摘要,并在最终决策之前给你的社区留出时间来响应。这是使用问题跟踪器解锁你组织开放权力的第一步:如果它不在问题追踪器中,那就没用!
**2、 关联其他工具。** 我们许多人都喜欢 IRC 和 Slack,这并不是什么秘密。或者你的组织已经使用 Trello,但是你也想开始使用 GitHub。这没问题!在 GitHub issue 中放入 Trello 卡片的链接很容易,反之亦然。交叉关联保证了被一个或另一个问题困住的外部人员能够发现帮助它们充分理解这个问题的更多相关信息。对于聊天服务,你可能需要配置公共日志记录才能维护关联(隐私注意事项:当你这么做时,请务必在你的频道描述中宣传该事实)。也就是说,你应该在私人 Slack 或其他私人渠道中对话,就像在办公室里面对面的对话一样。换句话说,一定要总结公众问题跟踪器上的对话。看看上面说的:无论离线或在线,如果不在问题跟踪器中,那就没用!
**3、 将谈话导向追踪器中。** 社交媒体很快就能获得很多反馈,尤其是发现问题,但这不是解决问题的地方。问题跟踪器为深入的对话和分析根本原因留出了空间。更重要的是,它们是为了完成任务而优化的,而不是无限拖延。当问题解决后点击“关闭”按钮的感觉真的很好!现在将公共问题跟踪器作为你的主要工作场所,你可以开始邀请在社交媒体上与你接触的外部人员在追踪器中进一步地讨论。简单的如,“感谢您的反馈!听起来像是(某个公共问题的链接)?” ,这可以在很大程度上向外界传达你的组织没有隐藏的内容,并欢迎他们的参与。
**4、 设置一个“元”追踪器。** 在一开始,你的问题追踪器很自然聚焦在*具体产品*上。当你准备好开放到一个新的水平时,考虑设置一个关于你的*组织*本身的问题跟踪。在 Gratipay,我们有一个称之为 “Inside Gratipay”的公共问题追踪器,我们愿意在其中讨论我们组织的任何方面,从[我们的预算](https://github.com/gratipay/inside.gratipay.com/issues/928)到[我们的法律结构](https://github.com/gratipay/inside.gratipay.com/issues/72)到[我们的名字](https://github.com/gratipay/inside.gratipay.com/issues/73)。 是的,这有时有点混乱 —— 重命名组织是一个特别激烈的[争议话题](http://bikeshed.com/)!但对我们来说,在社区参与方面的好处是值得的。
**5、 将你的元追踪器用于加入过程。** 一旦你有一个元问题追踪器,就有了一个新的加入过程:邀请潜在的伙伴创建自己的入场券。如果他们以前从未使用过你的特定的问题追踪器,那么这将是他们学习的极好机会。注册帐号并提交问题应该很简单(如果不是,请考虑切换工具!)。这将让你的新伙伴尽早跨过门槛,开始分享参与感,并在组织内部拥有一席之地。当然,没有问题是愚蠢的,尤其是在某个人的入场券中更是如此。这是你的新伙伴熟悉组织工作方式时询问任何问题的地方。你需要确保快速回复他们的问题,让他们参与并帮助他们融入你的组织。这也是一个很好的方法来记录你最终授予这个人的各种系统的访问权限。至关重要的是,这可以[在雇佣他们之前](https://opensource.com/open-organization/16/5/employees-let-them-hire-themselves)开始。
**6、 雷达项目。** 大多数问题跟踪器包括一些组织和排序任务的方法。例如,GitHub 有[里程碑](https://help.github.com/articles/creating-and-editing-milestones-for-issues-and-pull-requests/)和[项目](https://help.github.com/articles/about-projects/)。这些通常旨在使组织成员的工作重点相一致。在 Gratipay,我们发现使用这些工具可以帮助协作者拥有并排序各自的工作重点。与其它的问题跟踪器通常提供的将问题分配给特定个人的做法不同,我们发现这一点提供了不同的价值。 我可能会关心别人正在积极工作的问题,或者我可能有兴趣开始某些事情,但别人先提出来我也很高兴。拥有自己的项目空间来组织我对该组织的工作的看法是与我的伙伴们沟通“我的雷达上有什么”的强大方式。
**7、 使用机器人自动化任务。** 最终,你可能会发现某些任务会一再出现。这表明自动化可以简化你的工作流程。在 Gratipay,我们[构建](https://github.com/gratipay/bot)了一个[机器人](https://github.com/gratipay-bot)帮助我们完成一些重复的任务。诚然,这是一个有点高级的用法。如果你达到了这一点,你能完全使用公共问题追踪器来开放你的组织!
如 Jim Whitehurst 所说 “吸引社区内外参与” ,这些是我们在 Gratipay 内使用我们的问题跟踪器的一些最有用的做法。也就是说,我们一直在学习。如果你有自己的经验分享,请发表评论!
---
作者简介:
Chad Whitacre - 我是 Gratipay 的创始人,Gratipay 是一个开放组织,致力于培育一个感恩、慷慨和爱的经济。我们帮助公司和其它组织为开源买单,以及赞助我们的平台。在线下,我居住在美国宾夕法尼亚州匹兹堡,在线上,我活跃在 GitHub。
---
via: <https://opensource.com/open-organization/17/2/tracking-issues-publicly>
作者:[Chad Whitacre](https://opensource.com/users/whit537) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,725 | 安卓编年史(32):安卓 6.0 棉花糖(2) | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/ | 2017-07-26T10:23:00 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8725-1.html | 
#### 幕后变化
棉花糖对自棒棒糖引入的节电任务调度器 API 进行了扩展。任务调度器将应用后台进程从随意唤醒设备规整成一个有组织的系统。任务调度器基本上就是一个后台进程的交通警察。
在棉花糖中,谷歌还添加了一个 “Doze(休眠)”模式来在设备闲置的时候节约更多电量。如果设备静止不动,未接入电源,并且屏幕处于关闭状态,它会慢慢进入低功耗离线模式,锁定后台进程。在一段时间之后,网络连接会被禁用。唤醒会被锁定——应用会请求唤醒手机来运行后台进程——而它会被忽略。系统警报(不含用户设置的闹钟)以及[任务调度器](http://arstechnica.com/gadgets/2014/11/android-5-0-lollipop-thoroughly-reviewed/6/#h2)也会被关闭。
如果你曾经发现把设备设置为飞行模式,并且注意到电池续航延长了许多,Doze 就像是一个自动的飞行模式,在你手机不用的时候介入——它真的延长了电池续航。它对于整天或整夜放在桌子上的手机很有效,对于平板就更好了,因为它常常被遗忘在咖啡桌上。
唯一能把设备从 Doze 模式唤醒的是来自谷歌云消息推送服务的“高优先级消息”。这是为信息服务准备的,所以即使设备处于休眠状态,也能够收取信息。

“应用待机”是另一项或多或少有用的节电特性,它在后台静默运行。这背后的逻辑很简单:如果你和应用停止交互一段时间,安卓就认为它是不重要的,并取消它访问网络和后台进程的权力。

对于应用待机而言,和一个应用“交互”意味着打开一个应用,开始一项前台服务,或者生成一条通知。任何其中的一条操作就会重置该应用的待机计时器。对于每种其它边界情况,谷歌在设置里添加了一个名字意味模糊的“电池优化”界面。用户可以在这里设置应用白名单让它免疫应用待机。至于开发者,他们在开发者设置中有个“不活跃应用”选项可以手动设置应用到待机状态来测试。
应用待机主要是自动禁用你不用的应用,这是个对抗垃圾应用或被遗忘的应用消耗电量的好方法。因为它是完全静默且后台自动执行的,它还能让新手也能拥有一部精心调教过的设备。

谷歌在过去几年尝试了很多应用备份方案,在棉花糖中它又[换了个方案](http://arstechnica.com/gadgets/2015/10/android-6-0-marshmallow-thoroughly-reviewed/6/#h2)。棉花糖的暴力应用备份系统的目标是保存整个应用数据文件夹到云端。这是可能的并且在技术上是可行的,但即便是谷歌自己的应用对它的支持都不怎么好。设置好一部新安卓手机依然是个大麻烦,要处理无数的登录和弹出教程。

到界面这里,棉花糖的备份系统使用了谷歌 Drive 应用。在谷歌 Drive 的设置中有一个“管理备份”界面,不仅会显示新备份系统的应用数据,还有谷歌过去几年尝试的其它应用备份方案的数据。

藏在设置里的还有一个新的“应用关联”功能,它可以将应用“链接”到网站。在应用关联出现之前,在全新安装的机器上打开一个谷歌地图链接通常会弹出一个“使用以下方式打开”的对话框,来获知你是想在浏览器还是在谷歌地图应用中打开这个链接。
这是个愚蠢的问题,因为你当然是想用应用而不是网站——这不就是你安装这个应用的原因嘛。应用关联让网站拥有者可以将他们的应用和网页相关联。如果用户安装了应用,安卓会跳过“使用以下方式打开”直接使用应用。要激活应用关联,开发者只需要在网站放一些安卓会识别的 json 代码。
应用关联对拥有指定客户端应用的站点来说很棒,比如谷歌地图、Instagram、Facebook。对于有 API 以及多种客户端的站点,比如 Twitter,应用关联界面让用户可以设置任意地址的默认关联应用。不过默认应用关联覆盖了百分之九十的用例,在新手机上大大减少了烦人的弹窗。

可选存储是棉花糖的最佳特性之一。它将 SD 卡从一个二级存储池转换成一个完美的合并存储方案。插入 SD 卡,将它格式化,然后你就有了更多的存储空间,而这是你从来没有想过的事情。

插入 SD 卡会弹出一条设置通知,用户可以选择将卡格式化成“外置”或“内置”存储。“内置”存储选项就是新的可选存储模式,它会将存储卡格式化为 ext4 文件系统。唯一的缺点?那就是存储卡和数据都被“锁定”到你的手机上了。如果不格式化的话,你没法取出存储卡插到其它地方使用。谷歌对于内置存储的使用场景判断就是一旦设置就不再更改。

如果你强行拔出了 SD 卡,安卓会尽它最大的努力处理好。它会弹出一条消息“建议最好将 SD 卡插回”和一个“忘记”这张卡的选项。当然“忘记”这张卡将会导致各种数据丢失,建议最好不要这么做。
不幸的是实际上可以使用可选存储的设备很长时间都没有出现。Nexus 设备不支持存储卡,所以为了测试我们插上了一个 U 盘作为我们的可选存储。OEM 厂商最初抵制这项功能,[LG 和三星](http://arstechnica.com/gadgets/2016/02/the-lg-g5-and-galaxy-s7-wont-support-android-6-0s-adoptable-storage/)都在他们 2016 年初的旗舰机上禁用了它。三星说“我们相信我们的用户需要 microSD 卡是用来在手机和其它设备之间转移数据的”,一旦卡被格式化成 ext4 这就不可能了。
谷歌的实现让用户可以在外置和内置存储选项之间选择。但 OEM 厂商完全拿掉了内置存储功能,不给用户选择的机会。高级用户们对此很不高兴,并且安卓定制组们很快就重启启用了可选存储。在 Galaxy S7 上,第三方定制组甚至在官方发布的[的前一天](http://www.androidpolice.com/2016/03/10/modaco-manages-to-get-adoptable-sd-card-storage-working-on-the-galaxy-s7-and-galaxy-s7-edge-no-root-required/)解除了三星的 SD 卡锁定。
#### 音量和通知

为了更简洁的设计,谷歌将通知优先级控制从音量弹窗中移除了。按下音量键会弹出一个单独的滑动条,对应当前音源控制,还有一个下拉按钮可以展开控制面板,显示所有的三个声音控制条:通知,媒体和闹钟。所有的通知优先级控制还在——它们现在在一个“勿扰模式”的快速设置方块中。

通知控制最有用的新功能之一是允许用户控制抬头通知——现在叫“预览”通知。这项功能让通知在屏幕顶部弹出,就像 iOS 那样。谷歌认为最重要的通知应该提升到高于你普通日常通知的地位。

但是,在棒棒糖中,这项特性引入的时候,谷歌糟糕地让开发者来决定他们的应用是否“重要”。毫无疑问,所有开发者都认为他的应用是世界上最重要的东西。所以尽管最初这项特性是为你亲密联系人的即时消息设计的,最后变成了被 Facebook 的点赞通知所操控。在棉花糖中,每个应用在通知设置都有一个“设置为优先”复选框,给了用户一把大锤来收拾不守规矩的应用。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/32/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,726 | 安卓编年史(33):安卓 7.0 牛轧糖,Pixel 手机,以及未来 | http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/ | 2017-07-27T10:33:00 | [
"Android",
"安卓编年史"
] | https://linux.cn/article-8726-1.html | 
#### 每月安全更新

在棉花糖发布的几个月前,安卓的“Stagefright”媒体服务器漏洞被披露给了公众,这个漏洞允许在旧版本安卓上远程执行代码。由于这个漏洞影响到了数十亿安卓设备,安卓受到了媒体广泛的批评。
谷歌则以开始一项月度安卓安全更新项目作为回应。每个月它都会收集 bug,修复它们,然后推送新代码给 AOSP 和 Nexus 设备。OEM 厂商——它们已经在更新的泥潭中挣扎(也许是因为不关心)——基本上就是被告知“面对现实”然后跟上步伐。每个其它的主流操作系统有经常的安全更新——这就是成为这么大的平台的代价。为了协调 OEM 厂商,谷歌让他们提前一个月可以获取到更新。30 天之后,发布安全公告并将更新推送给谷歌设备。
月度更新项目是在棉花糖发布的两个月前开始的,但在这个主要的系统更新中谷歌添加了一栏“安卓安全补丁程序级别”到关于手机界面中。这里没有使用晦涩难懂的版本号,就用了日期来表示。这让任何人都能轻松看懂手机系统过时了多久,是个很好的让动作缓慢的 OEM 厂商羞愧的方式。

文本选择菜单现在是个浮动工具条,会在你选择的时候在文本旁边弹出。这里也不止是常规的“剪切/复制/粘贴”命令。应用可以在工具栏上放置特殊选项,比如谷歌文档里的“添加链接”。
在标准的文本命令之后,一个省略号按钮会显示一个二级菜单,应用可以在这里给文本选择菜单添加额外的功能。使用新的“文本处理”API,直接给其它应用传递文本现在变得非常轻松。如果你装了谷歌翻译,在这个菜单上会显示一个“翻译”选项。谷歌搜索也在这个工具栏里为谷歌 Now on Tap 添加了一个“助理”选项。

棉花糖添加了一个隐藏的设置,叫做“系统界面调谐器”。这个部分包含了全方位的高级用户特性和试验性项目。要访问这个设置,你需要下拉通知中心,按住“设置”按钮几秒钟。设置齿轮会旋转,然后最终你会看到一条消息显示系统界面调谐器已解锁。一旦它被打开,你就可以在系统设置的底部,开发者选项附近找到它了。

在系统界面调谐器的第一部分,用户可以在快速设置面板添加自定义瓷片,这项特性后来被重制成了应用可以使用的 API。此时这项特性还很粗糙,基本就是允许用户在文本框里输入一条自定义命令。系统状态图标可以单独打开或关闭,所以如果你真的很讨厌知道你已经连接到另外无线网络,你可以把这个图标关闭。这里有一项受欢迎的电源显示新增特性,可以将电池电量百分比显示嵌入到电池图标中。还有个用来截屏的“演示”模式,它会将正常的状态栏替换成一个假的,干净的版本。

### 安卓 7.0 牛轧糖,Pixel 手机,以及未来
[安卓 7.0 牛轧糖](http://arstechnica.com/gadgets/2016/08/android-7-0-nougat-review-do-more-on-your-gigantic-smartphone/)和 [Pixel 手机](http://arstechnica.com/gadgets/2016/10/google-pixel-review-bland-pricey-but-still-best-android-phone/)几个月前刚刚发布,你可以读读我们对它们的完整评测。二者都有无数我们还没看到最终效果的特性和影响,所以我们暂且搁置,等它真正成为“历史”之后再进行评说。
牛轧糖为了 Daydream VR 对[图形和传感器](http://arstechnica.com/gadgets/2016/08/android-7-0-nougat-review-do-more-on-your-gigantic-smartphone/11/#h1)做出了很大的改动,Daydream VR 是谷歌即将来临的智能手机驱动 VR 体验,[我们已经尝试过了](http://arstechnica.com/gadgets/2016/10/daydream-vr-hands-on-googles-dumb-vr-headset-is-actually-very-clever/)但还没花时间写下点什么。从 Chrome OS 借鉴来了新的“无缝升级”功能,系统拥有两个系统分区,可以在你使用其中一个的时候,对另一个在后台进行静默升级。考虑到 Pixel 是这个功能的唯一首发设备,并且还没有收到更新,我们也不确定这个功能是什么样子的。

牛轧糖最有趣的新功能之一是改进的应用框架,它允许改变应用尺寸大小。这让谷歌可以在手机和平板上实现分屏,在安卓 TV 上实现画中画,以及一个神秘的浮动窗口模式。我们已经能够通过一些软件技巧来访问浮动窗口模式,但还没有看到谷歌在实际产品中使用它。难道它的目标是桌面计算?
安卓和 Chrome OS 则继续走在共同成长的道路上。安卓应用现在已经能够在一些 Chromebook 上[运行了](http://arstechnica.com/gadgets/2016/05/if-you-want-to-run-android-apps-on-chromebooks-youll-need-a-newer-model/),给了这个“仅 Web”的系统 Play 商店以及一系列的应用生态。有传言称 Chrome OS 和安卓未来将会更加紧密结合成“[Andromeda](http://arstechnica.com/gadgets/2016/09/android-chrome-andromeda-merged-os-reportedly-coming-to-the-pixel-3/)”——一个“Android”和“Chrome”混合而成的词——它可能是一个融合的 Chrome/安卓系统的代号。
我们还没有看到 Pixel 手机会创造怎样的历史。谷歌最近发布了两部新旗舰机 Pixel 和 Pixel XL,加入了硬件竞争之中。谷歌之前和合作伙伴合作生产联名的 Nexus 手机,但 Pixel 产品线用的是“Google”品牌。谷歌声明这是一个完全的硬件 OEM,HTC 是合同制造商,就类似苹果把富士康作为合同制造商那样。
拥有自己的硬件改变了谷歌制造软件的方式。谷歌发布了“谷歌助理”作为未来版的“OK Google”语音命令系统。但助理是 Pixel 的独占特性,谷歌没有把它推送给所有安卓设备。谷歌对界面做了一些改变,一个定制版的“Pixel 启动器”主屏应用和一套新的系统界面,这些都是 Pixel 独占的。我们需要时间来见证未来功能点在“安卓”和“Pixel”之间的平衡。
见证这些改变,我们可能正处在安卓历史上最不确定的点上。但在安卓最近的 2016 年 10 月活动前,安卓、Chrome OS 以及谷歌 Play 的 SVP [Hiroshi Lockheimer](http://arstechnica.com/gadgets/2016/10/chatting-with-googles-hiroshi-lockheimer-about-pixel-android-oems-and-more/) 说,他相信我们将来都会动情回顾这些最新的安卓更新。Lockheimer 是现在谷歌事实上的软件统治者,他认为最新的更新会是自安卓八年前亮相以来最有意义的事件。尽管在发布会后他没有对这个观点进行详细阐述,但事实表明明年这时候我们*可能*都不会讨论安卓了——它可能已经是一个安卓/Chrome OS 结合体了!正如 2008 年以来那样,安卓历史的下一篇章一定会更加有趣。
---

[Ron Amadeo](http://arstechnica.com/author/ronamadeo) / Ron 是 Ars Technica 的评论编缉,专注于安卓系统和谷歌产品。他总是在追寻新鲜事物,还喜欢拆解事物看看它们到底是怎么运作的。[@RonAmadeo](https://twitter.com/RonAmadeo)
---
via: <http://arstechnica.com/gadgets/2016/10/building-android-a-40000-word-history-of-googles-mobile-os/33/>
作者:[RON AMADEO](http://arstechnica.com/author/ronamadeo) 译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,727 | ss:查看网络连接的另一种方法 | https://insights.ubuntu.com/2017/07/25/ss-another-way-to-get-socket-statistics/ | 2017-07-27T14:36:55 | [
"ss",
"netstat"
] | https://linux.cn/article-8727-1.html | 
在之前的文章中,我提到过 `ss`,它是 iproute2 包附带的另一个工具,允许你查询 socket 的有关统计信息。可以完成 `netstat` 同样的任务,但是,`ss` 稍微快一点而且命令更简短。
直接输入 `ss`,默认会显示与 `netstat` 同样的内容,并且输入类似的参数可以获取你想要的类似输出。例如:
```
$ ss -t
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 127.0.0.1:postgresql 127.0.0.1:48154
ESTAB 0 0 192.168.0.136:35296 192.168.0.120:8009
ESTAB 0 0 192.168.0.136:47574 173.194.74.189:https
[…]
```
`ss -t` 只显示 TCP 连接。`ss -u` 用于显示 UDP 连接,`-l` 参数只会显示监听的端口,而且可以进一步过滤到任何想要的信息。
我并没有测试所有可用参数,但是你甚至可以使用 `-K` 强制关闭 socket。
`ss` 真正耀眼的地方是其内置的过滤能力。让我们列出所有端口为 22(ssh)的连接:
```
$ ss state all sport = :ssh
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 *:ssh *:*
tcp ESTAB 0 0 192.168.0.136:ssh 192.168.0.102:46540
tcp LISTEN 0 128 :::ssh :::*
```
如果只想看已建立的 socket(排除了 *listening* 和 *closed* ):
```
$ ss state connected sport = :ssh
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp ESTAB 0 0 192.168.0.136:ssh 192.168.0.102:46540
```
类似的,可以列出指定的 host 或者 ip 段。例如,列出到达 74.125.0.0/16 子网的连接,这个子网属于 Google:
```
$ ss state all dst 74.125.0.0/16
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp ESTAB 0 0 192.168.0.136:33616 74.125.142.189:https
tcp ESTAB 0 0 192.168.0.136:42034 74.125.70.189:https
tcp ESTAB 0 0 192.168.0.136:57408 74.125.202.189:https
```
`ss` 与 iptables 的语法非常相似,如果已经熟悉了其语法,`ss` 非常容易上手。也可以安装 iproute2-doc 包,通过 `/usr/share/doc/iproute2-doc/ss.html` 获得完整文档。
还不快试试! 你就可以知道它有多棒。无论如何,让我输入的字符越少我越高兴。
---
via: <https://insights.ubuntu.com/2017/07/25/ss-another-way-to-get-socket-statistics/>
作者:[Mathieu Trudel-Lapierre](https://insights.ubuntu.com/author/mathieu-trudel-lapierre/) 译者:[VicYu](https://vicyu.com) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,728 | Neo4j 图数据库基础 | https://opensource.com/article/17/7/fundamentals-graph-databases-neo4j | 2017-07-27T15:13:00 | [
"Neo4j",
"图数据库"
] | https://linux.cn/article-8728-1.html |
>
> 在这个三篇系列文章的第一篇文章中,我们将学习<ruby> 图数据库 <rp> ( </rp> <rt> graph database </rt> <rp> ) </rp></ruby>的基础知识,它支持了这个地球上最大的一些数据池。
>
>
>

对于海量的各种非结构化信息来说,图数据库已经成为帮助收集、管理和搜索大量数据的技术。在这三篇系列文章中,我们将使用开源图数据库软件 [Neo4j](https://neo4j.com/) 来研究图数据库。
在本文中,我将向你展示图数据库的基础知识,帮助你快速了解概念模型。在第二篇中,我将向你展示如何启动 Neo4j 数据库,并使用内置的浏览器工具填充一些数据。而且,在本系列的最后一篇文章中,我们将探讨一些在开发工作中使用的 Neo4j 编程库。
掌握图数据库的概念模型是有用的,所以我们从那里开始。图数据库中只存储两种数据:<ruby> 节点 <rt> node </rt></ruby>和<ruby> 边 <rt> edge </rt></ruby>。
* **节点是实体:**诸如人物、发票、电影、书籍或其他具体事物。这些有些等同于关系数据库中的记录或行。
* **边是关系:**连接节点的概念、事件或事物。在关系数据库中,这些关系通常存储在具有链接字段的数据库行中。在图数据库中,它们本身就是有用的对象,可以独立地被搜索。
节点和边都可以拥有可搜索的*属性*。例如,如果你的节点代表人,他们可能拥有名字、性别、出生日期、身高等属性。而边的属性可能描述了两个人之间的关系何时建立,见面的情况或关系的性质。
这是一个帮助你可视化的图表:

在这张图中,你知道 Jane Doe 有一个新的丈夫 John;一个女儿(来自她以前的夫妻关系)Mary Smith 和朋友 Robert 和 Rhonda Roe。Roes 有一个儿子 Ryan,他正在与 Mary Smith 约会。
看明白它是怎么工作的了吗?每个节点代表一个独立于其他节点的人。你需要的关于*那个*人的一切信息都可以存储在节点的属性中。边描述了人们之间的关系,其详细程度取决于你的程序的需要。
关系是单向的,且不能是无向的,但这没有问题。由于数据库可以以相同的速度遍历两个方向,并且方向可以忽略,你只需要定义一次此关系。如果你的程序需要定向关系,则可以自由使用它们,但如果双向性是暗含的,则不需要。
另外需要注意的是,图数据库本质上是无 schema 的。这与关系数据库不同,关系数据库每行都有一组列表,并且添加新的字段会给开发和升级带来很多工作。
每个节点都可以拥有一个<ruby> 标签 <rt> label </rt></ruby>;对于大多数程序来说,这个标签就是你所需要的全部“类型”信息,它类似于典型的关系数据库中的表名。标签可以让你区分不同的节点类型。如果你需要添加新的标签或属性,修改程序去使用它就行了!
使用图数据库,你可以直接开始使用新的属性和标签,节点将在创建或编辑时获取它们。不需要转换东西;只需在你的代码中使用它们即可。在这里的例子中,你可以看到,我们知道 Jane 和 Mary 最喜欢的颜色和 Mary 的出生日期,但是别人没有(这些属性)。这个系统不需要知道它;用户可以在正常使用程序的过程中访问节点时为其添加信息(属性)。
作为一名开发人员,这是一个有用的特性。你可以把新的标签或属性添加到处理这些节点的表单中,然后直接开始使用,而不必修改数据库 schema。对于没有该属性的节点,将不显示任何内容。无论使用哪种数据库,你都得为表单编写代码,但使用图数据库可以省去关系型数据库所需的大量后端工作。
让我们添加一些新的信息:

这是一个新的节点类型,它代表一个位置,以及一些相关关系。现在我们看到 John Doe 出生在加利福尼亚州的 Petaluma,而他的妻子 Jane 则出生在德克萨斯州的 Grand Prairie。 他们现在住在得克萨斯州的赛普拉斯,因为 Jane 在附近的休斯顿工作。Ryan Roe 缺乏城市关系对数据库来说没有什么大不了的事情,我们*不知道*那些信息而已。当用户输入更多数据时,数据库可以轻松获取新数据并添加新数据,并根据需要创建新的节点和关系。
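为了把上面两张图中的概念落到代码层面,下面用 Python 的字典和列表粗略模拟一个属性图(纯示意,不是 Neo4j 的 API;节点与关系名称均为沿用文中例子的虚构数据):

```python
# 极简的属性图内存模型(示意,非 Neo4j API):节点带标签和属性,边带关系类型
nodes = {
    "jane": {"label": "Person", "props": {"name": "Jane Doe", "favorite_color": "purple"}},
    "john": {"label": "Person", "props": {"name": "John Doe"}},
    "houston": {"label": "City", "props": {"name": "Houston", "state": "Texas"}},
}
edges = [
    ("jane", "MARRIED_TO", "john"),
    ("jane", "WORKS_IN", "houston"),
]

def neighbors(node_id, rel_type=None):
    """沿任意方向遍历与 node_id 相连的节点,可按关系类型过滤。"""
    out = []
    for src, rel, dst in edges:
        if rel_type and rel != rel_type:
            continue
        if src == node_id:
            out.append(dst)
        elif dst == node_id:
            out.append(src)
    return out

# 无 schema:随时为某个节点补充新属性,无需任何迁移
nodes["john"]["props"]["birthplace"] = "Petaluma"

print(neighbors("jane"))                # ['john', 'houston']
print(neighbors("john", "MARRIED_TO"))  # ['jane']
```

注意 `neighbors()` 对边的两个方向一视同仁,这正对应上文“方向可以忽略,关系只需定义一次”的说法;而给 John 补充 `birthplace` 属性也完全不需要改动任何“表结构”。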
了解节点和边应该足以让你开始使用图形数据库。如果你像我一样,已经在考虑如何在一个图中重组你的程序。在本系列的下一篇文章中,我将向你展示如何安装 Neo4j、插入数据,并进行一些基本的搜索。
---
via: <https://opensource.com/article/17/7/fundamentals-graph-databases-neo4j>
作者:[Ruth Holloway](https://opensource.com/users/druthb) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | For very large collections of diverse, unstructured information, graph databases have emerged as a technology to help collect, manage, and search large sets of data. In this three-part series, we'll explore graph databases, using [Neo4j](https://neo4j.com/), an open source graph database.
In this article, I'll show you the basics of graph databases, bringing you up to speed on the conceptual model. In the second, I'll show you how to spin up a Neo4j database and populate it with some data using the built-in browser tools. And, in the third and final article in the series, we'll explore a couple of programming libraries for using Neo4j in your development work.
Grasping the conceptual model for graph databases is useful, so we'll start there. A graph database has only two kinds of data stored in it: *nodes* and *edges*.
- **Nodes are entities:** things such as people, invoices, movies, books, or other concrete things. These are somewhat equivalent to a record or row in a relational database.
- **Edges name relationships:** the concepts, events, or things that connect nodes. In relational databases, these relationships are ordinarily stored in the database rows with a linking field. In graph databases, they are themselves useful, searchable objects in their own right.
Both nodes and edges can possess searchable *properties*. For instance, if your nodes represented people, they might own properties like name, gender, date of birth, height, and so forth. Edge properties might describe when a relationship between two people was established, the circumstances of meeting, or the nature of the relationship.
Here's a diagram to help you visualize this:

opensource.com
In this diagram, you learn that Jane Doe has a new husband, John; a daughter (from her prior relationship) Mary Smith; and friends Robert and Rhonda Roe. The Roes have a son, Ryan, who is dating Mary Smith.
See how it works? Each node represents a person, standing alone, in isolation from other nodes. Everything you need to find about *that* person can be stored in the node's properties. Edges describe the relationships between the people, with as much detail as you need for the application.
Relationships are one-way and cannot be undirected, but that's no problem. Since the database can traverse both directions with the same speed, and direction can be ignored, you only need to define this relationship once. If your application requires directional relationships, you're free to use them, but if bidirectionality is implied, it's not required.
Another thing to notice is that graph databases are, by nature, schema-less. This differs from a relational database, where each row has a set list of fields, and adding new fields is a major investment in development and upgrades.
Each node can possess a *label*; this label is all the "typing" you need for most applications and is the analog of the table name in a typical relational database. A label lets you differentiate between different node types. If you need to add a new label or property, change your application to start using it!
With graph databases, you can simply start using the new properties and labels, and nodes will acquire them as they are created or edited. There's no need to convert things; just start using them in your code. In the example here, you can see that we know Jane's and Mary's favorite colors and Mary's date of birth, but not anyone else's. The system doesn't need to know about that; users can add that information at will as nodes are accessed in the normal course of the application's usage.
As a developer, this is a useful thing. Instead of having to handle the database schema changes, you can just add the new label or property to forms that deal with the nodes and start using it. For nodes that don't have the property, nothing is displayed. You'd have to code the form with either type of database, but you drop a lot of the backend work that you'd need to do with a relational database.
Let's add some new information:

opensource.com
Here is a new type of node, representing a location, and some relevant relationships. Now we see that John Doe was born in Petaluma, Calif., while his wife, Jane, was born in Grand Prairie, Texas. They both now live in Cypress, Texas, because Jane works in nearby Houston. The lack of city relationships with Ryan Roe isn't any big deal to the database, we just *don't know* that information yet. The database could easily acquire new data and add it, creating new nodes and relationships as needed, as application users entered more data.
Understanding nodes and edges should be enough to get you started with graph databases. If you're like me, you're already thinking about how an application you work on might be restructured in a graph. In the next article in this series, I'll show you how to install Neo4j, insert data, and do some basic searching.
|
8,730 | 在 Linux Mint 安装 Linux Kernel 4.12(稳定版) | https://mintguide.org/system/798-install-linux-kernel-4-12-stable-on-linux-mint.html | 2017-07-28T08:00:00 | [
"内核"
] | https://linux.cn/article-8730-1.html | 
**Linus Torvalds** 发布了 **Linux 内核 4.12**。你可以从**[这里](https://mintguide.org/engine/dude/index/leech_out.php?a%3AaHR0cDovL2tlcm5lbC51YnVudHUuY29tL35rZXJuZWwtcHBhL21haW5saW5lL3Y0LjEyLw%3D%3D)**直接下载相关的 **deb** 包来安装。或者,继续阅读本文,按下面的步骤安装新内核。
**警告:Linux 内核是系统的关键组件。当某个硬件设备工作不正常时,可以尝试升级内核,新内核可能会解决此问题。但同样的,并非必要地更新内核也可能带来不想要的退化,例如:无网络连接、没有声音,甚至无法正常启动系统。所以,在安装新内核之前,请正确认识风险。**
最简单的安装任意内核的方法,是在 **Linux Mint** 上使用 [UKUU](https://mintguide.org/tools/691-ukuu-ubuntu-kernel-upgrade-utility.html)。
```
sudo apt-add-repository -y ppa:teejee2008/ppa
sudo apt-get update
sudo apt-get install ukuu
```
**提醒:所有的 Nvidia/AMD 电脑用户, 在安装内核之前,建议切换到 free 版本的驱动。**
**如果决定删除内核 4.12,**
1. 首先,重启计算机,在 GRUB 菜单中选择旧内核启动。系统引导完成之后,再通过下一步的命令删除新的内核:
2. 然后,使用 [UKUU](https://mintguide.org/tools/691-ukuu-ubuntu-kernel-upgrade-utility.html) 程序,或者命令:`sudo apt purge linux-image-4.12-*`
3. 最后,更新 **GRUB** 或者 **[BURG](https://mintguide.org/effects/716-burg-graphical-bootloader-install-to-linux-mint.html)**:`sudo update-grub`
在启动 **GRUB** 的时候,选择**以前的 Linux 版本**即可回到以前版本的内核。
Good Luck!!!
---
via: <https://mintguide.org/system/798-install-linux-kernel-4-12-stable-on-linux-mint.html>
作者:[Shekin](https://mintguide.org/user/Shekin/) 译者:[VicYu](https://vicyu.com) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | Not Found | null |
8,731 | 漏洞修复八个月后,仍有超过七万台 memcached 服务器面临危险 | https://thenewstack.io/70000-memcached-servers-can-hacked-using-eight-month-old-flaws/ | 2017-07-28T12:12:00 | [
"安全漏洞",
"memcached"
] | https://linux.cn/article-8731-1.html | 
在开源缓存软件 memcached 修复了三个关键漏洞的八个月之后,仍有超过 70000 台未打补丁的缓存服务器直接暴露在互联网上。安全研究员警告说,黑客可能会在服务器上执行恶意代码或从其缓存中窃取潜在的敏感数据。
[memcached](https://memcached.org/) 是一个实现了高性能缓存服务的软件包,用于在内存中存储从数据库和 API 调用中获取的数据块。这有助于提高动态 Web 应用程序的响应速度,使其更加适合大型网站和大数据项目。
虽然 memcached 不是数据库的替代品,但它存储在内存中的数据包括了来自数据库查询的用户会话和其他敏感信息。因此,该服务器在设计上并不能直接暴露在互联网等不受信任的环境中,其最新的版本已经支持了基本身份验证。
去年 10 月份,memcached 的开发者修复了由 [思科 Talos 部门](https://www.talosintelligence.com/) 安全研究员发现并报告的三个远程代码执行漏洞([CVE-2016-8704](https://www.talosintelligence.com/reports/TALOS-2016-0219/)、[CVE-2016-8705](https://www.talosintelligence.com/reports/TALOS-2016-0220/) 和 [CVE-2016-8706](https://www.talosintelligence.com/reports/TALOS-2016-0221/))。所有这些漏洞都影响到了 memcached 用于存储和检索数据的二进制协议,其中一个漏洞出现在 [Simple Authentication and Security Layer](https://tools.ietf.org/html/rfc4422) (SASL)的实现中。
在去年 12 月到今年 1 月期间,成队的攻击者从数万个公开的数据库中擦除数据,这包括 MongoDB、CouchDB、Hadoop 和 Elasticsearch 集群。在很多情况下,攻击者勒索想要恢复数据的服务器管理员,然而没有任何证据表明他们的确对所删除的数据进行了复制。
Talos 的研究人员认为, memcached 服务器可能是下一个被攻击的目标,特别是在几个月前发现了漏洞之后。所以在二月份他们决定进行一系列的互联网扫描来确定潜在的攻击面。
扫描结果显示,大约有 108000 个 memcached 服务器直接暴露在互联网上,其中只有 24000 个服务器需要身份验证。如此多的服务器在没有身份验证的情况下可以公开访问已经足够糟糕,但是当他们对所提交的三个漏洞进行测试时,他们发现只有 200 台需要身份验证的服务器部署了 10 月的补丁,其它的所有服务器都可能通过 SASL 漏洞进行攻击。
总的来说,暴露于互联网上的 memcached 服务器有大约 80%,即 85000 个都没有对 10 月份的三个关键漏洞进行安全修复。
由于补丁的采用率不佳,Talos 的研究人员决定对所有这些服务器的 IP 地址进行 whois 查询,并向其所有者发送电子邮件通知。
本月初,研究人员决定再次进行扫描。他们发现,虽然有 28500 台服务器的 IP 地址与 2 月份时的地址不同,但仍然有 106000 台 memcached 服务器暴露在因特网上。
在这 106000 台服务器中,有大约 70%,即 73400 台服务器在 10 月份修复的三个漏洞的测试中仍然受到攻击。超过 18000 个已识别的服务器需要身份验证,其中 99% 的服务器仍然存在 SASL 漏洞。
即便是发送了成千上万封电子邮件进行通知,补丁的采用率也仅仅提高了 10%。
Talos 研究人员在周一的[博客](http://blog.talosintelligence.com/2017/07/memcached-patch-failure.html)中表示:“这些漏洞的严重程度不能被低估。这些漏洞可能会影响到小型和大型企业在互联网上部署的平台,随着最近大量的蠕虫利用漏洞进行攻击,应该为全世界的服务器管理员敲响警钟。如果这些漏洞没有修复,就可能被利用,对组织和业务造成严重的影响。”
这项工作的结论表明,许多网络应用程序的所有者在保护用户数据方面做得不好。首先,大量的 Memcached 服务器直接暴露在互联网上,其中大多数都没有使用身份验证。即使没有任何漏洞,这些服务器上缓存的数据也存在着安全风险。
其次,即使提供了关键漏洞的补丁,许多服务器管理员也不会及时地进行修复。
在这种情况下,看到 memcached 服务器像 MongoDB 数据库一样被大规模攻击也并不奇怪。
---
via: <https://thenewstack.io/70000-memcached-servers-can-hacked-using-eight-month-old-flaws/>
作者:[Lucian Constantin](https://thenewstack.io/author/lucian/) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # 70,000 Memcached Servers Can Be Hacked Using Eight-Month-Old Flaws

Eight months after three critical vulnerabilities were fixed in the memcached open source caching software, there are over 70,000 caching servers directly exposed on the internet that have yet to be patched. Hackers could execute malicious code on them or steal potentially sensitive data from their caches, security researchers warn.
[Memcached](https://memcached.org/) is a software package that implements a high performance caching server for storing chunks of data obtained from database and API calls in RAM. This helps speed up dynamic web applications, making it well suited for large websites and big-data projects.
While memcached is not a database replacement, the data it stores in RAM can include user sessions and other sensitive information from database queries. As such, the server was not designed to be directly exposed to untrusted environments like the internet, even though some of the more recent versions support basic authentication.
Back in October, the memcached developers fixed three remote code execution vulnerabilities ([CVE-2016-8704](https://www.talosintelligence.com/reports/TALOS-2016-0219/), [CVE-2016-8705](https://www.talosintelligence.com/reports/TALOS-2016-0220/) and [CVE-2016-8706](https://www.talosintelligence.com/reports/TALOS-2016-0221/)) that were found and reported by security researchers from [Cisco Systems’ Talos division](https://www.talosintelligence.com/). All of these flaws affected memcached’s binary protocol for storing and retrieving data and one of them was in the [Simple Authentication and Security Layer ](https://tools.ietf.org/html/rfc4422)(SASL) implementation.
Throughout December and January several groups of attackers wiped data from tens of thousands of publicly exposed databases including MongoDB, CouchDB, Hadoop and Elasticsearch clusters. In many cases they asked server administrators for money to return the data, but there was no evidence they actually copied it.
The Talos researchers thought that memcached servers might be the next target, especially giving the flaws they had identified a few months earlier, so in February they decided to run a series of internet scans to determine the potential attack surface.
The scan results revealed that around 108,000 memcached servers were directly exposed to the internet and only 24,000 of them required authentication. The fact that so many servers were publicly accessible without authentication was bad enough, but when they also tested for the presence of the three vulnerabilities, they found that only 200 servers requiring authentication actually had the October patches deployed. All the rest were open to hacking through the SASL vulnerability.
Overall, 85,000 or around 80 percent of all memcached servers exposed to the internet lacked the security fixes for the three critical flaws announced in October.
Troubled by the poor patch adoption rate, the Talos researchers decided to run whois queries on the IP addresses of all of those servers and send notification emails to their owners.
Earlier this month the researchers decided to redo their scans. They found that there are still 106,000 memcached servers exposed to the internet, although 28,500 have different IP addresses than the ones found in February.
Of these 106,000 servers, 73,400 or around 70 percent continue to be vulnerable to the three exploits patched in October. Over 18,000 of the identified servers require authentication and 99 percent of those continue to have the SASL vulnerability.
Even after sending tens of thousands of notification emails, the patch adoption rate improved by only 10 percent in six months.
“The severity of these types of vulnerabilities cannot be understated,” the Talos researchers said Monday in a [blog post](http://blog.talosintelligence.com/2017/07/memcached-patch-failure.html). “These vulnerabilities potentially affect a platform that is deployed across the internet by small and large enterprises alike. With the recent spate of worm attacks leveraging vulnerabilities this should be a red flag for administrators around the world. If left unaddressed the vulnerabilities could be leveraged to impact organizations globally and impact business severely.”
The conclusions of this exercise suggest that many web application owners do a poor job of safeguarding their users’ data. First, a surprisingly large number of memcached servers are directly exposed to the internet and the majority of them do not use authentication. The data cached on these servers is at risk even without the presence of any vulnerabilities.
Second, even when critical vulnerabilities that could be used to completely compromise servers are patched, many server administrators don’t apply the security fixes in a timely manner, if ever.
Under these circumstances, seeing large scale attacks against memcached servers like those that targeted MongoDB databases would not be surprising.
Feature image via Pixabay.
|
8,732 | Linux “天气预报” | https://www.linux.com/news/2017/7/linux-weather-forecast | 2017-07-28T12:36:00 | [
"Linux",
"内核"
] | https://linux.cn/article-8732-1.html | 
### 欢迎来到 Linux 天气预报
本页面是为了跟踪在不久的将来某个时间内有可能出现在主线内核和/或主要发行版中的 Linux 开发社区的进展情况。你的“首席气象学家”是 [LWN.net](http://www.lwn.net/) 执行主编 Jonathan Corbet。如果你有改进预测的建议(特别是如果你有一个你认为应该跟踪的项目或修补程序的情况下),请在下面补充你的意见。
### 预测摘要
**当前情况**:内核 4.12 于 7 月 2 日发布。它包含了许多新功能,包括:
* [BFQ 和 Kyber 块 I/O 调度器](https://lwn.net/Articles/720675/)。已经开发多年的 BFQ 在交互式系统上表现更好,这引起了移动设备领域的兴趣。相反,Kyber 是一个更简单的调度程序,旨在用于通常出现在企业配置中的快速设备。
* epoll\_wait() 系统调用现在可以执行网络套接字的繁忙轮询。
* 实时修补机制已经实现了[混合一致性模型](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d83a7cb375eec21f04c83542395d08b2f6641da2),这将可以把更复杂的补丁应用于运行中的内核。
* [可信执行框架](https://lwn.net/Articles/717125/)应该使得内核与在 ARM TrustZone 安全世界中运行的代码之间的交互更加容易。
4.12 是最繁忙的内核开发周期之一,合并了近 15000 次更新。有关这些变化来源的概述,请参阅[这里](https://lwn.net/Articles/726950/)。
**短期预测**:4.13 内核预期在 2017 年 9 月初发布。这个内核中会出现的一些变化是:
* 更好地支持非阻塞直接块 I/O 操作。
* [结构布局随机化机制](https://lwn.net/Articles/722293/)是正在进行的内核加固防攻击项目的下一步。
* 内核现在已经[原生支持 TLS 网络协议](https://lwn.net/Articles/666509/)。
* 采用了[透明大页交换](https://lwn.net/Articles/717707/),带来了更好的内存管理性能。
* 块子系统中回写错误的处理[已经改进](https://lwn.net/Articles/724307/),使得回写错误更不容易在不通知写入程序的情况下被忽略。
4.13 内核现在处于稳定时期,所以在这个开发周期的剩余时间内只会接受修复补丁。
这篇[文章](http://purl.org/dc/elements/1.1/)根据[知识共享署名 - 共享 3.0 许可证](http://creativecommons.org/licenses/by-sa/3.0/)获得许可。
---
via: <https://www.linux.com/news/2017/7/linux-weather-forecast>
作者:[JONATHAN CORBET](https://www.linux.com/users/corbet) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,735 | GPL 没落了吗? | https://opensource.com/article/17/2/decline-gpl | 2017-07-31T10:00:00 | [
"GPL",
"开源许可证"
] | /article-8735-1.html | 
不久之前我看到了 RedMonk 的 Stephen O'Grady 发了一个[关于开源协议](https://twitter.com/sogrady/status/820001441733607424)的有趣的推特,那个推特里面有这张图。

这张图片显示了从 2010 到 2017 年间各种开源协议之间的使用率的变化。在这张图片里,显然 GPL 2.0 —— 最纯净的 copyleft 协议之一 —— 的使用率降低了一多半。该图表表明,开源项目中 [MIT](https://opensource.org/licenses/MIT) 协议和 [Apache](http://apache.org/licenses/) 协议开始受欢迎。[GPL 3.0](https://www.gnu.org/licenses/gpl-3.0.en.html) 的使用率也有所上涨。
这些意味着什么?
为什么 GPL 2.0 的使用率跌得这么多,而 GPL 3.0 仅仅涨了一丁点?为什么 MIT 协议和 Apache 协议的使用率涨了那么多?
当然,有很多原因可以解释这件事情,但是我想这是因为商业开源项目的增多,以及商业社会对于 GPL 协议的担心导致的,我们细细掰扯。
### GPL 协议与商业社会
我知道我要说的可能会激怒一些 GPL 粉,所以在你们开始喷之前,我想说明的是:我支持 GPL,我也是 GPL 粉丝。
我写过的所有软件都使用的是 GPL 协议,我也是一直是积极出资支持 [自由软件基金会](http://www.fsf.org/) 以及 [软件自由保护组织](https://sfconservancy.org/) 以及他们的工作的,我支持使用 GPL 协议。我在这说的无关 GPL 的合法性或者 GPL 的巨大价值 —— 毫无疑问这是一个好协议 —— 我在这要说的是业内对于这个协议的看法。
大概四年之前,我参加了一个叫做<ruby> 开源智库 <rt> Open Source Think Tank </rt></ruby>的峰会。这个峰会是一个私人小型峰会,每年都会把各大开源企业的管理人员请到加利福尼亚的酒庄。这个峰会聚焦于建立关系、构建联盟,找到并解决行业问题。
在这个峰会上,有一个分组研究,在其中,与会者被分成小组,被要求给一个真实存在的核心的开源技术推荐一个开源协议。每个小组都给出了回应。不到十分之一的小组推荐了宽容许可证,没有人推荐 GPL 许可证。
我看到了开源行业对 Apache 协议和 MIT 协议的认可在逐步加深,但对于花时间去理解、接受和熟悉 GPL 这件事,他们却避而远之。
在这几年里,这种趋势仍在蔓延。除了 Black Duck 的调查之外, 2015 年 [GitHub 上的开源协议调查](https://github.com/blog/1964-open-source-license-usage-on-github-com) 也显示 MIT 是人们的首选。我还能看到,在我工作的 XPRIZE (我们为我们的 [Global Learning XPRIZE](http://learning.xprize.org/) 选择了开源协议),在我作为[社区领导顾问](http://www.jonobacon.org/consulting)的工作方面,我也能感觉到那种倾向,因为越来越多的客户觉得把他们的代码用 GPL 发布不舒服。
随着 [大约 65% 的公司对开源事业做贡献](https://opensource.com/business/16/5/2016-future-open-source-survey) ,自从 2010 年以后显然开源行业已经引来了不少商业兴趣和投资。我相信,我之前说的那些趋势,已经表明这行业不认为 GPL 适合搞开源生意。
### 连接社区和公司
说真的,GPL 的没落不太让人吃惊,因为有如下原因。
首先,开源行业已经转型升级,它要在社区发展以及……你懂的……真正能赚钱的商业模型中做出均衡,这是它们要做的最重要的决策。在开源思想发展之初,人们有种误解说,“如果你搞出来了,他们就会用”,他们确实会来使用你的软件,但是在很多情况下,都是“如果你搞出来了,他们不是一定要给你钱。”
随着历史的进程,我们看到了许多公司,比如 Red Hat、Automattic、Docker、Canonical、Digital Ocean 等等等等,探索着在开源领域中赚钱的法子。他们探索过分发模式、服务模式,核心开源模式等等。现在可以确定的是,传统的商业软件赚钱的方式已经不再适用开源软件;因此,你得选择一个能够支持你的公司的经营方式的开源协议。在赚钱和免费提供你的技术之间找到平衡在很多情况下是很困难的一件事。
这就是我们看到那些变化的原因。尽管 GPL 是一个开源协议,但是它根本上是个自由软件协议,作为自由软件协议,它的管理以及支持是由自由软件基金会提供的。
我喜欢自由软件基金会的作品,但是他们已经把观点局限于软件必须 100% 绝对自由。对于自由软件基金会没有多少可以妥协的余地,甚至很多出名的开源项目(比如很多 Linux 发行版)仅仅是因为一丁点二进制固件就被认为是 “非自由” 软件。
对于商业来说,最复杂的是它不是非黑即白的,更多的时候是两者混合的灰色,很少有公司有自由软件基金会(或者类似的组织,比如软件自由保护组织)的那种纯粹的理念,因此我想那些公司也不喜欢选择和那些理念相关的协议。
我需要说明,我不是在这是说自由软件基金会以及类似的组织(比如软件自由保护组织)的错。他们有着打造完全自由的软件的目标,对于他们来说,走它们选择的路十分合理。自由软件基金会以及软件自由保护组织做了*了不起*的工作,我将继续支持这些组织以及为他们工作的人们。我只是觉得这种对纯粹性的高要求的一个后果就是让那些公司认为自己难以达到要求,因此,他们使用了非 GPL 的其他协议。
我怀疑 GPL 的使用是随着开源软件增长而变化的。在以前,启动(开源)项目的根本原因之一是对开放性和软件自由的伦理因素的严格关注。GPL 无疑是项目的自然选择,Debian、Ubuntu、Fedora 和 Linux 内核以及很多都是例子。
近年来,尽管我们已经看到了不再那么挑剔的一代开发者的出现,但是如果我说的过激一些,他们缺少对于自由的关注。对于他们来说开源软件是构建软件的务实、实用的一部分,而无关伦理。我想,这就是为什么我们发现 MIT 和 Apache 协议的流行的原因。
### 未来 ?
这对于 GPL 意味着什么?
我的猜想是 GPL 依然将是一个主要选项,但是开发者将将之视为纯粹的自由软件协议。我想对于软件的纯粹性有高要求的项目会优先选择 GPL 协议。但是对于商业软件,为了保持我们之前讨论过的那种平衡,他们不会那么做。我猜测, MIT 以及 Apache 依然会继续流行下去。
不管怎样,好消息是开源/自由软件行业确实在增长。无论使用的协议会发生怎样的变化,技术确实变得更加开放,可以接触,人人都能使用。
---
作者简介:
Jono Bacon - Jono Bacon 是一位领袖级的社区管理者、演讲者、作者和播客主。他是 Jono Bacon 咨询公司的创始人,提供社区战略和执行、开发者流程和其它的服务。他之前任职于 GitHub、Canonical 和 XPRIZE、OpenAdvantage 的社区总监,并为大量的组织提供咨询和顾问服务。
---
via: <https://opensource.com/article/17/2/decline-gpl>
作者:[Jono Bacon](https://opensource.com/users/jonobacon) 译者:[name1e5s](https://github.com/name1e5s) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| null | HTTPSConnectionPool(host='opensource.com', port=443): Read timed out. (read timeout=10) | null |
8,736 | 网络分析利器:在 Ubuntu 16.04 上安装 Bro | https://www.unixmen.com/how-to-install-bro-ubuntu-1604/ | 2017-07-30T11:08:00 | [
"网络监控",
"Bro"
] | https://linux.cn/article-8736-1.html | [](https://www.unixmen.com/wp-content/uploads/2017/07/brologo.jpg)
### 简介:Bro 网络分析框架
Bro 是一个开源的网络分析框架,侧重于网络安全监控。这是一项长达 15 年的研究成果,被各大学、研究实验室、超级计算机中心和许多开放科学界广泛使用。它主要由伯克利国际计算机科学研究所和伊利诺伊大学厄巴纳-香槟分校的国家超级计算机应用中心开发。
Bro 的功能包括:
* Bro 的脚本语言支持针对站点定制监控策略
* 针对高性能网络
* 分析器支持许多协议,可以在应用层面实现高级语义分析
* 它保留了其所监控的网络的丰富的应用层统计信息
* Bro 能够与其他应用程序接口实时地交换信息
* 它的日志全面地记录了一切信息,并提供网络活动的高级存档
本教程将介绍如何从源代码构建,并在 Ubuntu 16.04 服务器上安装 Bro。
### 准备工作
Bro 有许多依赖文件:
* Libpcap ([http://www.tcpdump.org](http://www.tcpdump.org/))
* OpenSSL 库 ([http://www.openssl.org](http://www.openssl.org/))
* BIND8 库
* Libz
* Bash (BroControl 所需要)
* Python 2.6+ (BroControl 所需要)
从源代码构建还需要:
* CMake 2.8+
* Make
* GCC 4.8+ or Clang 3.3+
* SWIG
* GNU Bison
* Flex
* Libpcap headers
* OpenSSL headers
* zlib headers
### 起步
首先,通过执行以下命令来安装所有必需的依赖项:
```
# apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev
```
#### 安装定位 IP 地理位置的 GeoIP 数据库
Bro 使用 GeoIP 的定位地理位置。安装 IPv4 和 IPv6 版本:
```
$ wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
$ wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz
```
解压这两个压缩包:
```
$ gzip -d GeoLiteCity.dat.gz
$ gzip -d GeoLiteCityv6.dat.gz
```
将解压后的文件移动到 `/usr/share/GeoIP` 目录下:
```
# mv GeoLiteCity.dat /usr/share/GeoIP/GeoIPCity.dat
# mv GeoLiteCityv6.dat /usr/share/GeoIP/GeoIPCityv6.dat
```
现在,可以从源代码构建 Bro 了。
### 构建 Bro
最新的 Bro 开发版本可以通过 `git` 仓库获得。执行以下命令:
```
$ git clone --recursive git://git.bro.org/bro
```
转到克隆下来的目录,然后使用以下命令就可以简单地构建 Bro:
```
$ cd bro
$ ./configure
$ make
```
`make` 命令需要一些时间来构建一切。确切的时间取决于服务器的性能。
可以使用一些参数来执行 `configure` 脚本,以指定要构建的依赖关系,特别是 `--with-*` 选项。
### 安装 Bro
在克隆的 `bro` 目录中执行:
```
# make install
```
默认安装路径为 `/usr/local/bro`。
### 配置 Bro
Bro 的配置文件位于 `/usr/local/bro/etc` 目录下。 这里有三个文件:
* `node.cfg`,用于配置要监视的单个节点(或多个节点)。
* `broctl.cfg`,BroControl 的配置文件。
* `networks.cgf`,包含一个使用 CIDR 标记法表示的网络列表。
#### 配置邮件设置
打开 `broctl.cfg` 配置文件:
```
# $EDITOR /usr/local/bro/etc/broctl.cfg
```
查看 `Mail Options` 选项,并编辑 `MailTo` 行如下:
```
# Recipient address for emails sent out by Bro and BroControl
MailTo = [email protected]
```
保存并关闭。还有许多其他选项,但在大多数情况下,默认值就足够好了。
#### 选择要监视的节点
开箱即用,Bro 被配置为以独立模式运行。在本教程中,我们就是做一个独立的安装,所以没有必要改变。但是,也请查看 `node.cfg` 配置文件:
```
# $EDITOR /usr/local/bro/etc/node.cfg
```
在 `[bro]` 部分,你应该看到这样的东西:
```
[bro]
type=standalone
host=localhost
interface=eth0
```
请确保 `interface` 与 Ubuntu 16.04 服务器的公网接口相匹配。
保存并退出。
#### 配置监视节点的网络
最后一个要编辑的文件是 `networks.cfg`。使用文本编辑器打开它:
```
# $EDITOR /usr/local/bro/etc/networks.cfg
```
默认情况下,你应该看到以下内容:
```
# List of local networks in CIDR notation, optionally followed by a
# descriptive tag.
# For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes.
10.0.0.0/8 Private IP space
172.16.0.0/12 Private IP space
192.168.0.0/16 Private IP space
```
删除这三个条目(这只是如何使用此文件的示例),并输入服务器的公用和专用 IP 空间,格式如下:
```
X.X.X.X/X Public IP space
X.X.X.X/X Private IP space
```
保存并退出。
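写完 `networks.cfg` 之后,可以用一小段 Python 粗略校验每行的 CIDR 前缀是否合法(示意脚本,并非 Bro 自带的工具):

```python
import ipaddress

def check_networks_cfg(text):
    """校验 networks.cfg 中每一行的 CIDR 前缀是否合法,返回 (前缀, 描述) 列表;
    非法前缀会抛出 ValueError。"""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # 去掉注释与空白
        if not line:
            continue
        prefix, _, tag = line.partition(" ")
        ipaddress.ip_network(prefix)
        entries.append((prefix, tag.strip()))
    return entries

sample = """# 示例内容
10.0.0.0/8     Private IP space
192.168.0.0/16 Private IP space
"""
print(check_networks_cfg(sample))
```

如果某行的前缀写错(例如掩码超出范围),脚本会立刻抛出异常,比等到 Bro 启动失败时再排查要省事一些。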
### 使用 BroControl 管理 Bro 的安装
管理 Bro 需要使用 BroControl,它支持交互式 shell 和命令行工具两种形式。启动该 shell:
```
# /usr/local/bro/bin/broctl
```
要想使用命令行工具,只需将参数传递给上一个命令,例如:
```
# /usr/local/bro/bin/broctl status
```
这将通过显示以下的输出来检查 Bro 的状态:
```
Name Type Host Status Pid Started
bro standalone localhost running 6807 20 Jul 12:30:50
```
### 结论
Bro 的安装教程到此结束。我们采用了基于源代码的安装方式,因为这是获得最新可用版本的最有效途径,不过该网络分析框架也提供预构建的二进制包可供下载。
下次见!
---
via: <https://www.unixmen.com/how-to-install-bro-ubuntu-1604/>
作者:[Giuseppe Molica](https://www.unixmen.com/author/tutan/) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | ### Introduction: Bro Network Analysis Framework
Bro is an open source network analysis framework with a focus on network security monitoring. It is the result of 15 years of research, widely used by major universities, research labs, supercomputing centers and many open-science communities. It is developed mainly at the International Computer Science Institute, in Berkeley, and the National Center for Supercomputing Applications, in Urbana-Champaign.
Bro has various features, including the following:
- Bro’s scripting language enables site-specific monitoring policies
- Targeting of high-performance networks
- Analyzers for many protocols, enabling high-level semantic analysis at the application level
- It keeps extensive application-layer stats about the network it monitors.
- Bro interfaces with other applications for real-time exchange of information
- It comprehensively logs everything and provides a high-level archive of a network’s activity.
This tutorial explains how to build from source and install Bro on an Ubuntu 16.04 Server.
### Prerequisites
Bro has many dependencies:
- Libpcap (
[http://www.tcpdump.org](http://www.tcpdump.org)) - OpenSSL libraries (
[http://www.openssl.org](https://www.openssl.org)) - BIND8 library
- Libz
- Bash (required for BroControl)
- Python 2.6+ (required for BroControl)
Building from source requires also:
- CMake 2.8+
- Make
- GCC 4.8+ or Clang 3.3+
- SWIG
- GNU Bison
- Flex
- Libpcap headers
- OpenSSL headers
- zlib headers
### Getting Started
First of all, install all the required dependencies, by executing the following command:
# apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev
#### Install GeoIP Database for IP Geolocation
Bro depends on GeoIP for address geolocation. Install both the IPv4 and IPv6 versions:
$ wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
$ wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz
Decompress both archives:
$ gzip -d GeoLiteCity.dat.gz $ gzip -d GeoLiteCityv6.dat.gz
Move the decompressed files to
directory:
# mv GeoLiteCity.dat /usr/share/GeoIP/GeoIPCity.dat
# mv GeoLiteCityv6.dat /usr/share/GeoIP/GeoIPCityv6.dat
Now, it’s possible to build Bro from source.
### Build Bro
The latest Bro development version can be obtained through
repositories. Execute the following command:
$ git clone --recursive git://git.bro.org/bro
Go to the cloned directory and simply build Bro with the following commands:
$ cd bro $ ./configure $ make
The make command will require some time to build everything. The exact amount of time, of course, depends on the server performances.
The
script can be executed with some argument to specify what dependencies you want build to, in particular the
options.
### Install Bro
Inside the cloned
directory, execute:
# make install
The default installation path is
.
### Configure Bro
Bro configuration files are located in the
directory. There are three files:
-
node.cfg
, used to configure which node (or nodes) to monitor.
-
broctl.cfg
, the BroControl configuration file.
-
networks.cfg
, containing a list of networks in CIDR notation.
#### Configure Mail Settings
Open the
configuration file:
# $EDITOR /usr/local/bro/etc/broctl.cfg
Look for the **Mail Options** section, and edit the
line as follow:
# Recipient address for emails sent out by Bro and BroControl MailTo = [email protected]
Save and close. There are many other options, but in most cases the defaults are good enough.
#### Choose Nodes To Monitor
Out of the box, Bro is configured to operate in the standalone mode. In this tutorial we are doing a standalone installation, so it’s not necessary to change very much. However, look at the
configuration file:
# $EDITOR /usr/local/bro/etc/node.cfg
In the
section, you should see something like this:
```
[bro]
type=standalone
host=localhost
interface=eth0
```
Make sure that the interface matches the public interface of the Ubuntu 16.04 server.
Save and exit.
### Configure Node’s Networks
The last file to edit is
. Open it with a text editor:
# $EDITOR /usr/local/bro/etc/networks.cfg
By default, you should see the following content:
# List of local networks in CIDR notation, optionally followed by a # descriptive tag. # For example, "10.0.0.0/8" or "fe80::/64" are valid prefixes.10.0.0.0/8 Private IP space 172.16.0.0/12 Private IP space 192.168.0.0/16 Private IP space
Delete the three entries (which are just example for how to use this file), and enter the public and private IP space of your server, in the format:
X.X.X.X/X Public IP space X.X.X.X/X Private IP space
Save and exit.
### Manage Bro Installation with BroControl
Managing Bro requires using BroControl, which comes in form of an interactive shell and a command line tool. Start the shell with:
# /usr/local/bro/bin/broctl
To use as a command line tool, just pass an argument to the previous command, for example:
# /usr/local/bro/bin/broctl status
This will check Bro’s status, by showing output like:
Name Type Host Status Pid Started bro standalone localhost running 6807 20 Jul 12:30:50
### Conclusion
This concludes the Bro’s installation tutorial. We used the source based installation because it is the most efficient way to obtain the latest version available, however this network analysis framework can also be downloaded in pre-built binary format.
See you next time! |
8,737 | 使用 Kdump 检查 Linux 内核崩溃 | https://opensource.com/article/17/6/kdump-usage-and-internals | 2017-07-30T15:02:00 | [
"内核",
"kdump",
"转储"
] | https://linux.cn/article-8737-1.html |
>
> 让我们先看一下 kdump 的基本使用方法,和 kdump/kexec 在内核中是如何实现。
>
>
>

[kdump](https://www.kernel.org/doc/Documentation/kdump/kdump.txt) 是获取崩溃的 Linux 内核转储的一种方法,但是想找到解释其使用和内部结构的文档可能有点困难。在本文中,我将研究 kdump 的基本使用方法,和 kdump/kexec 在内核中是如何实现。
[kexec](https://linux.die.net/man/8/kexec) 是一个 Linux 内核到内核的引导加载程序,可以帮助从第一个内核的上下文引导到第二个内核。kexec 会关闭第一个内核,绕过 BIOS 或固件阶段,并跳转到第二个内核。因此,在没有 BIOS 阶段的情况下,重新启动变得更快。
kdump 可以与 kexec 应用程序一起使用 —— 例如,当第一个内核崩溃时第二个内核启动,第二个内核用于复制第一个内核的内存转储,可以使用 `gdb` 和 `crash` 等工具分析崩溃的原因。(在本文中,我将使用术语“第一内核”作为当前运行的内核,“第二内核” 作为使用 kexec 运行的内核,“捕获内核” 表示在当前内核崩溃时运行的内核。)
kexec 机制在内核以及用户空间中都有组件。内核提供了几个用于 kexec 重启功能的系统调用。名为 kexec-tools 的用户空间工具使用这些调用,并提供可执行文件来加载和引导“第二内核”。有的发行版还会在 kexec-tools 上添加封装器,这有助于捕获并保存各种转储目标配置的转储。在本文中,我将使用 distro-kexec-tools 这一称呼,以避免上游 kexec-tools 和特定于发行版的 kexec-tools 代码之间的混淆。我的例子将使用 Fedora Linux 发行版。
### Fedora kexec-tools 工具
使用 `dnf install kexec-tools` 命令在 Fedora 机器上安装 fedora-kexec-tools。在安装 fedora-kexec-tools 后可以执行 `systemctl start kdump` 命令来启动 kdump 服务。当此服务启动时,它将创建一个根文件系统(initramfs),其中包含了要挂载到目标位置的资源,以保存 vmcore,以及用来复制和转储 vmcore 到目标位置的命令。然后,该服务将内核和 initramfs 加载到崩溃内核区域内的合适位置,以便在内核崩溃时可以执行它们。
Fedora 封装器提供了两个用户配置文件:
1. `/etc/kdump.conf` 指定修改后需要重建 initramfs 的配置参数。例如,如果将转储目标从本地磁盘更改为 NFS 挂载的磁盘,则需要由“捕获内核”所加载的 NFS 相关的内核模块。
2. `/etc/sysconfig/kdump` 指定修改后不需要重新构建 initramfs 的配置参数。例如,如果只需修改传递给“捕获内核”的命令行参数,则不需要重新构建 initramfs。
如果内核在 kdump 服务启动之后出现故障,那么“捕获内核”就会执行,其将进一步执行 initramfs 中的 vmcore 保存过程,然后重新启动到稳定的内核。
### kexec-tools 工具
编译 kexec-tools 的源代码得到了一个名为 `kexec` 的可执行文件。这个同名的可执行文件可用于加载和执行“第二内核”,或加载“捕获内核”,它可以在内核崩溃时执行。
加载“第二内核”的命令:
```
# kexec -l kernel.img --initrd=initramfs-image.img --reuse-cmdline
```
`--reuse-cmdline` 参数表示使用与“第一内核”相同的命令行。使用 `--initrd` 传递 initramfs。`-l` 表明你正在加载“第二内核”,其可以由 `kexec` 应用程序本身执行(`kexec -e`)。使用 `-l` 加载的内核不能在内核崩溃时执行。为了加载可以在内核崩溃时执行的“捕获内核”,必须传递参数 `-p` 取代 `-l`。
加载捕获内核的命令:
```
# kexec -p kernel.img --initrd=initramfs-image.img --reuse-cmdline
```
`echo c > /proc/sysrq-trigger` 可用于使内核崩溃以进行测试。有关 kexec-tools 提供的选项的详细信息,请参阅 `man kexec`。在转到下一个部分之前,可以先观看一下 kexec\_dump 的演示。
### kdump: 端到端流
下图展示了流程图。必须在引导“第一内核”期间为捕获内核保留 crashkernel 的内存。您可以在内核命令行中传递 `crashkernel=Y@X`,其中 `@X` 是可选的。`crashkernel=256M` 适用于大多数 x86\_64 系统;然而,为崩溃内核选择适当的内存取决于许多因素,如内核大小和 initramfs,以及 initramfs 中包含的模块和应用程序运行时的内存需求。有关传递崩溃内核参数的更多方法,请参阅 [kernel-parameters 文档](https://github.com/torvalds/linux/blob/master/Documentation/admin-guide/kernel-parameters.txt)。

您可以将内核和 initramfs 镜像传递给 `kexec` 可执行文件,如(`kexec-tools`)部分的命令所示。“捕获内核”可以与“第一内核”相同,也可以不同。通常,一样即可。Initramfs 是可选的;例如,当内核使用 `CONFIG_INITRAMFS_SOURCE` 编译时,您不需要它。通常会为捕获内核准备一个与第一个 initramfs 不同的 initramfs,因为在捕获 initramfs 中自动完成 vmcore 的复制效果更好。当执行 `kexec` 时,它还加载了 `elfcorehdr` 数据和 purgatory 可执行文件(LCTT 译注:purgatory 就是一个引导加载程序,是为 kdump 定制的。它被赋予了“炼狱”这样一个古怪的名字应该只是一种调侃)。 `elfcorehdr` 具有关于系统内存组织的信息,而 purgatory 可以在“捕获内核”执行之前执行并验证第二阶段的二进制或数据是否具有正确的 SHA。purgatory 也是可选的。
当“第一内核”崩溃时,它执行必要的退出过程并切换到 purgatory(如果存在)。purgatory 验证加载二进制文件的 SHA256,如果是正确的,则将控制权传递给“捕获内核”。“捕获内核”根据从 `elfcorehdr` 接收到的系统内存信息创建 vmcore。因此,“捕获内核”启动后,您将看到 `/proc/vmcore` 中“第一内核”的转储。根据您使用的 initramfs,您现在可以分析转储,将其复制到任何磁盘,也可以是自动复制的,然后重新启动到稳定的内核。
### 内核系统调用
内核提供了两个系统调用:`kexec_load()` 和 `kexec_file_load()`,可以用于在执行 `kexec -l` 时加载“第二内核”。它还为 `reboot()` 系统调用提供了一个额外的标志,可用于使用 `kexec -e` 引导到“第二内核”。
`kexec_load()`:`kexec_load()` 系统调用加载一个可以在之后通过 `reboot()` 执行的新的内核。其原型定义如下:
```
long kexec_load(unsigned long entry, unsigned long nr_segments,
struct kexec_segment *segments, unsigned long flags);
```
用户空间需要为不同的组件传递不同的段,如内核,initramfs 等。因此,`kexec` 可执行文件有助于准备这些段。`kexec_segment` 的结构如下所示:
```
struct kexec_segment {
void *buf;
/* 用户空间缓冲区 */
size_t bufsz;
/* 用户空间中的缓冲区长度 */
void *mem;
/* 内核的物理地址 */
size_t memsz;
/* 物理地址长度 */
};
```
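为了更直观地理解上述结构在 64 位系统上的内存布局,下面给出一个示意性的 Python 草图:仅用 `ctypes` 模拟该结构的字段布局,并不实际调用系统调用;其中的 `page_align` 辅助函数是为演示而假设的。

```
import ctypes

# 示意性地用 ctypes 模拟 struct kexec_segment 的布局(假设 LP64 数据模型)
class KexecSegment(ctypes.Structure):
    _fields_ = [
        ("buf",   ctypes.c_void_p),   # 用户空间缓冲区
        ("bufsz", ctypes.c_size_t),   # 用户空间中的缓冲区长度
        ("mem",   ctypes.c_void_p),   # 内核的物理地址
        ("memsz", ctypes.c_size_t),   # 物理地址长度
    ]

def page_align(n, page=0x1000):
    """将段长度向上取整到整页,kexec 为 memsz 做的就是类似的事情。"""
    return (n + page - 1) & ~(page - 1)

if __name__ == "__main__":
    seg = KexecSegment(buf=None, bufsz=100,
                       mem=0x4000280000, memsz=page_align(100))
    print(ctypes.sizeof(KexecSegment), hex(seg.memsz))
```

在典型的 64 位(LP64)系统上,该结构共 4 个 8 字节字段,占 32 字节。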
当使用 `LINUX_REBOOT_CMD_KEXEC` 调用 `reboot()` 时,它会引导进入由 `kexec_load` 加载的内核。如果标志 `KEXEC_ON_CRASH` 被传递给 `kexec_load()`,则加载的内核将不会使用 `reboot(LINUX_REBOOT_CMD_KEXEC)` 来启动;相反,这将在内核崩溃中执行。必须定义 `CONFIG_KEXEC` 才能使用 `kexec`,并且为 `kdump` 定义 `CONFIG_CRASH_DUMP`。
`kexec_file_load()`:作为用户,你只需传递两个参数(即 `kernel` 和 `initramfs`)到 `kexec` 可执行文件。然后,`kexec` 从 sysfs 或其他内核信息源中读取数据,并创建所有段。所以使用 `kexec_file_load()` 可以简化用户空间,只传递内核和 initramfs 的文件描述符。其余部分由内核本身完成。使用此系统调用时应该启用 `CONFIG_KEXEC_FILE`。它的原型如下:
```
long kexec_file_load(int kernel_fd, int initrd_fd, unsigned long
cmdline_len, const char __user * cmdline_ptr, unsigned long
flags);
```
请注意,`kexec_file_load` 也可以接受命令行,而 `kexec_load()` 不行。内核根据不同的系统架构来接受和执行命令行。因此,在 `kexec_load()` 的情况下,`kexec-tools` 将通过其中一个段(如在 dtb 或 ELF 引导注释等)中传递命令行。
目前,`kexec_file_load()` 仅支持 x86 和 PowerPC。
#### 当内核崩溃时会发生什么
当第一个内核崩溃时,在控制权传递给 purgatory 或“捕获内核”之前,会执行以下操作:
* 准备 CPU 寄存器(参见内核代码中的 `crash_setup_regs()`);
* 更新 vmcoreinfo 备注(请参阅 `crash_save_vmcoreinfo()`);
* 关闭非崩溃的 CPU 并保存准备好的寄存器(请参阅 `machine_crash_shutdown()` 和 `crash_save_cpu()`);
* 您可能需要在此处禁用中断控制器;
* 最后,它执行 kexec 重新启动(请参阅 `machine_kexec()`),它将加载或刷新 kexec 段到内存,并将控制权传递给进入段的执行文件。输入段可以是下一个内核的 purgatory 或开始地址。
#### ELF 程序头
kdump 中涉及的大多数转储核心都是 ELF 格式。因此,理解 ELF 程序头部很重要,特别是当您想要找到 vmcore 准备的问题。每个 ELF 文件都有一个程序头:
* 由系统加载器读取,
* 描述如何将程序加载到内存中,
* 可以使用 `Objdump -p elf_file` 来查看程序头。
vmcore 的 ELF 程序头的示例如下:
```
# objdump -p vmcore
vmcore:
file format elf64-littleaarch64
Program Header:
NOTE off 0x0000000000010000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**0 filesz
0x00000000000013e8 memsz 0x00000000000013e8 flags ---
LOAD off 0x0000000000020000 vaddr 0xffff000008080000 paddr 0x0000004000280000 align 2**0 filesz
0x0000000001460000 memsz 0x0000000001460000 flags rwx
LOAD off 0x0000000001480000 vaddr 0xffff800000200000 paddr 0x0000004000200000 align 2**0 filesz
0x000000007fc00000 memsz 0x000000007fc00000 flags rwx
LOAD off 0x0000000081080000 vaddr 0xffff8000ffe00000 paddr 0x00000040ffe00000 align 2**0 filesz
0x00000002fa7a0000 memsz 0x00000002fa7a0000 flags rwx
LOAD off 0x000000037b820000 vaddr 0xffff8003fa9e0000 paddr 0x00000043fa9e0000 align 2**0 filesz
0x0000000004fc0000 memsz 0x0000000004fc0000 flags rwx
LOAD off 0x00000003807e0000 vaddr 0xffff8003ff9b0000 paddr 0x00000043ff9b0000 align 2**0 filesz
0x0000000000010000 memsz 0x0000000000010000 flags rwx
LOAD off 0x00000003807f0000 vaddr 0xffff8003ff9f0000 paddr 0x00000043ff9f0000 align 2**0 filesz
0x0000000000610000 memsz 0x0000000000610000 flags rwx
```
在这个例子中,有一个 note 段,其余的是 load 段。note 段提供了有关 CPU 信息,load 段提供了关于复制的系统内存组件的信息。
vmcore 从 `elfcorehdr` 开始,它具有与 ELF 程序头相同的结构。参见下图中 `elfcorehdr` 的表示:

`kexec-tools` 读取 `/sys/devices/system/cpu/cpu%d/crash_notes` 并准备 `CPU PT_NOTE` 的标头。同样,它读取 `/sys/kernel/vmcoreinfo` 并准备 `vmcoreinfo PT_NOTE` 的标头,从 `/proc/iomem` 读取系统内存并准备存储器 `PT_LOAD` 标头。当“捕获内核”接收到 `elfcorehdr` 时,它从标头中提到的地址中读取数据,并准备 vmcore。
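上述准备过程可以用一个示意性的 Python 草图来模拟:从 `/proc/iomem` 风格的文本中提取 “System RAM” 范围生成 `PT_LOAD` 头,再为每个 CPU 的 crash note 和 vmcoreinfo 各生成一个 `PT_NOTE` 头。这里的输入样本和函数名都是为演示而假设的,真实工作由 kexec-tools 用 C 完成。

```
import re

IOMEM_SAMPLE = """\
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
00100000-7fffffff : System RAM
  01000000-01a60d30 : Kernel code
80000000-8fffffff : PCI Bus 0000:00
"""

def system_ram_ranges(iomem_text):
    """仿照 kexec-tools,只取顶层(无缩进)的 System RAM 行,返回 (起始, 结束) 地址。"""
    for line in iomem_text.splitlines():
        m = re.match(r"([0-9a-f]+)-([0-9a-f]+) : System RAM$", line)
        if m:
            yield int(m.group(1), 16), int(m.group(2), 16)

def build_headers(iomem_text, cpu_note_addrs, vmcoreinfo_addr):
    """按 elfcorehdr 的组织方式拼出头部列表:每 CPU 一个 PT_NOTE、一个 vmcoreinfo PT_NOTE,
    以及每段 System RAM 一个 PT_LOAD。"""
    headers = [("PT_NOTE", a) for a in cpu_note_addrs]
    headers.append(("PT_NOTE", vmcoreinfo_addr))
    headers += [("PT_LOAD", r) for r in system_ram_ranges(iomem_text)]
    return headers

if __name__ == "__main__":
    for kind, info in build_headers(IOMEM_SAMPLE, [0x1000, 0x2000], 0x3000):
        print(kind, info)
```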
#### Crash note
Crash notes 是每个 CPU 中用于在系统崩溃的情况下存储 CPU 状态的区域;它有关于当前 PID 和 CPU 寄存器的信息。
#### vmcoreinfo
该 note 段具有各种内核调试信息,如结构体大小、符号值、页面大小等。这些值由捕获内核解析并嵌入到 `/proc/vmcore` 中。 `vmcoreinfo` 主要由 `makedumpfile` 应用程序使用。在 Linux 内核,`include/linux/kexec.h` 宏定义了一个新的 `vmcoreinfo`。 一些示例宏如下所示:
* `VMCOREINFO_PAGESIZE()`
* `VMCOREINFO_SYMBOL()`
* `VMCOREINFO_SIZE()`
* `VMCOREINFO_STRUCT_SIZE()`
#### makedumpfile
vmcore 中的许多信息(如可用页面)都没有用处。`makedumpfile` 是一个用于排除不必要的页面的应用程序,如:
* 填满零的页面;
* 没有私有标志的缓存页面(非专用缓存);
* 具有私有标志的缓存页面(专用缓存);
* 用户进程数据页;
* 可用页面。
此外,`makedumpfile` 在复制时压缩 `/proc/vmcore` 的数据。它也可以从转储中删除敏感的符号信息; 然而,为了做到这一点,它首先需要内核的调试信息。该调试信息来自 `VMLINUX` 或 `vmcoreinfo`,其输出可以是 ELF 格式或 kdump 压缩格式。
典型用法:
```
# makedumpfile -l --message-level 1 -d 31 /proc/vmcore makedumpfilecore
```
详细信息请参阅 `man makedumpfile`。
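`-d 31` 这个转储级别实际上是一个位掩码:前面列出的五类可过滤页面各对应一个比特位,31 = 1+2+4+8+16 表示全部排除。下面是一个示意性的 Python 草图(位值与 `man makedumpfile` 中的转储级别定义一致,`dump_level` 函数名为演示假设):

```
# 五类可过滤页面各对应一个比特位(与 makedumpfile 的转储级别定义一致)
PAGE_TYPES = {
    "zero": 1,           # 填满零的页面
    "cache": 2,          # 没有私有标志的缓存页面
    "cache_private": 4,  # 具有私有标志的缓存页面
    "user": 8,           # 用户进程数据页
    "free": 16,          # 可用页面
}

def dump_level(*excluded):
    """把要排除的页面类型组合成传给 makedumpfile -d 的位掩码。"""
    level = 0
    for name in excluded:
        level |= PAGE_TYPES[name]
    return level

if __name__ == "__main__":
    print(dump_level(*PAGE_TYPES))   # 31:排除全部五类页面
```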
### kdump 调试
新手在使用 kdump 时可能会遇到的问题:
#### `kexec -p kernel_image` 没有成功
* 检查是否分配了崩溃内存。
* `cat /sys/kernel/kexec_crash_size` 不应该有零值。
* `cat /proc/iomem | grep "Crash kernel"` 应该有一个分配的范围。
* 如果未分配,则在命令行中传递正确的 `crashkernel=` 参数。
* 如果没有显示,则在 `kexec` 命令中传递参数 `-d`,并将输出信息发送到 kexec-tools 邮件列表。
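上面的前两项检查可以合并成一个小工具。下面是一个示意性的 Python 草图,演示对这两个文件内容的判断逻辑;这里用字符串参数代替真实读取 `/sys/kernel/kexec_crash_size` 和 `/proc/iomem`,函数名为演示假设:

```
import re

def crash_reservation_ok(kexec_crash_size: str, iomem: str):
    """按上面两条检查判断崩溃内存是否已分配:
    kexec_crash_size 必须非零,且 /proc/iomem 中要有 "Crash kernel" 范围。
    返回 (是否已分配, 分配的地址范围)。"""
    size = int(kexec_crash_size.strip())
    m = re.search(r"([0-9a-f]+)-([0-9a-f]+) : Crash kernel", iomem)
    if size == 0 or m is None:
        return False, None
    return True, (int(m.group(1), 16), int(m.group(2), 16))

if __name__ == "__main__":
    iomem = "00100000-7fffffff : System RAM\n  6f000000-7effffff : Crash kernel\n"
    print(crash_reservation_ok("268435456\n", iomem))
```

在真实系统上,只需把 `open("/sys/kernel/kexec_crash_size").read()` 和 `open("/proc/iomem").read()` 的结果传进去即可。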
#### 在“第一内核”的最后一个消息之后,在控制台上看不到任何东西(比如“bye”)
* 检查 `kexec -e` 之后的 `kexec -l kernel_image` 命令是否工作。
* 可能缺少支持的体系结构或特定机器的选项。
* 可能是 purgatory 的 SHA 验证失败。如果您的体系结构不支持 purgatory 中的控制台,则很难进行调试。
* 可能是“第二内核”早已崩溃。
* 将您的系统的 `earlycon` 或 `earlyprintk` 选项传递给“第二内核”的命令行。
* 使用 kexec-tools 邮件列表共享第一个内核和捕获内核的 `dmesg` 日志。
### 资源
#### fedora-kexec-tools
* GitHub 仓库:`git://pkgs.fedoraproject.org/kexec-tools`
* 邮件列表:[[email protected]](mailto:[email protected])
* 说明:Specs 文件和脚本提供了用户友好的命令和服务,以便 `kexec-tools` 可以在不同的用户场景下实现自动化。
#### kexec-tools
* GitHub 仓库:git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
* 邮件列表:[[email protected]](mailto:[email protected])
* 说明:使用内核系统调用并提供用户命令 `kexec`。
#### Linux kernel
* GitHub 仓库: `git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git`
* 邮件列表:[[email protected]](mailto:[email protected])
* 说明:实现了 `kexec_load()`、`kexec_file_load()`、`reboot()` 系统调用和特定体系结构的代码,例如 `machine_kexec()` 和 `machine_crash_shutdown()`。
#### Makedumpfile
* GitHub 仓库: `git://git.code.sf.net/p/makedumpfile/code`
* 邮件列表:[[email protected]](mailto:[email protected])
* 说明:从转储文件中压缩和过滤不必要的组件。
(题图:[Penguin](https://pixabay.com/en/penguins-emperor-antarctic-life-429136/)、 [Boot](https://pixabay.com/en/shoe-boots-home-boots-house-1519804/),修改:Opensource.com. [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))
---
作者简介:
Pratyush Anand - Pratyush 正在以一名 Linux 内核专家的身份与 Red Hat 合作。他主要负责 Red Hat 产品和上游所面临的几个 kexec/kdump 问题。他还处理 Red Hat 所支持的 ARM64 平台上的其他内核调试、跟踪和性能问题。除了 Linux 内核,他还在上游的 kexec-tools 和 makedumpfile 项目中做出了贡献。他是一名开源爱好者,并在教育机构举办志愿者讲座,促进 FOSS 的发展。
---
via: <https://opensource.com/article/17/6/kdump-usage-and-internals>
作者:[Pratyush Anand](https://opensource.com/users/pratyushanand) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | [Kdump](https://www.kernel.org/doc/Documentation/kdump/kdump.txt) is a way to acquire a crashed Linux kernel dump, but finding documents that explain its usage and internals can be challenging. In this article, I'll examine the basics of kdump usage and look at the internals of kdump/kexec kernel implementation.
[Kexec](https://linux.die.net/man/8/kexec) is a Linux kernel-to-kernel boot loader that helps to boot the second kernel from the context of first kernel. Kexec shuts down the first kernel, bypasses the BIOS or firmware stage, and jumps to second kernel. Thus, reboots become faster in absence of the BIOS stage.
Kdump can be used with the kexec application—for example, when the second kernel is booted when the first kernel panics, the second kernel is used to copy the memory dump of first kernel, which can be analyzed with tools such as gdb and crash to determine the panic reasons. (In this article, I'll use the terms *first kernel* for the currently running kernel, *second kernel* for the kernel run using kexec, and *capture kernel* for kernel run when current kernel panics.)
The kexec mechanism has components in the kernel as well as in user space. The kernel provides few system calls for kexec reboot functionality. A user space tool called kexec-tools uses those calls and provides an executable to load and boot the second kernel. Sometimes a distribution also adds wrappers on top of kexec-tools, which helps capture and save the dump for various dump target configurations. In this article, I will use the name *distro-kexec-tools* to avoid confusion between upstream kexec-tools and distro-specific kexec-tools code. My example will use the Fedora Linux distribution.
## Fedora kexec-tools
**dnf install kexec-tools** installs fedora-kexec-tools on Fedora machines. The kdump service can be started by executing **systemctl start kdump** after installation of fedora-kexec-tools. When this service starts, it creates a root file system (initramfs) that contains resources to mount the target for saving vmcore, and a command to copy/dump vmcore to the target. This service then loads the kernel and initramfs at the suitable location within the crash kernel region so that they can be executed upon kernel panic.
Fedora wrapper provides two user configuration files:
**/etc/kdump.conf**specifies configuration parameters whose modification requires rebuild of the initramfs. For example, if you change the dump target from a local disk to an NFS-mounted disk, you will need NFS-related kernel modules to be loaded by the capture kernel.**/etc/sysconfig/kdump**specifies configuration parameters whose modification do not require rebuild of the initramfs. For example, you do not need to rebuild the initramfs if you only need to modify command-line arguments passed to the capture kernel.
If the kernel panics after the kdump service starts, then the capture kernel is executed, which further executes the vmcore save process from initramfs and reboots to a stable kernel afterward.
## kexec-tools
Compilation of kexec-tools source code provides an executable called **kexec**. The same executable can be used to load and execute a second kernel or to load a capture kernel, which can be executed upon kernel panic.
A typical command to load a second kernel:
# kexec -l kernel.img --initrd=initramfs-image.img --reuse-cmdline
**--reuse-cmdline** says to use the same command line as that of first kernel. Pass initramfs using **--initrd**. **-l** says that you are loading the second kernel, which can be executed by the kexec application itself (**kexec -e**). A kernel loaded using **-l** cannot be executed at kernel panic. You must pass **-p** instead of **-l** to load the capture kernel that can be executed upon kernel panic.
A typical command to load a capture kernel:
# kexec -p kernel.img --initrd=initramfs-image.img --reuse-cmdline
**echo c > /proc/sysrq-trigger** can be used to crash the kernel for test purposes. See **man kexec** for detail about options provided by kexec-tools. Before moving to the next section, which focuses on internal implementation, watch this kexec_dump demo:
## Kdump: End-to-end flow
Figure 1 shows a flow diagram. Crashkernel memory must be reserved for the capture kernel during booting of the first kernel. You can pass **crashkernel=Y@X** in the kernel command line, where **@X** is optional. **crashkernel=256M** works with most of the x86_64 systems; however, selecting a right amount of memory for the crash kernel is dependent on many factors, such as kernel size and initramfs, as well as runtime memory requirement of modules and applications included in initramfs. See the [kernel-parameters documentation](https://github.com/torvalds/linux/blob/master/Documentation/admin-guide/kernel-parameters.txt) for more ways to pass crash kernel arguments.
You can pass kernel and initramfs images to a **kexec** executable as shown in the typical command of section (**kexec-tools**). The capture kernel can be the same as the first kernel or can be different. Typically, keep it the same. Initramfs can be optional; for example, when the kernel is compiled with **CONFIG_INITRAMFS_SOURCE**, you do not need it. Typically, you keep a different capture initramfs from the first initramfs, because automating a copy of vmcore in capture initramfs is better. When **kexec** is executed, it also loads **elfcorehdr** data and the purgatory executable. **elfcorehdr** has information about the system RAM memory organization, and purgatory is a binary that executes before the capture kernel executes and verifies that the second stage binary/data have the correct SHA. Purgatory can be made optional as well.
When the first kernel panics, it does a minimal necessary exit process and switches to purgatory if it exists. Purgatory verifies SHA256 of loaded binaries and, if those are correct, then it passes control to the capture kernel. The capture kernel creates vmcore per the system RAM information received from **elfcorehdr**. Thus, you will see a dump of the first kernel in /proc/vmcore after the capture kernel boots. Depending on the initramfs you have used, you can now analyze the dump, copy it to any disk, or there can be an automated copy followed by reboot to a stable kernel.
## Kernel system calls
The kernel provides two system calls—**kexec_load()** and **kexec_file_load()**, which can be used to load the second kernel when **kexec -l** is executed. It also provides an extra flag for the **reboot()** system call, which can be used to boot into second kernel using **kexec -e**.
**kexec_load()**: The **kexec_load()** system call loads a new kernel that can be executed later by **reboot()**. Its prototype is defined as follows:
long kexec_load(unsigned long entry, unsigned long nr_segments, struct kexec_segment *segments, unsigned long flags);
User space needs to pass segments for different components, such as kernel, initramfs, etc. Thus, the **kexec** executable helps in preparing these segments. The structure of **kexec_segment** looks as follows:
struct kexec_segment {
    void *buf;      /* Buffer in user space */
    size_t bufsz;   /* Buffer length in user space */
    void *mem;      /* Physical address of kernel */
    size_t memsz;   /* Physical address length */
};
When **reboot()** is called with **LINUX_REBOOT_CMD_KEXEC**, it boots into a kernel loaded by **kexec_load()**. If a flag **KEXEC_ON_CRASH** is passed to **kexec_load()**, then the loaded kernel will not be executed with **reboot(LINUX_REBOOT_CMD_KEXEC)**; rather, that will be executed on kernel panic. **CONFIG_KEXEC** must be defined to use kexec and **CONFIG_CRASH_DUMP** should be defined for kdump.
**kexec_file_load()**: As a user you pass only two arguments (i.e., kernel and initramfs) to the **kexec** executable. **kexec** then reads data from sysfs or other kernel information source and creates all segments. So, **kexec_file_load()** gives you simplification in user space, where you pass only file descriptors of kernel and initramfs. The rest of the segment preparation is done by the kernel itself. **CONFIG_KEXEC_FILE** should be enabled to use this system call. Its prototype looks like:
long kexec_file_load(int kernel_fd, int initrd_fd, unsigned long cmdline_len, const char __user * cmdline_ptr, unsigned long flags);
Notice that **kexec_file_load()** also accepts the command line, whereas **kexec_load()** did not. The kernel has different architecture-specific ways to accept the command line. Therefore, **kexec-tools** passes the command-line through one of the segments (like in **dtb** or **ELF boot notes**, etc.) in case of **kexec_load()**.
Currently, **kexec_file_load()** is supported for x86 and PowerPC only.
### What happens when the kernel crashes
When the first kernel panics, before control is passed to the purgatory or capture kernel it does the following:
- prepares CPU registers (see
**crash_setup_regs()**in kernel code); - updates vmcoreinfo note (see
**crash_save_vmcoreinfo()**); - shuts down non-crashing CPUs and saves the prepared registers (see
**machine_crash_shutdown()**and**crash_save_cpu()**); - you also might need to disable the interrupt controller here;
- and at the end, it performs the kexec reboot (see
**machine_kexec()**), which loads/flushes kexec segments to memory and passes control to the execution of entry segment. Entry segment could be purgatory or start address of next kernel.
### ELF program headers
Most of the dump cores involved in kdump are in ELF format. Thus, understanding ELF Program headers is important, especially if you want to find issues with vmcore preparation. Each ELF file has a program header that:
- is read by the system loader,
- describes how the program should be loaded into memory,
- and one can use
**Objdump -p elf_file**to look into program headers.
An example of ELF program headers of a vmcore:
# objdump -p vmcore
vmcore:
file format elf64-littleaarch64

Program Header:
NOTE off 0x0000000000010000 vaddr 0x0000000000000000 paddr 0x0000000000000000 align 2**0 filesz
0x00000000000013e8 memsz 0x00000000000013e8 flags ---
LOAD off 0x0000000000020000 vaddr 0xffff000008080000 paddr 0x0000004000280000 align 2**0 filesz
0x0000000001460000 memsz 0x0000000001460000 flags rwx
LOAD off 0x0000000001480000 vaddr 0xffff800000200000 paddr 0x0000004000200000 align 2**0 filesz
0x000000007fc00000 memsz 0x000000007fc00000 flags rwx
LOAD off 0x0000000081080000 vaddr 0xffff8000ffe00000 paddr 0x00000040ffe00000 align 2**0 filesz
0x00000002fa7a0000 memsz 0x00000002fa7a0000 flags rwx
LOAD off 0x000000037b820000 vaddr 0xffff8003fa9e0000 paddr 0x00000043fa9e0000 align 2**0 filesz
0x0000000004fc0000 memsz 0x0000000004fc0000 flags rwx
LOAD off 0x00000003807e0000 vaddr 0xffff8003ff9b0000 paddr 0x00000043ff9b0000 align 2**0 filesz
0x0000000000010000 memsz 0x0000000000010000 flags rwx
LOAD off 0x00000003807f0000 vaddr 0xffff8003ff9f0000 paddr 0x00000043ff9f0000 align 2**0 filesz
0x0000000000610000 memsz 0x0000000000610000 flags rwx
In this example, there is one note section and the rest are load sections. The note section has information about CPU notes and load sections have information about copied system RAM components.
Vmcore starts with **elfcorehdr**, which has the same structure as that of an ELF program header. See the representation of **elfcorehdr** in Figure 2:
**kexec-tools** reads /sys/devices/system/cpu/cpu%d/crash_notes and prepares headers for **CPU PT_NOTE**. Similarly, it reads **/sys/kernel/vmcoreinfo** and prepares headers for **vmcoreinfo PT_NOTE**, and reads system RAM values from **/proc/iomem** and prepares memory **PT_LOAD** headers. When the capture kernel receives **elfcorehdr**, it appends data from addresses mentioned in the header and prepares vmcore.
### Crash notes
Crash notes is a per-CPU area for storing CPU states in case of a system crash; it has information about current PID and CPU registers.
### vmcoreinfo
This note section has various kernel debug information, such as struct size, symbol values, page size, etc. Values are parsed by capture kernel and embedded into **/proc/vmcore**. **vmcoreinfo** is used mainly by the **makedumpfile** application. **include/linux/kexec.h** in the Linux kernel has macros to define a new **vmcoreinfo**. Some of the example macros are like these:
**VMCOREINFO_PAGESIZE()****VMCOREINFO_SYMBOL()****VMCOREINFO_SIZE()****VMCOREINFO_STRUCT_SIZE()**
### makedumpfile
Much information (such as free pages) in a vmcore is not useful. Makedumpfile is an application that excludes unnecessary pages, such as:
- pages filled with zeroes,
- cache pages without private flag (non-private cache);
- cache pages with private flag (private cache);
- user process data pages;
- free pages.
Additionally, makedumpfile compresses **/proc/vmcore** data while copying. It can also erase sensitive symbol information from dump; however, it first needs kernel's debug information to do that. This debug information comes from either **VMLINUX** or **vmcoreinfo**, and its output either can be in ELF format or kdump-compressed format.
Typical usage:
# makedumpfile -l --message-level 1 -d 31 /proc/vmcore makedumpfilecore
See **man makedumpfile** for detail.
## Debugging kdump issues
Issues that new kdump users might have:
### Kexec -p kernel_image did not succeed
- Check whether crash memory is allocated.
**cat /sys/kernel/kexec_crash_size**should have no zero value.**cat /proc/iomem | grep "Crash kernel"**should have an allocated range.- If not allocated, then pass the proper
**crashkernel=**argument in the command line. - If nothing shows up, then pass
**-d**in the**kexec**command and share the debug output with the kexec-tools mailing list.
### Do not see anything on console after last message from first kernel (such as "bye")
- Check whether
**kexec -l kernel_image**followed by**kexec -e**works. - Might be missing architecture- or machine-specific options.
- Might have purgatory SHA verification failed. If your architecture does not support a console in purgatory, then it is difficult to debug.
- Might have second kernel crashed early.
- Pass
**earlycon**/**earlyprintk**option for your system to the second kernel command line. - Share
**dmesg**log of both the first and capture kernel with kexec-tools mailing list.
## Resources
#### fedora-kexec-tools
- Repository:
**git://pkgs.fedoraproject.org/kexec-tools** - Mailing list:
[[email protected]](mailto:[email protected]) - Description: Specs file and scripts to provide user-friendly command/services so that
**kexec-tools**can be automated in different user scenarios.
#### kexec-tools
- Repository: git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
- Mailing list:
[[email protected]](mailto:[email protected]) - Description: Uses kernel system calls and provides a user command
**kexec**.
#### Linux kernel
- Repository:
**git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git** - Mailing list:
[[email protected]](mailto:[email protected]) - Description: Implements
**kexec_load()**,**kexec_file_load()**, and**reboot()**system call and architecture-specific code, such as**machine_kexec()**and**machine_crash_shutdown()**, etc.
#### Makedumpfile
- Repository:
**git://git.code.sf.net/p/makedumpfile/code** - Mailing list:
[[email protected]](mailto:[email protected]) - Description: Compresses and filters unnecessary component from the dumpfile.
*Learn more in Pratyush Anand's KDUMP: Usage and Internals talk at LinuxCon ContainerCon CloudOpen China on June 20, 2017.*
8,738 | 使用统一阻止列表和白名单来更新主机文件 | https://www.darrentoback.com/this-script-updates-hosts-files-using-a-multi-source-unified-block-list-with-whitelisting | 2017-08-01T10:52:00 | [
"黑名单",
"hosts"
] | https://linux.cn/article-8738-1.html | 
网上有许多持续维护的含有不同垃圾域的有用列表。将这些列表复制到你的主机文件中可以轻松阻止大量的域,你的系统将根本不用去连接它们。此方法可以在不安装浏览器插件的情况下工作,并且将为系统上任何浏览器(和任何其他程序)提供阻止操作。
在本教程中,我将向你展示如何在 Linux 中启动并运行 Steven Black 的[统一主机脚本](https://github.com/StevenBlack/hosts)。该脚本将使用来自多个来源的最新已知的广告服务器、网络钓鱼网站和其他网络垃圾的地址来更新你的计算机主机文件,同时提供一个漂亮、干净的方式来管理你自己的黑名单/白名单,其分别来自于该脚本管理的各个列表。
在将 30,000 个域放入主机文件之前,需要注意两点。首先,这些巨大的列表包含可能需要解除封锁的服务器,以便进行在线购买或其他一些临时情况。如果你弄乱了你的主机文件,你要知道网上的某些东西可能会出现问题。为了解决这个问题,我将向你展示如何使用方便的打开/关闭开关,以便你可以快速禁用你的阻止列表来购买喜马拉雅盐雾灯(它是等离子灯)。我仍然认为这些列表的目的之一是将所有的一切都封锁(有点烦人,直到我想到了做一个关闭开关)。如果你经常遇到你需要的服务器被阻止的问题,只需将其添加到白名单文件中即可。
第二个问题是性能受到了轻微的影响,因为每次调用一个域时,系统都必须检查整个列表。只是有一点点影响,而没有大到让我因此而放弃黑名单,让每一个连接都通过。你具体要怎么选择自己看着办。
主机文件通过将请求定向到 127.0.0.1 或 0.0.0.0(换句话说定向到空地址)来阻止请求。有人说使用 0.0.0.0 是更快,问题更少的方法。你可以将脚本配置为使用 `-ip nnn.nnn.nnn.nnn` 这样的 ip 选项来作为阻止 ip,但默认值是 0.0.0.0,这是我使用的值。
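把“合并多个来源、应用黑白名单、用阻止 ip 写出主机文件条目”这个流程抽象出来,大致就是下面这个示意性的 Python 草图。注意这只是对 updateHostsFile.py 行为的简化示意,并非其真实实现,函数名也是为演示假设的:

```
def unified_hosts(sources, whitelist, blacklist, block_ip="0.0.0.0"):
    """把若干 hosts 风格的列表合并为一份统一阻止列表,并应用个人黑白名单。"""
    domains = set()
    for text in sources:
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()   # 去掉注释
            if not line:
                continue
            parts = line.split()
            # 兼容 "0.0.0.0 域名" / "127.0.0.1 域名" 两种条目以及裸域名
            host = parts[1] if len(parts) > 1 else parts[0]
            domains.add(host.lower())
    domains |= {d.lower() for d in blacklist}   # 追加个人黑名单
    domains -= {d.lower() for d in whitelist}   # 剔除个人白名单
    return "\n".join(f"{block_ip} {d}" for d in sorted(domains))

if __name__ == "__main__":
    src_a = "127.0.0.1 ads.example.com\n# comment\n0.0.0.0 track.example.net"
    src_b = "shop.example.org"
    print(unified_hosts([src_a, src_b],
                        whitelist={"shop.example.org"},
                        blacklist={"bad.example"}))
```

可以看到,白名单中的 shop.example.org 不会出现在输出里,而黑名单中的 bad.example 被追加了进来。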
我曾经将 Steven Black 的脚本做的事每隔一段时间就手动做一遍,进到每一个站点,将他们的列表拷贝/粘贴到我的主机文件中,做一个查找替换将其中的 127 变成 0 等等。我知道整件事情可以自动化,这样做有点傻,但我从来没有花时间解决这个问题。直到我找到这个脚本,现在这事已经是一个被遗忘的杂务。
让我们先下载一份最新的 Steven Black 的代码拷贝(大约 150MB),以便我们可以进行下一步。你需要安装 git,因此如果还没安装,进入到终端输入:
```
sudo apt-get install git
```
安装完之后,输入:
```
mkdir unifiedhosts
cd unifiedhosts
git clone https://github.com/StevenBlack/hosts.git
cd hosts
```
当你打开了 Steven 的脚本时,让我们来看看有什么选项。该脚本有几个选项和扩展,但扩展我不会在这里提交,但如果你到了这一步并且你有兴趣,[readme.md](https://github.com/StevenBlack/hosts/blob/master/readme.md) 可以告诉你所有你需要知道的。
你需要安装 python 来运行此脚本,并且与版本有关。要找到你安装的 Python 版本,请输入:
```
python --version
```
如果你还没安装 Python:
```
sudo apt-get install python
```
对于 Python 2.7,如下所示,输入 `python` 来执行脚本。对于 Python 3,将命令中的 `python` 替换成 `python3`。执行后,该脚本会确保它具有每个列表的最新版本,如果没有,它会抓取一个新的副本。然后,它会写入一个新的主机文件,包括了你的黑名单/白名单中的任何内容。让我们尝试使用 `-r` 选项来替换我们当前的主机文件,而 `-a` 选项可以让脚本不向我们提出任何问题。回到终端:
```
python updateHostsFile.py -r -a
```
该命令将询问你的 root 密码,以便能够写入 `/etc/`。为了使新更新的列表处于激活状态,某些系统需要清除 DNS 缓存。在同一个硬件设备上,我观察到不同的操作系统表现出非常不同的行为,在没有刷新缓存的情况下不同的服务器变为可访问/不可访问所需的时间长度都不同。我已经看到了从即时更新(Slackware)到重启更新(Windows)的各种情况。有一些命令可以刷新 DNS 缓存,但是它们在每个操作系统甚至每个发行版上都不同,所以如果没有生效,只需要重新启动就行了。
现在,只要将你的个人例外添加到黑名单/白名单中,并且只要你想要更新主机文件,运行该脚本就好。该脚本将根据你的要求调整生成的主机文件,每次运行文件时会自动追加你额外的列表。
最后,我们来创建一个打开/关闭开关,对于打开和关闭功能每个都创建一个脚本,所以回到终端输入下面的内容创建关闭开关(用你自己的文本编辑器替换 leafpad):
```
leafpad hosts-off.sh
```
在新文件中输入下面的内容:
```
#!/bin/sh
sudo mv /etc/hosts /etc/hostsDISABLED
```
接着让它可执行:
```
chmod +x hosts-off.sh
```
相似地,对于打开开关:
```
leafpad hosts-on.sh
```
在新文件中输入下面的内容:
```
#!/bin/sh
sudo mv /etc/hostsDISABLED /etc/hosts
```
最后让它可执行:
```
chmod +x hosts-on.sh
```
你所需要做的是为每个脚本创建一个快捷方式,标记为 HOSTS-ON 和 HOSTS-OFF,放在你能找到它们的地方。
---
via: <https://www.darrentoback.com/this-script-updates-hosts-files-using-a-multi-source-unified-block-list-with-whitelisting>
作者:[dmt](https://www.darrentoback.com/about-me) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 404 | null |
|
8,739 | LKU:一套在 Ubuntu/LinuxMint 上编译、安装和更新最新内核的 Shell 脚本 | http://www.2daygeek.com/lku-linux-kernel-utilities-compile-install-update-latest-kernel-in-linux-mint-ubuntu/ | 2017-08-01T13:19:00 | [
"内核",
"升级"
] | https://linux.cn/article-8739-1.html | 
以手动方式安装和升级最新的 Linux 内核对于每个人来说都不是一件小事,甚至包括一些有经验的人也是如此。它需要对 Linux 内核有深入的了解。过去我们已经介绍了 UKUU(Ubuntu Kernel Upgrade Utility),它可以从 kernel.ubuntu.com 网站上自动检测最新的主线内核,并弹出一个不错的窗口界面进行安装。
[Linux Kernel Utilities](https://github.com/mtompkins/linux-kernel-utilities) (LKU)提供一组 shell 脚本(三个 Shell 脚本),可以帮助用户从 kernel.org 获取并编译和安装最新的 Linux 内核,也可以从 kernel.ubuntu.com 获取安装最新的预编译的 Ubuntu 内核。甚至可以根据需要选择所需的内核(手动内核选择)。
该脚本还将根据 PGP 签名文件检查下载的归档文件,并且可以选择通用和低延迟版内核。
建议阅读:[ukuu:一种在基于 Ubuntu 的系统上轻松安装升级 Linux 内核的方式](http://www.2daygeek.com/ukuu-install-upgrade-linux-kernel-in-linux-mint-ubuntu-debian-elementary-os/)
它可以删除或清除所有非活动的内核,并且不会为了安全目的留下备份的内核。强烈建议在执行此脚本之前重新启动一次。
* `compile_linux_kernel.sh` :用户可以从 kernel.org 编译和安装所需的或最新的内核
* `update_ubuntu_kernel.sh` :用户可以从 kernel.ubuntu.com 安装并更新所需或最新的预编译 Ubuntu 内核
* `remove_old_kernels.sh` :这将删除或清除所有非活动内核,并且只保留当前加载的版本
kernel.org 有固定的发布周期(每三个月一次),发布的内核包括了新的功能,改进了硬件和系统性能。由于它具有标准的发布周期,除了滚动发布的版本(如 Arch Linux,openSUSE Tumbleweed 等),大多数发行版都不提供最新的内核。
### 如何安装 Linux Kernel Utilities (LKU)
正如我们在文章的开头所说的,它的 shell 脚本集只是克隆开发人员的 github 仓库并运行相应的 shell 文件来执行这个过程。
```
$ git clone https://github.com/mtompkins/linux-kernel-utilities.git && cd linux-kernel-utilities
```
### 安装指定版本内核
为了测试的目的,我们将安装 Linux v4.4.10-xenial 内核。在安装新内核之前,我们需要通过 `uanme -a` 命令检查当前安装的内核版本,以便我们可以检查新内核是否可以安装。
```
$ uname -a
Linux magi-VirtualBox 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
根据上面的输出,我们的系统使用的是 4.4.0-21 通用内核。
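脚本随后要做的,本质上就是把候选内核的版本号与 `uname -r` 输出的当前版本做比较。下面是一个示意性的 Python 草图;LKU 实际是用 shell 实现的,此处的函数名为演示假设:

```
import re

def kernel_version_key(release: str):
    """把 '4.4.0-21-generic' 或 '4.4.10-040410-generic' 这样的版本字符串
    解析成可排序的 (major, minor, patch) 元组。"""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if not m:
        raise ValueError(f"unrecognized release: {release!r}")
    major, minor, patch = m.groups()
    return int(major), int(minor), int(patch or 0)

def is_newer(candidate: str, running: str) -> bool:
    """判断候选内核是否比当前运行的内核更新。"""
    return kernel_version_key(candidate) > kernel_version_key(running)

if __name__ == "__main__":
    print(is_newer("4.4.10-040410-generic", "4.4.0-21-generic"))
    print(is_newer("4.11.3-041103-generic", "4.4.10-040410-generic"))
```

在真实系统上,`running` 可以取 `subprocess.check_output(["uname", "-r"])` 的输出。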
只需运行 `update_ubuntu_kernel.sh` shell 脚本。第一次运行脚本时会检查是否满足所有的依赖关系,然后自动安装缺少的依赖项。它会检测系统使用的发行版,并检索 kernel.ubuntu.com 中可用的预编译内核。现在,从列表中选择你需要的内核并输入序号,然后按回车键,它将下载内核映像(linux-headers-4.4.10,linux-headers-4.4.10-xxx-generic 和 linux-image-4.4.10-xxx-generic)。
一旦内核镜像被下载,它将要求输入 `sudo` 密码来启动新内核的安装。
```
$ ./update_ubuntu_kernel.sh
[+] Checking Distro
\_ Distro identified as LinuxMint.
[+] Checking Dependencies
curl Found
dkms Found
git Found
sudo Found
wget Found
whiptail Found
lynx Not Found
-- Installing Dependencies --
[!] The first time this script is run missing dependencies will be installed.
For compiling a kernel this may take a bit of time. Feedback will be provided.
[+] Dependencies
\_Elevating permissions as necessary . . .
[%] Elevated
[+] Testing for previous held packages and trying to correct any found.
\_Passed
[+] Updating package cache . . .
\_Complete
[+] Installing dependencies . . .
\_Complete
curl Found
dkms Found
git Found
sudo Found
wget Found
whiptail Found
lynx Found
[+] Changing to temporary directory to work in . . .
\_ Temporary directory access granted: /tmp/tmp.97eHDsmg2K
[+] Removing any conflicting remnants . . .
\_ Done
[+] Retrieving available kernel choices . . .
\_ Precompiled kernels available from kernel.ubuntu.com:
1) Linux v4.11 2) Linux v4.11.3 3) Linux v4.11.2 4) Linux v4.11.1
5) Linux v4.10 6) Linux v4.10.17 7) Linux v4.10.16 8) Linux v4.10.15
9) Linux v4.10.14 10) Linux v4.10.13 11) Linux v4.10.12
[ 节略 ……]
249) Linux v4.0.3-wily 250) Linux v4.0.2-wily 251) Linux v4.0.1-wily 252) Linux v4.0-vivid
Select your desired kernel: 158
Do you want the lowlatency kernel? (y/[n]):
[+] Processing selection
\_ Determining CPU type: amd64
\_ Locating source of v4.4.10-xenial generic kernel packages.
\_ Done
[+] Checking AntiVirus flag and disabling if necessary
[+] Installing kernel . . .
[sudo] password for magi:
Selecting previously unselected package linux-headers-4.4.10-040410.
(Reading database ... 230647 files and directories currently installed.)
Preparing to unpack linux-headers-4.4.10-040410_4.4.10-040410.201605110631_all.deb ...
Unpacking linux-headers-4.4.10-040410 (4.4.10-040410.201605110631) ...
Selecting previously unselected package linux-headers-4.4.10-040410-generic.
Preparing to unpack linux-headers-4.4.10-040410-generic_4.4.10-040410.201605110631_amd64.deb ...
Unpacking linux-headers-4.4.10-040410-generic (4.4.10-040410.201605110631) ...
Selecting previously unselected package linux-image-4.4.10-040410-generic.
Preparing to unpack linux-image-4.4.10-040410-generic_4.4.10-040410.201605110631_amd64.deb ...
Done.
Unpacking linux-image-4.4.10-040410-generic (4.4.10-040410.201605110631) ...
Setting up linux-headers-4.4.10-040410 (4.4.10-040410.201605110631) ...
Setting up linux-headers-4.4.10-040410-generic (4.4.10-040410.201605110631) ...
Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 4.4.10-040410-generic /boot/vmlinuz-4.4.10-040410-generic
Setting up linux-image-4.4.10-040410-generic (4.4.10-040410.201605110631) ...
Running depmod.
update-initramfs: deferring update (hook will be called later)
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 4.4.10-040410-generic /boot/vmlinuz-4.4.10-040410-generic
run-parts: executing /etc/kernel/postinst.d/dkms 4.4.10-040410-generic /boot/vmlinuz-4.4.10-040410-generic
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 4.4.10-040410-generic /boot/vmlinuz-4.4.10-040410-generic
update-initramfs: Generating /boot/initrd.img-4.4.10-040410-generic
Warning: No support for locale: en_IN
run-parts: executing /etc/kernel/postinst.d/pm-utils 4.4.10-040410-generic /boot/vmlinuz-4.4.10-040410-generic
run-parts: executing /etc/kernel/postinst.d/unattended-upgrades 4.4.10-040410-generic /boot/vmlinuz-4.4.10-040410-generic
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 4.4.10-040410-generic /boot/vmlinuz-4.4.10-040410-generic
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.4.10-040410-generic
Found initrd image: /boot/initrd.img-4.4.10-040410-generic
Found linux image: /boot/vmlinuz-4.4.9-040409-lowlatency
Found initrd image: /boot/initrd.img-4.4.9-040409-lowlatency
Found linux image: /boot/vmlinuz-4.4.0-21-generic
Found initrd image: /boot/initrd.img-4.4.0-21-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
\_ Done
```
安装后需要重新启动以使用新安装的内核。
```
$ sudo reboot now
```
现在,你正在使用的就是新安装的 4.4.10-040410-generic 版本内核。
```
$ uname -a
Linux magi-VirtualBox 4.4.10-040410-generic #201605110631 SMP Wed May 11 10:33:23 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
### 安装最新版本内核
过程与上述相同,它将自动安装最新版本的内核。
```
$ ./update_ubuntu_kernel.sh --latest
[+] Checking Distro
\_ Distro identified as LinuxMint.
[+] Checking Dependencies
curl Found
dkms Found
git Found
sudo Found
wget Found
whiptail Found
lynx Found
[+] Changing to temporary directory to work in . . .
\_ Temporary directory access granted: /tmp/tmp.pLPYmCze6S
[+] Removing any conflicting remnants . . .
\_ Done
[+] Retrieving available kernel choices . . .
\_ Precompiled kernels available from kernel.ubuntu.com:
.
.
.
.
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.11.3-041103-generic
Found initrd image: /boot/initrd.img-4.11.3-041103-generic
Found linux image: /boot/vmlinuz-4.4.10-040410-generic
Found initrd image: /boot/initrd.img-4.4.10-040410-generic
Found linux image: /boot/vmlinuz-4.4.9-040409-lowlatency
Found initrd image: /boot/initrd.img-4.4.9-040409-lowlatency
Found linux image: /boot/vmlinuz-4.4.0-21-generic
Found initrd image: /boot/initrd.img-4.4.0-21-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
\_ Done
```
安装后需要重新启动以使用新安装的内核。
```
$ sudo reboot now
```
现在,你正在使用的就是最新版本 4.11.3-041103-generic 的内核。
```
$ uname -a
Linux magi-VirtualBox 4.11.3-041103-generic #201705251233 SMP Thu May 25 16:34:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```
### 删除或清除旧内核
只需要运行 `remove_old_kernels.sh` shell 脚本即可删除或清除所有非活动状态的内核。
```
$ ./remove_old_kernels.sh
++++++++++++++++++++++++++++++++
+++ W A R N I N G +++
++++++++++++++++++++++++++++++++
A reboot is recommended before running this script to ensure the current kernel tagged
as the boot kernel is indeed registered and old kernels properly marked for removal.
If you have just installed or modified your existing kernel and do not reboot before
running this script it may render you system INOPERABLE and that would indeed suck.
You have been warned.
~the Mgmt
[?]Continue to automagically remove ALL old kernels? (y/N)y
\_ Removing ALL old kernels . . .
[sudo] password for magi:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
linux-headers-4.4.0-21* linux-headers-4.4.0-21-generic* linux-headers-4.4.10-040410*
linux-headers-4.4.10-040410-generic* linux-headers-4.4.9-040409* linux-headers-4.4.9-040409-lowlatency*
linux-image-4.4.0-21-generic* linux-image-4.4.10-040410-generic* linux-image-4.4.9-040409-lowlatency*
linux-image-extra-4.4.0-21-generic* linux-kernel-generic*
0 upgraded, 0 newly installed, 11 to remove and 547 not upgraded.
After this operation, 864 MB disk space will be freed.
(Reading database ... 296860 files and directories currently installed.)
Removing linux-kernel-generic (4.4.0-21) ...
Removing linux-headers-4.4.0-21-generic (4.4.0-21.37) ...
Removing linux-headers-4.4.0-21 (4.4.0-21.37) ...
Removing linux-headers-4.4.10-040410-generic (4.4.10-040410.201605110631) ...
Removing linux-headers-4.4.10-040410 (4.4.10-040410.201605110631) ...
Removing linux-headers-4.4.9-040409-lowlatency (4.4.9-040409.201605041832) ...
Removing linux-headers-4.4.9-040409 (4.4.9-040409.201605041832) ...
Removing linux-image-extra-4.4.0-21-generic (4.4.0-21.37) ...
.
.
.
done
Purging configuration files for linux-image-4.4.9-040409-lowlatency (4.4.9-040409.201605041832) ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.4.9-040409-lowlatency /boot/vmlinuz-4.4.9-040409-lowlatency
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 4.4.9-040409-lowlatency /boot/vmlinuz-4.4.9-040409-lowlatency
```
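上面的 `remove_old_kernels.sh` 所做的事情,核心思路就是“从已安装的内核包中过滤出非当前内核,再交给包管理器清除”。下面是一个假设性的最小示意(示例中的 `dpkg -l` 输出是写死的演示数据,`current` 在真实场景下应取自 `uname -r`,并非脚本原文):

```shell
#!/bin/sh
# 演示:从一段(写死的)dpkg -l 示例输出中,
# 过滤出不属于当前运行内核的 linux-image 包名。
current="4.11.3-041103-generic"
pkgs='ii  linux-image-4.4.0-21-generic      4.4.0-21.37
ii  linux-image-4.4.10-040410-generic 4.4.10-040410.201605110631
ii  linux-image-4.11.3-041103-generic 4.11.3-041103.201705251233'

# 取每行第二列(包名),再排除当前正在运行的内核
old_kernels=$(printf '%s\n' "$pkgs" | awk '/^ii/{print $2}' | grep -v "$current")
printf '%s\n' "$old_kernels"
```

真实清理时,这份列表随后会交给类似 `sudo apt-get purge` 的命令处理——这也是脚本反复提醒在运行前先重启的原因:要确保当前内核已被正确标记,避免误删。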
---
via: <http://www.2daygeek.com/lku-linux-kernel-utilities-compile-install-update-latest-kernel-in-linux-mint-ubuntu/>
作者:[2DAYGEEK](http://www.2daygeek.com/author/2daygeek/) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,740 | 使用开源代码构建机器人时需要考虑的事项 | https://insights.ubuntu.com/2017/07/18/things-to-consider-when-building-a-robot-with-open-source/ | 2017-07-31T10:04:00 | [
"机器人",
"ROS",
"IoT"
] | https://linux.cn/article-8740-1.html | 
或许你正在考虑(或正在着手)将一个使用开源软件构建的机器人推向市场。这个机器人是基于 Linux 构建的。也许你正在使用[机器人操作系统](http://www.ros.org/)(ROS)或[任务导向操作套件](http://www.robots.ox.ac.uk/%7Emobile/MOOS/wiki/pmwiki.php/Main/HomePage)(MOOS),或者是另外一个可以帮助你简化开发过程的开源中间件。当开发接近实用化,对回报的期望开始给你带来压力。你可能会被问到“我们的产品什么时候可以开始销售?”,这时你将面临重要的抉择。
你可以做下面两件事之一:
1. 对现有的产品开始出货
2. 回过头去,把产品化当做一个全新的问题来解决,并处理新的问题
不需要看很远,就可以找到采用方式(1)的例子。事实上,在物联网设备市场上,到处都是这样的设备。由于急于将设备推向市场,带有硬编码凭证、开发密钥、各种安全漏洞且没有更新途径的产品并不少见。
想想 Mirai 僵尸网络,通过该僵尸网络发起的流量超过 1Tbps 的分布式拒绝服务攻击(DDoS),导致互联网上一些最大的网站停止服务。这个僵尸网络主要由物联网设备组成。攻破这些设备的防御、将其纳入僵尸网络的手段,是在某个没有窗户的实验室(或地下基地)里用超级酷的黑魔法开发出来的吗?不是,不过是默认(且通常是硬编码)的凭证而已。这些设备的制造商是否迅速反应并为所有这些设备发布更新,以确保设备的安全?不,很多制造商根本没有更新手段。[他们召回设备而不是发布更新](https://krebsonsecurity.com/2016/10/iot-device-maker-vows-product-recall-legal-action-against-western-accusers/)。
不要急于将产品推向市场,而是退后一步。只要多思考几点,就可以使你自己和你所在公司避免痛苦。
例如,**你的软件如何更新?**你*必须*能回答这个问题。你的软件不是完美的。只要几个星期你就会发现,当你在加利福尼亚使用自主的高机动性多用途轮式车辆(HMMWV)时,它把小灌木识别为一棵橡树。或者你不小心在软件中包含了你的 SSH 密钥。
**基础操作系统如何更新?**也许这仍然是你的产品的一部分,也是你回答上一个问题的答案。但也许你使用的操作系统来自于另外一个供应商。你如何从供应商那里得到更新并提供给客户?这就是安全漏洞真正让你头痛的地方:从来不更新的内核,或者严重过时的 openssl。
当你解决了更新问题,**在更新过程出现问题时,机器人怎么恢复?**我的示例是对前面问题的一个常见解决方案:自动安全更新。对于服务器和台式机以及显然是计算机的东西来说,这是一个很好的做法,因为大多数人意识到有一个可接受的方法来关闭它,而*不是*按住电源按钮 5 秒钟。机器人系统(以及大多数物联网系统)有一个问题,有时它们根本不被认为是计算机。如果您的机器人表现奇怪,有可能会被强制关闭。如果你的机器人行为奇怪是因为它正在快速安装一个内核更新,那么,现在你就有一个安装了半个内核的机器人镇纸了。你需要能够处理这种情况。
最后,**你的工厂流程是什么?**你如何安装 Linux,ROS(或者你使用的中间件),以及你要安装在设备上的你自己的东西?小的工厂可能会手工操作,但这种方法不成规模,也容易出错。其他厂商可能会制作一个定制化的初始发行版 ISO,但这是个不小的任务,在更新软件时也不容易维护。还有一些厂商会使用 Chef 或者有陡峭学习曲线的自动化工具,不久你就会意识到,你把大量的工程精力投入到了本来应该很简单的工作中。
所有这些问题都很重要。针对这些问题,如果你发现自己没有任何明确的答案,你应该加入我们的[网络研讨会](https://www.brighttalk.com/webcast/6793/268763?utm_source=insights),在研讨会上我们讨论如何使用开放源代码构建一个商业化机器人。我们会帮助你思考这些问题,并可以回答你更多问题。
---
作者简介:
Kyle 是 Snapcraft 团队的一员,也是 Canonical 公司的常驻机器人专家,他专注于 snaps 和 snap 开发实践,以及 snaps 和 Ubuntu Core 的机器人技术实现。
---
via: <https://insights.ubuntu.com/2017/07/18/things-to-consider-when-building-a-robot-with-open-source/>
作者:[Kyle Fazzari](https://insights.ubuntu.com/author/kyrofa/) 译者:[SunWave](https://github.com/SunWave) 校对:[wxy](https://github.com/wxy)
本文由 LCTT 原创编译,Linux中国 荣誉推出
| 301 | null |
|
8,742 | NoSQL: 如何在 Ubuntu 16.04 上安装 OrientDB | https://www.unixmen.com/nosql-install-orientdb-ubuntu-16-04/ | 2017-08-01T10:19:00 | [
"NoSQL",
"OrientDB"
] | https://linux.cn/article-8742-1.html | 
### 说明 - 非关系型数据库(NoSQL)和 OrientDB
通常在我们提及数据库的时候,想到的是两个主要的分类:一类是关系型数据库管理系统(**R**elational **D**ata**b**ase **M**anagement **S**ystem,缩写 RDBMS),它使用一种被称为结构化查询语言(**S**tructured **Q**uery **L**anguage,缩写 SQL)的语言作为用户和应用程序之间的接口;另一类是非关系型数据库管理系统(non-relational database management systems,或称 NoSQL 数据库)。
这两种模型在如何处理(存储)数据的方面存在着巨大的差异。
#### 关系数据库管理系统
在关系模型中(如 MySQL,或者其分支 MariaDB),一个数据库是一个表的集合,其中每个表包含一个或多个以列组织的数据分类。数据库的每行包含一个唯一的数据实例,其分类由列定义。
举个例子,想象一个包含客户的表。每一行相当于一个客户,而其中的每一列分别对应名字、地址以及其他所必须的信息。
而另一个表可能是包含订单、产品、客户、日期以及其它的种种。而这个数据库的使用者则可以获得一个满足其需要的视图,例如一个客户在一个特定的价格范围购买产品的报告。
#### 非关系型数据库管理系统
在非关系型数据库(或称为<ruby> 不仅仅是数据库 <rt> Not only SQL </rt></ruby>)管理系统中,数据库被设计为使用不同的方式存储数据,比如文档存储、键值对存储、图形关系存储以及其他方式存储。使用此种形式实现的数据库系统专门被用于大型数据库集群和大型 Web 应用。现今,非关系型数据库被用于某些大公司,如谷歌和亚马逊。
##### 文档存储数据库
文档存储数据库是将数据用文档的形式存储。这种类型的运用通常表现为 JavaScript 和 JSON,实际上,XML 和其他形式的存储也是可以被采用的。这里的一个例子就是 MongoDB。
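举个例子,前文提到的那条“客户”记录,在文档存储数据库中可能会存成这样一个 JSON 文档(字段名仅为示意):

```
{
  "name": "张三",
  "address": "某市某路 1 号",
  "orders": [
    { "product": "键盘", "date": "2017-07-30" }
  ]
}
```

与关系模型把客户和订单拆在两张表里不同,文档模型允许把相关数据直接嵌套在同一个文档中。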
##### 键值对存储数据库
这是一个由唯一的<ruby> 键 <rt> key </rt></ruby>配对一个<ruby> 值 <rt> value </rt></ruby>的简单模型。这个系统在高速缓存方面具有高性能和高度可扩展性。这里的例子包括 BerkeleyDB 和 MemcacheDB。
##### 图形关系数据库
正如其名,这种数据库通过使用<ruby> 图 <rt> graph </rt></ruby>模型存储数据,这意味着数据通过节点和节点之间的互连进行组织。这是一个可以随着时间的推移和使用而发展的灵活模型。这个系统应用于那些强调映射关系的地方。这里的例子有 IBM Graphs、Neo4j 以及 **OrientDB**。
### OrientDB
[OrientDB](https://orientdb.com/) 是一个多模式的非关系型数据库管理系统。正如开发它的公司所说的“*它是一个将图形关系与文档、键值对、反应性、面向对象和地理空间模型结合在一起的**可扩展的、高性能的数据库***”。
OrientDB 还支持 SQL,并经过扩展可以用来操作树和图。
### 目标
这个教程旨在教会大家如何在运行 Ubuntu 16.04 的服务器上下载并配置 OrientDB 社区版。
### 下载 OrientDB
在一台保持更新的服务器上,我们可以通过输入下面的指令来下载最新版本的 OrientDB:
```
$ wget -O orientdb-community-2.2.22.tar.gz "http://orientdb.com/download.php?file=orientdb-community-2.2.22.tar.gz&os=linux"
```
这里下载的是一个包含预编译二进制文件的压缩包,所以我们可以使用 `tar` 指令来解压它:
```
$ tar -zxf orientdb-community-2.2.22.tar.gz
```
将从中提取出来的文件夹整体移动到 `/opt`:
```
# mv orientdb-community-2.2.22 /opt/orientdb
```
### 启动 OrientDB 服务器
启动 OrientDB 服务器需要运行 `orientdb/bin/` 目录下的 shell 脚本:
```
# /opt/orientdb/bin/server.sh
```
如果你是第一次开启 OrientDB 服务器,安装程序还会显示一些提示信息,以及提醒你设置 OrientDB 的 root 用户密码:
```
+---------------------------------------------------------------+
| WARNING: FIRST RUN CONFIGURATION |
+---------------------------------------------------------------+
| This is the first time the server is running. Please type a |
| password of your choice for the 'root' user or leave it blank |
| to auto-generate it. |
| |
| To avoid this message set the environment variable or JVM |
| setting ORIENTDB_ROOT_PASSWORD to the root password to use. |
+---------------------------------------------------------------+
Root password [BLANK=auto generate it]: ********
Please confirm the root password: ********
```
在完成这些后,OrientDB 数据库服务器将成功启动:
```
INFO OrientDB Server is active v2.2.22 (build fb2b7d321ea8a5a5b18a82237049804aace9e3de). [OServer]
```
从现在开始,我们需要用第二个终端来与 OrientDB 服务器进行交互。
若要停止 OrientDB,按下 `Ctrl+C` 即可。
### 配置守护进程
此时,OrientDB 还只是一堆 shell 脚本。我们用编辑器打开 `/opt/orientdb/bin/orientdb.sh`:
```
# $EDITOR /opt/orientdb/bin/orientdb.sh
```
在它的首段,我们可以看到:
```
#!/bin/sh
# OrientDB service script
#
# Copyright (c) OrientDB LTD (http://orientdb.com/)
# chkconfig: 2345 20 80
# description: OrientDb init script
# processname: orientdb.sh
# You have to SET the OrientDB installation directory here
ORIENTDB_DIR="YOUR_ORIENTDB_INSTALLATION_PATH"
ORIENTDB_USER="USER_YOU_WANT_ORIENTDB_RUN_WITH"
```
我们需要配置 `ORIENTDB_DIR` 以及 `ORIENTDB_USER`。
然后创建一个用户,例如我们创建一个名为 `orientdb` 的用户,我们需要输入下面的指令:
```
# useradd -r orientdb -s /sbin/nologin
```
`orientdb` 就是我们在 `ORIENTDB_USER` 处输入的用户。
再更改 `/opt/orientdb` 目录的所有权:
```
# chown -R orientdb:orientdb /opt/orientdb
```
改变服务器配置文件的权限:
```
# chmod 640 /opt/orientdb/config/orientdb-server-config.xml
```
#### 安装 systemd 服务
OrientDB 的压缩包包含一个服务文件 `/opt/orientdb/bin/orientdb.service`。我们将其复制到 `/etc/systemd/system` 文件夹下:
```
# cp /opt/orientdb/bin/orientdb.service /etc/systemd/system
```
编辑该服务文件:
```
# $EDITOR /etc/systemd/system/orientdb.service
```
其中 `[Service]` 小节看起来应该是这样的:
```
[Service]
User=ORIENTDB_USER
Group=ORIENTDB_GROUP
ExecStart=$ORIENTDB_HOME/bin/server.sh
```
将其改成如下样式:
```
[Service]
User=orientdb
Group=orientdb
ExecStart=/opt/orientdb/bin/server.sh
```
保存并退出。
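作为参考,一份补全了 `[Unit]` 和 `[Install]` 小节的完整单元文件大致如下(仅为示意:`Description`、`After`、`WantedBy` 等字段是这里假设性补充的,实际内容以压缩包自带的 `orientdb.service` 为准):

```
[Unit]
Description=OrientDB Server
After=network.target

[Service]
User=orientdb
Group=orientdb
ExecStart=/opt/orientdb/bin/server.sh

[Install]
WantedBy=multi-user.target
```

其中 `[Install]` 小节决定了执行 `systemctl enable` 时服务挂靠到哪个目标;若单元文件缺少该小节,开机自启设置将无法生效。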
重新加载系统守护进程:
```
# systemctl daemon-reload
```
启动 OrientDB 并使其开机自启动:
```
# systemctl start orientdb
# systemctl enable orientdb
```
确认 OrientDB 的状态:
```
# systemctl status orientdb
```
上述指令应该会输出:
```
● orientdb.service - OrientDB Server
Loaded: loaded (/etc/systemd/system/orientdb.service; disabled; vendor preset: enabled)
Active: active (running) ...
```
流程就是这样了!OrientDB 社区版成功安装并且正确运行在我们的服务器上了。
### 总结
在这篇指南中,我们简单对比了关系型数据库管理系统(RDBMS)与非关系型数据库管理系统(NoSQL DBMS),并安装了 OrientDB 社区版的服务器端,完成了基础配置。
这是我们部署完整的 OrientDB 基础设施、进而管理大规模系统数据的第一步。
---
via: <https://www.unixmen.com/nosql-install-orientdb-ubuntu-16-04/>
作者:[Giuseppe Molica](https://www.unixmen.com/author/tutan/) 译者:[a972667237](https://github.com/a972667237) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | ### Introduction – NoSQL and OrientDB
When talking about databases, in general, we refer to two major families: RDBMS (**R**elational **D**ata**b**ase **M**anagement **S**ystem), which use as user and application program interface a language named **S**tructured **Q**uery **L**anguage (or SQL) and non-relational database management systems, or **NoSQL** databases.
Between the two models there is a huge difference in the way they consider (and store) data.
#### Relational Database Management Systems
In the relational model (like MySQL, or its fork, MariaDB), a database is a set of tables, each containing one or more data categories organized in columns. Each row of the DB contains a unique instance of data for categories defined by columns.
Just as an example, consider a table containing customers. Each row correspond to a customer, with columns for name, address, and every required information.
Another table could contain an order, with product, customer, date and everything else. A user of this DB can obtain a view that fits its needs, for example a report about customers that bought products in a specific range of prices.
#### NoSQL Database Management Systems
In the NoSQL (or Not only SQL) database management systems, databases are designed implementing different “formats” for data, like a document, key-value, graph and others. The database systems realized with this paradigm are built especially for large-scale database clusters, and huge web applications. Today, NoSQL databases are used by major companies like Google and Amazon.
##### Document databases
Document databases store data in document format. The usage of this kind of DBs is usually raised with JavaScript and JSON, however, XML and other formats are accepted. An example is MongoDB.
##### Key-value databases
This is a simple model pairing a unique key with a value. These systems are performant and highly scalable for caching. Examples include BerkeleyDB and MemcacheDB.
##### Graph databases
As the name predicts, these databases store data using graph models, meaning that data is organized as nodes and interconnections between them. This is a flexible model which can evolve over time and use. These systems are applied where there is the necessity of mapping relationships.
Examples are IBM Graphs and Neo4j **and OrientDB**.
### OrientDB
[OrientDB](https://orientdb.com/), as stated by the company behind it, is a multi-model NoSQL Database Management System that “*combines the power of graphs with documents, key/value, reactive, object-oriented and geospatial models into one scalable, high-performance operational database“.*
OrientDB has also support for SQL, with extensions to manipulate trees and graphs.
### Goals
This tutorial explains how to install and configure OrientDB Community on a server running Ubuntu 16.04.
### Download OrientDB
On an up to date server, download the latest version of OrientDB by executing the following command:
$ wget -O orientdb-community-2.2.22.tar.gz http://orientdb.com/download.php?file=orientdb-community-2.2.22.tar.gz&os=linux
This is a tarball containing pre-compiled binaries, so extract the archive with `tar`:
$ tar -zxf orientdb-community-2.2.22.tar.gz
Move the extracted directory into `/opt`:
# mv orientdb-community-2.2.22 /opt/orientdb
### Start OrientDB Server
Starting the OrientDB server requires the execution of the shell script contained in `/opt/orientdb/bin/`:
# /opt/orientdb/bin/server.sh
During the first start, this installer will display some information and will ask for an OrientDB root password:
+---------------------------------------------------------------+
| WARNING: FIRST RUN CONFIGURATION |
+---------------------------------------------------------------+
| This is the first time the server is running. Please type a |
| password of your choice for the 'root' user or leave it blank |
| to auto-generate it. |
| |
| To avoid this message set the environment variable or JVM |
| setting ORIENTDB_ROOT_PASSWORD to the root password to use. |
+---------------------------------------------------------------+
Root password [BLANK=auto generate it]: ********
Please confirm the root password: ********
After that, the OrientDB server will start:
INFO OrientDB Server is active v2.2.22 (build fb2b7d321ea8a5a5b18a82237049804aace9e3de). [OServer]
From now on, we will need a second terminal to interact with the OrientDB server.
Stop OrientDB by hitting `Ctrl+C`.
### Configure a Daemon
At this point, OrientDB is just a bunch of shell scripts. With a text editor, open `/opt/orientdb/bin/orientdb.sh`:
# $EDITOR /opt/orientdb/bin/orientdb.sh
In the first lines, we will see:
#!/bin/sh
# OrientDB service script
#
# Copyright (c) OrientDB LTD (http://orientdb.com/)
# chkconfig: 2345 20 80
# description: OrientDb init script
# processname: orientdb.sh

# You have to SET the OrientDB installation directory here
ORIENTDB_DIR="YOUR_ORIENTDB_INSTALLATION_PATH"
ORIENTDB_USER="USER_YOU_WANT_ORIENTDB_RUN_WITH"
Configure `ORIENTDB_DIR` and `ORIENTDB_USER`.
Create a user, for example **orientdb**, by executing the following command:
# useradd -r orientdb -s /sbin/nologin
**orientdb** is the user we enter in the `ORIENTDB_USER` line.
Change the ownership of `/opt/orientdb`:
# chown -R orientdb:orientdb /opt/orientdb
Change the configuration server file’s permission:
# chmod 640 /opt/orientdb/config/orientdb-server-config.xml
#### Install the Systemd Service
OrientDB tarball contains a service file, `/opt/orientdb/bin/orientdb.service`. Copy it to the `/etc/systemd/system` directory:
# cp /opt/orientdb/bin/orientdb.service /etc/systemd/system
Edit the OrientDB service file:
# $EDITOR /etc/systemd/system/orientdb.service
There, the `[Service]` block should look like this:
[Service]
User=ORIENTDB_USER
Group=ORIENTDB_GROUP
ExecStart=$ORIENTDB_HOME/bin/server.sh
Edit as follow:
[Service]
User=orientdb
Group=orientdb
ExecStart=/opt/orientdb/bin/server.sh
Save and exit.
Reload systemd daemon service:
# systemctl daemon-reload
Start OrientDB and enable for starting at boot time:
# systemctl start orientdb
# systemctl enable orientdb
Check OrientDB status:
# systemctl status orientdb
The command should output:
● orientdb.service - OrientDB Server
Loaded: loaded (/etc/systemd/system/orientdb.service; disabled; vendor preset: enabled)
Active: active (running) ...
And that’s all! OrientDB Community is installed and correctly running.
### Conclusion
In this tutorial we have seen a brief comparison between RDBMS and NoSQL DBMS. We have also installed and completed a basic configuration of OrientDB Community server-side.
This is the first step for deploying a full OrientDB infrastructure, ready for managing large-scale systems data. |
8,743 | 我选择 dwm 作为窗口管理器的 4 大理由 | https://opensource.com/article/17/7/top-4-reasons-i-use-dwm-linux-window-manager | 2017-08-01T13:46:00 | [
"窗口管理器",
"dwm"
] | https://linux.cn/article-8743-1.html |
>
> <ruby> 窗口管理器 <rt> window manager </rt></ruby>负责管理打开窗口的大小、布置以及其它相关的方面。
>
>
>

我喜欢极简。如果可能,我会尽量在一个终端下运行所有需要的程序。这避免了一些浮夸的特效占用我的资源或者分散我的注意力。而且,无论怎么调整窗口大小和位置却依旧无法使它们完美地对齐,这也让我感到厌烦。
出于对极简化的追求,我喜欢上了 [Xfce](https://xfce.org/) 并且把它作为我主要的 Linux [桌面环境](https://en.wikipedia.org/wiki/Desktop_environment)好几年了。直到后来我看了 [Bryan Lunduke](http://lunduke.com/) 关于他所使用的名为 [Awesome](https://awesomewm.org/) 的[窗口管理器](https://en.wikipedia.org/wiki/Window_manager)的视频。Awesome 为用户整齐地布置好他们的窗口,看起来就是我想要的效果。但在我尝试之后却发现我难以把它配置成我喜欢的样子。于是我继续搜寻,发现了 [xmonad](http://xmonad.org/),然而我遇到了同样的问题。[xmonad](http://xmonad.org/) 可以良好运作但为了把它配置成我理想中的样子我却不得不先通过 Haskell 语言这关。(LCTT 译注: AwesomeWM 使用 lua 语言作为配置语言,而 xmonad 使用 Haskell 语言)
几年后,我无意间发现了 [suckless.org](http://suckless.org/) 和他们的窗口管理器 [dwm](http://dwm.suckless.org/)。
简而言之,诸如 KDE、Gnome 或 Xfce 这样的桌面环境包括了许多部件,窗口管理器只是其中之一,此外还有一些选定的应用程序。窗口管理器负责管理打开窗口的大小、布置(以及其它窗口相关的方面)。不同的桌面环境使用不同的窗口管理器:KDE 使用 KWin,Gnome 2 使用 Metacity,Gnome 3 使用 Mutter,而 Xfce 使用 Xfwm。当然,你可以方便地替换这些桌面环境的默认窗口管理器。我已经把我的窗口管理器替换成 dwm,下面我说说我喜欢 dwm 的理由。
### 动态窗口管理
与 Awesome 及 xmonad 一样,dwm 的杀手锏是它能利用屏幕的所有空间为你自动排布好窗口。当然,在现在的大多数桌面环境中,你也可以设置相应的快捷键把你的窗口放置在屏幕的上下左右或者是全屏,但是有了 dwm 我们就不需要考虑这么多了。
dwm 把屏幕分为主区域和栈区域。它包含三种布局:平铺,单片镜(monocle)和浮动。平铺模式是我最常使用的,它把一个主要的窗口放置在主区域来获取最大关注力,而将其余窗口平铺在栈区域中。在单片镜模式中,所有窗口都会被最大化,你可以在它们之间相互切换。浮动模式允许你自由调整窗口大小(就像在大多数窗口管理器下那样),这在你使用像 Gimp 这类需要自定义窗口大小的应用时更为方便。
一般情况下,在你的桌面环境下你可以使用不同的工作空间(workspace)来分类你的窗口,把相近的应用程序放置在计划好的工作空间中。在工作时,我会使用一个工作空间来进行工作,同时使用另一个工作空间来浏览网页。dwm 有一个相似的功能叫标签。你可以使用标签给窗口分组,当你选中一个标签时,就能显示具有相应标签的窗口。
### 高效
dwm 能让你的计算机尽量地节省电量。Xfce 和其它轻量桌面环境在较旧或者较低性能的机器上很受欢迎,但是相比于 Xfce,dwm 在登录后只使用了它们 1/3 的资源(在我的例子中)。当我在使用一台 1 GB 内存的 Eee PC (LCTT 译注:华硕生产的一款上网本,已停产)时,占用 660 MB 和 230MB 的差别就很大了。这让我有足够的内存空间运行我的编辑器和浏览器。
### 极简
通常,我让我的应用程序彼此相邻:作为主窗口的终端(通常运行着 Vim)、用来查阅邮件的浏览器,和另外一个用来查阅资料或者打开 [Trello](https://opensource.com/node/22546) 的浏览器窗口。对于临时的网页浏览,我会在另一个工作空间或者说是另一个 *标签* 中开启一个 Chromium 窗口。

*来自作者的屏幕截图。*
在标准的桌面环境下,通常会有一或两个面板占据着屏幕上下或者两侧的空间。我尝试过使用自动隐藏功能,但当光标太靠近边缘导致面板弹出造成的不便实在让我很厌烦。你也可以把它们设置得更小,但我还是更喜欢 dwm 的极简状态栏。
### 速度
评判速度时,我比较看重 dwm 在登录后的加载速度和启动程序的速度。如果使用更快、更新的计算机,你可能不会在意这些细节,但是对我来说,不同的桌面环境和窗口管理器会有明显的差距。我实在不想连这种简单的操作也要等待,它们应该一下子就完成。另外,使用键盘快捷键来启动程序比起用鼠标或者触控板要快一些,而且我不想让双手离开键盘。
### 小结
即便如此,我也不会向新手用户推荐 dwm。研究如何配置它需要耗费一些时间(除非你对你的发行版提供的默认配置感到满意)。我发现要让一些你想要的补丁正常工作可能会有点棘手,而且相应的社区也比较小(其 IRC 频道明确表示不提供补丁的手把手帮助)。所以,为了得到你想要的效果,你得有些付出才行。不过,这也是值得的。
而且你看,它就像 Awesome 一样 awesome。
(题图:cinderwick.ca)
---
作者简介:
Jimmy Sjölund - 他是 Telia Company 的高级 IT 服务经理,关注团队开发、探索敏捷工作流和精益工作流的创新导师,以及可视化方向爱好者。他同时也是一名开源布道者,先前从事于 Ubuntu Studio 和 Plume Creator。
---
via: <https://opensource.com/article/17/7/top-4-reasons-i-use-dwm-linux-window-manager>
作者:[Jimmy Sjölund](https://opensource.com/users/jimmysjolund) 译者:[haoqixu](https://github.com/haoqixu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | I like minimalistic views. If I could run everything in a terminal I would. It's free from shiny stuff that hogs my resources and distracts my feeble mind. I also grow tired of resizing and moving windows, never getting them to align perfectly.
On my quest for minimalism, I grew fond of [Xfce](https://xfce.org) and used it as my main [desktop environment](https://en.wikipedia.org/wiki/Desktop_environment) for years on my Linux computers. Then, one day I came across a video of [Bryan Lunduke](http://lunduke.com) talking about the awesome [window manager](https://en.wikipedia.org/wiki/Window_manager) he used called [Awesome](https://awesomewm.org). It neatly arranges all of your windows for you, and so, sounded like just what I wanted. I tried it out but didn't get the hang of the configuration needed to tweak it into my liking. So, I moved on and discovered [xmonad](http://xmonad.org), but I had a similar result. It worked fine but I couldn't get around the Haskell part to really turn it into my perfect desktop.
Years passed and by accident, I found my way to [suckless.org](http://suckless.org) and their version of a window manager called [dwm](http://dwm.suckless.org).
In short, a desktop environment such as KDE, Gnome, or Xfce includes many things, of which a window manager is one, but also with select applications. A window manager alone handles (among other window related things) the sizing and arrangement of the windows you open. Different desktop environments use different window managers. KDE has KWin, Gnome 2 has Metacity, Gnome 3 has Mutter, and Xfce has Xfwm. Conveniently, for all of these, you can change the default window manager to something else, which is what I've been doing for a while. I've been switching mine to dwm, and here's why I love it.
## Dynamic window management
The killer feature for dwm, as with Awesome and xmonad, is the part where the tool automatically arranges the windows for you, filling the entire space of your screen. Sure, for most desktop environments today it's possible to create keyboard shortcuts to arrange windows to the left, right, top, bottom or full screen, but with dwm it's just one less thing to think about.
Dwm divides the screen into a master and a stack area. There are three layouts to choose from: tile, monocle, and floating. When using tile mode, which is what I use the most, it puts the window which requires the most attention in the master area while the others are tiled in the stack area. In the monocle layout, all windows are maximized and you toggle between them. The floating layout allows you resize the windows as you want (as the most window managers do), which is handy if you're using Gimp or a similar application where custom size windows makes more sense.
Usually, in your desktop environment, you can use different workspaces to sort your windows and gather similar applications in designated workspaces. At work, I use one workspace for ongoing work and one for internet browsing. Dwm has a similar function called tags. You can group windows by tags and by selecting a tag, you display all the windows with that tag.
## Efficiency
Dwm is efficient if you want to save as much power as you can on your computer. Xfce and other lightweight environments are great on older or weaker machines, but dwm uses (in my case) about 1/3 of resources compared to Xfce after login. When I was using an eee pc with 1 GB RAM it made quite a difference if the memory was occupied to 660 MB or 230 MB. That left me with more room for the editors and browsers I wanted to use.
## Minimalistic
I typically set up my applications beside each other: the terminal as the master window (usually running Vim as an editor), a browser for email, and another browser window open for research or [Trello](https://opensource.com/node/22546). For casual internet browsing, I fire up a Chromium window in another workspace or a *tag* as I mentioned earlier.

Screenshot by author.
With standard desktop environments, you often have at least one or two panels, top and bottom or on the side, taking up space. I have tried out the autohide panel function that most of them have, but I was annoyed every time I accidently put the mouse pointer too close to the edge and the panel popped out at the most inconvenient time. You can make them smaller as well but I still enjoy the minimalistic status bar on top of the screen available in dwm.
## Speed
When evaluating speed, I count both how quickly dwm is loaded when I log in and how quickly the applications launch when I start them. When using newer, faster computers you might not care about this detail yet, but for me, there is a noticable difference between various desktop environments and window managers. I don't want to actually wait for my computer to perform such easy tasks, it should just pop up. Also, using keyboard shortcuts to launch everything is faster than using a mouse or a touchpad, and my hands don't have to leave the keyboard.
## Conclusion
That said, I would not recommend dwm to the novice user. It takes some time to read up on how to configure it to your liking (unless you are satisfied with the setup provided by your Linux distribution). I have found some of the patches you might want to include can be tricky to get working and the support community is small (the IRC channel states "No patch-handholding"). So, it might take a bit of effort to get exactly what you want. However, once you do, it's well worth that bit investment.
And hey, it looks as awesome as Awesome.
8,745 | 为什么你应该成为一名系统管理员? | https://opensource.com/article/17/7/why-become-sysadmin | 2017-08-02T10:18:00 | [
"系统管理员"
] | https://linux.cn/article-8745-1.html |
>
> 网络和系统管理工作工资高、岗位多。
>
>
>

我们为秩序而战,而服务器大叔则需要你成为系统管理员。
很有可能,你已经在管理自己拥有的一些系统,而且是免费在做——如今的情况就是这样。但还是有一些雇主,愿意拿一笔很不错的薪水找人来管理他们的系统。目前,系统和网络管理领域的失业率几乎为零,而且美国劳工统计局预计到 2024 年该领域将保持 [9% 的增长](https://www.bls.gov/ooh/Computer-and-Information-Technology/Network-and-computer-systems-administrators.htm#tab-1)。
你可能会问,自动化运维怎么样?大概你已经听一些系统管理员说过,他们打算如何来自动化他们所有的工作,或者,他们如何在一个 shell 脚本里自动化完成他们前任者的工作。你听过的有多少成功的?自动化了某个任务,就会有更多的东西需要自动化。
如果你参加过系统管理者会议或观看过他们的视频,你会发现这是一个需要新鲜血液的领域。不仅仅是明显的缺少年轻人,而且相当地性别和种族不平等。虽然有点儿跑题,但是多样性已经被证明可以提高系统管理员非常感兴趣的自我恢复力、解决问题的能力,创新力和决策力。
### 我要成为一名系统管理员吗?
有人要你,但你想做系统管理工作吗?如果你生活在第一世界国家,每年 70,000 美元的收入看上去是一个足以感到幸福的门槛,或者说至少能消除大部分与钱相关的压力。系统和网络管理员的年收入中位数是 80,000 美元,达到这个门槛并不难,当然也有不少人赚得更少或更多。
你曾经免费做过一些系统管理工作吗?我们大多数人都多多少少地管理着一些设备,但那并不代表你喜欢这件事。如果你曾经在家里的网络中添加过几台充当服务器的机器,你就是一个系统管理员的候选人。你是不是以给孩子搭一个“我的世界”游戏服务器为理由,其实主要是想自己享受用树莓派搭服务器的乐趣?也许是时候考虑靠做这类工作来获取报酬了。
作为一份需要全面知识的工作,系统管理工作有很大的跨度。各种行业和各种规模的公司都需要系统管理员。大多数的位于市区里的公司需要一些现场上班的人员,但是远程办公也有很大的可能性。
每个人都是免费地用开源软件工作,但是,作为系统管理员,你也可以计酬工作。系统管理员经常把对开源项目做贡献、支持开源供应商、使用各种各样的有趣而强大的开源软件为他们日常工作的一部分。
最后,谈一下社区。以我多年的经验来看,很难找到一个比这更欢迎新人、鼓励参与、气氛友好、彼此帮助的社区。系统管理员给人的刻板印象是性情乖戾,但我感觉如今这多半只是个过时的笑话。现在大多数系统管理员会积极回应各种形式的建设性反馈,尤其是在欢迎新人这件事上。我们几乎都敏锐地意识到自身在多样性和人群构成方面的问题,并渴望解决它们。我们是一个兴趣既深入又广泛的社区。问一下一个系统管理员在工作之余都干了些什么,你肯定会感到吃惊。
### 那么,我要从哪儿开始?
如果你被说服了,首先有个好消息。与许多技术类职业相比,没有直接相关的四年制学历也能转行做系统管理员。事实上,全世界专门针对系统管理的四年制学位本就寥寥无几,这有助于摆正预期。对于现在的白领工作,学历确实有用,而且越相关越好。与许多其他技术类职业不同,这个领域有一个很自然的入门级职位:技术支持。
对系统管理工作的求职者来说,也有个坏消息,而且它在许多技术类领域(甚至非技术类领域)都同样成立:即便是入门级别的岗位,雇主也想要有多年经验和高学历的人;雇主给的薪酬偏低,也不愿花钱培训你。聊以自慰的是,这些问题在人才市场上非常普遍,所以不管找什么工作你都得面对它们。
系统管理工作可以作为其他 IT 领域的起点。比如:网络管理、数据库管理、网站可靠性工程师(SRE),再远一些的也有,软件工程师、质量保证(QA)、IT 管理。
系统管理工作的具体情况——包括对它的刻板印象,以及一些明显和不那么明显的方面——将是我们后续文章的主题。我们还会谈到这个领域可能发生变化的重要方面,提供评估和比较系统管理职位的方法,以及面试中值得问的一些重要问题。
如果你对加入这个领域有兴趣,可以考虑加入[专业系统管理者联盟](https://lopsa.org/)(LOPSA),我在这个组织任董事会成员。会员费并不高(也可能由你的雇主承担),而学生免费。成为 LOPSA 会员,是结识同行、与来自各种经验水平、行业和工作环境的人讨论职业生涯的绝佳方式。
(题图 : Jason Baker, CC BY-SA 4.0)
---
作者简介:
Paul English - 他是 PreOS 安全公司的首席执行官,也是专业系统管理员联盟的董事会成员,该专业系统管理员联盟是一个非营利性专业协会,从 2015 年至今致力于提升系统管理工作实践。 Paul 在 1998 年获得了伍斯特理工学院的计算机学士学位。自 1996 年以来,Paul 一直是 UNIX 和 Linux 系统管理员和许多场合冠以 IT 之名的那个人。最近他在管理一些系统管理员。
---
via: <https://opensource.com/article/17/7/why-become-sysadmin>
作者:[Paul English](https://opensource.com/users/penglish) 译者:[sugarfillet](https://github.com/sugarfillet) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | We are at war with entropy, and Uncle Server wants YOU to be a system administrator.
Chances are good that you are already an administrator for some systems you own, and you do it for free because that's just how it goes these days. But there are employers willing and eager to pay good money for someone to help administer their systems. We're currently near zero unemployment in system and network administration, and the Bureau of Labor Statistics projects continued [9% growth](https://www.bls.gov/ooh/Computer-and-Information-Technology/Network-and-computer-systems-administrators.htm#tab-1) in the field through 2024.
What about automation, you ask. Perhaps you've heard sysadmins say how they intend to automate away their entire job, or how they automated their predecessor's job in a single shell script. How many have you heard of that succeeding? When the job is automation, there is always more to automate.
If you attend or watch videos of sysadmin conferences, you'll see a field that needs new blood. Not only is there a distinct lack of younger people, but also fairly extreme gender and racial imbalances. While those are topics for a different article, diversity is well proven to improve resilience, problem-solving, innovation, and decision-making—things of great interest to sysadmins.
## Do I want to be a sysadmin?
So you're needed, but do you need system administration? Assuming you live in a first world country, US$ 70,000 annual income seems to be a threshold for happiness—or at least shedding most money-related stress. The median for system and network administrators is US$ 80,000, so hitting that threshold is quite achievable, though obviously there are plenty of people making less (and more).
Is system administration something you've done for free? Most of us manage at least a few devices, but that doesn't mean you enjoy it. If you've added some extra systems acting as servers on your home network, you're a candidate. Did you justify adding a Minecraft server for your kid mostly so that you could have fun setting up a Raspberry Pi server? Maybe it's time to consider getting paid to do that sort of work.
System administration as a generalist job offers a particular kind of flexibility. System administrators are needed across a wide variety of industries and company sizes. Most urban areas have a need for some on-site workers, and remote work is also a strong possibility.
Anyone can work with open source software for free, but as a sysadmin you can also get paid to do it. System administrators often contribute to open source projects, support open source vendors, and work with a wide variety of interesting and powerful open source software as part of their regular job.
Last, what about the community? I can say from years of experience that it is hard to find a more welcoming, encouraging, friendly, and helpful community. Sure, there is the stereotype of the grumpy sysadmin, but I feel that's mostly an outdated joke now. Most sysadmins today respond positively to all forms of constructive feedback, particularly when it comes to welcoming newcomers. We're almost all keenly aware of our diversity and demographic problems and eager to solve them. We are a community of deep and diverse interests. Ask a sysadmin what they do outside of work and you are unlikely to be bored.
## So where do I sign up?
If you're convinced, first the good news. More than many STEM careers, it's possible to transition into system administration without a directly relevant four-year degree. In fact, there are relatively few four-year degrees in system administration in the entire world, which helps set expectations. As with every white-collar job these days, a degree does help, and the more relevant the better. Unlike many other STEM careers, there is a natural entry-level position: the helpdesk.
The bad news for system administration hopefuls is true for many STEM (and even non-STEM) fields: Employers are looking for people with years of experience and advanced degrees, even for explicitly entry-level positions. Employers are underpaying. Employers aren't investing in training. The consolation is that these problems are so prevalent in the job market that you'll be facing them in any job.
System administration can be a starting point for other IT fields, such as network administration, database administration, and site reliability engineering. And just a bit further away are software engineering, quality assurance, and IT management.
Details of the sysadmin job, both the stereotypes and obvious aspects, as well as some less obvious things to consider, are topics for future Opensource.com articles. We'll also cover some important ways in which the field is likely to change and offer ways to evaluate and compare sysadmin positions and ask powerful questions during interviews.
If you're interested in joining the field, consider joining the [League of Professional System Administrators](https://lopsa.org/) (LOPSA), the professional organization on which I serve as a board member. Membership is affordable (and may be covered by your employer) and free for students. Being a LOPSA member is a fantastic way to meet sysadmins and discuss the career with people at various levels of experience and in a variety of industries and work environments.
|
8,746 | 在 Ubuntu 16.04 中使用 Docker Compose | https://www.unixmen.com/container-docker-compose-ubuntu-16-04/ | 2017-08-02T14:05:00 | [
"容器",
"Docker"
] | https://linux.cn/article-8746-1.html | 
### 什么是 Docker Compose
[Docker Compose](https://docs.docker.com/compose/overview/) 是一个运行多容器 Docker 应用的工具。Compose 通过一个配置文件来配置一个应用的服务,然后通过一个命令创建并启动所有在配置文件中指定的服务。
Docker Compose 适用于许多不同的项目,如:
* **开发**:利用 Compose 命令行工具,我们可以创建一个隔离(而可交互)的环境来承载正在开发中的应用程序。通过使用 [Compose 文件](https://docs.docker.com/compose/compose-file/),开发者可以记录和配置所有应用程序的服务依赖关系。
* **自动测试**:此用例需要一个测试运行环境。Compose 提供了一种方便的方式来管理测试套件的隔离测试环境。完整的环境在 Compose 文件中定义。
Docker Compose 是在 [Fig](http://www.fig.sh/) 的源码上构建的,这个社区项目现在已经没有使用了。
在本教程中,我们将看到如何在 Ubuntu 16.04 上安装 Docker Compose。
### 安装 Docker
我们需要安装 Docker 来安装 Docker Compose。首先为官方 Docker 仓库添加公钥。
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
接下来,添加 Docker 仓库到 `apt` 源列表:
```
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
更新包数据库,并使用 `apt` 安装 Docker:
```
$ sudo apt-get update
$ sudo apt install docker-ce
```
在安装进程结束后,Docker 守护程序应该已经启动并设为开机自动启动。我们可以通过下面的命令来查看它的状态:
```
$ sudo systemctl status docker
---------------------------------
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running)
```
### 安装 Docker Compose
现在可以安装 Docker Compose 了。通过执行以下命令下载当前版本。
```
# curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
```
为二进制文件添加执行权限:
```
# chmod +x /usr/local/bin/docker-compose
```
检查 Docker Compose 版本:
```
$ docker-compose -v
```
输出应该如下:
```
docker-compose version 1.14.0, build c7bdf9e
```
### 测试 Docker Compose
Docker Hub 包含了一个用于演示的 Hello World 镜像,可以用来说明使用 Docker Compose 来运行一个容器所需的配置。
创建并打开一个目录:
```
$ mkdir hello-world
$ cd hello-world
```
创建一个新的 YAML 文件:
```
$ $EDITOR docker-compose.yml
```
在文件中粘贴如下内容:
```
unixmen-compose-test:
image: hello-world
```
***注意:** 第一行是容器名称的一部分。*
保存并退出。
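上面两步(创建目录、编写 YAML 文件)也可以用一小段 shell 一次完成;下面的写法仅是示例,写入的文件内容与上文完全一致:

```
# 创建项目目录(与上文的 mkdir 步骤一致)
mkdir -p hello-world

# 用 heredoc 写入与上文相同的 docker-compose.yml
cat > hello-world/docker-compose.yml <<'EOF'
unixmen-compose-test:
  image: hello-world
EOF

# 查看写入结果
cat hello-world/docker-compose.yml
```

之后同样进入该目录执行 `docker-compose up` 即可。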
### 运行容器
接下来,在 `hello-world` 目录执行以下命令:
```
$ sudo docker-compose up
```
如果一切正常,Compose 输出应该如下:
```
Pulling unixmen-compose-test (hello-world:latest)...
latest: Pulling from library/hello-world
b04784fba78d: Pull complete
Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f
Status: Downloaded newer image for hello-world:latest
Creating helloworld_unixmen-compose-test_1 ...
Creating helloworld_unixmen-compose-test_1 ... done
Attaching to helloworld_unixmen-compose-test_1
unixmen-compose-test_1 |
unixmen-compose-test_1 | Hello from Docker!
unixmen-compose-test_1 | This message shows that your installation appears to be working correctly.
unixmen-compose-test_1 |
unixmen-compose-test_1 | To generate this message, Docker took the following steps:
unixmen-compose-test_1 | 1. The Docker client contacted the Docker daemon.
unixmen-compose-test_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
unixmen-compose-test_1 | 3. The Docker daemon created a new container from that image which runs the
unixmen-compose-test_1 | executable that produces the output you are currently reading.
unixmen-compose-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
unixmen-compose-test_1 | to your terminal.
unixmen-compose-test_1 |
unixmen-compose-test_1 | To try something more ambitious, you can run an Ubuntu container with:
unixmen-compose-test_1 | $ docker run -it ubuntu bash
unixmen-compose-test_1 |
unixmen-compose-test_1 | Share images, automate workflows, and more with a free Docker ID:
unixmen-compose-test_1 | https://cloud.docker.com/
unixmen-compose-test_1 |
unixmen-compose-test_1 | For more examples and ideas, visit:
unixmen-compose-test_1 | https://docs.docker.com/engine/userguide/
unixmen-compose-test_1 |
helloworld_unixmen-compose-test_1 exited with code 0
```
Docker 容器只能在命令(LCTT 译注:此处应为容器中的命令)还处于活动状态时运行,因此当测试完成运行时,容器将停止运行。
### 结论
本文是关于在 Ubuntu 16.04 中安装 Docker Compose 的教程。我们还看到了如何通过一个 YAML 格式的 Compose 文件构建一个简单的项目。
---
via: <https://www.unixmen.com/container-docker-compose-ubuntu-16-04/>
作者:[Giuseppe Molica](https://www.unixmen.com/author/tutan/) 译者:[Locez](https://github.com/locez) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | ### What is Docker Compose
[Docker Compose](https://docs.docker.com/compose/overview/) is a tool for running multi-container Docker applications. To configure an application’s services with Compose we use a configuration file, and then, executing a single command, it is possible to create and start all the services specified in the configuration.
Docker Compose is useful for many different projects like:
**Development**: with the Compose command line tools we create (and interact with) an isolated environment which will host the application being developed. By using the [Compose file](https://docs.docker.com/compose/compose-file/), developers document and configure all of the application’s service dependencies.

**Automated testing**: this use case requires an environment for running tests in. Compose provides a convenient way to manage isolated testing environments for a test suite. The full environment is defined in the Compose file.
Docker Compose was made on the [Fig](http://www.fig.sh/) source code, a community project now unused.
In this tutorial we will see how to install Docker Compose on an Ubuntu 16.04 machine.
### Install Docker
We need Docker in order to install Docker Compose. First, add the public key for the official Docker repository:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Next, add the Docker repository to the `apt` sources list:
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Update the packages database and install Docker with `apt`:
$ sudo apt-get update
$ sudo apt install docker-ce
At the end of the installation process, the Docker daemon should be started and enabled to load at boot time. We can check its status with the following command:
$ sudo systemctl status docker
---------------------------------
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running)
### Install Docker Compose
At this point it is possible to install Docker Compose. Download the current release by executing the following command:
# curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
Make the downloaded binary executable:
# chmod +x /usr/local/bin/docker-compose
Check the Docker Compose version:
$ docker-compose -v
The output should be something like this:
docker-compose version 1.14.0, build c7bdf9e
### Testing Docker Compose
The Docker Hub includes a Hello World image for demonstration purposes, illustrating the configuration required to run a container with Docker Compose.
Create a new directory and move into it:
$ mkdir hello-world
$ cd hello-world
Create a new YAML file:
$ $EDITOR docker-compose.yml
In this file paste the following content:
unixmen-compose-test:
  image: hello-world
**Note:** the first line is used as part of the container name.
Save and exit.
#### Run the container
Next, execute the following command in the `hello-world` directory:
$ sudo docker-compose up
If everything is correct, this should be the output shown by Compose:
Pulling unixmen-compose-test (hello-world:latest)...
latest: Pulling from library/hello-world
b04784fba78d: Pull complete
Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f
Status: Downloaded newer image for hello-world:latest
Creating helloworld_unixmen-compose-test_1 ...
Creating helloworld_unixmen-compose-test_1 ... done
Attaching to helloworld_unixmen-compose-test_1
unixmen-compose-test_1 |
unixmen-compose-test_1 | Hello from Docker!
unixmen-compose-test_1 | This message shows that your installation appears to be working correctly.
unixmen-compose-test_1 |
unixmen-compose-test_1 | To generate this message, Docker took the following steps:
unixmen-compose-test_1 | 1. The Docker client contacted the Docker daemon.
unixmen-compose-test_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
unixmen-compose-test_1 | 3. The Docker daemon created a new container from that image which runs the
unixmen-compose-test_1 | executable that produces the output you are currently reading.
unixmen-compose-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
unixmen-compose-test_1 | to your terminal.
unixmen-compose-test_1 |
unixmen-compose-test_1 | To try something more ambitious, you can run an Ubuntu container with:
unixmen-compose-test_1 | $ docker run -it ubuntu bash
unixmen-compose-test_1 |
unixmen-compose-test_1 | Share images, automate workflows, and more with a free Docker ID:
unixmen-compose-test_1 | https://cloud.docker.com/
unixmen-compose-test_1 |
unixmen-compose-test_1 | For more examples and ideas, visit:
unixmen-compose-test_1 | https://docs.docker.com/engine/userguide/
unixmen-compose-test_1 |
helloworld_unixmen-compose-test_1 exited with code 0
Docker containers only run as long as the command is active, so the container will stop when the test finishes running.
### Conclusion
This concludes the tutorial about the installation of Docker Compose on an Ubuntu 16.04 machine. We have also seen how to create a simple project through the Compose file in YAML format. |
8,747 | 如何保护 Ubuntu 16.04 上的 NGINX Web 服务器 | https://www.unixmen.com/encryption-secure-nginx-web-server-ubuntu-16-04/ | 2017-08-02T14:33:00 | [
"Nginx",
"SSL"
] | https://linux.cn/article-8747-1.html | 
### 什么是 Let’s Encrypt
[Let’s Encrypt](https://letsencrypt.org/) 是互联网安全研究组织 (ISRG) 提供的免费证书认证机构。它提供了一种轻松自动的方式来获取免费的 SSL/TLS 证书 - 这是在 Web 服务器上启用加密和 HTTPS 流量的必要步骤。获取和安装证书的大多数步骤可以通过使用名为 [Certbot](https://certbot.eff.org/) 的工具进行自动化。
特别地,该软件可在可以使用 shell 的服务器上使用:换句话说,它可以通过 SSH 连接使用。
在本教程中,我们将看到如何使用 `certbot` 获取免费的 SSL 证书,并在 Ubuntu 16.04 服务器上使用 Nginx。
### 安装 Certbot
第一步是安装 `certbot`,该软件客户端可以几乎自动化所有的过程。 Certbot 开发人员维护自己的 Ubuntu 仓库,其中包含比 Ubuntu 仓库中存在的软件更新的软件。
添加 Certbot 仓库:
```
# add-apt-repository ppa:certbot/certbot
```
接下来,更新 APT 源列表:
```
# apt-get update
```
此时,可以使用以下 `apt` 命令安装 `certbot`:
```
# apt-get install certbot
```
Certbot 现已安装并可使用。
### 获得证书
有各种 Certbot 插件可用于获取 SSL 证书。这些插件有助于获取证书,而证书的安装和 Web 服务器配置都留给管理员。
我们使用一个名为 **Webroot** 的插件来获取 SSL 证书。
在有能力修改正在提供的内容的情况下,建议使用此插件。在证书颁发过程中不需要停止 Web 服务器。
#### 配置 NGINX
Webroot 会在 Web 根目录下的 `.well-known` 目录中为每个域创建一个临时文件。在我们的例子中,Web 根目录是 `/var/www/html`。确保该目录在 Let’s Encrypt 验证时可访问。为此,请编辑 NGINX 配置。使用文本编辑器打开 `/etc/nginx/sites-available/default`:
```
# $EDITOR /etc/nginx/sites-available/default
```
在该文件中,在 `server` 块内,输入以下内容:
```
location ~ /.well-known {
allow all;
}
```
保存,退出并检查 NGINX 配置:
```
# nginx -t
```
没有错误的话应该会显示如下:
```
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
重启 NGINX:
```
# systemctl restart nginx
```
#### 使用 Certbot 获取证书
下一步是使用 Certbot 的 Webroot 插件获取新证书。在本教程中,我们将保护示例域 www.example.com。需要指定应由证书保护的每个域。执行以下命令:
```
# certbot certonly --webroot --webroot-path=/var/www/html -d www.example.com
```
在此过程中,Certbot 将询问一个有效的电子邮件地址,用于接收通知。它还会询问是否与 EFF 分享该地址,但这不是必需的。在同意服务条款之后,它将获得一个新的证书。
最后,目录 `/etc/letsencrypt/archive` 将包含以下文件:
* `chain.pem`:Let’s Encrypt 加密链证书。
* `cert.pem`:域名证书。
* `fullchain.pem`:`cert.pem`和 `chain.pem` 的组合。
* `privkey.pem`:证书的私钥。
Certbot 还将创建符号链接到 `/etc/letsencrypt/live/domain_name/` 中的最新证书文件。这是我们将在服务器配置中使用的路径。
### 在 NGINX 上配置 SSL/TLS
下一步是服务器配置。在 `/etc/nginx/snippets/` 中创建一个新的代码段。 **snippet** 是指一段配置,可以包含在虚拟主机配置文件中。如下创建一个新的文件:
```
# $EDITOR /etc/nginx/snippets/secure-example.conf
```
该文件的内容将指定证书和密钥位置。粘贴以下内容:
```
ssl_certificate /etc/letsencrypt/live/domain_name/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem;
```
在我们的例子中,`domain_name` 是 `example.com`。
#### 编辑 NGINX 配置
编辑默认虚拟主机文件:
```
# $EDITOR /etc/nginx/sites-available/default
```
如下:
```
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name www.example.com;
return 301 https://$server_name$request_uri;
# SSL configuration
#
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
include snippets/secure-example.conf;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
# ...
}
```
这将启用 NGINX 加密功能。
保存、退出并检查 NGINX 配置文件:
```
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
重启 NGINX:
```
# systemctl restart nginx
```
### 总结
按照上述步骤,此时我们已经拥有了一个安全的基于 NGINX 的 Web 服务器,它由 Certbot 和 Let’s Encrypt 提供加密。这只是一个基本配置,当然你可以使用许多 NGINX 配置参数来个性化所有东西,但这取决于特定的 Web 服务器要求。
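补充一点:Let’s Encrypt 签发的证书有效期为 90 天,因此需要定期续期。Certbot 提供了 `renew` 子命令,只会对接近过期的证书实际执行续期。下面是一个示例 cron 配置(文件路径与执行时间均为假设,`--post-hook` 用于在续期成功后重载 NGINX):

```
# /etc/cron.d/certbot-renew(示例路径)
# 每天凌晨 3 点尝试续期,仅在证书接近过期时才会真正续期
0 3 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```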
---
via: <https://www.unixmen.com/encryption-secure-nginx-web-server-ubuntu-16-04/>
作者:[Giuseppe Molica](https://www.unixmen.com/author/tutan/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | ### What is Let’s Encrypt
[Let’s Encrypt](https://letsencrypt.org/) is a free certificate authority brought by the Internet Security Research Group (ISRG). It provides an easy and automated way to obtain free SSL/TLS certificates – a required step for enabling encryption and HTTPS traffic on web servers. Most of the steps in obtaining and installing a certificate can be automated by using a tool called [Certbot](https://certbot.eff.org/).
In particular, this software can be used in presence of shell access to the server: in other words, when it’s possible to connect to the server through SSH.
In this tutorial we will see how to use `certbot` to obtain a free SSL certificate and use it with Nginx on an Ubuntu 16.04 server.
### Install Certbot
The first step is to install `certbot`, the software client which will automate almost everything in the process. Certbot developers maintain their own Ubuntu software repository which contains software newer than those present in the Ubuntu repositories.
Add the Certbot repository:
# add-apt-repository ppa:certbot/certbot
Next, update the APT sources list:
# apt-get update
At this point, it is possible to install `certbot` with the following `apt` command:
# apt-get install certbot
Certbot is now installed and ready to use.
### Obtain a Certificate
There are various Certbot plugins for obtaining SSL certificates. These plugins help in obtaining a certificate, while its installation and web server configuration are both left to the admin.
We will use a plugin called **Webroot** to obtain an SSL certificate.
This plugin is recommended in those cases where there is the ability to modify the content being served. There is no need to stop the web server during the certificate issuance process.
#### Configure NGINX
Webroot works by creating a temporary file for each domain in a directory called `.well-known`, placed inside the web root directory. In our case, the web root directory is `/var/www/html`. Ensure that the directory is accessible to Let’s Encrypt for validation. To do so, edit the NGINX configuration. With a text editor, open the `/etc/nginx/sites-available/default` file:
# $EDITOR /etc/nginx/sites-available/default
In this file, in the server block, place the following content:
location ~ /.well-known { allow all; }
Save, exit and check the NGINX configuration:
# nginx -t
Without errors, it should display:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restart NGINX:
# systemctl restart nginx
#### Obtain Certificate with Certbot
The next step is to obtain a new certificate using Certbot with the Webroot plugin. In this tutorial, we will secure (as example) the domain www.example.com. It is required to specify every domain that should be secured by the certificate. Execute the following command:
# certbot certonly --webroot --webroot-path=/var/www/html -d www.example.com
During the process, Certbot will ask for a valid email address for notification purposes. It will also ask to share it with the EFF, but this is not required. After agreeing to the Terms of Service, it will obtain a new certificate.
At the end, the directory `/etc/letsencrypt/archive` will contain the following files:
- chain.pem: Let’s Encrypt chain certificate.
- cert.pem: domain certificate.
- fullchain.pem: combination of cert.pem and chain.pem.
- privkey.pem: certificate’s private key.
Certbot will also create symbolic links to the most recent certificate files in `/etc/letsencrypt/live/domain_name/`. This is the path we will use in server configuration.
### Configure SSL/TLS on NGINX
The next step is server configuration. Create a new snippet in the `/etc/nginx/snippets/` directory. A **snippet** is a part of a configuration file that can be included in virtual host configuration files. So, create a new file:
# $EDITOR /etc/nginx/snippets/secure-example.conf
The content of this file will be the directives specifying the locations of the certificate and key. Paste the following content:
ssl_certificate /etc/letsencrypt/live/domain_name/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem;
In our case, `domain_name` would be `example.com`.
#### Edit NGINX Configuration
Edit the default Virtual Host file:
# $EDITOR /etc/nginx/sites-available/default
As follows:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name www.example.com;
    return 301 https://$server_name$request_uri;

    # SSL configuration
    #
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    include snippets/secure-example.conf;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    # ...
}
This will enable encryption on NGINX.
Save, exit and check the NGINX configuration file:
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restart NGINX:
# systemctl restart nginx
### Conclusion
Following all the steps above, at this point we have a secured NGINX-based web server, with encryption granted by Certbot and Let’s Encrypt. This is just a basic configuration, of course, and it’s possible to use many NGINX configuration parameters for personalizing everything, but that depends on specific web server requirements. |
8,749 | 运行 Ubuntu 的 Husarion CORE2-ROS 使得机器人开发变得容易 | https://insights.ubuntu.com/2017/07/12/robot-development-made-easy-with-husarion-core2-ros-running-ubuntu/ | 2017-08-03T18:17:17 | [
"ROS",
"RTOS"
] | https://linux.cn/article-8749-1.html |
>
> 这是游客投稿的本系列两篇中的第一篇;作者:Dominik Nowak,Husarion 的 CEO
>
>
>
过去十年,我们见证了 IT 行业的许多突破。可以说对消费者最有意义的一个方面是智能手机和移动开发的普及。接下来的大事件是什么?如今智能手机如此常见,我们天天对着它,是不是有点无聊了?所以,我们猜是:机器人。
众所周知,许多生产线完全由机器人运行。但在消费者和服务方面,还没有看到巨大的突破。我们认为这是一个可达性和降低开发人员进入的门槛的问题。只需要有好的、简单的工具来快速做出原型和开发机器人。为了测试新的想法并赋予工程师们更多能力,以便他们可以解决许多人类仍然面临的问题,那些比在应用中的点按一下更棘手的问题。
构建机器人是一个具有挑战性的任务,[Husarion](https://husarion.com/) 团队正在努力使其更容易。Husarion 是一家从事于机器人快速开发平台的机器人公司。该公司的产品是 CORE2 机器人控制器和用于管理所有基于 CORE2 的机器人的云平台。CORE2 是第二代 Husarion 机器人控制器,它可在[这里](https://www.crowdsupply.com/husarion/core2)找到。
CORE2 结合了实时微控制器板和运行 Ubuntu 的单板计算机。Ubuntu 是最受欢迎的 Linux 发行版,不仅适用于[桌面](https://www.ubuntu.com/desktop),还适用于物联网和 [机器人](https://www.ubuntu.com/internet-of-things/robotics)程序中的嵌入式硬件。

CORE2 控制器有两种配置。第一款是采用 ESP32 Wi-Fi 模块的,专用于需要低功耗和实时、安全遥控的机器人应用。第二款,称为 CORE2-ROS,基本上是将两块板子集成到了一起:
* 使用实时操作系统(RTOS)的实时微控制器并集成电机、编码器和传感器接口的电路板
* 带有 ROS([Robot Operating System](http://www.ros.org/))包的运行 Linux 的单板计算机(SBC)和其他软件工具。
“实时”电路板做底层工作。它包含高效的 STM32F4 系列微控制器,非常适用于驱动电机、读取编码器、与传感器通信,并控制整个机电或机器人系统。在大多数应用中,CPU 负载不超过几个百分点,实时操作由基于 RTOS 的专用编程框架支持。我们还保证与 Arduino 库的兼容性。大多数任务都在微控制器外设中处理,如定时器、通信接口、ADC 等,它具有中断和 DMA 通道的强大支持。简而言之,这些工作并不适合交给还要承担其他任务的单板计算机。
另一方面,很显然,现代先进的机器人程序不能仅仅基于微控制器,原因如下:
* 自动机器人需要大量的处理能力来执行导航、图像和声音识别、移动等等,
* 编写先进的软件需要标准化才能有效 - SBC 在行业中越来越受欢迎,而对于为 SBC 编写的软件也是如此,这与 PC 电脑非常相似
* SBC 每年都变得越来越便宜。

Husarion 认为,将这两个世界结合起来,对机器人技术是非常有益的。
CORE2-ROS 控制器有两种配置:[Raspberry Pi 3](https://www.raspberrypi.org/products/raspberry-pi-3-model-b/) 或 [ASUS Tinker Board](https://www.asus.com/uk/Single-Board-Computer/Tinker-Board/)。CORE-ROS 运行于 Ubuntu、Husarion 开发和管理工具以及 ROS 软件包上。
下篇文章将介绍为何 Husarion 决定使用 Ubuntu 。
---
via: <https://insights.ubuntu.com/2017/07/12/robot-development-made-easy-with-husarion-core2-ros-running-ubuntu/>
作者:[Dominik Nowak](https://insights.ubuntu.com/author/guest/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,750 | LXD 2.15 中的存储管理 | https://insights.ubuntu.com/2017/07/12/storage-management-in-lxd-2-15/ | 2017-08-03T18:37:38 | [
"LXD",
"容器",
"存储"
] | https://linux.cn/article-8750-1.html | 
长久以来 LXD 已经支持多种存储驱动。用户可以在 zfs、btrfs、lvm 或纯目录存储池之间进行选择,但他们只能使用单个存储池。一个被频繁被提到的需求是不仅支持单个存储池,还支持多个存储池。这样,用户可以维护一个由 SSD 支持的 zfs 存储池用于 I/O 密集型容器,另一个简单的基于目录的存储池用于其他容器。幸运的是,现在这是可能的,因为 LXD 在几个版本后有了自己的存储管理 API。
### 创建存储池
新安装 LXD 没有定义任何存储池。如果你运行 `lxd init` ,LXD 将提供为你创建一个存储池。由 `lxd init` 创建的存储池将是创建容器的默认存储池。

### 创建更多的存储池
我们的客户端工具使得创建额外的存储池变得非常简单。为了创建和管理新的存储池,你可以使用 `lxc storage` 命令。所以如果你想在块设备 `/dev/sdb` 上创建一个额外的 btrfs 存储池,你只需使用 `lxc storage create my-btrfs btrfs source=/dev/sdb`。让我们来看看:

### 在默认存储池上创建容器
如果你从全新安装的 LXD 开始,并通过 `lxd init` 创建了一个存储池,LXD 将使用此池作为默认存储池。这意味着如果你执行 `lxc launch images:ubuntu/xenial xen1`,LXD 将为此存储池上的容器的根文件系统创建一个存储卷。在示例中,我们使用 `my-first-zfs-pool` 作为默认存储池。

### 在特定存储池上创建容器
但是你也可以通过传递 `-s` 参数来告诉 `lxc launch` 和 `lxc init` 在特定存储池上创建一个容器。例如,如果要在 `my-btrfs` 存储池上创建一个新的容器,你可以执行 `lxc launch images:ubuntu/xenial xen-on-my-btrfs -s my-btrfs`:

### 创建自定义存储卷
如果你其中一个容器需要额外的空间存储额外的数据,那么新的存储 API 将允许你创建可以连接到容器的存储卷。只需要 `lxc storage volume create my-btrfs my-custom-volume`:

### 连接自定义卷到容器中
当然,这个功能是有用的,因为存储 API 让你把这些存储卷连接到容器。要将存储卷连接到容器,可以使用 `lxc storage volume attach my-btrfs my-custom-volume xen1 data /opt/my/data`:

### 在容器之间共享自定义存储卷
默认情况下,LXD 将使连接的存储卷由其所连接的容器写入。这意味着它会将存储卷的所有权更改为容器的 id 映射。但存储卷也可以同时连接到多个容器。这对于在多个容器之间共享数据是非常好的。但是,这有一些限制。为了将存储卷连接到多个容器,它们必须共享相同的 id 映射。让我们创建一个额外的具有一个隔离的 id 映射的容器 `xen-isolated`。这意味着它的 id 映射在这个 LXD 实例中将是唯一的,因此没有其他容器具有相同的id映射。将相同的存储卷 `my-custom-volume` 连接到此容器现在将会失败:

但是我们让 `xen-isolated` 与 `xen1` 有相同的映射,并把它重命名为 `xen2` 来反映这个变化。现在我们可以将 `my-custom-volume` 连接到 `xen1` 和 `xen2` 而不会有问题:

### 总结
存储 API 是 LXD 非常强大的补充。它提供了一组基本功能,有助于在大规模使用容器时处理各种问题。这个简短的介绍希望给你一个印象,你可以做什么。将来会有更多介绍。
本篇文章最初在 [Brauner 的博客](https://cbrauner.wordpress.com/)中发布。
---
via: <https://insights.ubuntu.com/2017/07/12/storage-management-in-lxd-2-15/>
作者:[Christian Brauner](https://cbrauner.wordpress.com/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,751 | Docker、Kubernetes 和 Apache Mesos 对比中的一些误区 | https://mesosphere.com/blog/docker-vs-kubernetes-vs-apache-mesos/ | 2017-08-04T08:03:15 | [
"Docker",
"Kubernetes",
"Mesos",
"容器",
"编排"
] | https://linux.cn/article-8751-1.html | 
有无数的文章、讨论、以及很多社区喋喋不休地比较 Docker、Kubernetes 和 Mesos。如果你只是听信了只言片语,你可能会认为这三个开源项目正为了称霸容器界而殊死搏斗。你可能还相信从他们中选出一个如宗教信仰般神圣——真正的信徒会忠于他们的信仰,而且会烧死那些敢于考虑替代方案的异教徒。
那都是废话。
虽然所有这三种技术都使得使用容器来部署、管理和伸缩应用成为可能,但实际上它们各自解决了不同的问题,并且根植于迥异的上下文环境中。事实上,这三种被广泛采用的工具链,都是有差别的。
让我们重新审视每个项目的原始任务、技术架构,以及它们是如何相互补充和交互的,而不是纠结于比较这些快速迭代的技术之间重叠的特性。
### 让我们从 Docker 开始……
Docker 公司,始于名为 dotCloud 的平台即服务(PaaS)供应商。dotCloud 团队发现,在许多应用和客户之间管理依赖和二进制文件时需要付出大量的工作。因此他们将 Linux 的 [cgroups](https://en.wikipedia.org/wiki/Cgroups) 和 namespace 的一些功能合并成一个单一且易于使用的软件包,以便于应用程序可以一致地运行在任何基础设施上。这个软件包就是所谓的 [Docker 镜像](https://docs.docker.com/engine/docker-overview/),它提供了如下的功能:
* **将应用程序和依赖库封装在一个软件包**(即 Docker 镜像)中,因此应用可以被一致地部署在各个环境上;
* **提供类似 Git 的语义**,例如 `docker push`,`docker commit` 等命令让应用开发者可以快速接受这门新的技术,并将其融入到现有的工作流中;
* **定义 Docker 镜像为不可变的层**,支持不可变的基础设施。新提交的变更被分别保存为只读层,让复用镜像和追踪变更记录变得十分简单。层还通过只传输更新而不是整个镜像来节省磁盘空间和网络流量;
* **通过实例化不可变的镜像**和读写层来运行 Docker 容器,读写层可以临时地存储运行时变更,从而轻松部署和扩展应用程序的多个实例。
Docker 变得越来越受欢迎,开发者们开始从在笔记本电脑上运行容器转而在生产环境中运行容器。跨多个机器之间协调这些容器需要额外的工具,这称之为<ruby> 容器编排 <rt> container orchestration </rt></ruby>。有趣的是,第一个支持 Docker 镜像的容器编排工具(2014 年 6 月)是 Apache Mesos 的 [Marathon](https://mesosphere.github.io/marathon/)(后面会有详细介绍) 。那年,Docker 的创始人兼首席技术官 Solomon Hykes 将 Mesos 推荐为“[生产集群的黄金标准](https://www.google.com/url?q=https://www.youtube.com/watch?v=sGWQ8WiGN8Y&feature=youtu.be&t=35m10s&sa=D&ust=1500923856666000&usg=AFQjCNFLtW96ZWnOUGFPX_XUuVOPdWrd_w)”。不久之后,除了 Mesos 的 Marathon 之外,还出现了许多的容器编排技术:[Nomad](https://www.nomadproject.io/)、[Kubernetes](https://kubernetes.io/),不出所料还有 Docker Swarm ([它如今是 Docker 引擎的一部分](https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/))。
随着 Docker 开始商业化其开源的文件格式(LCTT 译注:指 Docker 镜像的 dockerfile 文件格式),该公司还开始引入工具来完善其核心的 Docker 文件格式和运行时引擎,包括:
* 为公开存储 Docker 镜像的而生的 Docker hub;
* 存储私有镜像的 Docker 仓库(Docker registry);
* Docker cloud,用于构建和运行容器的管理性服务;
* Docker 数据中心作为一种商业产品体现了许多 Docker 技术;

*来源: [www.docker.com](http://www.docker.com)*
Docker 将软件及其依赖关系封装在一个软件包中的洞察力改变了软件行业的游戏规则,正如 mp3 的出现重塑了音乐行业一般。Docker 文件格式成为行业标准,领先的容器技术供应商(包括 Docker、Google、Pivotal、Mesosphere 等) 组建了 [<ruby> 云原生计算基金会 <rt> Cloud Native Computing Foundation </rt></ruby> (CNCF)](https://www.cncf.io/) 和 [<ruby> 开放容器推进联盟 <rt> Open Container Initiative </rt></ruby> (OCI)](https://www.opencontainers.org/)。如今,CNCF 和 OCI 旨在确保容器技术之间的互操作性和标准化接口,并确保使用任何工具构建的任何 Docker 容器都可以在任何运行时或基础架构上运行。
### 进入 Kubernetes
Google 很早就认识到了 Docker 的潜力,并试图在 Google Cloud Platform (GCP)上提供容器编排“即服务”。 Google 在容器方面拥有丰富的经验(是他们在 Linux 中引入了 cgroups),但现有的内部容器和 Borg 等分布式计算工具直接与其基础架构相耦合。所以,Google 没有使用原有系统的任何代码,而是从头开始设计 Kubernetes (K8S)来编排 Docker 容器。 Kubernetes 于 2015 年 2 月发布,其目标和考虑如下:
* **为应用程序开发人员提供**编排 Docker 容器的强大工具,而无需与底层基础设施交互;
* **提供标准部署接口**和原语,以实现云端一致的应用部署体验和 API;
* **基于模块化 API 核心**,允许供应商围绕 Kubernetes 的核心技术集成其系统。
2016 年 3 月,Google [将 Kubernetes 捐赠](https://www.linuxfoundation.org/news-media/announcements/2016/03/cloud-native-computing-foundation-accepts-kubernetes-first-hosted-0)给了 CNCF,并且直到今天仍然是该项目的主要贡献者(其次是Redhat,CoreOS 等)。

*来源: wikipedia*
Kubernetes 对应用程序开发人员非常有吸引力,因为它减轻了对基础架构和运营团队的依赖程度。供应商也喜欢 Kubernetes,因为它提供了一个容易的方式来拥抱容器化运动,并为客户部署自己的 Kubernetes(这仍然是一个值得重视的挑战)提供商业解决方案。 Kubernetes 也是有吸引力的,因为它是 CNCF 旗下的开源项目,与 Docker Swarm 相反,Docker Swarm 尽管是开源的,但是被 Docker 公司紧紧地掌控着。
Kubernetes 的核心优势是为应用程序开发人员提供了用于编排无状态 Docker 容器的强大工具。 虽然有多个扩大项目范围的提议,以提供更多的功能(例如分析和有状态数据服务),但这些提议仍处于非常早期的阶段,它们能取得多大的成功还有待观察。
### Apache Mesos
Apache Mesos 始于<ruby> 加州大学伯克利分校 <rt> UC Berkeley </rt></ruby>的下一代容器集群管理器项目,并应用了从云计算级别的分布式基础架构(如 [Google 的 Borg](https://research.google.com/pubs/pub43438.html) 和 [Facebook 的 Tupperware](https://www.youtube.com/watch?v=C_WuUgTqgOc))中习得的经验和教训。 虽然 Borg 和 Tupperware 具有单一的架构,并且是与物理基础架构紧密结合的闭源专有技术,但 Mesos 推出了一种模块化架构,一种开源的开发方法,旨在完全独立于基础架构。Mesos 迅速被 [Twitter](https://youtu.be/F1-UEIG7u5g)、[Apple(Siri 中)](http://www.businessinsider.com/apple-siri-uses-apache-mesos-2015-8)、[Yelp](https://engineeringblog.yelp.com/2015/11/introducing-paasta-an-open-platform-as-a-service.html)、[Uber](http://highscalability.com/blog/2016/9/28/how-uber-manages-a-million-writes-per-second-using-mesos-and.html)、[Netflix](https://medium.com/netflix-techblog/distributed-resource-scheduling-with-apache-mesos-32bd9eb4ca38) 和许多领先的技术公司采用,支持从微服务、大数据和实时分析到弹性扩展的一切。
作为集群管理器,Mesos 被设计用来解决一系列不同的挑战:
* **将数据中心资源抽象**为单个池来简化资源分配,同时在私有云或公有云中提供一致的应用和运维体验;
* 在相同的基础架构上**协调多个工作负载**,如分析、无状态微服务、分布式数据服务和传统应用程序,以提高利用率,降低成本和台面空间;
* 为应用程序特定的任务(如部署、自我修复、扩展和升级),**自动执行第二天的操作**;提供高度可用的容错基础设施;
* **提供持久的可扩展性**来运行新的应用程序和技术,而无需修改集群管理器或其上构建的任何现有应用程序;
* **弹性扩展**可以将应用程序和底层基础设施从少量扩展到数十到数万个节点。
Mesos 独有的独立管理各种工作负载的能力 —— 包括 Java 这样的传统应用程序、无状态 Docker 微服务、批处理作业、实时分析和有状态的分布式数据服务。Mesos 广泛的工作负载覆盖来自于其两级架构,从而实现了“应用感知”调度。通过将应用程序特定的操作逻辑封装在“Mesos 框架”(类似于操作中的运行手册)中来实现应用程序感知调度。资源管理器 Mesos Master 提供了这些框架基础架构的部分,同时保持隔离。这种方法允许每个工作负载都有自己的专门构建的应用程序调度程序,可以了解其部署、扩展和升级的特定操作要求。应用程序调度程序也是独立开发、管理和更新的,这让 Mesos 拥有高度可扩展的能力,支持新的工作负载或随着时间的推移而增加更多的操作功能。

举一个团队如何管理应用软件升级的例子。无状态应用程序可以从[“蓝/绿”](https://martinfowler.com/bliki/BlueGreenDeployment.html)部署方案中受益;当新版本的应用运行起来时,原先旧版本的软件依然还正常运转着,然后当旧应用被销毁时流量将会切换到新的应用上。但是升级数据工作负载例如 HDFS 或者 Cassandra 要求节点停机一次,此时需要持久化本地数据卷以防止数据丢失,并且按照特定的顺序执行原位升级,在升级之前和升级完成之后,都要在每一个节点类型上执行特定的检查和命令。任何这些步骤都是应用程序或服务特定的,甚至可能是版本特定的。这让使用常规容器编排调度程序来管理数据服务变得非常困难。
Mesos 以每一个工作负载所需的特定方式管理各种工作负载,使得许多公司将 Mesos 作为一个统一的平台,将微服务和数据服务结合在一起。数据密集型应用程序的通用参考架构是 [“SMACK 家族”](https://mesosphere.com/blog/2017/06/21/smack-stack-new-lamp-stack/)(LCTT 译注:SMACK 即 Spark、Mesos、Akka、Cassandra、Kafka)。
### 是时候搞清楚这些了
请注意,我们尚未对 Apache Mesos 的容器编排有任何描述。所以为什么人们会自动地将 Mesos 和容器编排联系起来呢?容器编排是可以在 Mesos 的模块化架构上运行的工作负载的一个例子,它是通过一个专门的编排“框架”来完成的,这个框架就 Marathon,一个构建于 Mesos 之上的工具。 Marathon 最初是为了在 [cgroup](https://en.wikipedia.org/wiki/Cgroups) 容器中编排应用归档(如 JAR、tarball、ZIP 文件)而开发的,是 2014 年最先支持 Docker 容器的编排工具之一。
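作为示意,下面给出一个最小化的 Marathon 应用定义(JSON),用于在 Mesos 集群上以 Docker 容器方式运行一个 nginx 实例;字段名基于 Marathon 的应用定义格式,`id` 等取值为假设,具体细节请以所用版本的文档为准:

```
{
  "id": "/demo/nginx",
  "instances": 1,
  "cpus": 0.1,
  "mem": 64,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:latest"
    }
  }
}
```

将该 JSON 通过 Marathon 的 REST API(`POST /v2/apps`)提交,即可由 Marathon 在 Mesos 上调度运行。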
所以当人们将 Docker 和 Kubernetes 与 Mesos 进行比较时,他们实际上是将 Kubernetes 和 Docker Swarm 与在 Mesos 上运行的 Marathon 进行比较。
为什么搞清楚这一点很重要? 因为 Mesos 坦率地讲并不在乎它上面运行了什么。 Mesos 可以在共享的基础设施上弹性地为 Java 应用服务器提供集群服务、Docker 容器编排、Jenkins 持续集成任务、Apache Spark 分析、Apache Kafka 流,以及更多其他的服务。Mesos 甚至可以运行 Kubernetes 或者其他的容器编排工具,即使公共的集成目前还不可用。

*来源: Apache Mesos 2016 调查问卷*
Mesos 的另一个考虑因素(也是为什么它对许多企业架构师来说如此有吸引力)是运行关键任务工作负载的成熟度。 Mesos 已经在大规模生产环境下(成千上万台服务器)运行了超过 7 年的时间,这就是为什么它比市场上许多其他的容器技术更具有生产上的可行性和扩展上的可靠性。
### 我所说的这些什么意思?
总而言之,所有这三种技术都与 Docker 容器有关,可以让你在容器编排上实现应用程序的可移植性和扩展性。那么你在它们之间如何选择呢? 归根到底是为工作选择合适的工具(也可能是为不同的工作选择不同的工具)。如果您是一个应用开发人员,正在寻找现代化的方式来构建和打包你的应用程序,或者想加速你的微服务计划,Docker 容器和开发工具就是最好的选择。
如果你们是一个开发人员或者 DevOps 的团队,并希望构建一个专门用于 Docker 容器编排的系统,而且愿意花时间折腾集成解决方案与底层基础设施(或依靠公共云基础架构,如 Google 容器引擎(GCE)或 Azure 容器服务(ACS)),Kubernetes 是一个可以考虑的好技术。
如果你们想要建立一个运行多个关键任务工作负载的可靠平台,包括 Docker 容器、传统应用程序(例如 Java)和分布式数据服务(例如 Spark、Kafka、Cassandra、Elastic),并希望所有这些可以移植到云端提供商或者数据中心,那么 Mesos(或我们自己的 Mesos 发行版,Mesosphere DC/OS)更适合你们的需求。
无论您选择什么,您都将拥抱一套可以更有效地利用服务器资源的工具,简化应用程序的可移植性,并提高开发人员的敏捷性。你的选择真的不会有错。
---
via: <https://mesosphere.com/blog/docker-vs-kubernetes-vs-apache-mesos/>
作者:[Amr Abdelrazik](https://mesosphere.com/blog/author/amr-abdelrazik/) 译者:[rieonke](https://github.com/rieonke) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,753 | 3 个开源的音乐播放器:Aqulung、Lollypop 和 GogglesMM | https://opensource.com/article/17/1/open-source-music-players | 2017-08-05T10:35:00 | [
"音乐播放器"
] | https://linux.cn/article-8753-1.html | 
音乐是生活的一部分。[维基百科关于音乐发展历史的文章](https://en.wikipedia.org/wiki/History_of_music)有这样一段不错的描述说:“全世界所有的人们,包括哪怕是最孤立、与世隔绝的部落,都会有自己的特色音乐……”好吧,我们开源人就构成了一个部落。我建议我们的“音乐形式”应该包括开源音乐播放器。在过去几年里,我已经使用体验过不少我能接触到的音乐播放器;[2016 年 12 月份](https://opensource.com/article/16/12/soundtrack-open-source-music-players)我根据这六个标准来总结概括了我使用开源音乐播放器的感受:
1. 必须能够通过配置将音乐原样无损地传递给 [ALSA](http://www.alsa-project.org/main/index.php/Main_Page)。(最高 5 分)
2. 应该有一个不错的“智能播放列表”。(1 分)
3. 不应该强迫用户只能通过播放列表来进行交互。(1 分)
4. 应该能够提供一个简单的方法来显示歌曲的封面图片——使用内嵌的封面图,或使用音乐目录里面的 cover.jpg(或者 .png)文件替代。(1 分)
5. 应该能够在音乐播放的时候显示信号级别和实际比特率。(1 分)
6. 能够呈现出不错的整体组织,结构布局和执行性能。(1 分)
热心的读者告诉我,有三个播放器在我的软件仓库里没有:[Aqualung](http://aqualung.jeremyevans.net/)、[Lollypop](https://gnumdk.github.io/lollypop-web/) 和 [GogglesMM](https://gogglesmm.github.io/)。我并不想在我办公用的电脑里面安装那些来自外面的软件,我承诺过我会配置一个“试验台”来测试这三个音乐播放器,并给出测试的细节。
### Aqualung
[Aqualung](http://aqualung.jeremyevans.net/) 有一个写的清晰明了的网站来解释它众多的特点。其上提供的说明中我发现其中一点特别有趣:
“你能够(也应该)将你的所有音乐按照艺术家/专辑/曲目这样组织成一个树型结构,这样比生成一个一体化的 Winamp/XMMS 播放列表更舒服。”
这点让我有些困惑,因为我总是把我的音乐按照艺术家、专辑和声轨这样组织成树状。但这就可能解释了为什么我有时发现 XMMS 流派的播放器在浏览音乐时有一点古怪。
根据 Aqualung 官网的下载页面说明,官方发布的只有源代码。但是文档上的说明暗示了绝大多数主流的 Linux 发行版本都包括一份 Aqualung 的构建副本,但我当前用的办公电脑所使用的 Linux 发行版 Ubuntu 16.10 并不在此范围内。[Launchpad.net](https://launchpad.net/+search?field.text=aqualung+ppa) 提供有 PPA,但那些软件看起来都有些过时了,所以为什么不试试编译源码安装软件呢?
我根据官网上编译文档的建议和配置脚本的提示安装了 **pkgconf** 以及 **libasound**、**libflac**、**libmp3lame**、**libvorbis**、**libxml2**、**libglib2.0** 和 **libgtk+-2.0** 的开发库。接下来,我就能够干净利索的进行 `configure` 然后进行 `make` 和 `make install`。最终我可以执行 `/usr/local/bin/aqualung` 了。

*Aqualung,不能切换音乐播放的码率。*
一旦 Aqualung 启动运行,我就能看到相当简洁直接的两窗口界面:播放器本身和“音乐商店”。我通过右键点击播放器的音乐面板打开参数设置查看这些可设置的参数,看是否能找到 AudioQuest DragonFly 这个数模转换器,但我没有找到任何相关的迹象。然而,站点上的说明指出可以通过命令行指定输出设备。最终我用 **plughw** 设备才让 Aqualung 启动起来。
在那个时候,真正让我对 Aqualung 感到失望的是 Aqualung 似乎是需要一个固定的输出采样频率。我能够用 Aqualung 播放器的默认设置来正常播放我的 44.1 Khz 文件,但是同样的采样频率播放 96 Khz 的音乐文件时,我不得不关闭软件并重新启动。也正是因为这一点,我不会再继续对 Aqualung 进行使用测评。
**无评分。**
### Lollypop

*优美的 Lollypop 用户界面。*
[Lollypop](https://gnumdk.github.io/lollypop-web/) 有一个华丽的网站。尽管它不在我办公专用的电脑的软件仓库里面,但是有一个“针对 Ubuntu/Debian 用户的下载”链接带你跳转到 [launchpad.net 站点提供的最新的 PPA](https://launchpad.net/%7Egnumdk/+archive/ubuntu/lollypop)。这个站点还提供针对 Flatpak、Arch Linux、Fedora 和 OpenSUSE 这些系统的 Lollypop 软件包的下载。我看了下 [Fedora COPR 上针对各个 Fedora 版本的 Lollypop 下载链接](https://copr.fedorainfracloud.org/coprs/gnumdk/lollypop/),看起来 Lollypop 更新的比较及时而且从 Fedora 版本的 23 到 26 都有对应的软件包提供下载安装。
一天内做一次源码编译就足够了,所以我决定试试从 PPA 安装这款软件。我通过命令行来执行 Lollypop 软件。设置菜单能够在 Lollypop 界面的右上方很显眼地看见。更新完我的音乐后,我开始找电脑的输出设备设置,但是在一番查看后,我不知道该怎么选择合适的输出设备。即便我在命令行通过 **-help** 也找不到有用的帮助信息。
经过一番网上搜索后,我找到一个 Lollypop 开发者的提示,才知道我需要 **gstreamer libav** 来让 Lollypop 工作。由此我初步判断,可能需要某个 **gstreamer** 相关的配置才能让它正常工作,但至少现在我不打算继续尝试了。
Lollypop 有一个优美的用户交互界面和它的优美的网站相得益彰,但是我现在不会进一步对它进行测评,否则我就又多了一个进一步去学习了解 **gstreamer** 的理由。
**无评分。**
### GogglesMM
[Goggles Music Manager](https://gogglesmm.github.io/) 也有一个[在 launchpad.net 及时更新的 PPA](https://launchpad.net/%7Es.jansen/+archive/ubuntu/gogglesmm);安装流程简单明了,我现在可以在命令行执行 **gogglesmm** 了。
GogglesMM,非常容易上手使用,看上去和 Rhythmbox 有点像。我在 GogglesMM 的设置里面的参数设置中找到了音频选项设置,能够让我选择 ALSA 和设置音频输出设备。通过查看 **/proc/asound/DragonFly/stream0** 文件和 DragonFly 自己的 LED 颜色,我确定我能够用 GogglesMM 播放 MP3、44.1-KHz/24-bit 和 96-KHz/24-bit 这几种规格的音乐;因此,就凭 “rate/depth passthrough” 我给 GogglesMM 打 5 分。

*GogglesMM 在播放 96/24 这种规格的音乐,显示音频输出设备选择。*
GogglesMM 的说明文档目前并没有太多细节,但据我所知,开发者是用“过滤器”来实现类似“智能播放列表”的功能的。我用测试环境里的三张专辑尽可能地检验了过滤功能,虽然我喜欢它的效果(特别是能够基于很宽泛的条件来定义歌曲的筛选规则),但这并不是我所说的“智能播放列表”——我指的是借助某种社区数据库来推荐“与当前播放歌曲类似”的曲目。或许我应该把这个功能叫作“自动 DJ”而不是“智能播放列表”,但据我所知,这个特性并不存在于当前版本的 GogglesMM 中,所以“智能播放列表”一项打 0 分。
至于播放列表队列的操作,这款应用能够支持播放你选中的音乐,也能够随机播放音乐或者把一些音乐整合到一个播放列表里面,所以我因为“播放列表的队列选项”给它打 1 分。
同样的,它看起来也能够很好地不需要额外的干预来管理我的音乐艺术封面(每个专辑都包含一张合适的艺术封面, GogglesMM 可以自动识别),所以为“内嵌的艺术封面或者封面图片”打 1 分。
我找不到任何方法来让 GogglesMM 显示信号级别或者实际的比特率。我也不能找到显示比特率和位深度的方法;尽管这款应用能够显示一个“格式”列,但是在我的音乐栏里面除了显示音乐格式不会显示其他的信息了,所以为 GogglesMM 的“信号级别和有效比特率”打 0 分。
至于 GogglesMM 的整体结构,它的所有按钮选项都正好完全符合我的使用习惯。我能够在播放队列里面看到歌曲的时间和歌曲当前已播放的时间所占歌曲总体时间的比例,专辑封面,歌曲名,专辑名和歌唱者。可用的播放栏列表看起来相当大而有用,比如也包括了作曲者。最后,一个真正让我眼前一亮的特点是,音量控制竟然包含了 ALSA 音量。也就是如果我启动 alsamixer 的话,然后不管是在 alsamixer 还是在 GogglesMM 里面调整音量,另一个音量控制也会做相应的音量调整。这个出乎我意外之外的功能相当的酷而且这个功能在其他的音乐播放器上也不常见,因此为它的整体架构给 GogglesMM 加 1 分。
最终 GogglesMM 的这些优点共计得分 8。所表现出来的特点确实很优秀。
**评分:8**
### 到目前为止所给出的评分
我之前所提到的这几个开源音乐播放器中,我最喜欢的还是 [Guayadeque](http://www.guayadeque.org/),根据我制定的标准来进行排名的话,我给 Guayadeque 打满分 10 分。来看下我对这三个开源音乐播放器的评分总结吧(N/R 代表“无评分”,因为我不确定如何配置这些播放器,让它们以完美码率的直通(passthrough)模式工作,以便我的数模转换器能以相应源的码率和位深度接收 PCM 数据):

请注意下我用的这个排名方法并不适合每个人。特别是很多人并不清楚高品质音乐的价值,他们更喜欢专有格式的音乐能够给他们带来更好的音乐品质。
与此同时,我会继续评测一些之前向大家承诺的音乐播放器一些和评测评分无关的特性。我特别喜欢 Lollypop 的外观,我也觉得待揭秘的 **gstreamer** 有一种神秘的魅力,它能让基于 **gstreamer** 的音乐播放器不用通过转换就能传输它们的数据。
### 关于音乐的部分……
我还在保持继续购买唱片的习惯,对于唱片的购买我有些不错的推荐。
第一个就是 Nils Frahm 的专辑 [Felt](http://www.nilsfrahm.com/works/felt/),这是我女儿送我的一份非常贴心的礼物。我真的真的很喜欢这张专辑,它的绝大部分歌曲都是在深夜用电麦录制的非常接近钢琴的弦乐,而且也有不少有趣的钢琴演奏的背景音乐,真的是很棒的音乐。至于 Nils Frahm 其他的音乐,这些唱片提供的下载链接允许你下载质量高达 96-KHz,24-bit FLAC 格式的音乐。
第二个就是 Massive Attack 的专辑 Protection 的 [Mad Professor 的重混版](https://en.wikipedia.org/wiki/No_Protection_(Massive_Attack_album)),专辑名是 No Protection。你可以[在这里了解这份专辑](https://www.youtube.com/watch?v=9TvgRb4wiB0),并且如果你想要尝试这份专辑最原始的版本,[这里是它的所有汇总信息](https://www.youtube.com/watch?v=LCUv-hLN71c)。该专辑最初发布于 20 世纪 90 年代,这份专辑刻录在唱片上面而且听起来非常奇幻。遗憾的是,不提供下载链接。
第三个就是 Bayonne 的 [Primitives](https://musicglue.com/bayonne/products/primitives---vinyl--/)。[这是专辑要表达的想法](https://www.youtube.com/watch?v=WZ6xl6CKITE)。Guardian 报社把这份专辑称作是“新式无聊”。那么这种类型的音乐到底怎么样呢?如果这些音乐真的是非常令人乏味的,或许是时候来换份工作了,无论如何你可以试试听这些音乐;或许你会觉得它确实很乏味或者你会像我一样喜欢上这份音乐。
(图片来源:[互联网档案馆](https://www.flickr.com/photos/internetarchivebookimages/14565158187/in/photolist-ocoBRG-ocqdPM-ot9YYX-ovb7SE-oroqfj-ot8Sfi-of1HoD-oc5c28-otBk3B-foZxvq-ocoUvo-4TqEKE-otsG7t-oeYo4w-ornGMQ-orpD9y-wLDBUf-outZV7-oc26Ui-ortZpW-ocpWLH-ocoK6c-ocYDY1-od6ADb-xxAKyY-ocofDx-oc4Jr5-otyT2E-ocpUyu-xqTAb6-oc8gK1-otdsK5-ovhkz2-ocpcHj-oc8xwk-otgmZG-otr595-otnv4o-otvdRs-ovfYEt-ovDXUV-obUPJ6-oc2MuJ-oc4zLE-oruPbN-oc1P2H-ouRk93-otaGd3-otTmwB-oc5f62)书中的图片;由 Opensource.com 编辑发布。遵循 [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/) 协议。)
---
作者简介:
Chris Hermansen - 自 1978 年毕业于 British Columbia 大学后一直从事计算机相关工作,2005 年之前是 Solaris、SunOS、UNIX System V 的忠实用户,之后是 Linux 的忠实用户。在技术方面,我的职业生涯大部分时间都是在做数据分析;特别是空间数据分析。拥有丰富的和数据分析相关的编程经验,用过的编程语言有 awk,Python、PostgreSQL、 PostGIS 和 最新的 Groovy。
---
via: <https://opensource.com/article/17/1/open-source-music-players>
作者:[Chris Hermansen](https://opensource.com/users/clhermansen) 译者:[WangYueScream](https://github.com/WangYueScream) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Music is a part of life. [Wikipedia's article on the history of music](https://en.wikipedia.org/wiki/History_of_music) contains this great phrase: "Since all people of the world, including the most isolated tribal groups, have a form of music...." Well, we open source folk form a tribe—that's for sure. I propose that our "form of music" includes open music players. Over the past year, I've been taking a look at the various players available; in [December 2016](https://opensource.com/article/16/12/soundtrack-open-source-music-players) I summarized my ongoing evaluation of open music players using these six criteria:
- Must be configurable to pass the music through unchanged to
[ALSA](http://www.alsa-project.org/main/index.php/Main_Page). (max 5 marks) - Should have a good "smart playlist" feature. (1 mark)
- Should not force the user to always interact through playlists. (1 mark)
- Should provide a simple approach to cover art—use the embedded cover art or fall back to cover.jpg (or .png) in the music directory. (1 mark)
- Should show the signal level and effective bit rate as the music plays. (1 mark)
- Should present good-to-great overall organization, layout, and performance. (1 mark)
Three players suggested to me by kind readers were not available in my repositories: [Aqualung](http://aqualung.jeremyevans.net/), [Lollypop](https://gnumdk.github.io/lollypop-web/), and [GogglesMM](https://gogglesmm.github.io/). Not wanting to install stuff from the wild on my work computer, I promised to configure a "test bed" for this purpose and detail the results.
## Aqualung
[Aqualung](http://aqualung.jeremyevans.net/) has a clearly written website that explains its various features. One of the comments there I found interesting was this one:
"You can (and should) organize your music into a tree of Artists/Records/Tracks, thereby making life easier than with the all-in-one Winamp/XMMS playlist."
This puzzled me because I think I have always had my music organized into a tree of artists, albums, and tracks. But maybe this explains why I find the XMMS-derived players to be a bit odd in terms of their music browsing capability.
According to the Aqualung download page, the official release is source-only. While the comments there suggest that most major Linux distributions include a built copy of Aqualung, this is not the case with the distro I'm currently using on my work computer, Ubuntu 16.10. [Launchpad.net](https://launchpad.net/+search?field.text=aqualung+ppa) does have personal package archives (PPAs), but they seem a bit out of date, so why not build from source?
I installed **pkgconf** and dev versions of **libasound**, **libflac**, **libmp3lame**, **libvorbis**, **libxml2**, **libglib2.0**, and **libgtk+-2.0**, generally following the suggestions of the compiling page on the site and the usual "hints" from the configure script. Next, I was able to **configure** cleanly and do a **make** and a **make install**. And from there, I was able to execute **/usr/local/bin/aqualung**.
Aqualung, unable to switch resolution.
Once Aqualung was up and running, I saw a straightforward and relatively barebones two-window user interface, the player itself and the "Music Store." I opened Preferences by right-clicking on the music pane of the player and looked around to see where I could select my AudioQuest DragonFly digital-to-analog converter, but I saw no sign of anything there. However, the site notes that you can specify the output device on the command line. I ended up needing to use the **plughw** device to get Aqualung to start.
At that point, I was disappointed to discover that Aqualung seems to require a fixed output sample rate. I could play my 44.1-KHz files just fine with the default setting, but to play my 96-KHz files, I had to stop and restart with that sample rate. Aqualung will not pass the bit stream unchanged through to the digital-to-analog converter. With that, I did not bother to continue my evaluation.
**Not rated.**
## Lollypop
The lovely Lollypop user interface.
[Lollypop](https://gnumdk.github.io/lollypop-web/) has a gorgeous website. Although it's not in my work computer's repositories, there is a "Download Ubuntu/Debian" link that points to an [up-to-date PPA on launchpad.net](https://launchpad.net/~gnumdk/+archive/ubuntu/lollypop). The site offers other downloads for Flatpak, Arch Linux, Fedora, FreeBSD, and OpenSUSE. Out of curiosity, I took a look at the [Fedora link on Fedora COPR](https://copr.fedorainfracloud.org/coprs/gnumdk/lollypop/), and it also looks quite up-to-date, offering builds for Fedora 23–26.
One build from source was enough excitement for that day, so I decided to try the PPA. I was able to execute Lollypop from the command line. The Settings menu was obvious in the upper right of the screen. After updating my music, I went looking for my output device configuration, but after some poking around, I couldn't find how to select the output device. Even executing on the command line with **–help** did not enlighten me.
After some searching on the Internet I found a Lollypop developer stating that I needed **gstreamer libav** to get Lollypop to work. From this I have tentatively concluded that there may be a **gstreamer** configuration possibility to make this work, but I'm not going to pursue that for now, at least.
Lollypop has a lovely user interface to match its lovely web page, but for now, I did not rate it. I have another reason to learn more about **gstreamer**.
**Not rated.**
## GogglesMM
[Goggles Music Manager](https://gogglesmm.github.io/) also has an [up-to-date PPA on launchpad.net](https://launchpad.net/~s.jansen/+archive/ubuntu/gogglesmm); the installation was straightforward and I was able to execute **gogglesmm** from the command line.
GogglesMM, out of the box, looks somewhat like Rhythmbox. I found the Audio tab under Settings > Preferences, which let me select ALSA and set my output device. I confirmed that I can play MP3, 44.1-KHz / 24-bit and 96-KHz / 24-bit music by looking at **/proc/asound/DragonFly/stream0** and the LED color on the DragonFly itself; therefore, 5 points for "rate/depth passthrough."
GogglesMM playing at 96/24, showing output device.
The documentation for GogglesMM is not largely detailed at this point, but as far as I am able to tell, the developers use filters to implement something like "smart playlists." I reviewed the functioning of filters as best as I could with the three albums installed on my test bed, and while I like what I see (especially being able to define selection criteria for songs based on a broad range of criteria), this is not what I mean when I use the term "smart playlists," which I think of as using some kind of community database of "songs like the current one." Maybe I should call this "automatic DJ" instead, but as far as I am able to determine, this feature does not exist in the current version of GogglesMM, so 0 points for "smart playlist."
As for the queue versus playlist operation, the application supports either playing through the selected songs both in order or randomly or putting songs in a playlist, so 1 for "queue option to playlist."
Similarly, it seemed to manage my cover art well without extra intervention (each album contained the appropriate cover art, which was recognized automatically by GogglesMM), so 1 for "embedded cover art or cover.jpg."
I could not find any way to show signal level or effective bit rate. Nor could I find any way of seeing the bit rate and bit depth; although the application can display a "format" column, it doesn't show anything in that field on my music, so 0 for "signal level and effective bit rate."
With respect to overall organization, GogglesMM hits all the right buttons for me. I can see what's in the play queue, the time and proportion of song played and left to play, album cover, song name, album title, and artist. Also, the available display column list seems quite large and useful, including composer for example. Finally, a really wonderful thing, the volume control actually controls the ALSA volume. If I bring up alsamixer and adjust the volume in either GogglesMM or alsamixer, the other's volume control moves and the volume adjusts. This is pretty cool and surprisingly not all that common, so 1 for overall organization.
In total, then, GogglesMM merits an 8. Excellent performance indeed.
**Rating: 8**
## The ratings so far
As I've mentioned in the past, my favorite player is [Guayadeque](http://www.guayadeque.org/), which gets a perfect 10, according to my ranking. Take a look at a summary of my ratings to date (N/R meaning "not rated," because I was unable to determine how to configure those players to work in bit-perfect, passthrough mode so that my digital-to-analog converter receives the PCM data at the bit rate and bit depth of the source):
Please note that my ranking scheme is not for everyone. In particular, many people don't find value in music files at higher-than-CD resolution, and many people are happy with proprietary formats that promise better audio quality.
Meanwhile, I will continue to evaluate some of the promising non-rated options. I especially like the look of Lollypop, and I feel that there is a secret spell for **gstreamer** just waiting to be unlocked that will let **gstreamer**-based players pass their data through without conversions.
## And the music...
My vinyl buying spree continues, and I have some great recommendations.
First is Nils Frahm's album, [Felt](http://www.nilsfrahm.com/works/felt/), which was a very thoughtful gift from my daughter. I really, really like this album, which was largely recorded late at night with microphones very close to the piano strings and lots of interesting ambient piano noise—really beautiful music. Like other Nils Frahm music, the vinyl comes with a download code that allows the downloading of the album in up to 96-KHz, 24-bit FLAC format.
The second is [Mad Professor's remix](https://en.wikipedia.org/wiki/No_Protection_(Massive_Attack_album)) of Massive Attack's album, Protection, titled No Protection. You can [get an idea of it here](https://www.youtube.com/watch?v=9TvgRb4wiB0), and if you would like to try out the original, [here is what it's all about](https://www.youtube.com/watch?v=LCUv-hLN71c). Originally released in the 1990s, this album is back on vinyl and it sounds fantastic. Unfortunately, no download code accompanies it.
The third is [Primitives](https://musicglue.com/bayonne/products/primitives---vinyl--/) by Bayonne. [Here is an idea](https://www.youtube.com/watch?v=WZ6xl6CKITE) of what it's like. The Guardian newspaper has lumped this in with "the new boring," how's that for a genre? Really, if it's all so boring, maybe it's time for a career change, Anyway, give this one a whirl; maybe you'll find it boring, or maybe, like me, you'll like it!
|
8,754 | 值得收藏的 27 个机器学习的小抄 | https://unsupervisedmethods.com/cheat-sheet-of-machine-learning-and-python-and-math-cheat-sheets-a4afe4e791b6 | 2017-08-05T11:09:00 | [
"机器学习",
"ML"
] | https://linux.cn/article-8754-1.html | 
<ruby> 机器学习 <rp> ( </rp> <rt> Machine Learning </rt> <rp> ) </rp></ruby>有很多方面,当我开始研究学习它时,我发现了各种各样的“小抄”,它们简明地列出了给定主题的关键知识点。最终,我汇集了超过 20 篇的机器学习相关的小抄,其中一些我经常会翻阅,而另一些我也获益匪浅。这篇文章里面包含了我在网上找到的 27 个小抄,如果你发现我有所遗漏的话,请告诉我。
机器学习领域的变化是日新月异的,我想这些可能很快就会过时,但是至少在 2017 年 6 月 1 日时,它们还是很潮的。
如果你喜欢这篇文章,那就分享给更多人,如果你想感谢我,就到[原帖地址](https://unsupervisedmethods.com/cheat-sheet-of-machine-learning-and-python-and-math-cheat-sheets-a4afe4e791b6)点个赞吧。
### 机器学习
这里有一些有用的流程图和机器学习算法表,我只包括了我所发现的最全面的几个。
#### 神经网络架构
来源: <http://www.asimovinstitute.org/neural-network-zoo/>

*神经网络公园*
#### 微软 Azure 算法流程图
来源: <https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-algorithm-cheat-sheet>

*用于微软 Azure 机器学习工作室的机器学习算法*
#### SAS 算法流程图
来源: <http://blogs.sas.com/content/subconsciousmusings/2017/04/12/machine-learning-algorithm-use/>

*SAS:我应该使用哪个机器学习算法?*
#### 算法总结
来源: <http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/>

*机器学习算法指引*
来源: [http://thinkbigdata.in/best-known-machine-learning-algorithms-infographic/](http://thinkbigdata.in/best-known-machine-learning-algorithms-infographic/?utm_source=datafloq&utm_medium=ref&utm_campaign=datafloq)

*已知的机器学习算法哪个最好?*
#### 算法优劣
来源: <https://blog.dataiku.com/machine-learning-explained-algorithms-are-your-friend>

### Python
自然而然,也有许多在线资源是针对 Python 的,这一节中,我仅包括了我所见过的最好的那些小抄。
#### 算法
来源: <https://www.analyticsvidhya.com/blog/2015/09/full-cheatsheet-machine-learning-algorithms/>

#### **Python 基础**
来源: <http://datasciencefree.com/python.pdf>

来源: <https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics#gs.0x1rxEA>

#### Numpy
来源: <https://www.dataquest.io/blog/numpy-cheat-sheet/>

来源: <http://datasciencefree.com/numpy.pdf>

来源: <https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.Nw3V6CE>

来源: <https://github.com/donnemartin/data-science-ipython-notebooks/blob/master/numpy/numpy.ipynb>

#### Pandas
来源: <http://datasciencefree.com/pandas.pdf>

来源: <https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.S4P4T=U>

来源: <https://github.com/donnemartin/data-science-ipython-notebooks/blob/master/pandas/pandas.ipynb>

#### Matplotlib
来源: <https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet>

来源: <https://github.com/donnemartin/data-science-ipython-notebooks/blob/master/matplotlib/matplotlib.ipynb>

#### Scikit Learn
来源: <https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet#gs.fZ2A1Jk>

来源: <http://peekaboo-vision.blogspot.de/2013/01/machine-learning-cheat-sheet-for-scikit.html>

来源: <https://github.com/rcompton/ml_cheat_sheet/blob/master/supervised_learning.ipynb>

#### Tensorflow
来源: <https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/1_Introduction/basic_operations.ipynb>

#### Pytorch
来源: <https://github.com/bfortuner/pytorch-cheatsheet>

### 数学
如果你希望了解机器学习,那你就需要彻底地理解统计学(特别是概率)、线性代数和一些微积分。我在本科时辅修了数学,但是我确实需要复习一下了。这些小抄提供了机器学习算法背后你所需要了解的大部分数学知识。
#### 概率
来源: <http://www.wzchen.com/s/probability_cheatsheet.pdf>

*概率小抄 2.0*
#### 线性代数
来源: <https://minireference.com/static/tutorials/linear_algebra_in_4_pages.pdf>

*四页内解释线性代数*
#### 统计学
来源: <http://web.mit.edu/~csvoss/Public/usabo/stats_handout.pdf>

*统计学小抄*
#### 微积分
来源: <http://tutorial.math.lamar.edu/getfile.aspx?file=B,41,N>

*微积分小抄*
| 200 | OK | null |
8,755 | lxc exec 介绍 | https://cbrauner.wordpress.com/2017/01/20/lxc-exec-vs-ssh/ | 2017-08-06T09:26:00 | [
"LXD",
"lxc"
] | https://linux.cn/article-8755-1.html | 
最近,我对 `lxc exec` 进行了几个改进。如果你不知道它的话我介绍一下,`lxc exec` 是 [LXD](https://github.com/lxc/lxd) 的客户端工具,使用 [LXD](https://github.com/lxc/lxd) [客户端 api](https://github.com/lxc/lxd/blob/master/client.go) 与 LXD 守护程序通信,并执行用户想要执行的各种程序,以下是你可以使用的一个例子:

我们的主要目标之一就是使 `lxc exec` 与 `ssh` 类似,因为它是交互式或非交互式远程运行命令的标准。这使得把 `lxc exec` 做好变得有点棘手。
### 1、 处理后台任务
一个长期存在的问题当然是如何正确处理后台任务。下面是一个 asciinema 演示,展示了这个问题在 [LXD](https://github.com/lxc/lxd) 2.7 之前的版本中的样子:

你可以看到,在后台执行任务将导致 `lxc exec` 无法退出。许多命令可以触发此问题:
```
chb@conventiont|~
> lxc exec zest1 bash
root@zest1:~# yes &
y
y
y
.
.
.
```
现在没有什么能救你了。`yes` 将会永远直接写入`stdout`。
问题的根源在于 `stdout` 是一直打开着的,但这是必要的,因为它用以确保用户所启动的进程写入的任何数据实际上都是通过我们建立的 websocket 连接读取并发回的。
可以想象,当你运行一个 shell 会话,想在后台启动一个进程然后马上退出 shell 时,这就成了一个大麻烦。对不起,以前它并不能如你预期的那样工作。
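顺带一提,这种“stdout 被后台进程一直持有”的模式并不是 LXD 特有的,用普通的 shell 就能在本地复现一个非常相似的现象(下面只是通用的 shell 行为演示,并不是 LXD 的代码):

```shell
# 命令替换 $(...) 会一直读取管道,直到读到 EOF 才返回;
# 后台的 sleep 继承并持有着 stdout 的写端,
# 所以即使 echo 早已执行完毕,这一行仍会阻塞约 2 秒
out=$(sh -c 'sleep 2 & echo started')
echo "$out"    # 两秒之后才输出:started
```

这正对应文中描述的情况:前台程序(这里是 `echo`)早已退出,但只要还有后台进程持有 `stdout`,读端就等不到 EOF。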
第一种也是最朴素的方法是,一旦检测到前台程序(例如 shell)已经退出,就直接关闭 `stdout`。但这并不像想象中那么好用,当你运行快速执行的程序时,这个问题会变得很明显,比如:
```
lxc exec -- ls -al /usr/lib
```
这里 `lxc exec` 进程(和相关的 `forkexec` 进程。但现在不要考虑它,只要记住 `Go` + `setns()` 不相往来就行了……)会在 `stdout` 中所有的*缓冲*数据被读取之前退出。这种情况下将会导致截断输出,没有人想要这样。在尝试使用几个方法来解决问题之后,包括禁用 pty 缓冲(我告诉你,这不太漂亮,也没有如预期工作。)和其他奇怪的思路,我设法通过几个 `poll()` “技巧”(在某种意义上说一个“技巧”)解决了这个问题。现在你终于可以运行后台任务,并且可以完全退出了。如图:

### 2、 报告由信号引起的退出码
`ssh` 是一个很棒的工具。但有一件事我一直不喜欢:当 ssh 运行的命令收到一个信号时,`ssh` 总是会报告 `-1`,也就是退出码 `255`。当你想要了解是哪个信号导致了程序终止时,这很烦人。这就是为什么我最近实现了标准 shell 所使用的惯例 `128 + n` 来报告任何由信号导致的退出,其中 `n` 是导致执行程序退出的信号编号。例如,对于 `SIGKILL` 信号,你会看到 `128 + SIGKILL = 137`(计算其他致命信号的退出码就作为留给读者的练习吧)。所以你可以这么做:
```
chb@conventiont|~
> lxc exec zest1 sleep 100
```
现在,将 `SIGKILL` 发送到执行程序(不是 `lxc exec`本身,因为 `SIGKILL` 不可转发)。
```
kill -KILL $(pidof sleep 100)
```
最后检查你程序的退出码:
```
chb@conventiont|~
> echo $?
137
```
瞧。这显然只在这些情况下有效:a) 退出码没有超过 `8` 位的计算壁垒,b) 执行程序没有用 `137` 来表示成功(这可真……有趣?!)。这两个论点对我来说似乎都不太有说服力。前者是因为致命信号的编号不*应该*超过这个范围;后者是因为 (i) 这是用户自己的问题,(ii) 这些退出码实际上是保留的(我是*这样认为*的),(iii) 你在本地或其他地方运行程序时也会遇到同样的问题。
我看到的主要优点是这能够回报执行程序细粒度的退出状态。注意,我们不会报告*所有*被信号杀死的程序实例。比如说,当你的程序能够处理 `SIGTERM` 并且完全退出时,[LXD](https://github.com/lxc/lxd) 没有简单的方法来检测到这个情况并报告说这个程序被信号杀死了。你只会简单地收到退出码 `0`。
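`128 + n` 这个约定本身不需要 LXD 也可以验证——下面是一个通用的 shell 演示(大多数 shell,包括 bash 和 dash,都遵循这个约定):

```shell
# 启动一个子进程,然后用 SIGKILL(信号编号 9)杀死它,
# 再检查 shell 报告的退出码是否等于 128 + 9 = 137
sh -c 'sleep 100' &
pid=$!
kill -KILL "$pid"
wait "$pid"
status=$?
echo "退出码:$status,对应信号编号:$((status - 128))"   # 输出:退出码:137,对应信号编号:9
```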
### 3、 转发信号
这可能不太有趣(或者也许不是,谁知道呢),但我发现它非常有用。正如你在 `SIGKILL` 的例子中看到的那样,我明确地指出,必须将 `SIGKILL` 发送给执行程序,而不是 `lxc exec` 命令本身。这是因为 `SIGKILL` 无法在程序中处理,程序唯一能做的就是去死,像现在这样……马上(你明白了吧……)。但是程序可以处理很多其他信号,比如 `SIGTERM`、`SIGHUP`,当然也包括 `SIGUSR1` 和 `SIGUSR2`。因此,当你把这些可以被处理的信号发送给 `lxc exec` 而不是执行程序时,较新版本的 [LXD](https://github.com/lxc/lxd) 会把信号转发给执行中的进程。这在脚本中非常方便。
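“把发给自己的信号转发给子进程”这个模式在普通 shell 脚本里也很常用。下面是一个与 LXD 实现无关的最小示例,只演示这个思路本身:

```shell
#!/bin/sh
# 父脚本捕获 SIGTERM,把它转发给后台子进程,
# 然后按前文的 128 + n 约定退出(SIGTERM = 15,即退出码 143)
sleep 30 &
child=$!
trap 'kill -TERM "$child"; exit $((128 + 15))' TERM
wait "$child"
```

对这个脚本本身发送 `kill -TERM <脚本 pid>`,它会把信号转发给 `sleep` 并以 143 退出,而不是把 `sleep` 残留在后台。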
无论如何,我希望你觉得这篇小小的 `lxc exec` 文章/胡言乱语有用。享受 [LXD](https://github.com/lxc/lxd) 吧,它是一只值得与之玩耍的疯狂而美丽的野兽。请到在线环境试试:<https://linuxcontainers.org/lxd/try-it/>;开发人员可以看看这里:<https://github.com/lxc/lxd>,并给我们提交补丁。
我们不要求签署任何 CLA,我们遵循内核风格,只要其中有 “Signed-off-by” 这行就好。
---
via: <https://cbrauner.wordpress.com/2017/01/20/lxc-exec-vs-ssh/>
作者:[brauner](https://cbrauner.wordpress.com) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | Recently, I’ve implemented several improvements for `lxc exec`
. In case you didn’t know, `lxc exec`
is [LXD](https://github.com/lxc/lxd)‘s client tool that uses the [LXD](https://github.com/lxc/lxd) [client api](https://github.com/lxc/lxd/blob/master/client.go) to talk to the LXD daemon and execute any program the user might want. Here is a small example of what you can do with it:
One of our main goals is to make `lxc exec`
feel as similar to `ssh`
as possible since this is the standard of running commands interactively or non-interactively remotely. Making `lxc exec`
behave nicely was tricky.
## 1. Handling background tasks
A long-standing problem was certainly how to correctly handle background tasks. Here’s an asciinema illustration of the problem with a pre [LXD](https://github.com/lxc/lxd) 2.7 instance:
What you can see there is that putting a task in the background will lead to `lxc exec`
not being able to exit. A lot of sequences of commands can trigger this problem:
chb@conventiont|~ > lxc exec zest1 bash root@zest1:~# yes & y y y . . .
Nothing would save you now. `yes`
will simply write to `stdout`
till the end of time as quickly as it can…
The root of the problem lies with `stdout`
being kept open which is necessary to ensure that any data written by the process the user has started is actually read and sent back over the websocket connection we established.
As you can imagine this becomes a major annoyance when you e.g. run a shell session in which you want to run a process in the background and then quickly want to exit. Sorry, you are out of luck. Well, you were.
The first, and naive approach is obviously to simply close `stdout`
as soon as you detect that the foreground program (e.g. the shell) has exited. Not quite as good as an idea as one might think… The problem becomes obvious when you then run quickly executing programs like:
lxc exec -- ls -al /usr/lib
where the `lxc exec`
process (and the associated `forkexec`
process (Don’t worry about it now. Just remember that `Go`
+ `setns()`
are not on speaking terms…)) exits before all *buffered* data in `stdout`
was read. In this case you will cause truncated output and no one wants that. After a few approaches to the problem that involved, disabling pty buffering (Wasn’t pretty I tell you that and also didn’t work predictably.) and other weird ideas I managed to solve this by employing a few `poll()`
“tricks” (In some sense of the word “trick”.). Now you can finally run background tasks and cleanly exit. To wit:
## 2. Reporting exit codes caused by signals
`ssh`
is a wonderful tool. One thing however, I never really liked was the fact that when the command that was run by ssh received a signal `ssh`
would always report `-1`
aka exit code `255`
. This is annoying when you’d like to have information about what signal caused the program to terminate. This is why I recently implemented the standard shell convention of reporting any signal-caused exits using the standard convention `128 + n`
where `n`
is defined as the signal number that caused the executing program to exit. For example, on `SIGKILL`
you would see `128 + SIGKILL = 137`
(Calculating the exit codes for other deadly signals is left as an exercise to the reader.). So you can do:
chb@conventiont|~ > lxc exec zest1 sleep 100
Now, send `SIGKILL`
to the executing program (Not to `lxc exec`
itself, as `SIGKILL`
is not forwardable.):
kill -KILL $(pidof sleep 100)
and finally retrieve the exit code for your program:
chb@conventiont|~ > echo $? 137
Voila. This obviously only works nicely when a) the exit code doesn’t breach the `8`
-bit wall-of-computing and b) when the executing program doesn’t use `137`
to indicate success (Which would be… interesting(?).). Both arguments don’t seem too convincing to me. The former because most deadly signals *should* not breach the range. The latter because (i) that’s the users problem, (ii) these exit codes are actually reserved (I *think*.), (iii) you’d have the same problem running the program locally or otherwise.
The main advantage I see in this is the ability to report back fine-grained exit statuses for executing programs. Note, by no means can we report back *all* instances where the executing program was killed by a signal, e.g. when your program handles `SIGTERM`
and exits cleanly there’s no easy way for [LXD](https://github.com/lxc/lxd) to detect this and report back that this program was killed by signal. You will simply receive success aka exit code `0`
.
## 3. Forwarding signals
This is probably the least interesting (or maybe it isn’t, no idea) but I found it quite useful. As you saw in the `SIGKILL`
case before, I was explicit in pointing out that one must send `SIGKILL`
to the executing program not to the `lxc exec`
command itself. This is due to the fact that `SIGKILL`
cannot be handled in a program. The only thing the program can do is die… like right now… this instance… sofort… (You get the idea…). But a lot of other signals `SIGTERM`
, `SIGHUP`
, and of course `SIGUSR1`
and `SIGUSR2`
can be handled. So when you send signals that can be handled to `lxc exec`
instead of the executing program, newer versions of [LXD](https://github.com/lxc/lxd) will forward the signal to the executing process. This is pretty convenient in scripts and so on.
In any case, I hope you found this little `lxc exec`
post/rant useful. Enjoy [LXD](https://github.com/lxc/lxd) it’s a crazy beautiful beast to play with. Give it a try online [https://linuxcontainers.org/lxd/try-it/](https://linuxcontainers.org/lxd/try-it/) and for all you developers out there: Checkout [https://github.com/lxc/lxd](https://github.com/lxc/lxd) and send us patches. 🙂 We don’t require any `CLA`
to be signed, we simply follow the kernel style of requiring a `Signed-off-by`
line. 🙂 |
8,756 | CoreOS 和 OCI 揭开了容器工业标准的论战 | http://www.linuxinsider.com/story/84689.html | 2017-08-06T11:27:16 | [
"Docker",
"容器",
"OCI"
] | https://linux.cn/article-8756-1.html | 
[CoreOS](https://coreos.com/) 和 [开放容器联盟(OCI)](https://www.opencontainers.org/) 周三(2017 年 7 月 19 日)发布的镜像和运行时标准主要参照了 Docker 的镜像格式技术。
然而,OCI 决定在 Docker 的事实标准平台上建立模型引发了一些问题。一些批评者提出其他方案。
CoreOS 的 CTO 及 OCI 技术管理委员会主席 Brandon Philips 说, 1.0版本为应用容器提供了一个稳定标准。他说,拥有产业领导者所创造的标准,应能激发 OCI 合作伙伴进一步地发展标准和创新。Philips 补充道,OCI 完成 1.0 版本意味着 OCI 运行时规范和 OCI 镜像格式标准现在已经可以广泛使用。此外,这一成就将推动 OCI 社区稳固日益增长的可互操作的可插拔工具集市场。
产业支持的标准将提供一种信心:容器已被广泛接受,并且 Kubernetes 用户将获得更进一步的支持。
Philips 告诉 LinuxInsider,结果是相当不错的,认证流程已经开始。
### 合作挑战
Philips 说,开放标准是容器生态系统取得成功的关键,构建标准的最好方式是与社区密切协作。然而,在 1.0 版本上达成共识所花费的时间超出了预期。
“早期,最大的挑战在于如何确定项目的发布模式及如何实施该项目”,他追述道,“每个人都低估了项目所要花费的时间。”
他说,OCI 联盟成员对他们想做的事情抱有不相匹配的预期,但是在过去的一年中,该组织了解了期望程度,并且经历了更多的测试。
### 追逐标准
CoreOS 官方在几年前就开始讨论行业支持的容器镜像和运行时规范的开放标准的想法,Phillips 说,早期的探索使我们认识到:在标准镜像格式上达成一致是至关重要的。
CoreOS 和容器技术创造者 [Docker](https://www.docker.com/) 在 2015 年 6 月宣布 OCI 成立。合作起始于 21 个行业领导者制定开放容器计划(OCP)。它作为一个非营利组织,旨在建立云存储软件容器的最低通用标准。
联盟包括容器业界的领导者:Docker、微软、红帽、IBM、谷歌和 Linux 基金会。
OCI 的目的是让应用开发者相信:当新的规范出来并开发出新的工具时,部署在容器上的软件仍然能够持续运行。这种信心必须同时满足所有私有和开源软件。
工具和应用是私有还是开源的并没有什么关系。当规范开始应用,产品可以被设计成与任何容器配置相适应,Philips 说。
“你需要有意识地超越编写代码的人能力之外创建标准。它是一个额外的付出。”他补充道。
作为联盟的一部分,Docker 向 OCP(开放容器计划)捐献出它的镜像格式的事实标准技术。它包括该公司的容器格式、运行时代码和规范。建立 OCI 镜像标准的工作起始于去年。
标准的里程碑给予容器使用者开发、打包、签名应用容器的能力。他们也能够在各种容器引擎上运行容器,Philips 强调。
### 唯一选择?
[Pund-IT](http://www.pund-it.com/) 的首席分析师 Charles King 表示:联盟面临着两种实现标准的方式。第一种选择是从零开始,汇集意向相同的人员来建立标准,以避免分歧。
但是联盟成员似乎满足于第二种方案:采用一种强大的市场领先的平台作为一个有效的标准。
“Docker 对 [Linux 基金会](http://www.linuxfoundation.org/)的贡献使 OCI 坚定的选择了第二种方案。但是那些关注于 Docker 的做法和它的市场地位的人也许感觉应该有更好的选择。”King 对 LinuxInsider 说。
事实上,有个 OCI 成员 CoreOS 在开始的时候对该组织的总体方向进行了一些强烈的批评。他说,“所以看看 V1.0 版本是否处理或不处理那些关注点将是很有趣的事情。”
### 更快之路
Docker 已经被广泛部署的运行时实现是建立开放标准的合适基础。据 [Cloud Technology Partners](https://www.cloudtp.com/) 的高级副总裁 David Linthicum 所说,Docker 已经是一个事实标准。
“我们能很快让它们工作起来也是很重要的。但是一次次的标准会议、处理政治因素以及诸如此类的事情只是浪费时间。”他告诉 LinuxInsider。
但是现在没有更好的选择,他补充道。
据 RedHat 公司的 Linux 容器技术高级布道者 Joe Brockmeier 所说,Docker 的运行时是 runC 。它是 OCI 运行时标准的一种实现。
“因此,runC 是一个合适的运行时标准的基础。它被广泛接受并成为了大多数容器技术实现的基础。”他说。
OCI 是比 Docker 更进一步的标准。尽管 Docker 确实提交了遵循 OCI 规范的底层代码,然而这一系列代码就此止步,并且没有真正可行的替代方案存在。
### 对接问题
Pund-IT 的 King 建议:采用一种广泛使用的产业标准将简化和加速许多公司对容器技术的采纳和管理。也有可能一些关键的供应商将继续关注他们自己的专有容器技术。
“他们辩称他们的做法是一个更好的方式,但这将有效地阻止 OCI 取得市场的主导地位。”他说,“从一个大体上实现的标准开始,就像 OCI 所做的那样,也许并不能完美地使所有人满意,但是这也许能比其他方案更加快速有效地实现目标。”
容器已经标准化的部署到了云上,Docker 显然是领先的。[Semaphore](http://www.semaphoreci.com/) 联合创始人 Marko Anastasov 说。
他说,Docker 事实标准的容器代表了开发开放标准的的最佳基础。Docker 的商业利益将如何影响其参与 OCI 的规模还有待观察。
### 反对观点
开放标准并不是在云部署中采用更多容器的最终目标。[ThoughtWorks](https://www.thoughtworks.com/) 的首席顾问 Nic Cheneweth 表示,更好的方法是参考 IT 行业中服务器虚拟化所产生的影响。
Cheneweth 对 LinuxInsider 说:“持续增长和广泛采用的主要动力不是在行业标准的声明中,而是通过使用任何竞争技术所获得的潜在的和实现的效率,比如 VMware、Xen 等。”
容器技术的某些方面,例如容器本身,可以根据标准来定义。他说,在此之前,深入开源软件参与引导的健康竞争将有助于成为一个更好的标准。
据 Cheneweth 说,容器编排标准对该领域的持续增长并不特别重要。
不过,他表示,如果行业坚持锁定容器事实标准,那么 OCI 所选择的模型是一个很好的起点。“我不知道是否有更好的选择,但肯定这不是最糟糕的选择。”
作者简介:
自 2003 年以来,Jack M.Germain一直是一个新闻网络记者。他主要关注的领域是企业 IT、Linux 和开源技术。他已经写了很多关于 Linux 发行版和其他开源软件的评论。
---
via: <http://www.linuxinsider.com/story/84689.html>
作者:[Jack M. Germain]([email protected]) 译者:[LHRchina](https://github.com/LHRchina) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,757 | 文件系统层次标准(FHS)简介 | http://www.linuxinsider.com/story/84658.html | 2017-08-07T18:20:13 | [
"FHS",
"目录"
] | https://linux.cn/article-8757-1.html | 
当你好奇地看着系统的根目录(`/`)的时候,可能会发现自己有点不知所措。大多数三个字母的目录名称并没有告诉你它们是做什么的,如果你需要做出一些重要的修改,那就很难知道在哪里可以查看。
我想给那些没有深入了解过自己的根目录的人简单地介绍下它。
### 有用的工具
在我们开始之前,这里有几个需要熟悉的工具,它们可以让您随时挖掘那些您自己找到的有趣的东西。这些程序都不会对您的文件进行任何更改。
最有用的工具是 `ls` -- 它列出了使用完整路径或相对路径(即从当前目录开始的路径)作为参数给出的任何目录的内容。
```
$ ls 路径
```
当您进一步深入文件系统时,重复输入长路径可能会变得很麻烦,所以如果您想简化这一操作,可以用 `cd` 替换 `ls` 来更改当前的工作目录到该目录。与 `ls` 一样,只需将目录路径作为 `cd` 的参数。
```
$ cd 路径
```
如果您不确定某个文件是什么文件类型的,可以通过运行 `file` 并且将文件名作为 `file` 命令的参数。
```
$ file 文件名
```
最后,如果这个文件看起来像是适宜阅读的,那么用 `less` 来看看(不用担心文件有改变)。与最后一个工具一样,给出一个文件名作为参数来查看它。
```
$ less 文件名
```
完成文件翻阅后,点击 `q` 键退出,即可返回到您的终端。
### 根目录之旅
现在就开始我们的旅程。我将按照字母顺序介绍直接放在根目录下的目录。这里并没有介绍所有的目录,但到最后,我们会突出其中的亮点。
我们所有要遍历的目录的分类及功能都基于 Linux 的文件系统层次标准(FHS)。[Linux 基金会](http://www.linuxfoundation.org/)维护的 Linux FHS 帮助发行版和程序的设计者和开发人员来规划他们的工具的各个组件应该存放的位置。
通过将各个程序的所有文件、二进制文件和帮助手册保存在一致的组织结构中,FHS 让对它们的学习、调试或修改更加容易。想象一下,如果不是使用 `man` 命令找到使用指南,那么你就得对每个程序分别寻找其手册。
按照字母顺序和结构顺序,我们从 `/bin` 开始。该目录是存放所有核心系统二进制文件的地方,其包含的命令可以在 shell (解释终端指令的程序)中使用。没有这个目录的内容,你的系统就基本没法使用。
接下来是 `/boot` 目录,它存储了您的计算机启动所需的所有东西。其中最重要的是引导程序和内核。引导程序是一个通过初始化一些基础工具,使引导过程得以继续的程序。在初始化结束时,引导程序会加载内核,内核允许计算机与所有其它硬件和固件进行接口。从这一点看,它可以使整个操作系统工作起来。
`/dev` 目录用于存储类似文件的对象来表示被系统识别为“设备”的各种东西。这里包括许多显式的设备,如计算机的硬件组件:键盘、屏幕、硬盘驱动器等。
此外,`/dev` 还包含被系统视为“设备”的数据流的伪文件。一个例子是流入和流出您的终端的数据,可以分为三个“流”。它读取的信息被称为“标准输入”。命令或进程的输出是“标准输出”。最后,被分类为调试信息的辅助性输出指向到“标准错误”。终端本身作为文件也可以在这里找到。
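作为补充示例(假设在典型的 Linux 系统上运行),这三个标准流在 `/dev` 下对应的设备文件可以直接查看和使用:

```shell
# 标准输入、标准输出、标准错误在 /dev 下均有对应的设备文件
ls -l /dev/stdin /dev/stdout /dev/stderr
# 向 /dev/stdout 写入,等价于写到本进程的标准输出
echo "hello" > /dev/stdout
```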
`/etc`(发音类似工艺商业网站 “Etsy”,如果你想让 Linux 老用户惊艳一下的话,囧),许多程序在这里存储它们的配置文件,用于改变它们的设置。一些程序存储这里的是默认配置的副本,这些副本将在修改之前复制到另一个位置。其它的程序在这里存储配置的唯一副本,并期望用户可以直接修改。为 root 用户保留的许多程序常用一种配置模式。
`/home` 目录是用户个人文件所在的位置。对于桌面用户来说,这是您花费大部分时间的地方。对于每个非特权用户,这里都有一个具有相应名称的目录。
`/lib` 是您的系统赖以运行的许多库的所在地。许多程序都会重复使用一个或多个功能或子程序,它们经常会出现在几十上百个程序中。所以,如果每个程序在其二进制文件中重复写它需要的每一个组件,结果会是产生出一些大而无当的程序,作为更好的替代方案,我们可以通过进行“库调用”来引用这些库中的一个或多个。
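可以用 `ldd` 直观地看到某个程序实际引用了 `/lib` 中的哪些共享库(此处以 `/bin/ls` 为例,假设在 Linux 系统上运行):

```shell
# 列出 /bin/ls 在运行时需要加载的共享库及其路径
ldd /bin/ls
```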
在 `/media` 目录中可以访问像 USB 闪存驱动器或摄像机这样的可移动媒体。虽然它并不是所有系统上都有,但在一些专注于直观的桌面系统中还是比较普遍的,如 Ubuntu。具有存储能力的媒体在此处被“挂载”,这意味着当设备中的原始位流位于 `/dev` 目录下时,用户通常可以在这里访问那些可交互的文件对象。
`/proc` 目录是一个动态显示系统数据的虚拟文件系统。这意味着系统可以即时地创建 `/proc` 的内容,用包含运行时生成的系统信息(如硬件统计信息)的文件进行填充。
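可以用下面的命令直观感受 `/proc` 的“即时生成”特性(假设在 Linux 系统上运行):

```shell
# 这些文件并不真实存在于磁盘上,而是由内核在读取时即时生成
cat /proc/version              # 当前运行的内核版本
grep MemTotal /proc/meminfo    # 物理内存总量
```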
`/tmp` 正如其名字,用于放置缓存数据等临时信息。这个目录不做其他更多的事情。
现代 Linux 系统上大多数程序的二进制文件保存在 `/usr` 目录中。为了统一包含二进制文件的各种目录,`/usr` 包含 `/bin`、`/sbin` 和 `/lib` 中的所有内容的副本。
最后,`/var` 里保存“<ruby> 可变 <rt> variable </rt></ruby>”长度的数据。这里的可变长度数据的类型通常是会累积的数据,就像日志和缓存一样。一个例子是你的内核保留的日志。
为了避免硬盘空间用尽和崩溃的情况,`/var` 内置了“日志旋转”功能,可删除旧信息,为新信息腾出空间,维持固定的最大大小。
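在大多数发行版上,这种“日志旋转”由 logrotate 工具完成。下面是一个假设性的示例配置(路径 `/var/log/myapp/*.log` 与各项参数仅作示意):

```
/var/log/myapp/*.log {
    weekly        # 每周轮换一次
    rotate 4      # 最多保留 4 份旧日志
    compress      # 压缩旧日志以节省空间
    missingok     # 日志不存在时不报错
}
```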
### 结尾
正如我所说,这里介绍的绝对不是您在根目录中可以找到的一切,但是确定系统核心功能所在地是一个很好的开始,而且可以更深入地研究这些功能是什么。
所以,如果你不知道要学习什么,就可能有很多的想法。如果你想得到一个更好的想法,就在这些目录中折腾自己吧!
---
作者简介:
自 2017 年以来 Jonathan Terrasi 一直是 ECT 新闻网的专栏作家。他的主要兴趣是计算机安全(特别是 Linux 桌面),加密和分析政治和时事。他是全职自由作家和音乐家。他的背景包括在芝加哥委员会发表的保卫人权法案文章中提供技术评论和分析。
---
via: <http://www.linuxinsider.com/story/84658.html>
作者:[Jonathan Terrasi](http://www.linuxinsider.com/perl/mailit/?id=84658) 译者:[firmianay](https://github.com/firmianay) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,758 | 在 Azure 中部署 Kubernetes 容器集群 | https://docs.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-walkthrough | 2017-08-07T21:27:53 | [
"Kubernetes",
"Azure"
] | https://linux.cn/article-8758-1.html | 
在这个快速入门教程中,我们使用 Azure CLI 创建一个 Kubernetes 集群,然后在集群上部署运行由 Web 前端和 Redis 实例组成的多容器应用程序。一旦部署完成,应用程序可以通过互联网访问。

这个快速入门教程假设你已经基本了解了 Kubernetes 的概念,有关 Kubernetes 的详细信息,请参阅 [Kubernetes 文档](https://kubernetes.io/docs/home/)。
如果您没有 Azure 账号,请在开始之前创建一个[免费帐户](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)。
### 登录 Azure 云控制台
Azure 云控制台是一个免费的 Bash shell,你可以直接在 Azure 网站上运行。它已经在你的账户中预先配置好了, 单击 [Azure 门户](https://portal.azure.com/)右上角菜单上的 “Cloud Shell” 按钮;
[](https://portal.azure.com/)
该按钮会启动一个交互式 shell,您可以使用它来运行本教程中的所有操作步骤。
[](https://portal.azure.com/)
此快速入门教程所用的 Azure CLI 的版本最低要求为 2.0.4。如果您选择在本地安装和使用 CLI 工具,请运行 `az --version` 来检查已安装的版本。 如果您需要安装或升级请参阅[安装 Azure CLI 2.0](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) 。
### 创建一个资源组
使用 [az group create](https://docs.microsoft.com/en-us/cli/azure/group#create) 命令创建一个资源组,一个 Azure 资源组是指 Azure 资源部署和管理的逻辑组。
以下示例在 *eastus* 区域中创建名为 *myResourceGroup* 的资源组。
```
az group create --name myResourceGroup --location eastus
```
输出:
```
{
"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup",
"location": "eastus",
"managedBy": null,
"name": "myResourceGroup",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null
}
```
### 创建一个 Kubernetes 集群
使用 [az acs create](https://docs.microsoft.com/en-us/cli/azure/acs#create) 命令在 Azure 容器服务中创建 Kubernetes 集群。 以下示例使用一个 Linux 主节点和三个 Linux 代理节点创建一个名为 *myK8sCluster* 的集群。
```
az acs create --orchestrator-type=kubernetes --resource-group myResourceGroup --name=myK8sCluster --generate-ssh-keys
```
几分钟后,命令将完成并返回有关该集群的 json 格式的信息。
### 连接到 Kubernetes 集群
要管理 Kubernetes 群集,可以使用 Kubernetes 命令行工具 [kubectl](https://kubernetes.io/docs/user-guide/kubectl/)。
如果您使用 Azure CloudShell ,则已经安装了 kubectl 。如果要在本地安装,可以使用 [az acs kubernetes install-cli](https://docs.microsoft.com/en-us/cli/azure/acs/kubernetes#install-cli) 命令。
要配置 kubectl 连接到您的 Kubernetes 群集,请运行 [az acs kubernetes get-credentials](https://docs.microsoft.com/en-us/cli/azure/acs/kubernetes#get-credentials) 命令下载凭据并配置 Kubernetes CLI 以使用它们。
```
az acs kubernetes get-credentials --resource-group=myResourceGroup --name=myK8sCluster
```
要验证与集群的连接,请使用 [kubectl get](https://kubernetes.io/docs/user-guide/kubectl/v1.6/#get) 命令查看集群节点的列表。
```
kubectl get nodes
```
输出:
```
NAME STATUS AGE VERSION
k8s-agent-14ad53a1-0 Ready 10m v1.6.6
k8s-agent-14ad53a1-1 Ready 10m v1.6.6
k8s-agent-14ad53a1-2 Ready 10m v1.6.6
k8s-master-14ad53a1-0 Ready,SchedulingDisabled 10m v1.6.6
```
### 运行应用程序
Kubernetes 清单文件为集群定义了一个所需的状态,包括了集群中应该运行什么样的容器镜像。 对于此示例,清单用于创建运行 Azure Vote 应用程序所需的所有对象。
创建一个名为 `azure-vote.yaml` ,将下面的内容拷贝到 YAML 中。
```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: azure-vote-back
spec:
replicas: 1
template:
metadata:
labels:
app: azure-vote-back
spec:
containers:
- name: azure-vote-back
image: redis
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: azure-vote-front
spec:
replicas: 1
template:
metadata:
labels:
app: azure-vote-front
spec:
containers:
- name: azure-vote-front
image: microsoft/azure-vote-front:redis-v1
ports:
- containerPort: 80
env:
- name: REDIS
value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: azure-vote-front
```
使用 [kubectl create](https://kubernetes.io/docs/user-guide/kubectl/v1.6/#create) 命令来运行该应用程序。
```
kubectl create -f azure-vote.yaml
```
输出:
```
deployment "azure-vote-back" created
service "azure-vote-back" created
deployment "azure-vote-front" created
service "azure-vote-front" created
```
### 测试应用程序
当应用程序的跑起来之后,需要创建一个 [Kubernetes 服务](https://kubernetes.io/docs/concepts/services-networking/service/),将应用程序前端暴露在互联网上。 此过程可能需要几分钟才能完成。
要监控这个进程,使用 [kubectl get service](https://kubernetes.io/docs/user-guide/kubectl/v1.6/#get) 命令时加上 `--watch` 参数。
```
kubectl get service azure-vote-front --watch
```
最初,*azure-vote-front* 服务的 EXTERNAL-IP 显示为 *pending* 。 一旦 EXTERNAL-IP 地址从 *pending* 变成一个具体的 IP 地址,请使用 “CTRL-C” 来停止 kubectl 监视进程。
```
azure-vote-front 10.0.34.242 <pending> 80:30676/TCP 7s
azure-vote-front 10.0.34.242 52.179.23.131 80:30676/TCP 2m
```
现在你可以通过这个外网 IP 地址访问到 Azure Vote 这个应用了。

### 删除集群
当不再需要集群时,可以使用 [az group delete](https://docs.microsoft.com/en-us/cli/azure/group#delete) 命令删除资源组,容器服务和所有相关资源。
```
az group delete --name myResourceGroup --yes --no-wait
```
### 获取示例代码
在这个快速入门教程中,预先创建的容器镜像已被用于部署 Kubernetes。相关的应用程序代码、Dockerfile 和 Kubernetes 清单文件可在 GitHub 中获得,仓库地址是 [https://github.com/Azure-Samples/azure-voting-app-redis](https://github.com/Azure-Samples/azure-voting-app-redis.git)。
### 下一步
在这个快速入门教程中,您部署了一个 Kubernetes 集群,并部署了一个多容器应用程序。
要了解有关 Azure 容器服务的更多信息,走完一个完整的从代码到部署的全流程,请继续阅读 Kubernetes 集群教程。
---
via: <https://docs.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-walkthrough>
作者:[neilpeterson](https://github.com/neilpeterson),[mmacy](https://github.com/mmacy) 译者:[rieonke](https://github.com/rieonke) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,760 | 在 Ubuntu 16.04 Server 上安装 Zabbix | https://www.unixmen.com/monitoring-server-install-zabbix-ubuntu-16-04/ | 2017-08-08T18:45:00 | [
"监控",
"Zabbix"
] | https://linux.cn/article-8760-1.html | 
### 监控服务器 - 什么是 Zabbix
[Zabbix](http://www.zabbix.com/) 是企业级开源分布式监控服务器解决方案。该软件能监控网络的不同参数以及服务器的完整性,还允许为任何事件配置基于电子邮件的警报。Zabbix 根据存储在数据库(例如 MySQL)中的数据提供报告和数据可视化功能。软件收集的每个测量指标都可以通过基于 Web 的界面访问。
Zabbix 根据 GNU 通用公共许可证版本 2(GPLv2)的条款发布,完全免费。
在本教程中,我们将在运行 MySQL、Apache 和 PHP 的 Ubuntu 16.04 server 上安装 Zabbix。
### 安装 Zabbix 服务器
首先,我们需要安装 Zabbix 所需的几个 PHP 模块:
```
# apt-get install php7.0-bcmath php7.0-xml php7.0-mbstring
```
Ubuntu 仓库中提供的 Zabbix 软件包已经过时了。使用官方 Zabbix 仓库安装最新的稳定版本。
通过执行以下命令来安装仓库软件包:
```
$ wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
# dpkg -i zabbix-release_3.2-1+xenial_all.deb
```
然后更新 `apt` 包源:
```
# apt-get update
```
现在可以安装带有 MySQL 支持和 PHP 前端的 Zabbix 服务器。执行命令:
```
# apt-get install zabbix-server-mysql zabbix-frontend-php
```
安装 Zabbix 代理:
```
# apt-get install zabbix-agent
```
Zabbix 现已安装。下一步是配置数据库来存储数据。
### 为 Zabbix 配置 MySQL
我们需要创建一个新的 MySQL 数据库,Zabbix 将用来存储收集的数据。
启动 MySQL shell:
```
$ mysql -uroot -p
```
接下来:
```
mysql> CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON zabbix.* TO zabbix@localhost IDENTIFIED BY 'usr_strong_pwd';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> EXIT;
Bye
```
接下来,导入初始表和数据。
```
# zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -uzabbix -p zabbix
```
输入在 MySQL shell 中创建的 **zabbix** 用户的密码。
接下来,我们需要编辑 Zabbix 服务器配置文件,它是 `/etc/zabbix/zabbis_server.conf`:
```
# $EDITOR /etc/zabbix/zabbix_server.conf
```
搜索文件的 `DBPassword` 部分:
```
### Option: DBPassword
# Database password. Ignored for SQLite.
# Comment this line if no password is used.
#
# Mandatory: no
# Default:
# DBPassword=
```
取消注释 `DBPassword=` 这行,并添加在 MySQL 中创建的密码:
```
DBPassword=usr_strong_pwd
```
接下来,查找 `DBHost=` 这行并取消注释。
保存并退出。
### 配置 PHP
我们需要配置 PHP 来使用 Zabbix。在安装过程中,安装程序在 `/etc/zabbix` 中创建了一个名为 `apache.conf` 的配置文件。打开此文件:
```
# $EDITOR /etc/zabbix/apache.conf
```
此时,只需要取消注释 `date.timezone` 并设置正确的时区:
```
<IfModule mod_php7.c>
php_value max_execution_time 300
php_value memory_limit 128M
php_value post_max_size 16M
php_value upload_max_filesize 2M
php_value max_input_time 300
php_value always_populate_raw_post_data -1
php_value date.timezone Europe/Rome
</IfModule>
```
保存并退出。
此时,重启 Apache 并启动 Zabbix Server 服务,使其能够在开机时启动:
```
# systemctl restart apache2
# systemctl start zabbix-server
# systemctl enable zabbix-server
```
用 `systemctl` 检查 Zabbix 状态:
```
# systemctl status zabbix-server
```
这个命令应该输出:
```
● zabbix-server.service - Zabbix Server
Loaded: loaded (/lib/systemd/system/zabbix-server.service; enabled; vendor pr
Active: active (running) ...
```
此时,Zabbix 的服务器端已经正确安装和配置了。
### 配置 Zabbix Web 前端
如介绍中所述,Zabbix 有一个基于 Web 的前端,我们将用于可视化收集的数据。但是,必须配置此接口。
使用 Web 浏览器,进入 URL `http://localhost/zabbix`。

点击 **Next step**

确保所有的值都是 **Ok**,然后再次单击 **Next step** 。

输入 MySQL **zabbix** 的用户密码,然后点击 **Next step**。

单击 **Next step** ,安装程序将显示具有所有配置参数的页面。再次检查以确保一切正确。


点击 **Next step** 进入最后一页。
点击完成以完成前端安装。默认用户名为 **Admin**,密码是 **zabbix**。
### Zabbix 服务器入门

使用上述凭证登录后,我们将看到 Zabbix 面板:

前往 **Administration -> Users**,了解已启用帐户的概况:

通过点击 **Create user** 创建一个新帐户。

点击 **Groups** 中的 **Add**,然后选择一个组:

保存新用户凭证,它将显示在 **Administration -> Users** 面板中。
**请注意,在 Zabbix 中,主机的访问权限分配给用户组,而不是单个用户。**
### 总结
我们结束了 Zabbix Server 安装的教程。现在,监控基础设施已准备好完成其工作并收集有关需要在 Zabbix 配置中添加的服务器的数据。
---
via: <https://www.unixmen.com/monitoring-server-install-zabbix-ubuntu-16-04/>
作者:[Giuseppe Molica](https://www.unixmen.com/author/tutan/) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | ### Monitoring Server – What is Zabbix
[Zabbix](http://www.zabbix.com/) is an enterprise-class open source distributed monitoring server solution. The software monitors different parameters of a network and the integrity of a server, and also allows the configuration of email based alerts for any event. Zabbix offers reporting and data visualization features based on the data stored in a database (MySQL, for example). Every metric collected by the software is accessible through a web-based interface.
Zabbix is released under the terms of the GNU General Public License version 2 (GPLv2), totally free of cost.
In this tutorial we will install Zabbix on an Ubuntu 16.04 server running MySQL, Apache and PHP.
### Install the Zabbix Server
First, we’ll need to install a few PHP modules required by Zabbix:
# apt-get install php7.0-bcmath php7.0-xml php7.0-mbstring
Install the repository package by executing the following commands:
$ wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
# dpkg -i zabbix-release_3.2-1+xenial_all.deb
Then update the `apt` packages source:
# apt-get update
Now install the Zabbix server with MySQL support and the PHP front-end. Execute the command:
# apt-get install zabbix-server-mysql zabbix-frontend-php
Install the Zabbix agent:
# apt-get install zabbix-agent
Zabbix is now installed. The next step is to configure a database for storing its data.
### Configure MySQL for Zabbix
We need to create a new MySQL database, in which Zabbix will store the collected data.
Start the MySQL shell:
$ mysql -uroot -p
Next:
mysql> CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON zabbix.* TO zabbix@localhost IDENTIFIED BY 'usr_strong_pwd';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> EXIT;
Bye
Next, import the initial schema and data.
# zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -uzabbix -p zabbix
Enter the password for the **zabbix** user created in the MySQL shell.
Next, we need to edit the Zabbix Server configuration file, which is `/etc/zabbix/zabbix_server.conf`:
# $EDITOR /etc/zabbix/zabbix_server.conf
Search for the `DBPassword` section of the file:
### Option: DBPassword
# Database password. Ignored for SQLite.
# Comment this line if no password is used.
#
# Mandatory: no
# Default:
# DBPassword=
Uncomment the `DBPassword=` line and edit it by adding the password created in MySQL:
DBPassword=usr_strong_pwd
Next, look for the `DBHost=` line and uncomment it.
Save and exit.
### Configure PHP
We need to configure PHP for working with Zabbix. During the installation process, the installer created a configuration file in `/etc/zabbix`, named `apache.conf`. Open this file:
# $EDITOR /etc/zabbix/apache.conf
Here, right now, it’s necessary only to uncomment the `date.timezone` setting and set the correct timezone:
<IfModule mod_php7.c>
php_value max_execution_time 300
php_value memory_limit 128M
php_value post_max_size 16M
php_value upload_max_filesize 2M
php_value max_input_time 300
php_value always_populate_raw_post_data -1
php_value date.timezone Europe/Rome
</IfModule>
Save and exit.
At this point, restart Apache and start the Zabbix Server service, enabling it for starting at boot time:
# systemctl restart apache2
# systemctl start zabbix-server
# systemctl enable zabbix-server
Check the Zabbix status with `systemctl`:
# systemctl status zabbix-server
This command should output:
● zabbix-server.service - Zabbix Server
   Loaded: loaded (/lib/systemd/system/zabbix-server.service; enabled; vendor pr
   Active: active (running) ...
At this point, the server-side part of Zabbix has been correctly installed and configured.
### Configure Zabbix Web Fronted
As mentioned in the introduction, Zabbix has a web-based front-end which we’ll use for visualizing collected data. However, this interface has to be configured.
With a web browser, go to URL `http://localhost/zabbix`.
Click on **Next step**
Be sure that all the values are **Ok**, and then click on **Next step** again.
Insert the MySQL **zabbix** user password, and then click on **Next step**.
Click on **Next step**, and the installer will show the following page with all the configuration parameters. Check again to ensure that everything is correct.
Click **Next step** to proceed to the final screen.
Click finish to complete the front-end installation. The default user name is **Admin** with **zabbix** as the password.
### Getting Started with the Zabbix Server
Go to *Administration -> Users* for an overview about enabled accounts.
Create a new account by clicking on **Create user**.
Click on **Add** in the **Groups** section and select one group.
Save the new user credentials, and it will appear in the *Administration -> Users* panel.
**Note that in Zabbix access rights to hosts are assigned to user groups, not individual users.**
### Conclusion
This concludes the tutorial for the Zabbix Server installation. Now, the monitoring infrastructure is ready to do its job and collect data about servers that need to be added in the Zabbix configuration. |
8,761 | GNU GPL 许可证常见问题解答(一) | https://www.gnu.org/licenses/gpl-faq.html | 2017-08-08T19:34:00 | [
"GPL",
"GNU",
"许可证"
] | https://linux.cn/article-8761-1.html | 
本文由高级咨询师薛亮据自由软件基金会(FSF)的[英文原文](https://www.gnu.org/licenses/gpl-faq.html)翻译而成,这篇常见问题解答澄清了在使用 GNU 许可证中遇到许多问题,对于企业和软件开发者在实际应用许可证和解决许可证问题时具有很强的实践指导意义。
1. 关于 GNU 项目、自由软件基金会(FSF)及其许可证的基本问题
2. 对于 GNU 许可证的一般了解
3. 在您的程序中使用 GNU 许可证
4. 依据 GNU 许可证分发程序
5. 在编写其他程序时采用依据 GNU 许可证发布的程序
6. 将作品与依据 GNU 许可证发布的代码相结合
7. 关于违反 GNU 许可证的问题
### 1、关于 GNU 项目、自由软件基金会(FSF)及其许可证的基本问题
#### 1.1 “GPL” 代表什么意思?
“GPL” 代表“<ruby> 通用公共许可证 <rp> ( </rp> <rt> General Public License </rt> <rp> ) </rp></ruby>”。 最常见的此类许可证是 GNU 通用公共许可证,简称 GNU GPL。 如果人们能够自然而然地将其理解为 GNU GPL,可以进一步缩短为“GPL”。
#### 1.2 自由软件是否意味着必须使用 GPL?
根本不是的,还有许多其他自由软件许可证。我们有一个[不完整的列表](https://www.gnu.org/licenses/license-list.html)。任何为用户提供特定自由的许可证都是自由软件许可证。
#### 1.3 为什么我要使用 GNU GPL,而不是其他自由软件许可证?
使用 GNU GPL 将要求所有发布的改进版本都是自由软件。这意味着您可以避免与您自己作品的专有修改版本进行竞争的风险。不过,在某些特殊情况下,最好使用一个更宽松的许可证。
#### 1.4 所有 GNU 软件都使用 GNU GPL 作为其许可证吗?
大多数 GNU 软件包都使用 GNU GPL,但是有一些 GNU 程序(以及程序的一部分)使用更宽松的许可证,例如 LGPL(<ruby> 较宽松公共许可证 <rp> ( </rp> <rt> Lesser GPL </rt> <rp> ) </rp></ruby>)。我们这样做是基于战略考虑。
#### 1.5 如果一个程序使用 GPL 许可证,是否会使其成为 GNU 软件?
任何人都可以依据 GNU GPL 发布一个程序,但并不能使其成为 GNU 软件包。
让程序成为 GNU 软件包意味着将其明确地贡献给 GNU 项目。当程序的开发人员和 GNU 项目都同意这样做时,才会发生这种情况。如果您有兴趣向 GNU 项目贡献程序,请写信至 <[email protected]> 。
#### 1.6 我可以将 GPL 应用于软件以外的其他作品吗?
您可以将 GPL 应用于任何类型的作品,只需明确该作品的“源代码”构成即可。 GPL 将“源代码”定义为作品的首选形式,以便在其中进行修改。
不过,对于手册和教科书,或更一般地,任何旨在教授某个主题的作品,我们建议使用 GFDL,而非 GPL。
#### 1.7 手册为什么不使用 GPL 许可证?
手册也可以使用 GPL 许可证,但对于手册来说,最好使用 GFDL(<ruby> 自由文本授权 <rp> ( </rp> <rt> GNU Free Documentation License </rt> <rp> ) </rp></ruby>)许可证。
GPL 是为软件程序设计的,它包含许多复杂的条款,对于程序来说至关重要;但对于图书或手册来说,这将是麻烦和不必要的。例如,任何人如果(以 GPL )出版纸质图书,就要么必须为每份印刷版本配置该图书的机器可读形式“源代码”,或提供书面文件,表明将稍后发送“源代码”。
同时,GFDL 中包含了帮助免费手册的出版商从销售副本中获利的条款,例如,出售封面文字。<ruby> 背书 <rp> ( </rp> <rt> Endorsements </rt> <rp> ) </rp></ruby>部分的特殊规则使得 GFDL 可以作为官方标准。修改版本的手册是被允许的,但该修改版本不能被标记为“该标准”。
使用 GFDL,我们允许对手册中涵盖其技术主题的文本进行修改。能够修改技术部分非常重要,因为修改程序的人理所当然地要去修改对应的文档。人们有这样做的自由,它是一种道德责任。我们的手册还包括阐述我们对自由软件政治立场的部分。我们将它们标记为<ruby> “不变量” <rp> ( </rp> <rt> invariant </rt> <rp> ) </rp></ruby>,使得它们不被更改或删除。 GFDL 中也为这些“不变部分”做出了规定。
#### 1.8 GPL 被翻译成其他语言了吗?
将 GPL 翻译成英文以外的语言将是有用的。人们甚至进行了翻译,并将文本发送给我们。但我们不敢将翻译文本批准为正式有效。其中的风险如此之大,以至于我们不敢接受。
法律文件在某种程度上就像一个程序。翻译它就像将程序从一种语言和操作系统转换到另一种语言。只有同时熟练使用这两种语言的律师才能做到这一点——即便如此,也有引入错误的风险。
如果我们正式批准 GPL 的翻译文本,我们将不得不给予所有人许可,让他们可以去做翻译文本规定可以做的任何事情。如果这是一个完全准确的翻译,那没关系。但如果翻译错误,后果可能是我们无法解决的灾难。
如果一个程序有 bug,我们可以发布一个新的版本,最终旧版本将会逐渐消失。但是,一旦我们给予每个人去根据特定翻译文本行事的许可,如果我们稍后发现它有一个错误,我们无法收回该权限。
乐意提供帮助的人有时会为我们做翻译工作。如果问题是要找人做这个工作的话,那问题就解决了。但实际的问题是错误的风险,做翻译工作不能避免风险。我们无法授权非律师撰写的翻译文本。
因此,目前我们并不认可GPL的翻译文本是全球有效和具有约束力的。相反,我们正在做两件事情:
* 将人们指引到非正式的翻译文本。这意味着我们允许人们进行 GPL 的翻译,但是我们不认可翻译文本具有法律效力和约束力。
未经批准的翻译文本没有法律效力,应该如此明确地表述。翻译文本应标明如下:
> This translation of the GPL is informal, and not officially approved by the Free Software Foundation as valid. To be completely sure of what is permitted, refer to the original GPL (in English).
> 本 GPL 翻译文本是非正式的,没有被自由软件基金会(FSF)正式批准为有效。若要完全确定何种行为被允许,请参阅原始 GPL(英文)。
但未经批准的翻译文本可以作为如何理解英文 GPL 的参考。对于许多用户来说,这就足够了。不过,在商业活动中使用 GNU 软件的企业,以及进行公共 ftp 发行的人员,需要去核查实际的英文 GPL,以明确其允许的行为。
* 发布仅在单个国家/地区有效的翻译文本。
我们正在考虑发布仅在单个国家正式生效的翻译文本。这样一来,如果发现有错误,那么错误将局限于这个国家,破坏力不会太大。
即便是一个富有同情心和能力的律师来做翻译,仍然需要相当多的专门知识和努力,所以我们不能很快答应任何这样的翻译。
#### 1.9 为什么有一些 GNU 库依据普通 GPL 而不是 LGPL 来发布?
对于任何特定库使用 LGPL 构成了自由软件的倒退。这意味着我们部分放弃了捍卫用户自由权利的努力,对基于 GPL 软件所构建产品的分享要求也降低了。在它们自身而言,这是更糟糕的变化。
有时一个小范围的倒退是很好的策略。某种情况下,使用 LGPL 的库可能会带来该库的广泛使用,从而进一步改善该库,为自由软件带来更广泛的支持,诸如此类。如果在相当大的程度上出现这种情况,这可能对自由软件很有好处。但它发生的几率有多少呢?我们只能推测。
在每个库上用一段时间的 LGPL,看看它是否有帮助,如果 LGPL 没有帮助,再将其更改为 GPL。这种做法听起来很好,但却是不可行的。一旦我们对特定库使用了 LGPL,那就很难进行改变。因此,我们根据具体情况决定每个库使用哪个许可证。对于我们如何判断该问题,有一段[很长的解释](https://www.gnu.org/licenses/why-not-lgpl.html)。
#### 1.10 谁有权力执行 GPL 许可证?
由于 GPL 是版权许可,软件的版权所有者将是有权执行 GPL 的人。如果您发现违反 GPL 的行为,您应该向遵循GPL的该软件的开发人员通报。他们是版权所有者,或与版权所有者有关。若要详细了解如何报告 GPL 违规,请参阅“[如果发现了可能违反 GPL 许可证的行为,我该怎么办?](https://www.gnu.org/licenses/gpl-faq.html#ReportingViolation)”
#### 1.11 为什么 FSF 要求为 FSF 拥有版权的程序做出贡献的贡献者将版权<ruby> 分配 <rp> ( </rp> <rt> assign </rt> <rp> ) </rp></ruby>给 FSF?如果我持有 GPL 程序的版权,我也应该这样做吗?如果是,怎么做?
我们的律师告诉我们,为了最大限度地向法院要求违规者强制执行 GPL,我们应该让程序的版权状况尽可能简单。为了做到这一点,我们要求每个贡献者将贡献部分的版权分配给 FSF,或者放弃对贡献部分的版权要求。
我们也要求个人贡献者从雇主那里获得版权放弃声明(如果有的话),以确保雇主不会声称拥有这部分贡献的版权。
当然,如果所有的贡献者把他们的代码放在公共领域,也就没有用之来执行 GPL 许可证的版权了。所以我们鼓励人们为大规模的代码贡献分配版权,只把小规模的修改放在公共领域。
如果您想要在您的程序中执行 GPL,遵循类似的策略可能是一个好主意。如果您需要更多信息,请联系 <[email protected]>。
#### 1.12 我可以修改 GPL 并创建一个修改后的许可证吗?
您可以制作GPL的修改版本,但这往往会产生实践上的后果。
您可以在其他许可证中合法使用GPL条款(可能是修改过的),只要您以其他名称来称呼您的许可证,并且不包括 GPL 的<ruby> 引言 <rp> ( </rp> <rt> preamble </rt> <rp> ) </rp></ruby>,只要您对最后的使用说明进行了足够多的修改,使其措辞明显不同,没有提到 GNU(尽管您描述的实际过程可能与其类似)。
如果您想在修改后的许可证中使用我们的引言,请写信至 <[email protected]>,以获得许可。我们需要查看实际的许可证要求,才能决定我们是否能够批准它们。
虽然我们不会以这种方式对您修改许可证提出法律上的反对意见,但我们希望您三思而行,别去修改许可证。类似这些修改后的许可证几乎肯定[与 GNU GPL 不兼容](https://www.gnu.org/licenses/gpl-faq.html#WhatIsCompatible),并且这种不兼容性阻碍了模块之间的有用组合。
不同自由软件许可证的扩散本身就是一个负担。请使用 GPL v3 提供的<ruby> 例外 <rp> ( </rp> <rt> exception </rt> <rp> ) </rp></ruby>机制,而不是去修改 GPL。
#### 1.13 为什么你们决定将 GNU Affero GPL v3 作为一个单独的许可证?
GPLv3 的早期草案在第 7 节中允许许可人在发布源代码时添加一个类似 Affero 的要求。但是,一些开发和依赖自由软件的公司认为这个要求太过繁重。他们希望避免使用遵循这个要求的代码,并且对检查代码是否符合这个附加要求所带来的管理成本表示担忧。通过将 GNU Affero GPL v3 作为单独的许可证发布,并在该许可证和 GPL v3 中加入允许彼此的代码相互链接的条款,我们完成了所有最初的目标,同时更容易确定哪些源代码需要遵循发布要求。
(题图:pycom.io)
---
译者:薛亮,北京集慧智佳知识产权管理咨询股份有限公司互联网事业部高级咨询师,擅长专利检索、专利分析、竞争对手跟踪、FTO 分析、开源软件知识产权风险分析,致力于为互联网企业、高科技公司提供知识产权咨询服务。
| 200 | OK | ## Frequently Asked Questions about the GNU Licenses
### Table of Contents
**Basic questions about the GNU Project, the Free Software Foundation, and its licenses****General understanding of the GNU licenses****Using GNU licenses for your programs****Distribution of programs released under the GNU licenses****Using programs released under the GNU licenses when writing other programs****Combining work with code released under the GNU licenses****Questions about violations of the GNU licenses**
#### Basic questions about the GNU Project, the Free Software Foundation, and its licenses
[What does “GPL” stand for?](#WhatDoesGPLStandFor)[Does free software mean using the GPL?](#DoesFreeSoftwareMeanUsingTheGPL)[Why should I use the GNU GPL rather than other free software licenses?](#WhyUseGPL)[Does all GNU software use the GNU GPL as its license?](#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense)[Does using the GPL for a program make it GNU software?](#DoesUsingTheGPLForAProgramMakeItGNUSoftware)[Can I use the GPL for something other than software?](#GPLOtherThanSoftware)[Why don't you use the GPL for manuals?](#WhyNotGPLForManuals)[Are there translations of the GPL into other languages?](#GPLTranslations)[Why are some GNU libraries released under the ordinary GPL rather than the Lesser GPL?](#WhySomeGPLAndNotLGPL)[Who has the power to enforce the GPL?](#WhoHasThePower)[Why does the FSF require that contributors to FSF-copyrighted programs assign copyright to the FSF? If I hold copyright on a GPLed program, should I do this, too? If so, how?](#AssignCopyright)[Can I modify the GPL and make a modified license?](#ModifyGPL)[Why did you decide to write the GNU Affero GPLv3 as a separate license?](#SeparateAffero)
#### General understanding of the GNU licenses
[Why does the GPL permit users to publish their modified versions?](#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions)[Does the GPL require that source code of modified versions be posted to the public?](#GPLRequireSourcePostedPublic)[Can I have a GPL-covered program and an unrelated nonfree program on the same computer?](#GPLAndNonfreeOnSameMachine)[If I know someone has a copy of a GPL-covered program, can I demand they give me a copy?](#CanIDemandACopy)[What does “written offer valid for any third party” mean in GPLv2? Does that mean everyone in the world can get the source to any GPLed program no matter what?](#WhatDoesWrittenOfferValid)[The GPL says that modified versions, if released, must be “licensed … to all third parties.” Who are these third parties?](#TheGPLSaysModifiedVersions)[Does the GPL allow me to sell copies of the program for money?](#DoesTheGPLAllowMoney)[Does the GPL allow me to charge a fee for downloading the program from my distribution site?](#DoesTheGPLAllowDownloadFee)[Does the GPL allow me to require that anyone who receives the software must pay me a fee and/or notify me?](#DoesTheGPLAllowRequireFee)[If I distribute GPLed software for a fee, am I required to also make it available to the public without a charge?](#DoesTheGPLRequireAvailabilityToPublic)[Does the GPL allow me to distribute a copy under a nondisclosure agreement?](#DoesTheGPLAllowNDA)[Does the GPL allow me to distribute a modified or beta version under a nondisclosure agreement?](#DoesTheGPLAllowModNDA)[Does the GPL allow me to develop a modified version under a nondisclosure agreement?](#DevelopChangesUnderNDA)[Why does the GPL require including a copy of the GPL with every copy of the program?](#WhyMustIInclude)[What if the work is not very long?](#WhatIfWorkIsShort)[Am I required to claim a copyright on my modifications to a GPL-covered program?](#RequiredToClaimCopyright)[What does the GPL say about translating some code to a different programming 
language?](#TranslateCode)

- [If a program combines public-domain code with GPL-covered code, can I take the public-domain part and use it as public domain code?](#CombinePublicDomainWithGPL)
- [I want to get credit for my work. I want people to know what I wrote. Can I still get credit if I use the GPL?](#IWantCredit)
- [Does the GPL allow me to add terms that would require citation or acknowledgment in research papers which use the GPL-covered software or its output?](#RequireCitation)
- [Can I omit the preamble of the GPL, or the instructions for how to use it on your own programs, to save space?](#GPLOmitPreamble)
- [What does it mean to say that two licenses are “compatible”?](#WhatIsCompatible)
- [What does it mean to say a license is “compatible with the GPL”?](#WhatDoesCompatMean)
- [Why is the original BSD license incompatible with the GPL?](#OrigBSD)
- [What is the difference between an “aggregate” and other kinds of “modified versions”?](#MereAggregation)
- [When it comes to determining whether two pieces of software form a single work, does the fact that the code is in one or more containers have any effect?](#AggregateContainers)
- [Why does the FSF require that contributors to FSF-copyrighted programs assign copyright to the FSF? If I hold copyright on a GPLed program, should I do this, too? If so, how?](#AssignCopyright)
- [If I use a piece of software that has been obtained under the GNU GPL, am I allowed to modify the original code into a new program, then distribute and sell that new program commercially?](#GPLCommercially)
- [Can I use the GPL for something other than software?](#GPLOtherThanSoftware)
- [I'd like to license my code under the GPL, but I'd also like to make it clear that it can't be used for military and/or commercial uses. Can I do this?](#NoMilitary)
- [Can I use the GPL to license hardware?](#GPLHardware)
- [Does prelinking a GPLed binary to various libraries on the system, to optimize its performance, count as modification?](#Prelinking)
- [How does the LGPL work with Java?](#LGPLJava)
- [Why did you invent the new terms “propagate” and “convey” in GPLv3?](#WhyPropagateAndConvey)
- [Is “convey” in GPLv3 the same thing as what GPLv2 means by “distribute”?](#ConveyVsDistribute)
- [If I only make copies of a GPL-covered program and run them, without distributing or conveying them to others, what does the license require of me?](#NoDistributionRequirements)
- [GPLv3 gives “making available to the public” as an example of propagation. What does this mean? Is making available a form of conveying?](#v3MakingAvailable)
- [Since distribution and making available to the public are forms of propagation that are also conveying in GPLv3, what are some examples of propagation that do not constitute conveying?](#PropagationNotConveying)
- [How does GPLv3 make BitTorrent distribution easier?](#BitTorrent)
- [What is tivoization? How does GPLv3 prevent it?](#Tivoization)
- [Does GPLv3 prohibit DRM?](#DRMProhibited)
- [Does GPLv3 require that voters be able to modify the software running in a voting machine?](#v3VotingMachine)
- [Does GPLv3 have a “patent retaliation clause”?](#v3PatentRetaliation)
- [In GPLv3 and AGPLv3, what does it mean when it says “notwithstanding any other provision of this License”?](#v3Notwithstanding)
- [In AGPLv3, what counts as “interacting with [the software] remotely through a computer network?”](#AGPLv3InteractingRemotely)
- [How does GPLv3's concept of “you” compare to the definition of “Legal Entity” in the Apache License 2.0?](#ApacheLegalEntity)
- [In GPLv3, what does “the Program” refer to? Is it every program ever released under GPLv3?](#v3TheProgram)
- [If some network client software is released under AGPLv3, does it have to be able to provide source to the servers it interacts with?](#AGPLv3ServerAsUser)
- [For software that runs a proxy server licensed under the AGPL, how can I provide an offer of source to users interacting with that code?](#AGPLProxy)
#### Using GNU licenses for your programs
- [How do I upgrade from (L)GPLv2 to (L)GPLv3?](#v3HowToUpgrade)
- [Could you give me step by step instructions on how to apply the GPL to my program?](#CouldYouHelpApplyGPL)
- [Why should I use the GNU GPL rather than other free software licenses?](#WhyUseGPL)
- [Why does the GPL require including a copy of the GPL with every copy of the program?](#WhyMustIInclude)
- [Is putting a copy of the GNU GPL in my repository enough to apply the GPL?](#LicenseCopyOnly)
- [Why should I put a license notice in each source file?](#NoticeInSourceFile)
- [What if the work is not very long?](#WhatIfWorkIsShort)
- [Can I omit the preamble of the GPL, or the instructions for how to use it on your own programs, to save space?](#GPLOmitPreamble)
- [How do I get a copyright on my program in order to release it under the GPL?](#HowIGetCopyright)
- [What if my school might want to make my program into its own proprietary software product?](#WhatIfSchool)
- [I would like to release a program I wrote under the GNU GPL, but I would like to use the same code in nonfree programs.](#ReleaseUnderGPLAndNF)
- [Can the developer of a program who distributed it under the GPL later license it to another party for exclusive use?](#CanDeveloperThirdParty)
- [Can the US Government release a program under the GNU GPL?](#GPLUSGov)
- [Can the US Government release improvements to a GPL-covered program?](#GPLUSGovAdd)
- [Why should programs say “Version 3 of the GPL or any later version”?](#VersionThreeOrLater)
- [Is it a good idea to use a license saying that a certain program can be used only under the latest version of the GNU GPL?](#OnlyLatestVersion)
- [Is there some way that I can GPL the output people get from use of my program? For example, if my program is used to develop hardware designs, can I require that these designs must be free?](#GPLOutput)
- [Why don't you use the GPL for manuals?](#WhyNotGPLForManuals)
- [How does the GPL apply to fonts?](#FontException)
- [What license should I use for website maintenance system templates?](#WMS)
- [Can I release a program under the GPL which I developed using nonfree tools?](#NonFreeTools)
- [I use public key cryptography to sign my code to assure its authenticity. Is it true that GPLv3 forces me to release my private signing keys?](#GiveUpKeys)
- [Does GPLv3 require that voters be able to modify the software running in a voting machine?](#v3VotingMachine)
- [The warranty and liability disclaimers in GPLv3 seem specific to U.S. law. Can I add my own disclaimers to my own code?](#v3InternationalDisclaimers)
- [My program has interactive user interfaces that are non-visual in nature. How can I comply with the Appropriate Legal Notices requirement in GPLv3?](#NonvisualLegalNotices)
#### Distribution of programs released under the GNU licenses
- [Can I release a modified version of a GPL-covered program in binary form only?](#ModifiedJustBinary)
- [I downloaded just the binary from the net. If I distribute copies, do I have to get the source and distribute that too?](#UnchangedJustBinary)
- [I want to distribute binaries via physical media without accompanying sources. Can I provide source code by FTP instead of by mail order?](#DistributeWithSourceOnInternet)
- [My friend got a GPL-covered binary with an offer to supply source, and made a copy for me. Can I use the offer to obtain the source?](#RedistributedBinariesGetSource)
- [Can I put the binaries on my Internet server and put the source on a different Internet site?](#SourceAndBinaryOnDifferentSites)
- [I want to distribute an extended version of a GPL-covered program in binary form. Is it enough to distribute the source for the original version?](#DistributeExtendedBinary)
- [I want to distribute binaries, but distributing complete source is inconvenient. Is it ok if I give users the diffs from the “standard” version along with the binaries?](#DistributingSourceIsInconvenient)
- [Can I make binaries available on a network server, but send sources only to people who order them?](#AnonFTPAndSendSources)
- [How can I make sure each user who downloads the binaries also gets the source?](#HowCanIMakeSureEachDownloadGetsSource)
- [Does the GPL require me to provide source code that can be built to match the exact hash of the binary I am distributing?](#MustSourceBuildToMatchExactHashOfBinary)
- [Can I release a program with a license which says that you can distribute modified versions of it under the GPL but you can't distribute the original itself under the GPL?](#ReleaseNotOriginal)
- [I just found out that a company has a copy of a GPLed program, and it costs money to get it. Aren't they violating the GPL by not making it available on the Internet?](#CompanyGPLCostsMoney)
- [A company is running a modified version of a GPLed program on a web site. Does the GPL say they must release their modified sources?](#UnreleasedMods)
- [A company is running a modified version of a program licensed under the GNU Affero GPL (AGPL) on a web site. Does the AGPL say they must release their modified sources?](#UnreleasedModsAGPL)
- [Is use within one organization or company “distribution”?](#InternalDistribution)
- [If someone steals a CD containing a version of a GPL-covered program, does the GPL give him the right to redistribute that version?](#StolenCopy)
- [What if a company distributes a copy of some other developers' GPL-covered work to me as a trade secret?](#TradeSecretRelease)
- [What if a company distributes a copy of its own GPL-covered work to me as a trade secret?](#TradeSecretRelease2)
- [Do I have “fair use” rights in using the source code of a GPL-covered program?](#GPLFairUse)
- [Does moving a copy to a majority-owned, and controlled, subsidiary constitute distribution?](#DistributeSubsidiary)
- [Can software installers ask people to click to agree to the GPL? If I get some software under the GPL, do I have to agree to anything?](#ClickThrough)
- [I would like to bundle GPLed software with some sort of installation software. Does that installer need to have a GPL-compatible license?](#GPLCompatInstaller)
- [Does a distributor violate the GPL if they require me to “represent and warrant” that I am located in the US, or that I intend to distribute the software in compliance with relevant export control laws?](#ExportWarranties)
- [The beginning of GPLv3 section 6 says that I can convey a covered work in object code form “under the terms of sections 4 and 5” provided I also meet the conditions of section 6. What does that mean?](#v3Under4and5)
- [My company owns a lot of patents. Over the years we've contributed code to projects under “GPL version 2 or any later version”, and the project itself has been distributed under the same terms. If a user decides to take the project's code (incorporating my contributions) under GPLv3, does that mean I've automatically granted GPLv3's explicit patent license to that user?](#v2OrLaterPatentLicense)
- [If I distribute a GPLv3-covered program, can I provide a warranty that is voided if the user modifies the program?](#v3ConditionalWarranty)
- [If I give a copy of a GPLv3-covered program to a coworker at my company, have I “conveyed” the copy to that coworker?](#v3CoworkerConveying)
- [Am I complying with GPLv3 if I offer binaries on an FTP server and sources by way of a link to a source code repository in a version control system, like CVS or Subversion?](#SourceInCVS)
- [Can someone who conveys GPLv3-covered software in a User Product use remote attestation to prevent a user from modifying that software?](#RemoteAttestation)
- [What does “rules and protocols for communication across the network” mean in GPLv3?](#RulesProtocols)
- [Distributors that provide Installation Information under GPLv3 are not required to provide “support service” for the product. What kind of “support service” do you mean?](#SupportService)
#### Using programs released under the GNU licenses when writing other programs
- [Can I have a GPL-covered program and an unrelated nonfree program on the same computer?](#GPLAndNonfreeOnSameMachine)
- [Can I use GPL-covered editors such as GNU Emacs to develop nonfree programs? Can I use GPL-covered tools such as GCC to compile them?](#CanIUseGPLToolsForNF)
- [Is there some way that I can GPL the output people get from use of my program? For example, if my program is used to develop hardware designs, can I require that these designs must be free?](#GPLOutput)
- [In what cases is the output of a GPL program covered by the GPL too?](#WhatCaseIsOutputGPL)
- [If I port my program to GNU/Linux, does that mean I have to release it as free software under the GPL or some other free software license?](#PortProgramToGPL)
- [I'd like to incorporate GPL-covered software in my proprietary system. I have no permission to use that software except what the GPL gives me. Can I do this?](#GPLInProprietarySystem)
- [If I distribute a proprietary program that links against an LGPLv3-covered library that I've modified, what is the “contributor version” for purposes of determining the scope of the explicit patent license grant I'm making—is it just the library, or is it the whole combination?](#LGPLv3ContributorVersion)
- [Under AGPLv3, when I modify the Program under section 13, what Corresponding Source does it have to offer?](#AGPLv3CorrespondingSource)
- [Where can I learn more about the GCC Runtime Library Exception?](#LibGCCException)
#### Combining work with code released under the GNU licenses
- [Is GPLv3 compatible with GPLv2?](#v2v3Compatibility)
- [Does GPLv2 have a requirement about delivering installation information?](#InstInfo)
- [How are the various GNU licenses compatible with each other?](#AllCompatibility)
- [What is the difference between an “aggregate” and other kinds of “modified versions”?](#MereAggregation)
- [Do I have “fair use” rights in using the source code of a GPL-covered program?](#GPLFairUse)
- [Can the US Government release improvements to a GPL-covered program?](#GPLUSGovAdd)
- [Does the GPL have different requirements for statically vs dynamically linked modules with a covered work?](#GPLStaticVsDynamic)
- [Does the LGPL have different requirements for statically vs dynamically linked modules with a covered work?](#LGPLStaticVsDynamic)
- [If a library is released under the GPL (not the LGPL), does that mean that any software which uses it has to be under the GPL or a GPL-compatible license?](#IfLibraryIsGPL)
- [You have a GPLed program that I'd like to link with my code to build a proprietary program. Does the fact that I link with your program mean I have to GPL my program?](#LinkingWithGPL)
- [If so, is there any chance I could get a license of your program under the Lesser GPL?](#SwitchToLGPL)
- [If a programming language interpreter is released under the GPL, does that mean programs written to be interpreted by it must be under GPL-compatible licenses?](#IfInterpreterIsGPL)
- [If a programming language interpreter has a license that is incompatible with the GPL, can I run GPL-covered programs on it?](#InterpreterIncompat)
- [If I add a module to a GPL-covered program, do I have to use the GPL as the license for my module?](#GPLModuleLicense)
- [When is a program and its plug-ins considered a single combined program?](#GPLPlugins)
- [If I write a plug-in to use with a GPL-covered program, what requirements does that impose on the licenses I can use for distributing my plug-in?](#GPLAndPlugins)
- [Can I apply the GPL when writing a plug-in for a nonfree program?](#GPLPluginsInNF)
- [Can I release a nonfree program that's designed to load a GPL-covered plug-in?](#NFUseGPLPlugins)
- [I'd like to incorporate GPL-covered software in my proprietary system. I have no permission to use that software except what the GPL gives me. Can I do this?](#GPLInProprietarySystem)
- [Using a certain GNU program under the GPL does not fit our project to make proprietary software. Will you make an exception for us? It would mean more users of that program.](#WillYouMakeAnException)
- [I'd like to incorporate GPL-covered software in my proprietary system. Can I do this by putting a “wrapper” module, under a GPL-compatible lax permissive license (such as the X11 license) in between the GPL-covered part and the proprietary part?](#GPLWrapper)
- [Can I write free software that uses nonfree libraries?](#FSWithNFLibs)
- [Can I link a GPL program with a proprietary system library?](#SystemLibraryException)
- [In what ways can I link or combine AGPLv3-covered and GPLv3-covered code?](#AGPLGPL)
- [What legal issues come up if I use GPL-incompatible libraries with GPL software?](#GPLIncompatibleLibs)
- [I'm writing a Windows application with Microsoft Visual C++ and I will be releasing it under the GPL. Is dynamically linking my program with the Visual C++ runtime library permitted under the GPL?](#WindowsRuntimeAndGPL)
- [I'd like to modify GPL-covered programs and link them with the portability libraries from Money Guzzler Inc. I cannot distribute the source code for these libraries, so any user who wanted to change these versions would have to obtain those libraries separately. Why doesn't the GPL permit this?](#MoneyGuzzlerInc)
- [If license for a module Q has a requirement that's incompatible with the GPL, but the requirement applies only when Q is distributed by itself, not when Q is included in a larger program, does that make the license GPL-compatible? Can I combine or link Q with a GPL-covered program?](#GPLIncompatibleAlone)
- [In an object-oriented language such as Java, if I use a class that is GPLed without modifying, and subclass it, in what way does the GPL affect the larger program?](#OOPLang)
- [Does distributing a nonfree driver meant to link with the kernel Linux violate the GPL?](#NonfreeDriverKernelLinux)
- [How can I allow linking of proprietary modules with my GPL-covered library under a controlled interface only?](#LinkingOverControlledInterface)
- [Consider this situation: 1) X releases V1 of a project under the GPL. 2) Y contributes to the development of V2 with changes and new code based on V1. 3) X wants to convert V2 to a non-GPL license. Does X need Y's permission?](#Consider)
- [I have written an application that links with many different components, that have different licenses. I am very confused as to what licensing requirements are placed on my program. Can you please tell me what licenses I may use?](#ManyDifferentLicenses)
- [Can I use snippets of GPL-covered source code within documentation that is licensed under some license that is incompatible with the GPL?](#SourceCodeInDocumentation)
#### Questions about violations of the GNU licenses
- [What should I do if I discover a possible violation of the GPL?](#ReportingViolation)
- [Who has the power to enforce the GPL?](#WhoHasThePower)
- [I heard that someone got a copy of a GPLed program under another license. Is this possible?](#HeardOtherLicense)
- [Is the developer of a GPL-covered program bound by the GPL? Could the developer's actions ever be a violation of the GPL?](#DeveloperViolate)
- [I just found out that a company has a copy of a GPLed program, and it costs money to get it. Aren't they violating the GPL by not making it available on the Internet?](#CompanyGPLCostsMoney)
- [Can I use GPLed software on a device that will stop operating if customers do not continue paying a subscription fee?](#SubscriptionFee)
- [What does it mean to “cure” a violation of GPLv3?](#Cure)
- [If someone installs GPLed software on a laptop, and then lends that laptop to a friend without providing source code for the software, have they violated the GPL?](#LaptopLoan)
- [Suppose that two companies try to circumvent the requirement to provide Installation Information by having one company release signed software, and the other release a User Product that only runs signed software from the first company. Is this a violation of GPLv3?](#TwoPartyTivoization)
This page is maintained by the Free Software Foundation's Licensing and Compliance Lab. You can support our efforts by [making a donation](http://donate.fsf.org) to the FSF.
You can use our publications to understand how GNU licenses work or help you advocate for free software, but they are not legal advice. The FSF cannot give legal advice. Legal advice is personalized advice from a lawyer who has agreed to work for you. Our answers address general questions and may not apply in your specific legal situation.
Have a question not answered here? Check out some of our other [licensing resources](https://www.fsf.org/licensing) or contact the Compliance Lab at [[email protected]](mailto:[email protected]).
- What does “GPL” stand for? ([#WhatDoesGPLStandFor](#WhatDoesGPLStandFor))

  “GPL” stands for “General Public License”. The most widespread such license is the GNU General Public License, or GNU GPL for short. This can be further shortened to “GPL”, when it is understood that the GNU GPL is the one intended.
- Does free software mean using the GPL? ([#DoesFreeSoftwareMeanUsingTheGPL](#DoesFreeSoftwareMeanUsingTheGPL))

  Not at all—there are many other free software licenses. We have an [incomplete list](/licenses/license-list.html). Any license that provides the user [certain specific freedoms](/philosophy/free-sw.html) is a free software license.

- Why should I use the GNU GPL rather than other free software licenses? ([#WhyUseGPL](#WhyUseGPL))

  Using the GNU GPL will require that all the [released improved versions be free software](/philosophy/pragmatic.html). This means you can avoid the risk of having to compete with a proprietary modified version of your own work. However, in some special situations it can be better to use a [more permissive license](/licenses/why-not-lgpl.html).

- Does all GNU software use the GNU GPL as its license? ([#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense](#DoesAllGNUSoftwareUseTheGNUGPLAsItsLicense))

  Most GNU software packages use the GNU GPL, but there are a few GNU programs (and parts of programs) that use looser licenses, such as the Lesser GPL. When we do this, it is a matter of [strategy](/licenses/why-not-lgpl.html).

- Does using the GPL for a program make it GNU software? ([#DoesUsingTheGPLForAProgramMakeItGNUSoftware](#DoesUsingTheGPLForAProgramMakeItGNUSoftware))

  Anyone can release a program under the GNU GPL, but that does not make it a GNU package.

  Making the program a GNU software package means explicitly contributing to the GNU Project. This happens when the program's developers and the GNU Project agree to do it. If you are interested in contributing a program to the GNU Project, please write to [<[email protected]>](mailto:[email protected]).

- What should I do if I discover a possible violation of the GPL? ([#ReportingViolation](#ReportingViolation))

  You should [report it](/licenses/gpl-violation.html). First, check the facts as best you can. Then tell the publisher or copyright holder of the specific GPL-covered program. If that is the Free Software Foundation, write to [<[email protected]>](mailto:[email protected]). Otherwise, the program's maintainer may be the copyright holder, or else could tell you how to contact the copyright holder, so report it to the maintainer.

- Why does the GPL permit users to publish their modified versions? ([#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions](#WhyDoesTheGPLPermitUsersToPublishTheirModifiedVersions))

  A crucial aspect of free software is that users are free to cooperate. It is absolutely essential to permit users who wish to help each other to share their bug fixes and improvements with other users.

  Some have proposed alternatives to the GPL that require modified versions to go through the original author. As long as the original author keeps up with the need for maintenance, this may work well in practice, but if the author stops (more or less) to do something else or does not attend to all the users' needs, this scheme falls down. Aside from the practical problems, this scheme does not allow users to help each other.

  Sometimes control over modified versions is proposed as a means of preventing confusion between various versions made by users. In our experience, this confusion is not a major problem. Many versions of Emacs have been made outside the GNU Project, but users can tell them apart. The GPL requires the maker of a version to place his or her name on it, to distinguish it from other versions and to protect the reputations of other maintainers.
- Does the GPL require that source code of modified versions be posted to the public? ([#GPLRequireSourcePostedPublic](#GPLRequireSourcePostedPublic))

  The GPL does not require you to release your modified version, or any part of it. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it internally without ever releasing it outside the organization.

  But *if* you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL.

  Thus, the GPL gives permission to release the modified program in certain ways, and not in other ways; but the decision of whether to release it is up to you.
- Can I have a GPL-covered program and an unrelated nonfree program on the same computer? ([#GPLAndNonfreeOnSameMachine](#GPLAndNonfreeOnSameMachine))

  Yes.
- If I know someone has a copy of a GPL-covered program, can I demand they give me a copy? ([#CanIDemandACopy](#CanIDemandACopy))

  No. The GPL gives a person permission to make and redistribute copies of the program *if and when that person chooses to do so*. That person also has the right not to choose to redistribute the program.

- What does “written offer valid for any third party” mean in GPLv2? Does that mean everyone in the world can get the source to any GPLed program no matter what? ([#WhatDoesWrittenOfferValid](#WhatDoesWrittenOfferValid))

  If you choose to provide source through a written offer, then anybody who requests the source from you is entitled to receive it.

  If you commercially distribute binaries not accompanied with source code, the GPL says you must provide a written offer to distribute the source code later. When users non-commercially redistribute the binaries they received from you, they must pass along a copy of this written offer. This means that people who did not get the binaries directly from you can still receive copies of the source code, along with the written offer.

  The reason we require the offer to be valid for any third party is so that people who receive the binaries indirectly in that way can order the source code from you.
- GPLv2 says that modified versions, if released, must be “licensed … to all third parties.” Who are these third parties? ([#TheGPLSaysModifiedVersions](#TheGPLSaysModifiedVersions))

  Section 2 says that modified versions you distribute must be licensed to all third parties under the GPL. “All third parties” means absolutely everyone—but this does not require you to *do* anything physically for them. It only means they have a license from you, under the GPL, for your version.

- Am I required to claim a copyright on my modifications to a GPL-covered program? ([#RequiredToClaimCopyright](#RequiredToClaimCopyright))

  You are not required to claim a copyright on your changes. In most countries, however, that happens automatically by default, so you need to place your changes explicitly in the public domain if you do not want them to be copyrighted.

  Whether you claim a copyright on your changes or not, either way you must release the modified version, as a whole, under the GPL ([if you release your modified version at all](#GPLRequireSourcePostedPublic)).

- What does the GPL say about translating some code to a different programming language? ([#TranslateCode](#TranslateCode))

  Under copyright law, translation of a work is considered a kind of modification. Therefore, what the GPL says about modified versions applies also to translated versions. The translation is covered by the copyright on the original program.

  If the original program carries a free license, that license gives permission to translate it. How you can use and license the translated program is determined by that license. If the original program is licensed under certain versions of the GNU GPL, the translated program must be covered by the same versions of the GNU GPL.
- If a program combines public-domain code with GPL-covered code, can I take the public-domain part and use it as public domain code? ([#CombinePublicDomainWithGPL](#CombinePublicDomainWithGPL))

  You can do that, if you can figure out which part is the public domain part and separate it from the rest. If code was put in the public domain by its developer, it is in the public domain no matter where it has been.
- Does the GPL allow me to sell copies of the program for money? ([#DoesTheGPLAllowMoney](#DoesTheGPLAllowMoney))

  Yes, the GPL allows everyone to do this. The [right to sell copies](/philosophy/selling.html) is part of the definition of free software. Except in one special situation, there is no limit on what price you can charge. (The one exception is the required written offer to provide source code that must accompany binary-only release.)

- Does the GPL allow me to charge a fee for downloading the program from my distribution site? ([#DoesTheGPLAllowDownloadFee](#DoesTheGPLAllowDownloadFee))

  Yes. You can charge any fee you wish for distributing a copy of the program. Under GPLv2, if you distribute binaries by download, you must provide “equivalent access” to download the source—therefore, the fee to download source may not be greater than the fee to download the binary. If the binaries being distributed are licensed under the GPLv3, then you must offer equivalent access to the source code in the same way through the same place at no further charge.
- Does the GPL allow me to require that anyone who receives the software must pay me a fee and/or notify me? ([#DoesTheGPLAllowRequireFee](#DoesTheGPLAllowRequireFee))

  No. In fact, a requirement like that would make the program nonfree. If people have to pay when they get a copy of a program, or if they have to notify anyone in particular, then the program is not free. See the [definition of free software](/philosophy/free-sw.html).

  The GPL is a free software license, and therefore it permits people to use and even redistribute the software without being required to pay anyone a fee for doing so.

  You *can* charge people a fee to [get a copy](#DoesTheGPLAllowMoney) *from you*. You can't require people to pay you when they get a copy *from someone else*.

- If I distribute GPLed software for a fee, am I required to also make it available to the public without a charge? ([#DoesTheGPLRequireAvailabilityToPublic](#DoesTheGPLRequireAvailabilityToPublic))

  No. However, if someone pays your fee and gets a copy, the GPL gives them the freedom to release it to the public, with or without a fee. For example, someone could pay your fee, and then put her copy on a web site for the general public.
- Does the GPL allow me to distribute copies under a nondisclosure agreement? ([#DoesTheGPLAllowNDA](#DoesTheGPLAllowNDA))

  No. The GPL says that anyone who receives a copy from you has the right to redistribute copies, modified or not. You are not allowed to distribute the work on any more restrictive basis.

  If someone asks you to sign an NDA for receiving GPL-covered software copyrighted by the FSF, please inform us immediately by writing to [[email protected]](mailto:[email protected]).

  If the violation involves GPL-covered code that has some other copyright holder, please inform that copyright holder, just as you would for any other kind of violation of the GPL.
- Does the GPL allow me to distribute a modified or beta version under a nondisclosure agreement? ([#DoesTheGPLAllowModNDA](#DoesTheGPLAllowModNDA))

  No. The GPL says that your modified versions must carry all the freedoms stated in the GPL. Thus, anyone who receives a copy of your version from you has the right to redistribute copies (modified or not) of that version. You may not distribute any version of the work on a more restrictive basis.
- Does the GPL allow me to develop a modified version under a nondisclosure agreement? ([#DevelopChangesUnderNDA](#DevelopChangesUnderNDA))

  Yes. For instance, you can accept a contract to develop changes and agree not to release *your changes* until the client says ok. This is permitted because in this case no GPL-covered code is being distributed under an NDA.

  You can also release your changes to the client under the GPL, but agree not to release them to anyone else unless the client says ok. In this case, too, no GPL-covered code is being distributed under an NDA, or under any additional restrictions.

  The GPL would give the client the right to redistribute your version. In this scenario, the client will probably choose not to exercise that right, but does *have* the right.

- I want to get credit for my work. I want people to know what I wrote. Can I still get credit if I use the GPL? ([#IWantCredit](#IWantCredit))

  You can certainly get credit for the work. Part of releasing a program under the GPL is writing a copyright notice in your own name (assuming you are the copyright holder). The GPL requires all copies to carry an appropriate copyright notice.
- Does the GPL allow me to add terms that would require citation or acknowledgment in research papers which use the GPL-covered software or its output? ([#RequireCitation](#RequireCitation))

  No, this is not permitted under the terms of the GPL. While we recognize that proper citation is an important part of academic publications, citation cannot be added as an additional requirement to the GPL. Requiring citation in research papers which made use of GPLed software goes beyond what would be an acceptable additional requirement under section 7(b) of GPLv3, and therefore would be considered an additional restriction under Section 7 of the GPL. And copyright law does not allow you to place such a [requirement on the output of software](#GPLOutput), regardless of whether it is licensed under the terms of the GPL or some other license.

- Why does the GPL require including a copy of the GPL with every copy of the program? ([#WhyMustIInclude](#WhyMustIInclude))

  Including a copy of the license with the work is vital so that everyone who gets a copy of the program can know what their rights are.

  It might be tempting to include a URL that refers to the license, instead of the license itself. But you cannot be sure that the URL will still be valid, five years or ten years from now. Twenty years from now, URLs as we know them today may no longer exist.

  The only way to make sure that people who have copies of the program will continue to be able to see the license, despite all the changes that will happen in the network, is to include a copy of the license in the program.
- Is it enough just to put a copy
of the GNU GPL in my repository?
(
[#LicenseCopyOnly](#LicenseCopyOnly)) Just putting a copy of the GNU GPL in a file in your repository does not explicitly state that the code in the same repository may be used under the GNU GPL. Without such a statement, it's not entirely clear that the permissions in the license really apply to any particular source file. An explicit statement saying that eliminates all doubt.
A file containing just a license, without a statement that certain other files are covered by that license, resembles a file containing just a subroutine which is never called from anywhere else. The resemblance is not perfect: lawyers and courts might apply common sense and conclude that you must have put the copy of the GNU GPL there because you wanted to license the code that way. Or they might not. Why leave an uncertainty?
This statement should be in each source file. A clear statement in the program's README file is legally sufficient
*as long as that accompanies the code*, but it is easy for them to get separated. Why take a risk of [uncertainty about your code's license](#NoticeInSourceFile)?
This has nothing to do with the specifics of the GNU GPL. It is true for any free license.
- Why should I put a license notice in each
source file?
(
[#NoticeInSourceFile](#NoticeInSourceFile)) You should put a notice at the start of each source file, stating what license it carries, in order to avoid risk of the code's getting disconnected from its license. If your repository's README says that source file is under the GNU GPL, what happens if someone copies that file to another program? That other context may not show what the file's license is. It may appear to have some other license, or
[no license at all](/licenses/license-list.html#NoLicense) (which would make the code nonfree).
Adding a copyright notice and a license notice at the start of each source file is easy and makes such confusion unlikely.
This has nothing to do with the specifics of the GNU GPL. It is true for any free license.
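For reference, a typical per-file notice of the kind the GNU GPL how-to recommends looks like this (the bracketed placeholders are to be replaced with your own details; put the text in a comment at the top of each source file):

```
Copyright (C) <year>  <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
```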
- What if the work is not very long?
(
[#WhatIfWorkIsShort](#WhatIfWorkIsShort)) If a whole software package contains very little code—less than 300 lines is the benchmark we use—you may as well use a lax permissive license for it, rather than a copyleft license like the GNU GPL. (Unless, that is, the code is specially important.) We
[recommend the Apache License 2.0](/licenses/license-recommendations.html#software) for such cases.
- Can I omit the preamble of the GPL, or the
instructions for how to use it on your own programs, to save space?
(
[#GPLOmitPreamble](#GPLOmitPreamble)) The preamble and instructions are integral parts of the GNU GPL and may not be omitted. In fact, the GPL is copyrighted, and its license permits only verbatim copying of the entire GPL. (You can use the legal terms to make
[another license](#ModifyGPL) but it won't be the GNU GPL.)
The preamble and instructions add up to some 1000 words, less than 1/5 of the GPL's total size. They will not make a substantial fractional change in the size of a software package unless the package itself is quite small. In that case, you may as well use a simple all-permissive license rather than the GNU GPL.
- What does it
mean to say that two licenses are “compatible”?
(
[#WhatIsCompatible](#WhatIsCompatible)) In order to combine two programs (or substantial parts of them) into a larger work, you need to have permission to use both programs in this way. If the two programs' licenses permit this, they are compatible. If there is no way to satisfy both licenses at once, they are incompatible.
For some licenses, the way in which the combination is made may affect whether they are compatible—for instance, they may allow linking two modules together, but not allow merging their code into one module.
If you just want to install two separate programs in the same system, it is not necessary that their licenses be compatible, because this does not combine them into a larger work.
- What does it mean to say a license is
“compatible with the GPL?”
(
[#WhatDoesCompatMean](#WhatDoesCompatMean)) It means that the other license and the GNU GPL are compatible; you can combine code released under the other license with code released under the GNU GPL in one larger program.
All GNU GPL versions permit such combinations privately; they also permit distribution of such combinations provided the combination is released under the same GNU GPL version. The other license is compatible with the GPL if it permits this too.
GPLv3 is compatible with more licenses than GPLv2: it allows you to make combinations with code that has specific kinds of additional requirements that are not in GPLv3 itself. Section 7 has more information about this, including the list of additional requirements that are permitted.
- Can I write
free software that uses nonfree libraries?
(
[#FSWithNFLibs](#FSWithNFLibs)) If you do this, your program won't be fully usable in a free environment. If your program depends on a nonfree library to do a certain job, it cannot do that job in the Free World. If it depends on a nonfree library to run at all, it cannot be part of a free operating system such as GNU; it is entirely off limits to the Free World.
So please consider: can you find a way to get the job done without using this library? Can you write a free replacement for that library?
If the program is already written using the nonfree library, perhaps it is too late to change the decision. You may as well release the program as it stands, rather than not release it. But please mention in the README that the need for the nonfree library is a drawback, and suggest the task of changing the program so that it does the same job without the nonfree library. Please suggest that anyone who thinks of doing substantial further work on the program first free it from dependence on the nonfree library.
Note that there may also be legal issues with combining certain nonfree libraries with GPL-covered free software. Please see
[the question on GPL software with GPL-incompatible libraries](#GPLIncompatibleLibs) for more information.
- Can I link a GPL program with a
proprietary system library? (
[#SystemLibraryException](#SystemLibraryException)) Both versions of the GPL have an exception to their copyleft, commonly called the system library exception. If the GPL-incompatible libraries you want to use meet the criteria for a system library, then you don't have to do anything special to use them; the requirement to distribute source code for the whole program does not include those libraries, even if you distribute a linked executable containing them.
The criteria for what counts as a “system library” vary between different versions of the GPL. GPLv3 explicitly defines “System Libraries” in section 1, to exclude it from the definition of “Corresponding Source.” GPLv2 deals with this issue slightly differently, near the end of section 3.
- In what ways can I link or combine
AGPLv3-covered and GPLv3-covered code?
(
[#AGPLGPL](#AGPLGPL)) Each of these licenses explicitly permits linking with code under the other license. You can always link GPLv3-covered modules with AGPLv3-covered modules, and vice versa. That is true regardless of whether some of the modules are libraries.
- What legal issues
come up if I use GPL-incompatible libraries with GPL software?
(
[#GPLIncompatibleLibs](#GPLIncompatibleLibs))
If you want your program to link against a library not covered by the system library exception, you need to provide permission to do that. Below are two example license notices that you can use to do that; one for GPLv3, and the other for GPLv2. In either case, you should put this text in each file to which you are granting this permission.
Only the copyright holders for the program can legally release their software under these terms. If you wrote the whole program yourself, then assuming your employer or school does not claim the copyright, you are the copyright holder—so you can authorize the exception. But if you want to use parts of other GPL-covered programs by other authors in your code, you cannot authorize the exception for them. You have to get the approval of the copyright holders of those programs.
When other people modify the program, they do not have to make the same exception for their code—it is their choice whether to do so.
If the libraries you intend to link with are nonfree, please also see
[the section on writing Free Software which uses nonfree libraries](#FSWithNFLibs).
If you're using GPLv3, you can accomplish this goal by granting an additional permission under section 7. The following license notice will do that. You must replace all the text in brackets with text that is appropriate for your program. If not everybody can distribute source for the libraries you intend to link with, you should remove the text in braces; otherwise, just remove the braces themselves.
Copyright (C) `[years]` `[name of copyright holder]`
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see <https://www.gnu.org/licenses>.
Additional permission under GNU GPL version 3 section 7
If you modify this Program, or any covered work, by linking or combining it with
`[name of library]` (or a modified version of that library), containing parts covered by the terms of `[name of library's license]`, the licensors of this Program grant you additional permission to convey the resulting work. {Corresponding Source for a non-source form of such a combination shall include the source code for the parts of `[name of library]` used as well as that of the covered work.}
If you're using GPLv2, you can provide your own exception to the license's terms. The following license notice will do that. Again, you must replace all the text in brackets with text that is appropriate for your program. If not everybody can distribute source for the libraries you intend to link with, you should remove the text in braces; otherwise, just remove the braces themselves.
Copyright (C) `[years]` `[name of copyright holder]`
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see <https://www.gnu.org/licenses>.
Linking `[name of your program]` statically or dynamically with other modules is making a combined work based on `[name of your program]`. Thus, the terms and conditions of the GNU General Public License cover the whole combination.
In addition, as a special exception, the copyright holders of `[name of your program]` give you permission to combine `[name of your program]` with free software programs or libraries that are released under the GNU LGPL and with code included in the standard release of `[name of library]` under the `[name of library's license]` (or modified versions of such code, with unchanged license). You may copy and distribute such a system following the terms of the GNU GPL for `[name of your program]` and the licenses of the other code concerned{, provided that you include the source code of that other code when and as the GNU GPL requires distribution of source code}.
Note that people who make modified versions of `[name of your program]` are not obligated to grant this special exception for their modified versions; it is their choice whether to do so. The GNU General Public License gives permission to release a modified version without this exception; this exception also makes it possible to release a modified version which carries forward this exception.
- How do I get a copyright on my program
in order to release it under the GPL?
(
[#HowIGetCopyright](#HowIGetCopyright)) Under the Berne Convention, everything written is automatically copyrighted from whenever it is put in fixed form. So you don't have to do anything to “get” the copyright on what you write—as long as nobody else can claim to own your work.
However, registering the copyright in the US is a very good idea. It will give you more clout in dealing with an infringer in the US.
The case when someone else might possibly claim the copyright is if you are an employee or student; then the employer or the school might claim you did the job for them and that the copyright belongs to them. Whether they would have a valid claim would depend on circumstances such as the laws of the place where you live, and on your employment contract and what sort of work you do. It is best to consult a lawyer if there is any possible doubt.
If you think that the employer or school might have a claim, you can resolve the problem clearly by getting a copyright disclaimer signed by a suitably authorized officer of the company or school. (Your immediate boss or a professor is usually NOT authorized to sign such a disclaimer.)
- What if my school
might want to make my program into its own proprietary software product?
(
[#WhatIfSchool](#WhatIfSchool)) Many universities nowadays try to raise funds by restricting the use of the knowledge and information they develop, in effect behaving little different from commercial businesses. (See “The Kept University”, Atlantic Monthly, March 2000, for a general discussion of this problem and its effects.)
If you see any chance that your school might refuse to allow your program to be released as free software, it is best to raise the issue at the earliest possible stage. The closer the program is to working usefully, the more temptation the administration might feel to take it from you and finish it without you. At an earlier stage, you have more leverage.
So we recommend that you approach them when the program is only half-done, saying, “If you will agree to releasing this as free software, I will finish it.” Don't think of this as a bluff. To prevail, you must have the courage to say, “My program will have liberty, or never be born.”
- Could
you give me step by step instructions on how to apply the GPL to my program?
(
[#CouldYouHelpApplyGPL](#CouldYouHelpApplyGPL)) See the page of
[GPL instructions](/licenses/gpl-howto.html).
- I heard that someone got a copy
of a GPLed program under another license. Is this possible?
(
[#HeardOtherLicense](#HeardOtherLicense)) The GNU GPL does not give users permission to attach other licenses to the program. But the copyright holder for a program can release it under several different licenses in parallel. One of them may be the GNU GPL.
The license that comes in your copy, assuming it was put in by the copyright holder and that you got the copy legitimately, is the license that applies to your copy.
- I would like to release a program I wrote
under the GNU GPL, but I would
like to use the same code in nonfree programs.
(
[#ReleaseUnderGPLAndNF](#ReleaseUnderGPLAndNF)) To release a nonfree program is always ethically tainted, but legally there is no obstacle to your doing this. If you are the copyright holder for the code, you can release it under various different non-exclusive licenses at various times.
- Is the
developer of a GPL-covered program bound by the GPL? Could the
developer's actions ever be a violation of the GPL?
(
[#DeveloperViolate](#DeveloperViolate)) Strictly speaking, the GPL is a license from the developer for others to use, distribute and change the program. The developer itself is not bound by it, so no matter what the developer does, this is not a “violation” of the GPL.
However, if the developer does something that would violate the GPL if done by someone else, the developer will surely lose moral standing in the community.
- Can the developer of a program who distributed
it under the GPL later license it to another party for exclusive use?
(
[#CanDeveloperThirdParty](#CanDeveloperThirdParty)) No, because the public already has the right to use the program under the GPL, and this right cannot be withdrawn.
- Can I use GPL-covered editors such as
GNU Emacs to develop nonfree programs? Can I use GPL-covered tools
such as GCC to compile them?
(
[#CanIUseGPLToolsForNF](#CanIUseGPLToolsForNF)) Yes, because the copyright on the editors and tools does not cover the code you write. Using them does not place any restrictions, legally, on the license you use for your code.
Some programs copy parts of themselves into the output for technical reasons—for example, Bison copies a standard parser program into its output file. In such cases, the copied text in the output is covered by the same license that covers it in the source code. Meanwhile, the part of the output which is derived from the program's input inherits the copyright status of the input.
As it happens, Bison can also be used to develop nonfree programs. This is because we decided to explicitly permit the use of the Bison standard parser program in Bison output files without restriction. We made the decision because there were other tools comparable to Bison which already permitted use for nonfree programs.
- Do I have “fair use”
rights in using the source code of a GPL-covered program?
(
[#GPLFairUse](#GPLFairUse)) Yes, you do. “Fair use” is use that is allowed without any special permission. Since you don't need the developers' permission for such use, you can do it regardless of what the developers said about it—in the license or elsewhere, whether that license be the GNU GPL or any other free software license.
Note, however, that there is no world-wide principle of fair use; what kinds of use are considered “fair” varies from country to country.
- Can the US Government release a program under the GNU GPL?
(
[#GPLUSGov](#GPLUSGov)) If the program is written by US federal government employees in the course of their employment, it is in the public domain, which means it is not copyrighted. Since the GNU GPL is based on copyright, such a program cannot be released under the GNU GPL. (It can still be
[free software](/philosophy/free-sw.html), however; a public domain program is free.)
However, when a US federal government agency uses contractors to develop software, that is a different situation. The contract can require the contractor to release it under the GNU GPL. (GNU Ada was developed in this way.) Or the contract can assign the copyright to the government agency, which can then release the software under the GNU GPL.
- Can the US Government
release improvements to a GPL-covered program?
(
[#GPLUSGovAdd](#GPLUSGovAdd)) Yes. If the improvements are written by US government employees in the course of their employment, then the improvements are in the public domain. However, the improved version, as a whole, is still covered by the GNU GPL. There is no problem in this situation.
If the US government uses contractors to do the job, then the improvements themselves can be GPL-covered.
- Does the GPL have different requirements
for statically vs dynamically linked modules with a covered
work? (
[#GPLStaticVsDynamic](#GPLStaticVsDynamic)) No. Linking a GPL covered work statically or dynamically with other modules is making a combined work based on the GPL covered work. Thus, the terms and conditions of the GNU General Public License cover the whole combination. See also
[What legal issues come up if I use GPL-incompatible libraries with GPL software?](#GPLIncompatibleLibs)
- Does the LGPL have different requirements
for statically vs dynamically linked modules with a covered
work? (
[#LGPLStaticVsDynamic](#LGPLStaticVsDynamic)) For the purpose of complying with the LGPL (any extant version: v2, v2.1 or v3):
(1) If you statically link against an LGPLed library, you must also provide your application in an object (not necessarily source) format, so that a user has the opportunity to modify the library and relink the application.
(2) If you dynamically link against an LGPLed library
*already present on the user's computer*, you need not convey the library's source. On the other hand, if you yourself convey the executable LGPLed library along with your application, whether linked statically or dynamically, you must also convey the library's sources, in one of the ways for which the LGPL provides.
- Is there some way that
I can GPL the output people get from use of my program? For example,
if my program is used to develop hardware designs, can I require that
these designs must be free?
(
[#GPLOutput](#GPLOutput)) In general this is legally impossible; copyright law does not give you any say in the use of the output people make from their data using your program. If the user uses your program to enter or convert her own data, the copyright on the output belongs to her, not you. More generally, when a program translates its input into some other form, the copyright status of the output inherits that of the input it was generated from.
So the only way you have a say in the use of the output is if substantial parts of the output are copied (more or less) from text in your program. For instance, part of the output of Bison (see above) would be covered by the GNU GPL, if we had not made an exception in this specific case.
You could artificially make a program copy certain text into its output even if there is no technical reason to do so. But if that copied text serves no practical purpose, the user could simply delete that text from the output and use only the rest. Then he would not have to obey the conditions on redistribution of the copied text.
- In what cases is the output of a GPL
program covered by the GPL too?
(
[#WhatCaseIsOutputGPL](#WhatCaseIsOutputGPL)) The output of a program is not, in general, covered by the copyright on the code of the program. So the license of the code of the program does not apply to the output, whether you pipe it into a file, make a screenshot, screencast, or video.
The exception would be when the program displays a full screen of text and/or art that comes from the program. Then the copyright on that text and/or art covers the output. Programs that output audio, such as video games, would also fit into this exception.
If the art/music is under the GPL, then the GPL applies when you copy it no matter how you copy it. However,
[fair use](#GPLFairUse) may still apply.
Keep in mind that some programs, particularly video games, can have artwork/audio that is licensed separately from the underlying GPLed game. In such cases, the license on the artwork/audio would dictate the terms under which video/streaming may occur. See also:
[Can I use the GPL for something other than software?](#GPLOtherThanSoftware)
- If I add a module to a GPL-covered program,
do I have to use the GPL as the license for my module?
(
[#GPLModuleLicense](#GPLModuleLicense)) The GPL says that the whole combined program has to be released under the GPL. So your module has to be available for use under the GPL.
But you can give additional permission for the use of your code. You can, if you wish, release your module under a license which is more lax than the GPL but compatible with the GPL. The
[license list page](/licenses/license-list.html) gives a partial list of GPL-compatible licenses.
- If a library is released under the GPL
(not the LGPL), does that mean that any software which uses it
has to be under the GPL or a GPL-compatible license?
(
[#IfLibraryIsGPL](#IfLibraryIsGPL)) Yes, because the program actually links to the library. As such, the terms of the GPL apply to the entire combination. The software modules that link with the library may be under various GPL compatible licenses, but the work as a whole must be licensed under the GPL. See also:
[What does it mean to say a license is “compatible with the GPL”?](#WhatDoesCompatMean)
- If a programming language interpreter
is released under the GPL, does that mean programs written to be
interpreted by it must be under GPL-compatible licenses?
(
[#IfInterpreterIsGPL](#IfInterpreterIsGPL)) When the interpreter just interprets a language, the answer is no. The interpreted program, to the interpreter, is just data; a free software license like the GPL, based on copyright law, cannot limit what data you use the interpreter on. You can run it on any data (interpreted program), any way you like, and there are no requirements about licensing that data to anyone.
However, when the interpreter is extended to provide “bindings” to other facilities (often, but not necessarily, libraries), the interpreted program is effectively linked to the facilities it uses through these bindings. So if these facilities are released under the GPL, the interpreted program that uses them must be released in a GPL-compatible way. The JNI or Java Native Interface is an example of such a binding mechanism; libraries that are accessed in this way are linked dynamically with the Java programs that call them. These libraries are also linked with the interpreter. If the interpreter is linked statically with these libraries, or if it is designed to
[link dynamically with these specific libraries](#GPLPluginsInNF), then it too needs to be released in a GPL-compatible way.
Another similar and very common case is to provide libraries with the interpreter which are themselves interpreted. For instance, Perl comes with many Perl modules, and a Java implementation comes with many Java classes. These libraries and the programs that call them are always dynamically linked together.
A consequence is that if you choose to use GPLed Perl modules or Java classes in your program, you must release the program in a GPL-compatible way, regardless of the license used in the Perl or Java interpreter that the combined Perl or Java program will run on.
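The binding mechanism described above can be illustrated with a small sketch (not part of the FAQ itself): Python's `ctypes` module loading the C math library at run time and calling directly into it, much as JNI links Java programs with native libraries. The library lookup is platform-dependent.

```python
import ctypes
import ctypes.util

# Locate and load the C math library at run time
# (find_library may return None on some platforms).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the signature of the function we call across the binding.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

# The interpreted program is now effectively linked to the library.
print(libm.cos(0.0))  # 1.0
```

In this picture, the Python program is "just data" to the interpreter, but once it calls `libm.cos` through the binding, it is effectively linked to that library.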
- I'm writing a Windows application with
Microsoft Visual C++ (or Visual Basic) and I will be releasing it
under the GPL. Is dynamically linking my program with the Visual
C++ (or Visual Basic) runtime library permitted under the GPL?
(
[#WindowsRuntimeAndGPL](#WindowsRuntimeAndGPL)) You may link your program to these libraries, and distribute the compiled program to others. When you do this, the runtime libraries are “System Libraries” as GPLv3 defines them. That means that you don't need to worry about including their source code with the program's Corresponding Source. GPLv2 provides a similar exception in section 3.
You may not distribute these libraries in compiled DLL form with the program. To prevent unscrupulous distributors from trying to use the System Library exception as a loophole, the GPL says that libraries can only qualify as System Libraries as long as they're not distributed with the program itself. If you distribute the DLLs with the program, they won't be eligible for this exception anymore; then the only way to comply with the GPL would be to provide their source code, which you are unable to do.
It is possible to write free programs that only run on Windows, but it is not a good idea. These programs would be “
[trapped](/philosophy/java-trap.html)” by Windows, and therefore contribute zero to the Free World.
- Why is the original BSD
license incompatible with the GPL?
(
[#OrigBSD](#OrigBSD)) Because it imposes a specific requirement that is not in the GPL; namely, the requirement on advertisements of the program. Section 6 of GPLv2 states:
You may not impose any further restrictions on the recipients' exercise of the rights granted herein.
GPLv3 says something similar in section 10. The advertising clause provides just such a further restriction, and thus is GPL-incompatible.
The revised BSD license does not have the advertising clause, which eliminates the problem.
- When is a program and its plug-ins considered a single combined program?
(
[#GPLPlugins](#GPLPlugins)) It depends on how the main program invokes its plug-ins. If the main program uses fork and exec to invoke plug-ins, and they establish intimate communication by sharing complex data structures, or shipping complex data structures back and forth, that can make them one single combined program. A main program that uses simple fork and exec to invoke plug-ins and does not establish intimate communication between them results in the plug-ins being a separate program.
If the main program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single combined program, which must be treated as an extension of both the main program and the plug-ins. If the main program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.
Using shared memory to communicate with complex data structures is pretty much equivalent to dynamic linking.
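As an illustrative sketch (not part of the FAQ itself), the two communication styles can be contrasted in Python; the specific commands are hypothetical examples, not a legal test.

```python
import subprocess
import sys

# Arm's-length: run a separate program and exchange only simple data
# over its command line and standard output. Under the criteria above,
# the two remain separate programs.
result = subprocess.run([sys.executable, "-c", "print(2 + 3)"],
                        capture_output=True, text=True)
print(result.stdout.strip())  # 5

# Intimate: load a module into the same process and call its functions,
# sharing data structures directly -- the analogue of dynamic linking,
# which makes one combined program.
import json  # stands in for a dynamically loaded plug-in
data = {"answer": 42}
print(json.loads(json.dumps(data)) == data)  # True
```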
- If I write a plug-in to use with a GPL-covered
program, what requirements does that impose on the licenses I can
use for distributing my plug-in?
(
[#GPLAndPlugins](#GPLAndPlugins)) Please see this question
[for determining when plug-ins and a main program are considered a single combined program and when they are considered separate works](#GPLPlugins).
If the main program and the plugins are a single combined program then this means you must license the plug-in under the GPL or a GPL-compatible free software license and distribute it with source code in a GPL-compliant way. A main program that is separate from its plug-ins makes no requirements for the plug-ins.
- Can I apply the
GPL when writing a plug-in for a nonfree program?
(
[#GPLPluginsInNF](#GPLPluginsInNF)) Please see this question
[for determining when plug-ins and a main program are considered a single combined program and when they are considered separate programs](#GPLPlugins).
If they form a single combined program this means that combination of the GPL-covered plug-in with the nonfree main program would violate the GPL. However, you can resolve that legal problem by adding an exception to your plug-in's license, giving permission to link it with the nonfree main program.
See also the question
[I am writing free software that uses a nonfree library.](#FSWithNFLibs)
- Can I release a nonfree program
that's designed to load a GPL-covered plug-in?
(
[#NFUseGPLPlugins](#NFUseGPLPlugins)) Please see this question
[for determining when plug-ins and a main program are considered a single combined program and when they are considered separate programs](#GPLPlugins).
If they form a single combined program then the main program must be released under the GPL or a GPL-compatible free software license, and the terms of the GPL must be followed when the main program is distributed for use with these plug-ins.
However, if they are separate works then the license of the plug-in makes no requirements about the main program.
See also the question
[I am writing free software that uses a nonfree library.](#FSWithNFLibs)
- You have a GPLed program that I'd like
to link with my code to build a proprietary program. Does the fact
that I link with your program mean I have to GPL my program?
(
[#LinkingWithGPL](#LinkingWithGPL)) Not exactly. It means you must release your program under a license compatible with the GPL (more precisely, compatible with one or more GPL versions accepted by all the rest of the code in the combination that you link). The combination itself is then available under those GPL versions.
- If so, is there
any chance I could get a license of your program under the Lesser GPL?
(
[#SwitchToLGPL](#SwitchToLGPL)) You can ask, but most authors will stand firm and say no. The idea of the GPL is that if you want to include our code in your program, your program must also be free software. It is supposed to put pressure on you to release your program in a way that makes it part of our community.
You always have the legal alternative of not using our code.
- Does distributing a nonfree driver
meant to link with the kernel Linux violate the GPL?
(
[#NonfreeDriverKernelLinux](#NonfreeDriverKernelLinux)) Linux (the kernel in the GNU/Linux operating system) is distributed under GNU GPL version 2. Does distributing a nonfree driver meant to link with Linux violate the GPL?
Yes, this is a violation, because effectively this makes a larger combined work. The fact that the user is expected to put the pieces together does not really change anything.
Each contributor to Linux who holds copyright on a substantial part of the code can enforce the GPL and we encourage each of them to take action against those distributing nonfree Linux-drivers.
- How can I allow linking of
proprietary modules with my GPL-covered library under a controlled
interface only?
(
[#LinkingOverControlledInterface](#LinkingOverControlledInterface)) Add this text to the license notice of each file in the package, at the end of the text that says the file is distributed under the GNU GPL:
Linking ABC statically or dynamically with other modules is making a combined work based on ABC. Thus, the terms and conditions of the GNU General Public License cover the whole combination.
As a special exception, the copyright holders of ABC give you permission to combine ABC program with free software programs or libraries that are released under the GNU LGPL and with independent modules that communicate with ABC solely through the ABCDEF interface. You may copy and distribute such a system following the terms of the GNU GPL for ABC and the licenses of the other code concerned, provided that you include the source code of that other code when and as the GNU GPL requires distribution of source code and provided that you do not modify the ABCDEF interface.
Note that people who make modified versions of ABC are not obligated to grant this special exception for their modified versions; it is their choice whether to do so. The GNU General Public License gives permission to release a modified version without this exception; this exception also makes it possible to release a modified version which carries forward this exception. If you modify the ABCDEF interface, this exception does not apply to your modified version of ABC, and you must remove this exception when you distribute your modified version.
This exception is an additional permission under section 7 of the GNU General Public License, version 3 (“GPLv3”).
This exception enables linking with differently licensed modules over the specified interface (“ABCDEF”), while ensuring that users would still receive source code as they normally would under the GPL.
Only the copyright holders for the program can legally authorize this exception. If you wrote the whole program yourself, then assuming your employer or school does not claim the copyright, you are the copyright holder—so you can authorize the exception. But if you want to use parts of other GPL-covered programs by other authors in your code, you cannot authorize the exception for them. You have to get the approval of the copyright holders of those programs.
- I have written an application that links
with many different components, that have different licenses. I am
very confused as to what licensing requirements are placed on my
program. Can you please tell me what licenses I may use?
(
[#ManyDifferentLicenses](#ManyDifferentLicenses)) To answer this question, we would need to see a list of each component that your program uses, the license of that component, and a brief description (a few sentences for each should suffice) of how your library uses that component. Two examples would be:
- To make my software work, it must be linked to the FOO library, which is available under the Lesser GPL.
- My software makes a system call (with a command line that I built) to run the BAR program, which is licensed under “the GPL, with a special exception allowing for linking with QUUX”.
- What is the difference between an
“aggregate” and other kinds of “modified versions”?
(
[#MereAggregation](#MereAggregation)) An “aggregate” consists of a number of separate programs, distributed together on the same CD-ROM or other media. The GPL permits you to create and distribute an aggregate, even when the licenses of the other software are nonfree or GPL-incompatible. The only condition is that you cannot release the aggregate under a license that prohibits users from exercising rights that each program's individual license would grant them.
Where's the line between two separate programs, and one program with two parts? This is a legal question, which ultimately judges will decide. We believe that a proper criterion depends both on the mechanism of communication (exec, pipes, rpc, function calls within a shared address space, etc.) and the semantics of the communication (what kinds of information are interchanged).
If the modules are included in the same executable file, they are definitely combined in one program. If modules are designed to run linked together in a shared address space, that almost surely means combining them into one program.
By contrast, pipes, sockets and command-line arguments are communication mechanisms normally used between two separate programs. So when they are used for communication, the modules normally are separate programs. But if the semantics of the communication are intimate enough, exchanging complex internal data structures, that too could be a basis to consider the two parts as combined into a larger program.
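As a hypothetical illustration of the arm's-length communication mechanisms the answer above mentions (exec and pipes, as opposed to function calls in a shared address space), the sketch below has a main program run a separate program as its own process and exchange only simple data with it over stdin/stdout. The child's code here is a stand-in, not any particular GPL-covered tool.

```python
import subprocess
import sys

# Arm's-length communication: the main program runs a separate program as its
# own process and talks to it only through a pipe. Per the FAQ answer above,
# modules communicating this way are normally separate programs.
child_code = "import sys; print(len(sys.stdin.read()))"  # stand-in for a separate tool

result = subprocess.run(
    [sys.executable, "-c", child_code],
    input="hello world",
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # prints "11": only a simple value crossed the pipe

# By contrast, importing a module and calling its functions in a shared address
# space would normally combine the two into one program.
```

Whether such communication keeps two works separate also depends on its semantics: exchanging complex internal data structures over a pipe could still make the parts one program.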
- When it comes to determining
whether two pieces of software form a single work, does the fact
that the code is in one or more containers have any effect?
(
[#AggregateContainers](#AggregateContainers)) No, the analysis of whether they are a
[single work or an aggregate](#MereAggregation) is unchanged by the involvement of containers.
- Why does
the FSF require that contributors to FSF-copyrighted programs assign
copyright to the FSF? If I hold copyright on a GPLed program, should
I do this, too? If so, how?
(
[#AssignCopyright](#AssignCopyright)) Our lawyers have told us that to be in the
[best position to enforce the GPL](/licenses/why-assign.html) in court against violators, we should keep the copyright status of the program as simple as possible. We do this by asking each contributor to either assign the copyright on contributions to the FSF, or disclaim copyright on contributions.
We also ask individual contributors to get copyright disclaimers from their employers (if any) so that we can be sure those employers won't claim to own the contributions.
Of course, if all the contributors put their code in the public domain, there is no copyright with which to enforce the GPL. So we encourage people to assign copyright on large code contributions, and only put small changes in the public domain.
If you want to make an effort to enforce the GPL on your program, it is probably a good idea for you to follow a similar policy. Please contact
[<[email protected]>](mailto:[email protected])if you want more information.- Can I modify the GPL
and make a modified license?
(
[#ModifyGPL](#ModifyGPL)) It is possible to make modified versions of the GPL, but it tends to have practical consequences.
You can legally use the GPL terms (possibly modified) in another license provided that you call your license by another name and do not include the GPL preamble, and provided you modify the instructions-for-use at the end enough to make it clearly different in wording and not mention GNU (though the actual procedure you describe may be similar).
If you want to use our preamble in a modified license, please write to
[<[email protected]>](mailto:[email protected])for permission. For this purpose we would want to check the actual license requirements to see if we approve of them.Although we will not raise legal objections to your making a modified license in this way, we hope you will think twice and not do it. Such a modified license is almost certainly
[incompatible with the GNU GPL](#WhatIsCompatible), and that incompatibility blocks useful combinations of modules. The mere proliferation of different free software licenses is a burden in and of itself.Rather than modifying the GPL, please use the exception mechanism offered by GPL version 3.
- If I use a
piece of software that has been obtained under the GNU GPL, am I
allowed to modify the original code into a new program, then
distribute and sell that new program commercially?
(
[#GPLCommercially](#GPLCommercially)) You are allowed to sell copies of the modified program commercially, but only under the terms of the GNU GPL. Thus, for instance, you must make the source code available to the users of the program as described in the GPL, and they must be allowed to redistribute and modify it as described in the GPL.
These requirements are the condition for including the GPL-covered code you received in a program of your own.
- Can I use the GPL for something other than
software?
(
[#GPLOtherThanSoftware](#GPLOtherThanSoftware)) You can apply the GPL to any kind of work, as long as it is clear what constitutes the “source code” for the work. The GPL defines this as the preferred form of the work for making changes in it.
However, for manuals and textbooks, or more generally any sort of work that is meant to teach a subject, we recommend using the GFDL rather than the GPL.
- How does the LGPL work with Java?
(
[#LGPLJava](#LGPLJava)) [See this article for details.](/licenses/lgpl-java.html) It works as designed, intended, and expected.
- Consider this situation:
1) X releases V1 of a project under the GPL.
2) Y contributes to the development of V2 with changes and new code
based on V1.
3) X wants to convert V2 to a non-GPL license.
Does X need Y's permission?
(
[#Consider](#Consider)) Yes. Y was required to release its version under the GNU GPL, as a consequence of basing it on X's version V1. Nothing required Y to agree to any other license for its code. Therefore, X must get Y's permission before releasing that code under another license.
- I'd like to incorporate GPL-covered
software in my proprietary system. I have no permission to use
that software except what the GPL gives me. Can I do this?
(
[#GPLInProprietarySystem](#GPLInProprietarySystem)) You cannot incorporate GPL-covered software in a proprietary system. The goal of the GPL is to grant everyone the freedom to copy, redistribute, understand, and modify a program. If you could incorporate GPL-covered software into a nonfree system, it would have the effect of making the GPL-covered software nonfree too.
A system incorporating a GPL-covered program is an extended version of that program. The GPL says that any extended version of the program must be released under the GPL if it is released at all. This is for two reasons: to make sure that users who get the software get the freedom they should have, and to encourage people to give back improvements that they make.
However, in many cases you can distribute the GPL-covered software alongside your proprietary system. To do this validly, you must make sure that the free and nonfree programs communicate at arms length, that they are not combined in a way that would make them effectively a single program.
The difference between this and “incorporating” the GPL-covered software is partly a matter of substance and partly form. The substantive part is this: if the two programs are combined so that they become effectively two parts of one program, then you can't treat them as two separate programs. So the GPL has to cover the whole thing.
If the two programs remain well separated, like the compiler and the kernel, or like an editor and a shell, then you can treat them as two separate programs—but you have to do it properly. The issue is simply one of form: how you describe what you are doing. Why do we care about this? Because we want to make sure the users clearly understand the free status of the GPL-covered software in the collection.
If people were to distribute GPL-covered software calling it “part of” a system that users know is partly proprietary, users might be uncertain of their rights regarding the GPL-covered software. But if they know that what they have received is a free program plus another program, side by side, their rights will be clear.
- Using a certain GNU program under the
GPL does not fit our project to make proprietary software. Will you
make an exception for us? It would mean more users of that program.
(
[#WillYouMakeAnException](#WillYouMakeAnException)) Sorry, we don't make such exceptions. It would not be right.
Maximizing the number of users is not our aim. Rather, we are trying to give the crucial freedoms to as many users as possible. In general, proprietary software projects hinder rather than help the cause of freedom.
We do occasionally make license exceptions to assist a project which is producing free software under a license other than the GPL. However, we have to see a good reason why this will advance the cause of free software.
We also do sometimes change the distribution terms of a package, when that seems clearly the right way to serve the cause of free software; but we are very cautious about this, so you will have to show us very convincing reasons.
- I'd like to incorporate GPL-covered software in
my proprietary system. Can I do this by putting a “wrapper”
module, under a GPL-compatible lax permissive license (such as the X11
license) in between the GPL-covered part and the proprietary part?
(
[#GPLWrapper](#GPLWrapper)) No. The X11 license is compatible with the GPL, so you can add a module to the GPL-covered program and put it under the X11 license. But if you were to incorporate them both in a larger program, that whole would include the GPL-covered part, so it would have to be licensed
*as a whole* under the GNU GPL.
The fact that proprietary module A communicates with GPL-covered module C only through X11-licensed module B is legally irrelevant; what matters is the fact that module C is included in the whole.
- Where can I learn more about the GCC
Runtime Library Exception?
(
[#LibGCCException](#LibGCCException)) The GCC Runtime Library Exception covers libgcc, libstdc++, libfortran, libgomp, libdecnumber, and other libraries distributed with GCC. The exception is meant to allow people to distribute programs compiled with GCC under terms of their choice, even when parts of these libraries are included in the executable as part of the compilation process. To learn more, please read our
[FAQ about the GCC Runtime Library Exception](/licenses/gcc-exception-faq.html).
- I'd like to
modify GPL-covered programs and link them with the portability
libraries from Money Guzzler Inc. I cannot distribute the source code
for these libraries, so any user who wanted to change these versions
would have to obtain those libraries separately. Why doesn't the
GPL permit this?
(
[#MoneyGuzzlerInc](#MoneyGuzzlerInc)) There are two reasons for this. First, a general one. If we permitted company A to make a proprietary file, and company B to distribute GPL-covered software linked with that file, the effect would be to make a hole in the GPL big enough to drive a truck through. This would be carte blanche for withholding the source code for all sorts of modifications and extensions to GPL-covered software.
Giving all users access to the source code is one of our main goals, so this consequence is definitely something we want to avoid.
More concretely, the versions of the programs linked with the Money Guzzler libraries would not really be free software as we understand the term—they would not come with full source code that enables users to change and recompile the program.
- If the license for a module Q has a
requirement that's incompatible with the GPL,
but the requirement applies only when Q is distributed by itself, not when
Q is included in a larger program, does that make the license
GPL-compatible? Can I combine or link Q with a GPL-covered program?
(
[#GPLIncompatibleAlone](#GPLIncompatibleAlone)) If a program P is released under the GPL, that means *any and every part of it* can be used under the GPL. If you integrate module Q, and release the combined program P+Q under the GPL, that means any part of P+Q can be used under the GPL. One part of P+Q is Q. So releasing P+Q under the GPL says that Q or any part of it can be used under the GPL. Putting it in other words, a user who obtains P+Q under the GPL can delete P, so that just Q remains, still under the GPL.
If the license of module Q permits you to give permission for that, then it is GPL-compatible. Otherwise, it is not GPL-compatible.
If the license for Q says in no uncertain terms that you must do certain things (not compatible with the GPL) when you redistribute Q on its own, then it does not permit you to distribute Q under the GPL. It follows that you can't release P+Q under the GPL either. So you cannot link or combine P with Q.
- Can I release a modified
version of a GPL-covered program in binary form only?
(
[#ModifiedJustBinary](#ModifiedJustBinary)) No. The whole point of the GPL is that all modified versions must be
[free software](/philosophy/free-sw.html)—which means, in particular, that the source code of the modified version is available to the users.
- I
downloaded just the binary from the net. If I distribute copies,
do I have to get the source and distribute that too?
(
[#UnchangedJustBinary](#UnchangedJustBinary)) Yes. The general rule is, if you distribute binaries, you must distribute the complete corresponding source code too. The exception for the case where you received a written offer for source code is quite limited.
- I want to distribute
binaries via physical media without accompanying sources. Can I provide
source code by FTP?
(
[#DistributeWithSourceOnInternet](#DistributeWithSourceOnInternet)) Version 3 of the GPL allows this; see option 6(b) for the full details. Under version 2, you're certainly free to offer source via FTP, and most users will get it from there. However, if any of them would rather get the source on physical media by mail, you are required to provide that.
If you distribute binaries via FTP,
[you should distribute source via FTP.](#AnonFTPAndSendSources)
- My friend got a GPL-covered
binary with an offer to supply source, and made a copy for me.
Can I use the offer myself to obtain the source?
(
[#RedistributedBinariesGetSource](#RedistributedBinariesGetSource)) Yes, you can. The offer must be open to everyone who has a copy of the binary that it accompanies. This is why the GPL says your friend must give you a copy of the offer along with a copy of the binary—so you can take advantage of it.
- Can I put the binaries on my
Internet server and put the source on a different Internet site?
(
[#SourceAndBinaryOnDifferentSites](#SourceAndBinaryOnDifferentSites)) Yes. Section 6(d) allows this. However, you must provide clear instructions people can follow to obtain the source, and you must take care to make sure that the source remains available for as long as you distribute the object code.
- I want to distribute an extended
version of a GPL-covered program in binary form. Is it enough to
distribute the source for the original version?
(
[#DistributeExtendedBinary](#DistributeExtendedBinary)) No, you must supply the source code that corresponds to the binary. Corresponding source means the source from which users can rebuild the same binary.
Part of the idea of free software is that users should have access to the source code for
*the programs they use*. Those using your version should have access to the source code for your version.
A major goal of the GPL is to build up the Free World by making sure that improvements to a free program are themselves free. If you release an improved version of a GPL-covered program, you must release the improved source code under the GPL.
- I want to distribute
binaries, but distributing complete source is inconvenient. Is it ok if
I give users the diffs from the “standard” version along with
the binaries?
(
[#DistributingSourceIsInconvenient](#DistributingSourceIsInconvenient)) This is a well-meaning request, but this method of providing the source doesn't really do the job.
A user that wants the source a year from now may be unable to get the proper version from another site at that time. The standard distribution site may have a newer version, but the same diffs probably won't work with that version.
So you need to provide complete sources, not just diffs, with the binaries.
- Can I make binaries available
on a network server, but send sources only to people who order them?
(
[#AnonFTPAndSendSources](#AnonFTPAndSendSources)) If you make object code available on a network server, you have to provide the Corresponding Source on a network server as well. The easiest way to do this would be to publish them on the same server, but if you'd like, you can alternatively provide instructions for getting the source from another server, or even a
[version control system](#SourceInCVS). No matter what you do, the source should be just as easy to access as the object code, though. This is all specified in section 6(d) of GPLv3.
The sources you provide must correspond exactly to the binaries. In particular, you must make sure they are for the same version of the program—not an older version and not a newer version.
- How can I make sure each
user who downloads the binaries also gets the source?
(
[#HowCanIMakeSureEachDownloadGetsSource](#HowCanIMakeSureEachDownloadGetsSource)) You don't have to make sure of this. As long as you make the source and binaries available so that the users can see what's available and take what they want, you have done what is required of you. It is up to the user whether to download the source.
Our requirements for redistributors are intended to make sure the users can get the source code, not to force users to download the source code even if they don't want it.
- Does the GPL require
me to provide source code that can be built to match the exact
hash of the binary I am distributing?
(
[#MustSourceBuildToMatchExactHashOfBinary](#MustSourceBuildToMatchExactHashOfBinary)) Complete corresponding source means the source that the binaries were made from, but that does not imply your tools must be able to make a binary that is an exact hash of the binary you are distributing. In some cases it could be (nearly) impossible to build a binary from source with an exact hash of the binary being distributed — consider the following examples: a system might put timestamps in binaries; or the program might have been built against a different (even unreleased) compiler version.
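The timestamp example in the answer above can be made concrete with a small sketch. The "build" function below is hypothetical; it simply imitates a toolchain that embeds a build timestamp, so that two builds of identical source yield binaries with different hashes.

```python
import hashlib
import time

def build_artifact(source: bytes) -> bytes:
    # Hypothetical "build" step: embeds a timestamp in the output,
    # as many real toolchains do.
    return source + b"\nbuilt-at: " + str(time.time_ns()).encode()

src = b"int main(void) { return 0; }"
first = hashlib.sha256(build_artifact(src)).hexdigest()
time.sleep(0.01)
second = hashlib.sha256(build_artifact(src)).hexdigest()
print(first != second)  # True: same source, different binary hashes
```

This is why providing the complete corresponding source does not imply that rebuilding it must reproduce the exact hash of the distributed binary.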
- A company
is running a modified version of a GPLed program on a web site.
Does the GPL say they must release their modified sources?
(
[#UnreleasedMods](#UnreleasedMods)) The GPL permits anyone to make a modified version and use it without ever distributing it to others. What this company is doing is a special case of that. Therefore, the company does not have to release the modified sources. The situation is different when the modified program is licensed under the terms of the
[GNU Affero GPL](#UnreleasedModsAGPL).
Compare this to a situation where the web site contains or links to separate GPLed programs that are distributed to the user when they visit the web site (often written in [JavaScript](/philosophy/javascript-trap.html), but other languages are used as well). In this situation the source code for the programs being distributed must be released to the user under the terms of the GPL.
- A company is running a modified
version of a program licensed under the GNU Affero GPL (AGPL) on a
web site. Does the AGPL say they must release their modified
sources?
(
[#UnreleasedModsAGPL](#UnreleasedModsAGPL)) The
[GNU Affero GPL](/licenses/agpl.html) requires that modified versions of the software offer all users interacting with it over a computer network an opportunity to receive the source. What the company is doing falls under that meaning, so the company must release the modified source code.
- Is making and using multiple copies
within one organization or company “distribution”?
(
[#InternalDistribution](#InternalDistribution)) No, in that case the organization is just making the copies for itself. As a consequence, a company or other organization can develop a modified version and install that version through its own facilities, without giving the staff permission to release that modified version to outsiders.
However, when the organization transfers copies to other organizations or individuals, that is distribution. In particular, providing copies to contractors for use off-site is distribution.
- If someone steals
a CD containing a version of a GPL-covered program, does the GPL
give the thief the right to redistribute that version?
(
[#StolenCopy](#StolenCopy)) If the version has been released elsewhere, then the thief probably does have the right to make copies and redistribute them under the GPL, but if thieves are imprisoned for stealing the CD, they may have to wait until their release before doing so.
If the version in question is unpublished and considered by a company to be its trade secret, then publishing it may be a violation of trade secret law, depending on other circumstances. The GPL does not change that. If the company tried to release its version and still treat it as a trade secret, that would violate the GPL, but if the company hasn't released this version, no such violation has occurred.
- What if a company distributes a copy of
some other developers' GPL-covered work to me as a trade secret?
(
[#TradeSecretRelease](#TradeSecretRelease)) The company has violated the GPL and will have to cease distribution of that program. Note how this differs from the theft case above; the company does not intentionally distribute a copy when a copy is stolen, so in that case the company has not violated the GPL.
- What if a company distributes a copy
of its own GPL-covered work to me as a trade secret?
(
[#TradeSecretRelease2](#TradeSecretRelease2)) If the program distributed does not incorporate anyone else's GPL-covered work, then the company is not violating the GPL (see “
[Is the developer of a GPL-covered program bound by the GPL?](#DeveloperViolate)” for more information). But it is making two contradictory statements about what you can do with that program: that you can redistribute it, and that you can't. It would make sense to demand clarification of the terms for use of that program before you accept a copy.
- Why are some GNU libraries released under
the ordinary GPL rather than the Lesser GPL?
(
[#WhySomeGPLAndNotLGPL](#WhySomeGPLAndNotLGPL)) Using the Lesser GPL for any particular library constitutes a retreat for free software. It means we partially abandon the attempt to defend the users' freedom, and some of the requirements to share what is built on top of GPL-covered software. In themselves, those are changes for the worse.
Sometimes a localized retreat is a good strategy. Sometimes, using the LGPL for a library might lead to wider use of that library, and thus to more improvement for it, wider support for free software, and so on. This could be good for free software if it happens to a large extent. But how much will this happen? We can only speculate.
It would be nice to try out the LGPL on each library for a while, see whether it helps, and change back to the GPL if the LGPL didn't help. But this is not feasible. Once we use the LGPL for a particular library, changing back would be difficult.
So we decide which license to use for each library on a case-by-case basis. There is a
[long explanation](/licenses/why-not-lgpl.html) of how we judge the question.
- Why should programs say
“Version 3 of the GPL or any later version”?
(
[#VersionThreeOrLater](#VersionThreeOrLater)) From time to time, at intervals of years, we change the GPL—sometimes to clarify it, sometimes to permit certain kinds of use not previously permitted, and sometimes to tighten up a requirement. (The last two changes were in 2007 and 1991.) Using this “indirect pointer” in each program makes it possible for us to change the distribution terms on the entire collection of GNU software, when we update the GPL.
If each program lacked the indirect pointer, we would be forced to discuss the change at length with numerous copyright holders, which would be a virtual impossibility. In practice, the chance of having uniform distribution terms for GNU software would be nil.
Suppose a program says “Version 3 of the GPL or any later version” and a new version of the GPL is released. If the new GPL version gives additional permission, that permission will be available immediately to all the users of the program. But if the new GPL version has a tighter requirement, it will not restrict use of the current version of the program, because it can still be used under GPL version 3. When a program says “Version 3 of the GPL or any later version”, users will always be permitted to use it, and even change it, according to the terms of GPL version 3—even after later versions of the GPL are available.
If a tighter requirement in a new version of the GPL need not be obeyed for existing software, how is it useful? Once GPL version 4 is available, the developers of most GPL-covered programs will release subsequent versions of their programs specifying “Version 4 of the GPL or any later version”. Then users will have to follow the tighter requirements in GPL version 4, for subsequent versions of the program.
However, developers are not obligated to do this; developers can continue allowing use of the previous version of the GPL, if that is their preference.
- Is it a good idea to use a license saying
that a certain program can be used only under the latest version
of the GNU GPL?
(
[#OnlyLatestVersion](#OnlyLatestVersion)) The reason you shouldn't do that is that it could some day result in automatically withdrawing some permissions that the users previously had.
Suppose a program was released in 2000 under “the latest GPL version”. At that time, people could have used it under GPLv2. The day we published GPLv3 in 2007, everyone would have been suddenly compelled to use it under GPLv3 instead.
Some users may not even have known about GPL version 3—but they would have been required to use it. They could have violated the program's license unintentionally just because they did not get the news. That's a bad way to treat people.
We think it is wrong to take back permissions already granted, except due to a violation. If your freedom could be revoked, then it isn't really freedom. Thus, if you get a copy of a program version under one version of a license, you should
*always* have the rights granted by that version of the license. Releasing under “GPL version N or any later version” upholds that principle.
- Why don't you use the GPL for manuals?
(
[#WhyNotGPLForManuals](#WhyNotGPLForManuals)) It is possible to use the GPL for a manual, but the GNU Free Documentation License (GFDL) is much better for manuals.
The GPL was designed for programs; it contains lots of complex clauses that are crucial for programs, but that would be cumbersome and unnecessary for a book or manual. For instance, anyone publishing the book on paper would have to either include machine-readable “source code” of the book along with each printed copy, or provide a written offer to send the “source code” later.
Meanwhile, the GFDL has clauses that help publishers of free manuals make a profit from selling copies—cover texts, for instance. The special rules for Endorsements sections make it possible to use the GFDL for an official standard. This would permit modified versions, but they could not be labeled as “the standard”.
Using the GFDL, we permit changes in the text of a manual that covers its technical topic. It is important to be able to change the technical parts, because people who change a program ought to change the documentation to correspond. The freedom to do this is an ethical imperative.
Our manuals also include sections that state our political position about free software. We mark these as “invariant”, so that they cannot be changed or removed. The GFDL makes provisions for these “invariant sections”.
- How does the GPL apply to fonts?
(
[#FontException](#FontException)) Font licensing is a complex issue which needs serious consideration. The following license exception is experimental but approved for general use. We welcome suggestions on this subject—please see this [explanatory essay](http://www.fsf.org/blogs/licensing/20050425novalis) and write to [[email protected]](mailto:[email protected]).
To use this exception, add this text to the license notice of each file in the package (to the extent possible), at the end of the text that says the file is distributed under the GNU GPL:
As a special exception, if you create a document which uses this font, and embed this font or unaltered portions of this font into the document, this font does not by itself cause the resulting document to be covered by the GNU General Public License. This exception does not however invalidate any other reasons why the document might be covered by the GNU General Public License. If you modify this font, you may extend this exception to your version of the font, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
- I am writing a website maintenance system (called a “[content management system](/philosophy/words-to-avoid.html#Content)” by some), or some other application which generates web pages from templates. What license should I use for those templates? ([#WMS](#WMS))
Templates are minor enough that it is not worth using copyleft to protect them. It is normally harmless to use copyleft on minor works, but templates are a special case, because they are combined with data provided by users of the application and the combination is distributed. So, we recommend that you license your templates under simple permissive terms.
Some templates make calls into JavaScript functions. Since JavaScript is often non-trivial, it is worth copylefting. Because the templates will be combined with user data, it's possible that template+user data+JavaScript would be considered one work under copyright law. A line needs to be drawn between the JavaScript (copylefted) and the user code (usually under incompatible terms).
Here's an exception for JavaScript code that does this:
As a special exception to the GPL, any HTML file which merely makes function calls to this code, and for that purpose includes it by reference shall be deemed a separate work for copyright law purposes. In addition, the copyright holders of this code give you permission to combine this code with free software libraries that are released under the GNU LGPL. You may copy and distribute such a system following the terms of the GNU GPL for this code and the LGPL for the libraries. If you modify this code, you may extend this exception to your version of the code, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
- Can I release a program under the GPL which I developed using nonfree tools? ([#NonFreeTools](#NonFreeTools))
Which programs you used to edit the source code, or to compile it, or study it, or record it, usually makes no difference for issues concerning the licensing of that source code.
However, if you link nonfree libraries with the source code, that would be an issue you need to deal with. It does not preclude releasing the source code under the GPL, but if the libraries don't fit under the “system library” exception, you should affix an explicit notice giving permission to link your program with them.
[The FAQ entry about using GPL-incompatible libraries](#GPLIncompatibleLibs) provides more information about how to do that.
- Are there translations of the GPL into other languages? ([#GPLTranslations](#GPLTranslations))
It would be useful to have translations of the GPL into languages other than English. People have even written translations and sent them to us. But we have not dared to approve them as officially valid. That carries a risk so great we do not dare accept it.
A legal document is in some ways like a program. Translating it is like translating a program from one language and operating system to another. Only a lawyer skilled in both languages can do it—and even then, there is a risk of introducing a bug.
If we were to approve, officially, a translation of the GPL, we would be giving everyone permission to do whatever the translation says they can do. If it is a completely accurate translation, that is fine. But if there is an error in the translation, the results could be a disaster which we could not fix.
If a program has a bug, we can release a new version, and eventually the old version will more or less disappear. But once we have given everyone permission to act according to a particular translation, we have no way of taking back that permission if we find, later on, that it had a bug.
Helpful people sometimes offer to do the work of translation for us. If the problem were a matter of finding someone to do the work, this would solve it. But the actual problem is the risk of error, and offering to do the work does not avoid the risk. We could not possibly authorize a translation written by a non-lawyer.
Therefore, for the time being, we are not approving translations of the GPL as globally valid and binding. Instead, we are doing two things:
Referring people to unofficial translations. This means that we permit people to write translations of the GPL, but we don't approve them as legally valid and binding.
An unapproved translation has no legal force, and it should say so explicitly. It should be marked as follows:
This translation of the GPL is informal, and not officially approved by the Free Software Foundation as valid. To be completely sure of what is permitted, refer to the original GPL (in English).
But the unapproved translation can serve as a hint for how to understand the English GPL. For many users, that is sufficient.
However, businesses using GNU software in commercial activity, and people doing public ftp distribution, should check the real English GPL to make sure of what it permits.
Publishing translations valid for a single country only.
We are considering the idea of publishing translations which are officially valid only for one country. This way, if there is a mistake, it will be limited to that country, and the damage will not be too great.
It will still take considerable expertise and effort from a sympathetic and capable lawyer to make a translation, so we cannot promise any such translations soon.
- If a programming language interpreter has a license that is incompatible with the GPL, can I run GPL-covered programs on it? ([#InterpreterIncompat](#InterpreterIncompat))
When the interpreter just interprets a language, the answer is yes. The interpreted program, to the interpreter, is just data; the GPL doesn't restrict what tools you process the program with.
However, when the interpreter is extended to provide “bindings” to other facilities (often, but not necessarily, libraries), the interpreted program is effectively linked to the facilities it uses through these bindings. The JNI or Java Native Interface is an example of such a facility; libraries that are accessed in this way are linked dynamically with the Java programs that call them.
So if these facilities are released under a GPL-incompatible license, the situation is like linking in any other way with a GPL-incompatible library. Which implies that:
- If you are writing code and releasing it under the GPL, you can state an explicit exception giving permission to link it with those GPL-incompatible facilities.
- If you wrote and released the program under the GPL, and you designed it specifically to work with those facilities, people can take that as an implicit exception permitting them to link it with those facilities. But if that is what you intend, it is better to say so explicitly.
- You can't take someone else's GPL-covered code and use it that way, or add such exceptions to it. Only the copyright holders of that code can add the exception.
- Who has the power to enforce the GPL? ([#WhoHasThePower](#WhoHasThePower))
Since the GPL is a copyright license, it can be enforced by the copyright holders of the software. If you see a violation of the GPL, you should inform the developers of the GPL-covered software involved. They either are the copyright holders, or are connected with the copyright holders.
In addition, we encourage the use of any legal mechanism available to users for obtaining complete and corresponding source code, as is their right, and enforcing full compliance with the GNU GPL. After all, we developed the GNU GPL to make software free for all its users.
- In an object-oriented language such as Java, if I use a class that is GPLed without modifying it, and subclass it, in what way does the GPL affect the larger program? ([#OOPLang](#OOPLang))
Subclassing is creating a derivative work. Therefore, the terms of the GPL affect the whole program where you create a subclass of a GPLed class.
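To make the mechanics concrete, here is a minimal, hypothetical Java sketch. The class names `Widget` and `FancyWidget` are invented for illustration; the FAQ names no particular library, and this is a sketch of the code relationship only, not a legal analysis.

```java
// Hypothetical illustration: Widget stands in for a class obtained
// from a GPL-covered library, used unmodified.
class Widget {
    String render() { return "widget"; }
}

// FancyWidget subclasses the GPLed class in our own source file.
// Under the FAQ's reasoning, the subclass is a derivative work of
// Widget, so the GPL's terms reach the program containing it.
class FancyWidget extends Widget {
    @Override
    String render() { return "fancy " + super.render(); }
}

public class Demo {
    public static void main(String[] args) {
        System.out.println(new FancyWidget().render()); // prints "fancy widget"
    }
}
```

The point of the sketch is that the subclass relationship exists in source code even though `Widget` itself is never edited; it is the `extends` link, not modification of the original file, that creates the derivative work.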
- If I port my program to GNU/Linux, does that mean I have to release it as free software under the GPL or some other Free Software license? ([#PortProgramToGPL](#PortProgramToGPL))
In general, the answer is no—this is not a legal requirement. In specific, the answer depends on which libraries you want to use and what their licenses are. Most system libraries either use the [GNU Lesser GPL](/licenses/lgpl.html), or use the GNU GPL plus an exception permitting linking the library with anything. These libraries can be used in nonfree programs; but in the case of the Lesser GPL, it does have some requirements you must follow.
Some libraries are released under the GNU GPL alone; you must use a GPL-compatible license to use those libraries. But these are normally the more specialized libraries, and you would not have had anything much like them on another platform, so you probably won't find yourself wanting to use these libraries for simple porting.
Of course, your software is not a contribution to our community if it is not free, and people who value their freedom will refuse to use it. Only people willing to give up their freedom will use your software, which means that it will effectively function as an inducement for people to lose their freedom.
If you hope some day to look back on your career and feel that it has contributed to the growth of a good and free society, you need to make your software free.
- I just found out that a company has a copy of a GPLed program, and it costs money to get it. Aren't they violating the GPL by not making it available on the Internet? ([#CompanyGPLCostsMoney](#CompanyGPLCostsMoney))
No. The GPL does not require anyone to use the Internet for distribution. It also does not require anyone in particular to redistribute the program. And (outside of one special case), even if someone does decide to redistribute the program sometimes, the GPL doesn't say he has to distribute a copy to you in particular, or any other person in particular.
What the GPL requires is that he must have the freedom to distribute a copy to you *if he wishes to*. Once the copyright holder does distribute a copy of the program to someone, that someone can then redistribute the program to you, or to anyone else, as he sees fit.
- Can I release a program with a license which says that you can distribute modified versions of it under the GPL but you can't distribute the original itself under the GPL? ([#ReleaseNotOriginal](#ReleaseNotOriginal))
No. Such a license would be self-contradictory. Let's look at its implications for me as a user.
Suppose I start with the original version (call it version A), add some code (let's imagine it is 1000 lines), and release that modified version (call it B) under the GPL. The GPL says anyone can change version B again and release the result under the GPL. So I (or someone else) can delete those 1000 lines, producing version C which has the same code as version A but is under the GPL.
If you try to block that path, by saying explicitly in the license that I'm not allowed to reproduce something identical to version A under the GPL by deleting those lines from version B, in effect the license now says that I can't fully use version B in all the ways that the GPL permits. In other words, the license does not in fact allow a user to release a modified version such as B under the GPL.
- Does moving a copy to a majority-owned, and controlled, subsidiary constitute distribution? ([#DistributeSubsidiary](#DistributeSubsidiary))
Whether moving a copy to or from this subsidiary constitutes “distribution” is a matter to be decided in each case under the copyright law of the appropriate jurisdiction. The GPL does not and cannot override local laws. US copyright law is not entirely clear on the point, but appears not to consider this distribution.
If, in some country, this is considered distribution, and the subsidiary must receive the right to redistribute the program, that will not make a practical difference. The subsidiary is controlled by the parent company; rights or no rights, it won't redistribute the program unless the parent company decides to do so.
- Can software installers ask people to click to agree to the GPL? If I get some software under the GPL, do I have to agree to anything? ([#ClickThrough](#ClickThrough))
Some software packaging systems have a place which requires you to click through or otherwise indicate assent to the terms of the GPL. This is neither required nor forbidden. With or without a click through, the GPL's rules remain the same.
Merely agreeing to the GPL doesn't place any obligations on you. You are not required to agree to anything to merely use software which is licensed under the GPL. You only have obligations if you modify or distribute the software. If it really bothers you to click through the GPL, nothing stops you from hacking the GPLed software to bypass this.
- I would like to bundle GPLed software with some sort of installation software. Does that installer need to have a GPL-compatible license? ([#GPLCompatInstaller](#GPLCompatInstaller))
No. The installer and the files it installs are separate works. As a result, the terms of the GPL do not apply to the installation software.
- Some distributors of GPLed software require me in their umbrella EULAs or as part of their downloading process to “represent and warrant” that I am located in the US or that I intend to distribute the software in compliance with relevant export control laws. Why are they doing this and is it a violation of those distributors' obligations under GPL? ([#ExportWarranties](#ExportWarranties))
This is not a violation of the GPL. Those distributors (almost all of whom are commercial businesses selling free software distributions and related services) are trying to reduce their own legal risks, not to control your behavior. Export control law in the United States
*might* make them liable if they knowingly export software into certain countries, or if they give software to parties they know will make such exports. By asking for these statements from their customers and others to whom they distribute software, they protect themselves in the event they are later asked by regulatory authorities what they knew about where software they distributed was going to wind up. They are not restricting what you can do with the software, only preventing themselves from being blamed with respect to anything you do. Because they are not placing additional restrictions on the software, they do not violate section 10 of GPLv3 or section 6 of GPLv2.
The FSF opposes the application of US export control laws to free software. Not only are such laws incompatible with the general objective of software freedom, they achieve no reasonable governmental purpose, because free software is currently and should always be available from parties in almost every country, including countries that have no export control laws and which do not participate in US-led trade embargoes. Therefore, no country's government is actually deprived of free software by US export control laws, while no country's citizens *should* be deprived of free software, regardless of their governments' policies, as far as we are concerned. Copies of all GPL-licensed software published by the FSF can be obtained from us without making any representation about where you live or what you intend to do. At the same time, the FSF understands the desire of commercial distributors located in the US to comply with US laws. They have a right to choose to whom they distribute particular copies of free software; exercise of that right does not violate the GPL unless they add contractual restrictions beyond those permitted by the GPL.
- Can I use GPLed software on a device that will stop operating if customers do not continue paying a subscription fee? ([#SubscriptionFee](#SubscriptionFee))
No. In this scenario, the requirement to keep paying a fee limits the user's ability to run the program. This is an additional requirement on top of the GPL, and the license prohibits it.
- How do I upgrade from (L)GPLv2 to (L)GPLv3? ([#v3HowToUpgrade](#v3HowToUpgrade))
First, include the new version of the license in your package. If you're using LGPLv3 in your project, be sure to include copies of both GPLv3 and LGPLv3, since LGPLv3 is now written as a set of additional permissions on top of GPLv3.
Second, replace all your existing v2 license notices (usually at the top of each file) with the new recommended text available on [the GNU licenses howto](/licenses/gpl-howto.html). It's more future-proof because it no longer includes the FSF's postal mailing address.
Of course, any descriptive text (such as in a README) which talks about the package's license should also be updated appropriately.
- How does GPLv3 make BitTorrent distribution easier? ([#BitTorrent](#BitTorrent))
Because GPLv2 was written before peer-to-peer distribution of software was common, it is difficult to meet its requirements when you share code this way. The best way to make sure you are in compliance when distributing GPLv2 object code on BitTorrent would be to include all the corresponding source in the same torrent, which is prohibitively expensive.
GPLv3 addresses this problem in two ways. First, people who download this torrent and send the data to others as part of that process are not required to do anything. That's because section 9 says “Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance [of the license].”
Second, section 6(e) of GPLv3 is designed to give distributors—people who initially seed torrents—a clear and straightforward way to provide the source, by telling recipients where it is available on a public network server. This ensures that everyone who wants to get the source can do so, and it's almost no hassle for the distributor.
- What is tivoization? How does GPLv3 prevent it? ([#Tivoization](#Tivoization))
Some devices utilize free software that can be upgraded, but are designed so that users are not allowed to modify that software. There are lots of different ways to do this; for example, sometimes the hardware checksums the software that is installed, and shuts down if it doesn't match an expected signature. The manufacturers comply with GPLv2 by giving you the source code, but you still don't have the freedom to modify the software you're using. We call this practice tivoization.
When people distribute User Products that include software under GPLv3, section 6 requires that they provide you with information necessary to modify that software. User Products is a term specially defined in the license; examples of User Products include portable music players, digital video recorders, and home security systems.
- Does GPLv3 prohibit DRM? ([#DRMProhibited](#DRMProhibited))
It does not; you can use code released under GPLv3 to develop any kind of DRM technology you like. However, if you do this, section 3 says that the system will not count as an effective technological “protection” measure, which means that if someone breaks the DRM, she will be free to distribute her software too, unhindered by the DMCA and similar laws.
As usual, the GNU GPL does not restrict what people do in software, it just stops them from restricting others.
- Can I use the GPL to license hardware? ([#GPLHardware](#GPLHardware))
Any material that can be copyrighted can be licensed under the GPL. GPLv3 can also be used to license materials covered by other copyright-like laws, such as semiconductor masks. So, as an example, you can release a drawing of a physical object or circuit under the GPL.
In many situations, copyright does not cover making physical hardware from a drawing. In these situations, your license for the drawing simply can't exert any control over making or selling physical hardware, regardless of the license you use. When copyright does cover making hardware, for instance with IC masks, the GPL handles that case in a useful way.
- I use public key cryptography to sign my code to assure its authenticity. Is it true that GPLv3 forces me to release my private signing keys? ([#GiveUpKeys](#GiveUpKeys))
No. The only time you would be required to release signing keys is if you conveyed GPLed software inside a User Product, and its hardware checked the software for a valid cryptographic signature before it would function. In that specific case, you would be required to provide anyone who owned the device, on demand, with the key to sign and install modified software on the device so that it will run. If each instance of the device uses a different key, then you need only give each purchaser a key for that instance.
- Does GPLv3 require that voters be able to modify the software running in a voting machine? ([#v3VotingMachine](#v3VotingMachine))
No. Companies distributing devices that include software under GPLv3 are at most required to provide the source and Installation Information for the software to people who possess a copy of the object code. The voter who uses a voting machine (like any other kiosk) doesn't get possession of it, not even temporarily, so the voter also does not get possession of the binary software in it.
Note, however, that voting is a very special case. Just because the software in a computer is free does not mean you can trust the computer for voting. We believe that computers cannot be trusted for voting. Voting should be done on paper.
- Does GPLv3 have a “patent retaliation clause”? ([#v3PatentRetaliation](#v3PatentRetaliation))
In effect, yes. Section 10 prohibits people who convey the software from filing patent suits against other licensees. If someone did so anyway, section 8 explains how they would lose their license and any patent licenses that accompanied it.
- Can I use snippets of GPL-covered source code within documentation that is licensed under some license that is incompatible with the GPL? ([#SourceCodeInDocumentation](#SourceCodeInDocumentation))
If the snippets are small enough that you can incorporate them under fair use or similar laws, then yes. Otherwise, no.
- The beginning of GPLv3 section 6 says that I can convey a covered work in object code form “under the terms of sections 4 and 5” provided I also meet the conditions of section 6. What does that mean? ([#v3Under4and5](#v3Under4and5))
This means that all the permissions and conditions you have to convey source code also apply when you convey object code: you may charge a fee, you must keep copyright notices intact, and so on.
- My company owns a lot of patents. Over the years we've contributed code to projects under “GPL version 2 or any later version”, and the project itself has been distributed under the same terms. If a user decides to take the project's code (incorporating my contributions) under GPLv3, does that mean I've automatically granted GPLv3's explicit patent license to that user? ([#v2OrLaterPatentLicense](#v2OrLaterPatentLicense))
No. When you convey GPLed software, you must follow the terms and conditions of one particular version of the license. When you do so, that version defines the obligations you have. If users may also elect to use later versions of the GPL, that's merely an additional permission they have—it does not require you to fulfill the terms of the later version of the GPL as well.
Do not take this to mean that you can threaten the community with your patents. In many countries, distributing software under GPLv2 provides recipients with an implicit patent license to exercise their rights under the GPL. Even if it didn't, anyone considering enforcing their patents aggressively is an enemy of the community, and we will defend ourselves against such an attack.
- If I distribute a proprietary program that links against an LGPLv3-covered library that I've modified, what is the “contributor version” for purposes of determining the scope of the explicit patent license grant I'm making—is it just the library, or is it the whole combination? ([#LGPLv3ContributorVersion](#LGPLv3ContributorVersion))
The “contributor version” is only your version of the library.
- Is GPLv3 compatible with GPLv2? ([#v2v3Compatibility](#v2v3Compatibility))
No. Many requirements have changed from GPLv2 to GPLv3, which means that the precise requirement of GPLv2 is not present in GPLv3, and vice versa. For instance, the Termination conditions of GPLv3 are considerably more permissive than those of GPLv2, and thus different from the Termination conditions of GPLv2.
Due to these differences, the two licenses are not compatible: if you tried to combine code released under GPLv2 with code under GPLv3, you would violate section 6 of GPLv2.
However, if code is released under GPL “version 2 or later,” that is compatible with GPLv3 because GPLv3 is one of the options it permits.
- Does GPLv2 have a requirement about delivering installation information? ([#InstInfo](#InstInfo))
GPLv3 explicitly requires redistribution to include the full necessary “Installation Information.” GPLv2 doesn't use that term, but it does require redistribution to include “scripts used to control compilation and installation of the executable” with the complete and corresponding source code. This covers part, but not all, of what GPLv3 calls “Installation Information.” Thus, GPLv3's requirement about installation information is stronger.
- What does it mean to “cure” a violation of GPLv3? ([#Cure](#Cure))
To cure a violation means to adjust your practices to comply with the requirements of the license.
- The warranty and liability disclaimers in GPLv3 seem specific to U.S. law. Can I add my own disclaimers to my own code? ([#v3InternationalDisclaimers](#v3InternationalDisclaimers))
Yes. Section 7 gives you permission to add your own disclaimers, specifically 7(a).
- My program has interactive user interfaces that are non-visual in nature. How can I comply with the Appropriate Legal Notices requirement in GPLv3? ([#NonvisualLegalNotices](#NonvisualLegalNotices))
All you need to do is ensure that the Appropriate Legal Notices are readily available to the user in your interface. For example, if you have written an audio interface, you could include a command that reads the notices aloud.
- If I give a copy of a GPLv3-covered program to a coworker at my company, have I “conveyed” the copy to that coworker? ([#v3CoworkerConveying](#v3CoworkerConveying))
As long as you're both using the software in your work at the company, rather than personally, then the answer is no. The copies belong to the company, not to you or the coworker. This copying is propagation, not conveying, because the company is not making copies available to others.
- If I distribute a GPLv3-covered program, can I provide a warranty that is voided if the user modifies the program? ([#v3ConditionalWarranty](#v3ConditionalWarranty))
Yes. Just as devices do not need to be warranted if users modify the software inside them, you are not required to provide a warranty that covers all possible activities someone could undertake with GPLv3-covered software.
- Why did you decide to write the GNU Affero GPLv3 as a separate license? ([#SeparateAffero](#SeparateAffero))
Early drafts of GPLv3 allowed licensors to add an Affero-like requirement to publish source in section 7. However, some companies that develop and rely upon free software consider this requirement to be too burdensome. They want to avoid code with this requirement, and expressed concern about the administrative costs of checking code for this additional requirement. By publishing the GNU Affero GPLv3 as a separate license, with provisions in it and GPLv3 to allow code under these licenses to link to each other, we accomplish all of our original goals while making it easier to determine which code has the source publication requirement.
- Why did you invent the new terms “propagate” and “convey” in GPLv3? ([#WhyPropagateAndConvey](#WhyPropagateAndConvey))
The term “distribute” used in GPLv2 was borrowed from United States copyright law. Over the years, we learned that some jurisdictions used this same word in their own copyright laws, but gave it different meanings. We invented these new terms to make our intent as clear as possible no matter where the license is interpreted. They are not used in any copyright law in the world, and we provide their definitions directly in the license.
- I'd like to license my code under the GPL, but I'd also like to make it clear that it can't be used for military and/or commercial uses. Can I do this? ([#NoMilitary](#NoMilitary))
No, because those two goals contradict each other. The GNU GPL is designed specifically to prevent the addition of further restrictions. GPLv3 allows a very limited set of them, in section 7, but any other added restriction can be removed by the user.
More generally, a license that limits who can use a program, or for what, is [not a free software license](/philosophy/programs-must-not-limit-freedom-to-run.html).
- Is “convey” in GPLv3 the same thing as what GPLv2 means by “distribute”? ([#ConveyVsDistribute](#ConveyVsDistribute))
Yes, more or less. During the course of enforcing GPLv2, we learned that some jurisdictions used the word “distribute” in their own copyright laws, but gave it different meanings. We invented a new term to make our intent clear and avoid any problems that could be caused by these differences.
- GPLv3 gives “making available to the public” as an example of propagation. What does this mean? Is making available a form of conveying? ([#v3MakingAvailable](#v3MakingAvailable))
One example of “making available to the public” is putting the software on a public web or FTP server. After you do this, some time may pass before anybody actually obtains the software from you—but because it could happen right away, you need to fulfill the GPL's obligations right away as well. Hence, we defined conveying to include this activity.
- Since distribution and making available to the public are forms of propagation that are also conveying in GPLv3, what are some examples of propagation that do not constitute conveying? ([#PropagationNotConveying](#PropagationNotConveying))
Making copies of the software for yourself is the main form of propagation that is not conveying. You might do this to install the software on multiple computers, or to make backups.
- Does prelinking a GPLed binary to various libraries on the system, to optimize its performance, count as modification? ([#Prelinking](#Prelinking))
No. Prelinking is part of a compilation process; it doesn't introduce any license requirements above and beyond what other aspects of compilation would. If you're allowed to link the program to the libraries at all, then it's fine to prelink with them as well. If you distribute prelinked object code, you need to follow the terms of section 6.
- If someone installs GPLed software on a laptop, and then lends that laptop to a friend without providing source code for the software, have they violated the GPL? ([#LaptopLoan](#LaptopLoan))
No. In the jurisdictions where we have investigated this issue, this sort of loan would not count as conveying. The laptop's owner would not have any obligations under the GPL.
- Suppose that two companies try to
circumvent the requirement to provide Installation Information by
having one company release signed software, and the other release a
User Product that only runs signed software from the first company. Is
this a violation of GPLv3?
(
[#TwoPartyTivoization](#TwoPartyTivoization)) Yes. If two parties try to work together to get around the requirements of the GPL, they can both be pursued for copyright infringement. This is especially true since the definition of convey explicitly includes activities that would make someone responsible for secondary infringement.
- Am I complying with GPLv3 if I offer binaries on an
FTP server and sources by way of a link to a source code repository
in a version control system, like CVS or Subversion?
(
[#SourceInCVS](#SourceInCVS)) This is acceptable as long as the source checkout process does not become burdensome or otherwise restrictive. Anybody who can download your object code should also be able to check out source from your version control system, using a publicly available free software client. Users should be provided with clear and convenient instructions for how to get the source for the exact object code they downloaded—they may not necessarily want the latest development code, after all.
- Can someone who conveys GPLv3-covered
software in a User Product use remote attestation to prevent a user
from modifying that software?
(
[#RemoteAttestation](#RemoteAttestation)) No. The definition of Installation Information, which must be provided with source when the software is conveyed inside a User Product, explicitly says: “The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.” If the device uses remote attestation in some way, the Installation Information must provide you some means for your modified software to report itself as legitimate.
- What does “rules and protocols for
communication across the network” mean in GPLv3?
(
[#RulesProtocols](#RulesProtocols)) This refers to rules about traffic you can send over the network. For example, if there is a limit on the number of requests you can send to a server per day, or the size of a file you can upload somewhere, your access to those resources may be denied if you do not respect those limits.
These rules do not include anything that does not pertain directly to data traveling across the network. For instance, if a server on the network sent messages for users to your device, your access to the network could not be denied merely because you modified the software so that it did not display the messages.
- Distributors that provide Installation Information
under GPLv3 are not required to provide “support service”
for the product. What kind of “support service” do you mean?
(
[#SupportService](#SupportService)) This includes the kind of service many device manufacturers provide to help you install, use, or troubleshoot the product. If a device relies on access to web services or similar technology to function properly, those should normally still be available to modified versions, subject to the terms in section 6 regarding access to a network.
- In GPLv3 and AGPLv3, what does it mean when it
says “notwithstanding any other provision of this License”?
(
[#v3Notwithstanding](#v3Notwithstanding)) This simply means that the following terms prevail over anything else in the license that may conflict with them. For example, without this text, some people might have claimed that you could not combine code under GPLv3 with code under AGPLv3, because the AGPL's additional requirements would be classified as “further restrictions” under section 7 of GPLv3. This text makes clear that our intended interpretation is the correct one, and you can make the combination.
This text only resolves conflicts between different terms of the license. When there is no conflict between two conditions, then you must meet them both. These paragraphs don't grant you carte blanche to ignore the rest of the license—instead they're carving out very limited exceptions.
- Under AGPLv3, when I modify the Program
under section 13, what Corresponding Source does it have to offer?
(
[#AGPLv3CorrespondingSource](#AGPLv3CorrespondingSource)) “Corresponding Source” is defined in section 1 of the license, and you should provide what it lists. So, if your modified version depends on libraries under other licenses, such as the Expat license or GPLv3, the Corresponding Source should include those libraries (unless they are System Libraries). If you have modified those libraries, you must provide your modified source code for them.
The last sentence of the first paragraph of section 13 is only meant to reinforce what most people would have naturally assumed: even though combinations with code under GPLv3 are handled through a special exception in section 13, the Corresponding Source should still include the code that is combined with the Program this way. This sentence does not mean that you
*only* have to provide the source that's covered under GPLv3; instead it means that such code is *not* excluded from the definition of Corresponding Source.
- In AGPLv3, what counts as
“interacting with [the software] remotely through a computer
network?”
(
[#AGPLv3InteractingRemotely](#AGPLv3InteractingRemotely)) If the program is expressly designed to accept user requests and send responses over a network, then it meets these criteria. Common examples of programs that would fall into this category include web and mail servers, interactive web-based applications, and servers for games that are played online.
If a program is not expressly designed to interact with a user through a network, but is being run in an environment where it happens to do so, then it does not fall into this category. For example, an application is not required to provide source merely because the user is running it over SSH, or a remote X session.
- How does GPLv3's concept of
“you” compare to the definition of “Legal Entity”
in the Apache License 2.0?
(
[#ApacheLegalEntity](#ApacheLegalEntity)) They're effectively identical. The definition of “Legal Entity” in the Apache License 2.0 is very standard in various kinds of legal agreements—so much so that it would be very surprising if a court did not interpret the term in the same way in the absence of an explicit definition. We fully expect them to do the same when they look at GPLv3 and consider who qualifies as a licensee.
- In GPLv3, what does “the Program”
refer to? Is it every program ever released under GPLv3?
(
[#v3TheProgram](#v3TheProgram)) The term “the Program” means one particular work that is licensed under GPLv3 and is received by a particular licensee from an upstream licensor or distributor. The Program is the particular work of software that you received in a given instance of GPLv3 licensing, as you received it.
“The Program” cannot mean “all the works ever licensed under GPLv3”; that interpretation makes no sense for a number of reasons. We've published an
[analysis of the term “the Program”](/licenses/gplv3-the-program.html) for those who would like to learn more about this.
- If I only make copies of a
GPL-covered program and run them, without distributing or conveying them to
others, what does the license require of me?
(
[#NoDistributionRequirements](#NoDistributionRequirements)) Nothing. The GPL does not place any conditions on this activity.
- If some network client software is
released under AGPLv3, does it have to be able to provide source to
the servers it interacts with?
(
[#AGPLv3ServerAsUser](#AGPLv3ServerAsUser))
AGPLv3 requires a program to offer source code to “all users interacting with it remotely through a computer network.” It doesn't matter if you call the program a “client” or a “server,” the question you need to ask is whether or not there is a reasonable expectation that a person will be interacting with the program remotely over a network.
- For software that runs a proxy server licensed
under the AGPL, how can I provide an offer of source to users
interacting with that code?
(
[#AGPLProxy](#AGPLProxy)) For software on a proxy server, you can provide an offer of source through a normal method of delivering messages to users of that kind of proxy. For example, a Web proxy could use a landing page. When users initially start using the proxy, you can direct them to a page with the offer of source along with any other information you choose to provide.
The AGPL says you must make the offer to “all users.” If you know that a certain user has already been shown the offer, for the current version of the software, you don't have to repeat it to that user again.
- How are the various GNU licenses
compatible with each other?
(
[#AllCompatibility](#AllCompatibility)) The various GNU licenses enjoy broad compatibility between each other. The only time you may not be able to combine code under two of these licenses is when you want to use code that's
*only* under an older version of a license with code that's under a newer version.

Below is a detailed compatibility matrix for various combinations of the GNU licenses, to provide an easy-to-use reference for specific cases. It assumes that someone else has written some software under one of these licenses, and you want to somehow incorporate code from that into a project that you're releasing (either your own original work, or a modified version of someone else's software). Find the license for your project in a column at the top of the table, and the license for the other code in a row on the left. The cell where they meet will tell you whether or not this combination is permitted.
When we say “copy code,” we mean just that: you're taking a section of code from one source, with or without modification, and inserting it into your own program, thus forming a work based on the first section of code. “Use a library” means that you're not copying any source directly, but instead interacting with it through linking, importing, or other typical mechanisms that bind the sources together when you compile or run the code.
Each place that the matrix states GPLv3, the same statement about compatibility is true for AGPLv3 as well.
[The compatibility matrix itself did not survive extraction. Its columns give the license you want for your project (GPLv2 only; GPLv2 or later; GPLv3 or later; LGPLv2.1 only; LGPLv2.1 or later; LGPLv3 or later); its rows give the license of the code you want to copy or use as a library; and each cell is either “OK” or a pointer to one or more of the numbered footnotes below.]

1: You must follow the terms of GPLv2 when incorporating the code in this case. You cannot take advantage of terms in later versions of the GPL.
2: While you may release under GPLv2-or-later both your original work, and/or modified versions of work you received under GPLv2-or-later, the GPLv2-only code that you're using must remain under GPLv2 only. As long as your project depends on that code, you won't be able to upgrade the license of your own code to GPLv3-or-later, and the work as a whole (any combination of both your project and the other code) can only be conveyed under the terms of GPLv2.
3: If you have the ability to release the project under GPLv2 or any later version, you can choose to release it under GPLv3 or any later version—and once you do that, you'll be able to incorporate the code released under GPLv3.
4: If you have the ability to release the project under LGPLv2.1 or any later version, you can choose to release it under LGPLv3 or any later version—and once you do that, you'll be able to incorporate the code released under LGPLv3.
5: You must follow the terms of LGPLv2.1 when incorporating the code in this case. You cannot take advantage of terms in later versions of the LGPL.
6: If you do this, as long as the project contains the code released under LGPLv2.1 only, you will not be able to upgrade the project's license to LGPLv3 or later.
7: LGPLv2.1 gives you permission to relicense the code under any version of the GPL since GPLv2. If you can switch the LGPLed code in this case to using an appropriate version of the GPL instead (as noted in the table), you can make this combination.
8: LGPLv3 is GPLv3 plus extra permissions that you can ignore in this case.
9: Because GPLv2 does not permit combinations with LGPLv3, you must convey the project under GPLv3's terms in this case, since it will allow that combination. |
8,763 | 解密开放容器计划(OCI)规范 | https://blog.docker.com/2017/07/demystifying-open-container-initiative-oci-specifications/ | 2017-08-09T17:09:00 | [
"容器",
"OCI",
"Docker"
] | https://linux.cn/article-8763-1.html | 
<ruby> 开放容器计划 <rt> Open Container Initiative </rt></ruby>(OCI)宣布本周完成了容器运行时和镜像的第一版规范。OCI 在是 <ruby> Linux 基金会 <rt> Linux Foundation </rt></ruby>支持下的容器解决方案标准化的成果。两年来,为了[建立这些规范](/article-8778-1.html)已经付出了大量的努力。 由此,让我们一起来回顾过去两年中出现的一些误区。

### 误区:OCI 是 Docker 的替代品
诚然标准非常重要,但它们远非一个完整的生产平台。 以万维网为例,它 25 年来一路演进,建立在诸如 TCP/IP 、HTTP 和 HTML 等可靠的核心标准之上。再以 TCP/IP 为例,当企业将 TCP/IP 合并为一种通用协议时,它推动了路由器行业,尤其是思科的发展。 然而,思科通过专注于在其路由平台上提供差异化的功能,而成为市场的领导者。我们认为 OCI 规范和 Docker 也是类似这样并行存在的。
[Docker 是一个完整的生产平台](https://www.docker.com/),提供了基于容器的开发、分发、安全、编排的一体化解决方案。Docker 使用了 OCI 规范,但它大约只占总代码的 5%,而且 Docker 平台只有一小部分涉及容器的运行时行为和容器镜像的布局。
### 误区:产品和项目已经通过了 OCI 规范认证
运行时和镜像规范本周刚发布 1.0 的版本。 而且 OCI 认证计划仍在开发阶段,所以企业在该认证正式推出之前(今年晚些时候),没法要求容器产品的合规性、一致性或兼容性。
OCI [认证工作组](https://github.com/opencontainers/certification)目前正在制定标准,使容器产品和开源项目能够符合规范的要求。标准和规范对于实施解决方案的工程师很重要,但正式认证是向客户保证其正在使用的技术真正符合标准的唯一方式。
### 误区:Docker 不支持 OCI 规范的工作
Docker 很早就开始为 OCI 做贡献。 我们向 OCI 贡献了大部分的代码,作为 OCI 项目的维护者,为 OCI 运行时和镜像规范定义提供了积极有益的帮助。Docker 运行时和镜像格式在 2013 年开源发布之后,便迅速成为事实上的标准,我们认为将代码捐赠给中立的管理机构,对于避免容器行业的碎片化和鼓励行业创新将是有益的。我们的目标是提供一个可靠和标准化的规范,因此 Docker 提供了一个简单的容器运行时 runc 作为运行时规范工作的基础,后来又贡献了 Docker V2 镜像规范作为 OCI 镜像规范工作的基础。
Docker 的开发人员如 Michael Crosby 和 Stephen Day 从一开始就是这项工作的关键贡献者,确保能将 Docker 的托管和运行数十亿个容器镜像的经验带给 OCI。等认证工作组完成(制定认证规范的)工作后,Docker 将通过 OCI 认证将其产品展示出来,以证明 OCI 的一致性。
### 误区:OCI 仅用于 Linux 容器技术
因为 OCI 是由 <ruby> Linux 基金会 <rt> Linux Foundation </rt></ruby> 负责制定的,所以很容易让人误解为 OCI 仅适用于 Linux 容器技术。 而实际上并非如此,尽管 Docker 技术源于 Linux 世界,但 Docker 也一直在与微软合作,将我们的容器技术、平台和工具带到 Windows Server 的世界。 此外,Docker 向 OCI 贡献的基础技术广泛适用于包括 Linux 、Windows 和 Solaris 在内的多种操作系统环境,涵盖了 x86、ARM 和 IBM zSeries 等多种架构环境。
### 误区:Docker 仅仅是 OCI 的众多贡献者之一
OCI 作为一个支持成员众多的开放组织,代表了容器行业的广度。 也就是说,它是一个小而专业的个人技术专家组,为制作初始规范的工作贡献了大量的时间和技术。 Docker 是 OCI 的创始成员,贡献了初始代码库,构成了运行时规范的基础和后来的参考实现。 同样地,Docker 也将 Docker V2 镜像规范贡献给 OCI 作为镜像规范的基础。
### 误区:CRI-O 是 OCI 项目
CRI-O 是<ruby> 云计算基金会 <rt> Cloud Native Computing Foundation </rt></ruby>(CNCF)的 Kubernetes 孵化器的开源项目 -- 它不是 OCI 项目。 它基于早期版本的 Docker 体系结构,而 containerd 是一个直接的 CNCF 项目,它是一个包括 runc 参考实现的更大的容器运行时。 containerd 负责镜像传输和存储、容器运行和监控,以及支持存储和网络附件等底层功能。 Docker 在五个最大的云提供商(阿里云、AWS、Google Cloud Platform(GCP)、IBM Softlayer 和 Microsoft Azure)的支持下,将 containerd 捐赠给了云计算基金会(CNCF),作为多个容器平台和编排系统的核心容器运行时。
### 误区:OCI 规范现在已经完成了
虽然首版容器运行时和镜像格式规范的发布是一个重要的里程碑,但还有许多工作有待完成。 OCI 一开始着眼于定义一个狭窄的规范:开发人员可以依赖于容器的运行时行为,防止容器行业碎片化,并且仍然允许在不断变化的容器域中进行创新。之后才将含容器镜像规范囊括其中。
随着工作组完成运行时行为和镜像格式的第一个稳定规范,新的工作考量也已经同步展开。未来的新特性将包括分发和签名等。 然而,OCI 的下一个最重要的工作是提供一个由测试套件支持的认证过程,因为第一个规范已经稳定了。
**在 Docker 了解更多关于 OCI 和开源的信息:**
* 阅读关于 [OCI v1.0 版本的运行时和镜像格式规范](https://blog.docker.com/2017/07/oci-release-of-v1-0-runtime-and-image-format-specifications) 的博文
* 访问 [OCI 的网站](https://www.opencontainers.org/join)
* 访问 [Moby 项目网站](http://mobyproject.org/)
* 参加 [DockerCon Europe 2017](https://europe-2017.dockercon.com/)
* 参加 [Moby Summit LA](https://www.eventbrite.com/e/moby-summit-los-angeles-tickets-35930560273)
---
作者简介:
Stephen 是 Docker 开源项目总监。 他曾在 Hewlett-Packard Enterprise (惠普企业)担任董事和杰出技术专家。他的关于开源软件和商业的博客发布在 “再次违约”(<http://stephesblog.blogs.com>) 和网站 opensource.com 上。
---
via: <https://blog.docker.com/2017/07/demystifying-open-container-initiative-oci-specifications/>
作者:Stephen 译者:[rieonke](https://github.com/rieonke) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,766 | cp 命令两个高效的用法 | https://opensource.com/article/17/7/two-great-uses-cp-command | 2017-08-10T13:45:29 | [
"cp"
] | https://linux.cn/article-8766-1.html |
>
> Linux 中高效的备份拷贝命令
>
>
>

在 Linux 上能使用鼠标点来点去的图形化界面是一件很美妙的事……但是如果你喜欢的开发交互环境和编译器是终端窗口、Bash 和 Vim,那你应该像我一样*经常*和终端打交道。
即使是不经常使用终端的人,如果对终端环境深入了解也能获益良多。举个例子—— `cp` 命令,据 [维基百科](https://en.wikipedia.org/wiki/Cp_(Unix)) 的解释,`cp` (意即 copy)命令是第一个版本的 [Unix](https://en.wikipedia.org/wiki/Unix) 系统的一部分。连同一组其它的命令 `ls`、`mv`、`cd`、`pwd`、`mkdir`、`vi`、`sh`、`sed` 和 `awk` ,还有提到的 `cp` 都是我在 1984 年接触 System V Unix 系统时所学习的命令之一。`cp` 命令最常见的用法是制作文件副本。像这样:
```
cp sourcefile destfile
```
在终端中执行此命令,上述命令将名为 `sourcefile` 的文件复制到名为 `destfile` 的文件中。如果在执行命令之前 `destfile` 文件不存在,那将会创建此文件,如果已经存在,那就会覆盖此文件。
这个命令我不知道自己用了多少次了(我也不想知道),但是我知道在我编写测试代码的时候,我经常用,为了保留当前正常的版本,而且又能继续修改,我会输入这个命令:
```
cp test1.py test1.bak
```
在过去的30多年里,我使用了无数次这个命令。另外,当我决定编写我的第二个版本的测试程序时,我会输入这个命令:
```
cp test1.py test2.py
```
这样就完成了修改程序的第一步。
我通常很少查看 `cp` 命令的参考文档,但是当我在备份我的图片文件夹的时候(在 GUI 环境下使用 “file” 应用),我开始思考“在 `cp` 命令中是否有个参数支持只复制新文件或者是修改过的文件。”果然,真的有!
### 高效用法 1:更新你的文件夹
比如说在我的电脑上有一个存放各种文件的文件夹,另外我要不时的往里面添加一些新文件,而且我会不时地修改一些文件,例如我手机里下载的照片或者是音乐。
假设我收集的这些文件对我而言都很有价值,我有时候会想做个拷贝,就像是“快照”一样将文件保存在其它媒体。当然目前有很多程序都支持备份,但是我想更为精确的将目录结构复制到可移动设备中,方便于我经常使用这些离线设备或者连接到其它电脑上。
`cp` 命令提供了一个易如反掌的方法。例子如下:
在我的 `Pictures` 文件夹下,我有这样一个文件夹名字为 `Misc`。为了方便说明,我把文件拷贝到 USB 存储设备上。让我们开始吧!
```
me@desktop:~/Pictures$ cp -r Misc /media/clh/4388-D5FE
me@desktop:~/Pictures$
```
上面的命令是我从终端窗口中原样复制下来的。对于不太适应这种环境的人来说,需要注意的是 `me@mydesktop:~/Pictures` 这个前缀:`me` 是当前用户,`mydesktop` 是电脑名称,`~/Pictures` 是当前工作目录,即 `/home/me/Pictures` 完整路径的缩写。
我输入这个命令 `cp -r Misc /media/clh/4388-D5FE` 并执行后 ,拷贝 `Misc` 目录下所有文件(这个 `-r` 参数,全称 “recursive”,递归处理,意思为本目录下所有文件及子目录一起处理)到我的 USB 设备的挂载目录 `/media/clh/4388-D5FE`。
执行命令后回到之前的提示,大多数命令继承了 Unix 的特性,在命令执行后,如果没有任何异常什么都不显示,在任务结束之前不会显示像 “execution succeeded” 这样的提示消息。如果想获取更多的反馈,就使用 `-v` 参数让执行结果更详细。
下图中是我的 USB 设备中刚刚拷贝过来的文件夹 `Misc` ,里面总共有 9 张图片。

假设我要在原始拷贝路径下 `~/Pictures/Misc` 下添加一些新文件,就像这样:

现在我想只拷贝新的文件到我的存储设备上,我就使用 `cp` 的“更新”和“详细”选项。
```
me@desktop:~/Pictures$ cp -r -u -v Misc /media/clh/4388-D5FE
'Misc/asunder.png' -> '/media/clh/4388-D5FE/Misc/asunder.png'
'Misc/editing tags guayadeque.png' -> '/media/clh/4388-D5FE/Misc/editing tags guayadeque.png'
'Misc/misc on usb.png' -> '/media/clh/4388-D5FE/Misc/misc on usb.png'
me@desktop:~/Pictures$
```
上面的第一行中是 `cp` 命令和具体的参数(`-r` 是“递归”, `-u` 是“更新”,`-v` 是“详细”)。接下来的三行显示被复制文件的信息,最后一行显示命令行提示符。
通常来说,参数 `-r` 也可用更详细的风格 `--recursive`。但是以简短的方式,也可以这么连用 `-ruv`。
### 高效用法 2:版本备份
回到一开始的例子中,我在开发的时候定期给我的代码版本进行备份。然后我找到了另一种更好用的 `cp` 参数。
假设我正在编写一个非常有用的 Python 程序,作为一个喜欢不断修改代码的开发者,我会在一开始编写一个程序简单版本,然后不停的往里面添加各种功能直到它能成功的运行起来。比方说我的第一个版本就是用 Python 程序打印出 “hello world”。这只有一行代码的程序就像这样:
```
print 'hello world'
```
然后我将这个代码保存成文件命名为 `test1.py`。我可以这么运行它:
```
me@desktop:~/Test$ python test1.py
hello world
me@desktop:~/Test$
```
现在程序可以运行了,我想在添加新的内容之前进行备份。我决定使用带编号的备份选项,如下:
```
clh@vancouver:~/Test$ cp --force --backup=numbered test1.py test1.py
clh@vancouver:~/Test$ ls
test1.py test1.py.~1~
clh@vancouver:~/Test$
```
所以,上面的做法是什么意思呢?
第一,这个 `--backup=numbered` 参数意思为“我要做个备份,而且是带编号的连续备份”。所以一个备份就是 1 号,第二个就是 2 号,等等。
第二,如果源文件和目标文件名字是一样的。通常我们使用 `cp` 命令去拷贝成自己,会得到这样的报错信息:
```
cp: 'test1.py' and 'test1.py' are the same file
```
在特殊情况下,如果我们想备份的源文件和目标文件名字相同,我们使用 `--force` 参数。
第三,我使用 `ls` (意即 “list”)命令来显示现在目录下的文件,名字为 `test1.py` 的是原始文件,名字为 `test1.py.~1~` 的是备份文件。
假如现在我要加上第二个功能,在程序里加上另一行代码,可以打印 “Kilroy was here.”。现在程序文件 `test1.py` 的内容如下:
```
print 'hello world'
print 'Kilroy was here'
```
看到 Python 编程多么简单了吗?不管怎样,如果我再次执行备份的步骤,结果如下:
```
clh@vancouver:~/Test$ cp --force --backup=numbered test1.py test1.py
clh@vancouver:~/Test$ ls
test1.py test1.py.~1~ test1.py.~2~
clh@vancouver:~/Test$
```
现在我有两个备份文件: `test1.py.~1~` 包含了一行代码的程序,`test1.py.~2~` 包含两行代码的程序。
这个很好用的功能,我考虑做个 shell 函数让它变得更简单。
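这样的 shell 函数可以写得非常简短。下面是一个最小示例(函数名 `bak` 是笔者为演示虚构的,并假设系统使用支持 `--backup=numbered` 的 GNU coreutils 版 `cp`):

```shell
# bak:为指定文件就地创建一个带编号的备份(file.~1~、file.~2~ ……)。
# 依赖 GNU cp 的特殊行为:同时给出 --force 和 --backup 时,
# 允许源文件和目标文件同名,并在覆盖前先做一次备份。
bak() {
    cp --force --backup=numbered -- "$1" "$1"
}
```

之后只需执行 `bak test1.py`,就会得到 `test1.py.~1~`;再次执行则得到 `test1.py.~2~`,以此类推。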
### 最后总结
第一,Linux 手册页,它在大多数桌面和服务器发行版都默认安装了,它提供了更为详细的使用方法和例子,对于 `cp` 命令,在终端中输入如下命令:
```
man cp
```
对于那些想学习如何使用这些命令,但不清楚如何使用的用户应该首先看一下这些说明,然后我建议创建一个测试目录和文件来尝试使用命令和选项。
第二,兴趣是最好的老师。在你最喜欢的搜索引擎中搜索 “linux shell tutorial”,你会获得很多有趣和有用的资源。
第三,你是不是在想,“为什么我要用这么麻烦的方法,图形化界面中有相同的功能,只用点击几下岂不是更简单?”,关于这个问题我有两个理由。首先,在我们工作中需要中断其他工作流程以及大量使用点击动作时,点击动作可就不简单了。其次,如果我们要完成流水线般的重复性工作,通过使用 shell 脚本、shell 函数以及 shell 别名等功能就能很轻松地实现。
你还知道关于 `cp` 命令其他更棒的使用方式吗?请在留言中积极回复哦~
(题图:stonemaiergames.com)
---
作者简介:
Chris Hermansen - 1978 年毕业于英国哥伦比亚大学后一直从事计算机相关职业,我从 2005 年开始一直使用 Linux、Solaris、SunOS,在那之前我就是 Unix 系统管理员了,在技术方面,我的大量的职业生涯都是在做数据分析,尤其是空间数据分析,我有大量的编程经验与数据分析经验,熟练使用 awk、Python、PostgreSQL、PostGIS 和 Groovy。
---
via: <https://opensource.com/article/17/7/two-great-uses-cp-command>
作者:[Chris Hermansen](https://opensource.com/users/clhermansen) 译者:[bigdimple](https://github.com/bigdimple) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | The point-and-click graphical user interface available on Linux is a wonderful thing... but if your favorite interactive development environment consists of the terminal window, Bash, Vim, and your favorite language compiler, then, like me, you use the terminal *a lot*.
But even people who generally avoid the terminal can benefit by being more aware of the riches that its environment offers. A case in point – the **cp** command. [According to Wikipedia](https://en.wikipedia.org/wiki/Cp_(Unix)), the **cp** (or copy) command was part of Version 1 of [Unix](https://en.wikipedia.org/wiki/Unix). Along with a select group of other commands—**ls**, **mv**, **cd**, **pwd**, **mkdir**, **vi**, **sh**, **sed**, and **awk** come to mind—**cp** was one of my first few steps in System V Unix back in 1984. The most common use of **cp** is to make a copy of a file, as in:
```
cp sourcefile destfile
```
issued at the command prompt in a terminal session. The above command copies the file named **sourcefile** to the file named **destfile**. If **destfile** doesn't exist before the command is issued, it's created; if it does exist, it's overwritten.
I don't know how many times I've used this command (maybe I don't want to know), but I do know that I often use it when I'm writing and testing code and I have a working version of something that I want to retain as-is before I move on. So, I have probably typed something like this:
```
cp test1.py test1.bak
```
at a command prompt at least a zillion times over the past 30+ years. Alternatively, I might have decided to move on to version 2 of my test program, in which case I may have typed:
```
cp test1.py test2.py
```
to accomplish the first step of that move.
This is such a common and simple thing to do that I have rarely ever looked at the reference documentation for **cp**. But, while backing up my Pictures folder (using the Files application in my GUI environment), I started thinking, "I wonder if there is an option to have **cp** copy over only new files or those that have changed?" And sure enough, there is!
## Great use #1: Updating a second copy of a folder
Let's say I have a folder on my computer that contains a collection of files. Furthermore, let's say that from time to time I put a new file into that collection. Finally, let's say that from time to time I might edit one of those files in some way. An example of such a collection might be the photos I download from my cellphone or my music files.
Assuming that this collection of files has some enduring value to me, I might occasionally want to make a copy of it—a kind of "snapshot" of it—to preserve it on some other media. Of course, there are many utility programs that exist for doing backups, but maybe I want to have this exact structure duplicated on a removable device that I generally store offline or even connect to another computer.
The **cp** command offers a dead-easy way to do this. Here's an example.
In my **Pictures** folder, I have a sub-folder called **Misc**. For illustrative purposes, I'm going to make a copy of it on a USB memory stick. Here we go!
```
me@desktop:~/Pictures$ cp -r Misc /media/clh/4388-D5FE
me@desktop:~/Pictures$
```
The above lines are copied as-is from my terminal window. For those who might not be fully comfortable with that environment, it's worth noting that **me @mydesktop:~/Pictures$** is the command prompt provided by the terminal before every command is entered and executed. It identifies the user (**me**), the computer (**mydesktop**), and the current working directory, in this case, **~/Pictures**, which is shorthand for **/home/me/Pictures**, that is, the **Pictures** folder in my home directory.
The command I've entered and executed, **cp -r Misc /media/clh/4388-D5FE**, copies the folder **Misc** and all its contents (the **-r**, or "recursive," option indicates the contents as well as the folder or file itself) into the folder **/media/clh/4388-D5FE**, which is where my USB stick is mounted.
Executing the command returned me to the original prompt. Like with most commands inherited from Unix, if the command executes without detecting any kind of anomalous result, it won't print out a message like "execution succeeded" before terminating. People who would like more feedback can use the **-v** option to make execution "verbose."
Below is an image of my new copy of **Misc** on the USB drive. There are nine JPEG files in the directory.

opensource.com
Suppose I add a few new files to the master copy of the directory **~/Pictures/Misc**, so now it looks like this:

opensource.com
Now I want to copy over only the new files to my memory stick. For this I'll use the "update" and "verbose" options to **cp**:
```
me@desktop:~/Pictures$ cp -r -u -v Misc /media/clh/4388-D5FE
'Misc/asunder.png' -> '/media/clh/4388-D5FE/Misc/asunder.png'
'Misc/editing tags guayadeque.png' -> '/media/clh/4388-D5FE/Misc/editing tags guayadeque.png'
'Misc/misc on usb.png' -> '/media/clh/4388-D5FE/Misc/misc on usb.png'
me@desktop:~/Pictures$
```
The first line above shows the **cp** command and its options (**-r** for "recursive", **-u** for "update," and **-v** for "verbose"). The next three lines show the files that are copied across. The last line shows the command prompt again.
Generally speaking, options such as **-r** can also be given in a more verbose fashion, such as **--recursive**. In brief form, they can also be combined, such as **-ruv**.
## Great use #2 – Making versioned backups
Returning to my initial example of making periodic backups of working versions of code in development, another really useful **cp** option I discovered while learning about update is backup.
Suppose I'm setting out to write a really useful Python program. Being a fan of iterative development, I might do so by getting a simple version of the program working first, then successively adding more functionality to it until it does the job. Let's say my first version just prints the string "hello world" using the Python print command. This is a one-line program that looks like this:
```
print 'hello world'
```
and I've put that string in the file **test1.py**. I can run it from the command line as follows:
```
me@desktop:~/Test$ python test1.py
hello world
me@desktop:~/Test$
```
Now that the program is working, I want to make a backup of it before adding the next component. I decide to use the backup option with numbering, as follows:
```
clh@vancouver:~/Test$ cp --force --backup=numbered test1.py test1.py
clh@vancouver:~/Test$ ls
test1.py test1.py.~1~
clh@vancouver:~/Test$
```
So, what does this all mean?
First, the **--backup=numbered** option says, "I want to do a backup, and I want successive backups to be numbered." So the first backup will be number 1, the second 2, and so on.
Second, note that the source file and destination file are the same. Normally, if we try to use the **cp** command to copy a file onto itself, we will receive a message like:
```
cp: 'test1.py' and 'test1.py' are the same file
```
In the special case where we are doing a backup and we want the same source and destination, we use the **--force** option.
Third, I used the **ls** (or "list") command to show that we now have a file called **test1.py**, which is the original, and another called **test1.py.~1~**, which is the backup file.
Suppose now that the second bit of functionality I want to add to the program is another print statement that prints the string "Kilroy was here." Now the program in file **test1.py** looks like this:
```
print 'hello world'
print 'Kilroy was here'
```
See how simple Python programming is? Anyway, if I again execute the backup step, here's what happens:
```
clh@vancouver:~/Test$ cp --force --backup=numbered test1.py test1.py
clh@vancouver:~/Test$ ls
test1.py test1.py.~1~ test1.py.~2~
clh@vancouver:~/Test$
```
Now we have two backup files: **test1.py.~1~**, which contains the original one-line program, and **test1.py.~2~**, which contains the two-line program, and I can move on to adding and testing some more functionality.
This is such a useful thing to me that I am considering making a shell function to make it simpler.
## Three points to wrap this up
First, the Linux manual pages, installed by default on most desktop and server distros, provide details and occasionally useful examples of commands like **cp**. At the terminal, enter the command:
```
man cp
```
Such explanations can be dense and obscure to users just trying to learn how to use a command in the first place. For those inclined to persevere nevertheless, I suggest creating a test directory and files and trying the command and options out there.
Second, if a tutorial is of greater interest, the search string "linux shell tutorial" typed into your favorite search engine brings up a lot of interesting and useful resources.
Third, if you're wondering, "Why bother when the GUI typically offers the same functionality with point-and-click ease?" I have two responses. The first is that "point-and-click" isn't always that easy, especially when it disrupts another workflow and requires a lot of points and a lot of clicks to make it work. The second is that repetitive tasks can often be easily streamlined through the use of shell scripts, shell functions, and shell aliases.
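As one small illustration of that last point, here is a sketch of a function wrapping the "update" workflow shown earlier in this article (the name `cpup` is invented for this example, and it assumes GNU cp):

```shell
# cpup: recursively copy a directory, but only files that are new or
# newer than what the destination already has, reporting each copy.
# A thin wrapper around `cp -r -u -v`.
cpup() {
    cp -r -u -v -- "$1" "$2"
}
```

With this in your shell startup file, refreshing the memory-stick snapshot from the example above becomes simply `cpup Misc /media/clh/4388-D5FE`.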
Are you using the **cp** command in new or interesting ways? Let us know about them in the comments.
|
8,767 | Ubuntu Core:制作包含私有 snap 的工厂镜像 | https://insights.ubuntu.com/2017/07/11/ubuntu-core-making-a-factory-image-with-private-snaps/ | 2017-08-10T17:08:00 | [
"snap",
"Snapcraft"
] | https://linux.cn/article-8767-1.html | 
这篇帖子是有关 [在 Ubuntu Core 开发 ROS 原型到成品](https://insights.ubuntu.com/2017/04/06/from-ros-prototype-to-production-on-ubuntu-core/) 系列的补充,用来回答我收到的一个问题: “我想做一个工厂镜像,但我不想使我的 snap 公开” 当然,这个问题和回答都不只是针对于机器人技术。在这篇帖子中,我将会通过两种方法来回答这个问题。
开始之前,你需要了解一些制作 Ubuntu Core 镜像的背景知识。如果你已经看过 [在 Ubuntu Core 开发 ROS 原型到成品](https://insights.ubuntu.com/2017/04/06/from-ros-prototype-to-production-on-ubuntu-core/) 系列文章(具体是第 5 部分),你就已经有了需要的背景知识;如果没有看过的话,可以查看有关 [制作你的 Ubuntu Core 镜像](https://tutorials.ubuntu.com/tutorial/create-your-own-core-image) 的教程。
如果你已经了解了最新的情况,并且当我说 “模型定义” 或者 “模型断言” 时知道我在谈论什么,那就让我们开始通过不同的方法使用私有 snap 来制作 Ubuntu Core 镜像吧。
### 方法 1: 不要上传你的 snap 到商店
这是最简单的方法了。首先看一下这个有关模型定义的例子——`amd64-model.json`:
```
{
"type": "model",
"series": "16",
"model": "custom-amd64",
"architecture": "amd64",
"gadget": "pc",
"kernel": "pc-kernel",
"authority-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
"brand-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
"timestamp": "2017-06-23T21:03:24+00:00",
"required-snaps": ["kyrofa-test-snap"]
}
```
让我们将它转换成模型断言:
```
$ cat amd64-model.json | snap sign -k my-key-name > amd64.model
You need a passphrase to unlock the secret key for
user: "my-key-name"
4096-bit RSA key, ID 0B79B865, created 2016-01-01
...
```
获得模型断言:`amd64.model` 后,如果你现在就把它交给 `ubuntu-image` 使用,你将会碰钉子:
```
$ sudo ubuntu-image -c stable amd64.model
Fetching core
Fetching pc-kernel
Fetching pc
Fetching kyrofa-test-snap
error: cannot find snap "kyrofa-test-snap": snap not found
COMMAND FAILED: snap prepare-image --channel=stable amd64.model /tmp/tmp6p453gk9/unpack
```
实际上商店中并没有名为 `kyrofa-test-snap` 的 snap。这里需要重点说明的是:模型定义(以及转换后的断言)只包含了一系列的 snap 的名字。如果你在本地有个那个名字的 snap,即使它没有存在于商店中,你也可以通过 `--extra-snaps` 选项告诉 `ubuntu-image` 在断言中匹配这个名字来使用它:
```
$ sudo ubuntu-image -c stable \
--extra-snaps /path/to/kyrofa-test-snap_0.1_amd64.snap \
amd64.model
Fetching core
Fetching pc-kernel
Fetching pc
Copying "/path/to/kyrofa-test-snap_0.1_amd64.snap" (kyrofa-test-snap)
kyrofa-test-snap already prepared, skipping
WARNING: "kyrofa-test-snap" were installed from local snaps
disconnected from a store and cannot be refreshed subsequently!
Partition size/offset need to be a multiple of sector size (512).
The size/offset will be rounded up to the nearest sector.
```
现在,在 snap 并没有上传到商店的情况下,你已经获得一个预装了私有 snap 的 Ubuntu Core 镜像(名为 `pc.img`)。但是这样做有一个很大的问题,ubuntu-image 会提示一个警告:不通过连接商店预装 snap 意味着你没有办法在烧录了这些镜像的设备上更新它。你只能通过制作新的镜像并重新烧录到设备的方式来更新它。
### 方法 2: 使用品牌商店
当你注册了一个商店账号并访问 [dashboard.snapcraft.io](https://dashboard.snapcraft.io/dev/snaps/) 时,你其实是在标准的 Ubuntu 商店中查看你的 snap。如果你是在系统中新安装的 snapd,默认会从这个商店下载。虽然你可以在 Ubuntu 商店中发布私有的 snap,但是你[不能将它们预装到镜像中](https://forum.snapcraft.io/t/unable-to-create-an-image-that-uses-private-snaps),因为只有你(以及你添加的合作者)才有权限去使用它。在这种情况下制作镜像的唯一方式就是公开发布你的 snap,然而这并不符合这篇帖子的目的。
对于这种用例,我们有所谓的 [品牌商店](https://docs.ubuntu.com/core/en/build-store/index?_ga=2.103787520.1269328701.1501772209-778441655.1499262639)。品牌商店仍然托管在 Ubuntu 商店里,但是它们是针对于某一特定公司或设备的一个定制的、专门的版本。品牌商店可以继承或者不继承标准的 Ubuntu 商店,品牌商店也可以选择开放给所有的开发者或者将其限制在一个特定的组内(保持私有正是我们想要的)。
请注意,这是一个付费功能。你需要 [申请一个品牌商店](https://docs.ubuntu.com/core/en/build-store/create)。请求通过后,你将可以通过访问用户名下的 “stores you can access” 看到你的新商店。

在那里你可以看到多个有权使用的商店。最少的情况下也会有两个:标准的 Ubuntu 商店以及你的新的品牌商店。选择品牌商店(红框),进去后记录下你的商店 ID(蓝框):等下你将会用到它。

在品牌商店里注册名字或者上传 snap 和标准的商店使用的方法是一样的,只是它们现在是上传到你的品牌商店而不是标准的那个。如果你将品牌商店放在 unlisted 里面,那么这些 snap 对外部用户是不可见。但是这里需要注意的是第一次上传 snap 的时候需要通过 web 界面来操作。在那之后,你可以继续像往常一样使用 Snapcraft 来操作。
那么这些是如何改变的呢?我的 “kyrofal-store” 从 Ubuntu 商店继承了 snap,并且还包含一个发布在稳定通道中的 “kyrofa-branded-test-snap”。这个 snap 在 Ubuntu 商店里是使用不了的,如果你去搜索它,你是找不到的:
```
$ snap find kyrofa-branded
The search "kyrofa-branded" returned 0 snaps
```
但是使用我们前面记录的商店 ID,我们可以创建一个从品牌商店而不是 Ubuntu 商店下载 snap 的模型断言。我们只需要将 “store” 键添加到 JSON 文件中,就像这样:
```
{
"type": "model",
"series": "16",
"model": "custom-amd64",
"architecture": "amd64",
"gadget": "pc",
"kernel": "pc-kernel",
"authority-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
"brand-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
"timestamp": "2017-06-23T21:03:24+00:00",
"required-snaps": ["kyrofa-branded-test-snap"],
"store": "ky<secret>ek"
}
```
使用方法 1 中的方式对它签名,然后我们就可以像这样很简单的制作一个预装有我们品牌商店私有 snap 的 Ubuntu Core 镜像:
```
$ sudo ubuntu-image -c stable amd64.model
Fetching core
Fetching pc-kernel
Fetching pc
Fetching kyrofa-branded-test-snap
Partition size/offset need to be a multiple of sector size (512).
The size/offset will be rounded up to the nearest sector.
```
现在,和方法 1 的最后一样,你获得了一个为工厂准备的 `pc.img`。并且使用这种方法制作的镜像中的所有 snap 都从商店下载的,这意味着它们将能像平常一样自动更新。
### 结论
到目前为止,做这个只有两种方法。当我开始写这篇帖子的时候,我想过可能还有第三种(将 snap 设置为私有然后使用它制作镜像),[但最后证明是不行的](https://forum.snapcraft.io/t/unable-to-create-an-image-that-uses-private-snaps/1115)。
另外,我们也收到很多内部部署或者企业商店的请求,虽然这样的产品还没有公布,但是商店团队正在从事这项工作。一旦可用,我将会写一篇有关它的文章。
希望能帮助到您!
---
关于作者
Kyle 是 Snapcraft 团队的一员,也是 Canonical 公司的常驻机器人专家,他专注于 snaps 和 snap 开发实践,以及 snaps 和 Ubuntu Core 的机器人技术实现。
---
via: <https://insights.ubuntu.com/2017/07/11/ubuntu-core-making-a-factory-image-with-private-snaps/>
作者:[Kyle Fazzari][a] 译者:[Snaplee](https://github.com/Snaplee) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | null |
|
8,768 | CoreOS,一款 Linux 容器发行版 | https://medium.com/linode-cube/the-what-why-and-wow-behind-the-coreos-container-linux-fa7ceae5593c | 2017-08-11T09:40:31 | [
"CoreOS",
"容器"
] | https://linux.cn/article-8768-1.html | 
>
> CoreOS,一款最新的 Linux 发行版本,支持自动升级内核软件,提供各集群间配置的完全控制。
>
>
>
关于使用哪个版本的 Linux 服务器系统的争论,常常是以这样的话题开始的:
>
> 你是喜欢基于 [Red Hat Enterprise Linux (RHEL)](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux) 的 [CentOS](https://www.centos.org/) 或者 [Fedora](https://getfedora.org/),还是基于 [Debian](https://www.debian.org/) 的 [Ubuntu](https://www.ubuntu.com/),抑或 [SUSE](https://www.suse.com/) 呢?
>
>
>
但是现在,一款名叫 [CoreOS 容器 Linux](https://coreos.com/os/docs/latest) 的 Linux 发行版加入了这场“圣战”。[这个最近在 Linode 服务器上提供的 CoreOS](https://www.linode.com/docs/platform/use-coreos-container-linux-on-linode),和它的老前辈比起来,它使用了完全不同的实现方法。
你可能会感到不解,这里有这么多成熟的 Linux 发行版本,为什么要选择用 CoreOS?借用 Linux 稳定版(linux-stable)分支的维护者、CoreOS 顾问 Greg Kroah-Hartman 先生的一句话:
>
> CoreOS 可以控制发行版的升级(基于 ChromeOS 代码),并结合了 Docker 和潜在的核对/修复功能,这意味着不用停止或者重启你的相关进程,就可以[在线升级](https://plus.google.com/+gregkroahhartman/posts/YvWFmPa9kVf)。测试版本已经支持此功能,这是史无前例的。
>
>
>
当 Greg Kroah-Hartman 做出这段评价时,CoreOS 还处于 α 测试阶段。信不信由你,当时[开发团队正是在硅谷的一个车库中开发此产品](https://www.wired.com/2013/08/coreos-the-new-linux/),但 CoreOS 不像最开始的苹果或者惠普,其在过去的四年当中一直稳步发展。
当我参加在旧金山举办的 [2017 CoreOS 大会](https://coreos.com/fest/)时,CoreOS 已经支持谷歌云、IBM、AWS 和微软的相关服务。现在有超过 1000 位开发人员参与到这个项目中,并为能够成为这个伟大产品的一员而感到高兴。
究其原因,CoreOS 从开始就是为容器而设计的轻量级 Linux 发行版,其起初是作为一个 [Docker](https://www.docker.com/) 平台,随着时间的推移, CoreOS 在容器方面走出了自己的道路,除了 Docker 之外,它也支持它自己的容器 [rkt](https://coreos.com/rkt) (读作 rocket )。
不像大多数其他的 Linux 发行版,CoreOS 没有包管理器,取而代之的是借鉴 Google ChromeOS 的做法自动进行软件更新,以此提高运行在集群上的机器/容器的安全性和可靠性。不用系统管理员的干预,操作系统升级组件和安全补丁就可以定期推送到 CoreOS Container Linux 机器上。
你可以通过 [CoreUpdate 和它的 Web 界面](https://coreos.com/products/coreupdate/)上来修改推送周期,这样你就可以控制你的机器何时更新,以及更新以多快的速度滚动分发到你的集群上。
具体来说,CoreOS 是通过一个名叫 [etcd](https://github.com/coreos/etcd) 的分布式配置服务来做到这一点的。etcd 是一个基于 [YAML](http://yaml.org/) 的开源分布式键值存储系统,它为 Container Linux 集群提供配置共享和服务发现功能。
此服务运行在集群中的每一台服务器上。当其中一台服务器需要下线升级时,它会发起领袖选举,以便在各台服务器依次更新期间,整个 Linux 系统和容器化的应用可以继续运行。
对于集群管理,CoreOS 之前采用的是名为 [fleet](https://github.com/coreos/fleet) 的工具,它将 etcd 和 [systemd](https://www.freedesktop.org/wiki/Software/systemd/) 结合成一个分布式初始化系统。虽然 fleet 仍然在使用,但 CoreOS 已经将 etcd 与 [Kubernetes](https://kubernetes.io/) 容器编排系统结合,构成了一个更加强大的管理工具。
CoreOS 也可以让你定制其它的操作系统相关规范,比如用 [cloud-config](https://coreos.com/os/docs/latest/cloud-config.html) 的方式管理网络配置、用户账号和 systemd 单元等。
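下面是一个极简的示意(其中的用户名、发现令牌等值均为笔者为演示而假设的,并非本文给出的配置),展示 cloud-config 如何声明式地配置 etcd、systemd 单元和用户账号:

```yaml
#cloud-config

coreos:
  etcd2:
    # 发现令牌为占位符,请自行在 https://discovery.etcd.io/new 生成
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    listen-client-urls: http://0.0.0.0:2379
  units:
    - name: etcd2.service
      command: start

users:
  - name: deploy          # 假设的用户名
    groups:
      - sudo
```

首次引导时,CoreOS 会读取这份声明式配置并完成相应设置,而无需在每台机器上运行配置管理工具。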
综上所述,CoreOS 可以不断地自行升级到最新版本,能让你获得从单独系统到集群等各种场景的完全控制。如 CoreOS 宣称的,你再也不用为了改变一个单独的配置而在每一台机器上运行 [Chef](https://insights.hpe.com/articles/what-is-chef-a-primer-for-devops-newbies-1704.html) 了。
假如说你想进一步的扩展你的 DevOps 控制,[CoreOS 能够轻松地帮助你部署 Kubernetes](https://blogs.dxc.technology/2017/06/08/coreos-moves-in-on-cloud-devops-with-kubernetes/)。
CoreOS 从一开始就是构建来易于部署、管理和运行容器的。当然,其它的 Linux 发行版,比如 RedHat 家族的[原子项目](http://www.projectatomic.io/)也可以达到类似的效果,但是对于那些发行版而言是以附加组件的方式出现的,而 CoreOS 从它诞生的第一天就是为容器而设计的。
当前[容器和 Docker 已经逐渐成为商业系统的主流](http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/),如果在可预见的未来中你要在工作中使用容器,你应该考虑下 CoreOS,不管你的系统是在裸机硬件上、虚拟机还是云上。
如果有任何关于 CoreOS 的观点或者问题,还请在评论栏中留言。如果你觉得这篇博客还算有用的话,还请分享一下~
---
关于博主:Steven J. Vaughan-Nichols 是一位经验丰富的 IT 记者,许多网站中都刊登有他的文章,包括 [ZDNet.com](http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/)、[PC Magazine](http://www.pcmag.com/author-bio/steven-j.-vaughan-nichols)、[InfoWorld](http://www.infoworld.com/author/Steven-J.-Vaughan_Nichols/)、[ComputerWorld](http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/)、[Linux Today](http://www.linuxtoday.com/author/Steven+J.+Vaughan-Nichols/) 和 [eWEEK](http://www.eweek.com/cp/bio/Steven-J.-Vaughan-Nichols/) 等。他拥有丰富的 IT 知识 - 而且他曾参加过智力竞赛节目 Jeopardy !他的相关观点都是自身思考的结果,并不代表 Linode 公司,我们对他做出的贡献致以最真诚的感谢。如果想知道他更多的信息,可以关注他的 Twitter [*@sjvn*](http://www.twitter.com/sjvn)。
---
via: <https://medium.com/linode-cube/the-what-why-and-wow-behind-the-coreos-container-linux-fa7ceae5593c>
作者:[Steven J. Vaughan-Nichols](https://medium.com/linode-cube/the-what-why-and-wow-behind-the-coreos-container-linux-fa7ceae5593c) 译者:[吴霄/toyijiu](https://github.com/toyijiu) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 200 | OK | # The What, Why and Wow! Behind the CoreOS Container Linux
By. Steven J. Vaughan-Nichols
## Latest Linux distro automatically updates kernel software and gives full configuration control across clusters.
The usual debate over server Linux distributions begins with:
*Do you use a **Red Hat Enterprise Linux (RHEL)**-based distribution, such as **CentOS** or **Fedora**; a **Debian**-based Linux like **Ubuntu**; or **SUSE**?*
But now, [CoreOS Container Linux](https://coreos.com/os/docs/latest) joins the fracas. [CoreOS, recently offered by Linode on its servers](https://www.linode.com/docs/platform/use-coreos-container-linux-on-linode), takes an entirely different approach than its more conventional, elder siblings.
So, you may be asking yourself: “Why should I bother, when there are so many other solid Linux distros?” Well, I’ll let Greg Kroah-Hartman, the kernel maintainer for the Linux-stable branch and CoreOS advisor, start the conversation:
(CoreOS) handles distro updates (based on the ChromeOS code) combined with Docker and potentially checkpoint/restore, (which) means that you might be able to update the distro under your application without stopping/starting the process/container. I've seen it happen in testing, and it's scary good.
And that assessment came when CoreOS was in alpha. Back then, [CoreOS was being developed in — believe it or not — a Silicon Valley garage](https://www.wired.com/2013/08/coreos-the-new-linux/). While CoreOS is no Apple or HPE, it’s grown considerably in the last four years.
When I checked in on them at 2017’s [CoreOS Fest](https://coreos.com/fest/) in San Francisco, CoreOS had support from Google Cloud, IBM, Amazon Web Services, and Microsoft. The project itself now has over a thousand contributors. They think they’re on to something good, and I agree.
Why? Because, CoreOS is a lightweight Linux designed from the get-go for running containers. It started as a [Docker](https://www.docker.com/) platform, but over time CoreOS has taken its own path to containers. It now supports both its own take on containers, [rkt](https://coreos.com/rkt) (pronounced rocket), and Docker.
Unlike most Linux distributions, CoreOS doesn’t have a package manager. Instead it takes a page from Google’s ChromeOS and automates software updates to ensure better security and reliability of machines and containers running on clusters. Both operating system updates and security patches are regularly pushed to CoreOS Container Linux machines without sysadmin intervention.
You control how often patches are pushed using [CoreUpdate, with its web-based interface](https://coreos.com/products/coreupdate/). This enables you to control when your machines update, and how quickly an update is rolled out across your cluster.
Specifically, CoreOS does this with the the distributed configuration service [etcd](https://github.com/coreos/etcd). This is an open-source, distributed key value store based on [YAML](http://yaml.org/). Etcd provides shared configuration and service discovery for Container Linux clusters.
This service runs on each machine in a cluster. When one server goes down, say to update, it handles the leader election so that the overall Linux system and containerized applications keep running as each server is updated.
To handle cluster management, [CoreOS used to use fleet](https://github.com/coreos/fleet). This ties together [systemd](https://www.freedesktop.org/wiki/Software/systemd/) and etcd into a distributed init system. While fleet is still around, CoreOS has joined etcd with [Kubernetes](https://kubernetes.io/) container orchestration to form an even more powerful management tool.
CoreOS also enables you to declaratively customize other operating system specifications, such as network configuration, user accounts, and systemd units, with [cloud-config](https://coreos.com/os/docs/latest/cloud-config.html).
Put it all together and you have a Linux that’s constantly self-updating to the latest patches while giving you full control over its configuration from individual systems to thousand of container instances. Or, as CoreOS puts it, “You’ll never have to run [Chef ](https://insights.hpe.com/articles/what-is-chef-a-primer-for-devops-newbies-1704.html)on every machine in order to change a single config value ever again.”
Let’s say you want to expand your DevOps control even further. [CoreOS helps you there, too, by making it easy to deploy Kubernetes](https://blogs.dxc.technology/2017/06/08/coreos-moves-in-on-cloud-devops-with-kubernetes/).
So, what does all this mean? CoreOS is built from the ground-up to make it easy to deploy, manage and run containers. Yes, other Linux distributions, such as the Red Hat family with [Project Atomic](http://www.projectatomic.io/), also enable you to do this, but for these distributions, it’s an add-on. CoreOS was designed from day one for containers.
If you foresee using containers in your business — and you’d better because [Docker and containers are fast becoming The Way to develop and run business applications](http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/) — then you must consider CoreOS Container Linux, no matter whether you’re running on bare-metal, virtual machines, or the cloud.
*Please feel free to share below any comments or insights about your experience with or questions about CoreOS. And if you found this blog useful, please consider sharing it through social media.*
*About the blogger: Steven J. Vaughan-Nichols is a veteran IT journalist whose estimable work can be found on a host of channels, including **ZDNet.com**, **PC Magazine**, **InfoWorld**, **ComputerWorld**, **Linux Today** and **eWEEK**. Steven’s IT expertise comes without parallel — he has even been a Jeopardy! clue. And while his views and cloud situations are solely his and don’t necessarily reflect those of Linode, we are grateful for his contributions. He can be followed on Twitter (**@sjvn**).* |
8,769 | 10 个应当了解的 Unikernel 开源项目 | https://www.linux.com/news/open-cloud-report/2016/guide-open-cloud-age-unikernel | 2017-08-11T16:08:01 | [
"unikernel"
] | https://linux.cn/article-8769-1.html | 
>
> unikernel 实质上是一个缩减的操作系统,它可以与应用程序结合成为一个 unikernel 程序,它通常在虚拟机中运行。下载《开放云指南》了解更多。
>
>
>
当涉及到操作系统、容器技术和 unikernel 时,趋势是朝着微型化发展。什么是 unikernel?unikernel 实质上是一个经过裁剪的操作系统,它可以与应用程序结合成为一个 unikernel 程序,通常运行在虚拟机中。它们有时被称为库操作系统,因为它包含的库可以让应用程序使用硬件和网络协议,并配合一组用于访问控制和网络层隔离的策略。
在讨论云计算和 Linux 时容器常常会被提及,而 unikernel 也在做一些变革。容器和 unikernel 都不是新事物。在 20 世纪 90 年代就有类似 unikernel 的系统,如 Exokernel,而如今流行的 unikernel 系统则有 MirageOS 和 OSv。 Unikernel 程序可以独立使用并在异构环境中部署。它们可以促进专业化和隔离化服务,并被广泛用于在微服务架构中开发应用程序。
作为 unikernel 如何引起关注的一个例子,你可以看看 Docker 收购了[基于 Cambridge 的 Unikernel 系统](http://www.infoworld.com/article/3024410/application-virtualization/docker-kicks-off-unikernel-revolution.html),并且已在许多情况下在使用 unikernel。
unikernel,就像容器技术一样, 它剥离了非必需的的部分,因此它们对应用程序的稳定性、可用性以及安全性有非常积极的影响。在开源领域,它们也吸引了许多顶级,最具创造力的开发人员。
Linux 基金会最近[宣布](https://www.linux.com/blog/linux-foundation-issues-2016-guide-open-source-cloud-projects)发布了其 2016 年度报告[开放云指南:当前趋势和开源项目指南](http://ctt.marketwire.com/?release=11G120876-001&id=10172077&type=0&url=http%3A%2F%2Fgo.linuxfoundation.org%2Frd-open-cloud-report-2016-pr)。这份第三年度的报告全面介绍了开放云计算的状况,并包含了一节关于 unikernel 的内容。你现在可以[下载该报告](http://go.linuxfoundation.org/l/6342/2016-10-31/3krbjr)。它汇总并分析了相关研究,描述了容器和 unikernel 的发展趋势,以及它们如何重塑云计算。该报告提供了对当今开放云环境中的各类项目的描述和链接。
在本系列文章中,我们将按类别分析指南中提到的项目,为整体类别的演变提供了额外的见解。下面, 你将看到几个重要 unikernel 项目的列表及其影响,以及它们的 GitHub 仓库的链接, 这些都是从开放云指南中收集到的:
### [ClickOS](http://cnp.neclab.eu/clickos/)
ClickOS 是 NEC 的高性能虚拟化软件中间件平台,用于在 MiniOS/MirageOS 之上构建网络功能虚拟化(NFV)。
* [ClickOS 的 GitHub](https://github.com/cnplab/clickos)
### [Clive](http://lsub.org/ls/clive.html)
Clive 是用 Go 编写的一个操作系统,旨在工作于分布式和云计算环境中。
### [HaLVM](https://galois.com/project/halvm/)
Haskell 轻量级虚拟机(HaLVM)是 Glasgow Haskell 编译器工具包的移植,它使开发人员能够编写可以直接在 Xen 虚拟机管理程序上运行的高级轻量级虚拟机。
* [HaLVM 的 GitHub](https://github.com/GaloisInc/HaLVM)
### [IncludeOS](http://www.includeos.org/)
IncludeOS 是在云中运行 C++ 服务的 unikernel 操作系统。它提供了一个引导加载程序、标准库以及运行服务的构建和部署系统。在 VirtualBox 或 QEMU 中进行测试,并在 OpenStack 上部署服务。
* [IncludeOS 的 GitHub](https://github.com/hioa-cs/IncludeOS)
### [Ling](http://erlangonxen.org/)
Ling 是一个用于构建超级可扩展云的 Erlang 平台,可直接运行在 Xen 虚拟机管理程序之上。它只运行三个外部库 (没有 OpenSSL),并且文件系统是只读的,以避免大多数攻击。
* [Ling 的 GitHub](https://github.com/cloudozer/ling)
### [MirageOS](https://mirage.io/)
MirageOS 是在 Linux 基金会的 Xen 项目下孵化的库操作系统。它使用 OCaml 语言构建的 unikernel 可以用于各种云计算和移动平台上安全的高性能网络应用。代码可以在诸如 Linux 或 MacOS X 等普通的操作系统上开发,然后编译成在 Xen 虚拟机管理程序下运行的完全独立的专用 Unikernel。
* [MirageOS 的 GitHub](https://github.com/mirage/mirage)
### [OSv](http://osv.io/)
OSv 是 Cloudius Systems 为云设计的开源操作系统。它支持用 Java、Ruby(通过 JRuby)、JavaScript(通过 Rhino 和 Nashorn)、Scala 等编写程序。它运行在 VMware、VirtualBox、KVM 和 Xen 虚拟机管理程序上。
* [OSV 的 GitHub](https://github.com/cloudius-systems/osv)
### [Rumprun](http://rumpkernel.org/)
Rumprun 是一个可用于生产环境的 unikernel,它使用 rump 内核提供的驱动程序,添加了 libc 和应用程序环境,并提供了一个工具链,用于将现有的 POSIX-y 程序构建为 Rumprun unikernel。它适用于 KVM 和 Xen 虚拟机管理程序和裸机,并支持用 C、C ++、Erlang、Go、Java、JavaScript(Node.js)、Python、Ruby、Rust 等编写的程序。
* [Rumprun 的 GitHub](https://github.com/rumpkernel/rumprun)
### [Runtime.js](http://runtimejs.org/)
Runtime.js 是用于在云上运行 JavaScript 的开源库操作系统(unikernel),它可以与应用程序捆绑在一起,并部署为轻量级和不可变的 VM 镜像。它基于 V8 JavaScript 引擎,并使用受 Node.js 启发的事件驱动和非阻塞 I/O 模型。KVM 是唯一支持的虚拟机管理程序。
* [Runtime.js 的 GitHub](https://github.com/runtimejs/runtime)
### [UNIK](http://dojoblog.emc.com/unikernels/unik-build-run-unikernels-easy/)
Unik 是 EMC 推出的工具,可以将应用程序源码编译为 unikernel(轻量级可引导磁盘镜像)而不是二进制文件。它允许应用程序在各种云提供商、嵌入式设备(IoT)以及开发人员的笔记本或工作站上安全地部署,资源占用很少。它支持多种 unikernel 类型、处理器架构、管理程序和编排工具,包括 Cloud Foundry、Docker 和 Kubernetes。

* [Unik 的 GitHub](https://github.com/emc-advanced-dev/unik)
(题图:Pixabay)
---
via: <https://www.linux.com/news/open-cloud-report/2016/guide-open-cloud-age-unikernel>
作者:[SAM DEAN](https://www.linux.com/users/sam-dean) 译者:[geekpi](https://github.com/geekpi) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,770 | 极客漫画:Linux 内核中的兄弟打架 | http://turnoff.us/geek/brothers-conflict/ | 2017-08-12T09:38:00 | [
"内核",
"漫画",
"线程"
] | https://linux.cn/article-8770-1.html | 
多线程编程中,如何处理共享的资源是个头疼的事情。
那边那个蹦起来的就是堆栈被搞乱的,罪魁祸首显然就在这两个背手挨骂的线程当中。
---
via: <http://turnoff.us/geek/brothers-conflict/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者&校对&点评:[wxy](https://github.com/wxy) 合成:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |
8,771 | 极客漫画:不要使用 SIGKILL 的原因(看哭了) | http://turnoff.us/geek/dont-sigkill/ | 2017-08-14T09:40:00 | [
"漫画",
"信号"
] | https://linux.cn/article-8771-1.html | 
在 Linux 中,通常可以发送一些信号来杀死一个进程,一般用来杀死进程的信号有 SIGTERM、 SIGKILL。但是,如果希望进程合理地终止,就不要发送硬中断信号 SIGKILL,因为该信号是不能拦截的,进程接到该信号之后会马上退出,并没有机会进行现场清理——这包括对线程的关闭等操作。更好的做法是,发送 SIGTERM 信号,这样进程在接到该信号后,可以做一些退出的准备工作。
或许你之前并不觉得用哪种信号杀死进程有什么不同,但是,看了这幅漫画,你不觉得那些孩子们(线程)很可怜么——虽然温和的 SIGTERM 也是要全家干掉的。哭~
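作为一个最小的示意脚本(内容为笔者假设,并非漫画原文),下面演示了进程可以用 `trap` 捕获 SIGTERM 并在退出前清理现场,而 SIGKILL 是无法被这样拦截的:

```shell
#!/bin/sh
# 演示 SIGTERM 可被捕获:worker 在收到 TERM 时先清理临时文件再退出。
# 若换成 kill -KILL,trap 根本不会执行,临时文件就会被遗留下来。
worker() {
    tmp=$1
    trap 'rm -f "$tmp"; exit 0' TERM   # 收到 SIGTERM 时的清理逻辑
    while true; do sleep 1; done
}

tmp=$(mktemp)
worker "$tmp" &
pid=$!
sleep 1
kill -TERM "$pid"   # 温和地请求退出,进程有机会做善后
wait "$pid"
```

把 `kill -TERM` 换成 `kill -KILL` 再运行一次,就能看到临时文件没有被清理——这正是漫画想表达的区别。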
---
via: <http://turnoff.us/geek/dont-sigkill/>
作者:[Daniel Stori](http://turnoff.us/about/) 译者&校对&点评:[wxy](https://github.com/wxy) 合成:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
| 301 | Moved Permanently | null |